ClassificationMetric

class ClassificationMetric[source]

Bases: Metric, ABC

A base class for classification metrics.

Attributes Summary

binarize

whether the metric needs binarized scores

closed_expectation

whether there is a closed-form solution of the expectation

closed_variance

whether there is a closed-form solution of the variance

key

Return the key for use in metric result dictionaries.

supports_weights

whether the metric supports weights

synonyms

synonyms for this metric

Methods Summary

__call__(y_true, y_score[, weights])

Evaluate the metric.

extra_repr()

Generate the extra repr, cf. torch.nn.Module.extra_repr().

forward(y_true, y_score[, sample_weight])

Calculate the metric.

get_description()

Get the description.

get_link()

Get the link from the docdata.

get_range()

Get the math notation for the range of this metric.

iter_extra_repr()

Iterate over the components of the extra_repr().

Attributes Documentation

binarize: ClassVar[bool | None] = None

whether the metric needs binarized scores

closed_expectation: ClassVar[bool] = False

whether there is a closed-form solution of the expectation

closed_variance: ClassVar[bool] = False

whether there is a closed-form solution of the variance

key

Return the key for use in metric result dictionaries.

supports_weights: ClassVar[bool] = False

whether the metric supports weights

synonyms: ClassVar[Collection[str]] = ()

synonyms for this metric

Methods Documentation

__call__(y_true: ndarray, y_score: ndarray, weights: ndarray | None = None) float[source]

Evaluate the metric.

Parameters:
  • y_true (ndarray) – shape: (num_samples,) the true labels, either 0 or 1.

  • y_score (ndarray) – shape: (num_samples,) the predictions, either continuous or binarized.

  • weights (ndarray | None) –

    shape: (num_samples,) weights for individual predictions

    Warning

Not all metrics support sample weights; check supports_weights first.

Returns:

the scalar metric value

Raises:

ValueError – when weights are provided but the function does not support them.

Return type:

float
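The documented contract of __call__ (delegating to forward(), and raising ValueError when weights are passed to a metric that does not support them) can be sketched as follows. The class below is a hypothetical minimal stand-in mirroring the interface, not the library's implementation:

```python
import numpy as np

# Hypothetical stand-in mimicking the documented interface;
# not the actual library class.
class UnweightedAccuracy:
    supports_weights = False  # cf. the supports_weights class attribute

    def forward(self, y_true, y_score, sample_weight=None):
        # compare binarized scores against the 0/1 labels
        return float(np.mean(y_true == (y_score >= 0.5)))

    def __call__(self, y_true, y_score, weights=None):
        # the documented contract: reject weights when unsupported
        if weights is not None and not self.supports_weights:
            raise ValueError("this metric does not support sample weights")
        return self.forward(y_true, y_score, sample_weight=weights)

metric = UnweightedAccuracy()
y_true = np.array([1, 0, 1, 1])
y_score = np.array([0.9, 0.2, 0.4, 0.8])
metric(y_true, y_score)  # 0.75
```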

extra_repr() str

Generate the extra repr, cf. torch.nn.Module.extra_repr().

Returns:

the extra part of the repr()

Return type:

str

abstractmethod forward(y_true: ndarray, y_score: ndarray, sample_weight: ndarray | None = None) float[source]

Calculate the metric.

Parameters:
  • y_true (ndarray) – shape: (num_samples,) the true labels, either 0 or 1.

  • y_score (ndarray) – shape: (num_samples,) the predictions, either as continuous scores, or as binarized prediction (depending on the concrete metric at hand).

  • sample_weight (ndarray | None) – shape: (num_samples,) sample weights

Returns:

a scalar metric value

Return type:

float

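A concrete subclass supplies the abstract forward(). Below is a hedged sketch of a weighted-accuracy implementation, assuming supports_weights is set to True; the class name and logic are illustrative, not taken from the library:

```python
import numpy as np

# Hypothetical concrete metric implementing the abstract forward();
# a sketch, not the library's actual implementation.
class WeightedAccuracy:
    supports_weights = True

    def forward(self, y_true, y_score, sample_weight=None):
        correct = (y_true == (y_score >= 0.5)).astype(float)
        # np.average covers both the unweighted (weights=None)
        # and the weighted case
        return float(np.average(correct, weights=sample_weight))

metric = WeightedAccuracy()
y_true = np.array([1, 0, 1])
y_score = np.array([0.9, 0.1, 0.3])
metric.forward(y_true, y_score)  # 2/3
metric.forward(y_true, y_score, sample_weight=np.array([1.0, 1.0, 2.0]))  # 0.5
```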

classmethod get_description() str

Get the description.

Return type:

str

classmethod get_link() str

Get the link from the docdata.

Return type:

str

classmethod get_range() str

Get the math notation for the range of this metric.

Return type:

str

iter_extra_repr() Iterable[str]

Iterate over the components of the extra_repr().

This method is typically overridden. A common pattern would be

def iter_extra_repr(self) -> Iterable[str]:
    yield from super().iter_extra_repr()
    yield "<key1>=<value1>"
    yield "<key2>=<value2>"

Returns:

an iterable over individual components of the extra_repr()

Return type:

Iterable[str]
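The pattern above can be made concrete with a small self-contained sketch: extra_repr() joins the components yielded by iter_extra_repr(), and subclasses extend the iterator via super(). The class names here are hypothetical, mirroring the documented methods rather than the actual base class:

```python
from typing import Iterable

# Hypothetical base sketching the extra_repr / iter_extra_repr pattern;
# not the library's actual base class.
class Base:
    def iter_extra_repr(self) -> Iterable[str]:
        # base yields nothing; subclasses extend via super()
        yield from ()

    def extra_repr(self) -> str:
        # the extra repr is the comma-joined components
        return ", ".join(self.iter_extra_repr())

    def __repr__(self) -> str:
        return f"{self.__class__.__name__}({self.extra_repr()})"

class ThresholdMetric(Base):
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def iter_extra_repr(self) -> Iterable[str]:
        yield from super().iter_extra_repr()
        yield f"threshold={self.threshold}"

repr(ThresholdMetric())  # 'ThresholdMetric(threshold=0.5)'
```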