ClassificationMetric

class ClassificationMetric[source]

Bases: Metric, ABC

A base class for classification metrics.

Attributes Summary

binarize

whether the metric needs binarized scores

closed_expectation

whether there is a closed-form solution of the expectation

closed_variance

whether there is a closed-form solution of the variance

key

Return the key for use in metric result dictionaries.

supports_weights

whether the metric supports weights

synonyms

synonyms for this metric

Methods Summary

__call__(y_true, y_score[, weights])

Evaluate the metric.

extra_repr()

Generate the extra repr, cf. torch.nn.Module.extra_repr().

forward(y_true, y_score[, sample_weight])

Calculate the metric.

get_description()

Get the description.

get_link()

Get the link from the docdata.

get_range()

Get the math notation for the range of this metric.

iter_extra_repr()

Iterate over the components of the extra_repr().

Attributes Documentation

binarize: ClassVar[bool | None] = None

whether the metric needs binarized scores

closed_expectation: ClassVar[bool] = False

whether there is a closed-form solution of the expectation

closed_variance: ClassVar[bool] = False

whether there is a closed-form solution of the variance

key

Return the key for use in metric result dictionaries.

Return type:

str

supports_weights: ClassVar[bool] = False

whether the metric supports weights

synonyms: ClassVar[Collection[str]] = ()

synonyms for this metric
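
The key and synonyms attributes together support name-based lookup of metrics in result dictionaries and resolvers. As a hedged illustration (the F1Sketch class and the resolve helper below are hypothetical, not part of the library), a resolver might match a user-supplied name against either:

```python
class F1Sketch:
    """Hypothetical stand-in for a concrete classification metric."""

    #: synonyms for this metric
    synonyms = ("f1", "f-measure")

    @property
    def key(self):
        # the key used in metric result dictionaries
        return "f1_score"


def resolve(name, metrics):
    """Hypothetical helper: match a name against key or synonyms."""
    for metric in metrics:
        if name == metric.key or name in metric.synonyms:
            return metric
    raise KeyError(name)


print(resolve("f-measure", [F1Sketch()]).key)  # f1_score
```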

Methods Documentation

__call__(y_true, y_score, weights=None)[source]

Evaluate the metric.

Parameters:
  • y_true (ndarray) – shape: (num_samples,) the true labels, either 0 or 1.

  • y_score (ndarray) – shape: (num_samples,) the predictions, either continuous or binarized.

  • weights (ndarray | None) –

    shape: (num_samples,) weights for individual predictions

    Warning

    Not all metrics support sample weights; check supports_weights first.

Return type:

float

Returns:

the scalar metric value

Raises:

ValueError – when weights are provided but the function does not support them.
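
The contract above can be sketched with a self-contained stand-in (not the actual library class; the accuracy-style scoring and the 0.5 binarization threshold are assumptions for illustration): __call__ validates the weights against supports_weights and then delegates to forward().

```python
import numpy as np


class AccuracySketch:
    """Stand-in mimicking the documented __call__ contract."""

    supports_weights = False

    def __call__(self, y_true, y_score, weights=None):
        # raise ValueError when weights are given but unsupported
        if weights is not None and not self.supports_weights:
            raise ValueError("this metric does not support sample weights")
        return self.forward(y_true, y_score, sample_weight=weights)

    def forward(self, y_true, y_score, sample_weight=None):
        # binarize continuous scores at 0.5 and compare to the true labels
        return float(np.mean((y_score >= 0.5).astype(int) == y_true))


metric = AccuracySketch()
y_true = np.array([1, 0, 1, 1])
y_score = np.array([0.9, 0.2, 0.4, 0.8])
print(metric(y_true, y_score))  # 0.75
```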

extra_repr()

Generate the extra repr, cf. :meth:`torch.nn.Module.extra_repr`.

Return type:

str

Returns:

the extra part of the repr()

abstract forward(y_true, y_score, sample_weight=None)[source]

Calculate the metric.

Parameters:
  • y_true (ndarray) – shape: (num_samples,) the true labels, either 0 or 1.

  • y_score (ndarray) – shape: (num_samples,) the predictions, either as continuous scores, or as binarized prediction (depending on the concrete metric at hand).

  • sample_weight (ndarray | None) – shape: (num_samples,) sample weights

Return type:

float

Returns:

a scalar metric value
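
Since forward() is the only abstract method, a concrete metric needs little more than the computation itself. A minimal sketch (hypothetical class; weighted accuracy chosen only for illustration) of a subclass that supports sample weights:

```python
import numpy as np


class WeightedAccuracySketch:
    """Hypothetical concrete metric implementing only forward()."""

    supports_weights = True

    def forward(self, y_true, y_score, sample_weight=None):
        # y_score is assumed to already be binarized here
        correct = (y_score == y_true).astype(float)
        # np.average treats sample_weight=None as uniform weights
        return float(np.average(correct, weights=sample_weight))


m = WeightedAccuracySketch()
y_true = np.array([1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0])
print(m.forward(y_true, y_pred))  # 0.75
print(m.forward(y_true, y_pred, sample_weight=np.array([1.0, 1.0, 2.0, 0.0])))  # 0.5
```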

classmethod get_description()

Get the description.

Return type:

str

classmethod get_link()

Get the link from the docdata.

Return type:

str

classmethod get_range()

Get the math notation for the range of this metric.

Return type:

str

iter_extra_repr()

Iterate over the components of the extra_repr().

This method is typically overridden. A common pattern would be

def iter_extra_repr(self) -> Iterable[str]:
    yield from super().iter_extra_repr()
    yield "<key1>=<value1>"
    yield "<key2>=<value2>"
Return type:

Iterable[str]

Returns:

an iterable over individual components of the extra_repr()
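
The pattern above composes: if extra_repr() joins the yielded components (assumed here to use ", " as the separator, in the style of torch.nn.Module reprs), a subclass only needs to yield its own key/value pairs. A self-contained sketch with hypothetical names:

```python
from typing import Iterable


class Base:
    def iter_extra_repr(self) -> Iterable[str]:
        return iter(())

    def extra_repr(self) -> str:
        # join the individual components yielded by iter_extra_repr()
        return ", ".join(self.iter_extra_repr())


class ThresholdMetric(Base):
    """Hypothetical subclass exposing one parameter in its repr."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def iter_extra_repr(self) -> Iterable[str]:
        yield from super().iter_extra_repr()
        yield f"threshold={self.threshold}"


print(ThresholdMetric().extra_repr())  # threshold=0.5
```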