ZArithmeticMeanRank

class ZArithmeticMeanRank(base_cls=None, **kwargs)[source]

Bases: ZMetric

The z-scored arithmetic mean rank.
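Concretely, a sketch of the definition (under the uniform-rank assumption documented for expected_value() and variance() below, with the sign chosen so that larger values are better):

\[
\text{zMR} = \frac{\mathbb{E}\left[\text{MR}\right] - \text{MR}}{\sqrt{\mathbb{V}\left[\text{MR}\right]}},
\qquad
\mathbb{E}\left[\text{MR}\right] = \frac{1}{s} \sum_{i=1}^{s} \frac{N_i + 1}{2},
\qquad
\mathbb{V}\left[\text{MR}\right] = \frac{1}{s^2} \sum_{i=1}^{s} \frac{N_i^2 - 1}{12}
\]

where \(\text{MR}\) is the arithmetic mean of the \(s\) observed ranks and \(N_i\) the number of candidates for ranking task \(i\).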

Initialize the derived metric.

Attributes Summary

binarize

whether the metric needs binarized scores

closed_expectation

whether there is a closed-form solution of the expectation

closed_variance

whether there is a closed-form solution of the variance

increasing

Z-adjusted metrics are formulated to be increasing

key

Return the key for use in metric result dictionaries.

name

The name of the metric

needs_candidates

whether the metric requires the number of candidates for each ranking task

supported_rank_types

Z-adjusted metrics can only be applied to realistic ranks

supports_weights

whether the metric supports weights

synonyms

synonyms for this metric

value_range

the value range

Methods Summary

__call__(ranks[, num_candidates, weights])

Evaluate the metric.

adjust(base_metric_result, num_candidates[, ...])

Adjust base metric results based on the number of candidates.

expected_value(num_candidates[, ...])

Compute expected metric value.

extra_repr()

Generate the extra repr, cf. torch.nn.Module.extra_repr().

get_coefficients(num_candidates[, weights])

Compute the scaling coefficients.

get_description()

Get the description.

get_link()

Get the link from the docdata.

get_range()

Get the math notation for the range of this metric.

get_sampled_values(num_candidates, num_samples)

Calculate the metric on sampled rank arrays.

iter_extra_repr()

Iterate over the components of the extra_repr().

numeric_expected_value(**kwargs)

Compute expected metric value by summation.

numeric_expected_value_with_ci(**kwargs)

Estimate expected value with confidence intervals.

numeric_variance(**kwargs)

Compute variance by summation.

numeric_variance_with_ci(**kwargs)

Estimate variance with confidence intervals.

std(num_candidates[, num_samples, weights])

Compute the standard deviation.

variance(num_candidates[, num_samples, weights])

Compute variance.

Attributes Documentation

binarize: ClassVar[bool] = False

whether the metric needs binarized scores

closed_expectation: ClassVar[bool] = True

whether there is a closed-form solution of the expectation

closed_variance: ClassVar[bool] = True

whether there is a closed-form solution of the variance

increasing: ClassVar[bool] = True

Z-adjusted metrics are formulated to be increasing

key

Return the key for use in metric result dictionaries.

Return type:

str

name: ClassVar[str] = 'z-Mean Rank (zMR)'

The name of the metric

needs_candidates: ClassVar[bool] = True

whether the metric requires the number of candidates for each ranking task

supported_rank_types: ClassVar[Collection[Literal['optimistic', 'realistic', 'pessimistic']]] = ('realistic',)

Z-adjusted metrics can only be applied to realistic ranks

supports_weights: ClassVar[bool] = True

whether the metric supports weights

synonyms: ClassVar[Collection[str]] = ('zamr', 'zmr')

synonyms for this metric

value_range: ClassVar[ValueRange] = ValueRange(lower=None, lower_inclusive=False, upper=None, upper_inclusive=False)

the value range

Methods Documentation

__call__(ranks, num_candidates=None, weights=None)

Evaluate the metric.

Parameters:
  • ranks (ndarray) – shape: s the individual ranks

  • num_candidates (Optional[ndarray]) – shape: s the number of candidates for each individual ranking task

  • weights (Optional[ndarray]) – shape: s the weights for the individual ranks

Return type:

float
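For intuition, here is a minimal numpy sketch of what this call computes in the unweighted case, under the uniform-rank assumption documented below. This is an illustrative re-implementation with a hypothetical function name, not PyKEEN's code:

```python
import numpy as np

def z_mean_rank(ranks: np.ndarray, num_candidates: np.ndarray) -> float:
    """Z-score the arithmetic mean rank, assuming rank_i ~ U(1, N_i).

    Uses the closed forms E[r_i] = (N_i + 1) / 2 and
    Var[r_i] = (N_i**2 - 1) / 12 of a discrete uniform rank.
    """
    s = len(ranks)
    mean_rank = ranks.mean()
    expected = ((num_candidates + 1.0) / 2.0).mean()
    # variance of the *mean* of s independent uniform ranks
    variance = ((num_candidates.astype(float) ** 2 - 1.0) / 12.0).sum() / s**2
    # sign chosen so that larger values indicate better-than-random rankings
    return float((expected - mean_rank) / np.sqrt(variance))

ranks = np.array([1, 3, 2, 10])
num_candidates = np.array([50, 50, 100, 100])
print(z_mean_rank(ranks, num_candidates))  # ≈ 2.98, well above random
```

A value near 0 indicates performance indistinguishable from uniformly random ranking; the z-scoring makes the value comparable across datasets of different candidate-set sizes.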

adjust(base_metric_result, num_candidates, weights=None)

Adjust base metric results based on the number of candidates.

Parameters:
  • base_metric_result (float) – the result of the base metric

  • num_candidates (ndarray) – the number of candidates

  • weights (Optional[ndarray]) – shape: s the weights for the individual ranking tasks

Return type:

float

Returns:

the adjusted metric

Note

since the adjustment only depends on the number of candidates, but not the ranks of the predictions, this method can also be used to adjust published results without access to the trained models.

expected_value(num_candidates, num_samples=None, weights=None, **kwargs)

Compute expected metric value.

The expectation is computed under the assumption that each individual rank follows a discrete uniform distribution \(\mathcal{U}\left(1, N_i\right)\), where \(N_i\) denotes the number of candidates for ranking task \(r_i\).

Parameters:
  • num_candidates (ndarray) – the number of candidates for each individual rank computation

  • num_samples (Optional[int]) – the number of samples to use for simulation, if no closed form expected value is implemented

  • weights (Optional[ndarray]) – shape: s the weights for the individual ranking tasks

  • kwargs – additional keyword-based parameters passed to get_sampled_values(), if no closed form solution is available

Return type:

float

Returns:

the expected value of this metric

Raises:

NoClosedFormError – raised if a closed-form expectation has not been implemented and no number of samples is given

Note

Prefers analytical solution, if available, but falls back to numeric estimation via summation, cf. RankBasedMetric.numeric_expected_value().

extra_repr()

Generate the extra repr, cf. torch.nn.Module.extra_repr().

Return type:

str

Returns:

the extra part of the repr()

get_coefficients(num_candidates, weights=None)

Compute the scaling coefficients.

Parameters:
  • num_candidates (ndarray) – the number of candidates

  • weights (Optional[ndarray]) – the weights for the individual ranking tasks

Return type:

AffineTransformationParameters

Returns:

a tuple (scale, offset)
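Under the same uniform-rank assumption, the affine adjustment z = scale · base + offset can be sketched as follows (hypothetical helper for illustration, not PyKEEN's implementation):

```python
import numpy as np

def z_coefficients(num_candidates: np.ndarray) -> tuple[float, float]:
    """Return (scale, offset) such that zMR = scale * MR + offset."""
    s = len(num_candidates)
    expected = ((num_candidates + 1.0) / 2.0).mean()
    std = np.sqrt(((num_candidates.astype(float) ** 2 - 1.0) / 12.0).sum() / s**2)
    # negative scale: a *lower* mean rank yields a *higher* z score
    return -1.0 / std, expected / std

scale, offset = z_coefficients(np.array([50, 50, 100, 100]))
mean_rank = 4.0
print(scale * mean_rank + offset)  # same value as z-scoring MR directly
```

Because the coefficients depend only on the number of candidates, this affine form is what allows published mean-rank results to be adjusted without re-running the model, as noted for adjust() above.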

classmethod get_description()

Get the description.

Return type:

str

classmethod get_link()

Get the link from the docdata.

Return type:

str

classmethod get_range()

Get the math notation for the range of this metric.

Return type:

str

get_sampled_values(num_candidates, num_samples, weights=None, generator=None, memory_intense=True)

Calculate the metric on sampled rank arrays.

Parameters:
  • num_candidates (ndarray) – shape: s the number of candidates for each ranking task

  • num_samples (int) – the number of samples

  • weights (Optional[ndarray]) – shape: s the weights for the individual ranking tasks

  • generator (Optional[Generator]) – a random state for reproducibility

  • memory_intense (bool) – whether to use a more memory-intense, but more time-efficient variant

Return type:

ndarray

Returns:

shape: (num_samples,) the metric evaluated on num_samples sampled rank arrays
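Such sampling can be sketched with numpy as follows (the "memory-intense" variant drawing all samples at once; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
num_candidates = np.array([50, 100, 200])
num_samples = 10_000

# draw ranks uniformly from {1, ..., N_i} for every task and every sample;
# high is exclusive, hence num_candidates + 1
ranks = rng.integers(low=1, high=num_candidates + 1,
                     size=(num_samples, len(num_candidates)))
sampled = ranks.mean(axis=1)  # shape: (num_samples,) -- one mean rank per sample

# the sample mean approaches the closed-form expectation mean((N_i + 1) / 2)
print(sampled.mean())  # close to the exact value 58.83
```

The resulting array of per-sample metric values is what the numeric_*_with_ci() methods summarize into point estimates with confidence intervals.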

iter_extra_repr()

Iterate over the components of the extra_repr().

This method is typically overridden. A common pattern would be:

def iter_extra_repr(self) -> Iterable[str]:
    yield from super().iter_extra_repr()
    yield "<key1>=<value1>"
    yield "<key2>=<value2>"
Return type:

Iterable[str]

Returns:

an iterable over individual components of the extra_repr()

numeric_expected_value(**kwargs)

Compute expected metric value by summation.

The expectation is computed under the assumption that each individual rank follows a discrete uniform distribution \(\mathcal{U}\left(1, N_i\right)\), where \(N_i\) denotes the number of candidates for ranking task \(r_i\).

Parameters:

kwargs – keyword-based parameters passed to get_sampled_values()

Return type:

float

Returns:

The estimated expected value of this metric

Warning

Depending on the metric, the estimate may not be very accurate and converge slowly, cf. https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_discrete.expect.html

numeric_expected_value_with_ci(**kwargs)

Estimate expected value with confidence intervals.

Return type:

ndarray

numeric_variance(**kwargs)

Compute variance by summation.

The variance is computed under the assumption that each individual rank follows a discrete uniform distribution \(\mathcal{U}\left(1, N_i\right)\), where \(N_i\) denotes the number of candidates for ranking task \(r_i\).

Parameters:

kwargs – keyword-based parameters passed to get_sampled_values()

Return type:

float

Returns:

The estimated variance of this metric

Warning

Depending on the metric, the estimate may not be very accurate and converge slowly, cf. https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_discrete.expect.html

numeric_variance_with_ci(**kwargs)

Estimate variance with confidence intervals.

Return type:

ndarray

std(num_candidates, num_samples=None, weights=None, **kwargs)

Compute the standard deviation.

Parameters:
  • num_candidates (ndarray) – the number of candidates for each individual rank computation

  • num_samples (Optional[int]) – the number of samples to use for simulation, if no closed form expected value is implemented

  • weights (Optional[ndarray]) – shape: s the weights for the individual ranking tasks

  • kwargs – additional keyword-based parameters passed to variance()

Return type:

float

Returns:

The standard deviation (i.e. the square root of the variance) of this metric

For a detailed explanation, cf. RankBasedMetric.variance().

variance(num_candidates, num_samples=None, weights=None, **kwargs)

Compute variance.

The variance is computed under the assumption that each individual rank follows a discrete uniform distribution \(\mathcal{U}\left(1, N_i\right)\), where \(N_i\) denotes the number of candidates for ranking task \(r_i\).

Parameters:
  • num_candidates (ndarray) – the number of candidates for each individual rank computation

  • num_samples (Optional[int]) – the number of samples to use for simulation, if no closed form expected value is implemented

  • weights (Optional[ndarray]) – shape: s the weights for the individual ranking tasks

  • kwargs – additional keyword-based parameters passed to get_sampled_values(), if no closed form solution is available

Return type:

float

Returns:

The variance of this metric

Raises:

NoClosedFormError – raised if a closed-form variance has not been implemented and no number of samples is given

Note

Prefers analytical solution, if available, but falls back to numeric estimation via summation, cf. RankBasedMetric.numeric_variance().