RankBasedMetricResults

class RankBasedMetricResults(data)[source]

Bases: MetricResults[RankBasedMetricKey]

Results from computing rank-based metrics.

Initialize the result wrapper.

Parameters:

data (Mapping[MetricKeyType, float]) – a mapping from metric keys to the corresponding metric values
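
A minimal usage sketch (this assumes the wrapped mapping is exposed via the data attribute, as suggested by the constructor signature):

>>> from pykeen.evaluation import RankBasedMetricResults
>>> key = RankBasedMetricResults.key_from_string('both.realistic.arithmetic_mean_rank')
>>> results = RankBasedMetricResults({key: 42.0})
>>> results.data[key]
42.0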

Attributes Summary

metrics

Methods Summary

create_random([random_state])

Create random results useful for testing.

from_ranks(metrics, rank_and_candidates)

Create rank-based metric results from the given rank/candidate sets.

key_from_string(s)

Get the rank-based metric key.

Attributes Documentation

metrics: ClassVar[Mapping[str, Type[Metric]]] = {
    'adjusted_arithmetic_mean_rank': <class 'pykeen.metrics.ranking.AdjustedArithmeticMeanRank'>,
    'adjusted_arithmetic_mean_rank_index': <class 'pykeen.metrics.ranking.AdjustedArithmeticMeanRankIndex'>,
    'adjusted_geometric_mean_rank_index': <class 'pykeen.metrics.ranking.AdjustedGeometricMeanRankIndex'>,
    'adjusted_hits_at_k': <class 'pykeen.metrics.ranking.AdjustedHitsAtK'>,
    'adjusted_inverse_harmonic_mean_rank': <class 'pykeen.metrics.ranking.AdjustedInverseHarmonicMeanRank'>,
    'arithmetic_mean_rank': <class 'pykeen.metrics.ranking.ArithmeticMeanRank'>,
    'count': <class 'pykeen.metrics.ranking.Count'>,
    'geometric_mean_rank': <class 'pykeen.metrics.ranking.GeometricMeanRank'>,
    'harmonic_mean_rank': <class 'pykeen.metrics.ranking.HarmonicMeanRank'>,
    'hits_at_10': <class 'pykeen.metrics.ranking.HitsAtK'>,
    'inverse_arithmetic_mean_rank': <class 'pykeen.metrics.ranking.InverseArithmeticMeanRank'>,
    'inverse_geometric_mean_rank': <class 'pykeen.metrics.ranking.InverseGeometricMeanRank'>,
    'inverse_harmonic_mean_rank': <class 'pykeen.metrics.ranking.InverseHarmonicMeanRank'>,
    'inverse_median_rank': <class 'pykeen.metrics.ranking.InverseMedianRank'>,
    'median_absolute_deviation': <class 'pykeen.metrics.ranking.MedianAbsoluteDeviation'>,
    'median_rank': <class 'pykeen.metrics.ranking.MedianRank'>,
    'standard_deviation': <class 'pykeen.metrics.ranking.StandardDeviation'>,
    'variance': <class 'pykeen.metrics.ranking.Variance'>,
    'z_arithmetic_mean_rank': <class 'pykeen.metrics.ranking.ZArithmeticMeanRank'>,
    'z_geometric_mean_rank': <class 'pykeen.metrics.ranking.ZGeometricMeanRank'>,
    'z_hits_at_k': <class 'pykeen.metrics.ranking.ZHitsAtK'>,
    'z_inverse_harmonic_mean_rank': <class 'pykeen.metrics.ranking.ZInverseHarmonicMeanRank'>,
}
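
For example, the metric class backing a given metric name can be looked up directly in this mapping:

>>> RankBasedMetricResults.metrics['arithmetic_mean_rank']
<class 'pykeen.metrics.ranking.ArithmeticMeanRank'>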

Methods Documentation

classmethod create_random(random_state=None)[source]

Create random results useful for testing.

Parameters:

random_state (int | None) – the random seed

Return type:

RankBasedMetricResults
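
For quick smoke tests, the random results can be queried like regular ones. A small sketch, assuming the inherited get_metric helper and that the random data covers the standard “both”/“realistic” metrics:

>>> results = RankBasedMetricResults.create_random(random_state=42)
>>> value = results.get_metric('both.realistic.inverse_harmonic_mean_rank')
>>> isinstance(value, float)
True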

classmethod from_ranks(metrics, rank_and_candidates)[source]

Create rank-based metric results from the given rank/candidate sets.

Parameters:

metrics – the rank-based metrics to compute

rank_and_candidates – the rank/candidate sets

Return type:

RankBasedMetricResults

classmethod key_from_string(s)[source]

Get the rank-based metric key.

The key input is understood as a dot-separated composition of:

  1. The side (one of “head”, “tail”, or “both”). Most publications exclusively report “both”. If not given, “both” is assumed.

  2. The rank type (one of “optimistic”, “pessimistic”, or “realistic”). If not given, “realistic” is assumed.

  3. The metric name, e.g., “adjusted_mean_rank_index”, “adjusted_mean_rank”, “mean_rank”, “mean_reciprocal_rank”, “inverse_geometric_mean_rank”, or “hits@k”, where k defaults to 10 but can be substituted for any integer. By default, 1, 3, 5, and 10 are available. Other values of k can be computed by setting the appropriate variable in the evaluation_kwargs of pykeen.pipeline.pipeline() or by setting ks on the pykeen.evaluation.RankBasedEvaluator.

In general, all metrics are available for all combinations of sides and rank types, except AMR and AMRI, which are only calculated for the realistic (average) rank type. This is because computing the expected mean rank under the optimistic and pessimistic scenarios is still an active area of research and is therefore not yet implemented.

Parameters:

s (str | None) – a string denoting a metric key

Return type:

RankBasedMetricKey

Returns:

The resolved key.

Raises:

ValueError – if the string cannot be resolved to a metric key

Get the average MR

>>> RankBasedMetricResults.key_from_string('both.realistic.mean_rank')
RankBasedMetricKey(side='both', rank_type='realistic', metric='arithmetic_mean_rank')

If you only give a metric name, it assumes that it’s for ‘both’ sides and ‘realistic’ type.

>>> RankBasedMetricResults.key_from_string('adjusted_mean_rank_index')
RankBasedMetricKey(side='both', rank_type='realistic', metric='adjusted_arithmetic_mean_rank_index')

This function will do its best to infer what is meant if you specify only some of the parts.

>>> RankBasedMetricResults.key_from_string('head.mean_rank')
RankBasedMetricKey(side='head', rank_type='realistic', metric='arithmetic_mean_rank')
>>> RankBasedMetricResults.key_from_string('optimistic.mean_rank')
RankBasedMetricKey(side='both', rank_type='optimistic', metric='arithmetic_mean_rank')

Get the default Hits @ K (where k = 10)

>>> RankBasedMetricResults.key_from_string('hits@k')
RankBasedMetricKey(side='both', rank_type='realistic', metric='hits_at_10')

Get a given Hits @ K

>>> RankBasedMetricResults.key_from_string('hits@5')
RankBasedMetricKey(side='both', rank_type='realistic', metric='hits_at_5')
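
The parts can also be combined freely. The following output is extrapolated from the patterns above rather than taken from the original documentation:

>>> RankBasedMetricResults.key_from_string('tail.optimistic.hits@3')
RankBasedMetricKey(side='tail', rank_type='optimistic', metric='hits_at_3')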