RankBasedMetricResults

class RankBasedMetricResults(data)[source]

Bases: pykeen.evaluation.evaluator.MetricResults

Results from computing rank-based evaluation metrics.

Initialize the result wrapper.
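
For orientation, instances of this class are usually not constructed by hand; the sketch below shows one common way to obtain one, assuming the standard pykeen.pipeline.pipeline() workflow (the dataset, model, and epoch count are illustrative only):

>>> from pykeen.pipeline import pipeline
>>> # pipeline() evaluates with RankBasedEvaluator by default, so the
>>> # resulting metric_results is a RankBasedMetricResults instance
>>> result = pipeline(dataset='Nations', model='TransE', training_kwargs=dict(num_epochs=5))
>>> metric_results = result.metric_results
>>> metric_results.get_metric('both.realistic.inverse_harmonic_mean_rank')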

Attributes Summary

metrics

Methods Summary

create_random([random_state])

Create random results useful for testing.

from_ranks(metrics, rank_and_candidates)

Create rank-based metric results from the given rank/candidate sets.

get_metric(name)

Get the rank-based metric.

to_df()

Output the metrics as a pandas dataframe.

to_dict()

Get the results as a dictionary.

to_flat_dict()

Get the results as a flattened dictionary.

Attributes Documentation

metrics: ClassVar[Mapping[str, Type[Metric]]] = {
    'adjusted_arithmetic_mean_rank': <class 'pykeen.metrics.ranking.AdjustedArithmeticMeanRank'>,
    'adjusted_arithmetic_mean_rank_index': <class 'pykeen.metrics.ranking.AdjustedArithmeticMeanRankIndex'>,
    'adjusted_geometric_mean_rank_index': <class 'pykeen.metrics.ranking.AdjustedGeometricMeanRankIndex'>,
    'adjusted_hits_at_k': <class 'pykeen.metrics.ranking.AdjustedHitsAtK'>,
    'adjusted_inverse_harmonic_mean_rank': <class 'pykeen.metrics.ranking.AdjustedInverseHarmonicMeanRank'>,
    'arithmetic_mean_rank': <class 'pykeen.metrics.ranking.ArithmeticMeanRank'>,
    'count': <class 'pykeen.metrics.ranking.Count'>,
    'geometric_mean_rank': <class 'pykeen.metrics.ranking.GeometricMeanRank'>,
    'harmonic_mean_rank': <class 'pykeen.metrics.ranking.HarmonicMeanRank'>,
    'hits_at_10': <class 'pykeen.metrics.ranking.HitsAtK'>,
    'inverse_arithmetic_mean_rank': <class 'pykeen.metrics.ranking.InverseArithmeticMeanRank'>,
    'inverse_geometric_mean_rank': <class 'pykeen.metrics.ranking.InverseGeometricMeanRank'>,
    'inverse_harmonic_mean_rank': <class 'pykeen.metrics.ranking.InverseHarmonicMeanRank'>,
    'inverse_median_rank': <class 'pykeen.metrics.ranking.InverseMedianRank'>,
    'median_absolute_deviation': <class 'pykeen.metrics.ranking.MedianAbsoluteDeviation'>,
    'median_rank': <class 'pykeen.metrics.ranking.MedianRank'>,
    'standard_deviation': <class 'pykeen.metrics.ranking.StandardDeviation'>,
    'variance': <class 'pykeen.metrics.ranking.Variance'>,
    'z_arithmetic_mean_rank': <class 'pykeen.metrics.ranking.ZArithmeticMeanRank'>,
    'z_geometric_mean_rank': <class 'pykeen.metrics.ranking.ZGeometricMeanRank'>,
    'z_hits_at_k': <class 'pykeen.metrics.ranking.ZHitsAtK'>,
    'z_inverse_harmonic_mean_rank': <class 'pykeen.metrics.ranking.ZInverseHarmonicMeanRank'>,
}
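
Because metrics is a plain class-level mapping, the supported metric keys and their implementing classes can be enumerated programmatically, for example:

>>> from pykeen.evaluation import RankBasedMetricResults
>>> # list the canonical metric keys
>>> sorted(RankBasedMetricResults.metrics)
>>> # look up the class implementing a given key
>>> RankBasedMetricResults.metrics['arithmetic_mean_rank']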

Methods Documentation

classmethod create_random(random_state=None)[source]

Create random results useful for testing.

Return type

RankBasedMetricResults
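
A minimal sketch of how this is useful in tests, producing a fully populated results object without running an evaluation (the random_state value is arbitrary):

>>> from pykeen.evaluation import RankBasedMetricResults
>>> metric_results = RankBasedMetricResults.create_random(random_state=42)
>>> metric_results.get_metric('both.realistic.mean_rank')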

classmethod from_ranks(metrics, rank_and_candidates)[source]

Create rank-based metric results from the given rank/candidate sets.

Return type

RankBasedMetricResults

get_metric(name)[source]

Get the rank-based metric.

Parameters

name (str) –

The name of the metric, created by concatenating three parts:

  1. The side (one of “head”, “tail”, or “both”). Most publications exclusively report “both”.

  2. The type (one of “optimistic”, “pessimistic”, or “realistic”).

  3. The metric name (“adjusted_mean_rank_index”, “adjusted_mean_rank”, “mean_rank”, “mean_reciprocal_rank”, “inverse_geometric_mean_rank”, or “hits@k”, where k defaults to 10 but can be replaced with any integer). By default, k values of 1, 3, 5, and 10 are available; other values of k can be computed by setting the appropriate variable in evaluation_kwargs in pykeen.pipeline.pipeline() or by setting ks in pykeen.evaluation.RankBasedEvaluator (see the sketch after the examples below).

In general, all metrics are available for all combinations of sides/types except AMR and AMRI, which are only calculated for the average type. This is because computing the expected MR in the optimistic and pessimistic cases is still an active area of research and therefore not yet implemented.

Return type

float

Returns

The value for the metric

Raises

ValueError – if an invalid name is given.

Get the average MR

>>> metric_results.get_metric('both.realistic.mean_rank')

If you give only a metric name, it is assumed to be for “both” sides and the “realistic” type.

>>> metric_results.get_metric('adjusted_mean_rank_index')

This method will do its best to infer the remaining parts if you specify only one of them alongside the metric name.

>>> metric_results.get_metric('head.mean_rank')
>>> metric_results.get_metric('optimistic.mean_rank')

Get the default Hits @ K (where k = 10)

>>> metric_results.get_metric('hits@k')

Get a given Hits @ K

>>> metric_results.get_metric('hits@5')
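
As noted in the parameter description, Hits @ K for non-default k must be requested at evaluation time rather than from an existing results object. A minimal sketch using the ks argument of pykeen.evaluation.RankBasedEvaluator (the chosen values are illustrative):

>>> from pykeen.evaluation import RankBasedEvaluator
>>> evaluator = RankBasedEvaluator(ks=(1, 3, 5, 10, 50))
>>> # after evaluator.evaluate(...) has produced metric_results:
>>> # metric_results.get_metric('hits@50')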

to_df()[source]

Output the metrics as a pandas dataframe.

Return type

DataFrame
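
A short usage sketch; the long-format column names assumed here (Side, Type, Metric, Value) may vary across PyKEEN versions:

>>> df = metric_results.to_df()
>>> # e.g., keep only the values most publications report
>>> df[(df['Side'] == 'both') & (df['Type'] == 'realistic')]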

to_dict()[source]

Get the results as a dictionary.

Return type

Mapping[Union[Literal['head', 'relation', 'tail'], Literal['both']], Mapping[Literal['optimistic', 'realistic', 'pessimistic'], Mapping[str, float]]]
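
The return type above implies a side → type → metric-name nesting, so individual values can be read with plain indexing (the innermost key shown here is assumed to follow the metric-key naming of the metrics attribute):

>>> d = metric_results.to_dict()
>>> d['both']['realistic']['hits_at_10']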

to_flat_dict()[source]

Get the results as a flattened dictionary.
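
A sketch of the flattened form, assuming the keys are the dotted side.type.metric names accepted by get_metric() (the exact key format may differ between versions):

>>> flat = metric_results.to_flat_dict()
>>> flat['both.realistic.hits_at_10']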