RankBasedMetricResults
- class RankBasedMetricResults(arithmetic_mean_rank, geometric_mean_rank, median_rank, harmonic_mean_rank, inverse_arithmetic_mean_rank, inverse_geometric_mean_rank, inverse_harmonic_mean_rank, inverse_median_rank, rank_count, rank_std, rank_var, rank_mad, hits_at_k, adjusted_arithmetic_mean_rank, adjusted_arithmetic_mean_rank_index)[source]
Bases: pykeen.evaluation.evaluator.MetricResults
Results from computing metrics.
Methods Summary
- from_dict(kvs, *[, infer_missing])
- from_json(s, *[, parse_float, parse_int, ...])
- get_metric(name): Get the rank-based metric.
- schema(*[, infer_missing, only, exclude, ...])
- to_df(): Output the metrics as a pandas dataframe.
- to_dict([encode_json]): Get the results as a flattened dictionary.
- to_json(*[, skipkeys, ensure_ascii, ...])
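To make the "flattened dictionary" returned by to_dict concrete, here is a hypothetical sketch of its shape; the dotted keys follow the side/type/metric naming scheme documented under get_metric below, and the numeric values are made up:

```python
# Hypothetical illustration of a flattened metrics dictionary;
# keys follow the "side.type.metric" pattern, values are made up.
flattened = {
    "both.realistic.mean_rank": 123.4,
    "both.realistic.mean_reciprocal_rank": 0.31,
    "both.realistic.hits_at_10": 0.45,
}

# Such a dictionary converts naturally into tabular rows, which is
# roughly the information to_df exposes as a pandas dataframe:
rows = [
    (*key.split("."), value)  # (side, type, metric, value)
    for key, value in flattened.items()
]
```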
Methods Documentation
- classmethod from_dict(kvs, *, infer_missing=False)
- Return type
~A
- classmethod from_json(s, *, parse_float=None, parse_int=None, parse_constant=None, infer_missing=False, **kw)
- Return type
~A
- get_metric(name)[source]
Get the rank-based metric.
- Parameters
name (str) – The name of the metric, created by concatenating three parts:
1. The side (one of “head”, “tail”, or “both”). Most publications exclusively report “both”.
2. The type (one of “optimistic”, “pessimistic”, or “realistic”).
3. The metric name (“adjusted_mean_rank_index”, “adjusted_mean_rank”, “mean_rank”, “mean_reciprocal_rank”, “inverse_geometric_mean_rank”, or “hits@k”, where k defaults to 10 but can be substituted with any integer. By default, 1, 3, 5, and 10 are available; other values of k can be calculated by setting the appropriate variable in the evaluation_kwargs of pykeen.pipeline.pipeline() or by setting ks in pykeen.evaluation.RankBasedEvaluator.)
In general, all metrics are available for all combinations of sides and types, except AMR and AMRI, which are only computed for the realistic (average) type: calculating the expected mean rank in the optimistic and pessimistic scenarios is still an active area of research, and therefore has no implementation yet.
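The three-part naming scheme above can be sketched as a small helper. This is an illustration only, not part of the PyKEEN API:

```python
# Hypothetical helper composing a metric name from the three parts
# described above; get_metric accepts names of this shape.
def compose_metric_name(side: str, rank_type: str, metric: str) -> str:
    if side not in {"head", "tail", "both"}:
        raise ValueError(f"invalid side: {side}")
    if rank_type not in {"optimistic", "pessimistic", "realistic"}:
        raise ValueError(f"invalid type: {rank_type}")
    return f"{side}.{rank_type}.{metric}"
```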
- Return type
float
- Returns
The value for the metric.
- Raises
ValueError – if an invalid name is given.
Get the average MR:
>>> metric_results.get_metric('both.realistic.mean_rank')
If only a metric name is given, “both” sides and the “realistic” type are assumed:
>>> metric_results.get_metric('adjusted_mean_rank_index')
This function will do its best to infer what is meant if you only specify one part:
>>> metric_results.get_metric('head.mean_rank')
>>> metric_results.get_metric('optimistic.mean_rank')
Get the default Hits @ K (where \(k=10\)):
>>> metric_results.get_metric('hits@k')
Get a given Hits @ K:
>>> metric_results.get_metric('hits@5')
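The defaulting behaviour shown in the examples can be approximated by the following sketch (a hypothetical re-implementation, not the library's actual parser), assuming omitted parts fall back to “both” and “realistic”:

```python
# Hypothetical sketch of the name inference described above: missing
# parts default to side="both" and type="realistic".
SIDES = {"head", "tail", "both"}
TYPES = {"optimistic", "pessimistic", "realistic"}

def resolve_metric_name(name: str) -> tuple[str, str, str]:
    *prefix, metric = name.split(".")
    side, rank_type = "both", "realistic"
    for part in prefix:
        if part in SIDES:
            side = part
        elif part in TYPES:
            rank_type = part
        else:
            raise ValueError(f"invalid name component: {part}")
    return side, rank_type, metric
```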
- classmethod schema(*, infer_missing=False, only=None, exclude=(), many=False, context=None, load_only=(), dump_only=(), partial=False, unknown=None)
- Return type
SchemaF[~A]