LCWAEvaluationLoop
- class LCWAEvaluationLoop(triples_factory, evaluator=None, evaluator_kwargs=None, targets=('head', 'tail'), mode=None, additional_filter_triples=None, **kwargs)[source]
Bases: EvaluationLoop[Mapping[Literal[‘head’, ‘relation’, ‘tail’], LongTensor]]

Evaluation loop using 1:n scoring.
For brevity, we only describe evaluation for tail prediction. Let \((h, r, t) \in \mathcal{T}_{eval}\) denote an evaluation triple. Then, we calculate scores for all triples \((h, r, t')\) with \(t' \in \mathcal{E}\), i.e., for replacing the true tail \(t\) by all entities.
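The 1:n scoring scheme described above can be sketched in plain Python. This is an illustrative toy, not PyKEEN's implementation: the DistMult-style scorer, the hand-picked embeddings, and the `rank_tail` helper are all assumptions made for the example.

```python
# Hypothetical toy embeddings: entity/relation id -> vector.
entities = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [0.5, 0.5]}
relations = {0: [0.5, 2.0]}

def score(h: int, r: int, t: int) -> float:
    """DistMult-style score: sum_i e_h[i] * w_r[i] * e_t[i]."""
    return sum(
        eh * wr * et
        for eh, wr, et in zip(entities[h], relations[r], entities[t])
    )

def rank_tail(h: int, r: int, t_true: int) -> int:
    """Score *all* candidate tails t' in the entity set (1:n scoring)
    and return the optimistic rank of the true tail (1 = best):
    one plus the number of candidates with a strictly higher score."""
    scores = {t: score(h, r, t) for t in entities}
    true_score = scores[t_true]
    return 1 + sum(1 for s in scores.values() if s > true_score)

# For (h=0, r=0), the scores are {0: 0.5, 1: 0.0, 2: 0.25},
# so the true tail 2 is out-ranked only by entity 0.
print(rank_tail(0, 0, 2))
```

In the real evaluation loop, this "score every entity as a candidate tail" step is done as one batched forward pass per (h, r) pair rather than a Python loop.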
Initialize the evaluation loop.
- Parameters:
  - triples_factory (CoreTriplesFactory) – the evaluation triples factory
  - evaluator (Union[str, Evaluator, Type[Evaluator], None]) – the evaluator, or a hint thereof
  - evaluator_kwargs (Optional[Mapping[str, Any]]) – additional keyword-based parameters for instantiating the evaluator
  - targets (Collection[Literal[‘head’, ‘relation’, ‘tail’]]) – the prediction targets
  - mode (Optional[Literal[‘training’, ‘validation’, ‘testing’]]) – the inductive mode, or None for transductive evaluation
  - additional_filter_triples (Union[LongTensor, CoreTriplesFactory, Sequence[Union[LongTensor, CoreTriplesFactory]], None]) – additional filter triples to use for creating the filter
  - kwargs – additional keyword-based parameters passed to EvaluationLoop.__init__(). Should not contain the keys dataset or evaluator.
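The effect of additional_filter_triples can be sketched with a small pure-Python example. This is an illustrative toy, not PyKEEN's implementation: the `filtered_rank_tail` helper and the hard-coded scores are assumptions made for the example. The idea of filtered ranking is that candidate tails known to form true triples (other than the triple under evaluation) are excluded from the competitors before the rank is computed.

```python
def filtered_rank_tail(scores: dict, t_true: int, known_tails: set) -> int:
    """Optimistic filtered rank of the true tail.

    scores      -- mapping: candidate tail id -> score for (h, r, t')
    known_tails -- tails t' != t_true for which (h, r, t') is a known
                   true triple; these are filtered out as competitors.
    """
    true_score = scores[t_true]
    return 1 + sum(
        1
        for t, s in scores.items()
        if t != t_true and t not in known_tails and s > true_score
    )

scores = {0: 0.9, 1: 0.7, 2: 0.4}
# Unfiltered: tail 2 is out-scored by tails 0 and 1 -> rank 3.
print(filtered_rank_tail(scores, 2, known_tails=set()))
# Filtered: if (h, r, 0) is itself a known true triple, tail 0 no
# longer counts as a competitor -> rank 2.
print(filtered_rank_tail(scores, 2, known_tails={0}))
```

Passing additional_filter_triples (e.g. the training triples) enlarges the set of known true triples used to build this filter, which is how the standard "filtered" ranking metrics are obtained.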