LCWAEvaluationLoop
- class LCWAEvaluationLoop(triples_factory: CoreTriplesFactory, evaluator: str | Evaluator | type[Evaluator] | None = None, evaluator_kwargs: Mapping[str, Any] | None = None, targets: Collection[Literal['head', 'relation', 'tail']] = ('head', 'tail'), mode: Literal['training', 'validation', 'testing'] | None = None, additional_filter_triples: Tensor | CoreTriplesFactory | Sequence[Tensor | CoreTriplesFactory] | None = None, **kwargs)
Bases: EvaluationLoop[Mapping[Literal['head', 'relation', 'tail'], Tensor]]
Evaluation loop using 1:n scoring.
For brevity, we only describe evaluation for tail prediction. Let \((h, r, t) \in \mathcal{T}_{eval}\) denote an evaluation triple. Then, we calculate scores for all triples \((h, r, t')\) with \(t' \in \mathcal{E}\), i.e., for replacing the true tail \(t\) by all entities.
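To make the 1:n scoring concrete, the following is a minimal, self-contained sketch. It uses a DistMult-style interaction over randomly initialized embeddings (all names and the interaction choice are illustrative assumptions, not PyKEEN internals): every candidate tail is scored with a single matrix product, and the unfiltered rank of the true tail is read off from the scores.

```python
import torch

# Hypothetical toy setup: random embeddings, purely for illustration.
num_entities, num_relations, dim = 100, 10, 32
entity_emb = torch.randn(num_entities, dim)
relation_emb = torch.randn(num_relations, dim)

# One evaluation triple (h, r, t), given as indices.
h, r, t = 3, 1, 42

# 1:n scoring: score (h, r, t') for every t' in E in one matrix product
# (DistMult-style interaction, chosen only for this sketch).
scores = (entity_emb[h] * relation_emb[r]) @ entity_emb.T  # shape: (num_entities,)

# Unfiltered (optimistic) rank of the true tail among all candidates.
rank = 1 + (scores > scores[t]).sum().item()
print(rank)
```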
Initialize the evaluation loop.
- Parameters:
triples_factory (CoreTriplesFactory) – the evaluation triples factory
evaluator (str | Evaluator | type[Evaluator] | None) – the evaluator, or a hint thereof
evaluator_kwargs (Mapping[str, Any] | None) – additional keyword-based parameters for instantiating the evaluator
targets (Collection[Literal['head', 'relation', 'tail']]) – the prediction targets.
mode (Literal['training', 'validation', 'testing'] | None) – the inductive mode, or None for transductive evaluation
additional_filter_triples (Tensor | CoreTriplesFactory | Sequence[Tensor | CoreTriplesFactory] | None) – additional filter triples to use for creating the filter
kwargs – additional keyword-based parameters passed to EvaluationLoop.__init__(). Should not contain the keys dataset or evaluator.
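As a usage sketch, the snippet below constructs an evaluation loop for a small benchmark dataset. It assumes that the trained model is forwarded through kwargs to EvaluationLoop.__init__() as the model keyword, that the loop exposes an evaluate() method, and that the import path shown is correct; treat those points as assumptions rather than a verbatim recipe.

```python
from pykeen.datasets import Nations
from pykeen.evaluation.evaluation_loop import LCWAEvaluationLoop  # assumed import path
from pykeen.models import TransE

dataset = Nations()
model = TransE(triples_factory=dataset.training)  # untrained here; train it before a real evaluation

loop = LCWAEvaluationLoop(
    triples_factory=dataset.validation,
    evaluator="rankbased",
    targets=("head", "tail"),
    # filter known training triples when computing filtered ranks
    additional_filter_triples=dataset.training.mapped_triples,
    model=model,  # assumption: forwarded to EvaluationLoop.__init__()
)
results = loop.evaluate()  # assumption: runs the loop and returns metric results
print(results.to_df())
```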
Methods Summary
- get_collator() – Get the collator to use for the data loader.
- process_batch(batch) – Process a single batch.
Methods Documentation