class LCWAEvaluationLoop(triples_factory, evaluator=None, evaluator_kwargs=None, targets=('head', 'tail'), mode=None, additional_filter_triples=None, **kwargs)[source]

Bases: EvaluationLoop[Mapping[Literal['head', 'relation', 'tail'], LongTensor]]

Evaluation loop using 1:n scoring.

For brevity, we only describe evaluation for tail prediction. Let \((h, r, t) \in \mathcal{T}_{eval}\) denote an evaluation triple. Then, we calculate scores for all triples \((h, r, t')\) with \(t' \in \mathcal{E}\), i.e., for replacing the true tail \(t\) by all entities.
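The 1:n scoring idea can be sketched without PyKEEN. In the toy code below, `score` is a stand-in for a real interaction function (it is not an actual KGE model), and entities are plain integer IDs; the point is only that one evaluation triple produces scores for *all* candidate tails at once:

```python
# Minimal sketch of 1:n tail scoring, independent of PyKEEN.
# `score` is a toy stand-in for a trained model's interaction function.
def score(h: int, r: int, t: int) -> float:
    # Illustrative only: prefers tails close to h + r.
    return -abs(h + r - t)


def score_all_tails(h: int, r: int, num_entities: int) -> list[float]:
    """Score (h, r, t') for every candidate tail t' in the entity set."""
    return [score(h, r, t_prime) for t_prime in range(num_entities)]


scores = score_all_tails(h=0, r=2, num_entities=5)
# Under this toy scoring, the best-ranked tail is the one equal to h + r.
best = max(range(5), key=scores.__getitem__)
```

A real evaluation loop then ranks the true tail `t` among these scores; the same scheme is applied symmetrically for head (and optionally relation) prediction.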

Initialize the evaluation loop.

Parameters:

  • triples_factory (CoreTriplesFactory) – the evaluation triples factory

  • evaluator (Union[str, Evaluator, Type[Evaluator], None]) – the evaluator, or a hint thereof

  • evaluator_kwargs (Optional[Mapping[str, Any]]) – additional keyword-based parameters for instantiating the evaluator

  • targets (Collection[Literal['head', 'relation', 'tail']]) – the prediction targets

  • mode (Optional[Literal['training', 'validation', 'testing']]) – the inductive mode, or None for transductive evaluation

  • additional_filter_triples (Union[LongTensor, CoreTriplesFactory, Sequence[Union[LongTensor, CoreTriplesFactory]], None]) – additional filter triples to use for creating the filter

  • kwargs – additional keyword-based parameters passed to EvaluationLoop.__init__(). Should not contain the keys dataset or evaluator.
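The role of `additional_filter_triples` is to extend the filter used for filtered rank-based evaluation: known true triples (e.g. from the training set) other than the evaluation triple itself are masked out before ranking. A minimal sketch of that masking, independent of PyKEEN:

```python
# Sketch of filtered rank computation: known true tails other than the
# evaluation triple's own tail are ignored when counting better-scored
# candidates. `scores` holds one score per candidate tail entity.
def filtered_rank(scores: list[float], true_tail: int, known_tails: set[int]) -> int:
    """Rank of the true tail after masking other known true tails."""
    true_score = scores[true_tail]
    rank = 1
    for t, s in enumerate(scores):
        if t == true_tail or t in known_tails:
            continue  # skip the target itself and filtered triples
        if s > true_score:
            rank += 1
    return rank


scores = [0.9, 0.8, 0.5, 0.7]
# Unfiltered, tail 2 ranks 4th; filtering out known tails {0, 1}
# (e.g. supplied via additional training triples) improves its rank.
unfiltered = filtered_rank(scores, true_tail=2, known_tails=set())
filtered = filtered_rank(scores, true_tail=2, known_tails={0, 1})
```

This is why passing the training triples factory (or extra triple tensors) as `additional_filter_triples` changes reported filtered metrics: more known true triples are excluded from the ranking.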

Methods Summary

get_collator() – Get the collator to use for the data loader.

process_batch(batch) – Process a single batch.

Methods Documentation

get_collator()

Get the collator to use for the data loader.

process_batch(batch)

Process a single batch.

Parameters:

  • batch (Mapping[Literal['head', 'relation', 'tail'], LongTensor]) – one batch of evaluation samples from the dataset

Return type: