LitModule
- class LitModule(dataset: str | Dataset | type[Dataset] | None = 'nations', dataset_kwargs: Mapping[str, Any] | None = None, mode: Literal['training', 'validation', 'testing'] | None = None, model: str | Model | type[Model] | None = 'distmult', model_kwargs: Mapping[str, Any] | None = None, batch_size: int = 32, learning_rate: float = 0.001, label_smoothing: float = 0.0, optimizer: str | Optimizer | type[Optimizer] | None = None, optimizer_kwargs: Mapping[str, Any] | None = None)[source]
Bases: LightningModule
A base module for training models with PyTorch Lightning.
Create the lightning module.
- Parameters:
dataset (str | Dataset | type[Dataset] | None) – the dataset, or a hint thereof
dataset_kwargs (Mapping[str, Any] | None) – additional keyword-based parameters passed to the dataset
mode (Literal['training', 'validation', 'testing'] | None) – the inductive mode; defaults to transductive training
model (str | Model | type[Model] | None) – the model, or a hint thereof
model_kwargs (Mapping[str, Any] | None) – additional keyword-based parameters passed to the model
batch_size (int) – the training batch size
learning_rate (float) – the learning rate
label_smoothing (float) – the label smoothing
optimizer (str | Optimizer | type[Optimizer] | None) – the optimizer, or a hint thereof
optimizer_kwargs (Mapping[str, Any] | None) – additional keyword-based parameters passed to the optimizer; should not contain lr or params
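For illustration, constructing the module from hints might look like the following sketch. The import path pykeen.contrib.lightning and the embedding_dim keyword are assumptions based on typical PyKEEN usage, not confirmed by this page; since LitModule is a base class, a concrete subclass may be required in practice.

    from pykeen.contrib.lightning import LitModule  # assumed import path

    # a minimal sketch: configure the dataset, model, and optimizer via hints
    module = LitModule(
        dataset="nations",
        model="distmult",
        model_kwargs={"embedding_dim": 64},  # assumed model kwarg
        batch_size=32,
        learning_rate=1e-3,
    )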
Methods Summary
- configure_optimizers() – Configure the optimizers.
- forward(hr_batch) – Perform the prediction or inference step by wrapping pykeen.models.ERModel.predict_t().
- on_before_zero_grad(optimizer) – Called after training_step() and before optimizer.zero_grad().
- train_dataloader() – Create the training data loader.
- training_step(batch, batch_idx) – Perform a training step.
- val_dataloader() – Create the validation data loader.
- validation_step(batch, batch_idx, *args, ...) – Perform a validation step.
Methods Documentation
- forward(hr_batch: Tensor) → Tensor [source]
Perform the prediction or inference step by wrapping pykeen.models.ERModel.predict_t().
- Parameters:
hr_batch (Tensor) – shape: (batch_size, 2), dtype: long. The indices of (head, relation) pairs.
- Returns:
shape: (batch_size, num_entities), dtype: float. For each (head, relation) pair, the scores for all possible tails.
- Return type:
Tensor

Note
In Lightning, forward defines the prediction/inference actions.
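For example, scoring all candidate tails for a single (head, relation) pair could look like this sketch (the entity and relation indices, and the module instance from the first sketch above, are illustrative):

    import torch

    # one (head, relation) pair as long indices; the values are hypothetical
    hr_batch = torch.as_tensor([[0, 3]], dtype=torch.long)
    scores = module(hr_batch)  # calls forward(); shape: (1, num_entities)
    best_tail = scores.argmax(dim=-1)  # index of the highest-scoring tail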
- on_before_zero_grad(optimizer: Optimizer) → None [source]
Called after training_step() and before optimizer.zero_grad().
Called in the training loop after taking an optimizer step and before zeroing grads. This is a good place to inspect weight information with the weights updated.
This is where it is called:

    for optimizer in optimizers:
        out = training_step(...)
        model.on_before_zero_grad(optimizer)  # <---- called here
        optimizer.zero_grad()
        backward()
- Parameters:
optimizer (Optimizer) – The optimizer for which grads should be zeroed.
- Return type:
None
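As a sketch, a subclass could use this hook to inspect the freshly updated weights, e.g. by logging their overall norm after each optimizer step (the subclass name and logging key are hypothetical; the import path is assumed as above):

    import torch
    from torch.optim import Optimizer
    from pykeen.contrib.lightning import LitModule  # assumed import path

    class NormLoggingLitModule(LitModule):  # hypothetical subclass
        def on_before_zero_grad(self, optimizer: Optimizer) -> None:
            # the weights have just been updated by optimizer.step()
            total_norm = torch.linalg.vector_norm(
                torch.cat([p.detach().flatten() for p in self.parameters()])
            )
            self.log("weight_norm", total_norm)  # LightningModule.log
            super().on_before_zero_grad(optimizer)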
- train_dataloader() → DataLoader [source]
Create the training data loader.
- Return type:
DataLoader
- val_dataloader() → DataLoader | Sequence[DataLoader] [source]
Create the validation data loader.
- Return type:
DataLoader | Sequence[DataLoader]
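Putting it together, the module can be passed directly to a Lightning trainer, which picks up train_dataloader() and val_dataloader() from the module itself, so no separate datamodule is needed. A minimal sketch, assuming the module instance from the first example (depending on the installed Lightning version, the import may be pytorch_lightning or lightning.pytorch):

    import pytorch_lightning as pl

    trainer = pl.Trainer(max_epochs=5)  # settings are illustrative
    trainer.fit(module)  # uses the module's own train/val dataloaders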