LRScheduler

class LRScheduler(optimizer: Optimizer, last_epoch=-1, verbose='deprecated')[source]

Bases: object

Adjusts the learning rate during optimization.
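LRScheduler itself is a base class; in practice a concrete subclass such as StepLR is attached to an optimizer and advanced with step(). A minimal sketch (the model and hyperparameters are arbitrary):

```python
import torch
from torch.optim.lr_scheduler import StepLR

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Halve the learning rate every 2 calls to scheduler.step()
scheduler = StepLR(optimizer, step_size=2, gamma=0.5)

for epoch in range(4):
    # ... one epoch of training would go here ...
    optimizer.step()
    scheduler.step()

print(scheduler.get_last_lr())  # [0.025] after halvings at epochs 2 and 4
```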

Methods Summary

get_last_lr()

Return the last learning rate computed by the current scheduler.

get_lr()

Compute the learning rate using the chainable form of the scheduler.

load_state_dict(state_dict)

Load the scheduler's state.

print_lr(is_verbose, group, lr[, epoch])

Display the current learning rate.

state_dict()

Return the state of the scheduler as a dict.

step([epoch])

Perform a step.

Methods Documentation

Parameters:

optimizer (Optimizer) – wrapped optimizer whose parameter-group learning rates are adjusted.

last_epoch (int) – the index of the last epoch. Default: -1.

verbose – deprecated.

get_last_lr() → List[float][source]

Return the last learning rate computed by the current scheduler.

Return type:

List[float]
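The returned list has one entry per parameter group of the wrapped optimizer. A small sketch with two groups (the parameters and rates are arbitrary):

```python
import torch
from torch.optim.lr_scheduler import ExponentialLR

# Two parameter groups with different base learning rates
opt = torch.optim.SGD(
    [
        {"params": [torch.nn.Parameter(torch.zeros(1))], "lr": 0.1},
        {"params": [torch.nn.Parameter(torch.zeros(1))], "lr": 0.01},
    ]
)
sched = ExponentialLR(opt, gamma=0.9)

opt.step()
sched.step()
print(sched.get_last_lr())  # one entry per parameter group, each scaled by 0.9
```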

get_lr() → List[float][source]

Compute the learning rate using the chainable form of the scheduler.

Return type:

List[float]
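get_lr() is the hook that subclasses override; in the chainable form, each new rate is derived from the optimizer's current rate rather than the base rate. A sketch of a hypothetical subclass (HalvingLR is not part of the library, and this assumes a PyTorch version where LRScheduler is public; older releases expose it as _LRScheduler):

```python
import torch
from torch.optim.lr_scheduler import LRScheduler

class HalvingLR(LRScheduler):
    """Hypothetical scheduler: halve every group's lr at each step."""

    def get_lr(self):
        # Chainable form: read the optimizer's *current* lr and transform it,
        # instead of recomputing from the initial lr.
        if self.last_epoch == 0:
            return [g["lr"] for g in self.optimizer.param_groups]
        return [g["lr"] * 0.5 for g in self.optimizer.param_groups]

opt = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=1.0)
sched = HalvingLR(opt)
opt.step()
sched.step()
opt.step()
sched.step()
print(sched.get_last_lr())  # [0.25] after two halvings
```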

load_state_dict(state_dict: Dict[str, Any])[source]

Load the scheduler’s state.

Parameters:

state_dict (Dict[str, Any]) – scheduler state. Should be an object returned from a call to state_dict().

print_lr(is_verbose: bool, group: Dict[str, Any], lr: float, epoch: int | None = None)[source]

Display the current learning rate.

Deprecated since version 2.4: print_lr() is deprecated. Please use get_last_lr() to access the learning rate.

Parameters:

is_verbose (bool)

group (Dict[str, Any])

lr (float)

epoch (int | None)

state_dict()[source]

Return the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer.
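Together with load_state_dict(), this supports checkpointing: save the dict, rebuild the scheduler with the same constructor arguments, then restore its progress. A sketch:

```python
import torch
from torch.optim.lr_scheduler import StepLR

opt = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)
sched = StepLR(opt, step_size=1, gamma=0.1)
opt.step()
sched.step()

# A plain dict; holds e.g. last_epoch, but never the optimizer itself.
saved = sched.state_dict()

# Later: recreate scheduler with identical arguments, then restore.
opt2 = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)
sched2 = StepLR(opt2, step_size=1, gamma=0.1)
sched2.load_state_dict(saved)

print(sched2.last_epoch == sched.last_epoch)  # True
```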

step(epoch: int | None = None)[source]

Perform a step, updating the learning rate of every parameter group in the wrapped optimizer.

Parameters:

epoch (int | None)
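In a typical loop, step() is called once per epoch, after optimizer.step(); calling them in the other order skips the first value of the schedule and triggers a warning. Passing an explicit epoch argument is deprecated in recent versions. A sketch (model, data, and hyperparameters are arbitrary):

```python
import torch
from torch.optim.lr_scheduler import ExponentialLR

model = torch.nn.Linear(2, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sched = ExponentialLR(opt, gamma=0.5)

batches = [torch.randn(3, 2) for _ in range(2)]
for epoch in range(3):
    for x in batches:
        opt.zero_grad()
        model(x).sum().backward()
        opt.step()      # update the parameters first ...
    sched.step()        # ... then advance the schedule once per epoch

print(sched.get_last_lr())  # [0.0125], i.e. 0.1 * 0.5 ** 3
```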