lr_schedulers_hpo_defaults

lr_schedulers_hpo_defaults: Mapping[type[LRScheduler], Mapping[str, Any]] = {
    torch.optim.lr_scheduler.CosineAnnealingLR: {
        'T_max': {'type': int, 'low': 10, 'high': 1000, 'step': 50},
    },
    torch.optim.lr_scheduler.CosineAnnealingWarmRestarts: {
        'T_0': {'type': int, 'low': 10, 'high': 200, 'step': 50},
    },
    torch.optim.lr_scheduler.CyclicLR: {
        'base_lr': {'type': float, 'low': 0.001, 'high': 0.1, 'scale': 'log'},
        'max_lr': {'type': float, 'low': 0.1, 'high': 0.3, 'scale': 'log'},
    },
    torch.optim.lr_scheduler.ExponentialLR: {
        'gamma': {'type': float, 'low': 0.8, 'high': 1.0, 'step': 0.025},
    },
    torch.optim.lr_scheduler.LambdaLR: {
        'lr_lambda': {'type': 'categorical', 'choices': [<function <lambda>>, <function <lambda>>]},
    },
    torch.optim.lr_scheduler.MultiStepLR: {
        'gamma': {'type': float, 'low': 0.1, 'high': 0.9, 'step': 0.1},
        'milestones': {'type': 'categorical', 'choices': [75, 130, 190, 240, 370]},
    },
    torch.optim.lr_scheduler.MultiplicativeLR: {
        'lr_lambda': {'type': 'categorical', 'choices': [<function <lambda>>, <function <lambda>>, <function <lambda>>]},
    },
    torch.optim.lr_scheduler.OneCycleLR: {
        'max_lr': {'type': float, 'low': 0.1, 'high': 0.3, 'scale': 'log'},
    },
    torch.optim.lr_scheduler.StepLR: {
        'gamma': {'type': float, 'low': 0.1, 'high': 0.9, 'step': 0.1},
        'step_size': {'type': int, 'low': 1, 'high': 50, 'step': 5},
    },
}
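Each entry can be used as-is or selectively overridden before launching a search. A minimal sketch of that pattern, assuming the mapping is importable from this module (the override shown is illustrative only):

    from torch.optim.lr_scheduler import StepLR

    # Default search space for StepLR, taken from the mapping above.
    step_lr_space = lr_schedulers_hpo_defaults[StepLR]

    # Widen the 'step_size' range while keeping the default 'gamma' spec.
    custom_space = {
        **step_lr_space,
        'step_size': {'type': int, 'low': 5, 'high': 100, 'step': 5},
    }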

The default search spaces used when optimizing the learning-rate schedulers' hyper-parameters. Keys are torch.optim.lr_scheduler.LRScheduler subclasses; each value maps a tunable constructor argument to its sampling specification: a numeric 'type' with 'low'/'high' bounds plus either a 'step' or a log 'scale', or 'type': 'categorical' with an explicit list of 'choices'.
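One plausible way to consume such a specification is to translate each entry into sampler calls. The sketch below uses Optuna as the backend, which is an assumption on our part (the source does not name the HPO framework), and suggest_scheduler_params is a hypothetical helper:

    import optuna

    def suggest_scheduler_params(trial: optuna.Trial, scheduler_cls) -> dict:
        """Sample one value per tunable parameter of `scheduler_cls`.

        NOTE: Optuna is an assumed backend. Categorical entries whose
        choices are functions (LambdaLR, MultiplicativeLR) would need
        special handling, e.g. suggesting an index into the choice list.
        """
        params = {}
        for name, spec in lr_schedulers_hpo_defaults[scheduler_cls].items():
            if spec['type'] is int:
                params[name] = trial.suggest_int(
                    name, spec['low'], spec['high'], step=spec['step']
                )
            elif spec['type'] is float:
                if spec.get('scale') == 'log':
                    params[name] = trial.suggest_float(
                        name, spec['low'], spec['high'], log=True
                    )
                else:
                    params[name] = trial.suggest_float(
                        name, spec['low'], spec['high'], step=spec['step']
                    )
            else:  # spec['type'] == 'categorical'
                params[name] = trial.suggest_categorical(name, spec['choices'])
        return params

For StepLR, for example, this would yield a dict such as {'gamma': 0.4, 'step_size': 21}, which can be passed directly to the scheduler constructor alongside the optimizer.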