docs/source-pytorch/upgrade/sections/1_9_advanced.rst
.. list-table:: adv. user 1.9
   :widths: 40 40 20
   :header-rows: 1

   * - If
     - Then
     - Ref

   * - used the ``pl.lite`` module
     - switch to ``lightning_fabric``
     - `PR15953`_

   * - used Trainer's flag ``strategy='dp'``
     - use ``strategy='ddp'`` or DeepSpeed instead
     - `PR16748`_

   * - implemented ``LightningModule.training_epoch_end`` hooks
     - port your logic to the ``LightningModule.on_train_epoch_end`` hook
     - `PR16520`_

   * - implemented the ``LightningModule.validation_epoch_end`` hook
     - port your logic to the ``LightningModule.on_validation_epoch_end`` hook
     - `PR16520`_

   * - implemented ``LightningModule.test_epoch_end`` hooks
     - port your logic to the ``LightningModule.on_test_epoch_end`` hook
     - `PR16520`_

   * - used Trainer's flag ``multiple_trainloader_mode``
     - switch to ``CombinedLoader(..., mode=...)`` and set the mode directly now
     - `PR16800`_

   * - used Trainer's flag ``move_metrics_to_cpu``
     - rely on ``torchmetrics``
     - `PR16358`_

   * - used Trainer's flag ``track_grad_norm``
     - overwrite the ``on_before_optimizer_step`` hook and pass the argument directly, together with the ``LightningModule.log_grad_norm()`` hook
     - `PR16745`_

   * - used Trainer's flag ``replace_sampler_ddp``
     - use ``use_distributed_sampler``; the sampler gets created not only for the DDP strategies
     -

   * - used the ``on_tpu`` argument in the ``LightningModule.optimizer_step`` hook
     - switch to manual optimization
     - `PR16537`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used the ``using_lbfgs`` argument in the ``LightningModule.optimizer_step`` hook
     - switch to manual optimization
     - `PR16538`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used ``nvidia/apex`` in any form
     - use ``torch.amp`` instead
     - `PR16039`_ :doc:`Precision <../../common/precision>`

   * - used Trainer's flag ``using_native_amp``
     - use PyTorch native mixed precision (``torch.amp``)
     - `PR16039`_ :doc:`Precision <../../common/precision>`

   * - used Trainer's flag ``amp_backend``
     - use PyTorch native mixed precision (``torch.amp``)
     - `PR16039`_ :doc:`Precision <../../common/precision>`

   * - used Trainer's flag ``amp_level``
     - use PyTorch native mixed precision (``torch.amp``)
     - `PR16039`_ :doc:`Precision <../../common/precision>`

   * - used Trainer's attribute ``using_native_amp``
     - use PyTorch native mixed precision (``torch.amp``)
     - `PR16039`_ :doc:`Precision <../../common/precision>`

   * - used Trainer's attribute ``amp_backend``
     - use PyTorch native mixed precision (``torch.amp``)
     - `PR16039`_ :doc:`Precision <../../common/precision>`

   * - used Trainer's attribute ``amp_level``
     - use PyTorch native mixed precision (``torch.amp``)
     - `PR16039`_ :doc:`Precision <../../common/precision>`

   * - used the FairScale integration
     - consider using PyTorch's native FSDP implementation instead
     - `lightning-Fairscale`_

   * - used ``pl.overrides.fairscale.LightningShardedDataParallel``
     - use native FSDP instead
     - `PR16400`_ :doc:`FSDP <../../accelerators/gpu_expert>`

   * - used ``pl.plugins.precision.fully_sharded_native_amp.FullyShardedNativeMixedPrecisionPlugin``
     - use native FSDP instead
     - `PR16400`_ :doc:`FSDP <../../accelerators/gpu_expert>`

   * - used ``pl.plugins.precision.sharded_native_amp.ShardedNativeMixedPrecisionPlugin``
     - use native FSDP instead
     - `PR16400`_ :doc:`FSDP <../../accelerators/gpu_expert>`

   * - used ``pl.strategies.fully_sharded.DDPFullyShardedStrategy``
     - use native FSDP instead
     - `PR16400`_ :doc:`FSDP <../../accelerators/gpu_expert>`

   * - used ``pl.strategies.sharded.DDPShardedStrategy``
     - use native FSDP instead
     - `PR16400`_ :doc:`FSDP <../../accelerators/gpu_expert>`

   * - used ``pl.strategies.sharded_spawn.DDPSpawnShardedStrategy``
     - use native FSDP instead
     - `PR16400`_ :doc:`FSDP <../../accelerators/gpu_expert>`

   * - used the ``save_config_overwrite`` parameter in ``LightningCLI``
     - pass this option via the ``save_config_kwargs`` parameter
     - `PR14998`_

   * - used the ``save_config_multifile`` parameter in ``LightningCLI``
     - pass this option via the ``save_config_kwargs`` parameter
     - `PR14998`_

   * - customized loops via ``Loop.replace()``
     - implement your training loop with Fabric
     - `PR14998`_ `Fabric`_

   * - customized loops via ``Loop.run()``
     - implement your training loop with Fabric
     - `PR14998`_ `Fabric`_

   * - customized loops via ``Loop.connect()``
     - implement your training loop with Fabric
     - `PR14998`_ `Fabric`_

   * - used the Trainer's ``trainer.fit_loop`` property
     - implement your training loop with Fabric
     - `PR14998`_ `Fabric`_

   * - used the Trainer's ``trainer.validate_loop`` property
     - implement your training loop with Fabric
     - `PR14998`_ `Fabric`_

   * - used the Trainer's ``trainer.test_loop`` property
     - implement your training loop with Fabric
     - `PR14998`_ `Fabric`_

   * - used the Trainer's ``trainer.predict_loop`` property
     - implement your training loop with Fabric
     - `PR14998`_ `Fabric`_

   * - used ``Trainer.loop`` and fetching classes
     -
     -

   * - used the ``opt_idx`` argument in ``BaseFinetuning.finetune_function``
     - use manual optimization
     - `PR16539`_

   * - used the ``opt_idx`` argument in ``Callback.on_before_optimizer_step``
     - use manual optimization
     - `PR16539`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used ``optimizer_idx`` as an optional argument in ``LightningModule.training_step``
     - use manual optimization
     - `PR16539`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used the ``optimizer_idx`` argument in ``LightningModule.on_before_optimizer_step``
     - use manual optimization
     - `PR16539`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used the ``optimizer_idx`` argument in ``LightningModule.configure_gradient_clipping``
     - use manual optimization
     - `PR16539`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used the ``optimizer_idx`` argument in ``LightningModule.optimizer_step``
     - use manual optimization
     - `PR16539`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used the ``optimizer_idx`` argument in ``LightningModule.optimizer_zero_grad``
     - use manual optimization
     - `PR16539`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used the ``optimizer_idx`` argument in ``LightningModule.lr_scheduler_step``
     - use manual optimization
     - `PR16539`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - declared optimizer frequencies in the dictionary returned from ``LightningModule.configure_optimizers``
     - use manual optimization
     - `PR16539`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used the ``optimizer`` argument in ``LightningModule.backward``
     - use manual optimization
     - `PR16539`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used the ``optimizer_idx`` argument in ``LightningModule.backward``
     - use manual optimization
     - `PR16539`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used the ``optimizer_idx`` argument in ``PrecisionPlugin.optimizer_step``
     - use manual optimization
     - `PR16539`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used the ``optimizer_idx`` argument in ``PrecisionPlugin.backward``
     - use manual optimization
     - `PR16539`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used the ``optimizer_idx`` argument in ``Strategy.backward``
     - use manual optimization
     - `PR16539`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used the ``optimizer_idx`` argument in ``Strategy.optimizer_step``
     - use manual optimization
     - `PR16539`_ :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used Trainer's ``Trainer.optimizer_frequencies`` attribute
     - use manual optimization
     - :doc:`Manual Optimization <../../model/manual_optimization>`

   * - used the ``PL_INTER_BATCH_PARALLELISM`` environment flag
     -
     - `PR16355`_

   * - used the training integration with Horovod
     - install the standalone package
     - `lightning-Horovod`_

   * - used the training integration with ColossalAI
     - install the standalone package
     - `lightning-ColossalAI`_

   * - used the ``QuantizationAwareTraining`` callback
     - use PyTorch's quantization directly
     - `PR16750`_

   * - used the ``LightningModule.training_step_end`` hook
     - port your logic to the ``LightningModule.on_train_batch_end`` hook
     - `PR16791`_

   * - used the ``LightningModule.validation_step_end`` hook
     - port your logic to the ``LightningModule.on_validation_batch_end`` hook
     - `PR16791`_

   * - used the ``LightningModule.test_step_end`` hook
     - port your logic to the ``LightningModule.on_test_batch_end`` hook
     - `PR16791`_

   * - used ``pl.strategies.DDPSpawnStrategy``
     - switch to ``DDPStrategy(start_method='spawn')`` with the proper start method
     - `PR16809`_

   * - relied on the automatic ``training_step`` loss in the progress bar
     - use ``self.log("loss", ..., prog_bar=True)`` instead
     - `PR16192`_

   * - relied on the ``outputs`` argument from the ``on_predict_epoch_end`` hook
     - access the predictions via ``trainer.predict_loop.predictions``
     - `PR16655`_

   * - passed a dictionary to ``self.log()``
     - log each entry individually
     - `PR16389`_

.. _Fabric: https://lightning.ai/docs/fabric/
.. _lightning-Horovod: https://github.com/Lightning-AI/lightning-Horovod
.. _lightning-ColossalAI: https://lightning.ai/docs/pytorch/2.1.0/integrations/strategies/colossalai.html
.. _lightning-Fairscale: https://github.com/Lightning-Sandbox/lightning-Fairscale
.. _pr15953: https://github.com/Lightning-AI/pytorch-lightning/pull/15953
.. _pr16748: https://github.com/Lightning-AI/pytorch-lightning/pull/16748
.. _pr16520: https://github.com/Lightning-AI/pytorch-lightning/pull/16520
.. _pr16800: https://github.com/Lightning-AI/pytorch-lightning/pull/16800
.. _pr16358: https://github.com/Lightning-AI/pytorch-lightning/pull/16358
.. _pr16745: https://github.com/Lightning-AI/pytorch-lightning/pull/16745
.. _pr16537: https://github.com/Lightning-AI/pytorch-lightning/pull/16537
.. _pr16538: https://github.com/Lightning-AI/pytorch-lightning/pull/16538
.. _pr16039: https://github.com/Lightning-AI/pytorch-lightning/pull/16039
.. _pr16400: https://github.com/Lightning-AI/pytorch-lightning/pull/16400
.. _pr14998: https://github.com/Lightning-AI/pytorch-lightning/pull/14998
.. _pr16539: https://github.com/Lightning-AI/pytorch-lightning/pull/16539
.. _pr16355: https://github.com/Lightning-AI/pytorch-lightning/pull/16355
.. _pr16750: https://github.com/Lightning-AI/pytorch-lightning/pull/16750
.. _pr16791: https://github.com/Lightning-AI/pytorch-lightning/pull/16791
.. _pr16809: https://github.com/Lightning-AI/pytorch-lightning/pull/16809
.. _pr16192: https://github.com/Lightning-AI/pytorch-lightning/pull/16192
.. _pr16655: https://github.com/Lightning-AI/pytorch-lightning/pull/16655
.. _pr16389: https://github.com/Lightning-AI/pytorch-lightning/pull/16389
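The ``*_epoch_end`` to ``on_*_epoch_end`` migration changes who collects the step outputs: the new hooks receive no ``outputs`` argument, so the module has to cache results itself. A minimal plain-Python sketch of that caching pattern (the class is an illustrative stand-in, not a real ``LightningModule``):

```python
# Sketch of the epoch-end migration pattern. In 1.9, training_epoch_end(outputs)
# received every training_step result automatically; on_train_epoch_end takes no
# outputs, so the module caches them itself. Plain-Python stand-in class.
class EpochEndPattern:
    def __init__(self):
        self.training_step_outputs = []  # cache, cleared at epoch end

    def training_step(self, batch):
        loss = sum(batch) / len(batch)  # placeholder for a real loss
        self.training_step_outputs.append(loss)  # cache the step output
        return loss

    def on_train_epoch_end(self):
        # aggregate the cached outputs, then free them for the next epoch
        epoch_mean = sum(self.training_step_outputs) / len(self.training_step_outputs)
        self.training_step_outputs.clear()
        return epoch_mean
```

The same pattern applies to the validation and test hooks: cache in ``*_step``, aggregate and clear in ``on_*_epoch_end``.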
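For the ``multiple_trainloader_mode`` row, the mode now lives on ``CombinedLoader`` itself. Conceptually, the ``max_size_cycle`` mode iterates until the longest loader is exhausted while restarting the shorter ones; a hedged pure-Python sketch of that behavior (the ``max_size_cycle`` helper below is an illustration, not Lightning's actual class):

```python
from itertools import cycle

# Illustrative sketch of what CombinedLoader's "max_size_cycle" mode does:
# draw batches until the longest loader is exhausted, cycling shorter ones.
def max_size_cycle(loaders):
    longest = max(len(seq) for seq in loaders.values())
    iterators = {name: iter(cycle(seq)) for name, seq in loaders.items()}
    for _ in range(longest):
        # one combined batch: the next item from every loader
        yield {name: next(it) for name, it in iterators.items()}
```

With loaders ``{"a": [1, 2, 3], "b": [10, 20]}`` this yields three combined batches, with ``"b"`` wrapping around for the third.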
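Many of the rows above retire the ``optimizer_idx`` plumbing in favor of manual optimization: instead of Lightning calling ``training_step`` once per optimizer, you step each optimizer yourself inside a single ``training_step``. A plain-Python sketch of that control flow (the toy optimizer and module below are stand-ins, not the real torch/Lightning APIs):

```python
# Stand-in optimizer: just counts its step() calls.
class ToyOptimizer:
    def __init__(self):
        self.steps = 0

    def zero_grad(self):
        pass  # no-op in this sketch

    def step(self):
        self.steps += 1

# 1.9 style: training_step(batch, optimizer_idx) was invoked once per optimizer.
# 2.0 style sketched here: one call steps every optimizer explicitly.
class ManualOptimizationPattern:
    def __init__(self):
        self.automatic_optimization = False  # mirrors the LightningModule switch
        self.opt_g, self.opt_d = ToyOptimizer(), ToyOptimizer()

    def training_step(self, batch):
        for opt in (self.opt_g, self.opt_d):
            opt.zero_grad()
            # ... compute this optimizer's loss and call backward() here ...
            opt.step()
```

The explicit loop replaces every ``optimizer_idx`` argument listed above: the ordering, frequency, and gradient handling per optimizer all become ordinary code inside ``training_step``.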