.. _tune-env-vars:

Environment variables used by Ray Tune
--------------------------------------

Some of Ray Tune's behavior can be configured using environment variables. These are the environment variables Ray Tune currently considers:
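These variables are read when the tuning run starts, so they must be set beforehand, either in the shell (e.g. ``export TUNE_GLOBAL_CHECKPOINT_S=60``) or from Python before the run is triggered. A minimal sketch (the specific variables and values are only examples):

```python
import os

# Environment variables must be set before the tuning run starts,
# i.e. before the Tuner is created and fitted.
os.environ["TUNE_DISABLE_STRICT_METRIC_CHECKING"] = "1"  # relax metric checks
os.environ["TUNE_GLOBAL_CHECKPOINT_S"] = "60"  # snapshot experiment state every 60s
```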
* **TUNE_DISABLE_AUTO_CALLBACK_LOGGERS**: Ray Tune automatically adds a CSV and JSON logger callback if they haven't been passed. Setting this environment variable to 1 disables this automatic creation. Please note that this will most likely affect analyzing your results after the tuning run.
* **TUNE_DISABLE_AUTO_INIT**: Disable automatically calling ``ray.init()`` if not attached to a Ray session.
* **TUNE_DISABLE_DATED_SUBDIR**: Ray Tune automatically adds a date string to the experiment directory. Setting this environment variable to 1 disables adding these date strings.
* **TUNE_DISABLE_STRICT_METRIC_CHECKING**: When you report metrics to Tune via ``tune.report()`` and passed a metric parameter to ``Tuner()``, a scheduler, or a search algorithm, Tune will error if the metric was not reported in the result. Setting this environment variable to 1 will disable this check.
* **TUNE_DISABLE_SIGINT_HANDLER**: Ray Tune catches SIGINT signals (e.g. sent by Ctrl+C) to gracefully shut down and do a final checkpoint. Setting this environment variable to 1 will disable signal handling and stop execution right away. Defaults to 0.
* **TUNE_FORCE_TRIAL_CLEANUP_S**: By default, Ray Tune will gracefully terminate trials, letting them finish the current training step and any user-defined cleanup. Setting this variable to a non-zero, positive integer will cause trials to be forcefully terminated after a grace period of that many seconds. Defaults to 600 (seconds).
* **TUNE_FUNCTION_THREAD_TIMEOUT_S**: Time in seconds the function API waits for threads to finish after instructing them to complete. Defaults to 2.
* **TUNE_GLOBAL_CHECKPOINT_S**: Time interval between snapshots of the experiment state. Defaults to ``'auto'``. ``'auto'`` measures the time it takes to snapshot the experiment state and adjusts the period so that ~5% of the driver's time is spent on snapshotting. You should set this to a fixed value (ex: ``TUNE_GLOBAL_CHECKPOINT_S=60``) to snapshot your experiment state every X seconds.
* **TUNE_MAX_PENDING_TRIALS_PG**: Maximum number of pending trials when placement groups are used. Defaults to ``auto``, which will be updated to ``max(200, cluster_cpus * 1.1)`` for random/grid search and 1 for any other search algorithms.
* **TUNE_PLACEMENT_GROUP_RECON_INTERVAL**: How often to reconcile placement groups. Reconciliation is needed to make sure that e.g. removed trials also free up their placement group resources. Defaults to 5 (seconds).
* **TUNE_PRINT_ALL_TRIAL_ERRORS**: If 1, will print all trial errors as they come up. Otherwise, errors will only be saved as text files to the trial directory and not printed. Defaults to 1.
* **TUNE_RESULT_BUFFER_LENGTH**: Ray Tune can buffer results from trainables before they are passed to the driver. Enabling this might delay scheduling decisions, as trainables are speculatively continued. Setting this to 1 disables result buffering. Cannot be used with ``checkpoint_at_end``. Defaults to disabled.
* **TUNE_RESULT_DELIM**: Delimiter used for nested entries in :class:`ExperimentAnalysis <ray.tune.ExperimentAnalysis>` dataframes. Defaults to ``.`` (but will be changed to ``/`` in future versions of Ray).
* **TUNE_RESULT_BUFFER_MAX_TIME_S**: Ray Tune buffers results up to ``number_of_trial/10`` seconds, but never longer than this value. Defaults to 100 (seconds).
* **TUNE_WARN_INSUFFICENT_RESOURCE_THRESHOLD_S**: Threshold for throwing a warning if no active trials are in ``RUNNING`` state for this amount of seconds. If the Ray Tune job is stuck in this state (most likely due to insufficient resources), the warning message is printed again every time this interval elapses. Defaults to 60 (seconds).
* **TUNE_WARN_INSUFFICENT_RESOURCE_THRESHOLD_S_AUTOSCALER**: Threshold for throwing a warning, when the autoscaler is enabled, if no active trials are in ``RUNNING`` state for this amount of seconds. If the Ray Tune job is stuck in this state (most likely due to insufficient resources), the warning message is printed again every time this interval elapses. Defaults to 60 (seconds).
* **TUNE_RESTORE_RETRY_NUM**: The number of retries that are done before a particular trial's restore is determined unsuccessful. After that, the trial is not restored to its previous checkpoint but rather from scratch. Defaults to 0. While this retry counter is taking effect, the per-trial failure number will not be incremented, which is compared against ``max_failures``.
* **TUNE_ONLY_STORE_CHECKPOINT_SCORE_ATTRIBUTE**: If 1, only the metric defined by ``checkpoint_score_attribute`` will be stored with each ``Checkpoint``. As a result, ``Result.best_checkpoints`` will contain only this metric, omitting others that would normally be included. This can significantly reduce memory usage, especially when many checkpoints are stored or when metrics are large. Defaults to 0 (i.e., all metrics are stored).
* **RAY_AIR_NEW_OUTPUT**: If 0, this disables the `experimental new console output <https://github.com/ray-project/ray/issues/36949>`_.

There are some environment variables that are mostly relevant for integrated libraries:
* **WANDB_API_KEY**: Weights and Biases API key. You can also use ``wandb login`` instead.
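For instance, the Weights and Biases key can be supplied through the environment before the run begins. A minimal sketch; the value below is a placeholder, not a real key:

```python
import os

# Placeholder only; substitute your actual API key,
# or authenticate once with `wandb login` instead.
os.environ.setdefault("WANDB_API_KEY", "<your-wandb-api-key>")
```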