com.unity.ml-agents/Documentation~/Migrating.md
The `gym-unity` package has been refactored into the `ml-agents-envs` package. Please update your imports accordingly:

```python
# Before
from gym_unity.unity_gym_env import UnityToGymWrapper
# After
from mlagents_envs.envs.unity_gym_env import UnityToGymWrapper
```
| Deprecated API | Suggested Replacement |
|---|---|
| `IActuator ActuatorComponent.CreateActuator()` | `IActuator[] ActuatorComponent.CreateActuators()` |
| `IActionReceiver.PackActions(in float[] destination)` | none |
| `Agent.CollectDiscreteActionMasks(DiscreteActionMasker actionMasker)` | `Agent.WriteDiscreteActionMask(IDiscreteActionMask actionMask)` |
| `Agent.Heuristic(float[] actionsOut)` | `Agent.Heuristic(in ActionBuffers actionsOut)` |
| `Agent.OnActionReceived(float[] vectorAction)` | `Agent.OnActionReceived(ActionBuffers actions)` |
| `Agent.GetAction()` | `Agent.GetStoredActionBuffers()` |
| `BrainParameters.SpaceType`, `VectorActionSize`, `VectorActionSpaceType`, and `NumActions` | `BrainParameters.ActionSpec` |
| `ObservationWriter.AddRange(IEnumerable<float> data, int writeOffset = 0)` | `ObservationWriter.AddList(IList<float> data, int writeOffset = 0)` |
| `SensorComponent.IsVisual()` and `IsVector()` | none |
| `VectorSensor.AddObservation(IEnumerable<float> observation)` | `VectorSensor.AddObservation(IList<float> observation)` |
| `SideChannelsManager` | `SideChannelManager` |
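Several of the replacements above converge on `ActionBuffers`. Below is a minimal sketch of an Agent migrated to the new signatures, assuming a hypothetical action layout of two continuous actions and one discrete branch:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine;

public class MyAgent : Agent // hypothetical agent: 2 continuous actions, 1 discrete branch
{
    public override void OnActionReceived(ActionBuffers actions)
    {
        // Continuous and discrete actions now arrive in separate, typed segments.
        float moveX = actions.ContinuousActions[0];
        float moveZ = actions.ContinuousActions[1];
        int jump = actions.DiscreteActions[0];
        // ... apply moveX, moveZ and jump to the agent ...
    }

    public override void Heuristic(in ActionBuffers actionsOut)
    {
        // Write into the provided buffers instead of returning a float[].
        var continuousOut = actionsOut.ContinuousActions;
        continuousOut[0] = Input.GetAxis("Horizontal");
        continuousOut[1] = Input.GetAxis("Vertical");
        var discreteOut = actionsOut.DiscreteActions;
        discreteOut[0] = Input.GetKey(KeyCode.Space) ? 1 : 0;
    }
}
```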
`IDiscreteActionMask.WriteMask()` was removed and replaced with `SetActionEnabled()`. Instead of passing an `IEnumerable` of indices to disable, you now call `SetActionEnabled` for each index you want to disable (or enable). For example, if you overrode `Agent.WriteDiscreteActionMask()` with something that looked like:

```csharp
public override void WriteDiscreteActionMask(IDiscreteActionMask actionMask)
{
    var branch = 2;
    var actionsToDisable = new[] { 1, 3 };
    actionMask.WriteMask(branch, actionsToDisable);
}
```

the equivalent code would now be:

```csharp
public override void WriteDiscreteActionMask(IDiscreteActionMask actionMask)
{
    var branch = 2;
    actionMask.SetActionEnabled(branch, 1, false);
    actionMask.SetActionEnabled(branch, 3, false);
}
```
- The `IActuator` interface now implements `IHeuristicProvider`. Please add the corresponding `Heuristic(in ActionBuffers)` method to your custom Actuator classes (a sketch follows the example below).
- The `ISensor.GetObservationShape()` method and the `ITypedSensor` and `IDimensionPropertiesSensor` interfaces were removed, and `GetObservationSpec()` was added. You can use `ObservationSpec.Vector()` or `ObservationSpec.Visual()` to generate `ObservationSpec`s that are equivalent to the previous shapes. For example, if your old `ISensor` looked like:

```csharp
public override int[] GetObservationShape()
{
    return new[] { m_Height, m_Width, m_NumChannels };
}
```

the equivalent code would now be:

```csharp
public override ObservationSpec GetObservationSpec()
{
    return ObservationSpec.Visual(m_Height, m_Width, m_NumChannels);
}
```
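Since `IActuator` now includes `IHeuristicProvider`, custom actuators need a `Heuristic` method as well. A minimal sketch, assuming a hypothetical actuator with a single continuous action; names other than the `IActuator` members are illustrative:

```csharp
using Unity.MLAgents.Actuators;

public class MyCustomActuator : IActuator // hypothetical single-continuous-action actuator
{
    public ActionSpec ActionSpec => ActionSpec.MakeContinuous(1);
    public string Name => "MyCustomActuator";

    public void OnActionReceived(ActionBuffers actionBuffers)
    {
        // ... apply actionBuffers.ContinuousActions[0] ...
    }

    // Newly required via IHeuristicProvider:
    public void Heuristic(in ActionBuffers actionBuffersOut)
    {
        var continuous = actionBuffersOut.ContinuousActions;
        continuous[0] = 0f; // e.g. a no-op default action
    }

    public void WriteDiscreteActionMask(IDiscreteActionMask actionMask) { }
    public void ResetData() { }
}
```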
- The `ISensor.GetCompressionType()` method and the `ISparseChannelSensor` interface were removed, and `GetCompressionSpec()` was added. You can use `CompressionSpec.Default()` or `CompressionSpec.Compressed()` to generate `CompressionSpec`s that are equivalent to the previous values. For example, if your old `ISensor` looked like:

```csharp
public virtual SensorCompressionType GetCompressionType()
{
    return SensorCompressionType.None;
}
```

the equivalent code would now be:

```csharp
public CompressionSpec GetCompressionSpec()
{
    return CompressionSpec.Default();
}
```
- `SensorComponent.GetObservationShape()` was removed.
- `SensorComponent.CreateSensor()` was replaced with `CreateSensors()`, which returns an `ISensor[]` (see the sketch after this list).
- The Match-3 integration utilities are now included in `com.unity.ml-agents`.
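A minimal sketch of the `CreateSensors()` change, assuming a hypothetical component that creates a single vector sensor:

```csharp
using Unity.MLAgents.Sensors;

public class MySensorComponent : SensorComponent // hypothetical component
{
    // Before: public override ISensor CreateSensor() => new VectorSensor(4, "MySensor");
    public override ISensor[] CreateSensors()
    {
        // CreateSensors() returns an array, even when the component creates a single sensor.
        return new ISensor[] { new VectorSensor(4, "MySensor") };
    }
}
```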
The `AbstractBoard` interface was changed:

- `AbstractBoard` no longer contains the `Rows`, `Columns`, `NumCellTypes`, and `NumSpecialTypes` fields.
- `public abstract BoardSize GetMaxBoardSize()` was added as an abstract method. `BoardSize` is a new struct that contains `Rows`, `Columns`, `NumCellTypes`, and `NumSpecialTypes` fields, with the same meanings as the old `AbstractBoard` fields.
- `public virtual BoardSize GetCurrentBoardSize()` is an optional method; by default it returns `GetMaxBoardSize()`. If you wish to use a single behavior to work with multiple board sizes, override `GetCurrentBoardSize()` to return the current `BoardSize`. The values returned by `GetCurrentBoardSize()` must be less than or equal to the corresponding values from `GetMaxBoardSize()`.
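A minimal sketch of the new methods, assuming a hypothetical `MyBoard` class (the board dimensions are placeholders, and the remaining `AbstractBoard` members are omitted):

```csharp
using Unity.MLAgents.Integrations.Match3;

public class MyBoard : AbstractBoard // hypothetical board
{
    public override BoardSize GetMaxBoardSize()
    {
        return new BoardSize { Rows = 9, Columns = 9, NumCellTypes = 6, NumSpecialTypes = 2 };
    }

    // Optional: only override this when the current board can be smaller than the
    // maximum; the default implementation returns GetMaxBoardSize().
    public override BoardSize GetCurrentBoardSize()
    {
        return new BoardSize { Rows = 8, Columns = 8, NumCellTypes = 6, NumSpecialTypes = 2 };
    }

    // GetCellType, GetSpecialType, IsMoveValid and MakeMove omitted for brevity.
}
```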
The sensor configuration has changed:

- `GridNumSide` -> `GridSize`
- `RotateToAgent` -> `RotateWithAgent`
- `ObserveMask` -> `ColliderMask`
- `DetectableObjects` -> `DetectableTags`
- The `DepthType` (`ChannelBase`/`ChannelHot`) option and `ChannelDepth` were removed. The default is now one-hot encoding of the detected tag. If you were using the original `GridSensor` without overriding any methods, switching to the new `GridSensor` will produce a similar effect for training, although the actual observations will be slightly different.

To create a `GridSensor` implementation with custom data:
- Derive from `GridSensorBase` instead of `GridSensor`.
- Besides overriding `GetObjectData()`, you will also need to consider overriding `GetCellObservationSize()`, `IsDataNormalized()` and `GetProcessCollidersMethod()`, depending on the data you collect. You will also need to override `GridSensorComponent.GetGridSensors()` and return your custom GridSensor.
- The `tagIndex` argument of `GetObjectData()` has changed from 1-indexed to 0-indexed, and its data type changed from `float` to `int`. The index of the first detectable tag will be 0 instead of 1. `normalizedDistance` was removed from the input.
- Write the observation data to the provided `dataBuffer` instead of creating and returning a new array.
- Indicate whether your data is normalized in `IsDataNormalized()`. Sensors with non-normalized data cannot use the PNG compression type.
- The sensor no longer post-processes the values from `GetObjectData()`; the values received from `GetObjectData()` will be the observation sent to the trainer.
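A hedged sketch of a custom grid sensor following these rules; the class name and the one-value-per-cell encoding are hypothetical, and the exact constructor and member signatures should be checked against your installed package version:

```csharp
using UnityEngine;
using Unity.MLAgents.Sensors;

public class MyGridSensor : GridSensorBase // hypothetical custom sensor
{
    public MyGridSensor(
        string name, Vector3 cellScale, Vector3Int gridSize,
        string[] detectableTags, SensorCompressionType compression)
        : base(name, cellScale, gridSize, detectableTags, compression) { }

    protected override int GetCellObservationSize()
    {
        return 1; // this sketch writes one float per cell
    }

    protected override bool IsDataNormalized()
    {
        return true; // non-normalized data cannot use PNG compression
    }

    protected override void GetObjectData(GameObject detectedObject, int tagIndex, float[] dataBuffer)
    {
        // tagIndex is now 0-indexed and typed as int; write into dataBuffer
        // instead of returning a new array. Hypothetical normalized encoding:
        dataBuffer[0] = (tagIndex + 1f) / 4f; // assumes at most 4 detectable tags
    }
}
```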
The way that Sentis processes LSTM (recurrent neural networks) has changed. As a result, models trained with previous versions of ML-Agents will not be usable at inference if they were trained with a `memory` setting in the `.yaml` config file. If you want to use a model that has a recurrent neural network in this release of ML-Agents, you need to train the model using the Python trainer from this release.

- Implement the `IHeuristicProvider` interface to have your actuator handle the generation of actions when an Agent is running in heuristic mode.
- `VectorSensor.AddObservation(IEnumerable<float>)` is deprecated. Use `VectorSensor.AddObservation(IList<float>)` instead.
- `ObservationWriter.AddRange()` is deprecated. Use `ObservationWriter.AddList()` instead.
- `ActuatorComponent.CreateActuator()` is deprecated. Please override `ActuatorComponent.CreateActuators()` instead. Since `ActuatorComponent.CreateActuator()` is abstract, you will still need to override it in your class until it is removed. It is only ever called if you don't override `ActuatorComponent.CreateActuators()`. You can suppress the warnings by surrounding the method with the following pragma:

```csharp
#pragma warning disable 672
public IActuator CreateActuator() { ... }
#pragma warning restore 672
```
- `Agent.CollectDiscreteActionMasks()` was deprecated and should be replaced with `Agent.WriteDiscreteActionMask()`.
- `Agent.Heuristic(float[])` was deprecated and should be replaced with `Agent.Heuristic(ActionBuffers)`.
- `Agent.OnActionReceived(float[])` was deprecated and should be replaced with `Agent.OnActionReceived(ActionBuffers)`.
- `Agent.GetAction()` was deprecated and should be replaced with `Agent.GetStoredActionBuffers()`.

The default implementations of these will continue to call the deprecated versions where appropriate. However, the deprecated versions may not be compatible with continuous and discrete actions on the same Agent.
- `BrainParameters.VectorActionSize` was deprecated; you can now set `BrainParameters.ActionSpec.NumContinuousActions` or `BrainParameters.ActionSpec.BranchSizes` instead.
- `BrainParameters.VectorActionSpaceType` was deprecated, since both continuous and discrete actions can now be used.
- `BrainParameters.NumActions()` was deprecated. Use `BrainParameters.ActionSpec.NumContinuousActions` and `BrainParameters.ActionSpec.NumDiscreteActions` instead.
- The `TrainerFactory` class was moved to the `trainers/trainer` folder.
- The `components` folder containing `bc` and `reward_signals` code was moved to the `trainers/tf` folder.
- Replace `from mlagents.trainers.trainer_util import TrainerFactory` with `from mlagents.trainers.trainer import TrainerFactory`.
- Replace `from mlagents.trainers.trainer_util import handle_existing_directories` with `from mlagents.trainers.directory_utils import validate_existing_directories`.
- Replace `mlagents.trainers.components` with `mlagents.trainers.tf.components` in your import statements.

Run `python -m mlagents.trainers.upgrade_config -h` to see the script usage. Note that you will have had to upgrade to/install the current version of ML-Agents before running the script. To update manually:
- If your config file has a `parameter_randomization` section, rename that section to `environment_parameters`.
- If your config file has a `curriculum` section, you will need to rewrite your curriculum with the new format.
- Training artifacts are now written to a `results/` directory instead of `summaries/` and `models/`.
- `max_step` in the `TerminalStep` and `TerminalSteps` objects was renamed `interrupted`.
- The `get_behavior_names()` and `get_behavior_specs()` methods were combined into the property `behavior_specs`, which contains a mapping from behavior names to behavior specs.
- `use_visual` and `allow_multiple_visual_obs` in the `UnityToGymWrapper` constructor were replaced by `allow_multiple_obs`, which allows one or more visual observations and vector observations to be used simultaneously.
- `--save-freq` has been removed from the CLI and is now configurable in the trainer configuration file.
- `--lesson` has been removed from the CLI. Lessons will resume when using `--resume`. To start at a different lesson, modify your Curriculum configuration.

Run `python -m mlagents.trainers.upgrade_config -h` to see the script usage. Note that you will have had to upgrade to/install the current version of ML-Agents before running the script. To do it manually, copy your `<BehaviorName>` sections from `trainer_config.yaml` into a separate trainer configuration file, under a `behaviors` section. The `default` section is no longer needed. This new file should be specific to your environment, and not contain configurations for multiple environments (unless they have the same Behavior Names).
- Your curriculum configurations should be moved into the trainer configuration file, under a `curriculum` section.
- Your parameter randomization configuration should be moved under `parameter_randomization` in the main trainer configuration.
- If you are using `UnityEnvironment` directly, replace `max_step` with `interrupted` in the `TerminalStep` and `TerminalSteps` objects.
- Replace `get_behavior_names()` and `get_behavior_specs()` in `UnityEnvironment` with `behavior_specs`.
- If you use the `UnityToGymWrapper`, remove `use_visual` and `allow_multiple_visual_obs` from the constructor and add `allow_multiple_obs = True` if the environment contains either both visual and vector observations or multiple visual observations.
- If you were setting `--save-freq` in the CLI, add a `checkpoint_interval` value in your trainer configuration, and set it equal to `save-freq * n_agents_in_scene`.
- The `MLAgents` C# namespace was renamed to `Unity.MLAgents`, and other nested namespaces were similarly renamed (#3843).
- The `--load` and `--train` command-line flags have been deprecated and replaced with `--resume` and `--inference`.
- Running with the same `--run-id` twice will now throw an error.
- The `play_against_current_self_ratio` self-play trainer hyperparameter has been renamed to `play_against_latest_model_ratio`. If you only use `mlagents-learn` for training, this should be a transparent change.
- The Agent methods `GiveModel`, `Done`, `InitializeAgent`, `AgentAction` and `AgentReset` have been removed.
- `Agent.Heuristic()` was changed to take a `float[]` as a parameter, instead of returning the array. This was done to prevent a common source of error where users would return arrays of the wrong size.
- Use `SideChannelManager` to register, unregister and access side channels.
- `EnvironmentParameters` replaces the default `FloatProperties`. You can access the `EnvironmentParameters` with `Academy.Instance.EnvironmentParameters` in C#. If you were previously creating a `UnityEnvironment` in Python and passing it a `FloatPropertiesChannel`, create an `EnvironmentParametersChannel` instead.
- `SideChannel.OnMessageReceived` is now a protected method (was public).
- To record statistics from C#, use `Academy.Instance.StatsRecorder.Add(key, value)` (#3660).
- `num_updates` and `train_interval` for SAC have been replaced with `steps_per_update`.
- The `UnityEnv` class from the `gym-unity` package was renamed `UnityToGymWrapper` and no longer creates the `UnityEnvironment`. Instead, the `UnityEnvironment` must be passed as input to the constructor of `UnityToGymWrapper`.
- `Agent.maxStep` was renamed to `Agent.MaxStep`. For a full list of changes, see the pull request (#3828).
- `WriteAdapter` was renamed to `ObservationWriter` (#3834).

To migrate:

- Replace `using MLAgents` with `using Unity.MLAgents`. Replace other nested namespaces such as `using MLAgents.Sensors` with `using Unity.MLAgents.Sensors`.
- Replace the `--load` flag with `--resume` when calling `mlagents-learn`, and don't use the `--train` flag, as training will happen by default. To run without training, use `--inference`.
- To overwrite the results of a previous run with the same `--run-id`, add the `--force` command-line flag.
- If your Agent implements `Heuristic()`, change the signature to `public override void Heuristic(float[] actionsOut)` and assign values to `actionsOut` instead of returning an array.
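For example, a minimal sketch of the updated `Heuristic` signature (the control scheme is hypothetical):

```csharp
using Unity.MLAgents;
using UnityEngine;

public class MyAgent : Agent // hypothetical agent with two continuous actions
{
    // Before: public override float[] Heuristic() { return new[] { 0f, 0f }; }
    public override void Heuristic(float[] actionsOut)
    {
        // Assign into the provided buffer instead of returning a new array;
        // this prevents returning an array of the wrong size.
        actionsOut[0] = Input.GetAxis("Horizontal");
        actionsOut[1] = Input.GetAxis("Vertical");
    }
}
```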
- If you implemented custom `SideChannels`, you must:
  - Replace `Academy.FloatProperties` with `Academy.Instance.EnvironmentParameters`.
  - Replace `Academy.RegisterSideChannel` and `Academy.UnregisterSideChannel`, which were removed, with `SideChannelManager.RegisterSideChannel` and `SideChannelManager.UnregisterSideChannel`.
- If you were using SAC, set `steps_per_update` to be roughly equal to the number of agents in your environment, times `num_updates`, divided by `train_interval`.
- Replace `UnityEnv` with `UnityToGymWrapper` in your code. The constructor no longer takes a file name as input but a fully constructed `UnityEnvironment` instead.
- The `Agent.CollectObservations()` virtual method now takes a `VectorSensor` as an argument. The `Agent.AddVectorObs()` methods were removed.
- `SetActionMask` was renamed to `SetMask`, which must now be called on the `DiscreteActionMasker` argument of the `CollectDiscreteActionMasks` virtual method.
- The masking API was consolidated into `DiscreteActionMasker`: `SetMask` takes two arguments, the branch index and the list of masked actions for that branch.
- The `Monitor` class has been moved to the Examples Project. (It was prone to errors during testing.)
- The `MLAgents.Sensors` namespace has been introduced. All sensor classes are part of the `MLAgents.Sensors` namespace.
- The `MLAgents.SideChannels` namespace has been introduced. All side channel classes are part of the `MLAgents.SideChannels` namespace.
- `RayPerceptionSensor.PerceiveStatic()` was changed to take an input class and write to an output class, and the method was renamed to `Perceive()`.
- `GetStepCount()` on the Agent class has been replaced with the property getter `StepCount`.
- The `--multi-gpu` option has been removed temporarily.
- `AgentInfo.actionMasks` has been renamed to `AgentInfo.discreteActionMasks`.
- `BrainParameters` and `SpaceType` have been removed from the public API.
- `BehaviorParameters` have been removed from the public API.
- `DecisionRequester` has been made internal (you can still use the `DecisionRequester` component from the inspector). `RepeatAction` was renamed `TakeActionsBetweenDecisions` for clarity.
- Several methods on the `Agent` class have been renamed. The original method names will be removed in a later release:
  - `InitializeAgent()` was renamed to `Initialize()`
  - `AgentAction()` was renamed to `OnActionReceived()`
  - `AgentReset()` was renamed to `OnEpisodeBegin()`
  - `Done()` was renamed to `EndEpisode()`
  - `GiveModel()` was renamed to `SetModel()`
- The `IFloatProperties` interface has been removed.
- `OnMessageReceived` now takes an `IncomingMessage` argument, and `QueueMessageToSend` takes an `OutgoingMessage` argument.
- In Python, `on_message_received` now takes an `IncomingMessage` argument, and `queue_message_to_send` takes an `OutgoingMessage` argument.

To migrate:

- Add `using MLAgents.Sensors;` in addition to `using MLAgents;` at the top of your Agent's script.
- Replace `CollectObservations()` with `CollectObservations(VectorSensor sensor)`. In addition, replace all calls to `AddVectorObs()` with `sensor.AddObservation()` or `sensor.AddOneHotObservation()` on the `VectorSensor` passed as an argument (see the sketch after this list).
- Replace your calls to `SetActionMask` on your Agent with `DiscreteActionMasker.SetActionMask` in `CollectDiscreteActionMasks`.
- If you call `RayPerceptionSensor.PerceiveStatic()` manually, add your inputs to a `RayPerceptionInput`. To get the previous float array output, iterate through `RayPerceptionOutput.rayOutputs` and call `RayPerceptionOutput.RayOutput.ToFloatArray()`.
- Replace `Agent.GetStepCount()` with `Agent.StepCount`.
- Rename `InitializeAgent()` to `Initialize()`, `AgentAction()` to `OnActionReceived()`, `AgentReset()` to `OnEpisodeBegin()`, `Done()` to `EndEpisode()`, and `GiveModel()` to `SetModel()`.
- Replace your `IFloatProperties` variables with `FloatPropertiesChannel` variables.
- If you implemented custom `SideChannels`, update the signatures of your methods, and add your data to the `OutgoingMessage` or read it from the `IncomingMessage`.
- The `UnitySDK` folder has been split into a Unity Package (`com.unity.ml-agents`) and an examples project (`Project`). Please follow the Installation Guide to get up and running with this new repo structure.
- Calling `Done()` on the Agent will now reset it immediately and call the `AgentReset` virtual method. (This is to simplify the previous logic in which the Agent had to wait for the next `EnvironmentStep` to reset.)
- The `AgentOnDone` virtual method on the Agent has been removed.
- The `Decision Period` and `On Demand Decision` checkbox have been removed from the Agent. On-demand decision is now the default (calling `RequestDecision` on the Agent manually).
- The `--num-runs` command-line option has been removed from `mlagents-learn`.
- The `agentParameters` field of the Agent has been removed. (It contained only `maxStep` information.)
- `maxStep` is now a public field on the Agent. (It was moved from `agentParameters`.)
- The `Info` field of the Agent has been made private. (It was only used internally and not meant to be modified outside of the Agent.)
- The `GetReward()` method on the Agent has been removed. (It was being confused with `GetCumulativeReward()`.)
- The `AgentAction` struct no longer contains a `value` field. (Value estimates were not set during inference.)
- The `GetValueEstimate()` method on the Agent has been removed.
- The `UpdateValueAction()` method on the Agent has been removed.
- The `RayPerception3D` and `RayPerception2D` classes were removed, and the `legacyHitFractionBehavior` argument was removed from `RayPerceptionSensor.PerceiveStatic()`.
- Follow the instructions on how to install the `com.unity.ml-agents` package into your project in the Installation Guide.
- If your Agent implemented `AgentOnDone` and did not have the `Reset On Done` checkbox checked in the inspector, you must call the code that was in `AgentOnDone` manually.
- Make sure to call `AddReward()` or `SetReward()` before calling `Done()`. Previously, the order didn't matter.
- If you were not using `On Demand Decision` for your Agent, you must add a `DecisionRequester` component to your Agent GameObject and set its `Decision Period` field to the old `Decision Period` of the Agent.
- Multiply `max_steps` and `summary_freq` in your `trainer_config.yaml` by the number of Agents in the scene. If you only use `mlagents-learn` for training, this should be a transparent change.
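A minimal sketch of the `CollectObservations` migration referenced above (the observed values are hypothetical, and the namespaces are the pre-rename `MLAgents` ones used at this stage of the migration):

```csharp
using MLAgents;
using MLAgents.Sensors;

public class MyAgent : Agent // hypothetical agent
{
    // Before:
    // public override void CollectObservations()
    // {
    //     AddVectorObs(transform.position);
    // }
    public override void CollectObservations(VectorSensor sensor)
    {
        sensor.AddObservation(transform.position);
        sensor.AddOneHotObservation(2, 5); // e.g. category index 2 out of 5
    }
}
```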
- `reset()` on the Low-Level Python API no longer takes a `train_mode` argument. To modify the performance/speed of the engine, you must use an `EngineConfigurationChannel`.
- `reset()` on the Low-Level Python API no longer takes a `config` argument. `UnityEnvironment` no longer has a `reset_parameters` field. To modify float properties in the environment, you must use a `FloatPropertiesChannel`. For more information, refer to the Low Level Python API documentation.
- `CustomResetParameters` are now removed.
- The Academy no longer has a `Training Configuration` nor an `Inference Configuration` field in the inspector. To modify the configuration from the Low-Level Python API, use an `EngineConfigurationChannel`. To modify it during training, use the new command line arguments `--width`, `--height`, `--quality-level`, `--time-scale` and `--target-frame-rate` in `mlagents-learn`.
- The Academy no longer has a `Default Reset Parameters` field in the inspector. The `Academy` class no longer has a `ResetParameters`. To access shared float properties with Python, use the new `FloatProperties` field on the Academy.
- `mlagents.envs` was renamed to `mlagents_envs`. The previous repo layout depended on PEP420, which caused problems with some of our tooling such as mypy and pylint.
- If you had a custom `Training Configuration` in the Academy inspector, you will need to pass your custom configuration at every training run using the new command line arguments `--width`, `--height`, `--quality-level`, `--time-scale` and `--target-frame-rate`.
- If you were using `--slow` in `mlagents-learn`, you will need to pass your old `Inference Configuration` of the Academy inspector with the new command line arguments `--width`, `--height`, `--quality-level`, `--time-scale` and `--target-frame-rate` instead.
- Any imports from `mlagents.envs` should be replaced with `mlagents_envs`.
- The `Use Heuristic` checkbox in `Behavior Parameters` has been replaced with a `Behavior Type` dropdown menu with the following options:
  - `Default` corresponds to the previous unchecked behavior, meaning that Agents will train if they connect to a Python trainer, otherwise they will perform inference.
  - `Heuristic Only` means the Agent will always use the `Heuristic()` method. This corresponds to having "Use Heuristic" selected in 0.11.0.
  - `Inference Only` means the Agent will always perform inference.
- A bug was fixed in `RayPerception3D.Perceive()` that was causing the `endOffset` to be used incorrectly. However, this may produce different behavior from previous versions if you use a non-zero `startOffset`. To reproduce the old behavior, you should increase the value of `endOffset` by `startOffset`. You can verify your raycasts are performing as expected in scene view using the debug rays.
- It was previously necessary to add `(# of rays) * (# of tags + 2)` to the `State Size` in `Behavior Parameters`, but this is no longer the case, so you should reduce the `State Size` by this amount. Making this change will require retraining your model, since the observations that `RayPerceptionSensorComponent3D` produces are different from the old behavior.
- If you see errors such as `The type or namespace 'Sentis' could not be found` or `The type or namespace 'Google' could not be found`, you will need to install the Sentis preview package.
- To control an Agent manually, implement the `Heuristic()` method in the Agent class and check the `Use Heuristic` checkbox in the `Behavior Parameters`.
- Remove the `Brain` ScriptableObjects from the `Assets` folder. Then, add a `Behavior Parameters` component to each Agent GameObject. You will then need to complete the fields on the new `Behavior Parameters` component with the `BrainParameters` of the old Brain.
- `UnitySDK/Assets/ML-Agents/Scripts/Communicator.cs` and its class `Communicator` have been renamed to `UnitySDK/Assets/ML-Agents/Scripts/ICommunicator.cs` and `ICommunicator`, respectively.
- The `SpaceType` enums `discrete` and `continuous` have been renamed to `Discrete` and `Continuous`.
- We removed the `Done` call as well as the capacity to set `Max Steps` on the Academy. Therefore, an `AcademyReset` will never be triggered from C# (only from Python). If you want to reset the simulation after a fixed number of steps, or when an event in the simulation occurs, we recommend looking at our multi-agent example environments (such as `FoodCollector`). In our examples, groups of Agents can be reset through an "Area" that can reset groups of Agents.
- `mlagents.envs.UnityEnvironment` was removed. If you are using the Python API, change `from mlagents_envs import UnityEnvironment` to `from mlagents_envs.environment import UnityEnvironment`.
- Reward signals (including Curiosity) are now defined in the `trainer_config.yaml`.
- Step counts when running multiple environments have changed: previously, each reported step corresponded to one step for all environments (i.e., `num_envs` steps).
- `gamma`: Define a new `extrinsic` reward signal and set its `gamma` to your new gamma.
- `use_curiosity`, `curiosity_strength`, `curiosity_enc_size`: Define a `curiosity` reward signal and set its `strength` to `curiosity_strength`, and its `encoding_size` to `curiosity_enc_size`. Give it the same `gamma` as your `extrinsic` signal to mimic previous behavior.
- When training with multiple environments, multiply your previous step counts by `num_envs` for an approximate comparison. You may need to change `max_steps` in your config as appropriate as well.
- The Python packages have been split into `ml-agents` and `ml-agents-envs`.
- The `--worker-id` option of `learn.py` has been removed; use `--base-port` instead if you'd like to run multiple instances of `learn.py`.
- If you intend to modify `ml-agents` or `ml-agents-envs`, please check Installing for Development in the Installation documentation.
- Remove the `ENABLE_TENSORFLOW` flag in your Unity Project settings.
- Brains are now Scriptable Objects instead of MonoBehaviours.
- You can no longer modify the type of a Brain. If you want to switch between `PlayerBrain` and `LearningBrain` for multiple agents, you will need to assign a new Brain to each agent separately. Note: you can pass the same Brain to multiple agents in a scene by leveraging Unity's prefab system, or locate all the agents in a scene using the search bar of the Hierarchy window with the word `Agent`.
- We replaced the Internal and External Brain with the Learning Brain. When you need to train a model, drag the Learning Brain into the Broadcast Hub inside the Academy and check the `Control` checkbox.
- We removed the `Broadcast` checkbox of the Brain. To use the broadcast functionality, drag the Brain into the Broadcast Hub.
- When training multiple Brains at the same time, each model is now stored in a separate model file rather than in the same file under different graph scopes.
- The Learning Brain graph scope, placeholder names, output names and custom placeholders can no longer be modified.
To migrate:

- Remove the `Brain` GameObjects in the scene. (Delete all of the Brain GameObjects under Academy in the scene.)
- Create new `Brain` ScriptableObjects using `Assets -> Create -> ML-Agents` for each type of Brain you plan to use, and put the created files under a folder called `Brains` within your project.
- Edit their `Brain Parameters` to be the same as the parameters used in the Brain GameObjects.
- Agents have a `Brain` field in the Inspector; you need to drag the appropriate Brain ScriptableObject into it.
- The Academy has a `Broadcast Hub` field in the inspector, which is a list of brains used in the scene. To train or control your Brain from the `mlagents-learn` Python script, you need to drag the relevant `LearningBrain` ScriptableObjects used in your scene into entries in this list.

Other changes:

- `unity-environment` has been renamed `UnitySDK`.
- The `python` folder has been renamed to `ml-agents`. It now contains two packages, `mlagents.env` and `mlagents.trainers`. `mlagents.env` can be used to interact directly with a Unity environment, while `mlagents.trainers` contains the classes for training agents.
- The supported Unity version has changed from `2017.1 or later` to `2017.4 or later`. 2017.4 is an LTS (Long Term Support) version that helps us maintain good quality and support. Earlier versions of Unity might still work, but you may encounter an error listed here.
- In order to run a training session, you can now use the command `mlagents-learn` instead of `python3 learn.py` after installing the `mlagents` packages. This change is documented here. For example, if we previously ran
```sh
python3 learn.py 3DBall --train
```

from the `python` subdirectory (which was changed to the `ml-agents` subdirectory in v0.5), we now run

```sh
mlagents-learn config/trainer_config.yaml --env=3DBall --train
```

from the root directory where we installed the ML-Agents Toolkit.
- Training hyperparameters are now provided via a trainer configuration file passed to `mlagents-learn`. For an example trainer configuration file, see `trainer_config.yaml`. An example of passing a trainer configuration to `mlagents-learn` is shown above.
- The environment executable is now passed through the `--env` option.
- `min_lesson_length` now specifies the minimum number of episodes in a lesson and affects reward thresholding.
- It is no longer necessary to set the `Max Steps` of the Academy to use curriculum learning.
- `using MLAgents;` needs to be added in all of the C# scripts that use ML-Agents.
- Run `pip3 install -e .` within your `ml-agents/python` folder to update your Python packages.

There are a large number of new features and improvements in the ML-Agents Toolkit v0.3 which change both the training process and the Unity API in ways that will cause incompatibilities with environments made using older versions. This page is designed to highlight those changes for users familiar with v0.1 or v0.2 in order to ensure a smooth transition.
- `ppo.py` and the `PPO.ipynb` Python notebook have been replaced with a single `learn.py` script as the launching point for training with ML-Agents. For more information on using `learn.py`, see here.
- Hyperparameters for training Brains are now stored in the `trainer_config.yaml` file. For more information on using this file, see here.
- Rewards are now assigned with `AddReward()` or `SetReward()`.
- Agents are marked as done with the `Done()` method.
- `CollectStates()` has been replaced by `CollectObservations()`, which no longer returns a list of floats. To collect observations, call `AddVectorObs()` within `CollectObservations()`. Note that you can call `AddVectorObs()` with floats, integers, lists and arrays of floats, `Vector3`s and `Quaternion`s.
- `AgentStep()` has been replaced by `AgentAction()`.
- `WaitTime()` has been removed.
- The `Frame Skip` field of the Academy is replaced by the Agent's `Decision Frequency` field, enabling the Agent to make decisions at different frequencies.
- Replace `state` with `vector_observation` and `observation` with `visual_observation`. In addition, you must remove the `epsilon` placeholder.

In order to more closely align with the terminology used in the Reinforcement Learning field, and to be more descriptive, we have changed the names of some of the concepts used in ML-Agents. The changes are highlighted in the table below.
| Old - v0.2 and earlier | New - v0.3 and later |
|---|---|
| State | Vector Observation |
| Observation | Visual Observation |
| Action | Vector Action |
| N/A | Text Observation |
| N/A | Text Action |