Docs/core_sensors.md
Sensors are actors that retrieve data from their surroundings. They are crucial to creating a learning environment for driving agents.
This page summarizes everything necessary to start handling sensors. It introduces the types available and a step-by-step guide of their life cycle. The specifics for every sensor can be found in the sensors reference.
The class carla.Sensor defines a special type of actor able to measure and stream data. Despite their differences, all sensors are used in a similar way: find the blueprint, spawn the sensor attached to a parent actor, and call its listen() method to receive and manage the data.
As with every other actor, start by finding the blueprint and setting its specific attributes. This is essential when handling sensors: their attributes determine the results obtained. These are detailed in the sensors reference.
The following example sets a dashboard HD camera.
```py
# Find the blueprint of the sensor.
blueprint = world.get_blueprint_library().find('sensor.camera.rgb')
# Modify the attributes of the blueprint to set image resolution and field of view.
blueprint.set_attribute('image_size_x', '1920')
blueprint.set_attribute('image_size_y', '1080')
blueprint.set_attribute('fov', '110')
# Set the time in seconds between sensor captures.
blueprint.set_attribute('sensor_tick', '1.0')
```
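Note that `sensor_tick` expects a string in simulation seconds. If you prefer to think of the capture rate in frames per second, a small conversion helper keeps the two consistent (`tick_for_fps` is a hypothetical helper, not part of the CARLA API):

```py
def tick_for_fps(fps):
    """Convert a capture rate in frames per second into the string,
    in seconds, that blueprint.set_attribute('sensor_tick', ...) expects."""
    return str(1.0 / fps)

# A 20 Hz camera would be configured with (sketch):
# blueprint.set_attribute('sensor_tick', tick_for_fps(20))  # '0.05'
```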
When spawning a sensor, the arguments `attach_to` and `attachment_type` are crucial. Sensors should be attached to a parent actor, usually a vehicle, to follow it around and gather information. The attachment type determines how the sensor's position is updated with respect to that vehicle.
```py
transform = carla.Transform(carla.Location(x=0.8, z=1.7))
sensor = world.spawn_actor(blueprint, transform, attach_to=my_vehicle)
```
!!! Important
    When spawning with attachment, location must be relative to the parent actor.
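To make "relative to the parent actor" concrete, the sketch below computes where a sensor offset lands in world space for a parent rotated only around the vertical axis. This is plain Python for illustration; CARLA composes the full rotation (pitch and roll included) internally.

```py
import math

def world_location(parent_xyz, parent_yaw_deg, offset_xyz):
    """World-space position of a sensor attached with the given offset,
    for a parent rotated only by yaw (illustration only)."""
    px, py, pz = parent_xyz
    ox, oy, oz = offset_xyz
    yaw = math.radians(parent_yaw_deg)
    return (px + ox * math.cos(yaw) - oy * math.sin(yaw),
            py + ox * math.sin(yaw) + oy * math.cos(yaw),
            pz + oz)

# The dashboard camera above (x=0.8, z=1.7) on a vehicle at (10, 0, 0),
# facing along +x, ends up at roughly (10.8, 0.0, 1.7) in world space.
```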
Every sensor has a listen() method that registers a callback, invoked every time the sensor retrieves data.
The callback is usually a lambda function. It describes what the sensor should do when data is retrieved, and it must take the retrieved data as its argument.
```py
# do_something() will be called each time a new image is generated by the camera.
sensor.listen(lambda data: do_something(data))

...

# This collision sensor would print every time a collision is detected.
def callback(event):
    print('Collision with %s' % event.other_actor.type_id)

sensor02.listen(callback)
```
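Callbacks run on a background thread. A common pattern, sketched below with a hypothetical helper rather than anything in the CARLA API, is to push incoming data onto a queue and consume it from the main loop:

```py
import queue

def make_listener():
    """Return a queue plus a callback suitable for sensor.listen(),
    so sensor data can be consumed from the main thread."""
    data_queue = queue.Queue()
    return data_queue, data_queue.put

# Usage sketch (requires a running simulator):
# images, on_image = make_listener()
# camera.listen(on_image)
# image = images.get(timeout=2.0)  # block until the next frame arrives
```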
Most sensor data objects have a function to save the information to disk, so it can be used in other environments.
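A typical save pattern names each file after the frame number of the measurement. The `frame_path` helper below is hypothetical; `save_to_disk()` itself belongs to the sensor data classes such as carla.Image.

```py
def frame_path(frame, out_dir='output'):
    """Zero-padded PNG path keyed by the measurement's frame number
    (assumption: PNG output in a flat directory)."""
    return '%s/%06d.png' % (out_dir, frame)

# Inside a camera callback (sketch, needs a running simulator):
# sensor.listen(lambda image: image.save_to_disk(frame_path(image.frame)))
```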
Sensor data differs a lot between sensor types. Take a look at the sensors reference to get a detailed explanation. However, all of them are always tagged with some basic information.
| Sensor data attribute | Type | Description |
|---|---|---|
| `frame` | int | Frame number when the measurement took place. |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode. |
| `transform` | carla.Transform | World reference of the sensor at the time of the measurement. |
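Because every measurement carries the attributes above, a single callback can log this common header for any sensor. A minimal sketch, assuming only the attribute names from the table:

```py
def log_measurement(data):
    """Print the header fields that every sensor measurement carries:
    frame, timestamp, and the sensor's world transform."""
    print('frame %d @ %.3fs, sensor at %s'
          % (data.frame, data.timestamp, data.transform))

# Usage sketch (needs a running simulator):
# sensor.listen(log_measurement)
```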
!!! Important
    `is_listening` is a sensor attribute that enables/disables data listening at will.
    `sensor_tick` is a blueprint attribute that sets the simulation time between data received.
Cameras take a shot of the world from their point of view. For cameras that return carla.Image, the helper class carla.ColorConverter can be used to modify the image to represent different information.
| Sensor | Output | Overview |
|---|---|---|
| Depth | carla.Image | Renders the depth of the elements in the field of view in a gray-scale map. |
| RGB | carla.Image | Provides clear vision of the surroundings. Looks like a normal photo of the scene. |
| Optical Flow | carla.Image | Renders the motion of every pixel from the camera. |
| Semantic segmentation | carla.Image | Renders elements in the field of view with a specific color according to their tags. |
| Instance segmentation | carla.Image | Renders elements in the field of view with a specific color according to their tags and a unique object ID. |
| DVS | carla.DVSEventArray | Measures changes of brightness intensity asynchronously as an event stream. |
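As an example of what these images encode, the Depth camera packs a normalized depth value over 24 bits across the R, G and B channels of each pixel. Per the depth-camera documentation, decoding one pixel back to meters (far plane at 1000 m) looks like:

```py
def decode_depth(r, g, b):
    """Decode one pixel of a Depth-camera carla.Image into meters.
    The camera stores depth normalized over 24 bits across R, G, B."""
    normalized = (r + g * 256 + b * 256 ** 2) / (256 ** 3 - 1)
    return 1000.0 * normalized  # the camera's far plane is 1000 m
```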
Detectors retrieve data when the object they are attached to registers a specific event.
| Sensor | Output | Overview |
|---|---|---|
| Collision | carla.CollisionEvent | Retrieves collisions between its parent and other actors. |
| Lane invasion | carla.LaneInvasionEvent | Registers when its parent crosses a lane marking. |
| Obstacle | carla.ObstacleDetectionEvent | Detects possible obstacles ahead of its parent. |
Other sensors provide different functionalities such as navigation, measurement of physical properties, and 2D/3D point maps of the scene.
| Sensor | Output | Overview |
|---|---|---|
| GNSS | carla.GnssMeasurement | Retrieves the geolocation of the sensor. |
| IMU | carla.IMUMeasurement | Comprises an accelerometer, a gyroscope, and a compass. |
| LIDAR | carla.LidarMeasurement | A rotating LIDAR. Generates a 4D point cloud with coordinates and intensity per point to model the surroundings. |
| Radar | carla.RadarMeasurement | 2D point map modelling elements in sight and their movement with respect to the sensor. |
| Semantic LIDAR | carla.SemanticLidarMeasurement | A rotating LIDAR. Generates a 3D point cloud with extra information regarding instance and semantic segmentation. |
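The LIDAR measurement exposes a `raw_data` buffer, which (assuming the XYZI layout of recent CARLA versions) is a flat array of 32-bit floats, four per point. It can be unpacked with only the standard library:

```py
import struct

def parse_lidar(raw_data):
    """Unpack a carla.LidarMeasurement.raw_data buffer into a list of
    (x, y, z, intensity) tuples; each point is four 32-bit floats."""
    count = len(raw_data) // 4  # total number of floats in the buffer
    floats = struct.unpack('%df' % count, raw_data)
    return [floats[i:i + 4] for i in range(0, count, 4)]

# Usage sketch (needs a running simulator):
# points = parse_lidar(lidar_measurement.raw_data)
```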
That is a wrap on sensors and how they retrieve simulation data.
This concludes the introduction to CARLA, but there is still a lot to learn.