
Serialise Detections to a JSON File



This cookbook introduces the sv.JSONSink tool, designed to write object detection data captured from video files or streams to a JSON file.

Click the Open in Colab button to run the cookbook on Google Colab.

python
!pip install -q inference requests tqdm supervision
python
import json
from typing import List
from collections import defaultdict

import numpy as np
import pandas as pd

import supervision as sv
from supervision.assets import download_assets, VideoAssets
from inference import InferencePipeline
from inference.core.interfaces.camera.entities import VideoFrame

The parameters defined below are:

  • SOURCE_VIDEO_PATH - the path to the input video
  • CONFIDENCE_THRESHOLD - do not include detections below this confidence level
  • IOU_THRESHOLD - discard detections that overlap with others by more than this IOU ratio
  • FILE_NAME - write the json output to this file
  • INFERENCE_MODEL - model id. This cookbook uses a model alias, but it can also be a fine-tuned model or a model from the Universe.
python
SOURCE_VIDEO_PATH = download_assets(VideoAssets.PEOPLE_WALKING)
CONFIDENCE_THRESHOLD = 0.3
IOU_THRESHOLD = 0.7
FILE_NAME = "detections.json"
INFERENCE_MODEL = "yolov8n-640"

Executing download_assets(VideoAssets.PEOPLE_WALKING) above downloads a video file and saves it at SOURCE_VIDEO_PATH. Keep in mind that the video preview below works only in the web version of the cookbook, not in Google Colab.

<video controls width="1280"> <source src="https://media.roboflow.com/supervision/video-examples/people-walking.mp4" type="video/mp4"> </video>

## Read single frame from video

The get_video_frames_generator enables us to easily iterate over video frames. Let's create a video generator for our sample input file and display its first frame on the screen.

python
generator = sv.get_video_frames_generator(SOURCE_VIDEO_PATH)
frame = next(generator)
sv.plot_image(frame, (12, 12))

We can also use VideoInfo.from_video_path to learn basic information about our video, such as duration, resolution, or FPS.

python
sv.VideoInfo.from_video_path(SOURCE_VIDEO_PATH)

Initialize ByteTrack

ByteTrack is a multi-object tracking algorithm used by Supervision to track and link detected objects across multiple frames, providing consistent IDs for each object.

python
byte_track = sv.ByteTrack(minimum_consecutive_frames=3)
byte_track.reset()
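
ByteTrack itself is considerably more sophisticated (it also uses low-confidence detections and Kalman-filter motion prediction), but the core idea of linking boxes across frames by overlap can be sketched with a toy IoU-based matcher. Everything below is hypothetical illustration, not Supervision's API: a box that overlaps a previous-frame box above a threshold inherits its ID, otherwise it gets a fresh one.

```python
import numpy as np

def iou(box_a, box_b):
    # Boxes as [x_min, y_min, x_max, y_max]
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_ids(prev_boxes, prev_ids, new_boxes, iou_threshold=0.3):
    # Greedily reuse the previous frame's ID for the best-overlapping box;
    # unmatched boxes receive a brand-new ID.
    next_id = max(prev_ids, default=-1) + 1
    new_ids = []
    for box in new_boxes:
        scores = [iou(box, p) for p in prev_boxes]
        best = int(np.argmax(scores)) if scores else -1
        if best >= 0 and scores[best] >= iou_threshold:
            new_ids.append(prev_ids[best])
        else:
            new_ids.append(next_id)
            next_id += 1
    return new_ids
```

A box that moved only slightly keeps its ID, while a box appearing far from any previous detection is treated as a new object.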

Initialize sv.JSONSink

To save detections to a JSON file, open an sv.JSONSink and then pass each sv.Detections object produced by inference to it.

Note that empty detections will be skipped.

python
json_sink = sv.JSONSink(FILE_NAME)
json_sink.open()

Process video and save detections to json file

The InferencePipeline interface is made for streaming and is likely the best route for real-time use cases. It is an asynchronous interface that can consume many different video sources, including local devices (such as webcams), RTSP video streams, and video files. With this interface, you define the source of a video stream and the sinks.

All the operations we plan to perform for each frame of our video - detection, filtering, tracking, and writing to JSON - are encapsulated in a function named callback.

python
def callback(predictions: dict, frame: VideoFrame) -> None:
    detections = sv.Detections.from_inference(predictions)

    # Only keep person detections (class_id 0)
    detections = detections[detections.class_id == 0]
    detections.data["class_name"] = np.array(["person"] * len(detections))

    # Assign consistent tracker IDs across frames
    detections = byte_track.update_with_detections(detections)
    json_sink.append(detections, custom_data={'frame_number': frame.frame_id})
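
The expression detections[detections.class_id == 0] is boolean-mask indexing: sv.Detections supports NumPy-style filtering across all of its parallel arrays at once. The idea can be illustrated with plain NumPy arrays standing in for the detection fields (the values below are made up):

```python
import numpy as np

# Hypothetical detections as parallel arrays, mirroring sv.Detections fields
class_id = np.array([0, 2, 0, 0])
confidence = np.array([0.9, 0.8, 0.2, 0.6])
xyxy = np.array(
    [[0, 0, 5, 5], [1, 1, 6, 6], [2, 2, 7, 7], [3, 3, 8, 8]],
    dtype=np.float32,
)

# Keep only class 0 ("person") above a confidence threshold -
# the same idea as detections[detections.class_id == 0]
keep = (class_id == 0) & (confidence >= 0.3)
xyxy_kept = xyxy[keep]
```

One mask applied to every array keeps the boxes, classes, and confidences aligned after filtering.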
python
pipeline = InferencePipeline.init(
    model_id=INFERENCE_MODEL,
    video_reference=SOURCE_VIDEO_PATH,
    on_prediction=callback,
    iou_threshold=IOU_THRESHOLD,
    confidence=CONFIDENCE_THRESHOLD,
)
python
pipeline.start()
pipeline.join()
python
json_sink.write_and_close()
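
The resulting file is a JSON array with one object per detection. As a rough sketch, a single row looks like the following (field names are inferred from the parsing code later in this cookbook; the values are invented):

```python
import json

# Illustrative only: one row as this cookbook's sink would write it,
# with the frame_number field coming from custom_data
row = {
    "x_min": 104.0, "y_min": 233.0, "x_max": 197.0, "y_max": 351.0,
    "class_id": 0, "confidence": 0.82, "tracker_id": 5,
    "class_name": "person", "frame_number": 12,
}
print(json.dumps([row], indent=2))
```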

Visualize the detections JSON data with Pandas

Let's take a look at the resulting data using Pandas.

The file is also created in your current directory as detections.json.

python
df = pd.read_json(FILE_NAME)
df
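
Once the data is in a DataFrame, standard Pandas operations become available. For example, counting detections per frame or unique tracked people can be done with groupby and nunique; the snippet below uses a small hypothetical sample with the same column names rather than the real file:

```python
import pandas as pd

# Hypothetical sample mirroring a few of the JSON columns
df = pd.DataFrame({
    "frame_number": [1, 1, 2, 2, 2],
    "tracker_id": [5, 6, 5, 6, 7],
    "confidence": [0.90, 0.70, 0.88, 0.72, 0.41],
})

# Detections per frame and distinct tracked people overall
per_frame = df.groupby("frame_number").size()
unique_people = df["tracker_id"].nunique()
```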

Convert JSON data to sv.Detections

python
def json_to_detections(json_file: str) -> List[sv.Detections]:
    rows_by_frame_number = defaultdict(list)
    with open(json_file, "r") as f:
        data = json.load(f)
    for row in data:
        frame_number = int(row["frame_number"])
        rows_by_frame_number[frame_number].append(row)

    detections_list = []
    for frame_number, rows in rows_by_frame_number.items():
        xyxy = []
        class_id = []
        confidence = []
        tracker_id = []
        custom_data = defaultdict(list)

        for row in rows:
            xyxy.append([row[key] for key in ["x_min", "y_min", "x_max", "y_max"]])
            class_id.append(row["class_id"])
            confidence.append(row["confidence"])
            tracker_id.append(row["tracker_id"])

            for custom_key in row.keys():
                if custom_key in ["x_min", "y_min", "x_max", "y_max", "class_id", "confidence", "tracker_id"]:
                    continue
                custom_data[custom_key].append(row[custom_key])

        if all([val == "" for val in class_id]):
            class_id = None
        if all([val == "" for val in confidence]):
            confidence = None
        if all([val == "" for val in tracker_id]):
            tracker_id = None

        detections_list.append(
            sv.Detections(
                xyxy=np.array(xyxy, dtype=np.float32),
                class_id=None if class_id is None else np.array(class_id, dtype=int),
                confidence=None if confidence is None else np.array(confidence, dtype=np.float32),
                tracker_id=None if tracker_id is None else np.array(tracker_id, dtype=int),
                data=dict(custom_data)
            )
        )
    
    return detections_list
python
detections_list = json_to_detections(FILE_NAME)
detections_list

print(f"Detections: {len(detections_list)}")
print(detections_list[0])

### Annotate a Frame

Visualize a frame of the video alongside the detections obtained by parsing the JSON data back into sv.Detections objects. The annotated image shows the original video frame marked with the bounding boxes recovered from the parsed data, providing a visual representation of the identified object(s) in the scene.

Get back sv.Detections

python
FRAME_NUMBER = 100

detections = detections_list[FRAME_NUMBER]
# Recover the original video frame number stored as custom data;
# frames with no detections were skipped, so the list index may not
# match the video frame number.
frame_number = int(detections.data["frame_number"][0])

generator = sv.get_video_frames_generator(SOURCE_VIDEO_PATH, start=frame_number)
frame = next(generator)

Frame from the video (before annotation)

### Annotate Image with Detections

Finally, we can annotate the image with the predictions. Since we are working with an object detection model, we will use the sv.BoxAnnotator and sv.LabelAnnotator classes.

python
bounding_box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()

annotated_frame = frame.copy()
annotated_frame = bounding_box_annotator.annotate(scene=annotated_frame, detections=detections)
annotated_frame = label_annotator.annotate(scene=annotated_frame, detections=detections)
sv.plot_image(annotated_frame, (12, 12))
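
Conceptually, what sv.BoxAnnotator does for each detection is paint the four edges of its bounding box onto the image array. A minimal NumPy sketch of that idea (not Supervision's implementation) looks like this:

```python
import numpy as np

def draw_box(image, x_min, y_min, x_max, y_max, color=(0, 255, 0)):
    # Paint the four edges of a box onto an HxWx3 image via slicing;
    # returns a copy so the original frame is untouched.
    image = image.copy()
    image[y_min:y_max, x_min] = color          # left edge
    image[y_min:y_max, x_max - 1] = color      # right edge
    image[y_min, x_min:x_max] = color          # top edge
    image[y_max - 1, x_min:x_max] = color      # bottom edge
    return image

frame = np.zeros((100, 100, 3), dtype=np.uint8)
annotated = draw_box(frame, 10, 20, 40, 60)
```

The real annotators add line thickness, per-class colors, and label rendering on top of this basic operation.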
