{{APIRef("WebRTC")}}{{AvailableInWorkers("window_and_dedicated")}}
The **`getMetadata()`** method of the {{domxref("RTCEncodedVideoFrame")}} interface returns an object containing the metadata associated with the frame.

This includes information about the frame, such as its size, its video encoding, the other frames needed to construct a full image, and its timestamp.
## Syntax

```js-nolint
getMetadata()
```

### Parameters

None.

### Return value

An object with the following properties:
- `contributingSources`
  - : An {{jsxref("Array")}} of numbers, where each number is a synchronization source (ssrc) of a stream of RTP packets that contributed to this frame. For example, a video conferencing application that combines audio and video from several participants into a single frame would set `synchronizationSource` to the ssrc of the application, while `contributingSources` would include the ssrc values of all the individual video and audio sources.
- `dependencies`
  - : An {{jsxref("Array")}} of positive integers containing the `frameId` values of any frames that this frame depends on. A key frame has no dependencies.
- `frameId`
  - : A positive integer that identifies this frame.
- `height`
  - : A positive integer indicating the height of the frame, in pixels.
- `mimeType`
  - : A string containing the MIME type of the codec used for the frame, such as `"video/VP8"`.
- `payloadType`
  - : A positive integer in the range 0 to 127 that describes the format of the RTP payload.
- `receiveTime`
  - : The time at which the frame was received.
- `rtpTimestamp`
  - : The RTP timestamp of the frame.
- `spatialIndex`
  - : A positive integer indicating the spatial layer of the frame, used by codecs that support scalable video coding.
- `synchronizationSource`
  - : A positive integer indicating the synchronization source (ssrc) of the stream of RTP packets that are described by this frame (see {{domxref("RTCInboundRtpStreamStats.ssrc")}}).
- `temporalIndex`
  - : A positive integer indicating the temporal layer of the frame, used by codecs that support scalable video coding.
- `width`
  - : A positive integer indicating the width of the frame, in pixels.
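As an illustration of how these fields relate, the following sketch uses a hypothetical helper (not part of the API) that treats a frame whose `dependencies` array is empty as a key frame, since a key frame can be decoded without reference to any other frame:

```js
// Hypothetical helper: decide whether encoded-frame metadata describes a
// key frame. A key frame depends on no other frames, so its `dependencies`
// array (which lists the `frameId`s of referenced frames) is empty.
function isKeyFrame(metadata) {
  return metadata.dependencies.length === 0;
}

// Example metadata objects, shaped like a getMetadata() result:
console.log(isKeyFrame({ frameId: 1, dependencies: [] })); // true
console.log(isKeyFrame({ frameId: 2, dependencies: [1] })); // false
```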
## Examples

This example WebRTC encoded transform implementation shows how you might get the frame metadata in a `transform()` function and log it.
```js
addEventListener("rtctransform", (event) => {
  const transform = new TransformStream({
    async transform(encodedFrame, controller) {
      // Get the metadata and log it
      const frameMetaData = encodedFrame.getMetadata();
      console.log(frameMetaData);

      // Enqueue the frame without modifying it
      controller.enqueue(encodedFrame);
    },
  });
  event.transformer.readable
    .pipeThrough(transform)
    .pipeTo(event.transformer.writable);
});
```
The resulting object from a local webcam might look like the one shown below. Note that there are no contributing sources because there is just one source.
```json
{
  "contributingSources": [],
  "mimeType": "video/VP8",
  "payloadType": 96,
  "rtpTimestamp": 2503280194,
  "synchronizationSource": 1736709460,
  "dependencies": [],
  "frameId": 1,
  "height": 240,
  "spatialIndex": 0,
  "temporalIndex": 0,
  "width": 320
}
```
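To show how such a metadata object might be consumed, here is a small sketch (the `describeFrame()` helper is hypothetical, not part of the API) that formats the fields above into a one-line summary suitable for logging:

```js
// Hypothetical helper: format a getMetadata() result as a short log line.
function describeFrame(metadata) {
  return (
    `frame ${metadata.frameId}: ${metadata.mimeType} ` +
    `${metadata.width}x${metadata.height}, ` +
    `ssrc ${metadata.synchronizationSource}`
  );
}

console.log(
  describeFrame({
    frameId: 1,
    mimeType: "video/VP8",
    width: 320,
    height: 240,
    synchronizationSource: 1736709460,
  }),
); // "frame 1: video/VP8 320x240, ssrc 1736709460"
```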
## Specifications

{{Specifications}}

## Browser compatibility

{{Compat}}