---
title: "RTCEncodedAudioFrame: getMetadata() method"
slug: Web/API/RTCEncodedAudioFrame/getMetadata
page-type: web-api-instance-method
browser-compat: api.RTCEncodedAudioFrame.getMetadata
---

{{APIRef("WebRTC")}}{{AvailableInWorkers("window_and_dedicated")}}
The **`getMetadata()`** method of the {{domxref("RTCEncodedAudioFrame")}} interface returns an object containing the metadata associated with the frame.
This includes information about the frame, such as the audio encoding used, the synchronization source and contributing sources, and the sequence number (for incoming frames).
## Syntax

```js-nolint
getMetadata()
```
### Parameters

None.
### Return value

An object with the following properties:

- `audioLevel`
  - : A number between 0 and 1 representing the audio level of the frame.
    For incoming frames the value is derived from the level signaled in the RFC 6464 header extension, converted from its `-dBov` value using the formula `10^(-rfc_level/20)` (so `1.0` represents the maximum level, 0 dBov, and values near 0 represent silence).
    If the RFC 6464 header extension is not present in the received packets of the frame, `audioLevel` will be `undefined`.
- `captureTime`
  - : A number representing the time at which the frame was captured.
- `contributingSources`
  - : An array of values representing the CSRC (contributing source) identifiers for the frame.
    For example, if an application combines the audio from a number of users, `synchronizationSource` would include the SSRC of the application, while `contributingSources` would include the SSRC values of all the individual audio sources.
- `mimeType`
  - : A string containing the MIME type of the codec used for the frame, such as `"audio/opus"`.
- `payloadType`
  - : A positive integer in the range 0 to 127 indicating the RTP payload format of the frame.
- `receiveTime`
  - : A number representing the time at which an incoming frame was received.
- `rtpTimestamp`
  - : A number representing the RTP timestamp of the frame.
- `sequenceNumber`
  - : A positive integer indicating the RTP sequence number of an incoming frame (not set for outgoing frames).
- `synchronizationSource`
  - : A positive integer indicating the SSRC (synchronization source) identifier of the stream the frame belongs to.
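Since `audioLevel` encodes the RFC 6464 level as `10^(-rfc_level/20)`, you can invert that formula to recover the level in `-dBov`. A minimal sketch; the helper name is illustrative, not part of the API:

```js
// Invert audioLevel = 10^(-rfc_level / 20) to recover the RFC 6464
// audio level in dBov (0 is the loudest level, 127 the quietest).
function audioLevelToDbov(audioLevel) {
  return -20 * Math.log10(audioLevel);
}

audioLevelToDbov(1); // 0 dBov (maximum level)
audioLevelToDbov(0.001584893192461114); // ≈ 56 dBov
```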
## Examples

This example WebRTC encoded transform implementation shows how you might get the frame metadata in a `transform()` function and log it.
```js
addEventListener("rtctransform", (event) => {
  const transform = new TransformStream({
    async transform(encodedFrame, controller) {
      // Get the metadata and log it
      const frameMetaData = encodedFrame.getMetadata();
      console.log(frameMetaData);

      // Enqueue the frame without modifying it
      controller.enqueue(encodedFrame);
    },
  });
  event.transformer.readable
    .pipeThrough(transform)
    .pipeTo(event.transformer.writable);
});
```
The resulting object for a frame from a local microphone might look like the one shown below.
Note that there are no contributing sources because there is just one source, and no `sequenceNumber` because this is an outgoing frame.
```json
{
  "captureTime": 19745.400000000373,
  "contributingSources": [],
  "mimeType": "audio/opus",
  "payloadType": 111,
  "rtpTimestamp": 1786045165,
  "synchronizationSource": 3365032712,
  "audioLevel": 0.001584893192461114
}
```
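As the sample output above suggests, the metadata itself reveals a frame's direction: receive-side fields such as `sequenceNumber` are only populated for incoming frames. A minimal sketch of that check, with an illustrative helper name:

```js
// Hypothetical helper: returns true for incoming (received) frames,
// which are the only frames whose metadata carries a sequenceNumber.
function isIncomingFrame(metadata) {
  return metadata.sequenceNumber !== undefined;
}

// The outgoing microphone frame above has no sequenceNumber:
isIncomingFrame({ synchronizationSource: 3365032712 }); // false
isIncomingFrame({ sequenceNumber: 42, receiveTime: 19745.4 }); // true
```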
## Specifications

{{Specifications}}

## Browser compatibility

{{Compat}}