{{APIRef("Web Audio API")}}{{deprecated_header}}
The **`AudioProcessingEvent`** interface of the [Web Audio API](/en-US/docs/Web/API/Web_Audio_API) represents events that occur when a {{domxref("ScriptProcessorNode")}} input buffer is ready to be processed.

An `audioprocess` event with this interface is fired on a {{domxref("ScriptProcessorNode")}} when audio processing is required. During audio processing, the input buffer is read and processed to produce output audio data, which is then written to the output buffer.
> [!WARNING]
> This feature has been deprecated and should be replaced by an [`AudioWorklet`](/en-US/docs/Web/API/AudioWorklet).
{{InheritanceDiagram}}
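As a rough sketch of the replacement recommended in the warning above, the same kind of per-sample processing can be done in an {{domxref("AudioWorkletProcessor")}} running on the audio rendering thread. The file name `white-noise-processor.js` and the registered name `white-noise-adder` below are illustrative choices, not part of any API:

```js
// white-noise-processor.js: runs on the audio rendering thread
class WhiteNoiseAdderProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < output.length; channel++) {
      const inputData = input[channel];
      const outputData = output[channel];
      for (let sample = 0; sample < outputData.length; sample++) {
        // Copy the input sample (0 if the input is silent/disconnected)
        // and add a small amount of white noise
        const inputSample = inputData ? inputData[sample] : 0;
        outputData[sample] = inputSample + (Math.random() * 2 - 1) * 0.1;
      }
    }
    return true; // Keep the processor alive
  }
}

registerProcessor("white-noise-adder", WhiteNoiseAdderProcessor);
```

On the main thread, the module is loaded and the node inserted into the graph (here `someSourceNode` stands in for whatever source node you have):

```js
// Main thread (inside an async function or a module)
const audioCtx = new AudioContext();
await audioCtx.audioWorklet.addModule("white-noise-processor.js");
const noiseNode = new AudioWorkletNode(audioCtx, "white-noise-adder");
someSourceNode.connect(noiseNode).connect(audioCtx.destination);
```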
## Instance properties

_Also implements the properties inherited from its parent, {{domxref("Event")}}._

- {{domxref("AudioProcessingEvent.playbackTime")}} {{ReadOnlyInline}}
  - : A double representing the time when the audio will be played, as defined by the time of {{domxref("BaseAudioContext/currentTime", "AudioContext.currentTime")}}.
- {{domxref("AudioProcessingEvent.inputBuffer")}} {{ReadOnlyInline}}
  - : An {{domxref("AudioBuffer")}} containing the input audio data to be processed. The number of channels is defined as a parameter, `numberOfInputChannels`, of the factory method {{domxref("BaseAudioContext/createScriptProcessor", "AudioContext.createScriptProcessor()")}}. Note that the returned `AudioBuffer` is only valid in the scope of the event handler.
- {{domxref("AudioProcessingEvent.outputBuffer")}} {{ReadOnlyInline}}
  - : An {{domxref("AudioBuffer")}} where the output audio data should be written. The number of channels is defined as a parameter, `numberOfOutputChannels`, of the factory method {{domxref("BaseAudioContext/createScriptProcessor", "AudioContext.createScriptProcessor()")}}. Note that the returned `AudioBuffer` is only valid in the scope of the event handler.
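The scoping caveat above matters in practice: if samples need to outlive the handler, they must be copied. A minimal sketch, assuming a {{domxref("ScriptProcessorNode")}} named `scriptNode` already exists:

```js
// Assumes `scriptNode` is an existing ScriptProcessorNode
const savedChunks = [];

scriptNode.addEventListener("audioprocess", (event) => {
  // `inputBuffer` is only valid while this handler runs,
  // so copy the channel data rather than keeping a reference to it
  const inputData = event.inputBuffer.getChannelData(0);
  savedChunks.push(new Float32Array(inputData));
});
```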
## Examples

### Adding white noise using a script processor

The following example shows how to use a `ScriptProcessorNode` to take a track loaded via {{domxref("BaseAudioContext/decodeAudioData", "AudioContext.decodeAudioData()")}}, process it by adding a bit of white noise to each audio sample of the input track (buffer), and play it through the {{domxref("AudioDestinationNode")}}.

For each channel and each sample frame, the script node's `audioprocess` event handler uses the associated `audioProcessingEvent` to loop through each channel of the input buffer, and each sample in each channel, adding a small amount of white noise before setting that result to be the output sample in each case.
> [!NOTE]
> For a full working example, see our [script-processor-node](https://mdn.github.io/webaudio-examples/script-processor-node/) GitHub repo. (You can also access the [source code](https://github.com/mdn/webaudio-examples/tree/main/script-processor-node).)
```js
const myScript = document.querySelector("script");
const myPre = document.querySelector("pre");
const playButton = document.querySelector("button");

// Create AudioContext and buffer source
let audioCtx;

async function init() {
  audioCtx = new AudioContext();
  const source = audioCtx.createBufferSource();

  // Create a ScriptProcessorNode with a bufferSize of 4096 and
  // a single input and output channel
  const scriptNode = audioCtx.createScriptProcessor(4096, 1, 1);

  // Load in an audio track using fetch() and decodeAudioData()
  try {
    const response = await fetch("viper.ogg");
    const arrayBuffer = await response.arrayBuffer();
    source.buffer = await audioCtx.decodeAudioData(arrayBuffer);
  } catch (err) {
    console.error(`Unable to fetch the audio file. Error: ${err.message}`);
  }

  // Give the node a function to process audio events
  scriptNode.addEventListener("audioprocess", (audioProcessingEvent) => {
    // The input buffer is the song we loaded earlier
    const inputBuffer = audioProcessingEvent.inputBuffer;

    // The output buffer contains the samples that will be modified
    // and played
    const outputBuffer = audioProcessingEvent.outputBuffer;

    // Loop through the output channels (in this case there is only one)
    for (let channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
      const inputData = inputBuffer.getChannelData(channel);
      const outputData = outputBuffer.getChannelData(channel);

      // Loop through the 4096 samples
      for (let sample = 0; sample < inputBuffer.length; sample++) {
        // Make the output equal to the input
        outputData[sample] = inputData[sample];

        // Add noise to each output sample
        outputData[sample] += (Math.random() * 2 - 1) * 0.1;
      }
    }
  });

  source.connect(scriptNode);
  scriptNode.connect(audioCtx.destination);
  source.start();

  // When the buffer source stops playing, disconnect everything
  source.addEventListener("ended", () => {
    source.disconnect(scriptNode);
    scriptNode.disconnect(audioCtx.destination);
  });
}

// Wire up the play button
playButton.addEventListener("click", () => {
  if (!audioCtx) {
    init();
  }
});
```
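Note that the example only creates the `AudioContext` inside the button's `click` handler. Browser autoplay policies only allow an `AudioContext` to begin producing sound in response to a user gesture, so tying the setup to a button press ensures the context is allowed to play.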
## Specifications

{{Specifications}}

## Browser compatibility

{{Compat}}