{{ APIRef("Web Audio API") }}
The `createChannelSplitter()` method of the {{domxref("BaseAudioContext")}} interface is used to create a {{domxref("ChannelSplitterNode")}},
which is used to access the individual channels of an audio stream and process them separately.
> [!NOTE]
> The {{domxref("ChannelSplitterNode.ChannelSplitterNode", "ChannelSplitterNode()")}} constructor is the recommended way to create a {{domxref("ChannelSplitterNode")}}; see Creating an AudioNode.
## Syntax

```js-nolint
createChannelSplitter(numberOfOutputs)
```
### Parameters

- `numberOfOutputs`
  - : The number of channels in the input audio stream that you want to output separately; the default is 6 if this parameter is not specified.

### Return value

A {{domxref("ChannelSplitterNode")}}.
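As the note above points out, the `ChannelSplitterNode()` constructor is the recommended alternative to this factory method. A minimal sketch comparing the two forms (browser-only, since `AudioContext` is not available outside a user agent; the variable names are illustrative):

```js
// Both forms produce an equivalent ChannelSplitterNode.
const audioCtx = new AudioContext();

// Factory method on the context:
const splitterA = audioCtx.createChannelSplitter(2);

// Constructor form, recommended by the note above:
const splitterB = new ChannelSplitterNode(audioCtx, { numberOfOutputs: 2 });

console.log(splitterA.numberOfOutputs); // 2
console.log(splitterB.numberOfOutputs); // 2
```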
## Examples

The following simple example shows how you could separate a stereo track (say, a piece of music) and process the left and right channels differently. To use them, you need to use the second and third parameters of the {{domxref("AudioNode/connect", "AudioNode.connect(AudioNode)")}} method, which allow you to specify the index of the channel to connect from and the index of the channel to connect to.
```js
const ac = new AudioContext();
ac.decodeAudioData(someStereoBuffer, (data) => {
  const source = ac.createBufferSource();
  source.buffer = data;
  const splitter = ac.createChannelSplitter(2);
  source.connect(splitter);
  const merger = ac.createChannelMerger(2);

  // Reduce the volume of the left channel only
  const gainNode = ac.createGain();
  gainNode.gain.setValueAtTime(0.5, ac.currentTime);
  splitter.connect(gainNode, 0);

  // Connect the splitter back to the second input of the merger: we
  // effectively swap the channels, here, reversing the stereo image.
  gainNode.connect(merger, 0, 1);
  splitter.connect(merger, 1, 0);

  const dest = ac.createMediaStreamDestination();

  // Because we have used a ChannelMergerNode, we now have a stereo
  // MediaStream we can use to pipe the Web Audio graph to WebRTC,
  // MediaRecorder, etc.
  merger.connect(dest);
});
```
## Specifications

{{Specifications}}

## Browser compatibility

{{Compat}}