---
title: AudioContext
slug: Web/API/AudioContext
page-type: web-api-interface
browser-compat: api.AudioContext
---

{{APIRef("Web Audio API")}}

The **`AudioContext`** interface represents an audio-processing graph built from audio modules linked together, each represented by an {{domxref("AudioNode")}}.

An audio context controls both the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an `AudioContext` before you do anything else, as everything happens inside a context. It's recommended to create one `AudioContext` and reuse it instead of initializing a new one each time, and it's OK to use a single `AudioContext` for several different audio sources and pipelines concurrently.
{{InheritanceDiagram}}
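
For example, here is a minimal sketch of the reuse pattern described above: one `AudioContext` shared by several sounds. The `playTone` helper is illustrative, not part of the Web Audio API.

```js
// One context for the whole page, created once and reused
const audioCtx = new AudioContext();

function playTone(frequency, durationSeconds) {
  // Each call builds a small pipeline inside the same shared context
  const oscillator = audioCtx.createOscillator();
  oscillator.frequency.value = frequency;
  oscillator.connect(audioCtx.destination);
  oscillator.start();
  oscillator.stop(audioCtx.currentTime + durationSeconds);
}

playTone(440, 0.5); // concert A
playTone(660, 0.5); // a second source, same context
```

Note that browsers may start a context in the `suspended` state until a user gesture occurs, in which case you would need to call {{domxref("AudioContext.resume()")}} first.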

## Constructor

- {{domxref("AudioContext.AudioContext", "AudioContext()")}}
  - : Creates and returns a new `AudioContext` object.

## Instance properties

_Also inherits properties from its parent interface, {{domxref("BaseAudioContext")}}._

- {{domxref("AudioContext.baseLatency")}} {{ReadOnlyInline}}
  - : Returns the number of seconds of processing latency incurred by the `AudioContext` passing the audio from the {{domxref("AudioDestinationNode")}} to the audio subsystem.

## Instance methods

_Also inherits methods from its parent interface, {{domxref("BaseAudioContext")}}._

- {{domxref("AudioContext.getOutputTimestamp()")}}
  - : Returns a new `AudioTimestamp` object containing two audio timestamp values relating to the current audio context.

## Examples

Basic audio context declaration:

```js
// Create an audio context
const audioCtx = new AudioContext();

// Create nodes inside the context
const oscillatorNode = audioCtx.createOscillator();
const gainNode = audioCtx.createGain();
const finish = audioCtx.destination;
// etc.
```
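
The snippet above creates the nodes but does not wire them together. As a rough continuation (a sketch, not part of the original example), you could connect the pipeline and start the oscillator like this:

```js
// Wire the pipeline: oscillator -> gain -> speakers
oscillatorNode.connect(gainNode);
gainNode.connect(finish);

gainNode.gain.value = 0.5; // halve the volume
oscillatorNode.start(); // begin producing a tone
```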

## Specifications

{{Specifications}}

## Browser compatibility

{{Compat}}