{{DefaultAPISidebar("Web Speech API")}}
The **Web Speech API** enables you to incorporate voice data into web apps. It has two components:

- Speech recognition is accessed via the {{domxref("SpeechRecognition")}} interface, which provides the ability to recognize voice input (normally via the device's default speech recognition service) and respond appropriately.
- Speech synthesis is accessed via the {{domxref("SpeechSynthesis")}} interface, a text-to-speech component that allows programs to read out their text content (normally via the device's default speech synthesizer).
For more details on using these features, see Using the Web Speech API.
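As a minimal sketch of the recognition side (the guard and the prefixed `webkitSpeechRecognition` fallback are defensive additions, not part of the core example):

```javascript
// Minimal speech recognition sketch. Chromium-based browsers expose the
// prefixed webkitSpeechRecognition; the guard lets this degrade gracefully
// in environments without the API.
function startRecognition(onTranscript) {
  const SpeechRecognition =
    globalThis.SpeechRecognition ?? globalThis.webkitSpeechRecognition;
  if (!SpeechRecognition) {
    return "speech recognition not supported";
  }
  const recognition = new SpeechRecognition();
  recognition.lang = "en-US";
  recognition.interimResults = false;
  recognition.addEventListener("result", (event) => {
    // Pass along the transcript of the best alternative of the latest result.
    onTranscript(event.results[event.results.length - 1][0].transcript);
  });
  recognition.start();
  return "listening";
}

console.log(startRecognition((text) => console.log(text)));
```

Recognition is event-driven: results arrive via `result` events rather than as a return value, so real code attaches handlers before calling `start()`.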
Speech synthesis uses voices: each {{domxref("SpeechSynthesisVoice")}} represents a speech service, with information about its language, name, and URI. The {{domxref("Window.speechSynthesis", "speechSynthesis")}} property, implemented by the `Window` object, provides access to the {{domxref("SpeechSynthesis")}} controller, and is therefore the entry point to speech synthesis functionality.

> [!NOTE]
> The concept of grammar has been removed from the Web Speech API. Related features remain in the specification and are still recognized by supporting browsers for backwards compatibility, but they have no effect on speech recognition services.
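Putting the entry point and voice metadata together, a small sketch (the unsupported-environment guard and the `preferredLang` parameter are illustrative additions; note that `getVoices()` may return an empty list until the browser has loaded its voices):

```javascript
// Speak a phrase with a matching voice, if speech synthesis is available.
// getVoices() can return an empty array before the voice list has loaded;
// production code should also listen for the "voiceschanged" event.
function speak(text, preferredLang = "en-US") {
  const synth = globalThis.speechSynthesis;
  if (!synth) {
    return "speech synthesis not supported";
  }
  const utterance = new SpeechSynthesisUtterance(text);
  // Each SpeechSynthesisVoice exposes lang, name, and voiceURI.
  const voice = synth.getVoices().find((v) => v.lang === preferredLang);
  if (voice) {
    utterance.voice = voice;
  }
  synth.speak(utterance);
  return "speaking";
}

console.log(speak("Hello from the Web Speech API"));
```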
For information on errors reported by the Speech API (for example, "language-not-supported" and "language-unavailable"), see the following documentation:
- {{domxref("SpeechRecognitionErrorEvent.error", "error")}} property of the `SpeechRecognitionErrorEvent` object
- {{domxref("SpeechSynthesisErrorEvent.error", "error")}} property of the `SpeechSynthesisErrorEvent` object

Access to the on-device speech recognition functionality of the Web Speech API is controlled by the {{httpheader("Permissions-Policy/on-device-speech-recognition", "on-device-speech-recognition")}} {{httpheader("Permissions-Policy")}} directive.
Specifically, where a defined policy blocks usage, any attempts to call the API's {{domxref("SpeechRecognition.available_static", "SpeechRecognition.available()")}} or {{domxref("SpeechRecognition.install_static", "SpeechRecognition.install()")}} methods will fail.
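A sketch of how `available()` and `install()` might be used together. The options object shape (`langs`, `processLocally`) and the status strings follow the static methods as currently documented, but exact support varies; the feature-detection guard is an illustrative addition:

```javascript
// Check whether on-device recognition for a language is ready, and trigger
// a language-pack download when it is merely downloadable. Both calls fail
// where a Permissions-Policy blocks on-device-speech-recognition.
async function ensureOnDeviceRecognition(lang) {
  const SpeechRecognition =
    globalThis.SpeechRecognition ?? globalThis.webkitSpeechRecognition;
  if (!SpeechRecognition || typeof SpeechRecognition.available !== "function") {
    return "not supported";
  }
  const status = await SpeechRecognition.available({
    langs: [lang],
    processLocally: true,
  });
  if (status === "downloadable") {
    // install() resolves to a boolean indicating whether installation succeeded.
    const installed = await SpeechRecognition.install({
      langs: [lang],
      processLocally: true,
    });
    return installed ? "available" : "unavailable";
  }
  return status;
}

ensureOnDeviceRecognition("en-US").then((status) => console.log(status));
```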
Our Web Speech API examples illustrate speech recognition and synthesis.
{{Specifications}}
{{Compat}}