According to the W3C Working Draft, the API is designed to be used in conjunction with other APIs and elements on the web platform, notably XMLHttpRequest (via its responseType and response attributes), and in particular for games and interactive applications. It is also expected to be used alongside the canvas 2D and WebGL 3D graphics APIs.
I will present a list of browsers that support it, a brief introductory overview, and a selection of tutorials, demonstrations, and experiments that others have made available on the web using the creative features of the Web Audio API.
AudioContext is the interface that represents a set of AudioNode objects and their connections, allowing arbitrary routing of signals to the AudioDestinationNode interface, which is what the user hears. According to the specification, in most use cases only a single AudioContext is used per web document. For example, an AudioContext is constructed as follows:
var context = new AudioContext(); // in WebKit-prefixed builds: new webkitAudioContext()
A very simple graphical representation of an example AudioContext interface is displayed in Figure A.
Audio nodes can also represent filters, distortion, or other audio effects, which can be placed between any of the sources (nodes) and the destination (output).
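As a minimal sketch of that routing idea, the snippet below wires a tone source through a low-pass filter to the destination (the node names are standard factory methods; the guard keeps it from running outside a supporting browser, and the frequency values are illustrative):

```javascript
// Sketch: source -> filter -> destination.
function buildFilterChain(context) {
  var source = context.createOscillator();   // a simple tone source
  var filter = context.createBiquadFilter(); // an effect placed in the graph
  filter.type = 'lowpass';
  filter.frequency.value = 800;              // attenuate content above ~800 Hz

  source.connect(filter);                    // source feeds the filter...
  filter.connect(context.destination);       // ...which feeds the speakers
  return source;
}

// Only runs where the API exists (i.e., in a supporting browser);
// older WebKit builds prefix the constructor as webkitAudioContext.
if (typeof window !== 'undefined' && (window.AudioContext || window.webkitAudioContext)) {
  var Ctor = window.AudioContext || window.webkitAudioContext;
  var source = buildFilterChain(new Ctor());
  source.start ? source.start(0) : source.noteOn(0); // older WebKit used noteOn
}
```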
Currently there are five attributes that can be used:
- destination – an AudioDestinationNode with a single input, representing the final destination for all audio to be rendered to the audio hardware.
- sampleRate – the sample rate, in sample frames per second, at which the AudioContext handles audio.
- currentTime – the time in seconds, starting from zero when the context is created and incrementing in real time.
- listener – an AudioListener, used for 3D spatialization.
- activeSourceCount – the number of AudioBufferSourceNodes that are currently playing.
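As an illustrative sketch, these attributes can be read straight off a context; note that the names follow the Working Draft this article describes (activeSourceCount in particular was later dropped from the specification, so it may be undefined in newer builds):

```javascript
// Sketch: inspecting the AudioContext attributes listed above.
function describeContext(context) {
  return {
    sampleRate: context.sampleRate,              // sample frames per second
    currentTime: context.currentTime,            // seconds since the context was created
    hasDestination: !!context.destination,       // the final output node
    hasListener: !!context.listener,             // AudioListener for 3D spatialization
    activeSourceCount: context.activeSourceCount // per the Working Draft; may be undefined
  };
}

// Guarded so it only runs in a supporting browser.
if (typeof window !== 'undefined' && (window.AudioContext || window.webkitAudioContext)) {
  var Ctor = window.AudioContext || window.webkitAudioContext;
  console.log(describeContext(new Ctor()));
}
```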
For more information on the specifications for Web Audio API interfaces, parameters, and other effects, check out the Audio API section.
Tutorials, demonstrations, and experiments
The following tutorials, demonstrations, and experiments use the Web Audio API in whole or as an integral part of their implementations.
The HTML5 Rocks website has a great introductory tutorial entitled Getting Started with Web Audio API authored by Boris Smus (Figure B). The tutorial has sample code snippets and sound demonstrations that let you experience playing sounds with rhythm, changing the volume of a sound, altering play/pause functionality, cross-fading between two sounds, applying simple filters to alter the sound, and more.
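One of the techniques that tutorial covers, cross-fading, is commonly implemented with an equal-power curve so overall loudness stays constant through the fade. The sketch below shows the idea; the gain math is the standard equal-power formula, but the node setup is my own illustration, not Smus's exact code:

```javascript
// Equal-power cross-fade: for a mix position x in [0, 1], return the
// gains for tracks A and B. The two gains sum in power (a^2 + b^2 = 1),
// so perceived loudness stays roughly constant across the fade.
function crossFadeGains(x) {
  return {
    a: Math.cos(x * 0.5 * Math.PI),       // track A fades out as x rises
    b: Math.cos((1 - x) * 0.5 * Math.PI)  // track B fades in
  };
}

// Applying the curve to two GainNodes (browser only).
if (typeof window !== 'undefined' && (window.AudioContext || window.webkitAudioContext)) {
  var Ctor = window.AudioContext || window.webkitAudioContext;
  var ctx = new Ctor();
  var gainA = ctx.createGain ? ctx.createGain() : ctx.createGainNode(); // older name
  var gainB = ctx.createGain ? ctx.createGain() : ctx.createGainNode();
  var g = crossFadeGains(0.5); // halfway through the fade
  gainA.gain.value = g.a;
  gainB.gain.value = g.b;
}
```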
Check out the Chrome Experiment ToneCraft by DinahMoe, which lets you create tunes by adding or removing blocks in seven colors from the left sidebar. Each block represents an individual sound note, and blocks can be placed in any combination onto a grid in a 3D environment similar to Minecraft. You can stack and arrange blocks into unique tones that are sampled through a loop, then press the spacebar to play or stop them. I played around with ToneCraft and found it to be a fun and creative audio instrument, creating this interesting tone loop (Figure C).
Plink is another DinahMoe experiment. It lets you choose from eight instruments, represented by the colors on the right sidebar; clicking the mouse and moving the cursor (the instrument) up and down the musical staff plays notes and tunes up and down the scale. The multiplayer musical game experience lets you create a temporary Plink username that attaches to your cursor, so you can join others who are playing to create interesting music together (Figure D).
Frequency Modulation (FM) with Web Audio API by Gaëtan Renaudeau (who works as a web architect with Zenexity, the Paris startup behind Play Framework) includes an informative series of audio FM synthesis examples. Demonstrations in his post include a Low-Frequency Oscillation (LFO) demo, a Modulator in audible range sample, Frequency ratios: harmonic or dissonant sounds example, and adding in an Envelope for automating amplitude. Another example shows how fine-tuning the Modulator's frequency very close to the Carrier's frequency creates interesting audio effects (Figure E).
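The core FM idea behind those demos is to route a modulator oscillator, through a gain node that sets the modulation depth, into the frequency AudioParam of a carrier oscillator. Here is a minimal sketch of that wiring; the frequencies, ratio, and depth are illustrative values of my own, not Renaudeau's exact code:

```javascript
// FM synthesis sketch: modulator -> depth (GainNode) -> carrier.frequency.
// Integer carrier/modulator ratios tend to sound harmonic; non-integer
// ratios tend toward dissonance.
function modulatorFrequency(carrierFreq, ratio) {
  return carrierFreq * ratio;
}

// Guarded so it only runs in a supporting browser.
if (typeof window !== 'undefined' && (window.AudioContext || window.webkitAudioContext)) {
  var Ctor = window.AudioContext || window.webkitAudioContext;
  var ctx = new Ctor();

  var carrier = ctx.createOscillator();
  carrier.frequency.value = 440;                          // A4

  var modulator = ctx.createOscillator();
  modulator.frequency.value = modulatorFrequency(440, 2); // harmonic 2:1 ratio

  var depth = ctx.createGain ? ctx.createGain() : ctx.createGainNode();
  depth.gain.value = 100;                                 // modulation depth in Hz

  modulator.connect(depth);
  depth.connect(carrier.frequency); // connecting a node to an AudioParam
  carrier.connect(ctx.destination);

  carrier.start(0);
  modulator.start(0);
}
```

Nudging the ratio slightly away from an exact integer (so the modulator sits very close to, but not on, a harmonic of the carrier) reproduces the detuned beating effects the post demonstrates.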
The second exploration by Gaëtan Renaudeau uses Beez, WebRTC, and the Web Audio API to create a real-time web audio experiment that turns smartphones into synthesizer effect controllers. The experiment consists of controlling an audio stream running on a desktop web page with audio effect pads running on phones via a mobile web interface. Because WebRTC is required, it works only in Chrome for Android (Figure F).
JAM with Chrome, which is part of the Google Chrome Web Audio API Demos, allows you to play music online with your friends. The web application lets you select from 18 instruments, including various bass, electric, and acoustic guitars and several drum kits and keyboards. With each instrument, you can select Easy or Pro mode. Plus, several Autoplay modes are offered, along with six chords and knobs (depending on the instrument) that allow you to adjust the chorus, stereo delay, autowha, slapback delay, and more. In addition to the Web Audio API, the technology behind JAM includes WebSockets, HTML5 Canvas, CSS3, and the Go programming language (Figure G).
What this means for web developers
The shift toward more web-enabled audio and video content is gaining steam, with online listening and viewing growing every day. Interfaces such as the HTML5 Web Audio API and other multimedia APIs are sure to see more implementations in the near future, especially as applications that use audio expand across the web and on mobile devices.
Ryan has performed in a broad range of technology support roles for electric-generation utilities, including nuclear power plants, and for the telecommunications industry. He has worked in web development for the restaurant industry and the Federal government.