How can I play audio elements in sync with key presses? - javascript

HTML
<textarea id="words" name="words"></textarea>
<audio id="type" src="type.mp3"></audio>
JS
document.getElementById('words').onkeydown = function() {
  document.getElementById('type').play();
};
I want type.mp3 to play any time I press a key.
But it is not played in sync with the key press.
I am looking for a pure JS solution.

The audio media element depends on the buffering mechanism of the browser and may not play instantly when play() is called.
To play sounds in sync with key presses you would have to use the Web Audio API instead, which lets you play an in-memory buffer and therefore start instantly.
Here is an example of how you can load and trigger the sound:
window.AudioContext = window.AudioContext || window.webkitAudioContext;

var request = new XMLHttpRequest(),
    url = "https://dl.dropboxusercontent.com/s/8fp1hnkwp215gfs/chirp.wav",
    actx = new AudioContext(),
    abuffer;

// load file via XHR
request.open("GET", url, true);
request.responseType = "arraybuffer";
request.onload = function() {
  // asynchronously decode the audio file data in request.response
  actx.decodeAudioData(request.response, function(buffer) {
    if (buffer) {
      abuffer = buffer; // keep a reference to the decoded buffer
      setup();          // set up the key handler
    }
  });
};
request.send();

// set up key handler
function setup() {
  document.getElementById("txt").onkeydown = play;
}

// play sample - a new buffer source must be created each time
function play() {
  var src = actx.createBufferSource();
  src.buffer = abuffer;
  src.connect(actx.destination);
  src.start(0);
}
<textarea id="txt"></textarea>
(Note: at the time of this writing there seems to be a bug in Firefox that reports a column which does not exist in the code at the send() call - if that is a problem, try the code in Chrome.)
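For completeness, here is a minimal sketch of the same approach using fetch and the promise form of decodeAudioData, assuming a browser that supports both, and reusing the same sample URL and textarea id as above:
window.AudioContext = window.AudioContext || window.webkitAudioContext;

var actx = new AudioContext(),
    abuffer;

fetch("https://dl.dropboxusercontent.com/s/8fp1hnkwp215gfs/chirp.wav")
  .then(function(response) { return response.arrayBuffer(); })
  .then(function(data) { return actx.decodeAudioData(data); })
  .then(function(buffer) {
    abuffer = buffer; // keep a reference to the decoded buffer
    document.getElementById("txt").onkeydown = play;
  });

function play() {
  if (actx.state === "suspended") actx.resume(); // autoplay policies; a key press counts as a user gesture
  var src = actx.createBufferSource();           // a new source node per key press
  src.buffer = abuffer;
  src.connect(actx.destination);
  src.start(0);
}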

JavaScript is an asynchronous, event-driven language, so you can't make a synchronous function.

Related

WebAudioApi StreamSource

I'd like to use the Web Audio API with streams. Pre-listening is very important and can't be realized when I have to wait for each audio file to be downloaded.
Downloading the entire audio data is not intended, but it is the only way I can get it to work at the moment:
request.open('GET', src, true);
request.responseType = 'arraybuffer';
request.onload = function() {
  var audioData = request.response;
  // audioData is the entire downloaded audio file, which is required by the audioCtx anyway
  audioCtx.decodeAudioData(audioData, function(buffer) {
    var source = audioCtx.createBufferSource();
    source.buffer = buffer;
    source.connect(audioCtx.destination);
    source.loop = true;
    source.start(0); // AudioBufferSourceNode has start(), not play()
  }, function(e) {
    console.error('Error with decoding audio data', e);
  });
};
request.send();
I found a possibility to use a stream when requesting it from navigator.mediaDevices:
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function(stream) {
    var audioCtx = new AudioContext();
    var source = audioCtx.createMediaStreamSource(stream);
    // a MediaStreamAudioSourceNode has no play(); it produces audio once connected
    source.connect(audioCtx.destination);
  });
Is it possible to use XHR instead of navigator.mediaDevices to get the stream?
// fetch doesn't support a Range header, which would make seeking impossible with a stream (I guess)
fetch(src).then(response => {
  const reader = response.body.getReader();
  // ReadableStream is not working with createMediaStreamSource
  const stream = new ReadableStream({...});
  var audioCtx = new AudioContext();
  var source = audioCtx.createMediaStreamSource(stream);
  source.play();
});
It doesn't work, because the ReadableStream does not work with createMediaStreamSource.
My first step is recreating the functionality of the HTML audio element with seek functionality. Is there any way to get an XHR stream and feed it into an AudioContext?
The final idea is to create a single-track audio editor with fades, cutting, pre-listening, mixing and export functionality.
EDIT:
Another attempt was to use the HTML audio element and create a source node from it:
var audio = new Audio();
audio.src = src;
var source = audioCtx.createMediaElementSource(audio);
source.connect(audioCtx.destination);
// the source doesn't expose the start() method now
// the media element reference is not handled by the internal context scheduler
source.mediaElement.play();
The audio element supports a stream, but cannot be handled by the context scheduler, which is important in order to create an audio editor with pre-listening functionality.
It would be great to point the standard source node's buffer at the audio element's buffer, but I couldn't find out how to connect them.
I experienced this problem before and have been working on a demo solution below to stream audio in chunks with the Streams API. Seeking is not currently implemented, but it could be derived. Because bypassing decodeAudioData() is currently required, custom decoders must be provided that allow for chunk-based decoding:
https://github.com/AnthumChris/fetch-stream-audio
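The core of that approach is reading the response body in chunks via the Streams API instead of waiting for the whole file. A minimal sketch of the chunked read follows; handleChunk is a hypothetical hook, since (as noted above) decodeAudioData() needs the whole file and real chunk decoding requires a custom decoder:
// minimal sketch: pull an audio response chunk by chunk with the Streams API
fetch(src).then(function(response) {
  var reader = response.body.getReader();
  function pump() {
    return reader.read().then(function(result) {
      if (result.done) return;       // stream exhausted
      handleChunk(result.value);     // result.value is a Uint8Array of bytes
      return pump();
    });
  }
  return pump();
});

function handleChunk(chunk) {
  // hypothetical hook: hand the bytes to a chunk-capable decoder/scheduler
  console.log("received " + chunk.length + " bytes");
}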

Does MediaElementSource use less memory than BufferSource in Web Audio API?

I am making a little app that will play audio files (mp3, wav) with the ability to use an equalizer on them (like a regular audio player); for this I am using the Web Audio API.
I managed to get the playing part working in two ways. One is using decodeAudioData of BaseAudioContext:
function getData() {
  source = audioCtx.createBufferSource();
  var request = new XMLHttpRequest();
  request.open('GET', 'viper.ogg', true);
  request.responseType = 'arraybuffer';
  request.onload = function() {
    var audioData = request.response;
    audioCtx.decodeAudioData(audioData, function(buffer) {
      source.buffer = buffer;
      source.connect(audioCtx.destination);
      source.loop = true;
    },
    function(e) { console.log("Error with decoding audio data" + e.err); });
  };
  request.send();
}

// wire up buttons to stop and play audio
play.onclick = function() {
  getData();
  source.start(0);
  play.setAttribute('disabled', 'disabled');
};
and the other, much easier way is with Audio() and createMediaElementSource():
let _audioContainer = new Audio('assets/mp3/pink_noise.wav');
let _sourceNodes = _AudioContext.createMediaElementSource(_audioContainer);
_sourceNodes.connect(_AudioContext.destination);
_audioContainer.play();
I think the second one uses less memory than createBufferSource(), because createBufferSource() stores the complete audio file in memory. But I am not sure about this; I really do not have much experience with tools like the Chrome DevTools to read it correctly.
Does createMediaElementSource() use less memory than createBufferSource()?
Edit:
Using Chrome's Task Manager, it seems that with createBufferSource() just loading the file increases the Memory column by around 40,000 K, against roughly 60 K with createMediaElementSource(), and the JavaScript Memory by 1,000 K vs 20 K.
I think you've found the answer in the task manager.
You need to be aware of a couple of things.
- With a media element, you lose sample-accurate control; this may not be important to you.
- You need appropriate access permissions when using a MediaElementAudioSourceNode; this may not be a problem if all of your assets are on the same server (a minimal sketch follows).
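For the second point, here is a minimal sketch of the CORS opt-in for a cross-origin media element; the URL is a placeholder, and the server must also send an Access-Control-Allow-Origin header:
var ctx = new AudioContext();
var audio = new Audio();
audio.crossOrigin = 'anonymous';              // request the asset with CORS
audio.src = 'https://example.com/track.mp3';  // placeholder URL
var node = ctx.createMediaElementSource(audio);
node.connect(ctx.destination);                // without CORS approval this node outputs silence
audio.play();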

Javascript - Streaming Audio On The Fly (Web Audio API & XHR)

I have a simple XMLHttpRequest running to fetch an audio file; when it's done fetching, it decodes the audio and plays it.
var xhr = new XMLHttpRequest();
xhr.open('GET', /some url/, true);
xhr.responseType = 'arraybuffer';
xhr.onload = function() {
  decode(xhr.response);
}.bind(this);
xhr.send(null);
The problem with this, however, is that the file decodes only after the request has finished downloading. Is there an approach for streaming audio without having to wait for it to finish downloading, and without the use of <audio> tags?
You still need an HTML5 Audio object, but instead of using it directly you can use a MediaElementAudioSourceNode along with the Audio element to take advantage of the Web Audio API.
Excerpt from here
Rather than going the usual path of loading a sound directly by issuing an XMLHttpRequest and then decoding the buffer, you can use the media element audio source node (MediaElementAudioSourceNode) to create nodes that behave much like audio source nodes (AudioSourceNode), but wrap an existing <audio> tag. Once we have this node connected to our audio graph, we can use our knowledge of the Web Audio API to do great things. This small example applies a low-pass filter to the <audio> tag:
Sample Code:
window.addEventListener('load', onLoad, false);

function onLoad() {
  var context = new AudioContext();
  var audio = new Audio();
  var source = context.createMediaElementSource(audio);
  var filter = context.createBiquadFilter();
  filter.type = 'lowpass';   // modern string value; the filter.LOWPASS constant is deprecated
  filter.frequency.value = 440;
  source.connect(filter);
  filter.connect(context.destination);
  audio.src = 'http://example.com/the.mp3';
  audio.play();
}

What is the best audio file format to play audio from javascript in custom chromecast receiver

I have developed a simple custom chromecast receiver for a game.
In it I play short sounds from the receiver javascript using:
this.bounceSound = new Audio("paddle.ogg");
to create the audio object when the game is loaded, and then using:
this.bounceSound.play();
to play the sound when needed in the game.
This works fine in Chrome on my laptop, but when running the receiver on my Chromecast some sounds don't play and others are delayed.
Could this be a problem with my choice of sound format (.ogg) for the audio files?
If not, what else could be the problem?
Are there any best practices on the details of the sound files (frequency, bit depth, etc.)?
Thanks
Just for the record to avoid future confusion of other developers trying to load and play back multiple short sounds at the same time:
On Chromecast, the HTML video and audio tags can only support a
single active media element at a time.
(Source: https://plus.google.com/+LeonNicholls/posts/3Fq5jcbxipJ - make sure to read the rest; it also contains important information about limitations)
Only one audio element will be loaded; the others get error code 4 (at least that was the case during my debugging sessions). The correct way of loading and playing back several short sounds is to use the Web Audio API, as explained by Leon Nicholls in the Google+ post linked above.
Simple Web Audio API Wrapper
I whipped up a crude replacement for the HTMLAudioElement in JavaScript that is based on the Web Audio API:
function WebAudio(src) {
  if (src) this.load(src);
}

WebAudio.prototype.audioContext = new AudioContext();

WebAudio.prototype.load = function(src) {
  if (src) this.src = src;
  console.log('Loading audio ' + this.src);
  var self = this;
  var request = new XMLHttpRequest();
  request.open("GET", this.src, true);
  request.responseType = "arraybuffer";
  request.onload = function() {
    self.audioContext.decodeAudioData(request.response, function(buffer) {
      if (!buffer) {
        if (self.onerror) self.onerror();
        return;
      }
      self.buffer = buffer;
      if (self.onload) self.onload(self);
    }, function(error) {
      self.onerror(error);
    });
  };
  request.send();
};

WebAudio.prototype.play = function() {
  var source = this.audioContext.createBufferSource();
  source.buffer = this.buffer;
  source.connect(this.audioContext.destination);
  source.start(0);
};
It can be used as follows:
var audio1 = new WebAudio('sounds/sound1.ogg');
audio1.onload = function() {
  audio1.play();
};

var audio2 = new WebAudio('sounds/sound2.ogg');
audio2.onload = function() {
  audio2.play();
};
You should download the sounds up front before you start the game. Also be aware that these sounds will then be stored in memory and Chromecast has very limited memory for that. Make sure these sounds are small and will all fit into memory.
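A minimal sketch of such up-front loading with the wrapper above; the sound paths and startGame() are placeholders:
var sounds = {};
var toLoad = ['sounds/sound1.ogg', 'sounds/sound2.ogg']; // placeholder paths
var remaining = toLoad.length;

toLoad.forEach(function(src) {
  var sound = new WebAudio(src);
  sound.onload = function() {
    sounds[src] = sound;
    if (--remaining === 0) startGame(); // hypothetical entry point, runs once all buffers are decoded
  };
});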

HTML5 video full preload in javascript

I have a high quality video which I cannot compress too much as it's going to be the base of a lot of image analysis whereby each frame will be redrawn into the canvas and then manipulated.
I'm trying to preload the whole thing before playing it as I can't have the video stop, buffer and continue. Is there an event which I can listen for which signifies that the whole video has preloaded before I commence playback?
Here's how I'm doing it in JS/jQuery:
this.canvas = this.el.find("canvas")[0];
this.video = this.el.find("video")[0];
this.ctx = this.canvas.getContext("2d");

this.video.autoplay = false;
this.video.addEventListener("play", this.draw);
this.video.addEventListener("timeupdate", this.draw);
this.video.addEventListener("ended", function() { this.trigger("complete", this); }.bind(this));
This will load the entire video in JavaScript
var r = new XMLHttpRequest();
r.onload = function() {
  myVid.src = URL.createObjectURL(r.response);
  myVid.play();
};

if (myVid.canPlayType('video/mp4;codecs="avc1.42E01E, mp4a.40.2"')) {
  r.open("GET", "slide.mp4");
} else {
  r.open("GET", "slide.webm");
}

r.responseType = "blob";
r.send();
canplaythrough is the event that should fire when enough data has been downloaded to play without buffering.
From the Opera team's excellent (although maybe slightly dated now) resource Everything you need to know about HTML5 video and audio:
If the load is successful, whether using the src attribute or using source elements, then as data is being downloaded, progress events are fired. When enough data has been loaded to determine the video's dimensions and duration, a loadedmetadata event is fired. When enough data has been loaded to render a frame, the loadeddata event is fired. When enough data has been loaded to be able to play a little bit of the video, a canplay event is fired. When the browser determines that it can play through the whole video without stopping for downloading more data, a canplaythrough event is fired; this is also when the video starts playing if it has an autoplay attribute.
'canplaythrough' support matrix available here: https://caniuse.com/mdn-api_htmlmediaelement_canplaythrough_event
You can get around the support limitations by binding the load event to the same function, as it will trigger in those cases.
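For reference, a minimal listener sketch for the event described above:
var video = document.querySelector('video');
video.preload = 'auto';
video.addEventListener('canplaythrough', function onCanPlay() {
  video.removeEventListener('canplaythrough', onCanPlay); // the event can fire more than once
  video.play();
});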
1. Download the video using fetch
2. Convert the response to a blob
3. Create an object URL from the blob (e.g. blob:http://localhost:8080/df3c4336-2d9f-4ba9-9714-2e9e6b2b8888)
async function preloadVideo(src) {
  const res = await fetch(src);
  const blob = await res.blob();
  return URL.createObjectURL(blob);
}
Usage:
const video = document.createElement("video");
video.src = await preloadVideo("https://example.com/video.mp4");
Hope this could help you
var xhrReq = new XMLHttpRequest();
xhrReq.open('GET', 'yourVideoSrc', true);
xhrReq.responseType = 'blob';

xhrReq.onload = function() {
  if (this.status === 200) {
    var vid = URL.createObjectURL(this.response);
    video.src = vid;
  }
};

xhrReq.onerror = function() {
  console.log('err', arguments);
};

xhrReq.onprogress = function(e) {
  if (e.lengthComputable) {
    var percentComplete = ((e.loaded / e.total) * 100 | 0) + '%';
    console.log('progress: ', percentComplete);
  }
};

xhrReq.send();
And then, if your video src is on another domain, you have to handle CORS.
So far the most reliable solution we found was to play it and wait for the buffer to be fully loaded.
Which means that if the video is long, you will have to wait for almost the whole video length.
That isn't cool, I know.
Wondering if someone has figured out some other magically reliable way of doing it (ideally using something like PreloadJS, which automatically falls back to Flash when HTML5 video isn't supported).
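A minimal sketch of that play-and-watch-the-buffer approach, muting the video during the warm-up pass (the 0.1 s tolerance is an assumption):
var video = document.querySelector('video');
video.muted = true;
video.play(); // force the browser to keep downloading

video.addEventListener('progress', function check() {
  var buffered = video.buffered;
  if (buffered.length && buffered.end(buffered.length - 1) >= video.duration - 0.1) {
    video.removeEventListener('progress', check);
    video.pause();
    video.currentTime = 0;
    video.muted = false; // fully buffered; ready for the real playback
  }
});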
You can use this nice plugin:
https://github.com/GianlucaGuarini/jquery.html5loader
In its API there is an onComplete event that is triggered when the plugin finishes loading all of the sources.
Does this work?
video.onloadeddata = function() {
  video.onseeked = function() {
    if (video.seekable.end(0) >= video.duration - 0.1) {
      alert("Video is all loaded!");
    } else {
      video.currentTime = video.buffered.end(0); // seek ahead to force more buffering
    }
  };
  video.currentTime = 0; // first seek to trigger the event
};
