Suppose you use the Web Audio API to play a pure tone:
ctx = new AudioContext();
src = ctx.createOscillator();
src.frequency.value = 261.63; // play middle C (frequency is an AudioParam)
src.connect(ctx.destination);
src.start();
But later on, you decide you want to stop the sound:
src.stop();
From this point on, src is completely useless; if you try to start it again, you get:
src.start()
VM564:1 Uncaught DOMException: Failed to execute 'start' on 'AudioScheduledSourceNode': cannot call start more than once.
at <anonymous>:1:5
If you were making, say, a little online keyboard, you're constantly turning notes on and off. It seems really clunky to remove the old object from the audio graph, create a brand-new object, connect() it into the graph, and then discard it later, when it would be simpler to just turn one object on and off as needed.
Is there some important reason the Web Audio API does things like this? Or is there some cleaner way of restarting an audio source?
Use connect() and disconnect(). You can then change the values of any AudioNode to change the sound.
(The button is there because AudioContext requires a user gesture before it will run in a Stack Snippet.)
play = () => {
d.addEventListener('mouseover',()=>src.connect(ctx.destination));
d.addEventListener('mouseout',()=>src.disconnect(ctx.destination));
ctx = new AudioContext();
src = ctx.createOscillator();
src.frequency.value = 261.63; // play middle C
src.start();
}
div {
height:32px;
width:32px;
background-color:red
}
div:hover {
background-color:green
}
<button onclick='play();this.disabled=true;'>play</button>
<div id='d'></div>
This is exactly how the Web Audio API works. Sound generator nodes like oscillator nodes and audio buffer source nodes are intended to be used once. Every time you want to play your oscillator, you have to create it and set it up, just like you said. I know it seems like a hassle, but you can abstract it into a play() method that handles those details for you, so you don't have to think about it every time you play an oscillator. Also, don't worry about the performance implications of creating so many nodes; the Web Audio API is designed to be used this way.
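For example, here is a minimal sketch of such a helper (the playNote name and the default duration are illustrative choices, not part of the API):

const ctx = new AudioContext();

function playNote(frequency, duration) {
  // Oscillators are one-shot nodes, so create a fresh one per note.
  const osc = ctx.createOscillator();
  osc.frequency.value = frequency;
  osc.connect(ctx.destination);
  osc.start();
  // Schedule the stop on the audio clock; the node can be garbage-collected afterwards.
  osc.stop(ctx.currentTime + (duration || 0.2));
}

playNote(261.63); // middle C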
If you just want to make music on the internet, and you're not as interested in learning the ins and outs of the web audio api, you might be interested in using a library I wrote that makes things like this easier: https://github.com/rserota/wad
I am working on a 12-voice polyphonic synthesizer with 2 oscillators per voice.
I now never stop the oscillators; I disconnect them instead. You can do that with setTimeout. For the delay, take the longer of the two release phases from the amp envelopes for this set of oscillators, subtract AudioContext.currentTime, and multiply by 1000 (setTimeout works in milliseconds, Web Audio works in seconds).
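A rough sketch of that idea (the function and parameter names are mine; releaseEndTime would come from your own envelope scheduling):

function scheduleDisconnect(ctx, oscillators, releaseEndTime) {
  // releaseEndTime is on the AudioContext clock, in seconds;
  // setTimeout wants a relative delay in milliseconds.
  const delayMs = (releaseEndTime - ctx.currentTime) * 1000;
  setTimeout(function () {
    oscillators.forEach(function (osc) { osc.disconnect(); });
  }, delayMs);
}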
I'm trying to use the Web Audio API in JavaScript for the first time. For a personal project I'm trying to control the volume, but I'm having some difficulties.
I'm using this git project: https://github.com/kelvinau/circular-audio-wave
In this project I added the following function, which I call from play():
changeVolume() {
const volume = this.context.createGain();
volume.gain.value = 0.1;
volume.connect(this.context.destination)
this.sourceNode.connect(volume)
}
When I set the gain to 0, it doesn't mute the sound. But when I set it to 3, it works and the sound is louder.
Do you know why I can increase the volume but can't lower it?
From what you are describing, it sounds as if there is still a direct connection from the sourceNode to the destination of the context. Both paths get summed at the destination, so the unity-gain direct connection keeps playing at full volume regardless of your gain node (and with the gain at 3, the sum is noticeably louder). It should work if you remove this line from the original example:
this.sourceNode.connect(this.context.destination);
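If you cannot easily edit the project's play() code, here is a sketch of the same fix done at runtime instead, breaking the direct connection before inserting the gain node (this assumes this.sourceNode and this.context as in your changeVolume(); calling disconnect() with a specific destination is standard Web Audio API):

changeVolume() {
  // Break the original direct route to the speakers.
  this.sourceNode.disconnect(this.context.destination);
  // Route the source exclusively through a gain node.
  const volume = this.context.createGain();
  volume.gain.value = 0.1; // 0 = silent, 1 = unchanged, >1 = amplified
  this.sourceNode.connect(volume);
  volume.connect(this.context.destination);
}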
I need some help understanding the best practice for creating a PIXI.extras.AnimatedSprite from spritesheet(s). I am currently loading 3 medium-sized spritesheets for one animation, created by TexturePacker; I collect all the frames and then play. However, the first time the animation plays it is very unsmooth and almost jumps immediately to the end; from then on it plays really smoothly. I have read a bit and can see two possible causes. 1) The lag might be caused by the time taken to upload the textures to the GPU. There is a PIXI prepare plugin (renderer.plugins.prepare.upload) which should enable me to upload them before playing and possibly smooth out the initial loop. 2) Building an AnimatedSprite from more than one texture/image is not ideal and could be the cause.
Question 1: Should I use the PIXI Prepare plugin, will this help, and if so how do I actually use it. Documentation for it is incredibly limited.
Question 2: Is having frames across multiple textures a bad idea, could it be the cause & why?
A summarised example of what I am doing:
function loadSpriteSheet(callback){
let loader = new PIXI.loaders.Loader()
loader.add('http://mysite.fake/sprite1.json')
loader.add('http://mysite.fake/sprite2.json')
loader.add('http://mysite.fake/sprite3.json')
loader.once('complete', callback)
loader.load()
}
loadSpriteSheet(function(resource){
// helper function to get all the frames from multiple textures
let frameArray = getFrameFromResource(resource)
let animSprite = new PIXI.extras.AnimatedSprite(frameArray)
stage.addChild(animSprite)
animSprite.play()
})
Question 1
So I have found a solution, possibly not the solution, but it works well for me. The prepare plugin was the right idea but never worked for me. WebGL needs the entire texture(s) uploaded, not the individual frames. The way textures are uploaded to the GPU is via renderer.bindTexture(texture). When the PIXI loader receives a sprite atlas URL, e.g. my_sprites.json, it automatically downloads the image file and names it my_sprites.json_image in the loader's resources. So you need to grab that, make a texture, and upload it to the GPU. Here is the updated code:
function loadSpriteSheet(callback){
let loader = new PIXI.loaders.Loader()
loader.add('http://mysite.fake/sprite1.json')
loader.add('http://mysite.fake/sprite2.json')
loader.add('http://mysite.fake/sprite3.json')
loader.once('complete', callback)
loader.load()
}
function uploadToGPU(resourceName){
// the loader stores the atlas image under '<atlas name>_image'
resourceName = resourceName + '_image'
let texture = PIXI.Texture.fromImage(resourceName) // fromImage is a static factory, no 'new'
this.renderer.bindTexture(texture)
}
loadSpriteSheet(function(resource){
uploadToGPU('http://mysite.fake/sprite1.json')
uploadToGPU('http://mysite.fake/sprite2.json')
uploadToGPU('http://mysite.fake/sprite3.json')
// helper function to get all the frames from multiple textures
let frameArray = getFrameFromResource(resource)
let animSprite = new PIXI.extras.AnimatedSprite(frameArray)
this.stage.addChild(animSprite)
animSprite.play()
})
Question 2
I never really discovered an answer, but the solution to Question 1 has made my animations perfectly smooth, so in my case I see no performance issues.
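For anyone who still wants to try the prepare plugin, here is a rough sketch of how it is typically invoked (assuming PIXI v4, where the plugin is exposed as renderer.plugins.prepare):

// Upload everything the sprite references to the GPU,
// then start playing once the upload callback fires.
renderer.plugins.prepare.upload(animSprite, function () {
  animSprite.play();
});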
I was building an audio program and hit a stumbling block on the .createMediaElementSource method. I was able to solve the problem, but I do not quite know why the solution works.
In my HTML, I created an audio player: <audio id="myAudio"><source src="music.mp3"></audio>
Now in my JS:
context = new AudioContext();
audio = document.getElementById('myAudio');
source = context.createMediaElementSource(audio);
audio.play();
doesn't work. The audio element loads, but it doesn't play the song, and there is no sound.
However! This JS code works:
context = ...; //same as above
audio...;
source = context.createMediaElementSource(audio[0]);
audio.play();
All I changed was adding [0] to audio, and the program suddenly works again. Since .getElementById doesn't return an array, I don't know why referring to audio as an array works, but just referring to audio does not.
A few months late, but in case others stumble upon this and want an answer:
This behaviour is described in the Web Audio API spec:
The createMediaElementSource method
Creates a MediaElementAudioSourceNode given an HTMLMediaElement. As a consequence of calling this method, audio playback from the HTMLMediaElement will be re-routed into the processing graph of the AudioContext.
Note the last sentence: since the output from the audio element is now routed into the newly created MediaElementAudioSourceNode instance (instead of the original destination, usually your speakers), you need to route the output of the instance back to the original destination:
var audio = document.getElementById('myAudio');
var ctx = new AudioContext();
var src = ctx.createMediaElementSource(audio);
src.connect(ctx.destination); // connect the output of the source to your speakers
audio.play();
The reason it worked when you added [0] is that document.getElementById doesn't return an array, or an element with a defined key of "0", so audio[0] is undefined. As such, you might as well have written ctx.createMediaElementSource(undefined), which doesn't re-route the audio from the #myAudio element.
I've built an HTML5 video player that I load into a canvas to manipulate and then draw back onto a second canvas to display. The video starts out quite slow, and the frame rate only gets worse each time it is played. All I am currently manipulating is the color value when the video is paused, but I will eventually be doing real-time manipulation throughout videos that will be posted in the future.
I used the below tutorial to learn this trick https://www.youtube.com/watch?v=zjQzP3mOXdc
Here is the relevant code, but there may be interference coming from elsewhere, so feel free to check the source code at the link at the bottom.
var v = document.getElementById('video');
var color = "#DA7AC1";
var processes={
timerCallback:function() {
if (this.v2.paused || this.v2.ended) {
return;
}
this.ctxIn.drawImage(this.v2,0,0,this.width,this.height);
this.pixelScan();
var self=this;
setTimeout(function() {
self.timerCallback();
}, 0);
},
doLoad:function(){
this.v2=document.getElementById("video");
this.cIn=document.getElementById("cIn");
this.ctxIn=this.cIn.getContext("2d");
this.cOut=document.getElementById("cOut");
this.ctxOut=this.cOut.getContext("2d");
var self=this;
this.v2.addEventListener("playing", function() {
self.width=self.v2.videoWidth;
self.height=self.v2.videoHeight;
cIn.width=self.v2.videoWidth;
cIn.height=self.v2.videoHeight;
cOut.width=self.v2.videoWidth;
cOut.height=self.v2.videoHeight;
self.timerCallback();
}, false);
},
pixelScan: function() {
var frame = this.ctxIn.getImageData(0,0,this.width,this.height);
for(var i=0; i<frame.data.length;i+=4) {
var grayscale=frame.data[i]*.3+frame.data[i+1]*.59+frame.data[i+2]*.11;
frame.data[i]=grayscale;
frame.data[i+1]=grayscale;
frame.data[i+2]=grayscale;
}
this.ctxOut.putImageData(frame,0,0);
return;
}
}
http://coreytegeler.com/ethan/
Any and all help would be greatly appreciated! Thanks!
Reason 1
Try to adjust your timer avoiding 0 as timeout value:
setTimeout(function() {
self.timerCallback();
}, 34);
34ms is plenty, as video frame rate is typically never more than 30 FPS (NTSC) or 25 FPS (PAL), i.e. 1000 / 30. If you use 0 you risk stacking up your calls, which means the browser will be busy trying to empty the event queue.
If you use anything lower than 33-34ms, you end up processing the same frame twice or more, which of course is unnecessary (your video is actually 29.97 FPS/NTSC, so you might want to keep 34ms).
Reason 2
The video resolution is also full HD (1920x1080), which is a bit too much for canvas and JS to process in real time on a typical consumer computer. Try to reduce the video size so a normally spec'ed computer will be able to process the data.
Reason 3 (in part)
You don't need two on-screen canvases or even an on-screen video. Try to create these tags dynamically without inserting them into the DOM. Use a single canvas on-screen and draw the result to that (you can putImageData from one canvas to another).
Reason 4 (in part)
Ideally, replace setTimeout with a requestAnimationFrame approach, as this improves synchronization and efficiency considerably. You can implement a throttle to reduce the effective FPS to, for example, 30, since you don't need to process each frame twice (ref. the 30 FPS video frame rate).
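Here is a sketch of that requestAnimationFrame loop with a simple throttle, reusing the processes object from the question (the 30 FPS target matches the video's frame rate):

var lastTime = 0;
var frameInterval = 1000 / 30; // process at most ~30 frames per second

function loop(now) {
  requestAnimationFrame(loop); // keep the loop alive
  if (processes.v2.paused || processes.v2.ended) return;
  if (now - lastTime < frameInterval) return; // skip duplicate frames
  lastTime = now;
  processes.ctxIn.drawImage(processes.v2, 0, 0, processes.width, processes.height);
  processes.pixelScan();
}
requestAnimationFrame(loop);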
Update
To create these elements dynamically (ref reason 3) you can do something like this:
var canvas = document.createElement('canvas'),
video = document.createElement('video'),
ctx = canvas.getContext('2d');
video.preload = 'auto';
video.addEventListener('canplay', start, false);
if (video.canPlayType('video/mp4')) {
video.src = 'videoUrl.mp4';
} else if ...etc.
Then when the video has loaded enough data (on metadata or canplay) you set the off-screen (and on-screen) canvas element to the size of the video:
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
Then, while the video is playing, process its buffer and copy the result to the on-screen canvas you defined before.
You don't have to have an off-screen canvas - I merely mention this because, IIRC, your original code used an in canvas and an out canvas. You can simply use a single on-screen canvas together with the off-screen video: draw the video frame to the canvas, process it, and put the processed data back. That should work fine in this case too.
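Putting the pieces together, here is a minimal sketch of that single-canvas pipeline (video, canvas and ctx are the dynamically created elements from above; start is the 'canplay' handler already wired up, and the comment stands in for whatever pixel processing you need):

function start() {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  document.body.appendChild(canvas); // the single on-screen canvas
  video.play();
  requestAnimationFrame(render);
}

function render() {
  if (!video.paused && !video.ended) {
    ctx.drawImage(video, 0, 0); // off-screen video onto the canvas
    var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    // ...manipulate frame.data here, as in pixelScan()...
    ctx.putImageData(frame, 0, 0);
  }
  requestAnimationFrame(render);
}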
I ran a profile in chrome and it points to line 46 as taking up the most CPU.
setTimeout(function() {
self.timerCallback();
}, 0);
Perhaps increasing the timeout will stop it from lagging.
I had the same issues and tried a number of fixes. I was using Premiere Elements, which didn't export to MP4, and HandBrake to convert the format. I also tried FFmpeg to do the conversion, but neither worked.
What I did was switch to Kdenlive as my video editor; it exported directly to MP4, and that video worked perfectly.
So, if you have this slow render issue, it is probably an issue with the video encoding. The easiest fix is to get a high-quality video editor like Premiere Pro, Final Cut, or Kdenlive. Kdenlive is free, but it has a huge learning curve and poor public documentation.
I'm creating an HTML5 app for testing, and I'm working with the Web Audio API. For generating sound on a keyboard, I'm doing something like this:
keyboard.keyDown(function (note, frequency) {
var oscillator = context.createOscillator(),
gainNode = context.createGain(); // was createGainNode() in older implementations
oscillator.type = 'sawtooth'; // was the numeric constant 2 in the old API
oscillator.frequency.value = frequency;
gainNode.gain.value = 0.3;
oscillator.connect(gainNode);
gainNode.connect(context.destination);
oscillator.start(0); // was noteOn(0) in older implementations
nodes.push(oscillator);
});
Now my question is (I tried to find examples on Google, but with no success): what other parameters can be used, besides the oscillator, to get a sound like a piano or some electronic instrument, and how do I pass them?
I'm assuming you are fairly new to synthesis. Before trying synthesis algorithms in code, I'd recommend playing with some of the software synthesizers that are available - VST or otherwise. This will give you a handle on the kind of parameters you want to be introducing into your algorithm. http://www.soundonsound.com/sos/allsynthsecrets.htm is an index for a series of really good synthesis tutorials. (Start at the bottom - part 1!)
Once you are ready to start experimenting in code, a great place to start would be to introduce an envelope to change the volume or pitch of the sound over time (changing a parameter over time like this is called 'modulation'). This video may be of interest: http://www.youtube.com/watch?v=A6pp6OMU5r8
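For example, here is a sketch of a simple volume envelope driven by gain automation (the attack and decay values are arbitrary; the AudioParam ramp methods are standard Web Audio API):

var ctx = new AudioContext();
var osc = ctx.createOscillator();
var amp = ctx.createGain();

osc.connect(amp);
amp.connect(ctx.destination);

var now = ctx.currentTime;
amp.gain.setValueAtTime(0, now);
amp.gain.linearRampToValueAtTime(0.8, now + 0.02); // fast attack
amp.gain.exponentialRampToValueAtTime(0.001, now + 1.5); // long decay, roughly piano-like

osc.start(now);
osc.stop(now + 1.5);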
Bear in mind that almost all acoustic instruments are difficult to convincingly synthesize algorithmically, and by far the easiest way to get close to a piano is to use samples of real piano notes.
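If you go the sample route, here is a minimal sketch of fetching and playing one recorded note (the URL is a placeholder, and ctx is an existing AudioContext):

fetch('piano-c4.mp3') // placeholder URL for a recorded piano sample
  .then(function (response) { return response.arrayBuffer(); })
  .then(function (data) { return ctx.decodeAudioData(data); })
  .then(function (buffer) {
    var src = ctx.createBufferSource();
    src.buffer = buffer;
    src.connect(ctx.destination);
    src.start();
  });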