So, I just found out that you can record sound using JavaScript. That's just awesome!
I instantly created a new project to do something of my own. However, as soon as I opened the source code of the example script, I found out that there are no explanatory comments at all.
I started googling and found a long and interesting article about AudioContext that doesn't seem to be aware of recording at all (it only mentions remixing sounds), and an MDN article that contains all the information while successfully hiding the one piece I'm after.
I'm also aware of existing frameworks that deal with this (somehow, maybe). But if I just wanted a sound recorder I'd download one; I'm really curious how the thing works.
Not only am I unfamiliar with the coding part, I'm also curious how the whole thing works: do I get intensity at a specific point in time, much like with an oscilloscope?
Or can I already get spectral analysis for the sample?
So, just to avoid any mistakes: please, could anyone explain the simplest and most straightforward way to get the input data using the above-mentioned API, and ideally provide code with explanatory comments?
If you just want to use mic input as a source for the Web Audio API, the following code worked for me. It is based on: https://gist.github.com/jarlg/250decbbc50ce091f79e
navigator.getUserMedia = navigator.getUserMedia
                      || navigator.webkitGetUserMedia
                      || navigator.mozGetUserMedia; // older browsers use vendor prefixes

// Ask for audio only; on success the callback receives the MediaStream.
navigator.getUserMedia({ video: false, audio: true }, callback, console.log);

function callback(stream) {
  ctx = new AudioContext();

  // Wrap the microphone stream in a source node.
  mic = ctx.createMediaStreamSource(stream);

  // The analyser exposes both time-domain (waveform) and frequency (FFT) data.
  spe = ctx.createAnalyser();
  spe.fftSize = 256;
  bufferLength = spe.frequencyBinCount;  // fftSize / 2 bins
  dataArray = new Uint8Array(bufferLength);
  spe.getByteTimeDomainData(dataArray);  // one-off read; normally done every frame in draw()

  mic.connect(spe);
  spe.connect(ctx.destination);          // optional: also play the mic back through the speakers
  draw();                                // your render loop (a sketch follows below)
}
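To answer the oscilloscope/spectrum question directly: the AnalyserNode gives you both. getByteTimeDomainData() fills your array with raw waveform samples (signal level over time, like an oscilloscope trace), while getByteFrequencyData() fills it with an FFT magnitude spectrum. Here is a minimal sketch of the draw() loop referenced above, assuming the spe, bufferLength and dataArray variables from the callback; it just logs values instead of drawing, to keep the example short.

function draw() {
  // Waveform: one byte per sample, centred on 128 (silence), like an oscilloscope.
  spe.getByteTimeDomainData(dataArray);
  console.log('time-domain sample 0:', dataArray[0]);

  // Spectrum: one byte per frequency bin (128 bins for fftSize = 256).
  var freqData = new Uint8Array(bufferLength);
  spe.getByteFrequencyData(freqData);
  console.log('energy in lowest bin:', freqData[0]);

  requestAnimationFrame(draw); // poll again on the next animation frame
}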
Suppose you use the Web Audio API to play a pure tone:
ctx = new AudioContext();
src = ctx.createOscillator();
src.frequency.value = 261.63; // play middle C
src.connect(ctx.destination);
src.start();
But, later on you decide you want to stop the sound:
src.stop();
From this point on, src is now completely useless; if you try to start it again, you get:
src.start()
VM564:1 Uncaught DOMException: Failed to execute 'start' on 'AudioScheduledSourceNode': cannot call start more than once.
at <anonymous>:1:5
If you were making, say, a little online keyboard, you're constantly turning notes on and off. It seems really clunky to remove the old object from the audio node graph, create a brand-new object, connect() it into the graph, and then discard it later, when it would be simpler to just turn it on and off as needed.
Is there some important reason the Web Audio API does things like this? Or is there some cleaner way of restarting an audio source?
Use connect() and disconnect(). You can then change the values of any AudioNode to change the sound.
(The button is there because AudioContext requires a user action before it will run in a snippet.)
play = () => {
  // Connect on hover, disconnect on mouse-out; the oscillator keeps running
  // the whole time, so the sound can be "restarted" as often as you like.
  d.addEventListener('mouseover', () => src.connect(ctx.destination));
  d.addEventListener('mouseout', () => src.disconnect(ctx.destination));

  ctx = new AudioContext();
  src = ctx.createOscillator();
  src.frequency.value = 261.63; // play middle C
  src.start();
}
div {
  height: 32px;
  width: 32px;
  background-color: red;
}

div:hover {
  background-color: green;
}
<button onclick='play();this.disabled=true;'>play</button>
<div id='d'></div>
This is exactly how the Web Audio API works. Sound generator nodes like oscillator nodes and audio buffer source nodes are intended to be used once. Every time you want to play your oscillator, you have to create it and set it up, just like you said. I know it seems like a hassle, but you can abstract it into a play() method that handles those details for you, so you don't have to think about it every time you play an oscillator. Also, don't worry about the performance implications of creating so many nodes; the Web Audio API is intended to be used this way.
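For example, a throwaway helper along these lines (the names here are just illustrative) hides the create-and-discard dance behind a single call:

// Illustrative sketch: create a fresh oscillator per note, as the API intends.
const ctx = new AudioContext();

function playNote(frequency, duration) {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = frequency;
  osc.connect(gain);
  gain.connect(ctx.destination);
  osc.start();
  // Schedule the stop; finished nodes are simply garbage-collected.
  osc.stop(ctx.currentTime + (duration || 0.5));
}

playNote(261.63); // middle C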
If you just want to make music on the internet, and you're not as interested in learning the ins and outs of the web audio api, you might be interested in using a library I wrote that makes things like this easier: https://github.com/rserota/wad
I am working on a 12-voice polyphonic synthesizer with 2 oscillators per voice.
I now never stop the oscillators; I disconnect them instead. You can do that with setTimeout. For the delay, take the longest release phase of the two amp envelopes for this set of oscillators, subtract AudioContext.currentTime, and multiply by 1000 (setTimeout works in milliseconds, Web Audio works in seconds).
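A rough sketch of that idea, with illustrative names only (ctx is the AudioContext, oscillators the oscillators of one voice, and releaseEnd the absolute time, in AudioContext seconds, at which the longest release finishes):

// Rough sketch of the disconnect-after-release idea described above.
// All names here are illustrative, not taken from the actual synth.
function releaseVoice(ctx, oscillators, releaseEnd) {
  var delayMs = (releaseEnd - ctx.currentTime) * 1000; // seconds -> milliseconds
  setTimeout(function () {
    oscillators.forEach(function (osc) {
      osc.disconnect(); // detach from the graph instead of calling stop()
    });
  }, Math.max(0, delayMs));
}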
While it is easy enough to get first paint times from dev tools, is there a way to get the timing from JS?
Yes, this is part of the paint timing API.
You probably want the timing for first-contentful-paint, which you can get using:
const paintTimings = performance.getEntriesByType('paint');
const fmp = paintTimings.find(({ name }) => name === "first-contentful-paint");
console.log(`First contentful paint at ${fmp.startTime}ms`);
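One caveat: getEntriesByType('paint') returns an empty array if the paint hasn't happened by the time your code runs, so for robustness you may prefer a buffered PerformanceObserver. A small sketch of that variant:

// Alternative sketch using a buffered observer, so the entry is delivered
// even if first-contentful-paint fired before this script ran.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntriesByName('first-contentful-paint')) {
    console.log(`First contentful paint at ${entry.startTime}ms`);
  }
});
observer.observe({ type: 'paint', buffered: true });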
Recently, new browser APIs like PerformanceObserver and PerformancePaintTiming have made it easier to retrieve First Contentful Paint (FCP) from JavaScript.
I made a JavaScript library called Perfume.js which does this in a few lines of code:
const perfume = new Perfume({
firstContentfulPaint: true
});
// ⚡️ Perfume.js: First Contentful Paint 2029.00 ms
I realize you asked about First Paint (FP) but would suggest using First Contentful Paint (FCP) instead.
The primary difference between the two metrics is that FP marks the point when the browser renders anything that is visually different from what was on the screen prior to navigation. By contrast, FCP is the point when the browser renders the first bit of content from the DOM, which may be text, an image, SVG, or even a canvas element.
if (typeof PerformanceObserver !== 'undefined') { // check browser support
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(entry.entryType); // "paint"
      console.log(entry.startTime); // when the paint happened, in ms since navigation start
      console.log(entry.duration);  // always 0 for paint entries
    }
  });
  observer.observe({ entryTypes: ['paint'] });
}
This will help you; just paste this code at the start of your JS app, before everything else.
As stated in the title, I have been running into an issue with the HTMLVideoElement when connected to the Web Audio API in Firefox.
The following sample gives a minimal example reproducing the issue:
var video = document.getElementById('video');
var ctx = new AudioContext();

// Route the <video> element's audio through the Web Audio graph.
var sourceNode = ctx.createMediaElementSource(video);
sourceNode.connect(ctx.destination);

// Once the element is connected, this setter no longer has any effect in Firefox.
video.playbackRate = 3;
video.play();
As soon as the video element is connected to the audio pipeline, I cannot get the playbackRate setter to work anymore.
I've been looking for a way to set this value somewhere on the AudioContext or the MediaElementAudioSourceNode objects, but those classes do not seem to handle playback rate on their own.
Note that this sample works fine in Chrome, and I don't really see what the problem is here.
Thanks
This has already been reported on Firefox's bug tracker: https://bugzilla.mozilla.org/show_bug.cgi?id=966247
I was toying with socket.io, ThreeJS, JavaScript, and NodeJS to create a simple client/server using ThreeJS's graphics. I wasn't sure if all of these frameworks would even work together, but I decided to give it a shot, since I've seen similar examples online before, even though I can't find a simple one to dissect or experiment with. It's mainly to experiment with, but I also wanted to make a small concept game as proof of what I've learned so far.
I posted my code here: https://gist.github.com/netsider/63c414d83bd806b4e7eb
Sorry if it's a little untidy, but I did my best to make it as readable as possible.
Basically, the server-side NodeJS script seems to run fine (run with "node server-alpha.js"), and the client script (client-alpha.html, which you can just open in a browser) connects to the server and displays a list of connected users. However, my intention was for each user to be able to move his/her own cube around, and right now each cube only gets added to the screen (rather than being added, removed, and added again to give the illusion of movement). If you run both pieces of code, connect one or two users, and press the arrow keys a few times for each, you'll see what I'm talking about.
Can anybody help me with this? I tried several different ways to remove the cube (and remembered to call render() after each)... but nothing I tried seemed to work. It always resulted in the cubes just being added to the screen and never removed.
I added comments in the code to make things a little easier, as I know this is quite a bit of code to go through (if it's not your own, anyway).
Thanks, any help would be greatly appreciated, as I'm really stuck trying to make the cubes move.
Also, I'm having trouble adding the fly controls (FlyControls.js - it's commented out ATM), so if someone could tell me where I went wrong, I'd appreciate that a lot as well.
OK, so you don't want to keep remaking the cubes; all you need to do is change their position.
Also, in game development it is almost a requirement to use object-oriented design. A good way to go about this would be to make a player object, so:
CPlayerList = new Array(); // an array of player objects

function ClientPlayer()
{
    this.Cube = null;
    this.Name = "unnamed";
    this.Id = 0;

    this.Create = function(name, pos, id)
    {
        this.Name = name;
        this.Id = id;

        var cubeGeometry = new THREE.BoxGeometry(10, 10, 10);
        var cubeMaterial = new THREE.MeshLambertMaterial({color: 'red', transparent: false, opacity: 1.0});

        this.Cube = new THREE.Mesh(cubeGeometry, cubeMaterial);
        this.Cube.position.x = pos.x;
        this.Cube.position.y = pos.y;
        this.Cube.position.z = 20; // don't know why this is 20, remember y axis is up & down in opengl/threejs

        scene.add(this.Cube);
    }

    this.Move = function(vector)
    {
        // Move the existing mesh instead of re-creating it.
        this.Cube.position.set(this.Cube.position.x + vector.x, this.Cube.position.y + vector.y, 20);
    }
}
So on the server you need a ServerPlayer object which holds similar data, and you assign ids on the server before sending them to the clients. When you send a new player to the client, you make a new ClientPlayer, call player.Create(), and then push it to the CPlayerList, like so:
function newCPlayer(data)
{
    var newPly = new ClientPlayer();
    newPly.Create(data.name, data.pos, data.id);
    CPlayerList.push(newPly);
}
Then when you call your movePlayer() function, you can simply loop through your players array
function movePlayer(keyStroke, clientID)
{
    if (keyStroke == 39) // right arrow
    {
        CPlayerList.forEach(function(player, i, a)
        {
            if (player.Id === clientID)
            {
                player.Move(new THREE.Vector3(1, 0, 0));
            }
        });
    }
}
This is just the client code, but this should help you get started, let me know if there's anything you're unclear on.
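For the server side mentioned above, here's a rough sketch assuming socket.io; the event names, the id scheme and the player shape are my own placeholders, not taken from your gist:

// Rough server-side sketch (Node + socket.io). Event names and the player
// shape are illustrative, not taken from the linked gist.
var io = require('socket.io')(3000);
var nextId = 1;
var players = {}; // socket.id -> { id, name, pos }

io.on('connection', function (socket) {
    var player = { id: nextId++, name: 'unnamed', pos: { x: 0, y: 0 } };
    players[socket.id] = player;

    // Tell everyone (including the new client) about the new player.
    io.emit('newPlayer', player);

    socket.on('move', function (keyStroke) {
        // Relay the keystroke with the player's id so clients know whose cube to move.
        io.emit('movePlayer', { keyStroke: keyStroke, id: player.id });
    });

    socket.on('disconnect', function () {
        io.emit('removePlayer', player.id);
        delete players[socket.id];
    });
});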
Also here's an example of a game using a similar design: http://82.199.155.77:3000/ (ctrl+shift+j in chrome to view client sources) and server code: http://pastebin.com/PRPaimG9
I'm creating an HTML5 app for testing, and I'm working with the Web Audio API. For generating sound on a keyboard I'm doing something like this:
keyboard.keyDown(function (note, frequency) {
    var oscillator = context.createOscillator(),
        gainNode = context.createGain(); // createGainNode() is the deprecated name

    oscillator.type = 'sawtooth';        // the old numeric enum value 2 meant sawtooth
    oscillator.frequency.value = frequency;
    gainNode.gain.value = 0.3;

    oscillator.connect(gainNode);
    gainNode.connect(context.destination);

    // noteOn() is the legacy name; current browsers use start().
    if (typeof oscillator.start !== 'undefined') {
        oscillator.start(0);
    } else {
        oscillator.noteOn(0);
    }

    nodes.push(oscillator);
});
Now my question is (I tried to find examples on Google, but with no success): apart from the oscillator, what other parameters can be used to get a sound that sounds like a piano or some electronic instrument, and how do I pass them?
I'm assuming you are fairly new to synthesis. Before trying synthesis algorithms in code, I'd recommend playing with some of the software synthesizers that are available - VST or otherwise. This will give you a handle on the kind of parameters you want to be introducing into your algorithm. http://www.soundonsound.com/sos/allsynthsecrets.htm is an index for a series of really good synthesis tutorials. (Start at the bottom - part 1!)
Once you are ready to start experimenting in code, a great place to start would be to introduce an envelope to change the volume or pitch of the sound over time (changing a parameter over time like this is called 'modulation'). This video may be of interest: http://www.youtube.com/watch?v=A6pp6OMU5r8
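As a concrete starting point, here's a small sketch of a simple attack/release volume envelope applied to a gain node; the function name, timings and levels are arbitrary:

// Minimal sketch of a volume envelope (attack + release) applied to one note.
function playEnvelopedNote(ctx, frequency) {
    var osc = ctx.createOscillator();
    var gain = ctx.createGain();
    var now = ctx.currentTime;

    osc.frequency.value = frequency;
    osc.connect(gain);
    gain.connect(ctx.destination);

    gain.gain.setValueAtTime(0, now);                   // start silent
    gain.gain.linearRampToValueAtTime(0.3, now + 0.02); // 20 ms attack
    gain.gain.linearRampToValueAtTime(0, now + 1.0);    // ~1 s release back to silence

    osc.start(now);
    osc.stop(now + 1.0);
}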
Bear in mind that almost all acoustic instruments are difficult to convincingly synthesize algorithmically, and by far the easiest way to get close to a piano is to use samples of real piano notes.
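If you go the sample route, the basic pattern is to decode an audio file once and play it back through an AudioBufferSourceNode, optionally nudging playbackRate to cover nearby notes. A sketch, reusing the context variable from your snippet ('piano-c4.mp3' is just a placeholder URL, and this assumes a browser where decodeAudioData returns a promise):

// Sketch of sample playback; 'piano-c4.mp3' is a placeholder URL.
var sampleBuffer = null;

fetch('piano-c4.mp3')
    .then(function (response) { return response.arrayBuffer(); })
    .then(function (data) { return context.decodeAudioData(data); })
    .then(function (buffer) { sampleBuffer = buffer; });

function playSample(rate) {
    var source = context.createBufferSource();
    source.buffer = sampleBuffer;
    source.playbackRate.value = rate || 1; // shift pitch by resampling
    source.connect(context.destination);
    source.start(0);
}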