Is it possible to create an AudioBufferSourceNode from another node?
The reason I ask is that I'm trying to create a waveform visualization, not just from a buffer source, but of an entire finished song that I "program" with the web audio API. I can time and program the song perfectly and it sounds just the way I want in the end. I can also create individual visualizations from the source WAV files that I use to create the song.
I used the code from this post to create the visualizations and show them in multiple canvases:
Create a waveform of the full track with Web Audio API
So here's the code snippet that creates the visualizations:
var canvas1 = document.querySelector('.visualizer1');
var canvasCtx1 = canvas1.getContext("2d");
createvis(mybuffer, canvasCtx1);
function createvis(buff, Ctx) {
    var leftChannel = buff.getChannelData(0); // Float32Array describing the left channel
    var lineOpacity = 640 / leftChannel.length;
    Ctx.save();
    Ctx.fillStyle = '#222';
    Ctx.fillRect(0, 0, 640, 100);
    Ctx.strokeStyle = '#121';
    Ctx.globalCompositeOperation = 'lighter';
    Ctx.translate(0, 100 / 2);
    Ctx.globalAlpha = 0.06; // lineOpacity;
    for (var i = 0; i < leftChannel.length; i++) {
        // which vertical line do we land on?
        var x = Math.floor(640 * i / leftChannel.length);
        var y = leftChannel[i] * 100 / 2;
        Ctx.beginPath();
        Ctx.moveTo(x, 0);
        Ctx.lineTo(x + 1, y);
        Ctx.stroke();
    }
    Ctx.restore();
}
The results ended up being pretty cool. I end up with an image for each source buffer that looks something like this:
http://www.bklorraine.com/waveform.png
However, as I load my various source buffers into createBufferSource nodes, I chain them together and add effects, and it would be nice for the waveforms I generate here to reflect those effects. It would also be nice if the final context.destination that carries the output of everything (i.e. the finished song) could have a visualization as well.
I would also like to eventually change my "program" so that it doesn't just assemble a song from source WAV files, but also combines them with nodes that generate a signal from scratch; obviously, those particular sounds wouldn't have any source buffer to create a visualization from.
Does anyone know if this is possible using the JavaScript Web Audio API?
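For what it's worth, here is a rough, untested sketch of the direction I'm imagining: building the same node graph inside an OfflineAudioContext and passing the rendered AudioBuffer to the createvis() function above (songLengthInSeconds is just a placeholder for the real duration):
// Rough sketch only: recreate the song's graph in an offline context,
// render it, and visualize the resulting AudioBuffer with createvis().
var offline = new OfflineAudioContext(2, 44100 * songLengthInSeconds, 44100);
// ...recreate the buffer sources / oscillators / effect chain here,
// connecting everything to offline.destination instead of context.destination...
offline.startRendering().then(function (renderedBuffer) {
    createvis(renderedBuffer, canvasCtx1); // same drawing code as for the source WAVs
});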
I'm working with a big company, and part of their flow is scanning a QR code to register some data. The problem is, in order to test this, I need to generate a QR code from the data, photograph it on my phone, and scan it in through my laptop's camera.
There are NPM modules for creating QR codes from data so that's okay, but I was wondering if it's somehow possible to override getUserMedia to return a stream of bytes that is just a QR code? I was thinking of maybe encapsulating all this into one nice chrome extension, but from looking around online, I'm not sure how I'd 'override' the camera input and replace it with a stream of QR code bytes instead.
Thanks
The HTMLCanvasElement has a captureStream() method that does produce a MediaStream with a VideoTrack similar to what getUserMedia({video: true}) produces.
This is a convenient way to test various things with a video stream, without needing a human in the loop:
const width = 1280;
const height = 720;
const canvas = Object.assign(document.createElement("canvas"), { width, height });
const ctx = canvas.getContext("2d");
// you'd do the drawing you wish; here I prepare some noise
const img = new ImageData(width, height);
const data = new Uint32Array(img.data.buffer);
const anim = () => {
  for (let i = 0; i < data.length; i++) {
    data[i] = 0xFF000000 + Math.random() * 0xFFFFFF;
  }
  ctx.putImageData(img, 0, 0);
  requestAnimationFrame(anim);
};
requestAnimationFrame(anim);
// extract the MediaStream from the canvas
const stream = canvas.captureStream();
// use it in your test (here I'll just display it in the <video>)
document.querySelector("video").srcObject = stream;
video { height: 100vh }
<video controls autoplay></video>
But in your case, you need to separate the concerns.
The QR code detection tests should be done on their own, and these can certainly use still images instead of a MediaStream.
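For the camera-override part specifically, here is a minimal sketch (not part of the snippet above; how the QR code gets drawn onto the canvas is left to whichever library you pick) that wraps navigator.mediaDevices.getUserMedia so that video requests receive the canvas stream instead of the real camera:
// Sketch: serve the canvas-backed stream in place of the real camera.
const fakeStream = canvas.captureStream();
const realGetUserMedia = navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);
navigator.mediaDevices.getUserMedia = async (constraints) => {
  if (constraints && constraints.video) {
    return fakeStream;                  // hand back the canvas stream for video requests
  }
  return realGetUserMedia(constraints); // anything else still reaches the real devices
};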
I have created Web Audio API biquad filters (lowpass, highpass, etc.) using JavaScript. The application works (I think...) well; it draws on the canvas without errors, so I'm guessing it does. Anyway, I'm not a pro at JavaScript, far from it. I showed someone a small snippet of my code and they said it was very messy and that I'm not building my audio graph properly, for example not connecting all of the nodes from start to finish.
Now I know that the source connects to the gain, the gain connects to the filter, and the filter connects to the destination. I tried to look at it but I can't figure out what's wrong or how to fix it.
JavaScript
// Play the sound.
function playSound(buffer) {
    aSoundSource = audioContext.createBufferSource(); // creates a sound source.
    aSoundSource.buffer = buffer;                     // tell the source which sound to play.
    aSoundSource.connect(analyser);                   // Connect the source to the analyser.
    analyser.connect(audioContext.destination);       // Connect the analyser to the context's destination (the speakers).
    aSoundSource.start(0);                            // play the source now.

    //Create Filter
    var filter = audioContext.createBiquadFilter();

    //Create the audio graph
    aSoundSource.connect(filter);

    //Set the gain node
    gainNode = audioContext.createGain();
    aSoundSource.connect(gainNode); //Connect the source to the gain node
    gainNode.connect(audioContext.destination);

    //Set the current volume
    var volume = document.getElementById('volume').value;
    gainNode.gain.value = volume;

    //Create and specify parameters for Low-Pass Filter
    filter.type = "lowpass"; //Low pass filter
    filter.frequency.value = 440;
    //End Filter

    //Connect source to destination (speakers)
    filter.connect(audioContext.destination);

    //Set the playing flag
    playing = true;

    //Clear the spectrogram canvas
    var canvas = document.getElementById("canvas2");
    var context = canvas.getContext("2d");
    context.fillStyle = "rgb(255,255,255)";
    context.fillRect(0, 0, spectrogramCanvasWidth, spectrogramCanvasHeight);

    // Start visualizer.
    requestAnimFrame(drawVisualisation);
}
Because of this, my volume bar thingy has stopped working. I also tried doing "Highpass filter" but it's displaying the same thing. I'm confused and have no one else to ask. By the way, the person I asked didn't help but just said it's messy...
Appreciate all of the help guys and thank you!
So, there's a lot of context missing because of how you posted this - e.g. you didn't include your drawVisualisation() code, and you don't explain exactly what you mean by your "volume bar thingy has stopped working".
My guess is that you have a graph that connects your source node to the output (audioContext.destination) three times in parallel: through the analyser (which is a pass-through, and is connected to the output), through the filter, AND through the gain node. Your analyser in this case would show only the unfiltered signal (you won't see any effect from the filter, because that's a parallel signal path), and the actual output is the sum of three chains from the source node (one through the filter, one through the analyser, one through the gain node). That might have some odd phasing effects, but it will also roughly triple the volume and quite possibly clip.
Your graph looks like this:
source → destination
↳ filter → destination
↳ gain → destination
What you probably want is to connect each of these nodes in series, like this:
source → filter → gain → destination
I think you want something like this:
// Play the sound.
function playSound(buffer) {
    aSoundSource = audioContext.createBufferSource(); // creates a sound source.
    aSoundSource.buffer = buffer;                     // tell the source which sound to play.

    //Create Filter
    var filter = audioContext.createBiquadFilter();

    //Create and specify parameters for Low-Pass Filter
    filter.type = "lowpass"; //Low pass filter
    filter.frequency.value = 440;

    //Create the gain node
    gainNode = audioContext.createGain();

    //Set the current volume
    var volume = document.getElementById('volume').value;
    gainNode.gain.value = volume;

    //Set up the audio graph: source -> filter -> gain -> analyser -> destination
    aSoundSource.connect(filter);
    filter.connect(gainNode);
    gainNode.connect(analyser);
    analyser.connect(audioContext.destination);

    aSoundSource.start(0); // play the source now.

    //Set the playing flag
    playing = true;

    //Clear the spectrogram canvas
    var canvas = document.getElementById("canvas2");
    var context = canvas.getContext("2d");
    context.fillStyle = "rgb(255,255,255)";
    context.fillRect(0, 0, spectrogramCanvasWidth, spectrogramCanvasHeight);

    // Start visualizer.
    requestAnimFrame(drawVisualisation);
}
Does webP have a JavaScript API?
I want to seek to a specific point in an animated webP. I haven't come across any documentation to suggest it does but no harm asking SO.
Note: I'm not interested in the HTML5 video element, webM or other video formats.
Abstract
Does webP have a JavaScript API?
It seems that webP does not really expose an API of its own, but you can work around that on the backend and on the frontend.
The frontend route, which is the one I know, lets you manipulate the image in a way that is not very efficient, but simple.
For example?
✓Pausing an animation
(however I wouldn't recommend doing such a thing):
const is_webp_image = (i) => {
    return /^(?!data:).*\.webp/i.test(i.src);
};

const pause_webp = (i) => {
    var c = document.createElement('canvas');
    var w = c.width = i.width;
    var h = c.height = i.height;
    c.getContext('2d').drawImage(i, 0, 0, w, h);
    try {
        i.src = c.toDataURL("image/webp"); // if possible, retain all css aspects
    } catch (e) { // cross-domain -- mimic the original with all its tag attributes
        for (var j = 0, a; a = i.attributes[j]; j++)
            c.setAttribute(a.name, a.value);
        i.parentNode.replaceChild(c, i);
    }
};

// freeze every webp on the page by swapping it for a canvas snapshot
// (the call has to come after the two const definitions above)
[].slice.apply(document.images).filter(is_webp_image).map(pause_webp);
That pauses a webp (or gif) by replacing it with a static snapshot.
✓Controlling playback:
To control playback, I recommend slicing the webp on the backend, along the lines of this pseudocode, given a webp file variable and some self-made or adopted backend API:
// pseudocode:
var list = [];
for (var itr = 0; itr < len(webp) /* how long */; itr++) {
    list.push( giveWebURI( webp.slice(itr, len(webp)) ) );
    // ^ e.g. http://example.org/versions/75-sec-until-end.webp
}
Then on the frontend JS:
const playFrom = (time) => {
    document.querySelector(/* selector */).src =
        `http://example.org/versions/${time}-sec-until-end.webp`;
};
I would call this no more than an introduction to backend/frontend/file interactions.
Still, you could draw something out of this. Blessings!
I'm trying to run an HTML5 canvas script on a node server. The idea is to have a client page show a canvas with some simple animation/graphics and then send this image data (info about each pixel) to the server.
function draw() {
    var canvas = document.getElementById("canvas");
    var ctx = canvas.getContext("2d");
    var socket = io.connect();

    canvas.addEventListener('mousemove', function (evt) {
        ctx.beginPath();
        ctx.moveTo(0, 0);
        ctx.lineTo(evt.clientX, evt.clientY);
        ctx.stroke();
        pixels = ctx.getImageData(0, 0, 400, 400);
        socket.emit('message', pixels);
    });
}
However, socket.emit causes the script (the animation in the canvas) to run too slowly. The script is supposed to display a line from the origin (top left) to the current mouse position. The lines do appear, but irregularly: a bunch of lines every 2 or 3 seconds. If I comment out the socket.emit, the sketch runs smoothly, which means ctx.getImageData is not what's causing the delay.
Is there a way to send this information to the server without slowing down the script?
The pixel information sent to the server will later be used by a second client to populate its canvas. I'm trying to build something like a set of synchronized sketches: draw on one canvas and see the results on many. I'm really new to developing for the web, so I'm not sure if this can even be done in real time.
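A minimal sketch of one way to lighten the traffic, assuming socket.io on both ends and reusing canvas, ctx and socket from draw() above: emit only the mouse coordinates instead of the full ImageData, and let every client redraw the line locally.
// Sketch: a few bytes per event instead of 400x400 pixels of ImageData.
// The server would need to relay 'draw' events on to the other clients.
canvas.addEventListener('mousemove', function (evt) {
    socket.emit('draw', { x: evt.clientX, y: evt.clientY });
});
socket.on('draw', function (p) {
    ctx.beginPath();
    ctx.moveTo(0, 0);
    ctx.lineTo(p.x, p.y);
    ctx.stroke();
});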
I'm trying to use the web audio API to create an audio stream with the left and right channels generated with different oscillators. The output of the left channel is correct, but the right channel is 0. Based on the spec, I can't see what I'm doing wrong.
Tested in Chrome dev.
Code:
var context = new AudioContext();
var l_osc = context.createOscillator();
l_osc.type = "sine";
l_osc.frequency.value = 100;
var r_osc = context.createOscillator();
r_osc.type = "sawtooth";
r_osc.frequency.value = 100;
// Combine the left and right channels.
var merger = context.createChannelMerger(2);
merger.channelCountMode = "explicit";
merger.channelInterpretation = "discrete";
l_osc.connect(merger, 0, 0);
r_osc.connect(merger, 0, 1);
var dest_stream = context.createMediaStreamDestination();
merger.connect(dest_stream);
// Dump the generated waveform to a MediaStream output.
l_osc.start();
r_osc.start();
var track = dest_stream.stream.getAudioTracks()[0];
var plugin = document.getElementById('plugin');
plugin.postMessage(track);
The channelInterpretation means the merger node will mix the stereo oscillator connections to two channels each - but then because you have an explicit channelCountMode, it's stacking the two-channels-per-connection to get four channels and (because it's explicit) just dropping the top two channels. Unfortunately the second two channels are the two channels from the second input - so it loses all channels from the second connection.
In general, you shouldn't need to mess with the channelCount interpretation stuff, and it will do the right thing for stereo.
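In other words, leaving the merger at its defaults should give the intended stereo output; a minimal sketch, reusing the names from the question:
// Same setup as above, minus the channelCountMode / channelInterpretation overrides.
var merger = context.createChannelMerger(2);
l_osc.connect(merger, 0, 0); // oscillator -> left channel of the merger
r_osc.connect(merger, 0, 1); // oscillator -> right channel of the merger
merger.connect(dest_stream);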