Getting video audio from another domain for visualisation - javascript

On my web page I'm looking to get an audio level visualisation from a video source created on another domain, both of which I control. I am considering using this library, which contains the following example code:
var myMeterElement = document.getElementById('my-peak-meter');
var myAudio = document.getElementById('my-audio');
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var sourceNode = audioCtx.createMediaElementSource(myAudio);
sourceNode.connect(audioCtx.destination);
var meterNode = webAudioPeakMeter.createMeterNode(sourceNode, audioCtx);
webAudioPeakMeter.createMeter(myMeterElement, meterNode, {});
myAudio.addEventListener('play', function() {
  audioCtx.resume();
});
Because the source is from another domain, I cannot access the element directly with getElementById. I understand that postMessage() allows sending data cross-origin, but I'm not sure how to apply it to a scenario like this, or whether it will even work. Any suggestions?
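For reference, the basic postMessage pattern between an embedding page and a cross-origin iframe looks roughly like this (a sketch; the message shape and helper names are illustrative, not from the library). Since postMessage carries structured data rather than the media element itself, one possible split is to compute the meter levels inside the iframe that owns the media element and post only the numbers to the parent:

```javascript
// Build the message the iframe posts on each meter update
// (the message shape here is an assumption for illustration).
function buildLevelMessage(peak) {
  return { type: "peak-level", value: peak };
}

// Inside the iframe (the page that owns the media element):
function postPeakToParent(peak, parentOrigin) {
  window.parent.postMessage(buildLevelMessage(peak), parentOrigin);
}

// In the embedding page: listen for the levels and update the UI.
function listenForPeaks(expectedOrigin, onPeak) {
  window.addEventListener("message", function (event) {
    if (event.origin !== expectedOrigin) return; // always check the origin
    if (event.data && event.data.type === "peak-level") onPeak(event.data.value);
  });
}
```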

Related

Trying to load video through javascript

I want to dynamically load videos from a server to a client using JavaScript DOM manipulation. I tried the code below, but the onload callback never runs, and in the network tab the request's Accept header is "*/*".
function doAjaxVideo(param, lambda) {
  let video = document.createElement("video");
  let sourceElem = document.createElement("source");
  sourceElem.src = param;
  sourceElem.type = "video/webm";
  video.appendChild(sourceElem);
  video.autoplay = "true";
  video.onload = () => lambda();
}
doAjaxVideo("/video.webm", function() { console.log("Ready!") });
Any suggestions on how to tweak this so it works? Or maybe another way of doing it.
Image of network tab in devtools
You can try onloadeddata: https://www.w3schools.com/jsref/event_onloadeddata.asp
You want the handler to fire when the video's data has loaded; the video element itself never fires a load event.
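A sketch of the function from the question reworked to use the loadeddata event (the { once: true } option and the DOM append are my additions; the original never attached the element to the page):

```javascript
// Attach a one-shot 'loadeddata' handler to any EventTarget-like object.
function onLoadedData(target, callback) {
  target.addEventListener("loadeddata", callback, { once: true });
}

// Hypothetical rework of doAjaxVideo from the question above.
function loadVideo(url, onReady) {
  const video = document.createElement("video");
  const source = document.createElement("source");
  source.src = url;
  source.type = "video/webm";
  video.appendChild(source);
  video.autoplay = true; // a real boolean, not the string "true"
  onLoadedData(video, onReady); // fires once the first frame is available
  document.body.appendChild(video); // make the element part of the page
  return video;
}
```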

Updating objectURL in JavaScript

Suppose I am receiving data from a video chat and I need to add the data to the video element in the HTML page.
So here is the code:
var payload = []; // This array keeps updating, since it is getting the data from the network using media stream.
var remoteVideo = document.getElementById('remoteVideo');
var buffer = new Blob([], { type: "video/x-matroska;codecs=avc1,opus" });
var url = URL.createObjectURL(buffer);
remoteVideo.src = url;
Now, I am getting data in the buffer. How do I update the URL I have created, instead of creating a new one each time, to view the video?
I think you might not need to update the URL at all if you use MediaSource. The process goes like this:
1. Create a MediaSource object.
2. Create an object URL from the MediaSource using URL.createObjectURL().
3. Set the video's src to the object URL.
4. Listen for the sourceopen event and, when it fires, create a SourceBuffer instance.
5. Use SourceBuffer.appendBuffer() to add each of your chunks to the video.
But pay close attention to MediaSource limitations and considerations.
EDIT:
I found this answer, which explains the process above more precisely, points out the considerations you should take, and includes an example.
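The steps above can be sketched as follows (the helper names are hypothetical; the chunk queue is my addition to cope with appendBuffer() being asynchronous, and the MIME type must be one your browser reports as supported via MediaSource.isTypeSupported()):

```javascript
// Minimal chunk queue: appendBuffer() is asynchronous, so hold
// incoming chunks while the SourceBuffer is busy and drain later.
function makeChunkQueue(append, isBusy) {
  const pending = [];
  return {
    push(chunk) {
      pending.push(chunk);
      this.drain();
    },
    drain() {
      while (pending.length > 0 && !isBusy()) {
        append(pending.shift());
      }
    },
  };
}

// Browser wiring for the steps listed above.
function attachMediaSource(videoElement, mimeType, onQueueReady) {
  const mediaSource = new MediaSource();               // step 1
  videoElement.src = URL.createObjectURL(mediaSource); // steps 2-3
  mediaSource.addEventListener("sourceopen", () => {   // step 4
    const sourceBuffer = mediaSource.addSourceBuffer(mimeType);
    const queue = makeChunkQueue(
      (chunk) => sourceBuffer.appendBuffer(chunk),     // step 5
      () => sourceBuffer.updating
    );
    sourceBuffer.addEventListener("updateend", () => queue.drain());
    onQueueReady(queue); // feed network chunks in via queue.push(...)
  });
}
```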

Use XMLHttpRequest to load multiple audio files and append them to play in Web Audio API

I have a web page that loads 3 different audio files (each about 1 second long) in a fixed order and then merges them into one single audio buffer (one after another).
To demonstrate what I want to do, this is the sample code snippet:
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioContextApp = new AudioContext();

var buffer1 = audioContextApp.createBufferSource();
var buffer2 = audioContextApp.createBufferSource();
var buffer3 = audioContextApp.createBufferSource();

var request1 = new XMLHttpRequest();
request1.open('GET', URL_FIRST_SOUND, true);
request1.responseType = 'arraybuffer'; // decodeAudioData needs an ArrayBuffer
request1.onload = function() {
  var undecodedAudio = request1.response;
  audioContextApp.decodeAudioData(undecodedAudio, function(buffer) {
    buffer1.buffer = buffer;
  });
};
request1.send();
// Do the same thing with request2 and request3 to load the second and third sounds.
Now I don't know how to properly append those 3 audio buffers into one and let the user play the merged audio.
Well, I figured out a solution myself. When I connect each buffer to audioContext.destination, I can simply specify the time at which the second audio should start playing: the current time plus the duration of the first AudioBuffer.
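That scheduling approach can be sketched like this (scheduleSequential is a hypothetical helper name; it assumes the decoded AudioBuffers are already available):

```javascript
// Schedule decoded AudioBuffers back to back: each source starts at
// the previous start time plus the previous buffer's duration.
function scheduleSequential(ctx, buffers) {
  let t = ctx.currentTime;
  const startTimes = [];
  for (const buffer of buffers) {
    const node = ctx.createBufferSource();
    node.buffer = buffer;
    node.connect(ctx.destination);
    node.start(t); // AudioBufferSourceNode.start() takes a time in seconds
    startTimes.push(t);
    t += buffer.duration;
  }
  return startTimes;
}
```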

Web Audio API: getByteTimeDomainData for left / right channels in two arrays

I'm currently trying to create an audio visualisation using the Web Audio API; specifically, I'm attempting to produce Lissajous figures from a given audio source.
I came across this post, but I'm missing some preconditions. How can I get the time domain data for the left / right channels? Currently it seems I'm only getting the merged data.
Any help or hint would be much appreciated.
$(document).ready(function () {
  var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
  var audioElement = document.getElementById('audioElement');
  var audioSrc = audioCtx.createMediaElementSource(audioElement);
  var analyser = audioCtx.createAnalyser();

  // Bind analyser to media element source.
  audioSrc.connect(analyser);
  audioSrc.connect(audioCtx.destination);

  //var timeDomainData = new Uint8Array(analyser.frequencyBinCount);
  var timeDomainData = new Uint8Array(200);

  // Loop and update the time domain data array.
  function renderChart() {
    requestAnimationFrame(renderChart);
    // Copy time domain data into the timeDomainData array.
    analyser.getByteTimeDomainData(timeDomainData);
    // Debugging: print to console.
    console.log(timeDomainData);
  }

  // Run the loop.
  renderChart();
});
The observation is correct, the waveform is the down-mixed result. From the current spec (my emphasis):
Copies the current down-mixed time-domain (waveform) data into the
passed unsigned byte array. [...]
To get around this you can use a channel splitter (createChannelSplitter()) and route each channel to its own analyser node.
For more details on createChannelSplitter() see this link.
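A sketch of that approach (setupStereoAnalysers is a hypothetical helper; the Lissajous mapping function is my addition):

```javascript
// Map matched byte samples (0..255, with 128 = silence) to normalized
// (x, y) pairs in [-1, 1] for plotting a Lissajous figure.
function toLissajousPoints(left, right) {
  const n = Math.min(left.length, right.length);
  const points = [];
  for (let i = 0; i < n; i++) {
    points.push([(left[i] - 128) / 128, (right[i] - 128) / 128]);
  }
  return points;
}

// Browser wiring: split the stereo source and give each channel
// its own analyser node.
function setupStereoAnalysers(audioCtx, mediaElement) {
  const source = audioCtx.createMediaElementSource(mediaElement);
  const splitter = audioCtx.createChannelSplitter(2);
  const analyserL = audioCtx.createAnalyser();
  const analyserR = audioCtx.createAnalyser();
  source.connect(splitter);
  splitter.connect(analyserL, 0); // splitter output 0 = left channel
  splitter.connect(analyserR, 1); // splitter output 1 = right channel
  source.connect(audioCtx.destination); // keep audible playback
  return { analyserL, analyserR };
}
```

In the render loop, call getByteTimeDomainData() on each analyser into its own Uint8Array, then feed both arrays to toLissajousPoints() to get (x, y) coordinates.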

Connecting MediaElementAudioSourceNode to AudioContext.destination doesn't work

Here's a fiddle to show the problem. Basically, whenever the createMediaElementSource method of an AudioContext object is called, the output of the audio element is re-routed into the returned MediaElementAudioSourceNode. This is all fine and according to spec; however, when I then try to reconnect the output to the speakers (using the destination of the AudioContext), nothing happens.
Am I missing something obvious here? Maybe it has to do with cross-domain audio files? I just couldn't find any information on the topic on Google, and didn't see a note of it in the specs.
Code from the fiddle is:
var a = new Audio();
a.src = "http://webaudioapi.com/samples/audio-tag/chrono.mp3";
a.controls = true;
a.loop = true;
a.autoplay = true;
document.body.appendChild(a);

var ctx = new AudioContext();

// PROBLEM HERE
var shouldBreak = true;
var src;
if (shouldBreak) {
  // This one stops playback.
  // It should redirect output from the audio element to the MediaElementAudioSourceNode,
  // but src.connect(ctx.destination) does not fix it.
  src = ctx.createMediaElementSource(a);
  src.connect(ctx.destination);
}
Yes, the Web Audio API requires that the audio adhere to the same-origin policy. If the audio you're attempting to play is not from the same origin, then the appropriate Access-Control headers are required. The resource in your example does not have the required headers.
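If you control the server, a sketch of the client-side half of the fix looks like this (configureCorsAudio is a hypothetical helper; the server must also respond with a matching Access-Control-Allow-Origin header for this to work):

```javascript
// Opt the element in to CORS so the browser will expose the samples
// to Web Audio once the server sends the matching header.
function configureCorsAudio(audioEl, url) {
  audioEl.crossOrigin = "anonymous"; // request the file with CORS
  audioEl.src = url;
  audioEl.controls = true;
  audioEl.loop = true;
  audioEl.autoplay = true;
  return audioEl;
}

// In the browser:
// document.body.appendChild(configureCorsAudio(new Audio(), mp3Url));
```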
