web worker not communicating with react component - javascript

I use audio-recorder-polyfill in my React project to make audio recording possible in Safari. The recording itself appears to start, but no audio data ever becomes available: the "dataavailable" event never fires, and no data seems to be "compiled" after stopping the recording either.
recordFunc() {
  navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
    recorder = new MediaRecorder(stream);
    // Set record to <audio> when recording will be finished
    recorder.addEventListener('dataavailable', e => {
      this.audio.current.src = URL.createObjectURL(e.data);
    });
    // Start recording
    recorder.start();
  });
}
stopFunc() {
  // Stop recording
  recorder.stop();
  // Remove "recording" icon from browser tab
  recorder.stream.getTracks().forEach(track => track.stop());
}

There have been a number of similar issues posted on audio-recorder-polyfill's issue tracker.
Root cause
One of those issues, #4, is still open. Several comments on that issue hint that the root cause is that Safari cancels the AudioContext if it was not created in a handler for a user interaction (e.g. a click).
Possible solutions
You may be able to get it to work if you:
Do the initialisation inside a handler for a user interaction (e.g. <button onclick="recordFunc()">); see the sketch after this list
Do not attempt to reuse the MediaStream returned from getUserMedia() for multiple recordings
Do not attempt more than 4 (or 6?) audio recordings on the same page (sources [1], [2] mention that Safari will block this)
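As a rough sketch of the first suggestion, here is what the question's recordFunc/stopFunc might look like wired directly to buttons in a React class component (the ref and button wiring here are assumptions, not the asker's actual code):
class Recorder extends React.Component {
  audio = React.createRef();

  recordFunc = () => {
    navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
      this.recorder = new MediaRecorder(stream);
      this.recorder.addEventListener('dataavailable', e => {
        this.audio.current.src = URL.createObjectURL(e.data);
      });
      this.recorder.start();
    });
  };

  stopFunc = () => {
    this.recorder.stop();
    this.recorder.stream.getTracks().forEach(track => track.stop());
  };

  render() {
    // Calling recordFunc straight from the click keeps the audio setup
    // inside a user gesture, which is what Safari requires.
    return (
      <div>
        <button onClick={this.recordFunc}>Record</button>
        <button onClick={this.stopFunc}>Stop</button>
        <audio ref={this.audio} controls />
      </div>
    );
  }
}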
Alternative libraries
You could also try the StereoAudioRecorder class from the RecordRTC package, which has more users (3K) but appears less well maintained; it might work for you.
Upcoming support
If you'd prefer to stick to the MediaRecorder API and the tips above don't work for you, the good news is that there is experimental support for MediaRecorder in Safari 12.4 and up (iOS and macOS), and it appears to be supported in the latest Safari Technology Preview.
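If you do end up preferring the native implementation where it exists, a minimal feature-detection sketch might look like this (the import path follows audio-recorder-polyfill's README; treat the exact usage as an assumption):
// Only install the polyfill where a native MediaRecorder is missing
if (typeof window.MediaRecorder === 'undefined') {
  import('audio-recorder-polyfill').then(module => {
    window.MediaRecorder = module.default;
  });
}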
See also
The comments in this issue may also be useful

Related

Implementing an AudioWorklet in Safari

I have been exploring some of the samples listed here: Google Chrome Labs - Audio Worklet. They have helped me get most of the way to implementing my own WASM-based AudioWorklet. It soon came to my attention that Safari doesn't support Worklet.addModule(). I can't find any alternative online that demonstrates how to implement an Audio Worklet without addModule; can anyone help me understand whether it is possible to use an Audio Worklet in Safari without this method?
I found that in Chrome, you have to:
// wait for user interaction
const context = new AudioContext()
await context.audioWorklet.addModule('./worklet.js')
const worklet = new AudioWorkletNode(context, 'workletName')
worklet.connect(context.destination)
However in Safari, if I enable "Console > Media Logging" I can see this fails with
BaseAudioContext::willBeginPlayback returning false, not processing user gesture or capturing
This seems to be due to the await between the user interaction and the new AudioWorkletNode.
If I change the order in Safari to:
const context = new AudioContext()
await context.audioWorklet.addModule('./worklet.js')
// wait for user interaction
const worklet = new AudioWorkletNode(context, 'workletName')
worklet.connect(context.destination)
This works, but it then doesn't work in Chrome (!), which gives the error:
The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page
Perhaps enable "Console > Media Logging" in Safari dev tools and see if this is the same issue you are getting?
To make it work in both Safari and Chrome, you can create the AudioContext on page load and then call resume after user interaction:
const context = new AudioContext()
await context.audioWorklet.addModule('./worklet.js')
// wait for user interaction
await context.resume();
const worklet = new AudioWorkletNode(context, 'workletName')
worklet.connect(context.destination)
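For completeness, a sketch of wiring that pattern to an actual click handler (the element ID and the worklet name are placeholders):
const context = new AudioContext();
// Start loading the worklet module up front; resume only on the user gesture
const moduleReady = context.audioWorklet.addModule('./worklet.js');

document.querySelector('#start').addEventListener('click', async () => {
  await context.resume();   // inside the user gesture
  await moduleReady;        // make sure the module has finished loading
  const worklet = new AudioWorkletNode(context, 'workletName');
  worklet.connect(context.destination);
});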
I'm not sure why it didn't work for you, but the latest versions of Safari do support the AudioWorklet (with some bugs). It doesn't work on pages served over http, but it should work on pages served over https. Maybe that was the issue.
If you want to support older browsers (that don't support the AudioWorklet yet) you could try standardized-audio-context or jariseon/audioworklet-polyfill or GoogleChromeLabs/audioworklet-polyfill or audioworkletpolyfill.js from the javascriptmusic project.

Audio/Video syncing problem while recording and playing webm chunks on Chrome of Windows 10

Problem
I am recording webm chunks with MediaRecorder in Chrome 83 on Windows 10 and sending them to another computer. These chunks are played back in another Chrome instance using Media Source Extensions (MSE).
sourceBuffer.appendBuffer(webmChunkData);
Everything works fine for roughly the first 1 to 1.2 seconds. After that, the audio/video syncing problem starts. The gap between audio and video is minimal at first, but it grows as time goes on.
The weird thing is that the behaviour differs between browsers, even though Chrome's version is 83+ on almost all the operating systems tested.
Could the camera be the problem?
I don't think the camera is the problem: I dual-boot Fedora and Windows on the same machine, and the webm chunks play fine on Fedora.
Could the sample rate be the problem?
I doubt it. When I compare the sample rate the browsers use during playback, chrome://media-internals shows 48000 both with and without the syncing problem.
Warning message from Chrome
The Chrome instance with the sync problem also shows a warning message on chrome://media-internals.
Questions:
Why is there an audio/video syncing problem when both recording and playback are done in Chrome on Windows 10?
How can I eliminate this syncing problem?
I believe I have a workaround for you. The problem seems specific to Chrome + MediaRecorder + VP8 and has nothing to do with MSE or the platform; I see the same issue on Chrome 98 on macOS 12.2.1. Additionally, if you decrease the .start(timeslice) argument, the issue appears more rapidly and more severely.
However... when I use VP9 the problem does not manifest!
I'm using code like this:
function supportedMimeTypes(): string[] {
  // From least desirable to most desirable
  return [
    // most compatible, but chrome creates bad chunks
    'video/webm;codecs=vp8,opus',
    // works in chrome, firefox falls back to vp8
    'video/webm;codecs=vp9,opus'
  ].filter(
    (m) => MediaRecorder.isTypeSupported(m)
  );
}

const mimeType = supportedMimeTypes().pop();
if (!mimeType) throw new Error("Could not find a supported mime-type");

const recorder = new MediaRecorder(stream, {
  // be sure to use a mimeType with a specific `codecs=` as otherwise
  // chrome completely ignores it and uses video/x-matroska!
  // https://stackoverflow.com/questions/64233494/mediarecorder-does-not-produce-a-valid-webm-file
  mimeType,
});
recorder.start(1000)
The resulting VP9 appears to play in Firefox, and a VP8 recorded in Firefox plays well in Chrome too.
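For reference, a rough sketch of the playback side the question describes, feeding received chunks into MSE (the transport callback onWebmChunk and the mime string are assumptions, not the asker's actual code):
const mimeType = 'video/webm;codecs=vp9,opus'; // must match what the recorder produced
const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  const sourceBuffer = mediaSource.addSourceBuffer(mimeType);
  const queue = [];
  // appendBuffer is asynchronous, so queue chunks while the buffer is updating
  sourceBuffer.addEventListener('updateend', () => {
    if (queue.length) sourceBuffer.appendBuffer(queue.shift());
  });
  onWebmChunk(chunk => {   // hypothetical callback delivering chunks from the recorder
    if (sourceBuffer.updating || queue.length) queue.push(chunk);
    else sourceBuffer.appendBuffer(chunk);
  });
});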

AudioContext compromising app performance the more audio files I play

I'm developing an Electron app (JavaScript) for audio visualization. There is a Playlist() instance which receives the audio file paths the user wants to play. When the first audio file finishes, it plays the next one. So far so good. The app does intense computational work extracting audio features from each channel, re-rendering canvases and animating plots. It does it beautifully.
The problem is: each time the app plays the next file, it gets slower, as if all the previous audio data were still hanging around somewhere. I found the close() method of AudioContext in the documentation:
"The close() method of the AudioContext Interface closes the audio context, releasing any system audio resources that it uses."
"An AudioContext can now be explicitly closed, thereby releasing any hardware resources associated with the AudioContext. Without this, developers had to depend on garbage collection of the AudioContext to release hardware resources."
I also have found this example of closing and restarting audio contexts:
https://github.com/mdn/webaudio-examples/blob/master/audiocontext-states/index.html
https://mdn.github.io/webaudio-examples/audiocontext-states/
The problem is that I use audioContext.createMediaElementSource(HTMLelementID), and it doesn't let me restart everything by recreating all the nodes as in the example. A simplified version of what I did before is:
class Audio {
  constructor(audioElementID, playlistObj) {
    this.audioContext = new AudioContext();
    this.audioElement = document.getElementById(audioElementID);
    this.track = this.audioContext.createMediaElementSource(this.audioElement);
    this.gainNode = this.audioContext.createGain();
    this.track.connect(this.gainNode);
    this.gainNode.connect(this.audioContext.destination);
    this.audioElement.addEventListener('ended', () => {
      playlistObj.playNextTrack(); // changes the src of the html element (audioElementID) and sets this.audioElement.currentTime to 0
    });
  }
  // everything is a property here for debugging reasons
}
const audio = new Audio('audioID', playlist);
// playlist defined somewhere else
To use the close() method I had to change it (exactly like the example: a function that recreates everything again):
class Audio {
  constructor(audioElementID, playlistObj) {
    this.createAudioContext = () => {
      this.audioContext = new AudioContext();
      this.audioElement = document.getElementById(audioElementID);
      this.track = this.audioContext.createMediaElementSource(this.audioElement);
      this.gainNode = this.audioContext.createGain();
      this.track.connect(this.gainNode);
      this.gainNode.connect(this.audioContext.destination);
      this.audioElement.addEventListener('ended', () => {
        playlistObj.playNextTrack(); // changes the src of the html element (audioElementID) and sets this.audioElement.currentTime to 0
      });
    };
    this.createAudioContext();
  }
}
and in playlist.playNextTrack() I pause the audioElement, call audio.audioContext.close(), wait for it (it's a promise), call audio.createAudioContext() to recreate everything, and play. The logic throws an error at this.track = this.audioContext.createMediaElementSource(this.audioElement) with:
"Failed to execute 'createMediaElementSource' on 'BaseAudioContext': HTMLMediaElement already connected previously to a different MediaElementSourceNode, at Audio.createAudioContext"
In the example, the audio source is just an oscillator, not an mp3 audio file.
I'm really stuck here and don't know what to do. I'm not even sure that AudioContext really holds on to data from all the previous audio files and causes this performance problem. And if so, how could I reconnect the HTMLMediaElement to the new nodes that audio.createAudioContext() creates? I've already tried audio.track.disconnect(), but it doesn't work (as it shouldn't, since here I'm only disconnecting track from gainNode). And audioElement doesn't have a disconnect() method, as it's just an HTML element.
Any ideas?
UPDATE:
I got past the problem of recreating the audio context by deleting the HTML element and creating it again. But the problem persists: the more audio files are played, the slower the app gets. More precisely: the more AudioContext instances are created, the slower it gets (even if I close the previous ones).
I'm really stuck here. Don't know what to do. I'm not even sure if AudioContext() really holds data from all the audio files before causing this performance problem.
No, it's really unlikely this is the case. The AudioContext sets up things like the sample rate, output destination, and the graph. That's all.
The close() method of the AudioContext Interface closes the audio context, releasing any system audio resources that it uses.
You're misunderstanding what this means. Those "system audio resources" are the sound devices. While the AudioContext is running, there is an audio device requested. This is particularly meaningful in low power environments, like mobile. Another example would be Bluetooth. If the AudioContext is kept running, your Bluetooth headset may just stay on. If the AudioContext is allowed to close, then the Bluetooth headset may go to sleep.
And if so, how could I reconnect the HTMLMediaElement to a new node audio.createAudioContext() creates?
You don't. While it would be nice if the API supported this, it seems it doesn't. Simply create a new HTMLMediaElement.
What you should do is properly profile your application to figure out where the slowdown is occurring. Use your developer tools. It might be faster, though, to just start commenting out sections of what's running. We certainly can't tell you specifically where the problem is from the code you've shown.
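A minimal sketch of that suggestion, keeping one long-lived AudioContext and creating a fresh <audio> element (with its own source node) per track; the names here are illustrative, not the asker's actual code:
const audioContext = new AudioContext();
const gainNode = audioContext.createGain();
gainNode.connect(audioContext.destination);

let currentElement = null;
let currentSource = null;

function playTrack(src) {
  // Drop the previous element and its node instead of trying to re-wrap it
  if (currentSource) currentSource.disconnect();
  if (currentElement) currentElement.remove();

  currentElement = document.createElement('audio');
  currentElement.src = src;
  document.body.appendChild(currentElement);

  currentSource = audioContext.createMediaElementSource(currentElement);
  currentSource.connect(gainNode);
  currentElement.play();
}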

Change the VideoTrack of a MediaStream object

In a Nutshell: I'm trying to change the VideoTrack of a MediaStream object.
(Documentation: https://developer.mozilla.org/en-US/docs/WebRTC/MediaStream_API)
I have a MediaStream object __o_jsep_stream_audiovideo which is created by the sipml library.
__o_jsep_stream_audiovideo has one AudioTrack and one VideoTrack. At first the VideoTrack comes from the user's camera (e.g. label: "FaceTime Camera").
According to the Documentation:
A MediaStream consists of zero or more MediaStreamTrack objects, representing various audio or video tracks.
So we should be fine adding more Tracks to this Stream.
I'm trying to switch/exchange the VideoTrack with one from another stream. The other stream (streamB) originates from Chrome's screen-capture API (label: "Screen").
I tried:
__o_jsep_stream_audiovideo.addTrack(streamB.getVideoTracks()[0])
which doesn't seem to have any effect.
I also tried assigning the videoTracks directly (which was desperate, I know).
I must be missing something obvious could you point me in the right direction?
I'm running
Chrome (Version 34.0.1847.131) and
Canary (Version 36.0.1976.2 canary)
OSX 10.9.2
When we talk about changing the video track, there are two areas:
Changing the remote video track (what the others see of you):
WebRTC now has a new way of doing this, since addStream/removeStream are deprecated.
The good news is that it introduces a new method, replaceTrack:
stream.getTracks().forEach(function(track) {
  // remote
  qcClient.calls.values().forEach(function(call) {
    var sender = call.pc.getSenders().find(function(s) {
      return s.track.kind == track.kind;
    });
    sender.replaceTrack(track);
  });
});
Changing your local preview video (what you see of yourself):
It is better to just add a new video element (or reuse the existing one) and assign its srcObject to the newly captured stream.
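A short sketch of that second point (the element ID and stream name are placeholders):
// Show the newly captured screen stream in the local preview element
const preview = document.querySelector('#localPreview');
preview.srcObject = screenStream;
preview.play();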
Adding and removing tracks on a MediaStream object does not trigger a renegotiation, and there are also issues in Chrome with a MediaStream having two tracks of the same type.
You should probably just add the separate MediaStream to the peer connection so that it can fire a renegotiation and handle the streams. The track add/remove functionality in Chrome is very naive and not very granular, and you should move away from it as much as you can.
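As a rough sketch of that approach using the current API (which post-dates the Chrome versions in the question; the peer connection pc, screenStream, and the signalling channel are placeholders):
// Add the screen-capture track to the peer connection and let
// negotiationneeded drive a renegotiation instead of patching the stream.
screenStream.getVideoTracks().forEach(track => {
  pc.addTrack(track, screenStream);
});
pc.addEventListener('negotiationneeded', async () => {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // send pc.localDescription to the remote peer over your signalling channel
});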

Why aren't Safari or Firefox able to process audio data from MediaElementSource?

Neither Safari nor Firefox is able to process audio data from a MediaElementSource using the Web Audio API.
var audioContext, audioScript, audioSource,
    result = document.createElement('h3'),
    output = document.createElement('span'),
    mp3 = '//www.jonathancoulton.com/wp-content/uploads/encodes/Smoking_Monkey/mp3/09_First_of_May_mp3_3a69021.mp3',
    ogg = '//upload.wikimedia.org/wikipedia/en/4/45/ACDC_-_Back_In_Black-sample.ogg',
    gotData = false, data, audio = new Audio();

function connect() {
    audioContext = window.AudioContext ? new AudioContext() : new webkitAudioContext();
    audioSource = audioContext.createMediaElementSource( audio );
    audioScript = audioContext.createScriptProcessor( 2048 );

    audioSource.connect( audioScript );
    audioSource.connect( audioContext.destination );
    audioScript.connect( audioContext.destination );

    audioScript.addEventListener('audioprocess', function(e){
        if ((data = e.inputBuffer.getChannelData(0)[0]*3)) {
            output.innerHTML = Math.abs(data).toFixed(3);
            if (!gotData) gotData = true;
        }
    }, false);
}

(function setup(){
    audio.volume = 1/3;
    audio.controls = true;
    audio.autoplay = true;
    audio.src = audio.canPlayType('audio/mpeg') ? mp3 : ogg;
    audio.addEventListener('canplay', connect);

    result.innerHTML = 'Channel Data: ';
    output.innerHTML = '0.000';

    document.body.appendChild(result).appendChild(output);
    document.body.appendChild(audio);
})();
Are there any plans to patch this in the near future? Or is there some work-around that would still provide the audio controls to the user?
For Apple, is this something that could be fixed in the WebKit Nightlies, or will we have to wait until the Safari 8.0 release to get HTML5 <audio> playing nicely with the Web Audio API? The Web Audio API has existed in Safari since at least version 6.0, and I initially posted this question long before Safari 7.0 was released. Is there a reason this wasn't fixed already? Will it ever be fixed?
For Mozilla, I know you're still in the process of switching over from the old Audio Data API, but is this a known issue with your Web Audio implementation and is it going to be fixed before the next release of Firefox?
This answer is quoted almost exactly from my answer to a related question: Firefox 25 and AudioContext createJavaScriptNote not a function
Firefox does support MediaElementSource if the media adheres to the Same-Origin Policy; however, Firefox produces no error when you attempt to use media from a remote origin.
The specification is not really specific about it (pun intended), but I've been told that this is an intended behavior, and the issue is actually with Chrome… It's the Blink implementations (Chrome, Opera) that need to be updated to require CORS.
MediaElementSource Node and Cross-Origin Media Resources:
From: Robert O'Callahan <robert@ocallahan.org>
Date: Tue, 23 Jul 2013 16:30:00 +1200
To: "public-audio@w3.org" <public-audio@w3.org>
HTML media elements can play media resources from any origin. When an
element plays a media resource from an origin different from the page's
origin, we must prevent page script from being able to read the contents of
the media (e.g. extract video frames or audio samples). In particular we
should prevent ScriptProcessorNodes from getting access to the media's
audio samples. We should also prevent information about samples leaking in other
ways (e.g. timing channel attacks). Currently the Web Audio spec says
nothing about this.
I think we should solve this by preventing any non-same-origin data from
entering Web Audio. That will minimize the attack surface and the impact on
Web Audio.
My proposal is to make MediaElementAudioSourceNode convert data coming from
a non-same origin stream to silence.
If this proposal makes it into the spec, it will be nearly impossible for a developer to even realize why his MediaElementSource is not working. As it stands right now, calling createMediaElementSource() on an <audio> element in Firefox 26 actually stops the <audio> controls from working at all and throws no errors.
What dangerous things can you do with the audio/video data from a remote origin? The general idea is that without applying the Same-Origin Policy to a MediaElementSource node, some malicious javascript could access media that only the user should have access to (session, vpn, local server, network drives) and send its contents—or some representation of it—to an attacker.
The HTML5 media elements don't have these restrictions by default. You can include remote media across all browsers by using the <audio>, <img>, or <video> elements. It's only when you want to manipulate or extract the data from these remote resources that the Same-Origin Policy comes into play.
[It's] for the same reason that you cannot dump image data cross-origin via <canvas>: media may contain sensitive information and therefore allowing rogue sites to dump and re-route content is a security issue. - @nmaier
createMediaElementSource() does not work properly in Safari 8.0.5 (and possibly earlier) but is fixed in Webkit Nightly as of 10600.5.17, r183978
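Separately from the version notes above, my understanding is that the usual way to use remote media with createMediaElementSource() is to opt in to CORS on both sides; a hedged sketch (the URL is a placeholder, and the server must actually send the CORS header):
var audio = new Audio();
audio.crossOrigin = 'anonymous';              // request the media with CORS
audio.src = 'https://example.com/track.mp3';  // server must send Access-Control-Allow-Origin
audio.controls = true;
document.body.appendChild(audio);

var ctx = new (window.AudioContext || window.webkitAudioContext)();
var source = ctx.createMediaElementSource(audio);
source.connect(ctx.destination);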
