JavaScript Web Audio API and stopping samples - javascript

I'm trying to understand an aspect of JavaScript Web Audio API playing samples.
More specifically, I'm using Web Audio API to play a very short sample (actually a "click" sample, for a metronome).
This is what I do:
let sample = this.setupSample(this.audioContext, click1);
sample.start(time);
sample.stop(time + this.noteLength);
This code works when I run the app on desktop (my PC), but when I run it on mobile, as soon as I tap the "start" button to play the metronome, I get the following error:
InvalidStateError: The object is in an invalid state.
93|   sample.stop(time + this.noteLength);
The first thing I tried was commenting that line:
let sample = this.setupSample(this.audioContext, click1);
sample.start(time);
// sample.stop(time + this.noteLength);
In this way, the app runs correctly both from desktop and mobile.
So it seems to me that calling stop() on a sample that has already ended is unnecessary, maybe even wrong.
Hence my question: is it wrong to call stop() on a sample that has already ended? Should I consider stop() implicitly called once the sample has ended on its own? Also, why don't I get that InvalidStateError on desktop?
Thank you.
EDIT: to be more clear, here's what setupSample does:
setupSample = (audioContext, samplePath) => {
  const source = audioContext.createBufferSource();
  const myRequest = new Request(samplePath);
  fetch(myRequest).then(function (response) {
    return response.arrayBuffer();
  }).then(function (buffer) {
    audioContext.decodeAudioData(buffer, function (decodedData) {
      source.buffer = decodedData;
      source.connect(audioContext.destination);
    });
  });
  return source;
}
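For reference, a minimal defensive sketch (not necessarily a fix for the underlying cause) is to guard the stop() call so an InvalidStateError on mobile can't break playback:
// Sketch: schedule the stop, but swallow an InvalidStateError if the browser
// considers the node not stoppable at that point. Since the click sample is
// very short, letting it play to its natural end is usually acceptable.
let sample = this.setupSample(this.audioContext, click1);
sample.start(time);
try {
  sample.stop(time + this.noteLength);
} catch (e) {
  console.warn('stop() failed, letting the sample end on its own:', e);
}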

Related

iPhone 14 won't record using MediaRecorder

Our website records audio and plays it back for a user. It has worked for years with many different devices, but it started failing on the iPhone 14. I created a test app at https://nmp-recording-test.netlify.app/ so I can see what is going on. It works perfectly on all devices except the iPhone 14, where it only works the first time. It works on other iPhones, and it works on iPads and MacBooks using Safari or any other browser.
It looks like it will record if that is the first audio you ever do. If I get an AudioContext somewhere else the audio playback will work for that, but then the recording won't.
The only symptom I can see is that it doesn't call MediaRecorder.ondataavailable when it is not working, but I assume that is because it isn't recording.
Here is the pattern that I'm seeing with my test site:
Click "new recording". (the level indicator moves, the data available callback is triggered)
Click "listen" I hear what I just did
Click "new recording". (no levels move, no data is reported)
Click "listen" nothing is played.
But if I do anything, like click the metronome on and off then it won't record the FIRST time, either.
The "O.G. Recording" is the original way I was doing the recording, using deprecated method createMediaStreamSource() and createScriptProcessor()/createJavaScriptNode(). I thought maybe iPhone finally got rid of that, so I created the MediaRecorder version.
What I'm doing, basically, is (truncated to show the important part):
const chunks = []
function onSuccess(stream: MediaStream) {
  mediaRecorder = new MediaRecorder(stream);
  mediaRecorder.ondataavailable = function (e) {
    chunks.push(e.data);
  }
  mediaRecorder.start(1000);
}
navigator.mediaDevices.getUserMedia({ audio: true }).then(onSuccess, onError);
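For completeness, a minimal sketch of how the collected chunks would typically be turned into playable audio once recording stops (the mime type here is an assumption; Safari and Chrome produce different containers):
// Sketch: stop the recorder and assemble the collected chunks into a Blob.
function stopRecording() {
  mediaRecorder.onstop = function () {
    const blob = new Blob(chunks, { type: mediaRecorder.mimeType || 'audio/mp4' });
    const url = URL.createObjectURL(blob);
    new Audio(url).play(); // play back what was recorded
  };
  mediaRecorder.stop();
}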
Has anyone else seen anything different in the way the iPhone 14 handles recording?
Does anyone have a suggestion about how to debug this?
If you have an iPhone 14, would you try my test program above and let me know if you get the same results? We only have one iPhone 14 to test with, and maybe there is something weird about that device.
If it works, you should see a number of lines, something like data {"len":6784}, appear every second while you are recording.
--- EDIT ---
I reworked the code along the lines of Frank zeng's suggestion and I am getting it to record, but it is still not right. The volume is really low, it looks like there are some dropouts, and there is a really long pause when resuming the AudioContext.
The new code seems to work perfectly in the other devices and browsers I have access to.
--- EDIT 2 ---
There were two problems: the deprecated createScriptProcessor stopped working, and there was an iOS bug that was fixed in version 16.2. So rewriting to use AudioWorklet was needed, but keeping the recording going once it has started is not.
I have the same problem as you. I think AudioContext.createScriptProcessor no longer works on the iPhone 14, so I replaced it with the newer AudioWorkletNode API. Also, don't close the stream, because the second recording session on the iPhone 14 gets too laggy; just remember to destroy the data after recording. After testing, this solved the problem for me. Here's my code:
// get the stream
window.navigator.mediaDevices.getUserMedia(options).then(async (stream) => {
  // that.stream = stream
  that.context = new AudioContext()
  await that.context.resume()
  const rate = that.context.sampleRate || 44100
  that.mp3Encoder = new lamejs.Mp3Encoder(1, rate, 128)
  that.mediaSource = that.context.createMediaStreamSource(stream)
  // createScriptProcessor is being phased out: keep using it if it is still available,
  // otherwise fall back to the worklet approach for capturing the audio data
  if (that.context.createScriptProcessor && typeof that.context.createScriptProcessor === 'function') {
    that.mediaProcessor = that.context.createScriptProcessor(0, 1, 1)
    that.mediaProcessor.onaudioprocess = event => {
      window.postMessage({ cmd: 'encode', buf: event.inputBuffer.getChannelData(0) }, '*')
      that._decode(event.inputBuffer.getChannelData(0))
    }
  } else { // use the new (worklet) approach
    that.mediaProcessor = await that.initWorklet()
  }
  resolve()
})
// content of the audio worklet setup function
async initWorklet() {
  try {
    /* node that receives the audio stream data for analysis */
    let audioWorkletNode;
    /* load the AudioWorkletProcessor module and add it to the current worklet */
    await this.context.audioWorklet.addModule('/get-voice-node.js');
    /* bind the AudioWorkletNode to the loaded AudioWorkletProcessor */
    audioWorkletNode = new AudioWorkletNode(this.context, "get-voice-node");
    /* AudioWorkletNode and AudioWorkletProcessor communicate through a MessagePort */
    console.log('audioWorkletNode', audioWorkletNode)
    const messagePort = audioWorkletNode.port;
    messagePort.onmessage = (e) => {
      let channelData = e.data[0];
      window.postMessage({ cmd: 'encode', buf: channelData }, '*')
      this._decode(channelData)
    }
    return audioWorkletNode;
  } catch (e) {
    console.log(e)
  }
}
// content of get-voice-node.js; remember to put it in the static resource directory
class GetVoiceNode extends AudioWorkletProcessor {
  /*
   * options are passed in by new AudioWorkletNode()
   */
  constructor() {
    super()
  }
  /*
   * `inputList` and `outputList` are arrays of inputs and outputs.
   * The awkward part is that each block is only 128 samples; I am not sure how to configure that.
   */
  process(inputList, outputList, parameters) {
    // console.log(inputList)
    if (inputList.length > 0 && inputList[0].length > 0) {
      this.port.postMessage(inputList[0]);
    }
    return true // returning true lets the system know we are still active and ready to process audio
  }
}
registerProcessor('get-voice-node', GetVoiceNode)
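The _decode helper is not shown in this answer. A minimal sketch of what it might do with lamejs (converting the Float32 samples coming from the script processor or worklet to Int16 and feeding them to the encoder; this.mp3Chunks is a hypothetical buffer list that getMp3Blob would later concatenate):
// Sketch of a possible _decode helper (not part of the original answer)
_decode(float32Samples) {
  const int16 = new Int16Array(float32Samples.length);
  for (let i = 0; i < float32Samples.length; i++) {
    // clamp to [-1, 1] and scale to the 16-bit signed integer range
    const s = Math.max(-1, Math.min(1, float32Samples[i]));
    int16[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
  }
  const mp3Chunk = this.mp3Encoder.encodeBuffer(int16);
  if (mp3Chunk.length > 0) {
    this.mp3Chunks.push(mp3Chunk); // hypothetical storage for encoded chunks
  }
}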
Destroy the recording instance and free the memory; if you want to use it again next time, you had better create a new instance:
this.recorder.stop()
this.audioDurationTimer && window.clearInterval(this.audioDurationTimer)
const audioBlob = this.recorder.getMp3Blob()
// Destroy the recording instance and free the memory
this.recorder = null

Why do I have an error "TypeError: Cannot read properties of null (reading 'getTracks')", but the code is working?

Here is my code :
const videoElem = document.getElementById("videoScan");
var stream = document.getElementById("videoScan").srcObject;
var tracks = stream.getTracks();
tracks.forEach(function (track) {
  track.stop();
});
videoElem.srcObject = null;
I am trying to stop all camera streams in order to make ZXing work. However, I get the error above (only on an Android tablet).
The problem is, I can't solve it, yet everything works anyway. And when I wrap it in a try/catch, the code stops working. I can't figure out why this error is displayed.
When you say "everything is working well", you mean that you get what you want as output?
If the code throwing the error is the same you are showing us, then it means that streams contains null.
Since streams is a const value (thus once only) assigned on the previous instructions, it means that document.getElementById("videoScan").srcObject is null too.
I'd suggest you to do some other debug by, i.e., printing out to JS console (if you have access to it from your Android debug environment, I'm sure there's a way) what's inside stream, before trying to access its getTracks method reference.
Please do also check the compatibility with srcObject with your Android (browser?) environment: MDN has a compatibility list you can inspect.
The simplest way to guard against this null reference error is to check whether the stream actually exists.
const videoElem = document.getElementById("videoScan");
const stream = videoElem.srcObject;
// getTracks() always returns an array, so checking the stream itself is enough
if (stream) {
  stream.getTracks().forEach(function (track) {
    track.stop();
  });
} else {
  console.warn('Stream object is not available');
}
videoElem.srcObject = null;
So I found the solution. The error stopped the code from continuing, which, in my case, actually made it work. I made a very basic mistake: not paying enough attention to the debug results. However, if anyone gets the error "Uncaught (in promise) DOMException: Could not start video source" with ZXing on Android, you simply have to stop all video streams to allow Chrome to start another camera. It will not work otherwise.

Asynchronous javascript in a synchronous function when combining mediaStreams from getUserMedia and getDisplayMedia?

My team is adapting the sipml5 library to create an HTML5 softphone for use in our organization. The full repository is here: https://github.com/L1kMakes/sipml5-ng . We have the code working well; audio and video calls work flawlessly. In the original code we forked from (which dates from around 2012), screen sharing was accomplished with a browser plugin, but HTML5 and WebRTC have since changed to allow this with just JavaScript.
I am having difficulty adapting the code to accommodate this. I am starting with the code on line 828 here: https://github.com/L1kMakes/sipml5-ng/blob/master/src/tinyMEDIA/src/tmedia_session_jsep.js This works, though without audio. That makes sense, as the only possible audio stream from a screen share is the screen audio, not the mic audio. I am attempting to initialize an audio stream from getUserMedia, grab a video stream from getDisplayMedia, and present them to the client as a single MediaStream. Here's my adapted code:
if ( this.e_type == tmedia_type_e.SCREEN_SHARE ) {
  // Plugin-less screen share using WebRTC requires "getDisplayMedia" instead of "getUserMedia"
  // Because of this, audio constraints become limited, and we have to use async to deal with
  // the promise variable for the mediastream. This is a change since Chrome 71. We are able
  // to use the .then aspect of the promise to call a second mediaStream, then attach the audio
  // from that to the video of our second screenshare mediaStream, enabling plugin-less screen
  // sharing with audio.
  let o_stream = null;
  let o_streamAudio = null;
  let o_streamVideo = null;
  let o_streamAudioTrack = null;
  let o_streamVideoTrack = null;
  try {
    navigator.mediaDevices.getDisplayMedia(
      {
        audio: false,
        video: !!( this.e_type.i_id & tmedia_type_e.VIDEO.i_id ) ? o_video_constraints : false
      }
    ).then(o_streamVideo => {
      o_streamVideoTrack = o_streamVideo.getVideoTracks()[0];
      navigator.mediaDevices.getUserMedia(
        {
          audio: o_audio_constraints,
          video: false
        }
      ).then(o_streamAudio => {
        o_streamAudioTrack = o_streamAudio.getAudioTracks()[0];
        o_stream = new MediaStream( [ o_streamVideoTrack, o_streamAudioTrack ] );
        tmedia_session_jsep01.onGetUserMediaSuccess(o_stream, This);
      });
    });
  } catch ( s_error ) {
    tmedia_session_jsep01.onGetUserMediaError(s_error, This);
  }
} else {
  try {
    navigator.mediaDevices.getUserMedia(
      {
        audio: (this.e_type == tmedia_type_e.SCREEN_SHARE) ? false : !!(this.e_type.i_id & tmedia_type_e.AUDIO.i_id) ? o_audio_constraints : false,
        video: !!(this.e_type.i_id & tmedia_type_e.VIDEO.i_id) ? o_video_constraints : false // "SCREEN_SHARE" contains "VIDEO" flag -> (VIDEO & SCREEN_SHARE) = VIDEO
      }
    ).then(o_stream => {
      tmedia_session_jsep01.onGetUserMediaSuccess(o_stream, This);
    });
  } catch (s_error) {
    tmedia_session_jsep01.onGetUserMediaError(s_error, This);
  }
}
My understanding is that o_stream should represent the resolved MediaStream tracks, not a promise, when doing a screen share. On the other end, we are using the client "MicroSIP". When I make a video call, I get my video preview locally in our web app, and when the call is answered the MicroSIP client gets a green square for a second, then resolves to my video. When I make a screen share call, my local web app shows the local preview of the screen share, but upon answering the call, the MicroSIP client just gets a green square and never gets the actual screen share.
The video constraints for both are the same. If I add debugging output to describe what is actually in the media streams, they appear identical as far as I can tell. I made a test video call and a test screen share call, captured debug logs from each, and held them side by side in Notepad++... everything appears to be identical except for the explicit debug output describing the traversal down the permission request tree with "GetUserMedia" and "GetDisplayMedia." I can't really post the debug logs here, as cleaning them of information about my organization would leave them pretty barren. Apart from the extra debug output on the "getDisplayMedia" call before "getUserMedia", timestamps, and unique IDs tied to individual calls, the log files are identical.
I am wondering if the media streams are not resolving from their promises before the "then" completes, but asynchronous JavaScript and promises are still a bit over my head. I don't believe I should convert this function to async, but I have nothing else to debug here; the MediaStream is working, as I can see it locally, but I'm stumped on figuring out what is going on with the remote send.
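For comparison, the same combine-the-two-streams idea written with async/await, as a minimal standalone sketch (the constraint arguments are stand-ins for the sipml5 constraint objects):
// Sketch: grab screen video and mic audio separately, then merge the tracks.
async function getScreenShareWithMicAudio(videoConstraints, audioConstraints) {
  const screenStream = await navigator.mediaDevices.getDisplayMedia({
    video: videoConstraints,
    audio: false
  });
  const micStream = await navigator.mediaDevices.getUserMedia({
    audio: audioConstraints,
    video: false
  });
  // Combine the screen's video track with the microphone's audio track.
  return new MediaStream([
    screenStream.getVideoTracks()[0],
    micStream.getAudioTracks()[0]
  ]);
}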
The solution was...nothing, the code was fine. It turns out the recipient SIP client we were using had an issue where it just aborts if it gets video larger than 640x480.

Can't get Web Audio API to work with iOS 11 Safari

So iOS 11 Safari was supposed to add support for the Web Audio API, but it still doesn't seem to work with this javascript code:
// called on page load
get_user_media = get_user_media || navigator.webkitGetUserMedia;
get_user_media = get_user_media || navigator.mozGetUserMedia;
get_user_media.call(navigator, { "audio": true }, use_stream, function () { });

function use_stream(stream) {
  var audio_context = new AudioContext();
  var microphone = audio_context.createMediaStreamSource(stream);
  window.source = microphone; // Workaround for https://bugzilla.mozilla.org/show_bug.cgi?id=934512
  var script_processor = audio_context.createScriptProcessor(1024, 1, 1);
  script_processor.connect(audio_context.destination);
  microphone.connect(script_processor);
  // do more stuff which involves processing the data from the user's microphone...
}
I copy-pasted most of this code, so I only have a cursory understanding of it. I know that it's supposed to (and does, on other browsers) capture the user's microphone for further processing. I know that the code breaks on the var audio_context = new AudioContext(); line (as in, no code after that runs), but I don't have any error messages because I don't have a Mac, which is required to debug iOS Safari (apple die already >_<). Does anyone know what's going on and/or how to fix it?
Edit: I forgot to mention that I looked it up, and apparently I need the "webkit" prefix to use the Web Audio API in Safari, but making it var audio_context = new webkitAudioContext(); doesn't work either.
#TomW was on the right track - basically the webkitAudioContext is suspended unless it's created in direct response to the user's tap (before you get the stream).
See my answer at https://stackoverflow.com/a/46534088/933879 for more details and a working example.
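A minimal sketch of that pattern, creating (or resuming) the context inside the tap handler before requesting the microphone (the "start" button id is just an example):
// Sketch: create the AudioContext in direct response to a user gesture.
document.getElementById('start').addEventListener('click', async function () {
  var AudioContextClass = window.AudioContext || window.webkitAudioContext;
  var audio_context = new AudioContextClass();
  await audio_context.resume(); // Safari may still start it in the "suspended" state

  var stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  var microphone = audio_context.createMediaStreamSource(stream);
  var script_processor = audio_context.createScriptProcessor(1024, 1, 1); // deprecated, but matches the code above
  microphone.connect(script_processor);
  script_processor.connect(audio_context.destination);
});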
Nothing works on mobile except save-to-home-screen apps. I filed a bug report with Apple developer support and got a response that it was a duplicate (which means they know; no clue if or when they will actually fix it).

WebRTC remote-stream video readyState : "muted" while audio is working

Everything works fine (createOffer, createAnswer, iceCandidates, ...), but the incoming remoteStream has 2 tracks: the audio track, which works, and the video track, which does not work and has readyState: "muted".
If I do createOffer on page load and then, when starting the call, do createOffer again with the same peerConnection, the video also displays correctly (but then in Firefox I get "Cannot create offer in state have-local-offer").
Any ideas what could be the problem? (The code is too complex to show here.)
Can you see the local video on both sides?
-> On a PC, only one browser can access the camera at a time (either Chrome or Firefox).
-> Try calling between two different machines, or Chrome-to-Chrome, or Firefox-to-Firefox.
"Cannot create offer in state have-local-offer"
It means you already created an offer and are trying to create another one without setting the remote answer.
Calling createOffer again is not a good idea. Make sure you create the offer in the following order (synchronously):
After receiving the stream in the gUM callback, add it to the peerConnection.
After adding the stream, create the offer; in the case of an answer, also set the remote offer before creating the answer. (See the sketch below.)
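A minimal sketch of that ordering on the caller side (pc, the constraints, and the signalling step are placeholders):
// Sketch: add the local stream first, then create the offer.
const pc = new RTCPeerConnection();

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function (localStream) {
    // 1. Add the local tracks before any offer is created.
    localStream.getTracks().forEach(function (track) {
      pc.addTrack(track, localStream);
    });
    // 2. Only now create the offer and set it as the local description.
    return pc.createOffer();
  })
  .then(function (offer) {
    return pc.setLocalDescription(offer);
  })
  .then(function () {
    // 3. Send pc.localDescription to the remote peer over your signalling channel.
    // On the answering side: setRemoteDescription(offer) first, then createAnswer().
  });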
I was having this issue while preparing the MediaStream in the iOS app. It turns out that I wasn't passing the correct RTCMediaConstraints.
The problem was resolved after I switched to [RTCMediaConstraints defaultConstraints].
For example:
- (RTCVideoTrack *)createLocalVideoTrack {
  RTCVideoTrack *localVideoTrack = nil;
  RTCMediaConstraints *mediaConstraints = [RTCMediaConstraints defaultConstraints];
  RTCAVFoundationVideoSource *source =
      [[self peerConnectionFactory] avFoundationVideoSourceWithConstraints:mediaConstraints];
  localVideoTrack =
      [[self peerConnectionFactory] videoTrackWithSource:source
                                                 trackId:kARDVideoTrackId];
  return localVideoTrack;
}
