Lag when playing mic audio directly to output using web audio api - javascript

I simply want to play audio coming in from the microphone directly to the output, using the code below. But there is a lag of about 0.2 seconds. Is there a way to reduce this delay?
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

var aCtx;
var analyser;
var microphone;

if (navigator.getUserMedia) {
  navigator.getUserMedia(
    { audio: true },
    function (stream) {
      aCtx = new AudioContext();
      microphone = aCtx.createMediaStreamSource(stream);
      var destination = aCtx.destination;
      microphone.connect(destination);
    },
    function () { console.log("Error 003."); }
  );
}

You can disable any pre-processing, which should reduce the delay to a minimum.
Instead of this ...
navigator.getUserMedia({
  audio: true
})
... you would then write this ...
navigator.mediaDevices.getUserMedia({
  audio: {
    autoGainControl: false,
    echoCancellation: false,
    noiseSuppression: false
  }
})
... to disable all pre-processing.
Please note that I also used navigator.mediaDevices.getUserMedia instead of navigator.getUserMedia. Usage of the latter is deprecated.
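For reference, here is a minimal sketch of the original mic-to-output routing with these constraints applied; the promise-based form is used because navigator.mediaDevices.getUserMedia returns a promise, and the variable names are just placeholders:

navigator.mediaDevices.getUserMedia({
  audio: {
    autoGainControl: false,
    echoCancellation: false,
    noiseSuppression: false
  }
})
.then(function (stream) {
  var aCtx = new AudioContext();
  var microphone = aCtx.createMediaStreamSource(stream);
  // Route the microphone straight to the speakers with no processing in between
  microphone.connect(aCtx.destination);
})
.catch(function (err) {
  console.log("getUserMedia error:", err);
});

Note that some latency from the audio input/output pipeline will remain even with all pre-processing disabled; how much depends on the browser and hardware.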

Related

Camera stream not loading on all devices with maximum quality requirements (HTML/Angular)

I'm developing an application in Angular to capture a photo with the maximum possible quality depending on the device's camera.
Currently, I have this code:
HTML:
<video #video id="video" autoplay muted playsinline></video>
Angular TS:
_requestCameraStream(width: number, height: number, secondTry: boolean) {
  if (navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices
      .getUserMedia({
        video: {
          facingMode: 'environment',
          width: { ideal: width },
          height: { ideal: height },
          frameRate: { ideal: 30 },
        },
      })
      .then(stream => {
        console.log('_getUserMedia -> stream loaded');
        this.loadStream(stream);
      })
      .catch(err => {
        console.log(err);
        if (!secondTry) {
          console.log('Started second try');
          this._requestCameraStream(2560, 1440, true);
        } else {
          this.router.navigateByUrl('id-document/camera-error');
        }
      });
  }
}

private loadStream(stream: MediaStream) {
  const videoDevices = stream.getVideoTracks();
  this.lastStream = stream;
  this.video!.nativeElement.srcObject = stream;
  this.video!.nativeElement.play();
  this.ref.detectChanges();
}
Basically I check if the device has a camera available and try to load it with the width and height values that the function receives. On the first try I call the function as follows:
this._requestCameraStream(4096, 2160, false);
If the stream fails to load (probably because the camera does not support 4K quality), it tries again with this._requestCameraStream(2560, 1440, true);
This actually works pretty well on most devices, but on a Galaxy Note 10 Plus the stream does not load, yet if I click the button to take the picture, the camera does capture the image in 4K quality.
I suspect that the camera has a higher resolution than the screen, so the camera can capture a 4K image but the screen can't display a 4K video preview. The problem is that the system does not trigger any warnings or errors that I could capture. It is as if the preview loaded successfully.
How can I detect and treat this error? Or maybe, is there any other way that I can request the camera to capture a maximum quality image with the preview loading correctly?
You can try defining a range of resolutions instead of trying only two:
async _requestCameraStream() {
  if (!navigator.mediaDevices.getUserMedia) return;
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: {
        facingMode: {
          exact: "environment"
        },
        width: { min: 2288, ideal: 4096, max: 4096 },
        height: { min: 1080, ideal: 2160, max: 2160 },
        frameRate: { ideal: 30 },
      },
    });
    if (stream) {
      console.log('_getUserMedia -> stream loaded');
      this.loadStream(stream);
    }
  } catch (err) {
    console.log(err);
    this.router.navigateByUrl('id-document/camera-error');
  }
}
I think your current approach is correct for capturing the maximum quality image; I have used a similar approach in one of my projects. I think the problem is with the video playback. Browsers have an autoplay policy, and it behaves differently in different browsers: it does not allow video content to play without any user intervention. Check this URL: https://developer.chrome.com/blog/autoplay/
I think you should add a muted attribute to the video element, and it would be better to ask the user to click before capturing the camera stream. Maybe this does not solve the problem you are facing, but the issue will show up in some browsers regardless; Apple devices, for example, do not allow any video content to play without user intervention.
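As an illustration only (the click handler name is hypothetical and not part of the original code), gating the camera start behind a user gesture and playing the video muted might look like this:

// Hypothetical handler wired to a button: start the camera only after a user
// gesture so the browser's autoplay policy cannot block playback.
onStartCameraClick() {
  this._requestCameraStream(4096, 2160, false);
}

private loadStream(stream: MediaStream) {
  this.lastStream = stream;
  const videoEl = this.video!.nativeElement;
  videoEl.muted = true; // muted playback is far less likely to be blocked
  videoEl.srcObject = stream;
  // play() returns a promise; a rejection here usually means autoplay was blocked
  videoEl.play().catch((err: any) => console.log('play() was blocked:', err));
  this.ref.detectChanges();
}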
Regards,
omi

Which alternatives exist for WebRTC getUserMedia, given the incompatibility with iOS?

Forgive me if the question is bad, but I developed a streaming service for work using WebRTC getUserMedia on the front-end, connected with Socket.IO to a Node.js back-end, and it has problems only with iPhones and Safari on macOS. Looking on Stack Overflow and other forums, I understood that this happens because it is not compatible. So my question is: what can I use instead?
Do I need to use a JavaScript library like React, or something else?
I believe your problem lies not with getUserMedia(), but with MediaRecorder. iOS, and indeed Safari in general, doesn't handle MediaRecorder correctly. (Is Safari taking over from IE as the incompatible browser everybody loves to hate?)
I was able to hack around this problem by creating a MediaRecorder polyfill that delivers motion JPEG rather than the webm usually produced when you encode video. It's video-only, and generates cheesy "video" at that.
If you put the index.js file for this at /js/jpegMediaRecorder.js, you can do something like this.
<script src="/js/jpegMediaRecorder.js"></script>
<script>
document.addEventListener('DOMContentLoaded', function () {
  function handleError (error) {
    console.log('getUserMedia error: ', error.message, error.name)
  }

  function onDataAvailableHandler (event) {
    if (event.data && event.data instanceof Blob && event.data.size !== 0) {
      /* send the data to somebody */
    }
  }

  function mediaReady (stream) {
    var mediaRecorder = new MediaRecorder(stream, {
      mimeType: 'image/jpeg', videoBitsPerSecond: 125000, qualityParameter: 0.9
    })
    mediaRecorder.addEventListener('dataavailable', onDataAvailableHandler)
    mediaRecorder.start(10)
  }

  function start () {
    const constraints = {
      video: {
        width: { min: 160, ideal: 176, max: 640 },
        height: { min: 120, ideal: 144, max: 480 },
        frameRate: { min: 4, ideal: 10, max: 30 }
      },
      audio: false
    }
    navigator.mediaDevices.getUserMedia(constraints)
      .then(mediaReady)
      .catch(handleError)
  }

  start()
})
</script>
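Since the question mentions Socket.IO on the server side, the placeholder inside onDataAvailableHandler could, for example, forward each JPEG blob to the server. This is only a sketch; the server URL and the 'frame' event name are assumptions, not part of the polyfill:

// Assumes the Socket.IO client library is loaded and a server is listening.
var socket = io('https://example.com')           // placeholder URL

function onDataAvailableHandler (event) {
  if (event.data && event.data instanceof Blob && event.data.size !== 0) {
    socket.emit('frame', event.data)             // 'frame' is a made-up event name
  }
}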

How to turn off an event in javascript

I'm making an audio transcription script for my webpage and I want it to stop listening to the user when it captures silence. I can capture the silence, but I can't delete or turn off the 'stopped_speaking' event. Is there a way to do it? If not, how can I resolve my problem? Thanks.
Here is my code:
function bg_startRecording() {
  navigator.mediaDevices.getUserMedia({
    video: false,
    audio: true
  }).then(async function(stream) {
    recorder = RecordRTC(stream, {
      type: 'audio',
      mimeType: 'audio/wav',
      recorderType: StereoAudioRecorder,
      disableLogs: false,
      numberOfAudioChannels: 1,
    });
    recorder.startRecording();

    var options = {};
    speechEvents = hark(stream, options);
    speechEvents.on('stopped_speaking', function() {
      speechEvents.off('stopped_speaking'); // <----- throws an error
      bg_stopRecording(); // function to stop recording and transcribe
    });
  });
}
The error:
main.js:49 Uncaught TypeError: speechEvents.off is not a function
    at Object.stopped_speaking (main.js:49)
    at harker.emit (hark.js:16)
    at hark.js:109
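If the events come from the hark library, one possible workaround, offered here only as a sketch and an assumption about the library version in use, is to stop the harker itself rather than trying to remove the listener, since older releases of hark do not expose off():

speechEvents.on('stopped_speaking', function() {
  // Assumption: hark exposes stop(), which halts its polling so no further
  // speaking/stopped_speaking events are emitted.
  speechEvents.stop();
  bg_stopRecording(); // stop recording and transcribe
});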

When I try to stream my webcam video using the getUserMedia() API, a black screen appears

I am trying to access the user's webcam by using navigator.getUserMedia(). I am assigning this stream to video.srcObject, but I am getting a black screen on the video.
I even tried with navigator.mediaDevices.getUserMedia().
<video controls id="webcam"></video>
<script>
  const webcam = document.getElementById("webcam");

  function startVideo() {
    navigator.getUserMedia(
      {
        video: true,
        audio: false
      },
      liveStream => {
        console.log(liveStream);
        webcam.setAttribute("controls", 'true');
        webcam.srcObject = liveStream;
        webcam.play();
      },
      error => console.log(error)
    );
  }

  startVideo();
</script>
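For comparison, here is a minimal sketch of the same idea using the promise-based navigator.mediaDevices.getUserMedia, with autoplay, muted, and playsinline on the video element since many browsers block unmuted autoplay. This is a sketch, not a confirmed fix for the black screen:

<video id="webcam" autoplay muted playsinline controls></video>
<script>
  const webcam = document.getElementById("webcam");

  navigator.mediaDevices.getUserMedia({ video: true, audio: false })
    .then(stream => {
      webcam.srcObject = stream;
      // play() returns a promise; log a rejection instead of failing silently
      return webcam.play();
    })
    .catch(err => console.log("getUserMedia/play error:", err));
</script>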

WebRTC Chrome Camera constraints

I am trying to set the getUserMedia video constraints, such as min/max frame rates and resolutions, in my PeerJS WebRTC application, which is a simple peer-to-peer chat application. I have been trying to integrate it into my application but it seems to break it. Any help would be greatly appreciated; other online tutorials look different from my app's setup. Down in function step1() is where I have been trying to set the constraints, but it just doesn't show the video anymore. Is this the correct place?
Also, will these constraints work on a video file playing instead of the webcam? I am using the Google Chrome flag that plays a video file instead of a camera.
navigator.getWebcam = (navigator.getUserMedia ||
  navigator.webkitGetUserMedia ||
  navigator.mozGetUserMedia ||
  navigator.msGetUserMedia);

// PeerJS object ** FOR PRODUCTION, GET YOUR OWN KEY at http://peerjs.com/peerserver **
var peer = new Peer({
  key: 'XXXXXXXXXXXXXXXX',
  debug: 3,
  config: {
    'iceServers': [{
      url: 'stun:stun.l.google.com:19302'
    }, {
      url: 'stun:stun1.l.google.com:19302'
    }, {
      url: 'turn:numb.viagenie.ca',
      username: "XXXXXXXXXXXXXXXXXXXXXXXXX",
      credential: "XXXXXXXXXXXXXXXXX"
    }]
  }
});

// On open, set the peer id so when peer is on we display our peer id as text
peer.on('open', function() {
  $('#my-id').text(peer.id);
});

peer.on('call', function(call) {
  // Answer automatically for demo
  call.answer(window.localStream);
  step3(call);
});

// Click handlers setup
$(function() {
  $('#make-call').click(function() {
    // Initiate a call!
    var call = peer.call($('#callto-id').val(), window.localStream);
    step3(call);
  });

  $('#end-call').click(function() {
    window.existingCall.close();
    step2();
  });

  // Retry if getUserMedia fails
  $('#step1-retry').click(function() {
    $('#step1-error').hide();
    step1();
  });

  // Get things started
  step1();
});

function step1() {
  // Get audio/video stream
  navigator.getWebcam({audio: true, video: true}, function(stream) {
    // Display the video stream in the video object
    $('#my-video').prop('src', URL.createObjectURL(stream));
    window.localStream = stream;
    step2();
  }, function() { $('#step1-error').show(); }); // Displays error
}

function step2() { // Adjust the UI
  $('#step1, #step3').hide();
  $('#step2').show();
}

function step3(call) {
  // Hang up on an existing call if present
  if (window.existingCall) {
    window.existingCall.close();
  }
  // Wait for stream on the call, then setup peer video
  call.on('stream', function(stream) {
    $('#their-video').prop('src', URL.createObjectURL(stream));
  });
  $('#step1, #step2').hide();
  $('#step3').show();
}
Your JavaScript looks invalid. You can't declare a var inside a function argument list. Did you paste wrong? Try:
var constraints = {
  audio: false,
  video: { mandatory: { minWidth: 1280, minHeight: 720 } }
};

navigator.getWebcam(constraints, function(stream) { /* etc. */ });
Now it's valid JavaScript at least. I'm not familiar with PeerJS, but the constraints you're using look like the Chrome ones, so if you're on Chrome then hopefully they'll work, unless PeerJS does it differently for some reason.
Your subject says "WebRTC Camera constraints" so I should mention that the Chrome constraints are non-standard. See this answer for an explanation.
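For reference, the standard (spec) constraint syntax for the same requirement would look roughly like this with navigator.mediaDevices.getUserMedia; whether PeerJS forwards such constraints unchanged is an assumption here, not something verified:

var constraints = {
  audio: true,
  video: {
    width:  { min: 1280 },   // standard counterpart of Chrome's mandatory minWidth
    height: { min: 720 },    // standard counterpart of Chrome's mandatory minHeight
    frameRate: { ideal: 30 }
  }
};

navigator.mediaDevices.getUserMedia(constraints)
  .then(function(stream) { /* attach the stream and pass it to peer.call(...) */ })
  .catch(function(err) { console.log(err); });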
