Which alternatives exist for WebRTC getUserMedia given the incompatibility with iOS? - javascript

Forgive me if the question is bad, but I developed a streaming service for work using WebRTC getUserMedia on the front end, connected with Socket.IO on Node.js, and it has problems only with iPhones and Safari on macOS. Looking on Stack Overflow and other forums, I understood this happens because getUserMedia is not compatible there. So my question is: what can I use as an alternative?
Do I need to use a JavaScript library like ReactJS, or something else?

I believe your problem lies not with getUserMedia(), but with MediaRecorder. iOS, and indeed Safari in general, doesn't handle MediaRecorder correctly. (Is Safari taking over from IE as the incompatible browser everybody loves to hate?)
I was able to hack around this problem by creating a MediaRecorder polyfill that delivers motion JPEG rather than the WebM you usually get when you record video. It's video-only, and it generates cheesy "video" at that.
If you put the index.js file for this at /js/jpegMediaRecorder.js, you can do something like this:
<script src="/js/jpegMediaRecorder.js"></script>
<script>
  document.addEventListener('DOMContentLoaded', function () {
    function handleError (error) {
      console.log('getUserMedia error: ', error.message, error.name)
    }

    function onDataAvailableHandler (event) {
      if (event.data && event.data instanceof Blob && event.data.size !== 0) {
        /* send event.data (a JPEG Blob) to somebody */
      }
    }

    function mediaReady (stream) {
      var mediaRecorder = new MediaRecorder(stream, {
        mimeType: 'image/jpeg',    // the polyfill emits JPEG frames
        videoBitsPerSecond: 125000,
        qualityParameter: 0.9      // polyfill-specific JPEG quality option
      })
      mediaRecorder.addEventListener('dataavailable', onDataAvailableHandler)
      mediaRecorder.start(10)
    }

    function start () {
      const constraints = {
        video: {
          width: { min: 160, ideal: 176, max: 640 },
          height: { min: 120, ideal: 144, max: 480 },
          frameRate: { min: 4, ideal: 10, max: 30 }
        },
        audio: false
      }
      navigator.mediaDevices.getUserMedia(constraints)
        .then(mediaReady)
        .catch(handleError)
    }

    start()
  })
</script>
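For reference, the core of such a polyfill is straightforward: paint the camera stream onto a canvas on a timer and hand each frame out as a JPEG Blob. Here is a minimal sketch of the idea; it is not the actual polyfill's code, and the JpegRecorder name and its option handling are illustrative only.
// Illustrative motion-JPEG recorder: draws video frames to a canvas
// and emits each one as a JPEG Blob, roughly what the polyfill does.
function JpegRecorder (stream, options) {
  var video = document.createElement('video')
  video.srcObject = stream
  video.muted = true
  video.play()

  var canvas = document.createElement('canvas')
  var ctx = canvas.getContext('2d')
  var quality = (options && options.qualityParameter) || 0.8
  var timer = null
  var self = this

  this.ondataavailable = null

  this.start = function (intervalMs) {
    timer = setInterval(function () {
      canvas.width = video.videoWidth || 176
      canvas.height = video.videoHeight || 144
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height)
      // toBlob is asynchronous; each call yields one JPEG frame
      canvas.toBlob(function (blob) {
        if (self.ondataavailable) self.ondataavailable({ data: blob })
      }, 'image/jpeg', quality)
    }, intervalMs)
  }

  this.stop = function () {
    clearInterval(timer)
    stream.getTracks().forEach(function (track) { track.stop() })
  }
}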

Related

Camera stream not loading on all devices with maximum quality requirements (HTML/Angular)

I'm developing an application in Angular to capture a photo with the maximum possible quality depending on the device's camera.
Currently, I have this code:
HTML:
<video #video id="video" autoplay muted playsinline></video>
Angular TS:
_requestCameraStream(width: number, height: number, secondTry: boolean) {
  if (navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices
      .getUserMedia({
        video: {
          facingMode: 'environment',
          width: { ideal: width },
          height: { ideal: height },
          frameRate: { ideal: 30 },
        },
      })
      .then(stream => {
        console.log('_getUserMedia -> stream loaded');
        this.loadStream(stream);
      })
      .catch(err => {
        console.log(err);
        if (!secondTry) {
          console.log('Started second try');
          this._requestCameraStream(2560, 1440, true);
        } else {
          this.router.navigateByUrl('id-document/camera-error');
        }
      });
  }
}
private loadStream(stream: MediaStream) {
  const videoDevices = stream.getVideoTracks();
  this.lastStream = stream;
  this.video!.nativeElement.srcObject = stream;
  this.video!.nativeElement.play();
  this.ref.detectChanges();
}
Basically I check if the device has a camera available and try to load it with the width and height values that the function receives. On the first try I call the function as follows:
this._requestCameraStream(4096, 2160, false);
If the stream fails to load (probably because the camera does not support 4k quality) then it tries again with the values this._requestCameraStream(2560, 1440, true);
This is actually working pretty well on most devices, but on a Galaxy Note 10 Plus, the stream does not load, but if I click the button to take the picture, the camera does capture the image in 4k quality.
I suspect that the camera has a higher resolution than the screen, so the camera can capture a 4k image, but the screen can't load a 4k video as a preview. The problem is: the system does not trigger any warning or errors that I could capture. It is as if the preview loaded successfully.
How can I detect and treat this error? Or maybe, is there any other way that I can request the camera to capture a maximum quality image with the preview loading correctly?
You can try defining a range of resolutions instead of trying only two:
async _requestCameraStream() {
  if (!navigator.mediaDevices.getUserMedia) return;
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: {
        facingMode: {
          exact: 'environment'
        },
        width: { min: 2288, ideal: 4096, max: 4096 },
        height: { min: 1080, ideal: 2160, max: 2160 },
        frameRate: { ideal: 30 },
      },
    });
    if (stream) {
      console.log('_getUserMedia -> stream loaded');
      this.loadStream(stream);
    }
  } catch (err) {
    console.log(err);
    this.router.navigateByUrl('id-document/camera-error');
  }
}
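To verify what the browser actually granted (as opposed to what you asked for), you can also read the live track's settings with the standard MediaStreamTrack.getSettings(). A minimal sketch; the logging helper is mine, not part of the original code:
// Hypothetical helper: reports what the camera actually delivered.
function logActualResolution(stream) {
  const track = stream.getVideoTracks()[0];
  // getSettings() reports the values the browser actually applied,
  // which may differ from the requested ideal/min/max constraints.
  const settings = track.getSettings();
  console.log(`camera delivered ${settings.width}x${settings.height} @ ${settings.frameRate}fps`);
}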
I think your current approach is correct for capturing the maximum quality image; I have used a similar approach in one of my projects. I think the problem is with the video playback. Browsers have an autoplay policy, and it behaves differently in different browsers: the policy does not allow video content to play without any user intervention. Check this URL: https://developer.chrome.com/blog/autoplay/
I think you should add a muted attribute to the video element, and it would be better to ask the user to click before capturing the camera stream. This may not solve the exact problem you are facing, but it will come up in some browsers regardless; Apple devices in particular do not allow any video content to play without user intervention.
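For example, here is a minimal sketch of deferring capture until a user click, with a muted inline preview so autoplay policies allow playback; the #start-button and #video ids are placeholders for your own markup:
// Start the camera only after an explicit user gesture, with a muted
// preview, so browser autoplay policies do not block playback.
document.querySelector('#start-button').addEventListener('click', function () {
  navigator.mediaDevices.getUserMedia({ video: true })
    .then(function (stream) {
      var video = document.querySelector('#video');
      video.muted = true;        // required by most autoplay policies
      video.playsInline = true;  // keeps iOS Safari from going fullscreen
      video.srcObject = stream;
      return video.play();       // play() inside a click handler is allowed
    })
    .catch(function (err) { console.log(err); });
});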
Regards,
omi

Write or stream audio (live-voice) file to variable as a binary in Node.js

I am working with audio streams in Node.js. For now my code doesn't use util.promisify, and it has 3 stages. After the 2nd .pipe I write the file to disk in WAV format with the required parameters.
Code example below:
import { FileWriter } from 'wav';
import { OpusEncoder } from '@discordjs/opus';
import { EndBehaviorType } from '@discordjs/voice';

const filename = `./${Date.now()}-${userId}.wav`;
const encoder = new OpusEncoder(16000, 1);

receiver
  .subscribe(userId, {
    end: {
      behavior: EndBehaviorType.AfterSilence,
      duration: 100,
    },
  })
  // OpusDecodingStream is a custom class which converts the audio,
  // like a gzip stage for the file.
  .pipe(new OpusDecodingStream({}, encoder))
  .pipe(
    // Writes the wav file to disk; can also be replaced with FileRead,
    // part of the wav module
    new FileWriter(filename, {
      channels: 1,
      sampleRate: 16000,
    }),
  );
The problem is: I need to transfer (not stream!) the resulting audio file in binary format via an axios POST request. So I guess it's a bit wrong to write the file to disk; I'd rather keep it in a variable and, once the stream ends, send it straight to the required URL. Something (by logic) like what I'd like to see:
// other code
const fileStringBinary = await receiver
  .subscribe(userId, {
    end: {
      behavior: EndBehaviorType.AfterSilence,
      duration: 100,
    },
  })
  .pipe(new OpusDecodingStream({}, encoder))
  .pipe(
    new FileWriter(filename, {
      channels: 1,
      sampleRate: 16000,
    }),
  );

await axios.post('https://url.com', {
  data: fileStringBinary
});
Unfortunately I am not so good with streams, especially audio ones, so I am looking for a bit of help; any useful advice is welcome.
I understand that I could write the file to a directory, find it there, read it once again with fs.createReadStream, and then POST it to the required URL. That is not what I need: I'd like to skip these useless stages of writing and then reading. I believe there is a way to transform the stream into binary and write it into a JS variable.
That was a bit tricky after all, but I guess I figured it out:
import { opus } from 'prism-media';
import axios from 'axios';

const stream = receiver
  .subscribe(userId, {
    end: {
      behavior: EndBehaviorType.AfterSilence,
      duration: 100,
    },
  })
  .pipe(
    new opus.OggLogicalBitstream({
      opusHead: new opus.OpusHead({
        channelCount: 2,
        sampleRate: 48000,
      }),
      pageSizeControl: {
        maxPackets: 10,
      },
      crc: false,
    }),
  );

const data = [];

stream.on('data', (chunk) => {
  data.push(chunk);
});

stream.on('end', async () => {
  try {
    const response = await axios.post(
      `https://url.com${postParams}`,
      Buffer.concat(data),
      {
        headers: {
          Authorization: `Api-Key ${token}`,
          'Content-Type': 'application/x-www-form-urlencoded',
        },
      },
    );
    console.log(response);
  } catch (e) {
    console.log(e);
  }
});
Unfortunately, I haven't found a better solution than the old-school event model with data and end handlers. My use case is Discord.js voice recording without a file, passing the stream to voice recognition.
I will be glad if someone provides a better-syntax solution, and in that case I'll accept that answer as the solution.
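One candidate, if your Node version allows it: Node 16.7+ ships built-in stream consumers that collapse the event handling into a single await. A sketch of the equivalent logic, with the same URL, postParams and token placeholders as above:
import { buffer } from 'node:stream/consumers';
import axios from 'axios';

// buffer() drains the readable stream and resolves with one Buffer,
// replacing the manual 'data'/'end' event bookkeeping above.
const body = await buffer(stream);

const response = await axios.post(`https://url.com${postParams}`, body, {
  headers: {
    Authorization: `Api-Key ${token}`,
    'Content-Type': 'application/x-www-form-urlencoded',
  },
});
console.log(response.status);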

Using applyConstraints(constraints) to change Twilio's LocalVideoTrack resolution of an ongoing video chat

I have tried this code, but it is failing; it does nothing:
const constraints = {
  width: {
    min: 320,
    max: 480
  },
  height: {
    min: 240,
    max: 400
  },
  advanced: [
    {
      width: 1920,
      height: 1280
    },
    {
      aspectRatio: 1.333
    }
  ]
};

navigator.mediaDevices.getUserMedia({
  video: true
})
  .then(mediaStream => {
    const track = mediaStream.getVideoTracks()[0];
    track.applyConstraints(constraints)
      .then(() => {
        // Do something with the track such as using the Image Capture API.
      })
      .catch(e => {
        console.log(e);
        // The constraints could not be satisfied by the available devices.
      });
  });
Any suggestions on how to change the video resolution of the local video track on the fly when using Twilio's Video Chat API?
If the track is already published, you can access the MediaStreamTrack and change the capture constraints:
const videoTrackPublication = [...room.localParticipant.tracks.values()][0];

videoTrackPublication._signaling._trackTransceiver.track.applyConstraints({
  width: 640,
  height: 360
}).catch(e => {
  console.error('Error while applying capture constraints:', e.message);
});
If the track isn't published, just pass in the MediaTrackConstraints:
const { createLocalTracks } = require('twilio-video');

const tracks = await createLocalTracks({
  video: <MediaTrackConstraints>
});
You can read more about bandwidth consumption under Track Priority and the Network Bandwidth Profile API.
Using the Track Priority API will allow you to change the track priority.
Network Bandwidth Profile API consumes Track priorities to determine which Tracks are more relevant from the bandwidth allocation perspective.
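For instance, a short sketch of setting priorities with twilio-video 2.x; the publishTrack options object and publication.setPriority come from that SDK generation, so verify them against your version:
// Publish the local video track with an explicit priority...
const publication = await room.localParticipant.publishTrack(track, {
  priority: 'high'
});

// ...or change the priority later, on the fly.
publication.setPriority('low');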
Can't get it to work:
var videoConstraints = {
  width: { min: 320, max: 1242, ideal: 1080 },
  height: { min: 480, max: 2688, ideal: 1920 },
  resizeMode: 'crop-and-scale',
  aspectRatio: 0.5625 // 9:16
};

Video.createLocalVideoTrack({
  video: videoConstraints,
}).then(track => {
  localMediaContainer.appendChild(track.attach());
  that.twilioDoctorTrackVideo = track;
});
Whatever I do, the output video is always 640x480.

Unable to obtain desktop picker dialog #electron

I am unable to get the desktop picker dialog for available sources. I am a newbie; can someone guide me on what I am missing? In Chrome we would use "chrome.desktopCapture.chooseDesktopMedia". I obtained the sources from the code below.
function onAccessApproved(error, sources) {
  if (error) throw error;
  for (var i = 0; i < sources.length; ++i) {
    navigator.webkitGetUserMedia({
      audio: false,
      video: {
        mandatory: {
          chromeMediaSource: 'desktop',
          chromeMediaSourceId: sources[i].id,
          minWidth: 1280,
          maxWidth: 1280,
          minHeight: 720,
          maxHeight: 720
        }
      }
    }, gotShareStream, errorCallback);
    return;
  }
}
I have tried the Option link, but I am getting a "BrowserWindow is undefined" error.
Thanks!
I haven't used Electron, but in WebRTC you need to use something like video: { optional: [{ sourceId: source.id }] }. And don't do this for all the sources; do it only to get a stream from the one source you want.
To get the available sources, use navigator.mediaDevices.enumerateDevices() and then filter them by kind, which can be audioinput, audiooutput or videoinput.
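A minimal sketch of that enumeration and filtering:
// List every camera the browser can see, then request a stream
// from the first one by its deviceId.
navigator.mediaDevices.enumerateDevices()
  .then(function (devices) {
    var cameras = devices.filter(function (d) { return d.kind === 'videoinput'; });
    if (cameras.length === 0) throw new Error('no cameras found');
    return navigator.mediaDevices.getUserMedia({
      video: { deviceId: cameras[0].deviceId }
    });
  })
  .then(function (stream) {
    /* attach the stream somewhere */
  })
  .catch(function (err) { console.log(err); });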

WebRTC Chrome Camera constraints

I am trying to set the getUserMedia video constraints, like min/max frame rates and resolutions, in my PeerJS WebRTC application, which is a simple peer-to-peer chat application. I have been trying to integrate them into my application but it seems to break it. Any help would be greatly appreciated; other online tutorials look different from my app's setup. Down at function 1 is where I have been trying to set the constraints; it just doesn't show the video anymore. Is this the correct place?
Also, will these constraints work on a video file playing instead of the webcam? I am using the Chrome flag that plays a video file instead of a camera.
navigator.getWebcam = (navigator.getUserMedia ||
  navigator.webkitGetUserMedia ||
  navigator.mozGetUserMedia ||
  navigator.msGetUserMedia);

// PeerJS object ** FOR PRODUCTION, GET YOUR OWN KEY at http://peerjs.com/peerserver **
var peer = new Peer({
  key: 'XXXXXXXXXXXXXXXX',
  debug: 3,
  config: {
    'iceServers': [{
      url: 'stun:stun.l.google.com:19302'
    }, {
      url: 'stun:stun1.l.google.com:19302'
    }, {
      url: 'turn:numb.viagenie.ca',
      username: "XXXXXXXXXXXXXXXXXXXXXXXXX",
      credential: "XXXXXXXXXXXXXXXXX"
    }]
  }
});

// On open, set the peer id so when the peer is on we display our peer id as text
peer.on('open', function() {
  $('#my-id').text(peer.id);
});

peer.on('call', function(call) {
  // Answer automatically for demo
  call.answer(window.localStream);
  step3(call);
});

// Click handlers setup
$(function() {
  $('#make-call').click(function() {
    // Initiate a call!
    var call = peer.call($('#callto-id').val(), window.localStream);
    step3(call);
  });

  $('#end-call').click(function() { // the '#' was missing from this selector
    window.existingCall.close();
    step2();
  });

  // Retry if getUserMedia fails
  $('#step1-retry').click(function() {
    $('#step1-error').hide();
    step1(); // was step(), which is undefined
  });

  // Get things started
  step1();
});

function step1() {
  // Get audio/video stream
  navigator.getWebcam({audio: true, video: true}, function(stream) {
    // Display the video stream in the video object
    $('#my-video').prop('src', URL.createObjectURL(stream));
    window.localStream = stream;
    step2();
  }, function() { $('#step1-error').show(); }); // Displays error
}

function step2() { // Adjust the UI
  $('#step1, #step3').hide(); // one selector string, not two arguments
  $('#step2').show();
}

function step3(call) {
  // Hang up on an existing call if present
  if (window.existingCall) {
    window.existingCall.close();
  }

  // Wait for stream on the call, then set up peer video
  call.on('stream', function(stream) {
    $('#their-video').prop('src', URL.createObjectURL(stream));
  });

  $('#step1, #step2').hide();
  $('#step3').show();
}
Your JavaScript looks invalid. You can't declare a var inside a function argument list. Did you paste it wrong? Try:
var constraints = {
  audio: false,
  video: { mandatory: { minWidth: 1280, minHeight: 720 } }
};

navigator.getWebcam(constraints, function(stream) { /* etc. */ });
Now it's valid JavaScript at least. I'm not familiar with PeerJS, but the constraints you're using look like the Chrome ones, so if you're on Chrome then hopefully they'll work, unless PeerJS does it differently for some reason.
Your subject says "WebRTC Camera constraints" so I should mention that the Chrome constraints are non-standard. See this answer for an explanation.
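For comparison, here is a sketch of the same request in standard constraint syntax, which spec-compliant browsers understand without the Chrome-only mandatory wrapper:
// Standard (spec) constraint syntax; works with navigator.mediaDevices.
var standardConstraints = {
  audio: false,
  video: {
    width: { min: 1280 },
    height: { min: 720 }
  }
};

navigator.mediaDevices.getUserMedia(standardConstraints)
  .then(function(stream) { /* use the stream */ })
  .catch(function(err) { console.log(err); });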
