I am new to WebRTC. I have built a streaming server in Node.js that works fine with uploaded mp4 files. Now I have managed to access the webcam in HTML5 with WebRTC, using the code below:
if (!navigator.getUserMedia) {
    navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
}
if (!navigator.getUserMedia) {
    alert('getUserMedia not supported in this browser.');
}

navigator.getUserMedia(mediaOptions, success, function(e) {
    console.log(e);
});

function success(stream) {
    var video = document.querySelector("#player");
    video.src = window.URL.createObjectURL(stream);
    socket.emit('my other event', { my: stream });
}
As you can see I am sending the stream, but on the server end I receive nothing for it. Other data arrives fine. Please help!
The stream object you are sending is not the actual media data; it is only a local handle to it. For audio you have to capture the samples yourself with the Web Audio API's ScriptProcessorNode:
https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#ScriptProcessorNode
https://developer.mozilla.org/en-US/docs/Web/API/ScriptProcessorNode
Video is not so straightforward: you have to draw it onto a canvas to get at the pixel data. See this thread:
Get raw pixel data from HTML5 video
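For example, here is a rough sketch of that canvas approach, reusing the #player video element and the socket from your code (the frame interval and the 'video-frame' event name are just placeholders):

// rough sketch: grab frames from the playing <video> via a canvas
// and send the raw pixel data over the existing socket.io connection
var video = document.querySelector('#player');
var canvas = document.createElement('canvas');
var ctx = canvas.getContext('2d');

setInterval(function () {
    if (video.videoWidth === 0) return; // video not ready yet
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    ctx.drawImage(video, 0, 0);
    // raw RGBA pixels of the current frame
    var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    socket.emit('video-frame', {        // event name is just an example
        width: canvas.width,
        height: canvas.height,
        data: frame.data.buffer         // ArrayBuffer of pixel data
    });
}, 100); // ~10 fps; tune to taste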
Related
Hi,
I am creating a pipeline where I need to access data from the camera and run some OpenCV algorithms on it. I am able to send the video from the source using WebRTC: https://lostechies.com/derickbailey/2014/03/13/build-a-local-webcam-with-webrtc-in-less-than-20-lines/
What I need help with is how to receive the video stream in Python and do the processing there. How can I get the video feed from a WebRTC stream into the Python backend?
This is the JavaScript code I am running:
(function() {
    var mediaOptions = { audio: false, video: true };

    if (!navigator.getUserMedia) {
        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
    }
    if (!navigator.getUserMedia) {
        return alert('getUserMedia not supported in this browser.');
    }

    navigator.getUserMedia(mediaOptions, success, function(e) {
        console.log(e);
    });

    function success(stream) {
        var video = document.querySelector("#player");
        video.src = window.URL.createObjectURL(stream);
    }
})();
I need help receiving the video from this JavaScript in Python.
I'm the author of aiortc. Have you checked out the server example? It illustrates how to process video using OpenCV:
https://github.com/jlaine/aiortc/tree/master/examples/server
https://webrtchacks.com/webrtc-cv-tensorflow/ shows a fairly in-depth tutorial for combining WebRTC and TensorFlow. You can probably swap TensorFlow out for OpenCV easily. That approach captures a frame from the webcam and sends it over HTTP every once in a while. If you want to be more real-time than that, you will have to run WebRTC on the server, e.g. using https://github.com/jlaine/aiortc
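For what it's worth, the browser side of that capture-and-POST approach could look roughly like this (the /process endpoint, the interval, and the JPEG quality are placeholders; the Python side would just decode the received JPEG with OpenCV):

// rough sketch: periodically capture a JPEG from the local <video>
// and POST it to a (hypothetical) Python backend for OpenCV processing
var video = document.querySelector('#player');
var canvas = document.createElement('canvas');

function sendFrame() {
    if (video.videoWidth === 0) return; // video not ready yet
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    canvas.getContext('2d').drawImage(video, 0, 0);
    canvas.toBlob(function (blob) {
        var form = new FormData();
        form.append('frame', blob, 'frame.jpg');
        fetch('/process', { method: 'POST', body: form })  // endpoint is an example
            .catch(function (err) { console.log(err); });
    }, 'image/jpeg', 0.8);
}

setInterval(sendFrame, 500); // a couple of frames per second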
Since RTCPeerConnection.addStream() is deprecated, how does one now add a media stream such as video to a peerConnection?
Here is the video stream capture function that I am currently using. I wanted to see if I am getting a media stream back from my camera and indeed I am. But then how do I add the stream to a peer connection?
var constraints = {
    audio: false,
    video: true
};

function successCallback(stream) {
    window.stream = stream; // stream available to console
    if (window.URL) {
        document.getElementById('localVid').src = window.URL.createObjectURL(stream);
    } else {
        document.getElementById('localVid').src = stream;
    }
    peerConn.addStream(stream); // deprecated, so onaddstream is not fired
}

function errorCallback(error) {
    console.log('navigator.getUserMedia error: ', error);
}
I have successfully set up a signaling system for data channels and now I am moving on to streaming; however, the limited resources on this topic are frustrating and I need some help.
Mozilla decided to deprecate addStream without Chrome implementing the new alternative addTrack yet. addStream is still supported and I doubt it is going to be removed while it remains the one and only method available in Chrome.
Note that onaddstream is fired when a remote stream is added to the peer connection, not when a local stream is added.
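For reference, a minimal sketch of the track-based API next to the old one, assuming peerConn and the stream from your successCallback ('remoteVid' is a placeholder element id):

// new track-based API where available, old stream-based API otherwise
if ('addTrack' in peerConn) {
    stream.getTracks().forEach(function (track) {
        peerConn.addTrack(track, stream);
    });
} else {
    peerConn.addStream(stream); // still works in Chrome for now
}

// on the receiving peer, ontrack is the counterpart of onaddstream
peerConn.ontrack = function (event) {
    // 'remoteVid' is a placeholder element id
    document.getElementById('remoteVid').srcObject = event.streams[0];
};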
I want to make a simple audio only stream over WebRTC, using Peer.js. I'm running the simple PeerServer locally.
The following works perfectly fine in Firefox 30, but I can't get it to work in Chrome 35. I would suspect something was wrong with the PeerJS setup, but Chrome -> Firefox works perfectly fine, while Chrome -> Chrome seems to send the stream but won't play it over the speakers.
Setting up getUserMedia
Note: uncommenting the lines below will let me hear the loopback in Chrome and Firefox.
navigator.getUserMedia = (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia);
window.AudioContext = window.AudioContext || window.webkitAudioContext;
if (navigator.getUserMedia) {
    navigator.getUserMedia({video: false, audio: true}, getMediaSuccess, getMediaError);
} else {
    alert('getUserMedia not supported.');
}

var localMediaStream;
//var audioContext = new AudioContext();

function getMediaSuccess(mediaStream) {
    //var microphone = audioContext.createMediaStreamSource(mediaStream);
    //microphone.connect(audioContext.destination);
    localMediaStream = mediaStream;
}

function getMediaError(err) {
    alert('getUserMedia error. See console.');
    console.error(err);
}
Making the connection
var peer = new Peer({host: '192.168.1.129', port: 9000});

peer.on('open', function(id) {
    console.log('My ID:', id);
});

peer.on('call', function(call) {
    console.log('answering call with', localMediaStream);
    call.answer(localMediaStream);
    // THIS WORKS IN CHROME, localMediaStream exists

    call.on('stream', function(stream) {
        console.log('streamReceived', stream);
        // THIS WORKS IN CHROME, the stream has come through
        var audioContext = new AudioContext();
        var audioStream = audioContext.createMediaStreamSource(stream);
        audioStream.connect(audioContext.destination);
        // I HEAR AUDIO IN FIREFOX, BUT NOT CHROME
    });

    call.on('error', function(err) {
        console.log(err);
        // LOGS NO ERRORS
    });
});

function connect(id) {
    var voiceStream = peer.call(id, localMediaStream);
}
This still appears to be an issue even in Chrome 73.
The solution that saved me for now is to also connect the media stream to a muted HTML audio element. This seems to make the stream work and audio starts flowing into the WebAudio nodes.
This would look something like:
let a = new Audio();
a.muted = true;
a.srcObject = stream;
a.addEventListener('canplaythrough', () => {
    a = null;
});
let audioStream = audioContext.createMediaStreamSource(stream);
audioStream.connect(audioContext.destination);
JSFiddle: https://jsfiddle.net/jmcker/4naq5ozc/
Original Chromium issue and workaround:
https://bugs.chromium.org/p/chromium/issues/detail?id=121673#c121
New Chromium issues:
https://bugs.chromium.org/p/chromium/issues/detail?id=687574
https://bugs.chromium.org/p/chromium/issues/detail?id=933677
In Chrome, it is currently a known bug that remote audio streams gathered from a peer connection are not accessible through the Web Audio API.
Latest comment on the bug:
We are working really hard towards the feature. The reason why this
takes long time is that we need to move the APM to chrome first,
implement a render mixer to get the unmixed data from WebRtc, then we
can hook up the remote audio stream to webaudio.
It was recently patched in Firefox; as I remember, this was an issue there as well in the past.
I was unable to play the stream using Web Audio, but I did manage to play it using a basic audio element:
var audio = new Audio();
audio.src = (URL || webkitURL || mozURL).createObjectURL(remoteStream);
audio.play();
Recently I tried to use JavaScript to record an audio stream.
I could not find any example code that works.
Is there any browser that supports this?
Here is my code:
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia ||
                         navigator.mozGetUserMedia || navigator.msGetUserMedia;

navigator.getUserMedia({ audio: true }, gotStream, null);

function gotStream(stream) {
    msgStream = stream;
    msgStreamRecorder = stream.record(); // no method record :(
}
getUserMedia gives you access to the device, but it is up to you to record the audio. To do that, you'll want to 'listen' to the device, building a buffer of the data. Then when you stop listening to the device, you can format that data as a WAV file (or any other format). Once formatted you can upload it to your server, S3, or play it directly in the browser.
To listen to the data in a way that is useful for building your buffer, you will need a ScriptProcessorNode. A ScriptProcessorNode basically sits between the input (microphone) and the output (speakers), and gives you a chance to manipulate the audio data as it streams. Unfortunately the implementation is not straightforward.
You'll need:
getUserMedia to access the device
AudioContext to create a MediaStreamAudioSourceNode and a ScriptProcessorNode
MediaStreamAudioSourceNode to represent the audio stream
ScriptProcessorNode to get access to the streaming audio data via its onaudioprocess event. The event exposes the channel data that you'll build your buffer with.
Putting it all together:
navigator.getUserMedia({audio: true},
    function(stream) {
        // create the MediaStreamAudioSourceNode
        var context = new AudioContext();
        var source = context.createMediaStreamSource(stream);
        var recLength = 0,
            recBuffersL = [],
            recBuffersR = [];
        var node;
        // create a ScriptProcessorNode
        if (!context.createScriptProcessor) {
            node = context.createJavaScriptNode(4096, 2, 2);
        } else {
            node = context.createScriptProcessor(4096, 2, 2);
        }
        // listen to the audio data, and record into the buffer
        // (copy the channel data: the underlying buffers may be reused between events)
        node.onaudioprocess = function(e) {
            recBuffersL.push(new Float32Array(e.inputBuffer.getChannelData(0)));
            recBuffersR.push(new Float32Array(e.inputBuffer.getChannelData(1)));
            recLength += e.inputBuffer.getChannelData(0).length;
        };
        // connect the ScriptProcessorNode with the input audio
        source.connect(node);
        // if the ScriptProcessorNode is not connected to an output,
        // the "onaudioprocess" event is not triggered in Chrome
        node.connect(context.destination);
    },
    function(e) {
        // do something about errors
    });
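For completeness, here is a minimal sketch of the "format that data as a WAV file" step, reusing recBuffersL, recBuffersR, recLength and the AudioContext from the snippet above:

// minimal sketch: merge the recorded buffers and encode a 16-bit stereo PCM WAV blob
function mergeBuffers(buffers, length) {
    var result = new Float32Array(length);
    var offset = 0;
    for (var i = 0; i < buffers.length; i++) {
        result.set(buffers[i], offset);
        offset += buffers[i].length;
    }
    return result;
}

function encodeWAV(left, right, sampleRate) {
    // interleave the two channels
    var interleaved = new Float32Array(left.length + right.length);
    for (var i = 0, j = 0; i < left.length; i++) {
        interleaved[j++] = left[i];
        interleaved[j++] = right[i];
    }

    var buffer = new ArrayBuffer(44 + interleaved.length * 2);
    var view = new DataView(buffer);

    function writeString(offset, str) {
        for (var k = 0; k < str.length; k++) {
            view.setUint8(offset + k, str.charCodeAt(k));
        }
    }

    // RIFF/WAVE header for 16-bit stereo PCM
    writeString(0, 'RIFF');
    view.setUint32(4, 36 + interleaved.length * 2, true);
    writeString(8, 'WAVE');
    writeString(12, 'fmt ');
    view.setUint32(16, 16, true);             // fmt chunk size
    view.setUint16(20, 1, true);              // PCM format
    view.setUint16(22, 2, true);              // 2 channels
    view.setUint32(24, sampleRate, true);
    view.setUint32(28, sampleRate * 4, true); // byte rate
    view.setUint16(32, 4, true);              // block align
    view.setUint16(34, 16, true);             // bits per sample
    writeString(36, 'data');
    view.setUint32(40, interleaved.length * 2, true);

    // convert float samples to 16-bit signed integers
    for (var n = 0, offset = 44; n < interleaved.length; n++, offset += 2) {
        var s = Math.max(-1, Math.min(1, interleaved[n]));
        view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
    }

    return new Blob([view], { type: 'audio/wav' });
}

// usage, e.g. when you stop recording:
// var wav = encodeWAV(mergeBuffers(recBuffersL, recLength),
//                     mergeBuffers(recBuffersR, recLength),
//                     context.sampleRate);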
Rather than building all of this yourself I suggest you use the AudioRecorder code, which is awesome. It also handles writing the buffer to a WAV file. Here is a demo.
Here's another great resource.
For browsers that support the MediaRecorder API, use it (see the sketch after this list).
For older browsers that do not support the MediaRecorder API, there are three ways to do it:
as wav
all code client-side.
uncompressed recording.
source code --> http://github.com/mattdiamond/Recorderjs
as mp3
all code client-side.
compressed recording.
source code --> http://github.com/Mido22/mp3Recorder
as opus packets (can get output as wav, mp3 or ogg)
client and server(node.js) code.
compressed recording.
source code --> http://github.com/Mido22/recordOpus
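For the first option, a minimal MediaRecorder sketch could look like this (the five-second timeout is just an example; check MediaRecorder.isTypeSupported for the formats your target browsers offer):

// minimal MediaRecorder sketch: record the microphone and collect the chunks
navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    var chunks = [];
    var recorder = new MediaRecorder(stream); // pass { mimeType: ... } if you need a specific format

    recorder.ondataavailable = function (e) {
        chunks.push(e.data);
    };
    recorder.onstop = function () {
        var blob = new Blob(chunks, { type: recorder.mimeType });
        var audio = new Audio(URL.createObjectURL(blob));
        audio.play(); // or upload the blob to your server instead
    };

    recorder.start();
    setTimeout(function () { recorder.stop(); }, 5000); // record ~5 seconds
});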
You could check this site:
https://webaudiodemos.appspot.com/AudioRecorder/index.html
It stores the audio into a file (.wav) on the client side.
There is a bug in Chrome that currently does not allow audio-only capture. Please see http://code.google.com/p/chromium/issues/detail?id=112367
Currently, this is not possible without sending the data over to the server side. However, this would soon become possible in the browser if they start supporting the MediaRecorder working draft.
Is it possible to record sound with HTML5 yet? I have downloaded the latest Canary version of Chrome and use the following code:
navigator.getUserMedia = navigator.webkitGetUserMedia || navigator.getUserMedia;
navigator.getUserMedia({audio: true}, gotAudio, noStream);
This then prompts the user to allow audio recording, and if you say "yes" a message appears saying that Chrome is recording. However, is it possible to access the audio buffer with the raw data in it? I don't seem to be able to find out how. There are suggested specs that haven't been implemented yet. Does anyone know if it can actually be done in any browser now, and if so, how?
WebKit and Chrome audio APIs support recording; however, as those APIs evolve it will be difficult to maintain code that uses them.
An active open-source project named Sink.js allows recording and also allows you to push raw samples: https://github.com/jussi-kalliokoski/sink.js/. Since the project is pretty active they have been able to keep on top of changes in Webkit and Chrome as they come out.
Here is my example, but it only partly works, because audio recording is not yet implemented in Chrome. That is why you will get a 404 error saying the blob cannot be found.
There is also a commented-out form below, because my aim was to send that blob to a PHP file; since recording does not work yet, I cannot try it. Keep it, you may use it later.
<audio></audio>
<input onclick="startRecording()" type="button" value="start recording" />
<input onclick="stopRecording()" type="button" value="stop recording and play" />
<div></div>
<!--
<form enctype="multipart/form-data">
<input name="file" type="file" />
<input type="button" value="Upload" />
</form>
-->
<script>
var onFailed = function(e) {
    console.log('sorry :(', e);
};

window.URL = window.URL || window.webkitURL;
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia ||
                         navigator.mozGetUserMedia || navigator.msGetUserMedia;

var localStream;
var audio = document.querySelector('audio');
var stop = document.getElementById('stop');

function startRecording() {
    if (navigator.getUserMedia) {
        navigator.getUserMedia({audio: true, video: false, toString: function() { return "video,audio"; }}, function(stream) {
            audio.src = window.URL.createObjectURL(stream);
            document.getElementsByTagName('div')[0].innerHTML = audio.src;
            localStream = stream;
        }, onFailed);
    } else {
        alert('Unsupported');
        //audio.src = 'someaudio.ogg'; // fallback.
    }
}

function stopRecording() {
    localStream.stop();
    audio.play();
}

function sendAudio() {
}
</script>
Note: some information and a how-to for Firefox: https://hacks.mozilla.org/2012/07/getusermedia-is-ready-to-roll/
It is now possible to access the audio buffer using the AudioContext API and getChannelData().
Here's a project on gitHub that records audio directly in MP3 format and saves it on the webserver: https://github.com/nusofthq/Recordmp3js
In recorder.js you will see how the audio buffer is accessed channel by channel, like so:
this.node.onaudioprocess = function(e) {
    if (!recording) return;
    worker.postMessage({
        command: 'record',
        buffer: [
            e.inputBuffer.getChannelData(0),
            //e.inputBuffer.getChannelData(1)
        ]
    });
}
For a more detailed explanation of the implementation you can read the following blogpost:
http://nusofthq.com/blog/recording-mp3-using-only-html5-and-javascript-recordmp3-js/