Use audio capture as a MediaStream source in WebRTC - javascript

In WebRTC, at least in Chrome, it's possible to capture the screen in order to stream it. This is accomplished using the experimental chromeMediaSource constraint.
I would like to do the same but capture only audio, so I can send it to a webpage. That is, I want to capture not the microphone but the audio 'played' by my machine, and send that to a website.
Is there such a constraint in WebRTC? If the answer is 'yes', is there a Firefox equivalent?
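For reference, the screen-capture approach mentioned above looks roughly like this (a sketch only; Chrome-specific, behind a flag, and subject to change since the constraint is experimental):

navigator.webkitGetUserMedia({
  video: {
    mandatory: { chromeMediaSource: 'screen' }
  }
}, function (stream) {
  document.querySelector('video').src = URL.createObjectURL(stream); // preview
}, function (err) {
  console.error('getUserMedia failed:', err);
});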

You may want to look at the MediaStream Recording API, which has been implemented in Firefox Nightly and has an Intent to Implement for Chrome.
I've put a demo at simpl.info/mediarecorder.
With Web Audio, try RecorderJS: there's a demo at webaudiodemos.appspot.com/AudioRecorder.
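A rough sketch of what recording with that API looks like, assuming stream is a MediaStream you already obtained via getUserMedia (the API surface may still change while the spec is in flux):

var recorder = new MediaRecorder(stream);
var chunks = [];
recorder.ondataavailable = function (e) { chunks.push(e.data); };
recorder.onstop = function () {
  var blob = new Blob(chunks, { type: 'audio/ogg' });
  // e.g. play it back: new Audio(URL.createObjectURL(blob)).play();
};
recorder.start();
setTimeout(function () { recorder.stop(); }, 5000); // record five seconds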

Related

Is it possible to record the screen as video with MediaRecorder in javascript?

I would like to capture part of the screen or the whole screen and record it as a video from a web page in javascript.
Currently I can record video from a webcam with the built-in MediaRecorder, but I would like to know if it is possible to get the screen output and use that as a stream for the MediaRecorder.
I'd like to know if there is a standard way to do this without using any 3rd-party libraries. (I can record audio/webcam video in almost all browsers as of 2018.)
You can record the screen or individual windows only by using browser extensions (in Chrome, for instance, with these), which, as you can see, request specific permissions from the browser.
Alternatively, you can record content rendered inside your web app using MediaRecorder (see the sketch below), but unfortunately it has pretty limited support.
As you pointed out, streaming from a webcam is widely supported today.
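A sketch of that in-app recording path, assuming the page draws to a <canvas> and the browser supports canvas.captureStream() (check support before relying on it; it is not available everywhere as of 2018):

var canvas = document.querySelector('canvas');
var recorder = new MediaRecorder(canvas.captureStream(30), // 30 fps
                                 { mimeType: 'video/webm' });
var chunks = [];
recorder.ondataavailable = function (e) { chunks.push(e.data); };
recorder.onstop = function () {
  var blob = new Blob(chunks, { type: 'video/webm' });
  document.querySelector('video').src = URL.createObjectURL(blob); // playback
};
recorder.start();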

Recording and uploading a video in IE10

So far I have found red5, but I can't get it to run (no video arrives at the server side), so I was looking for a Flash-based getUserMedia and found https://github.com/addyosmani/getUserMedia.js. But how do I get the video onto the server? IE10 doesn't support WebRTC, which http://lynckia.com/licode/ is built on.
getUserMedia is a WebRTC API. It doesn't exist on IE10.
Your alternatives are:
Go with a Flash-based solution (red5, Wowza, etc.)
Use a plugin for WebRTC on IE (check out this one: https://bloggeek.me/temasys-free-webrtc-plugin/)
Use Ziggeo (they should be able to use WebRTC or Flash automatically for you, taking care of all the transcoding and format changes necessary to play back the recorded stream). CameraTag (virtually the same at first glance) was also suggested in another answer, and there are likely more.
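A minimal feature-detection sketch for choosing between these paths (the prefixed names are the variants browsers shipped at the time):

navigator.getUserMedia = navigator.getUserMedia ||
                         navigator.webkitGetUserMedia ||
                         navigator.mozGetUserMedia;
if (navigator.getUserMedia) {
  // WebRTC is available: capture and record via getUserMedia
} else {
  // IE10 path: fall back to a Flash-based recorder or a plugin
}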

Live Streaming to Browser

I'm a bit at my wits' end here.
I want to stream a live video broadcast to a web browser.
Currently I use ffmpeg to stream a DirectShow live source as a webm stream to node.js, which then forwards the stream to the HTTP request from the <video> element. So far everything works.
live source -> ffmpeg -> POST [webm] -> node.js -> GET [webm] -> video tag
My problem is that the source clock and the web client's clock don't exactly match (not that surprising). For video this is not a problem: dropping or duplicating a frame every now and then is not noticeable. Audio, however, is another issue. From what I've been able to figure out so far, Chrome (or any other browser, as far as I can tell) does not perform any form of audio resampling compensation (e.g. swr_set_compensation in ffmpeg) to absorb the mismatch. Instead I get quite audible distortions (a loud beep) when the playback buffer runs out of samples.
My question: is it possible to achieve proper playback (with audio) of a live source in a web browser?
I haven't tried using Silverlight or Flash for playback yet. Would that possibly work better?
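For reference, the relay described above can be sketched roughly like this (hypothetical code; a real version must at least replay the WebM header to clients that connect mid-stream and drop clients when they disconnect):

var http = require('http');
var clients = [];

http.createServer(function (req, res) {
  if (req.method === 'POST') {                 // incoming webm from ffmpeg
    req.on('data', function (chunk) {
      clients.forEach(function (c) { c.write(chunk); });
    });
  } else {                                     // GET from the <video> element
    res.writeHead(200, { 'Content-Type': 'video/webm' });
    clients.push(res);
  }
}).listen(8080);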
Live media (audio and/or video) streaming to a web browser has been possible for a couple of years, though it is still making progress as of today. It is the next big thing for media on the web, and many platforms like YouTube are already on board.
A typical live media streaming scenario is:
audio/video feed > transcoding > streaming > player
At each step you have several technological options. However, I should mention up front that the road to live media streaming is paved with proprietary technologies.
audio/video feed: the feed is raw or only lightly compressed and cannot be uploaded to the Internet as such; you need to transcode it. You may have to use a capture device like a PCI Express card or a USB/Thunderbolt device to get your camera onto a computer.
transcoding: you have software (ffmpeg, Flash Media Live Encoder, Wirecast) or hardware solutions (streamingmedia.com has a wealth of information on the subject, like here). H.264/AAC is the current professional standard, and streams are often transcoded into multiple renditions (bitrates) to suit different network conditions.
streaming: you most likely need to target multiple devices to deliver your live stream, and not all devices support the same streaming protocol. HLS works on Apple devices and Android > 4.1; HDS and RTMP work in Flash; Smooth Streaming works in Silverlight. You cannot reach all devices with one protocol, so you need a streaming server like Wowza or Red5. A streaming server takes a transcoded live stream as input and prepares it for cross-device delivery while sustaining a massive number of simultaneous connections (over a thousand is not uncommon nowadays). It can also add functionality like DVR or DRM. As of today the effort centers on HTTP adaptive-bitrate delivery, and large companies add CDN support for global delivery.
player: displays your live stream, with options like custom layouts, closed captions, ads, a chat module and more. Flash has led the desktop market for live media streaming up until now. You can use HTML5 video on iOS and Android where HLS is supported.
Coming up fast is MPEG-DASH, which works live with HTML5 video. There is a JS lib that supports live streaming. I have tested it and it works, though I would not use it for a production scenario just yet, as it is still a bit clunky (on-demand support is better) and browser support is narrow at the moment (as of 8/30/13, desktop Chrome, desktop Internet Explorer 11 and mobile Chrome Beta for Android are the only browsers supported).
I cannot comment much on your solution because I have not used node.js for streaming, but it sounds like an interesting effort. A typical solution I would use for your case:
Device > ffmpeg (H264/AAC) > Wowza > Hybrid player (Flash + HTML5).
Instead of Wowza you could use Red5 (free/open source, though there has not been much activity of late). You can also look into the Nginx RTMP module, which supports HLS and MPEG-DASH on top of RTMP.
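As an illustration, here is the ffmpeg leg of that chain launched from node.js (the device name and ingest URL are made up; the flags follow standard ffmpeg usage for H264/AAC over RTMP):

var spawn = require('child_process').spawn;
var ffmpeg = spawn('ffmpeg', [
  '-f', 'dshow', '-i', 'video=MyCaptureDevice:audio=MyMic', // DirectShow source
  '-c:v', 'libx264', '-preset', 'veryfast',                 // H264 video
  '-c:a', 'aac', '-strict', 'experimental', '-b:a', '128k', // AAC audio
  '-f', 'flv', 'rtmp://example.com/live/stream'             // RTMP ingest (Wowza/Red5)
]);
ffmpeg.stderr.pipe(process.stdout); // ffmpeg writes its log to stderr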
For Flash I use Strobe from Adobe, which supports live streaming and is easy to set up, with a fallback to HTML5 where Flash is not supported. I use the SWFObject lib to detect Flash support and feed an HLS URL to an HTML5 video tag on mobile devices. You can use RTSP for Android < 4.1 and other mobile devices.
Another thing I should mention is real-time communications. For video/audio conferencing you could have a look at WebRTC; these two articles should get you on the right track: here and here. WebRTC will work great one-to-few, one-to-one and few-to-few. If you need to support more concurrent connections, have a look at Licode or TokBox.

HTML5 base64 encoded audio on mobile devices

I'm writing a platform with an audio playback component. Audio is uploaded to the server as a wav/mp3/ogg file and then (like the rest of our media) converted to base64 and stored in our Redis database.
To play the audio back on the client side we make an AJAX request to the server for the base64-encoded audio. We have a desktop version that complements the mobile application; at the moment audio playback works like this:
recording.sound = new Audio("data:audio/ogg;base64," + recording.audio);
recording.sound.play(); // this works
Today we started our tests on mobile devices, and have so far been unable to get it working, even on mobile browsers that apparently support HTML5 audio.
Any ideas? Or if this is not possible, is there a different approach we can take? Ideally there should be a mobile compatible version of the web app, and there has to be a phonegap version.
The reason might not be a technical one here; from the Apple developer site:
In Safari on iOS (for all devices, including iPad), where the user may be on a cellular network and be charged per data unit, preload and autoplay are disabled. No data is loaded until the user initiates it. This means the JavaScript play() and load() methods are also inactive until the user initiates playback, unless the play() or load() method is triggered by user action. In other words, a user-initiated Play button works, but an onLoad="play()" event does not.
The same applies to Android devices.
Read more here: Safari HTML5 Audio and Video Guide
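So the fix is to start playback from a user gesture, roughly like this (a sketch; the button id is hypothetical):

document.getElementById('play-button').addEventListener('click', function () {
  recording.sound = new Audio("data:audio/ogg;base64," + recording.audio);
  recording.sound.play(); // allowed, because it runs inside a user-initiated event
});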
But "audio/wav" doesn't exist. See the spec here: http://www.iana.org/assignments/media-types/audio
You should use "audio/vnd.dts" for .wav files, "audio/mpeg" for .mp3 files and "audio/ogg" for .ogg files...
OK, try a Stack Overflow search, see:
https://stackoverflow.com/search?q=audio+codec+support+mobile+devices+html5
or https://stackoverflow.com/search?q=audio+codec+support+mobile+devices+html
or try Google
Some search results that might be useful:
In search for a library that knows to detect support for specific Audio Video format files
or html5 vs flash - full comparison chart anywhere?

HTML5 Audio Mixing

I would like to make a web application where people can add recorded sounds and samples to a timeline.
I want it to output one sound file (approximately 3 min long) which will be sent to the server.
I would like to do this with the HTML5 Audio API, and I found out that I could do it with AudioContext. But AudioContext is only supported in Chrome.
Now, I dislike Flash, and I wanted to ask if there is any way to do this in HTML5 with decent browser support (the newest Chrome, IE and Firefox).
Some thoughts:
I could record audio using the HTML5 Audio API and let the user add it to the timeline. When finished, the client uploads all the audio files to the server along with their positions on the timeline, and the server combines them into one file. This is a solution, but I would prefer to do this kind of work on the client side.
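Where AudioContext is available, that client-side mix could be sketched like this (hypothetical names: sounds holds each clip's decoded AudioBuffer plus its timeline offset in seconds; Chrome may need the webkit prefix):

var OfflineCtx = window.OfflineAudioContext || window.webkitOfflineAudioContext;
var ctx = new OfflineCtx(2, 44100 * 180, 44100);  // stereo, ~3 minutes, 44.1 kHz
sounds.forEach(function (s) {
  var src = ctx.createBufferSource();
  src.buffer = s.buffer;                          // decoded AudioBuffer
  src.connect(ctx.destination);
  src.start(s.offset);                            // place the clip on the timeline
});
ctx.oncomplete = function (e) {
  var mixed = e.renderedBuffer;                   // encode (e.g. to WAV) and upload
};
ctx.startRendering();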
At the moment, in my opinion, it is not possible (if it has to be HTML5 and supported by IE and Firefox); see the list of browsers that support the Audio API:
Browsers with Audio API support. But this information could be outdated already (these browsers update so frequently).
You could wait, serve only Chrome first and hope the other browsers catch up (IE might be a problem). Or you could use Java (if you don't like Flash). The other technology out there is Silverlight, but it is "dead", so I wouldn't recommend it.
I hope my input helps a bit.
