I am able to record audio directly in the browser from the device microphone, using the HTML5 audio tag and JavaScript (getUserMedia). Now I am looking for the best way to stream it live to my SHOUTcast server. I found several threads on this topic, but they are all more than five years old and do not really solve my problem.
What is the best way to stream audio live from the browser to a SHOUTcast server?
The SHOUTcast source protocol is non-standard and cannot be streamed to directly from a browser. Some sort of translating proxy is required.
It's likely that this proxy will also need to transcode the audio, as the codecs available to you in-browser are not the ones typically used in SHOUTcast streams.
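A rough sketch of the browser half of such a setup, assuming a hypothetical WebSocket proxy at wss://example.com/ingest: the page captures the microphone with getUserMedia, encodes it with MediaRecorder, and pushes the chunks over the socket, leaving the proxy to transcode (e.g. Opus/WebM to MP3) and speak the SHOUTcast source protocol to the server.

// Capture the microphone and ship encoded chunks to a translating proxy.
// The wss://example.com/ingest endpoint is hypothetical.
navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const socket = new WebSocket('wss://example.com/ingest');
  const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });

  recorder.ondataavailable = (event) => {
    if (socket.readyState === WebSocket.OPEN && event.data.size > 0) {
      socket.send(event.data); // one Opus-in-WebM chunk per timeslice
    }
  };

  socket.onopen = () => recorder.start(250); // emit a chunk every 250 ms
});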
Related
What are the fundamental differences between Media Source Extensions and WebRTC?
If I may project my own understanding for a moment: WebRTC includes RTCPeerConnection, which takes streams from the Media Streams API and passes them over a transport protocol to the application's connected peers. It seems that under the hood WebRTC abstracts away a lot of the bigger issues, like codecs and transcoding. Would this be a correct assessment?
Where does Media Source Extensions fit into things? I have limited knowledge but have seen examples where developers are running adaptive streaming. Does MSE only deal with streams from your server?
Help would be much appreciated.
Unfortunately, these new browser-related protocols are being designed and developed by the W3C and IETF in a rather disorganized manner, not completely technically driven but reflecting battles between Apple, Google, and Microsoft, each trying to standardize its own technologies. Similarly, different browsers choose to adopt only certain standards, or parts of standards, which makes developers' lives extremely hard.
I have implemented both Media Source Extensions and WebRTC, so I think I can answer your question:
Media Source Extensions is just a player inside the browser.
You create a MediaSource object
https://developer.mozilla.org/en-US/docs/Web/API/MediaSource
and assign it to your video element like this
video.src = URL.createObjectURL(mediaSource);
Then your JavaScript code can fetch media segments from somewhere (your web server, for example) and supply them to a SourceBuffer attached to the MediaSource for playback.
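As a rough illustration (the codec string and segment URL are placeholders for whatever your stream actually uses):

// Minimal Media Source Extensions flow: create a MediaSource, attach it
// to the video element, then feed fetched segments into a SourceBuffer.
const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
  const response = await fetch('/segments/init.mp4'); // placeholder segment URL
  sourceBuffer.appendBuffer(await response.arrayBuffer());
  // Subsequent media segments are appended the same way, one after another.
});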
WebRTC is not just a player; it is also a capture, encoding, and sending mechanism. But it is a player too, and you use it a little differently from Media Source Extensions. Here you create a different object: a MediaStream object
https://developer.mozilla.org/en-US/docs/Web/API/MediaStream
and assign it to your video element like this
video.srcObject = mediaStream;
Notice that in this case you do not create the mediaStream object directly yourself; it is supplied to you by WebRTC APIs such as getUserMedia.
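For example, a local camera preview is just a couple of lines; a remote peer's stream works the same way, except it arrives through an RTCPeerConnection instead of getUserMedia:

// The MediaStream comes from a WebRTC API (here getUserMedia); you simply
// hand it to the video element rather than constructing it yourself.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then((mediaStream) => {
    const video = document.querySelector('video');
    video.srcObject = mediaStream;
    video.play();
  });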
So, to summarize: in both cases you use the video element to play, but with Media Source Extensions you have to supply the media segments yourself, while with WebRTC the WebRTC API supplies the media for you. And, once again, with WebRTC you can also capture the user's webcam, encode it, and send it to another browser to play, enabling p2p video chat, for example.
Media Source Extensions browsers adoption: http://caniuse.com/#feat=mediasource
WebRTC browsers adoption: http://iswebrtcreadyyet.com/
I've been doing JavaScript programming for some time, but it's always been related to updating, saving, and manipulating data.
I have no idea how something like an in-browser audio player gets audio (especially live, streaming audio) from the internet and plays it out of my computer speakers.
How does this happen in Javascript?
For example, how does a website deliver live audio to my speakers using Javascript? http://player.streamtheworld.com/liveplayer.php?callsign=WVIEAM
The live audio is not much different from pre-recorded audio... it's just played back as it's received, and when live it's encoded as it's recorded.
In browsers these days, the most basic form of streaming audio is a simple <audio> tag. By changing the src attribute from a file to a stream, you're up and running:
<audio src="http://cdn.audiopump.co/waug/main_mp3_256k" />
The browser doesn't know or care in this case that the audio is a live stream. All it knows is that there's some media data that it's fetching via HTTP, and playing back while it comes in.
If the browsers you need to support allow it, it would be preferable to use the MediaSource API, which gives you more control (such as switching to a different quality stream mid-stream, as in HLS) and ensures that the browser doesn't try to cache what is effectively an infinitely sized file.
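A sketch of that approach for a live audio stream, assuming the browser's SourceBuffer accepts plain MPEG audio (support varies) and reusing the stream URL from above; real code would also trim already-played data out of the buffer:

// Pull the live stream with fetch() and append chunks to a SourceBuffer,
// so the browser plays the data as it arrives instead of treating it as
// one endless file download.
const audio = document.querySelector('audio');
const mediaSource = new MediaSource();
audio.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('audio/mpeg');
  const response = await fetch('http://cdn.audiopump.co/waug/main_mp3_256k');
  const reader = response.body.getReader();

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    sourceBuffer.appendBuffer(value);
    // Wait for the append to finish before pushing the next chunk.
    await new Promise((resolve) =>
      sourceBuffer.addEventListener('updateend', resolve, { once: true })
    );
  }
});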
For example, how does a website deliver live audio to my speakers using Javascript? http://player.streamtheworld.com/liveplayer.php?callsign=WVIEAM
This particular site is run by Triton Digital, and they still use Flash. Many sites still do this as a holdover from a time when HTML5 audio was not widely supported. There is little reason to do this today.
Other reasons to use Flash include incompatible server protocols. If your streaming server is using RTMP, you're stuck with Flash as browsers don't speak RTMP.
There used to be an issue with streaming AAC in-browser due to browsers not properly handling AAC wrapped in ADTS. (This encapsulation is required for streaming AAC in most situations.) Most browsers have resolved this, but I suspect that this is the reason Triton Digital is still using their Flash solution. By using Flash, they can play AAC/ADTS streams.
I would like to create a simple video blogging solution using WebRTC to record video/audio directly in the browser, similar to YouTube's My_Webcam. The server component should be based on Node.js.
I found some Node.js libraries for general WebRTC connection management (webRTC.io, Holla, EasyRTC), but it seems they don't enable recording of streams on the server.
What's the best way to implement server-side recording? Are there libraries and tutorials available?
This guy has a ton of interesting WebRTC experiments, including audio/video recording: https://github.com/muaz-khan/
Here's a demo of recording: https://www.webrtc-experiment.com/RTCMultiConnection-v1.4-Demos/RecordRTC-and-RTCMultiConnection.html
It collects the audio and video streams on the client and gives you a blob of audio and a blob of video that you can then upload/splice together.
Not exactly what you were hoping for, I think, but could probably get the job done. Hope that helps.
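For what it's worth, current browsers can do the client-side half of this without a library via MediaRecorder; a minimal sketch, with a made-up /upload endpoint that your Node.js server would have to provide:

// Record the webcam/mic for a few seconds, then POST the resulting blob.
// The '/upload' endpoint is hypothetical; the Node.js side would accept
// the body and write it to disk or hand it to a transcoder.
navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then((stream) => {
  const chunks = [];
  const recorder = new MediaRecorder(stream);

  recorder.ondataavailable = (event) => chunks.push(event.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: 'video/webm' });
    fetch('/upload', { method: 'POST', body: blob });
  };

  recorder.start();
  setTimeout(() => recorder.stop(), 5000); // stop after 5 seconds
});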
You may use node-webkit to achieve this. node-webkit is essentially a browser embedded in Node.js.
This isn't another one of those "How can I record audio in the browser?" questions... I know that the HTML5 Stream API is around the corner and that Flash can already access the user's microphone and camera. I'm simply wondering, as a JavaScript developer with little knowledge of Flash, whether anyone has developed a JS library that hooks into Flash's device capabilities for recording but sends the results back to JavaScript (presumably using ExternalInterface).
In other words... libraries like SoundManager2 utilize a Flash fallback for audio playback, but they don't seem to allow for recording. Has anyone written a JS library that uses an invisible Flash movie to allow audio recording?
This does most of what you're looking for:
https://code.google.com/p/wami-recorder/
It records audio and sends it to a server via an HTTP POST (avoiding the need for a Flash Media Server). A JavaScript API is available via ExternalInterface.
I'm not sure why you'd want the audio bytes in JavaScript, but it would probably be easy to modify it to do that too.
Unfortunately, you can't really do Flash audio recording in the browser alone. The Flash audio interfaces are all designed (surprise, surprise) to talk to a Flash Media Server (or Red5): there is no interface to store recorded audio data locally or to pass it to JavaScript.
Once you have Red5/FMS set up, you can control the recording process from JavaScript: you can start/stop/play back the audio stream to/from the server. However, for security reasons you have to have a Flash movie that is a minimum of 216 x 138 pixels (see http://blog.natebeck.net/2009/01/tip-of-the-day-tricks-of-the-mic-settings-panel/ for a write-up), otherwise the settings manager won't be shown; this prevents people from hiding an audio-recording Flash widget on a page and eavesdropping.
So no: no invisible Flash recorder controlled from JavaScript.
What is the fastest way to stream live video using JavaScript? Is WebSockets over TCP a fast enough protocol to stream a video of, say, 30fps?
Is WebSockets over TCP a fast enough protocol to stream a video of, say, 30fps?
Yes, it is; take a look at this project. WebSockets can easily handle HD video streaming. However, you should go for adaptive streaming; I explain here how you could implement it.
We're currently working on a web-based instant messaging application with chat, file sharing, and video/webcam support. With some bits and tricks we got media streaming through WebSockets (we used HTML5 Media Capture to get the stream from our webcams).
You need to build a stream API and a Media Stream Transceiver to control the related media processing and transport.
Media Source Extensions has also been proposed, which would allow for adaptive bitrate streaming implementations.
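One common trick for pushing webcam video through a WebSocket without WebRTC is to draw frames onto a canvas and send them as JPEGs; a rough sketch, with a made-up endpoint and a fixed frame size:

// Draw the webcam onto a canvas ~30 times a second and send each frame
// as a JPEG blob over a WebSocket. 'wss://example.com/video' is made up;
// the receiving side has to decode and display the frames itself.
navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  const video = document.createElement('video');
  video.srcObject = stream;
  video.play();

  const canvas = document.createElement('canvas');
  canvas.width = 640;
  canvas.height = 480;
  const ctx = canvas.getContext('2d');
  const socket = new WebSocket('wss://example.com/video');

  setInterval(() => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    canvas.toBlob((frame) => {
      if (socket.readyState === WebSocket.OPEN) socket.send(frame);
    }, 'image/jpeg', 0.7);
  }, 1000 / 30); // roughly 30 fps
});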
To answer the question:
What is the fastest way to stream live video using JavaScript? Is WebSockets over TCP a fast enough protocol to stream a video of, say, 30fps?
Yes, WebSockets can be used to transmit video at 30 fps and even 60 fps.
The main issue with WebSockets is that they are low-level and you have to deal with many other issues beyond just transmitting the video chunks. All in all, it's a great transport for video, and for audio as well.
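For example, one way to deal with those chunks is to let MediaRecorder handle the encoding and push its output through the socket, with the receiving end feeding the chunks into Media Source Extensions; the endpoint and timeslice below are placeholders:

// Encode the webcam with MediaRecorder and ship each chunk over a
// WebSocket. 'wss://example.com/live' is a made-up endpoint.
navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then((stream) => {
  const socket = new WebSocket('wss://example.com/live');
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm; codecs=vp8' });

  recorder.ondataavailable = (event) => {
    if (socket.readyState === WebSocket.OPEN && event.data.size > 0) {
      socket.send(event.data);
    }
  };

  socket.onopen = () => recorder.start(100); // one chunk every 100 ms
});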
It's definitely conceivable, but I am not sure we're there yet. In the meantime, I'd recommend using something like Silverlight with IIS Smooth Streaming. Silverlight is plugin-based, but it works on Windows/OS X/Linux. Some day the HTML5 <video> element will be the way to go, but it will lack broad support for a little while.