Problem viewing janus webrtc video stream - javascript

I am trying to use ffmpeg and janus-gateway to stream video on the local network. I am piping the H264 video directly into ffmpeg, and from there it gets transferred to Janus as an RTP stream. Janus then does the rest.
The problem is that when I try to open the stream using the streamingtest HTML page included in Janus, I can select the stream, but I never get to see anything. On the console where I started Janus, it throws multiple errors starting with: "SDP missing mandatory information"
Apparently the SDP is missing some authorization like this:
a=ice-ufrag:?
a=ice-pwd:?
I guess it is an issue with the JavaScript on the demo page.
When I load the page and click the start button, it does everything as it is supposed to and there are no errors yet.
It populates the list of available streams with my stream, and when using the network analyzer in Firefox I can see that Janus is sending the correct SDP to the JavaScript of the page. That SDP contains the correct info about the stream and also the ICE authorization info.
When I then select the stream and click the start button, the JavaScript sends a request containing an SDP to Janus, but this SDP is completely different from the one received earlier and is indeed missing the ICE authorization info. It also has a bunch of completely wrong info in it. For example, this SDP is for VP8 video, while my stream, and also the correct SDP received earlier, are actually H264 video.
Can someone post an easy example for receiving just a single WebRTC video stream from Janus, please?
I have been searching for an example for a while, but haven't found anything apart from the demo that's not working for me, and completely unrelated WebRTC videoconference or chatroom examples that are of no use to me.
All I am trying to do is get a single H264 video stream, with as little latency as possible or even zero latency, from a Raspberry Pi to an HTML webpage locally hosted on the same Raspberry Pi.
I have tried using HLS, but its latency was just too high, and someone suggested using WebRTC...

I had a similar problem.
After a "one-day fight" I got it working with a Reolink webcam on my janus-webrtc installation, a TV-box-based UserLAnd (https://github.com/virtimus/tinyHomeServer):
in reolink web admin (settings/recording/encode):
record audio - yes
resolution 2560*1920
frame rate 8
max bit rate 1024
h264 profile high (this was important to me)
janus.plugin.streaming.jcfg:
reolink-rtp: {
    type = "rtp"
    id = 999
    description = "Reolink RTP"
    audio = true
    audioport = 5051
    audiopt = 111
    audiortpmap = "opus/48000/2"
    video = true
    videoport = 5052
    videopt = 96
    videortpmap = "H264/90000"
    videofmtp = "profile-level-id=42e028;packetization-mode=1"
    #videofmtp = "profile-level-id=420032;packetization-mode=1"
}
ffmpeg command (dual forward video/audio):
ffmpeg -i 'rtsp://admin:[password]@192.168.2.148:554/h264Preview_01_main' -an -c:v copy -flags global_header -bsf dump_extra -f rtp rtp://localhost:5052 -vn -codec:a libopus -f rtp rtp://localhost:5051
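For the browser side, here is a minimal sketch of a single-stream viewer based on the janus.js API used by the Janus demos (untested; the server URL, the video element id, and the stream id 999 from the config above are assumptions):

var streaming = null;

Janus.init({ debug: false, callback: function() {
    var janus = new Janus({
        server: "http://127.0.0.1:8088/janus",  // assumed default REST endpoint
        success: function() {
            janus.attach({
                plugin: "janus.plugin.streaming",
                success: function(pluginHandle) {
                    streaming = pluginHandle;
                    // Ask the plugin for the mountpoint configured above.
                    streaming.send({ message: { request: "watch", id: 999 } });
                },
                onmessage: function(msg, jsep) {
                    if (jsep) {
                        // Answer the plugin's offer; this page only receives media.
                        streaming.createAnswer({
                            jsep: jsep,
                            media: { audioSend: false, videoSend: false },
                            success: function(answer) {
                                streaming.send({ message: { request: "start" }, jsep: answer });
                            }
                        });
                    }
                },
                onremotestream: function(stream) {
                    // Attach the remote stream to a <video id="remotevideo"> element.
                    Janus.attachMediaStream(document.getElementById("remotevideo"), stream);
                }
            });
        }
    });
}});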

Never mind.
I have now switched to using uv4l for both the video stream and hosting the actual webpage that displays the video stream.
This worked pretty much right out of the box and was relatively easy to implement.

Related

Sending a stream from the browser to a Node JS server

The general idea: I created a Node JS program that interacts with multiple APIs to recreate a home assistant (like Alexa or Siri). It interacts mainly with IBM Watson. My first goal was to set up Dialogflow so that I could have a real AI processing the questions, but due to the update to Dialogflow v2 I would have to use Google Cloud, and it's too much trouble for me, so I just went with a hand-made script that reads possible responses from a configurable list.
My actual goal is to get an audio stream from the user and send it into my main program. I have set up an express server. It responds with an HTML page when you GET '/'. The page is the following:
<!DOCTYPE html>
<html lang='fr'>
<head>
    <script>
        let state = false
        function button() {
            navigator.mediaDevices.getUserMedia({audio: true})
                .then(function(mediaStream) {
                    // And here I got my stream. So now what do I do?
                })
                .catch(function(err) {
                    console.log(err)
                });
        }
    </script>
    <title>Audio recorder</title>
</head>
<body>
    <button onclick='button()'>Lancer l'audio</button>
</body>
</html>
It records audio from the user, using mediaDevices.getUserMedia(), when they click the button.
What I'm looking for is a way to launch the recording, then press a stop button, and when the stop button is pressed, automatically send the stream to the Node program. Preferably the output is a stream, because that's the input type for IBM Watson (otherwise I would have to store the file, then read it, and then delete it).
Thanks for your attention.
Most browsers, but not all (I'm looking at you, Mobile Safari), support the capture and streaming of audio (and video, which you don't care about) using the getUserMedia() and MediaRecorder APIs. With these APIs you can transmit your captured audio in small chunks via WebSockets, or socket.io, or a series of POST requests, to your nodejs server. Then the nodejs server can send them along to your recognition service. The challenge here: the audio is compressed and encapsulated in webm. If your service accepts audio in that format, this strategy will work for you.
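A minimal sketch of the browser side of that approach (the WebSocket endpoint is an assumption; the server just forwards each chunk to the recognition service):

let recorder;
const ws = new WebSocket('ws://localhost:3000/audio');  // assumed endpoint

function startRecording() {
    navigator.mediaDevices.getUserMedia({ audio: true })
        .then(function(mediaStream) {
            recorder = new MediaRecorder(mediaStream, { mimeType: 'audio/webm' });
            // Each dataavailable event carries a small webm-encapsulated chunk.
            recorder.ondataavailable = function(e) {
                if (e.data.size > 0 && ws.readyState === WebSocket.OPEN) {
                    ws.send(e.data);
                }
            };
            recorder.start(250);  // emit a chunk every 250 ms
        })
        .catch(function(err) { console.log(err); });
}

function stopRecording() {
    if (recorder && recorder.state !== 'inactive') recorder.stop();
}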
Or you can try using node-ogg and node-vorbis to accept and decode. (I haven't done this.)
There may be other ways. Maybe somebody who knows one will answer.

How can I prevent breakup/choppiness/glitches when using an AudioWorklet to stream captured audio?

We've been working on a JavaScript-based audio chat client that runs in the browser and sends audio samples to a server via a WebSocket. We previously tried using the Web Audio API's ScriptProcessorNode to obtain the sample values. This worked well on our desktops and laptops, but we experienced poor audio quality when transmitting from a handheld platform we must support. We've attributed this to the documented script processor performance issues (https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API). On the handheld, with a script processor buffer size of 2048, audio consistently had breakups. At the next highest size interval (4096), the audio was smooth (no breakups), but there was too much latency (around two seconds).
Our results from ScriptProcessorNode prompted experimentation with Audio Worklet. Unfortunately, with our worklet implementation, audio quality is worse: both breakups and latency, even on our laptops. I'm wondering if there's a way to tweak our worklet implementation to get better performance, or if what we're experiencing is to be expected from (is "par for the course" for) the current state of audio worklets (Chromium issues 796330, 813825, and 836306 seem relevant).
Here's a little more detail on what the code does:
Create a MediaStreamAudioSourceNode with the MediaStream obtained from getUserMedia.
Connect the source node to our worklet node implementation (extends AudioWorkletNode).
Our worklet processor implementation (extends AudioWorkletProcessor) buffers blocks that arrive as the "input" argument to its process method.
When buffer is full, use MessagePort to send the buffer contents to the worklet node.
Worklet node transmits the buffer contents over a WebSocket connection (a sketch of this side follows the process method below).
The process method is below. The var "samples" is a Float32Array, which gets initialized to the buffer size and reused. I've experimented with buffer size a bit, but it doesn't seem to have an impact. The approach is based on the guidance in section 4.1 of AudioWorklet: The future of web audio to minimize memory allocations.
process(inputs, outputs, parameters) {
    if (micKeyed == true) {
        // Each render quantum delivers one block of frames on input 0, channel 0.
        if (inputs[0][0].length == framesPerBlock) {
            // Copy the block into the reusable buffer at the next free slot.
            samples.set(inputs[0][0], currentBlockIndex * framesPerBlock);
            currentBlockIndex++;
            if (currentBlockIndex == lastBlockIndex) {
                // Buffer is full: hand the contents to the node side.
                // console.log('About to send buffer.');
                this.port.postMessage(samples);
                currentBlockIndex = 0;
            }
        } else {
            console.error("Got a block of unexpected length!!!");
        }
    }
    // Returning true keeps the processor alive.
    return true;
}
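For reference, the worklet-node side described in the list above looks roughly like this (a sketch; the class name, the registered processor name, and the socket variable are assumptions):

class CaptureNode extends AudioWorkletNode {
    constructor(context, socket) {
        super(context, 'capture-processor');  // assumed registered processor name
        // The processor posts a filled Float32Array; structured cloning means
        // event.data is already an independent copy, safe to send directly.
        this.port.onmessage = function(event) {
            if (socket.readyState === WebSocket.OPEN) {
                socket.send(event.data);  // WebSocket.send accepts typed arrays
            }
        };
    }
}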
Currently testing with PCs running Chrome 72.0.3626.109 on CentOS 7. Our handhelds are Panasonic FZ-N1 running Chrome 72.0.3626.105 on Android 6.0.1.
Thank you for reading and any suggestions you may be able to provide.

How can I get the raw stream URL of an iheartradio station

I am trying to make a Discord bot to stream the iHeart80s at 103.7 radio station,
and so far I cannot find a direct stream URL to give my Discord bot.
I have tried to get the JSON via Python but that just returns http.client.BadStatusLine: ICY 200 OK
I am using discord.js.
And here is the function I am trying to feed the stream URL into:
function(CmdArg, CmdMsg) {
    const voiceChannel = CmdMsg.member.voiceChannel;
    voiceChannel.join().then(connection => {
        // request.get() returns a readable stream of the radio data.
        const stream = request.get({
            uri: CmdArg,
            followAllRedirects: true,
            encoding: null
        });
        connection.playStream(stream, { passes: token.passes });
    }).catch(err => console.error(err));
}
It has changed yet again. None of the Wireshark solutions, or searching for certain text in the Inspect tab of Chrome/Firefox, helped, but I figured it out...
The example I'll be using is: I heart Mix 90.1 Toluca https://www.iheart.com/live/mix-901-toluca-6566/
Click on the Share icon; it's the one to the right of the 'connect' button.
Look for the Embed Widget section.
Then copy from the end of src=". Example: The whole text is
<iframe allow="autoplay" width="100%" height="200" src="https://www.iheart.com/live/mix-901-toluca-6566/?embed=true" frameborder="0"></iframe>
and I will copy
https://www.iheart.com/live/mix-901-toluca-6566/?embed=true
Ok, great. Now we have much less HTML and fewer URLs to work with. Copy that URL and paste it in your web browser (you can now close the previous radio webpage). Right-click the play button and select Inspect.
Now, using Ctrl+F, look for stream. You should get several results, but they will all be on the same text line, so don't worry.
Now double-click on that load of JSON data; it should start with something like
{"initialPropos" : {....., now copy all that text and paste it into a text editor which will let you search through text (e.g. vim, notepad, MS Word).
Now, open the text editor with the pasted text. Now look for streams. The first result should start with something like: ,"streams":{"hls...
Great! Now you may copy the stream of your liking; some may not be available, but in this example, the following stream types are available:
{
    "hls_stream":"http://playerservices.streamtheworld.com/api/livestream-redirect/XHENOFMAAC.m3u8",
    "pls_stream":"http://playerservices.streamtheworld.com/pls/XHENOFMAAC.pls",
    "secure_hls_stream":"https://playerservices.streamtheworld.com/api/livestream-redirect/XHENOFMAAC.m3u8",
    "secure_pls_stream":"https://playerservices.streamtheworld.com/pls/XHENOFMAAC.pls"
}
Now, you may choose the stream of your liking and open it with your favorite audio/video player, e.g. (be sure to put it in double quotes so it won't get mangled by your shell): mpv "http://playerservices.streamtheworld.com/api/livestream-redirect/XHENOFMAAC.m3u8"
The End! Now you should be good to go! If I was not at all clear, please tell me so. The only reason I've created a stack overflow account is to post this and help others know how to do this, so please let me know if this is not working!!
Cheers.
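The manual steps above can also be automated with a few lines of Node.js (a rough sketch using plain https.get, which does not follow redirects; the regex, and the assumption that the embed page still inlines a "streams" object, may break whenever the site changes):

const https = require('https');

https.get('https://www.iheart.com/live/mix-901-toluca-6566/?embed=true', (res) => {
    let html = '';
    res.on('data', (chunk) => { html += chunk; });
    res.on('end', () => {
        // Pull the first "streams" object out of the JSON blob embedded in the page.
        const match = html.match(/"streams":\s*(\{[^}]*\})/);
        if (match) {
            console.log(JSON.parse(match[1]));
        } else {
            console.log('No "streams" object found; the page layout may have changed.');
        }
    });
});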
I was able to find the iHeart URL by doing this...
Go to desired radio station web site.
Click on listen/play.
Right click page then view page source.
Find/search for... "icy"
The first instance of "icy" should be here:
"shoutcast_stream":"http://(various letters and numbers).playlists.ihrhls.com/(4 numbers)_icy"
I copied and pasted the http address into Winamp and it worked.
http://c10icyelb.prod.playlists.ihrhls.com/4342_icy
You can use mitmproxy on a computer to inspect the traffic for the app. I was able to identify http://c10.prod.playlists.ihrhls.com/4342/playlist.m3u8 as the stream source of that radio station. Of course, the url may change over time, so ymmv.
Find links off iHeartRadio: (I am using channel 93.3 as an example)
Open iHeart radio and find your stream *NOTE: DO NOT START THE STREAM YET
Use Wireshark or any other capture tool and start the capture right before playing the stream on iHeart Radio
Stop the stream then stop the capture immediately after (This limits the amount of packets we capture to only the ones we want)
Use a filter and type in tcp contains "ihrh"
Open all packets which include the link. *NOTE: Ignore any text which contains "ihrh"; we are looking for a link
Find the server link (ex. c13.prod.playlists.ihrhls.com) *NOTE: make sure it is "c" and a number like "c13", NOT "cdn" or anything else
In Internet Explorer, go to the stream on iHeart Radio
Look at the URL and notice the station code, 241 in this example (ex. https://www.iheart.com/live/channel-933-241/?autoplay=true&pname=fire&campid=local&callletters=khts-fm)
Use server link and then add "/" then the code found from the step above
Next, add the words "/playlist.m3u8" to the end of the link
After completing all the steps you get the final link which is in our case : https://c13.prod.playlists.ihrhls.com/241/playlist.m3u8
> Find/search for... "icy"
This is not a reliable method, since other servers may be used (e.g. streamtheworld, edgefcs, etc.), and some might use RTMP (which your player may not be able to handle).
Here is a Perl module & sample scripts which can do the job:
https://metacpan.org/release/IHeartRadio-Streams
reference: https://askubuntu.com/questions/470269/how-do-i-add-iheartradio-stations-to-radio-tray/470277
I feel that to stream a radio station you need a proxy server and an ffmpeg setup, because multiple users will be connecting to the stream via your website/webapp/app. ffmpeg on the server side will read the radio station stream (HTTP or RTSP) and feed it to your streaming proxy server, which will manage the multiple connections.
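A rough sketch of that server-side ffmpeg leg, assuming a local Icecast server as the proxy (the playlist URL from above, the mount name, and the source password are all placeholders):

ffmpeg -re -i "http://c10.prod.playlists.ihrhls.com/4342/playlist.m3u8" -c:a libmp3lame -b:a 128k -content_type audio/mpeg -f mp3 icecast://source:hackme@localhost:8000/radio

Listeners then point at http://localhost:8000/radio instead of the upstream playlist, so only one connection hits the station.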

How to stop audio playback in Firefox & use source.connect() hack

I'm developing a webapp that (in part) records some audio using recorder.js, and sends it to a server. I'm trying to target Firefox, so I have to use this hack to keep the audio source from cutting off:
// Hack for a Firefox bug that stops input after a few seconds
window.source = audio_context.createMediaStreamSource(stream);
source.connect(audio_context.destination);
I think that this is causing audio to be played back through the computer speakers, but I'm not sure. I'm kind of a newbie when it comes to web audio. My goal is to eliminate the audio that is being played out of the speakers.
EDIT:
Here's a link to my JS file on Github: https://github.com/miller9904/Jonathan/blob/master/js/main.js#L87
You have to connect the source to the node through which you retrieve the data you are going to record. Replace this.node with whatever variable name you have assigned to the node used for processing.
window.source.connect(this.node);
//this.node.connect(this.context.destination);
Edit: just checked, connecting to the destination is not compulsory. Also make sure your node variable does not get garbage collected (which I am assuming is happening in your case, since recording stops after a few seconds).
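Putting both points together, the pattern looks roughly like this (a sketch; the buffer size and variable names are assumptions, and recorder.js normally wires up the processing node for you):

window.audio_context = new AudioContext();
navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream) {
    // Keep references on window so the nodes cannot be garbage collected mid-recording.
    window.source = audio_context.createMediaStreamSource(stream);
    window.node = audio_context.createScriptProcessor(4096, 1, 1);
    window.node.onaudioprocess = function(e) {
        // Hand the samples to the recorder here instead of playing them back.
        var samples = e.inputBuffer.getChannelData(0);
    };
    // Route through the processing node only; with no connection to
    // audio_context.destination, nothing reaches the speakers in Firefox.
    window.source.connect(window.node);
});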

Streaming Video to Web Browser

I would like to display a live video stream in a web browser. (Compatibility with IE, Firefox, and Chrome would be awesome, if possible.) Someone else will be taking care of streaming the video, but I have to be able to receive and display it. I will be receiving the video over UDP, but for now I am just using VLC to stream it to myself for testing purposes. Is there an open source library that might help me accomplish this using HTML and/or JavaScript? Or a good website that would help me figure out how to do this on my own?
I've read a bit about RTSP, which seems like the traditional option for something like this. That might be what I have to fall back on if I can't accomplish this using UDP, but if that is the case I still am unsure of how to go about this using RTSP/RTMP/RTP, or what the differences between all those acronyms are, if any.
I thought HTTP adaptive streaming might be the best option for a while, but it seemed like all the solutions using that were proprietary (Microsoft IIS Smooth Streaming, Apple HTTP Live Streaming, or Adobe HTTP Dynamic Streaming), and I wasn't having much luck figuring out how to accomplish it on my own. MPEG-DASH sounded like an awesome solution as well, but it doesn't seem to be in use yet since it is still so new. But now I am told that I should expect to receive the video over UDP anyways, so those solutions probably don't matter for me anymore.
I've been Googling this stuff for several days without much luck on finding anything to help me implement it. All I can find are articles explaining what the technologies are (e.g. RTSP, HTTP Adaptive Streaming, etc.) or tools that you can buy to stream your own videos over the web. Your guidance would be greatly appreciated!
It is incorrect that most video sites use FLV; MP4 is the most widely supported format, and it is played via Flash players as well.
The easiest way to accomplish what you want is to open an Amazon S3/CloudFront account and work with JW Player. Then you have access to RTMP software to stream video and audio. This service is very cheap. If you want to know more about this, check out these tutorials:
http://www.miracletutorials.com/category/s3-amazon-cloudfront/ Start at the bottom and work your way up to the tutorials higher up.
I hope this will help you get yourself on your way.
If you don't need sound, you can send JPEGs with a header like this:
Content-Type: multipart/x-mixed-replace
This is a simple demo with Node.js; it uses the opencv4nodejs library to generate images. You can use any other HTTP server which allows appending data to the socket while keeping the connection open. Tested on Chrome and FF on Ubuntu Linux.
To run the sample you will need to install this library with npm install opencv4nodejs (it might take a while), then start the server with node app.js. Here is app.js itself:
var http = require('http');
const cv = require('opencv4nodejs');

// Base image each frame is drawn onto, plus drawing state.
var m = new cv.Mat(300, 300, cv.CV_8UC3);
var cnt = 0;
const blue = new cv.Vec3(255, 220, 120);
const yellow = new cv.Vec3(255, 220, 0);
var lastTs = Date.now();

http.createServer((req, res) => {
    if (req.url == '/') {
        res.end("<!DOCTYPE html><style>iframe {transform: scale(.67)}</style><html>This is a streaming video:<br>" +
            "<img src='/frame'></html>");
    } else if (req.url == '/frame') {
        // multipart/x-mixed-replace makes the browser replace the image with
        // every new part, which produces motion (MJPEG over HTTP).
        res.writeHead(200, { 'Content-Type': 'multipart/x-mixed-replace;boundary=myboundary' });
        var x = 0;
        var fps = 0, fcnt = 0;
        var next = function () {
            var ts = Date.now();
            var m1 = m.copy();
            // Update the FPS counter once per second.
            fcnt++;
            if (ts - lastTs > 1000) {
                lastTs = ts;
                fps = fcnt;
                fcnt = 0;
            }
            m1.putText(`frame ${cnt} FPS=${fps}`, new cv.Point2(20, 30), 1, 1, blue);
            m1.drawCircle(new cv.Point2(x, 50), 10, yellow, -1);
            x += 1;
            if (x > m.cols) x = 0;
            cnt++;
            var buf = cv.imencode(".jpg", m1);
            // Each part: boundary line, part headers, blank line, then the JPEG bytes.
            res.write("--myboundary\r\nContent-type:image/jpeg\r\nDaemonId:0x00258009\r\n\r\n");
            // Send the next frame once this one has been flushed to the socket.
            res.write(buf, function () {
                next();
            });
        };
        next();
    }
}).listen(80);
A bit later I found this example with some more details in Python: https://blog.miguelgrinberg.com/post/video-streaming-with-flask
UPDATE: it also works if you stream this into an HTML img tag.
True cross-browser streaming is only possible through "rich media" clients like Flash, which is why almost all video websites default to serving video using Adobe's proprietary .flv format.
For non-live video, the advent of video embeds in HTML5 shows promise, and streaming using Canvas and JavaScript should be technically possible, but handling streams and preloading binary video objects would have to be done in JavaScript and would not be very straightforward.
