I am currently using node.js and Express to stream video to <video> tags, simply by waiting for app.get on the video tag's src address and then using res.writeHead and res.write to provide the data.
I would like to be able to do something similar but with lower latency using WebRTC. However, I am a bit confused as to how to achieve this and haven't really found any good information resources.
Can anyone recommend any good examples, nodejs packages etc... that might be helpful?
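For reference, the HTTP approach described above looks roughly like this (a minimal sketch; the /video route and sample.mp4 file are placeholders, not from the original setup):
// Node.js server: plain HTTP streaming with Express, as described above
var express = require('express');
var fs = require('fs');
var app = express();

app.get('/video', function (req, res) {
    res.writeHead(200, { 'Content-Type': 'video/mp4' });
    var source = fs.createReadStream('sample.mp4'); // placeholder source
    // every chunk from the source is forwarded to the client with res.write()
    source.on('data', function (chunk) {
        res.write(chunk);
    });
    source.on('end', function () {
        res.end();
    });
});

app.listen(8000);
// HTML client: <video src="http://localhost:8000/video"></video>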
I was hoping to do something like:
// Node.js server
rtcServer.on('connection', function(connection) {
    var videoSource = getVideoDataSource();
    videoSource.on('data', function(data) {
        connection.write(data);
    });
});
rtcServer.listen(8000);

// HTML client
<video src="???:8000"/>
I am trying to use ffmpeg and janus-gateway to stream video on the local network. I am piping the h264 video directly into ffmpeg, and from there it gets transferred to Janus as an RTP stream. Janus then does the rest.
The problem is that when I try to open the stream using the streamingtest HTML page included with Janus, I can select the stream, but I never get to see anything. On the console where I started Janus, it throws multiple errors starting with: "SDP missing mandatory information"
Apparently the SDP is missing some authorization like this:
a=ice-ufrag:?
a=ice-pwd:?
I guess it is an issue with the JavaScript on the demo page.
When I load the page and click the start button, it does everything as it is supposed to and there are no errors yet.
It populates the list of available streams with my stream, and when using the network analyzer in Firefox I can see that Janus is sending the correct SDP to the JavaScript of the page. That SDP contains the correct info about the stream and also the ICE authorization info.
When I then select the stream and click the start button, the JavaScript sends a request containing an SDP to Janus, but this SDP is completely different from the one received earlier and is indeed missing the ICE authorization info. It also has a bunch of completely wrong info in it. For example, this SDP is for VP8 video, while my stream, and also the correct SDP received earlier, is actually H264 video.
Can someone post an easy example of receiving just a single WebRTC video stream from Janus, please?
I have been searching for an example for a while, but haven't found anything apart from the demo that's not working for me, and completely unrelated WebRTC videoconference or chatroom examples that are of no use to me.
All I am trying to do is get a single H264 video stream, with as little latency as possible or even zero latency, from a Raspberry Pi to an HTML webpage hosted locally on the same Raspberry Pi.
I have tried using HLS, but that has just too much latency, and someone suggested using WebRTC...
I had a similar problem.
After a "one day fight", I got it working with a Reolink webcam on my janus-webrtc installation, running on a TV-box-based UserLAnd setup (https://github.com/virtimus/tinyHomeServer):
in reolink web admin (settings/recording/encode):
record audio - yes
resolution 2560*1920
frame rate 8
max bit rate 1024
h264 profile high (this was important to me)
janus.plugin.streaming.jcfg:
reolink-rtp: {
    type = "rtp"
    id = 999
    description = "Reolink RTP"
    audio = true
    audioport = 5051
    audiopt = 111
    audiortpmap = "opus/48000/2"
    video = true
    videoport = 5052
    videopt = 96
    videortpmap = "H264/90000"
    videofmtp = "profile-level-id=42e028;packetization-mode=1"
    #videofmtp = "profile-level-id=420032;packetization-mode=1"
}
ffmpeg command (one input, two RTP outputs: -an drops audio so the first output carries only the copied H264 video to port 5052, and -vn drops video so the second carries only the Opus-transcoded audio to port 5051, matching the mountpoint above):
ffmpeg -i 'rtsp://admin:[password]@192.168.2.148:554/h264Preview_01_main' -an -c:v copy -flags global_header -bsf dump_extra -f rtp rtp://localhost:5052 -vn -codec:a libopus -f rtp rtp://localhost:5051
Never mind.
I have now switched to using uv4l for both the video stream and hosting the actual webpage that displays the video stream.
This worked pretty much right out of the box and was relatively easy to implement.
The general idea: I created a Node.js program that interacts with multiple APIs to recreate a home assistant (like Alexa or Siri). It interacts mainly with IBM Watson. My first goal was to set up Dialogflow so that I could have a real AI processing the questions, but due to the update to Dialogflow v2 I would have to use Google Cloud, and it's too much trouble for me, so I just went with a hand-made script that reads possible responses from a configurable list.
My actual goal is to get an audio stream from the user and feed it into my main program. I have set up an Express server. It responds with an HTML page when you GET '/'. The page is the following:
<!DOCTYPE html>
<html lang='fr'>
  <head>
    <script>
      let state = false
      function button() {
        navigator.mediaDevices.getUserMedia({audio: true})
          .then(function(mediaStream) {
            // And here I got my stream. So now what do I do?
          })
          .catch(function(err) {
            console.log(err)
          });
      }
    </script>
    <title>Audio recorder</title>
  </head>
  <body>
    <button onclick='button()'>Lancer l'audio</button>
  </body>
</html>
The page records audio from the user via mediaDevices.getUserMedia() when they click the button.
My configuration looks like this: (imgur screenshot omitted)
What I'm looking for is a way to launch the recording, then press the stop button, and have the stream automatically sent to the Node program when the stop button is pressed. It's preferable if the output is a stream, because that's the input type for IBM Watson (otherwise I will have to store the file, then read it, and then delete it).
Thanks for your attention.
Fun fact: The imgur ID of my image starts with "NUL", which means "NOOB" in French lol
Most browsers, but not all (I'm looking at you, Mobile Safari), support the capture and streaming of audio (and video, which you don't care about) using the getUserMedia() and MediaRecorder APIs. With these APIs you can transmit your captured audio in small chunks via WebSockets, or socket.io, or a series of POST requests, to your nodejs server. Then the nodejs server can send them along to your recognition service. The challenge here: the audio is compressed and encapsulated in webm. If your service accepts audio in that format, this strategy will work for you.
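A rough sketch of that capture-and-transmit approach over a WebSocket (the endpoint URL and the 250 ms timeslice are illustrative choices, not requirements):
// Browser side: capture mic audio and ship compressed webm chunks over a WebSocket
navigator.mediaDevices.getUserMedia({ audio: true })
    .then(function (stream) {
        var socket = new WebSocket('ws://localhost:8000/audio'); // illustrative URL
        var recorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });

        // each dataavailable event carries one compressed chunk of the recording
        recorder.ondataavailable = function (event) {
            if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
                socket.send(event.data);
            }
        };

        socket.onopen = function () {
            recorder.start(250); // emit a chunk roughly every 250 ms
        };
    })
    .catch(function (err) {
        console.log(err);
    });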
Or you can try using node-ogg and node-vorbis to accept and decode. (I haven't done this.)
There may be other ways. Maybe somebody who knows one will answer.
I want to send the audio signal coming from my audio interface (Focusrite Saffire) to my node.js server. How should I go about doing this? The easiest way would be to access the audio interface from the browser (HTML5), like capturing microphone output with getUserMedia (https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia), but I couldn't find a way to access my audio interface through that API. Otherwise, I'm planning on creating a desktop application, but I don't know if there is a function/library that allows access to my USB-connected audio interface.
This probably has little to do with the JavaScript or the MediaDevices API. On Linux, using Firefox, PulseAudio is required to interface with your audio hardware. Since your soundcard is an input/output interface, you should be able to test it pretty easily by simply playing any sound file in the browser.
Most of PulseAudio's configuration can be achieved using the pavucontrol GUI. You should check the "configuration" tab and both the "input device" and "output device" tabs to make sure your Focusrite is correctly set up and used for sound I/O.
Once this is done, you should be able to access an audio stream using the following (only available in a "secure context", i.e. localhost or served over HTTPS, as stated in the MDN page you mentioned):
navigator.mediaDevices.getUserMedia({ audio: true })
    .then(function(stream) {
        // do whatever you want with this audio stream
    })
(code snippet taken from the MDN page about MediaDevices)
Sending audio to the server is a whole other story. Have you tried anything? Are you looking for real-time communication with the server? If so, I'd start by having a look at the WebSockets API. The WebRTC docs might be worth reading too, but it is more oriented toward client-to-client communication.
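If you go the WebSockets route, the receiving end in node.js could look something like this minimal sketch, assuming the ws npm package (the port and the logging are illustrative):
// Node.js side: accept audio chunks sent by the browser over a WebSocket
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8000 });

wss.on('connection', function (socket) {
    socket.on('message', function (chunk) {
        // each message is one audio chunk from the browser;
        // from here it could be written to a file or piped to another service
        console.log('received ' + chunk.length + ' bytes of audio');
    });
});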
How to use exact device id
Use a media device constraint to pass the exact device id.
A sample would be:
// (run inside an async function or a module that supports top-level await)
const preferredDeviceName = '<your_device_name>';
const deviceList = await navigator.mediaDevices.enumerateDevices();
// find the audio input whose label matches the preferred device
const audio = deviceList.find((device) => device.kind === 'audioinput' && device.label === preferredDeviceName);
const { deviceId } = audio;
// pass the id as an exact constraint
navigator.mediaDevices.getUserMedia({ audio: { deviceId: { exact: deviceId } } });
How to process and send to the backend
Possible duplicate of this.
Still not clear? Follow this.
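One hypothetical way to process the recording and send it to the backend is to collect the MediaRecorder chunks and POST the resulting Blob once recording stops (the /upload route and the 5-second cutoff below are made up for illustration):
// Hypothetical sketch: record for a fixed time, then upload the result
async function recordAndUpload(stream) {
    const recorder = new MediaRecorder(stream);
    const chunks = [];
    recorder.ondataavailable = (event) => chunks.push(event.data);
    recorder.onstop = async () => {
        // assemble the chunks into a single Blob and POST it to the backend
        const blob = new Blob(chunks, { type: 'audio/webm' });
        await fetch('/upload', { method: 'POST', body: blob }); // illustrative route
    };
    recorder.start();
    setTimeout(() => recorder.stop(), 5000); // stop after 5 s, for illustration
}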
// Server.js
var http = require('http');
var path = require('path');
var fs = require('fs');

http.createServer(function (request, response) {
    console.log('request starting...');
    // map the request URL to a file on disk, defaulting to index.html
    var filePath = '.' + request.url;
    if (filePath == './')
        filePath = './index.html';
    // note: path.exists was moved to fs.exists in later node versions
    fs.exists(filePath, function (exists) {
        if (exists) {
            fs.readFile(filePath, function (error, content) {
                if (error) {
                    response.writeHead(500);
                    response.end();
                } else {
                    response.writeHead(200, { 'Content-Type': 'text/html' });
                    response.end(content, 'utf-8');
                }
            });
        } else {
            response.writeHead(404);
            response.end();
        }
    });
}).listen(8125);

console.log('Server running at http://127.0.0.1:8125/');
// index.html
<html>
  <head>
    <title>Rockin' Page</title>
    <link type="text/css" rel="stylesheet" href="style.css" />
    <script type="text/javascript" src="jquery-1.7.1.min.js"></script>
  </head>
  <body>
    <p>This is a page. For realz, yo.</p>
    <script type="text/javascript">
      $(document).ready(function() {
        alert('happenin');
      });
    </script>
  </body>
</html>
I am able to run my static page, but I have a couple of questions down the line.
What do I do next? I mean, what should I develop and what should I learn? I am confused: what is different from what I am already doing with my current webserver?
Is node.js just a replacement for my Apache webserver?
Can anyone clearly explain the main purpose of node.js?
node.js is a platform (language, library, and interpreter), and Turing-complete, i.e. you can do anything with it. Most likely, you'll want a web application which is interactive in some way. Have a look at examples like a chat room. There are also lots of other resources on how to get started.
In the end, it's up to you what you want your site to be. A chatroom? A forum? A search engine? A multiplayer game? If you just want to transfer static files (i.e. have no need for server state or communication between clients), there's no need to use node.js.
Questions
What do I do next? I mean, what should I develop and what should I learn? I am confused: what is different from what I am already doing with my current webserver?
Is node.js just a replacement for my Apache webserver?
Can anyone clearly explain the main purpose of node.js?
Answers
Start with some simple examples and/or tutorials. I've forked Mastering Node on github, which is a quick read but is also still a work in progress. I've used expressjs for quickly creating static sites (like my online resume). I also use node.js and nodeunit for testing JavaScript or performing scripting tasks that could otherwise be done in bash, php, batch, perl, etc.
node.js gives an IO wrapper to Google's V8 JavaScript engine. This means that JavaScript is not bound to a web browser and can interact with any type of IO. That means files, sockets, processes (phihag's Turing-complete answer). It can do pretty much anything.
The main purpose of node.js is that IO code is evented and (mostly) non-blocking. For example, in ASP.NET, when a web server receives a request, that request's thread is blocked until all processing is complete (unless it is processed by an asynchronous handler, which is the exception, not the rule). In node.js (express, railwayjs, etc.), request processing is handled by events and callbacks. Code is executed asynchronously, and callbacks are executed on completion. This is similar to the asynchronous pages of ASP.NET, the main difference being that node.js and the web frameworks on top of it don't create millions of threads. I believe the threading issue is discussed in Ryan's video.
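The difference is easy to see with node's file API (the file name here is hypothetical):
var fs = require('fs');

// blocking: the single thread waits until the whole file has been read
var data = fs.readFileSync('bigfile.txt');
console.log('sync read done');

// non-blocking: readFile returns immediately and the callback fires when
// the IO completes, leaving the thread free to handle other events meanwhile
fs.readFile('bigfile.txt', function (err, contents) {
    if (err) throw err;
    console.log('async read done');
});
console.log('this line runs before the async read completes');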
Here is an excellent video from node.js creator Ryan Dahl: http://www.youtube.com/watch?v=jo_B4LTHi3I
It explains what node.js is, with code examples; it is really good.
Here are some more resources you can take a look at:
http://blog.jayway.com/2011/05/15/a-not-very-short-introduction-to-node-js/
http://www.nodetuts.com/
http://www.howtonode.org/
I would like to display a live video stream in a web browser. (Compatibility with IE, Firefox, and Chrome would be awesome, if possible.) Someone else will be taking care of streaming the video, but I have to be able to receive and display it. I will be receiving the video over UDP, but for now I am just using VLC to stream it to myself for testing purposes. Is there an open source library that might help me accomplish this using HTML and/or JavaScript? Or a good website that would help me figure out how to do this on my own?
I've read a bit about RTSP, which seems like the traditional option for something like this. That might be what I have to fall back on if I can't accomplish this using UDP, but if that is the case I still am unsure of how to go about this using RTSP/RTMP/RTP, or what the differences between all those acronyms are, if any.
I thought HTTP adaptive streaming might be the best option for a while, but it seemed like all the solutions using that were proprietary (Microsoft IIS Smooth Streaming, Apple HTTP Live Streaming, or Adobe HTTP Dynamic Streaming), and I wasn't having much luck figuring out how to accomplish it on my own. MPEG-DASH sounded like an awesome solution as well, but it doesn't seem to be in use yet since it is still so new. But now I am told that I should expect to receive the video over UDP anyways, so those solutions probably don't matter for me anymore.
I've been Googling this stuff for several days without much luck finding anything to help me implement it. All I can find are articles explaining what the technologies are (e.g. RTSP, HTTP adaptive streaming, etc.) or tools that you can buy to stream your own videos over the web. Your guidance would be greatly appreciated!
It is incorrect that most video sites use FLV; MP4 is the most widely supported format, and it is played via Flash players as well.
The easiest way to accomplish what you want is to open an Amazon S3/CloudFront account and work with JW Player. Then you have access to RTMP software to stream video and audio. This service is very cheap. If you want to know more about this, check out these tutorials:
http://www.miracletutorials.com/category/s3-amazon-cloudfront/ Start at the bottom and work your way up to the tutorials higher up.
I hope this helps you get on your way.
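For what it's worth, a basic JW Player setup for such an RTMP stream might look like the following sketch (the element id and the CloudFront URL are placeholders):
// assumes jwplayer.js is loaded and the page contains <div id="player"></div>
jwplayer('player').setup({
    file: 'rtmp://example.cloudfront.net/cfx/st/myvideo.mp4', // placeholder URL
    width: 640,
    height: 360
});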
If you don't need sound, you can send JPEGs with a header like this:
Content-Type: multipart/x-mixed-replace
This is a simple demo with node.js; it uses the opencv4nodejs library to generate the images. You can use any other HTTP server that allows appending data to the socket while keeping the connection open. Tested on Chrome and FF on Ubuntu Linux.
To run the sample you will need to install the library with npm install opencv4nodejs, which might take a while, then start the server with node app.js. Here is app.js itself:
var http = require('http');
const cv = require('opencv4nodejs');

var m = new cv.Mat(300, 300, cv.CV_8UC3);
var cnt = 0;
const blue = new cv.Vec3(255, 220, 120);
const yellow = new cv.Vec3(255, 220, 0);
var lastTs = Date.now();

http.createServer((req, res) => {
    if (req.url == '/') {
        res.end("<!DOCTYPE html><style>iframe {transform: scale(.67)}</style><html>This is a streaming video:<br>" +
            "<img src='/frame'></img></html>");
    } else if (req.url == '/frame') {
        res.writeHead(200, { 'Content-Type': 'multipart/x-mixed-replace;boundary=myboundary' });
        var x = 0;
        var fps = 0, fcnt = 0;
        var next = function () {
            var ts = Date.now();
            var m1 = m.copy();
            // recompute the FPS counter once per second
            fcnt++;
            if (ts - lastTs > 1000) {
                lastTs = ts;
                fps = fcnt;
                fcnt = 0;
            }
            // draw the frame counter and a moving circle on the copy
            m1.putText(`frame ${cnt} FPS=${fps}`, new cv.Point2(20, 30), 1, 1, blue);
            m1.drawCircle(new cv.Point2(x, 50), 10, yellow, -1);
            x += 1;
            if (x > m.cols) x = 0;
            cnt++;
            // encode the frame as JPEG and push it as the next multipart chunk
            var buf = cv.imencode(".jpg", m1);
            res.write("--myboundary\r\nContent-type:image/jpeg\r\nDaemonId:0x00258009\r\n\r\n");
            res.write(buf, function () {
                next();
            });
        };
        next();
    }
}).listen(80);
A bit later I found this example, with some more details, in Python: https://blog.miguelgrinberg.com/post/video-streaming-with-flask
UPDATE: it also works if you stream this into an HTML img tag.
True cross-browser streaming is only possible through "rich media" clients like Flash, which is why almost all video websites default to serving video using Adobe's proprietary .flv format.
For non-live video, the advent of video embeds in HTML5 shows promise, and streaming using Canvas and JavaScript should be technically possible, but handling streams and preloading binary video objects would have to be done in JavaScript and would not be very straightforward.