Visualizing PCM data streamed over a peer connection in real time? - javascript

I have an experimental sound project I'm working on that streams PCM audio data over a WebRTC connection.
Unfortunately I have to do some special processing to the raw data before sending it over the stream, so I haven't been able to pipe the stream directly over WebRTC. Instead, I do the processing in a ScriptProcessorNode.
I know that ScriptProcessorNode has been deprecated in favor of AudioWorkletNode. Essentially I am doing the same thing, because I have a web worker which processes the script processor node's data.
This processed data is then sent over WebRTC, and I want to visualize and interact with it on the other end.
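For context, the capture side looks roughly like this (a simplified sketch; the actual processing and signaling are elided, and audioContext, source, worker, and dataChannel are placeholder names):

// Assumes an existing AudioContext, source node, worker, and RTCDataChannel.
var processor = audioContext.createScriptProcessor(4096, 2, 2);
source.connect(processor);
processor.connect(audioContext.destination);

processor.onaudioprocess = function (e) {
  // Copy the raw PCM so it survives past this callback (left channel only, for brevity).
  var left = new Float32Array(e.inputBuffer.getChannelData(0));
  // Hand the samples to the worker for the custom processing step;
  // transferring the underlying buffer avoids a copy.
  worker.postMessage(left.buffer, [left.buffer]);
};

// The worker posts processed chunks back, which go out over the data channel.
worker.onmessage = function (e) {
  dataChannel.send(e.data);
};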
I came across several repos that do this kind of thing but I can't seem to find one that works efficiently with real-time peer streamed data.
https://wavesurfer-js.org/
This works great but only loads full audio files, not streamed data. I was able to manipulate the library a little bit so it could update the waveform visualization with live stream data, but the way I'm doing it is not performant.
https://github.com/bbc/waveform-data.js + https://github.com/bbc/peaks.js
This combination looks promising, though I have yet to try it. Lots of features for interacting with the waveform.
https://github.com/soundcloud/waveformjs
Works well and has an 'update' function that is performant, but the library is no longer maintained and doesn't support much functionality beyond viewing the waveform.
Does anyone have any experience with a good library for this purpose?
Thanks

Related

Run a JavaScript three.js animation on a performant machine/server but display it on a website, so it also runs on less performant devices

I'll try my best to explain this as well as I can:
I have programmed an art installation (an interactive animation with three.js). It runs very smoothly on my laptop, but not on older machines and on almost no mobile devices. Is there any way to run the JavaScript animation on a performant machine/server but display/render it on a website, so it also runs on less performant devices? I was thinking about something like what is done for cloud gaming (as in shadow.tech) or services like parsec.app, maybe...
Any ideas/hints?
I've thought about the same concept before too. The answer to your question about rendering the three.js on a server and displaying it on an older machine will be something like this:
You are going to have to render three.js on a server, then basically live stream it to the client, then capture the client's controls and send them back to the server to apply them. But latency is going to be a pain, so better write your code nice and neat, with a lot of error handling. Come to think of it, Google Stadia does the same thing too when you play games on YouTube.
Server Architecture
So roughly thinking about it I would build my renderer backend like this:
First, create a room with multiple instances of your three.js code. This basically means creating multiple entry points (i.e. index.js, index1.js, index2.js, etc.). Then decide if you want to do this with the straightforward "Recorded and Stream" method, or capture the stream directly from the canvas and then broadcast it.
Recorded and Stream method
This means you create the room, render and display it on the server machine, then capture what is displayed on the screen using something like OBS Studio and broadcast it to the client. I don't recommend this method.
OR
captureStream() method
After creating your room on your server, run the three.js code, then access the canvas you are rendering the three.js code into, like this:
var canvas = document.querySelector('canvas');
var video = document.querySelector('video');
// Optional frames per second argument.
var stream = canvas.captureStream(60);
// Set the source of the <video> element to be the stream from the <canvas>.
video.srcObject = stream;
Then use something like WebRTC to broadcast it to the client.
This is an example site you can look at: https://webrtc.github.io/samples/src/content/capture/canvas-pc/
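A minimal sketch of that broadcast step, assuming a pre-negotiated RTCPeerConnection and the stream from the snippet above (signaling is elided here):

// Attach the canvas stream's tracks to an RTCPeerConnection.
var pc = new RTCPeerConnection();
stream.getTracks().forEach(function (track) {
  pc.addTrack(track, stream);
});
// From here, create an offer and exchange SDP/ICE over your signaling
// channel as usual; the client receives the canvas as a remote video track.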
Honestly, don't ditch the client interaction idea. Instead, write code to detect latency beforehand, then decide whether to enable it for that specific client.
Hope this helps.

Rendering a webpage as H.264

I'm trying to render HTML as an H.264 stream and then stream it to another PC on my network.
I've got the last part, streaming to another PC on my network, down.
Now my only problem is rendering the webpage.
I can't render it once, because it isn't a static webpage.
I need to load the webpage, fetch images, run javascript and open websockets.
The only way I can imagine this working is if I run a browser (or maybe something like CEF?), "capture" the output, and encode it as H.264.
I'm basically trying to do the same as OBS's BrowserSource, but the only reason I'm NOT using OBS is that I can't find a good way to run it headless.
NOTE: I need to be able to do it through the commandline, completely headless.
I've done this with the Chromium Tab Capture API and the Off-Screen Tab Capture API.
Chromium will conveniently handle all the rendering, including bringing in WebGL-rendered content, and composite it all together into a nice neat MediaStream for you. From there, you can use it in a WebRTC call or pass it off to a MediaRecorder instance.
The off-screen tab capture even has a nice separate isolated process that cannot access local cameras and such.
https://developer.chrome.com/extensions/tabCapture
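A rough sketch of the extension-side capture, assuming a Chrome extension with the tabCapture permission, invoked from a browser-action handler (the mimeType is an assumption; Chrome's MediaRecorder can mux H.264 into a WebM container on supported builds):

// Inside the extension: capture the current tab as a MediaStream.
chrome.tabCapture.capture({ audio: true, video: true }, function (stream) {
  // Record the composited tab output, here as H.264 in a WebM container.
  var recorder = new MediaRecorder(stream, {
    mimeType: 'video/webm; codecs=h264'
  });
  recorder.ondataavailable = function (e) {
    // e.data is a Blob chunk; forward it to your streaming pipeline.
  };
  recorder.start(1000); // emit a chunk every second
});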
We are using CasparCG to render a rotating news display for live TV broadcast. It is an extremely powerful open source tool built by the Swedish public service broadcaster SVT. A true Swiss army knife type of software; highly recommended:
https://github.com/CasparCG/server

WebRTC RTCDataChannel server in C++

I'm trying to create a simple WebRTC server in C++ so I can transfer data between the browser and the server (no need for peer-to-peer). I only need RTCDataChannel; no media or audio is involved.
I tried this example:
https://github.com/llamerada-jp/webrtc-cpp-sample
But unfortunately I didn't manage to compile it, and it's an old project, so it may be irrelevant now.
Can someone provide a good example? Even some guidelines would be great. :)
I recently wrote a standalone library implementing WebRTC Data Channels in C++: https://github.com/paullouisageneau/libdatachannel
It's way easier to compile than the reference WebRTC full implementation, so since you don't need video or audio, you could use it on the server side. You can find usage examples in the test directory. Note you'll still need to transmit the SDP offer and answer between the client and the server to negotiate the data channel.
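For the browser side of that negotiation, a minimal sketch (the /offer endpoint is a placeholder; use whatever signaling transport you like, and note that ICE candidate exchange is omitted, so a real implementation should trickle candidates or wait for gathering to complete):

// Create a data-channel-only peer connection in the browser.
var pc = new RTCPeerConnection();
var channel = pc.createDataChannel('data');
channel.onopen = function () { channel.send('hello server'); };
channel.onmessage = function (e) { console.log('from server:', e.data); };

pc.createOffer()
  .then(function (offer) { return pc.setLocalDescription(offer); })
  .then(function () {
    // Ship the offer SDP to the C++ server, then apply the answer it returns.
    return fetch('/offer', {
      method: 'POST',
      body: JSON.stringify(pc.localDescription)
    });
  })
  .then(function (res) { return res.json(); })
  .then(function (answer) { return pc.setRemoteDescription(answer); });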

How to do realtime audio analysis with playback in Node.js?

So here is my problem. I want to play audio from Node.js running on a Raspberry Pi and then adjust the brightness of an LED strip, also connected to the same Pi, based on the frequency readings from the audio file. However, I can't seem to find anything in Node that gives the same functionality as the Web Audio API AnalyserNode.
I found a few libraries (e.g. https://www.npmjs.com/package/audio-render) that come close and are based on the Web Audio API, but the frequency values they produce are completely incorrect. I verified this by comparing them to a browser version I created using the Web Audio API.
I need the audio to play from node while also being analyzed to affect the brightness levels.
Any help would be appreciated. I really thought this would be simpler to handle in Node, but six hours later I'm still without a solution.
Victor Dibiya at IBM has an excellent example that illustrates how to use the web-audio-api module to decode an audio file into a buffer of PCM data, from which one can extract amplitude data and infer beats:
https://github.com/victordibia/beats
I have this working on a Raspberry Pi with LEDs controlled via Fadecandy.
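The decode step at the heart of that approach looks roughly like this (a sketch, assuming the web-audio-api npm module and a local file named song.mp3):

var AudioContext = require('web-audio-api').AudioContext;
var fs = require('fs');

var context = new AudioContext();
fs.readFile('song.mp3', function (err, buf) {
  if (err) throw err;
  context.decodeAudioData(buf, function (audioBuffer) {
    // Float32Array of PCM samples for the first channel.
    var pcm = audioBuffer.getChannelData(0);
    // Example: RMS amplitude over a small window as a brightness proxy.
    var sum = 0;
    for (var i = 0; i < 1024 && i < pcm.length; i++) {
      sum += pcm[i] * pcm[i];
    }
    var rms = Math.sqrt(sum / 1024);
    console.log('window RMS:', rms);
  });
});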

Web Audio API 16-bit to 32-bit conversion too slow (especially with such a time-critical operation) / Workers

I am getting 16-bit audio from the server, and it is currently sent as-is.
It is interleaved.
That means I need to loop over it and separate the right and left channels into two 32-bit float arrays in JavaScript.
Well, this is just too slow for JavaScript to execute while also scheduling the play time; things get out of sync. This is a live stream, so the Web Audio API seems to be implemented only for local synths and such, and streaming PCM does not seem to be a good approach. I know that you would never send PCM to begin with. Well, say I want to send Vorbis or something similar. It has to be in a container like .ogg or WebM or something, but browsers have their own internal buffering and we have very little/no control over it.
So I tried sending ADPCM and decoding it to PCM on the client in JavaScript. That is also too slow.
If I preprocess the data on the server, de-interleave it there, convert it to 32-bit floats, and send that to the client, the data size doubles (16 bit to 32 bit).
So what is the best way to render 16-bit audio without processing on the client side?
Also, can you play audio from worker threads? Would implementing the conversions in a worker thread help? I mean, there is all that WebSocket communication going on, and JS is single-threaded.
I would also like to add that doing the computation in Chrome on a Mac Pro works much better (I hear almost no glitches between samples) compared to a client running on a PC.
No, you cannot currently play audio from Worker threads. However, I really doubt that your problem is in the cost of de-interleaving audio data; have you tried just sending a mono stream? Properly synchronizing and buffering a stream in a real network environment is quite complex.
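For reference, the de-interleave-and-convert step the question describes is a simple loop, which supports the point that it is unlikely to be the bottleneck (a sketch, assuming an Int16Array of interleaved stereo samples):

// Split interleaved 16-bit stereo PCM into two Float32Arrays in [-1, 1].
function deinterleave(int16) {
  var frames = int16.length / 2;
  var left = new Float32Array(frames);
  var right = new Float32Array(frames);
  for (var i = 0; i < frames; i++) {
    left[i] = int16[2 * i] / 32768;      // scale 16-bit int to float
    right[i] = int16[2 * i + 1] / 32768;
  }
  return { left: left, right: right };
}
// The resulting arrays can be copied into an AudioBuffer's channels
// with copyToChannel() before scheduling playback.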
