So here is my problem. I want to play audio from Node.js running on a Raspberry Pi and then adjust the brightness of an LED strip, also connected to the same Pi, based on the frequency readings from the audio file. However, I can't seem to find anything in Node that gives the same functionality as the Web Audio API AnalyserNode.
I found a few libraries (e.g. https://www.npmjs.com/package/audio-render) that come close and are based on the Web Audio API, but the frequency values they produce are completely incorrect. I verified this by comparing them to a browser version I created using the Web Audio API.
I need the audio to play from node while also being analyzed to affect the brightness levels.
Any help would be appreciated. I really thought this would be simpler to handle in Node, but six hours later I'm still without a solution.
Victor Dibiya at IBM has an excellent example that illustrates how to use the web-audio-api module to decode an audio file into a buffer array of PCM data from which one can extract amplitude data from sound files and infer beats:
https://github.com/victordibia/beats
I have this working on a Raspberry Pi with LEDs controlled via Fadecandy.
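As an illustration of the amplitude-to-brightness step (independent of any particular decoding library), a minimal sketch in plain JavaScript might look like the following. The function names are my own invention, not part of web-audio-api or Fadecandy; it assumes you already have a window of decoded PCM samples as a Float32Array:

```javascript
// Sketch: map the RMS amplitude of a window of decoded PCM samples to an
// LED brightness value (0-255). Assumes samples are normalized to [-1, 1],
// as decodeAudioData produces; the function names here are hypothetical.
function rmsAmplitude(samples) {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    sum += samples[i] * samples[i];
  }
  return Math.sqrt(sum / samples.length);
}

function brightnessFromWindow(samples) {
  // RMS of full-scale PCM is at most 1.0; clamp and scale to 0-255.
  const rms = Math.min(rmsAmplitude(samples), 1.0);
  return Math.round(rms * 255);
}

// Example: a quiet window vs. a loud window.
const quiet = new Float32Array([0.01, -0.02, 0.015, -0.01]);
const loud = new Float32Array([0.9, -0.8, 0.85, -0.95]);
```

You would run this per analysis window (e.g. every 1024 samples) and push the resulting value to the LED controller.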
I've been researching this for a while, but I'm not getting a straight answer to my exact question. I'm trying to understand the process behind video players switching video quality (480p, 720p, 1080p, etc.).
To start: is this more of a front-end thing or a back-end thing? And to illustrate, does one:
A) Upload one video file to a server (S3/Google Cloud) at the highest quality, then use a video tag in HTML and add a property to specify which quality is desired?
B) Upload one video file to a server (S3/Google Cloud) at the highest quality, then use JS to control the playback quality?
C) Split highest quality uploaded file into multiple files with different streaming quality using server code, then use front-end JS to choose which quality is needed?
D) Realize that this is way more work than it seems and should leave it to professional video streaming services, such as JWPlayer?
Or am I not seeing another option that's possible without a streaming service, without actually building a streaming service?
If the answer is pretty much D, what service do you recommend?
Note: I know YouTube and Vimeo can handle this but I'm not trying to have that kind of overhead.
It is answer 'C' as noted in the comments, and maybe partly answer 'D' also.
You need a video streaming server that supports one of the popular adaptive bitrate (ABR) streaming protocols, DASH or HLS. ABR lets the client device or player download the video in chunks, e.g. 10-second chunks, and select the next chunk from the bitrate most appropriate to the current network conditions.
There are open source streaming servers available, such as GStreamer, and licensed ones, like Wowza, which you can use if you want to host the videos yourself.
For some examples of ABR, see this answer: https://stackoverflow.com/a/42365034/334402
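The client-side selection step can be sketched very simply. The rendition list below is hypothetical, and real players (hls.js, dash.js) use smoothed throughput estimates and buffer levels rather than a single measurement, but this shows the core idea of picking the highest rendition that fits the available bandwidth:

```javascript
// Sketch of ABR rendition selection: given the measured throughput of the
// last chunk download, pick the highest-bitrate rendition that still fits.
// The rendition list is illustrative, not from any real manifest.
const renditions = [
  { name: '480p', bitrateKbps: 1200 },
  { name: '720p', bitrateKbps: 2800 },
  { name: '1080p', bitrateKbps: 5000 },
];

function pickRendition(measuredKbps, list) {
  // Keep a safety margin so playback does not stall on small dips.
  const usable = measuredKbps * 0.8;
  let chosen = list[0]; // always fall back to the lowest quality
  for (const r of list) {
    // list is sorted ascending, so the last one that fits wins.
    if (r.bitrateKbps <= usable) chosen = r;
  }
  return chosen;
}
```

The server's job (answer C) is to produce those renditions and a manifest; the player re-runs a selection like this before each chunk.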
I have an experimental sound project I'm working on that streams PCM audio data over a webRTC connection.
Unfortunately I have to do some special processing to the raw data before sending it over the stream, so I haven't been able to pipe the stream directly over WebRTC. Instead I do the processing in a ScriptProcessorNode.
I know that ScriptProcessorNode has been deprecated in favor of AudioWorkletNode. Essentially I am doing the same thing, because I have a web worker which processes the script processor node's data.
This processed data is then sent over webRTC, which I want to visualize and interact with on the other end.
I came across several repos that do this kind of thing but I can't seem to find one that works efficiently with real-time peer streamed data.
wavesurfer-js.org/
Works great, but it only loads full audio files, not streamed data. I was able to manipulate the library a little to update the waveform visualization with live stream data, but the way I'm doing it is not performant.
https://github.com/bbc/waveform-data.js + https://github.com/bbc/peaks.js
This one looks promising and I have yet to try it. Lots of features for interacting with the waveform.
github.com/soundcloud/waveformjs
Works well and has an 'update' function that is performant, but the library is not being maintained and doesn't support much functionality beyond viewing the waveform.
Does anyone have any experience with a good library for this purpose?
Thanks
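For anyone hitting the same wall: the trick behind a performant `update` is to reduce each incoming block of samples to one min/max pair per display column as it arrives, so redrawing only appends columns instead of rescanning the whole stream. A sketch of that reduction (the `PeakBuffer` name and `samplesPerColumn` setting are made up for illustration, not any library's API):

```javascript
// Incremental peak buffer for live waveform drawing: each incoming block of
// samples is folded into {min, max} pairs, one pair per display column.
class PeakBuffer {
  constructor(samplesPerColumn) {
    this.samplesPerColumn = samplesPerColumn;
    this.peaks = []; // finished {min, max} columns, ready to draw
    this.min = Infinity;
    this.max = -Infinity;
    this.count = 0;
  }
  append(samples) {
    for (let i = 0; i < samples.length; i++) {
      const s = samples[i];
      if (s < this.min) this.min = s;
      if (s > this.max) this.max = s;
      if (++this.count === this.samplesPerColumn) {
        // Column complete: store it and start the next one.
        this.peaks.push({ min: this.min, max: this.max });
        this.min = Infinity;
        this.max = -Infinity;
        this.count = 0;
      }
    }
  }
}
```

Drawing then iterates `peaks` once per frame (or only the newly appended columns), which stays cheap regardless of how long the stream runs.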
I've discovered that (at least in Chrome) Web Audio resamples wav files to 48k when using decodeAudioData. Is there any way to prevent this and force it to use the file's original sample rate? I'm sure this is fine for game development, but I'm trying to write some audio editing tools and this sort of thing isn't cool. I'd like to be fully in control of when/if resampling occurs.
As far as I know, you're just going to get whatever sampling rate your AudioContext is using (which will be determined by your sound card, I believe).
They lay out the steps here: https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#dfn-decodeAudioData
"The decoding thread will take the result, representing the decoded linear PCM audio data, and resample it to the sample-rate of the AudioContext if it is different from the sample-rate of audioData. The final result (after possibly sample-rate converting) will be stored in an AudioBuffer."
Nope, you can't prevent decodeAudioData from resampling to the AudioContext's sampleRate. You can load and create AudioBuffers yourself, or decode the file into a buffer in an OfflineAudioContext that is fixed to the file's original rate (although it's going to be hard to tell what that is, I imagine).
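Since the hard part is knowing the file's original rate, one option for wav files specifically is to read it straight out of the header before decoding. A sketch, assuming a canonical RIFF/WAVE layout with the fmt chunk starting at byte 12 (real-world files can have extra chunks first, so a robust parser would walk the chunk list):

```javascript
// Read the sample rate from a WAV file's header: in the canonical layout the
// fmt chunk starts at byte 12 and its sample-rate field sits at bytes 24-27,
// little-endian. Sanity-check the RIFF/WAVE magic values first.
function wavSampleRate(arrayBuffer) {
  const view = new DataView(arrayBuffer);
  if (view.getUint32(0, false) !== 0x52494646) throw new Error('not RIFF'); // 'RIFF'
  if (view.getUint32(8, false) !== 0x57415645) throw new Error('not WAVE'); // 'WAVE'
  return view.getUint32(24, true); // sample rate, little-endian
}
```

With that rate in hand you can create `new OfflineAudioContext(channels, length, rate)` and decode into it, so no resampling occurs.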
There is discussion on this point - https://github.com/WebAudio/web-audio-api/issues/30.
There is now a web component for loading audio using SoX:
https://www.npmjs.com/package/sox-element
It allows you to decode audio at the original sample rate. The data is unaltered from the original.
I am getting 16-bit audio from the server, and it arrives interleaved.
That means I need to loop over it in JavaScript and separate the left and right channels into two 32-bit float arrays.
Well, this is just too slow for JavaScript to execute while also scheduling playback; things get out of sync. This is a live stream, so the Web Audio API seems to be designed mainly for local synths and such, and streaming PCM does not seem to be a good approach. I know that you would normally never send raw PCM to begin with. But say I wanted to send Vorbis or something similar: it has to be in a container like .ogg or WebM, and browsers have their own internal buffering over which we have very little or no control.
So I tried sending ADPCM and decoding it to PCM on the client in JavaScript. That is also too slow.
If I preprocess the data on the server instead, de-interleaving it and converting it to 32-bit floats before sending, the data size doubles, from 16 bits to 32 bits per sample.
So what is the best way to render 16-bit audio without processing on the client side?
Also, can you play audio from worker threads? Would implementing the conversions in a worker thread help? There is all that WebSocket communication going on, and JS is single-threaded.
I would also like to add that the computation works much better in Chrome on a Mac Pro (I hear almost no glitches between samples) than on a client running on a PC.
No, you cannot currently play audio from Worker threads. However, I really doubt that your problem is in the cost of de-interleaving audio data; have you tried just sending a mono stream? Properly synchronizing and buffering a stream in a real network environment is quite complex.
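To give a sense of scale for that doubt: a single-pass de-interleave over typed arrays is about as cheap as a JavaScript loop gets, which is why the bottleneck is more likely elsewhere. A sketch (the function name is illustrative, not a library API):

```javascript
// De-interleave 16-bit interleaved stereo PCM into two Float32Arrays in one
// pass, scaling each sample from the signed 16-bit range to [-1, 1).
function deinterleave16(int16Samples) {
  const frames = int16Samples.length >> 1; // two samples per stereo frame
  const left = new Float32Array(frames);
  const right = new Float32Array(frames);
  for (let i = 0; i < frames; i++) {
    left[i] = int16Samples[2 * i] / 32768;
    right[i] = int16Samples[2 * i + 1] / 32768;
  }
  return { left, right };
}
```

On typical hardware this processes a second of 44.1 kHz stereo audio in well under a millisecond, far below the cost of the surrounding WebSocket handling.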
I have been throwing an idea around that requires comparison of client microphone input and an existing wav file.
I would like to compare audio waves, client side if possible, returning a percentage of accuracy and was hoping this could be accomplished with the new HTML5 getUserMedia API. However I have not been able to find a viable solution thus far.
A prime example would be a graphical representation of an analogue tuner, I would like to compare mic input to audio recordings of different notes and keys.
Can this be achieved client side via JavaScript? And if not, are there any APIs out there that are already doing this?
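As a sketch of what the client-side core of a tuner could look like: estimate the dominant pitch of a sample window by autocorrelation, then compare it against a target note's frequency. In a real page the samples would come from getUserMedia via an AnalyserNode or ScriptProcessorNode; here a synthetic sine wave stands in, and the function is a simplified illustration rather than a production pitch detector:

```javascript
// Estimate the dominant pitch of a window of PCM samples by finding the lag
// (in samples) with the strongest autocorrelation, then converting lag to Hz.
function estimatePitch(samples, sampleRate) {
  let bestLag = -1;
  let bestCorr = 0;
  // Search lags corresponding to roughly 50 Hz up to 1000 Hz.
  const minLag = Math.floor(sampleRate / 1000);
  const maxLag = Math.floor(sampleRate / 50);
  for (let lag = minLag; lag <= maxLag; lag++) {
    let corr = 0;
    for (let i = 0; i + lag < samples.length; i++) {
      corr += samples[i] * samples[i + lag];
    }
    if (corr > bestCorr) {
      bestCorr = corr;
      bestLag = lag;
    }
  }
  return bestLag > 0 ? sampleRate / bestLag : 0;
}

// Synthetic 440 Hz (A4) test tone: a tenth of a second at 44.1 kHz.
const sampleRate = 44100;
const tone = new Float32Array(4410);
for (let i = 0; i < tone.length; i++) {
  tone[i] = Math.sin((2 * Math.PI * 440 * i) / sampleRate);
}
```

A "percentage of accuracy" could then be derived from the distance in cents between the estimated and target frequencies.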