I've discovered (at least in Chrome) that Web Audio resamples WAV files to 48k when using decodeAudioData. Is there any way to prevent this and force it to use the file's original sample rate? I'm sure this is fine for game development, but I'm trying to write some audio editing tools, and this sort of thing isn't cool there. I'd like to be fully in control of when/if resampling occurs.
As far as I know, you're just going to get whatever sampling rate your AudioContext is using (which will be determined by your sound card, I believe).
They lay out the steps here: https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#dfn-decodeAudioData
"The decoding thread will take the result, representing the decoded linear PCM audio data, and resample it to the sample-rate of the AudioContext if it is different from the sample-rate of audioData. The final result (after possibly sample-rate converting) will be stored in an AudioBuffer."
Nope, you can't prevent decodeAudioData from resampling to the AudioContext's sampleRate. You can either load the file and build the AudioBuffers yourself, or decode the file into a buffer in an OfflineAudioContext fixed to the rate the file was originally encoded at (although figuring out what that rate is will be up to you, I imagine).
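For example, a minimal sketch of the OfflineAudioContext route, assuming a canonical RIFF/WAVE header (where the sample rate sits at byte offset 24) and a browser that supports decodeAudioData on offline contexts:

// Sketch: decode a WAV at its own rate by fixing an OfflineAudioContext to it.
async function decodeAtOriginalRate(arrayBuffer) {
  const sampleRate = new DataView(arrayBuffer).getUint32(24, true); // little-endian per the WAV header
  const ctx = new OfflineAudioContext(2, 1, sampleRate); // length doesn't matter for decoding
  return ctx.decodeAudioData(arrayBuffer.slice(0)); // slice: decodeAudioData detaches the buffer
}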
There is discussion on this point - https://github.com/WebAudio/web-audio-api/issues/30.
There is now a web component for loading audio using SoX:
https://www.npmjs.com/package/sox-element
It allows you to decode audio at its original sample rate. The data is unaltered from the original.
I am working with React and Node. My project has a requirement to merge videos and play them in the player. Is it possible to do this either on my React side, using a canvas, or on the back end using some module other than ffmpeg?
I just want to show a preview, nothing else. Is that possible?
Any help would be much appreciated.
Right now what I am doing is playing the videos one by one in the player:
{videos.map((url) => <ReactPlayer src={url}/>)}
Now what I want to do is draw the frames onto a canvas. For that I am currently playing all the URLs at once, but how can I play them in series instead, holding off on drawing each video's frames until the previous one completes?
To achieve continuous playback in browsers across multiple input video files, there's no need for server-side processing. Static file serving of the multiple video files (we'll call them chunks from now on) is enough.
The .appendBytes() method for a video player's playback buffer was first invented in Flash, to allow switching between video streams (or chunks of different quality). It was particularly useful for live video, where the next video file doesn't exist yet when playback starts. It also allowed video chunks of multiple resolutions to play one after the other seamlessly, which, at the time, didn't work in any other player, including VLC (and I think ffmpeg didn't have it either).
HTML5 browsers have since added an .appendBuffer() method (part of Media Source Extensions) for adding video chunks to the currently playing video buffer.
It lets you pre-load whatever content you need by hand, with full control of what gets played and when (you are in control of buffering and of what comes next at all times).
You can start dumping video chunks into the playing buffer at any time.
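A minimal sketch of that flow (the codec string is only an example, chunkUrls is assumed to be your list of chunk URLs, and the chunks must be fragmented MP4 or WebM segments for Media Source Extensions to accept them):

const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener('sourceopen', async () => {
  const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
  for (const url of chunkUrls) {
    const chunk = await (await fetch(url)).arrayBuffer();
    sb.appendBuffer(chunk); // dump the next chunk into the playing buffer
    await new Promise((r) => sb.addEventListener('updateend', r, { once: true }));
  }
  mediaSource.endOfStream();
});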
On the server side, ffmpeg cannot solve your problem the way the browser can, as you would have to handle very difficult corner cases where the video resolution, codecs, audio channels, etc. differ. Also, a browser-only solution is vastly superior to remuxing or re-encoding the video on the server.
MDN: https://developer.mozilla.org/en-US/docs/Web/API/SourceBuffer/appendBuffer
This will solve all your problems related to video playback of multiple source files.
If you want to do this on the backend, as stated in the comment, you will likely need to use ffmpeg. There are libraries that make it simpler, though, such as fluent-ffmpeg.
Assuming you already have the files on your Node server, you can use something like:
const ffmpeg = require('fluent-ffmpeg');
ffmpeg('/path/to/input1.avi')
  .input('/path/to/input2.avi')
  .on('error', function(err) {
    console.log('An error occurred: ' + err.message);
  })
  .on('end', function() {
    console.log('Merging finished!');
  })
  // concatenates the inputs; the second argument is a temp dir for intermediate files
  .mergeToFile('/path/to/merged.avi', '/path/to/tempDir');
Note that this is a simple example taken directly from https://github.com/fluent-ffmpeg/node-fluent-ffmpeg#mergetofilefilename-tmpdir-concatenate-multiple-inputs.
Make sure you read through the prerequisites since it requires ffmpeg to be installed and configured.
There are other aspects that you might need to consider, such as differing codecs and resolutions. Hopefully this gives you a good starting point though.
Pretty sure most, if not all, video manipulation libraries use ffmpeg under the hood.
Seeing as you're a React dev, you might appreciate Remotion for manipulating the videos as needed. It doesn't run in the frontend, but it does have some useful features.
I am trying to trim leading and trailing silence from an audio file recorded in browser before I send it off to be stored by the server.
I have been looking for examples to better understand the Web Audio API, but they are scattered, and many cover deprecated methods like ScriptProcessorNode. I thought I was close when I found this example:
HTML Audio recording until silence?
which I was eager to try, since it at least processes silence, which I think I can then use to trim. However, after loading the example in a sandbox, it does not appear to detect silence in a way that I can understand.
If anyone has any help or advice it would be greatly appreciated!
While ScriptProcessorNode is deprecated, it's not going away any time soon. You should use AudioWorkletNode if you can (but not all browsers support that).
But since you have the recorded audio in a file, I would decode it using decodeAudioData to get an AudioBuffer. Then use getChannelData(n) to get a Float32Array for the n'th channel. Analyze this array however you want to determine where the silence at the beginning ends and the silence at the end begins. Do this for each n.
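As a rough sketch of that analysis (the 0.01 amplitude threshold is an arbitrary assumption; a real tool might use a windowed RMS instead):

function findNonSilentRange(audioBuffer, threshold = 0.01) {
  let start = audioBuffer.length;
  let end = 0;
  for (let n = 0; n < audioBuffer.numberOfChannels; n++) {
    const data = audioBuffer.getChannelData(n); // Float32Array for channel n
    let i = 0;
    while (i < data.length && Math.abs(data[i]) < threshold) i++;
    if (i === data.length) continue; // this channel is entirely silent
    let j = data.length - 1;
    while (Math.abs(data[j]) < threshold) j--;
    start = Math.min(start, i);
    end = Math.max(end, j + 1);
  }
  return { start, end }; // sample offsets of the non-silent region
}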
Now you know where the non-silent part is. WebAudio has no way of encoding this audio, so you'll either have to do your own encoding, or maybe get MediaRecorder to encode this so you can send it off to your server.
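If you go the MediaRecorder route, one possible sketch (note it records in real time and produces webm/opus rather than wav; trimmedBuffer and upload are placeholders for your own trimmed AudioBuffer and upload code):

const ctx = new AudioContext();
const source = ctx.createBufferSource();
source.buffer = trimmedBuffer; // the AudioBuffer built from the non-silent samples
const dest = ctx.createMediaStreamDestination();
source.connect(dest);
const recorder = new MediaRecorder(dest.stream);
const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => upload(new Blob(chunks)); // send to your server however you like
recorder.start();
source.start();
source.onended = () => recorder.stop();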
I currently need to extract snapshot(s) from an IP Camera using RTSP on a web page.
The VLC Web Plugin works well for playing the stream, but before I get my hands dirty with its JavaScript API, can someone tell me whether the API can help me take snapshot(s) of the stream, the way it's done in VLC Media Player? That capability isn't mentioned on the page above.
If the answer is 'No', please give me some other way to do this.
Thanks in advance.
Dang Loi.
The VLC plugin only provides metadata properties accessible from JavaScript.
For this reason there is no way to access the bitmap/video data itself, as the plugin runs sandboxed in the browser. The only way to obtain such data would be if the plugin itself provided a mechanism for it.
The only way to grab a frame is therefore to use a generic screen grabber (such as SnagIt), of course without the ability to control it from JavaScript.
You could, as an option, look into the HTML5 video element to see whether your video source works with it. In that case you could grab frames, draw them to a canvas, and from there save them as an image.
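If the stream does play in a video element, the frame grab itself is only a few lines (the source must be same-origin or CORS-enabled, or toDataURL will throw):

const video = document.querySelector('video');
const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0); // copy the current frame
const snapshot = canvas.toDataURL('image/png'); // PNG as a data URI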
Another option, in case the original stream format isn't supported, is to transcode it on the fly to a format the browser supports. Here is one such transcoder.
The Web Audio API seems cool, but I'd really like to use it to process audio files and then save them as WAV files again, and I don't really need to listen to them while they are processing. Is this possible? Is there something like encodeAudioData() to turn the AudioBuffer back into an ArrayBuffer so I could put it back in a file?
Edit: recorderJS seems almost perfect, but it only outputs 16-bit wavs. Any chance there is something that can do pro-audio formats (24-bit or 32-bit float)?
The Web Audio API specification includes the OfflineAudioContext, which does exactly what you need.
OfflineAudioContext is a particular type of AudioContext for rendering/mixing-down (potentially) faster than real-time. It does not render to the audio hardware, but instead renders as quickly as possible, calling a completion event handler with the result provided as an AudioBuffer.
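A minimal sketch of the render step, using the promise form of startRendering (buffer is assumed to come from decodeAudioData; you would still have to write the WAV encoding yourself):

const offline = new OfflineAudioContext(buffer.numberOfChannels, buffer.length, buffer.sampleRate);
const source = offline.createBufferSource();
source.buffer = buffer;
source.connect(offline.destination); // insert your processing nodes here for real work
source.start(0);
offline.startRendering().then((rendered) => {
  // 'rendered' is an AudioBuffer: pull Float32Arrays out with getChannelData()
  // and pack them into a WAV (16/24-bit PCM or 32-bit float) yourself.
});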
Let's say I have a canvas element, and I need to turn the image on the canvas into a PNG or JPEG. Of course, I can simply use canvas.toDataURL, but the problem is that I need to do this twenty times a second, and canvas.toDataURL is extremely slow: so slow that the capturing process misses frames because the browser is busy converting to PNG.
My idea is to call context.getImageData(...), which evidently is much faster, and send the returned CanvasPixelArray to a Web Worker, which will then process the raw image data into a PNG or JPEG. The problem is that I can't access the native canvas.toDataURL from within the Web Worker, so instead, I'll need to resort to pure JavaScript. I tried searching for libraries intended for Node.js, but those are written in C++. Are there any libraries in pure JavaScript that will render raw image data into a PNG or JPEG?
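Roughly, the hand-off I have in mind looks like this (worker being a Worker wrapping whatever encoder I end up with):

const imageData = context.getImageData(0, 0, canvas.width, canvas.height);
worker.postMessage(
  { width: imageData.width, height: imageData.height, pixels: imageData.data.buffer },
  [imageData.data.buffer] // transfer the pixels instead of copying them
);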
There have been several JavaScript ports of libpng, including pnglets (very old), data:demo, and PNGlib.
The PNG spec itself is pretty straightforward; the most difficult part of implementing a simple PNG encoder yourself is getting the ZLIB compression right (for which there are also many independent JavaScript implementations out there).
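For instance, the chunk framing around that ZLIB stream fits in a few lines (every PNG chunk is a big-endian length, a four-byte type, the payload, and a CRC-32 over type plus payload); this is only a sketch of that layer, not a full encoder:

function crc32(bytes) {
  let crc = 0xFFFFFFFF;
  for (const b of bytes) {
    crc ^= b;
    for (let k = 0; k < 8; k++) crc = (crc >>> 1) ^ (0xEDB88320 & -(crc & 1));
  }
  return (crc ^ 0xFFFFFFFF) >>> 0;
}

function pngChunk(type, data) {
  const out = new Uint8Array(12 + data.length);
  const view = new DataView(out.buffer);
  view.setUint32(0, data.length); // length, big-endian
  for (let i = 0; i < 4; i++) out[4 + i] = type.charCodeAt(i); // e.g. 'IHDR', 'IDAT'
  out.set(data, 8);
  view.setUint32(8 + data.length, crc32(out.subarray(4, 8 + data.length)));
  return out;
}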
There's actually a C++ to JavaScript compiler called Emscripten.
Someone made a port of libpng, which you might want to check.
I was able to write my own PNG encoder, which supports both RGB and palette depending on how many colors there are. It's intended to be run as a Web Worker. I've open-sourced it as usain-png.