I am using the HTML5 Web Audio API in my application. The application is simple; I have
BufferSourceNode -> GainNode -> lowpass filter -> context.destination
Now I want to save the output after the filters have been applied, so I decided to add a recorder node before context.destination. But this doesn't work as expected: it produces noise while the audio plays, even though the recorder does capture the filter effects.
Am I doing this the right way, or is there a better way to do it?
Two things:
1) If you are going to use the buffer anyway - even if you're not(*) - you might want to consider using an OfflineAudioContext (https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#OfflineAudioContext-section). OACs can run faster than real-time, so you don't need to "record" it in real time; you set up your nodes, call startRendering(), and the oncomplete event hands you an AudioBuffer. (*) If you still want a .WAV file, you can pull the WAV encoding function out of Recordjs and use it to encode an arbitrary buffer.
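For example, a minimal sketch of the offline route, assuming you already have the decoded AudioBuffer (the gain and filter settings below are placeholders):

// Render the same node graph offline; the result arrives as an AudioBuffer.
var offline = new OfflineAudioContext(2, myBuffer.length, myBuffer.sampleRate);

var source = offline.createBufferSource();
source.buffer = myBuffer;

var gain = offline.createGain();
gain.gain.value = 0.8;

var filter = offline.createBiquadFilter();
filter.type = 'lowpass';
filter.frequency.value = 1000;

source.connect(gain);
gain.connect(filter);
filter.connect(offline.destination);

source.start(0);

offline.oncomplete = function (e) {
  var renderedBuffer = e.renderedBuffer; // the filtered audio, ready to hand to a WAV encoder
};
offline.startRendering();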
2) That sounds like an error in your code - it should work either way, without causing extra noise. Do you have a code sample you can send me?
Normally I am able to find the answer I am looking for; however, I have come across an issue that I have not yet found a resolution for.
Given a MessageEvent whose body contains a 1-... second video file (webm) as a binary string: I can parse this as a data URL and update the src. However, I would like to instead build a growing buffer that can be streamed to srcObject, as if it were a media device.
I am working on a scalable API for broadcasting video data that has as few dependencies as possible.
String trimming is possible as well; maybe just trim the binary string using a regex that removes all header data and continuously append to srcObject. The stream may be in excess of 1 GB of total chunks, meaning that growing a src="..." string over time may not scale well. Additional solutions may include toggling between different video sources to achieve a smoother transition. I can manipulate the binary string in PHP on the server, or use a Python, C++, Ruby, or Node service, as long as it routes the output to the correct socket.
I am not utilizing webRTC.
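Roughly, this is the kind of growing-buffer pipeline I am after (a sketch only; it assumes the chunks arrive as ArrayBuffers, e.g. over a WebSocket, so a binary string would have to be converted first, and the MIME/codec string is a placeholder):

// Feed incoming chunks into a MediaSource instead of rebuilding a data URL.
const mediaSource = new MediaSource();
videoElement.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8,vorbis"');
  const queue = [];

  sourceBuffer.addEventListener('updateend', () => {
    if (queue.length > 0) {
      sourceBuffer.appendBuffer(queue.shift()); // flush queued chunks in order
    }
  });

  socket.onmessage = (event) => {
    if (sourceBuffer.updating || queue.length > 0) {
      queue.push(event.data);                   // can't append while an append is in progress
    } else {
      sourceBuffer.appendBuffer(event.data);
    }
  };
});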
Thanks, the Stack Overflow community is awesome, I do not get to say that often enough.
I have 2 videos that I want to join into one so that the client gets a faster loading experience. That means loading separate video data and playing the files like a playlist, but with a smooth transition too.
I have two ways of doing it:
1. Load the streams separately, split them into chunks, combine them, and feed the result as the source for the video element.
In experiment 1 I have a few issues.
i. Without the for-looped fetch, I can correctly set a single stream as the source of my video, BUT when using the loop to fetch each stream separately, it does not work:
- I looked for a way to combine video streams and I found How to use Blob URL, MediaSource or other methods to play concatenated Blobs of media fragments?. However, it tackles editing one video - splitting one video and joining it back - and also mentions that you can't just separate chunks, combine them, and expect the result to play.
In my case it's 2 different, complete videos (even duplicate video files, but from different streams). It seems this route is overkill for my problem. Is there a simpler way, perhaps a library, that can help me join these streams and convert them to a blob?
Here is my attempt (a simplified sketch of the fetch loop follows the demo link below). Another confusion is that the chunks array in my custom hook is still empty after the loop finishes running:
Codesandbox Demo 1 - Streams
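For reference, this is roughly what the fetch loop in the hook is trying to do (simplified; the URLs and the videoRef are placeholders and error handling is omitted):

// Fetch each stream, read it chunk by chunk, and collect everything
// into one array of Uint8Arrays.
async function loadStreams(urls) {
  const chunks = [];
  for (const url of urls) {
    const response = await fetch(url);
    const reader = response.body.getReader();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      chunks.push(value);
    }
  }
  // Combine the collected chunks into a single Blob for the video source.
  return new Blob(chunks, { type: 'video/webm' });
}

// loadStreams([url1, url2]).then((blob) => {
//   videoRef.current.src = URL.createObjectURL(blob);
// });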
2. Use canvas to draw the videos, one over the other, in sequence.
I found a working solution and ported it to React. Here is my demo: Codesandbox Demo 2 - Canvas. The problem I have here is that this ONLY works on Safari, and even there with a few issues; it does not work at all in other browsers.
Here is the original working solution, BUT that same example does not work well when I replace the video sources with my URLs. Could it be because mine is not .mp4? Check it out here: Original Solution on Codesandbox. Same code, different source; it also shows me a "The index is not in the allowed range." error.
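For context, the canvas approach boils down to something like this (a rough sketch; video1, video2 and canvas are assumed to already be in the DOM):

// Draw whichever video is currently playing onto the canvas, and switch
// to the second video when the first one ends.
const ctx = canvas.getContext('2d');
let current = video1;

function draw() {
  ctx.drawImage(current, 0, 0, canvas.width, canvas.height);
  requestAnimationFrame(draw);
}

video1.addEventListener('ended', () => {
  current = video2;
  video2.play();
});

video1.play();
draw();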
What am I doing wrong in both cases?
I guess No. 2 could be a separate question, but I am posting it here under the same problem. Let me know if I should post it separately.
Thank you in advance
This is sort of expanding on my previous question Web Audio API- onended event scope, but I felt it was a separate enough issue to warrant a new thread.
I'm basically trying to do double buffering using the web audio API in order to get audio to play with low latency. The basic idea is we have 2 buffers. Each is written to while the other one plays, and they keep playing back and forth to form continuous audio.
The solution in the previous thread works well enough as long as the buffer size is large enough, but latency takes a bit of a hit, as the smallest buffer I ended up being able to use was about 4000 samples long, which at my chosen sample rate of 44.1k would be about 90ms of latency.
I understand from the previous answer that the issue is in the use of the onended event, and it has been suggested that a ScriptProcessorNode might be of better use. However, it's my understanding that a ScriptProcessorNode has its own built-in buffer of a certain size, which you access whenever the node receives audio and whose size you set in the constructor:
var scriptNode = context.createScriptProcessor(4096, 2, 2); // buffer size, channels in, channels out
I had been using two alternating source buffers initially. Is there a way to access those from a ScriptProcessorNode, or do I need to change my approach?
No, there's no way to use other buffers in a ScriptProcessorNode. Today, your best approach would be to use a ScriptProcessorNode and write the samples directly into it.
Note that the way AudioBuffers work, your previous approach was not guaranteed to avoid copying and creating new buffers anyway - you can't access a buffer from the audio thread and the main thread at the same time.
In the future, using an audio worker will be a bit better - it will avoid some of the thread-hopping - but if you're (e.g.) streaming buffers down from a network source, you won't be able to avoid copying. (It's not that expensive, actually.) If you're generating the audio buffer, you should generate it in onaudioprocess.
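For example, a rough sketch of generating samples directly in onaudioprocess (the sine-wave generator here is just a stand-in for whatever actually produces your audio):

var context = new AudioContext();
var scriptNode = context.createScriptProcessor(1024, 1, 2); // smaller buffer size = lower latency
var phase = 0;

scriptNode.onaudioprocess = function (e) {
  var left = e.outputBuffer.getChannelData(0);
  var right = e.outputBuffer.getChannelData(1);
  for (var i = 0; i < left.length; i++) {
    // Placeholder synthesis: a 440 Hz sine wave. Replace with your own
    // sample generation, or copy from a queue your app fills elsewhere.
    var sample = Math.sin(phase);
    phase += 2 * Math.PI * 440 / context.sampleRate;
    left[i] = sample;
    right[i] = sample;
  }
};

scriptNode.connect(context.destination); // the node must be connected for onaudioprocess to fire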
I am writing a simple MPEG-DASH streaming player using the HTML5 video element.
I am creating a MediaSource and attaching a SourceBuffer to it. Then I append DASH fragments into this SourceBuffer, and everything works fine.
Now, what I want to do is pre-fetch those segments dynamically, depending upon the current time of the media element.
While doing this I have a lot of doubts that are not answered by the MediaSource documentation.
Is it possible to know how much data the SourceBuffer can hold at a time? If I have a very large video and append all the fragments into the SourceBuffer, will it accommodate all of them, or will it cause errors or slow down my browser?
How do I compute the number of fragments in the SourceBuffer?
How do I compute the presentation time or end time of the last segment in the SourceBuffer?
How do we remove only a specific set of fragments from the SourceBuffer and replace them with segments of other resolutions? (I want to do this to support adaptive resolution switching at run time.)
Thanks.
The maximum amount of buffered data is an implementation detail and is not exposed to the developer in any way, AFAIK. According to the spec, when appending new data the browser will execute the coded frame eviction algorithm, which removes any buffered data it deems unnecessary. Browsers tend to remove parts of the stream that have already been played and not to remove parts that are in the future relative to the current time. This means that if the stream is very large and the DASH player downloads it very quickly - faster than it is played back - there will be a lot of the stream that cannot be removed by the coded frame eviction algorithm, and this may cause the appendBuffer method to throw a QuotaExceededError. Of course, a good DASH player should monitor the buffered amount and not download excessive amounts of data.
In plain terms: you have nothing to worry about, unless your player downloads the whole stream as quickly as possible without taking the currently buffered amount into consideration.
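For example, a simple sketch of that kind of check (the 30-second threshold and the fetchNextSegment() helper are made up for illustration):

const MAX_BUFFER_AHEAD = 30; // seconds of data to keep buffered ahead of playback

function maybeAppendNext(video, sourceBuffer) {
  const buffered = video.buffered;
  const bufferedAhead = buffered.length > 0
    ? buffered.end(buffered.length - 1) - video.currentTime
    : 0;

  if (bufferedAhead < MAX_BUFFER_AHEAD && !sourceBuffer.updating) {
    fetchNextSegment().then((segment) => {      // hypothetical helper returning an ArrayBuffer
      try {
        sourceBuffer.appendBuffer(segment);
      } catch (e) {
        if (e.name === 'QuotaExceededError') {
          // Back off: the browser could not evict enough old data.
        }
      }
    });
  }
}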
The MSE API works with a stream of data (audio or video); it has no knowledge of segments. Theoretically you could get the buffered time range and map it to a pair of segments using the timing data provided in the MPD, but this is fragile IMHO. It is better to keep track of the downloaded and appended segments yourself.
Look at the buffered property. The easiest way to get the end time in seconds of the last appended segments is simply: videoElement.buffered.end(0)
If by presentation time you mean the presentation timestamp of the last buffered frame, then there is no way of getting it apart from parsing the stream itself.
To remove buffered data you can use the remove method.
Quality switching is actually quite easy although the spec doesn't say much about it. To switch the qualities the only thing you have to do is append the init header for the new quality to the SourceBuffer. After that you can append the segments for the new quality as usual.
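For example (a sketch; fetching the init segment and waiting for updateend are simplified, and the helper names are placeholders):

// Switch to a new quality: append its init segment first, then continue
// with media segments of that quality.
async function switchQuality(sourceBuffer, newQuality) {
  const initSegment = await fetchInitSegment(newQuality);   // placeholder helper
  sourceBuffer.appendBuffer(initSegment);
  await waitForUpdateEnd(sourceBuffer);

  const nextSegment = await fetchMediaSegment(newQuality);  // placeholder helper
  sourceBuffer.appendBuffer(nextSegment);
}

function waitForUpdateEnd(sourceBuffer) {
  return new Promise((resolve) =>
    sourceBuffer.addEventListener('updateend', resolve, { once: true })
  );
}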
I personally find the YouTube DASH MSE test player a good place to learn.
The amount of data a SourceBuffer can hold depends on the MSE implementation and therefore on the browser vendor. Once you reach the maximum value, this will of course result in an error.
You cannot directly get the number of segments in SourceBuffer, but you can get the actual buffered time. In combination with the duration of the segments you are able to compute it.
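For example (assuming a constant segment duration taken from the MPD):

const buffered = videoElement.buffered;
const bufferedSeconds = buffered.length > 0
  ? buffered.end(buffered.length - 1) - buffered.start(0)
  : 0;
const approxSegmentCount = Math.round(bufferedSeconds / segmentDurationSeconds);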
I recommend having a look at open source DASH player projects like dashjs or ExoPlayer, which implement all of the functionality you want, or maybe even using a commercial solution like bitdash.
I have an mp3 file as a byte array. How do I turn it back into sound and play it using JavaScript?
Thanks
As far as I know this is decidedly non-trivial.
1. You can turn the array into a data URI, and then play it back normally.
2. You can post it back to a server to do the encoding and play it back normally.
3. You can use a fancy API.
Option 2 seems inefficient, and option 3 requires browser-specific support. So, use option 1. I haven't tried it, but check out http://www.bitsnbites.eu/?p=1. You should expect this to be far less efficient than native code.
This is just a follow-up on Philip JF's answer:
"1" will probably work fine without any of the tricky stuff explained on the bitsnbites link. Since mp3 files are without header, you can pass on the data to the URL "as is", without WAVE header. So the way to go (modified from the bitsnbites page):
Construct the string to be played as a DATA URI:
Initialize a string with "data:audio/mpeg;base64,"
Append the mp3 byte array, converted to a binary string and base64-encoded with the btoa() function.
Then you can invoke this Data URI in order to play it.
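A minimal sketch of those two steps, assuming the byte array is available as a Uint8Array called mp3Bytes:

// Convert the bytes to a binary string, base64-encode it with btoa(),
// and play the resulting Data URI.
var binary = '';
for (var i = 0; i < mp3Bytes.length; i++) {
  binary += String.fromCharCode(mp3Bytes[i]);
}
var dataUri = 'data:audio/mpeg;base64,' + btoa(binary);

var audio = new Audio(dataUri);
audio.play();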
References:
https://developer.mozilla.org/en/DOM/window.btoa
http://en.wikipedia.org/wiki/Data_URI_scheme