When using Web Audio, you can connect all sounds you create to one globally created gainNode and use that node to have a "Master Volume" property. This is very handy when you want to be able to change the master volume on the fly and want it to affect all sounds immediately.
Now, I am trying to accomplish the same, but for playbackRate. For reference: this would be for a web game where you can use a power-up to slow down time, which should also slow down all music and sounds.
Each sound I create is an AudioBufferSourceNode linked to a chain of processing nodes. Now, I know that the AudioBufferSourceNode itself has a playbackRate property you can change. This is great, but it'd require me to cache all the AudioBufferSourceNodes I create, loop over them, and change their playbackRate if I wanted to change a "global playbackRate" on the fly. It'd be perfect if I could accomplish this the same way as with the global gainNode, but I couldn't find a way to do that.
What would be the proper way to implement such a feature? Would you recommend caching all created AudioBufferSourceNodes (there can be thousands) and looping over them? That's the way I do this with HTML5 Audio, but it seems hacky for Web Audio, which is much more advanced.
If you want more information, please ask and I will update the question!
You can't directly do that. There are some source nodes that don't have playback rate controls - like live input. In this case, you're best off doing what you suggest - keeping a list of active sounds to loop through.
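If it helps, the bookkeeping for that can stay small. A minimal sketch, assuming a single master gain chain (the names activeSources, playSound, and setGlobalPlaybackRate are mine, not from any library):

const ctx = new AudioContext();
const masterGain = ctx.createGain();
masterGain.connect(ctx.destination);

const activeSources = new Set();
let globalRate = 1;

function playSound(buffer) {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.playbackRate.value = globalRate; // pick up the current global rate
  source.connect(masterGain);
  activeSources.add(source);
  source.onended = () => activeSources.delete(source); // prune finished sounds
  source.start();
  return source;
}

function setGlobalPlaybackRate(rate) {
  globalRate = rate;
  for (const source of activeSources) {
    source.playbackRate.value = rate;
  }
}

Because onended removes finished sources, the set only ever holds currently playing sounds, so "thousands" of created nodes never pile up.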
You could use a granular method to resample and pitch-bend it down - like the "pitch bend" code in my audio input effects demo (https://webaudiodemos.appspot.com/input/). That's a bit costly to keep around just in case you want to make the effect, though.
I've been trying to create an 88-key piano with the Web Audio API. The plan is to first run all 88 oscillators at the appropriate frequencies and then use the Oscillator.connect() and Oscillator.disconnect() methods on the respective oscillators as the piano keys are pressed and released. The state of the AudioContext will be "running" the whole time. Now, I have two questions here:
Is this the right way to do it?
I get a clicking noise at the start and end of the sounds when I play them. Why is this happening, and how do I get rid of it?
PS: The reason for creating a piano like this is to indulge myself in the delight of having created something from scratch. So using prerecorded sounds is not an option.
If you wanted to do it that way, add a gain node to each oscillator, and then turn the gain off and on, instead of disconnecting and reconnecting.
That's probably what's causing your clicks and snaps. More below.
BUT... that's still pretty overkill, having 88 oscillators. The standard way keyboards do this is with a limited polyphony.
Create an array of ten oscillators, all hooked to their own gain, each gain hooked to the destination.
Keep track of how many keys are being pressed, and how many oscillators are in use.
const keysPressed = {};

// on key down: claim a free voice for MIDI note 60
keysPressed["60"] = nextAvailableOsc();
At any given time there are ten oscillators ready to go, one for each finger. If for some reason you require more, add them dynamically.
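A sketch of that pool, implementing the nextAvailableOsc() helper above (the voices array and its shape are illustrative):

const ctx = new AudioContext();
const voices = [];

// Build ten oscillator+gain voices up front, one per finger.
for (let i = 0; i < 10; i++) {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  gain.gain.value = 0;          // silent until a key claims this voice
  osc.connect(gain).connect(ctx.destination);
  osc.start();                  // oscillators run forever; the gains gate them
  voices.push({ osc, gain, inUse: false });
}

function nextAvailableOsc() {
  return voices.find(v => !v.inUse) || null; // null when all ten are busy
}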
The clicking sound is because you're hard disconnecting and reconnecting running oscillators. Use a gain node in between the osc and destination, and turn that on and off.
Also, you might get clicks when hard-changing values, such as:
gainNode.gain.value = 0
That can create a glitch in the sound stream. It should be:
gainNode.gain.setValueAtTime(0, ctx.currentTime + 1)
The + 1 may not be necessary. There are also the setTargetAtTime, linearRampToValueAtTime, and exponentialRampToValueAtTime methods that make things even smoother:
https://developer.mozilla.org/en-US/docs/Web/API/AudioParam
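For instance, a short attack/release ramp instead of a hard set (the ~20 ms / ~50 ms times are just reasonable guesses):

function noteOn(gainNode) {
  const now = ctx.currentTime;
  gainNode.gain.cancelScheduledValues(now);
  gainNode.gain.setValueAtTime(0, now);                   // pin the start value
  gainNode.gain.linearRampToValueAtTime(1, now + 0.02);   // ~20 ms attack
}

function noteOff(gainNode) {
  const now = ctx.currentTime;
  gainNode.gain.cancelScheduledValues(now);
  gainNode.gain.setValueAtTime(gainNode.gain.value, now); // pin the current value
  gainNode.gain.linearRampToValueAtTime(0, now + 0.05);   // ~50 ms release
}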
An alternative approach is to create the oscillators on demand. When a key is pressed, create one or more oscillators (for harmonics). These can feed into one or more gain nodes whose automation handles the attack and sustain phases. When the key is released, automate the gain for a release phase and schedule the oscillator(s) to stop after the release phase has ended. Then drop all references to those oscillators.
I find this easier to reason about than having an array of oscillators, and there's no limit to the polyphony. But this approach generates more garbage that has to be handled eventually by the collector.
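A sketch of that on-demand version, with a single oscillator per key and arbitrary envelope times (no harmonics, for brevity):

function pressKey(frequency) {
  const now = ctx.currentTime;
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = frequency;
  osc.connect(gain).connect(ctx.destination);
  gain.gain.setValueAtTime(0, now);
  gain.gain.linearRampToValueAtTime(0.8, now + 0.02); // attack
  osc.start(now);
  return { osc, gain };
}

function releaseKey({ osc, gain }) {
  const now = ctx.currentTime;
  gain.gain.cancelScheduledValues(now);
  gain.gain.setValueAtTime(gain.gain.value, now);
  gain.gain.linearRampToValueAtTime(0, now + 0.1);    // release
  osc.stop(now + 0.1); // collected once stopped and dereferenced
}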
Our organization has the need for what amounts to a YouTube style annotation system. Essentially, what we need is the ability to overlay text/images over video at specific times.
I did my best to search for existing React components or even existing vanilla JS libs for a reference implementation, but came up empty. If anyone knows of any resources I may have missed, the rest of this post may not even be needed.
I need help with the strategy to render these overlay components at specific times in the video, and making sure that we stay synchronized with the video's time. Since we are already using Redux, my initial thought was to ramp up on RxJS and redux-observable, and create a stream/observable using a timeout scheduler to avoid some sort of polling strategy. I'd also be listening for play/pause/skip events from the video to cancel/restart the timeout scheduler.
I've never used RxJS before, so I wanted to get some feedback before starting to ramp up on knowledge and moving to implementation. Are there any inherent flaws in what I outlined above? Is there a different strategy that may work better?
Thanks guys!
TL;DR: I need help creating time-synced components overlaid on video.
It's not about React or RxJS frameworks, but about JavaScript. As soon as you have a JavaScript solution, it's possible to fit it into almost any framework. So the key question is: is there an available JavaScript solution?
Well, this question has already been answered. Check here: "Youtube type annotation in html5 videos".
I'm investigating the possibility of manipulating the beats per minute (BPM) of an HTML5 audio tag.
Natively the tag offers playbackRate. Unfortunately, the granularity is not fine enough.
Let's, for example, assume a modern pop song, and let's say the current BPM value is 128.
I'd love to be able to slow down the track by 1 BPM, or probably even 0.1 BPM, or even better 0.05 BPM (which is pretty normal today for almost all audio software).
Also, consider that there are two ways to change BPM: one by maintaining the key and the other by not maintaining it.
In the first case, you speed up the track but the pitch doesn't change; in the second case, it does.
I was wondering if anyone out there has been working on this, manipulating the byte data directly.
I liked both of these statements
"According to WebAudio specification (http://www.w3.org/TR/webaudio/) you CAN" AND
"When I found out that IE doesn't support webaudio, it made it pointless."
at:
http://www.html5gamedevs.com/topic/6255-can-you-change-audio-pitch-at-runtime/
Anyway, here's a nice demo to 'steal with pride':
https://github.com/urtzurd/html-audio
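For the non-key-preserving case, the math is just a ratio, and Web Audio's playbackRate is a double, so 0.05 BPM steps are no problem. A sketch (playAtBpm is my name; keeping the key would additionally need a time-stretching algorithm such as the granular approach in the demo above):

const ctx = new AudioContext();

function playAtBpm(buffer, sourceBpm, targetBpm) {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.playbackRate.value = targetBpm / sourceBpm; // e.g. 127.9 / 128 ≈ 0.99922
  source.connect(ctx.destination);
  source.start();
  return source; // note: pitch shifts along with tempo
}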
I'm currently helping a friend develop a web application in which I need ~6 audio tracks (all using the same time signature) to continuously loop and stay in sync. To give context, it is a typeface-music pairing application where as soon as a typeface is chosen, the associated audio loop starts playing and as the user keeps picking typefaces, the tracks layer and begin to resemble a song.
I've tried using SoundJS and the Buzz sound library, but I keep running into the same problem: there is always a slight delay between loops. This would be fine if all my audio tracks were the same length, but they aren't, so very quickly things go out of sync.
This seems to be a known problem, but I can't seem to find any answer to how to fix it. I came across Hivenfour's SeamlessLoop 2.0, but - unless I'm using it completely wrong - it doesn't actually seem to work (setting a volume returns an error).
If anyone has experience with this, I would truly appreciate any input! Thanks :)
SoundJS's WebAudioPlugin uses a look-ahead approach with Web Audio that will loop seamlessly, which is described here in what will probably be a very helpful article on audio timing.
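If you do end up on raw Web Audio yourself, the gapless loop is tiny; a sketch, assuming all tracks are decoded AudioBuffers started against the same clock:

const ctx = new AudioContext();

function loopBuffer(buffer, when) {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true;               // sample-accurate, no gap at the loop point
  source.connect(ctx.destination);
  source.start(when);               // start every track at the same 'when'
  return source;
}

const startAt = ctx.currentTime + 0.1;
// tracks.forEach(buffer => loopBuffer(buffer, startAt));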
Also be aware that some compression formats pad the start of sounds with a brief gap of silence, which breaks seamless looping. I believe MP3 does this. WAV is broadly supported and does not.
As for HTMLAudioPlugin, we loop as smoothly as the browser will allow, but it does not have the same precision as Web Audio.
Hope that helps.
Is there any way to have two or more (preferably three) HTML5 <video> tags playing simultaneously and in perfect sync?
Say I have three tiles of one video and I want them to appear in the browser as one big video. They need to be perfectly synchronized, without even the smallest visual/vertical hint that they are tiled.
Unfortunately I cannot use MediaController because it is not supported well enough.
I've tried some workarounds, including canvases, but I still get visual differentiation. Has anyone had a similar problem/solution?
Disclaimer: I'm not a video guy, but here are some thoughts anyway.
If they need to be absolutely perfect... you are fighting several problems at once:
A device might not be powerful enough to acquire, synchronize and render 3 streams at once.
Even if #1 is solved, a device is never totally dedicated to your task. For example, it might pause for garbage collection between processing stream #1 and stream #2, resulting in dropped/unsynchronized frames.
So to give yourself the best chance at perfection, you should first merge your 3 videos into 1 vertical video in the studio (or using studio software).
Then you can use the extended clipping properties of canvas context.drawImage to break each single frame into 2-3 separate frames.
Additionally, buffer a few frames you acquire on the stream (this goes without saying!).
Use requestAnimationFrame (RAF) to control the drawing. RAF does a fairly good job of drawing frames when system resources are available and delaying frames when system resources are lacking.
Your result won't be perfect, but they will be synchronized. You will always have to make the decision whether to drop or delay frames when system resources are unavailable, but at least the frames you do present will be synchronized.
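A sketch of that clipping step, assuming the three views were merged vertically into one source video and there are three stacked canvases:

const video = document.querySelector('video');         // the pre-merged video
const canvases = document.querySelectorAll('canvas');  // three display tiles

function draw() {
  const tileH = video.videoHeight / 3;
  canvases.forEach((canvas, i) => {
    const g = canvas.getContext('2d');
    // Source rect picks tile i out of the frame; destination rect fills the canvas.
    g.drawImage(video, 0, i * tileH, video.videoWidth, tileH,
                0, 0, canvas.width, canvas.height);
  });
  requestAnimationFrame(draw);
}
video.addEventListener('play', () => requestAnimationFrame(draw));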
As far as I know, it's currently impossible to play HTML5 video frame by frame or to seek to a frame-accurate time code. The nearest seek seems to be precise to roughly one second.
But you can still get pretty close using some of the media frameworks:
Popcorn.js, a library made for synchronizing video with content.
mediagroup.js, another library used to add support for the mediagroup attribute on HTML5 media elements.
The only feature that allowed this was named mediaGroup, and it was removed from Chrome (apparently for not being popular enough). It's still present in WebKit. Relevant discussion here and here.
I think you can implement your own "mediagroup"-like tag using wasm, though without DOM support it may be tricky.
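Short of wasm, a DOM-only approximation is to elect one element as the master clock and chase it; a sketch (the 50 ms drift threshold is a guess to tune):

const [master, ...others] = document.querySelectorAll('video');
const MAX_DRIFT = 0.05; // seconds

function keepInSync() {
  for (const v of others) {
    if (Math.abs(v.currentTime - master.currentTime) > MAX_DRIFT) {
      v.currentTime = master.currentTime; // hard resync on drift
    }
  }
  requestAnimationFrame(keepInSync);
}

master.addEventListener('play', () => others.forEach(v => v.play()));
master.addEventListener('pause', () => others.forEach(v => v.pause()));
requestAnimationFrame(keepInSync);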