I'm investigating the possibility of manipulating the beats per minute (BPM) of an HTML5 audio tag.
Natively the tag offers the playbackRate property. Unfortunately, the granularity is not fine enough.
Let's take a modern pop song as an example, and say its current BPM is 128.
I'd love to be able to slow down the track by 1 BPM, or probably even 0.1 BPM, or better yet 0.05 BPM (which is pretty normal today for almost all audio software).
Also, consider that there are two ways to change BPM: one maintains the key and the other does not.
In the first case you speed up the track but the pitch doesn't change; in the second case it does.
I was wondering if anyone out there has been working on this, manipulating the byte data directly.
I liked both of these statements:
"According to WebAudio specification (http://www.w3.org/TR/webaudio/) you CAN" and
"When I found out that IE doesn't support webaudio, it made it pointless."
from http://www.html5gamedevs.com/topic/6255-can-you-change-audio-pitch-at-runtime/
Anyway, here's a nice demo to 'steal with pride':
https://github.com/urtzurd/html-audio
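For what it's worth, playbackRate already accepts arbitrary floating-point values, so sub-BPM steps are expressible, and in current browsers the preservesPitch property (formerly prefixed as mozPreservesPitch/webkitPreservesPitch) switches between the two modes described above. A minimal sketch, assuming an <audio> element with id "track" holding the 128 BPM song:

var audio = document.getElementById('track');

// Slow the track from 128 BPM down to 127.95 BPM (a 0.05 BPM step):
audio.playbackRate = 127.95 / 128; // ~0.99961; any float is accepted

// Choose between the two modes:
audio.preservesPitch = true;   // time-stretch: tempo changes, key is kept
// audio.preservesPitch = false; // resample: tempo and pitch change together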
We are creating an app that lets a user capture a number of images, from which it will try to create a 3D model of the target object. To help users capture useful images, we give them some guidance while they move their phone from one capture to the next.
We have a nice prototype working by means of navigator.mediaDevices.getUserMedia() that captures video, displays it in a <video> element, and has an overlay that shows how to move the phone. When they are ready they press a button and we grab the current frame of the streamed video.
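For context, the frame grab is essentially the usual canvas trick; roughly this, with handleCapture standing in for our own upload callback:

var video = document.querySelector('video');
var canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0); // copy the current frame
canvas.toBlob(handleCapture, 'image/jpeg', 0.95);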
We were quite happy with this until we realized that very often the captured image would not have enough quality; mainly they tend to be a bit blurred because the user may not hold the device totally still. This causes the math behind creating the 3D model to fail.
I am now tasked with attempting to improve this, but I think I don't have many options. Here is what I have been investigating, and the drawbacks of each:
JavaScript's ImageCapture API. This seems to be exactly what we need: a way to actually take a picture instead of grabbing a frame from a video. While the API still has experimental status, it seems pretty stable, and Chrome has implemented it since version 59. The problem is that Safari (our main target) has not implemented it, and it seems they never will. I can't really find information on what their plan is, but as of today this is not an option. (A sketch of the call is shown after this list.)
Use an input element of type file with the capture attribute. While this lets me capture images with the native camera, as far as I know I cannot give the user any guidance.
Create a whole native mobile app. This requires another year of work and asking our existing users to install an app, which may not be possible. It would also leave Android devices out, which we'd prefer to avoid.
While typing this I thought of perhaps using the video itself instead of captured images, but I'm not sure that would help in any way.
Instead of a different approach to capturing the image, I could try to only grab the image once I can confirm the device is as close to still as possible (using a threshold value). Perhaps I could use the gyroscope for this (we already use it to check that they have moved the device to a place and angle we consider useful for the process). The drawback is that I'm not sure it would really mitigate our problem... How still is still enough? Is it even possible for a person to be that still for a second?
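To make two of these options concrete: first, the ImageCapture call mentioned above, roughly as it works where supported (a sketch, not production code):

async function takeStill() {
    var stream = await navigator.mediaDevices.getUserMedia({ video: true });
    var track = stream.getVideoTracks()[0];
    var imageCapture = new ImageCapture(track);
    // takePhoto() returns a full-resolution still, not a grabbed video frame
    return imageCapture.takePhoto();
}

And second, the kind of stillness gate I have in mind for the last option; the threshold and window values are pure guesses that would need tuning (and note that iOS 13+ requires DeviceMotionEvent.requestPermission() before devicemotion events fire):

var STILL_THRESHOLD = 0.15; // m/s^2, to be tuned experimentally
var STILL_WINDOW_MS = 300;  // how long the device must stay below the threshold
var stillSince = null;

window.addEventListener('devicemotion', function (e) {
    var a = e.acceleration; // gravity-free acceleration
    if (!a) return;
    var magnitude = Math.sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    if (magnitude < STILL_THRESHOLD) {
        if (stillSince === null) stillSince = e.timeStamp;
    } else {
        stillSince = null;
    }
});

function deviceIsStill(now) {
    return stillSince !== null && (now - stillSince) >= STILL_WINDOW_MS;
}

// In the capture handler: only grab the frame if deviceIsStill(performance.now())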
So my question here is: can anyone think of another alternative to those I described, or perhaps an improvement to one of the enumerated ones?
By the way, does anyone know what Apple's plans are for the ImageCapture API?
When using Web Audio, you can connect all sounds you create to one globally created gainNode and use that node to have a "Master Volume" property. This is very handy when you want to be able to change the master volume on the fly and want it to affect all sounds immediately.
Now, I am trying to accomplish the same, but for playbackRate. For reference: this would be for a web game where you can use a power-up to slow down time, which should also slow down all music and sounds.
Each sound I create is an AudioBufferSourceNode linked to a chain of processing nodes. Now, I know that the AudioBufferSourceNode itself has a playbackRate property you can change. This is great, but it would require me to cache all the AudioBufferSourceNodes I create, loop over them, and change each playbackRate whenever I wanted to change a "global playbackRate" on the fly. It would be perfect if I could accomplish this the same way as with the global gainNode, but I couldn't find a way to do that.
What would be the proper way to implement such a feature? Would you recommend caching all AudioBufferSourceNodes created (can be thousands) and loop over them? That's the way I do this with HTML5 Audio, but it seems hacky for Web Audio, which is much more advanced.
If you want more information, please ask and I will update the question!
You can't do that directly. There are some source nodes that don't have playback rate controls - like live input. In this case, you're best off doing what you suggest: keeping a list of active sounds to loop through.
You could use a granular method to resample and pitch-bend it down - like the "pitch bend" code in my audio input effects demo (https://webaudiodemos.appspot.com/input/). That's a bit costly to keep around just in case you want to make the effect, though.
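A sketch of that list-keeping approach, using onended to prune finished sources so the collection doesn't grow unboundedly (playBuffer and setGlobalPlaybackRate are illustrative names, not part of the API):

var activeSources = new Set();

function playBuffer(ctx, buffer, destination, rate) {
    var src = ctx.createBufferSource();
    src.buffer = buffer;
    src.playbackRate.value = rate;
    src.connect(destination);
    activeSources.add(src);
    src.onended = function () { activeSources.delete(src); }; // drop finished sounds
    src.start();
    return src;
}

function setGlobalPlaybackRate(rate) {
    activeSources.forEach(function (src) {
        src.playbackRate.value = rate;
    });
}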
How can I control the rendering loop frame rate in KineticJS? The docs for Kinetic.Animation show a frame rate being passed to the render callback, and Kinetic.Tween seems to have no frame rate logic, but I don't see any way to force, say, 30fps when 60fps is possible.
Loads of context for the curious follows, but the question is that simple. If anyone reads on, other advice is welcome. If you already know the answer, don't waste your time reading on!
I'm developing a music app that combines some DOM-based GUI controls (current iteration using jQuery Mobile) and Canvas-based GUI controls (using KineticJS). The latter involve some animation. Because the animated elements are triggered by music playback, I'm using Kinetic.Tween to avoid the complexity of remembering how long a given note has been playing (which Kinetic.Animation would require doing).
This approach works great at 60fps in Chrome (on a fast machine) but is just slow enough on iOS 6.1 Safari (iPad 2) that manipulating controls while animations are happening gets a little janky. I'm not using WebGL (unless KineticJS or Chrome does this by default for canvas?), and that's not an option when I package for native UIWebView.
As I'm getting beyond prototype into wanting to make more committed tech decisions, I see the following options, in order of perceived goodness:
Figure out how to cap the frame rate. Because my animations heavily use alpha fades but do not involve motion, I believe I could get away with 20-30fps and look fine. Could also scale this up on faster devices.
Don't respond immediately to touch inputs, but add them to a queue which I poll at a constant interval and only use the freshest for things like touchmove. This has no impact on my non-interactive animated elements, but tackles the problem from the other direction, trying to reduce the load of user interaction. This would require making Kinetic controls static and manually tracking touch coordinates (not terrible effort if it actually helped).
Rewrite DOM-based GUI to canvas-based (KineticJS); rewrite WebAudio-based engine to HTML5 audio; leverage CocoonJS or Ejecta for GPU-acceleration. This means having to hand-code stuff like file choosers and nav menus and such (bad). Losing WebAudio is pretty serious as it eliminates features like DSP effects and very fine-grained, low-latency timing (which is working just fine on an iPad 2).
Rewrite the app to separate DOM based GUI and WebAudio from Canvas-based elements, leverage CocoonJS. I'm not sure if/how well this works out, but the fact that CocoonJS passes JavaScript code as strings between the 2 components makes me very skittish about how solid this idea is. It's probably doable, but best case I'm very tied to CocoonJS moving forwards. I don't like architecting this way, but maybe it's not as bad as it sounds?
Make animations less juicy. This is least good not because of its design impact but because, as it is, I'm only animating ~20 simple shapes at any time in my central view component, however they include transparency and span an area ~1000x300. Other components like sliders are similarly bare-bones. In other words, it's not very juicy right now.
Overcome severe allergy to Objective-C; forget about the browser, Android, and that other mobile OS. Have a fast app that performs natively and has shiny Apple-approved widgets. My biggest problem with this approach is not wanting to be stuck in Objective-C reality for years, skillset-wise. I just don't like it.
Buy an iPad 3 or later. Since I already am pretending Android doesn't exist (I don't have any devices to test), why not pretend no one still has iPad 2? I think this is passing the buck -- if I can get acceptable performance on iPad 2, I will feel confident about the app's performance as I add more features.
I may be overlooking options or otherwise being naive about how to tackle this. Some would say what I'm trying to build is just silly. But it's working pretty well; it's just not ready for prime time on the iPad 2.
Yes, you can control the Kinetic.Animation frame rate.
The Kinetic.Animation callback receives a frame object which has a frame.time property.
That .time is a running timer that you can use to throttle your animation speed.
Here's an example that throttles the Kinetic.Animation: http://jsfiddle.net/m1erickson/Hn3cC/
var lastTime;
var frameDelay = 1000; // minimum ms between animation updates

var loop = new Kinetic.Animation(function(frame) {
    var time = frame.time;
    // initialize lastTime on the first tick
    if (!lastTime) { lastTime = time; }
    var elapsed = time - lastTime;
    if (elapsed >= frameDelay) {
        // frameDelay has expired, so animate stuff now
        // set lastTime for the next loop
        lastTime = time;
    }
}, layer);

loop.start();
Working from @markE's suggestions, I tried a few things and found a solution. It's ultimately not rocket science, but here's what I figured out:
First, I tried the hack of doubling Tween durations and targets, using a timer to stop them at 50%. This kinda sorta worked but was hard to make look good, and coding bogus targets like negative opacity or height was pretty error-prone.
Second, having read the source of Tween and looked at the docs again for Animation, I decided I could locally use Animation instances instead of Tween instances and let the closure scope hang onto the relevant note properties. I eventually got this working smoothly and finally realized a big "duh!": throttling the frame rate of several independently animating things does not in any way throttle the overall frame rate.
Lastly, I decided to give my component a render() method that calls itself in a loop with requestAnimationFrame and exits immediately if called before my clamp time; inside render() I update all objects in the Kinetic canvas and call layer.drawScene(). Because there is now only one animation, this drops the frame rate to whatever I need, and the app is fast on the iPad 2 (it looks exactly the same to my eyes too).
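For reference, the loop looks roughly like this (frameDelay and updateAllObjects are my own names, not KineticJS API):

var frameDelay = 1000 / 30; // clamp to ~30fps
var lastDraw = 0;

function render(now) {
    requestAnimationFrame(render);
    if (now - lastDraw < frameDelay) return; // called before the clamp time: skip
    lastDraw = now;
    updateAllObjects();  // set position/opacity on each Kinetic shape
    layer.drawScene();   // one draw call for the whole layer
}

requestAnimationFrame(render);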
So Kinetic is still earning its keep for its higher-level canvas API, and my other control widgets remain easy to code with Kinetic for user input and dragging; they now perform much better since the big beast of a component is no longer eating up the CPU.
The short answer to my original question is no: you can't lock the overall frame rate for very complex animations, but as Mark said, you can for anything that fits in a single Animation instance.
Note that I could have still used Animation without giving it a layer or explicitly calling any draw() methods, but since I'd still have to write all the logic to determine an individual element's current visual state, there was no gain in doing this. What would be very useful would be if Tween could accept a parameter to not render automatically. This would simplify code like mine, as I could shorthand the animation on individual objects but still choose when to actually do the heavy lifting of rendering everything. Seeing how much this whole exercise gained in performance on the iPad 2, it might be worth adding this option to the framework.
Is there any way to have two or more (preferably three) HTML5 <video> tags playing simultaneously and in perfect sync?
Say I have three tiles of one video and I want them to appear in the browser as one big video. They need to be perfectly synchronized, without even the smallest visual/vertical hint that they are tiled.
Unfortunately I cannot use MediaController because it is not supported well enough.
I've tried some workarounds, including canvases, but I still get visible differences between the tiles. Has anyone had a similar problem/solution?
Disclaimer: I'm not a video guy, but here are some thoughts anyway.
If they need to be absolutely perfect...you are fighting several problems at once:
A device might not be powerful enough to acquire, synchronize and render 3 streams at once.
Even if #1 is solved, a device is never totally dedicated to your task. For example, it might pause for garbage collection between processing stream #1 and stream #2, resulting in dropped/unsynchronized frames.
So to give yourself the best chance at perfection, you should first merge your 3 videos into 1 vertical video in the studio (or using studio software).
Then you can use the extended clipping properties of canvas context.drawImage to break each single frame into 2-3 separate frames.
Additionally, buffer a few of the frames you acquire from the stream (this goes without saying!).
Use requestAnimationFrame (RAF) to control the drawing. RAF does a fairly good job of drawing frames when system resources are available and delaying frames when system resources are lacking.
Your result won't be perfect, but the frames will be synchronized. You will always have to decide whether to drop or delay frames when system resources are unavailable, but at least the frames you do present will be synchronized.
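To illustrate the drawImage clipping step: given one merged vertical video, each tile canvas can draw its own horizontal band of the frame. A sketch, where the element ids are assumptions:

var video = document.getElementById('mergedVideo');
var contexts = ['tile1', 'tile2', 'tile3'].map(function (id) {
    return document.getElementById(id).getContext('2d');
});

function draw() {
    var w = video.videoWidth;
    var h = video.videoHeight / 3; // one band per tile
    contexts.forEach(function (ctx, i) {
        // drawImage(source, sx, sy, sw, sh, dx, dy, dw, dh)
        ctx.drawImage(video, 0, i * h, w, h, 0, 0, w, h);
    });
    requestAnimationFrame(draw);
}
requestAnimationFrame(draw);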
As far as I know it's currently impossible to play HTML5 video frame-by-frame, or to seek to a frame-accurate time code. The nearest seek seems to be precise to roughly one second.
But you can still get pretty close using some of the media frameworks:
Popcorn.js, a library made for synchronizing video with content.
mediagroup.js, another library that adds support for the mediagroup attribute on HTML5 media elements.
The only feature that allowed this was named mediaGroup, and it was removed from Chrome (apparently for not being popular enough). It's still present in WebKit. Relevant discussion here and here.
I think you can implement your own "mediagroup"-like tag using wasm, though without DOM support it may be tricky.
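Even without wasm, a crude JavaScript substitute is to elect one video as the master and periodically pull the others back to its currentTime. A sketch; the element ids and the drift tolerance are assumptions:

var master = document.getElementById('v0');
var slaves = [document.getElementById('v1'), document.getElementById('v2')];
var DRIFT_TOLERANCE = 0.05; // seconds

setInterval(function () {
    slaves.forEach(function (v) {
        var drift = v.currentTime - master.currentTime;
        if (Math.abs(drift) > DRIFT_TOLERANCE) {
            v.currentTime = master.currentTime; // hard resync on large drift
        }
    });
}, 250);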
As a challenge, I'm thinking I should write a JavaScript-based game. I want sound, images, and input. A background to simulate a screen (like 640x480 with all my images in it) would be useful to separate the rest of the page from the 'game'. What should I look at?
Some things I would need:
Frame control. A way to get the current time (or delta).
Images. Displaying one and moving it. How do I display a full image? Knowing about pixel access may be cool.
Input. A way to lock it in a box (like Flash does) is cool.
Sound. Play simple sounds on demand (like when I get hit). Several sounds at once would be great.
Bottlenecks. What are the things that will kill the CPU?
Restrictions. What can't I do? I hear I can't 'sleep' to wait; I must set a callback.
Good or best practice. What are good things I can do to either keep speed up or to reduce glitches and compatibility problems?
I'm going to answer this from a MooTools JavaScript perspective:
Frame control. A way to get the current time (or delta).
periodical()
Images. Displaying one and moving it. How do I display a full image?
setStyles()
Input. A way to lock it in a box (like Flash does) is cool.
Plain old CSS
Sound. Play simple sounds on demand (like when I get hit).
Swiff, remote();
Bottlenecks. What are the things that will kill the CPU?
Internet Explorer.
Restrictions.
3D ... ?
What are good things I can do ... to reduce glitches or compatibility problems?
Use a framework.
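To make the periodical() suggestion concrete, a minimal MooTools-flavoured game loop might look like this; update and draw are your own game functions, and the 33ms interval targets roughly 30fps:

var lastTime = Date.now();

function tick() {
    var now = Date.now();
    var delta = now - lastTime; // ms since the previous frame
    lastTime = now;
    update(delta); // advance the game state
    draw();        // repaint the scene
}

var timerId = tick.periodical(33); // MooTools' Function.periodical wraps setInterval
// later: clearInterval(timerId) to stop the loop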
As a starting point, you may want to write it for Opera, as Opera provides a game canvas that will help you out.
For some examples of games in javascript:
http://dev.opera.com/articles/view/3d-games-with-canvas-and-raycasting-part/
http://my.opera.com/WebApplications/blog/show.dml/200788
This one is interesting; it has demos of games using the canvas element:
http://www.canvasdemos.com/tag/games/
The best way to see where the problems are is to start writing the game; then you will see what may be a problem. By looking at demos you can get an idea of the performance issues they encountered. For example, a full 3D Doom game will have problems, but, as the first article above explains, there are some ways to optimize for JavaScript.
Once you get it working in Opera, you can look at Firefox 3.5+, Safari, and Chrome, and see if you can make changes to have it work on those. How many platforms it works on depends on how much work you want to put into it. For a proof of concept, pick the easiest browser for your task.
The best place to start would be to get very familiar with the <canvas> tag (it allows you to draw anything on screen).
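A bare-bones starting point, sketching the 640x480 "screen" the question mentions (the sprite path is a placeholder):

var canvas = document.createElement('canvas');
canvas.width = 640;
canvas.height = 480;
document.body.appendChild(canvas);

var ctx = canvas.getContext('2d');
ctx.fillStyle = '#000';
ctx.fillRect(0, 0, canvas.width, canvas.height); // the black 640x480 "screen"

var sprite = new Image();
sprite.src = 'player.png'; // placeholder asset
sprite.onload = function () {
    ctx.drawImage(sprite, 100, 100); // draw the full image at (100, 100)
};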
This may help a lot:
http://benfirshman.com/projects/jsnes/
It's an online NES emulator that renders everything on screen; the source is also available.
Hope that helps =)