How to make web app like "Epic Sax Gandalf" using JavaScript? - javascript

I want to create an application that when launched on different devices will display the same content (music, photo or video) at the same time.
Here is a simple example.
And a real-life example. :)
My first idea was based on machine local time.
timestamp = new Date().getTime()
(last four digits of timestamp === "0000", i.e. timestamp % 10000 === 0) => play music and animation
music = 10s,
animation = 10s
and every 10 seconds, start this function.
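A minimal sketch of that idea (the helper names are mine, not from the question): start playback whenever the local clock crosses a 10-second boundary.

```javascript
const PERIOD_MS = 10_000; // music and animation both last 10 s

// Milliseconds until the next 10-second boundary of the local clock.
function msUntilNextBoundary(nowMs) {
  return (PERIOD_MS - (nowMs % PERIOD_MS)) % PERIOD_MS;
}

// Schedule playback to (re)start on every boundary.
function scheduleLoop(play) {
  setTimeout(() => {
    play();                       // start music + animation together
    setInterval(play, PERIOD_MS); // and again every 10 s
  }, msUntilNextBoundary(Date.now()));
}
```

As the question notes, this only lines devices up as well as their system clocks are lined up, which is why the answer below adds a clock-synchronization step.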
I know, however, that this solution may not work and the content will still be unsynchronized.
So, does anyone know how to achieve the effect I'm talking about using javascript?

I actually had the same idea as you had and implemented a little proof of concept for it.
You can try it out like this:
Open the deployed site (either in two browsers/windows or even on different devices)
Choose the same unique channel-id for all your opened sites
Click "Start"
One window should have a heading with "Leader". The others should be "Follower"s. By clicking on the Video-id field you can paste the video id of any youtube video (or choose from a small list of recommended ones).
Click "Play" on each window
Wait a bit - it can take up to 1 minute until the videos synchronize
Each follower has a "System time offset". On the same device it should be nearly 0ms. This is the amount by which the system time (Date.now()) in that browser differs from the system time in the Leader window.
On the top left of each video you can see a time that changes every few seconds and should be under 20ms (after the videos are synchronized). This is the amount by which the video feed differs from its optimal position relative to the system time.
(I would love to know whether it works for you too. My Pusher deployment is EU-based, so problems with the increased latency could occur...)
How does it work?
The synchronisation happens in two steps:
Step 1.: Synchronizing the system times
I basically implemented the NTP (Network Time Protocol) algorithm in JS, using websockets or Pusher JS as the channel of communication between each Follower client and the Leader client. Look under "Clock synchronization algorithm" in the Wikipedia article for more information.
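The core of that clock-sync step is the NTP offset calculation; a sketch follows, with the message transport (WebSocket/Pusher) elided and the variable names my own:

```javascript
// NTP-style offset estimation between a Follower and the Leader.
// t0: follower send time, t1: leader receive time,
// t2: leader reply time,  t3: follower receive time (all in ms).
function ntpOffset(t0, t1, t2, t3) {
  // Assumes roughly symmetric network delay, like classic NTP.
  return ((t1 - t0) + (t2 - t3)) / 2;
}

function roundTripDelay(t0, t1, t2, t3) {
  return (t3 - t0) - (t2 - t1);
}

// The follower's synchronized clock is then:
const syncedNow = (offset) => Date.now() + offset;
```

In practice you would repeat the exchange several times and keep the sample with the smallest round-trip delay, since it gives the most trustworthy offset.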
Step 2.: Synchronizing the video feed to the "reference time"
At the currentTime (= synchronized system time) we want the currentVideoTime to be at currentTime % videoLength. Because the currentTime (system time) has been synchronized between the clients in Step 1, and the videoLength is obviously the same in all clients (they are supposed to play the same video), the currentVideoTime is the same too.
The big problem is that if I just started the video at the correct time on all clients (via setTimeout()), they probably still wouldn't play in sync: one system might have network problems and still be buffering, or another program might be demanding the OS's processing power at that moment. The time between calling the video player's start function and the video actually starting also differs from device to device.
I'm solving this by checking every second whether the video is at the right position (= currentTime % videoLength). If the difference from the right position is bigger than 20ms, I stop the video, skip it to the position where it should be 5s from now (plus the amount it was late before) and start it again.
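That correction loop might look like the following sketch (the 20 ms tolerance and 5 s restart delay come from the answer; `syncedNowMs` stands for the synchronized clock from Step 1, and the function names are mine):

```javascript
const TOLERANCE_S = 0.02;  // 20 ms
const RESTART_DELAY_S = 5; // seek 5 s ahead, then resume exactly on time

// Called once per second. `video` is the player element; `syncedNowMs()`
// returns the synchronized system time in milliseconds.
function correctDrift(video, syncedNowMs) {
  const target = (syncedNowMs() / 1000) % video.duration;
  if (Math.abs(video.currentTime - target) > TOLERANCE_S) {
    video.pause();
    // Seek to where the video should be RESTART_DELAY_S from now...
    video.currentTime = (target + RESTART_DELAY_S) % video.duration;
    // ...and resume exactly then, absorbing seek/buffer latency.
    setTimeout(() => video.play(), RESTART_DELAY_S * 1000);
  }
}
// setInterval(() => correctDrift(video, syncedNowMs), 1000);
```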
The code is a bit more sophisticated (and complicated) but this is the general idea.
sync-difference-estimator
synchronized-player


How to prevent Javascript from suspending when the user tabs out?

I'm developing a web game which uses a fixed physics timestep.
Currently I am doing all my physics and rendering in a requestAnimationFrame function.
This works perfectly, but requestAnimationFrame suspends and stops running when the user tabs out (to save processor performance I suppose).
As a result, if you tab out for like 30 minutes and then come back, the game has to simulate 30 minutes of physics in order to get back to the right spot.
This is very bad, as it causes the tab to freeze while the CPU cranks to 100% for the next 3 minutes while it runs all those physics calculations.
I think the only solution is to make the physics run while the tab is inactive, but I'm not sure how. I suppose I should only have the rendering in requestAnimationFrame, because it doesn't matter if that gets missed. But the physics need to be maintained.
Apparently setInterval also suspends when the user tabs out.
So, how should this be handled? I need a way to run physics every 16.6667 milliseconds regardless of whether the user is tabbed in or not. Is this possible? I've heard about WebWorkers but is that the actual solution? To put the entire game inside a WebWorker? What about when I want to change the DOM and things like that? Seems like a bit much.
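One commonly suggested direction, sketched below as an untested assumption rather than a confirmed fix: drive the fixed timestep from a timer inside a Web Worker (worker timers are generally not throttled as aggressively as background-tab timers), and keep only the rendering in requestAnimationFrame. The function name is mine.

```javascript
const STEP_MS = 1000 / 60; // ≈16.667 ms fixed timestep

// Spawn an inline worker that posts a tick at the fixed timestep.
// stepPhysics(dtSeconds) is your existing physics update on the main
// thread, so it can still touch the DOM.
function startBackgroundPhysics(stepPhysics) {
  const src = `setInterval(() => postMessage(0), ${STEP_MS});`;
  const url = URL.createObjectURL(new Blob([src], { type: 'text/javascript' }));
  const worker = new Worker(url);
  worker.onmessage = () => stepPhysics(STEP_MS / 1000);
  return worker; // call worker.terminate() to stop
}

// Rendering stays on requestAnimationFrame and simply pauses while the
// tab is hidden; the physics keeps ticking via the worker.
```

Browsers may still clamp worker timers somewhat, so a robust game would additionally clamp or batch the elapsed time on the `visibilitychange` event.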

How to determine if video frame is ready for composition?

I've found some similar question on Stackoverflow but it does not really address my problem.
I'm playing multiple videos into a WebGL texture, one after another. It's based on user input, something like a web-based VJ tool.
Copying is done easily, and I have an internal clock that is synced to the same fps as the video I'm playing (e.g. 30fps), so frames are updated correctly. Exactly like one of the answers offered in the above-mentioned question. All that works well.
Texture is updated with:
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, gl.RGBA, gl.UNSIGNED_BYTE, video);
My problem is how to detect when the very first frame is available for composition. Video does not start playback immediately in a real environment (e.g. average 4G, video on a CDN), but sometimes takes 1s or more to buffer sufficient data.
If I attempt to start updating texture prior the first frame is available, I get WebGL error thrown:
WebGL: INVALID_VALUE: texSubImage2D: no video
I'm aware of the video.load() method that can be called in advance (e.g. on user interaction); however, I have ~50 video files that I need to play (in unknown order, as it depends on user input), and older phones (like the iPhone 7) show a major performance drop when I do that, to the point that Safari sometimes crashes.
I'm looking for a reliable way to determine when the video starts actual playback. Events such as onplay don't seem to fire when the first frame is available, but much earlier.
I have also tried the ontimeupdate event, but it does not fire when the first frame is available for composition either; like onplay, it fires earlier. I can see the event fire, and I start updating the texture the first time it does, but the updates generate WebGL errors (for about 0.5-1s, depending on network speed) until the video actually shows its first frame. Once buffered, no errors are thrown.
This issue is more visible on 4G/mobile networks. I also tested in Chrome with throttled speed: un-throttled, I get 1-4 WebGL warnings before the first frame shows, while throttling to e.g. 12mbps gives me 100-200 warnings before a video frame is presented.
I've seen requestVideoFrameCallback but it doesn't have coverage I need (iOS Safari not even planned anytime soon).
I'm trying to avoid updating texture if video frame is not ready for composition, but can't find a reliable way to determine when it is ready.
Any help is highly appreciated!
Alright, I have found a solution: listening for the playing event.
I was listening for play and timeupdate, and didn't realize playing behaves so differently.
It does fire after the first frame is available for composition, and I no longer get those WebGL errors. Tested on an Android 10 device as well as an iPhone 7 on iOS 14.5.
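In code, the fix amounts to gating the per-frame texture upload on the playing event; a sketch, with helper names of my own:

```javascript
// 'playing' fires once the first frame is actually available for
// composition; 'play' and 'timeupdate' fire earlier than that.
function startUploadsWhenReady(video, startRenderLoop) {
  video.addEventListener('playing', () => startRenderLoop(), { once: true });
}

// Per-frame upload, only called after 'playing' has fired:
function updateTexture(gl, video) {
  gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, gl.RGBA, gl.UNSIGNED_BYTE, video);
}
```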

Is there any way to compensate for drift between the css event.elapsedTime with the time passed in the play method of an AudioContext note?

From my tests, and from searching for more about the problem, my best guess is that CSS animations may be using a different physical clock from the one used to stream audio. If so, perhaps the answer is that it can't be done, but I am asking in case I am missing something.
It is for my online metronome here.
It is able to play notes reasonably accurately in response to an event listener for the CSS animationiteration event. The event listener is set up using e.g.
x.addEventListener("animationstart",playSoundBall2);
See here.
However if I try to synchronize the bounce with the sample precise timing of the AudioContext method that's when I run into problems.
What I do is to use the first css callback just to record the audio context time for the css elapsed time of 0. Then I play the notes using the likes of:
oscillator.start(desired_start_time);
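Put together, the scheme described here (record the AudioContext time corresponding to CSS elapsed time 0, then schedule every note relative to it) might look like this sketch; `makeScheduler` and its parameters are my own naming:

```javascript
// On the first CSS animation callback, record the AudioContext time
// corresponding to CSS elapsed time 0; schedule all later notes from it.
function makeScheduler(audioCtx, beatSeconds) {
  let t0 = null; // AudioContext time at CSS elapsed time 0
  return {
    onFirstIteration() { t0 = audioCtx.currentTime; },
    scheduleBeat(n) {
      const osc = audioCtx.createOscillator();
      osc.connect(audioCtx.destination);
      osc.start(t0 + n * beatSeconds);       // sample-accurate start
      osc.stop(t0 + n * beatSeconds + 0.05); // short click
    },
  };
}
```

This makes the audio sample-accurate against the AudioContext clock, which is exactly why any drift between that clock and the clock driving the CSS animation shows up, as the question goes on to describe.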
You can try it out with the option on the page: "Schedule notes in advance for sample-precise timing of the audio" on the page here.
You can check how much it drifts by switching on "Add debug info to extra info" on the same page.
On my development machine it works fine in Firefox. But in Edge and Chrome it drifts away from the bounce, and not in a steady way: it can be fine for several minutes, and then the bounce starts to slow down relative to the audio stream.
It is not a problem with browser activity. If I move the browser around and try to interrupt the animation, the worst that happens is that it drops notes (it is also liable to drop notes when the browser isn't active), but the notes it does play are exactly in time.
My best guess so far is that the browser is using the system time for animations, while the AudioContext play method schedules at a precise point in a continuous audio stream. Those may well use different hardware clocks, judging by online searches for the problem.
Firefox may for some reason be using the same hardware clock, maybe just on my development machine.
If this is correct, it rather looks as if there is no way to guarantee to precisely synchronize html audio played using AudioContext with css animations.
If that is so I would also think you probably can't guarantee to synchronize it with any javascript animations as it would depend on which clocks the browser uses for the animations, and how that relates to whatever clock is used for streaming audio.
But can this really be the case? What do animators do who need to synchronize sound with animations for long periods of time? Or do they only ever synchronize them for a few minutes at a time?
I wouldn't have noticed if it weren't that the metronome naturally is used for long periods at a time. It can get so bad that the click is several seconds out from the bounce after just two or three minutes.
At other times - well while writing this I've had the metronome going for ten minutes in my Windows 10 app and it has drifted, but only by 20-30 ms relative to the bounce. So it is very irregular, so you can't hope to solve this by adding in some fixed speed up or slow down to get them in time with each other.
I am writing this just in case there is a way to do this in javascript, anything I'm missing. I'm also interested to know whether it makes any difference if I use other methods of animation. I can't see how one could use the audio context clock directly for animation: you can only schedule notes in the audio stream, not a callback at a particular exact future time according to the audio stream.

HTML 5 how to record filtered canvas

I am going to use my webcam as a source and show the view on a webpage, then manipulate the view (black-and-white, fisheye, etc.) and show that manipulated video in my canvas.
An example ( http://photobooth.orange-coding.net/ )
Ok, everything is cool so far. I can capture that manipulated canvas as an image.
Is there any way to record that manipulated canvas as video?
I also found an example (https://www.webrtc-experiment.com/ffmpeg/audio-plus-canvas-recording.html)
But when I tried that code in my webcam recording project, it just records my source view (not black-and-white). It does not apply my effect to the recording.
Any ideas, or is it possible at all?
Thank you.
Recording video in the browser is like getting blood out of a stone. If you hit it hard and long enough against your head, there will be blood, eventually. But it's a painful experience, and it will certainly give you a headache!
There is currently no way of recording video in real time from a canvas element. But there is a proposed MediaStream Recording API which includes video (and excludes the canvas part). Currently only audio is supported, and only in FF.
You can grab an image as often as possible and use it as a sequence, but there are several issues you will run into:
You will not get full frame rate if you choose to grab the image as JPEG or PNG (and PNG is not very useful for video, as video has no alpha)
If you choose to grab the raw data you may achieve full frame rate (note that frame rate for video is typically never above 30 FPS) but you will fill up the memory very quickly, and you would need a point in time to process the frames into something that can be transferred to server or downloaded. JavaScript is single threaded and no matter how you twist and turn this stage, you will get gaps in the video when this process is invoked (unless you have a lot of memory and can wait until the end - but this not good for a public available solution if that's the goal).
You will have no proper time-code to sync by, so the timing will be variable, like the movies from Chaplin's day. You can get close by binding high-resolution timestamps, but not accurately enough, as you have no way of getting the stamp at the very moment you grab the image.
No sound is recorded; and if you do record audio in FF using the API, you have no way to properly sync the audio with the video anyway (which already has its own problems, see above)
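A sketch of the frame-grabbing approach described above (raw buffers; the names are mine), together with the raw-size arithmetic:

```javascript
// Naive frame-sequence recorder: grabs raw RGBA frames from the canvas.
// Memory grows very fast - see the size check below.
function makeFrameGrabber(canvas, fps = 30) {
  const ctx = canvas.getContext('2d');
  const frames = [];
  let timer = null;
  return {
    start() {
      timer = setInterval(() => {
        frames.push(ctx.getImageData(0, 0, canvas.width, canvas.height));
      }, 1000 / fps);
    },
    stop() { clearInterval(timer); return frames; },
  };
}

// Raw size: one minute of HD720 @ 30 fps, 4 bytes (RGBA) per pixel.
const bytesPerMinute = 30 * 60 * 1280 * 720 * 4; // 6,635,520,000 bytes
```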
Up until now we are still at single-frame sequences. If you record one minute @ 30 fps you have 60x30 frames, or 1800 pictures/buffers per minute. If you record in HD720 and choose to grab the raw buffer (the most realistic option here) you will end up with 1800 x 1280 x 720 x 4 (RGBA) bytes per minute, or 6,635,520,000 bytes, i.e. 6.18 GB per minute - and that's just the raw size. Even if you lower the resolution to, let's say, 720x480 you'll end up with 2.32 GB/min.
You can alternatively process the frames into a video format. It's possible, but currently there are next to no solutions for this (there has been one, but it had varying results, which is probably why it's hard to find...), so you are left to do this yourself, and that is a complete project involving writing an encoder, a compressor, etc. Memory usage will also be quite high, as you need to keep each frame in a separate buffer until you know the full length, then create a storage buffer to hold them all, and so forth. And even if you did, compressing more than 6 GB of data (or even "just" 2 GB) is not going to make the user or browser very happy (if there is any memory left)...
Or bite the bullet and go with a commercial Flash-based solution (but that excludes your image processing and pretty much takes over the camera... so not really an option in this case).
The only realistic option, IMO, is to wait for the aforementioned API. This will let the browser do all the hard work in compiled, optimized code, enable frame-by-frame compression that leaves memory pretty much intact, and give very little headache compared to the alternative(s) above. There may be an option to apply shaders to the stream at some point, or to integrate it with some canvas processing (not on the table in this proposal AFAICS), so recording in real time from a canvas will still be a challenge.
This is where server side processing comes in...
(Of course, a screen recorder is an option. It is completely non-integrated, but it will at least enable you to demo your effects...)

How to keep a live MediaSource video stream in-sync?

I have a server application which renders a 30 FPS video stream then encodes and muxes it in real-time into a WebM Byte Stream.
On the client side, an HTML5 page opens a WebSocket to the server, which starts generating the stream when connection is accepted. After the header is delivered, each subsequent WebSocket frame consists of a single WebM SimpleBlock. A keyframe occurs every 15 frames and when this happens a new Cluster is started.
The client also creates a MediaSource, and on receiving a frame from the WS, appends the content to its active buffer. The <video> starts playback immediately after the first frame is appended.
Everything works reasonably well. My only issue is that the network jitter causes the playback position to drift from the actual time after a while. My current solution is to hook into the updateend event, check the difference between the video.currentTime and the timecode on the incoming Cluster and manually update the currentTime if it falls outside an acceptable range. Unfortunately, this causes a noticeable pause and jump in the playback which is rather unpleasant.
The solution also feels a bit odd: I know exactly where the latest keyframe is, yet I have to convert it into a whole second (as per the W3C spec) before I can pass it into currentTime, where the browser presumably has to then go around and find the nearest keyframe.
My question is this: is there a way to tell the Media Element to always seek to the latest keyframe available, or keep the playback time synchronised with the system clock time?
network jitter causes the playback position to drift
That's not your problem. If you are experiencing drop-outs in the stream, you aren't buffering enough before beginning playback. Playback just needs an appropriately sized buffer, even if that puts it a few seconds behind realtime (which is normal).
My current solution is to hook into the updateend event, check the difference between the video.currentTime and the timecode on the incoming Cluster
That's close to the correct method. I suggest you ignore the timecode of incoming cluster and instead inspect your buffered time ranges. What you've received on the WebM cluster, and what's been decoded are two different things.
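A sketch of that buffered-range check (the latency threshold and seek margin are my own choices, not from the answer):

```javascript
// Keep live playback near the end of what has actually been buffered,
// rather than comparing against incoming WebM cluster timecodes.
const MAX_LATENCY_S = 2; // how far behind the buffer end we tolerate

function resyncToBufferEnd(video) {
  const b = video.buffered;
  if (b.length === 0) return;
  const end = b.end(b.length - 1); // end of the last buffered range
  if (end - video.currentTime > MAX_LATENCY_S) {
    video.currentTime = end - 0.5; // jump close to the live edge
  }
}
// Run this e.g. from the SourceBuffer's 'updateend' handler.
```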
Unfortunately, this causes a noticeable pause and jump in the playback which is rather unpleasant.
How else would you do it? You can either jump to realtime, or you can increase playback speed to catch up to realtime. Either way, if you want to catch up to realtime, you have to skip in time to do that.
The solution also feels a bit odd: I know exactly where the latest keyframe is
You may, but the player doesn't until that media is decoded. In any case, the keyframe is irrelevant... you can seek to non-keyframe locations. The browser will decode ahead through P/B-frames as required.
I have to convert it into a whole second (as per the W3C spec) before I can pass it into currentTime
That's totally false. The currentTime is specified as a double. https://www.w3.org/TR/2011/WD-html5-20110113/video.html#dom-media-currenttime
My question is this: is there a way to tell the Media Element to always seek to the latest keyframe available, or keep the playback time synchronised with the system clock time?
It's going to play the last buffer automatically. You don't need to do anything. You're doing your job by ensuring media data lands in the buffer and setting playback as close to that as reasonable. You can always advance it forward if a network condition changes to allow it, but frankly it sounds as if you just have broken code and a broken buffering strategy. Otherwise, playback would simply be smooth.
Catching up after falling behind is not going to happen automatically, nor should it. If the player pauses because the buffer has drained, the buffer needs to be built back up before playback can resume. That's the whole point of the buffer.
Furthermore, expecting to keep anything in time with the system clock is not a good idea and is unreasonable. Different devices have different refresh rates and will handle video at different rates. Just hit play and let it play. If you end up several seconds off, go ahead and set currentTime, but be very confident of what you've buffered before doing so.
