OpenGL or other image processing in the browser, without canvas? - javascript

I'm trying to find a way to build a program similar to After Effects or Blender in the browser, in the sense of developing animations and compiling the animation and keyframe info into a video, preferably frame by frame.
Using the built-in methods of the canvas and MediaRecorder, I have yet to find a way to produce a video by inserting one frame at a time at a set frame rate. The MediaRecorder just records live in real time; it doesn't seem to be meant for frame-by-frame rendering. The canvas is also limited in that it is too slow or unreliable to consistently read the image back from it using toDataURL, readPixels, etc.
Is there a way to process videos in JavaScript without using the canvas at all?
Meaning, it would instead just generate the image data and display it in an img tag, or even do real-time low-quality video encoding and streaming on the client side and attach the stream to a video tag somehow. Or perhaps the canvas could be used for developing the keyframes, but the actual processing would be done from scratch. Is this the best way to go about this? And if so, how can it be done?
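For what it's worth, browsers have since gained APIs that make exactly this kind of frame-by-frame capture possible. Below is a minimal sketch, assuming a browser that supports canvas.captureStream(0) and CanvasCaptureMediaStreamTrack.requestFrame(); the drawFrame callback is a hypothetical renderer for frame i:

```javascript
// Pure helper: duration of one frame in milliseconds for a given frame rate.
function msPerFrame(fps) {
  return 1000 / fps;
}

// Record frameCount frames from a canvas, pushing exactly one frame per draw.
function recordFrames(canvas, drawFrame, frameCount, fps) {
  // frameRate 0 means "only capture when requestFrame() is called".
  const stream = canvas.captureStream(0);
  const track = stream.getVideoTracks()[0];
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  return new Promise((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: 'video/webm' }));
    recorder.start();
    let i = 0;
    const step = () => {
      if (i >= frameCount) { recorder.stop(); return; }
      drawFrame(i++);        // render the next frame onto the canvas
      track.requestFrame();  // push exactly one frame into the stream
      setTimeout(step, msPerFrame(fps));
    };
    step();
  });
}
```

Note that MediaRecorder still timestamps frames in wall-clock time, so this paces the capture rather than guaranteeing an exact constant frame rate; for true per-frame timing control, a frame-level encoder such as the later WebCodecs API would be needed.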

Related

Why does Facebook use videos and PNGs as GIFs?

We can post an animated GIF on Facebook, but Facebook saves it as a video. For example, take an animated GIF on Facebook: if you look at the source, you find a video. All animated stickers work the same way in the source; they are animated by JavaScript. Why doesn't Facebook use GIF files directly? Is there any good reason?
You cannot easily start/stop GIF animations.
“Stopping” them would only be possible by replacing them with a static image anyway, and getting a GIF animation to start with precise timing is hard to achieve cross-browser as well: some browsers play an animation only once and won't replay it if the same image gets used somewhere else later, some start it at different times, etc.
So most likely this choice was made to have full control in that regard, which a video easily offers where a GIF does not.
(Plus, advanced current video codecs often offer better compression than the rather old GIF format.)
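A sketch of the control in question, assuming the GIF has been replaced by a muted, looping <video> element (as Facebook does):

```javascript
// Stop a video standing in for a GIF: the paused first frame
// acts as the static image a GIF could only approximate.
function stopAnimation(video) {
  video.pause();
  video.currentTime = 0; // rewind to the first frame
}

// Start the animation with precise, repeatable timing,
// which a GIF cannot guarantee cross-browser.
function startAnimation(video) {
  video.currentTime = 0;
  return video.play();
}
```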

HTML 5 how to record filtered canvas

I am going to use my webcam as a source and show the view on a web page; then I will apply effects to the view (black-and-white, fisheye, etc.) and show the manipulated video in my canvas.
An example ( http://photobooth.orange-coding.net/ )
OK, everything is cool so far. I can capture that manipulated canvas as an image.
Is there any way to record that manipulated canvas as video?
I also found an example (https://www.webrtc-experiment.com/ffmpeg/audio-plus-canvas-recording.html)
But when I tried that code in my webcam recording project, it just records my source view (not the black-and-white version). It does not apply my effect to the recording.
Any ideas, or is it even possible?
Thank you.
Recording video in the browser is like getting blood out of a stone. If you hit it hard and long enough against your head, there will be blood, eventually. But it's a painful experience, and it will certainly give you a headache!
There is currently no way of recording video in real time from a canvas element. There is a proposed MediaStream Recording API which includes video (but excludes the canvas part); currently only audio is supported, and only in Firefox.
You can grab an image as often as possible and use it as a sequence, but there are several issues you will run into:
You will not get full frame rate if you grab each image as JPEG or PNG (and PNG is not very useful for video anyway, as video has no alpha channel)
If you grab the raw data you may achieve full frame rate (note that video frame rate is typically never above 30 FPS), but you will fill up memory very quickly, and at some point you need to process the frames into something that can be transferred to a server or downloaded. JavaScript is single-threaded, and no matter how you twist and turn this stage, you will get gaps in the video while this processing runs (unless you have a lot of memory and can wait until the end, but that's no good for a publicly available solution, if that's the goal).
You will have no proper time-code to sync by, so the video timing will be variable, like films from Chaplin's day. You can get close by recording high-resolution timestamps, but not accurately enough, as you have no way of getting the stamp at the exact moment you grab the image.
No sound is recorded; and if you do record audio in Firefox using the API, you have no way to properly sync the audio with the video anyway (which already has its own problems, see above)
Up until now we are still dealing with single-frame sequences. If you record one minute at 30 FPS you have 60 × 30 frames, or 1800 pictures/buffers per minute. If you record in HD720 and grab the raw buffer (the most realistic option here) you will end up with 1800 × 1280 × 720 × 4 (RGBA) bytes per minute, or 6,635,520,000 bytes, i.e. 6.18 GB per minute, and that's just the raw size. Even if you lower the resolution to, let's say, 720×480 you'll end up with 2.32 GB/min.
You can alternatively process the frames into a video format. It's possible, but currently there are next to no solutions for this (there has been one, but it had varying results, which is probably why it's hard to find...), so you are left to do this yourself, and that is a complete project involving writing an encoder, compressor, etc. The memory usage will also be quite high, as you need to keep each frame in a separate buffer until you know the full length, then create a storage buffer to hold them all, and so forth. And even then, compressing more than 6 GB of data (or even "just" 2 GB) is not going to make the user or browser very happy (if there is any memory left)...
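The storage arithmetic above can be sanity-checked in a few lines:

```javascript
// Raw RGBA frame storage for uncompressed capture: 4 bytes per pixel.
function rawVideoBytes(width, height, fps, seconds) {
  return width * height * 4 * fps * seconds;
}

const toGB = (bytes) => bytes / 1024 ** 3;

const hd720 = rawVideoBytes(1280, 720, 30, 60); // one minute of HD720
const sd = rawVideoBytes(720, 480, 30, 60);     // one minute of 720x480

// hd720 === 6,635,520,000 bytes, toGB(hd720) is about 6.18 GB;
// toGB(sd) is about 2.32 GB, matching the figures in the answer.
```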
Or bite the bullet and go with a commercial Flash-based solution (but that excludes your image processing and pretty much takes over the camera... so it's not really an option in this case).
The only realistic option, IMO, is to wait for the aforementioned API. This will let the browser do all the hard work in compiled, optimized code, enable frame-by-frame compression while leaving memory pretty much intact, and cause very little headache compared to the alternatives above. There may at some point be an option to apply shaders to the stream, or to integrate it with some canvas processing (not on the table in this proposal AFAICS), so recording in real time from a canvas will still be a challenge.
This is where server side processing comes in...
(A screen recorder is, of course, an option; it is completely non-integrated, but it will at least let you demo your effects...)

How to perfectly sync two or more html5 video tags?

Is there any way to have two or more (preferably three) HTML5 <video> tags playing simultaneously and in perfect sync?
Say I have three tiles of one video and I want them to appear in the browser as one big video. They need to be perfectly synchronized, without even the smallest visual hint that they are tiled.
Unfortunately I cannot use MediaController because it is not supported well enough.
I've tried some workarounds, including canvases, but I still get visible differences. Has anyone had a similar problem/solution?
Disclaimer: I'm not a video guy, but here are some thoughts anyway.
If they need to be absolutely perfect...you are fighting several problems at once:
A device might not be powerful enough to acquire, synchronize and render 3 streams at once.
Even if #1 is solved, a device is never totally dedicated to your task. For example, it might pause for garbage collection between processing stream#1 and stream#2--resulting in dropped/unsynchronized frames.
So to give yourself the best chance at perfection, you should first merge your 3 videos into 1 vertical video in the studio (or using studio software).
Then you can use the extended clipping parameters of the canvas context.drawImage to split each single source frame into 2-3 separate tiles.
Additionally, buffer a few frames as you acquire them from the stream (this goes without saying!).
Use requestAnimationFrame (RAF) to control the drawing. RAF does a fairly good job of drawing frames when system resources are available and delaying frames when system resources are lacking.
Your result won't be perfect, but they will be synchronized. You will always have to make the decision whether to drop or delay frames when system resources are unavailable, but at least the frames you do present will be synchronized.
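The merge-then-clip approach above can be sketched as follows; the tile geometry assumes a pre-merged vertical video, and the canvas elements themselves are assumptions:

```javascript
// Pure helper: source rectangles for splitting one tall video into N
// horizontal bands, one per tile.
function tileRects(videoWidth, videoHeight, tiles) {
  const h = videoHeight / tiles;
  return Array.from({ length: tiles }, (_, i) => (
    { sx: 0, sy: i * h, sw: videoWidth, sh: h }
  ));
}

// Draw every tile from the same decoded frame, so the tiles can never
// drift relative to each other; RAF paces the drawing.
function renderTiles(video, canvases) {
  const rects = tileRects(video.videoWidth, video.videoHeight, canvases.length);
  function draw() {
    canvases.forEach((c, i) => {
      const { sx, sy, sw, sh } = rects[i];
      // 9-argument drawImage: clip a band from the source, scale it to the canvas.
      c.getContext('2d').drawImage(video, sx, sy, sw, sh, 0, 0, c.width, c.height);
    });
    requestAnimationFrame(draw);
  }
  draw();
}
```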
As far as I know it's currently impossible to play HTML5 video frame by frame or seek to a frame-accurate time-code. The nearest seek seems to be precise only to roughly one second.
But you can still get pretty close using some of the media frameworks:
Popcorn.js library made for synchronizing video with content.
mediagroup.js another library used to add support for mediagroup attributes on HTML5 media elements
The only feature that allowed this is named mediaGroup, and it was removed from Chrome (apparently for not being popular enough). It's still present in WebKit. Relevant discussion here and here.
I think you can implement your own mediaGroup-like behavior using WASM, though without DOM support it may be tricky.
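Short of a mediaGroup replacement, a common workaround is a drift-correction loop that snaps each slave video back to the master's clock. A sketch, where the 0.05-second tolerance is an arbitrary assumption (below some threshold, seeking causes more jitter than it fixes):

```javascript
const TOLERANCE = 0.05; // seconds of drift tolerated before a hard resync

// Pure helper: how far a slave has drifted from the master clock.
function drift(masterTime, slaveTime) {
  return slaveTime - masterTime;
}

// Check all slaves against the master once per animation frame and
// hard-resync any that have drifted past the tolerance.
function syncLoop(master, slaves) {
  function tick() {
    for (const v of slaves) {
      if (Math.abs(drift(master.currentTime, v.currentTime)) > TOLERANCE) {
        v.currentTime = master.currentTime;
      }
    }
    requestAnimationFrame(tick);
  }
  tick();
}
```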

canvas: create a game-like exploding stars effect for gamification purposes

Coming from the Backbone side of web development, we are trying to find a solution to a request to add visual and sound effects to our task-management web application.
For starters - we are looking for a way to create an exploding stars effect like you see in games.
Can this be done with HTML5 canvas?
Should we use flash?
Any ideas how to start?
By now, just about anything Flash can do visually can be done by the HTML5 canvas on modern browsers.
For a 'star burst' visual effect, it sounds like a simple matter of creating a random array of objects that move away in random pre-set directions every time the canvas updates.
Example: http://jsfiddle.net/amDAW/ (click on the canvas to create a starburst)
As for sounds, this isn't handled in the canvas, but rather either the Audio tag or the fairly new WebAudio API. If you go with the former (more browser support), your biggest concern will be with resource preloading, but there are some helper libraries that can abstract this away (shameless advertising: https://github.com/jsweeneydev/ResourceLoader).
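A minimal sketch of the particle logic behind such a starburst (all constants here are arbitrary choices, not taken from the linked fiddle):

```javascript
// Spawn `count` particles at (x, y), each flying off in a random direction.
function createBurst(x, y, count) {
  return Array.from({ length: count }, () => {
    const angle = Math.random() * 2 * Math.PI;
    const speed = 1 + Math.random() * 3; // pixels per frame
    return { x, y, vx: Math.cos(angle) * speed, vy: Math.sin(angle) * speed, life: 60 };
  });
}

// Advance every particle one frame and drop the ones that have expired.
function updateBurst(particles) {
  for (const p of particles) {
    p.x += p.vx;
    p.y += p.vy;
    p.life -= 1; // particles expire after ~60 frames
  }
  return particles.filter((p) => p.life > 0);
}

// In the browser, a draw loop would clear the canvas each frame, call
// updateBurst, and draw a small circle or star sprite at each particle.
```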

Adjust playback speed of videos in a browser

There is a program at enounce.com that will increase the playback speed of a video in a browser. I think 95% of videos on the internet run on Flash, so this tool can be useful. I am wondering how that program was created. Maybe it modifies the HTML source in the browser? Perhaps it looks for the SWF video playing in your browser and injects some JavaScript into that HTML element to increase the speed. I've been researching on Google, and I think it is possible to alter the playback speed of a video with JavaScript. If it is not modifying the HTML page, it would be nice to at least know how this can be achieved. Also, if a video plays in your browser it has to be saved somewhere on your computer, I believe; that's why you can seek back and forth once the video has finished downloading. Why is it almost impossible to find that file, so that the only way of getting the video is capturing the packets with a packet sniffer? Anyway, that is not my question; I am just really curious how the program achieves what it does. It speeds up everything, even Pandora songs.
MySpeed seems to intercept the media stream coming from the server into the Flash player that sits in your browser. It changes the speed on the fly, and sends the result to the Flash player.
PS: If you need to control the playing speed of your own videos, I recommend looking into the VLC browser plug-in or the QuickTime player, which also has very good speed-control features (accessible from JavaScript). Or you could use the HTML5 <video> tag.
AFAIK, Flash-based players like Longtail/JWPlayer and Nonverbla don't have very good support for this.
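If the HTML5 <video> route is an option, speed control needs no stream interception at all; it is built into the element:

```javascript
// Set a video element's playback speed via the standard playbackRate property.
// 1.0 is normal speed, 2.0 is double, 0.5 is half; whether pitch is corrected
// is up to the browser.
function setSpeed(video, rate) {
  video.playbackRate = rate;
  return video.playbackRate;
}
```

Usage: `setSpeed(document.querySelector('video'), 1.5)` (the selector is just an example).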
