YouTube-style video annotation/overlays (React) - javascript

Our organization needs what amounts to a YouTube-style annotation system: the ability to overlay text/images on video at specific times.
I did my best to search for existing React components, or even existing vanilla JS libs to use as a reference implementation, but came up empty. If anyone knows of any resources I may have missed, the rest of this post may not even be needed.
I need help with a strategy to render these overlay components at specific times in the video, while staying synchronized with the video's current time. Since we are already using Redux, my initial thought was to ramp up on RxJS and redux-observable, and create a stream/observable using a timeout scheduler to avoid some sort of polling strategy. I'd also listen for play/pause/seek events from the video to cancel/restart the timeout scheduler.
I've never used RxJS before, so I wanted to get some feedback before starting to ramp up on knowledge and moving to implementation. Are there any inherent flaws in what I outlined above? Is there a different strategy that may work better?
Thanks guys!
TL;DR: Need help creating time-synced components overlaid on video.

It's not about React or RxJS frameworks, but about JavaScript. As soon as you have a JavaScript solution, it is possible to fit it into almost any framework. So the key question is: is there an available JavaScript solution?
Well, that question has already been answered. Check here: "Youtube type annotation in html5 videos".

Alternatives to Image Capture for mobile Safari

We are creating an app that lets a user capture a number of images and it will try to create a 3D model of the target object. In order to help the users capture useful images we give them some guidance while they move their phone from one capture to the next.
We have a nice prototype working by means of navigator.mediaDevices.getUserMedia() that captures video, displays it in a <video> element, and has an overlay that shows how to move the phone. When they are ready they press a button and we grab the current frame of the streamed video.
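A simplified sketch of that frame grab (the selector is illustrative):

    // Draw the live <video> onto a canvas and read out the result.
    const video = document.querySelector('video');
    const canvas = document.createElement('canvas');
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    canvas.getContext('2d').drawImage(video, 0, 0);
    const frame = canvas.toDataURL('image/png'); // the captured frame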
We were quite happy with this until we realized that the captured image often does not have enough quality; mainly, the images tend to be a bit blurred because the user may not hold the device totally still. This causes the math behind creating the 3D model to fail.
I am now tasked with trying to improve this, but I don't think I have many options. Here is what I have been investigating, and the drawbacks of each:
JavaScript's ImageCapture API. This seems to be exactly what we need: a way to actually take a picture instead of grabbing a frame from a video. While the API is still experimental, it seems pretty stable, and Chrome has implemented it since version 59. The problem is that Safari (our main target) does not implement it, and it seems they may never do so. I can't find any information on what their plans are, but as of today this is not an option.
Use an input element of type file with the capture attribute. While this lets me capture images with the native camera, as far as I know I cannot give the user any guidance.
Create a whole mobile app. This would require another year of work and would require our existing users to install an app, which may not be possible. It would also leave Android devices out, which we'd prefer to avoid.
While typing this I thought of perhaps using the video instead of capturing the images, but I am not sure that would help in any way.
Instead of a different way of capturing the image, I could try to only grab the frame once I can confirm that the device is as close to still as possible (using a threshold value). Perhaps I could use the gyroscope/motion sensors for this (we already use them to check that the device has been moved to a position and angle we consider useful for the process); a rough sketch of this idea follows below. The drawback is that I am not sure it would really mitigate our problem: how still is still enough? Is it possible for a person to be that still for a second?
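The sketch (the threshold and time window are guesses I would have to tune on real devices):

    // Stillness check sketch. STILL_THRESHOLD and the 500 ms window are
    // illustrative guesses, not tested values.
    // Note: iOS 13+ Safari requires DeviceMotionEvent.requestPermission() first.
    const STILL_THRESHOLD = 0.15; // m/s^2, gravity excluded
    let lastMovement = Date.now();

    window.addEventListener('devicemotion', (e) => {
      const a = e.acceleration; // can be null on some devices
      if (!a) return;
      const magnitude = Math.hypot(a.x || 0, a.y || 0, a.z || 0);
      if (magnitude > STILL_THRESHOLD) lastMovement = Date.now();
    });

    function isStillEnough() {
      return Date.now() - lastMovement > 500; // still for the last 500 ms
    }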
So my question here is: can anyone think of another alternative to the ones I described, or perhaps an improvement to one of them?
By the way, does anyone know what Apple's plans are for the ImageCapture API?

Is it possible to cache a button?

I am working on converting a Flash game to CreateJS, using Adobe Animate CC 2017, and I am facing performance issues: memory keeps growing. When I googled this, the advice was to add caching. I started adding caching to the parts of the converted Flash file that have no animations, but when I tested it I lost the button effects. Is there any way I can cache a button?
A button created in Animate (I assume using the ButtonHelper?) is essentially a MovieClip with different states that activate when you interact with it. If you cache a MovieClip, it will store the current state into a single cache-canvas, which is why it will no longer update.
If your Button has vector or complex states, you could cache those frames instead, and leave the Button/MovieClip un-cached. It would help to see what the contents contain. Feel free to post some code, and I can update my answer with some suggestions.
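For example, something along these lines, assuming the states are child display objects with vector content (the child names are illustrative; nominalBounds is what Animate exports on its symbols):

    // Cache each static state's vector artwork, leaving the button
    // MovieClip itself un-cached so its frame-switching keeps working.
    [upState, overState, downState].forEach((state) => {
      const b = state.nominalBounds || state.getBounds();
      state.cache(b.x, b.y, b.width, b.height);
    });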
About EaselJS Caching
Caching is valuable when you have vector, text, or grouped content that doesn't change often. It is even better if you can group those caches into a shared SpriteSheet, which helps the GPU manage fewer textures. Note that simply "caching things" will not necessarily win back any performance; it depends on what you are doing.

Real-time object detection/recognition on a webcam stream in JavaScript and WebGL

I have just started a school assignment in which I have to detect and recognize objects in a webcam stream, in a WebGL application. It will be added to an already existing JavaScript plugin. It is important that this runs in real time, since the detected objects will change things in the application.
For example, if a user wears a yellow shirt with a specific icon on it, the layout of the application will change.
I have researched this for a few days now and found some interesting articles.
This seems like an interesting approach:
http://research.ijcaonline.org/volume83/number3/pxc3892575.pdf
And of course the SURF algorithm seems to be a legitimate approach:
http://www.vision.ee.ethz.ch/~surf/eccv06.pdf
So my question is: what algorithms might be best to implement?
And, if possible, which might be the easiest to implement? I have quite limited time, and this is only one of the objectives of this project.
I appreciate all the help and answers I can get.
Edit: SURF is not acceptable because of patents.
Please refer to the Haar cascade or SURF-based classifiers featured in this answer (replace "face" with "any object").
However, you would probably need to train your own classifiers, which is not feasible within a limited timeframe.
Alternatively, simplify your application and go with simple color tracking.
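As a rough illustration of color tracking, you can draw each webcam frame onto a small canvas and count the pixels that fall into a target color range (the thresholds below are illustrative and would need tuning for your lighting):

    // Naive color-tracking sketch: does a noticeable share of pixels look yellow?
    function containsYellow(video, canvas) {
      const ctx = canvas.getContext('2d');
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
      let hits = 0;
      for (let i = 0; i < data.length; i += 4) {
        const r = data[i], g = data[i + 1], b = data[i + 2];
        if (r > 180 && g > 160 && b < 100) hits++; // rough "yellow" test
      }
      return hits / (data.length / 4) > 0.05; // more than 5% of pixels match
    }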

Web Audio - How to change playbackRate of all sounds instantly?

When using Web Audio, you can connect all sounds you create to one globally created gainNode and use that node to have a "Master Volume" property. This is very handy when you want to be able to change the master volume on the fly and want it to affect all sounds immediately.
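For reference, this is the master-volume pattern I mean (a sketch):

    // Route every sound through one shared GainNode instead of connecting
    // each source chain directly to the destination.
    const ctx = new AudioContext();
    const masterGain = ctx.createGain();
    masterGain.connect(ctx.destination);
    // ...each sound's node chain ends with: someNode.connect(masterGain);
    masterGain.gain.value = 0.5; // instantly affects all connected sounds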
Now, I am trying to accomplish the same, but for playbackRate. For reference: this would be for a web game where you can use a power-up to slow down time, which should also slow down all music and sounds.
Each sound I create is an AudioBufferSourceNode linked to a chain of processing nodes. Now, I know that the AudioBufferSourceNode itself has a playbackRate property you can change. This is great, but it would require me to cache all the AudioBufferSourceNodes I create, loop over them, and change their playbackRate whenever I wanted to change a "global playbackRate" on the fly. It would be perfect if I could accomplish this in the same way as with the global gainNode, but I couldn't find a way to do that.
What would be the proper way to implement such a feature? Would you recommend caching all created AudioBufferSourceNodes (there can be thousands) and looping over them? That's how I do this with HTML5 Audio, but it seems hacky for Web Audio, which is much more advanced.
If you want more information, please ask and I will update the question!
You can't directly do that; there are some source nodes, like live input, that don't have playback rate controls at all. So you're best off doing what you suggest: keeping a list of active sounds to loop through.
You could use a granular method to resample and pitch-bend it down - like the "pitch bend" code in my audio input effects demo (https://webaudiodemos.appspot.com/input/). That's a bit costly to keep around just in case you want to make the effect, though.
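A sketch of that bookkeeping (names are illustrative):

    // Track live AudioBufferSourceNodes and update playbackRate on all at once.
    const ctx = new AudioContext();
    const activeSources = new Set();

    function playSound(buffer) {
      const src = ctx.createBufferSource();
      src.buffer = buffer;
      src.connect(ctx.destination);
      activeSources.add(src);
      src.onended = () => activeSources.delete(src); // prune finished sounds
      src.start();
      return src;
    }

    function setGlobalPlaybackRate(rate) {
      for (const src of activeSources) {
        src.playbackRate.value = rate;
      }
    }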

CreateJS/EaselJS web page (not game) consisting of several pages

I have coded an existing (pure) HTML5 Canvas web page consisting of several pages, 'buttons' and 'hotspots'. It is pure canvas JavaScript code.
The reason I put 'buttons' and 'hotspots' in quotes is that I actually implemented them in pure JavaScript from scratch, without any framework: I created 'classes' for buttons, hotspots, mouse event detection, etc.
These elements are reaching the end of their usefulness, so I need better elements, and especially a scrollbar that responds well to mouse scrolling.
As the web site is being redesigned and many new and more complex requirements need to be implemented, it is no longer feasible to keep coding in raw JavaScript like this; I need a serious graphical framework.
Between KineticJS and CreateJS/EaselJS I chose the latter.
Now, since this is not a one-page game but a multi-page website with somewhat complex navigation between pages, can someone advise me on what approach to take?
Containers as 'pages' with 'buttons' on them? What should be used for a button, and how should different pages and machine states be handled in CreateJS/EaselJS?
Did I make the right choice? Is this easier in KineticJS?
Can you share an experience and/or advice, please?
Since EaselJS is "just" a graphical framework, there is no native support for states. However, compared to KineticJS, I wouldn't say there is a huge difference for your purpose (someone correct me if I'm wrong here).
I'd use the same approach of Containers as pages. For buttons, I'd use the ButtonHelper class: http://www.createjs.com/Docs/EaselJS/classes/ButtonHelper.html
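A bare-bones sketch of that structure (the canvas id, page names, and button wiring are illustrative):

    // One Container per "page", swapped in and out of the stage.
    const stage = new createjs.Stage('canvas');
    const pages = {
      home: new createjs.Container(),
      about: new createjs.Container(),
    };

    function showPage(name) {
      stage.removeAllChildren();
      stage.addChild(pages[name]);
      stage.update();
    }

    // Wire a frame-based MovieClip up as a button:
    // new createjs.ButtonHelper(buttonMC, 'out', 'over', 'down');
    // buttonMC.on('click', () => showPage('about'));

    showPage('home');
    createjs.Ticker.addEventListener('tick', stage); // keep the stage redrawing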
You probably knew most of that already, but I thought I'd still share it, maybe it does help you.
