Flag start point and end point of streaming video (node.js / javascript)

I'm trying to write a Node.js website for streaming HTML5 video (WebM, MP4), but I don't know how to tell when a user has finished viewing a video. That means we need a start point (the point at which the user starts viewing the video) and an end point (the point at which the user finishes viewing it), so that we can know what percentage of the video they have viewed.
The videos are located on our server.

The HTML video element (whose interface is a subclass of HTMLMediaElement) has currentTime and duration properties that you can read in JavaScript, and also a number of events you can listen for, like onplay, onplaying, onpause, ontimeupdate, and onended.
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement
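
For example, here is a minimal sketch of tracking the start point, end point, and percentage viewed with those properties and events (the reporting endpoint and the accumulation logic are assumptions, not part of the answer above):

const video = document.querySelector('video');
const watched = new Set();        // whole seconds the user actually saw
let startPoint = null;

video.addEventListener('play', () => {
  if (startPoint === null) startPoint = video.currentTime;  // start point
});

video.addEventListener('timeupdate', () => {
  watched.add(Math.floor(video.currentTime));
});

video.addEventListener('ended', () => {
  const percent = 100 * watched.size / Math.ceil(video.duration);
  // report start point, end point and percentage back to the Node.js server
  fetch('/api/view-progress', {   // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ startPoint, endPoint: video.currentTime, percent })
  });
});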

Related

How to make web app like "Epic Sax Gandalf" using JavaScript?

I want to create an application that, when launched on different devices, will display the same content (music, photos, or video) at the same time.
Here is a simple example, and a real-life example. :)
My first idea was based on the machine's local time:

// when the last four digits of the timestamp are "0000",
// i.e. every 10 seconds, start the music and the animation
const timestamp = new Date().getTime();
if (timestamp % 10000 === 0) {
    playMusicAndAnimation();  // music = 10 s, animation = 10 s
}

so this function would be triggered every 10 seconds.
I know, however, that this solution may not work and the content will still be unsynchronized.
So, does anyone know how to achieve the effect I'm talking about using javascript?
I actually had the same idea as you had and implemented a little proof of concept for it.
You can try it out like this:
1. Open the deployed site (either in two browsers/windows or even on different devices).
2. Choose the same unique channel-id on all of the opened sites.
3. Click "Start". One window should now show the heading "Leader"; the others should be "Follower"s. By clicking on the Video-id field you can paste the video id of any YouTube video (or choose from a small list of recommended ones).
4. Click "Play" in each window.
5. Wait a bit - it can take up to 1 minute until the videos synchronize.
Each Follower shows a "System time offset". On the same device it should be nearly 0 ms. This is the amount by which the system time (Date.now()) in that browser differs from the system time in the Leader window.
On the top left of each video you can see a time that changes every few seconds and should stay under 20 ms once the videos are synchronized. This is the amount by which the video feed differs from its optimal position relative to the synchronized system time.
(I would love to know whether it works for you too. My Pusher deployment is EU-based, so problems with the increased latency could occur...)
How does it work?
The synchronisation happens in two steps:
Step 1: Synchronizing the system times
I basically implemented the NTP (Network Time Protocol) algorithm in JS, using WebSockets or Pusher JS as the channel of communication between each Follower client and the Leader client. See the section "Clock synchronization algorithm" in the Wikipedia article on NTP for more information.
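
A minimal sketch of that clock-synchronization step, assuming a generic sendToLeader/onLeaderReply message channel (those names are placeholders, not the actual WebSocket/Pusher API):

// Follower side: estimate the offset to the Leader's clock, NTP-style.
// t0/t3 are taken on the Follower, t1/t2 on the Leader.
function estimateOffset(sendToLeader, onLeaderReply) {
  const t0 = Date.now();                      // request sent
  sendToLeader({ type: 'time-request', t0 });

  onLeaderReply(({ t0, t1, t2 }) => {         // t1 = received, t2 = answered
    const t3 = Date.now();                    // reply received
    const roundTrip = (t3 - t0) - (t2 - t1);
    const offset = ((t1 - t0) + (t2 - t3)) / 2;
    console.log('offset ' + offset + ' ms, round trip ' + roundTrip + ' ms');
  });
}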
Step 2: Synchronizing the video feed to the "reference time"
At the current time (= the synchronized system time) we want currentVideoTime to be at currentTime % videoLength. Because currentTime (the system time) was synchronized between the clients in Step 1, and videoLength is obviously the same in all clients (since they are supposed to play the same video), the resulting currentVideoTime is the same everywhere too.
The big problem is that even if I simply started the video at the correct time on all clients (via setTimeout()), they probably still wouldn't play in unison, because one system might, for example, have network problems and still be buffering the video, or another program might demand the OS's processing power at that moment. The time between calling the video player's start function and the video actually starting also differs from device to device.
I solve this by checking every second whether the video is at the right position (= currentTime % videoLength). If the difference from the right position is bigger than 20 ms, I stop the video, skip it to the position where it should be in 5 seconds (plus the time it was running late), and start it again at that moment.
The code is a bit more sophisticated (and complicated) but this is the general idea.
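
In minimal form, that correction loop might look like this (videoLength in seconds, and syncedNow(), the offset-corrected system time in milliseconds, are assumed helpers):

const TOLERANCE = 0.02;   // 20 ms
const LEAD = 5;           // re-enter playback 5 s in the future

setInterval(() => {
  const target = (syncedNow() / 1000) % videoLength;
  if (Math.abs(video.currentTime - target) > TOLERANCE) {
    video.pause();
    // jump to where playback should be LEAD seconds from now,
    // then resume exactly when that moment arrives
    video.currentTime = (target + LEAD) % videoLength;
    setTimeout(() => video.play(), LEAD * 1000);
  }
}, 1000);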
sync-difference-estimator
synchronized-player

How to keep a live MediaSource video stream in-sync?

I have a server application which renders a 30 FPS video stream then encodes and muxes it in real-time into a WebM Byte Stream.
On the client side, an HTML5 page opens a WebSocket to the server, which starts generating the stream when connection is accepted. After the header is delivered, each subsequent WebSocket frame consists of a single WebM SimpleBlock. A keyframe occurs every 15 frames and when this happens a new Cluster is started.
The client also creates a MediaSource, and on receiving a frame from the WS, appends the content to its active buffer. The <video> starts playback immediately after the first frame is appended.
Everything works reasonably well. My only issue is that the network jitter causes the playback position to drift from the actual time after a while. My current solution is to hook into the updateend event, check the difference between the video.currentTime and the timecode on the incoming Cluster and manually update the currentTime if it falls outside an acceptable range. Unfortunately, this causes a noticeable pause and jump in the playback which is rather unpleasant.
The solution also feels a bit odd: I know exactly where the latest keyframe is, yet I have to convert it into a whole second (as per the W3C spec) before I can pass it into currentTime, where the browser presumably has to then go around and find the nearest keyframe.
My question is this: is there a way to tell the Media Element to always seek to the latest keyframe available, or keep the playback time synchronised with the system clock time?
network jitter causes the playback position to drift
That's not your problem. If you are experiencing drop-outs in the stream, you aren't buffering enough before starting playback. Begin playback with an appropriately sized buffer, even if that leaves you a few seconds behind realtime (which is normal).
My current solution is to hook into the updateend event, check the difference between the video.currentTime and the timecode on the incoming Cluster
That's close to the correct method. I suggest you ignore the timecode of the incoming Cluster and instead inspect your buffered time ranges. What you've received in the WebM cluster and what has actually been decoded are two different things.
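
For example, comparing the playhead against the end of the buffered ranges rather than the incoming timecode (the 3-second threshold and 0.5-second cushion are illustrative, not prescriptive):

sourceBuffer.addEventListener('updateend', () => {
  const buffered = video.buffered;
  if (buffered.length === 0) return;
  const liveEdge = buffered.end(buffered.length - 1);
  if (liveEdge - video.currentTime > 3) {   // fallen too far behind
    video.currentTime = liveEdge - 0.5;     // keep a small cushion
  }
});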
Unfortunately, this causes a noticeable pause and jump in the playback which is rather unpleasant.
How else would you do it? You can either jump to realtime, or you can increase playback speed to catch up to realtime. Either way, if you want to catch up to realtime, you have to skip in time to do that.
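
A sketch of the second option, catching up gradually by playing slightly faster (the 1.1 factor and 1-second threshold are arbitrary choices here):

setInterval(() => {
  const buffered = video.buffered;
  if (buffered.length === 0) return;
  const lag = buffered.end(buffered.length - 1) - video.currentTime;
  video.playbackRate = lag > 1 ? 1.1 : 1.0;   // speed up until caught up
}, 1000);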
The solution also feels a bit odd: I know exactly where the latest keyframe is
You may, but the player doesn't until that media is decoded. In any case, the keyframe is irrelevant... you can seek to non-keyframe locations. The browser will decode ahead through P/B-frames as required.
I have to convert it into a whole second (as per the W3C spec) before I can pass it into currentTime
That's totally false. The currentTime is specified as a double. https://www.w3.org/TR/2011/WD-html5-20110113/video.html#dom-media-currenttime
My question is this: is there a way to tell the Media Element to always seek to the latest keyframe available, or keep the playback time synchronised with the system clock time?
It's going to play the latest buffered data automatically. You don't need to do anything. You do your job by ensuring media data lands in the buffer and by starting playback as close to the live edge as is reasonable. You can always nudge it forward if network conditions change and allow it, but frankly it sounds as if you just have broken code and a broken buffering strategy; otherwise playback would simply be smooth.
Catching up after falling behind is not going to happen automatically, nor should it. If the player pauses because the buffer has drained, the buffer needs to be built back up before playback can resume. That's the whole point of the buffer.
Furthermore, expecting to keep anything in time with the system clock is not a good idea and is unreasonable. Different devices have different refresh rates and will handle video at different rates. Just hit play and let it play. If you end up several seconds off, go ahead and set currentTime, but be very confident of what you've buffered before doing so.

Load multiple videos without playing

I'm building a video player where each scene is filmed from multiple angles. All videos are hosted on YouTube. I'd like to allow the user to be able to switch between angles seamlessly during playback.
To facilitate this, I need a way to load videos from YouTube without playing them. That way I can load alternate angles in the background while one angle is playing. When the user switches angle, the new angle should be at least partially loaded and ready to play immediately.
Unfortunately, I can't find a way to load a video without playing it.
The loadVideoById method autoplays the video as soon as the request to load the video has returned, so that won't work.
Is this possible?
There's no way to cue up a video and force it to pre-buffer.
You can load (as opposed to cue) a video and then immediately pause it, and it may or may not pre-buffer, but that's dependent on a number of factors and is outside your control as someone using the API.
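
A sketch of that load-then-pause approach with the YouTube IFrame Player API (whether buffering continues after the pause is, as noted, not guaranteed; the element id and video id are placeholders):

const player = new YT.Player('backup-angle', {   // container element id
  videoId: 'VIDEO_ID',                           // placeholder id
  events: {
    onStateChange: (event) => {
      if (event.data === YT.PlayerState.PLAYING) {
        event.target.pauseVideo();               // pause as soon as it starts
      }
    }
  }
});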

Seek video position with video from another domain

http://www.67games.com/video.html
You will notice that after the video starts, if you try to jump to another position in the video, it resets and starts over. That is the problem.
As you can see, the source is external. This uses JW Player, but I have tried other players too.
I have tried other hosts and video formats, and it's the same issue.
Can somebody help me with this? I'm stuck.
I don't think the source media file being on another server/domain has anything to do with your problem.
You have a progressive-download video here, and it seeks just fine up to the point of download. Yes, it does restart from the beginning if you try to seek past the last downloaded mark, but that is to be expected. (How could a client seek into data it doesn't even have yet?)
You have two choices:
Tweak the player so that it never attempts to seek beyond the last downloaded point (see the sketch after this list). If you are only going to have clips this small, this solution seems fine.
Put the clip behind a streaming or pseudo-streaming server. (More appropriate for longer clips.)
Streaming: Red5, Wowza, FMS, etc.
Pseudo-streaming: Apache/lighttpd/etc. with mod_h264_streaming, plus some client-side logic to denote the position to stream from.
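
A sketch of the first option with a plain HTML5 video element (the question uses JW Player, so take this as an illustration of the idea rather than a drop-in fix):

video.addEventListener('seeking', () => {
  const buffered = video.buffered;
  if (buffered.length === 0) return;
  const downloaded = buffered.end(buffered.length - 1);
  if (video.currentTime > downloaded) {
    // never seek past what has been downloaded so far
    video.currentTime = Math.max(0, downloaded - 0.5);
  }
});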

Multiple audio tracks for HTML5 video

I'm building a video for my website with HTML5. Ideally, I'd have only one silent video file, and five different audio tracks in different languages that sync up with the video.
Then I'd have a button that allows users to switch between audio tracks, even as the video is playing; and the correct audio track would come to life (without the video pausing or starting over or anything; much like a DVD audio track selection).
I can do this quite simply in Flash, but I don't want to. There has to be a way to do this in pure HTML5 or HTML5+jQuery. I'm thinking you'd play all the audio files at 0 volume, and only increase the volume of the active track... but I don't know how to even do that, let alone handle it when the user pauses or rewinds the video...
Thanks in advance!
Synchronization between audio and video is far more complex than simply starting the audio and video at the same time. Sound cards will playback at slightly different rates. (What is 44.1 kHz to me, might actually be 44.095 kHz to you.)
Often, the video is synchronized to the audio stream, but the player is what handles this. By loading up multiple objects for playback, you are effectively pulling them out of sync.
The only way this is going to work is if you can find a way to control the different audio streams from the video player. I don't know if this is possible.
Instead, I propose that you encode the video multiple times, with the different streams. You can use FFMPEG for this, and even automate the process, depending on your workflow. Switching among streams becomes a problem, but most video players are robust enough to guess the byte offset in the file, when given the bitrate.
If you only needed two languages, you could simply adjust the balance between a left and right stereo audio channel.
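
A sketch of that two-language trick using the Web Audio API's channel splitter, which isolates one channel rather than merely shifting the balance (everything besides the standard API names is illustrative):

const ctx = new AudioContext();
const source = ctx.createMediaElementSource(video);
const splitter = ctx.createChannelSplitter(2);
const merger = ctx.createChannelMerger(2);
source.connect(splitter);
merger.connect(ctx.destination);

function useLanguage(channel) {            // 0 = left-channel track, 1 = right
  splitter.disconnect();                   // drop the previous routing
  splitter.connect(merger, channel, 0);    // chosen channel -> left ear
  splitter.connect(merger, channel, 1);    // chosen channel -> right ear
}
useLanguage(0);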
If you're willing to let all five tracks download, why not just mux them into the video? Videos are not limited to a single audio track (even AVI could do multiple audio tracks). Then syncing should be handled for you. You'd just enable/disable the audio tracks as needed.
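
If the tracks are muxed in, switching could look like this (the audioTracks API is in the spec, but browser support varies, so treat this as a sketch):

function selectAudioTrack(video, index) {
  const tracks = video.audioTracks;        // AudioTrackList, where supported
  for (let i = 0; i < tracks.length; i++) {
    tracks[i].enabled = (i === index);     // enable one, mute the rest
  }
}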
It is doable with the Web Audio API.
Part of my program listens to video element events and stops or restarts audio tracks created using the Web Audio API. This gives me the ability to turn any of the tracks on and off in perfect sync.
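
A sketch of that approach, assuming the language tracks are fetched and decoded up front (loadTrack/playFrom and the wiring are illustrative, not the actual program):

const ctx = new AudioContext();

async function loadTrack(url) {            // fetch and decode one language
  const data = await (await fetch(url)).arrayBuffer();
  return ctx.decodeAudioData(data);
}

function playFrom(buffer, gain, offset) {  // start a track at the video's time
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(gain).connect(ctx.destination);
  src.start(0, offset);
  return src;
}

// One GainNode per language; flip gains to switch languages in sync:
//   gains[i].gain.value = (i === active) ? 1 : 0;
// On the video's 'seeking'/'pause'/'play' events, stop the sources and
// call playFrom again with video.currentTime so audio stays locked to it.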
There are some drawbacks.
There is no Web Audio API support in Internet Explorer, except for Edge.
The technique works with buffered audio only, and that's limiting. There are also some problems with large files: https://bugs.chromium.org/p/chromium/issues/detail?id=71704
