dash.js reloads the same segment many times - javascript

I'm using dash.js reference player and I'm trying to do frame-wise steps through a video by simply pausing and calling seek() with the appropriate timestamps on each step.
But there is an annoying delay after each step, because the player seems to keep reloading the same segment again and again.
The screenshot shows the requests:
The first step caused segment 19 to be (re-)loaded. After some idle time, the next 3 segments were loaded, too. Then I did 5 steps in a row, each of which resulted in a request for segment 19.
Is there any way to make dash.js cache a segment while I'm stepping through its content?
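For reference, a minimal sketch of the stepping logic described in the question, assuming a dash.js MediaPlayer instance and a known frame rate (both are assumptions, not from the original post):

const FRAME_DURATION = 1 / 30; // assumed 30 fps stream

function stepForward(player) {
  player.pause();
  // dash.js MediaPlayer exposes time() and seek(), both in seconds
  player.seek(player.time() + FRAME_DURATION);
}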

This would be an interesting read for your issue, even though Ahmed Basil seems to have answered it well: DASH Monitoring

I would recommend using an SDN-based script with P4/OpenFlow to run your player with the segmented video; the reference player automatically records all the segments that are loaded into the player and sends them to the client.
If you use a tool like Wireshark, you will be able to see all the segments used in the experimental run. To answer your question: a player usually reloads the same segment when there is an interruption, whether network-related or physical.
It could also be trying to change the segment root folder because of the player's preferences and the resolution in use; it will reload a segment whenever that condition appears, or when it is switching resolution after an interruption. To conclude: yes, there is a way to see which segments dash.js uses while it's playing a file. Simply have Wireshark monitor your player; that way you will see all the segments used, as shown in the image here.
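If the goal is to keep already-downloaded media in the buffer while stepping, dash.js also exposes buffer-retention settings via updateSettings(). A hedged sketch follows; the exact settings path varies between dash.js versions (e.g. streaming.bufferToKeep in v3 vs. streaming.buffer.bufferToKeep in v4), so check the Settings documentation for the version you use:

// Keep more already-buffered media behind the playhead so small
// seeks within a segment are less likely to trigger a refetch.
player.updateSettings({
  streaming: {
    buffer: {
      bufferToKeep: 30 // seconds of past media to retain (assumed value)
    }
  }
});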

Related

How to determine if video frame is ready for composition?

I've found a similar question on Stack Overflow but it does not really address my problem.
I'm playing multiple videos into a WebGL texture, one after another. It's based on user input, something like a web-based VJ tool.
Copying is easy, and I have an internal clock synced to the same fps as the video I'm playing (e.g. 30fps), so frames are updated correctly. Exactly like one of the answers offered in the above-mentioned question. All of that works well.
Texture is updated with:
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, gl.RGBA, gl.UNSIGNED_BYTE, video);
My problem is how to detect when the very first frame is available for composition. Video does not start playback immediately in a real environment (e.g. average 4G, video on a CDN), but sometimes takes 1s or more to buffer sufficient data.
If I attempt to start updating the texture before the first frame is available, a WebGL error is thrown:
WebGL: INVALID_VALUE: texSubImage2D: no video
I'm aware of the video.load() method that can be called in advance (e.g. on user interaction). However, I have ~50 video files that I need to play (in unknown order, as it depends on user input), and older phones (like the iPhone 7) suffer a major performance drop when I do that, to the point that Safari sometimes crashes.
I'm looking for a reliable way to determine when the video has actually started playback. Events such as onplay don't seem to fire when the first frame is available, but much earlier.
I have also tried the ontimeupdate event, but it does not seem to fire when the first frame is available for composition either; it fires earlier, just like onplay. I can see the event fire, and I start updating the texture the first time it does, but the updates generate WebGL errors at the beginning (for about 0.5-1s, depending on network speed) until the video actually shows its first frame. Once buffered, no errors are thrown.
This issue is more visible on 4G/mobile networks. I also tested in Chrome with throttled speed: un-throttled I get 1-4 WebGL warnings before the first frame shows, while throttling to e.g. 12 Mbps gives me 100-200 warnings before a video frame is presented.
I've seen requestVideoFrameCallback, but it doesn't have the coverage I need (iOS Safari support is not even planned anytime soon).
I'm trying to avoid updating texture if video frame is not ready for composition, but can't find a reliable way to determine when it is ready.
Any help is highly appreciated!
Alright, I have found the solution: listening for the playing event.
I was listening for play and timeupdate, and didn't expect playing to behave so differently.
It does fire after the first frame is available for composition, and I no longer get those WebGL errors. Tested on an Android 10 device as well as on an iPhone 7 running iOS 14.5.
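A minimal sketch of that fix (the render-loop wiring and the textureReady flag are assumptions; the playing event is the point):

let textureReady = false;

// 'playing' fires once the first frame is actually available for
// composition, unlike 'play' and 'timeupdate'.
video.addEventListener('playing', () => {
  textureReady = true;
});

function render() {
  if (textureReady) {
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, gl.RGBA, gl.UNSIGNED_BYTE, video);
  }
  requestAnimationFrame(render);
}
render();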

Automatic Thumbnails/Screenshots for Chapter in HTML Video

I found some examples, where people used a canvas and javascript to take multiple screenshots of a running video.
You can see these examples here or here.
The code sets a time interval, draws the current frame to a canvas and uses this to create a screenshot.
I am wondering if it would be possible to use a similar technique to automatically create a kind of preview for the chapters of a video.
But this would require grabbing a bunch of screenshots before the video starts.
I failed to implement this, so I would like to know if it is at all possible.
I know that one could use pre-taken screenshots for the chapters, but I wanted to automate this process.
Thanks in advance for your answers.
This could be done in theory by jumping to specific times in the video (say every 10 seconds) using video.currentTime, waiting for the frame to be available (using progress events), drawing the frame to a canvas (canvas.drawImage) and storing it in some way (say an array of images having image.src = canvas.toDataURL).
However, this process takes time, because at least the relevant parts of the video need to be loaded in the browser before a frame can be grabbed. The video is also not playable during the process, as it is being skipped to different positions.
This behavior is usually not acceptable, but it really depends on your specific use case.
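A rough sketch of that approach (the chapter times, thumbnail size, and the use of the seeked event are assumptions):

// Seek to each chapter start, wait for the seek to complete, then
// draw the frame to a canvas and store it as a data URL.
function grabThumbnails(video, chapterTimes) {
  const canvas = document.createElement('canvas');
  canvas.width = 160;  // assumed thumbnail dimensions
  canvas.height = 90;
  const ctx = canvas.getContext('2d');
  const thumbs = [];
  let i = 0;

  return new Promise((resolve) => {
    video.addEventListener('seeked', function onSeeked() {
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      thumbs.push(canvas.toDataURL('image/jpeg'));
      i++;
      if (i < chapterTimes.length) {
        video.currentTime = chapterTimes[i];
      } else {
        video.removeEventListener('seeked', onSeeked);
        resolve(thumbs);
      }
    });
    video.currentTime = chapterTimes[0];
  });
}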

HTML 5 how to record filtered canvas

I am going to use my webcam as a source and show my view on a webpage, then manipulate the view (black-and-white, fisheye, etc.) and show the manipulated video in my canvas.
An example ( http://photobooth.orange-coding.net/ )
OK, everything is fine so far. I can capture that manipulated canvas as an image.
Is there any way to record that manipulated canvas as video?
I also found an example (https://www.webrtc-experiment.com/ffmpeg/audio-plus-canvas-recording.html)
But when I tried that code in my webcam recording project, it just records my source view (not black-and-white). It does not apply my effect to the recording.
Any idea or is it possible ?
Thank you.
Recording video in the browser is like getting blood out of a stone. If you hit it hard and long enough against your head, there will be blood, eventually. But it's a painful experience, and it will certainly give you a headache!
There is currently no way of recording video in real time from a canvas element. There is a proposed MediaStream Recording API which includes video (though it excludes the canvas part), but currently only audio is supported, and only in Firefox.
You can grab an image as often as possible and use it as a sequence (a sketch of this approach follows the list below), but there are several issues you will run into:
You will not get full frame rate if you choose to grab the image as JPEG or PNG (and PNG is not very useful for video anyway, as video has no alpha).
If you choose to grab the raw data you may achieve full frame rate (note that video frame rate is typically never above 30 FPS), but you will fill up memory very quickly, and you will need a point in time to process the frames into something that can be transferred to a server or downloaded. JavaScript is single-threaded, and no matter how you twist and turn this stage, you will get gaps in the video when this processing is invoked (unless you have a lot of memory and can wait until the end - but that is not good for a publicly available solution, if that's the goal).
You will have no proper time-code to sync by, so the timing of the video will be variable, like the movies from Chaplin's day. You can get close by recording high-resolution timestamps, but not accurately enough, as you have no way of getting the stamp at the very moment you grab the image.
No sound is recorded; and even if you do record audio in Firefox using the API, you have no way to properly sync the audio with the video anyway (which already has its own problems, see above).
Up to this point we are still dealing with single-frame sequences. If you record one minute @ 30 fps you have 60x30 frames, or 1800 pictures/buffers per minute. If you record in HD720 and grab the raw buffer (the most realistic option here) you will end up with 1800 x 1280 x 720 x 4 (RGBA) bytes per minute, or 6,635,520,000 bytes, i.e. 6.18 GB per minute - and that's just the raw size. Even if you lower the resolution to, say, 720x480, you'll still end up with 2.32 GB/min.
You can alternatively process the frames into a video format. It's possible, but currently there are next to no solutions for this (there has been one, but it had varying results, which is probably why it's hard to find...), so you are left to do this yourself - and that is a complete project involving writing an encoder, a compressor, etc. The memory usage will also be quite high, as you need to keep each frame in a separate buffer until you know the full length, then create a storage buffer to hold them all, and so forth. And even if you did, compressing more than 6 GB worth of data (or even "just" 2 GB) is not going to make the user or the browser very happy (if there is any memory left)...
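A bare-bones sketch of that frame-grabbing idea (the frame rate and the raw-buffer handling are assumptions, and all the memory caveats above apply in full):

// Grab raw RGBA frames from the canvas at roughly 30 FPS into an array.
// Each HD720 frame is 1280 x 720 x 4 bytes, so memory fills up fast.
const ctx = canvas.getContext('2d');
const frames = [];

const grabber = setInterval(() => {
  frames.push(ctx.getImageData(0, 0, canvas.width, canvas.height));
}, 1000 / 30); // assumed 30 fps target; actual timing will drift

// Later: clearInterval(grabber), then process `frames` into something
// transferable - which is the hard part described above.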
Or bite the dust and go with a commercial Flash-based solution (but that excludes your image processing and pretty much takes over the camera... so not really an option in this case).
The only realistic option, IMO, is to wait for the aforementioned API - this will let the browser do all the hard work in compiled, optimized code, enable frame-by-frame compression that leaves memory pretty much intact, and give very little headache compared to the alternative(s) above. There may be an option to apply shaders to the stream at some point, or to integrate it with some canvas processing (not on the table in this proposal, AFAICS), so recording in real time from a canvas will still be a challenge.
This is where server side processing comes in...
(Of course, a screen recorder is an option; it is completely non-integrated, but will at least enable you to demo your effects...)
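For reference, here is roughly what this ended up looking like once browsers shipped the MediaStream Recording API together with canvas.captureStream(); neither existed when this answer was written, so treat it as a forward-looking sketch:

// Record the (already filtered) canvas as WebM via MediaRecorder.
const stream = canvas.captureStream(30); // capture at 30 fps
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
const chunks = [];

recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
  const blob = new Blob(chunks, { type: 'video/webm' });
  // the blob can now be downloaded or uploaded to a server
};

recorder.start();
// ...keep drawing filtered webcam frames to the canvas...
// call recorder.stop() when done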

SDL2 - RenderPresent taking 20-30+ms randomly (in a Node.JS FFI call)

I've been updating a Node.JS FFI to SDL to use SDL2. (https://github.com/Freezerburn/node-sdl/tree/sdl2) And so far, it's been going well and I can successfully render 1600+ colored textures without too much issue. However, I just started running into an issue that I cannot seem to figure out, and does not seem to have anything to do with the FFI, GC, speed of Javascript, etc.
The problem is that when I call SDL_RenderPresent with VSYNC enabled, occasionally, every few seconds, this call will take 20-30 or more milliseconds to complete. And it looks like this is happening 2-3 times in a row. This causes a very brief, but noticeable, visual hitch in whatever is moving on the screen. The rest of the time, this call will take the normal amount of time to display whatever was drawn to the screen at the correct time to be synced up with the screen, and everything looks very smooth.
You can see this in action if you clone the repository mentioned above. Build it with node-gyp, then just run test.js. (I can embed the test code into StackOverflow, but I figured it would be easier to just have the full example on GitHub) Requires SDL2, SDL2_ttf, SDL2_image to be in /Library/Frameworks. (this is still in development, so there's nothing fancy put together for finding SDL2 automatically, or having the required code in the repository, or pulled from somewhere, etc.)
EDIT: This should likely go on the gamedev StackExchange site. I don't know if it can be moved/linked or not.
Doing some more research online, I've discovered what the "problem" was. It was something I'd (somehow) never really encountered before, so I thought it was some obvious problem where I was not using SDL correctly.
Turns out, "jittery" graphics are a problem every game can and does face, and there are common ways to get around it. Basically, the problem is that a CPU cannot run every process/thread in the OS completely in parallel. Sometimes a process has to be paused in order to run something else. When this happens during a frame update, that frame can take up to twice as long as normal to actually be pushed to the screen. This is where the jitter comes from. It became most obvious that this was the problem after reading a Unity question about a similar jitter, where a commenter pointed out that running something such as the Activity Monitor on OS X causes the jitter to happen regularly, every couple of seconds - about the interval at which the Activity Monitor polls all the running processes for information. Killing the Activity Monitor made the jitter much less regular.
So there is no real way to guarantee that your code will run every 16 milliseconds on the dot, and that it will always be exactly another 16 milliseconds before your code runs again. You have to separate the timing of the code that handles events, movement, AI, etc. from the timing of when a new frame is rendered in order to get a perfectly smooth experience. This generally means that you run your logic fewer times per second than you draw frames, predict where every object will be between actual updates, and draw each object in the predicted spot. See deWiTTERS game loop article for more concrete details on this, on top of a fantastic overview of game loops in general.
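A minimal sketch of that decoupling, in the style of the deWiTTERS loop (the tick rate, the frameskip cap, and the updateGameLogic/render functions are assumptions):

// Run game logic at a fixed rate, render as often as possible, and pass
// an interpolation factor so objects can be drawn at predicted positions.
const TICKS_PER_SECOND = 25;            // assumed logic rate
const SKIP_MS = 1000 / TICKS_PER_SECOND;
const MAX_FRAMESKIP = 5;                // don't spiral if we fall behind

let nextTick = Date.now();

function loop() {
  let loops = 0;
  while (Date.now() > nextTick && loops < MAX_FRAMESKIP) {
    updateGameLogic();                  // events, movement, AI
    nextTick += SKIP_MS;
    loops++;
  }
  // 0..1: how far we are between the previous and the next logic tick
  const interpolation = (Date.now() + SKIP_MS - nextTick) / SKIP_MS;
  render(interpolation);                // draw objects at predicted spots
  setImmediate(loop);                   // Node.js; use requestAnimationFrame in a browser
}
loop();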
Note that this prediction method of delivering a smooth game experience does not come without problems. The main one is that if you display an object at a predicted location without actually doing full collision detection on it, that object can easily clip into other objects for a few frames. In the pong clone I am writing to test the SDL bindings, with predicted object drawing, if I hold right while up against a wall, the paddle repeatedly clips into the wall before popping back out, as its location is predicted to be further than it is allowed to be. This is a separate problem that has to be dealt with in a different way; I am just letting the reader know it exists.

Seek video position with video from another domain

http://www.67games.com/video.html
You will notice after the video starts that if you try to jump to another position in the video it will reset and start over. So that is the problem.
As you can see, the source is external. This uses JW Player, but I have tried with other players too.
I have tried with other hosts and video formats and it's the same issue.
Can somebody help me with this cause I'm stuck?
I don't think the source media file being on another server/domain has anything to do with your problem.
You have a progressive-download video here, and it seeks just fine up to the point of download. Yes, it restarts from the beginning if you try to seek past the last downloaded mark, but that is to be expected. (How could a client seek into data it doesn't even have yet?)
You have two choices:
Tweak the player so that it never attempts to seek beyond the last downloaded point. If you are only going to have clips this small, this solution seems fine. (A sketch of this clamping idea follows the list below.)
Put the clip behind a streaming or pseudo-streaming server. (More appropriate for longer clips.)
Streaming: red5, Wowza, FMS, etc
Pseudo streaming: apache/lighttpd/etc with mod_h264_streaming + some client-side logic to denote the position to stream from
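A rough sketch of the first option against an HTML5 video element (the original page uses JW Player, so this is purely an illustration of the clamping idea):

// Clamp seeks so the user can never jump past the last downloaded point.
video.addEventListener('seeking', () => {
  if (video.buffered.length > 0) {
    const downloadedEnd = video.buffered.end(video.buffered.length - 1);
    if (video.currentTime > downloadedEnd) {
      video.currentTime = Math.max(0, downloadedEnd - 0.5); // small safety margin
    }
  }
});

For the pseudo-streaming option, mod_h264_streaming-style servers conventionally take a start offset as a query parameter on the file URL, and the client reloads the source with that offset instead of seeking locally.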
