I have a server that generates pngs very rapidly and I need to make this into a poor-man's video feed. Actually creating a video feed is not an option.
What I have working right now is a recursive loop that looks a little like this (in pseudo-code):
function update() {
    image.onload = function () { update(); };   // schedule the next fetch once this frame has loaded
    image.src = imagepath + '?' + Date.now();   // cache-busting timestamp ensures the image will update
}
This works; however, after a while it crashes the browser (Google Chrome, after more than 10 minutes or so). These images are being updated very frequently (several times a second). It seems the images are being cached, which causes the browser to run out of memory.
Which of these solutions would solve the problem while maintaining fast refresh:
HTML5 canvas with drawImage
HTML5 canvas with CanvasPixelArray (raw pixel manipulation)
I have access to the raw binary as a Uint8Array, and the image isn't too large (less than 50 kb or so, 720 x 480 pixels).
Alternatively, is there any way to clear old images from the cache or to avoid caching altogether?
EDIT:
Note, this is not a tool for regular users. It's a tool for diagnosing analog hardware problems for engineers. The reason for the browser is platform independence (should work on Linux, Windows, Mac, iPad, etc without any software changes).
The crashing is due to http://code.google.com/p/chromium/issues/detail?id=36142. Try creating object URLs (use XHR2 responseType = "arraybuffer" along with BlobBuilder) and revoking (using URL.revokeObjectURL) the previous frame after the next frame is loaded.
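A rough sketch of that idea, using the Blob constructor (which has since replaced BlobBuilder) and the image/imagepath from your loop; the details are assumptions, not a drop-in fix:

var previousUrl = null;

function fetchFrame() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', imagepath + '?' + Date.now()); // cache-busting query string
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
        var url = URL.createObjectURL(new Blob([xhr.response], { type: 'image/png' }));
        image.onload = function () {
            if (previousUrl) URL.revokeObjectURL(previousUrl); // release the previous frame's memory
            previousUrl = url;
            fetchFrame(); // request the next frame
        };
        image.src = url;
    };
    xhr.send();
}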
Edit: You really should be processing these into a live low-fps video stream on the server side, which will end up giving you greatly decreased latency and faster load times.
@Eli Grey seems to have identified the source of your crashing. It looks like they have a fix in the works, so if you don't want to modify your approach, hopefully that will be resolved soon.
With regard to your other question, you should definitely stick with drawImage() if you can. If I understand your intention of using the CanvasPixelArray, you are considering iterating over each pixel in the canvas and updating it with your new pixel information? If so, this will be nowhere near as efficient as drawImage(). Furthermore, this approach is completely unnecessary for you because you (presumably) do not need to reference the data in the previous frame.
Whether fortunately or not, you cannot directly swap out the internal CanvasPixelArray object stored within an HTML5 canvas. If you have a properly-formatted array of pixel data, the only way you can update a canvas element is by calling either drawImage() or putImageData(). Right now, putImageData() is much slower than drawImage(), as you can see here: http://jsperf.com/canvas-drawimage-vs-putimagedata. If you have any sort of transparency in the frames of your video, you will likely want to clear the canvas and then use drawImage() (otherwise you could see through to previous frames).
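For illustration, a minimal sketch of the drawImage() approach, assuming a 2D context on a canvas element and the same image element from your loop:

var ctx = canvas.getContext('2d');

image.onload = function () {
    ctx.clearRect(0, 0, canvas.width, canvas.height); // only needed if the frames have transparency
    ctx.drawImage(image, 0, 0, canvas.width, canvas.height);
};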
Having said all that, I don't know that you really even need to use a canvas for this. Was your motivation for looking into using a canvas so that you could avoid caching (which now doesn't seem to be the underlying issue for you)?
If the "movie" is data-driven (ie. based on numbers and calculations), you may be able to push MUCH more data to the browser as text and then have javascript render it client-side into a movie. The "player" in the client can then request the data in batches as it needs it.
If not, one thing you could do is simply limit the frames per second (fps) of the script, possibly with a hard-coded value or a slider / settable value. Assuming this doesn't limit the utility of the tool, at the very least it would let the browser run longer without crashing.
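For example, a rough sketch of a hard-coded cap (TARGET_FPS is a made-up value; it could just as well come from a slider):

var TARGET_FPS = 10;

function update() {
    image.onload = function () {
        setTimeout(update, 1000 / TARGET_FPS); // wait before fetching the next frame
    };
    image.src = imagepath + '?' + Date.now();
}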
Lastly, there are lots of things that can be done with headers (e.g. in the .htaccess file) to indicate to browsers whether or not to cache content.
iPad, you say?.. Nevertheless, I would advise using Flash/video or HTML5/video.
WebKit is very easily crashed by even a moderate influx of images, whether just big images or just a huge number of small ones.
On the other hand, XHR with base64 image data or a pixel array MIGHT work. I have had a short-polling app which was able to run for 10-12 hours with XHR polling the server every 10 seconds.
Also, consider delta compression - for example, if it's a histogram with the abscissa being a time scale, you can send only a little slice from the right. Of course, for things like heat maps, you cannot do that.
"These images are being updated very frequently (several times a second)" - if it's critical to update at such a high rate, you MUST use long polling.
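A rough long-polling sketch, where the server holds the request open until a new frame is ready; the '/next-frame' endpoint and base64 payload are assumptions for illustration:

function poll() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/next-frame');
    xhr.onload = function () {
        image.src = 'data:image/png;base64,' + xhr.responseText; // display the new frame
        poll(); // immediately wait for the next one
    };
    xhr.send();
}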
I want to measure the roundtrip time in a web application to see how long it takes for a request to be sent, answered, and interpreted. I send a request to a database server, it sends me back some data, and I want to visualize that data using WebGL. The WebGL part just consists of setting up a texture and plotting it onto a quad.
I want my measurement to start when the request was sent and to stop as soon as the rendering has finished. For now, my (maybe naive) approach was something like this:
ws.send(JSON.stringify(request));
start = performance.now();
ws.onmessage = (d) => {
    ...
    render(d); // here goes some typical plain WebGL code for preparing and plotting a 2D texture on a quad
    end = performance.now();
    roundtrip = end - start; // compute the roundtrip only after the response has been rendered
};
But I'm not sure if this is an accurate measurement and if end really refers to the finally drawn canvas. Is there any way to get the exact moment when the frame has been rendered? For now, I don't have a typical render loop; instead I just update the view with a new frame when a new request is triggered. I'm aware of gl.finish() but this doesn't seem to be an adequate solution. I also heard about WebGLSync from the WebGL2 API, but first, I'm using WebGL1, and second, it doesn't feel like that complicated a problem...
What do you mean by "rendered"? Do you mean "visible to the user" or do you mean "pixels put in the backbuffer" or something else?
For "visible to the user" there is no way short of setting up a camera on another machine to look at your screen. There could be several frames of latency based on all the things between WebGL and the user. The browser compositing things, whether the browser is single, double, or triple buffered. The OS and how it composites windows, the monitor or TV's image processing, whether the user is using HDMI, DisplayPort, VGA, DisplayLink, Airplay, Chromecast
For "pixels put in the backbuffer" like the other answer you linked to says using gl.finish or similar like gl.readPixels will not tell you how long something took. It will instead tell you how long something took + how long it took to stop the graphics pipeline and there is no easy way to subtract out that last part. On the other hand you might be able to use it to tell if one way of doing things is faster than another way of doing things.
This is sort of expanding on my previous question Web Audio API- onended event scope, but I felt it was a separate enough issue to warrant a new thread.
I'm basically trying to do double buffering using the web audio API in order to get audio to play with low latency. The basic idea is we have 2 buffers. Each is written to while the other one plays, and they keep playing back and forth to form continuous audio.
The solution in the previous thread works well enough as long as the buffer size is large enough, but latency takes a bit of a hit, as the smallest buffer I ended up being able to use was about 4000 samples long, which at my chosen sample rate of 44.1k would be about 90ms of latency.
I understand from the previous answer that the issue is in the use of the onended event, and it has been suggested that a ScriptProcessorNode might be of better use. However, it's my understanding that a ScriptProcessorNode has its own built-in buffer of a certain size, which you access whenever the node processes audio and whose size you set in the constructor:
var scriptNode = context.createScriptProcessor(4096, 2, 2); // buffer size, channels in, channels out
I had been using two alternating source buffers initially. Is there a way to access those from a ScriptProcessorNode, or do I need to change my approach?
No, there's no way to use other buffers in a ScriptProcessorNode. Today, your best approach would be to use a ScriptProcessorNode and write the samples directly into its output buffer.
Note that, given the way AudioBuffers work, your previous approach wasn't guaranteed to avoid copying and creating new buffers anyway - you can't be accessing a buffer from the audio thread and the main thread at the same time.
In the future, using an audio worker will be a bit better - it will avoid some of the thread-hopping - but if you're (e.g.) streaming buffers down from a network source, you won't be able to avoid copying. (It's not that expensive, actually.) If you're generating the audio buffer, you should generate it in onaudioprocess.
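As a rough sketch of that idea, reusing your existing context (fillSamples is a hypothetical function that writes the next block of samples into the given channel arrays):

var scriptNode = context.createScriptProcessor(4096, 2, 2); // input channels are unused here

scriptNode.onaudioprocess = function (event) {
    var left = event.outputBuffer.getChannelData(0);
    var right = event.outputBuffer.getChannelData(1);
    fillSamples(left, right); // generate (or copy) the next 4096 samples per channel
};

scriptNode.connect(context.destination);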
I am writing a simple mpeg-dash streaming player using HTML5 video element.
I am creating MediaSource and attaching a SourceBuffer to it. Then I am appending dash fragments into this sourcebuffer and everything is working fine.
Now, what I want to do is pre-fetch those segments dynamically, depending upon the current time of the media element.
While doing this I have a lot of doubts which are not answered by the MediaSource documentation.
Is it possible to know how much data the SourceBuffer can hold at a time? If I have a very large video and append all the fragments into the SourceBuffer, will it accommodate all of them, cause errors, or slow down my browser?
How do I compute the number of fragments in the SourceBuffer?
How do I compute the presentation time or end time of the last segment in the SourceBuffer?
How do we remove only a specific set of fragments from the SourceBuffer and replace them with segments of other resolutions? (I want to do this to support adaptive resolution switching at run time.)
Thanks.
The maximum amount of buffered data is an implementation detail and is not exposed to the developer in any way AFAIK. According to the spec, when appending new data the browser will execute the coded frame eviction algorithm, which removes any buffered data deemed unnecessary by the browser. Browsers tend to remove any part of the stream that has already been played and don't remove parts of the stream that are in the future relative to the current time. This means that if the stream is very large and the DASH player downloads it very quickly, faster than the MSE can play it, then there will be a lot of the stream that cannot be removed by the coded frame eviction algorithm, and this may cause the appendBuffer method to throw a QuotaExceededError. Of course, a good DASH player should monitor the buffered amount and not download excessive amounts of data.
In plain terms: you have nothing to worry about, unless your player downloads the whole stream as quickly as possible without taking the current buffered amount into consideration.
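For example, a rough sketch of recovering from a QuotaExceededError by evicting already-played data (videoElement, sourceBuffer and nextChunk are assumed; the 10-second margin is made up):

try {
    sourceBuffer.appendBuffer(nextChunk);
} catch (e) {
    if (e.name === 'QuotaExceededError') {
        sourceBuffer.remove(0, videoElement.currentTime - 10); // free what has already been played
        // retry the append after the SourceBuffer fires 'updateend'
    } else {
        throw e;
    }
}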
The MSE API works with a stream of data (audio or video); it has no knowledge of segments. Theoretically you could get the buffered time range and map it to a pair of segments using the timing data provided in the MPD, but this is fragile IMHO. It is better to keep track of the downloaded and appended segments yourself.
Look at the buffered property. The easiest way to get the end time in seconds of the last appended segments is simply: videoElement.buffered.end(0)
If by presentation time you mean the Presentation TimeStamp of the last buffered frame then there is no way of doing this apart from parsing the stream itself.
To remove buffered data you can use the remove method.
Quality switching is actually quite easy although the spec doesn't say much about it. To switch the qualities the only thing you have to do is append the init header for the new quality to the SourceBuffer. After that you can append the segments for the new quality as usual.
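For illustration, a rough sketch of such a switch (initSegmentNewQuality and newQualitySegments are assumed to have been fetched as ArrayBuffers already):

sourceBuffer.addEventListener('updateend', function () {
    if (newQualitySegments.length && !sourceBuffer.updating) {
        sourceBuffer.appendBuffer(newQualitySegments.shift()); // media segments follow as usual
    }
});

sourceBuffer.appendBuffer(initSegmentNewQuality); // the new quality's init header goes in first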
I personally find the YouTube DASH MSE test player a good place to learn.
The amount of data a SourceBuffer can support depends on the MSE implementation and therefore on the browser vendor. Once you have reached the maximum, this will of course result in an error.
You cannot directly get the number of segments in SourceBuffer, but you can get the actual buffered time. In combination with the duration of the segments you are able to compute it.
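A rough sketch of that computation, assuming a fixed segment duration taken from the MPD (SEGMENT_DURATION is made up here):

var SEGMENT_DURATION = 4; // seconds per segment, from the MPD
var buffered = videoElement.buffered;
var bufferedSeconds = buffered.length
    ? buffered.end(buffered.length - 1) - buffered.start(0)
    : 0;
var segmentCount = Math.round(bufferedSeconds / SEGMENT_DURATION);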
I recommend having a look at open-source DASH player projects like dashjs or ExoPlayer, which implement all of your desired functionality, or maybe even using a commercial solution like bitdash.
I'm testing a streaming web application that uses the MediaSource API. Everything works fine; however, when I stream big files (i.e. 240 MB or more), the buffer of the video behaves strangely. To be clearer, I attached three images you can check. My script creates a MediaSource object, then calls addSourceBuffer, and then calls appendBuffer as many times as there are chunks to append. I think that I have not configured the buffer well, and so the MediaSource API uses a default value for the buffer length.
Could you help me please?
Visit https://productforums.google.com/forum/#!category-topic/chrome/report-a-problem-and-get-troubleshooting-help/windows8/Stable/0igRzDJQ7ds
There is a maximum limit on the size of the SourceBuffers; maybe you're exceeding it? When they exceed the limits, browsers will start evicting buffered segments according to some defined algorithm.
If you are appending as much data to the source buffers as you can, you might want to introduce a limit. E.g. for us, when playing HD video at 4.5 Mbps, we could have a buffer size of about 3-4 minutes before we saw some odd behaviour (e.g. segments being evicted in front of the video's currentTime).
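A rough sketch of such a cap (MAX_BUFFER_AHEAD is a made-up threshold; video and sourceBuffer are assumed to exist):

var MAX_BUFFER_AHEAD = 180; // seconds ahead of the playhead

function maybeAppend(segment) {
    var buffered = video.buffered;
    var ahead = buffered.length
        ? buffered.end(buffered.length - 1) - video.currentTime
        : 0;
    if (ahead < MAX_BUFFER_AHEAD && !sourceBuffer.updating) {
        sourceBuffer.appendBuffer(segment); // safe to feed the next chunk
    }
}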
I have an image and a txt file of the same size, say 200 KB.
Now I would like to calculate the time it takes to download the image and the text of the txt file.
Now my question is: if I add the image to the DOM and calculate the time in onload, and request the content of the txt file using Ajax, will they take the same time, or will they take different times because one is an image and the other is fetched via XMLHttpRequest? Why?
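For reference, the two measurements could look roughly like this (URLs are placeholders; the query string is there to defeat caching):

var imgStart = performance.now();
var img = new Image();
img.onload = function () {
    console.log('image download:', performance.now() - imgStart, 'ms');
};
img.src = 'sample.png?' + Date.now();
document.body.appendChild(img);

var txtStart = performance.now();
var xhr = new XMLHttpRequest();
xhr.open('GET', 'sample.txt?' + Date.now());
xhr.onload = function () {
    console.log('text download:', performance.now() - txtStart, 'ms');
};
xhr.send();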
Hi, I have run a small experiment on the local machine and the results are surprising.
Experiment environment:
An 8 MB image is added to the DOM and the time for its download is calculated. I tried this 20 times, making sure that the image is not getting cached, and found that on the local machine it takes around 4 seconds.
An 8 MB text file is requested using AJAX, and I found that it takes around 20 seconds to download the textual contents.
The thing is clear from the experiment, but the question is still there: why? Can anyone help with this?
I've never actually tested it, but the transport portion should take the same amount of time. There might be a bit more overhead involved in the image as it involves a bit of reflow on load, firing events, that sort of thing, but I wouldn't think you'd be able to notice the difference.
Beyond that, we start getting into network topology and optimization stuff, like whether any of the links make use of on-the-fly point-to-point encryption and, if so, whether they transfer the text file faster because it compresses better, that sort of thing. But that way lies madness if you're talking about figuring this out across a heterogeneous network (say, like the internet).