We are taking screenshots using PhantomJS, which works quite well for us. Now we face the challenge that some of the websites we screenshot play animations when the page loads. We could of course use a simple timeout of, say, 5 seconds, but we'd rather not idle for a full 5 seconds on every screenshot.
Is there a way to "simulate" a screenshot, i.e. to tell Phantom to internally fast forward 5 seconds while actually, from the outside, no time passes?
No, there is no way to fast-forward time by 5 seconds.
As a workaround, you could take a screenshot every second and check whether it has changed, giving up after a timeout of 10 seconds, for example. The images can be compared in various ways: an exact byte-by-byte comparison, or an image library that compares the actual pixel data and lets you declare a match even when the images are only, say, 99% identical.
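Sketched below is one way to do that in PhantomJS. It is only a sketch: it assumes the standard webpage module, uses example.com as a placeholder URL, and compares the base64-encoded screenshots byte for byte, so only the final, stable frame is written to disk.
// Poll once a second; when two consecutive screenshots are identical
// (or we hit the try limit), save the image and exit.
var page = require('webpage').create();
var maxTries = 10;  // give up after roughly 10 seconds
var tries = 0;
var last = null;

page.open('http://example.com/', function () {
    function poll() {
        var current = page.renderBase64('PNG'); // screenshot as a base64 string
        tries++;
        if (current === last || tries >= maxTries) {
            page.render('screenshot.png');      // stable (or timed out): save it
            phantom.exit();
        } else {
            last = current;
            setTimeout(poll, 1000);             // check again in a second
        }
    }
    setTimeout(poll, 1000);
});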
I've been updating a Node.js FFI binding to SDL to use SDL2 (https://github.com/Freezerburn/node-sdl/tree/sdl2). So far it's been going well and I can successfully render 1600+ colored textures without much issue. However, I've just run into an issue that I cannot figure out, and it does not seem to have anything to do with the FFI, GC, the speed of JavaScript, etc.
The problem is that when I call SDL_RenderPresent with VSYNC enabled, occasionally, every few seconds, this call will take 20-30 or more milliseconds to complete. And it looks like this is happening 2-3 times in a row. This causes a very brief, but noticeable, visual hitch in whatever is moving on the screen. The rest of the time, this call will take the normal amount of time to display whatever was drawn to the screen at the correct time to be synced up with the screen, and everything looks very smooth.
You can see this in action if you clone the repository mentioned above. Build it with node-gyp, then just run test.js. (I can embed the test code into StackOverflow, but I figured it would be easier to just have the full example on GitHub) Requires SDL2, SDL2_ttf, SDL2_image to be in /Library/Frameworks. (this is still in development, so there's nothing fancy put together for finding SDL2 automatically, or having the required code in the repository, or pulled from somewhere, etc.)
EDIT: This should likely go on the gamedev StackExchange site. I don't know if it can be moved/linked or not.
Doing some more research online, I've discovered what the "problem" was. It was something I'd (somehow) never really encountered before, so I thought it was some obvious case of me not using SDL correctly.
Turns out, "jittery" graphics are a problem every game can and does face, and there are common ways to work around it. Basically, the problem is that a CPU cannot run every process/thread in the OS truly in parallel. Sometimes a process has to be paused so something else can run. When this happens during a frame update, that frame can take up to twice as long as normal to actually be pushed to the screen. This is where the jitter comes from. It became obvious that this was the problem after reading a Unity question about a similar jitter, where a commenter pointed out that running something such as the Activity Monitor on OS X will cause the jitter to happen regularly, every couple of seconds: roughly the interval at which Activity Monitor polls all the running processes for information. Killing the Activity Monitor made the jitter much less regular.
So there is no real way to guarantee that your code will be run every 16 milliseconds on the dot, and that it will always ever be another 16 milliseconds before your code gets run again. You have to separate the timing for code that handles events, movement, AI, etc. from the timing for when a new frame will be rendered in order to get a perfectly smooth experience. This generally means that you will run all your logic fewer times per second than you will be drawing frames, and then predicting where every object will be in between actual updates, and draw the object in that spot. See deWiTTERS game loop article for some more concrete details on this, on top of a fantastic overview of game loops in general.
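A rough sketch of such a loop, along the lines of the deWiTTERS article; the update and render functions here are placeholders for your own logic, not part of the SDL bindings:
// Fixed-timestep logic with interpolated rendering: logic runs 25 times
// per second, drawing happens as often as possible, and each draw places
// objects at a position predicted between the last and the next update.
var TICK_MS = 40;       // 25 logic updates per second
var MAX_FRAMESKIP = 5;  // cap catch-up updates so a stall doesn't spiral
var nextTick = Date.now();

function update() { /* events, movement, AI, collision go here */ }
function render(alpha) { /* draw at lastPos + (pos - lastPos) * alpha */ }

function gameLoop() {
    var loops = 0;
    while (Date.now() > nextTick && loops < MAX_FRAMESKIP) {
        update();
        nextTick += TICK_MS;
        loops++;
    }
    // 0..1: how far we are between the previous update and the next one
    var interpolation = (Date.now() + TICK_MS - nextTick) / TICK_MS;
    render(interpolation);
    setImmediate(gameLoop); // or requestAnimationFrame in a browser
}
gameLoop();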
Note that this prediction method of delivering a smooth game experience does not come without problems. The main one is that if you display an object in a predicted location without actually doing the full collision detection on it, that object can easily clip into other objects for a few frames. In the pong clone I am writing to test the SDL bindings, with predicted object drawing, if I hold right while up against a wall, the paddle repeatedly clips into the wall and pops back out, because its location is predicted to be further than it is allowed to go. This is a separate problem that has to be dealt with in a different way; I am just making the reader aware of it.
The heading says it all. My case: I have a long (20x2000 px) image used as a sprite for thumbnails. It would be nice if I could show the sprite only for the thumbnails whose portion of the sprite has already loaded, and show a loader in the meantime.
All I need is to know how much of the picture has been loaded in pixels from the top (supposing that it is not progressive). I thought of using file size to estimate that, though that would be very inaccurate.
The main question everyone is asking: why do this at all?
There is a page that displays around 100 thumbnails. It would be nice if this page had a sprite of those thumbnails, generated in descending thumbnail order.
Such a page already exists. The screenshot is attached. The user sees a gray placeholder while the sprite is loading. I want to display each thumbnail only once the part of the sprite it needs has loaded.
@Guy Sounds like a theoretical question then... Per your comment on the answer below, if you're loading 10MB 'sprites' you're doing it wrong.
No, there is nothing wrong with it if it can be achieved. It would cut 100 requests from every page load. That is a remarkable speed improvement even if everything is cached.
I see what you're trying to do, but in short, you can't. Counting loaded pixels in JavaScript, if it is possible at all (maybe with canvas? I don't think so, though), would be unreasonably resource-consuming. Loading all the images separately (i.e., not as one sprite), however, will give you exactly the effect you're looking for by default in most browsers, albeit at the cost of more requests.
The solution? Use a Content Delivery Network (CDN), so the browser can fetch all 100 images at the same time, without necessarily putting the strain on your own server.
EDIT:
After some additional searching, I found what looks to be a solution here, similar to a solution provided here. The basic idea is to make an AJAX request and monitor its progress.
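A rough sketch of that idea; the sprite URL and element id are placeholder assumptions, and since a PNG is compressed, loaded bytes only roughly map onto image rows:
var xhr = new XMLHttpRequest();
xhr.open('GET', '/sprite.png', true);
xhr.responseType = 'blob';

xhr.onprogress = function (e) {
    if (e.lengthComputable) {
        // Rough estimate only: compression means bytes do not map
        // exactly onto rows of pixels.
        var fraction = e.loaded / e.total;
        console.log(Math.round(fraction * 100) + '% of the sprite loaded');
    }
};

xhr.onload = function () {
    // Reuse the downloaded blob as the sprite so it isn't fetched twice.
    var url = URL.createObjectURL(xhr.response);
    document.getElementById('thumbs').style.backgroundImage = 'url(' + url + ')';
};

xhr.send();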
If I'm understanding you correctly, you want to avoid that brief period while a page is loading (or after an event occurs) where images haven't finished transferring and don't yet appear where they should.
The problem I think you're going to run into (if this is a page-load scenario) is that you're waiting for both your placeholder image and the sprite to come across the wire. By the time your placeholder arrives, your sprite may already be there or only milliseconds behind, and you haven't avoided the situation described above.
If you're dealing with a mouseover event or something similar where the sprite is requested for the first time, you can pre-load the sprite image by calling it via JavaScript when the page loads, so it'll already be cached and ready when the event fires.
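The preload itself only takes a couple of lines (the sprite URL here is a placeholder):
// Request the sprite at page load so it's cached before the event fires.
var preload = new Image();
preload.src = '/sprite.png';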
I already have a theoretical solution. Before I start working on it, it would be nice if anyone could tell me whether there is any major fault in my thinking.
The image is generated server-side, screenshot by screenshot. Therefore, after every screenshot is merged into the sprite, I can save the thumbnail's size information to the database along with the corresponding entry.
Once the user lands on the page, I will keep checking how many bytes of the sprite have loaded, loop through every entry that is still pending display, check whether the loaded byte count is greater than or equal to the entry's "weight", and display it or continue the loop accordingly.
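A sketch of that plan, assuming the server sends entries as [{id: 'thumb1', weight: 1024}, ...], where weight is the (hypothetical) byte offset at which that thumbnail's slice of the sprite is complete:
// Called from the XHR progress handler with e.loaded.
function revealLoaded(bytesLoaded, entries) {
    entries.forEach(function (entry) {
        if (!entry.shown && bytesLoaded >= entry.weight) {
            // Swap the gray placeholder for the real sprite slice.
            document.getElementById(entry.id).classList.remove('placeholder');
            entry.shown = true;
        }
    });
}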
I need to be able to benchmark a particular build of a WebKit-based browser, measuring the length of time it takes to do certain things like DOM manipulation, testing memory limits, etc.
I have a test below which records the length of time it takes to load 10 fairly heavy PNG graphics simultaneously. I need to be able to time, in code, when the load has finished. I have tried setting the onload function on the dynamic Image object to produce a time in ms. However, as the screen capture below shows, it gives an inaccurate reading: it only records the data-transfer part of the load, after which there is a considerable (3000+ ms) delay before the images are actually viewable. That delay is the browser's reflow cycle.
Is there some event in webkit I can use to record when the browser has finished a reflow so that I can benchmark this? I have to be able to record the time in milliseconds in code because the build of webkit I am testing has no developer tools. I am able to observe the difference in Chrome ok but the performance between the two builds differs drastically and I need to be able to quantify it accurately for comparison.
If you are using jQuery, you could try recording the time between document ready and window load; that would give you an approximation.
(function () {
    var start, end;

    // DOM has been parsed and is ready
    $(document).ready(function () {
        start = new Date();
    });

    // all resources (images, etc.) have finished loading
    $(window).load(function () {
        end = new Date();
        console.log(end.getTime() - start.getTime());
    });
}());
Edit:
Have you taken a look at the Browserscope reflow timer? Basically it checks how long it takes for the browser to return control to the JavaScript engine after changes to the DOM. According to the page it should work in any browser, although I haven't tested it personally. Perhaps you could adapt the code run during the tests to time the reflow in your page.
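The core of the technique can be sketched in a few lines; this is a simplified assumption of how such timers work, not Browserscope's actual code:
// Dirty the layout, then read a layout property; the read forces a
// synchronous reflow, so the elapsed time approximates reflow cost.
function timeReflow() {
    var start = Date.now();
    document.body.style.paddingLeft = '1px';  // invalidate layout
    var height = document.body.offsetHeight;  // reading forces the reflow
    document.body.style.paddingLeft = '';
    return Date.now() - start;                // milliseconds spent
}
console.log('reflow took ' + timeReflow() + ' ms');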
Also, you might want to have a look at CSS Stress Test. The bookmarklet is really great for page performance testing. http://andy.edinborough.org/CSS-Stress-Testing-and-Performance-Profiling
How about setting the PNG as a div's background-image and running the stress test? It should enable/disable the image multiple times with timings.
I am using the Flipbook jQuery plugin with a large PNG image sequence. I emailed the creator of the plugin and asked if there's any way to create some sort of "loader", or to let it load in chunks and start playing after a certain number of images are loaded. He responded with the following:
This should be possible and I was thinking of doing this but it wasn't needed at the time.
In the flip book code the shouldStartAnimation function determines if the animation should start by incrementing a counter and checking that counter against the total number of frames. When all the frames have been loaded the timer starts which flips the frames. This code could be changed so the timer starts after half the frames were loaded or something. It could also get really fancy and figure out how long each frame is taking to load and then guess how many frames it needs to let load before it can start playing the sequence so all the frames are loaded by the time it needs them.
Unfortunately I don't have time to make these changes myself, but feel free to modify this code for your needs :)
https://gist.github.com/719686
Unfortunately I don't know enough javascript to get this done, and I don't know exactly how much work this would be for someone who did. I am just hoping someone here might have some more helpful info or advice to help me figure this out (or, obviously, if this is easy enough for someone to just do, that would be amazing).
Add one more default option, shown below, and make sure to put the commas in the right places.
'loadbeforestart': 10, //start animation when 10 frames are loaded
Then edit the following function, replacing the variable "end" with "loadbeforestart":
var animationStarted = false; // keeps later load events from starting a second timer

function shouldStartAnimation() {
    //console.log('pre: ' + imageName(i));
    preloadCount += step;
    // Start flipping as soon as enough frames have loaded, instead of
    // waiting for all of them; the flag makes this fire only once.
    if (!animationStarted && preloadCount >= loadbeforestart) {
        animationStarted = true;
        setTimeout(flipImage, holdTime);
    }
}
This should do the trick, I think.
I have written some JavaScript whereby, if you click on certain divs, they expand with some data.
It takes a few seconds to populate the divs.
So to avoid users getting frustrated, I have done the following:
if the user clicks on a div, an animated GIF (moving bar, ...) is added to the div
when the data is ready, an event is triggered and the animated GIF is removed
Can somebody suggest a better approach or pattern? I don't know how expensive it is for the browser to render animated GIFs...
Thanks
Rendering GIFs is not very expensive in terms of performance, and displaying animated GIF loaders is definitely better than doing nothing during the wait. It matters much more to users to know that something is happening than to finish a split second earlier.
The best thing would be to progressively reveal information as you receive it, but in the final form from the beginning. This is easier said than done. Browsers do half of this by rendering different page elements as they are received but fail at the other half by moving things around as formatting information is received and by making you think you can do something before you are actually allowed to do it.
The second best thing would be to have an animated gif that gives some indication of how long the process will take. Many systems fail utterly at this and have been compared to a car navigation system that says "Estimated Time of Arrival 10 minutes .. 8 minutes ... 2 days ... 1 second ... 3 hours."
Because of those failures, I would say that what you are doing is the right thing.
As long as the GIF itself is small and properly preloaded, that's a fine way to go.
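For reference, the pattern from the question can be sketched in a few lines of jQuery; the '/data' endpoint and the 'loading' class are placeholders:
// Show the spinner on click, remove it once the data arrives.
$('.expandable').on('click', function () {
    var $div = $(this);
    $div.addClass('loading');          // CSS rule shows the animated GIF
    $.get('/data', function (data) {
        $div.html(data);               // populate the div
        $div.removeClass('loading');   // data ready: remove the spinner
    });
});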