I have a gif of a candle with an animated flame. My webpage will show a few of these gifs in a row. It would be much more realistic if all the gifs didn't start at once after the page loads (otherwise I get a line of 'synchronised' candles).
I can create multiple gifs with different flame animations and then randomise which ones get shown, but this will take extra bandwidth and add an extra level of complexity.
Is there a way to maybe cascade-start the gifs? I.e. start each one after a random delay so that they are out of sync and look a bit more realistic?
Maybe using jQuery? Or plain JavaScript?
Many thanks
You can't do that.
If you place the same image in multiple places, it's always going to look the same.
You could edit the image and change the order of the frames within the gif, save, repeat this a few times, and then load the different gifs, but this will only work if the images are already loaded (cached) by your browser.
On a first visit it could just happen that one image finishes loading exactly as another restarts from its first frame, so they will still look as if they are in sync.
You could load the images using setTimeout, but this has the same problem as described above when the user first enters your page: you can delay the request for the file, but you have no control over the speed at which the file is downloaded, and therefore no control over when exactly the first frame starts playing.
If I were you, I'd try creating a big sprite of different images, each starting the animation from a different frame. There will be only one request and all animations will play together, making sure frames are never in sync.
When displaying the images on the site, make sure to properly show only the part you want for each image. This will give you the effect of flames burning at random times.
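For illustration, a rough sketch of picking a different part of such a sprite for each candle (the class name, file name, number of copies, and 50px frame width are all assumptions):

    // Hypothetical setup: candle-sprite.gif holds 5 copies of the flame animation
    // side by side, each 50px wide and each starting from a different frame.
    document.querySelectorAll('.candle').forEach(function (el, i) {
      el.style.backgroundImage = 'url(candle-sprite.gif)';
      // shift the sprite left so each candle shows a different copy
      el.style.backgroundPosition = (-(i % 5) * 50) + 'px 0';
    });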
Edit: Solved it, and what I wanted had nothing to do with lazy loading. I'm leaving the original question and post as they are but removed the excerpt (since the excerpt is pretty much pointless and I'm deleting it on codesandbox.io). The answer I came up with is below the question.
A user visits a page with 20 images. However, I do not want all the images to load instantly, as the website will feel slow if the images are big or if there are many more of them.
If the user is on a certain image, load that image and preload the next 5 images (for example), but not the rest. As the user scrolls, the website will keep loading the next 5 images. I already have an Intersection Observer set up, as well as a custom image-loading component.
I know of a solution where I keep calling my server as the user scrolls. However, I would rather not do that, as it might be excessive (and it's an image-focused/heavy website).
Preferably solved using vanilla javascript or css. If the solution might be too complicated, I wouldn't mind using a lightweight package. I have no clue as to where to start with this.
Note: The project is running on Nuxt.js
Well, I coded what I needed, thanks to Kissu for the article. Although what I wanted had nothing to do with lazy-loading images, the article gave me a hint on how it could be done. It was a simple combination of an Intersection Observer, conditionals, and "v-if" for rendering the component:
Set up an Intersection Observer to track which image index the user is on.
Add a variable to track the furthest image the user has scrolled to, so the site renders and keeps all previous images in the browser.
Set up a "v-if" to render only the next few images, i.e. render the image at the index the user is on plus the next few. E.g. to preload the next 5 images:
<imageComponent v-if="lastImgViewed+5 > CurrentImage"></imageComponent>
That's it! Simple! Although I don't know whether it's the leanest or best way to do this, performance-wise.
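For reference, a minimal sketch of the whole idea (component, prop, and event names are illustrative, not the actual project code): each image component reports its index when it enters the viewport, and the parent renders only indexes up to the furthest one seen plus 5.

    // Parent component (sketch): `images` is the list of image URLs and
    // `lastImgViewed` is the furthest index the user has scrolled to.
    export default {
      data() {
        return { images: [], lastImgViewed: 0 };
      },
      methods: {
        // called by each image component when its Intersection Observer fires
        onImageVisible(index) {
          if (index > this.lastImgViewed) this.lastImgViewed = index;
        },
      },
    };

    <!-- template (sketch): only render indexes up to lastImgViewed + 5 -->
    <template v-for="(img, index) in images">
      <imageComponent
        v-if="index < lastImgViewed + 5"
        :key="index"
        :src="img"
        @visible="onImageVisible(index)"
      />
    </template>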
My program simply shows part of a big picture. Users can move the viewport to see different parts of the picture, and the picture may be animated, so the program keeps rendering each frame.
It uses texSubImage2D to load the whole picture and renders part of it based on the loaded texture, which means that if users want to see the next picture, the program loads the new picture with texSubImage2D.
Now I want to show two pictures, that is, render two pictures at the same time, in the same way.
When the user moves the viewport to the right border of the current picture, the next picture should appear while the current picture stays visible.
What I did was create a new renderer and use texSubImage2D to load the new picture. This works, but loading the new picture takes some time, so the program lags while texSubImage2D runs.
My program runs in the browser and uses JS, so there is only one thread and texSubImage2D blocks it while it runs.
Is there any way to fix the lag?
E.g. is it possible to load only part of the picture, so that I can spread the loading over several frames instead of loading the whole picture in one frame? It seems that texSubImage2D can only load the whole picture.
This is only one idea I can come up with. There may be other solutions.
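For illustration, here is a rough sketch of what I mean by spreading the upload over several frames (assuming WebGL1, a texture that is already allocated and bound, and a scratch 2D canvas used to slice the source image; all names are made up):

    // Rough sketch: upload the picture in horizontal strips, one strip per
    // animation frame, instead of one big texSubImage2D call.
    var strip = document.createElement('canvas');
    var stripCtx = strip.getContext('2d');

    function uploadInStrips(gl, image, stripHeight) {
      strip.width = image.width;
      var y = 0;

      function uploadNext() {
        var h = Math.min(stripHeight, image.height - y);
        strip.height = h;
        // copy one horizontal slice of the source image into the scratch canvas
        stripCtx.drawImage(image, 0, y, image.width, h, 0, 0, image.width, h);
        // upload only that slice into the already-allocated texture
        gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, y, gl.RGBA, gl.UNSIGNED_BYTE, strip);
        y += h;
        if (y < image.height) requestAnimationFrame(uploadNext); // continue next frame
      }
      requestAnimationFrame(uploadNext);
    }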
Thanks for any help.
The heading sums it all up. The case is that I have a long (20x2000px) picture used as a sprite for thumbnails. It would be nice if I could start showing the sprite only for the thumbnails whose required part of the sprite has already loaded, and show a loader in the meantime.
All I need is to know how much of the picture has been loaded in pixels from the top (supposing that it is not progressive). I thought of using file size to estimate that, though that would be very inaccurate.
The main question everyone has: why do this at all?
There is a page that displays roughly 100 thumbnails. It would be nice if this page had a sprite of those thumbnails generated in descending thumbnail order.
Such a page already exists. The screenshot is attached. The user sees a gray placeholder while the sprite is being loaded. I want to display each thumbnail only when the required part of the sprite for that thumbnail has already loaded.
@Guy Sounds like a theoretical question then... Per your comment on the answer below, if you're loading 10 MB 'sprites' you're doing it wrong.
No, there is nothing wrong with it if this can be achieved. It would reduce the number of requests by 100 every time the page is loaded. That is a remarkable speed improvement, even if everything is cached.
I see what you're trying to do, but in short, you can't. Counting pixels in JavaScript, if it is possible at all (maybe with canvas? I don't think so, though), would be unreasonably resource-consuming. Loading all the images separately (i.e., not as one sprite), however, will give you exactly the effect you're looking for by default in most browsers, albeit at the cost of more requests.
The solution? Use a Content Delivery Network (CDN), so the browser can fetch all 100 images at the same time, without necessarily putting the strain on your own server.
EDIT:
After some additional searching, I found what looks to be a solution here, which is similar to a solution provided here. The basic idea is to make an AJAX request and monitor its progress.
If I'm understanding you correctly, you want to avoid that brief period while a page is loading (or after an event occurs) where images haven't finished transferring and don't yet appear where they should.
The problem I think you're going to run into (if this is a page-load scenario) is that you're waiting for both your placeholder image and the sprite to come across the wire. By the time your placeholder arrives, the sprite may have arrived already or be only milliseconds behind, so you haven't avoided the situation described above.
If you're dealing with a mouseover event or something similar where the sprite is requested for the first time, you can pre-load the sprite image by calling it via JavaScript when the page loads, so it'll already be cached and ready when the event fires.
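For example, something as simple as this (the file name is just a placeholder) will warm the cache on page load:

    // request the sprite once up front so it's already cached when the event fires
    var preload = new Image();
    preload.src = '/images/thumbnail-sprite.png';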
I already have a theoretical solution. Before I start working on it, it would be nice if anyone could tell me whether there is any major fault in my thinking.
The image is generated server-side, screenshot after screenshot. Therefore, after every screenshot is merged into the sprite, I can save the thumbnail size information to the database along with the corresponding entry.
Once the user lands on the page, I will keep checking how many bytes of the sprite have loaded, loop through every entry that is pending display, check whether the loaded value is greater than or equal to the entry's "weight", and display it or continue the loop accordingly.
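A rough sketch of that idea, assuming the per-entry byte "weights" are already available on the client (the sprite URL, the thumbnails array, and showThumbnail are placeholders):

    // Fetch the sprite with XHR so we get progress events, and reveal each
    // thumbnail once the number of bytes loaded passes its stored byte offset.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/sprites/thumbnails.png', true);
    xhr.responseType = 'blob';

    xhr.onprogress = function (e) {
      thumbnails.forEach(function (thumb) {
        // thumb.weight is the cumulative sprite size saved after this screenshot
        if (!thumb.shown && e.loaded >= thumb.weight) {
          showThumbnail(thumb); // swap the gray placeholder for the sprite part
          thumb.shown = true;
        }
      });
    };

    xhr.send();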
Many, many sites use this technique (Facebook and Google as well).
For example, open facebook.com
Save this page (not as *.MHTML but as HTML with images) (meaning the login page).
It saves:
facebook_files (dir)
facebook.html (file)
Then inside the folder, you can see one GIF file which contains all the primary images for the page.
The question is: how do you read the many images inside one file?
And what is this approach called?
Those images are called "sprites". Take a look at this article on them.
The basic idea is that whenever you want to use an image from the sprite, you have an element which just shows part of the big sprite image. So each "image" in your page is actually a div with this image as the background, just offset so the right part shows through.
The main advantage is that your page needs fewer requests and so loads faster.
There are some tools online that make using sprites easier. I haven't used any of them so can't recommend one, but using a tool would save you a lot of work.
This is what you call "spriting", like the spriting used in arcade games (one image of a character with its different positions). Basically it's one huge chunk of image containing smaller images.
The advantage of this approach is that instead of 100 different HTTP requests for 100 tiny gifs (which causes overhead), you only need to request one huge image containing these 100 gifs. Then, instead of using an <img> per image, you use a CSS background and set background-position to align the right part of the sprite "through" the container, so the right image shows.
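For example, something along these lines (the file name, icon size, and offset are made up):

    // show the 3rd 16x16 icon from a horizontal sprite by offsetting the
    // background inside a 16x16 container
    var icon = document.createElement('div');
    icon.style.width = '16px';
    icon.style.height = '16px';
    icon.style.backgroundImage = 'url(icons-sprite.gif)';
    icon.style.backgroundPosition = '-32px 0'; // shift the sprite 32px to the left
    document.body.appendChild(icon);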
I need to dynamically load and put on screen a huge number of images: something like 1000–3000. Usually these pictures are of different sizes, and I'm getting their URLs from the user. So some of these pictures can be 1024x800 or 10x40 pixels.
I wrote a good JS script showing them nicely on one page (a la Google Images Search style), but they are still very heavy on RAM (a hundred 500K images on one page is not good), so I thought about the option of really resizing the images: making an image that's 1000x800 pixels into something like 100x80, and then forgetting (freeing the RAM of) the original one.
Can this be done using JavaScript (without server side processing)?
I would suggest a different approach: Use pagination.
Display, say, 15 images. Then the user clicks on 'next page' and the next page is shown.
Or, even better, you can script it so that when the user reaches the end of the page, the next page is loaded automatically.
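A rough sketch of that auto-loading idea (loadNextPage is a placeholder for whatever fetches and appends the next batch):

    // when the viewport bottom gets close to the end of the document, load more
    // (real code would also guard against firing while a load is in progress)
    window.addEventListener('scroll', function () {
      if (window.innerHeight + window.scrollY >= document.body.offsetHeight - 200) {
        loadNextPage(); // placeholder: fetch and append the next batch of images
      }
    });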
If that is not what you want to do, and you want a collage of images instead, then maybe you can look at CSS3 transforms. I think they should be fast.
What you want to do is to take some pressure from the client so that it can handle all the images. Letting it resize all the images (JavaScript is client side) will do exactly the opposite because actually resizing an image is usually way more expensive than just displaying it (and not possible with browser JS anyway).
Usually there is always a better solution than displaying that many items at once. One would be dynamic loading e.g. when a user scrolls down the page (like the new Facebook profiles do) or using pagination. I can't imagine that all 1k - 3k images will be visible all at once.
There is no native JS way of doing this. You may be able to hack something using Flash but you really should resize the images on the server because:
You will save on bandwidth transferring those large 500K images to the client.
The client will be able to cache those images.
You'll get a faster loading page.
You'll be able to fit a lot more thumbnail images in memory and therefore will require less pagination.
more...
I'm (pretty) sure it can be done in browsers that support canvas. If this is a path you would like to take you should start here.
I see two possible problems with the canvas approach:
It will probably take a really long time (relatively speaking) to resize many images. Because of this, you're probably going to have to look into utilizing webworkers.
Will the browser actually free up any memory if you remove the image from the DOM and/or delete/null all references to those images? I don't know.
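For reference, a minimal sketch of the canvas approach (assuming the images are same-origin so the canvas isn't tainted; names are made up):

    // draw the big image into a small canvas and keep only the small version
    function makeThumbnail(img, width, height) {
      var canvas = document.createElement('canvas');
      canvas.width = width;
      canvas.height = height;
      canvas.getContext('2d').drawImage(img, 0, 0, width, height);
      return canvas.toDataURL('image/jpeg', 0.7); // small data URI for the thumbnail
    }

    // usage: replace the big image and drop references to the original bitmap
    bigImg.src = makeThumbnail(bigImg, 100, 80);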