How to preload all images before removing loading animation? - javascript

I'm working on an image sequence animation and I came across a problem when a new user enters the site for the first time:
Once I have requested the images from the external database, the full-screen loading animation removes itself from the screen, but at that point not all of the images have finished loading yet, so the animation appears buggy and broken. After a few seconds all of the images have loaded and the animation works just like it should.
So my question is: how can I wait until all images have fully loaded before removing the loading animation, so the user can't interact with the animation while the images are still loading?
Here's the function for getting the images:
async getImages(state) {
  var images = []
  for (let i = 0; i < state.frameCount; i++) {
    var imgIndex = (i + 1).toString().padStart(4, '0')
    const img = new Image();
    img.index = Number(imgIndex)
    img.id = imgIndex
    var storageRef = this.$fire.storage.ref(`/web-2-160-final.${imgIndex}.png`);
    storageRef.getDownloadURL().then((url) => {
      console.log('IMG URL', url)
      var imgSrc = url
      img.src = imgSrc;
      img.classList.add('full-screen')
      images.push(img);
      function percentage(partialValue, totalValue) {
        state.percentage = (100 * partialValue) / totalValue
      }
      percentage(images.length, state.frameCount)
      if (images.length == state.frameCount) setImages()
    })
  }
  const setImages = () => {
    state.isLoaded = true
    var lowestToHighest = images.sort((a, b) => a.index - b.index)
    console.log('NEW ARRAY', lowestToHighest)
    state.images = images
    console.log(state.images)
  }
},

First loop over all the download URLs that you need, one for each image, and collect the Promise returned by every storageRef.getDownloadURL() call.
Wait for every URL to be retrieved with Promise.all(). This guarantees that the code only continues once every URL is available, and it preserves the original order.
Then loop over the URLs and create an image for each one. Use the image's onload event to resolve with the image once it has finished loading. Collect these into another array of promises and await it again with Promise.all().
The result is an array of fully loaded images in the order you provided.
async getImages(state) {
  const downloadUrls = [];
  for (let i = 0; i < state.frameCount; i++) {
    const imgIndex = (i + 1).toString().padStart(4, '0');
    const storageRef = this.$fire.storage.ref(`/web-2-160-final.${imgIndex}.png`);
    const downloadUrl = storageRef.getDownloadURL();
    downloadUrls.push(downloadUrl);
  }
  const urls = await Promise.all(downloadUrls);
  let amountOfPreloadedImages = 0;
  const images = await Promise.all(urls.map(url =>
    new Promise((resolve, reject) => {
      const image = new Image();
      image.src = url;
      image.onload = () => resolve(image);
      image.onerror = (error) => reject(error);
    }).then(image => {
      amountOfPreloadedImages++;
      state.percentage = 100 * amountOfPreloadedImages / state.frameCount;
      return image;
    })
  ));
  state.images = images;
}
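Because getImages now resolves only after every image has loaded, the caller can remove the loading animation once the returned promise settles. A minimal usage sketch, assuming the same component context as above and the isLoaded flag from your original setImages() still drives the full-screen loader:

// Hypothetical caller (e.g. an async mounted() hook or method in the same component):
async mounted() {
  try {
    await this.getImages(this.state);  // resolves only after every frame has loaded
    this.state.isLoaded = true;        // now it is safe to remove the full-screen loader
  } catch (error) {
    console.error('A frame failed to load', error);
    // keep the loader (or show an error state) instead of exposing a broken animation
  }
}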

Related

How to validate an image before sending to DALL E API with front-end javascript

I'm trying to validate images before they are sent to the DALL E API. The API rejects images that don't meet certain requirements, so I need to validate:
file type
image dimensions
file size
whether the image has an alpha channel
What is the best way to do this?
This is a solution that works for me, although I would guess there are more efficient ways to accomplish it:
Validate items 1-3:
// Takes an imageBase64 url (created locally, within javascript, from a file) and checks the dimensions once rendered.
function checkImageDims(imageBase64: string) {
  const img = new Image();
  img.src = imageBase64;
  const promise = new Promise<string | null>((resolve, reject) => {
    img.onload = () => {
      const { width, height } = img;
      if (height !== width) {
        resolve(
          `Image needs to be square. Current dimensions are ${width}x${height}`
        );
      } else {
        resolve(null);
      }
    };
    img.onerror = reject;
  });
  return promise;
}
// I am using AWS Amplify with S3. This gets the URL to the image:
const getS3Url = await Storage.get(`myFolder/${s3ObjectName}`);

// Once we have the URL
const fetched = await fetch(getS3Url);
const blobbed = await fetched.blob();
const imageBase64 = URL.createObjectURL(blobbed);

const dimensionsError = await checkImageDims(imageBase64);
if (dimensionsError) return dimensionsError;

console.log({
  size: blobbed.size,
  type: blobbed.type,
});
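The snippet above only logs the blob's size and type; actual checks against them could look like the sketch below. The exact limits (PNG only, at most 4 MB) are assumptions about the DALL·E image endpoints' requirements, so adjust them to whatever the API you target enforces:

// Hypothetical thresholds: assumes the endpoint wants a PNG of at most 4 MB.
const MAX_BYTES = 4 * 1024 * 1024;

function checkTypeAndSize(blob) {
  if (blob.type !== "image/png") {
    return `Image must be a PNG, got ${blob.type || "unknown type"}`;
  }
  if (blob.size > MAX_BYTES) {
    return `Image must be under 4 MB, got ${(blob.size / 1024 / 1024).toFixed(2)} MB`;
  }
  return null; // no problems found
}

const typeOrSizeError = checkTypeAndSize(blobbed);
if (typeOrSizeError) return typeOrSizeError;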
Validate item 4 (alpha)
// Checks image for alpha channel (transparency) https://stackoverflow.com/a/41302302/11664580
function checkForAlphaChannel(buffer: ArrayBuffer) {
  const view = new DataView(buffer);
  // Is it a PNG?
  if (view.getUint32(0) === 0x89504e47 && view.getUint32(4) === 0x0d0a1a0a) {
    // The color type field lives in the IHDR chunk at byte offset
    // 8 (PNG signature) + 8 (chunk length + chunk name) + 9 (width, height, bit depth), i.e. byte 25.
    const type = view.getUint8(8 + 8 + 9);
    return type === 4 || type === 6; // grayscale + alpha or RGB + alpha
  }
  return false;
}

const arrayBuffer = await blobbed.arrayBuffer();
const checkAlpha = checkForAlphaChannel(arrayBuffer);
console.log({ checkAlpha });
Credit https://stackoverflow.com/a/41302302/11664580 for the Alpha validation.

Uncaught (in promise) Error: Failed to compile fragment shader

So I'm using TensorFlow.js and Python for training models. Now I'm working on the website so that doctors can upload an MRI image and get a prediction. Here's my code:
<script>
  async function LoadModels() {
    model = undefined;
    model = await tf.loadLayersModel("http://127.0.0.1:5500/modelsBrain/modelBrain.json");
    const image = document.getElementById("image");
    const image1 = tf.browser.fromPixels(image);
    const image2 = tf.reshape(image1, [1, 200, 200, 3]);
    const prediction = model.predict(image2);
    const softmaxPred = prediction.softmax().dataSync();
    alert(softmaxPred);
    let top5 = Array.from(softmaxPred)
      .map(function (p, i) {
        return {
          probability: p,
          className: TARGET_CLASSES_BRAIN[i]
        };
      }).sort(function (a, b) {
        return b.probability - a.probability;
      }).slice(0, 4);
    const pred = [[]];
    top5.forEach(function (p) {
      pred.push(p.className, p.probability);
      alert(p.className + ' ' + p.probability);
    });
  }

  const fileInput = document.getElementById("file-input");
  const image = document.getElementById("image");

  function getImage() {
    if (!fileInput.files[0])
      throw new Error("Image not found");
    const file = fileInput.files[0];
    const reader = new FileReader();
    reader.onload = function (event) {
      const dataUrl = event.target.result;
      const imageElement = new Image();
      imageElement.src = dataUrl;
      imageElement.onload = async function () {
        image.setAttribute("src", this.src);
        image.setAttribute("height", this.height);
        image.setAttribute("width", this.width);
        await LoadModels();
      };
    };
    reader.readAsDataURL(file);
  }

  fileInput.addEventListener("change", getImage);
</script>
This error does not occur on every (!) Live Server launch. I am confused: what could the problem be?
The CONTEXT_LOST_WEBGL error is 99% due to low GPU memory - what kind of hardware do you have available? Alternatively, you can try the WASM backend, which runs the computation on the CPU and doesn't require GPU resources.
By the way, you're not deallocating your tensors anywhere, so if you're running this in a loop for multiple inputs, you have a massive memory leak. But if the error occurs on the very first input, your GPU simply is not good enough for this model.
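A minimal sketch of both suggestions, assuming the standard tf.setBackend()/tf.tidy() APIs and that the WASM backend package (@tensorflow/tfjs-backend-wasm) is loaded alongside tfjs; the model URL and preprocessing are taken from the question:

// Hypothetical wiring: switch to the WASM (CPU) backend and dispose intermediate tensors.
// Assumes the @tensorflow/tfjs-backend-wasm script or npm package is included.
await tf.setBackend('wasm');
await tf.ready();

const model = await tf.loadLayersModel("http://127.0.0.1:5500/modelsBrain/modelBrain.json");

// tf.tidy() frees every intermediate tensor created inside the callback once it returns.
const softmaxPred = tf.tidy(() => {
  const input = tf.browser.fromPixels(document.getElementById("image"))
    .reshape([1, 200, 200, 3]);
  return model.predict(input).softmax().dataSync(); // dataSync copies the values out before cleanup
});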

I have a JavaScript array and I want to download each image in the array

Context: I have a lot of images on my social media accounts. I wanted to download all of them, so I made a script that grabs all of the image links and puts them in an array (the script is executed in the console). So far it only works on Twitter: it scrolls every 2 seconds and grabs new links.
What I want to do: I want to be able to go through my array and download each image file, all while staying in the console. How do I do that? (Also being able to download videos, if possible.)
I googled the problem of course, but my knowledge is limited.
I saw something about the download attribute, but it only works in HTML.
There might be an easy way, kind of like url.download, but I haven't found it.
let timePassed = 0;
var listOfImages = [];
var videoThumb = "Video Thumb wasn't caught ";
var timeToWait = 120; // Wait 120 seconds before stopping the scrolling and logging out the array

var scroll = setInterval(function() {
  timePassed += 2; // Add 2 seconds to the time passed
  var getImage = document.querySelectorAll(".css-9pa8cd"); // Class of the elements that hold the image URL
  for (let i = 2; i < getImage.length; i++) {
    let imageUrl = getImage[i].src;
    let newStrWithoutSmall;
    if (imageUrl.includes("&name=small")) {
      if (imageUrl.includes("video_thumb")) {
        // To catch videos
        videoThumb = "videoThumb was caught!";
      } else {
        // To catch the images; they have &name=small in their URL so we delete it
        newStrWithoutSmall = imageUrl.substring(0, imageUrl.length - 11);
        listOfImages.push(newStrWithoutSmall);
      }
    }
  }
  if (timePassed > timeToWait) {
    clearInterval(scroll);
  }
  window.scrollBy(0, 1000); // Scroll down by 1000px (and 0 sideways)
}, 2000); // That means every 2 seconds

var showListImageAndVideos = setTimeout(function() {
  console.log(listOfImages); // Array of all of the images
  console.log(videoThumb);   // Whether a video thumbnail was encountered
}, (timeToWait * 1000)); // timeToWait
You can use async/await to download each image sequentially inside the for loop, using fetch:
let timePassed = 0;
var listOfImages = [];
var videoThumb = "Video Thumb wasn't caught ";
var timeToWait = 120; // Wait 120 seconds before stopping the scrolling and logging out the array

function toDataURL(url) {
  return fetch(url).then((response) => {
    return response.blob();
  }).then(blob => {
    return URL.createObjectURL(blob);
  });
}

async function download(image) {
  const a = document.createElement("a");
  a.href = await toDataURL(image);
  a.download = image;
  document.body.appendChild(a);
  a.click();
  document.body.removeChild(a);
}

var scroll = setInterval(async function() {
  timePassed += 2; // Add 2 seconds to the time passed
  var getImage = document.querySelectorAll(".css-9pa8cd"); // Class of the elements that hold the image URL
  for (let i = 2; i < getImage.length; i++) {
    let imageUrl = getImage[i].src;
    let newStrWithoutSmall;
    if (imageUrl.includes("&name=small")) {
      if (imageUrl.includes("video_thumb")) {
        // To catch videos
        videoThumb = "videoThumb was caught!";
      } else {
        // To catch the images; they have &name=small in their URL so we delete it
        newStrWithoutSmall = imageUrl.substring(0, imageUrl.length - 11);
        listOfImages.push(newStrWithoutSmall);
        await download(newStrWithoutSmall);
      }
    }
  }
  if (timePassed > timeToWait) {
    clearInterval(scroll);
  }
  window.scrollBy(0, 1000); // Scroll down by 1000px (and 0 sideways)
}, 2000); // That means every 2 seconds
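One possible refinement, not part of the original answer: the blob URLs created by toDataURL() are never released, which can add up over hundreds of images. A sketch of download() that revokes each URL after the click:

// Possible refinement: release each blob URL once the download has been triggered.
async function download(image) {
  const a = document.createElement("a");
  const objectUrl = await toDataURL(image);
  a.href = objectUrl;
  a.download = image;
  document.body.appendChild(a);
  a.click();
  document.body.removeChild(a);
  URL.revokeObjectURL(objectUrl); // free the memory held by the blob URL
}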
This may work in your case:
async function downloadFiles(array) {
  array.map(async sUrl => {
    await fetch(sUrl)
      .then(resp => resp.blob())
      .then(blob => {
        var fileName = sUrl.substring(sUrl.lastIndexOf('/') + 1, sUrl.length);
        const url = window.URL.createObjectURL(blob);
        const a = document.createElement('a');
        a.style.display = 'none';
        a.href = url;
        a.download = fileName;
        document.body.appendChild(a);
        a.click();
        window.URL.revokeObjectURL(url);
        document.body.removeChild(a);
      })
      .catch(() => {});
  });
};
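For reference, once the scroll loop above has finished and listOfImages is populated, the whole batch could be handed to this helper straight from the console:

// Assumed usage from the console after scrolling has stopped.
downloadFiles(listOfImages);

Be aware that most browsers prompt for (or block) multiple automatic downloads from a single page, so you may need to allow that when asked.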
Based on:
http://pixelscommander.com/javascript/javascript-file-download-ignore-content-type/
Download File Using Javascript/jQuery
Note:
You may be better off just using a download manager like JDownloader.

tf.browser.fromPixels returns all zeros from img element

I am using tensorflowjs to do some front-end image classification. I am trying to use tf.browser.fromPixels to convert an img element to a tensor. However, I am getting all zeros of shape [160, 160, 3]. I am using the FileReader api to read an image from the file system via the <input type="file"> element. Here's some of the code:
function getFiles(event) {
  const files = event.target.files;
  let tempStore = [];
  for (let i = 0; i < files.length; ++i) {
    tempStore.push(files[i]);
  }
  return tempStore;
}

const imageElement = document.getElementById("upload");
imageElement.addEventListener("change", event => {
  const files = getFiles(event);
  Promise.all(files.map(loadImg)).then(d => {
    console.log("All done !!!", d);
  });
});

const loadImg = imgFile => {
  return new Promise((resolve, reject) => {
    let reader = new FileReader();
    let imgEl = document.createElement("img");
    reader.onload = async e => {
      imgEl.src = e.target.result;
      imgEl.setAttribute("width", 160);
      imgEl.setAttribute("height", 160);
      document.body.append(imgEl);
      const fromPixels = tf.browser.fromPixels(imgEl);
      resolve(fromPixels);
    };
    reader.onerror = reject;
    reader.readAsDataURL(imgFile);
  });
};
The image gets appended to the document body properly.
The appended img element is of the form:
<img src="data:image/jpeg;base64,....." width=160 height=160>
You are creating the tensor from the image before the img element has finished loading. Here is the way to go:
imgEl.src = e.target.result;
imgEl.setAttribute("width", 160);
imgEl.setAttribute("height", 160);
document.body.append(imgEl);
imgEl.onload = () => {
  // Create the tensor only after the image has loaded
  const fromPixels = tf.browser.fromPixels(imgEl);
  resolve(fromPixels);
};
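Folded back into the question's loadImg helper, the promise then resolves only once the browser has decoded the file. A sketch assuming the same element and variable names as above:

// Sketch of loadImg with the tensor created in the img element's onload handler.
const loadImg = imgFile => {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    const imgEl = document.createElement("img");
    reader.onload = e => {
      imgEl.onload = () => {
        imgEl.setAttribute("width", 160);
        imgEl.setAttribute("height", 160);
        document.body.append(imgEl);
        resolve(tf.browser.fromPixels(imgEl)); // pixel data is available now
      };
      imgEl.onerror = reject;
      imgEl.src = e.target.result; // triggers the load handler above
    };
    reader.onerror = reject;
    reader.readAsDataURL(imgFile);
  });
};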

JavaScript onload and onerror not being called

var images;

function preloadTrial(actor, event) {
  return new Promise(function(res) {
    var i = 0;
    images = [];
    var handler = function(resolve, reject) {
      var img = new Image;
      var source = '/static/videos/' + actor + '/' + event + '/' + i + '.png';
      img.onload = function() {
        i++;
        resolve(img);
      }
      img.onerror = function() {
        reject()
      }
      img.src = source;
    }
    var _catch = function() { res(images) }
    var operate = function(value) {
      if (value) images.push(value);
      new Promise(handler).then(operate).catch(_catch);
    }
    operate();
  })
}
function playSequence(time) {
  var delta = (time - currentTime) / 1000;
  currentFrame += (delta * FPS);
  var frameNum = Math.floor(currentFrame);
  if (frameNum >= numFramesPlay) {
    currentFrame = frameNum = 0;
    return;
  } else {
    requestAnimationFrame(playSequence);
    currentImage.src = images[frameNum];
    currentTime = time;
    console.log("display" + currentImage.src);
  }
};

function rightNow() {
  if (window['performance'] && window['performance']['now']) {
    return window['performance']['now']();
  } else {
    return +(new Date());
  }
};

currentImage = document.getElementById("instructionImage");

// Then use like this
preloadTrial('examples', 'ex1').then(function(value) {
  playSequence(currentTime = rightNow());
});
I wrote a JavaScript function that is supposed to load a directory full of numbered .png files. However, I do not know the number of items inside the directory beforehand, so I made a function that keeps storing images until the source gives me an error. But when I run the code, the program does not even enter the .onload and .onerror functions, resulting in an infinite loop.
Edit: This is my current code. It appears that the images are correctly assigned and pushed into the images array. But when I attempt to load one onto an img tag (currentImage.src) and run playSequence, it does not display.
You could use promises to handle the preloading of the images.
Chain the resolves on the onload event and reject on onerror to end the cycle.
function preloadImages(baseurl, extension, starter) {
  return new Promise(function(res) {
    var i = starter;
    var images = [];
    // Inner promise handler
    var handler = function(resolve, reject) {
      var img = new Image;
      var source = baseurl + i + '.' + extension;
      img.onload = function() {
        i++;
        resolve(img);
      }
      img.onerror = function() {
        reject('Rejected after ' + i + ' frames.');
      }
      img.src = source;
    }
    // Once you catch the inner promise you resolve the outer one.
    var _catch = function() { res(images) }
    var operate = function(value) {
      if (value) images.push(value);
      // Inner recursive promise chain.
      // It stops when the catch resolves the outer promise.
      new Promise(handler).then(operate).catch(_catch);
    }
    operate();
  })
}
To simulate a video player, you can draw on an HTML5 canvas.
function play(canvas, imagelist, refreshRate, frameWidth, frameHeight) {
  // Since we're using promises, let's promisify the animation too.
  return new Promise(function(resolve) {
    // May need to adjust the framerate.
    // requestAnimationFrame is about 60/120 fps depending on the browser
    // and the refresh rate of the display devices.
    var ctx = canvas.getContext('2d');
    var ts, i = 0, delay = 1000 / refreshRate;
    var roll = function(timestamp) {
      if (!ts || timestamp - ts >= delay) {
        // Since the image was prefetched you need to specify the rect.
        ctx.drawImage(imagelist[i], 0, 0, frameWidth, frameHeight);
        i++;
        ts = timestamp;
      }
      if (i < imagelist.length)
        requestAnimationFrame(roll);
      else
        resolve(i);
    }
    // Start through requestAnimationFrame so roll() always receives a timestamp.
    requestAnimationFrame(roll);
  })
}
To test I used ffmpeg to cut a video with the following command:
ffmpeg -i input.mp4 -ss 00:00:14.435 -vframes 100 %d.png
And I used devd.io to quickly create a static folder containing the script and images and a basic index.html.
imageroller.js - with the above code.
var preload = preloadImages('/static/videos/examples/testvid/', 'png', 1);
preload.then(function(value) {
  console.log('starting play');
  var canvas = document.getElementById("canvas");
  play(canvas, value, 24, 720, 400) // ~480p 24fps
    .then(function(frame) {
      console.log('roll finished after ' + frame + ' frames.')
    })
});
While the preloading of the images was pretty slow, if you keep the number of frames to an acceptable level you can make some nice loops.
I haven't tested the snippet below (and there are probably cleaner solutions) but the idea should be correct. Basically we have a recursive function loadImages(), and we pass in the images array and a callback function. We wait for our current image to load; if it loads, we push it into images and call loadImages() again. If it throws an error, we know we are finished loading, so we return our callback function. Let me know if you have any questions.
function preloadTrial(actor, event) {
  let images = [];
  loadImages(images, actor, event, function () {
    // code to run when done loading
  });
};

function loadImages(images, actor, event, callback) {
  let img = new Image();
  let i = images.length;
  let source = '/static/videos/' + actor + '/' + event + '/' + i + '.png';
  img.onload = function() {
    images.push(img);
    return loadImages(images, actor, event, callback);
  }
  img.onerror = function() {
    return callback(images);
  }
  img.src = source;
}
The optimal solution would be to provide a server-side API that tells you beforehand how many images there are in the directories.
If that is not possible, you should load the images one after the other to prevent excess requests to the server. In this case I would put the image-loading code in a separate function and only call it again once the previous image has loaded successfully, like so:
function loadImage(actor, event, i, loadCallback, errorCallback) {
  var image = new Image();
  var source = '/static/videos/' + actor + '/' + event + '/' + i + '.png';
  image.onload = loadCallback;
  image.onerror = errorCallback;
  image.src = source;
  return image;
}
Then call this function once to start, and call it again from inside the loadCallback with the next index; the errorCallback fires when the numbered files run out, as sketched below.
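A minimal wiring sketch under that idea; the next() and done() names are illustrative, not part of the original answer:

// Hypothetical driver: load frame i, and only request frame i + 1 once it has arrived.
function preloadTrial(actor, event, done) {
  var images = [];
  function next() {
    var i = images.length;
    loadImage(actor, event, i,
      function loadCallback() {
        images.push(this);   // `this` is the Image that just loaded
        next();              // request the following frame
      },
      function errorCallback() {
        done(images);        // no file with this index: we are finished
      });
  }
  next();
}

preloadTrial('examples', 'ex1', function(images) {
  console.log('Loaded ' + images.length + ' frames');
});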
