So, as the title says, I'm trying to load a base64-encoded PNG image into a p5.js image. Here's a simplification of how I'm doing it:
PS: I'm using a base64 image because it's generated by a server.
var img;
function setup() {
// Canvas setup code ....
img = loadImage('loading.png'); // show an image which says that it's loading
img.loadPixels();
// More setup code ...
}
// ...
function draw() {
image(img, 0, 0, img.width, img.height);
// ... more code
}
// ...
function receivedCallback(data) { // This function gets called once we receive the data
// Data looks like this: { width: 100, height: 100, data: "...base64 data...." }
img.resize(data.width, data.height);
var imgData = data.data;
var imgBlob = new Blob([imgData], { type: "image/png" }); // Create a blob
var urlCreator = window.URL || window.webkitURL;
var imageUrl = urlCreator.createObjectURL(imgBlob); // Craft the url
img.src = imageUrl;
img.loadPixels();
img.updatePixels();
}
But it's not working, which is why I'm asking here.
If there is any way to do it, I would appreciate it very much.
Thanks in advance.
EDIT
Steve's answer didn't quite work; I had to replace img.src = 'data:image/png;base64,' + data.data with img = loadImage('data:image/png;base64,' + data.data);
You can use Data URLs directly. It can look something like this:
function receivedCallback(data) { // This function gets called once we receive the data
// Data looks like this: { width: 100, height: 100, data: "...base64 data...." }
loadImage('data:image/png;base64,' + data.data, function (newImage) {
img = newImage;
});
}
If you need to use blobs for some reason, you can look into base64 to blob conversions, like this answer.
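For reference, a raw base64 string has to be decoded into bytes before it's wrapped in a Blob; passing the base64 text straight to the Blob constructor (as in the question) produces a blob of text, not PNG bytes. A minimal sketch (the helper name is my own):

```javascript
// Decode a raw base64 string (no "data:" prefix) into a Blob of bytes.
function base64ToBlob(base64, mimeType) {
  const binary = atob(base64); // base64 -> binary string
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i); // copy each byte into the typed array
  }
  return new Blob([bytes], { type: mimeType });
}
```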
Steve's answer didn't quite work; I had to load the image instead of directly changing its source.
So instead of img.src = 'data:image/png;base64,' + data.data,
I needed to do img = loadImage('data:image/png;base64,' + data.data);
Thanks to Steve for his answer though!
There's no difference between loading a base64 image and a regular image from a file in p5.js. Simply replace the URL in your loadImage call with the base64 data URL.
In the interests of a runnable, complete example (note that the base64 image is very small):
const imgData = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAgAAAAIAQMAAAD+wSzIAAAABlBMVEX///+/v7+jQ3Y5AAAADklEQVQI12P4AIX8EAgALgAD/aNpbtEAAAAASUVORK5CYII";
let img;
function preload() {
img = loadImage(imgData);
}
function draw() {
image(img, 0, 0);
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.5.0/p5.min.js"></script>
If your data is supposed to load at some indeterminate point in the future, say, after an interaction, then you can use a callback to determine when the image has loaded. This is somewhat use-case specific, but the pattern might look like:
const imgData = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAgAAAAIAQMAAAD+wSzIAAAABlBMVEX///+/v7+jQ3Y5AAAADklEQVQI12P4AIX8EAgALgAD/aNpbtEAAAAASUVORK5CYII";
let img;
function setup() {
createCanvas(300, 300);
}
function draw() {
clear();
textSize(14);
text("click to load image", 10, 10);
if (img) {
image(img, 0, 0);
}
}
function mousePressed() {
if (!img) {
loadImage(imgData, imgResult => {
img = imgResult;
});
}
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.5.0/p5.min.js"></script>
If you have some async code in there, no problem. Just call loadImage whenever you have the base64 string ready, for example:
function mousePressed() {
if (!img) {
fetch("your endpoint")
.then(response => response.json())
.then(data => {
// prepend "data:image/png;base64," if
// `data.data` doesn't already have it
loadImage(data.data, imgResult => {
img = imgResult;
});
});
}
}
All this said, loading an image in response to a click is a somewhat odd pattern for most apps. Usually, you'd want to preload all of your assets in the preload function so that you can render them immediately when requested. If you load via a fetch call in response to an event, there will likely be an ugly-looking delay (or a need to show a temporary message/placeholder image/spinner) before rendering the image, so make sure this isn't an XY problem.
I'm trying to use the merge-images script (https://github.com/lukechilds/merge-images) to merge some images into a single one using nodejs.
I'm having trouble understanding what exactly I'm supposed to provide in the then() method of mergeImages.
This is what I have so far:
const mergeImages = require('merge-images');
const { createCanvas, Canvas, Image } = require('canvas');
const width = 2880;
const height = 2880;
var canvas = createCanvas(width, height);
const ctx = canvas.getContext('2d');
var img = new Image();
img.onload = () => ctx.drawImage(img, 0, 0);
img.onerror = err => { throw err }
img.src = './result.png';
mergeImages([
'./lunas_parts/1.png', './lunas_parts/2.png', './lunas_parts/3.png',
], {
Canvas: Canvas,
Image: Image
})
.then(b64 => img = b64);
I do have an empty result.png image in the right location, as well as the 1, 2 and 3 .png files. The console doesn't show any errors when I execute the above script, but result.png remains empty after execution.
Is the canvas image source I'm using in then() not correct? What am I supposed to pass there exactly?
Thanks in advance for any help.
I am setting up a canvas and then drawing an image into that canvas, followed by calling canvas.toBlob(function(blob)...), but I am finding that the blob argument inside toBlob is sometimes null.
Why would this be? Should I be waiting for something after drawImage (even though, as you can see in the snippet, I wait for the image to be loaded before proceeding)?
//--------------------------------------------------
function doImageFileInsert(fileinput) {
var newImg = document.createElement('img');
var img = fileinput.files[0];
let reader = new FileReader();
reader.onload = (e) => {
let base64 = e.target.result;
newImg.src = base64;
doTest(newImg);
};
reader.readAsDataURL(img);
fileinput.value = ''; // reset ready for another file
}
//--------------------------------------------------
function doTest(imgElem) {
console.log('doTest');
var canvas = document.createElement("canvas");
var w = imgElem.width;
var h = imgElem.height;
canvas.width = w;
canvas.height = h;
var ctx = canvas.getContext("2d");
ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.drawImage(imgElem, 0, 0, w, h);
canvas.toBlob(function(blob) {
if (blob) {
console.log('blob is good');
} else {
console.log('blob is null');
alert('blob is null');
}
}, 'image/jpeg', 0.9);
}
canvas, div {
border:solid 1px grey;
padding:10px;
margin:5px;
border-radius:9px;
}
img {
width:100%;
height:auto;
}
<input type='file' value='Add' onChange='doImageFileInsert(this)'>
Also available at https://jsfiddle.net/Abeeee/rtwcge5h/24/.
If you add images via the Choose File button enough times, you get the problem (alert('blob is null')).
There are only a few reasons why toBlob would produce null:
A bug in the browser's encoder (never seen it myself).
A canvas whose area is bigger than the maximum supported by the UA.
A canvas whose width or height is 0.
Since you are not waiting for the image to load, its width and height properties are still 0, and you fall into the third case above, since you set the canvas's size from these properties.
So to fix your error, wait for the image to load before doing anything with it.
Also, note that you should almost never use FileReader.readAsDataURL(), and certainly not to display media files from a Blob; instead, generate a blob:// URI from these Blobs using URL.createObjectURL().
But in your case, you can even use the better createImageBitmap API, which will take care of loading the image in your Blob and will generate a memory-friendly ImageBitmap object that is ready to be drawn by the GPU without any further computation.
Only Safari hasn't implemented the basics of this API yet, but they should soon (it's exposed behind flags, and unflagged in the TP version), and I wrote a polyfill you can use to fill the holes in various implementations.
const input = document.querySelector("input");
input.oninput = async (evt) => {
const img = await createImageBitmap(input.files[0]);
const canvas = document.createElement("canvas");
canvas.width = img.width;
canvas.height = img.height;
const ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
// I hope you'll do something more here;
// reencoding an image to JPEG through the Canvas API is a bad idea:
// Canvas encoders aim at speed, not quality
canvas.toBlob( (blob) => {
console.log( blob );
}, "image/jpeg", 0.9 );
};
<!-- createImageBitmap polyfill for Safari -->
<script src="https://cdn.jsdelivr.net/gh/Kaiido/createImageBitmap/dist/createImageBitmap.js"></script>
<input type="file" accept="image/*">
Okay, it looks like the caller of the canvas code was the problem: indeed, the image had not been fully loaded by the time drawImage ran.
The original call to doTest() was
function doImageFileInsert(fileinput) {
var contenteditable = document.getElementById('mydiv');
var newImg = document.createElement('img');
//contenteditable.appendChild(newImg);
var img = fileinput.files[0];
let reader = new FileReader();
reader.onload = (e) => {
let base64 = e.target.result;
newImg.src = base64;
doTest(newImg); <-----
};
reader.readAsDataURL(img);
}
but it was this call that was at fault. Changing it to
function doImageFileInsert(fileinput) {
var contenteditable = document.getElementById('mydiv');
var newImg = document.createElement('img');
//contenteditable.appendChild(newImg);
var img = fileinput.files[0];
let reader = new FileReader();
reader.onload = (e) => {
let base64 = e.target.result;
newImg.src = base64;
//doTest(newImg);
newImg.onload = (e) => { doTest(newImg); }; <-----
};
reader.readAsDataURL(img);
}
seems to fix it. A working version can be seen at https://jsfiddle.net/Abeeee/rtwcge5h/25/
Here is a more modern approach, if you wish to use it:
/**
* @param {HTMLInputElement} fileInput
*/
async function doImageFileInsert (fileInput) {
const img = new Image()
img.src = URL.createObjectURL(fileInput.files[0])
await img.decode()
doTest(img)
}
or this, which doesn't work in all browsers (it would require a polyfill):
/**
* @param {HTMLInputElement} fileInput
*/
function doImageFileInsert (fileInput) {
createImageBitmap(fileInput.files[0]).then(doTest)
}
The FileReader is such a legacy pattern nowadays, when there are new promise-based read methods on the Blob itself. You can also use URL.createObjectURL(file) instead of wasting time encoding a file to a base64 URL, only for it to be decoded back to binary again by the <img>; that wastes time, processing, and RAM.
img.decode()
Blob#instance_methods
createObjectURL
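As an illustration of those promise-based read methods, a sketch (the helper name is my own):

```javascript
// Read a Blob/File without a FileReader: the Blob instance methods
// arrayBuffer() and text() return promises directly.
async function readBlob(blob) {
  const buffer = await blob.arrayBuffer(); // raw bytes as an ArrayBuffer
  const text = await blob.text();          // contents decoded as UTF-8
  return { byteLength: buffer.byteLength, text };
}
```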
Good morning everyone!
I'm currently accessing an API to display a random image of dogs. It's working; however, the function only shows images in ".gif" or ".img" format. If an image is in ".mp4" or ".webm" format, the feature won't display it.
I've tried to create an if statement that will let me handle the ".mp4" and ".webm" images, should they appear. In theory it works: I've used .split to get the last part of the file name, whether it's ".mp4" or ".webm". However, nothing changes compared to what I need it to do (which is to display the image, regardless of its file type). The code I'm posting is as far as I've managed to get. I'm stuck on how to proceed.
const userAction = async () => {
const response = await fetch('https://random.dog/woof.json');
const myJson = await response.json();
return myJson.url
};
function button() {
document.getElementById('btn').onclick = async function () {
const src = await userAction();
const img = document.createElement('img');
img.src = src;
img.width = 500;
img.height = 500;
img.alt = "doggo";
img.id = 'doggo_image';
let pieces = src.split(".");
// Need to figure this part out
if (pieces[pieces.length-1] === "mp4" || pieces[pieces.length-1] === "webM") {
let video = document.createElement("video");
document.body.append(video);
} else {
}
const current_image = document.getElementById(img.id);
if (current_image) {
document.body.replaceChild(img, current_image);
} else {
document.body.appendChild(img);
}
};
}
I'm expecting the code to display the image whether it's mp4, webM, or img. Right now, it only displays img files from the API.
You are not associating the video element with the source of the video.
So, before appending the element to the DOM, you need to set its src:
let video = document.createElement("video");
video.src = src;
document.body.append(video);
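A fuller sketch of how the click handler's branch might fit together (the helper name and element id are my own; note too that the API returns a lowercase "webm" extension, so the question's comparison against "webM" never matches):

```javascript
// Decide which element type a URL needs, based on its file extension.
function mediaTagFor(src) {
  const ext = src.split('.').pop().toLowerCase();
  return (ext === 'mp4' || ext === 'webm') ? 'video' : 'img';
}

function showDog(src) {
  const el = document.createElement(mediaTagFor(src));
  el.src = src;
  el.width = 500;
  el.id = 'doggo_media';
  if (el.tagName === 'VIDEO') {
    el.muted = true;    // muted autoplay is allowed by most browsers
    el.autoplay = true;
    el.loop = true;
  }
  // Replace the previous dog, or append the first one
  const current = document.getElementById('doggo_media');
  current ? current.replaceWith(el) : document.body.appendChild(el);
}
```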
I've got the proof-of-concept I need for loading in multiple images with FileReader.readAsDataURL().
However, my clients' workflow is such that they load in hundreds of images at a time which crashes the browser.
Is there any way to load these images in as actual thumbnails (16 KB vs. 16 MB)?
First, don't use a FileReader, at all.
Instead, to display any data from a Blob, use the URL.createObjectURL method.
FileReader will load the binary data in memory three times: once when reading the Blob for the conversion, once as the Base64 string, and once more when it's passed as the src of your HTMLElement.
In the case of a file stored on the user's disk, a blob URL will load the data in memory only once, from the HTMLElement. A blob URL is indeed a direct pointer to the Blob's data.
So this will already free a lot of memory for you.
inp.onchange = e => {
for(let file of inp.files) {
let url = URL.createObjectURL(file);
let img = new Image(200);
img.src = url;
document.body.appendChild(img);
}
};
<input type="file" webkitdirectory accepts="image/*" id="inp">
Now, if that is not enough, to generate thumbnails you could draw all these images on canvases and reduce their size as needed.
But keep in mind that to do so, you'd have to load the original image's data anyway, and you cannot be sure how the browser will clean up the memory used. So you might do more harm than good by creating these thumbnail versions.
Anyway, here is a basic example of how it could get implemented:
inp.onchange = e => {
Promise.all([...inp.files].map(toThumbnail))
.then(imgs => document.body.append.apply(document.body, imgs.filter(v => v)))
.catch(console.error);
};
function toThumbnail(file) {
return loadImage(file)
.then(drawToCanvas)
.catch(_ => {});
}
function loadImage(file) {
const url = URL.createObjectURL(file);
const img = new Image();
return new Promise((res, rej) => {
img.onload = e => res(img);
img.onerror = rej;
img.src = url;
});
};
function drawToCanvas(img) {
const w = Math.min(img.naturalWidth, 200);
const r = w / img.naturalWidth;
const h = img.naturalHeight * r;
const canvas = Object.assign(
document.createElement('canvas'), {
width: w,
height: h
}
);
canvas.getContext('2d').drawImage(img, 0, 0, w, h);
return canvas;
}
<input type="file" webkitdirectory accepts="image/*" id="inp">
I am trying to take an image from the clipboard within Electron and put it on a canvas.
In the Electron main process, I have this function, which is triggered by a menu item. If I write img.toBitmap().toString() to the console, it outputs the image data, so I know that there is an image there.
export function pasteImage() {
let img = clipboard.readImage()
console.log('send paste message')
mainWindow.webContents.send('paste-image', img.toBitmap())
}
Next, I have this method, which takes the buffer and converts it to a blob. It should then load the blob into the image and draw the image to the canvas.
public pasteImage(image: Buffer, x: number = 0, y: number = 0) {
let blob = new Blob([image], { type: 'image/bmp' })
let url = URL.createObjectURL(blob)
let img = new Image
img.addEventListener('load', () => {
console.log('loaded image')
this.ctx.drawImage(img, x, y)
URL.revokeObjectURL(url)
})
img.src = url
}
ipcRenderer.addListener('paste-image', (e: EventEmitter, img: Buffer) => {
let canvas = document.createElement('canvas')
let layer = new Layer(canvas)
layer.pasteImage(img)
})
The issue I am having is that the load event never fires, even though the pasteImage(image, x, y) method does execute.
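One likely culprit, worth checking against the Electron docs: NativeImage.toBitmap() returns the raw BGRA pixel data with no file header, so the blob built from it is not a decodable BMP; the <img> then fires "error" instead of "load". Sending an encoded PNG via toPNG() (and building the blob as image/png) should fix it. A sketch, with a small helper (mine) for sanity-checking the buffer:

```javascript
// Every PNG file starts with the magic bytes \x89 P N G; raw bitmap
// data from toBitmap() will not have them.
const PNG_MAGIC = [0x89, 0x50, 0x4e, 0x47];

function looksLikePng(buffer) {
  return PNG_MAGIC.every((byte, i) => buffer[i] === byte);
}

// Main process: send an encoded PNG instead of raw bitmap data.
//
//   export function pasteImage() {
//     let img = clipboard.readImage()
//     mainWindow.webContents.send('paste-image', img.toPNG()) // not toBitmap()
//   }
//
// Renderer: build the blob with the matching MIME type.
//
//   let blob = new Blob([image], { type: 'image/png' })
```

Adding an "error" listener next to the "load" listener would also have surfaced the failed decode instead of failing silently.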