I'm trying to create a bitmap from a SVG file but I'm getting the following error:
Uncaught (in promise) DOMException: The source image could not be decoded.
I'm using the following code:
const response = await fetch(MARKER_PATH);
const svgStr = await response.text();
const svgBlob = new Blob([svgStr], { type: 'image/svg+xml' });
this.marker = await createImageBitmap(svgBlob);
The marker code can be found at this link. The SVG is valid (or so I think): I can view it correctly, and if I draw it as an image on a canvas it works perfectly, so I do not know why it's failing.
I thought it might have to do with the encoding, but that does not make sense either, because the SVG loads correctly in other environments. Any idea what's going on?
Currently, no browser supports creating an ImageBitmap from a Blob that holds an SVG document, even though per the specs they should.
I wrote a monkey-patch that fills this hole (and others) that you can use:
fetch("https://gist.githubusercontent.com/antoniogamiz/d1bf0b12fb2698d1b96248d410bb4219/raw/b76455e193281687bb8355dd9400d17565276000/marker.svg")
  .then(r => r.blob())
  .then(createImageBitmap)
  .then(console.log)
  .catch(console.error);
<script src="https://cdn.jsdelivr.net/gh/Kaiido/createImageBitmap/dist/createImageBitmap.js"></script>
Basically, for this case, the patch performs a first (async) test using a dummy Blob that should work, and if it detects that the UA doesn't support this feature, Blob inputs with a type of "image/svg+xml" are converted to an HTMLImageElement that points to a blob: URL for the Blob.
This means that this fix does not work in Workers.
Also note that per the specs, only SVG images with an intrinsic width and height (i.e., absolute width and height attributes) are supported by this method.
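If you don't want to pull in the whole patch, the same fallback can be sketched by hand (a sketch only, assuming a browser context with DOM access; the helper name is mine):

```javascript
// Fallback sketch: rasterize an SVG Blob through an HTMLImageElement,
// then hand the decoded element to createImageBitmap.
// Note: the SVG needs absolute width/height attributes to decode reliably.
function svgBlobToImageBitmap(svgBlob) {
  return new Promise((resolve, reject) => {
    const url = URL.createObjectURL(svgBlob);
    const img = new Image();
    img.onload = () => {
      URL.revokeObjectURL(url);
      // Browsers accept an HTMLImageElement as a createImageBitmap source
      resolve(createImageBitmap(img));
    };
    img.onerror = (err) => {
      URL.revokeObjectURL(url);
      reject(err);
    };
    img.src = url;
  });
}
```

As with the patch, this relies on HTMLImageElement, so it won't work in Workers either.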
I have an image that is stored in a buffer. I'm trying to set it as the window icon, but I can't find a way to do so. There is no path to the image, so I can't just use win.setIcon('path/to/image').
I tried the following, without success.
win.setIcon(buffer); // giving the buffer by itself
win.setIcon(buffer.toString('base64')); // giving the buffer as base64
win.setIcon(`data:image/png;base64,${buffer.toString('base64')}`); // giving as base64 url
let imageObject = new Image();
imageObject.src = `data:image/png;base64,${buffer.toString('base64')}`;
win.setIcon(imageObject); // giving image object
According to Electron's documentation, BrowserWindow.setIcon() takes either a string or a NativeImage, a data type provided by Electron. You can convert your buffer to a NativeImage using the following code:
const { nativeImage } = require("electron");
win.setIcon(nativeImage.createFromBuffer(buffer));
If that does not help, you can also pass your buffer as a Base64 string in a data URL (as you tried before) to the function createFromDataURL. For more information, see the documentation on NativeImage. It is also worth noting that you can pass additional options to createFromBuffer to give Electron more hints about how to display your icon.
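For completeness, the data URL route might look like this (a sketch; it assumes buffer holds PNG bytes and win is your BrowserWindow):

```javascript
const { nativeImage } = require("electron");

// Build a data URL from the raw PNG bytes, then let Electron parse it
const dataUrl = `data:image/png;base64,${buffer.toString("base64")}`;
const icon = nativeImage.createFromDataURL(dataUrl);
win.setIcon(icon);
```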
I'm rendering large images which are streamed natively by the browser.
What I need is a Javascript event that indicates that the image's dimensions were retrieved from its metadata. The only event that seems to be firing is the onload event, but this is not useful as the dimensions were known long before that. I've tried loadstart but it does not fire for img elements.
Is there a loadedmetadata event for the img element in html5?
There is not an equivalent of loadedmetadata for img elements.
The most up-to-date specs at the time of writing are the W3C Recommendation (5.2) (or the W3C WD (5.3)) and the WHATWG Living Standard, although I find it easier to browse through all the events on MDN; their docs are more user-friendly.
You can check that loadedmetadata is the only event related to metadata and that it applies just to HTMLMediaElements.
You could take advantage of the Streams API to access the streams of data, process them and extract the metadata yourself. It has two caveats, though: it is an experimental technology with limited support and you will need to look for a way to read the image dimensions from the data stream depending on the image format.
I put together an example for PNG images based on MDN docs.
Following the PNG spec, the dimensions of a PNG image sit just after the signature, at the beginning of the IHDR chunk (i.e., the width at bytes 16-19 and the height at bytes 20-23). Although it is not guaranteed for every format, you can usually bet that the metadata is available in the first chunk you receive.
const image = document.getElementById('img');

// Fetch the original image
fetch('https://upload.wikimedia.org/wikipedia/commons/d/de/Wikipedia_Logo_1.0.png')
  // Retrieve its body as a ReadableStream
  .then(response => {
    const reader = response.body.getReader();
    return new ReadableStream({
      start(controller) {
        let firstChunkReceived = false;
        return pump();
        function pump() {
          return reader.read().then(({ done, value }) => {
            // When no more data needs to be consumed, close the stream
            if (done) {
              controller.close();
              return;
            }
            // Log the chunk of data in console
            console.log('data chunk: [' + value + ']');
            // Retrieve the metadata from the first chunk
            if (!firstChunkReceived) {
              firstChunkReceived = true;
              // Width and height are 4-byte big-endian unsigned ints
              // at byte offsets 16 and 20 of the PNG data
              const width = new DataView(value.buffer, 16, 4).getUint32(0);
              const height = new DataView(value.buffer, 20, 4).getUint32(0);
              console.log('width: ' + width + '; height: ' + height);
            }
            // Enqueue the next data chunk into our target stream
            controller.enqueue(value);
            return pump();
          });
        }
      }
    });
  })
  .then(stream => new Response(stream))
  .then(response => response.blob())
  .then(blob => URL.createObjectURL(blob))
  .then(url => console.log(image.src = url))
  .catch(err => console.error(err));
<img id="img" src="" alt="Image preview...">
Disclaimer: when I read this question I knew that the Streams API could be used, but I have never needed to extract metadata myself, so I haven't done any research about it. There may be other APIs or libraries that do a better job, more straightforwardly and with wider browser support.
I am currently using a webcam (not the native camera) on a web page to take a photo on users' mobile phones. Like this:
var video: HTMLVideoElement;
...
var context = canvas.getContext('2d');
context.drawImage(video, 0, 0, width, height);
var jpegData = canvas.toDataURL('image/jpeg', compression);
This way, I can now successfully generate JPEG image data from the web camera and display it on the web page.
However, I found that the EXIF data is missing.
According to this:
Canvas.drawImage() will ignore all EXIF metadata in images,
including the Orientation. This behavior is especially troublesome
on iOS devices. You should detect the Orientation yourself and use
rotate() to make it right.
I would love the JPEG image to contain the EXIF GPS data. Is there a simple way to include the camera's EXIF data during the process?
Thanks!
Tested on a Pixel 3 - it works. Please note that it sometimes does not work with certain desktop webcams. You will need exif-js to get the EXIF object in the example.
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const track = stream.getVideoTracks()[0];
const imageCapture = new ImageCapture(track);
imageCapture.takePhoto().then((blob) => {
  const newFile = new File([blob], "MyJPEG.jpg", { type: "image/jpeg" });
  EXIF.getData(newFile, function () {
    const make = EXIF.getAllTags(newFile);
    console.log("All data", make);
  });
});
Unfortunately, there's no way to extract EXIF from a canvas.
However, if you have access to the JPEG, you can extract EXIF from that. For that I'd recommend exifr instead of the widely popular exif-js, because exif-js has been unmaintained for two years and still has breaking bugs in it (n is undefined).
With exifr you can either parse everything
exifr.parse('./myimage.jpg').then(output => {
  console.log('Camera:', output.Make, output.Model)
})
or just a few tags
let output = await exifr.parse(file, ['ISO', 'Orientation', 'LensModel'])
First off, according to what I have found so far, there is no way to include EXIF data during canvas context drawing.
Second, there is a way to work around it: extract the EXIF data from the original JPEG file, then, after the canvas context drawing, put the extracted EXIF data back into the newly drawn JPEG file.
It's messy and a little hacky, but for now this is the workaround.
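That workaround can be sketched with the piexifjs library (a sketch only; originalDataUrl and redrawnDataUrl are names I made up for the camera JPEG and the canvas.toDataURL output, both as JPEG data URLs):

```javascript
const piexif = require("piexifjs");

// 1. Pull the EXIF segment out of the original JPEG data URL
const exifObj = piexif.load(originalDataUrl);

// 2. Serialize it back into a byte string
const exifBytes = piexif.dump(exifObj);

// 3. Splice those bytes into the JPEG produced by canvas.toDataURL
const withExif = piexif.insert(exifBytes, redrawnDataUrl);
```

Note that EXIF copied this way describes the original capture, not the redrawn pixels (e.g. the Orientation tag may no longer match after rotation).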
Thanks!
I need to generate snapshots for SEO.
I am using Puppeteer (headless Chrome) for this purpose.
On the main page I have a canvas, on which I start to draw once the component has mounted (my main site is in React).
The issue is that when I get the HTML from Puppeteer, the drawing on the canvas is not there.
In the Puppeteer code I wait until the content is loaded.
html = await page.content()
How can I make Puppeteer wait until the canvas has been painted?
page.content will only return the HTML representation of the DOM. To get the actual image of a canvas inside the DOM, you can use the function toDataURL. This will return the image that is shown as a base64-encoded string.
Code sample
const fs = require('fs');

const dataUrl = await page.evaluate(() => {
  const canvas = document.querySelector("#canvas-selector");
  return canvas.toDataURL();
});
// dataUrl looks like this: "data:image/png;base64,iVBORw..."
const base64String = dataUrl.substr(dataUrl.indexOf(',') + 1); // get everything after the comma
const imgBuffer = Buffer.from(base64String, 'base64'); // decode the base64 payload
fs.writeFileSync('image.png', imgBuffer);
The evaluate call will return the base64-encoded data URL of the image. You need to first remove the "data:...," prefix, and then you can put the rest into a buffer. The buffer can then be saved (or handled in any other way).
My web app calls a Web API service, which returns an image. The service returns nothing but an image. Calling the service is a little different because there is a function in the routing code that adds the required auth-code and such. Anyway, my point is, I don't have the full URL, and even if I did, I wouldn't want to pass it into code in plain text. So what I have is a response, and that response is an image.
getThumb(filename: string) {
return this.http.get('/Picture/' + filename).subscribe(response => {
return response;
});
}
What I need to do is draw that image on to a canvas. From what I've seen on the internet so far, it looks like I want to create an image element, then assign that element src a URL, then I can add it to the canvas. It's the src part that's perplexing me. All the samples I see are either loading the image from a local filesystem or predefined URL, or from a base64 string, etc. I can't figure out how to just load an image I have as a response from a service. I'm sure I'm overthinking it.
Does anyone have some sample code to illustrate this?
e.g Something like this:
var img = new Image(); // Create new img element
img.src = ... ; // Set source to image
You could convert the image to Base64. In my example, you request the image and convert it to a blob using response.blob(). Once it's a blob, use fileReader.readAsDataURL to get the Base64.
const fileReader = new FileReader();
fetch("image-resource").then((response) => {
  if (!response.ok) {
    throw new Error("Request failed: " + response.status);
  }
  return response.blob();
}).then((blob) => {
  fileReader.onloadend = () => {
    console.log(fileReader.result);
  };
  fileReader.readAsDataURL(blob);
});
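Since the end goal is to draw the image onto a canvas, a shorter route (a sketch; the canvas id and the resource URL are assumptions) is to skip Base64 entirely and point an Image at an object URL for the blob:

```javascript
fetch("image-resource")
  .then((response) => response.blob())
  .then((blob) => {
    // Create a temporary blob: URL the <img> element can load from
    const url = URL.createObjectURL(blob);
    const img = new Image();
    img.onload = () => {
      const canvas = document.getElementById("myCanvas");
      const ctx = canvas.getContext("2d");
      ctx.drawImage(img, 0, 0);
      // Release the object URL once the image has been drawn
      URL.revokeObjectURL(url);
    };
    img.src = url;
  });
```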
References:
readAsDataURL
Blob