Is there a `loadedmetadata` for html5 img? - javascript

I'm rendering large images which are streamed natively by the browser.
What I need is a JavaScript event that indicates that the image's dimensions were retrieved from its metadata. The only event that seems to fire is the load event, but that is not useful because the dimensions are known long before it fires. I've tried loadstart, but it does not fire for img elements.
Is there a loadedmetadata event for the img element in html5?

There is not an equivalent of loadedmetadata for img elements.
The most up-to-date specs at the time of writing are the W3C Recommendation (5.2) (or the W3C WD (5.3)) and the WHATWG Living Standard, although I find it easier to browse through all the events on MDN; their docs are more user-friendly.
You can check that loadedmetadata is the only event related to metadata, and that it applies only to HTMLMediaElements.
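As an aside, one commonly used workaround (a sketch of my own, not an event those specs define): browsers update an image's naturalWidth as soon as they have parsed the header, so you can poll it and react long before load fires.
const img = document.getElementById('img');
// naturalWidth stays 0 until the browser has parsed the image header,
// which usually happens long before the load event fires
const timer = setInterval(() => {
  if (img.naturalWidth) {
    clearInterval(timer);
    console.log('dimensions: ' + img.naturalWidth + 'x' + img.naturalHeight);
  }
}, 50);
// stop polling if the image fails to load
img.addEventListener('error', () => clearInterval(timer), { once: true });
This is polling rather than a real event, though.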
You could take advantage of the Streams API to access the streams of data, process them, and extract the metadata yourself. There are two caveats, though: it is an experimental technology with limited support, and you will need to find a way to read the image dimensions from the data stream for each image format you care about.
I put together an example for PNG images based on MDN docs.
Following the PNG spec, the dimensions of a PNG image are located just after the signature, at the beginning of the IHDR chunk (i.e., the width at bytes 16-19 and the height at bytes 20-23). Although it is not guaranteed, you can bet that the metadata of every image format is available in the first chunk that you receive.
const image = document.getElementById('img');
// Fetch the original image
fetch('https://upload.wikimedia.org/wikipedia/commons/d/de/Wikipedia_Logo_1.0.png')
  // Retrieve its body as a ReadableStream
  .then(response => {
    const reader = response.body.getReader();
    return new ReadableStream({
      start(controller) {
        let firstChunkReceived = false;
        return pump();
        function pump() {
          return reader.read().then(({ done, value }) => {
            // When no more data needs to be consumed, close the stream
            if (done) {
              controller.close();
              return;
            }
            // Log the chunk of data in the console
            console.log('data chunk: [' + value + ']');
            // Retrieve the metadata from the first chunk
            if (!firstChunkReceived) {
              firstChunkReceived = true;
              // PNG stores width and height as big-endian 32-bit integers
              // at byte offsets 16 and 20 of the file
              const view = new DataView(value.buffer, value.byteOffset);
              const width = view.getUint32(16);
              const height = view.getUint32(20);
              console.log('width: ' + width + '; height: ' + height);
            }
            // Enqueue the next data chunk into our target stream
            controller.enqueue(value);
            return pump();
          });
        }
      }
    });
  })
  .then(stream => new Response(stream))
  .then(response => response.blob())
  .then(blob => URL.createObjectURL(blob))
  .then(url => console.log(image.src = url))
  .catch(err => console.error(err));
<img id="img" src="" alt="Image preview...">
Disclaimer: when I read this question I knew that the Streams API could be used, but I've never needed to extract metadata myself, so I've never done ANY research about it. It could be that there are other APIs or libraries that do a better job, more straightforward and with wider browser support.

Related

Detect if AVIF image is animated using JavaScript

Is there a way to detect if an AVIF image is animated using JavaScript?
Absolutely no frameworks or libraries.
The new ImageDecoder API can tell you this.
You'd pass a ReadableStream of your data to it, and then check whether one of the decoder's tracks has its animated metadata set to true:
if (!window.ImageDecoder) {
  console.warn("Your browser doesn't support the ImageDecoder API yet, we'd need to load a library");
}
// from https://colinbendell.github.io/webperf/animated-gif-decode/avif.html
fetch("https://colinbendell.github.io/webperf/animated-gif-decode/6.avif").then((resp) => test("animated", resp.body));
// from https://github.com/link-u/avif-sample-images cc-by-sa 4.0 Kaede Fujisaki
fetch("https://raw.githubusercontent.com/link-u/avif-sample-images/master/fox.profile1.8bpc.yuv444.avif").then((resp) => test("static", resp.body));
document.querySelector("input").onchange = ({ target }) => test("your image", target.files[0].stream());

async function test(name, stream) {
  const decoder = new ImageDecoder({ data: stream, type: "image/avif" });
  // wait until we have some metadata
  await decoder.tracks.ready;
  // log whether one of the tracks is animated
  console.log(name, [...decoder.tracks].some((track) => track.animated));
}
<input type=file>
However, beware that this API is still not widely supported: currently only Chromium-based browsers have an implementation.

Cannot create bitmap from SVG

I'm trying to create a bitmap from an SVG file, but I'm getting the following error:
Uncaught (in promise) DOMException: The source image could not be decoded.
I'm using the following code:
const response = await fetch(MARKER_PATH);
const svgStr = await response.text();
const svgBlob = new Blob([svgStr], { type: 'image/svg+xml' });
this.marker = await createImageBitmap(svgBlob);
The marker code can be found at this link. The SVG is valid (or so I think): I can see it correctly, and if I draw it as an image on a canvas it works perfectly, so I don't know why it's failing.
I thought that maybe it has to do with the encoding, but that doesn't make sense either because the SVG loads correctly in other environments. Any idea what's going on?
Currently no browser supports creating an ImageBitmap from a Blob that holds an SVG document, even though per the specs they should.
I wrote a monkey-patch that fills this hole (and others), which you can use:
fetch("https://gist.githubusercontent.com/antoniogamiz/d1bf0b12fb2698d1b96248d410bb4219/raw/b76455e193281687bb8355dd9400d17565276000/marker.svg")
.then( r => r.blob() )
.then( createImageBitmap )
.then( console.log )
.catch( console.error );
<script src="https://cdn.jsdelivr.net/gh/Kaiido/createImageBitmap/dist/createImageBitmap.js"></script>
Basically, for this case, I perform a first (async) test using a dummy Blob that should work, and if it detects that the UA doesn't support this feature, Blob inputs with type "image/svg+xml" are converted to an HTMLImageElement that points to a blob:// URI for the Blob.
This means that this fix does not work in Workers.
Also note that per the specs, only SVG images with an intrinsic width and height (i.e., absolute width and height attributes) are supported by this method.
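For illustration, here is a minimal sketch of the fallback idea described above, separate from the actual monkey-patch (the helper name is mine): it routes the SVG Blob through an HTMLImageElement pointing at a blob: URL, which is a widely supported createImageBitmap source.
async function svgBlobToImageBitmap(svgBlob) {
  const url = URL.createObjectURL(svgBlob);
  try {
    const img = new Image();
    img.src = url;
    await img.decode(); // wait until the SVG is loaded and decoded
    return await createImageBitmap(img);
  } finally {
    URL.revokeObjectURL(url);
  }
}
As noted above, this relies on HTMLImageElement, so it cannot work in Workers either.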

HTML video loop re-downloads video file

I have an HTML5 video that is rather large. I'm also using Chrome. The video element has the loop attribute, but each time the video "loops" the browser re-downloads the video file. I have set Cache-Control "max-age=15768000, private", but this does not prevent any extra downloads of the identical file. I am using Amazon S3 to host the file. The S3 server also responds with the Accept-Ranges header, which causes several hundred partial downloads of the file to be requested with the 206 HTTP response code.
Here is my video tag:
<video autoplay="" loop="" class="asset current">
<source src="https://mybucket.s3.us-east-2.amazonaws.com/myvideo.mp4">
</video>
UPDATE:
It seems that the best solution is to prevent the Accept-Ranges header from being sent with the original response and instead use a 200 HTTP response code. How can this be achieved through an .htaccess file so that the video is fully cached?
Thanks in advance.
I don't know for sure what the real issue is that you are facing.
It could be that Chrome has a max-size limit on what it will cache, and if that is the case, then not using Range requests wouldn't solve anything.
Another possible explanation is that caching media is not really a simple task.
Without seeing your file it's hard to tell for sure which case you are in, but you have to understand that to play a media file, the browser doesn't need to fetch the whole thing.
For instance, you can very well play a video file in an <audio> element; since the video stream won't be used, a browser could very well omit it completely and download only the audio stream. I'm not sure if any does, but they could. Most media formats physically separate the audio and video streams in the file, and their byte positions are marked in the metadata.
Browsers could certainly cache the Range requests they perform, but I think it's still quite rare that they do.
But as tempting as it might be to disable Range requests, you have to know that some browsers (Safari) will not play your media if your server doesn't allow Range requests.
So even then, it's probably not what you want.
The first thing you may want to try is to optimize your video for web usage. Instead of mp4, serve a webm file. These will generally take less space for the same quality, and maybe you'll avoid the max-size limitation.
If the resulting file is still too big, then a dirty solution would be to use a MediaSource so that the file is kept in memory and you need to fetch it only once.
In the following example, the file will be fetched entirely only once, in chunks of 1MB, streamed by the MediaSource as it's being fetched, and then only the in-memory data will be used for looping plays:
document.getElementById('streamVid').onclick = e => (async () => {
  const url = 'https://upload.wikimedia.org/wikipedia/commons/transcoded/2/22/Volcano_Lava_Sample.webm/Volcano_Lava_Sample.webm.360p.webm';
  // you must know the mimeType of your video beforehand
  const type = 'video/webm; codecs="vp8, vorbis"';
  if (!MediaSource.isTypeSupported(type)) {
    throw 'Unsupported';
  }
  const source = new MediaSource();
  source.addEventListener('sourceopen', sourceOpen);
  document.getElementById('out').src = URL.createObjectURL(source);

  // async generator Range-Fetcher
  async function* fetchRanges(url, chunk_size = 1024 * 1024) {
    let chunk = new ArrayBuffer(1);
    let cursor = 0;
    while (chunk.byteLength) {
      const resp = await fetch(url, {
        method: "get",
        headers: { "Range": "bytes=" + cursor + "-" + (cursor += chunk_size) }
      });
      chunk = resp.ok && await resp.arrayBuffer();
      cursor++; // add one byte for the next iteration, Ranges are inclusive
      yield chunk;
    }
  }

  // set up our MediaSource
  async function sourceOpen() {
    const buffer = source.addSourceBuffer(type);
    buffer.mode = "sequence";
    // looking forward to appendAsync...
    const appendBuffer = (chunk) => {
      return new Promise(resolve => {
        buffer.addEventListener('update', resolve, { once: true });
        buffer.appendBuffer(chunk);
      });
    };
    // while our RangeFetcher is running
    for await (const chunk of fetchRanges(url)) {
      if (chunk) { // append to our MediaSource
        await appendBuffer(chunk);
      } else { // when done
        source.endOfStream();
      }
    }
  }
})().catch(console.error);
<button id="streamVid">stream video</button>
<video id="out" controls muted autoplay loop></video>
Google Chrome has a limit on the size of its file cache. In this case my previous answer would not work. You should use something like file-compressor; this resource may be able to compress the file enough that it becomes cache-eligible. The browser's cache size can be set manually, but this is not doable if the end user has not configured their cache to allow the space required to hold the long video.
A possibility for people who got here: the dev tools have a "Disable cache" option under the Network tab. When it is enabled (meaning the cache is disabled), the browser doesn't cache the videos, and hence needs to download them again.

How to create or convert text to audio at chromium browser?

While trying to determine a solution to How to use Web Speech API at chromium?, I found that
var voices = window.speechSynthesis.getVoices();
returns an empty array for the voices identifier.
I'm not certain if the lack of support in the Chromium browser is related to this issue: Not OK, Google: Chromium voice extension pulled after spying concerns?
Questions:
1) Are there any workarounds which can implement the requirement of creating or converting audio from text in the Chromium browser?
2) How can we, the developer community, create an open-source database of audio files reflecting both common and uncommon words, served with appropriate CORS headers?
There are several possible workarounds I have found which provide the ability to create audio from text; two of them require requesting an external resource, while the other uses meSpeak.js by @masswerk.
The approach described at Download the Audio Pronunciation of Words from Google suffers from not being able to pre-determine which words actually exist as files at the resource without writing a shell script or performing a HEAD request to check whether a network error occurs. For example, the word "do" is not available at the resource used below.
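For completeness, here is a minimal sketch of that HEAD-request check (the helper name is mine, cross-origin policy permitting, using the same URL pattern as the snippet below):
async function wordAudioExists(word) {
  const url = "https://ssl.gstatic.com/dictionary/static/sounds/de/0/" + word + ".mp3";
  try {
    const response = await fetch(url, { method: "HEAD" });
    return response.ok; // 200 means a pronunciation file exists
  } catch (e) {
    return false; // network or CORS error: treat the word as unavailable
  }
}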
window.addEventListener("load", () => {
const textarea = document.querySelector("textarea");
const audio = document.createElement("audio");
const mimecodec = "audio/webm; codecs=opus";
audio.controls = "controls";
document.body.appendChild(audio);
audio.addEventListener("canplay", e => {
audio.play();
});
let words = textarea.value.trim().match(/\w+/g);
const url = "https://ssl.gstatic.com/dictionary/static/sounds/de/0/";
const mediatype = ".mp3";
Promise.all(
words.map(word =>
fetch(`https://query.yahooapis.com/v1/public/yql?q=select * from data.uri where url="${url}${word}${mediatype}"&format=json&callback=`)
.then(response => response.json())
.then(({query: {results: {url}}}) =>
fetch(url).then(response => response.blob())
.then(blob => blob)
)
)
)
.then(blobs => {
// const a = document.createElement("a");
audio.src = URL.createObjectURL(new Blob(blobs, {
type: mimecodec
}));
// a.download = words.join("-") + ".webm";
// a.click()
})
.catch(err => console.log(err));
});
<textarea>what it does my ninja?</textarea>
Resources at Wikimedia Commons Category:Public domain are not necessarily served from the same directory; see How to retrieve Wiktionary word content?, wikionary API - meaning of words.
If the precise location of the resource is known, the audio can be requested, though the URL may include prefixes other than the word itself.
fetch("https://upload.wikimedia.org/wikipedia/commons/c/c5/En-uk-hello-1.ogg")
.then(response => response.blob())
.then(blob => new Audio(URL.createObjectURL(blob)).play());
I'm not entirely sure how to use the Wikipedia API (How to get Wikipedia content using Wikipedia's API?, Is there a clean wikipedia API just for retrieve content summary?) to get only the audio file. The JSON response would need to be parsed for text ending in .ogg, and then a second request would need to be made for the resource itself.
fetch("https://en.wiktionary.org/w/api.php?action=parse&format=json&prop=text&callback=?&page=hello")
.then(response => response.text())
.then(data => {
new Audio(location.protocol + data.match(/\/\/upload\.wikimedia\.org\/wikipedia\/commons\/[\d-/]+[\w-]+\.ogg/).pop()).play()
})
// "//upload.wikimedia.org/wikipedia/commons/5/52/En-us-hello.ogg\"
which logs
Fetch API cannot load https://en.wiktionary.org/w/api.php?action=parse&format=json&prop=text&callback=?&page=hello. No 'Access-Control-Allow-Origin' header is present on the requested resource
when it is not requested from the same origin. We would need to try to use YQL again, though I'm not certain how to formulate the query to avoid errors.
The third approach uses a slightly modified version of meSpeak.js to generate the audio without making an external request. The modification was to create a proper callback for the .loadConfig() method.
fetch("https://gist.githubusercontent.com/guest271314/f48ee0658bc9b948766c67126ba9104c/raw/958dd72d317a6087df6b7297d4fee91173e0844d/mespeak.js")
.then(response => response.text())
.then(text => {
const script = document.createElement("script");
script.textContent = text;
document.body.appendChild(script);
return Promise.all([
new Promise(resolve => {
meSpeak.loadConfig("https://gist.githubusercontent.com/guest271314/8421b50dfa0e5e7e5012da132567776a/raw/501fece4fd1fbb4e73f3f0dc133b64be86dae068/mespeak_config.json", resolve)
}),
new Promise(resolve => {
meSpeak.loadVoice("https://gist.githubusercontent.com/guest271314/fa0650d0e0159ac96b21beaf60766bcc/raw/82414d646a7a7ef11bb04ddffe4091f78ef121d3/en.json", resolve)
})
])
})
.then(() => {
// takes approximately 14 seconds to get here
console.log(meSpeak.isConfigLoaded());
meSpeak.speak("what it do my ninja", {
amplitude: 100,
pitch: 5,
speed: 150,
wordgap: 1,
variant: "m7"
});
})
.catch(err => console.log(err));
One caveat of the above approach is that it takes approximately fourteen and a half seconds for the three files to load before the audio is played back. However, it avoids external requests.
It would be a positive to do either or both of the following: 1) create a FOSS, developer-maintained database or directory of sounds for both common and uncommon words; 2) develop meSpeak.js further to reduce the load time of the three necessary files, and use Promise-based approaches to provide notifications of the progress of loading the files and the readiness of the application.
In this user's estimation, it would be a useful resource if developers themselves created and contributed to an online database of files which respond with an audio file of the specific word. I'm not entirely sure if GitHub is the appropriate venue to host audio files. We will have to consider the possible options if interest in such a project is shown.

JavaScript: Writing to download stream

I want to download an encrypted file from my server, decrypt it and save it locally. I want to decrypt the file and write it locally as it is being downloaded rather than waiting for the download to finish, decrypting it and then putting the decrypted file in an anchor tag. The main reason I want to do this is so that with large files the browser does not have to store hundreds of megabytes or several gigabytes in memory.
This is only going to be possible with a combination of Service Worker + fetch + streams.
A few browsers have Service Worker and fetch, but even fewer support fetch with streaming (Blink):
new Response(new ReadableStream({...}))
I have built a streaming file saver lib that communicates with a service worker in order to intercept network requests: StreamSaver.js.
It's a little bit different from Node's streams. Here is an example:
function unencrypt(chunk) {
  // should return a Uint8Array
  return new Uint8Array(chunk)
}

// We use fetch instead of xhr because fetch has streaming support
fetch(url).then(res => {
  // create a writable stream + intercept a network response
  const fileStream = streamSaver.createWriteStream('filename.txt')
  const writer = fileStream.getWriter()

  // stream the response
  const reader = res.body.getReader()
  const pump = () => reader.read()
    .then(({ value, done }) => {
      // when the response is fully consumed, close the writer
      if (done) return writer.close()
      const chunk = unencrypt(value)
      // Write one chunk, then get the next one
      writer.write(chunk) // returns a promise
      // While the write stream can handle the watermark,
      // read more data
      return writer.ready.then(pump)
    })

  // Start the reader
  pump().then(() =>
    console.log('Closed the stream, Done writing')
  )
})
There are also two other ways you can get a streaming response with xhr, but they're not standard, and it doesn't matter whether you use them (responseType = ms-stream || moz-chunked-arrayBuffer) since StreamSaver depends on fetch + ReadableStream anyway and can't be used any other way.
Later, when WritableStream + Transform streams get implemented as well, you will be able to do something like this:
fetch(url).then(res => {
  const fileStream = streamSaver.createWriteStream('filename.txt')
  res.body
    .pipeThrough(unencrypt)
    .pipeTo(fileStream)
    .then(done)
})
It's also worth mentioning that the default download manager is commonly associated with background downloads, so people sometimes close the tab when they see the download. But this is all happening in the main thread, so you need to warn the user when they leave:
window.onbeforeunload = function(e) {
  if (download_is_done()) return

  var dialogText = 'Download is not finished, leaving the page will abort the download'
  e.returnValue = dialogText
  return dialogText
}
A new solution has arrived: showSaveFilePicker/FileSystemWritableFileStream, supported in Chrome, Edge, and Opera since October 2020 (and with a ServiceWorker-based shim for Firefox, from the author of the other major answer!), will allow you to do this directly:
async function streamDownloadDecryptToDisk(url, DECRYPT) {
  // create readable stream for ciphertext
  let rs_src = fetch(url).then(response => response.body);

  // create writable stream for file
  let ws_dest = window.showSaveFilePicker().then(handle => handle.createWritable());

  // create transform stream for decryption
  let ts_dec = new TransformStream({
    async transform(chunk, controller) {
      controller.enqueue(await DECRYPT(chunk));
    }
  });

  // stream cleartext to file
  let rs_clear = rs_src.then(s => s.pipeThrough(ts_dec));
  return (await rs_clear).pipeTo(await ws_dest);
}
Depending on performance—if you're trying to compete with MEGA, for instance—you might also consider modifying DECRYPT(chunk) to allow you to use ReadableStreamBYOBReader with it:
…zero-copy reading from an underlying byte source. It is used for efficient copying from underlying sources where the data is delivered as an "anonymous" sequence of bytes, such as files.
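For example, here is a minimal sketch of consuming the response with a BYOB reader (my own assumption of how it would slot in, not code from the answer above; it requires the body to be a byte stream, which fetch bodies are in recent Chromium):
async function readCiphertextBYOB(url) {
  const response = await fetch(url);
  const reader = response.body.getReader({ mode: "byob" });
  let buffer = new ArrayBuffer(64 * 1024); // reusable 64 KiB scratch buffer
  while (true) {
    const { value, done } = await reader.read(new Uint8Array(buffer));
    if (done) break;
    // hand `value` (a view over our buffer) to the decryption routine here
    buffer = value.buffer; // the underlying buffer is transferred back for reuse
  }
}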
For security reasons, browsers do not allow piping an incoming readable stream directly to the local file system, so you have two ways to solve it:
window.open(Resource_URL): download the resource in a new window with Content-Disposition set to "attachment";
<a download href="path/to/resource"></a>: use the download attribute of the anchor element to download the stream to disk; a minimal sketch follows below.
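A minimal sketch of the second option built programmatically (the file name is just an example):
const a = document.createElement("a");
a.href = "path/to/resource"; // same-origin URL, or a blob:/data: URL
a.download = "filename.txt"; // suggested file name on disk
document.body.appendChild(a); // some browsers require the element to be in the DOM
a.click();
a.remove();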
Hope these help :)
