React beginner here, I'm getting this error 'Uncaught DOMException: Failed to execute 'drawImage' on 'CanvasRenderingContext2D': The HTMLImageElement provided is in the 'broken' state.'
I have been stuck on this for a few hours now.
The error is thrown by the drawImage call, which is inside an if statement.
Is there a way to say: if there is an image, go ahead and draw it (ctx.drawImage(img, 0, 0)), but if there isn't one, don't throw any error (and don't draw anything either)? Something like what optional chaining (?.) does?
English is not my first language, so there may be mistakes.
My code:
const CameraCanvas = useRef<any>();
const CameraImage = useRef<any>();
const updatePreviewUrl = () => {
if (currentCam) {
let curSnapshotUrl = state.snapshot?.snapshotUrl.split("?ts=")[0];
let newSnapshotUrl =
`api/cam/${....}/` +
`${curCam.ident}/snapsht`;
if (curSnapshotUrl ....) {
setState({
...
});
}
}
};
const updateCanvas = () => {
updatePreviewUrl();
const canvas = CameraCanvas.current;
const ctx = canvas.getContext("2d");
const img = CameraImage.current;
ctx.rect(0, 0, canvas.width, canvas.height);
ctx.fill();
if (currentCam) {
ctx.drawImage(img, 0, 0);
}
};
useEffect(() => {
updateCanvas();
const img = CameraImage.current;
img.onload = updateCanvas;
return () => {
img.onload = null;
};
});
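For example, something like this is what I have in mind: only drawing when the image has actually loaded, since a broken image reports naturalWidth === 0 and complete becomes true once loading has finished or failed (untested sketch of my updateCanvas):
const updateCanvas = () => {
updatePreviewUrl();
const canvas = CameraCanvas.current;
const ctx = canvas.getContext("2d");
const img = CameraImage.current;
ctx.rect(0, 0, canvas.width, canvas.height);
ctx.fill();
// Draw only when the <img> has finished loading successfully;
// a broken (or not-yet-loaded) image has naturalWidth === 0.
if (currentCam && img && img.complete && img.naturalWidth > 0) {
ctx.drawImage(img, 0, 0);
}
};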
Basically, I want to be able to do effectively the same thing as this code:
const video = document.getElementById('video');
const canvas = document.getElementById('canvas');
const context = canvas.getContext('2d');
const draw = () => {
context.drawImage(video, 0, 0);
requestAnimationFrame(draw);
}
video.onplay = () => {
requestAnimationFrame(draw);
}
only using an offscreen canvas. I can send images over messages to the worker that the offscreen canvas is on, but not video, as it's directly tied to an HTMLElement. Is there currently a way to somehow render video data or a MediaStream to an offscreen canvas?
You can send frames of a video to an OffscreenCanvas in a Web Worker by modifying your script with the following changes:
const worker = new Worker('my-worker.js');
const video = document.getElementById('video');
const stream = video.captureStream();
const [track] = stream.getVideoTracks();
const imageCapture = new ImageCapture(track);
const canvas = document.getElementById('canvas');
const offscreen = canvas.transferControlToOffscreen();
worker.postMessage({ offscreen }, [offscreen]);
const draw = () => {
imageCapture.grabFrame().then(imageBitmap => {
worker.postMessage({ imageBitmap }, [imageBitmap]);
});
requestAnimationFrame(draw);
};
video.onplay = () => {
requestAnimationFrame(draw);
};
my-worker.js
let canvas;
let context;
addEventListener('message', event => {
if (event.data.offscreen) {
canvas = event.data.offscreen;
context = canvas.getContext('2d');
} else if (event.data.imageBitmap && context) {
context.drawImage(event.data.imageBitmap, 0, 0);
// do something with frame
}
});
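One extra detail: each transferred ImageBitmap keeps its pixel memory alive until it is closed, so the worker can call close() once the frame has been drawn. A small variation of my-worker.js:
let canvas;
let context;
addEventListener('message', event => {
  if (event.data.offscreen) {
    canvas = event.data.offscreen;
    context = canvas.getContext('2d');
  } else if (event.data.imageBitmap && context) {
    context.drawImage(event.data.imageBitmap, 0, 0);
    // Release the bitmap's backing memory now that the frame is on the canvas.
    event.data.imageBitmap.close();
  }
});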
References
HTMLMediaElement.prototype.captureStream()
MediaStream.prototype.getVideoTracks()
new ImageCapture()
ImageCapture.prototype.grabFrame()
I'm trying to learn JavaScript by making my first game. How can I make all images load in one function and later draw them on the canvas, making my code shorter?
How can I put a lot of images in an array and later use it in a function?
This is my third day of learning JavaScript.
Thanks in advance.
var cvs = document.getElementById('canvas');
var ctx = cvs.getContext('2d');
//load images
var bird = new Image();
var bg = new Image();
var fg = new Image();
var pipeNorth = new Image();
var pipeSouth = new Image();
//images directions
bg.src = "assets/bg.png";
bird.src = "assets/bird.png";
fg.src = "assets/fg.png";
pipeNorth.src = "assets/pipeNorth.png";
pipeSouth.src = "assets/pipeSouth.png";
var heightnum = 80;
var myHeight = pipeSouth.height+heightnum;
var bX = 10;
var bY = 150;
var gravity = 0.5;
// Key Control :D
document.addEventListener("keydown",moveUP)
function moveUP(){
bY -= 20;
}
//pipe coordinates
var pipe = [];
pipe[0] = {
x : cvs.width,
y : 0
}
//draw images
//Background img
bg.onload = function back(){
ctx.drawImage(bg,0,0);
}
//pipe north
pipeNorth.onload = function tubo(){
for(var i = 0; i < pipe.length; i++){
ctx.drawImage(pipeNorth,pipe[i].x,pipe[i].y);
pipe[i].x--;
}
}
pipeSouth.onload = function tuba(){
ctx.drawImage(pipeSouth,pipe[i].x,pipe[i].y+myHeight);
}
bird.onload = function pajaro(){
ctx.drawImage(bird,bX,bY);
bY += gravity;
requestAnimationFrame(pajaro);
}
fg.onload = function flor(){
ctx.drawImage(fg,0,cvs.height - fg.height);
}
moveUP();
back();
tuba();
pajaro();
flor();
This can be done with Promise.all. We'll make a new promise for each image we want to load, resolving when onload is called. Once Promise.all resolves, we can call our initialize function and continue on with the rest of our logic. This avoids race conditions where the main game loop's requestAnimationFrame is kicked off from bird.onload while the pipe entities and so forth may not have loaded yet.
Here's a minimal, complete example:
const initialize = images => {
// images are loaded here and we can go about our business
const canvas = document.createElement("canvas");
document.body.appendChild(canvas);
canvas.width = 400;
canvas.height = 200;
const ctx = canvas.getContext("2d");
Object.values(images).forEach((e, i) =>
ctx.drawImage(e, i * 100, 0)
);
};
const imageUrls = [
"http://placekitten.com/90/100",
"http://placekitten.com/90/130",
"http://placekitten.com/90/160",
"http://placekitten.com/90/190",
];
Promise.all(imageUrls.map(e =>
new Promise((resolve, reject) => {
const img = new Image();
img.onload = () => resolve(img);
img.onerror = reject;
img.src = e;
})
)).then(initialize);
Notice that I used an array in the above example to store the images. The problem this solves is that the
var foo = ...
var bar = ...
var baz = ...
var qux = ...
foo.src = ...
bar.src = ...
baz.src = ...
qux.src = ...
foo.onload = ...
bar.onload = ...
baz.onload = ...
qux.onload = ...
pattern is extremely difficult to manage and scale. If you decide to add another thing to the game, then the code needs to be re-written to account for it and the game logic becomes very WET. Bugs become difficult to spot and eliminate. Also, if we want a specific image, we'd prefer to access it like images.bird rather than images[1], preserving the semantics of the individual variables, but giving us the power to loop through the object and call each entity's render function, for example.
All of this motivates an object to aggregate game entities. Some information we'd like to have per entity might include, for example, the entity's current position, dead/alive status, functions for moving and rendering it, etc.
It's also a nice idea to have some kind of separate raw data object that contains all of the initial game state (this would typically be an external JSON file).
Clearly, this can turn into a significant refactor, but it's a necessary step when the game grows beyond small (and we can incrementally adopt these design ideas). It's generally a good idea to bite the bullet up front.
Here's a proof-of-concept illustrating some of the musings above. Hopefully this offers some ideas for how you might manage game state and logic.
const entityData = [
{
name: "foo",
path: "http://placekitten.com/80/80",
x: 0,
y: 0
},
{
name: "baz",
path: "http://placekitten.com/80/150",
x: 0,
y: 90
},
{
name: "quux",
path: "http://placekitten.com/100/130",
x: 90,
y: 110
},
{
name: "corge",
path: "http://placekitten.com/200/240",
x: 200,
y: 0
},
{
name: "bar",
path: "http://placekitten.com/100/100",
x: 90,
y: 0
}
/* you can add more properties and functions
(movement, etc) to each entity
... try adding more entities ...
*/
];
const entities = entityData.reduce((a, e) => {
a[e.name] = {...e, image: new Image(), path: e.path};
return a;
}, {});
const initialize = () => {
const canvas = document.createElement("canvas");
document.body.appendChild(canvas);
canvas.width = innerWidth;
canvas.height = innerHeight;
const ctx = canvas.getContext("2d");
for (const key of Object.keys(entities)) {
entities[key].alpha = Math.random();
}
(function render () {
ctx.clearRect(0, 0, canvas.width, canvas.height);
Object.values(entities).forEach(e => {
ctx.globalAlpha = Math.abs(Math.sin(e.alpha += 0.005));
ctx.drawImage(e.image, e.x, e.y);
ctx.globalAlpha = 1;
});
requestAnimationFrame(render);
})();
};
Promise.all(Object.values(entities).map(e =>
new Promise((resolve, reject) => {
e.image.onload = () => resolve(e.image);
e.image.onerror = () =>
reject(`${e.path} failed to load`)
;
e.image.src = e.path;
})
))
.then(initialize)
.catch(err => console.error(err))
;
I am developing an Angular application and trying to decode an image containing a QR code on the client side only, and I am running into the errors below.
I have the following flow:
User uploads an image.
I decode the QR code from the image.
<input type="file" name="file" id="file" accept="image/*" (change)="qrCodeUploaded($event.target.files)"/>
I have tried the following libraries:
https://github.com/zxing-js/library
qrCodeUploaded(files: FileList): void {
const fileReader = new FileReader();
const codeReader = new BrowserQRCodeReader();
fileReader.readAsDataURL(files[0]);
fileReader.onload = (e: any) => {
var image = document.createElement("img");
image.src = e.target.result;
setTimeout(() => codeReader.decodeFromImage(image, e.target.result).then(res => console.log(res)), 100);
};
}
This works for some QR codes, but there are issues on mobile. If you take a photo of a QR code with your phone, it will not be decoded. So for mobile you will need to implement video-based scanning.
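By video-based scanning I mean something along these lines: a rough, untested sketch using getUserMedia plus jsQR (the second library below) rather than zxing's video API, since that API differs between versions:
import jsQR from "jsqr";
async function scanFromCamera(video: HTMLVideoElement): Promise<string> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: { facingMode: "environment" } });
  video.srcObject = stream;
  await video.play();
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d")!;
  return new Promise(resolve => {
    const tick = () => {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
      const code = jsQR(imageData.data, imageData.width, imageData.height);
      if (code) {
        stream.getTracks().forEach(track => track.stop()); // release the camera
        resolve(code.data);
      } else {
        requestAnimationFrame(tick); // keep scanning until a code is found
      }
    };
    requestAnimationFrame(tick);
  });
}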
https://github.com/cozmo/jsQR
qrCodeUploaded(files: FileList): void {
const fileReader = new FileReader();
fileReader.readAsArrayBuffer(files[0]);
fileReader.onload = (e: any) => {
console.log(new Uint8ClampedArray(e.target.result));
console.log(jsQR(new Uint8ClampedArray(e.target.result), 256, 256));
};
}
I get the following error for any QR image I upload:
core.js:15714 ERROR Error: Malformed data passed to binarizer.
at Object.binarize (jsQR.js:408)
at jsQR (jsQR.js:368)
gist link:
https://gist.github.com/er-ant/b5c75c822eb085e70035cf333bb0fb55
Please tell me what I am doing wrong and propose a solution for decoding QR codes. Open to any thoughts :)
For the second library, I missed that it expects ImageData while I was passing binary data.
Thus, we have 3 solutions for converting binary data to ImageData:
Use createImageBitmap. This is Kaiido's solution with some updates, as the version proposed in the comments doesn't work.
qrCodeUploadedHandler(files: FileList): void {
const file: File = files[0];
createImageBitmap(files[0])
.then(bmp => {
const canvas = document.createElement('canvas');
const width: number = bmp.width;
const height: number = bmp.height;
canvas.width = bmp.width;
canvas.height = bmp.height;
const ctx = canvas.getContext('2d');
ctx.drawImage(bmp, 0, 0);
const qrCodeImageFormat = ctx.getImageData(0,0,bmp.width,bmp.height);
const qrDecoded = jsQR(qrCodeImageFormat.data, qrCodeImageFormat.width, qrCodeImageFormat.height);
});
}
Get ImageData from canvas.
qrCodeUploadedHandler(files: FileList): void {
const file: File = files[0];
const fileReader: FileReader = new FileReader();
fileReader.readAsDataURL(files[0]);
fileReader.onload = (event: ProgressEvent) => {
const img: HTMLImageElement = new Image();
img.onload = () => {
const canvas: HTMLCanvasElement = document.createElement('canvas');
const width: number = img.width;
const height: number = img.height;
canvas.width = width;
canvas.height = height;
const canvasRenderingContext: CanvasRenderingContext2D = canvas.getContext('2d');
console.log(canvasRenderingContext);
canvasRenderingContext.drawImage(img, 0, 0);
const qrCodeImageFormat: ImageData = canvasRenderingContext.getImageData(0, 0, width, height);
const qrDecoded = jsQR(qrCodeImageFormat.data, qrCodeImageFormat.width, qrCodeImageFormat.height);
canvas.remove();
};
img.onerror = () => console.error('Upload file of image format please.');
img.src = (<any>event.target).result;
};
}
You can parse images with the png.js and jpeg-js libraries to get ImageData.
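For example, with jpeg-js (a rough sketch; I'm assuming its decode(buffer, { useTArray: true }) API, which should return { width, height, data } with RGBA pixels):
import * as jpeg from "jpeg-js";
import jsQR from "jsqr";
function decodeQrFromJpegFile(file: File): void {
  const fileReader = new FileReader();
  fileReader.readAsArrayBuffer(file);
  fileReader.onload = (e: any) => {
    // Decode the JPEG into raw RGBA pixels without going through a canvas.
    const decoded = jpeg.decode(new Uint8Array(e.target.result), { useTArray: true });
    const code = jsQR(new Uint8ClampedArray(decoded.data), decoded.width, decoded.height);
    console.log(code ? code.data : "No QR code found");
  };
}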
step:
Create a new image object from image file
Create a canvas
Draw the image to the canvas
Get ImageData through the context of canvas
Scan QR code with jsQR.
install:
npm install jsqr --save
code:
// utils/qrcode.js
import jsQR from "jsqr"
export const scanQrCode = (file, callback) => {
const image = new Image()
image.src = file.content
image.addEventListener("load", (e) => {
console.log(
"image on load, image.naturalWidth, image.naturalHeight",
image.naturalWidth,
image.naturalHeight
)
const canvas = document.createElement("canvas") // Creates a canvas object
canvas.width = image.naturalWidth // Assigns image's width to canvas
canvas.height = image.naturalHeight // Assigns image's height to canvas
const context = canvas.getContext("2d") // Creates a 2D drawing context
context.imageSmoothingEnabled = false
context.drawImage(image, 0, 0, image.naturalWidth, image.naturalHeight) // Draws the image on canvas
const imageData = context.getImageData(0, 0, image.naturalWidth, image.naturalHeight) // Reads the raw RGBA pixel data back from the canvas
const code = jsQR(imageData.data, image.naturalWidth, image.naturalHeight)
if (code) {
console.log("Found QR code", code)
callback(code)
}
})
}
use:
scanQrCode(file, (code) => {
console.log(code.data);
});
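Note that scanQrCode assumes file.content is a data URL for the image (the shape some upload components provide). If you start from a plain <input type="file">, you can build that object yourself with a FileReader, for example:
const input = document.querySelector<HTMLInputElement>("#file");
input.addEventListener("change", () => {
  const reader = new FileReader();
  reader.onload = () => {
    // reader.result is a data URL, matching what scanQrCode expects in file.content.
    scanQrCode({ content: reader.result as string }, (code) => {
      console.log(code.data);
    });
  };
  reader.readAsDataURL(input.files[0]);
});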
I want to create a function that returns an observable of a file.
My current working code uses a promise:
testPromise(): Promise<File> {
return new Promise((resolve, reject) => {
const canvas = document.createElement('canvas');
canvas.width = 800;
canvas.height = 600;
const context = canvas.getContext('2d');
context.drawImage(image, 0, 0, 800, 600);
canvas.toBlob((blob: Blob) => resolve(new File([blob], name)), type, quality);
});
}
Instead of that, I want to return an Observable<File>.
I tried the following, but I cannot pass the toBlob() arguments such as type and quality:
testObservable(): Observable<File> {
const canvas = document.createElement('canvas');
canvas.width = 800;
canvas.height = 600;
const context = canvas.getContext('2d');
context.drawImage(image, 0, 0, 800, 600);
const toBlob = bindCallback(canvas.toBlob);
return toBlob().pipe(
map((blob: Blob) => {
return new File([blob], name)
})
);
}
Expected behavior
const toBlob = bindCallback(canvas.toBlob);
toBlob(type, quality) // <= Expected 0 arguments, but got 2.
Here I get an error on toBlob Expected 0 arguments, but got 2.
Current behavior
const toBlob = bindCallback(canvas.toBlob);
toBlob() // <= No type or quality arguments
Here is the canvas.toBlob interface according to MDN docs
void canvas.toBlob(callback, mimeType, qualityArgument);
PS: I don't want to convert the Promise to an Observable, but I need to convert the callback directly to an Observable.
Any ideas?
I'm afraid bindCallback is not a good choice here because it expects the callback to be the last argument, whereas toBlob takes it as the first.
I think you'll have to wrap this call in an Observable yourself:
return new Observable(observer => {
canvas.toBlob(result => {
observer.next(result);
observer.complete();
}, type, quality);
});
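Putting it together with the method from the question (a sketch; image, name, type and quality are assumed to be in scope exactly as in the original code, and Observable comes from rxjs):
testObservable(): Observable<File> {
  const canvas = document.createElement('canvas');
  canvas.width = 800;
  canvas.height = 600;
  const context = canvas.getContext('2d');
  context.drawImage(image, 0, 0, 800, 600);
  // Wrap toBlob by hand so type and quality can still be passed through.
  return new Observable<File>(observer => {
    canvas.toBlob((blob: Blob) => {
      observer.next(new File([blob], name));
      observer.complete();
    }, type, quality);
  });
}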
I made a simple application which takes a photo from a video tag and makes it grayscale, available in full here: Canvas WebWorker PoC:
const photoParams = [
0, //x
0, //y
320, //width
240, //height
];
async function startVideo () {
const stream = await navigator.mediaDevices.getUserMedia({
audio: false,
video: true,
});
const video = document.querySelector('#video');
video.srcObject = stream;
video.play();
return video;
}
function takePhoto () {
const video = document.querySelector('#video');
const canvas = document.querySelector('#canvas');
canvas.width = 320;
canvas.height = 240;
const context = canvas.getContext('2d');
context.drawImage(video, ...photoParams);
const imageData = applyFilter({imageData:
context.getImageData(...photoParams)});
context.putImageData(imageData, 0, 0);
return canvas.toDataURL("image/png")
}
function setPhoto () {
const photo = takePhoto();
const image = document.querySelector('#image');
image.src = photo;
}
startVideo();
const button = document.querySelector('#button');
button.addEventListener('click', setPhoto);
In one of the functions, I placed a long, unnecessary for loop to make it really slow:
function transformPixels ({data}) {
let rawPixels;
const array = new Array(2000);
for (const element of array) {
rawPixels = [];
const pixels = getPixels({
data,
});
const filteredPixels = [];
for (const pixel of pixels) {
const average = getAverage(pixel);
filteredPixels.push(new Pixel({
red: average,
green: average,
blue: average,
alpha: pixel.alpha,
}));
}
for (const pixel of filteredPixels) {
rawPixels.push(...pixel.toRawPixels());
}
}
return rawPixels;
};
And I created a Web Worker version which, as I thought, should be faster because it does not block the main thread:
function setPhotoWorker () {
const video = document.querySelector('#video');
const canvas = document.querySelector('#canvas');
canvas.width = 320;
canvas.height = 240;
const context = canvas.getContext('2d');
context.drawImage(video, ...photoParams);
const imageData = context.getImageData(...photoParams);
const worker = new Worker('filter-worker.js');
worker.onmessage = (event) => {
const rawPixelsArray = [...JSON.parse(event.data)];
rawPixelsArray.forEach((element, index) => {
imageData.data[index] = element;
});
context.putImageData(imageData, 0, 0);
const image = document.querySelector('#image');
image.src = canvas.toDataURL("image/png");
}
worker.postMessage(JSON.stringify([...imageData.data]));
}
It can be run in this way:
button.addEventListener('click', setPhotoWorker);
The worker code is almost exactly the same as the single-threaded version, except for one thing: to improve messaging performance, a string is sent instead of an array of numbers:
worker.onmessage = (event) => {
const rawPixelsArray = [...JSON.parse(event.data)];
};
//...
worker.postMessage(JSON.stringify([...imageData.data]));
And inside filter-worker.js:
onmessage = (data) => {
const rawPixels = transformPixels({data: JSON.parse(data.data)});
postMessage(JSON.stringify(rawPixels));
};
The problem is that the worker version is always about 20-25% slower than the main-thread version. At first I thought it might be the size of the message, but my laptop has a 640 x 480 camera, which gives 307,200 items, and I don't think that is expensive enough to explain why, for 2000 for-loop iterations, the results are: main thread about 160 seconds, worker about 200 seconds. You can download the app from the GitHub repo and check it on your own. The pattern is much the same there: the worker is always 20-25% slower. Without using the JSON API, the worker needs something like 220 seconds to finish its job. The only reason I can think of is that the worker thread has a very low priority, and in my application, where the main thread does not have much to do, it is simply slower; in a real-world app, where the main thread is busier, the worker would win. Do you have any ideas why the worker is so slow? Thank you for every answer.
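For what it's worth, one obvious cost in both directions is the JSON round trip over roughly 307,200 numbers per frame. Here is a sketch of the same exchange using transferable ArrayBuffers instead; it removes the serialization cost, although it may not account for the whole 20-25% gap (transformPixels is the same function as above):
// Main thread (inside setPhotoWorker): transfer the pixel buffer instead of JSON-encoding it.
const imageData = context.getImageData(...photoParams);
worker.onmessage = (event) => {
  // The returned buffer was transferred back; rebuild an ImageData from it.
  const pixels = new Uint8ClampedArray(event.data);
  context.putImageData(new ImageData(pixels, canvas.width, canvas.height), 0, 0);
  const image = document.querySelector('#image');
  image.src = canvas.toDataURL('image/png');
};
worker.postMessage(imageData.data.buffer, [imageData.data.buffer]);
// filter-worker.js
onmessage = (event) => {
  const data = new Uint8ClampedArray(event.data);  // view over the transferred buffer
  const rawPixels = transformPixels({ data });     // same grayscale filter as before
  const out = Uint8ClampedArray.from(rawPixels);
  postMessage(out.buffer, [out.buffer]);           // transfer the result back, no copy
};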