I'm trying to learn JavaScript and I'm making my first game. How can I make all of the images load in one function and later draw them on the canvas, making my code shorter?
How can I put a lot of images in an array and later use it in a function?
This is my third day of learning JavaScript.
Thanks in advance.
var cvs = document.getElementById('canvas');
var ctx = cvs.getContext('2d');
//load images
var bird = new Image();
var bg = new Image();
var fg = new Image();
var pipeNorth = new Image();
var pipeSouth = new Image();
//images directions
bg.src = "assets/bg.png";
bird.src = "assets/bird.png";
fg.src = "assets/fg.png";
pipeNorth.src = "assets/pipeNorth.png";
pipeSouth.src = "assets/pipeSouth.png";
var heightnum = 80;
var myHeight = pipeSouth.height+heightnum;
var bX = 10;
var bY = 150;
var gravity = 0.5;
// Key Control :D
document.addEventListener("keydown",moveUP)
function moveUP(){
bY -= 20;
}
//pipe coordinates
var pipe = [];
pipe[0] = {
x : cvs.width,
y : 0
}
//draw images
//Background img
bg.onload = function back(){
ctx.drawImage(bg,0,0);
}
//pipe north
pipeNorth.onload = function tubo(){
for(var i = 0; i < pipe.length; i++){
ctx.drawImage(pipeNorth,pipe[i].x,pipe[i].y);
pipe[i].x--;
}
}
pipeSouth.onload = function tuba(){
ctx.drawImage(pipeSouth,pipe[i].x,pipe[i].y+myHeight);
}
bird.onload = function pajaro(){
ctx.drawImage(bird,bX,bY);
bY += gravity;
requestAnimationFrame(pajaro);
}
fg.onload = function flor(){
ctx.drawImage(fg,0,cvs.height - fg.height);
}
moveUP();
back();
tuba();
pajaro();
flor();
This can be done with Promise.all. We'll make a new promise for each image we want to load, resolving when onload is called. Once Promise.all resolves, we can call our initialize function and continue on with the rest of our logic. This avoids race conditions where the main game loop's requestAnimationFrame is called from bird.onload, but it's possible that pipe entities and so forth haven't loaded yet.
Here's a minimal, complete example:
const initialize = images => {
// images are loaded here and we can go about our business
const canvas = document.createElement("canvas");
document.body.appendChild(canvas);
canvas.width = 400;
canvas.height = 200;
const ctx = canvas.getContext("2d");
Object.values(images).forEach((e, i) =>
ctx.drawImage(e, i * 100, 0)
);
};
const imageUrls = [
"http://placekitten.com/90/100",
"http://placekitten.com/90/130",
"http://placekitten.com/90/160",
"http://placekitten.com/90/190",
];
Promise.all(imageUrls.map(e =>
new Promise((resolve, reject) => {
const img = new Image();
img.onload = () => resolve(img);
img.onerror = reject;
img.src = e;
})
)).then(initialize);
Notice that I used an array in the above example to store the images. The problem this solves is that the
var foo = ...
var bar = ...
var baz = ...
var qux = ...
foo.src = ...
bar.src = ...
baz.src = ...
qux.src = ...
foo.onload = ...
bar.onload = ...
baz.onload = ...
qux.onload = ...
pattern is extremely difficult to manage and scale. If you decide to add another thing into the game, then the code needs to be re-written to account for it and game logic becomes very WET (everything gets written twice). Bugs become difficult to spot and eliminate. Also, if we want a specific image, we'd prefer to access it like images.bird rather than images[1], preserving the semantics of the individual variables, but giving us the power to loop through the object and call each entity's render function, for example.
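For example, here's a minimal sketch (reusing the asset paths from your code) of loading into a name-keyed object instead of an array, so we keep images.bird-style access and can still loop over everything:
const sources = {
  bird: "assets/bird.png",
  bg: "assets/bg.png",
  fg: "assets/fg.png",
  pipeNorth: "assets/pipeNorth.png",
  pipeSouth: "assets/pipeSouth.png"
};
const loadImages = sources =>
  Promise.all(Object.entries(sources).map(([name, src]) =>
    new Promise((resolve, reject) => {
      const img = new Image();
      img.onload = () => resolve([name, img]);
      img.onerror = reject;
      img.src = src;
    })
  )).then(Object.fromEntries); // [[name, img], ...] -> { name: img, ... }
loadImages(sources).then(images => {
  // images.bird, images.bg, etc. are all loaded and ready to draw here
});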
All of this motivates an object to aggregate game entities. Some information we'd like to have per entity might include, for example, the entity's current position, dead/alive status, functions for moving and rendering it, etc.
It's also a nice idea to have some kind of separate raw data object that contains all of the initial game state (this would typically be an external JSON file).
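For instance, a rough sketch of that idea, assuming a hypothetical entities.json file served next to the page:
// entities.json (hypothetical) might contain:
// [ { "name": "bird", "path": "assets/bird.png", "x": 10, "y": 150 },
//   { "name": "bg",   "path": "assets/bg.png",   "x": 0,  "y": 0 } ]
const loadEntityData = () =>
  fetch("entities.json").then(res => {
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return res.json();
  });
loadEntityData().then(entityData => {
  // build the entities object and start image loading from here
});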
Clearly, this can turn into a significant refactor, but it's a necessary step when the game grows beyond small (and we can incrementally adopt these design ideas). It's generally a good idea to bite the bullet up front.
Here's a proof-of-concept illustrating some of the musings above. Hopefully this offers some ideas for how you might manage game state and logic.
const entityData = [
{
name: "foo",
path: "http://placekitten.com/80/80",
x: 0,
y: 0
},
{
name: "baz",
path: "http://placekitten.com/80/150",
x: 0,
y: 90
},
{
name: "quux",
path: "http://placekitten.com/100/130",
x: 90,
y: 110
},
{
name: "corge",
path: "http://placekitten.com/200/240",
x: 200,
y: 0
},
{
name: "bar",
path: "http://placekitten.com/100/100",
x: 90,
y: 0
}
/* you can add more properties and functions
(movement, etc) to each entity
... try adding more entities ...
*/
];
const entities = entityData.reduce((a, e) => {
a[e.name] = {...e, image: new Image(), path: e.path};
return a;
}, {});
const initialize = () => {
const canvas = document.createElement("canvas");
document.body.appendChild(canvas);
canvas.width = innerWidth;
canvas.height = innerHeight;
const ctx = canvas.getContext("2d");
for (const key of Object.keys(entities)) {
entities[key].alpha = Math.random();
}
(function render () {
ctx.clearRect(0, 0, canvas.width, canvas.height);
Object.values(entities).forEach(e => {
ctx.globalAlpha = Math.abs(Math.sin(e.alpha += 0.005));
ctx.drawImage(e.image, e.x, e.y);
ctx.globalAlpha = 1;
});
requestAnimationFrame(render);
})();
};
Promise.all(Object.values(entities).map(e =>
new Promise((resolve, reject) => {
e.image.onload = () => resolve(e.image);
e.image.onerror = () =>
reject(`${e.path} failed to load`)
;
e.image.src = e.path;
})
))
.then(initialize)
.catch(err => console.error(err))
;
Related
I want to transfer full control to a worker and use an offscreen canvas. But there is an Image() which is tied to the UI; see the function makeImg. I am not intending to show the image; it is used purely as data for building a mesh. The highlighted code fully depends on the UI. Is it possible (and how exactly) to do this entirely in a web worker, without exchanging data with the UI until the calculations are done and the final mesh is fully generated, ready to be shown? For instance, the following bitmap contains the heights:
The page without perspective, with full source code, is here.
I am building a height terrain using the above bitmap as a heightmap; the code is here on GitHub, in the page heightMap.html. I use the pixel values to generate vertices and calculate normals and texture coordinates. The result is the terrain that will be shown in the page, here shown without a texture:
async function readImgHeightMap (src, crossOrigin) {
return new Promise ((resolve, reject) => {
readImg (src, crossOrigin).then ((imgData) => {
let heightmap = [];
//j, row -- z coordinate; i, column -- x coordinate
//imgData.data, height -- y coordinate
for (let j = 0, j0 = 0; j < imgData.height; j++, j0 += imgData.width * 4) {
heightmap[j] = [];
for (let i = 0, i0 = 0; i < imgData.width; i++, i0 += 4)
heightmap[j][i] = imgData.data[j0 + i0];
}
resolve( {data:heightmap, height:imgData.height, width:imgData.width} );
});
});
}
async function readImg (src, crossOrigin) {
return new Promise ( (resolve, reject) => {
makeOffscreenFromImg (src, crossOrigin).then((canvas) => {
let ctx = canvas.getContext("2d");
let imgData = ctx.getImageData(0, 0, canvas.width, canvas.height, { colorSpace: "srgb" });
resolve(imgData);
});
});
}
async function makeOffscreenFromImg (src, crossOrigin) {
let img = makeImg(src, crossOrigin);
return new Promise((resolve, reject) => {
img.addEventListener('load', () => {
let cnv = new OffscreenCanvas(img.width, img.height);
cnv.getContext("2d").drawImage(img, 0, 0);
resolve(cnv);
});
img.addEventListener('error', (event) => { console.log(event); reject (event); } );
});
}
function makeImg (src, crossOrigin)
{
let image = new Image ();
let canvas = document.createElement("canvas");
if (crossOrigin) image.crossOrigin = crossOrigin;
image.src = src;
return image;
}
##################
PS: Just in case: to see the crater from different angles, the camera can be moved with the mouse while pressing SHIFT, or rotated while pressing CTRL. There is also a click event for permanent animation, or a tap on a mobile device.
PS1: Please do not use the heightmap images for personal purposes. They are under commercial copyright.
Use createImageBitmap from your Worker, passing a Blob you'd have fetched from the image URL:
const resp = await fetch(imageURL);
if (!resp.ok) {
throw "network error";
}
const blob = await resp.blob();
const bmp = await createImageBitmap(blob);
const { width, height } = bmp;
const canvas = new OffscreenCanvas(width, height);
const ctx = canvas.getContext("2d");
ctx.drawImage(bmp, 0, 0);
bmp.close();
const imgData = ctx.getImageData(0, 0, width, height);
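For instance, here's a sketch (untested against your repo) of the same idea wrapped in a worker's message handler, reusing the pixel traversal from your readImgHeightMap:
// heightmap-worker.js (hypothetical file): everything happens off the main thread.
// imageURL must be CORS-accessible, like the crossOrigin handling in makeImg.
self.onmessage = async ({ data: imageURL }) => {
  const resp = await fetch(imageURL);
  if (!resp.ok) {
    self.postMessage({ error: "network error" });
    return;
  }
  const bmp = await createImageBitmap(await resp.blob());
  const canvas = new OffscreenCanvas(bmp.width, bmp.height);
  const ctx = canvas.getContext("2d");
  ctx.drawImage(bmp, 0, 0);
  bmp.close();
  const imgData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  // j, row -- z coordinate; i, column -- x coordinate; red channel -- height
  const heightmap = [];
  for (let j = 0, j0 = 0; j < imgData.height; j++, j0 += imgData.width * 4) {
    heightmap[j] = [];
    for (let i = 0, i0 = 0; i < imgData.width; i++, i0 += 4)
      heightmap[j][i] = imgData.data[j0 + i0];
  }
  self.postMessage({ data: heightmap, width: imgData.width, height: imgData.height });
};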
If required, you could also create the ImageBitmap from the <img> tag in the main thread, and transfer it to your Worker.
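A sketch of that variant (the element id and worker file name are made up for illustration):
// Main thread: decode the already-loaded <img> into an ImageBitmap and
// transfer it to the worker (the second postMessage argument lists transferables)
const imgEl = document.querySelector("#heightmapImg"); // hypothetical element
const worker = new Worker("heightmap-worker.js");      // hypothetical file
createImageBitmap(imgEl).then(bmp => {
  worker.postMessage({ bmp }, [bmp]);
});
// In the worker, event.data.bmp arrives ready for drawImage on an OffscreenCanvas.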
I am trying to generate an image using p5.js and p5.js-svg and then save the SVG xml as a dataURL for storage. Here is an example script that I've simplified for this question:
const canvasWidth = 600;
const canvasHeight = 600;
const bgColor = "#fffcf3";
let unencodedDataURL = "";
const sketch = (p) => {
p.setup = () => {
p.createCanvas(canvasWidth, canvasHeight, p.SVG);
p.background(bgColor);
p.noLoop();
}
p.draw = () => {
p.noFill();
p.ellipse(
canvasWidth / 2,
canvasHeight / 2,
150,
150
);
unencodedDataURL = p.getDataURL();
console.log(`inside: ${unencodedDataURL}`);
}
}
const test = new p5(sketch, document.body);
console.log(`outside: ${unencodedDataURL}`);
The first log statement prints the right dataURL. But of course the second log statement executes before the one inside draw() and I cannot figure out how to capture the dataURL correctly. I'm sure I am missing something in the p5.js or p5.js-svg libraries and there is an easier way. But I am stuck. Anyone have an idea here? Thanks in advance!
Because p5.js runs asynchronously, you can add a callback or write a Promise to get the data after the sketch draws, like this:
var handleDataCallback = null;
//........
p.ellipse(
canvasWidth / 2,
canvasHeight / 2,
150,
150
);
unencodedDataURL = p.getDataURL();
if (handleDataCallback) handleDataCallback(unencodedDataURL);
//....
//for callback
handleDataCallback = dataURL => {
//todo something with dataURL
}
If you want to write it as a function that generates the image:
function generateImage(param, callback) {
return new Promise(resolve => {
//......
// do something
p.ellipse(
canvasWidth / 2,
canvasHeight / 2,
150,
150
);
var unencodedDataURL = p.getDataURL();
if (callback) callback(unencodedDataURL);
resolve(unencodedDataURL);
// ....
})
}
//use Promise
generateImage(yourParam, null).then(unencodedDataURL=>{
console.log(`outside: ${unencodedDataURL}`);
})
//or callback
generateImage(yourParam, unencodedDataURL=> {
console.log(`outside: ${unencodedDataURL}`);
});
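Putting it together, here is a minimal sketch (assuming the p5.js-svg renderer is loaded, and reusing the getDataURL() call from your code) that resolves a Promise from inside draw():
const canvasWidth = 600;
const canvasHeight = 600;
let resolveDataURL;
const dataURLReady = new Promise(resolve => { resolveDataURL = resolve; });
const sketch = (p) => {
  p.setup = () => {
    p.createCanvas(canvasWidth, canvasHeight, p.SVG);
    p.noLoop();
  };
  p.draw = () => {
    p.noFill();
    p.ellipse(canvasWidth / 2, canvasHeight / 2, 150, 150);
    resolveDataURL(p.getDataURL()); // hand the result to the code outside the sketch
  };
};
new p5(sketch, document.body);
dataURLReady.then(dataURL => {
  console.log(`outside: ${dataURL}`);
});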
I'm trying to make a 'slitscan' effect.
I set up 2 canvases. The one on the right is for the source image, and the other is for the scan effect.
When I tried it with the text function it worked perfectly; however, with an image, the copy function gives me an error message.
This is the p5 code I wrote:
let a1, a2;
let cv1, cv2;
let dy = 0;
let s1 = function(p) {
p.setup = () => {
cv1 = p.createCanvas(300, 300);
p.background(150);
cv1.position(0, 0);
}
p.draw = () => {
p.copy(cv2, 0, dy, 400, 1, 0, dy, 400, 1);
if (dy > cv2.height) {
dy = 0;
} else {
dy += 1;
}
}
}
a1 = new p5(s1);
let s2 = function(p) {
let img;
p.preload = () => {
img = p.loadImage('https://pbs.twimg.com/profile_images/635910989498724356/uY4bc8q2.jpg');
}
p.setup = () => {
cv2 = p.createCanvas(300, 300);
cv2.position(300, 0);
}
p.draw = () => {
p.background(30);
p.imageMode(p.CENTER);
p.image(img, p.mouseX, p.mouseY);
}
}
a2 = new p5(s2);
In addition, if you have a better idea for making multiple canvases, or any suggestions about my code, please leave a comment.
Thanks.
In your s1 sketch, you try to do stuff with the canvas cv2. Yet this canvas is only created later, in your s2 sketch.
To fix it, just change the order of your two sketches, i.e., call a1 = new p5(s1); after a2 = new p5(s2); (it doesn't matter when you define s1 and s2, only when you instantiate them).
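In code, the only change is at the bottom of your file, something like:
a2 = new p5(s2); // create cv2 first
a1 = new p5(s1); // s1's draw() can now copy from cv2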
See, for example, this p5 editor sketch.
I am working on a project that makes projection of maps onto a point cloud possible in Potree.
We are loading the maps from OpenStreetMap's tile API and load more tiles as the user zooms in on the point cloud, in order to give the user a more detailed experience. We project the data from the tiles onto the point cloud using a Three.js Texture, which we extract from a canvas element where we place the tiles.
We have written all the code to get the tiles from OSM, cut them to the right size, and place them in the canvas. The solution works quite well for smaller point clouds, but as the area grows, more tiles/images are needed and the canvas element gets bigger (we are scaling the canvas to follow the zoom level in order to give better resolution), and we get into trouble with huge memory use (around 10 GB in the task manager, although when I investigate the memory use in the Chrome dev tools it is a lot lower). The big memory use causes the browser to become unresponsive and eventually die.
My question therefore is: is there some kind of memory leak in this code, or is there something you have to be very careful about when you use images and canvases?
The code:
Potree.MapTextureManager = class MapTextureManager {
constructor(projection, bbMin, bbMax) {
this.projection = projection;
this.bbMin = bbMin;
this.bbMax = bbMax;
this._mapCanvas = document.getElementById("texture");
let ratio = (bbMax[0] - bbMin[0]) / (bbMax[1] - bbMin[1]);
let minHeight = 256;
this._mapCanvas.width = minHeight * ratio;
this._mapCanvas.height = minHeight;
this._minWeb = proj4(swiss, WGS84, [this.bbMin[0], this.bbMin[1]]);
this._maxWeb = proj4(swiss, WGS84, [this.bbMax[0], this.bbMax[1]]);
this.updateTexture(this._minWeb, this._maxWeb);
this._cachedTexture = null;
this._drawnImages = [];
this.geometryNodeIds = new Set();
this._cachedTileImages = [];
this._currentMaxZoom = this.getTiles(this._minWeb, this._maxWeb)[0].zoom;
}
updateTextureFor(visibleNodes, matrixWorld) {
visibleNodes.forEach(visibleNode => {
if (!this.geometryNodeIds.has(visibleNode.geometryNode.id)) {
this.geometryNodeIds.add(visibleNode.geometryNode.id);
var swiss = proj4.defs("test");
var WGS84 = proj4.defs("WGS84");
let nodeBox = Potree.utils.computeTransformedBoundingBox(visibleNode.geometryNode.boundingBox, matrixWorld);
let minWeb = proj4(swiss, WGS84, [nodeBox.min.x, nodeBox.min.y]);
let maxWeb = proj4(swiss, WGS84, [nodeBox.max.x, nodeBox.max.y]);
this.updateTexture(minWeb, maxWeb);
}
});
}
updateTexture(minWeb, maxWeb) {
let canvasEl = this._mapCanvas;
let tiles = this.getTiles(minWeb, maxWeb);
let tilePromises = this.tilePromisesFor(tiles);
tilePromises.forEach(tilePromise => {
tilePromise.then(tileImage => {
if (tileImage.tile.zoom > this._currentMaxZoom) {
this.resizeCanvasTo(tileImage.tile.zoom);
}
this._cachedTileImages.push(tileImage);
this._cachedTileImages.sort((tileImage1, tileImage2) => {
if (tileImage1.tile.zoom >= tileImage2.tile.zoom) {
return 1;
} else {
return -1;
}
});
let myArray = this._cachedTileImages.filter((el) => !this._drawnImages.includes(el));
myArray.forEach(tileImage => {
// if (this._drawnImages.indexOf(tileImage) === -1) {
this.drawTileOnCanvas(canvasEl, tileImage.image, tileImage.tile);
this._drawnImages.push(tileImage);
// }
});
if (this._cachedTexture) {
this._cachedTexture.dispose();
this._cachedTexture = null;
}
});
});
}
get mapTexture() {
if (this._cachedTexture) {
return this._cachedTexture;
}
let texture = new THREE.CanvasTexture(this._mapCanvas);
texture.minFilter = THREE.LinearFilter;
texture.needsUpdate = true;
this._cachedTexture = texture;
return texture;
}
getTiles(minCoord, maxCoord, zoom = 1) {
let maxZoom = 18;
let minNumberOfTiles = 4;
let minX = this.long2tile(minCoord[0], zoom);
let minY = this.lat2tile(minCoord[1], zoom);
let maxX = this.long2tile(maxCoord[0], zoom);
let maxY = this.lat2tile(maxCoord[1], zoom);
let arrayX = [minX, maxX].sort();
let arrayY = [minY, maxY].sort();
let tiles = [];
for (var x = arrayX[0]; x <= arrayX[1]; x++) {
for (var y = arrayY[0]; y <= arrayY[1]; y++) {
tiles.push({ X: x, Y: y, zoom: zoom });
}
}
// We want at least minNumberOfTiles tiles per pointcloud node
if (tiles.length >= minNumberOfTiles || zoom === maxZoom) {
return tiles;
} else {
return this.getTiles(minCoord, maxCoord, zoom + 1);
}
}
tilePromisesFor(Tiles) {
return Tiles.map(function (tile, i) {
return new Promise((resolve, reject) => {
let image = new Image(256, 256);
image.crossOrigin = "Anonymous";
image.onload = function () {
let data = { tile: tile, image: image };
resolve(data);
}
image.src = "https://tile.openstreetmap.org" + "/" + tile.zoom + "/" + tile.X + "/" + tile.Y + ".png";
})
});
}
drawTileOnCanvas(canvas, image, tile) {
let ctx = canvas.getContext("2d");
ctx.drawImage(image, sX, sY, imageWidthToBeDrawn, imageHeightToBeDrawn, dX, dY, drawingWidth, drawingHeight);
image.src = "";
image = null;
}
resizeCanvasTo(zoom) {
let canvas = this._mapCanvas;
let multiplier = Math.pow(2, zoom - this._currentMaxZoom);
let ctx = canvas.getContext("2d");
// create a temporary canvas obj to cache the pixel data //
var temp_cnvs = document.createElement('canvas');
var temp_cntx = temp_cnvs.getContext('2d');
// set it to the new width & height and draw the current canvas data into it //
temp_cnvs.width = canvas.width * multiplier;
temp_cnvs.height = canvas.height * multiplier;
temp_cntx.drawImage(canvas, 0, 0);
// resize & clear the original canvas and copy back in the cached pixel data //
canvas.width = canvas.width * multiplier;
canvas.height = canvas.height * multiplier;
ctx.scale(multiplier, multiplier);
ctx.drawImage(temp_cnvs, 0, 0);
this._currentMaxZoom = zoom;
temp_cnvs = null;
temp_cntx = null;
}
};
I have tried to remove some of the unnecessary code, so please say if you are missing something.
I made a simple application which takes a photo from a video tag and makes it grayscale, available in full here: Canvas WebWorker PoC:
const photoParams = [
0, //x
0, //y
320, //width
240, //height
];
async function startVideo () {
const stream = await navigator.mediaDevices.getUserMedia({
audio: false,
video: true,
});
const video = document.querySelector('#video');
video.srcObject = stream;
video.play();
return video;
}
function takePhoto () {
const video = document.querySelector('#video');
const canvas = document.querySelector('#canvas');
canvas.width = 320;
canvas.height = 240;
const context = canvas.getContext('2d');
context.drawImage(video, ...photoParams);
const imageData = applyFilter({imageData:
context.getImageData(...photoParams)});
context.putImageData(imageData, 0, 0);
return canvas.toDataURL("image/png")
}
function setPhoto () {
const photo = takePhoto();
const image = document.querySelector('#image');
image.src = photo;
}
startVideo();
const button = document.querySelector('#button');
button.addEventListener('click', setPhoto);
In one of the functions, I placed a long, unnecessary for loop to make it really slow:
function transformPixels ({data}) {
let rawPixels;
const array = new Array(2000);
for (const element of array) {
rawPixels = [];
const pixels = getPixels({
data,
});
const filteredPixels = [];
for (const pixel of pixels) {
const average = getAverage(pixel);
filteredPixels.push(new Pixel({
red: average,
green: average,
blue: average,
alpha: pixel.alpha,
}));
}
for (const pixel of filteredPixels) {
rawPixels.push(...pixel.toRawPixels());
}
}
return rawPixels;
};
And I created a Web Worker version which, as I thought, should be faster because it does not block the main thread:
function setPhotoWorker () {
const video = document.querySelector('#video');
const canvas = document.querySelector('#canvas');
canvas.width = 320;
canvas.height = 240;
const context = canvas.getContext('2d');
context.drawImage(video, ...photoParams);
const imageData = context.getImageData(...photoParams);
const worker = new Worker('filter-worker.js');
worker.onmessage = (event) => {
const rawPixelsArray = [...JSON.parse(event.data)];
rawPixelsArray.forEach((element, index) => {
imageData.data[index] = element;
});
context.putImageData(imageData, 0, 0);
const image = document.querySelector('#image');
image.src = canvas.toDataURL("image/png");
}
worker.postMessage(JSON.stringify([...imageData.data]));
}
It can be run in this way:
button.addEventListener('click', setPhotoWorker);
The worker code is almost exactly the same as the single-threaded version, except for one thing: to improve messaging performance, a string is sent instead of an array of numbers:
worker.onmessage = (event) => {
const rawPixelsArray = [...JSON.parse(event.data)];
};
//...
worker.postMessage(JSON.stringify([...imageData.data]));
And inside filter-worker.js:
onmessage = (data) => {
const rawPixels = transformPixels({data: JSON.parse(data.data)});
postMessage(JSON.stringify(rawPixels));
};
The problem is that the worker version is always about 20-25% slower than the main thread version. At first I thought it might be the size of the message, but my laptop has a 640 x 480 camera, which gives 307,200 items - I don't think that is expensive enough to explain why 2000 for iterations lead to these results: main thread about 160 seconds, worker about 200 seconds. You can download the app from the GitHub repo and check it on your own. The pattern is quite consistent there - the worker is always 20-25% slower. Without using the JSON API, the worker needs something like 220 seconds to finish its job. The only reason I could think of is that the worker thread has a very low priority, and in my application, where the main thread does not have too much to do, it is simply slower - while in a real-world app, where the main thread might be busier, the worker would win. Do you have any ideas why the worker is so slow? Thank you for every answer.