How can I make my JavaScript programs run faster? - javascript

OK, so I am working on a sort of detection system where I will be pointing the camera at a screen, and it will have to find the red object. I can successfully do this with pictures, but the problem is that it takes several seconds to process. I want to be able to do this with live video, so I need it to find the object immediately. Here is my code:
video.addEventListener('pause', function () {
  let reds = [];
  for (x = 0; x <= canvas.width; x++) {
    for (y = 0; y <= canvas.height; y++) {
      let data = ctx.getImageData(x, y, 1, 1).data;
      let rgb = [data[0], data[1], data[2]];
      if (rgb[0] >= rgb[1] && rgb[0] >= rgb[2] && !(rgb[0] > 100 && rgb[1] > 100 && rgb[2] > 100) && rgb[1] < 100 && rgb[2] < 100 && rgb[0] > 150) {
        reds[reds.length] = [x, y];
      }
      let addedx = 0;
      let addedy = 0;
      for (i = 0; i < reds.length; i++) {
        addedx = addedx + reds[i][0];
        addedy = addedy + reds[i][1];
      }
      let center = [addedx / reds.length, addedy / reds.length];
      ctx.rect(center[0] - 5, center[1] - 5, 10, 10);
      ctx.stroke();
    }
  }
}, 0);
Yeah, I know it's messy. Is there something about the for loops that makes them slow? I know I'm looping through thousands of pixels, but that's the only way I can think of to do it.

As has been said, JavaScript is not the most performant choice for this task. However, here are some things I noticed that could be slowing you down.
You grab the image data one pixel at a time. Since getImageData can return the whole frame, you only need to call it once.
Optimize your isRed condition:
rgb[0] >= rgb[1] && // \
rgb[0] >= rgb[2] && // >-- This is useless
!(rgb[0] > 100 && rgb[1] > 100 && rgb[2] > 100) && // /
rgb[1] < 100 && // \
rgb[2] < 100 && // >-- These 3 conditions imply the others
rgb[0] > 150 // /
You calculate the center inside your for loop after each pixel, but it would only make sense after processing the whole frame.
Since the video feed is coming from a camera, maybe you don't need to look at every single pixel. Maybe every 5 pixels is enough? That's what the example below does. Tweak this.
Demo including these optimizations
Note: this demo includes an adaptation of the code from this answer to copy the video onto the canvas.
const video = document.getElementById("video"),
      canvas = document.getElementById("canvas"),
      ctx = canvas.getContext("2d");
let width,
    height;

// To make this demo work
video.crossOrigin = "Anonymous";

// Set canvas to video size when known
video.addEventListener("loadedmetadata", function() {
  width = canvas.width = video.videoWidth;
  height = canvas.height = video.videoHeight;
});

video.addEventListener("play", function() {
  const $this = this; // Cache
  (function loop() {
    if (!$this.paused && !$this.ended) {
      ctx.drawImage($this, 0, 0);
      const reds = [],
            data = ctx.getImageData(0, 0, width, height).data,
            len = data.length;
      for (let i = 0; i < len; i += 5 * 4) { // 4 because data is made of RGBA values
        const rgb = data.slice(i, i + 3);
        if (rgb[0] > 150 && rgb[1] < 100 && rgb[2] < 100) {
          reds.push([i / 4 % width, Math.floor(i / 4 / width)]); // Get [x,y] from i
        }
      }
      if (reds.length) { // Can't divide by 0
        const sums = reds.reduce(function (res, point) {
          return [res[0] + point[0], res[1] + point[1]];
        }, [0, 0]);
        const center = [
          Math.round(sums[0] / reds.length),
          Math.round(sums[1] / reds.length)
        ];
        ctx.strokeStyle = "blue";
        ctx.lineWidth = 10;
        ctx.beginPath();
        ctx.rect(center[0] - 5, center[1] - 5, 10, 10);
        ctx.stroke();
      }
      setTimeout(loop, 1000 / 30); // Drawing at 30fps
    }
  })();
}, 0);
video, canvas { width: 250px; height: 180px; background: #eee; }
<video id="video" src="https://shrt-statics.s3.eu-west-3.amazonaws.com/redball.mp4" controls></video>
<canvas id="canvas"></canvas>

I would run the detection algorithm in a WebAssembly module. Since it is just pixel data, that's right up its alley.
You could then pass individual frames to a different instance of the wasm module.
As far as answering your question directly: I would grab the whole frame, not one pixel at a time, or you might get pixels sampled from different frames. You can then submit that frame to a worker; you could even divide up the frame and send the pieces to different workers (or, as previously mentioned, a wasm module).
Also, since you have an array, you can use Array.map and Array.reduce to narrow it down to just the red values, and find how big the red regions are by testing for adjacent pixels, instead of doing all the comparisons. I'm not sure it will be faster, but it's worth a try.
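To make the worker idea concrete, here is a rough sketch (not from the original answer; the file name red-worker.js and the message shape are my own assumptions) of posting a whole frame to a worker as a transferable buffer and getting the red centroid back:
// main thread: grab one frame and hand its buffer to the worker (transferred, not copied)
const worker = new Worker("red-worker.js");
function sendFrame(ctx, width, height) {
  const frame = ctx.getImageData(0, 0, width, height);
  worker.postMessage({ width, height, buffer: frame.data.buffer }, [frame.data.buffer]);
}
worker.onmessage = (e) => {
  console.log("center of red pixels:", e.data); // e.g. { x: 120, y: 80 }, or null if none found
};

// red-worker.js: scan the frame off the main thread and report the centroid
self.onmessage = (e) => {
  const { width, buffer } = e.data;
  const data = new Uint8ClampedArray(buffer);
  let sumX = 0, sumY = 0, count = 0;
  for (let i = 0; i < data.length; i += 4) {
    if (data[i] > 150 && data[i + 1] < 100 && data[i + 2] < 100) {
      sumX += (i / 4) % width;
      sumY += Math.floor(i / 4 / width);
      count++;
    }
  }
  self.postMessage(count ? { x: sumX / count, y: sumY / count } : null);
};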

For speed, you should consider your whole pipeline:
The closer your language is to machine language, the better your results will be. That said, C++ is better suited for this kind of algorithm.
CPU speed is your friend. Running your code on an Atom processor versus an i7 processor is like night and day. Moreover, some processors are dedicated to vision, like VPUs.
For your code:
You are trying to rewrite code that already exists. You can find good examples of colour detection in the great OpenCV library: https://www.learnopencv.com/invisibility-cloak-using-color-detection-and-segmentation-with-opencv
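For illustration only, a minimal sketch using OpenCV.js, assuming the library is already loaded as the global cv and the frame has been drawn to a canvas with id "canvas" (both assumptions, not taken from the linked article): threshold the red range and take the centroid from the image moments.
const src = cv.imread("canvas"); // RGBA Mat read from the canvas
const low = new cv.Mat(src.rows, src.cols, src.type(), [150, 0, 0, 0]);
const high = new cv.Mat(src.rows, src.cols, src.type(), [255, 100, 100, 255]);
const mask = new cv.Mat();
cv.inRange(src, low, high, mask);   // mask is non-zero where the pixel is "red enough"
const m = cv.moments(mask, true);   // treat the mask as a binary image
if (m.m00 > 0) {
  console.log("red center:", m.m10 / m.m00, m.m01 / m.m00);
}
[src, low, high, mask].forEach(mat => mat.delete()); // OpenCV.js Mats must be freed manually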
Hope it helps you :)

Related

Loading a tile map with better performance

So, I'm trying to create a game using JavaScript and the canvas 2D API (without any external libraries/frameworks). I plan to keep it simple for now. However, while creating it, I realized that I don't understand some concepts, especially how to load my tile-based map (created with the Tiled editor) efficiently.
I understand that for my small project it would be enough, but if I wanted to load a bigger tile map in my game loop, it could slow the performance down quite a bit. I found a pretty good solution using a worker thread, but I don't know if that would solve the problem. I was thinking of something that would only load the pixels visible on the screen, but again, I don't know if that would work or whether it would be an ideal solution.
Any solution to this problem would help me understand it better, and I would be grateful for it.
You should load your assets only once, before you need them (so at load time is usually a good place).
Then drawing the bitmap with the cropping options of drawImage() is generally good enough. If your tilemap is so big that it's a problem to draw it all like that, you may consider splitting it into multiple smaller files. If networking then becomes an issue, and you are targeting only recent browsers, you can use the createImageBitmap() method, which lets you do the cropping up front and stores only the cropped tile as a bitmap, making it a faster asset to use in drawImage():
(async () => {
  const blob = await fetch("https://upload.wikimedia.org/wikipedia/commons/6/68/BOE_tile_set.png")
    .then(resp => resp.ok && resp.blob());
  const tiles = [];
  const tileWidth = 28;
  const tileHeight = 36;
  for (let i = 0; i < 77; i++) {
    const x = i % 8;
    const y = Math.floor(i / 8);
    // create one ImageBitmap per tile
    const bmp = await createImageBitmap(blob, x * tileWidth, y * tileHeight, tileWidth, tileHeight);
    tiles.push({
      bmp,
      x: 0,
      y: 0,
      dirX: Math.random() * 2 - 1,
      dirY: Math.random() * 2 - 1
    });
  }
  const canvas = document.querySelector("canvas");
  const ctx = canvas.getContext("2d");
  draw();

  function draw() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    tiles.forEach((tile) => {
      tile.x = tile.x + tile.dirX;
      tile.y = tile.y + tile.dirY;
      if (tile.x < -tile.bmp.width) {
        tile.x = canvas.width;
      }
      if (tile.x > canvas.width) {
        tile.x = -tile.bmp.width;
      }
      if (tile.y < -tile.bmp.height) {
        tile.y = canvas.height;
      }
      if (tile.y > canvas.height) { // was "canvas.height + tile.height"; tile.height is undefined
        tile.y = -tile.bmp.height;
      }
      ctx.drawImage(tile.bmp, tile.x, tile.y);
    });
    requestAnimationFrame(draw);
  }
})().catch(console.error);
<canvas></canvas>

How to Create a Multidimensional PRNG?

I am working on a procedural terrain generator, but the 3D map is constantly morphing and changing, calling for at least 4D noise (5D if I need to make it loop). I haven't found a good Perlin/simplex noise library that works in this many dimensions, so I thought this would be a good time to learn how those algorithms work. After starting to make my own "Perlin" noise, I ran into a big problem: I need to get a pseudo-random value based on the nD coordinates of a point. So far I have found solutions online that use the dot product of a single point and a vector generated by the inputs, but those became very predictable very fast (I'm not sure why). I then tried a recursive approach (below), and this worked OK, but I got some weird behavior towards the edges.
Recursive 3d randomness attempt:
function Rand(seed = 123456, deg = 1) {
  let s = seed % 2147483647;
  s = s < 1 ? s + 2147483647 : s;
  while (deg > 0) {
    s = s * 16807 % 2147483647;
    deg--;
  }
  return (s - 1) / 2147483646;
}

function DimRand(seed, args) {
  if (args.length < 2) {
    return Rand(seed, args[0]);
  } else {
    let zero = args[0];
    args.shift();
    return DimRand(Rand(seed, zero), args);
  }
}

var T = 1;
var c = document.getElementById('canvas').getContext('2d');
document.getElementById('canvas').height = innerHeight;
document.getElementById('canvas').width = innerWidth;
c.width = innerWidth;
c.height = innerHeight;
var size = 50;

function display() {
  for (let i = 0; i < 20; i++) {
    for (let j = 0; j < 20; j++) {
      var bright = DimRand(89, [i, j]) * 255;
      c.fillStyle = `rgb(${bright},${bright},${bright})`;
      c.fillRect(i * size, j * size, size, size);
    }
  }
  T++;
}

window.onmousedown = () => { display(); };
And here is the result:
The top row was always 1 (white), the 2nd row and the first column were all 0 (black), and the 3rd row was always very dark (less than ≈ 0.3).
This might just be a bug, or I might have to just deal with it, but I was wondering if there was a better approach.
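For what it's worth, a sketch of the usual alternative to chaining Rand recursively (my own illustration, not from the post): mix the seed and every integer coordinate through an integer hash, so each n-dimensional point gets an independent-looking value in [0, 1) and no dimension is treated specially.
function hashRand(seed, coords) {
  let h = seed >>> 0;
  for (const c of coords) {
    h ^= (c | 0) + 0x9e3779b9 + (h << 6) + (h >>> 2); // hash-combine step
    h = Math.imul(h ^ (h >>> 15), 2246822519) >>> 0;  // extra avalanching
  }
  h ^= h >>> 13;
  h = Math.imul(h, 2654435761) >>> 0;
  h ^= h >>> 16;
  return (h >>> 0) / 4294967296; // map the 32-bit hash to [0, 1)
}
// Usage, mirroring DimRand(89, [i, j]) from the snippet above:
// const bright = hashRand(89, [i, j]) * 255;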

How can I avoid exceeding the max call stack size during a flood fill algorithm?

I am using a recursive flood fill algorithm in javascript and I am not sure how to avoid exceeding the max call stack size. This is a little project that runs in the browser.
I got the idea from here: https://guide.freecodecamp.org/algorithms/flood-fill/
I chose this algorithm because it's easy to understand and so far I like it because it's pretty quick.
x and y are the 2-d coordinates from the top-left, targetColor and newColor are each a Uint8ClampedArray, and id = ctx.createImageData(1,1); that gets its info from newColor.
function floodFill2(x, y, targetColor, newColor, id) {
  let c = ctx.getImageData(x, y, 1, 1).data;
  // if the pixel doesn't match the target color, end function
  if (c[0] !== targetColor[0] || c[1] !== targetColor[1] || c[2] !== targetColor[2]) {
    return;
  }
  // if the pixel is already the newColor, exit function
  if (c[0] === newColor[0] && c[1] === newColor[1] && c[2] === newColor[2]) {
    // this 'probably' means we've already been here, so we should ignore the pixel
    return;
  }
  // if the fn is still alive, then change the color of the pixel
  ctx.putImageData(id, x, y);
  // check neighbors
  floodFill2(x - 1, y, targetColor, newColor, id);
  floodFill2(x + 1, y, targetColor, newColor, id);
  floodFill2(x, y - 1, targetColor, newColor, id);
  floodFill2(x, y + 1, targetColor, newColor, id);
  return;
}
If the section is small, this code works fine. If the section is big, only a portion gets filled in and then I get the max call stack size error.
Questions
Is there something that doesn't make sense in the above code? (i.e. maybe an issue for code review?)
If the code looks OK, is it possible that I am simply using an algorithm that is inappropriate for flood filling a large section?
My hope for this question is to end up with a simple function, similar to the one above, that works even for a very large, oddly shaped region; but I suppose that is contingent on the generality of the algorithm. Like, am I trying to drive a nail with a screwdriver, kind of thing?
Use a stack, or: why recursion in JavaScript sucks.
Recursion is just a lazy man's stack. Not only is it lazy, it uses more memory and is far slower than a traditional stack.
To top it off (as you have discovered), recursion in JavaScript is risky, as the call stack is very small and you can never know how much of it has already been used when your function is called.
Some bottlenecks while we're here:
Getting image data via getImageData is an intensive task for many devices. It can take just as long to get 1 pixel as to get 65,000 pixels, so calling getImageData for every pixel is a very bad idea. Get all the pixels once and access them directly from RAM.
Use a Uint32Array so you can process a pixel in one step rather than having to check each channel in turn.
Example
Using a simple array as a stack, each item pushed to the stack is the index of a new pixel to fill. Rather than having to create a new execution context, a new local scope with associated variables, closures, and more, a single 64-bit number takes the place of a call stack entry.
See the demo further down for an alternative flood fill pixel search method.
function floodFill(x, y, targetColor, newColor) {
  const w = ctx.canvas.width, h = ctx.canvas.height;
  const imgData = ctx.getImageData(0, 0, w, h);
  const p32 = new Uint32Array(imgData.data.buffer);
  const channelMask = 0xFFFFFF;  // Masks out Alpha NOTE order of channels is ABGR
  const cInvMask = 0xFF000000;   // Mask out BGR
  const canFill = idx => (p32[idx] & channelMask) === targetColor;
  const setPixel = (idx, newColor) => p32[idx] = (p32[idx] & cInvMask) | newColor;
  const stack = [x + y * w]; // add starting pos to stack
  while (stack.length) {
    let idx = stack.pop();
    setPixel(idx, newColor);
    // for each direction check if that pixel can be filled and if so add it to the stack
    canFill(idx + 1) && stack.push(idx + 1); // check right
    canFill(idx - 1) && stack.push(idx - 1); // check left
    canFill(idx - w) && stack.push(idx - w); // check up
    canFill(idx + w) && stack.push(idx + w); // check down
  }
  // all done when stack is empty, so put pixels back to canvas and return
  ctx.putImageData(imgData, 0, 0);
}
Usage
Using the function is slightly different: id is not used, and the colors targetColor and newColor need to be 32-bit words with the red, green, blue, alpha order reversed.
For example, if targetColor was yellow = [255, 255, 0] and newColor was blue = [0, 0, 255], then reverse the RGB of each and call fill with:
const yellow = 0xFFFF;
const blue = 0xFF0000;
floodFill(x, y, yellow, blue);
Note that I am matching your function and completely ignoring alpha
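If it helps, here is a tiny hypothetical helper (not part of the original answer) to build those reversed 24-bit words from [r, g, b] arrays:
// Pack an [r, g, b] array into the reversed (BGR) word used above; alpha is ignored.
const toBGR = ([r, g, b]) => (b << 16) | (g << 8) | r;

toBGR([255, 255, 0]); // 0x00FFFF (yellow)
toBGR([0, 0, 255]);   // 0xFF0000 (blue)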
Inefficient algorithm
Note that this style of fill (mark up to 4 neighbors) is very inefficient, as many pixels will be marked for filling and, by the time they are popped from the stack, will already have been filled by another neighbor.
The following GIF best illustrates the problem, filling a 4 by 3 area with green:
First set the pixel green,
then push to the stack any right, left, up, down neighbor that is not green [illustrated by the red, orange, cyan, purple boxes],
pop the bottom item and set it to green,
repeat.
When a location that is already on the stack is added again, it is inset (just for illustration purposes).
Note that when all pixels are green there are still 6 items on the stack that need to be popped. I estimate that on average you will be processing about 1.6 times the number of pixels needed. For a large image (2000 squared) that is roughly 2 million extra pixels (a lot).
Using an array stack rather than call stack means
No more call stack overflows
Inherently faster code.
Allows for many optimizations
Demo
The demo is a slightly different version as your logic has some problems. It still uses a stack, but limits the number of entries pushed to the stack to be equal to the number of unique columns in the fill area.
Includes alpha in the pixel fill test and pixel write color. Simplifying the pixel read and write code.
Checks against the edges of the canvas rather than filling outside the canvas width (looping back AKA asteroids style)
Reads target color from the canvas at the first x,y pixel
Fills columns from the top most pixel in each column and only branching left or right if the previous left or right pixel was not the target color. This reduces the number of pixels to push the stack by orders of magnitude.
Click to flood fill
function floodFill(x, y, newColor) {
  var left, right, leftEdge, rightEdge;
  const w = ctx.canvas.width, h = ctx.canvas.height, pixels = w * h;
  const imgData = ctx.getImageData(0, 0, w, h);
  const p32 = new Uint32Array(imgData.data.buffer);
  const stack = [x + y * w]; // add starting pos to stack
  const targetColor = p32[stack[0]];
  if (targetColor === newColor || targetColor === undefined) { return } // avoid endless loop
  while (stack.length) {
    let idx = stack.pop();
    while (idx >= w && p32[idx - w] === targetColor) { idx -= w }; // move to top edge
    right = left = false;
    leftEdge = (idx % w) === 0;
    rightEdge = ((idx + 1) % w) === 0;
    while (p32[idx] === targetColor) {
      p32[idx] = newColor;
      if (!leftEdge) {
        if (p32[idx - 1] === targetColor) { // check left
          if (!left) {
            stack.push(idx - 1); // found new column to left
            left = true;
          }
        } else if (left) { left = false }
      }
      if (!rightEdge) {
        if (p32[idx + 1] === targetColor) { // check right
          if (!right) {
            stack.push(idx + 1); // new column to right
            right = true;
          }
        } else if (right) { right = false }
      }
      idx += w;
    }
  }
  ctx.putImageData(imgData, 0, 0);
  return;
}
var w = canvas.width;
var h = canvas.height;
const ctx = canvas.getContext("2d");
var i = 400;
const fillCol = 0xFF0000FF;
const randI = v => Math.random() * v | 0;

ctx.fillStyle = "#FFF";
ctx.fillRect(0, 0, w, h);
ctx.fillStyle = "#000";
while (i--) {
  ctx.fillRect(randI(w), randI(h), 20, 20);
  ctx.fillRect(randI(w), randI(h), 50, 20);
  ctx.fillRect(randI(w), randI(h), 10, 60);
  ctx.fillRect(randI(w), randI(h), 180, 2);
  ctx.fillRect(randI(w), randI(h), 2, 182);
  ctx.fillRect(randI(w), randI(h), 80, 6);
  ctx.fillRect(randI(w), randI(h), 6, 82);
  ctx.fillRect(randI(w), randI(h), randI(40), randI(40));
}
i = 400;
ctx.fillStyle = "#888";
while (i--) {
  ctx.fillRect(randI(w), randI(h), randI(40), randI(40));
  ctx.fillRect(randI(w), randI(h), randI(4), randI(140));
}

var fillIdx = 0;
const fillColors = [0xFFFF0000, 0xFFFFFF00, 0xFF00FF00, 0xFF00FFFF, 0xFF0000FF, 0xFFFF00FF];
canvas.addEventListener("click", (e) => {
  floodFill(e.pageX | 0, e.pageY | 0, fillColors[(fillIdx++) % fillColors.length]);
});
canvas {
position: absolute;
top: 0px;
left: 0px;
}
<canvas id="canvas" width="2048" height="2048">
Flood fill is a problematic process with respect to stack size requirements (be it the system stack or one managed on the heap): in the worst case you will need a recursion depth on the order of the image size. Such cases can occur when you binarize random noise, so they are not so improbable.
There is a version of flood filling that is based on filling whole horizontal runs in a single go (https://en.wikipedia.org/wiki/Flood_fill#Scanline_fill). It is advisable in general because it roughly divides the recursion depth by the average length of the runs and is faster in the "normal" cases. Anyway, it doesn't solve the worst-case issue.
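For reference, a minimal sketch of that horizontal-run (scanline) idea, my own illustration reusing the Uint32Array layout from the answer above rather than any particular library: each stack entry seeds one row, the whole run in that row is filled in one pass, and only the rows above and below are seeded.
function scanlineFill(p32, w, h, x, y, targetColor, newColor) {
  if (targetColor === newColor) return;
  const stack = [y * w + x];                  // seed pixel index
  while (stack.length) {
    let idx = stack.pop();
    if (p32[idx] !== targetColor) continue;   // may have been filled already
    const row = Math.floor(idx / w);
    let left = idx, right = idx;
    // grow the run to the left and right within this row
    while (left % w > 0 && p32[left - 1] === targetColor) left--;
    while (right % w < w - 1 && p32[right + 1] === targetColor) right++;
    for (let i = left; i <= right; i++) {
      p32[i] = newColor;                      // fill the whole run in one pass
      // seed the rows above and below the run
      if (row > 0 && p32[i - w] === targetColor) stack.push(i - w);
      if (row < h - 1 && p32[i + w] === targetColor) stack.push(i + w);
    }
  }
}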
There is also an interesting truly stackless algorithm as described here: https://en.wikipedia.org/wiki/Flood_fill#Fixed-memory_method_(right-hand_fill_method). But the implementation looks cumbersome.

JavaScript pixel by pixel canvas manipulation

I'm working on a simple web app which simplifies the colours of an uploaded image to a colour palette selected by the user. The script works, but it takes a really long time to loop through the whole image (for large images it's over a few minutes), changing the pixels.
Initially, I was writing to the canvas itself, but I changed the code so that changes are made to an ImageData object and the canvas is only updated at the end of the script. However, this didn't really make much difference.
// User selects colours:
colours = [[255, 45, 0], [37, 36, 32], [110, 110, 105], [18, 96, 4]];

function colourDiff(colour1, colour2) {
  difference = 0;
  difference += Math.abs(colour1[0] - colour2[0]);
  difference += Math.abs(colour1[1] - colour2[1]);
  difference += Math.abs(colour1[2] - colour2[2]);
  return difference;
}

function getPixel(imgData, index) {
  return imgData.data.slice(index * 4, index * 4 + 4);
}

function setPixel(imgData, index, pixelData) {
  imgData.data.set(pixelData, index * 4);
}

data = ctx.getImageData(0, 0, canvas.width, canvas.height);
for (i = 0; i < (canvas.width * canvas.height); i++) {
  pixel = getPixel(data, i);
  lowestDiff = 1024;
  lowestColour = [0, 0, 0];
  for (colour in colours) {
    colour = colours[colour];
    difference = colourDiff(colour, pixel);
    if (lowestDiff < difference) {
      continue;
    }
    lowestDiff = difference;
    lowestColour = colour;
  }
  console.log(i);
  setPixel(data, i, lowestColour);
}
ctx.putImageData(data, 0, 0);
During the entire process, the website is completely frozen, so I can't even display a progress bar. Is there any way to optimise this so that it takes less time?
There is no need to slice the array on each iteration (as Niklas has already stated).
I would loop over the data array instead of looping over the canvas dimensions, and edit the array directly.
for (let i = 0; i < data.length; i += 4) { // i += 4 to step over each r,g,b,a pixel
  let pixel = getPixel(data, i);
  ...
  setPixel(data, i, lowestColour);
}

function setPixel(data, i, colour) {
  data[i] = colour[0];
  data[i + 1] = colour[1];
  data[i + 2] = colour[2];
}

function getPixel(data, i) {
  return [data[i], data[i + 1], data[i + 2]];
}
Also, console.log can bring a browser to its knees if you've got the console open. If your image is 1920 x 1080, you will be logging to the console 2,073,600 times.
You can also pass all of the processing off to a Web Worker for ultimate threaded performance. Eg. https://jsfiddle.net/pnmz75xa/
One problem, or option for improvement, is clearly your slice function, which creates a new array every time it is called; you do not need that. I would change the for loop like so:
for (let y = 0; y < canvas.height; y++) {
  for (let x = 0; x < canvas.width; x++) {
    // directly alter the canvas' pixels
  }
}
Finding difference in color
I am adding an answer because you have used a very poor color match algorithm.
Finding how closely one color matches another is best done if you imagine each unique possible colour as a point in 3D space, with the red, green, and blue values as the x, y, z coordinates.
You can then use some basic geometry to compute the distance from one colour to another.
// the two colours as bytes 0-255
const colorDist = (r1, g1, b1, r2, g2, b2) => Math.hypot(r1 - r2, g1 - g2, b1 - b2);
It is also important to note that the channel value 0-255 is a compressed value; the actual intensity is close to that value squared (channelValue ** 2.2). That means that red = 255 is about 65025 times more intense than red = 1.
The following function is a close approximation of the colour difference between two colors. It avoids the Math.hypot function, as that is very slow.
const pallet = [[1, 2, 3], [2, 10, 30]]; // Array of [r, g, b] arrays representing
                                         // the colors you are matching
function findClosest(r, g, b) {
  var closest;
  var dist = Infinity;
  // square the channels to work with (approximately) linearized intensities
  r *= r;
  g *= g;
  b *= b;
  for (const col of pallet) {
    // Euclidean distance between the squared channel values
    const d = ((r - col[0] * col[0]) ** 2 + (g - col[1] * col[1]) ** 2 + (b - col[2] * col[2]) ** 2) ** 0.5;
    if (d < dist) {
      if (d === 0) { // exact match, return it right away
        return col;
      }
      closest = col;
      dist = d;
    }
  }
  return closest;
}
As for performance, your best bet is either a web worker, or using WebGL to do the conversion in real time.
If you want to keep it simple, you can prevent the code from blocking the page by cutting the job into smaller slices using a timer, giving the page some breathing room.
The example below uses setTimeout and performance.now() to do slices of at most 10 ms, letting other page events and rendering do their thing. It returns a promise that resolves when all pixels are processed.
function convertBitmap(canvas, maxTime) { // maxTime in ms (1/1000 second)
  return new Promise(allDone => {
    const ctx = canvas.getContext("2d");
    const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
    const data = pixels.data;
    var idx = data.length / 4;
    processPixels(); // start processing
    function processPixels() {
      const time = performance.now();
      while (idx-- > 0) {
        if (idx % 1024 === 0) { // check the elapsed time every 1024 pixels
          if (performance.now() - time > maxTime) {
            setTimeout(processPixels, 0); // yield to the page, continue in the next slice
            idx++;
            return;
          }
        }
        let i = idx * 4;
        const col = findClosest(data[i], data[i + 1], data[i + 2]);
        data[i++] = col[0];
        data[i++] = col[1];
        data[i] = col[2];
      }
      ctx.putImageData(pixels, 0, 0);
      allDone("Pixels processed");
    }
  });
}
// process pixels in 10ms slices.
convertBitmap(myCanvas, 10).then(mess => console.log(mess));

HTML5 Canvas works, then disappears when clicking anywhere

In the following code, the canvas works perfectly and shows on my map, but then it disappears when I click anywhere.
I tried all the possible solutions submitted here (on Stack Overflow), but with no luck; maybe my code has some error that causes this.
html code:
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/openlayers/4.4.2/ol.css" type="text/css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/openlayers/4.4.2/ol.js"></script>
<canvas id="myCanvas" width="1000" height="500"></canvas>
<button type="button" onclick="evt();">resolution</button>
js code:
image = new ol.layer.Tile({
  source: new ol.source.XYZ({
    projection: 'EPSG:4326',
    wrapX: false,
    url: 'image/{z}/{x}/{-y}.png'
  })
});
map.addLayer(image);
function evt() {
  var canvasContext = $('.ol-unselectable')[0].getContext('2d');
  var canvas = document.getElementById('myCanvas');
  var imgData = canvasContext.getImageData(0, 0, canvas.width, canvas.height);
  var imageWidth = imgData.width;
  var imageHeight = imgData.height;
  var pix = imgData.data;
  var l = pix.length;
  var i;
  for (i = 0; l > i; i += 4) {
    if (pix[i] >= 100 && pix[i] <= 200 && pix[i + 1] >= 100 && pix[i + 1] <= 200 && pix[i + 2] >= 100 && pix[i + 2] <= 200) {
      pix[i] = 255;
      pix[i + 1] = 0;
      pix[i + 2] = 0;
    }
  }
  canvasContext.putImageData(imgData, 0, 0);
};
We have an incomplete view of your application so some of the following remarks are going to rely on assumptions.
You're selecting the first element with the class .ol-unselectable, which is presumably an HTML canvas.
Then you're running through the pixels of that Canvas and changing colour values depending on whether a particular condition is met.
Then you're putting the pixels back onto the canvas you took them from.
There is much in your code which is unnecessary.
The following variables..
canvas
imageWidth
imageHeight
red
green
blue
x
y
... play no part in the function.
Your imgData variable holds the pixels of an image at the same dimensions as the #myCanvas element, where more usually one would take those dimensions from the source image; after all, it is these pixels which are being manipulated here.
If we remove all extraneous lines from the function it will look like this and still perform the same task.
/*
  manipulate the pixels of $('.ol-unselectable')[0]
*/
function evt (e) {
  var cnv = $('.ol-unselectable')[0],
      ctx = cnv.getContext('2d'),
      imgData = ctx.getImageData(0, 0, cnv.width, cnv.height),
      pix = imgData.data,
      l = pix.length,
      i = 0;
  for (; i < l; i += 4) {
    [
      pix[i],
      pix[i + 1],
      pix[i + 2]
    ].every(function (val) {
      return val >= 100 && val <= 200;
    }) && (
      pix[i] = 255,
      pix[i + 1] = 0,
      pix[i + 2] = 0
    );
  }
  ctx.putImageData(imgData, 0, 0);
}
I would also suggest that you register an eventListener to your button element rather than relying on an onclick HTML attribute. Assign it a unique id ...
<button id="reso">resolution</button>
... and then register an eventListener once the DOM is ready (the JQuery way)...
$("#reso").on("click", evt);
Now, I'm aware that this may not be the help you were looking for, but the irregular effects you describe are not apparent from your example code. I'm pretty sure you need those extra variables to do something - but from the question it isn't clear exactly what that 'something' might be.
Nonetheless, at least you now have a clean function that performs its single task a little more efficiently. ;)
More on Array.prototype.every # MDN.
Canvas Pixel manipulation # MDN
I got the solution: for a canvas on an OpenLayers map, you should add this to the end of the script:
map.on('postcompose', function(event) {
  evt(event.context);
});
Thank you all..
