I have a script that works with pixels in an HTML5 canvas element, and I'm seeing some strange behavior in Google Chrome (version 17.0.942.0 dev). I have two kinds of pixel operations:
changing the hue of the pixels inside a particular polygon
selecting pixels with a wand tool
The problem is that when I change the hue, the memory of that tab grows after every change, up to 250 MB (sometimes even more), and is then reset to the initial size. But when I use the wand-tool selection, the memory doesn't grow; it goes up or down depending on how many pixels are selected (which is normal behavior in my opinion). Please help me understand why memory grows after every change in the first case.
P.S. In Firefox there is no such issue, which is why I think this is Chrome-specific "strange" behavior.
Here is the hue-change code:
function isPointInPoly(poly, pt){
    for(var c = false, i = -1, l = poly.length, j = l - 1; ++i < l; j = i)
        ((poly[i].y <= pt.y && pt.y < poly[j].y) || (poly[j].y <= pt.y && pt.y < poly[i].y))
        && (pt.x < (poly[j].x - poly[i].x) * (pt.y - poly[i].y) / (poly[j].y - poly[i].y) + poly[i].x)
        && (c = !c);
    return c;
}

function changeHue(hue){
    var modifyCanvas = $("#canvas-modify").get(0);
    var modifyContext = modifyCanvas.getContext('2d');
    modifyContext.clearRect(0, 0, modifyCanvas.width, modifyCanvas.height);
    var imageData = $$.mainCanvasContext.getImageData(0, 0, $$.mainCanvasElem.width, $$.mainCanvasElem.height);
    for(var i = 0; i < imageData.data.length; i += 4) {
        var p = {x: (i / 4) % imageData.width, y: parseInt((i / 4) / imageData.width)};
        for(var j = 0; j < $$.globalSelection.length; j++){
            var poly = $$.globalSelection[j].slice(0, $$.globalSelection[j].length - 1);
            if(isPointInPoly(poly, p)) {
                var hsl = rgbToHsl(imageData.data[i], imageData.data[i+1], imageData.data[i+2]);
                hsl[0] = hue;
                var rgb = hslToRgb(hsl[0], hsl[1], hsl[2]);
                imageData.data[i] = rgb[0];
                imageData.data[i+1] = rgb[1];
                imageData.data[i+2] = rgb[2];
            } else {
                imageData.data[i] = 0;
                imageData.data[i+1] = 0;
                imageData.data[i+2] = 0;
                imageData.data[i+3] = 0;
            }
        }
    }
    modifyContext.putImageData(imageData, 0, 0);
}
It's not just you.
There have been off and on memory leaks in Chrome with anything canvas imageData related.
For instance:
http://code.google.com/p/chromium/issues/detail?id=51171
http://code.google.com/p/chromium/issues/detail?id=20067
etc.
Chromium's issue tracker policies are weird. The issues aren't necessarily fixed even though they close them.
It's possible it's a WebKit thing and not a Chrome thing, but I can't say for certain. All I can say is that you yourself aren't doing anything wrong.
Though while we're here, let me say that you should not be doing this on every call:
var modifyCanvas = $("#canvas-modify").get(0);
var modifyContext = modifyCanvas.getContext('2d');
Re-querying the DOM and re-getting the context every time hurts performance, especially if this operation happens often; look them up once and reuse them.
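For instance, a minimal sketch of that caching, using the same modifyCanvas/modifyContext names from the question:
// Looked up once, outside the function, then reused on every call.
var modifyCanvas = $("#canvas-modify").get(0);
var modifyContext = modifyCanvas.getContext('2d');

function changeHue(hue){
    // ... same pixel work as before, using the cached modifyContext ...
    modifyContext.clearRect(0, 0, modifyCanvas.width, modifyCanvas.height);
}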
Related
I made a project called "pixel paint" in JavaScript with the p5.js library, but when I run it, it runs too slowly. I don't know why, or how to make it run faster. Here is my code:
let h = 40, w = 64;
let checkbox;
let scl = 10;
let painting = new Array(h);
let brush = [0, 0, 0];

for(let i = 0; i < h; i++) {
    painting[i] = new Array(w);
    for(let j = 0; j < w; j++) {
        painting[i][j] = [255, 255, 255];
    }
}

function setup() {
    createCanvas(w * scl, h * scl);
    checkbox = createCheckbox('Show grid line', true);
    checkbox.changed(onChange);
}

function draw() {
    background(220);
    for(let y = 0; y < h; y++) {
        for(let x = 0; x < w; x++) {
            fill(painting[y][x]);
            rect(x * scl, y * scl, scl, scl);
        }
    }
    if(mouseIsPressed) {
        paint();
    }
}

function onChange() {
    if (checkbox.checked()) {
        stroke(0);
    } else {
        noStroke();
    }
}

function paint() {
    if(mouseX < w * scl && mouseY < h * scl) {
        let x = floor(mouseX / scl);
        let y = floor(mouseY / scl);
        painting[y][x] = brush;
    }
}
<!--Include-->
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.4.0/p5.min.js"></script>
Is there a solution to make my project run faster?
The code you have is easy to read.
It might not be worth optimising at this stage as it would make the code potentially needlessly more complex/harder to read and change in the future.
If you want to learn about different ways you could achieve the same thing, I can provide a few ideas, though for your particular use case they might not make a huge difference in terms of performance:
Instead of using painting as a nested [h][w] array you could use a flat [w * h] array and a single for loop instead of a nested for loop. This would be somewhat similar to using pixels[]. (You can convert x, y to an index (index = x + y * width) and the other way around (x = index % width, y = floor(index / width)); see the sketch after this list.)
You could in theory use a p5.Image, access pixels[] to draw into it and render using image() (ideally you'd get lower level access to the WebGL renderer to enable antialiasing if it's supported by the browser). The grid itself could be a texture() for a quad where you'd use vertex() to specify not only x,y geometry positions, but also u, v texture coordinates in tandem with textureWrap(REPEAT). (I posted an older repeat Processing example: the logic is the same and syntax is almost identical)
Similar to the p5.Image idea, you can cache the drawing using createGraphics(): e.g. only update the p5.Graphics instance when the mouse is dragged, otherwise render the cached drawing. Additionally you can make use of noLoop()/loop() to control when p5's canvas gets updated (e.g. loop() on mousePressed(), updated graphics on mouseMoved(), noLoop() on mouseReleased())
There are probably other methods too.
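As a rough sketch of the first idea (flat array plus index math; the painting/brush/scl names mirror the question, the rest is just illustrative):
let h = 40, w = 64, scl = 10;
let brush = [0, 0, 0];
// One flat array of w * h cells, each holding an [r, g, b] color.
let painting = new Array(w * h).fill().map(() => [255, 255, 255]);

function setup() {
    createCanvas(w * scl, h * scl);
}

function draw() {
    background(220);
    // Single loop over all cells; recover x, y from the index.
    for (let index = 0; index < painting.length; index++) {
        const x = index % w;
        const y = floor(index / w);
        fill(painting[index]);
        rect(x * scl, y * scl, scl, scl);
    }
    if (mouseIsPressed) {
        const x = floor(mouseX / scl);
        const y = floor(mouseY / scl);
        if (x >= 0 && x < w && y >= 0 && y < h) {
            painting[x + y * w] = brush; // x, y -> index
        }
    }
}
The win here is mostly simpler bookkeeping and fewer array lookups; the number of rect() calls stays the same, which is why this alone may not change performance much.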
While it's good to be aware of techniques to optimise your code, I strongly recommend not optimising until you need to; and when you do, use a profiler (DevTools has one) to focus only on the bits that are the slowest, rather than spending time and code readability on parts of the code where optimisation wouldn't really make an impact.
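As a rough illustration of that (not something from the answer above), you can bracket a suspect section with performance.now() before reaching for the full DevTools profiler; the h, w, scl and painting names are the ones from the question:
// Rough manual timing; DevTools' Performance panel gives the full breakdown.
const t0 = performance.now();
for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
        fill(painting[y][x]);
        rect(x * scl, y * scl, scl, scl);
    }
}
console.log(`grid redraw took ${performance.now() - t0} ms`);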
So I wrote a flood fill function that works like a paint-app bucket tool: you click inside a closed shape and it'll fill with a color.
I have two problems with it:
performance - let's say my canvas is 600*600 (360,000 pixels) and I draw a big circle in it that has, for example, about 100K pixels in it; it can take about 40(!!!) seconds to fill that circle! That's insane!
A square of exactly 10,000 pixels takes 0.4-0.5 seconds on average, but (I guess) since the sizes of the arrays used by the program grow so much, a square 10 times the size takes about 100 times as long to fill.
there's something weird about the filling. I'm not really sure how it happens, but it always leaves a few un-filled pixels. Not many at all, but it's really weird.
My flood fill function uses 4 helper functions: getting and setting a pixel's color, comparing two colors, and checking whether a pixel has already been checked before.
Here are all the functions:
getPixelColor = (x, y) => {
    let pixelColor = [];
    for (let i = 0; i < pixDens; ++i) {
        for (let j = 0; j < pixDens; ++j) {
            index = 4 * ((y * pixDens + j) * width * pixDens + (x * pixDens + i));
            pixelColor[0] = pixels[index];
            pixelColor[1] = pixels[index + 1];
            pixelColor[2] = pixels[index + 2];
            pixelColor[3] = pixels[index + 3];
        }
    }
    return pixelColor;
};

setPixelColor = (x, y, currentColor) => { // Remember to loadPixels() before using this function, and to updatePixels() after.
    for (let i = 0; i < pixDens; ++i) {
        for (let j = 0; j < pixDens; ++j) {
            index = 4 * ((y * pixDens + j) * width * pixDens + (x * pixDens + i));
            pixels[index] = currentColor[0];
            pixels[index + 1] = currentColor[1];
            pixels[index + 2] = currentColor[2];
            pixels[index + 3] = currentColor[3];
        }
    }
}

isDuplicate = (posHistory, vector) => {
    for (let i = 0; i < posHistory.length; ++i) {
        if (posHistory[i].x === vector.x && posHistory[i].y === vector.y) {
            return true;
        }
    }
    return false;
}

compareColors = (firstColor, secondColor) => {
    for (let i = 0; i < firstColor.length; ++i) {
        if (firstColor[i] !== secondColor[i]) {
            return false;
        }
    }
    return true;
}

floodFill = () => {
    loadPixels();
    let x = floor(mouseX);
    let y = floor(mouseY);
    let startingColor = getPixelColor(x, y);
    if (compareColors(startingColor, currentColor)) {
        return false;
    }
    let pos = [];
    pos.push(createVector(x, y));
    let posHistory = [];
    posHistory.push(createVector(x, y));
    while (pos.length > 0) {
        x = pos[0].x;
        y = pos[0].y;
        pos.shift();
        if (x <= width && x >= 0 && y <= height && y >= 0) {
            setPixelColor(x, y, currentColor);
            let xMinus = createVector(x - 1, y);
            if (!isDuplicate(posHistory, xMinus) && compareColors(getPixelColor(xMinus.x, xMinus.y), startingColor)) {
                pos.push(xMinus);
                posHistory.push(xMinus);
            }
            let xPlus = createVector(x + 1, y);
            if (!isDuplicate(posHistory, xPlus) && compareColors(getPixelColor(xPlus.x, xPlus.y), startingColor)) {
                pos.push(xPlus);
                posHistory.push(xPlus);
            }
            let yMinus = createVector(x, y - 1);
            if (!isDuplicate(posHistory, yMinus) && compareColors(getPixelColor(yMinus.x, yMinus.y), startingColor)) {
                pos.push(yMinus);
                posHistory.push(yMinus);
            }
            let yPlus = createVector(x, y + 1);
            if (!isDuplicate(posHistory, yPlus) && compareColors(getPixelColor(yPlus.x, yPlus.y), startingColor)) {
                pos.push(yPlus);
                posHistory.push(yPlus);
            }
        }
    }
    updatePixels();
}
I would really appreciate it if someone could help me solve the problems with these functions.
Thank you very much!!
EDIT: So I updated my flood fill function itself and removed an array of colors that I never used. This array was pretty large, and a few push() calls and a shift() were made on it on pretty much every run.
Unfortunately, the execution time is 99.9% the same for small shapes (for example, a fill of 10,000 pixels takes the same 0.5 seconds), but large fills, like 100,000 pixels, now take about 30 seconds instead of 40, so that's a step in the right direction.
I guess that RAM usage is down as well, since it was a pretty large array, but I didn't measure it.
The weird problem where it leaves un-filled pixels behind is still there as well.
A little suggestion:
You don't actually have to use the posHistory array to decide whether to set a pixel's color. If the current pixel has the same color as startingColor, set the color; otherwise don't. This has the same effect.
The posHistory array grows larger and larger during execution, so more and more work has to be done just to decide whether to fill a single pixel. I think this is likely the reason your code runs slowly.
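Here is a rough sketch of that idea, reusing the question's helpers and globals (getPixelColor, setPixelColor, compareColors, currentColor); treat it as an illustration rather than drop-in code:
floodFill = () => {
    loadPixels();
    let x = floor(mouseX);
    let y = floor(mouseY);
    let startingColor = getPixelColor(x, y);
    if (compareColors(startingColor, currentColor)) {
        return false;
    }
    let pos = [createVector(x, y)];
    while (pos.length > 0) {
        let p = pos.pop();
        if (p.x < 0 || p.x >= width || p.y < 0 || p.y >= height) {
            continue;
        }
        // Only fill (and spread from) pixels that still have the starting color.
        // Once a pixel is recolored it no longer matches startingColor, so it can
        // never be processed twice and no history array is needed.
        if (!compareColors(getPixelColor(p.x, p.y), startingColor)) {
            continue;
        }
        setPixelColor(p.x, p.y, currentColor);
        pos.push(createVector(p.x - 1, p.y));
        pos.push(createVector(p.x + 1, p.y));
        pos.push(createVector(p.x, p.y - 1));
        pos.push(createVector(p.x, p.y + 1));
    }
    updatePixels();
}
Using pop() instead of shift() also avoids re-indexing the whole array on every removal; the fill order changes, but the filled result is the same.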
As for the "weird thing":
This has also happened to me before. I think it's because the unfilled pixels do not have exactly the same color as startingColor. Say you draw a black shape on a white background: there will be some gray pixels (close to white) between the black and white parts. These pixels are there to smooth (anti-alias) the shape.
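A common way to cope with that (not part of the answer above, just a sketch) is to compare colors with a small tolerance instead of strict equality, so those near-white anti-aliased pixels still count as fillable:
// Treat two colors as equal if every channel differs by at most `tolerance`.
compareColors = (firstColor, secondColor, tolerance = 32) => {
    for (let i = 0; i < firstColor.length; ++i) {
        if (Math.abs(firstColor[i] - secondColor[i]) > tolerance) {
            return false;
        }
    }
    return true;
}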
I am working on a procedural terrain generator, but the 3D map is constantly morphing and changing, calling for at least 4D noise (5D if I need to make it loop). I haven't found a good Perlin/simplex noise library that works in that many dimensions, so I thought this would be a good time to learn how those algorithms work. After starting to make my own "Perlin" noise, I ran into a large problem: I need to get a pseudo-random value based on the n-dimensional coordinates of a point. So far I have found solutions online that use the dot product of a single point and a vector generated from the inputs, but those became very predictable very fast (I'm not sure why). I then tried a recursive approach (below), and this worked OK, but I got some weird behavior towards the edges.
Recursive 3d randomness attempt:
function Rand(seed = 123456, deg = 1){
    let s = seed % 2147483647;
    s = s < 1 ? s + 2147483647 : s;
    while(deg > 0){
        s = s * 16807 % 2147483647;
        deg--;
    }
    return (s - 1) / 2147483646;
}

function DimRand(seed, args){
    if(args.length < 2){
        return Rand(seed, args[0]);
    }else{
        let zero = args[0];
        args.shift();
        return DimRand(Rand(seed, zero), args);
    }
}
var T = 1;
var c = document.getElementById('canvas').getContext('2d');
document.getElementById('canvas').height = innerHeight;
document.getElementById('canvas').width = innerWidth;
c.width = innerWidth;
c.height = innerHeight;
var size = 50;

function display(){
    for(let i = 0; i < 20; i++){
        for(let j = 0; j < 20; j++){
            var bright = DimRand(89, [i, j]) * 255;
            c.fillStyle = `rgb(${bright},${bright},${bright})`;
            c.fillRect(i * size, j * size, size, size);
        }
    }
    T++;
}

window.onmousedown = () => { display(); };
And here is the result:
The top row was always 1 (white), the 2nd row and the first column were all 0 (black), and the 3rd row was always very dark (less than ≈ 0.3).
This might just be a bug, or I might just have to deal with it, but I was wondering if there is a better approach.
I've been working on this problem for some time now with little in the way of promising results. I am trying to split up an image into connected regions of similar color (basically, split a list of all the pixels into multiple groups, each group containing the coordinates of the pixels that belong to it and share a similar color).
For example:
http://unsplash.com/photos/SoC1ex6sI4w/
In this image, the dark clouds at the top would probably fall into one group, some of the grey rock on the mountain into another, some of the orange grass into another, the snow into another, the red of the backpack into another, and so on.
I'm trying to design an algorithm that will be both accurate and efficient (it needs to run in a matter of milliseconds on midrange laptop-grade hardware).
Below is what I have tried:
Using a connected-component-style algorithm that goes through every pixel from the top left, scanning every line of pixels from left to right (and comparing the current pixel to the pixel above and the pixel to the left). Using the CIEDE2000 color difference formula, if the pixel above or to the left is within a certain range, it is considered "similar" and part of the same group.
This sort of worked, but the problem is that it relies on color regions having sharp edges: if two color groups are connected by a soft gradient, the algorithm travels down that gradient and keeps "joining" pixels, because the difference between the individual pixels being compared is small enough for them to be considered "similar".
To try to fix this, I chose to set every visited pixel's color to the color of the most "similar" adjacent pixel (either above or to the left). If there are no similar pixels, it retains its original color. This somewhat fixes the issue of blurred boundaries or soft edges, because the first color of a new group is "carried" along as the algorithm progresses, and eventually the difference between that color and the currently compared color exceeds the "similarity" threshold, so the pixel is no longer part of that group.
Hopefully this is making sense. The problem is that neither of these options is really working. On the image above, what is returned is not clean groups but noisy, fragmented groups, which is not what I am looking for.
I'm not looking for code specifically - but more ideas as to how an algorithm could be structured to successfully combat this problem. Does anyone have ideas about this?
Thanks!
You could convert from RGB to HSL to make it easier to calculate the distance between the colors. I'm setting the color difference tolerance in the line:
if (color_distance(original_pixels[i], group_headers[j]) < 0.3) {...}
If you change 0.3, you can get different results.
See it working.
Please, let me know if it helps.
function hsl_to_rgb(h, s, l) {
    // from http://stackoverflow.com/questions/2353211/hsl-to-rgb-color-conversion
    var r, g, b;
    if (s == 0) {
        r = g = b = l; // achromatic
    } else {
        var hue2rgb = function hue2rgb(p, q, t) {
            if (t < 0) t += 1;
            if (t > 1) t -= 1;
            if (t < 1 / 6) return p + (q - p) * 6 * t;
            if (t < 1 / 2) return q;
            if (t < 2 / 3) return p + (q - p) * (2 / 3 - t) * 6;
            return p;
        }
        var q = l < 0.5 ? l * (1 + s) : l + s - l * s;
        var p = 2 * l - q;
        r = hue2rgb(p, q, h + 1 / 3);
        g = hue2rgb(p, q, h);
        b = hue2rgb(p, q, h - 1 / 3);
    }
    return [Math.round(r * 255), Math.round(g * 255), Math.round(b * 255)];
}

function rgb_to_hsl(r, g, b) {
    // from http://stackoverflow.com/questions/2353211/hsl-to-rgb-color-conversion
    r /= 255, g /= 255, b /= 255;
    var max = Math.max(r, g, b),
        min = Math.min(r, g, b);
    var h, s, l = (max + min) / 2;
    if (max == min) {
        h = s = 0; // achromatic
    } else {
        var d = max - min;
        s = l > 0.5 ? d / (2 - max - min) : d / (max + min);
        switch (max) {
            case r:
                h = (g - b) / d + (g < b ? 6 : 0);
                break;
            case g:
                h = (b - r) / d + 2;
                break;
            case b:
                h = (r - g) / d + 4;
                break;
        }
        h /= 6;
    }
    return [h, s, l];
}

function color_distance(v1, v2) {
    // from http://stackoverflow.com/a/13587077/1204332
    var i,
        d = 0;
    for (i = 0; i < v1.length; i++) {
        d += (v1[i] - v2[i]) * (v1[i] - v2[i]);
    }
    return Math.sqrt(d);
};

function round_to_groups(group_nr, x) {
    var divisor = 255 / group_nr;
    return Math.ceil(x / divisor) * divisor;
};

function pixel_data_to_key(pixel_data) {
    return pixel_data[0].toString() + '-' + pixel_data[1].toString() + '-' + pixel_data[2].toString();
}

function posterize(context, image_data, palette) {
    for (var i = 0; i < image_data.data.length; i += 4) {
        rgb = image_data.data.slice(i, i + 3);
        hsl = rgb_to_hsl(rgb[0], rgb[1], rgb[2]);
        key = pixel_data_to_key(hsl);
        if (key in palette) {
            new_hsl = palette[key];
            new_rgb = hsl_to_rgb(new_hsl[0], new_hsl[1], new_hsl[2]);
            image_data.data[i] = new_rgb[0];
            image_data.data[i + 1] = new_rgb[1];
            image_data.data[i + 2] = new_rgb[2];
        }
    }
    context.putImageData(image_data, 0, 0);
}

function draw(img) {
    var canvas = document.getElementById('canvas');
    var context = canvas.getContext('2d');
    context.drawImage(img, 0, 0, canvas.width, canvas.height);
    img.style.display = 'none';
    var image_data = context.getImageData(0, 0, canvas.width, canvas.height);
    var data = image_data.data;
    context.drawImage(target_image, 0, 0, canvas.width, canvas.height);
    data = context.getImageData(0, 0, canvas.width, canvas.height).data;
    original_pixels = [];
    for (i = 0; i < data.length; i += 4) {
        rgb = data.slice(i, i + 3);
        hsl = rgb_to_hsl(rgb[0], rgb[1], rgb[2]);
        original_pixels.push(hsl);
    }
    group_headers = [];
    groups = {};
    for (i = 0; i < original_pixels.length; i += 1) {
        if (group_headers.length == 0) {
            group_headers.push(original_pixels[i]);
        }
        group_found = false;
        for (j = 0; j < group_headers.length; j += 1) {
            // if a similar color was already observed
            if (color_distance(original_pixels[i], group_headers[j]) < 0.3) {
                group_found = true;
                if (!(pixel_data_to_key(original_pixels[i]) in groups)) {
                    groups[pixel_data_to_key(original_pixels[i])] = group_headers[j];
                }
            }
            if (group_found) {
                break;
            }
        }
        if (!group_found) {
            if (group_headers.indexOf(original_pixels[i]) == -1) {
                group_headers.push(original_pixels[i]);
            }
            if (!(pixel_data_to_key(original_pixels[i]) in groups)) {
                groups[pixel_data_to_key(original_pixels[i])] = original_pixels[i];
            }
        }
    }
    posterize(context, image_data, groups)
}

var target_image = new Image();
target_image.crossOrigin = "";
target_image.onload = function() {
    draw(target_image)
};
target_image.src = "http://i.imgur.com/zRzdADA.jpg";

canvas {
    width: 300px;
    height: 200px;
}
<canvas id="canvas"></canvas>
You can use "Mean Shift Filtering" algorithm to do the same.
Here's an example.
You will have to determine function parameters heuristically.
And here's the wrapper for the same in node.js
npm Wrapper for meanshift algorithm
Hope this helps!
The process you are trying to complete is called Image Segmentation and it's a well studied area in computer vision, with hundreds of different algorithms and implementations.
The algorithm you mentioned should work for simple images; however, for real-world images such as the one you linked to, you will probably need a more sophisticated algorithm, maybe even a domain-specific one (do all of your images contain a similar kind of view?).
I have little experience in Node.js, but from Googling a bit I found the GraphicsMagick library, which has a segment function that might do the job (I haven't verified this).
In any case, I would try looking for "image segmentation" libraries and, if possible, not limit myself to Node.js implementations, as that language is not the common choice for writing vision applications, as opposed to C++ / Java / Python.
I would try a different approach. Check out this description of how a flood fill algorithm could work (a rough code sketch follows the list):
1. Create an array to hold information about already colored coordinates.
2. Create a work list array to hold coordinates that must be looked at. Put the start position in it.
3. When the work list is empty, we are done.
4. Remove one pair of coordinates from the work list.
5. If those coordinates are already in our array of colored pixels, go back to step 3.
6. Color the pixel at the current coordinates and add the coordinates to the array of colored pixels.
7. Add the coordinates of each adjacent pixel whose color is the same as the starting pixel's original color to the work list.
8. Return to step 3.
The "search approach" is superior because it does not only search from left to right, but in all directions.
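Here is a minimal sketch of those steps against a canvas ImageData object (the function, the ctx variable, and the fillColor parameter are just assumptions for illustration, not taken from the question's code):
function floodFill(ctx, startX, startY, fillColor) {
    const { width, height } = ctx.canvas;
    const imageData = ctx.getImageData(0, 0, width, height);
    const data = imageData.data;
    const idx = (x, y) => (y * width + x) * 4;

    // The starting pixel's original color.
    const start = idx(startX, startY);
    const target = [data[start], data[start + 1], data[start + 2], data[start + 3]];
    const matches = (i) =>
        data[i] === target[0] && data[i + 1] === target[1] &&
        data[i + 2] === target[2] && data[i + 3] === target[3];

    const colored = new Uint8Array(width * height); // step 1: already-colored flags
    const work = [[startX, startY]];                // step 2: work list

    while (work.length > 0) {                       // step 3
        const [x, y] = work.pop();                  // step 4
        if (x < 0 || x >= width || y < 0 || y >= height) continue;
        if (colored[y * width + x]) continue;       // step 5
        const i = idx(x, y);
        if (!matches(i)) continue;                  // only spread over the original color
        data[i] = fillColor[0];                     // step 6: color and mark the pixel
        data[i + 1] = fillColor[1];
        data[i + 2] = fillColor[2];
        data[i + 3] = 255;
        colored[y * width + x] = 1;
        work.push([x - 1, y], [x + 1, y], [x, y - 1], [x, y + 1]); // step 7
    }
    ctx.putImageData(imageData, 0, 0);
}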
You might look at k-means clustering.
http://docs.opencv.org/3.0-beta/modules/core/doc/clustering.html
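If you want a feel for what that does, here is a rough, hedged sketch of k-means over pixel colors in plain JavaScript (naive initialisation, fixed iteration count; nothing here comes from the linked OpenCV docs):
// Cluster an array of [r, g, b] pixels into k color groups (naive k-means sketch).
function kmeans(pixels, k, iterations = 10) {
    // Use the first k pixels as initial centroids (assumes pixels.length >= k).
    let centroids = pixels.slice(0, k).map(p => p.slice());
    let labels = new Array(pixels.length).fill(0);

    const dist2 = (a, b) =>
        (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2;

    for (let iter = 0; iter < iterations; iter++) {
        // Assignment step: attach each pixel to its nearest centroid.
        for (let i = 0; i < pixels.length; i++) {
            let best = 0, bestD = Infinity;
            for (let c = 0; c < k; c++) {
                const d = dist2(pixels[i], centroids[c]);
                if (d < bestD) { bestD = d; best = c; }
            }
            labels[i] = best;
        }
        // Update step: move each centroid to the mean color of its pixels.
        const sums = Array.from({ length: k }, () => [0, 0, 0, 0]);
        for (let i = 0; i < pixels.length; i++) {
            const s = sums[labels[i]];
            s[0] += pixels[i][0]; s[1] += pixels[i][1]; s[2] += pixels[i][2]; s[3]++;
        }
        for (let c = 0; c < k; c++) {
            if (sums[c][3] > 0) {
                centroids[c] = [sums[c][0] / sums[c][3], sums[c][1] / sums[c][3], sums[c][2] / sums[c][3]];
            }
        }
    }
    return { centroids, labels };
}
Note that k-means groups pixels by color only; to get spatially connected regions you would still need something like a connected-components pass over the resulting labels.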
I have a "bubble generator" that is mostly working, but it is not properly clearing the bubbles and I can't figure out why. I've been staring at this for a while now. Specifically, some of the bubbles are getting "cleared" as they float up, others aren't, and I can't see why. ARGH!
http://jsfiddle.net/Dud2q/7/ (slowed waaay down so that you can easily watch a single bubble)
Logic flow (this just describes the code in the fiddle):
Create an imageData array (long list of pixels)
imgData = ctx.getImageData(0, 0, w, h);
push new random bubbles onto the beginning of the "bubbles" array, which is drawn bottom-up:
for(var i = 0, l = generators.length; i < l; i++){
    for(var j = 0, m = 0|Math.random()*6; j < m; j++){
        newBubbles.push( 0|generators[i] + j );
    }
    generators[i] = Math.max(0, Math.min(w, generators[i] + Math.random()*10 - 5));
}
bubbles.unshift(newBubbles);
loop all bubbles to be drawn and:
clear the bubbles that were drawn in the last loop by setting alpha channel to 0 (transparent):
if(i < (l-1)){
    x = 0|bubbles[i+1][j];
    offset = y * w * 4 + x * 4;
    pixels[offset+3] = 0;
}
draw new bubbles (offset+1 = g, offset+2 = b, offset+3 = alpha):
x = 0|(bubbles[i][j] += Math.random() * 6 - 3);
offset = y * w * 4 + x * 4;
pixels[offset+1] = 0x66;
pixels[offset+2] = 0x99;
pixels[offset+3] = 0xFF;
Increasing the number of generators seems to make it work.
Eg. 50..times(createBubbleGenerator); almost works.
Cheers!!!