I'm looking for a way to render graphics onto an HTML5 canvas using JavaScript, but I want to only render said graphics if they're inside a pre-defined mask.
I'm creating a GUI framework that can be used to easily and quickly create GUIs on an HTML5 canvas. Something that would be really nice to have is a way to render graphics inside an element and have the element auto-crop the graphics so that they always stay inside it. For example, I could make a rectangular element and animate a circular pulse inside of it; as the circle extends past the edge of the element, those parts of the circle should simply not render, to keep everything looking smooth and sharp. This is similar to what CSS does with overflow: hidden;
Now, I know that one option is to use a mask-like feature. For example, P5.js has mask(). However, this is very very slow. Masking a single element a single time using P5.js significantly reduces framerate, and I want to be doing this potentially hundreds of times per frame without frame drops. I know that CSS does this incredibly efficiently (from my own experience working with it), but I can't seem to think of any way to make it efficient on a canvas element.
I could do it pretty simply if it was just a rectangle, but I want to do this for any shape. For example, a circle, a star, a rectangle with rounded edges, or really any polygon at all.
How can this be done? I thought of rendering to an off-screen canvas (shrunk to the size of the element in question), drawing the element onto it in a single colour (say the background is white and the shape is black), rendering the image I want masked onto a second off-screen canvas of the same size, then looping through one of the image-data arrays and mapping one to the other based on whether each pixel on the mask canvas is white or black.
But I can't help thinking that that's going to be incredibly slow for the computer to process. I assume CSS somehow leverages the GPU to do this type of computation efficiently, and that's why it gets such an increase in performance. Is it possible for me to do the same, or am I just dreaming?
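Something like this rough sketch is what I had in mind (all names are made up for illustration; the mask canvas is painted white for the background and black for the shape):
function maskByPixels(maskCanvas, contentCanvas, targetCtx, x, y) {
    var w = maskCanvas.width, h = maskCanvas.height;
    var mask = maskCanvas.getContext("2d").getImageData(0, 0, w, h).data;
    var contentCtx = contentCanvas.getContext("2d");
    var content = contentCtx.getImageData(0, 0, w, h);
    var px = content.data;
    for (var i = 0; i < px.length; i += 4) {
        // white mask pixels are "outside the shape", so crop those content pixels away
        if (mask[i] > 127) px[i + 3] = 0;
    }
    contentCtx.putImageData(content, 0, 0);   // write the cropped pixels back
    targetCtx.drawImage(contentCanvas, x, y); // composite the result onto the target
}
It works, but the getImageData/putImageData round trips and the per-pixel JavaScript loop are exactly the parts I expect to be slow.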
Okay, so I have found two different ways of doing this (huge thank you to @Kaiido). One method uses ctx.clip(), while the other works with a CanvasPattern.
This snippet shows both methods in action:
<canvas id = "c" width = "400" height = "400"></canvas>
<canvas id = "c2" width = "400" height = "400"></canvas>
<script>
var canvas = document.getElementById("c");
var ctx = canvas.getContext("2d");
ctx.fillStyle = "yellow";
ctx.fillRect(0,0,400,400);
ctx.beginPath();
ctx.arc(200,200,100,0,6);
ctx.clip();
ctx.beginPath();// This clears our previous arc from the path so that it doesn't render in when we `fill()`
ctx.fillStyle = "rgb(255,0,0)";
for(var i = 0;i < 20;i++){
for(var j = 0;j < 40;j++){
ctx.rect(i * 20 + j % 2 * 10,j * 10,10,10);
}
}
ctx.fill();
</script>
<script>
var canvas2 = document.getElementById("c2");
var ctx2 = canvas2.getContext("2d");
ctx2.fillStyle = "orange";
ctx2.fillRect(0, 0, 400, 400);
var osc = new OffscreenCanvas(400, 400);
var oscctx = osc.getContext("2d");
oscctx.fillStyle = "rgb(255,0,0)";
for (var i = 0; i < 20; i++) {
    for (var j = 0; j < 40; j++) {
        oscctx.rect(i * 20 + j % 2 * 10, j * 10, 10, 10);
    }
}
oscctx.fill();
var pattern = ctx2.createPattern(osc, "no-repeat");
ctx2.fillStyle = pattern;
ctx2.beginPath();
ctx2.arc(200, 200, 100, 0, Math.PI * 2);
ctx2.fill();
</script>
Which one is more efficient and better to be run hundreds of times per frame?
Another edit:
I spent about an hour messing around with it on a sandbox website, and I made this small project:
https://www.khanacademy.org/computer-programming/-/6446241383661568
There I run each one every millisecond and watch how quickly each one updates, to gauge which appears more efficient. clip() is on top, while CanvasPattern is on the bottom. Both appear incredibly fast to me, and I feel that no matter which I choose I will get almost exactly the same results. However, clip() does still appear to be a bit faster as far as I can tell.
See for yourself and let me know what you think!
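If you'd rather time it than eyeball it, a rough harness like this (reusing ctx, ctx2 and pattern from the snippets above) gives ballpark numbers; browsers can defer the actual GPU work, so treat it as a sanity check only:
function timeIt(label, drawOnce, iterations) {
    var start = performance.now();
    for (var i = 0; i < iterations; i++) drawOnce();
    console.log(label + ": " + (performance.now() - start).toFixed(1) + " ms");
}
timeIt("clip()", function () {
    ctx.save();                              // so the clip region doesn't accumulate
    ctx.beginPath();
    ctx.arc(200, 200, 100, 0, Math.PI * 2);
    ctx.clip();
    ctx.fillStyle = "red";
    ctx.fillRect(0, 0, 400, 400);
    ctx.restore();
}, 5000);
timeIt("CanvasPattern", function () {
    ctx2.beginPath();
    ctx2.arc(200, 200, 100, 0, Math.PI * 2);
    ctx2.fillStyle = pattern;                // the pattern built from the offscreen canvas
    ctx2.fill();
}, 5000);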
I have been writing a little JavaScript plugin, and I am having trouble improving the overall quality of the canvas render. I have searched around the web but cannot find anything that makes sense.
The lines created from my curves are NOT smooth; if you look at the jsfiddle below you will see what I mean. It looks kind of pixelated. Is there a way to improve the quality? Or is there a canvas framework that already uses some method to auto-improve its quality that I could use in my project?
My Canvas Render
Not sure if this helps, but I am using this code at the start of my script:
var c = document.getElementsByClassName("canvas");
for (var i = 0; i < c.length; i++) {
    var canvas = c[i];
    var ctx = canvas.getContext("2d");
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.lineWidth = 1;
}
Thanks in advance
Example of my Curve Code:
var data = {
    diameter: 250,
    slant: 20,
    height: 290
};
for (var i = 0; i < c.length; i++) {
    var canvas = c[i];
    var ctx = canvas.getContext("2d");
    ctx.beginPath();
    ctx.moveTo(150 + ((data.diameter / 2) + data.slant), (data.height - 3));
    ctx.quadraticCurveTo(150, (data.height - 15), 150 - ((data.diameter / 2) + data.slant), (data.height - 3));
    ctx.lineTo(150 - ((data.diameter / 2) + data.slant), data.height);
    ctx.quadraticCurveTo(150, (data.height + 5), 150 + ((data.diameter / 2) + data.slant), data.height);
    ctx.closePath();
    ctx.stroke();
}
The Problem
This is one of those cases where it's almost impossible to get a smooth result without manually tweaking it.
The cause has to do with the minimal space available to distribute smoothing pixels. In this case we only have a single pixel of height between each section of the quadratic curve.
If we look at a curve with no smoothing, we can see this limitation more clearly (without smoothing, each pixel sits on an integer position):
The red line indicates a single section, and we can see that the transition between the previous and next section has to be distributed over the height of one pixel. See my answer here for how this works.
Smoothing is based on the remaining fraction of the point's coordinate when it is converted to an integer. Since smoothing uses this fraction to determine a colour and alpha (blending the stroke's main colour with the background colour) for the shaded pixel, we quickly run into limitations: each pixel used for smoothing occupies a whole pixel itself, and due to the lack of space here, the shades become very rough and therefore visible.
When a long line goes from y to y+/-1 (or x to x+/-1), not a single pixel between the end points lands on an exact pixel boundary, which means every pixel in between is a shade instead.
If we take a closer look at a couple of segments from the current line, we can see the shades more clearly and how they affect the result:
Additionally
Though this explains the principle in general, there is another problem (which I hinted at in the last paragraph of revision 1 of this answer a few days ago, but removed and forgot to go deeper into): lines drawn on top of each other will in general add contrast, because the semi-transparent smoothing pixels blend and in some places produce higher contrast.
You will have to go over the code and remove unneeded strokes so that you get a single stroke in each location. For instance, you have some closePath() calls that connect the end of a path with its beginning and draw double lines, and so forth.
A combination of these two should give a nice balance between smooth and sharp.
Smoothing test-bench
This demo allows you to see the effect for how smoothing is distributed based on available space.
The more bent the curve is, the shorter each section becomes and the less smoothing it requires. The result: a smoother-looking line.
var ctx = c.getContext("2d");
ctx.imageSmoothingEnabled =
    ctx.mozImageSmoothingEnabled =
    ctx.webkitImageSmoothingEnabled = false; // for the zoomed copy

function render() {
    ctx.clearRect(0, 0, c.width, c.height);
    // optionally offset by half a pixel (the "Translate 1/2 pixel" checkbox)
    if (t.checked) ctx.setTransform(1, 0, 0, 1, 0.5, 0.5);
    else ctx.setTransform(1, 0, 0, 1, 0, 0);
    ctx.beginPath();
    ctx.moveTo(0, 1);
    ctx.quadraticCurveTo(150, +v.value, 300, 1);
    ctx.lineWidth = +lw.value;
    ctx.strokeStyle = "hsl(0,0%," + l.value + "%)";
    ctx.stroke();
    vv.innerHTML = v.value;
    lv.innerHTML = l.value;
    lwv.innerHTML = lw.value;
    ctx.drawImage(c, 0, 0, 300, 300, 304, 0, 1200, 1200); // zoom
}
render();

v.oninput = v.onchange = l.oninput = l.onchange = t.onchange = lw.oninput = render;
html, body {margin:0;font:12px sans-serif} #c {margin-top:5px}
<label>Bend: <input id=v type=range min=1 max=290 value=1></label>
<span id=vv></span><br>
<label>Lightness: <input id=l type=range min=0 max=60 value=0></label>
<span id=lv></span><br>
<label>Translate 1/2 pixel: <input id=t type=checkbox></label><br>
<label>Line width: <input id=lw type=range min=0.25 max=2 step=0.25 value=1></label>
<span id=lwv></span><br>
<canvas id=c width=580></canvas>
Solution
There is no good solution unless the resolution can be increased, so we are stuck with tweaking colours and geometry to get a smoother result.
We can use a few tricks to work around this:
1) We can reduce the line width to 0.5 - 0.75 so that the colour gradient used for shading is less visible.
2) We can dim the colour to reduce the contrast.
3) We can translate by half a pixel. This works in some cases, not in others.
4) If sharpness is not essential, increasing the line width instead may help balance out the shades. An example value could be 1.5 combined with a lighter colour/gray.
5) We could use shadows too, but this is a performance-hungry approach: it uses more memory as well as a Gaussian blur, plus two extra compositing steps, and is relatively slow. I would recommend 4) instead.
1) and 2) are somewhat related, as using a line width below 1 forces sub-pixeling on the whole line, which means no pixel is pure black. The goal of both techniques is to reduce the contrast and camouflage the shade gradients, giving the illusion of a sharper/thinner line.
Note that 3) will only improve pixels that, as a result, land on an exact pixel boundary. All other cases will still be blurry. In this case the trick has little to no effect on the curve itself, but it serves well for the rectangles and the vertical and horizontal lines.
If we apply these tricks by using the test-bench above, we'll get some usable values:
Variation 1:
ctx.lineWidth = 1.25; // gives some space for lightness
ctx.strokeStyle = "hsl(0,0%,50%)"; // reduces contrast
ctx.setTransform(1,0,0,1,0.5,0.5); // not so useful for the curve itself
Variation 2:
ctx.lineWidth = 0.5; // sub-pixels all points
ctx.setTransform(1,0,0,1,0.5,0.5); // not so useful for the curve itself
We can fine-tune further by experimenting with line width and the stroke color/lightness.
An alternative is to produce a more accurate rendering of the curves in Photoshop or something similar, which has better smoothing algorithms, and use that as an image instead of stroking the curve natively.
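If you go that route, drawing the pre-rendered image is only a couple of lines; here is a sketch (the file name and the 2x scale are placeholders, and ctx is the target context):
var outline = new Image();
outline.onload = function () {
    // drawing a 2x-sized asset at half size lets the browser's resampler smooth the edges
    ctx.drawImage(outline, 0, 0, outline.width / 2, outline.height / 2);
};
outline.src = "cup-outline.png"; // placeholder file name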
This question struck me as a little odd. Canvas rendering, though not the best compared to high-end renderers, is still very good, so why is there such a problem with this example? I was about to leave it, but 500 points is worth another look. From that I can give two bits of advice, a solution, and an alternative.
First, designers and their designs must incorporate the limits of the medium. It may sound a little presumptuous, but you are trying to reduce the irreducible: you cannot get rid of aliasing on a bitmap display.
Second, always write neat, well-commented code. There are four answers here and no one picked out the flaw. That is because the presented code is rather messy and hard to understand. I am guessing the others (as I almost did) skipped your code altogether rather than work out what it was doing wrong.
Please Note
The quality of the images in all the answers to this question may be scaled (and thus resampled) by the browser. To make a true comparison, it is best to view the images on a separate page so that they are not scaled.
Results of my study of the problem, in order of quality (in my view):
Genetic algorithm
The best method I found to improve the quality, a method not normally associated with computer graphics, is to use a very simple form of a genetic algorithm (GA) to search for the best solution by making subtle changes to the rendering process.
Sub-pixel positioning, line width, filter selection, compositing, resampling and colour changes can make marked changes to the final result. This presents billions of possible combinations, any one of which could be the best. GAs are well suited to these types of searches, though in this case the fitness test was problematic: because the quality is subjective, the fitness test has to be subjective too, and thus requires human input.
After many iterations, and rather more of my time than I wanted to spend, I found a method very well suited to this type of image (many thin, closely spaced lines). The GA itself is not suitable for public release. Fortunately we are only interested in the fittest solution, and the GA produced a sequence of steps that are repeatable and consistent for the particular style it was run to solve.
The result of the GA search is below; see Note 1 for the processing steps.
The result is far better than I expected, so I had to take a closer look, and I noticed two distinct features that set this image apart from all the others presented in this and other answers.
Anti-aliasing is non-uniform. Where you would normally expect a uniform change in intensity, this method produces a stepped gradient (why this makes it look better, I do not know).
Dark nodes. Just where the transition from one row to the next is almost complete, the line below or above is rendered noticeably darker for a few pixels, then reverts to the lighter shade once the line fits the row. This seems to compensate for the lightening of the overall line as it shares its intensity across two rows.
This has given me some food for thought and I will see if these features can be incorporated directly into the line and curve scan line rendering.
Note 1
The method used to render the above image: an off-screen canvas of 1200 by 1200, rendered at scale 4 (ctx.setTransform(4,0,0,4,0,0)) with a pixel y offset of 3 (ctx.translate(0,3)). Line width 0.9 pixels, rendered twice on a white background. A 1-pixel photon-count blur repeated 3 times (similar to a 3x3 Gaussian convolution blur but with a little less weight along the diagonals). A 4x downsample done as a 2-step downsample using photon-count means (each pixel channel is the square root of the mean of the squares of the 4 sampled pixels in a 2-by-2 block). One sharpen pass (unfortunately a very complex custom sharpen filter, like some pin/pixel sharpen filters), then layered once with ctx.globalCompositeOperation = "multiply" and ctx.globalAlpha = 0.433, and finally captured onto the canvas. All processing was done in Firefox.
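As a rough illustration of the photon-count downsample step mentioned above (this is not the GA output itself, just a sketch of that one step; applying it twice gives the 4x downsample):
function photonCountDownsample(srcCanvas) {
    var w = srcCanvas.width, h = srcCanvas.height;
    var src = srcCanvas.getContext("2d").getImageData(0, 0, w, h).data;
    var dst = document.createElement("canvas");
    dst.width = w / 2;
    dst.height = h / 2;
    var dstCtx = dst.getContext("2d");
    var out = dstCtx.createImageData(dst.width, dst.height);
    for (var y = 0; y < dst.height; y++) {
        for (var x = 0; x < dst.width; x++) {
            var di = (y * dst.width + x) * 4;
            for (var c = 0; c < 4; c++) {
                var sum = 0;
                // sum the squares of the 2 by 2 block of source samples
                for (var dy = 0; dy < 2; dy++) {
                    for (var dx = 0; dx < 2; dx++) {
                        var si = ((y * 2 + dy) * w + (x * 2 + dx)) * 4 + c;
                        sum += src[si] * src[si];
                    }
                }
                // each channel is the square root of the mean of the squares
                out.data[di + c] = Math.sqrt(sum / 4);
            }
        }
    }
    dstCtx.putImageData(out, 0, 0);
    return dst;
}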
Code fix
The awful rendering result was actually caused by some minor rendering inconsistencies in your code.
Below are the before and after of the code fix. As you can see, there is a marked improvement.
So what did you do wrong?
Not too much; the problem is that you were rendering lines over the top of existing lines. This has the effect of increasing the contrast of those lines: the renderer does not know you don't want the existing colours, so it adds to the existing anti-aliasing, doubling the opacity and destroying the effect.
Below is your code with only the rendering of the shape. The comments show the changes.
ctx.beginPath();
// removed the top and bottom lines of the rectangle
ctx.moveTo(150, 0);
ctx.lineTo(150, 75);
ctx.moveTo(153, 0);
ctx.lineTo(153, 75);
// don't need closePath()
ctx.stroke();
ctx.beginPath();
ctx.moveTo((150 - (data.diameter / 2)), 80);
ctx.quadraticCurveTo(150, 70, 150 + (data.diameter / 2), 80);
ctx.lineTo(150 + (data.diameter / 2), 83);
ctx.quadraticCurveTo(150, 73, 150 - (data.diameter / 2), 83);
ctx.closePath();
ctx.stroke();
ctx.beginPath();
// removed the two quadratic curves that were drawing over the top of existing ones
ctx.moveTo(150 + (data.diameter / 2), 83);
ctx.lineTo(150 + ((data.diameter / 2) + data.slant), data.height);
ctx.moveTo(150 - ((data.diameter / 2) + data.slant), data.height);
ctx.lineTo(150 - (data.diameter / 2), 83);
// don't need closePath()
ctx.stroke();
ctx.beginPath();
// removed a curve
ctx.moveTo(150 + ((data.diameter / 2) + data.slant), (data.height - 3));
ctx.quadraticCurveTo(150, (data.height - 15), 150 - ((data.diameter / 2) + data.slant), (data.height - 3));
// don't need closePath()
ctx.stroke();
ctx.beginPath();
ctx.moveTo(150 + ((data.diameter / 2) + data.slant), data.height);
ctx.quadraticCurveTo(150, (data.height - 10), 150 - ((data.diameter / 2) + data.slant), data.height);
ctx.quadraticCurveTo(150, (data.height + 5), 150 + ((data.diameter / 2) + data.slant), data.height);
ctx.closePath();
ctx.stroke();
So now the render is much better.
Subjective eye
The code fix is, in my opinion, the best solution that can be achieved with minimal effort. As quality is subjective, below I present several more methods that may or may not improve the quality, depending on the eye of the judge.
DOWN SAMPLING
Another way of improving render quality is to down-sample. This involves simply rendering the image at a higher resolution and then re-rendering it at a lower resolution. Each pixel is then an average of 2 or more pixels from the original.
There are many down-sampling methods, but many are of no practical use due to the time they take to process the image.
The quickest down-sampling can be done via the GPU and native canvas render calls. Simply create an off-screen canvas at a resolution 2 or 4 times greater than required, then use the transform to scale the rendering up (so you don't need to change your rendering code). Then render that image back at the required resolution for the final result.
Example of downsampling using 2D API and JS
var canvas = document.getElementById("myCanvas"); // get onscreen canvas
// set up the up scales offscreen canvas
var display = {};
display.width = canvas.width;
display.height = canvas.height;
var downSampleSize = 2;
var canvasUp = document.createElement("canvas");
canvasUp.width = display.width * downSampleSize;
canvasUp.height = display.height * downSampleSize;
var ctx = canvasUp.getContext("2D");
ctx.setTransform(downSampleSize,0,0,downSampleSize,0,0);
// call the render function and render to the offscreen canvas
Once you have the image, just render it to your onscreen canvas:
ctx = canvas.getContext("2d");
ctx.drawImage(canvasUp,0,0,canvas.width,canvas.height);
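Tying the pieces together, a sketch of a full render-then-downsample cycle could look like this (renderScene stands for your existing drawing code and is assumed to take a context; canvasUp, canvas and downSampleSize come from the snippet above):
function renderDownsampled(renderScene) {
    var upCtx = canvasUp.getContext("2d");
    upCtx.setTransform(1, 0, 0, 1, 0, 0);
    upCtx.clearRect(0, 0, canvasUp.width, canvasUp.height);
    upCtx.setTransform(downSampleSize, 0, 0, downSampleSize, 0, 0);
    renderScene(upCtx);                    // unchanged drawing code, now at 2x scale

    var outCtx = canvas.getContext("2d");
    outCtx.clearRect(0, 0, canvas.width, canvas.height);
    outCtx.drawImage(canvasUp, 0, 0, canvas.width, canvas.height); // the downsample
}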
The following images show the result of 4x down-sampling while varying the line width from 1.2 pixels down to 0.9 pixels (note that the up-sampled line widths are 4x that: 4.8, 4.4, 4, and 3.6).
The next image is a 4x down-sample using Lanczos resampling, a reasonably quick resample (better suited to photographs).
Down-sampling is quick and requires very little modification to the original code. The resulting image will improve the look of fine detail and produce a slightly better anti-aliased line.
Down-sampling also allows much finer control of the (apparent) line width. Rendering at display resolution gives poor results for line-width changes under 1/4 pixel; with down-sampling you double or quadruple that precision to 1/8th or 1/16th of a pixel (keep in mind that other kinds of aliasing effects come into play when rendering at sub-pixel resolutions).
Dynamic Range
Dynamic range in digital media refers to the range of values the medium can handle. For the canvas, that range is 256 (8 bits) per colour channel. The human eye has a hard time picking the difference between two adjacent values, say 128 and 129, so this range is almost ubiquitous in computer graphics. Modern GPUs, though, can render at much higher dynamic ranges: 16-bit, 24-bit or 32-bit per channel, and even 64-bit double-precision floats.
The adequate 8-bit range is good for 95% of cases but suffers when the rendered image is forced into a lower dynamic range. This happens when you render a line on top of a colour that is close to the line colour. In the question, the image is rendered on a background that is not very bright (for example #888), so the anti-aliasing only has a range of 7 bits, halving the dynamic range. The problem is compounded if the image is rendered onto a transparent background, where the anti-aliasing is achieved by varying the alpha channel, which introduces a second level of artifacts.
When you keep dynamic range in mind you can design your imagery to get the best result (within the design constraints). When the background is known, don't render onto a transparent canvas and let the hardware composite the final screen output; render the background onto the canvas first, then render the design. Try to keep the dynamic range as large as possible: the greater the difference in colour, the better the anti-aliasing algorithm can deal with the intermediate colours.
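For example, a minimal sketch of that order of operations, assuming the page background is the #888 mentioned above and drawDesign is your own drawing routine:
var ctx = canvas.getContext("2d");
ctx.fillStyle = "#888";                          // the known page background
ctx.fillRect(0, 0, canvas.width, canvas.height); // paint it into the canvas first
ctx.strokeStyle = "#000";                        // keep the colour difference as large as possible
drawDesign(ctx);                                 // then render the design on top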
Below is an example of rendering onto various background intensities, rendered using 2x down-sampling onto a pre-rendered background. BG denotes the background intensity.
Please note that this image is too wide to fit the page and is down-sampled by the browser, thus adding extra artifacts.
TRUE TYPE like
While we are here, there is another method. Consider the screen as being made up of pixels; each pixel has 3 parts, red, green and blue, and we conventionally group them starting at red.
But it does not matter where a pixel starts; all that matters is that the pixel has the 3 colours RGB; it could just as well be GBR or BRG. Looking at the screen like this, you effectively get 3 times the horizontal resolution in regard to the edges of pixels. The pixel size is still the same, just offset. This is how Microsoft does its special font rendering (TrueType). Unfortunately Microsoft holds many patents on the use of this method, so all I can do is show you what it looks like when you render ignoring pixel boundaries.
The effect is most pronounced in the horizontal resolution and does not improve the vertical much. (Note this is my own implementation of canvas rendering, and it is still being refined.) This method also does not work for transparent images.
You should first understand what a slanted line is on a pixel matrix. If you need to draw a slanted line of single-pixel width, there is no way to prevent it from having jagged edges, since the slant is achieved via a progressing vertical pattern of horizontal segments.
The solution is to add some blur around the lines and make the line joins smoother.
You need to use shadowColor, shadowBlur, lineCap and lineJoin properties of the canvas context to achieve this.
Use the following setup and try drawing your lines:
for (var i = 0; i < c.length; i++) {
    var canvas = c[i];
    var ctx = canvas.getContext("2d");
    ctx.shadowColor = "rgba(0,0,0,1)";
    ctx.shadowBlur = 2;
    ctx.lineCap = 'round';
    ctx.lineJoin = 'round';
    ctx.lineWidth = 1;
    ctx.strokeStyle = 'black';
    ctx.clearRect(0, 0, canvas.width, canvas.height);
}
Here is the result
Try playing with the shadowColor opacity and the blur size together with the line width and color. You can get pretty amazing results.
On a side note, your project sounds more like a case for SVG than canvas to me. You should probably think about moving to SVG to get better drawing support and performance.
Update
Here is a fine adjustment
ctx.shadowColor = "rgba(128,128,128,.2)";
ctx.shadowBlur = 1;
ctx.lineCap = 'round';
ctx.lineJoin = 'round';
ctx.lineWidth = 1;
ctx.strokeStyle = 'gray';
Sorry I'm late to the party, but all of the answers here are overcomplicating things.
What you are actually seeing is the absence of gamma correction. Look at the Antialias 1 & 2 examples here: http://bourt.com/2014/ (you'll need to calibrate the gamma value for your monitor first), and this short explanation: https://medium.com/@alexbourt/use-gamma-everywhere-da027d9dc82f
The vectors are drawn as if in a linear color space, while the pixels exist in a gamma-corrected space. It's that simple. Unfortunately, Canvas has no gamma support, so you're kind of stuck.
There is a way to fix this, but you have to draw your stuff, then access the pixels directly and correct them for gamma yourself, like I did in those examples. Naturally, this is most easily done with simple graphics. For anything more complicated you need your own rendering pipeline which takes gamma into account.
(Because this argument invariably comes up, I'll address it now: it's better to err on the side of gamma than not. If you say "well, I don't know what the user monitor's gamma will be", and leave it at 1.0, the result WILL BE WRONG in almost all cases. But if you take an educated guess, say 1.8, then for a substantial percentage of users you will have guessed something close to what's correct for their monitor.)
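As a rough sketch of what "correct them for gamma yourself" can look like (the 1.8 exponent is just the educated guess mentioned above; adjust it, and the direction of the correction, for your own pipeline):
function applyGamma(ctx, gamma) {
    var img = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);
    var d = img.data;
    var inv = 1 / gamma;
    for (var i = 0; i < d.length; i += 4) {
        // correct only the colour channels; leave alpha untouched
        d[i]     = 255 * Math.pow(d[i]     / 255, inv);
        d[i + 1] = 255 * Math.pow(d[i + 1] / 255, inv);
        d[i + 2] = 255 * Math.pow(d[i + 2] / 255, inv);
    }
    ctx.putImageData(img, 0, 0);
}
// draw your lines first, then:
// applyGamma(ctx, 1.8);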
One reason for blurry lines is drawing in between pixels. This answer gives a good overview of the canvas coordinate system:
https://stackoverflow.com/a/3657831/4602079
One way of keeping integer coordinates but still getting crisp lines is to translate the context by 0.5 pixel:
context.translate(0.5,0.5);
Take a look at the snippet below. The second canvas is translated by (0.5, 0.5) making the line drawn with integer coordinates look crisp.
That should get your straight lines fixed. Curves, diagonal lines and so on will still be anti-aliased (gray pixels around the strokes); there is not much you can do about that. The higher the resolution, the less visible they are, and all lines except straight ones look better anti-aliased anyway.
function draw(ctx) {
    ctx.beginPath();
    ctx.moveTo(25, 30);
    ctx.lineTo(75, 30);
    ctx.stroke();
    ctx.beginPath();
    ctx.moveTo(25, 50.5);
    ctx.lineTo(75, 50.5);
    ctx.stroke();
}

draw(document.getElementById("c1").getContext("2d"));

var ctx = document.getElementById("c2").getContext("2d");
ctx.translate(0.5, 0.5);
draw(ctx);
<canvas id="c1" width="100" height="100" style="width: 100px; height: 100px"></canvas>
<canvas id="c2" width="100" height="100" style="width: 100px; height: 100px"></canvas>
Anti-aliasing helps a lot. But when you have angled lines that are close to horizontal or vertical, or that curve gently, the anti-aliasing is going to be much more noticeable, especially for thin lines with widths of a couple of pixels or less.
As Maciej points out, if you have a line that's around 1px wide and it passes directly between two pixels, anti-aliasing will result in a line that's two pixels wide and half-grey.
You may have to just learn to live with it. There is only so much that anti-aliasing can do.
I'm rendering a grid of cells, very much like the grid you find in a crossword puzzle, but using four different colors to fill each cell (not only black or white).
The grid size is about 160x120 cells, and I need to render it as fast as possible, as it will be used to display a cellular automaton animation.
I have tried two different approaches to render the grid:
Render each cell individually, using something like:
var w = x + step;
var h = y + step;
canvasContext.fillStyle=cell.color;
canvasContext.fillRect(x+1,y+1,w-1,h-1);
canvasContext.strokeRect(x,y,w,h);
Render all of the cells without borders, and then render the grid lines using:
var XSteps = Math.floor(width/step);
canvasContext.fillStyle = gridColor;
for (var i = 0, len=XSteps; i<len; i++) {
canvasContext.fillRect(i*step, 0, 1, height);
}
//Similar thing for Y coord
Both approaches perform poorly: in both cases it is slower to draw the grid than the cells. Am I missing something? How can I optimize these algorithms? Is there another way I should try?
Note: the grid moves, as the user can pan it or zoom the view.
The general question is: what is the fastest algorithm to draw a grid of cells on a canvas element?
The fastest way to do something is to not do it at all.
Draw your unchanging grid once on one canvas, and draw (and clear and redraw) your cellular automaton on another canvas layered above (or below) it. Let the browser (in all its native, compiled, optimized glory) handle the dirtying, redrawing and compositing for you.
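A minimal sketch of that layering, assuming two stacked canvas elements with the ids "grid" and "cells", your own drawCells routine, and the drawGrid helper shown further down:
var gridCanvas = document.getElementById("grid");   // static layer
var cellCanvas = document.getElementById("cells");  // animated layer stacked on top

drawGrid(gridCanvas.getContext("2d"), 8);           // draw the grid exactly once

var cellCtx = cellCanvas.getContext("2d");
function frame() {
    cellCtx.clearRect(0, 0, cellCanvas.width, cellCanvas.height);
    drawCells(cellCtx);                             // your automaton rendering
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);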
Or (better), if you are not going to change your grid size, just create a tiny image and let CSS tile it as the background.
Demo of CSS Background image to Canvas: http://jsfiddle.net/LdmFw/3/
Based on this excellent demo, here's a background-image grid created entirely through CSS; with this you can change the size as desired (in whole-pixel increments).
Demo of CSS3 Grid to Canvas: http://jsfiddle.net/LdmFw/5/
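If you'd rather not depend on the fiddle, the same idea can be set up from script using repeating linear-gradient backgrounds; a sketch (the cell size, colour and element id are placeholders):
var canvas = document.getElementById("cells");
canvas.style.backgroundImage =
    "linear-gradient(to right, #ccc 1px, transparent 1px)," +
    " linear-gradient(to bottom, #ccc 1px, transparent 1px)";
canvas.style.backgroundSize = "8px 8px";
// the automaton itself is then drawn onto the (otherwise transparent) canvas as usual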
If you must draw a grid, the fastest will be to just draw lines:
function drawGrid(ctx,size){
var w = ctx.canvas.width,
h = ctx.canvas.height;
ctx.beginPath();
for (var x=0;x<=w;x+=size){
ctx.moveTo(x-0.5,0); // 0.5 offset so that 1px lines are crisp
ctx.lineTo(x-0.5,h);
}
for (var y=0;y<=h;y+=size){
ctx.moveTo(0,y-0.5);
ctx.lineTo(w,y-0.5);
}
ctx.stroke(); // Only do this once, not inside the loops
}
Demo of grid drawing: http://jsfiddle.net/QScAk/4/
For m rows and n columns this requires m+n line draws in a single pass. Contrast this with drawing m×n individual rects and you can see that the performance difference can be quite significant.
For example, a 512×512 grid of 8×8 cells would take 4,096 fillRect() calls in the naive case, but only 128 lines need to be stroked in a single stroke() call using the code above.
It's really hard to help without seeing all the code to know where the performance is going, but just off the bat:
Instead of drawing the background grid using stroke calls, can you draw it with one call to drawImage? That would be much faster. If it's truly static, you can just set a CSS background-image on the canvas showing the grid you want.
You're using fillRect and strokeRect a lot; these can probably be replaced with several calls to rect() (the path command) and only a single call to fill() at the very end, so all the filled cells are rendered at once with a single filling (or stroking, or both) command (see the sketch after this list).
Set the fillStyle/strokeStyle as little as possible (not inside loops if you can avoid it).
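Here is a sketch of the batched-rect idea from the second point, assuming the cells are grouped by colour (cellsByColor is a made-up structure of the form { "#f00": [{x, y}, ...], ... }):
function fillCellsBatched(ctx, cellsByColor, step) {
    for (var color in cellsByColor) {
        ctx.fillStyle = color;                     // set the style once per colour
        ctx.beginPath();
        cellsByColor[color].forEach(function (cell) {
            ctx.rect(cell.x, cell.y, step, step);
        });
        ctx.fill();                                // one fill call for the whole batch
    }
}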
You are using fill to draw the lines; it would be faster, I think, to define a path and stroke it:
canvasContext.beginPath();
var XSteps = Math.floor(width / step);
canvasContext.strokeStyle = gridColor; // stroke() uses strokeStyle, not fillStyle
var x = 0;
for (var i = 0, len = XSteps; i < len; i++) {
    canvasContext.moveTo(x, 0);
    canvasContext.lineTo(x, height);
    x += step;
}
// similar for y
canvasContext.stroke();