JavaScript canvas: fixed lineWidth independent of the transformation

I am trying to draw a graph in a simple (x, y) plane.
I am using canvas transform to transform my coordinate system.
I am looking for a way to draw with a fixed pixel 'linewidth', independent of the x/y scale.
The problem is that when the x/y scale is large, the stroke along x or y disappears or is deformed, because the line width is scaled along with the coordinate system.
See example:
var c = document.getElementById("myCanvas");
var ctx = c.getContext("2d");
var y_scale = 10;
ctx.transform(1, 0, 0, 1/y_scale, 0, 0);
ctx.lineWidth = 10;
ctx.strokeRect(20, 20, 80, 100*y_scale);
Is there a way to fix the width independent of the transformation?
Of course I can do my own transformation matrix but I would rather not.

Same question: html5 canvas prevent linewidth scaling
Solution:
Apply transformation
Define path
Restore transformation
Stroke path
Code:
var y_scale=10;
ctx.beginPath();
ctx.save(); //save context
ctx.transform(1, 0, 0, 1 / y_scale, 0, 0);
ctx.rect(20, 20, 80, 100 * y_scale); // define rect
ctx.restore(); //restore context
ctx.lineWidth = 10;
ctx.stroke(); //stroke path
See example: https://jsfiddle.net/aj3sn7yv/2/

Instead of scaling the context you could scale your point series - no need for a complete matrix solution. This way you are just altering the "path" and not the rendering result.
var scale = 10;
var newPoints = scalePoints(points, scale); // scale or inverse: 1/scale
... plot new points here at the line-width you wish
// example function
function scalePoints(points, scale) {
  var newPoints = [];
  points.forEach(function(p) { newPoints.push(p * scale); });
  return newPoints;
}
Modify as needed.
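Applied to the question's y-only scaling, a variant for {x, y} point objects might look like this (a sketch; scalePointsY and the sample data are illustrative, and the actual plotting call is left out):

```javascript
// Scale the point series up front, then draw it with an untouched
// transform so that lineWidth stays in device pixels.
function scalePointsY(points, scale) {
  return points.map(function (p) {
    return { x: p.x, y: p.y * scale }; // scale only y; use scale or 1/scale as your mapping requires
  });
}

var points = [{ x: 0, y: 1 }, { x: 20, y: 3 }, { x: 40, y: 2 }];
var scaled = scalePointsY(points, 10);
// scaled[1] is { x: 20, y: 30 }
```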

Related

How to get the position of a rectangle in a canvas after transforming the canvas

I am trying to draw a rectangle with a label slightly above it on a canvas. The x, y, width, and height were generated to be around an object that was detected using the coco-ssd model in tensorflow. The problem is that the coordinates generated by the coco-ssd model in tensorflow are relative to a different origin than the canvas itself. More specifically, the origin for the coco-ssd model is at the top right corner, and the origin for the canvas is at the top left corner.
I am able to move the origin of the canvas, but not the model's origin (that I know of). To move the canvas' origin, I translated the canvas to the right 410px, 4px smaller than the width of the canvas, and then reflected it horizontally. This draws the rectangle at the correct position. If I were to also create the text at this point, it would be inverted and unreadable (but at the proper position). If it were possible to get the x and y position of the rectangle after translating the canvas back left 410px and reflecting it horizontally once more, I could easily use those coordinates to fill in the text at the proper position. From what I have learned about canvas, this is not possible. (Please correct me if I'm wrong.)
Another solution I considered would be to use the x position generated and to apply the formula -x + xLim, where xLim is the largest possible value of x. The problem here is that obtaining xLim is not possible either: it is not static, and it will change depending on the distance away from the detected object. I know this from trying to obtain what xLim could be by simply positioning the object on the leftmost side of the screen (the largest value of x that is currently viewable with respect to the coco-ssd model's origin). Keep in mind that if I create distance from the object, the value of x on the leftmost side of the screen will increase. If I were able to somehow grab the largest x value that is actively viewable on the canvas, then this would be another viable solution.
Here is the function in-charge of drawing to the canvas.
export const drawRect = (x, y, width, height, text, ctx, canvas) => {
ctx.clearRect(0, 0, canvas.current.width, canvas.current.height);
ctx.transform(-1, 0, 0, 1, 410, 0);
//draw rectangle
ctx.beginPath();
const r = ctx.rect(x,y,width,height)
ctx.stroke()
//draw text
ctx.save();
ctx.scale(-1,1);
ctx.translate(-410, 0)
//update x and y to point to where the rectangle is currently
ctx.fillText(text,x,y-5)
ctx.stroke()
ctx.restore()
}
I feel heavily limited by the API available in React Native and I hope that there is something I've simply overlooked. I've spent lots of time trying to resolve this issue and have found many related questions on Stack Overflow, but none of them gave insight as to how to solve this problem with so many unknown variables.
Here are some images to provide a visual of the issue at hand.
Leftmost
Rightmost
Without Restoring canvas to its original origin
SUMMARY:
The origin for the coco-ssd model is at the top right corner, and the origin for the canvas is at the top left corner.
I need to
A.)Somehow grab the largest x value that is actively viewable on the canvas
OR
B.) get the x and y position of the rectangle after translating the canvas back left 410px and reflecting it horizontally once more
This is in a react native expo environment
Public repo:
https://github.com/dpaceoffice/MobileAppTeamProject
The transforms that you are using seem unnecessary. Here is a simple proof of concept using the embed code for coco-ssd and an example on manipulating videos:
https://www.npmjs.com/package/@tensorflow-models/coco-ssd
https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Manipulating_video_using_canvas
The following code does not use any transforms and everything works as expected. The video, canvas and model all use the same coordinate system:
(0, 0) is top left corner of the frame
(width, 0) is top right
(0, height) is bottom left
(width, height) is bottom right
And the rectangle (x, y, width, height) also uses the familiar coordinates everywhere:
(x, y) is top left
(x + width, y) is top right
(x, y + height) is bottom left
(x + width, y + height) is bottom right
The only thing that is a bit different is the context.fillText(text, x, y) method:
(x, y) is the bottom left corner of the text. However, this is actually nice, because we can just draw the rectangle and the text with the same coordinates and the text will sit directly above the rectangle.
(x, y - fontSize) is usually the top left corner
(x, y + fontSize) is the coordinate for the next row.
If you want to place the text at a different position, the context.measureText(text) might be of interest. It returns a TextMetrics object. Usually, the .width property of this object is of most interest:
(x + context.measureText(text).width, y) is the bottom right corner of the text.
const processor = {}
processor.doLoad = function (model)
{
const video = document.getElementById("video")
this.video = video
this.canvas = document.getElementById("canvas")
this.context = this.canvas.getContext("2d")
this.model = model
video.addEventListener("play", () => {
this.canvas.width = video.videoWidth
this.canvas.height = video.videoHeight
this.timerCallback()
}, false)
}
processor.timerCallback = async function ()
{
if (this.video.paused || this.video.ended)
return
await this.computeFrame()
setTimeout(() => this.timerCallback(), 0)
};
processor.computeFrame = async function ()
{
// detect objects in the image.
const predictions = await this.model.detect(this.video)
const context = this.context
// draws the frame from the video at position (0, 0)
context.drawImage(this.video, 0, 0)
context.strokeStyle = "red"
context.fillStyle = "red"
context.font = "16px sans-serif"
for (const { bbox: [x, y, width, height], class: _class, score } of predictions)
{
// draws a rect with top-left corner of (x, y)
context.strokeRect(x, y, width, height)
// writes the class directly above (x, y), outside the rectangle
context.fillText(_class, x, y)
// writes the class directly below (x, y), inside the rectangle
context.fillText(score.toFixed(2), x, y + 16)
}
}
// Load the model.
const model = cocoSsd.load()
model.then(model => processor.doLoad(model))
<!-- Load TensorFlow.js. This is required to use the coco-ssd model. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<!-- Load the coco-ssd model. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/coco-ssd"></script>
<div style="display: flex; flex-flow: column; gap: 1em; width: 200px;">
<!-- Replace this with your video. Make sure CORS settings allow reading it! -->
<video id="video" src="https://mdn.github.io/dom-examples/canvas/chroma-keying/media/video.mp4" controls crossorigin="anonymous"></video>
<canvas id="canvas" style="border: 1px solid black;"></canvas>
</div>
As it seems that you want to draw the image into some part of the canvas, here is some code that showcases how to correctly use the transforms. There are two options here:
Use the transforms (1x translate + 1x scale), so that you can use the coordinate system from the image. Note that it does not require negative scales. The drawing is handled by the browser; you have to correct the font and line widths for the scaling.
Also use transforms (1x translate + 1x scale), then restore the canvas and transform the points manually. This involves some further math to transform the points. However, the upside is that you don't have to correct the font and line widths.
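For the second option, the math behind matrix.transformPoint for this particular transform (translate by (x, y), then scale by f) reduces to a one-liner; a sketch with illustrative names:

```javascript
// A point (px, py) in image coordinates maps to canvas coordinates
// under translate(x, y) followed by scale(f, f) as:
function toCanvas(px, py, x, y, f) {
  return { x: x + px * f, y: y + py * f };
}

// With the snippet's values (x = 50, y = 80, f = 220 / 1200), the
// image's left edge (px = 0) lands at canvas x = 50 and the right
// edge (px = 1200) lands at canvas x = 50 + 220 = 270.
var f = 220 / 1200;
var left = toCanvas(0, 0, 50, 80, f);
var right = toCanvas(1200, 0, 50, 80, f);
```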
const image = document.createElement("canvas")
image.width = 1200
image.height = 600
const image_context = image.getContext("2d")
image_context.fillStyle = "#4af"
image_context.fillRect(0, 0, 1200, 600)
const circle = (x, y, r) =>
{
image_context.beginPath()
image_context.arc(x, y, r, 0, Math.PI*2)
image_context.fill()
}
image_context.fillStyle = "#800"
image_context.fillRect(500-40/2, 400, 40, 180)
image_context.fillStyle = "#080"
circle(500, 400, 100)
circle(500, 300, 70)
circle(500, 220, 50)
const prediction = { bbox: [500-100, 220-50, 100*2, (400+180)-(220-50)], class: "tree", score: 0.42 } // somehow get a prediction
const canvas = document.getElementById("canvas")
const context = canvas.getContext("2d")
// we want to draw the big image into a smaller area (x, y, width, height)
const x = 50
const y = 80
const width = 220
const height = width * (image.height / image.width)
// debug: mark the area that we want to draw the image into
{
context.save() // save context, so that stroke properties can be restored
context.lineWidth = 5
context.strokeStyle = "red"
context.setLineDash([10, 10])
context.strokeRect(x, y, width, height)
context.restore()
}
{
// save context, before doing any transforms (even before setTransform)
context.save()
// Move top left corner to (x, y)
context.translate(x, y)
// This is the scale factor; it should be less than one, because the image is bigger than the target area. The idea is to increase the scale by the target area width, and then decrease it by the image width.
const f = width / image.width
context.scale(f, f)
// Draws the image, note that the coordinates are just (0, 0) without scaling.
context.drawImage(image, 0, 0)
// option 1: draw the prediction using the native transforms
if (true)
{
context.strokeStyle = "red"
context.fillStyle = "red"
context.lineWidth = 1 / f // line width and font size have to be adjusted for the scaling
const fontSize = 16 / f
context.font = fontSize.toFixed(0) + "px sans-serif"
const [p_x, p_y, p_width, p_height] = prediction.bbox
context.strokeRect(p_x, p_y, p_width, p_height) // draw the prediction
context.fillText(prediction.class, p_x, p_y) // draw the text
context.fillText(prediction.score.toFixed(2), p_x, p_y + fontSize) // draw the text
}
const matrix = context.getTransform() // save transform for option 2, needs to be done before restore()
context.restore()
// option 2: draw the prediction by manually transforming the corners
if (false)
{
context.save()
context.strokeStyle = "red"
context.fillStyle = "red"
context.lineWidth = 1
const fontSize = 16
context.font = fontSize + "px sans-serif"
let [p_x, p_y, p_width, p_height] = prediction.bbox
// manually transform corners
const topleft = matrix.transformPoint(new DOMPoint(p_x, p_y))
const bottomright = matrix.transformPoint(new DOMPoint(p_x + p_width, p_y + p_height))
p_x = topleft.x
p_y = topleft.y
p_width = bottomright.x - topleft.x
p_height = bottomright.y - topleft.y
context.strokeRect(p_x, p_y, p_width, p_height) // draw the prediction
context.fillText(prediction.class, p_x, p_y) // draw the text
context.fillText(prediction.score.toFixed(2), p_x, p_y + fontSize) // draw the text
context.restore()
}
}
<canvas id="canvas" width=400 height=300 style="border: 1px solid black;"></canvas>
Now, if you need to mirror the image using the transforms, you just have to make sure to mirror it back before writing the text.
const image = document.createElement("canvas")
image.width = 1200
image.height = 600
const image_context = image.getContext("2d")
image_context.fillStyle = "#4af"
image_context.fillRect(0, 0, 1200, 600)
const circle = (x, y, r) =>
{
image_context.beginPath()
image_context.arc(x, y, r, 0, Math.PI*2)
image_context.fill()
}
image_context.fillStyle = "#800"
image_context.fillRect(500-40/2, 400, 40, 180)
image_context.fillStyle = "#080"
circle(500, 400, 100)
circle(500, 300, 70)
circle(500, 220, 50)
const prediction = { bbox: [500-100, 220-50, 100*2, (400+180)-(220-50)], class: "tree", score: 0.42 } // somehow get a prediction
const canvas = document.getElementById("canvas")
const context = canvas.getContext("2d")
// we want to draw the big image into a smaller area (x, y, width, height)
const x = 50
const y = 80
const width = 220
const height = width * (image.height / image.width)
// debug: mark the area that we want to draw the image into
{
context.save() // save context, so that stroke properties can be restored
context.lineWidth = 5
context.strokeStyle = "red"
context.setLineDash([10, 10])
context.strokeRect(x, y, width, height)
context.restore()
}
{
// save context, before doing any transforms (even before setTransform)
context.save()
// Move top left corner to (x, y)
context.translate(x, y)
// This is the scale factor; it should be less than one, because the image is bigger than the target area. The idea is to increase the scale by the target area width, and then decrease it by the image width.
const f = width / image.width
context.scale(f, f)
// mirror the image before drawing it
context.scale(-1, 1)
context.translate(-image.width, 0)
// Draws the image, note that the coordinates are just (0, 0) without scaling.
context.drawImage(image, 0, 0)
// option 1: draw the prediction using the native transforms
if (true)
{
const [p_x, p_y, p_width, p_height] = prediction.bbox
// move to correct position and only then undo the mirroring
context.save()
context.translate(p_x + p_width, p_y) // move to top "right" (that is now actually at the left, due to mirroring)
context.scale(-1, 1)
context.strokeStyle = "red"
context.fillStyle = "red"
context.lineWidth = 1 / f // line width and font size have to be adjusted for the scaling
const fontSize = 16 / f
context.font = fontSize.toFixed(0) + "px sans-serif"
context.strokeRect(0, 0, p_width, p_height) // draw the prediction
context.fillText(prediction.class, 0, 0) // draw the text
context.fillText(prediction.score.toFixed(2), 0, 0 + fontSize) // draw the text
context.restore()
}
const matrix = context.getTransform() // save transform for option 2, needs to be done before restore()
context.restore()
// option 2: draw the prediction by manually transforming the corners
if (false)
{
context.save()
context.strokeStyle = "red"
context.fillStyle = "red"
context.lineWidth = 1
const fontSize = 16
context.font = fontSize + "px sans-serif"
let [p_x, p_y, p_width, p_height] = prediction.bbox
// manually transform corners; note that compared to the previous snippet, topleft now uses the top right corner of the rectangle
const topleft = matrix.transformPoint(new DOMPoint(p_x + p_width, p_y))
const bottomright = matrix.transformPoint(new DOMPoint(p_x, p_y + p_height))
p_x = topleft.x
p_y = topleft.y
p_width = bottomright.x - topleft.x
p_height = bottomright.y - topleft.y
context.strokeRect(p_x, p_y, p_width, p_height) // draw the prediction
context.fillText(prediction.class, p_x, p_y) // draw the text
context.fillText(prediction.score.toFixed(2), p_x, p_y + fontSize) // draw the text
context.restore()
}
}
<canvas id="canvas" width=400 height=300 style="border: 1px solid black;"></canvas>

Canvas: fit drawing to canvas size without changing the coordinates

Let's say we're dynamically drawing a fractal on a canvas. Since we don't know how big the fractal is going to be, at some point we'd need to scale (zoom out) the canvas to fit our fractal in there.
How do we do that? How to scale it:
Properly, so that it perfectly fits the drawing we have, and
So that the coordinates stay the same, and our fractal calculation doesn't need to use the scale value (meaning, return x, not return x * scale, if possible)
What if the fractal grows in all directions and we have negative values?
See the tiny example below.
var $canvas = document.querySelector('canvas'),
ctx = $canvas.getContext('2d'),
lastX = 0,
lastY = 0;
drawLoop();
function drawLoop() {
var newX = lastX + 30,
newY = lastY + 30;
ctx.beginPath();
ctx.moveTo(lastX, lastY);
ctx.lineTo(newX, newY);
ctx.stroke();
lastX = newX;
lastY = newY;
setTimeout(drawLoop, 1000);
}
<canvas width="100" height="100" style="border: 1px solid #ccc;"></canvas>
You can scale, translate, and rotate the drawing coordinates via the canvas transform.
If you have the min and max coordinates of your drawing, for example:
const min = {x: 100, y: 200};
const max = {x: 10009, y: 10000};
You can make it fit the canvas as follows
const width = canvas.width;
const height = canvas.height;
// get a scale to best fit the canvas
const scale = Math.min(width / (max.x - min.x), height / (max.y - min.y));
// get an origin so that the drawing is centered on the canvas
// (the scale and min must be folded into the translation)
const top = (height - (max.y - min.y) * scale) / 2 - min.y * scale;
const left = (width - (max.x - min.x) * scale) / 2 - min.x * scale;
// set the transform so that you can draw to the canvas
ctx.setTransform(scale, 0, 0, scale, left, top);
// draw something
ctx.strokeRect(min.x, min.y, max.x - min.x, max.y - min.y);
If you do not know the size of the drawing area at the start then you will need to save drawing coordinates as you go. When the min and max change you then recalculate the transform, clear the canvas and redraw. There is no other way if you do not know the size at the beginning.
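The fit-and-center computation can be captured in one pure helper (a sketch; fitTransform is an illustrative name). It folds min into the translation, so negative coordinates are handled and the drawing code keeps using its raw values:

```javascript
// Compute a transform that fits the bounding box [min, max] into a
// width x height canvas, centered, without changing drawing coordinates.
function fitTransform(min, max, width, height) {
  var scale = Math.min(width / (max.x - min.x), height / (max.y - min.y));
  // Center the scaled box, then shift so that drawing at `min` lands
  // at the top-left of the centered area.
  var left = (width - (max.x - min.x) * scale) / 2 - min.x * scale;
  var top = (height - (max.y - min.y) * scale) / 2 - min.y * scale;
  return { scale: scale, left: left, top: top };
}

// usage sketch:
// var t = fitTransform(min, max, canvas.width, canvas.height);
// ctx.setTransform(t.scale, 0, 0, t.scale, t.left, t.top);
```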

Why do stroked rectangles tend to go outside the canvas?

I've been experimenting with the <canvas> recently, and I noticed a strange behaviour when stroking rectangles near the origin (0, 0) of the canvas.
// canvas context
var ctx = document.querySelector('#screen').getContext('2d');
// draw a rectangle
ctx.fillStyle = 'orange';
ctx.fillRect(0, 0, 100, 100);
// stroke a border for the rectangle
ctx.lineWidth = 20;
ctx.strokeRect(0, 0, 100, 100);
<canvas id="screen"></canvas>
What went wrong?
In the example above, the rectangle itself was drawn at (0, 0) as intended, but its border (the stroked rectangle) seems to be drawn at an offset.
Generally, when stroking a rectangle at a position away from the origin, this effect goes unnoticed.
It seems the stroked rectangles aren't drawn starting exactly at the position specified, but at an offset.
Why is that?
The stroke is centered around the coordinates that your primitive is defined at. In the case of your rectangle with a stroke width of 20, drawing it at the top left of the canvas will cause half of the stroke's width to fall outside of the canvas boundary.
Adjusting the coordinates of strokeRect() to (10, 10, ...) offsets the rectangle from the canvas origin, so the full stroke of 20 pixels is visible at the top left of the canvas:
ctx.lineWidth = 20;
ctx.strokeRect(10, 10, 100, 100);
Consider the following adjustments, made to ensure the stroke is fully visible around the drawn rectangle:
var canvas = document.querySelector('#screen');
// Set the width and height to specify dimensions of canvas (in pixels)
// Choosing a 100x100 square matches the strokeRect() drawn below and
// therefore achieves the appearance of a symmetric stroke
canvas.width = 100;
canvas.height = 100;
// canvas context
var ctx = canvas.getContext('2d');
// draw a rectangle
ctx.fillStyle = 'orange';
ctx.fillRect(10, 10, 90, 90);
// stroke a border for the rectangle
ctx.lineWidth = 20;
var halfStroke = ctx.lineWidth * 0.5;
ctx.strokeRect(halfStroke, halfStroke, 100 - (halfStroke * 2), 100 - (halfStroke * 2));
<canvas id="screen"></canvas>
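The half-stroke inset used above generalizes to a small helper (insetRect is an illustrative name, a sketch):

```javascript
// Inset a rect by half the stroke width so that the full stroke
// stays inside the original bounds.
function insetRect(x, y, w, h, lineWidth) {
  var half = lineWidth / 2;
  return [x + half, y + half, w - lineWidth, h - lineWidth];
}

// insetRect(0, 0, 100, 100, 20) → [10, 10, 80, 80], so
// ctx.strokeRect.apply(ctx, insetRect(0, 0, 100, 100, 20))
// keeps the whole 20px stroke visible on a 100x100 canvas.
```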
Update
Here is a visualisation of the stroke in relation to the line/rectangle edge provided by Ibrahim Mahrir:

How to draw a complex transparent circle on Google Maps API

Recently I got a task that is to draw circles on my own website with Google Maps API.
The complexity is that the center of the circle represents a "signal transmitter", and I need to make the circle transparent, with the opacity of each pixel representing the signal intensity at that exact location.
My basic idea is to extend the "Overlay" of the Google Maps API, so I think I have to write it in JavaScript.
The key part is to draw a circle with gradually changing opacity (inner stronger, outer lighter) and, ideally, to be able to specify the opacity of each pixel.
I've been looking at approaches like CSS3, SVG, VML and even jQuery and AJAX, but still have no idea how to achieve this.
Thank you very much for your help!
It looks like you're going to have to manually set every pixel, if you want that level of control over the opacity. I'd use something like this:
var centerX = 100; // Center X coordinate of the circle.
var centerY = 100; // Center Y coordinate of the circle.
var radius = 25;   // Radius of the circle.
for (var x = -radius; x < radius; x++) {
  for (var y = -radius; y < radius; y++) {
    var hypotenuse = Math.sqrt((x * x) + (y * y)); // Distance from point (x, y) to the center of the circle.
    if (hypotenuse < radius) { // If the point falls within the circle
      var opacity = 1 - hypotenuse / radius; // 1 at the center, fading to 0 at the edge
      drawPointAt(x + centerX, y + centerY, colour, opacity); // You'll have to look up what function to use here, yourself.
    }
  }
}
Here's a small example of this code returning a circle.
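Since drawPointAt is left open above, one concrete way to set individual pixels is to fill a raw RGBA buffer and hand it to ctx.putImageData. A sketch (radialCircle is an illustrative name; the opacity is strongest at the center, matching the transmitter idea):

```javascript
// Fill an RGBA buffer (width * height * 4 bytes, the same layout as
// ImageData.data) with a blue circle whose alpha fades from the
// center (opaque) to the edge (transparent).
function radialCircle(width, height, cx, cy, radius) {
  var data = new Uint8ClampedArray(width * height * 4);
  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      var dx = x - cx, dy = y - cy;
      var dist = Math.sqrt(dx * dx + dy * dy);
      if (dist < radius) {
        var i = (y * width + x) * 4;
        data[i + 2] = 255; // blue channel
        data[i + 3] = Math.round(255 * (1 - dist / radius)); // alpha: inner strong, outer light
      }
    }
  }
  return data;
}

// In the browser:
// ctx.putImageData(new ImageData(radialCircle(200, 200, 100, 100, 80), 200, 200), 0, 0);
```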
Here is the solution I found. It makes use of the canvas element of HTML5 (which is widely supported).
Here is the JavaScript code for locating the canvas element and drawing the circle with gradually changing transparency. The key part is to use a radial gradient.
//get a reference to the canvas
var ctx = $('#canvas')[0].getContext("2d");
//draw a circle
var gradient = ctx.createRadialGradient(200, 200, 0, 200, 200, 200);
gradient.addColorStop(0, "blue");
gradient.addColorStop(1, "transparent");
ctx.fillStyle = gradient;
//ctx.beginPath();
ctx.arc(200, 200, 200, 0, Math.PI*2, true);
//ctx.closePath();
ctx.fill();

Overlapping shapes in canvas

PEN: https://codepen.io/jaredstanley/pen/gvmNye
var canvas = document.getElementById('c');
var ctx = canvas.getContext("2d");
var centerw = canvas.width/2;
var centerh = canvas.height/2;
var sq_w = 80;
//
ctx.beginPath();
//draw rectangle
ctx.rect(this.centerw-(sq_w/2), 0,sq_w, canvas.height);
//draw circle
ctx.arc(this.centerw, this.centerh, 185, 0, Math.PI * 2, true);
//fill
ctx.fill();
The shapes both draw but the intersection of the shapes is blank.
Looking to have one single, filled shape, but I get the following result:
REQUIREMENTS:
Cannot use CanvasRenderingContext2D.globalCompositeOperation as I'm using that for something else; this needs to be a single shape so I can use it to ...clip().
Note: when using two rect() calls it works, and when using two arc() calls it works, but mixing them seems to cause an issue.
Seems like it should be easy but I'm stumped, missing something basic I think. Thanks!
Path-direction matters
Simply remove (or set to false) the counter-clockwise flag on the arc() method, as it will otherwise define the path in the "opposite" direction, affecting the default non-zero winding algorithm used for filling:
//ctx.arc(this.centerw, this.centerh, 185, 0, Math.PI * 2, true); ->
ctx.arc(this.centerw, this.centerh, 185, 0, Math.PI * 2);
A Closer Look at "Non-Zero Winding"
According to the non-zero winding rule, we add up windings along a ray "sent out" from a point. For each path line that the ray crosses, we check the crossing line's direction and count +1 for one direction and -1 for the opposite direction, then add those together.
To illustrate:
For the illustration on the left, we can see that the sum of the directions of the first two line intersections (with the point placed at the left, centered on y) is 0 ("zero"), so there is no fill for the center section. The same would happen if a point sent a ray from the top center down through the shape.
However, in the illustration on the right, the sum is non-zero when we come to the inner section, so it too becomes filled.
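The counting rule described above can be sketched as a small standalone function (illustrative, not part of the canvas API; it casts a horizontal ray from the point and sums signed crossings):

```javascript
// Signed winding number of point p with respect to a polygon (array
// of {x, y}). Edges crossing the horizontal ray in one direction
// count +1, in the opposite direction -1; a non-zero sum means "filled".
function windingNumber(p, poly) {
  var wn = 0;
  for (var i = 0; i < poly.length; i++) {
    var a = poly[i], b = poly[(i + 1) % poly.length];
    if (a.y <= p.y) {
      if (b.y > p.y && isLeft(a, b, p) > 0) wn++;  // crossing one way, p left of edge
    } else {
      if (b.y <= p.y && isLeft(a, b, p) < 0) wn--; // crossing the other way, p right of edge
    }
  }
  return wn;
}

// Twice the signed area of triangle (a, b, p): the sign tells which
// side of the line a->b the point p lies on.
function isLeft(a, b, p) {
  return (b.x - a.x) * (p.y - a.y) - (p.x - a.x) * (b.y - a.y);
}
```

With two nested squares wound in opposite directions, the winding numbers inside the inner square cancel to zero, which is exactly why the overlapping region stays unfilled.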
Example: arc() uses clockwise direction instead
var canvas = document.getElementById('c');
var ctx = canvas.getContext("2d");
var centerw = canvas.width/2;
var centerh = canvas.height/2;
var sq_w = 120;
//
ctx.beginPath();
//draw rectangle
ctx.rect(centerw-(sq_w/2), 0,sq_w, canvas.height);
//draw circle
ctx.moveTo(centerw + 185, centerh); // create new sub-path (is unrelated, see below)
ctx.arc(centerw, centerh, 185, 0, Math.PI * 2); // <- don't use the CCW flag
//fill
ctx.fill();
<canvas id="c" width="500" height="500"></canvas>
Unrelated, but something to keep in mind: you would also want to create a new sub-path for the arc, to avoid the risk of a line being drawn from a corner of the rect to the start-angle point of the arc. Simply add this line before adding the arc:
ctx.moveTo(centerw + 185, centerh);
ctx.arc(centerw, centerh, 185, 0, Math.PI * 2);
ctx.beginPath();
//draw rectangle
ctx.rect(this.centerw - (sq_w / 2), 0, sq_w, canvas.height);
ctx.fill();
//draw circle
ctx.beginPath();
ctx.arc(this.centerw, this.centerh, 185, 0, Math.PI * 2, true);
//fill
ctx.fill();
The result you see happens because, with the default fill rule, a region enclosed by overlapping paths that wind in opposite directions sums to zero and is left unfilled.
var canvas = document.getElementById('c');
var ctx = canvas.getContext("2d");
var centerw = canvas.width/2;
var centerh = canvas.height/2;
var sq_w = 80;
//draw rectangle
ctx.fillRect(this.centerw-(sq_w/2), 0,sq_w, canvas.height);
//draw circle
ctx.arc(this.centerw, this.centerh, 185, 0, Math.PI * 2, true);
//fill
ctx.fill();
<canvas id="c" width="500" height="500"></canvas>
The shapes need to be filled one at a time; in the code snippet, I changed ctx.rect to ctx.fillRect.
Another approach would be to begin a new path before the arc.
