How to draw other shapes or lines in a `rot-js` canvas?

I'm using rot-js to draw a grid of hexagons and want to add triangles and other shapes to the canvas. I've tried drawing on the canvas returned by display.getContainer(), but that's not working. What needs to be done to get this to work?
With ctx.globalCompositeOperation = "xor", I can see my shapes being drawn (but the colors are all wrong).
Setting ctx.globalAlpha = .8 also makes everything visible to an extent, so I'm thinking this has something to do with layers.
If I work directly on an existing canvas element, drawing works fine.

Hard to tell for sure since you didn't show what you are actually doing, but given the description, my guess is that you are not waiting for the next frame before doing your own drawing over what the library renders.
The library queues all its rendering operations in a requestAnimationFrame callback, so if you draw before that callback runs, your drawings will be covered by the library's.
To work around this, simply wrap your own drawing operations in a requestAnimationFrame callback: it will be queued after the library's callback, so your shapes will be drawn on top.
const display = new ROT.Display();
const canvas = display.getContainer();
const ctx = canvas.getContext('2d');
document.body.appendChild(canvas);
// these calls are queued internally and rendered on the next animation frame
display.draw(5, 4, "#");
display.draw(15, 4, "%", "#0f0");
display.draw(25, 4, "#", "#f00", "#009");
// this is drawn immediately, so the library's frame will cover it
ctx.fillStyle = 'red';
ctx.fillRect(20, 20, 40, 40);
// wait for the next frame: this runs after the library's rendering
requestAnimationFrame(() => {
  ctx.fillStyle = 'green';
  ctx.fillRect(120, 20, 40, 40);
});
<script src="https://cdn.jsdelivr.net/npm/rot-js"></script>
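Once inside that callback, any regular 2D path command works for triangles or other shapes on top of the grid. Here is a small, hypothetical sketch (drawTriangleOnTop and the pixel coordinates are made up for illustration; they are not part of rot-js):
// Hypothetical helper: draws a triangle over the rot-js display once the
// library has flushed its own frame. Coordinates are raw canvas pixels,
// not rot-js cell coordinates.
function drawTriangleOnTop(display) {
  const ctx = display.getContainer().getContext('2d');
  requestAnimationFrame(() => {
    ctx.beginPath();
    ctx.moveTo(200, 20);  // apex
    ctx.lineTo(240, 80);  // bottom right
    ctx.lineTo(160, 80);  // bottom left
    ctx.closePath();
    ctx.fillStyle = 'orange';
    ctx.fill();
  });
}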

Related

How do I draw a Javascript-modified SVG object on a HTML5 canvas?

The overall task I'm trying to achieve is to load an SVG image file, modify a color or text somewhere, and then draw it onto an HTML5 canvas (presumably with drawImage(), but any reasonable alternative would be fine).
I followed advice from another StackOverflow question on how to load and modify an SVG file in JavaScript, which went like this:
<object class="svgClass" type="image/svg+xml" data="image.svg"></object>
followed in Javascript by
document.querySelector("object.svgClass")
  .getSVGDocument().getElementById("svgInternalID").setAttribute("fill", "red");
And that works. I now have the modified SVG displaying in my web page.
But I don't want to just display it - I want to draw it as part of an HTML5 canvas update, like this:
ctx.drawImage(myModifiedSVG, img_x, img_y);
If I try storing the result of getSVGDocument() and passing that in as myModifiedSVG, I just get an error message.
How do I make the HTML5 canvas draw call for my modified SVG?
Edit: I can already draw an SVG image on an HTML5 canvas by doing this:
var theSVGImage = new Image();
theSVGImage.src = "image.svg";
ctx.drawImage(theSVGImage, img_x, img_y);
and that's great, but I don't know how to modify text/colors in my loaded SVG image that way! If someone could tell me how to do that modification, then that would also be a solution. I'm not tied to going through the object HTML tag.
For a one-shot, you could rebuild a new SVG file, load it in an <img>, and draw that on the canvas again:
async function doit() {
  const ctx = canvas.getContext('2d');
  const images = await prepareAssets();
  let i = 0;
  const size = canvas.width = canvas.height = 500;
  canvas.onclick = e => {
    i = +!i;
    ctx.clearRect(0, 0, size, size);
    ctx.drawImage(images[i], 0, 0, size, size);
  };
  canvas.onclick();
  return images;
}
async function prepareAssets() {
  const svgDoc = await getSVGDOM();
  // There is no standard to draw relative sizes in canvas
  svgDoc.documentElement.setAttribute('width', '500');
  svgDoc.documentElement.setAttribute('height', '500');
  // generate the first <img> from current DOM state
  const originalImage = loadSVGImage(svgDoc);
  // here do your DOM manips
  svgDoc.querySelectorAll('[fill="#cc7226"]')
    .forEach(el => el.setAttribute('fill', 'lime'));
  // generate new <img>
  const coloredImage = loadSVGImage(svgDoc);
  return Promise.all([originalImage, coloredImage]);
}
function getSVGDOM() {
  return fetch('https://upload.wikimedia.org/wikipedia/commons/f/fd/Ghostscript_Tiger.svg')
    .then(resp => resp.text())
    .then(text => new DOMParser().parseFromString(text, 'image/svg+xml'));
}
function loadSVGImage(svgel) {
  // get the markup synchronously
  const markup = (new XMLSerializer()).serializeToString(svgel);
  const img = new Image();
  return new Promise((res, rej) => {
    img.onload = e => res(img);
    img.onerror = rej;
    // convert to a dataURI
    img.src = 'data:image/svg+xml,' + encodeURIComponent(markup);
  });
}
doit()
  .then(_ => console.log('ready: click to switch the image'))
  .catch(console.error);
<canvas id="canvas"></canvas>
But if you are going to do this over many frames and expect it to animate, you will have to convert your SVG into canvas drawing operations.
The method above is asynchronous, so you cannot reliably generate a new image on the fly and have it ready to draw within a single frame. You would need to prepare several of them ahead of time, and since how long an image takes to load is essentially unpredictable, this quickly becomes a real programming nightmare.
Add to that the overhead of the browser loading a whole new SVG document every frame (yes, browsers do load the SVG document even when it is loaded inside an <img>), painting it on the canvas, and then freeing memory that fills up in no time, and you won't have much free CPU left for anything else.
So the best option here is probably to parse your SVG and convert it into CanvasRenderingContext2D drawing operations, i.e. draw it yourself.
This is achievable, all the more now that we can pass d attributes directly to the Path2D constructor and most SVG elements have a correspondence in the Canvas2D API (we can even use SVG filters), but it is still a lot of work.
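For instance, a single <path> element can be replayed on the canvas roughly like this (a minimal sketch, assuming svgDoc is the parsed SVG document from the snippet above; picking its first <path> is arbitrary):
// Minimal sketch: reuse an SVG path's "d" attribute as a Path2D.
const ctx = canvas.getContext('2d');
const d = svgDoc.querySelector('path').getAttribute('d'); // any <path> element
const shape = new Path2D(d); // the Path2D constructor accepts SVG path data
ctx.fillStyle = 'lime';
ctx.fill(shape); // fill, stroke, transforms, etc. are then up to you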
So you may want to look at libraries that do that. I'm not an expert in such libraries and can't recommend one, but I know that canvg has done this for a very long time; I just don't know whether it exposes its JS objects in a reusable way. I know that Fabric.js does, but it also comes with a lot of other features you may not need.
The choice is yours.

Dygraph graph gets distorted/partially shifted

I am regularly updating several Dygraphs graphs. After some period of time, normally a few minutes, some or all of them get corrupted as shown in the figure below. I haven't been able to tie this to a particular event or browser. It happens even with a simple graph where I am just reloading the data stored in a CSV file. I call updateOptions({ file: URL }) on the graph object, where URL points to the CSV file, followed by calling resetZoom() on the graph object to update the axes. Googling hasn't revealed anyone suffering similar behaviour, so I'm lost as to what is causing this.
Update 1: It is linked to minimizing and maximizing the browser.
Update 2: The problem doesn't occur in Firefox. It does happen in Google Chrome and Internet Explorer, although IE has the additional problem of freezing after a while (a problem for another day).
Update 3: Minimum working examples added at http://jsfiddle.net/williamshipman/tvxekq56/ and http://jsfiddle.net/williamshipman/af66qstt/. Repeatedly minimize and maximize the browser window, after a while the distortion occurs. The first example uses AngularJS (like my own work), while the second demonstrates the same bug in pure JavaScript. You may have to minimize and maximize more than a dozen times to see the bug, it seems pretty random.
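For reference, the refresh loop described above boils down to something like this (a rough sketch; g, csvUrl, and the 5-second interval are placeholders rather than the exact fiddle code):
// Hypothetical periodic refresh matching the description in the question.
setInterval(function () {
  g.updateOptions({ file: csvUrl }); // reload the CSV data
  g.resetZoom();                     // recompute the axes
}, 5000);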
For me, a similar problem appears when I show and hide the Y2 axis.
This one line helped me: ctx.clearRect(0, 0, this.width, this.height);
File: dygraph-canvas.js
var DygraphCanvasRenderer = function(dygraph, element, elementContext, layout) {
  ...
  ctx = this.dygraph_.hidden_ctx_;
  ctx.clearRect(0, 0, this.width, this.height); // <== clear the whole canvas before clipping
  ctx.beginPath();
  ctx.rect(this.area.x, this.area.y, this.area.w, this.area.h);
  ctx.clip();
};
The root of the problem
The canvas context is not fully restored after all the drawing is done.
Solution 1. (workaround)
Inject canvas_ctx_.restore() after the drawing is done and context.save() before it. The save() is needed because the library restores the context before every draw (except the initial one).
let g = new Dygraph('graph', data, {
  // data is your CSV file/URL; the Dygraph constructor takes (div, file, options)
  underlayCallback: (context) => {
    context.save();
  },
  drawCallback: (dygraph) => {
    dygraph.canvas_ctx_.restore();
  },
});
Solution 2. (library fix)
Here is my commit you can apply to the lib's src/dygraph.js
https://github.com/pawelzwronek/dygraphs/commit/c66ca37b82f14e096652a338cae8abf568b9c764

Canvas drawing functions behaving in asynchronous way

What I am doing here is covering the entire loaded web page with a canvas layer. When the user clicks on that canvas, the click creates a dot; at that point the createDot() function gets called.
Inside createDot() the dot is drawn onto the canvas and then a screenshot request is sent to the background script.
The problem is that when I click on the canvas and the screenshot is taken, the dot does not appear in the screenshot.
What's weird is that when I click a second dot onto the same canvas, the screenshot that comes back now contains the first dot I clicked, but not the second one.
So the screenshot never captures the current dot, only the previous ones.
I checked that the canvas drawing functions are all blocking, so the request cannot be sent to the background script before the drawing is complete. I also confirmed this by logging to the console.
content script:
function createDot(canvas) {
  var context = canvas.getContext("2d");
  var canvasOffset = $("#canvas").offset();
  var offsetX = canvasOffset.left;
  var offsetY = canvasOffset.top;
  var pointX, pointY;
  var point_X, point_Y;
  function handleMouseDown(e) {
    // coordinates of the click in the DOM
    pointX = parseInt(e.pageX - offsetX);
    pointY = parseInt(e.pageY - offsetY);
    // making the dot
    var radius = 10;
    context.beginPath();
    context.arc(pointX, pointY, radius, 0, 2 * Math.PI);
    context.fillStyle = 'red';
    context.fill();
    context.lineWidth = 2;
    context.strokeStyle = '#003300';
    context.stroke();
    console.log("drawn everything");
    takeDotScreenshot(); // gets called when the canvas has finished its job of drawing
  }
  $("#canvas").mousedown(function (e) {
    handleMouseDown(e);
  });
}
function takeDotScreenshot() {
  console.log("sending request to Background script");
  chrome.runtime.sendMessage({"capture": true}, function (response) {
    var img_src = response.screenshot;
  });
  // logic for using that screenshot and appending it onto the content page...
}
Background script:
chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
  if (request.capture) {
    chrome.tabs.query({active: true, currentWindow: true}, function (tabs) {
      chrome.tabs.captureVisibleTab(null, {}, function (dataUrl) {
        if (dataUrl) {
          sendResponse({"screenshot": dataUrl});
        }
      });
    });
  }
  return true;
});
UPDATE:
If I use a setTimeout of 1 ms to call the takeDotScreenshot function, everything works fine. WHY?
setTimeout(takeDotScreenshot, 1);
There is a very detailed, great answer explaining why setTimeout helps here:
https://stackoverflow.com/a/4575011/10574621
The DOM update event for your canvas (which actually makes the dot visible) can come after your take-screenshot event, even if the canvas has already received all the data it needed from your stroke() call.
A screenshot is a snapshot of what the user can see. While you are inside a rendering function, the canvas is not presented for display until the execution stack is empty (you have returned all the way out of your functions).
Depending on how you render (via requestAnimationFrame, or directly from mouse, keyboard or timer events), the canvas will still take some time to be presented to the display.
If you set the timeout to zero, you may add a call onto the call stack that runs before the canvas is presented to the display. Any code you run then blocks the page, preventing the canvas from being displayed, and hence there is no dot in the screenshot.
I would set the timeout from render to capture to at least 17 ms (just over one frame at 60 fps). Nobody will notice that delay, but it gives the canvas plenty of time to be presented to the display.
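Putting that together, a sketch of the suggested approach might look like this (takeDotScreenshotAfterPaint is a made-up name; the message shape matches the scripts in the question):
// Wait roughly one painted frame before asking the background script for
// the capture, so the freshly drawn dot is actually on screen.
function takeDotScreenshotAfterPaint() {
  requestAnimationFrame(function () {
    setTimeout(function () {
      chrome.runtime.sendMessage({"capture": true}, function (response) {
        var img_src = response.screenshot;
        // use the screenshot here...
      });
    }, 17); // ~one frame at 60 fps, as suggested above
  });
}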

canvas toDataURL() returns transparent image

I am attempting to use a Chrome extension to take a screenshot of the current page and then draw some shapes on it. After doing that to the image, I turn the whole thing into a canvas so it is all together: the divs I have drawn on it are now baked into the 'image', and they are one and the same. After that I want to turn the canvas back into a PNG image that I can push to a service I have, but when I use canvas.toDataURL() to do so, the image source it creates is completely transparent. If I do it as a JPEG, it is completely black.
I read something about the canvas being 'dirtied' because I have drawn an image to it, and that this won't work in Chrome, but that doesn't make sense to me, as I have gotten it to work before; I am just unable to use my previous method. Below is the code snippet that isn't working. I am just making a canvas element, and an image has been drawn to it before this.
var passes = rectangles.length;
var run = 0;
var context = hiDefCanvas.getContext('2d');
while (run < passes) {
  var rect = rectangles[run];
  // Set the stroke and fill color
  context.strokeStyle = 'rgba(0,255,130,0.7)';
  context.fillStyle = 'rgba(0,0,255,0.1)';
  context.rect(rect.left, rect.top, rect.width, rect.height);
  context.setLineDash([2,1]);
  context.lineWidth = 2;
  run++;
} // end of the while loop
screencapImage.className = 'hide';
context.fill();
context.stroke();
console.log(hiDefCanvas.toDataURL());
And the image data that it returns is: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAACWAAAAVGCAYAAAAaGIAxAAAgAElEQ…ECBAgQIECAAAECBAgQIECAAAECBAgQIECAAAECBAgQIECAQBVKBUe32pNYAAAAAElFTkSuQmCC which is a blank, transparent image.
Is there something special I need to do with Chrome? Is there something that I am missing? Thanks, I appreciate the time and help.
Had the same problem, and found the solution here:
https://bugzilla.mozilla.org/show_bug.cgi?id=749824
"I can confirm, that it works if you set preserveDrawingBuffer.
var glContextAttributes = { preserveDrawingBuffer: true };
var gl = canvas.getContext("experimental-webgl", glContextAttributes);"
After getting the context with preserveDrawingBuffer, toDataURL just works as expected, no completely transparent or black image.
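As a self-contained illustration of that setting (a minimal sketch, not the asker's code; preserveDrawingBuffer applies to WebGL contexts, and the blue clear color is arbitrary):
// A WebGL canvas only survives toDataURL() reliably when
// preserveDrawingBuffer is requested at context-creation time.
var glCanvas = document.createElement('canvas');
var gl = glCanvas.getContext('webgl', { preserveDrawingBuffer: true });
gl.clearColor(0, 0, 1, 1);
gl.clear(gl.COLOR_BUFFER_BIT);
console.log(glCanvas.toDataURL()); // a blue image, not a transparent one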
Having a similar problem, I found a solution thanks to the following post:
https://github.com/iddan/react-native-canvas/issues/29.
These methods return a promise, so you need to wait for it to resolve before the variable can be populated.
My solution was to use an async function and await the result:
const canvas = <HTMLCanvasElement>document.getElementById("myCanvas");
let ctx = canvas.getContext('2d');
let img = new Image();
let b64Code;
img.onload = async (e) => {
  ctx.drawImage(img, 0, 0);
  ctx.font = "165px Arial";
  ctx.fillStyle = "white";
  b64Code = await (<any>canvas).toDataURL(); // resolves once the promise settles
};
// note: img.src still has to be assigned for onload to fire

Template for canvas html5/jquery/javascript

I am a beginner at HTML5 and jQuery/JavaScript.
I am attempting to create a canvas (sort of like the Windows Paint application) and I am looking at other users' sample functions/code to see what's going on and attempt to re-create it.
$(function(){
  var paint = new Paint($('#surface').get(0));
  // Setup line template
  var templateLine = new Paint($('#toolbar #line').get(0), {'readonly': true});
  templateLine.shape = new Line([10, 10], [50, 50]);
  templateLine.place(templateLine.shape);
});
I am unsure what is going on here. I know this new Paint is not a built-in function. What is it?
Secondly, what's the difference between this and
$(document).ready(function(){
  var canvas = $("#canvas").get(0);
  if (canvas.getContext) {
    var ctx = canvas.getContext("2d");
    // Choose a color
    ctx.fillStyle = "black";
    ctx.strokeStyle = color;
    ctx.fillRect(0, 0, 50, 50);
  } else {
    // Browser doesn't support CANVAS
  }
});
Help!!!
Well, first off, the code at the beginning of your question is probably using some canvas library or API, but it is not the vanilla HTML5 Canvas API, which makes it completely different from what you've written below, even if they happened to produce the same output (although it doesn't look like they do).
Secondly, color is not defined, so unless it's defined somewhere else in your code, your snippet is not going to work. If it is defined, your code will draw a black rectangle in the corner of the canvas, stroked with whatever color is.
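A minimal sketch of the second snippet with that fixed (the choice of "black" here is arbitrary):
$(document).ready(function(){
  var canvas = $("#canvas").get(0);
  if (canvas.getContext) {
    var ctx = canvas.getContext("2d");
    var color = "black"; // define the color before using it
    ctx.fillStyle = color;
    ctx.strokeStyle = color;
    ctx.fillRect(0, 0, 50, 50);
  }
});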
