I'm trying to load an image to use with canvas in Node.js, but I always get a not-found error. Here's the code:
const Canvas = require('canvas')
async function execute() {
  const canvas = Canvas.createCanvas(1080, 611);
  const context = canvas.getContext('2d');
  const license = await Canvas.loadImage('../../data/media/images/license.jpg');
  context.drawImage(license, 0, 0, canvas.width, canvas.height);
}
execute();
It somehow works with the path from the explorer (P:\\bot stuff\\darling.js\\src\\data\\media\\images\\license.jpg).
What confuses me even more is that const { color } = require('../../data/config.json'); works perfectly fine, and that's in the same file...
Folder structure: (screenshot not reproduced here)
Node.js Version 16.10;
Windows 10 Pro 21H2
require(...) always resolves paths by its own rules, relative to the file that calls it, via the require.resolve(...) function.
Canvas.loadImage, however, behaves differently: its loading logic is implemented in C and has no knowledge of the current JS file's location, so relative paths are resolved against the process's working directory instead.
I recommend resolving the path explicitly from __dirname.
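For example, inside your execute() function (a minimal sketch using your existing relative path):

const path = require('path');
// __dirname is the directory of the current JS file, so this resolves
// correctly regardless of the process's working directory
const license = await Canvas.loadImage(
  path.join(__dirname, '../../data/media/images/license.jpg')
);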
I am trying to use the posenet MobileNetV1 network in an Electron app. I want to be able to read an image from the file system (it does not matter if it is a PNG or JPG) and run it through the network.
What have I done so far:
I am using the following modules:
import * as posenet from '@tensorflow-models/posenet';
var imageToBase64 = require('image-to-base64');
var toUint8Array = require('base64-to-uint8array');
And initializing the network with:
var net = posenet.load();
To read an image I convert it to base64, then to a Uint8Array, then I use those to create an object {data: bytes, width: width, height: height}, which fits the shape of ImageData.
Everything is running but the results in percentages are very low:
{
  score: 0.002851587634615819,
  keypoints: [
    { score: 0.0007664567674510181, part: 'nose', position: [Object] },
    {
      score: 0.0010295170359313488,
      part: 'leftEye',
      position: [Object]
    },
    {
      score: 0.0006740405224263668,
      part: 'rightEye',
      position: [Object]
    },
Note that I intend to package this app in the future, so modules like canvas are no good, since they do not build well.
If someone could give me a working PoC it would be great, since I have been working on this for a very long time.
Electron has two separate contexts: one that can be considered a server-side context, called the main context, and the renderer context, in which the browser and its scripts run. Though the question is not precise about it, it appears to be executing posenet in Electron's main context, which is comparable to running the code in Node.js.
posenet in the main context
// inside an async function, with these modules loaded:
// const tf = require('@tensorflow/tfjs-node')
// const posenet = require('@tensorflow-models/posenet')
const data = Buffer.from(base64str, 'base64')
const t = tf.node.decodeImage(data)
const net = await posenet.load()
const poses = await net.estimateMultiplePoses(t, {
  flipHorizontal: false,
  maxDetections: 2,
  scoreThreshold: 0.6,
  nmsRadius: 20
})
// do whatever with the poses
posenet from a script executed by the browser
const im = new Image()
const net = await posenet.load()
im.onload = async () => {
  const poses = await net.estimateMultiplePoses(im, {
    flipHorizontal: false,
    maxDetections: 2,
    scoreThreshold: 0.6,
    nmsRadius: 20
  })
  // do whatever with the poses
}
// the browser needs a data URL to decode the base64 string
im.src = 'data:image/jpeg;base64,' + base64str
Even though you copied the structure, maybe PoseNet is checking whether the object is an instance of a certain class, which it will not be unless you actually create an ImageData object and then set the fields. That is my guess as to why it doesn't like it.
Have you tried:
let clamped = Uint8ClampedArray.from(someframeBuffer);
let imageData = new ImageData(clamped, width, height);
It seems PoseNet accepts ImageData|HTMLImageElement|HTMLCanvasElement|HTMLVideoElement objects that can be passed to its prediction function.
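For instance, a minimal sketch (assuming net is an already-loaded posenet model and clamped/width/height come from the snippet above; the config object follows the newer posenet API):

const imageData = new ImageData(clamped, width, height);
// pass a real ImageData instance, not a plain object with the same shape
const pose = await net.estimateSinglePose(imageData, { flipHorizontal: false });
console.log(pose.score, pose.keypoints.length);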
Pretty new to ML and TensorFlow!
I made an object detection model with http://cloud.annotations.ai, which lets you train a model and convert it to different formats, including tfjs (model_web).
That website also provides boilerplate for running the model within a browser (a React app)... just like you do - it is probably the same code; I didn't spend enough time to check.
So I have this model running inside a browser, giving predictions about objects in a photo with pretty good results considering the number of examples I gave and the prediction score (0.89). The given bounding box is good too.
But, unfortunately, I didn't have "just one video" to analyze frame by frame inside a browser; I've got plenty of them. So I decided to switch to Node.js, porting the code as-is.
Guess what? TF.js relies on DOM and browser components, and almost no examples that work with Node exist. So no big deal, I just spent a morning figuring out all the missing parts.
Finally I'm able to run my model over videos that are split into frames, at a decent speed - although I get the "Hello there, use tfjs-node to gain speed" banner even though I'm already using tfjs-node - but the results seem odd.
Comparing the same picture with the same model_web folder gives the same prediction but with a lower score (0.80 instead of 0.89) and a different bounding box, with the object not centered at all.
(TL;DR)
Do the two libraries (tfjs and tfjs-node) have different implementations that make different use of the same model? I don't think it can be a problem of input because - after a long search and fight - I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node). Has anyone made comparisons?
So... that's the code I used, for your reference:
model_web is being loaded with tf.loadGraphModel("file://path/to/model_web/model.json");
Two different ways to convert a JPG and make it work with tf.browser.fromPixels():
const fs = require('fs').promises;
const inkjet = require('inkjet');
const {createCanvas, loadImage} = require('canvas');

const decodeJPGInkjet = (file) => {
  return new Promise((rs, rj) => {
    fs.readFile(file).then((buffer) => {
      inkjet.decode(buffer, (err, decoded) => {
        if (err) {
          rj(err);
        } else {
          rs(decoded);
        }
      });
    }).catch(rj);
  });
};
const decodeJPGCanvas = (file) => {
  return loadImage(file).then((image) => {
    const canvas = createCanvas(image.width, image.height);
    const ctx = canvas.getContext('2d');
    ctx.drawImage(image, 0, 0, image.width, image.height);
    const data = ctx.getImageData(0, 0, image.width, image.height);
    return {data: new Uint8Array(data.data), width: data.width, height: data.height};
  });
};
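Either decoder's output can then be handed to tf.browser.fromPixels, since the {data, width, height} object matches the PixelData shape tfjs accepts (a hedged sketch; 'photo.jpg' is a placeholder path):

decodeJPGCanvas('photo.jpg').then((image) => {
  // getImageData returns RGBA bytes; fromPixels keeps 3 channels by default
  const t = tf.browser.fromPixels(image);
  console.log(t.shape); // [height, width, 3]
});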
And that's the code that uses the loaded model to give predictions - the same code for Node and browser, found at https://github.com/cloud-annotations/javascript-sdk/blob/master/src/index.js - it doesn't work on Node as-is; I changed require("@tensorflow/tfjs") to require("@tensorflow/tfjs-node") and replaced fetch with fs.read:
const runObjectDetectionPrediction = async (graph, labels, input) => {
  const batched = tf.tidy(() => {
    const img = tf.browser.fromPixels(input);
    // Reshape to a single-element batch so we can pass it to executeAsync.
    return img.expandDims(0);
  });
  const height = batched.shape[1];
  const width = batched.shape[2];
  const result = await graph.executeAsync(batched);
  const scores = result[0].dataSync();
  const boxes = result[1].dataSync();
  // clean the webgl tensors
  batched.dispose();
  tf.dispose(result);
  const [maxScores, classes] = calculateMaxScores(
    scores,
    result[0].shape[1],
    result[0].shape[2]
  );
  const prevBackend = tf.getBackend();
  // run post process in cpu
  tf.setBackend("cpu");
  const indexTensor = tf.tidy(() => {
    const boxes2 = tf.tensor2d(boxes, [result[1].shape[1], result[1].shape[3]]);
    return tf.image.nonMaxSuppression(
      boxes2,
      maxScores,
      20, // maxNumBoxes
      0.5, // iou_threshold
      0.5 // score_threshold
    );
  });
  const indexes = indexTensor.dataSync();
  indexTensor.dispose();
  // restore previous backend
  tf.setBackend(prevBackend);
  return buildDetectedObjects(
    width,
    height,
    boxes,
    maxScores,
    indexes,
    classes,
    labels
  );
};
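For completeness, here's how these pieces fit together in my Node script (a sketch, inside an async function; the image path is a placeholder, and graph and labels are the loaded model and label list):

const input = await decodeJPGCanvas('photo.jpg');
const detections = await runObjectDetectionPrediction(graph, labels, input);
console.log(detections);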
Do the different implementations of the libraries (tfjs and tfjs-node) make different use of the same model?
If the same model is deployed both in the browser and in Node.js, the prediction will be the same.
If the predicted values are different, it might be related to the tensor used for the prediction. The processing from image to tensor might differ, resulting in different tensors being used for the prediction and thus causing different outputs.
I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node)
The canvas package uses the system graphics libraries to create a browser-like canvas environment that can be used by Node.js. This makes it possible to use the tf.browser namespace, especially when dealing with image conversion. However, it is still possible to create a tensor directly from a Node.js buffer.
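A minimal sketch of the buffer route (the path is a placeholder):

const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

// decode the JPG buffer straight to a tensor, no canvas involved
const buffer = fs.readFileSync('/path/to/image.jpg');
const imageTensor = tf.node.decodeImage(buffer, 3); // int32 tensor, shape [height, width, 3]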
I'm looking for a library or code snippet that will help me convert an HTML DOM node element to an image/png file.
I tried the html2canvas library, but it does not render SVG nodes, and my current project has a lot of them. I also tried the canvg library to convert all SVG elements on the page to canvas elements, but canvg always threw an error at the render step:
Uncaught (in promise) TypeError: Cannot read property 'ImageClass' of undefined
The code snippet used to convert SVGs to canvas:
export const svgToCanvas = () => {
  const print = document.getElementsByClassName('print-page')[0]
  const svgElements = print.getElementsByTagName('svg')
  _.each(svgElements, e => {
    let xml
    const canvas = document.createElement('canvas')
    canvas.className = 'screenShotTempCanvas'
    xml = (new window.XMLSerializer()).serializeToString(e)
    xml = xml.replace(/xmlns="http:\/\/www\.w3\.org\/2000\/svg"/, '')
    canvg(canvas, xml)
    e.parentNode.insertBefore(canvas, e.nextSibling)
    e.classList.add('tempHide')
    e.style.display = 'none'
  })
  // getElementsByClassName takes a class name, not a CSS selector
  const temps = document.getElementsByClassName('screenShotTempCanvas')
  _.each(temps, e => {
    e.remove()
  })
  const svgs = document.getElementsByClassName('tempHide')
  _.each(svgs, e => {
    if (e) {
      e.style.display = 'block'
      e.classList.remove('tempHide')
    }
  })
}
The error is thrown on this line in the canvg code:
if (nodeEnv) {
  if (!s || s === '') {
    return;
  }
  ImageClass = opts['ImageClass']; //<= error thrown here
  CanvasClass = target.constructor;
  svg.loadXml(target.getContext('2d'), s);
  return;
}
I also tried converting the node to PDF using the jsPDF and html-pdf libraries, but in that case all styles disappear from the node, and I need them preserved.
Can anyone suggest an appropriate approach to convert an HTML DOM node (rich in SVG elements) to an image?
This seems to be the result of seriously bad testing and documentation by the canvg authors. The line the error is thrown at was introduced as part of a pull request that supposedly added support for executing canvg in a node environment. But it seems it was only tested to work as a dependency of node-svg2img.
If you look at that source, you will find the following (excerpt):
var canvg = require('canvg'),
    Canvas = require('canvas');

function convert(svgContent) {
  var canvas = new Canvas();
  canvg(canvas, svgContent, { ignoreMouse: true, ignoreAnimation: true, ImageClass: Canvas.Image });
  return canvas;
}
As you can see, canvg is called with an option ImageClass that has never been documented, and whose content is dependent on a module canvas that is never stated as a dependency for canvg. (And that is not a lightweight one, it is the complete <canvas> implementation.)
Now, you never stated that you are doing all this in node, but I will assume you are; otherwise I would not understand why the line causing the error got called at all.
It seems a successful call to document.createElement('canvas') does not indicate that you have the canvas module installed. This note in the jsdom doc tells you what needs to be done.
From there, calling canvg in your code with

import { Image } from 'canvas';
canvg(canvas, xml, {ImageClass: Image});

should get you running.
I have a p5.js sketch that creates a font with which users can type anything they want.
I want to allow the user to download an SVG/PDF (vector) version of their result.
So far I have succeeded in letting them download a .jpg version using the save() command, which saves a screenshot of the canvas.
Any ideas?
That's an old thread, but maybe my solution will be useful to someone. To export vector graphics from p5.js to an SVG file, I first use the SVG renderer from the p5.svg.js library - this puts an svg element directly into the HTML document. Exporting then amounts to extracting the content of that element (via the outerHTML property) and saving it to a file (as in this post).
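For reference, switching to the SVG renderer looks roughly like this (a minimal sketch, assuming p5.svg.js is loaded, which provides the SVG constant):

function setup() {
  // renders into an <svg> element instead of a <canvas>
  createCanvas(400, 400, SVG);
}

function draw() {
  background(255);
  line(10, 10, width - 10, height - 10);
}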
So, for example, your "Save as SVG" button callback function may look like this:
function downloadSvg() {
  let svgElement = document.getElementsByTagName('svg')[0];
  let svg = svgElement.outerHTML;
  let file = new Blob([svg], { type: 'image/svg+xml' });
  let a = document.createElement("a"), url = URL.createObjectURL(file);
  a.href = url;
  a.download = 'exported.svg';
  document.body.appendChild(a);
  a.click();
  setTimeout(function () {
    document.body.removeChild(a);
    window.URL.revokeObjectURL(url);
  }, 0);
}
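Wiring it up can be as simple as attaching the callback to a button (a sketch; 'save-svg-btn' is a hypothetical element id):

document.getElementById('save-svg-btn').addEventListener('click', downloadSvg);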
From googling "p5.js svg", there doesn't seem to be a built-in way to work with SVG images in p5.js.
However, that search returns several promising results:
Here is a GitHub issue with a discussion about working with SVGs in p5.js.
Here is a project that attempts to add SVG support to p5.js:
The main goal of p5.SVG is to provide a SVG runtime for p5.js, so that we can draw using p5's powerful API in <svg>, save things to svg file and manipulating existing SVG file without rasterization.
Another discussion that links to two more SVG libraries.
The p5.SVG library sounds especially promising. I suggest you try something out and post an MCVE if you get stuck.
A new package called canvas-sketch seems to solve this issue. They have a wealth of examples for p5.js as well.
const canvasSketch = require('canvas-sketch')
const p5 = require('p5')

const settings = {
  p5: { p5 },
  // Turn on a render loop
  animate: true,
}

canvasSketch(() => {
  // Return a renderer, which is like p5.js 'draw' function
  return ({ p5, time, width, height }) => {
    // Draw with p5.js things
    p5.background(0)
    p5.fill(255)
    p5.noStroke()
    const anim = p5.sin(time - p5.PI / 2) * 0.5 + 0.5
    p5.rect(0, 0, width * anim, height)
  };
}, settings)
If you use p5 globally, there's also an example for that called animated-p5.js
I'm using some JavaScript to allow users to dynamically load a sketch into a canvas element on click, using:
Processing.loadSketchFromSources('canvas_id', ['sketch.pde']);
If I call Processing.loadSketchFromSources(...) a second (or third...) time, it loads a second (or third...) .pde file onto the canvas, which is what I would expect.
I'd like for the user to be able to click another link to load a different sketch, effectively unloading the previous one. Is there a method I can call (or a technique I can use) to check if Processing has another sketch running, and if so, tell it to unload it first?
Is there some sort of Processing.unloadSketch() method I'm overlooking? I could simply drop the canvas DOM object and recreate it, but that (1) seems like using a hammer when I need a needle, and (2) it results in a screen-flicker that I'd like to avoid.
I'm no JS expert, but I've done my best to look through the processing.js source to see what other functions may exist, but I'm hitting a wall. I thought perhaps I could look at Processing.Sketches.length to see if something is loaded already, but simply pop'ing it off the array doesn't seem to work (didn't think it would).
I'm using ProcessingJS 1.3.6.
In case someone else comes looking for the solution, here's what I did that worked. Note that this was placed inside a closure (not included here for brevity) -- hence the this.launch = function(), blah blah blah... YMMV.
/**
 * Launches a specific sketch. Assumes files are stored in
 * the ./sketches subdirectory, and your canvas is named g_sketch_canvas
 * @param {String} item The name of the file (no extension)
 * @param {Array} sketchlist Array of sketches to choose from
 * @returns true
 * @type Boolean
 */
this.launch = function (item, sketchlist) {
  var cvs = document.getElementById('g_sketch_canvas'),
      ctx = cvs.getContext('2d');
  if ($.inArray(item, sketchlist) !== -1) {
    // Unload the Processing script
    if (Processing.instances.length > 0) {
      // There should only be one, so no need to loop
      Processing.instances[0].exit();
      // If you may have more than one, use this loop instead:
      // for (var i = 0; i < Processing.instances.length; i++) {
      //   Processing.instances[i].exit();
      // }
    }
    // Clear the context
    ctx.setTransform(1, 0, 0, 1, 0, 0);
    ctx.clearRect(0, 0, cvs.width, cvs.height);
    // Now, load the new Processing script
    Processing.loadSketchFromSources(cvs, ['sketches/' + item + '.pde']);
  }
  return true;
};
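Calling it might look like this (a sketch; the sketch names and the enclosing object name app are hypothetical):

// assuming the closure instance is stored in `app`
// and ./sketches/spiral.pde exists
app.launch('spiral', ['spiral', 'waves']);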
I'm not familiar with Processing.js, but the example code from the site has this:
var canvas = document.getElementById("canvas1");
// attaching the sketchProc function to the canvas
var p = new Processing(canvas, sketchProc);
// p.exit(); to detach it
So in your case, you'll want to keep a handle to the first instance when you create it:
var p1 = Processing.loadSketchFromSources('canvas_id', ['sketch.pde']);
When you're ready to "unload" and load a new sketch, I'm guessing (but don't know) that you'll need to clear the canvas yourself:
p1.exit();
var canvas = document.getElementById('canvas_id');
var context = canvas.getContext('2d');
context.clearRect(0, 0, canvas.width, canvas.height);
// Or context.fillRect(...) with white, or whatever clearing it means to you
Then, from the sound of things, you're free to attach another sketch:
var p2 = Processing.loadSketchFromSources('canvas_id', ['sketch2.pde']);
Again, I'm not actually familiar with that library, but this appears straightforward from the documentation.
As of processing.js 1.4.8, Andrew's accepted answer (and the other answers I've found in here) do not seem to work anymore.
This is what worked for me:
var pjs = Processing.getInstanceById('pjs');
if (typeof pjs !== "undefined") {
  pjs.exit();
}
var canvas = document.getElementById('pjs');
new Processing(canvas, scriptText);
where pjs is the id of the canvas element in which the script is being run.