I am trying to use the posenet MobileNetV1 network in an Electron app. I want to be able to read an image from the file system (it does not matter if it is a png or jpg) and run it through the network.
What I have done so far:
I am using the following modules:
import * as posenet from '@tensorflow-models/posenet';
var imageToBase64 = require('image-to-base64');
var toUint8Array = require('base64-to-uint8array');
And initializing the network with:
var net = posenet.load();
In order to read the image, I convert it to base64, then to a Uint8Array, and then I use that to create an object {data: bytes, width: width, height: height}, which fits the definition of ImageData.
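Roughly, the conversion looks like this (a reconstruction, not the exact code; the file path is a placeholder and width/height are assumed to be known in advance):
const base64 = await imageToBase64('/path/to/image.png'); // placeholder path
const bytes = toUint8Array(base64);
// object shaped like ImageData; width and height are assumed to be known
const imageData = { data: bytes, width: width, height: height };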
Everything runs, but the resulting scores are very low:
{
  score: 0.002851587634615819,
  keypoints: [
    { score: 0.0007664567674510181, part: 'nose', position: [Object] },
    { score: 0.0010295170359313488, part: 'leftEye', position: [Object] },
    { score: 0.0006740405224263668, part: 'rightEye', position: [Object] },
Note that I intend to build and package this app in the future, so modules like canvas are no good, since they do not build well.
If someone could give me a working PoC it would be great, since I have been working on this for a very long time.
Electron has two separate contexts: one that can be considered a server-side context, called the main process, and the renderer context, in which the browser and its scripts run. Though the question is not precise enough, it seems to be trying to execute posenet in the main context of Electron, which is comparable to running the code in Node.js.
posenet in the main process
// requires @tensorflow/tfjs-node and @tensorflow-models/posenet
const tf = require('@tensorflow/tfjs-node')
const posenet = require('@tensorflow-models/posenet')

const data = Buffer.from(base64str, 'base64')
const t = tf.node.decodeImage(data)
const net = await posenet.load()
const poses = await net.estimateMultiplePoses(t, {
  flipHorizontal: false,
  maxDetections: 2,
  scoreThreshold: 0.6,
  nmsRadius: 20
})
// do whatever with the poses
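As a side note (an assumption on my part, not part of the original answer): since the question reads the image from the file system anyway, the base64 round-trip can be skipped entirely in the main process:
// Hedged sketch: read the file straight into a Buffer and decode it;
// the path is a placeholder.
const fs = require('fs')
const data = fs.readFileSync('/path/to/image.png')
const t = tf.node.decodeImage(data)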
posenet from a script executed by the browser
// assumes @tensorflow/tfjs and @tensorflow-models/posenet are loaded in the page
const im = new Image()
const net = await posenet.load()
im.onload = async () => {
  const poses = await net.estimateMultiplePoses(im, {
    flipHorizontal: false,
    maxDetections: 2,
    scoreThreshold: 0.6,
    nmsRadius: 20
  })
  // do whatever with the poses
}
// set src after registering onload so a cached image still fires the handler;
// base64str is expected to be a data URL here
im.src = base64str
Even though you copied the structure, PoseNet may be checking whether the object is an instance of a certain class, which it will not be unless you actually create an ImageData object and then set the fields. That is my guess as to why it doesn't like it.
Have you tried:
let clamped = Uint8ClampedArray.from(someframeBuffer);
let imageData = new ImageData(clamped, width, height);
It seems PoseNet accepts ImageData|HTMLImageElement|HTMLCanvasElement|HTMLVideoElement objects that can be passed to its prediction function.
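For illustration, a minimal sketch of passing such an ImageData object to a prediction function (assuming someframeBuffer holds RGBA pixels, i.e. width * height * 4 bytes; the single-pose call is only an example, not necessarily the method you used):
// Hedged sketch: ImageData requires the clamped array length to be width * height * 4
let clamped = Uint8ClampedArray.from(someframeBuffer);
let imageData = new ImageData(clamped, width, height);

const net = await posenet.load();
const pose = await net.estimateSinglePose(imageData, { flipHorizontal: false });
console.log(pose.score, pose.keypoints);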
I'm trying to load an image to use it with canvas in node.js. Always getting a not found error. Here's the code:
const Canvas = require('canvas')

async function execute() {
  const canvas = Canvas.createCanvas(1080, 611);
  const context = canvas.getContext('2d');

  const license = await Canvas.loadImage('../../data/media/images/license.jpg');
  context.drawImage(license, 0, 0, canvas.width, canvas.height);
}

execute();
It somehow works with the path from the explorer (P:\\bot stuff\\darling.js\\src\\data\\media\\images\\license.jpg).
The thing that confuses me even more is that const { color } = require('../../data/config.json'); works perfectly fine, and that's in the same file...
Folder structure: (screenshot not included)
Environment: Node.js 16.10, Windows 10 Pro 21H2
The behavior of require(...) is well defined: it is always resolved relative to the current file by the require.resolve(...) function, according to its own rules.
However, Canvas.loadImage is different. Because its loading is implemented in native code, it can behave differently from Node's module resolution and has no notion of the current JS file's location; relative paths end up being resolved against the process's working directory.
I recommend resolving the path explicitly based on __dirname.
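For example, something along these lines (a sketch; the exact relative path depends on where this file actually lives):
const path = require('path');
const Canvas = require('canvas');

async function execute() {
  const canvas = Canvas.createCanvas(1080, 611);
  const context = canvas.getContext('2d');

  // Resolve relative to this file's directory instead of the process's working directory
  const imagePath = path.join(__dirname, '../../data/media/images/license.jpg');
  const license = await Canvas.loadImage(imagePath);
  context.drawImage(license, 0, 0, canvas.width, canvas.height);
}

execute();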
I am trying to use Phaser3 to create a web-based game for my course, and I am using Tiled to create the maps. I have saved the Tiled map as a .json file, and I can't load the level I designed. The code I am using is from a tutorial, as seen below. The problem is that it only shows a black screen when loading, and the tilemapTiledJSON function comes up as an unresolved function instead of working how it should. I am using WebStorm for the coding and Phaser 3.50.0.
const config = {
  type: Phaser.AUTO, // Which renderer to use
  width: 800, // Canvas width in pixels
  height: 600, // Canvas height in pixels
  parent: "game-container", // ID of the DOM element to add the canvas to
  scene: {
    preload: preload,
    create: create,
    update: update
  }
};

const game = new Phaser.Game(config);

function preload() {
  this.load.image("tiles", "../Assets/Tileset/GB_TileSet_Cemetery_16x16_Sheet.png");
  this.load.tilemapTiledJSON("map", "../Assets/Level/Cemeterymap.json");
}

function create() {
  const map = this.make.tilemap({ key: "map" });

  // Parameters are the name you gave the tileset in Tiled and then the key of the tileset image in
  // Phaser's cache (i.e., the name you used in preload)
  const tileset = map.addTilesetImage("cemetery 5", "tiles");

  // Parameters: layer name (or index) from Tiled, tileset, x, y
  const belowLayer = map.createStaticLayer("Base", tileset, 0, 0);
  const worldLayer = map.createStaticLayer("Base objects", tileset, 0, 0);
  const aboveLayer = map.createStaticLayer("Non Collision objects", tileset, 0, 0);
}

function update(time, delta) {
  // Runs once per frame for the duration of the scene
}
Looking at the Phaser3 documentation on loading a tilemap JSON, it could be that you need lights in your scene for it to be visible, because everything else seems to follow the same conventions as their example documentation.
The only other area you can investigate is double-checking where your assets are (i.e. the .json and .png files), and possibly moving them closer to the directory with the .js file to rule that out as the cause of the problem - for instance, something like the sketch below.
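As a purely illustrative sketch (assuming the Assets folder were moved next to the HTML page that loads the game - your actual layout may differ), the preload paths would then be relative to the served page:
function preload() {
  // paths here are resolved relative to the served page, not to this .js file
  this.load.image("tiles", "Assets/Tileset/GB_TileSet_Cemetery_16x16_Sheet.png");
  this.load.tilemapTiledJSON("map", "Assets/Level/Cemeterymap.json");
}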
Pretty new to ML and TensorFlow!
I made an object detection model with http://cloud.annotations.ai, which lets you train and convert a model to different formats, tfjs (model_web) too.
That website also provides boilerplates for running the model within a browser (react app)... just like you do - probably it is the same code, I didn't spend enough time on it.
So I have this model running inside a browser, giving predictions about objects in a photo with pretty good results considering the number of examples I gave and the prediction score (0.89). The given bounding box is good too.
But, unfortunately, I didn't have "just one video" to analyze frame by frame inside a browser, I've got plenty of them. So I decided to switch to node.js, porting the code as is.
Guess what? TF.js relies on DOM and browser components, and almost no examples that work with Node exist. So not a big deal, I just spent a morning figuring out all the missing parts.
Finally I'm able to run my model over videos that are split into frames, at a decent speed - although I get the "Hello there, use tfjs-node to gain speed" banner when I'm already using tfjs-node - but the results seem odd.
Comparing the same picture with the same model_web folder gives the same prediction but with a lower score (0.80 instead of 0.89) and a different bounding box, with the object not centered at all.
(TL;DR)
Do the different implementations of the libraries (tfjs and tfjs-node) make different use of the same model? I don't think it can be a problem of input because - after a long search and fight - I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node). Has anyone made comparisons?
So... that's the code I used, for your reference:
model_web is being loaded with tf.loadGraphModel("file://path/to/model_web/model.json");
Two different ways to convert a JPG and make it work with tf.browser.fromPixels():
const fs = require('fs').promises; // promise-based fs, needed for fs.readFile(...).then(...)
const inkjet = require('inkjet');
const {createCanvas, loadImage} = require('canvas');

const decodeJPGInkjet = (file) => {
  return new Promise((rs, rj) => {
    fs.readFile(file).then((buffer) => {
      inkjet.decode(buffer, (err, decoded) => {
        if (err) {
          rj(err);
        } else {
          rs(decoded);
        }
      });
    });
  });
};

const decodeJPGCanvas = (file) => {
  return loadImage(file).then((image) => {
    const canvas = createCanvas(image.width, image.height);
    const ctx = canvas.getContext('2d');
    ctx.drawImage(image, 0, 0, image.width, image.height);
    const data = ctx.getImageData(0, 0, image.width, image.height);
    return {data: new Uint8Array(data.data), width: data.width, height: data.height};
  });
};
And that's the code that uses the loaded model to give predictions - the same code for node and browser, found at https://github.com/cloud-annotations/javascript-sdk/blob/master/src/index.js - it doesn't work on node as is, so I changed require("@tensorflow/tfjs") to require("@tensorflow/tfjs-node") and replaced fetch with fs.read
const runObjectDetectionPrediction = async (graph, labels, input) => {
  const batched = tf.tidy(() => {
    const img = tf.browser.fromPixels(input);
    // Reshape to a single-element batch so we can pass it to executeAsync.
    return img.expandDims(0);
  });

  const height = batched.shape[1];
  const width = batched.shape[2];

  const result = await graph.executeAsync(batched);

  const scores = result[0].dataSync();
  const boxes = result[1].dataSync();

  // clean the webgl tensors
  batched.dispose();
  tf.dispose(result);

  const [maxScores, classes] = calculateMaxScores(
    scores,
    result[0].shape[1],
    result[0].shape[2]
  );

  const prevBackend = tf.getBackend();
  // run post process in cpu
  tf.setBackend("cpu");
  const indexTensor = tf.tidy(() => {
    const boxes2 = tf.tensor2d(boxes, [result[1].shape[1], result[1].shape[3]]);
    return tf.image.nonMaxSuppression(
      boxes2,
      maxScores,
      20, // maxNumBoxes
      0.5, // iou_threshold
      0.5 // score_threshold
    );
  });
  const indexes = indexTensor.dataSync();
  indexTensor.dispose();
  // restore previous backend
  tf.setBackend(prevBackend);

  return buildDetectedObjects(
    width,
    height,
    boxes,
    maxScores,
    indexes,
    classes,
    labels
  );
};
Do the different implementations of the libraries (tfjs and tfjs-node) make different use of the same model?
If the same model is deployed both in the browser and in nodejs, the prediction will be the same.
If the predicted values are different, it might be related to the tensor used for the prediction. The processing from the image to the tensor might be different, resulting in different tensors being used for the prediction and thus causing the output to differ.
I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node)
The canvas package uses the system graphics to create a browser-like canvas environment that can be used by nodejs. This makes it possible to use the tf.browser namespace, especially when dealing with image conversion. However, it is also possible to use a nodejs buffer directly to create a tensor, as sketched below.
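A minimal sketch of that buffer-based route, assuming tfjs-node is installed and using a placeholder file path:
const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

// Decode the JPG straight into an int32 tensor of shape [height, width, 3],
// skipping the canvas/inkjet round-trip entirely.
const buffer = fs.readFileSync('/path/to/image.jpg'); // placeholder path
const input = tf.node.decodeImage(buffer, 3);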
I have a p5.js sketch which creates a font that users can type anything they want with.
I want to allow the user to download an svg/pdf (vector) version of their result.
So far I have succeeded in having them download a .jpg version using the save() function, which saves a shot of my screen.
Any ideas?
That's an old thread, but maybe my solution will be useful to someone. To export vector graphics in p5.js to an SVG file, I first use the SVG renderer from the p5.svg.js library - this puts an svg element directly into the HTML document. Exporting then consists of extracting the content of that element (via the outerHTML property) and saving it to a file (like in this post).
So, for example, your "Save as SVG" button callback function may look like this:
function downloadSvg()
{
  let svgElement = document.getElementsByTagName('svg')[0];
  let svg = svgElement.outerHTML;
  let file = new Blob([svg], { type: 'image/svg+xml' });
  let a = document.createElement("a"), url = URL.createObjectURL(file);
  a.href = url;
  a.download = 'exported.svg';
  document.body.appendChild(a);
  a.click();
  setTimeout(function()
  {
    document.body.removeChild(a);
    window.URL.revokeObjectURL(url);
  }, 0);
}
From googling "p5.js svg", there doesn't seem to be a built-in way to work with SVG images in p5.js.
However, that search returns several promising results:
Here is a GitHub issue with a discussion about working with SVGs in p5.js.
Here is a project that attempts to add SVG support to p5.js:
The main goal of p5.SVG is to provide a SVG runtime for p5.js, so that we can draw using p5's powerful API in <svg>, save things to svg file and manipulating existing SVG file without rasterization.
Another discussion that links to two more SVG libraries.
The p5.SVG library sounds especially promising. I suggest you try something out and post an MCVE if you get stuck.
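As a very rough, untested sketch of what using p5.SVG might look like (assuming the library exposes an SVG renderer type for createCanvas, as its documentation describes):
// Hedged sketch: assumes p5.js and the p5.svg library are both loaded,
// which adds an SVG renderer type usable with createCanvas.
function setup() {
  createCanvas(400, 400, SVG);
}

function draw() {
  background(255);
  textSize(48);
  text('hello', 50, 200);
}

function keyPressed() {
  if (key === 's') {
    save('sketch.svg'); // with the SVG renderer this should produce vector output
  }
}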
A new package called canvas-sketch seems to solve this issue. They have a wealth of examples for p5.js as well.
const canvasSketch = require('canvas-sketch')
const p5 = require('p5')

const settings = {
  p5: { p5 },
  // Turn on a render loop
  animate: true,
}

canvasSketch(() => {
  // Return a renderer, which is like p5.js 'draw' function
  return ({ p5, time, width, height }) => {
    // Draw with p5.js things
    p5.background(0)
    p5.fill(255)
    p5.noStroke()

    const anim = p5.sin(time - p5.PI / 2) * 0.5 + 0.5
    p5.rect(0, 0, width * anim, height)
  };
}, settings)
If you use p5 globally, there's also an example for that called animated-p5.js
I want to read back the data stored via GL.bufferData in JavaScript.
Here is my code
var TRIANGLE_VERTEX = geometryNode["triangle_buffer"];
GL.bindBuffer(GL.ARRAY_BUFFER, TRIANGLE_VERTEX);
GL.bufferData(GL.ARRAY_BUFFER, new Float32Array(vertices), GL.STATIC_DRAW);
Is it possible in WebGL to read back the buffer data from the GPU?
If it is possible, please explain with some sample code.
Also, how can I know the memory size (used and free) of the GPU in WebGL at run time, and how can I debug the shader code and the data on the GPU in WebGL?
It is not directly possible to read the data back in WebGL1 (see below for WebGL2). This is a limitation of OpenGL ES 2.0, on which WebGL is based.
There are some workarounds:
You could try to render that data to a texture then use readPixels to read the data.
You'd have to encode the data into bytes in your shader because readPixels in WebGL can only read bytes
You can wrap your WebGL functions to store the data yourself, something like this:
var buffers = {};
var nextId = 1;
var targets = {};

function copyBuffer(data) {
  // make a Uint8 view of the data in case it's not already one
  var view = new Uint8Array(data.buffer ? data.buffer : data);
  // now copy it
  return new Uint8Array(view);
}

gl.bindBuffer = function(oldBindBufferFn) {
  return function(target, buffer) {
    // remember which WebGLBuffer is bound to which target
    targets[target] = buffer;
    oldBindBufferFn(target, buffer);
  };
}(gl.bindBuffer.bind(gl));

gl.bufferData = function(oldBufferDataFn) {
  return function(target, data, hint) {
    var buffer = targets[target];
    if (!buffer.id) {
      buffer.id = nextId++;
    }
    buffers[buffer.id] = copyBuffer(data);
    oldBufferDataFn(target, data, hint);
  };
}(gl.bufferData.bind(gl));
Now you can get the data with
data = buffers[someBuffer.id];
This is probably what the WebGL Inspector does
Note that there are a few issues with the code above. One, it doesn't check for errors. Checking for errors would make it way slower, but not checking for errors will give you incorrect results if your code generates errors. A simple example:
gl.bufferData(gl.ARRAY_BUFFER, someData, 123456);  // 123456 is not a valid usage hint
This would generate an INVALID_ENUM error and not update the data in someBuffer, but our code isn't checking for errors, so it would still have put someData into buffers, and if you read that data back it wouldn't match what's actually in WebGL.
Note the code above is pseudo code. For example I didn't supply a wrapper for gl.bufferSubData.
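If you did want error checking, a hedged sketch of what that could look like for the bufferData wrapper (note that gl.getError is slow and also clears the error flag your own code might want to inspect):
gl.bufferData = function(oldBufferDataFn) {
  return function(target, data, hint) {
    oldBufferDataFn(target, data, hint);
    // only cache the data if the call actually succeeded
    if (gl.getError() === gl.NO_ERROR) {
      var buffer = targets[target];
      if (!buffer.id) {
        buffer.id = nextId++;
      }
      buffers[buffer.id] = copyBuffer(data);
    }
  };
}(gl.bufferData.bind(gl));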
WebGL2
In WebGL2 there is a function, gl.getBufferSubData, that will allow you to read the contents of a buffer. Note that WebGL2, even though it's based on OpenGL ES 3.0, does not support gl.mapBuffer because there is no performant and safe way to expose that function.
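For illustration, a minimal sketch of reading back the buffer from the question with gl.getBufferSubData, assuming gl is a WebGL2 context and vertices/TRIANGLE_VERTEX are as defined above:
// Allocate a destination array the same size as the uploaded data
const readback = new Float32Array(vertices.length);

gl.bindBuffer(gl.ARRAY_BUFFER, TRIANGLE_VERTEX);
// Copy the buffer's contents (from byte offset 0) into readback
gl.getBufferSubData(gl.ARRAY_BUFFER, 0, readback);

console.log(readback);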