How to truncate coco ssd model in tensorflow.js? - javascript

ML / Tensorflow beginner.
I'm having trouble trying to get one of the layers from the coco ssd model imported as a package in a React application. I'm following the Pacman tensorflow.js example to retrain the model.
const modelPromise = cocoSsd.load();
Promise.all([modelPromise])
  .then(models => {
    console.log(models[0]);
    const cocoModel = models[0].model;
    console.log(cocoModel);
    const layer = cocoModel.getLayer('conv_pw_13_relu');
    this.truncatedCocoModel = tf.model({inputs: cocoModel.inputs, outputs: layer.output});
  })
  .catch(error => {
    console.error(error);
  });
On the const layer line I get the error message 'cocoModel.getLayer is not a function'. The Pacman example uses the MobileNet model, which I guess has this function.
What are my options here? I looked around using the browser console but I can't find this function anywhere, and looking online didn't help much (is there anywhere online where I can see the whole structure of Google's coco-ssd model?)

Using the npm package https://cdn.jsdelivr.net/npm/@tensorflow-models/coco-ssd, you cannot retrieve any layer.
load returns an instance of ObjectDetection, which does not have a getLayer method.
If you want to retrieve the layer, you would have to load the graph model directly, as described here.
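For illustration, a minimal sketch of that approach (the model URL and the output node name below are placeholders, not the actual coco-ssd values): load the converted SSD graph model with tf.loadGraphModel and ask execute for the intermediate node you want, since a GraphModel has no getLayer.
import * as tf from '@tensorflow/tfjs';
// Hypothetical URL: point this at the model.json of your converted graph model.
const MODEL_URL = 'https://example.com/ssd_mobilenet/model.json';
async function getIntermediateActivation(imageElement) {
  const model = await tf.loadGraphModel(MODEL_URL);
  // Batch the image before running it through the graph.
  const input = tf.tidy(() => tf.browser.fromPixels(imageElement).expandDims(0));
  // 'conv_pw_13_relu' is only illustrative; the real node name depends on the
  // conversion (inspect model.json). Use executeAsync if the graph has dynamic ops.
  const activation = model.execute(input, 'conv_pw_13_relu');
  input.dispose();
  return activation;
}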

Related

Different predictions if running in Node instead of Browser (using the same model_web - python converted model)

Pretty new to ML and TensorFlow!
I made an object detection model with http://cloud.annotations.ai, which lets you train and convert a model into different formats, including tfjs (model_web).
That website also provides boilerplates for running the model within a browser (React app)... just like you do - it is probably the same code, I didn't spend enough time to check.
So I have this model running inside a browser, giving predictions about objects in a photo with pretty good results, considering the number of examples I gave and the prediction score (0.89). The bounding box it gives is good too.
But, unfortunately, I don't have "just one video" to analyze frame by frame inside a browser; I've got plenty of them. So I decided to switch to Node.js, porting the code as is.
Guess what? TF.js relies on the DOM and browser components, and almost no examples that work with Node exist. Not a big deal, I just spent a morning figuring out all the missing parts.
Finally I'm able to run my model over videos that are split into frames, at a decent speed (although I get the "Hello there, use tfjs-node to gain speed" banner even though I'm already using tfjs-node), but the results seem odd.
Comparing the same picture with the same model_web folder gives the same prediction but with a lower score (0.80 instead of 0.89) and a different bounding box, with the object not centered at all.
(TL;DR)
Do tfjs and tfjs-node have different implementations that make different use of the same model? I don't think it can be a problem with the input, because after a long search and fight I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node). Has anyone made comparisons?
So... that's the code I used, for your reference:
model_web is loaded with tf.loadGraphModel("file://path/to/model_web/model.json");
Two different ways to decode a JPG and make it work with tf.browser.fromPixels():
const fs = require('fs').promises;
const inkjet = require('inkjet');
const {createCanvas, loadImage} = require('canvas');
// Option 1: decode the JPG with the pure-JS "inkjet" package.
const decodeJPGInkjet = (file) => {
  return new Promise((rs, rj) => {
    fs.readFile(file).then((buffer) => {
      inkjet.decode(buffer, (err, decoded) => {
        if (err) {
          rj(err);
        } else {
          // decoded is {width, height, data} with RGBA pixel data
          rs(decoded);
        }
      });
    });
  });
};
// Option 2: decode the JPG by drawing it on a node-canvas and reading the pixels back.
const decodeJPGCanvas = (file) => {
  return loadImage(file).then((image) => {
    const canvas = createCanvas(image.width, image.height);
    const ctx = canvas.getContext('2d');
    ctx.drawImage(image, 0, 0, image.width, image.height);
    const data = ctx.getImageData(0, 0, image.width, image.height);
    return {data: new Uint8Array(data.data), width: data.width, height: data.height};
  });
};
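Either helper resolves to a {data, width, height} object, which tf.browser.fromPixels accepts as pixel data; a minimal usage sketch (the frame path is just an example):
const tf = require('@tensorflow/tfjs-node');
// Example only: decode one frame and turn it into a [1, height, width, 3] tensor.
decodeJPGCanvas('./frames/frame_0001.jpg').then((pixels) => {
  const input = tf.tidy(() => tf.browser.fromPixels(pixels).expandDims(0));
  console.log(input.shape); // e.g. [1, 720, 1280, 3]
  input.dispose();
});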
And that's the code that uses the loaded model to give predictions (same code for Node and browser, found at https://github.com/cloud-annotations/javascript-sdk/blob/master/src/index.js). It doesn't work on Node as is, so I replaced require("@tensorflow/tfjs"); with require("@tensorflow/tfjs-node"); and replaced fetch with fs.read.
const runObjectDetectionPrediction = async (graph, labels, input) => {
  const batched = tf.tidy(() => {
    const img = tf.browser.fromPixels(input);
    // Reshape to a single-element batch so we can pass it to executeAsync.
    return img.expandDims(0);
  });
  const height = batched.shape[1];
  const width = batched.shape[2];
  const result = await graph.executeAsync(batched);
  const scores = result[0].dataSync();
  const boxes = result[1].dataSync();
  // clean the webgl tensors
  batched.dispose();
  tf.dispose(result);
  const [maxScores, classes] = calculateMaxScores(
    scores,
    result[0].shape[1],
    result[0].shape[2]
  );
  const prevBackend = tf.getBackend();
  // run post process in cpu
  tf.setBackend("cpu");
  const indexTensor = tf.tidy(() => {
    const boxes2 = tf.tensor2d(boxes, [result[1].shape[1], result[1].shape[3]]);
    return tf.image.nonMaxSuppression(
      boxes2,
      maxScores,
      20, // maxNumBoxes
      0.5, // iou_threshold
      0.5 // score_threshold
    );
  });
  const indexes = indexTensor.dataSync();
  indexTensor.dispose();
  // restore previous backend
  tf.setBackend(prevBackend);
  return buildDetectedObjects(
    width,
    height,
    boxes,
    maxScores,
    indexes,
    classes,
    labels
  );
};
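Tying those pieces together, a rough usage sketch under the same assumptions (the paths are placeholders, and calculateMaxScores / buildDetectedObjects come from the cloud-annotations SDK linked above):
const tf = require('@tensorflow/tfjs-node');
async function main() {
  // Load the converted model; the labels file location is just an example.
  const graph = await tf.loadGraphModel('file://path/to/model_web/model.json');
  const labels = require('./model_web/labels.json');
  // Decode one frame with either helper above and run the prediction.
  const pixels = await decodeJPGCanvas('./frames/frame_0001.jpg');
  const detections = await runObjectDetectionPrediction(graph, labels, pixels);
  console.log(detections);
}
main().catch(console.error);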
Do tfjs and tfjs-node have different implementations that make different use of the same model?
If the same model is deployed both in the browser and in Node.js, the prediction will be the same.
If the predicted values are different, it might be related to the tensor used for the prediction. The processing from image to tensor might differ, resulting in different tensors being fed to the model and therefore different outputs.
I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node)
The canvas package uses the system graphics stack to create a browser-like canvas environment that Node.js can use. This makes it possible to use the tf.browser namespace, especially when dealing with image conversion. However, it is also possible to create a tensor directly from a Node.js buffer.
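For example, tfjs-node ships tf.node.decodeImage, which turns an image buffer into a tensor without going through canvas or the tf.browser namespace (the file path is just an example):
const fs = require('fs');
const tf = require('@tensorflow/tfjs-node');
// Read the JPG into a buffer and decode it to a [height, width, 3] int32 tensor.
const buffer = fs.readFileSync('./frames/frame_0001.jpg');
const imageTensor = tf.node.decodeImage(buffer, 3);
// Batch it the same way the browser code does before running the model.
const batched = imageTensor.expandDims(0);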

Forge Viewer fails to display edges

I used https://github.com/Autodesk-Forge/viewer-react-express-headless as a starting point for my Forge React application and I modified viewer = new Autodesk.Viewing.Viewer3D(viewerElement, {}); to viewer = new Autodesk.Viewing.Private.GuiViewer3D(viewerElement, {}); to change it back from a headless to a classic viewer.
I can load my model, but it appears without edges. When I go to Settings -> Performance -> Display edges it is off by default, and when I try to turn it back on the edges stay invisible.
From my non-working viewer:
When I try the same operation with the same model loaded on Autodesk Viewer it works as expected and I can toggle the visibility of the edges.
From the Autodesk Viewer
I found another seemingly related question on stackoverflow, but I tried viewer.js?v=v4.2, viewer.js?v=v5.0 and viewer.js?v=v6.3.1 and I still have the invisible edges issue.
I also posted a GitHub issue.
Thank you for your help.
Alexis
OK, if you are creating the viewer instance via Autodesk.Viewing.Private.GuiViewer3D directly, rather than via Autodesk.Viewing.ViewingApplication, then there is a magic configuration parameter that you will need to apply when initializing the Forge viewer so that the lines will appear...
To fix it, an extra option isAEC: true must be passed in the modelOptions in your code, see below:
var modelOptions = {
  placementTransform: mat,
  globalOffset: {x: 0, y: 0, z: 0},
  sharedPropertyDbPath: doc.getPropertyDbPath(),
  isAEC: true // !<<< Here is the missing line
};
viewer.loadModel(svfUrl, modelOptions, onLoadModelSuccess, onLoadModelError);

Multi-dimensional input in Keras.js

I am looking to implement my Keras model on my website with the Keras.js library. One problem with this library is that, when inputting data in JavaScript, only a Float32Array() is allowed as an input. This type of array is 1D, while in my Keras model the input is 3D. I have asked on the Keras.js issues page and found some potential solutions, such as adding an embedding layer, but that requires a specific input shape, whereas I would like any 3D input to work, as it did when I trained the model. The model structure is simple: an input layer which takes an array of dimensions m x n x 3 (an image of unknown size with r, g, and b channels) and a Conv2D layer, which then outputs an m x n x 1 array. I know the model works because it can give a good output based on an input, so the only problem I have is with the transition to Keras.js. Here is the JS code that I have at the moment.
function predictImageWithCNN(data) { //'data' is an mxnx3 array
  var final = [];
  //instantiating model from json and buf files
  var model = new KerasJS.Model({
    filepaths: {
      model: 'dist/model.json',
      weights: 'dist/model_weights.buf',
      metadata: 'dist/model_metadata.json'
    },
    gpu: true //MAY NEED TO CHANGE (NOT SURE REALLY)
  });
  //Ready the model.
  model.ready()
    .then(function() {
      //This matches our input data with the input key (b/c Sequential Model)
      var inputData = {
        'input_1': new Float32Array(data)
      };
      // make predictions based on inputData
      return model.predict(inputData);
    })
    .then(function(outputData) {
      //Here we take the outputData and parse it to get a result.
      var out = outputData['output']; //Gets output data
      console.log(out);
      //TODO: Put in code to assign outputData to 'final' so we can then convert it
      // This should not be too hard to do.
    })
    .catch(function(err) {
      console.log(err);
      // handle error
    });
  return final; // should return an nxmx1 array of vals 0-1.
}
If anyone has any suggestions for how to resolve this, that would be very much appreciated. Thanks! :)
I had the same problem with an LSTM. The way I got around it was by training with a flattened version of the data, but using a Reshape layer as the first layer to get it back to the shape I needed for my LSTM, e.g.:
model = Sequential()
model.add(Reshape((40,59),input_shape=(1,2360)))
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
Then in Keras.js I can feed in the flattened version as a Float32Array.
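On the JavaScript side, a rough sketch of how the asker's m x n x 3 array could be flattened into the single Float32Array Keras.js expects (the input name 'input_1' comes from the question's code; the helper itself is illustrative):
// Flatten an m x n x 3 nested array into one Float32Array, matching the
// flattened input shape the model was trained with.
function flattenImageData(data) {
  const m = data.length;
  const n = data[0].length;
  const flat = new Float32Array(m * n * 3);
  let i = 0;
  for (let row = 0; row < m; row++) {
    for (let col = 0; col < n; col++) {
      for (let ch = 0; ch < 3; ch++) {
        flat[i++] = data[row][col][ch];
      }
    }
  }
  return flat;
}
// Then, inside model.ready().then(...):
// return model.predict({ 'input_1': flattenImageData(data) });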

How to get time of page's first paint

While it is easy enough to get firstPaint times from dev tools, is there a way to get the timing from JS?
Yes, this is part of the paint timing API.
You probably want the timing for first-contentful-paint, which you can get using:
const paintTimings = performance.getEntriesByType('paint');
const fmp = paintTimings.find(({ name }) => name === "first-contentful-paint");
console.log(`First contentful paint at ${fmp.startTime}ms`);
Recently, new browser APIs like PerformanceObserver and PerformancePaintTiming have made it easier to retrieve First Contentful Paint (FCP) from JavaScript.
I made a JavaScript library called Perfume.js which works with a few lines of code:
const perfume = new Perfume({
  firstContentfulPaint: true
});
// ⚡️ Perfume.js: First Contentful Paint 2029.00 ms
I realize you asked about First Paint (FP) but would suggest using First Contentful Paint (FCP) instead.
The primary difference between the two metrics is that FP marks the point when the browser renders anything that is visually different from what was on the screen prior to navigation. By contrast, FCP is the point when the browser renders the first bit of content from the DOM, which may be text, an image, an SVG, or even a canvas element.
if (typeof PerformanceObserver !== 'undefined') { // if the browser supports it
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(entry.entryType);
      console.log(entry.startTime);
      console.log(entry.duration);
    }
  });
  observer.observe({entryTypes: ['paint']});
}
This will help you; just paste this code at the start of your JS app, before everything else.

esri javascript api symbolize layers

I have a working application here: http://dola.colorado.gov/gis-cms/sites/default/files/html/census2000v2.html
I'm using the Javascript API with ArcGIS Online. I have a bunch of layers loaded and pre-symbolized in an AGOL 'Web Map'.
I'd like to be able to customize the symbology of each layer dynamically using javascript. I'd ideally like to use a renderer and be able to create a different symbology for each demographic variable.
I've run into a major brick wall. To be able to change the symbology, I need to be able to iterate through graphics in a feature set - yet I have no idea where to get a feature set object from. All the examples I see use 'Feature Layers' loaded through URLs.
I think first you need to get the layer from the webmap:
var featureLayer = mapObject.getLayer(layerName)
Then you can query the feature layer, which will return a FeatureSet.
Here is an example:
var query = new esri.tasks.Query();
query.outFields = ["*"];
featureLayer.queryFeatures(query, function(featureSet) {
  // do something with the featureSet here!
});
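If the goal is just to change the symbology, you don't necessarily have to iterate over the FeatureSet at all; a rough sketch with the legacy (3.x) API, assuming a polygon layer and using setRenderer (the symbol and colors are only examples):
// Example only: give every feature in the layer the same fill symbol.
var symbol = new esri.symbol.SimpleFillSymbol(
  esri.symbol.SimpleFillSymbol.STYLE_SOLID,
  new esri.symbol.SimpleLineSymbol(
    esri.symbol.SimpleLineSymbol.STYLE_SOLID,
    new dojo.Color([0, 0, 0]), 1),
  new dojo.Color([255, 0, 0, 0.5]));
var renderer = new esri.renderer.SimpleRenderer(symbol);
featureLayer.setRenderer(renderer);
featureLayer.redraw();
A ClassBreaksRenderer or UniqueValueRenderer can be applied the same way if you want a different symbol per demographic variable.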
