I need help with async/await. I'm currently studying https://github.com/tensorflow/tfjs-converter, and I'm stumped at this part of the code (loading my Python-converted saved JS model for use in the browser):
import * as tf from '@tensorflow/tfjs';
import {loadFrozenModel} from '@tensorflow/tfjs-converter';
/*1st model loader*/
const MODEL_URL = './model/web_model.pb';
const WEIGHTS_URL = '.model/weights_manifest.json';
const model = await loadFrozenModel(MODEL_URL, WEIGHTS_URL);
/*2nd model execution in browser*/
const cat = document.getElementById('cat');
model.execute({input: tf.fromPixels(cat)});
I noticed it's using ES6 (import/export) and ES2017 (async/await), so I've used Babel with babel-preset-env, babel-polyfill and babel-plugin-transform-runtime. I used webpack but switched over to Parcel as my bundler (as suggested by the tensorflow.js devs). In both bundlers I kept getting the error that the await should be wrapped in an async function, so I wrapped the first part of the code in an async function hoping to get a Promise:
async function loadMod(){
const MODEL_URL = './model/web_model.pb';
const WEIGHTS_URL = '.model/weights_manifest.json';
const model = await loadFrozenModel(MODEL_URL, WEIGHTS_URL);
}
loadMod();
Now both bundlers say that 'await is a reserved word'. The VS Code ESLint plugin says that loadMod() returns a Promise<void> (so the promise failed or got rejected?).
I'm trying to reference the JavaScript model files using a relative path, or is this wrong? Do I have to serve the ML model from the cloud? Can't it come from a relative local path?
Any suggestions would be much appreciated. Thanks!
tf.loadFrozenModel uses fetch under the hood. fetch is used to get a file served by a server and cannot be used with local files unless those are served by a server. See this answer for more.
For loadFrozenModel to work with local files, those files need to be served by a server. One can use http-server to serve the model topology and its weights.
# install the http-server module
npm install http-server -g

# cd to the directory containing the files, then launch the server
# to serve static files of model topology and weights
http-server -c1 --cors .
// load model in js script
(async () => {
...
const model = await tf.loadFrozenModel('http://localhost:8080/tensorflowjs_model.pb', 'http://localhost:8080/weights_manifest.json')
})()
You are trying to use this function:
tf.loadFrozenModel(MODEL_FILE_URL, WEIGHT_MANIFEST_FILE_URL)
And your code has a syntax error: if you use the keyword 'await', it must appear inside an async function, such as below:
async function run () {
/*1st model loader*/
const MODEL_URL = './model/web_model.pb';
const WEIGHTS_URL = './model/weights_manifest.json';
const model = await loadFrozenModel(MODEL_URL, WEIGHTS_URL);
/*2nd model execution in browser*/
const cat = document.getElementById('cat');
model.execute({input: tf.fromPixels(cat)});
}
run();
I'm experiencing the problem below. All I know is that it's because I added "type": "module" to my package.json file. But is there a way to convert the following code to an ES module? I haven't found good resources so far.
const model = require(path.join(__dirname, file))( ReferenceError: require is not defined
My code:
const model = require(path.join(__dirname, file))(
sequelize,
Sequelize.DataTypes
);
db[model.name] = model;
You can replicate this by using dynamic import(). Here's an example using your code:
import(path.join(__dirname, file))
.then(mod => {
const model = mod(sequelize, Sequelize.DataTypes);
db[model.name] = model;
});
Note that this runs asynchronously. If you're using a recent version of Node.js that supports top-level await, or this code is being run inside an async function, then you can use this code instead:
const mod = await import(path.join(__dirname, file));
const model = mod(sequelize, Sequelize.DataTypes);
db[model.name] = model;
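One caveat with "type": "module": __dirname is not defined in an ES module, and a CommonJS module.exports arrives as the imported namespace's default property. Below is a sketch of applying this across a directory of model files, assuming top-level await, that sequelize and Sequelize are in scope, and a hypothetical .model.js naming convention:
import fs from 'fs';
import path from 'path';
import { fileURLToPath, pathToFileURL } from 'url';

// Derive __dirname, which does not exist in ES modules.
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const db = {};

for (const file of fs.readdirSync(__dirname).filter((f) => f.endsWith('.model.js'))) {
  // Dynamic import wants a file:// URL for absolute paths.
  const mod = await import(pathToFileURL(path.join(__dirname, file)).href);
  // A CommonJS module.exports shows up as the default export.
  const factory = mod.default ?? mod;
  const model = factory(sequelize, Sequelize.DataTypes);
  db[model.name] = model;
}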
You are calling a function called require and you receive an error telling you that it's undefined. This is a good indication that your require function is indeed undefined.
It is quite possible that you wanted to use RequireJS: https://requirejs.org/
You will need to make sure that RequireJS is already loaded and properly initialized by the time you try to call require.
If you are using Node.js, you need to install RequireJS:
npm install requirejs
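Once installed, a minimal Node usage sketch (the module name 'foo' and the baseUrl are hypothetical):
const requirejs = require('requirejs');

requirejs.config({
  baseUrl: __dirname,
  // Fall back to Node's own require for built-in and npm modules.
  nodeRequire: require
});

// Loads ./foo.js relative to baseUrl, then hands it to the callback.
requirejs(['foo'], function (foo) {
  console.log(foo);
});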
If you are trying to use it in your browser, then you will need to link it via a script tag. Here's a tutorial with more information and practical examples: https://www.tutorialspoint.com/requirejs/index.htm
https://github.com/SimulatedGREG/electron-vue
I used this template to make an Electron app. And I am using this library:
https://www.npmjs.com/package/fs-extra
https://nodejs.org/docs/latest-v11.x/api/fs.html
According to this document, I can write:
await fs.readdir
But the Electron template is using electron@2.0.4, which uses node@8.9.3. So I checked here:
https://nodejs.org/docs/latest-v8.x/api/fs.html
It looks like the function doesn't return a promise there. But I actually can await fs functions using fs-extra in electron@2.0.4, both in development and in the build. Why is this?
The result of
console.log(fs.readdir())
is a Promise. But I don't know why I can do this in electron@2.0.4.
You can use the fs module in Node v8; the return value will be passed to a callback function:
const fs = require('fs');
fs.readdir(dir, function(err, list) {
// do your logic with list array
})
If you're using Node 8 or later, you can use promisify from the built-in util module:
const { promisify } = require('util');
const fs = require('fs');
const readdir = promisify(fs.readdir);
(async () => {
const res = await readdir('./path');
console.log(res);
})();
Sorry, I am not good at English, and I am a beginner web developer. This is just a guess.
I looked at the package.json of fs-extra and could not find fs inside it. It uses a library called graceful-fs, but that library does not list fs as a dependency either. Maybe fs-extra is not tied to fs directly, and it has its own logic that is already promisified, even on Node lower than version 10.
Does anyone know the truth?
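For what it's worth, fs-extra does wrap Node's fs; its awaitability comes from a small helper (the universalify package) that returns a Promise whenever no callback is passed. A minimal sketch of that pattern, under that assumption:
// The gist of universalify's fromCallback, which fs-extra applies to fs methods:
// behave callback-style if the caller passed a callback, otherwise return a Promise.
function fromCallback(fn) {
  return function (...args) {
    if (typeof args[args.length - 1] === 'function') {
      return fn.apply(this, args);
    }
    return new Promise((resolve, reject) => {
      fn.call(this, ...args, (err, res) => (err ? reject(err) : resolve(res)));
    });
  };
}

// Hypothetical usage wrapping Node's callback-style fs.readdir:
const fs = require('fs');
const readdir = fromCallback(fs.readdir);

readdir('.').then((list) => console.log(list)); // awaitable, no callback passed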
I have converted a Keras model to the TensorFlow.js JSON format and saved it locally on my computer. I am trying to load that JSON model in JavaScript code using the command below:
model = await tf.loadModel('web_model')
But the model is not getting loaded. Is there a way to load a TensorFlow.js JSON model from the local file system?
I know you're trying to load your model in a browser, but if anybody lands here trying to do it in Node, here's how:
const tf = require("@tensorflow/tfjs");
const tfn = require("@tensorflow/tfjs-node");

(async () => {
  const handler = tfn.io.fileSystem("./path/to/your/model.json");
  const model = await tf.loadLayersModel(handler);
})();
loadModel uses fetch under the hood, and fetch cannot access local files directly; it is meant to be used to get files served by a server. More on this here.
To load a local file with the browser, there are two approaches: asking the user to upload the file with
<input type="file"/>
or serving the file from a server.
In these two scenarios, tf.js provides ways to load the model.
Load the model by asking the user to upload the file
html
<input type="file" id="upload-json"/>
<input type="file" id="upload-weights"/>
js
const uploadJSONInput = document.getElementById('upload-json');
const uploadWeightsInput = document.getElementById('upload-weights');
const model = await tf.loadModel(tf.io.browserFiles(
  [uploadJSONInput.files[0], uploadWeightsInput.files[0]]));
Serving the local files using a server
To do so, one can use the npm module http-server to serve the directory containing both the weights and the model. It can be installed with the following command:
npm install http-server -g
Inside the directory, one can run the following command to launch the server:
http-server -c1 --cors .
Now the model can be loaded:
// load model in js script
(async () => {
...
const model = await tf.loadFrozenModel('http://localhost:8080/model.pb', 'http://localhost:8080/weights.json')
})()
const tf = require('@tensorflow/tfjs');
const tfnode = require('@tensorflow/tfjs-node');
async function loadModel(){
const handler = tfnode.io.fileSystem('tfjs_model/model.json');
const model = await tf.loadLayersModel(handler);
console.log("Model loaded")
}
loadModel();
This worked for me in node. Thanks to jafaircl.
If you're using React with create-react-app, you can keep your saved model files in your public folder.
For example, say you want to use the blazeface model. You would
Download the .tar.gz model from that web page.
Unpack the model into your app's public directory. So now you have the files from the .tar.gz file in a public subdir:
%YOUR_APP%/public/blazeface_1_default_1/model.json
%YOUR_APP%/public/blazeface_1_default_1/group1-shard1of1.bin
Load the model in your React app using:
tf.loadGraphModel(process.env.PUBLIC_URL + '/blazeface_1_default_1/model.json')
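A minimal component sketch of that (assumes create-react-app with @tensorflow/tfjs installed and the model files unpacked into public/ as above):
import React, { useEffect, useState } from 'react';
import * as tf from '@tensorflow/tfjs';

function App() {
  const [model, setModel] = useState(null);

  useEffect(() => {
    // Files under public/ are served from PUBLIC_URL at runtime.
    tf.loadGraphModel(process.env.PUBLIC_URL + '/blazeface_1_default_1/model.json')
      .then(setModel)
      .catch(console.error);
  }, []);

  return <div>{model ? 'Model loaded' : 'Loading model...'}</div>;
}

export default App;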
You could try:
const model = await tf.models.modelFromJSON(myModelJSON)
Here it is in the tensorflow.org docs
Check out our documentation for loading models: https://js.tensorflow.org/api/latest/#Models-Loading
You can use tf.loadModel, which takes a string URL to your model definition; the model needs to be served over HTTP. This means you need to start an http-server to serve those files (it will not allow you to make a request to your filesystem because of CORS).
This package can do that for you: npmjs.com/package/http-server
You could use an insecure Chrome instance:
C:\Program Files (x86)\Google\Chrome\Application>chrome.exe --disable-web-security --disable-gpu --user-data-dir=C:/Temp
Then you could add this script to redefine the fetch function:
async function fetch(url) {
  return new Promise(function (resolve, reject) {
    var xhr = new XMLHttpRequest();
    xhr.onload = function () {
      resolve(new Response(xhr.responseText, {status: 200}));
    };
    xhr.onerror = function () {
      reject(new TypeError('Local request failed'));
    };
    xhr.open('GET', url);
    xhr.send(null);
  });
}
After that, be sure that you use the right model loader (see my comment about the loader issue).
BUT your weights will be incorrect: as I understand it, there are some encoding problems.
If you are trying to load it on the server side, use @tensorflow/tfjs-node instead of @tensorflow/tfjs, and update to version 0.2.1 or higher to resolve this issue.
I am using React.js for loading the model (for image classification and more machine learning stuff).
TensorFlow.js does not support an API to read a previously trained model straight from a plain JSON object, so the topology and weights have to be wrapped as File objects for tf.io.browserFiles:
// Wrap the JSON topology and the binary weights as File objects
// (a plain Blob has no .src property, so assigning to it does nothing).
const file = new File([JSON.stringify(modelJSON)], 'model.json', {type: 'application/json'});
const files = new File([modelWeights], 'model.weights.bin');
console.log(files);
const model = await tf.loadLayersModel(tf.io.browserFiles([file, files]));
You can create an API in Express.js to serve your model (model.json and weights.bin) if you use a web app; a minimal sketch of such a server follows the classifier code below. (For a TensorFlow Lite model you could use opencv's readTensorflowmodel(model.pb, weight.pbtxt).)
References: How to load tensorflow-js weights from express using tf.loadLayersModel()?
const classifierModel = await tf.loadLayersModel(
  'https://rp5u7.sse.codesandbox.io/api/pokeml/classify'
);
const im = new Image();
im.src = imagenSample; // e.g. '../../../../../Models/ShapesClassification/Samples/images (2).png'
const abc = this.preprocessImage(im);
const preds = await classifierModel.predict(abc); // .argMax(-1);
console.log('<Response>', preds, 'Principal', preds.shape[0], 'DATA', preds.dataSync());
const responde = [...preds.dataSync()];
console.log('Maximum value', Math.max(...responde));
let indiceMax = this.indexOfMax(responde);
console.log(indiceMax);
console.log('<<<LABEL>>>', this.labelsReturn(indiceMax));
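For the serving side, a minimal Express sketch (hypothetical port and paths; assumes express is installed and model.json plus its .bin shards sit in a local model/ directory):
const express = require('express');
const path = require('path');

const app = express();

// Serve model.json and its weight shards as static files so that
// tf.loadLayersModel can fetch them over HTTP.
app.use('/api/pokeml', express.static(path.join(__dirname, 'model')));

app.listen(3000, () =>
  console.log('Model at http://localhost:3000/api/pokeml/model.json')
);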
If you are using Django, you should:
create a static directory in your app and put your model there.
load that static directory in the template where you want to use your model:
var modelPath = "{% static 'sampleModel.json' %}";
Don't forget to also load tensorflow.js library:
<script src="https://cdn.jsdelivr.net/npm/#tensorflow/tfjs"></script>
Now you can load your model:
<script>model = await tf.loadGraphModel(modelPath)</script>
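Putting the steps together, a minimal template sketch (assumes Django's staticfiles app is enabled; sampleModel.json is the hypothetical file name from above; note that await must sit inside an async function in a classic script):
{% load static %}
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script>
  const modelPath = "{% static 'sampleModel.json' %}";
  (async () => {
    const model = await tf.loadGraphModel(modelPath);
    console.log('Model loaded', model);
  })();
</script>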
I found a solution that works. You can replace the URL with a localhost URL on XAMPP, for example (directory = model) http://localhost/model/model.json, and after that you have to disable your browser's CORS policy. For me, I found a Chrome extension that removed CORS for my specific tab, and it worked.
Thank me later!!
I'm writing unit tests to check my API. Before I merged my git test branch into my dev branch everything was fine, but then I started to get this error:
App running at: http://localhost:4096/
spacejam: meteor is ready
spacejam: spawning phantomjs
phantomjs: Running tests at http://localhost:4096/local using test-in-console
phantomjs: Error: fetch is not found globally and no fetcher passed, to fix pass a fetch for
your environment like https://www.npmjs.com/package/unfetch.
For example:
import fetch from 'unfetch';
import { createHttpLink } from 'apollo-link-http';
const link = createHttpLink({ uri: '/graphql', fetch: fetch });
Here's a part of my api.test.js file:
describe('GraphQL API for users', () => {
before(() => {
StubCollections.add([Meteor.users]);
StubCollections.stub();
});
after(() => {
StubCollections.restore();
});
it('should do the work', () => {
const x = 'hello';
expect(x).to.be.a('string');
});
});
The funniest thing is that I don't even have GraphQL in my tests (although I use it in my Meteor package).
Unfortunately, I didn't find enough information (apart from the apollo-link-http docs, which have examples but still puzzle me). I did try to use that example, but it didn't help and I still get the same error.
I got the same error importing an npm module doing GraphQL queries into my React application. The app was compiling, but tests were failing since window.fetch is not available in the Node.js runtime.
I solved the problem by installing node-fetch (https://www.npmjs.com/package/node-fetch) and adding the following declarations to jest.config.js:
const fetch = require('node-fetch')
global.fetch = fetch
global.window = global
global.Headers = fetch.Headers
global.Request = fetch.Request
global.Response = fetch.Response
global.location = { hostname: '' }
Doing so we instruct Jest on how to handle window.fetch when it executes frontend code in the Node.js runtime.
If you're using Node.js, do the following:
Install node-fetch:
npm install --save node-fetch
Add the line below to index.js:
global.fetch = require('node-fetch');
The problem is this: fetch is defined when you are in the browser, and is available as fetch, or even window.fetch.
On the server it is not defined, and either needs to be imported explicitly, or a polyfill like https://www.npmjs.com/package/unfetch (as suggested in the error message) needs to be imported by your test code to make the problem go away.
I'm looking for an npm module that I can use to edit the metatags, like Author and Title, of PDF files.
Alternatively, an open-license JavaScript library would also be okay.
There's a program called pdftk, which would be suitable if it were an npm module.
I have not tested this package, but node-exiftool seems to provide PDF metadata editing.
Another possibility is to write your own module using pdftk (if available) and child_process; a sketch of that approach follows.
Maybe I will try to make one myself.
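A minimal sketch of the child_process approach (assumes pdftk is installed and on PATH; the file names are hypothetical; the InfoBegin/InfoKey/InfoValue lines follow pdftk's update_info format):
const { execFile } = require('child_process');
const fs = require('fs');

// pdftk reads new metadata from an "info" file in its dump_data format.
fs.writeFileSync('meta.txt', [
  'InfoBegin',
  'InfoKey: Title',
  'InfoValue: Example PDF',
  'InfoBegin',
  'InfoKey: Author',
  'InfoValue: John Doe',
].join('\n'));

// Write a copy of in.pdf with the updated metadata to out.pdf.
execFile('pdftk', ['in.pdf', 'update_info', 'meta.txt', 'output', 'out.pdf'], (err) => {
  if (err) throw err;
  console.log('Metadata updated');
});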
You can install the exiftool command line utility to edit the metadata of PDFs:
sudo apt install libimage-exiftool-perl
Then you can use the Node.js child_process.exec() function to call the command line utility from your program:
'use strict';
const exec = require('util').promisify(require('child_process').exec);
const execute = async command => {
const {stdout, stderr} = await exec(command);
console.log((stderr || stdout).trim());
};
(async () => {
await execute('exiftool -title="Example PDF" -author="John Doe" /var/www/example.com/public_html/example.pdf');
})();