I am working on deploying an ML model that I trained using TensorFlow (in Python). The model is saved as an .h5 file. I converted the model using the command tensorflowjs_converter --input_format=keras ./model/myFile.h5 /JS_model/.
I imported the tensorflow library using the following:
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs/dist/tf.min.js"></script>
After this, I was able to load the model using the loadLayersModel() function. However, loadGraphModel() does not work. It outputs this error in the browser:
''
I also tried using the tf.models.save_model.save() function in Python, which outputs the variables and assets folders as well as the .pb file. However, an error still occurs. Using the code above, changing only the path to 'THE_classifier' (the folder where assets, variables, and the .pb file are located), the output is:
I want to work with the loadGraphModel() function because, according to various sources, it provides faster inference.
Layers models and graph models have different internal layouts; they are not compatible or interchangeable. If it's a layers model, it must be loaded with tf.loadLayersModel(), and if it's a graph model, it must be loaded with tf.loadGraphModel().
Graph models are frozen models, so if you want to convert a Keras model to a graph model, you need to freeze it first; otherwise it can only be converted to a layers model.
(That is also where the difference in inference time comes from: it is faster to evaluate a frozen model than one that still uses variables.)
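One way to avoid calling the wrong loader is to check the model.json first. A small sketch, assuming the "format" field values that tensorflowjs_converter writes into model.json ("layers-model" vs. "graph-model"); the pickLoader helper below is purely illustrative, not part of tf.js:

```javascript
// Hypothetical helper (not a tf.js API): decide which tf.js loader applies
// by inspecting the "format" field in a converted model's model.json.
function pickLoader(modelJson) {
  if (modelJson.format === 'layers-model') return 'tf.loadLayersModel';
  if (modelJson.format === 'graph-model') return 'tf.loadGraphModel';
  throw new Error('Unrecognized model format: ' + modelJson.format);
}

// Usage sketch (in the browser):
//   const json = await (await fetch('JS_model/model.json')).json();
//   console.log(pickLoader(json));  // tells you which loader to call
```

This only inspects the JSON; the actual load still goes through tf.loadLayersModel() or tf.loadGraphModel() as described above.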
I'm new to TensorFlow and I have a question. My project has two major parts. The first, written in Node.js, trains my model from a dataset and saves it to local storage, so I have two files:
model.json
weights.bin
The second part is written in C++. After a couple of days I managed to build TensorFlow with Bazel and add it to my OpenCV project. So here is my question: I want to train my model in the Node.js part and use the resulting models in the C++ part. Is this possible?
I also saw the tfjs-converter, but it converts models for use in Node.js, not vice versa.
Update:
After a lot of searching I figured out that I should convert my models to a protobuf file, but tfjs-converter does not support this type of conversion. Another point is that I want to use my model with the OpenCV library.
Update 2
Finally I was able to convert my model to a .pb file. First I used the tfjs-converter to convert it to a Keras model (.h5 file), and after that used this Python script to convert it to a .pb file; OpenCV can successfully load the model. But I get this error when using the model:
libc++abi.dylib: terminating with uncaught exception of type
cv::Exception: OpenCV(4.1.0)
/tmp/opencv-20190505-12101-14vk1fh/opencv-4.1.0/modules/dnn/src/dnn.cpp:524:
error: (-2:Unspecified error) Can't create layer
"flatten_Flatten1/Shape" of type "Shape" in function
'getLayerInstance'
Any help? Thanks.
I finally solved my own problem. Here are the steps I took:
Convert my tfjs model to a Keras model using the tfjs-converter.
Use this Python script to change the Keras (.h5) model to a frozen model (.pb).
Use this tutorial to optimize the .pb model.
Finally everything works great!
I'm trying to download a pretrained tensorflow.js model, including weights, to be used offline in Python in the standard version of TensorFlow, as part of a project that is not at an early stage by any means, so switching to tensorflow.js is not a possibility.
But I can't figure out how to download those models, or whether it's necessary to do some conversion first.
I'm aware that in JavaScript I can access the models and use them by including them like this:
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.13.3"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/posenet@0.2.3"></script>
But how do I actually get the .ckpt files, or the frozen model if that's the case?
My final objective is to get the frozen model files and get the outputs as is done in the normal version of TensorFlow.
Also, this will be used in an offline environment, so any online reference would not be useful.
Thanks for your replies
It is possible to save the model topology and its weights by calling the model's save() method.
const model = tf.sequential();
model.add(tf.layers.dense(
{units: 1, inputShape: [10], activation: 'sigmoid'}));
const saveResult = await model.save('downloads://mymodel');
// This will trigger downloading of two files:
// 'mymodel.json' and 'mymodel.weights.bin'.
console.log(saveResult);
There are different scheme strings depending on where to save the model and its weights (localStorage, IndexedDB, ...). doc
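For quick reference, here is a sketch of the scheme strings and where each one stores the model. The mapping reflects the tf.js model.save() documentation; the describeSaveUrl helper itself is purely illustrative, not a tf.js API:

```javascript
// Illustrative only (not part of tf.js): map each save-URL scheme to its
// storage target, per the tf.js model.save() documentation.
const saveTargets = {
  'downloads://': 'browser file download',
  'localstorage://': 'browser Local Storage',
  'indexeddb://': 'browser IndexedDB',
  'http://': 'HTTP request to a server',
  'file://': 'local filesystem (Node.js only)'
};

function describeSaveUrl(url) {
  const scheme = Object.keys(saveTargets).find(s => url.startsWith(s));
  return scheme ? saveTargets[scheme] : 'unknown scheme';
}
```

So model.save('indexeddb://mymodel') would persist the model in the browser's IndexedDB instead of triggering a download.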
I went to https://storage.googleapis.com/tfjs-models/ and found the directory listing all the files. I found the relevant files (I wanted all the MobileNet float models, as opposed to the quantized MobileNet) and populated this file_uris list.
base_uri = "https://storage.googleapis.com/tfjs-models/"
file_uris = [
"savedmodel/posenet/mobilenet/float/050/group1-shard1of1.bin",
"savedmodel/posenet/mobilenet/float/050/model-stride16.json",
"savedmodel/posenet/mobilenet/float/050/model-stride8.json",
"savedmodel/posenet/mobilenet/float/075/group1-shard1of2.bin",
"savedmodel/posenet/mobilenet/float/075/group1-shard2of2.bin",
"savedmodel/posenet/mobilenet/float/075/model-stride16.json",
"savedmodel/posenet/mobilenet/float/075/model-stride8.json",
"savedmodel/posenet/mobilenet/float/100/group1-shard1of4.bin",
"savedmodel/posenet/mobilenet/float/100/group1-shard2of4.bin",
"savedmodel/posenet/mobilenet/float/100/group1-shard3of4.bin",
"savedmodel/posenet/mobilenet/float/100/model-stride16.json",
"savedmodel/posenet/mobilenet/float/100/model-stride8.json"
]
Then I used Python to iteratively download the files into matching folders.
from pathlib import Path
from urllib.request import urlretrieve

for file_uri in file_uris:
    uri = base_uri + file_uri
    save_path = "/".join(file_uri.split("/")[:-1])
    Path(save_path).mkdir(parents=True, exist_ok=True)
    urlretrieve(uri, file_uri)  # file_uri doubles as the local relative path
    print(save_path, file_uri)
I enjoyed Jupyter Lab (Jupyter Notebook is also good) when experimenting with this code.
With this, you'll get a folder with .bin files (the weights) and the JSON files (the graph model). Unfortunately, these are graph models, so they cannot be converted into SavedModels, and so they are of no use for your purpose. Let me know if someone finds a way of running these tfjs graph model files in regular TensorFlow (preferably 2.0+).
You can also download zip files with the 'entire' model from TFHub; for example, a 2-byte quantized ResNet PoseNet is available here.
When I try to load a very large file using the appropriate loaders provided with the library, the tab my website runs in crashes. I have tried implementing the Worker class, but it doesn't seem to work. Here's what happens:
In the main javascript file I have:
var worker = new Worker('loader.js');
When the user selects one of the available models, I check the extension and pass the file URL/path to the worker (in this instance, a .pcd file):
worker.postMessage({fileType: "pcd", file: file});
Now the loader.js has the appropriate includes that are necessary to make it work:
importScripts('js/libs/three.js/three.js');
importScripts('js/libs/three.js/PCDLoader.js');
and in its onmessage handler, it uses the appropriate loader depending on the file extension:
var loader = new THREE.PCDLoader();
loader.load(file, function (mesh) {
postMessage({points: mesh.geometry.attributes.position.array, colors: mesh.geometry.attributes.color.array});
});
The data is passed back successfully to the main JavaScript file, which adds it to the scene. At least for small files; large ones, like I said, take too long, and the browser decides there was an error. Now, I thought the Worker class was supposed to work asynchronously, so what's the deal here?
Currently, Three.js's loaders rely on strings and arrays of strings to parse data from a file. They don't split files into pieces, which leads to excessive memory usage that browsers immediately interrupt. Loading a 64 MB file can spike to over 1 GB of memory used during load (which then results in an error).
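The structured clone performed by postMessage adds another copy of the data on top of that parse-time spike. One partial mitigation, sketched under the assumption that the geometry attributes are typed arrays (as they are for PCDLoader), is to pass the underlying ArrayBuffers in postMessage's transfer list so ownership moves to the main thread instead of being cloned; the transferGeometry helper below is illustrative only:

```javascript
// Sketch: post large typed-array results back from a worker by transferring
// the underlying buffers (zero-copy) instead of structured-cloning them.
// `post` stands in for the worker's postMessage so the logic is testable.
function transferGeometry(post, positions, colors) {
  post(
    { points: positions, colors: colors },
    [positions.buffer, colors.buffer]  // transfer list: ownership moves
  );
}
```

Inside the worker you would call transferGeometry(self.postMessage.bind(self), positions, colors) in place of the plain postMessage call. Note this only removes the copy back to the main thread; it does not fix the parse-time memory spike described above.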
I am trying to implement D3 graphs and charts in a FileMaker solution. I am stuck on loading a JSON file properly to be used in D3 code displayed in the webviewer.
I am loading all JS libraries into global variables ($$LIB.D3.JS, $$LIB.JQUERY.JS, etc.). Then I am placing the HTML code on the layout (giving an object name, i.e. html). The web viewer grabs the HTML code (from a text box on the layout) and JS code (from global variables) to render the page. This all works fine. (I am using this method detailed here: https://www.youtube.com/watch?v=lMo7fILZTQs)
However, the D3 code I have uses the getJSON() function to fetch a JSON file, parse the data, and create the visualization. I can't figure out a way to get the JSON file as a file from within FileMaker. I could put the content of the JSON file into a FileMaker variable and feed that into the HTML, but then I would not be able to use getJSON(). I would have to redo the D3 code to get the data from a JS variable and parse it from there.
Is there a way for me to load a JSON file so FileMaker could use it to render the visualization properly in the web viewer?
You have two options.
1. Calc the JSON into the HTML, as you mentioned. You're right that you will have to change how you load the JSON with d3, but it's not tough. When you load the JSON from disk using something like d3.json('/data.json', callback), you are just loading the JSON and then giving it to the callback function. If the JSON is in the HTML page in something like var embeddedJSON, you can call the callback directly:
callback(embeddedJSON)
Your code may look more like this.
d3.json('/data.json', function(data){
// bunch of d3 code
})
The callback in this case is an anonymous function. You can change it like this.
var render = function(data){
// bunch of d3 code
};
// then call render with your JSON variable that you embedded into the HTML
render(embeddedJSON)
That will work just fine.
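Putting option 1 together, a minimal self-contained sketch (the embeddedJSON value and the body of render are placeholders for your real data and d3 drawing code):

```javascript
// Minimal sketch of option 1: render() no longer cares whether its data came
// from d3.json() or from a variable FileMaker calced into the page.
var embeddedJSON = { rows: [10, 20, 30] };  // placeholder for your real data

var render = function (data) {
  // ...your d3 code would draw from `data` here; we just count rows
  // as a stand-in so the shape of the pattern is clear.
  return data.rows.length;
};

render(embeddedJSON);
```

The only change from the disk-loading version is that render() is named and invoked directly instead of being passed to d3.json() as an anonymous callback.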
2. Export the HTML page to the temp directory, and export the JSON file with the data right next to it. Then display the HTML using a file:// URL. In this case you can use d3.json('/data.json', callback) and that will work just fine too.
Each of these methods has its pros and cons, but they both work.
In most cases, the best practice for integrating JavaScript or other assets in a web viewer is to push assets to the temp directory (get this using Get(TemporaryPath) in FileMaker); you can then export assets directly to named files. Once this is done, you can reference these files in your code using the file:// protocol.
This has numerous advantages over older methods, such as loading everything into global variables. One of the biggest is that, provided you load your JSON into a discrete file and don't "pollute" any other files with FileMaker data, you can work entirely in the code environment of your choice, then simply move web JavaScript libraries, HTML, CSS, and other assets directly into your FileMaker solution.
I'm working on a project that uses Unity WebGL to display different machines/parts/.. in 3D, these parts or machines are selected by the user and then loaded into a scene (for now there is just a single scene, but we might want to load scenes dynamically also).
Because of the large amount of choices we want to create different asset bundles, containing 1 or more parts each, so we can download these on-demand.
I've done this successfully by passing the URL to LoadFromCacheOrDownload and extracting the gameObject from the www object.
Now we would also like to include scripts with the assets to create animations and user interaction. I followed the explanation given here: docs.unity3d link, and this works perfectly in the Web Player. However, the end requirement is WebGL, and the same code built for WebGL gives the following error:
NotSupportedException: /Users/builduser/buildslave/unity/build/Tools/il2cpp/il2cpp/libil2cpp/icalls/mscorlib/System/AppDomain.cpp(184) : Unsupported internal call for IL2CPP:AppDomain::LoadAssemblyRaw - "This icall is not supported by il2cpp."
System.AppDomain.LoadAssemblyRaw (System.Byte[] rawAssembly, System.Byte[] rawSymbolStore, System.Security.Policy.Evidence securityEvidence, Boolean refonly)
System.AppDomain.Load (System.Byte[] rawAssembly, System.Byte[] rawSymbolStore, System.Security.Policy.Evidence securityEvidence, Boolean refonly)
System.AppDomain.Load (System.Byte[] rawAssembly, System.Byte[] rawSymbolStore, System.Security.Policy.Evidence securityEvidence)
System.AppDomain.Load (System.Byte[] rawAssembly)
System.Reflection.Assembly.Load (System.Byte[] rawAssembly)
API+c__Iterator0.MoveNext ()
It seems to stem from the call System.Reflection.Assembly.Load(txt.bytes); (which I got from the example), so I suppose the Reflection class is not (yet?) fully supported for WebGL. I can't seem to find any documentation on this.
Is there a way around using Reflection for this? At best I'm hoping for some different code that can fix this; at worst, that we will have to create the scripts for WebGL in JavaScript and add them as such instead of as a binary. I'm a bit lost here, so any leads are appreciated.
(Cross-posted from answers.unity3d)
No, there is no way around this restriction with reflection.
The key difference between the web player and WebGL build targets in Unity in this case is that WebGL uses AOT (ahead-of-time) compilation, whereas the web player uses JIT (just-in-time) compilation. With AOT compilation, it is not possible to load an assembly at run-time that was not present at compile time.
Of course, it is possible to load JavaScript code at runtime, so as you suggest, you'll probably need to go this route.
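To make that route concrete, here is a hedged sketch of evaluating JavaScript source obtained at runtime on the page hosting the WebGL build. The source string and the ctx contract are assumptions for illustration; wiring this up to Unity would go through a .jslib plugin or a similar interop mechanism:

```javascript
// Sketch: compile and run JavaScript source fetched at runtime. Unlike a
// .NET assembly under IL2CPP's AOT compilation, JS can always be compiled
// on the fly in the browser.
function runDynamicCode(source, ctx) {
  // new Function() compiles `source` into a function body; the loaded code
  // sees the host-provided `ctx` object as its only entry point.
  return new Function('ctx', source)(ctx);
}

// Usage sketch: the string below stands in for a script you might download
// with fetch() alongside the asset bundle.
var result = runDynamicCode('return ctx.speed * 2;', { speed: 21 });
```

As with any runtime evaluation, only run code from sources you control, since it executes with the full privileges of the hosting page.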