I am trying to call an R script from a JS file, but I always get null as a return value.
My folder structure is as follows:
-R_test
--example.js
--helloWorld.R
example.js looks as follows:
var R = require("r-script");
var out = R('helloWorld.R')
.callSync();
console.log(out);
and helloWorld.R looks like this:
print("Hello World")
Looking at the only other question similar to mine that I could find here, my syntax appears to be correct. Changing the JS file to
var R = require("r-script");
var out = R('helloWorld.R')
.data("Hello World)
.callSync();
console.log(out);
and helloWorld.R to
print(input)
should, according to the readme of r-script, print the data "Hello World", but it also just returns null.
If there is a better way to run an R script from a Node.js file, please let me know.
Thanks to dcruvolo I got it working. Here is how the JS file needs to look:
const { R } = require('@fridgerator/r-script')
let r = new R('./members.R')
r.call()
  .then(response => console.log(response))
  .catch(e => console.log('error : ', e))
And here is the .R-file:
members <- c(129, 1, 5, 3, 2, 1, 21, 1, 8, 7, 0, 1, 1, 1, 110, 0, 0, 5, 1, 0, 0, 0)
membersSorted <- sort(members, decreasing = TRUE)
# Graph cars using blue points overlayed by a line
plot(membersSorted, type="o", col="blue")
# Create a title with a red, bold/italic font
title(main="Members", col.main="red", font.main=4)
png(filename="members.png")
plot(membersSorted, type="o", col="blue")
dev.off()
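As an aside, since the question also asked about other ways to run an R script from Node.js: below is a minimal sketch that shells out to the Rscript command-line tool via Node's built-in child_process module. It assumes Rscript is installed and on the PATH.
const { execFile } = require("child_process");
// Run the script with the Rscript front end and capture whatever it prints.
execFile("Rscript", ["helloWorld.R"], (error, stdout, stderr) => {
  if (error) {
    console.error("Rscript failed:", error);
    return;
  }
  console.log(stdout); // e.g. [1] "Hello World"
});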
I get the following error:
Error when checking : expected dense_Dense_1_input to have 2 dimensions, but got array with shape [224, 224, 3]
When I run the following code:
import * as tf from "@tensorflow/tfjs";
import { bundleResourceIO } from "@tensorflow/tfjs-react-native";

const modelJson = require("../offline_model/pose-model.json");
const modelWeights = require("../offline_model/weights.bin");

// "images" is an iterator over image frames defined elsewhere in the component.
let imageTensor = images.next().value as tf.Tensor3D;
tf.loadLayersModel(bundleResourceIO(modelJson, modelWeights)).then(model => {
  try {
    imageTensor = tf.image.resizeBilinear(imageTensor, [224, 224], false);
    const normalized = imageTensor.cast("float32").div(127.5).sub(1);
    model.predict(normalized);
  } catch (e) {
    console.log("error: ", e);
  }
});
This code may not be entirely correct, but I have already tried every other snippet I could find on Stack Overflow and in YouTube videos, and they all produce some error. I think the problem is that my model.json file (I trained the model with Teachable Machine) contains this value:
"name":"dense_Dense1", "batch_input_shape":[null,14739]
This is why it expects 2 dimensions while I have 3. I am not sure how to reshape the 3-dimensional tensor I have to fit the dimensions of the batch input shape.
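For anyone debugging a similar mismatch, here is a minimal sketch of the reshaping step only, assuming tf is the imported TensorFlow.js module: a dense input layer expects a 2-D tensor of shape [batch, features], so a 3-D image tensor has to be flattened into a single row.
// Sketch only: flatten a [height, width, channels] image tensor into the
// 2-D [batch, features] shape that a dense input layer expects.
const img = tf.zeros([224, 224, 3]);          // stand-in for the resized image tensor
const flat = img.reshape([1, 224 * 224 * 3]); // shape [1, 150528]
console.log(flat.shape);
// Note: 224 * 224 * 3 = 150528, which does not match the 14739 features in
// "batch_input_shape": [null, 14739], so the preprocessing would also have to
// reproduce whatever feature extraction Teachable Machine used during training.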
I created this graph class and am trying to make it a weighted directed graph.
class Graph {
  #nodes;
  constructor() {
    this.#nodes = {};
  }
  addNode(node) {
    this.#nodes[node] = [];
  }
  addEdge(source, vertix) {
    if (!this.#nodes[source] || !this.#nodes[vertix]) {
      return false;
    }
    // this.vertix[destination]=distancex
    if (!this.#nodes[source].includes(vertix)) {
      this.#nodes[source].push(vertix);
    }
  }
  showNodes() {
    console.log(this.#nodes);
  }
}
and now I am trying to add edges:
for (let i = 0; i < citiesnamesarr.length; i++) {
  mapgraph.addNode(citiesnamesarr[i]);
  var x = {};
  var citiesform = document.getElementsByClassName(`check${citiesnamesarr[i]} `);
  var distanceform = document.getElementsByClassName(`distance${citiesnamesarr[i]} `);
  for (let j = 0; j < citiesform.length; j++) {
    var edge = citiesform[j].value;
    var distance = distanceform[j].value;
    x[edge] = distance;
  }
  v[i] = x;
  mapgraph.addEdge(citiesnamesarr[i], v[i]);
}
but when I print the graph it gives me empty arrays:
{city1: Array(0), city2: Array(0), city3: Array(0)}
even though when I print the array v it works:
0:{city2: '87', city1: ''}
1: {city0: '12', city1: '78'}
2: {city0: '', city1: '21'}
The main problem is that your method addEdge is meant to add one edge at a time, but when you call it in your code, you're trying to add multiple edges at once.
I rewrote your code for better understanding:
for (let cityName of citiesnamesarr) {
  mapgraph.addNode(cityName);
  var adjacentVertices = {};
  var cityInputArr = document.getElementsByClassName(`check${cityName} `);
  var distanceInputArr = document.getElementsByClassName(`distance${cityName} `);
  for (let key in cityInputArr) {
    var destination = cityInputArr[key].value;
    var distance = distanceInputArr[key].value;
    adjacentVertices[destination] = distance;
  }
  mapgraph.addEdge(cityName, adjacentVertices);
}
To solve the issue, you could either rename your method to addEdges and implement it like this:
addEdges(node, edges) {
  this.#nodes[node] = edges;
}
And call this method instead.
Or you can keep a method that adds a single edge with:
addEdge(source, destination, distance) {
  this.#nodes[source].push([destination, distance]);
}
And change your code to something like:
for (let cityName of citiesnamesarr) {
  mapgraph.addNode(cityName);
  var cityInputArr = document.getElementsByClassName(`check${cityName} `);
  var distanceInputArr = document.getElementsByClassName(`distance${cityName} `);
  for (let key in cityInputArr) {
    var destination = cityInputArr[key].value;
    var distance = distanceInputArr[key].value;
    mapgraph.addEdge(cityName, destination, distance);
  }
}
Note that a weighted graph has a more complex structure than an unweighted graph.
An unweighted graph can be represented as an adjacency list:
0: [1, 4, 6]
1: [0, 3, 4]
etc.
while a weighted graph needs to store an additional value per edge:
0: [[1, 300], [4, 250], [6, -20]]
1: [[0, 100], [3, 76], [4, -10]]
etc.
Depending on which algorithms you'd like to use, a matrix might be more convenient.
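To make the matrix option concrete, here is a minimal sketch of a weighted adjacency matrix in JavaScript; the city names and distances are hypothetical.
// Hypothetical example: matrix[i][j] holds the distance from city i to city j,
// and Infinity marks the absence of an edge.
const cities = ["city0", "city1", "city2"];
const INF = Infinity;
const matrix = [
  [0,   87, INF],
  [12,  0,  78],
  [INF, 21, 0],
];
// Looking up an edge weight is then a constant-time index operation:
console.log(matrix[1][2]); // distance from city1 to city2 -> 78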
I've Googled every version of this question I could think of, but for the life of me I can't find a single basic example of TensorFlow.js training on tf.browser.fromPixels(image) to produce a yes or a no. All the examples out there I could find start with pre-trained nets.
I've built a database of 25x25 pixel images and have them all stored as canvases in a variable like:
let data = {
  t: [canvas1, canvas2, canvas3, ... canvas3000 ....],
  f: [canvas1, canvas2, ... and so on ...]
}
And I think it should be trivial to do something like:
data.t.forEach(canvas => {
  const xs = tf.browser.fromPixels(canvas);
  const ys = tf.tensor([1]); // output 1, since this canvas is from the `t` (true) dataset
  model.fit(xs, ys, {
    batchSize: 1,
    epochs: 1000
  });
});
data.f.forEach(canvas => {
  const xs = tf.browser.fromPixels(canvas);
  const ys = tf.tensor([0]); // output 0, since this canvas is from the `f` (false) dataset
  model.fit(xs, ys, {
    batchSize: 1,
    epochs: 1000
  });
});
model.predict(tf.browser.fromPixels(data.t[0])).print(); // -> [1]
model.predict(tf.browser.fromPixels(data.t[1])).print(); // -> [1]
model.predict(tf.browser.fromPixels(data.t[2])).print(); // -> [1]
model.predict(tf.browser.fromPixels(data.f[0])).print(); // -> [0]
model.predict(tf.browser.fromPixels(data.f[1])).print(); // -> [0]
model.predict(tf.browser.fromPixels(data.f[2])).print(); // -> [0]
But being new to TF, the specifics, like inputShape and various little details, make this a painful learning curve without a basic example to work from. What would a valid version of this training function look like? Here's the code so far:
// Just imagine DataSet builds a large data set like described in my
// question and calls a callback function with the data variable as
// its only argument, full of pre-categorized images. Since my database
// of images is locally stored, I cant really produce an example here
// that works fully, but this gets the idea across at least.
new DataSet(
  data => {
    const model = tf.sequential();
    model.add(
      // And yes, I realize I would want a convolutional layer,
      // some max pooling, filtering, etc, but I'm trying to start simple
      tf.layers.dense({
        units: 1,
        inputShape: [25, 25, 3],
        dataFormat: "channelsLast",
        activation: "tanh"
      })
    );
    model.compile({optimizer: "sgd", loss: "binaryCrossentropy", lr: 0.1});
    data.t.forEach(canvas => {
      const xs = tf.browser.fromPixels(canvas);
      const ys = tf.tensor([1]); // output 1, since this canvas is
                                 // from the `t` (true) dataset
      model.fit(xs, ys, {
        batchSize: 1,
        epochs: 1000
      });
    });
    data.f.forEach(canvas => {
      const xs = tf.browser.fromPixels(canvas);
      const ys = tf.tensor([0]); // output 0, since this canvas is
                                 // from the `f` (false) dataset
      model.fit(xs, ys, {
        batchSize: 1,
        epochs: 1000
      });
    });
    model.predict(tf.browser.fromPixels(data.t[0])).print(); // -> [1]
    model.predict(tf.browser.fromPixels(data.t[1])).print(); // -> [1]
    model.predict(tf.browser.fromPixels(data.t[2])).print(); // -> [1]
    model.predict(tf.browser.fromPixels(data.f[0])).print(); // -> [0]
    model.predict(tf.browser.fromPixels(data.f[1])).print(); // -> [0]
    model.predict(tf.browser.fromPixels(data.f[2])).print(); // -> [0]
  },
  {canvas: true}
);
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.0.0/dist/tf.min.js"></script>
You have only one layer in your model. You need more layers than that.
There are lots of tutorials you can follow to build a classifier that distinguishes between two or more classes of images. Here is a tutorial on the official TensorFlow website using a CNN.
Additionally, you can see how to build a classifier with a fully connected neural network using this snippet, though the accuracy might not be as good as with CNN models.
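To make that concrete, here is a minimal sketch of a small binary classifier for 25x25 RGB canvases; the layer sizes and hyperparameters are illustrative guesses rather than tuned values, and it assumes the data variable from the question. It stacks all canvases into one batch and calls fit once instead of once per image.
// Sketch only: a tiny CNN with a single sigmoid output (1 = true, 0 = false).
const model = tf.sequential();
model.add(tf.layers.conv2d({inputShape: [25, 25, 3], filters: 8, kernelSize: 3, activation: "relu"}));
model.add(tf.layers.maxPooling2d({poolSize: 2}));
model.add(tf.layers.flatten());
model.add(tf.layers.dense({units: 16, activation: "relu"}));
model.add(tf.layers.dense({units: 1, activation: "sigmoid"}));
model.compile({optimizer: tf.train.adam(0.001), loss: "binaryCrossentropy", metrics: ["accuracy"]});

// Build one [numExamples, 25, 25, 3] input tensor and a matching [numExamples, 1] label tensor.
const images = [...data.t, ...data.f].map(c => tf.browser.fromPixels(c).toFloat().div(255));
const xs = tf.stack(images);
const ys = tf.tensor2d(
  [...data.t.map(() => 1), ...data.f.map(() => 0)],
  [data.t.length + data.f.length, 1]
);

// fit() is asynchronous; await it (inside an async function) before predicting.
await model.fit(xs, ys, {epochs: 20, batchSize: 32, shuffle: true});
model.predict(tf.stack([tf.browser.fromPixels(data.t[0]).toFloat().div(255)])).print();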
I was playing around with some TensorFlow.js code I got from a YouTube tutorial that predicts flower data. Here is the script (the training data is assigned to the variable "iris" and the testing data is assigned to the variable "irisTesting"):
const trainingData = tf.tensor2d(iris.map(item => [
  item.sepal_length, item.petal_length, item.petal_width,
]));
const outputData = tf.tensor2d(iris.map(item => [
  item.species === "setosa" ? 1 : 0,
  item.species === "virginica" ? 1 : 0,
  item.species === "versicolor" ? 1 : 0,
  item.sepal_width
]));
const testingData = tf.tensor2d(irisTesting.map(item => [
  item.sepal_length, item.petal_length, item.petal_width
]));
const model = tf.sequential();
model.add(tf.layers.dense({
  inputShape: [3],
  activation: "sigmoid",
  units: 5,
}));
model.add(tf.layers.dense({
  inputShape: [5],
  activation: "sigmoid",
  units: 4,
}));
model.add(tf.layers.dense({
  activation: "sigmoid",
  units: 4,
}));
model.compile({
  loss: "meanSquaredError",
  optimizer: tf.train.adam(.06),
});
const startTime = Date.now();
model.fit(trainingData, outputData, {epochs: 100})
  .then((history) => {
    //console.log(history);
    console.log("Done training in " + (Date.now() - startTime) / 1000 + " seconds.");
    model.predict(testingData).print();
  });
When the console prints the predicted sepal_width, it seems to have an upper limit of 1. The training data has sepal_width values of well over 1, but here is the data that is logged:
Tensor
[[0.9561102, 0.0028415, 0.0708825, 0.9997129],
[0.0081552, 0.9410981, 0.0867947, 0.999761 ],
[0.0346453, 0.1170913, 0.8383155, 0.9999373]]
The last (fourth) column would be the predicted sepal_width value. The predicted values should be larger than 1; however, something seems to be preventing them from exceeding 1.
This is the original code:
https://gist.github.com/learncodeacademy/a96d80a29538c7625652493c2407b6be
You're using a sigmoid activation function in the final layer to predict the sepal_width. Sigmoid is a continuous function bounded between 0 and 1. See Wikipedia for a more thorough explanation.
You should try a different activation function if you want to predict the sepal_width. For a list of available activation functions you can check TensorFlow's API page (this is for the Python version, but it should be similar for the JavaScript version). You can try 'softplus', 'relu' or even 'linear', but I cannot say whether any of these are suitable for your application. Experiment to see which works best.
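For example, here is a minimal sketch of the final layer from the question changed to a linear activation so the fourth output (sepal_width) is no longer squashed into [0, 1]; the rest of the model stays as in the question, and whether this is the best choice depends on your data.
// Last layer of the model: a linear activation leaves the outputs unbounded.
model.add(tf.layers.dense({
  activation: "linear",
  units: 4,
}));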
The original code from here addresses a classification problem. It is not meaningful to add item.sepal_width to your outputData, because it is not another class.
The activation function of your last layer is sigmoid. The sigmoid function produces an S-shaped curve (plot omitted; see source) whose output is restricted to the range 0 to 1. So if you want other output values, you need to adjust your last activation function accordingly.
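A minimal sketch of one way to separate the two tasks, assuming the iris, trainingData, and testingData variables from the question: keep the three species columns for classification and train a small dedicated regression model for sepal_width with a linear output (the layer sizes here are illustrative).
// Sketch only: sepal_width gets its own regressor whose final activation is
// linear, so its predictions are not squashed into [0, 1].
const widthModel = tf.sequential();
widthModel.add(tf.layers.dense({inputShape: [3], units: 8, activation: "relu"}));
widthModel.add(tf.layers.dense({units: 1, activation: "linear"}));
widthModel.compile({loss: "meanSquaredError", optimizer: tf.train.adam(0.06)});

const widthTargets = tf.tensor2d(iris.map(item => [item.sepal_width]));
widthModel.fit(trainingData, widthTargets, {epochs: 100})
  .then(() => widthModel.predict(testingData).print());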
I'm using Google's Area Chart to display a graph.
For some reason I'm not able to use the values passed into the function.
I have the following code:
function SetAreaChartData(valueString) {
  // This works
  AreaChartData.setValue(0, 0, 'SomeName');
  AreaChartData.setValue(0, 1, 500);
  AreaChartData.setValue(0, 2, 500);
  AreaChartData.setValue(0, 3, 500);
  // This does not work
  // First I must "cast" the input to a string in order to use the .split function
  var str = new String(valueString);
  // Then I split the string in order to get an array of strings
  val = str.split(",");
  AreaChartData.setValue(0, 0, 'SomeName');
  AreaChartData.setValue(0, 1, val[0] * 1); // Multiply by one to cast it to a number
  AreaChartData.setValue(0, 2, val[1] * 1);
  AreaChartData.setValue(0, 3, val[2] * 1);
}
I've also tried using parseInt(val[0]), but that doesn't help either.
Why won't .setValue recognize val[0] as an integer?
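For reference, a minimal sketch of parsing the comma-separated string into numbers before handing them to setValue; the trim() guards against stray whitespace around each part, and the fallback value of 0 is an arbitrary choice.
function SetAreaChartData(valueString) {
  // Split e.g. "500,600,700" and convert each part to a number explicitly.
  var values = String(valueString).split(",").map(function (part) {
    var n = Number(part.trim());
    return isNaN(n) ? 0 : n; // fall back to 0 if a part is empty or not numeric
  });
  AreaChartData.setValue(0, 0, 'SomeName');
  AreaChartData.setValue(0, 1, values[0]);
  AreaChartData.setValue(0, 2, values[1]);
  AreaChartData.setValue(0, 3, values[2]);
}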