2D vector: check the magnitude - JavaScript

I am creating a Vector object which has the following function:
Vector.prototype.limitTo = function (pScalar) {
    this.normalise();
    this.multiply(pScalar);
    if (this.magnitude() > pScalar) {
        this.magnitude = 30;
    }
    return new Vector(this.getX(), this.getY(), this.getZ());
};
With this I am trying to comply with the following spec:
"your Vector object should have a ‘limitTo’ function that takes a single scalar number as its parameter. The function should return a newly constructed Vector object that has the same direction as the ‘this’ Vector, but if its magnitude exceeds the given parameter value then it is reduced in size to equal the maximum value. The direction of the Vector should be unaffected, only the magnitude may be altered. If the magnitude of the Vector does not exceed the maximum value, then it should not be altered."
And a jasmine test of:
describe("Limit To", function () {
    var limitedVector, magnitude;
    it("Magnitude not exceeding limit", function () {
        limitedVector = vector.limitTo(60);
        magnitude = limitedVector.magnitude();
        expect(magnitude).toEqual(50);
    });
    it("Magnitude exceeding limit", function () {
        limitedVector = vector.limitTo(30);
        magnitude = limitedVector.magnitude();
        expect(magnitude).toEqual(30);
    });
});
The "magnitude not exceeding limit" test passes, but I am having trouble getting the "magnitude exceeding limit" test to pass.

You didn't include your other methods, but assigning a number to a method property (this.magnitude = 30) is certainly wrong. Most probably you want
Vector.prototype.limitTo = function (pScalar) {
    var magnitude = this.magnitude(); // capture before normalising
    return this.normalise().multiply(Math.min(magnitude, pScalar));
};
If this.normalise() is in-place, copy your vector first.
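Putting the approach together, here is a minimal self-contained sketch. The method bodies are hypothetical stand-ins for your own normalise/multiply/magnitude, written to return new Vectors rather than mutate, which sidesteps the in-place ordering issue entirely:

```javascript
// Minimal Vector sketch; methods return new Vectors (non-mutating).
function Vector(x, y, z) {
  this.x = x; this.y = y; this.z = z || 0;
}
Vector.prototype.magnitude = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y + this.z * this.z);
};
Vector.prototype.multiply = function (s) {
  return new Vector(this.x * s, this.y * s, this.z * s);
};
Vector.prototype.normalise = function () {
  return this.multiply(1 / this.magnitude());
};
Vector.prototype.limitTo = function (pScalar) {
  // Because these methods are non-mutating, this.magnitude() here is
  // still the original magnitude, so the clamp logic is safe.
  return this.normalise().multiply(Math.min(this.magnitude(), pScalar));
};

var v = new Vector(30, 40, 0);          // magnitude 50
console.log(v.limitTo(60).magnitude()); // ≈ 50 (under the limit: unchanged)
console.log(v.limitTo(30).magnitude()); // ≈ 30 (over the limit: clamped)
```

If your real methods mutate in place, capture the magnitude in a local variable before calling normalise(), as in the snippet above.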

TensorFlow.js results are weird. How do I solve it?

It has two inputs and one output.
Input: [Temperature, Humidity]
Output: [wattage]
I trained it as follows, but even after 5 million iterations it does not work properly. Did I choose the wrong options?
var input_data = [
    [-2.4, 2.7, 9, 14.2, 17.1, 22.8, 281, 25.9, 22.6, 15.6, 8.2, 0.6],
    [58, 56, 63, 54, 68, 73, 71, 74, 71, 70, 68, 62]
];
var power_data = [239, 224, 189, 189, 179, 192, 243, 317, 224, 190, 189, 202];
var reason_data = tf.tensor2d(input_data);
var result_data = tf.tensor(power_data);

var X = tf.input({ shape: [2] });
var Y = tf.layers.dense({ units: 1 }).apply(X);
var model = tf.model({ inputs: X, outputs: Y });
var compileParam = { optimizer: tf.train.adam(), loss: tf.losses.meanSquaredError };
model.compile(compileParam);

var fitParam = {
    epochs: 500000,
    callbacks: {
        onEpochEnd: function (epoch, logs) {
            console.log('epoch', epoch, logs, "RMSE --> ", Math.sqrt(logs.loss));
        }
    }
};
model.fit(reason_data, result_data, fitParam).then(function (result) {
    var final_result = model.predict(reason_data);
    final_result.print();
    model.save('file:///path/');
});
The following is the result after 5 million iterations. It should match power_data, but it doesn't.
What should I fix?
While there is rarely one simple reason to point to when a model doesn't perform the way you would expect, here are some options to consider:
You don't have enough data points. Twelve is not nearly sufficient to get an accurate result.
You need to normalize the data of the input tensors. Given that your two features [temperature and humidity] have different ranges, they need to be normalized to give them equal opportunity to influence the output. The following is a normalization function you could start with:
function normalize(tensor, min, max) {
    const result = tf.tidy(function () {
        // Find the minimum values contained in the Tensor (per feature).
        const MIN_VALUES = min || tf.min(tensor, 0);
        // Find the maximum values contained in the Tensor (per feature).
        const MAX_VALUES = max || tf.max(tensor, 0);
        // Subtract the MIN_VALUES from every value in the Tensor
        // and store the results in a new Tensor.
        const TENSOR_SUBTRACT_MIN_VALUE = tf.sub(tensor, MIN_VALUES);
        // Calculate the range of possible values.
        const RANGE_SIZE = tf.sub(MAX_VALUES, MIN_VALUES);
        // Divide the adjusted values by the range size, as a new Tensor.
        const NORMALIZED_VALUES = tf.div(TENSOR_SUBTRACT_MIN_VALUE, RANGE_SIZE);
        // Return the important tensors.
        return { NORMALIZED_VALUES, MIN_VALUES, MAX_VALUES };
    });
    return result;
}
You should try a different optimizer. Adam is a reasonable default, but for a linear regression problem such as this, you should also consider stochastic gradient descent (SGD).
Check out this sample code for an example that uses normalization and sgd. I ran your data points through the code (after making the changes to the tensors so they fit your data), and I was able to reduce the loss to less than 40. There is room for improvement, but that's where adding more data points comes in.
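To make the normalization step concrete without pulling in TensorFlow, here is the same per-feature min-max math in plain JavaScript (tf.min(tensor, 0)/tf.max(tensor, 0) in the function above operate per column in the same way). Note that the question's input_data is feature-major ([2][12]), while a model with input shape [2] expects one [temperature, humidity] pair per sample, so the data is transposed to [12][2] first:

```javascript
// The question's feature-major data.
const tempFeature = [-2.4, 2.7, 9, 14.2, 17.1, 22.8, 281, 25.9, 22.6, 15.6, 8.2, 0.6];
const humFeature = [58, 56, 63, 54, 68, 73, 71, 74, 71, 70, 68, 62];

// Transpose into rows of [temperature, humidity], one row per sample.
const rows = tempFeature.map((t, i) => [t, humFeature[i]]);

// Per-column min-max normalization: (x - min) / (max - min), so every
// feature lands in [0, 1] and influences the output on an equal footing.
function normalizeColumns(rows) {
  const cols = rows[0].length;
  const mins = [], maxs = [];
  for (let c = 0; c < cols; c++) {
    const col = rows.map(r => r[c]);
    mins.push(Math.min(...col));
    maxs.push(Math.max(...col));
  }
  const normalized = rows.map(r =>
    r.map((x, c) => (x - mins[c]) / (maxs[c] - mins[c]))
  );
  return { normalized, mins, maxs };
}

const { normalized, mins, maxs } = normalizeColumns(rows);
console.log(mins); // → [ -2.4, 54 ]
console.log(maxs); // → [ 281, 74 ]
```

Incidentally, the 281 in the temperature series looks like it may be a typo for 28.1; as written it is a huge outlier that compresses all the other normalized temperatures toward 0, which alone could explain poor training.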

Why does my benchmark for memory usage in Node.js seem wrong?

I wanted to test memory usage for objects in node.js. My approach was simple: I first use process.memoryUsage().heapUsed / 1024 / 1024 to get the baseline memory. And I have an array of sizes, i.e. the number of entries in objects const WIDTHS = [100, 500, 1000, 5000, 10000] and I plan to loop through it and create objects of that size and compare the current memory usage with the baseline memory.
function memoryUsed() {
    const mbUsed = process.memoryUsage().heapUsed / 1024 / 1024
    return mbUsed
}

function createObject(size) {
    const obj = {};
    for (let i = 0; i < size; i++) {
        obj[Math.random()] = i;
    }
    return obj;
}
const SIZES = [100, 500, 1000, 5000, 10000, 50000, 100000, 500000, 1000000]
const memoryUsage = {}

function fn() {
    SIZES.forEach(size => {
        const before = memoryUsed()
        const obj = createObject(size)
        const after = memoryUsed()
        const diff = after - before
        memoryUsage[size] = diff
    })
}

fn()
but the results didn't look correct:
{
'100': 0.58087158203125,
'500': 0.0586700439453125,
'1000': 0.15680694580078125,
'5000': 0.7640304565429688,
'10000': 0.30365753173828125,
'50000': 7.4157257080078125,
'100000': 0.8076553344726562,
}
It doesn't make sense. Also, the memoryUsage object that records the results itself takes up more memory as it grows, so I think it adds some overhead.
What are some of the more robust and proper ways to benchmark memory usage in node.js?
The key thing missing is controlling for the garbage collector, which kicks in at "random" times. To attribute used memory to specific actions, you need to manually trigger full GC runs before taking a measurement. Concretely, modify memoryUsed so it reads:
function memoryUsed() {
    gc();
    const mbUsed = process.memoryUsage().heapUsed / 1024 / 1024;
    return mbUsed;
}
and run the test in Node with --expose-gc. Then you'll get reasonable numbers:
{
'100': 0.20072174072265625,
'500': 0.0426025390625,
'1000': 0.08499908447265625,
'5000': 0.37823486328125,
'10000': 0.7519683837890625,
'50000': 4.9071807861328125,
'100000': 9.80963134765625,
'500000': 43.04571533203125,
'1000000': 86.08901977539062
}
The first result (for 100) is obviously too high; not sure why (and if I repeat the test for 100 at the end, its result is in line with the others). The other numbers check out: for a 2x or 5x increase in the number of properties, memory consumption increases by approximately the same 2x or 5x.
My high-level comment is that I'm not sure what you're trying to measure here. The majority of memory used in this scenario is spent on strings of the form "0.38459934255705686", whereas your description seems to indicate that you're more interested in objects.
The marginal cost of one object property depends on the state of the object: when several objects share a shape/"hidden class", then each property in an object takes just one pointer (4 bytes in browsers [32-bit platforms, or pointer compression], 8 bytes in Node [64-bit without pointer compression]). When an object is in dictionary mode, each additional property will take about 6 pointers on average (depending on when the dictionary's backing store needs to be grown), so 24 or 48 bytes depending on pointer size. (The latter is the scenario that this test is creating.)
In both cases, this is just the additional size of the object holding the property; the property's name and value might of course need additional memory.
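As a rough sanity check of those numbers (an arithmetic sketch, not part of the original benchmark), the 1,000,000-property measurement works out to about 90 bytes per entry, which is consistent with the dictionary-mode figure above plus the backing string for a key like "0.38459934255705686":

```javascript
// Back-of-the-envelope: measured MB → bytes per added property.
const mbFor1M = 86.08901977539062; // from the 1,000,000 run above
const bytesPerEntry = (mbFor1M * 1024 * 1024) / 1_000_000;
console.log(bytesPerEntry.toFixed(1)); // ≈ 90.3 bytes per entry
// Roughly 48 bytes of that is dictionary-mode property storage (about 6
// pointers × 8 bytes on 64-bit Node); the remainder is the key string
// produced by Math.random().
```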

Where is the variable 'line' declared?

I've been given the script below on Google Earth Engine to extract data along a transect (https://code.earthengine.google.com/e31179d9e7143235092d6b4fa29a12fd). In the GEE code editor the top of the script has an import flag (picture attached).
Multiple references to 'line' are made, which I understand to be a variable that has been declared, but I can't find it. I've looked in the GEE documentation, and in a JavaScript reference to determine if it's a method or some such like but I can't work it out.
The imported data is declared as 'transect', so it's not that.
/***
 * Reduces image values along the given line string geometry using given reducer.
 *
 * Samples image values using image native scale, or opt_scale
 */
function reduceImageProfile(image, line, reducer, scale, crs) {
    var length = line.length();
    var distances = ee.List.sequence(0, length, scale)
    var lines = line.cutLines(distances, ee.Number(scale).divide(5)).geometries();
    lines = lines.zip(distances).map(function (l) {
        l = ee.List(l)
        var geom = ee.Geometry(l.get(0))
        var distance = ee.Number(l.get(1))
        geom = ee.Geometry.LineString(geom.coordinates())
        return ee.Feature(geom, { distance: distance })
    })
    lines = ee.FeatureCollection(lines)
    // reduce image for every segment
    var values = image.reduceRegions({
        collection: ee.FeatureCollection(lines),
        reducer: reducer,
        scale: scale,
        crs: crs
    })
    return values
}
// Define a line across the Olympic Peninsula, USA.
// Import a digital surface model and add latitude and longitude bands.
var elevImg = ee.Image('JAXA/ALOS/AW3D30/V2_2').select('AVE_DSM');
var profile = reduceImageProfile(elevImg, transect, ee.Reducer.mean(), 100)
print(ui.Chart.feature.byFeature(profile, 'distance', ['mean']))
line isn't a variable, it's a parameter. Parameters are very similar to local variables within the function, but instead of being declared with var, let, or const, they're declared in the function's parameter list:
function reduceImageProfile(image, line, reducer, scale, crs) {
// here −−−−−−−−−−−−−−−−−−−−−−−−−−−^
The parameter's value is filled in each time the function is called using the corresponding argument in the function call. Let's take a simpler example:
function example(a, b) {
// ^−−^−−−−−−−−−− parameter declarations
return a + b;
}
// vv−−−−−−−−− argument for `a`
console.log(example(40, 2));
// ^−−−−−− argument for `b`
// vv−−−−−−−−− argument for `a`
console.log(example(60, 7));
// ^−−−−−− argument for `b`
In the first call to example, the a parameter receives the value 40 and the b parameter receives the value 2 from the call arguments. In the second call, the a parameter receives the value 60 and the b parameter receives the value 7 from the call arguments.

Three.js Object3D.applyMatrix() setting incorrect scale

Three.js r.71
I'm just getting into Three.js (awesome btw) but am having an issue. I am trying to stream geometry and position/scale/rotation changes between clients using Socket.io and NodeJS. On the server I store the JSON representation of the scene and stream object changes between clients.
When the object's matrix changes (position, scale, rotation), I stream the new matrix to the server and forward it to the other clients. On the other clients I call applyMatrix() with the streamed object (the source object's matrix).
The problem I ran into is that when calling applyMatrix(sourceMatrix), it seems to multiply the existing scale by the scale found in sourceMatrix. For example, when the current object has a scale of x:2, y:1, z:1 and I apply a matrix with the same scale, after calling applyMatrix the destination object's scale is x:4, y:1, z:1.
This seems like a bug to me, but wanted to double check.
// Client JS:
client.changeMatrix = function (object) {
    // Set the object's scale to x:2 y:1 z:1 then call this twice.
    var data = {uuid: object.uuid, matrix: object.matrix};
    socket.emit('object:changeMatrix', data);
};

socket.on('object:matrixChanged', function (data) {
    var cIdx = getChildIndex(data.uuid);
    if (cIdx >= 0) {
        scene.children[cIdx].applyMatrix(data.matrix);
        // At this point, the object's scale is incorrect
        ng3.viewport.updateSelectionHelper();
        ng3.viewport.refresh();
    }
});

// Server JS:
socket.on('object:changeMatrix', function (data) {
    socket.broadcast.emit('object:matrixChanged', data);
});
// Server JS:
socket.on('object:changeMatrix', function (data) {
socket.broadcast.emit('object:matrixChanged', data);
});
@WestLangley is correct; I really did not understand what applyMatrix does (and still don't quite know what it is used for).
I solved my problem by manually setting each element in the source matrix and calling decompose:
// Client JS:
socket.on('object:matrixChanged', function (data) {
    var cIdx = getChildIndex(data.uuid);
    var child = null;
    var key;
    if (cIdx >= 0) {
        child = scene.children[cIdx];
        for (key in child.matrix.elements) {
            if (child.matrix.elements.hasOwnProperty(key)) {
                child.matrix.elements[key] = data.matrix.elements[key];
            }
        }
        child.matrix.decompose(child.position, child.quaternion, child.scale);
    }
});
Unfortunately, once the Matrix object has been through the server, calling:
child.matrix.copy(data.matrix);
does not work. That's why I ended up setting each element manually.
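A likely explanation for why copy() fails after the socket round trip: in three.js, Matrix4.elements is a Float32Array, and JSON-serializing a typed array (which is effectively what Socket.IO does to the payload) produces a plain object keyed by index, not an array, so the deserialized elements no longer behave like the array copy() expects. A quick Node sketch of the effect:

```javascript
// Simulate what a JSON round trip does to a typed array of matrix elements.
const elements = new Float32Array([1, 0, 0, 1]);
const json = JSON.stringify({ elements });
console.log(json); // {"elements":{"0":1,"1":0,"2":0,"3":1}} — an object, not an array

const streamed = JSON.parse(json);
console.log(Array.isArray(streamed.elements)); // false
console.log(streamed.elements.length);         // undefined

// Element-by-element assignment by key (what the answer above does) still
// works, because the numeric keys survive the round trip:
const restored = new Float32Array(4);
for (const key in streamed.elements) {
  restored[key] = streamed.elements[key];
}
// restored is a typed array holding the original values again.
```

This is why copying each element manually and then calling decompose() succeeds where copy() does not.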

Crossfilter - Double Dimensions (second value linked to daily max)

Quite an oddly specific question here, but something I've been having a lot of trouble with over the past day or so. Broadly, I'm trying to calculate a maximum using Crossfilter and then use that value to look up an associated second value.
For example, I have a series of Timestamps with an associated X Value and a Y Value. I want to aggregate the Timestamps by day and find the maximum X Value and then report the Y Value associated with this Timestamp. In essence this is a double dimension as I understand it.
I'm able to do the first stage simply to find the maximum values, but am having a lot of difficulty getting to the second value.
Working code for the first stage (using Crossfilter and Reductio), assuming that each row has the following four values:
[(Timestamp, Date, XValue, YValue),
(2015-05-15 16:00:00, 2015-05-15, 30, 15),
(2015-05-15 16:45:00, 2015-05-15, 25, 33)
... (many thousand of rows)]
First Dimension
ndx = crossfilter(data);
dailyDimension = ndx.dimension(function(d) { return d.date; });
Get the max of the X Value using reductio
maxXValue = reductio().max(function(d) { return d.XValue;});
XValues = maxXValue(dailyDimension.group())
XValues now contains all of the maximum X Values on a Daily Basis.
I would now like to use these X Values to identify the corresponding Y Values on a date basis.
Using the same data above the appropriate value returned would be:
[(date, YValue),
('2015-05-15', 15)]
// Note, that it is 15 as it is the max X Value we find, not the max Y Value.
In Python/Pandas I would set the index of a DataFrame to X and then do an index match to find the Y Values
(Note, it can safely be assumed that the X Values are unique in this case but in reality we should really identify the Timestamp linked to this period and then match on that as they are strictly guaranteed to be unique, not loosely).
I believe this can be accomplished by modifying the Reductio maximum code, which I don't fully understand. (The source code is from here.)
var reductio_max = {
    add: function (prior, path) {
        return function (p, v) {
            if (prior) prior(p, v);
            path(p).max = path(p).valueList[path(p).valueList.length - 1];
            return p;
        };
    },
    remove: function (prior, path) {
        return function (p, v) {
            if (prior) prior(p, v);
            // Check for undefined.
            if (path(p).valueList.length === 0) {
                path(p).max = undefined;
                return p;
            }
            path(p).max = path(p).valueList[path(p).valueList.length - 1];
            return p;
        };
    },
    initial: function (prior, path) {
        return function (p) {
            p = prior(p);
            path(p).max = undefined;
            return p;
        };
    }
};
Perhaps this can be modified so that there is a second valueList of Y Values which maps 1:1 with the X Values associated in the max function. In that case it would be the same index look up of both in the functions and could be assigned simply.
My apologies that I don't have any more working code.
An alternative approach would be to use some form of Filtering Function to remove entries which don't satisfy the X Criteria and then group by day (there should only be one value in this setting so a simple reduceSum for example will still return the correct value).
// Pseudo non-working code
dailyDimension.filter(function (p) { return p.XValue === XValues; })
dailyDimension.group().reduceSum(function (d) { return d.YValue; })
Eventual results will be plotted in dc.js
Not sure if this will work, but maybe give it a try:
maxXValue = reductio()
    .valueList(function (d) {
        return ("0000000000" + d.XValue).slice(-10) + ',' + d.YValue;
    })
    .aliasProp({
        max: function (g) {
            return +(g.valueList[g.valueList.length - 1].split(',')[0]);
        },
        yValue: function (g) {
            return +(g.valueList[g.valueList.length - 1].split(',')[1]);
        }
    });

XValues = maxXValue(dailyDimension.group())
This is a somewhat less efficient and less safe re-implementation of the maximum calculation using the aliasProp option, which lets you do pretty much whatever you want to a group on every record addition and removal.
My untested assumption here is that the undocumented valueList function used internally in max/min/median keeps the list properly ordered. It might be easier/better to write a Crossfilter maximum aggregation and then modify it to also add the y-value to the group.
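The composite-key trick above can be sanity-checked in plain JavaScript, independent of Reductio: zero-padding the X value to a fixed width makes lexicographic string order agree with numeric order, so the last element of the sorted valueList carries both the max X and its paired Y (this assumes non-negative X values that fit in 10 digits):

```javascript
// Rows shaped like the question's data: each has an XValue and a paired YValue.
const rows = [
  { XValue: 30, YValue: 15 },
  { XValue: 25, YValue: 33 },
  { XValue: 7,  YValue: 99 },
];

// Build "paddedX,Y" composite keys and sort them as strings.
const valueList = rows
  .map(d => ("0000000000" + d.XValue).slice(-10) + "," + d.YValue)
  .sort();

// The last key now belongs to the row with the maximum X.
const last = valueList[valueList.length - 1];
const max = +last.split(",")[0];    // max X
const yValue = +last.split(",")[1]; // Y paired with that max X
console.log(max, yValue); // 30 15
```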
If you want to work through this with Reductio, I'm happy to do that with you here, but it will be easier if we have a working example on something like JSFiddle.
