I'm trying to push some values into an array for something called "Brain.js". When I store a variable in the array and later change it, all the values that were stored in the array change too. Can someone help me make it so they don't change?
Here's an example:
var hold = [];
var a = [1, 1, 1];
var b = [2];
hold.push(
  { input: a, output: b }
);
console.log(hold); // returns [ { input: [ 1, 1, 1 ], output: [ 2 ] } ]
a[2] = 2;
b = [3];
hold.push(
  { input: a, output: b }
);
console.log(hold);
// Expected output: [ { input: [ 1, 1, 1 ], output: [ 2 ] }, { input: [ 1, 1, 2 ], output: [ 3 ] } ]
// What it really returns: [ { input: [ 1, 1, 2 ], output: [ 2 ] }, { input: [ 1, 1, 2 ], output: [ 3 ] } ]
The problem is that you are not pushing the actual numbers into the array, but references. In other words, you passed the reference to the same object twice.
What you could do is create a copy of the object when you are pushing it into hold. You can use e.g. the spread operator.
hold.push(
  {
    input: [...a],
    output: [...b]
  }
);
You can find out more here
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax
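A minimal sketch of the full fix, copying both arrays at push time so later mutations no longer leak into hold:

```javascript
// Copy a and b at push time; mutating the originals afterwards
// no longer affects what is already stored in hold.
var hold = [];
var a = [1, 1, 1];
var b = [2];

hold.push({ input: [...a], output: [...b] });

a[2] = 2; // mutate the original array
b = [3];  // rebind b to a new array

hold.push({ input: [...a], output: [...b] });

console.log(hold);
// [ { input: [ 1, 1, 1 ], output: [ 2 ] },
//   { input: [ 1, 1, 2 ], output: [ 3 ] } ]
```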
The problem is that you are updating an existing array a which is already referenced inside the first object you pushed. You should create a copy of the existing array if you do not wish to modify it.
var hold = [];
var a = [1, 1, 1];
var b = [2];
hold.push({
  input: a,
  output: b
});
console.log(hold);
a = [...a]; // create a new copy of a
a[2] = 2;
b = [3];
hold.push({
  input: a,
  output: b
});
console.log(hold);
I'm trying to build my own little neural network library.
Every time I run my code, it builds a new network with random weights.
Then I train it about 20,000 times:
const trainingData = [
  { data: [0, 1], target: [1] },
  { data: [1, 0], target: [1] },
  { data: [1, 1], target: [0] },
  { data: [0, 0], target: [0] }
]
const nn = new FNN(2,6,12,6,1)
for (let i = 0; i < 20000; i++) {
  const index = Math.floor(Math.random() * 4)
  nn.train(trainingData[index].data, trainingData[index].target)
}
console.log(nn.query([0,1]))
console.log(nn.query([1,0]))
console.log(nn.query([0,0]))
console.log(nn.query([1,1]))
As you can see, I'm trying to solve the XOR problem with a 4/5-layer network (I know it's a bit overkill).
But when I run my code a couple of times, the outputs are sometimes wrong:
$ deno run sketch.ts
[ 0.9808040222512294 ]
[ 0.9808219014520584 ]
[ 0.009098634709446591 ]
[ 0.009259505045600065 ]
$ deno run sketch.ts
[ 0.984698425823075 ]
[ 0.9844728486048201 ]
[ 0.010107497773167363 ]
[ 0.010367109588917735 ]
$ deno run sketch.ts
[ 0.9856540170897103 ]
[ 0.5163204342937323 ] <-- this should be 1
[ 0.02873017555538064 ]
[ 0.516320908619891 ] <-- this should be 0
What could be the problem here?
Is it because of the random weights?
Interestingly, the wrong outputs are always really close to each other.
I'm using the sigmoid function and a learning rate of 0.2.
The random weights are between -1 and 1 (Math.random() * 2 - 1).
In my current project, I am dealing with large streams of numerical data and transformations that have to take place on them in a data-flow-programming fashion.
I stumbled upon the idea of transducers, which promised to solve the difficulty of handling multiple transformations on large arrays. It seems, though, that transducers don't exactly suit what I'm trying to solve here.
I am looking for a pattern / concept for transducers which only collect the needed amount of lookback and then produce a result, similar to the browser version of TensorFlow, Reaktor, or Max/MSP (inputs/outputs, flow graphs, node-based, visual programming).
Most of these modules should be connected to a source, but should also be able to act as a source themselves, so they can be chained to other modules:
source ( a stream ) =[new-value]|=> module1 => module2 => ...
|=> module3 => module4 // branch off here to a new chain
From my understanding, transducers as explained in most blogs take the whole array and feed each individual value through the chosen transformers.
Yet my modules/transformers don't require that much data to work; take the example of a simple moving average with a lookback of 4 steps.
I imagine that module collecting enough data before it starts producing output.
I also don't need to hold the whole array in memory; I should only deal with the exact amounts needed. Results/outputs would optionally be stored in a database.
stream =[sends-1-value]=> module[collects-values-until-processing-starts] =[sends-one-value]=>...
It should also be possible to connect multiple sources to one module (which transducers didn't seem to provide).
Would the transducer pattern here still apply or is something else out there?
To be honest, every programmer would have some idea to make this work, yet I am asking for an established way of doing it, just like transducers came to be.
The transducer pattern certainly applies here. You can create a floating point processor with transducers paired with the right data structure. I'll give you a baseline example, with one assumption:
the stream you are working with implements Symbol.asyncIterator
Consider a simple queue
function SimpleQueue({ size }) {
  this.size = size
  this.buffer = []
}

SimpleQueue.prototype.push = function(item) {
  this.buffer.push(item)
  if (this.buffer.length > this.size) {
    this.buffer.shift()
  }
  return this
}

SimpleQueue.prototype[Symbol.iterator] = function*() {
  for (const item of this.buffer) {
    yield item
  }
}
Our simple queue has one method, push, which pushes an item into its internal buffer (an array), dropping the oldest item once the buffer exceeds its size. The simple queue is also iterable, so you could do for (const x of simpleQueue) {/* stuff */}
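A quick standalone check of that behavior (repeating the SimpleQueue definition so the snippet runs on its own):

```javascript
function SimpleQueue({ size }) {
  this.size = size
  this.buffer = []
}
SimpleQueue.prototype.push = function(item) {
  this.buffer.push(item)
  if (this.buffer.length > this.size) {
    this.buffer.shift() // drop the oldest item once the buffer is full
  }
  return this
}
SimpleQueue.prototype[Symbol.iterator] = function*() {
  yield* this.buffer
}

const q = new SimpleQueue({ size: 3 })
q.push(1).push(2).push(3).push(4) // the 1 falls off the front
console.log([...q]) // [ 2, 3, 4 ]
```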
We'll now use our SimpleQueue in our floating point processor.
const average = iterable => {
  let sum = 0, count = 0
  for (const item of iterable) {
    sum += item
    count += 1
  }
  return sum / count
}

const floatingPointAverage = ({ historySize }) => {
  const queue = new SimpleQueue({ size: historySize })
  return item => {
    queue.push(item)
    const avg = average(queue)
    console.log(queue, avg) // this shows the average as the process runs
    return avg
  }
}
floatingPointAverage takes an item, pushes it into our SimpleQueue, and returns the current average of items in the queue.
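As a standalone sanity check (repeating the definitions above, minus the logging), feeding the first few integers through floatingPointAverage reproduces the moving averages by hand, without any transducer library:

```javascript
// Moving average over a bounded-size queue, no transducer library needed.
function SimpleQueue({ size }) {
  this.size = size
  this.buffer = []
}
SimpleQueue.prototype.push = function(item) {
  this.buffer.push(item)
  if (this.buffer.length > this.size) this.buffer.shift()
  return this
}
SimpleQueue.prototype[Symbol.iterator] = function*() {
  yield* this.buffer
}

const average = iterable => {
  let sum = 0, count = 0
  for (const item of iterable) {
    sum += item
    count += 1
  }
  return sum / count
}

const floatingPointAverage = ({ historySize }) => {
  const queue = new SimpleQueue({ size: historySize })
  return item => {
    queue.push(item)
    return average(queue)
  }
}

const avg = floatingPointAverage({ historySize: 4 })
console.log([0, 1, 2, 3, 4, 5].map(avg))
// [ 0, 0.5, 1, 1.5, 2.5, 3.5 ]
```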
Finally, we can implement and consume our transducer
const { pipe, map, transform } = require('rubico')

const numbersStream = {
  [Symbol.asyncIterator]: async function*() {
    for (let i = 0; i < 1000; i++) yield i
  },
}

transform(
  pipe([
    map(floatingPointAverage({ historySize: 4 })),
    /* transducers that do stuff with floating point average here */
  ]),
  null,
)(numbersStream)
The transducer in this case is map(floatingPointAverage({ historySize: 4 })). This transducer is courtesy of rubico, a library I wrote to solve my own async problems. I write about transducers in the context of rubico here
Your output should look like this
SimpleQueue { size: 4, buffer: [ 0 ] } 0
SimpleQueue { size: 4, buffer: [ 0, 1 ] } 0.5
SimpleQueue { size: 4, buffer: [ 0, 1, 2 ] } 1
SimpleQueue { size: 4, buffer: [ 0, 1, 2, 3 ] } 1.5
SimpleQueue { size: 4, buffer: [ 1, 2, 3, 4 ] } 2.5
SimpleQueue { size: 4, buffer: [ 2, 3, 4, 5 ] } 3.5
SimpleQueue { size: 4, buffer: [ 3, 4, 5, 6 ] } 4.5
SimpleQueue { size: 4, buffer: [ 4, 5, 6, 7 ] } 5.5
SimpleQueue { size: 4, buffer: [ 5, 6, 7, 8 ] } 6.5
SimpleQueue { size: 4, buffer: [ 6, 7, 8, 9 ] } 7.5
I am sorting a list of objects according to a variable stored in them. Let's name that variable "sortCriteria".
A sample set of "sortCriteria" values for different objects :
400, 329, 529, "String1", 678, "String2", 588, "String3", "String1", 201, "String2"
So, basically there are 4 kinds of values I can get in "sortCriteria":
1. A numeric value
2. String1
3. String2
4. String3
Now I have to sort this data in such a way that the numeric data is given the highest priority, then "String1", then "String2", and then "String3", i.e.
Priority of (Numeric > String1 > String2 > String3)
Note that, in the output, all numeric values should be in sorted order.
Hence, the sorted order of the sample data would be:
201, 329, 400, 529, 588, 678, "String1", "String1", "String2", "String2", "String3".
Also, if multiple objects have the same "sortCriteria" value, their order should be retained.
E.g. let's say I have 2 objects whose "sortCriteria" values are the same:
Object 1: 205,
Object 2: 205.
Then in the sorted order, Object 1 should come before Object 2.
My current JavaScript implementation of the sorting logic looks like this:
function mySortRule(a, b) {
  var value1 = a[1], value2 = b[1];
  var value1Priority = getPriorityOf(value1);
  var value2Priority = getPriorityOf(value2);
  return value1Priority - value2Priority;
}

function getPriorityOf(value) {
  var priority;
  if (value != "String1" && value != "String2" && value != "String3") {
    priority = value;
  } else if (value == "String1") {
    priority = Number.MAX_VALUE - 2;
  } else if (value == "String2") {
    priority = Number.MAX_VALUE - 1;
  } else if (value == "String3") {
    priority = Number.MAX_VALUE;
  }
  return priority;
}

sortCriteriaArray.sort(mySortRule);
Each sortCriteriaArray[i] value is in this format:
["indexOfObject", "sortCriteria"]
This solution is kind of working, but it's not retaining the objects' order. Also, I don't feel that this is a good approach because:
1. Tomorrow, let's say we have to fit in some other types of strings. In that case, we will have to change these conditional statements in the getPriorityOf() function.
2. Using Number.MAX_VALUE to set the priority looks hacky to me.
Can there be any better way to achieve this?
Arrange the comparison test to be lexicographical: first on the type (assigning priority indexes such as number: 0, string: 1), then on the value in case of ties. This is clean and extensible.
To retain the order, you need to use a stable sorting algorithm such as merge sort.
Alternatively, if you don't plan to add other types, proceed as follows:
scan the list and split it (stably) into a sublist with the numbers and another sublist with the strings;
sort the sublists and concatenate them.
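Both ideas can be sketched in plain JavaScript; the helper names below (typeRank, stableSort) are mine, not from the question:

```javascript
// Type-first, then value: numbers rank before strings.
const typeRank = v => (typeof v === 'number' ? 0 : 1)
const byTypeThenValue = (a, b) =>
  typeRank(a) - typeRank(b) ||
  (typeof a === 'number' ? a - b : String(a).localeCompare(String(b)))

// Decorate/sort/undecorate: pair each item with its original index and
// use the index as a tie-breaker, which makes any sort behave stably.
const stableSort = (arr, cmp) =>
  arr
    .map((item, i) => [item, i])
    .sort(([a, i], [b, j]) => cmp(a, b) || i - j)
    .map(([item]) => item)

const input = [400, 329, 529, "String1", 678, "String2",
               588, "String3", "String1", 201, "String2"]
console.log(stableSort(input, byTypeThenValue))
// [ 201, 329, 400, 529, 588, 678,
//   'String1', 'String1', 'String2', 'String2', 'String3' ]
```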
Take a look at my solution and check if it fits:
// Array
const arr = [400, 329, 529, "String1", 678, "String2", 588, "String3", "String1", 201, "String2"];
// Filter methods
const isNumber = (item) => !isNaN(item);
const checkString = (str) => (item) => item === str;
// Sort/Config method
const sortPriority = () => {
  return [
    ...arr.filter(isNumber).sort((a, b) => a - b),
    ...arr.filter(checkString("String1")),
    ...arr.filter(checkString("String2")),
    ...arr.filter(checkString("String3")),
  ]
}
console.log(sortPriority());
Given that you have a small, fixed number of strings, you could build a composite sorting rule using an object as a dictionary of priorities instead of your getPriorityOf, and then you don't have to use MAX_VALUE. Take an object for the order of the given strings with a default value of zero (assuming numbers are the top priority), and then sort by numerical value.
var data = [400, 329, 529, "String1", 678, "String2", 588, "String3", "String1", 201, "String2"],
order = { String1: 1, String2: 2, String3: 3 };
data.sort((a, b) => (order[a] || 0) - (order[b] || 0) || a - b);
console.log(data);
Unfortunately, Array#sort was not guaranteed to be stable before ES2019, although in practice it often was (V8 has used a stable TimSort since version 7.0). See also About the stability of the algorithm used by V8 engine
If it is not stable in your environment, you can emulate stable sort using a WeakMap to store the original positions.
The fact that you ask for stability implies that you really have some objects that you want to sort by some field; I assume that field is called data. Note that the index property is not used by the algorithm; it is in this example only to prove the stability of the order.
var data = [{ index: 0, data: 400 }, { index: 1, data: 329 }, { index: 2, data: 529 }, { index: 3, data: "String1" }, { index: 4, data: 678 }, { index: 5, data: "String2" }, { index: 6, data: 588 }, { index: 7, data: "String3" }, { index: 8, data: "String1" }, { index: 9, data: 201 }, { index: 10, data: "String2" }],
indices = new WeakMap(data.map((o, i) => [o, i])),
order = { String1: 1, String2: 2, String3: 3 };
data.sort((a, b) =>
  (order[a.data] || 0) - (order[b.data] || 0) || // get priority ordering
  a.data - b.data ||                             // order by number
  indices.get(a) - indices.get(b)                // keep equal values in original order
);
console.log(data);
function mySortRule(a, b) {
  function getPriorityOf(value) {
    // priority order..
    return [Boolean, Number, String, Array, Object, Function]
      .reverse().indexOf(value.constructor);
  }
  // smallest first..
  return getPriorityOf(a) == getPriorityOf(b) ?
    (a.valueOf() == b.valueOf() ?
      0 :
      (a.valueOf() > b.valueOf() ? 1 : -1)) :
    getPriorityOf(b) - getPriorityOf(a)
}
var sortCriteriaArray =
[400, 329, 529, "String1", 678, "String2", 588, "String3", "String1", 201, "String2"];
sortCriteriaArray.sort(mySortRule);
// -> [201, 329, 400, 529, 588, 678, "String1", "String1", "String2", "String2", "String3"]
(Assuming every array element is neither null nor undefined)
I want to create two arrays b and c at the same time. I know two methods which can achieve this. The first method is:
b = ([i, i * 2] for i in [0..10])
c = ([i, i * 3] for i in [0..10])
alert "b=#{b}"
alert "c=#{c}"
This method is very handy for creating only one array, but it can't be the best way to get good computational performance, since it iterates over the range twice.
The second method is
b = []
c = []
for i in [0..10]
  b.push [i, i*2]
  c.push [i, i*3]
alert "b=#{b}"
alert "c=#{c}"
This method seems good for computational efficiency, but the two lines
b = []
c = []
have to be written first. I don't want to write these two lines, but I haven't found a good way to avoid them: without initializing the arrays b and c, we cannot use the push method.
CoffeeScript has the existential operator ?, but I don't know how to use it for this problem. Do you have a better method for creating the arrays b and c without the explicit initialization?
Thank you!
You can use a little help from underscore (or any other lib that provides zip-like functionality):
[b, c] = _.zip ([[i, i * 2], [i, i * 3]] for i in [0..10])...
After executing it we have:
coffee> b
[ [ 0, 0 ],
[ 1, 2 ],
[ 2, 4 ],
[ 3, 6 ],
[ 4, 8 ],
[ 5, 10 ],
[ 6, 12 ],
[ 7, 14 ],
[ 8, 16 ],
[ 9, 18 ],
[ 10, 20 ] ]
coffee> c
[ [ 0, 0 ],
[ 1, 3 ],
[ 2, 6 ],
[ 3, 9 ],
[ 4, 12 ],
[ 5, 15 ],
[ 6, 18 ],
[ 7, 21 ],
[ 8, 24 ],
[ 9, 27 ],
[ 10, 30 ] ]
See the section about splats in CoffeeScript docs for more details and examples.
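For comparison, the same unzip can be sketched in plain JavaScript (roughly what the compiled CoffeeScript boils down to; the pairs name is mine):

```javascript
// Build the [b-pair, c-pair] tuples once, then split them into b and c.
const pairs = Array.from({ length: 11 }, (_, i) => [[i, i * 2], [i, i * 3]])
const b = pairs.map(([first]) => first)
const c = pairs.map(([, second]) => second)

console.log(b[5]) // [ 5, 10 ]
console.log(c[5]) // [ 5, 15 ]
```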
How about this using the existential operator:
for i in [0..10]
  b = [[i, i*2]] if not b?.push [i, i*2]
  c = [[i, i*3]] if not c?.push [i, i*3]
console.log "b=#{b}"
console.log "c=#{c}"
Or to be a bit more understandable:
for i in [0..10]
  (if b? then b else b = []).push [i, i*2]
  (if c? then c else c = []).push [i, i*3]
console.log "b=#{b}"
console.log "c=#{c}"
EDIT: from the comments:
OK, but then you have to write so much tedious code. The same reasoning
applies to (b = b or []).push [i, i*2]
It is tedious, so we can wrap it in a function (but beware: the variables will now be global):
# for node.js
array = (name) -> global[name] = global[name] or []
# for the browser
array = (name) -> window[name] = window[name] or []
for i in [0..10]
  array('b').push [i, i*2]
  array('c').push [i, i*3]
console.log "b=#{b}"
console.log "c=#{c}"
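A variant that avoids polluting the global object keeps the lazily-created arrays in a local lookup table instead (plain JavaScript sketch; the arrays table and array helper are my names):

```javascript
// Lazily create named arrays in a local table instead of on global/window.
const arrays = {}
const array = name => arrays[name] || (arrays[name] = [])

for (let i = 0; i <= 10; i++) {
  array('b').push([i, i * 2])
  array('c').push([i, i * 3])
}

console.log(arrays.b[2]) // [ 2, 4 ]
console.log(arrays.c[2]) // [ 2, 6 ]
```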