Breaking a cycle in FRP snake in Bacon.js - javascript

I've been following this snake example and decided to modify it to generate new apples only in empty (i.e. non-snake) cells. However, that's introduced a cyclic dependency between Observables, since generating new apples now depends not only on the last position but on the whole snake:
// stream of last `length` positions -- snake cells
var currentSnake = currentPosition.slidingWindowBy(length);
// stream of apple positions
var apples = appleStream(currentSnake);
// length of snake
var length = apples.scan(1, function(l) { return l + 1; });
Is there a nice way to resolve the cycle?
I can imagine how this would work in a messy state machine but not with clean FRP.
The closest I can think of is coalescing apples and length into one stream and making that stream generate its own "currentSnake" from currentPosition.
applesAndLength --> currentPosition
        ^                ^
         \              /
          currentSnake
I haven't thought about the implementation much, though.

Once it has been constructed, Bacon can usually handle a cyclic dependency between Observables. It is constructing them that's a bit tricky.
In a language like JavaScript, to create a structure with a cycle in it (e.g. a doubly-linked list), you need a mutable variable. For regular objects you use a regular variable or field to do that, e.g.
var tail = { prev: null, next: null };
var head = { prev: null, next: tail };
tail.prev = head; // mutating 'tail' here!
In Bacon, we operate on Observables instead of variables and objects, so we need some kind of a mutable observable to reach the same ends. Thankfully, Bacon.Bus is just the kind of observable we need:
var apples = new Bacon.Bus(); // plugged in later
var length = apples.scan(1, function(l) { return l + 1; });
var currentSnake = currentPosition.slidingWindowBy(length);
apples.plug(appleStream(currentSnake)); // mutating 'apples' here!
In my experience, it is preferable to cut the cycles at EventStreams instead of Properties, because initial values tend to get lost otherwise; hence the reordering of apples and length.


How to implement a robust hash table like v8

Looking to learn how to implement a hash table in a decent way in JavaScript.
I would like for it to be able to:
Efficiently resolve collisions,
Be space efficient, and
Be unbounded in size (at least in principle, like v8 objects are, up to the size of the system memory).
From my research and help from SO, there are many ways to resolve collisions in hash tables. The way v8 does it is Quadratic probing:
hash-table.h
The wikipedia algorithm implementing quadratic probing in JavaScript looks something like this:
var i = 0
var SIZE = 10000
var key = getKey(arbitraryString)
var hash = key % SIZE
if (hashtable[hash]) {
  while (i < SIZE) {
    i++
    hash = (key + i * i) % SIZE
    if (!hashtable[hash]) break
    if (i == SIZE) throw new Error('Hashtable full.')
  }
}
hashtable[hash] = key
The elements that are missing from the wikipedia entry are:
How to compute the hash getKey(arbitraryString). I'm hoping to learn how v8 does this (not necessarily an exact replica, just something along the same lines). Not being proficient in C++, it looks to me like the key is an object and the hash is a 32-bit integer. I'm not sure if lookup-cache.h is important.
How to make it dynamic so the SIZE constraint can be removed.
Where to store the final hash, and how to compute it more than once.
V8 allows you to specify your own "Shape" object to use in the hash table:
// The hash table class is parameterized with a Shape.
// Shape must be a class with the following interface:
//   class ExampleShape {
//    public:
//     // Tells whether key matches other.
//     static bool IsMatch(Key key, Object* other);
//     // Returns the hash value for key.
//     static uint32_t Hash(Isolate* isolate, Key key);
//     // Returns the hash value for object.
//     static uint32_t HashForObject(Isolate* isolate, Object* object);
//     // Convert key to an object.
//     static inline Handle<Object> AsHandle(Isolate* isolate, Key key);
//     // The prefix size indicates number of elements in the beginning
//     // of the backing storage.
//     static const int kPrefixSize = ..;
//     // The Element size indicates number of elements per entry.
//     static const int kEntrySize = ..;
//     // Indicates whether IsMatch can deal with other being the_hole (a
//     // deleted entry).
//     static const bool kNeedsHoleCheck = ..;
//   };
But I'm not sure what the key is, or how to convert that key to a hash so that keys are evenly distributed and the hash function isn't just a hello-world example.
The question is: how do you implement a fast hash table like V8's that can efficiently resolve collisions and is unbounded in size? It doesn't have to be exactly like V8, but it should have the features outlined above.
In terms of space efficiency, a naive approach would do var array = new Array(10000), which would eat up a bunch of memory until it was filled out. I'm not sure how v8 handles this, but if you do var x = {} a bunch of times, it doesn't allocate a bunch of memory for unused keys; somehow it is dynamic.
I'm stuck here essentially:
var m = require('node-murmurhash')

function HashTable() {
  this.array = new Array(10000)
}

HashTable.prototype.key = function (value) {
  // not sure if the key is actually this, or
  // the final result computed from the .set function,
  // and if so, how to store that.
  return m(value)
}

HashTable.prototype.set = function (value) {
  var key = this.key(value)
  var array = this.array
  // not sure how to get rid of this constraint.
  var SIZE = 10000
  var hash = key % SIZE
  var i = 0
  if (array[hash]) {
    while (i < SIZE) {
      i++
      hash = (key + i * i) % SIZE
      if (!array[hash]) break
      if (i == SIZE) throw new Error('Hashtable full.')
    }
  }
  array[hash] = value
}

HashTable.prototype.get = function (index) {
  return this.array[index]
}
This is a very broad question, and I'm not sure what exactly you want an answer to. ("How to implement ...?" sounds like you just want someone to do your work for you. Please be more specific.)
How to compute the hash
Any hash function will do. I've pointed out V8's implementation in the other question you've asked; but you really have a lot of freedom here.
Not sure if the lookup-cache.h is important.
Nope, it's unrelated.
How to make it dynamic so the SIZE constraint can be removed.
Store the table's current size as a variable, keep track of the number of elements in your hash table, and grow the table when the percentage of used slots exceeds a given threshold (you have a space-time tradeoff there: lower load factors like 50% give fewer collisions but use more memory, higher factors like 80% use less memory but hit more slow cases). I'd start with a capacity that's an estimate of "minimum number of entries you'll likely need", and grow in steps of 2x (e.g. 32 -> 64 -> 128 -> etc.).
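To make the grow-on-load-factor idea concrete, here is a minimal sketch of a growable open-addressing table. It uses linear probing for brevity, a 0.7 threshold, and a toy 31-multiplier string hash; all of the names and constants are illustrative choices of mine, not V8's actual implementation:

```javascript
// Illustrative string hash (NOT V8's): multiply-and-add over char codes,
// forced into an unsigned 32-bit integer.
function strHash(s) {
  var h = 0;
  for (var i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) | 0;
  }
  return h >>> 0;
}

function HashTable(capacity) {
  this.capacity = capacity || 8; // start small; grows on demand
  this.size = 0;
  this.keys = new Array(this.capacity);
  this.values = new Array(this.capacity);
}

HashTable.prototype.set = function (key, value) {
  // grow when the load factor exceeds ~0.7, as discussed above
  if (this.size / this.capacity > 0.7) this._grow();
  var i = strHash(key) % this.capacity;
  while (this.keys[i] !== undefined && this.keys[i] !== key) {
    i = (i + 1) % this.capacity; // linear probing
  }
  if (this.keys[i] === undefined) this.size++;
  this.keys[i] = key;
  this.values[i] = value;
};

HashTable.prototype.get = function (key) {
  var i = strHash(key) % this.capacity;
  while (this.keys[i] !== undefined) {
    if (this.keys[i] === key) return this.values[i];
    i = (i + 1) % this.capacity;
  }
  return undefined;
};

HashTable.prototype._grow = function () {
  // double the capacity and re-insert every entry
  // (slot positions change because the modulus changes)
  var oldKeys = this.keys, oldValues = this.values;
  this.capacity *= 2;
  this.size = 0;
  this.keys = new Array(this.capacity);
  this.values = new Array(this.capacity);
  for (var i = 0; i < oldKeys.length; i++) {
    if (oldKeys[i] !== undefined) this.set(oldKeys[i], oldValues[i]);
  }
};
```

Starting at capacity 8 and doubling also answers the space-efficiency concern: you never pre-allocate the 10000-slot array, you only pay for roughly 1.4x the entries you actually store.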
Where to store the final hash,
That one's difficult: in JavaScript, you can't store additional properties on strings (or primitives in general). You could use a Map (or object) on the side, but if you're going to do that anyway, then you might as well use that as the hash table, and not bother implementing your own thing on top.
and how to compute it more than once.
That one's easy: invoke your hashing function again ;-)
I just want a function getUniqueString(string)
How about this:
var table = new Map();
var max = 0;

function getUniqueString(string) {
  var unique = table.get(string);
  if (unique === undefined) {
    unique = (++max).toString();
    table.set(string, unique);
  }
  return unique;
}
For nicer encapsulation, you could define an object that has table and max as properties.
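One possible shape for that encapsulation (the class name UniqueStringTable is my own invention):

```javascript
// Same logic as above, with table and max hidden behind a small class.
class UniqueStringTable {
  constructor() {
    this.table = new Map();
    this.max = 0;
  }
  getUniqueString(string) {
    let unique = this.table.get(string);
    if (unique === undefined) {
      unique = (++this.max).toString();
      this.table.set(string, unique);
    }
    return unique;
  }
}
```

Each distinct input gets the next counter value, and repeated inputs always map back to the same string.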

Recursive Reduce method in ES6 / Immutable

Hope this isn't too difficult a question without context, but here goes nothing. So, I inherited this code from someone, and I can't seem to get it to work!
We're making a Go game. We want to scan a set of pieces on the board and see if they're empty or not. An empty square is called a 'liberty'. At the bottom of the function we're creating a new 2D array, 'visitedBoard', that keeps track of where we've scanned so far.
PROBLEM: the current implementation allows for liberties to be counted twice! It only seems to be marking a point as 'visited' in the board when it holds our own stone, not when it is empty or holds another color (0).
BTW, at the bottom - we're iterating through neighbors, which is a 4 item array of objects {row: 2, col: 3} and then recursively running it through this function.
Any assistance is helpful. I'm new to this functional / immutable business.
const getLiberties = function (board, point, color) {
  if (board.get(point.row).get(point.col) === C.VISITED) {
    return 0; // we already counted this point
  } else if (board.get(point.row).get(point.col) === C.EMPTY) {
    return 1; // point is a liberty
  } else if (board.get(point.row).get(point.col) !== color) {
    return 0; // point has an opposing stone in it
  }
  const neighbours = getNeighbours(board, point)
  const visitedBoard = board.setIn([point.row, point.col], C.VISITED)
  return neighbours.reduce(
    (liberties, neighbour) => liberties + getLiberties(visitedBoard, neighbour, color),
    0)
}
Instead of .get(point.row).get(point.col) you can use .getIn([point.row, point.col]).
Inside the reduce you always pass the same visitedBoard to every call, so marks made in one recursive branch are invisible to the next. You have to thread the updated board through the reduction, carrying the new value forward after each callback.
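To make that fix concrete, here is one way to thread the board through the reduce by accumulating a {count, board} pair, and also marking counted empty points as visited so a shared liberty is never counted twice. This sketch substitutes plain nested arrays and hand-rolled getIn/setIn/getNeighbours helpers for Immutable.js and the game's real constants, so treat it as an illustration of the shape of the fix rather than drop-in code:

```javascript
// Stand-ins for the game's constants and Immutable.js helpers.
var C = { VISITED: 'V', EMPTY: 0 };
function getIn(board, [r, c]) {
  return (board[r] || [])[c];
}
function setIn(board, [r, c], value) {
  // return a copy with one cell changed, like Immutable's setIn
  var next = board.map(row => row.slice());
  next[r][c] = value;
  return next;
}
function getNeighbours(board, point) {
  return [[-1, 0], [1, 0], [0, -1], [0, 1]]
    .map(([dr, dc]) => ({ row: point.row + dr, col: point.col + dc }))
    .filter(p => getIn(board, [p.row, p.col]) !== undefined); // drop off-board
}

const getLiberties = function (board, point, color) {
  const cell = getIn(board, [point.row, point.col]);
  if (cell === C.VISITED) return { count: 0, board };
  if (cell === C.EMPTY) {
    // count the liberty AND mark it, so no other branch counts it again
    return { count: 1, board: setIn(board, [point.row, point.col], C.VISITED) };
  }
  if (cell !== color) return { count: 0, board }; // opposing stone
  const visited = setIn(board, [point.row, point.col], C.VISITED);
  // thread the board through the reduce: each step sees the previous marks
  return getNeighbours(visited, point).reduce(
    (acc, n) => {
      const r = getLiberties(acc.board, n, color);
      return { count: acc.count + r.count, board: r.board };
    },
    { count: 0, board: visited });
};
```

The caller now reads getLiberties(board, point, color).count; the returned board is the fully-marked scratch copy.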

Implement partition refinement algorithm

Partition Refinement Algorithm
I have this algorithm from another Stack Exchange question:
Let
S = {x_1,x_2,x_3,x_4,x_5}
be the state space and
R = {
(x_1,a,x_2),
(x_1,b,x_3),
(x_2,a,x_2),
(x_3,a,x_4),
(x_3,b,x_5),
(x_4,a,x_4), // loop
(x_5,a,x_5), // loop (a-transition)
(x_5,b,x_5) // loop (b-transition)
}
be the transition relation
Then we start with the partition
Pi_1 = {{x_1,x_2,x_3,x_4,x_5}}
where all the states are lumped together.
Now, x_2 and x_4 can both always do an a-transition to a state, but no b-transitions, whereas the remaining states can do both a- and b-transitions, so we split the state space as
Pi_2 = {{x_1,x_3,x_5},{x_2,x_4}}.
Next, x_5 can do an a-transition into the class {x_1,x_3,x_5},
but x_1 and x_3 can not, since their a-transitions go into the class {x_2,x_4}. Hence these should again be split, so we get
Pi_3 = {{x_1,x_3},{x_5},{x_2,x_4}}.
Now it should come as no surprise that x_3 can do a b-transition into the class {x_5}, but x_1 can not, so they must also be split, so we get
Pi_4 = {{x_1},{x_3},{x_5},{x_2,x_4}},
and if you do one more step, you will see that Pi_4 = Pi_5, so this is the result.
Implementation
I do not know how to implement this algorithm in JavaScript.
// blocks in initial partition (only consists of 1 block)
const initialPartition = { block1: getStates() };
// new partition
const partition = {};
// loop through each block in the partition
Object.values(initialPartition).forEach(function (block) {
  // now split the block into subgroups based on the relations.
  // loop through each node in block to see to which nodes it has relations
  block.forEach(function (node) {
    // recursively get edges (and their targets) coming out of the node
    const successors = node.successors();
    // ...
  });
});
I guess I should create a function that for each node can say which transitions it can make. If I have such function, I can loop through each node in each block, find the transitions, and create a key using something like
const key = getTransitions(node).sort().join();
and use this key to group the nodes into subblocks, making it possible to do something like
// blocks in initial partition (only consists of 1 block)
const initialPartition = { block1: getStates() };
// new partition
const partition = {};
// keep running until terminating
while (true) {
  // loop through each block in the partition
  Object.values(initialPartition).forEach(function (block) {
    // now split the block into subgroups based on the relations.
    // loop through each node in block to see to which nodes it has relations
    block.forEach(function (node) {
      // get possible transitions
      const key = getTransitions(node).sort().join();
      // group by key
      (partition[key] = partition[key] || []).push(node);
    });
  });
}
but I need to remember which nodes were already separated into blocks, so the subblocks keep becoming smaller (i.e. if I have {{x_1,x_3,x_5},{x_2,x_4}}, I should remember that these blocks can only become smaller, and never 'interchange').
Can someone give an idea on how to implement the algorithm? Just in pseudocode or something, so I can implement how to get the nodes' successors, incomers, labels (e.g. a-transition or b-transition), etc. This is a bisimulation algorithm, and the algorithm is implemented in Haskell in the slides here, but I do not understand Haskell.
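The grouping-by-key idea from the question can be completed by iterating to a fixed point: compute each state's signature under the current partition, split blocks whose members disagree, and stop when nothing splits. Including the current block id in the key is what "remembers" earlier separations, so blocks only ever get smaller. A sketch in plain JavaScript (all names are mine):

```javascript
// Partition refinement by signatures: two states stay in the same block
// while they sit in the same block AND have the same set of
// (label -> target block) transitions.
function refine(states, transitions) {
  var blockOf = new Map(states.map(function (s) { return [s, 0]; }));
  while (true) {
    // signature of a state under the current partition
    var sig = function (s) {
      return transitions
        .filter(function (t) { return t[0] === s; })
        .map(function (t) { return t[1] + '->' + blockOf.get(t[2]); })
        .sort().join(',');
    };
    // group by (current block, signature): blocks can only split, never merge
    var groups = new Map();
    states.forEach(function (s) {
      var key = blockOf.get(s) + '|' + sig(s);
      if (!groups.has(key)) groups.set(key, []);
      groups.get(key).push(s);
    });
    var blockCount = new Set(blockOf.values()).size;
    if (groups.size === blockCount) {
      return Array.from(groups.values()); // stable: final partition reached
    }
    // renumber the blocks and refine again
    var i = 0;
    groups.forEach(function (members) {
      members.forEach(function (s) { blockOf.set(s, i); });
      i++;
    });
  }
}
```

Running this on the example relation above yields the partition {{x_1},{x_2,x_4},{x_3},{x_5}}, matching Pi_4. Note that a state with no b-transition simply has no "b->..." entry in its signature, which is what produces the very first split into {x_1,x_3,x_5} and {x_2,x_4}.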

Find if a string line is nested or is a child of another line

I'm looking to write a small parser for a certain kind of file, and one of the things I have to accomplish is to determine whether a line is nested inside another one, where nesting is defined by indentation (spaces or tabs).
Example:
This is the main line
This is a nested or child line
I'm trying to establish this by reading the first character position in the line and comparing it with the previous one with something like this:
var str = ' hello';
str.indexOf(str.match(/\S|$/).shift());
I'm sure this is not the best way and it looks horrible; I also have other issues to address, like checking whether the indentation is made with spaces (2 or 4) or tabs, and passing/maintaining the state of the previous line (object).
Also, lines can be infinitely nested, and of course I'm looking more for a nice, performant algorithm (or idea or pattern) than for a simple check, which I think is relatively easy to do but error prone. I'm sure this has already been solved by people who work with parsers and compilers.
Edit:
str.search(/\S/);
@Oriol's proposal looks much better
This is generally the kind of thing you write a parser for, rather than purely relying on regex. If the nesting determines the depth, then you have two things to solve: 1) find the depth for an arbitrary line, and 2) iterate through the set of lines and track, for each line, which preceding line has a lower depth value.
The first is trivial if you are familiar with the RegExp functions in Javascript:
function getDepth(line) {
  // find leading white space
  var ws = line.match(/^(\s+)/);
  // no leading white space?
  if (ws === null) return 0;
  // leading white space -> count the white space symbols.
  // obviously this goes wrong for mixed spaces and tabs, and that's on you.
  return ws[0].length;
}
The second part is less trivial, and so you have several options. You could iterate through all the lines and track a list of line numbers, pushing onto the list as you go deeper and popping from it as you go back up; or you can build a simple tree structure (which is generally far better, because it lets you expand its functionality much more easily) using standard tree-building approaches.
function buildTree(lines, depths) {
  if (!depths) {
    return buildTree(lines, lines.map(getDepth));
  }
  var root = new Node('', -1); // sentinel root, shallower than any real line
  for (var pos = 0, end = lines.length; pos < end; pos++) {
    root.insert(lines[pos], depths[pos]);
  }
  return root;
}
With a simple Node object, of course
var Node = function (text, depth) {
  this.children = [];
  this.line = text.replace(/^\s+/, '');
  this.depth = depth;
}

Node.prototype = {
  insert: function (text, depth) {
    // this is where you become responsible: we need to insert
    // a new node inside of this one if the depths indicate that
    // is what should happen, but you're on the hook for determining
    // what you want to have happen if the indentation was weird, like
    // a line at depth 12 after a line at depth 2, or vice versa.
  }
}
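One possible way to fill in that insert, under the simple convention that a line nests under the nearest preceding line with a smaller depth (the sentinel root depth of -1 and the function names are my choices, and this deliberately does not try to police "weird" indentation jumps):

```javascript
// A possible insert: recurse into the last child while the new line is
// deeper than it; otherwise the line becomes a sibling at this level.
var Node = function (text, depth) {
  this.children = [];
  this.line = text.replace(/^\s+/, '');
  this.depth = depth;
};

Node.prototype.insert = function (text, depth) {
  var last = this.children[this.children.length - 1];
  if (last && depth > last.depth) {
    last.insert(text, depth); // deeper than the previous line: nest under it
  } else {
    this.children.push(new Node(text, depth)); // same or shallower: sibling
  }
};

// build a tree from raw lines, with a sentinel root shallower than any line
function buildTreeFromLines(lines) {
  var root = new Node('', -1);
  lines.forEach(function (line) {
    var ws = line.match(/^\s+/);
    root.insert(line, ws ? ws[0].length : 0);
  });
  return root;
}
```

A depth-2 line after a depth-12 line simply closes the deeper branch and attaches at whatever level first has depth below 2, which is one defensible answer to the "weird indentation" question.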

Using three.js with time series data

How do you best go about using time series data to direct the animation of a three.js scene?
For example:
Time     | ObjA(x,y,z)  | ObjB(x,y,z) | ...
00:00:00 | 0,9,0        | 1,1,1       | ...
00:00:10 | 0.1,0,0.1    | 1,0.5,1     | ...
00:00:15 | 0.1,0.1,0.1  | 0.9,0.5,1   | ...
The data can be hundreds, if not thousands, of lines long, and the number of objects can also change from dataset to dataset.
I've read up on using tween.js and chaining keyframes. But creating and chaining many thousands of tweens during initialization doesn't feel like the right answer.
Is tween.js the right way to go? Or have I missed an extension that would better handle the situation? Any examples of a similar use case that could prove useful?
UPDATE
So Director.js would certainly be capable of giving the right result, but it looks like it was intended to tween camera motion around a scene rather than to direct the motion of hundreds of meshes. Is chaining potentially thousands of tweens together on possibly hundreds of meshes really the best way of achieving a scripted replay?
The table you present is a little misleading. Typically, if you have a timeline and the number of objects is dynamic, you would create multiple timelines, one per object; this makes it easier to manipulate the overall set.
var Record = function (time, value) {
  this.time = time;
  this.value = value;
};

var Signal = function () {
  this.records = [];
  this.findValue = function (time) {
    //... some divide and conquer implementation
  };
  this.getInterpolatedValue = function (time) {...};
  this.add = function (time, value) {
    // make sure sequence is preserved by doing a check, or just assume that
    // add is always called with a time greater than what's already in the series
    this.records.push(new Record(time, value));
  };
};

var signalObjA = new Signal();
var signalObjB = new Signal();
var signalObjA = new Signal();
var signalObjB = new Signal();
When it comes to replay, interpolation of some sort is necessary, and you probably want an animation manager of some kind: a thing which takes (signal, object) pairs and sets object values from signals based on the current time.
var Binding = function (signal, object) {
  this.signal = signal;
  this.object = object;
  this.applyTime = function (t) {
    var val = this.signal.getInterpolatedValue(t);
    for (var p in val) {
      if (val.hasOwnProperty(p)) {
        this.object[p] = val[p]; // copying values into object
      }
    }
  };
};

var Simulator = function () {
  this.time = 0;
  this.bindings = [];
  this.step = function (timeDelta) {
    this.time += timeDelta;
    var time = this.time;
    this.bindings.forEach(function (b) {
      b.applyTime(time);
    });
  };
};
If you run into problems with space, try flattening Records into a Float32Array or some other binary buffer of your choosing.
Edit:
Please note that this approach is intended to save memory and avoid repeated data transformation: one saves on heap usage and GC, the other saves CPU time.
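The elided divide-and-conquer lookup and interpolation could be filled in along these lines, assuming record values are plain objects of numbers (e.g. {x: 0, y: 9, z: 0}); this is a self-contained sketch of the Signal above, not the author's actual code:

```javascript
var Record = function (time, value) {
  this.time = time;
  this.value = value;
};

var Signal = function () {
  this.records = []; // kept sorted by time, assuming add() gets increasing times
};

Signal.prototype.add = function (time, value) {
  this.records.push(new Record(time, value));
};

Signal.prototype.getInterpolatedValue = function (time) {
  var records = this.records;
  // clamp to the endpoints outside the recorded range
  if (time <= records[0].time) return records[0].value;
  var last = records[records.length - 1];
  if (time >= last.time) return last.value;
  // divide and conquer: find lo with records[lo].time <= time < records[lo+1].time
  var lo = 0, hi = records.length - 1;
  while (hi - lo > 1) {
    var mid = (lo + hi) >> 1;
    if (records[mid].time <= time) lo = mid;
    else hi = mid;
  }
  var a = records[lo], b = records[hi];
  var t = (time - a.time) / (b.time - a.time);
  var out = {};
  for (var p in a.value) {
    if (a.value.hasOwnProperty(p)) {
      out[p] = a.value[p] + (b.value[p] - a.value[p]) * t; // component-wise lerp
    }
  }
  return out;
};
```

With this in place, a Binding's applyTime(t) copies smoothly interpolated x/y/z values onto its mesh at any playback time, regardless of where the raw samples fall.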
