Efficiently find twin edges in Half-Edge (DCEL) data structure? - javascript

I am using a HalfEdge data structure to represent the connectivity between the faces on my mesh.
I am importing an external model, and I am constructing the HalfEdge structure during the import process. However, for meshes with many triangles, the construction process takes too much time.
Specifically, it appears that the process of linking the half-edges takes up the most time.
I would like to get some advice on how to improve my algorithm.
Below is the code I am using to initialize my data structure. The first for-loop creates a Face from the vertex data, while pushing the HalfEdges that compose the Face into a separate array to be used shortly after.
The second for-loop is responsible for looking through the array of all HalfEdges and finding matching pairs (i.e., the two that are twins of one another).
I logged out the time before and after each process, and noticed that the second loop is what slows everything down.
Here are the timestamps:
start constructing DCEL 14:55:22
start making faces 14:55:22
end making faces 14:55:22
/* this is where it takes long.. almost 6 seconds on a mesh with 13000 triangles */
start linking halfEdges 14:55:22
end linking halfEdges 14:55:28
end constructing DCEL 14:55:28
And here is the actual code:
console.log('start constructing DCEL', new Date().toTimeString());
// initialize Half-Edge data structure (DCEL)
const initialFaceColor = new THREE.Color(1, 1, 1);
const { position } = geometry.attributes;
const faces = [];
const edges = [];
let newFace;
console.log('start making faces', new Date().toTimeString());
for (let faceIndex = 0; faceIndex < (position.count / 3); faceIndex++) {
  newFace = new Face().create(
    new THREE.Vector3().fromBufferAttribute(position, faceIndex * 3 + 0),
    new THREE.Vector3().fromBufferAttribute(position, faceIndex * 3 + 1),
    new THREE.Vector3().fromBufferAttribute(position, faceIndex * 3 + 2),
    faceIndex);
  edges.push(newFace.edge);
  edges.push(newFace.edge.next);
  edges.push(newFace.edge.prev);
  newFace.color = initialFaceColor;
  faces.push(newFace);
}
console.log('end making faces', new Date().toTimeString());
console.log('start linking halfEdges', new Date().toTimeString());
/**
 * Find and connect twin Half-Edges
 *
 * if two Half-Edges are twins:
 *   Edge A   TAIL ----> HEAD
 *             =          =
 *   Edge B   HEAD <---- TAIL
 */
let currentEdge;
let nextEdge;
for (let j = 0; j < edges.length; j++) {
currentEdge = edges[j];
// this edge has a twin already; skip to next one
if (currentEdge.twin !== null) continue;
for (let k = j + 1; k < edges.length; k++) {
nextEdge = edges[k];
// this edge has a twin already; skip to next one
if (nextEdge.twin !== null) continue;
if (currentEdge.head().equals(nextEdge.tail())
&& currentEdge.tail().equals(nextEdge.head())) {
currentEdge.setTwin(nextEdge);
}
}
}
console.log('end linking halfEdges', new Date().toTimeString());
console.log('end constructing DCEL', new Date().toTimeString());
How can I optimize the process of searching for twin edges?

I'd try to hash and look up the edges, e.g. like this:
function hash(p1, p2) {
  return JSON.stringify(p1) + JSON.stringify(p2);
}

const lookup = {};
for (let j = 0; j < edges.length; j++) {
  const edge = edges[j];
  lookup[hash(edge.head(), edge.tail())] = edge;
}
for (let j = 0; j < edges.length; j++) {
  const edge = edges[j];
  const twin = lookup[hash(edge.tail(), edge.head())];
  !edge.twin && twin && !twin.twin && edge.setTwin(twin);
}

Related

Optimizing a process determining license usage over time

I am trying to find an efficient way to go through a large amount of data to determine how many units are processed at once.
So the data that I am receiving are just simple pairs:
{timestamp: *, solvetime: *}
What I need is to see how many things are being processed at each second.
To help you visualize what I mean: here is an example of data that I receive:
[{timestamp: 5, solvetime: 3}, {timestamp: 7, solvetime: 5}, {timestamp: 8, solvetime: 2}, {timestamp: 12, solvetime: 10}, {timestamp: 14, solvetime: 7}]
The chart below should help you understand how it looks in time:
https://i.stack.imgur.com/LEIhW.png
This is a simple case where the final calculation contains every second, but if the timeframe is much wider I show only 205 different times within it. E.g. if the time between the first and the last timestamp is 20500 seconds, I would calculate the usage for every second, divide the time into 205 parts of 100 seconds each, and show only the second with the highest usage from each part.
What I am doing right now is iterating through all the pairs of input data and creating a map of all the seconds; once I have it, I go through this map again to find the highest usage in each time period (of the 205 time periods I divide the whole timeframe into) and append it to the map of 205 timestamps.
It's working correctly, but it's very, very slow, and I feel like there must be a better way to do it. A table might be faster, but it is still not too efficient, is it?
Here is the actual code that does it:
// results contain all the timestamps and solvetimes
// Timeframe of the chart
var start = Math.min.apply(Math, msgDetailsData.results.map((o) => { return o.timestamp; }));
var end = Math.max.apply(Math, msgDetailsData.results.map((o) => { return o.timestamp; }));

// map of all seconds in the desired range (keys); each value counts the processes running in that second
let mapOfSecondsInRange = new Map();
for (let i = start; i <= end; i++) {
  mapOfSecondsInRange.set(i, 0);
}

// go through every process and add +1 to the value of each second in which the task was active
for (let element of msgDetailsData.results) {
  var singleTaskStart = element.timestamp - Math.ceil(element.solveTime);
  if (singleTaskStart < start) {
    for (let i = singleTaskStart; i < start; i++) {
      mapOfSecondsInRange.set(i, 0);
    }
    start = singleTaskStart;
  }
  for (let i = singleTaskStart; i < element.timestamp; i++) {
    mapOfSecondsInRange.set(i, mapOfSecondsInRange.get(i) + 1);
  }
}
// Preparation for the final map - all the seconds in the range divided into 205 parts.
const numberOfPointsOnChart = 205;
var numberOfSecondsForEachDataPoint = Math.floor((end - start) / numberOfPointsOnChart) + 1;
var leftoverSeconds = ((end - start) % numberOfPointsOnChart) + 1;
var highestUsageInGivenTimeframe = 0;
var timestampOfHighestUsage = 0;
let mapOfXXXDataPoints = new Map();
var currentElement = start;

for (let i = 0; i < numberOfPointsOnChart; i++) {
  if (leftoverSeconds === 0) {
    numberOfSecondsForEachDataPoint = numberOfSecondsForEachDataPoint - 1;
  }
  if (currentElement <= end) {
    for (let j = 0; j < numberOfSecondsForEachDataPoint; j++) {
      if (j === 0) {
        highestUsageInGivenTimeframe = mapOfSecondsInRange.get(currentElement);
        timestampOfHighestUsage = currentElement;
      } else {
        if (mapOfSecondsInRange.get(currentElement) > highestUsageInGivenTimeframe) {
          highestUsageInGivenTimeframe = mapOfSecondsInRange.get(currentElement);
          timestampOfHighestUsage = currentElement;
        }
      }
      currentElement = currentElement + 1;
    }
    mapOfXXXDataPoints.set(timestampOfHighestUsage, highestUsageInGivenTimeframe);
    leftoverSeconds = leftoverSeconds - 1;
  }
}
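For what it's worth, the per-second counting above can be avoided entirely: record +1 when a task starts and -1 when it ends, sort those events, and keep a running sum. A minimal sketch of that idea, assuming results is the array of {timestamp, solvetime} pairs shown above:

// Sweep over start/end events instead of every second of every task:
// O(n log n) in the number of tasks, independent of the time range.
function usageChangePoints(results) {
  const events = [];
  for (const r of results) {
    events.push([r.timestamp - Math.ceil(r.solvetime), +1]); // task becomes active
    events.push([r.timestamp, -1]);                          // task stops being active
  }
  events.sort((a, b) => a[0] - b[0]);

  const changes = new Map(); // second -> number of active tasks from that second on
  let active = 0;
  for (const [t, delta] of events) {
    active += delta;
    changes.set(t, active); // Map.set overwrites, so simultaneous events collapse correctly
  }
  return changes;
}

Since the count is constant between change points, the maximum for each of the 205 chart buckets can be read off these entries (plus the value carried into each bucket from the previous change point), without ever materializing a map of every second.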

2D visibility algorithm does not work well for overlapping edges

I found the visibility algorithm on one great website.
Since it is quite an old article, I am not sure that the author will give an answer.
As you can see, the interactive playground (try it yourself) does not work well in the upper right side, especially with overlapping edges.
// Pseudocode of main algorithm
var endpoints;  # list of endpoints, sorted by angle
var open = [];  # list of walls the sweep line intersects

loop over endpoints:
    remember which wall is nearest
    add any walls that BEGIN at this endpoint to 'walls'
    remove any walls that END at this endpoint from 'walls'
    figure out which wall is now nearest
    if the nearest wall changed:
        fill the current triangle and begin a new one
Here is also part of the working JavaScript code:
// Main algorithm with working Javascript code
export const calculateVisibility = (origin, endpoints) => {
  let openSegments = [];
  let output = [];
  let beginAngle = 0;

  endpoints.sort(endpointCompare);

  for (let pass = 0; pass < 2; pass += 1) {
    for (let i = 0; i < endpoints.length; i += 1) {
      let endpoint = endpoints[i];
      let openSegment = openSegments[0];

      if (endpoint.beginsSegment) {
        let index = 0;
        let segment = openSegments[index];
        while (segment && segmentInFrontOf(endpoint.segment, segment, origin)) {
          index += 1;
          segment = openSegments[index];
        }
        if (!segment) {
          openSegments.push(endpoint.segment);
        } else {
          openSegments.splice(index, 0, endpoint.segment);
        }
      } else {
        let index = openSegments.indexOf(endpoint.segment);
        if (index > -1) openSegments.splice(index, 1);
      }

      if (openSegment !== openSegments[0]) {
        if (pass === 1) {
          let trianglePoints = getTrianglePoints(origin, beginAngle, endpoint.angle, openSegment);
          output.push(trianglePoints);
        }
        beginAngle = endpoint.angle;
      }
    }
  }

  return output;
};
In my opinion, this happens because the algorithm assumes the nearest edge is visible along its whole length when it is the closest edge, even though parts of it can be hidden by other edges.
I think I could modify the algorithm to track both the current closest edge and the new current closest edge, but I am not sure...
How should I fix it?

Nested loops parsing huge object items freeze the browser after a few thousand loops

I've been trying to figure out how to improve the way my code calls a function with 3 given coords taken from a huge object. I'm using big objects with more than 3000 items in them, so when executing the functions to retrieve a geodesic area, the computer ends up freezing after a few thousand loops.
My JSON looks like this:
{"coords":[
{
"latE6":42140202,
"lngE6":1527653
},
{
"latE6":42134147,
"lngE6":1571707
},
{
"latE6":42138114,
"lngE6":1572871
},
{
"latE6":42141407,
"lngE6":1572997
}]
}
I'm using 3 nested for-loops, which works well with objects of fewer than 200-300 items, but when the object has a larger number of items the browser ends up freezing and the area can't be processed.
var x = data.coords.length;
for (i = 0; i < x; i++) {
  for (j = 0; j < x; j++) {
    for (k = 0; k < x; k++) {
      // checks for duplicated coords
      // then calls geodesic area function
      // feeds new object item with 3 coords info and result area
    }
  }
}
So far the code works, and if I remove the function call and add some debugging, like counting the number of loops, it returns the count quite fast. But when it comes to calling the function, the computer becomes slower and slower on each loop.
Extra info: here's the area calc function (taken from L.GeometryUtils.geodesicArea):
var calcArea = function(latLngs) {
  var pointsCount = latLngs.length,
      area = 0.0,
      d2r = Math.PI / 180,
      p1, p2;
  if (pointsCount > 2) {
    for (var i = 0; i < pointsCount; i++) {
      p1 = latLngs[i];
      p2 = latLngs[(i + 1) % pointsCount];
      area += ((p2.lng - p1.lng) * d2r) *
              (2 + Math.sin(p1.lat * d2r) + Math.sin(p2.lat * d2r));
    }
    area = area * 6378137.0 * 6378137.0 / 2.0;
    return Math.abs(area);
  }
};
What would be the best way to execute all these millions of calculations? Thanks.
If optimising your code is not enough and you need to perform heavy calculations, you could consider using web workers to perform the calculations in the background (similar to how you would code a multithreaded program) without blocking the UI.
See:
https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API
https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers
Apart from not blocking the browser UI, you can also parallelise parts of your calculations (if the kind of problem you're dealing with allows for it).
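A minimal sketch of the split, with placeholder file and message names (not tied to any particular library):

// main.js -- hand the coords to a background thread so the UI stays responsive
const worker = new Worker('area-worker.js'); // hypothetical file name
worker.onmessage = (e) => {
  console.log('done, received', e.data.length, 'results');
};
worker.postMessage(data.coords);

// area-worker.js -- the heavy nested loops run here, off the UI thread
self.onmessage = (e) => {
  const coords = e.data;
  const results = [];
  // ... run the triple loop and calcArea over 'coords' here ...
  self.postMessage(results);
};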

Feed forward Neural Network not working in JS

I am working on a project in Javascript where after the user has specified the size of the neural network and the weights, my program will create a feed forward NN that can predict the outcome of an input set based on already defined weights. My program does not train the network through backpropagation or anything else, it just takes in predefined weights to feed it through the network.
I have trained a neural network in Simbrain (Java-based NN software) through backprop, and I have exported the weights that resulted from training the network to a JS file that will input those weights when creating the NN.
My problem is that even though my neural network performs the way it's supposed to on most NNs (even those with more than 3 layers; see picture 1), on this one it keeps producing outputs very different from what it should get according to the NN in Simbrain (see picture 2). My question therefore is: what am I doing wrong?
I have already made sure that both Simbrain and my own program use the same sigmoidal function (f(x) = 1 / (1 + 1 / e^x)), so the problem isn't that.
Any help, including help that isn't directly related to my problem but rather to neural networks in general, is greatly appreciated. Thank you for reading this far.
The constructors for the NN:
function NeuralNetwork(layers /*2d array*/, weight3 /*3d array*/) {
/*
for the second test case, layers and weight3 are respectively:
[
[0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5],
[0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0],
[0,0,0]
]; , and
[
[
[0.8026458866414493,-4.88941104413936,-4.226087540974577,-4.530573464438122,-1.4556229469839264,-4.31689275950382,-2.598025750964335,-1.4024486094322206,-4.2746136202736595,-1.114811712353249],
[-0.7018557602796472,-1.5899008962198744,-0.16868647158179298,-1.5305564633774085,-1.4933903630534928,-1.0700257428899125,-4.302200807112217,-1.6005131049772037,0.1368813977388942,-1.5986983805280093],
[-0.29876796358044566,-2.4380450712541997,-1.5397608205098134,-2.3059188256916676,-1.5036183940283618,-1.7713981036865771,-1.2577757481948733,-1.7750243327951924,-1.2961124127198986,-1.6383194273199955],
[-0.6170049336232494,-3.3128244088884964,-3.331978879084264,-3.1456639968607307,-1.2964187225461807,-2.9267790861009417,-1.9560748559032006,-1.3973642251104967,-3.4449373490550164,-1.2039858563847703],
[0.28760582772383936,-0.029647036107864125,2.401661305865335,0.15614131284929608,0.9967571587559184,1.3920493637059834,2.053160398236636,1.560406973436222,2.5003616706837324,1.1406280698546443],
[-1.431355486584377,-0.5254956135639378,0.585966810464151,-0.47056989178558917,-0.34719568935262807,3.0873904709552016,2.7466680699367627,-0.3183084147224785,0.4307418811280014,-0.4347019809871141],
[4.730305578112657,2.794618188382498,2.7725054494795303,2.7971993062957767,3.0121313133902494,4.52697776884291,0.1861088251573733,3.3324377979102677,3.4776335904379945,3.220162438476049],
[1.3365670221393215,4.151102261635236,4.448937517824129,3.818527635050038,1.1622076847092544,5.056756438356793,-4.811867833736578,1.4279903810573407,5.067869165923729,1.2084930462786259],
[-7.653884362627585,3.4481239221814506,-1.3517036269721663,2.9744225300717084,-3.4121450698649567,-6.262463291313358,10.0,-5.000134578924327,-1.701089610696345,-4.510549176789293]
], [
[0.738937123642736,1.0027735638897857,-0.4895642945264585,-0.4966409487023605,-3.411429095495459,-0.645660237498346,0.4795890293073677,-1.1530391624279934,-0.5844011923153196,-0.18971906794059312,0.24259889837466253],
[-1.3289019929201453,-1.3846783936643814,-3.1027185772051857,-3.051664033849648,0.8551718079643078,-1.0150243513426276,-1.1322190191703165,-0.8017220694962384,-3.3343695886290257,-1.5355207800814192,-1.1708434001066501],
[-1.2376066287024123,-1.3769667430967405,-2.301168906630486,-2.325621305132306,1.9450338961713791,-0.6571756279826005,-1.1591625309611004,-0.40914317621655133,-2.489673013800612,-1.3075292847341096,-0.9491990659165409],
[-1.2735133823287095,-1.2998934938770366,-2.7684415870018477,-2.795194685738332,0.6119467566598199,-0.9188836490585277,-1.1200651160455346,-0.6609228479081031,-2.9594533679041617,-1.4427187863617232,-1.0109174567197428],
[-1.3424373302583281,-1.5343976853123078,-0.9412451891990541,-0.9181715664124199,0.39356360921646094,-1.0424607517601023,-1.502583319884496,-1.0152736611391637,-0.9513379926677091,-0.8563445028619694,-1.2613129065351194],
[1.479041505318804,1.810007067391299,-2.8023717750684107,-2.7812453328904354,-2.3035371159432065,0.03115788970802594,1.9657984684801135,-0.06598157999814092,-3.127064931997624,-0.823555626227005,1.264759264824068],
[-2.888035414770285,-2.994031589353307,3.6767014622022303,3.6660677492715528,-9.181646436584106,-3.263884016795336,-3.88349373084228,-3.7712237821378527,4.041549967683737,-0.687691881794572,-3.4265218341885384],
[-1.778832083041489,-2.12273644922548,-1.0963638066243724,-1.030260217185849,1.1187575804787673,-1.2409528854652865,-2.1738688894809637,-1.2743917544089247,-1.107812865029798,-1.0629428830636494,-1.7751739289407722],
[-1.004719873122501,-1.0443553807939716,-2.577499515628139,-2.5776692043229663,2.9709867841159,-0.5274719498845833,-0.843890283520379,-0.20671418403893513,-2.7179118826886026,-1.306839176041484,-0.7570735658813299],
[-1.6272666988539848,-1.827183409776364,-0.9438773753269729,-0.9435987081211876,0.7660203923230526,-1.2259095997120846,-2.031459716170598,-1.3231868095404185,-0.9964657871022223,-0.9165692090752512,-1.58444425796673]
], [
[3.725973720696792,-1.6646641113226457,-2.690673963411094],
[3.800277429998268,-2.085738176798833,-2.63291827658551],
[3.9016422516515155,2.0672177107134138,-3.855168383990615],
[3.8880037099581446,2.1381928663964778,-3.8429910059416983],
[-1.162825551537172,2.35558424769462,-2.2926681297161076],
[2.9764021713068542,0.3197129292281053,-2.8308532112377027],
[3.820130175045188,-1.7806950056811943,-3.106860347494969],
[2.967938499849539,0.8691409343754017,-3.130904551603304],
[4.0701841361555715,2.228471656300074,-4.053369245616366],
[3.2231153003534962,0.9718378272580407,-2.658010203549737],
[3.4876764892680967,-1.2622479268130227,-2.870582831864283]
]
];
I copied these directly from the simbrain weights matrices.
*/
  this.layers = [];
  for (var i = 0; i < layers.length; i++) { // the creation of the layers
    this.layers.push(new Layer(layers[i]));
  }
  for (var f = 0; f < layers.length - 1; f++) { // the creation of the connections between neurons of different layers, uses predefined weights
    this.layers[f].connectLayers(layers[f + 1], weight3[f]);
  }
  /* this.getChances = function(neuron, layer) { // this function isn't important
    var total = 0;
    for (var g = 0; g < layer.neurons.length; g++) {
      total += layer.neurons[g].value;
    }
    var prob = layer.neurons[neuron].value / total * 100;
    var decOdds = 100 / prob;
    return [(prob).toFixed(2), decOdds.toFixed(2)];
  } */
  this.reset = function(values) { // reset the neuron values
    for (var i = 0; i < this.layers.length; i++) {
      for (var t = 0; t < this.layers[i].neurons.length; t++) {
        this.layers[i].neurons[t].value = values[i][t];
      }
    }
  }
  this.run = function() { // the main function that will make the NN predict
    for (var t = 0; t < this.layers.length; t++) { // for each layer
      if (t !== 0) { // if not the first layer (input shouldn't be activated)
        this.activate(t);
      }
      if (t !== this.layers.length - 1) { // if not the last layer (the output cannot preActivate a layer that comes after it)
        this.preActivate(t); // this affects the layer that comes after it
      }
    }
  }
  this.preActivate = function(t) { // multiply the weights by the value of the neurons and add the result to the next layer (pre-activation)
    for (var p = 0; p < this.layers[t].neurons.length; p++) { // for the neurons in the current layer
      for (var v = 0; v < this.layers[t].neurons[p].weights.length; v++) { // for the weights of the current neuron (the amount of weights equals the amount of neurons in the next layer)
        this.layers[t + 1].neurons[v].value += this.layers[t].neurons[p].weights[v].value * this.layers[t].neurons[p].value; // increment the neurons in the next layer
      }
    }
  }
  this.activate = function(t) { // take the sigmoid of each neuron in the current layer (activation)
    for (var hp = 0; hp < this.layers[t].neurons.length; hp++) {
      this.layers[t].neurons[hp].value = this.sigmoid(this.layers[t].neurons[hp].value);
    }
  }
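  // note: 'pow' in the sigmoid below is presumably a global provided by the
  // sketch environment (e.g. p5.js); in plain JavaScript this would be
  // Math.pow(Math.E, x), i.e. Math.exp(x)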
  this.sigmoid = function(x) { // the sigmoidal function
    return 1 / (1 + 1 / pow(Math.E, x));
  }
}
function Layer(neurons /*1d array*/, weight2 /*2d array*/) { // create a new layer
  this.neurons = [];
  for (var i = 0; i < neurons.length; i++) {
    this.neurons.push(new Neuron(neurons[i]));
  }
  this.connectLayers = function(targetLayer /*1d array*/, weight2 /*2d array*/) { // create the connections
    for (var f = 0; f < this.neurons.length; f++) { // for each neuron in its own layer
      for (var t = 0; t < targetLayer.length; t++) { // for each neuron in the next layer
        this.neurons[f].weights.push(new Weight(weight2[f][t])); // get the weight from the predefined set and make it the value of that weight
      }
    }
  }
}

function Neuron(value /*float*/) {
  this.value = value;
  this.weights = [];
  this.x; // for drawing on a canvas
  this.y; // for drawing on a canvas
}

function Weight(value /*float*/) {
  this.value = value;
  this.x; // for drawing on a canvas
  this.y; // for drawing on a canvas
}

Blackjack javascript game infinite loop

I have created an utterly simple blackjack game that stores the first value of a shuffled array of cards into the corresponding players' arrays, dealing them as actual hands. For some odd reason, I can't seem to find a way to execute the core part of the code multiple times without getting an infinite loop. For the time being, I have only tried the quite commonplace "for" loop, which is meant for repeated statements but just doesn't seem to work here.
The program in its primitive form is as follows...
var dealerCards = [];
var playerCards = [];
var firstDeck = [];

function shuffle(o) {
  for (var j, x, i = o.length; i; j = Math.floor(Math.random() * i), x = o[--i], o[i] = o[j], o[j] = x);
  return o;
}

function createShuffledDeckNumber(array, x) {
  for (i = 0; i < 4 * x; i++) {
    array.push(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13);
  }
  shuffle(array);
}

function drawCard(playersHand, playerSoft, playerHard) {
  playersHand.push(firstDeck[0]);
  firstDeck.shift();
}

function checkDeckDrawOne(playersHand) {
  if (firstDeck.length === 0) {
    createShuffledDeckNumber(firstDeck, 1);
    drawCard(playersHand);
  } else {
    for (i = 0; i < 1; i++) {
      drawCard(playersHand);
    }
  }
}

for (i = 0; i < 4; i++) {
  dealerCards = [];
  playerCards = [];
  checkDeckDrawOne(dealerCards);
  checkDeckDrawOne(dealerCards);
  checkDeckDrawOne(playerCards);
  checkDeckDrawOne(playerCards);
  console.log("dealerCards", dealerCards, "playerCards", playerCards);
  console.log("firstDeckDrawn", firstDeck, "Number", firstDeck.length);
}
Additional Notes:
The presumed objective could be performing calculations to figure out the winner by imitating the effect of consecutive computing rounds based on a finite number of values stored in each player's array. Although I've tried a series of different things when it comes to emulating the real-life circumstances of actually playing blackjack, this version seems to do just that, while also giving the programmer the ability to introduce counting systems like KO or HiLo. The main logic behind the whole thing is fairly simple: order x shuffled decks whenever a command that involves drawing a card is executed, unless the deck has at least one card.
It's rather fair to ponder why I should bother creating multiple rounds in such a game. The reason is that I want to create an autoplayer application that provides me with percentages on the processed data.
Your variable i in function checkDeckDrawOne() has global scope, meaning it alters the value of i in the main loop:
for (i = 0; i < 4; i++) {
  dealerCards = [];
  playerCards = [];
  checkDeckDrawOne(dealerCards);
  checkDeckDrawOne(dealerCards);
  checkDeckDrawOne(playerCards);
  checkDeckDrawOne(playerCards);
  console.log("dealerCards", dealerCards, "playerCards", playerCards);
  console.log("firstDeckDrawn", firstDeck, "Number", firstDeck.length);
}
Change this:
for (i = 0; i < 1; i++) {
  drawCard(playersHand);
}
to this:
for (var i = 0; i < 1; i++) {
  drawCard(playersHand);
}
although why you need a loop here anyway is baffling.
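As an aside, declaring every loop counter with ES2015's block-scoped let (for (let i = 0; ...)) avoids this whole class of bug: each loop then gets its own i instead of sharing an accidental global.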
