Given I have an array like this:
array = [Array[8], Array[8], Array[8], ...]
# array.length is 81; each 8-element array represents a point on a 9x9 grid
where each nested array contains 8 numeric elements ranging from -2 to 2, how would I apply the following step to get a vector in JavaScript?
Step 5. The signature of an image is simply the concatenation of the
8-element arrays corresponding to the grid points, ordered
left-to-right, top-to-bottom. Our signatures are thus vectors of
length 648. We store them in 648-byte arrays, but because some of the
entries for the first and last rows and columns are known to be zeros
and because each byte is used to hold only one of 5 values, signatures could
be represented by as few as ⌈544 log₂ 5⌉ = 1264
bits.
(Towards the end, those are supposed to be ceiling notations; best I could do given SO's lack of Latex formatting)
I have the array ready to go and ordered properly, but my knowledge of matrices and vectors is a little rusty, so I'm not sure how to tackle this next step. I'd appreciate any clarification!
Background: I'm trying to create a JS implementation of an image processing algorithm published by the Xerox Palo Alto Research Center for a side-project I'm currently working on.
Conceptually you could convert this to a single 1264-bit number using the following algorithm:
Initialize an accumulator variable to zero
Iterate over all elements, but skip those which you know to be zero
For the other elements, add 2 to obtain values in the range [0,1,2,3,4]
For each such value, multiply the accumulator by 5 then add the corresponding value
When you have processed all elements, the accumulator will encode your arrays
To reverse that encoding, you'd do this (both directions are sketched in code after these steps):
Read the encoded value into the accumulator
Iterate over all elements, in reverse order, but skip those which you know to be zero
For each element, you obtain the corresponding value as the accumulator modulo 5
Subtract 2 from that value
Divide the accumulator by 5 using a truncating division
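To make this concrete, here is a rough sketch of both procedures. Plain Numbers can't hold a value this wide (more on that below), so the sketch assumes BigInt (or an equivalent big-integer type) is available; knownZero is a hypothetical predicate marking the entries you know to be zero:

function encodeSignature(grid, knownZero) {
    // grid: 81 arrays of 8 values in [-2, 2]
    let acc = 0n;
    grid.forEach((cell, g) => {
        cell.forEach((v, k) => {
            if (knownZero(g, k)) return;       // skip entries known to be zero
            acc = acc * 5n + BigInt(v + 2);    // shift [-2, 2] to [0, 4] and pack base 5
        });
    });
    return acc;
}

function decodeSignature(acc, knownZero) {
    const grid = Array.from({ length: 81 }, () => new Array(8).fill(0));
    for (let g = 80; g >= 0; g--) {            // iterate in reverse order
        for (let k = 7; k >= 0; k--) {
            if (knownZero(g, k)) continue;     // these stay zero
            grid[g][k] = Number(acc % 5n) - 2; // accumulator modulo 5, minus 2
            acc /= 5n;                         // truncating division
        }
    }
    return grid;
}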
The problem with all of this is the fact that JS doesn't provide 1264-bit numbers out of the box. You might try one of the libraries suggested in How to deal with big numbers in javascript.
But unless you absolutely require an extremely small representation, I'd suggest an alternative approach: you can encode up to 13 such values in a 32-bit signed integer, since 5^13 = 1,220,703,125 < 2,147,483,648 = 2^31. So after encoding 13 values I'd write out the result using such a number, then reset the accumulator to zero. This way you'll need ⌈544/13⌉·32 = 42·32 = 1344 bits, which is not that much worse in terms of space requirements, but will be a lot faster to implement.
Instead of iterating once in forward and once in reverse direction, it might be easier to not multiply the accumulator by 5, but instead multiply the value you add to that by a suitable power of 5. In other words, you maintain a factor which you initialize to 1, and multiply by 5 every time you add a value. So in this case, first data values will have less significant positions than later data values, both for encoding and decoding, which means you can use the same iteration order for both.
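A minimal sketch of that 32-bit variant, assuming the known-zero entries have already been dropped and an ordering fixed (helper names are mine):

function packBase5(values) {                    // values in [-2, 2]
    const words = [];
    let acc = 0, factor = 1;
    for (const v of values) {
        acc += (v + 2) * factor;                // earlier values get less significant positions
        factor *= 5;
        if (factor === 5 ** 13) {               // 13 values stay below 2^31
            words.push(acc);
            acc = 0;
            factor = 1;
        }
    }
    if (factor > 1) words.push(acc);            // flush the last, possibly partial, group
    return words;
}

function unpackBase5(words, count) {            // count = number of values originally packed
    const values = [];
    for (let acc of words) {
        for (let i = 0; i < 13 && values.length < count; i++) {
            values.push((acc % 5) - 2);         // same iteration order as packing
            acc = Math.floor(acc / 5);
        }
    }
    return values;
}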
See the ideone link mentioned in my comment below for an example of this latter approach. It encodes the full 9*9*8 values in 50 integers, each of them using no more than 31 bits. It then decodes the original matrix from that encoded form, to show that all the information is still present. The example does not use any fixed zeros, in your case ⌈544/13⌉=42 integers should be enough.
Related
I'm generating an array of 5 random numbers. Once generated, I want to ensure that they're all not multiples of each other (super strange edge case). If they are, I want to regenerate them. So I don't want to see [2,4,6,8,10] or [4,8,12,16,20].
How would I detect that?
The most straightforward way is to sort them and then check to see if the lowest value is a divisor of all of them. If it is, check to see if the next number is a divisor of the ones that come after. etc. etc.
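One reading of that check (reject whenever any smaller value divides a larger one) might look like the sketch below; generateFiveRandomNumbers stands in for whatever generator you already have:

function hasMultiplePair(nums) {
    const sorted = [...nums].sort((a, b) => a - b);
    for (let i = 0; i < sorted.length; i++) {
        for (let j = i + 1; j < sorted.length; j++) {
            if (sorted[j] % sorted[i] === 0) return true; // a smaller value divides a larger one
        }
    }
    return false;
}

let nums;
do {
    nums = generateFiveRandomNumbers();   // placeholder for the existing generator
} while (hasMultiplePair(nums));

hasMultiplePair([2, 4, 6, 8, 10]);  // true  -> regenerate
hasMultiplePair([3, 5, 7, 11, 13]); // false -> keep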
The easiest way is to create an array of 100 prime numbers, then generate 5 unique numbers under 100 and use them as indices into the prime-number array. You are then guaranteed that none of the numbers is a multiple of another.
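A small sketch of that idea (the PRIMES table is only partially filled in here for brevity):

const PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29 /* ...up to the 100th prime... */];

function fiveDistinctPrimes() {
    const indices = new Set();
    while (indices.size < 5) {                    // 5 unique indices into the prime table
        indices.add(Math.floor(Math.random() * PRIMES.length));
    }
    return [...indices].map(i => PRIMES[i]);      // distinct primes are never multiples of one another
}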
Suppose I have a binary matrix of size m by n. Each row is n bits long. I would like to store this matrix as space efficiently as possible while still being able to quickly retrieve individual rows. In particular I'd like to be able to do bitwise operations on the rows after retrieving them. To do this I'm using the JavaScript TypedArray.
My attempt at a solution is the following:
Initialize the matrix to have a number of columns divisible by 8 (e.g by padding)
Flatten the matrix row-wise (i.e put the rows side by side)
Convert every 8 bits into an 8-bit integer, returning a flat array of 8-bit integers
Initialize a buffer and store the array of 8-bit integers as a Uint8Array
To retrieve a specific row i:
Determine the number of 8-bit integers that constitute a single row (since I ensured the number of columns was divisible by 8, this should be a whole number). Call this rowLength.
Find the offset indices for row i, where rowStartOffset = i*rowLength; rowEndOffset = (i+1)*rowLength-1 and slice the array
Convert the integers in the array back into binary format
Repeat 0-2 for more rows, then do bitwise operations on them
This works, but I wondered if there was a better way to do this. In particular is there a way to do bitwise operations without having to convert back into binary? (Currently I'm doing this by converting the integers into a binary string and then concatenating). Is there a method which does not use TypedArray?
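For reference, here is a simplified version of what I'm doing now (names are mine):

function packMatrix(matrix) {                        // column count already padded to a multiple of 8
    const rowLength = matrix[0].length / 8;          // 8-bit integers per row
    const packed = new Uint8Array(matrix.length * rowLength);
    matrix.forEach((row, i) => {
        for (let b = 0; b < rowLength; b++) {
            let byte = 0;
            for (let bit = 0; bit < 8; bit++) {
                byte = (byte << 1) | row[b * 8 + bit];   // fold 8 bits into one integer
            }
            packed[i * rowLength + b] = byte;
        }
    });
    return { packed, rowLength };
}

function getRow({ packed, rowLength }, i) {          // retrieval steps 0-1
    return packed.subarray(i * rowLength, (i + 1) * rowLength);
}

function rowToBits(row) {                            // step 2: back to a binary string (the clunky part)
    return [...row].map(byte => byte.toString(2).padStart(8, '0')).join('');
}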
I just want to know the size of a JavaScript number, because I want to send a lot of them over the network per frame and I need a measure of how many I'm going to send per second.
As I read:
According to the ECMAScript standard, there is only one number type: the double-precision 64-bit binary format IEEE 754 value (number between -(2^53 -1) and 2^53 -1).
So if I'm going to send a lot of different numbers (example later), and all numbers between -(2^53 -1) and (2^53 -1) use the same memory, I might just combine them into something like 567832332423556 and then split it locally when received, instead of sending a lot of separate numbers. That one number "567832332423556" carries the same information as the separate 5, 6, 7, 8, ... but in one value, so it should supposedly cost much less if it has the same size as a single 5.
Is this true, or am I just confused? Please explain.
var data = Array2d(obj.size); // size can be between 125 and 200
Array2d: function (rows) { // the number of rows and columns is the same
    var arr = [];
    for (var i = 0; i < rows; i++) arr[i] = [];
    return arr;
},
...
if (this.calculate()) {
    data[x][y] = 1;
} else {
    data[x][y] = 0;
}
and somewhere in the code I change those 1s to numbers from 2 to 5, so the values can be anywhere from 0 to 5 depending on the situation.
Example:
[
[0,0,2,1,3,4,5,0,2,3,4,5,4(200 numbers)],
[0,5,2,1,5,1,0,2,3,0,0,0,0(200 numbers)]
...(200 times)
]
*And I really need all the numbers; I can't miss even one.
If, in terms of size, a 5 is the same as 34234, then I could just do something like:
[
[0021345023454...(20 numbers 10 times)],
[0021345023454...(20 numbers 10 times)]
...(200 times)
]
and it might use 20 times less, because if a 5 has the same size as a number near 2^53, I can just stack the numbers 20 at a time and they should cost a lot less (of course, 20 times fewer numbers on the network by stacking them 20 at a time; the local split may cost a little, but locally I do few things so I can handle that).
Precise limits on numbers are covered in What is JavaScript's highest integer value that a Number can go to without losing precision? - 9007199254740991 for regular arithmetic operations, 2^32 for bit operations.
But it sounds like you are more interested in the network representation than in memory usage at run-time. Below is a list of options from least to most compact. Make sure you understand your performance goals before moving away from the basic JSON solution, as the cost and complexity of constructing the data rise with the more compact representations.
The most basic solution, a JSON representation of the existing array, gives a pretty decent ~2 characters per value:
[[0,1,5,0,0],[1,1,1,1,1],[0,0,0,0,0]]
Representing all the numbers in a row as one big string gives ~1 character per value:
["01500","11111","00000"]
Representing the same values as concatenated numbers does not bring much savings - "11111" as a string is about as long as the same 11111 as a number: you add a pair of quotes per row for strings, but save only about one comma per 16 values when packing as numbers.
You can indeed pack the values into numbers more compactly: since the range is 0-5, a standard base-6 encoding fits about 20 values per JavaScript number (6^20 < 2^53), which is not a significant saving over the 16 values per number you get by just concatenating the digits.
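For example, a base-6 pack/unpack could look like this sketch (6^20 is still below 2^53, so up to 20 values fit in one safe integer):

function packBase6(values) {                 // values in 0-5, at most 20 of them
    return values.reduce((acc, v) => acc * 6 + v, 0);
}

function unpackBase6(n, count) {
    const out = [];
    for (let i = 0; i < count; i++) {
        out.unshift(n % 6);
        n = Math.floor(n / 6);
    }
    return out;
}

packBase6([0, 1, 5, 0, 0]);   // 396
unpackBase6(396, 5);          // [0, 1, 5, 0, 0]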
Better packing would be to represent 2 or 3 values as one character - 2 values give 36 combinations (v1 * 6 + v2), which can be written with just [A-Z0-9]; 3 values give 216 combinations, which mostly fits into the regular character range.
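A sketch of the 2-values-per-character variant, here using base-36 digits (so [0-9a-z] rather than [A-Z0-9], but the same 36 combinations):

function encodePairs(values) {               // values in 0-5
    let s = '';
    for (let i = 0; i < values.length; i += 2) {
        const pair = values[i] * 6 + (values[i + 1] ?? 0);  // pad an odd-length row with 0
        s += pair.toString(36);                             // 0-35 -> one character
    }
    return s;
}

function decodePairs(s, count) {
    const out = [];
    for (const ch of s) {
        const pair = parseInt(ch, 36);
        out.push(Math.floor(pair / 6), pair % 6);
    }
    return out.slice(0, count);              // drop the padding value, if any
}

encodePairs([0, 1, 5, 0, 0]);  // "1u0"
decodePairs("1u0", 5);         // [0, 1, 5, 0, 0]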
You can go with a strictly binary representation (3-4 bits per value) and send it via WebSockets, to avoid the cost of converting to text that comes with regular requests.
Whether you go with a binary or a text representation, one more option is compression - basic RLE compression may be fine if your data has long runs of the same value; other compression algorithms may work better on more random data. There are libraries to perform compression in JavaScript too.
I am using a function where large integers are passed as a parameter.
For example.
doSomethingWithInteger(1234567890)
However, it's a little difficult to keep track of the place values of the integer (and thus its value) if I do something like this:
doSomethingWithInteger(101934109348)
How many digits does it have, and thus what is its actual value? It's hard to keep track. Obviously the following example blows up with an error because it's interpreted as multiple arguments:
doSomethingWithInteger(101 934 109 348)
But is there a way to achieve some effect like that in JS, to make the number of digits, and thus the value of the integer, clearer?
Edit: To clarify, my trouble is keeping track of the value of the numbers because I can't see the place values; it's not about determining the length of a string.
There's no solution built into the syntax, but I suppose you could do something like this with a function.
function toInt(arr) {
    return parseInt(arr.join(''));
}
toInt([123, 456, 7890]); // 1234567890
doSomethingWithInteger(toInt([101, 934, 109, 348]));
This works by taking in an array, combining the entire array into a single string, then casting that string to an integer. Obviously you'll incur a performance hit with this.
Do you want to know the number of digits the int has?
You can get it by doing this:
(12324897928).toString().length
(newbie here)
I have large floating-point arrays created by Node.js that I need to pass to a client-side jQuery AJAX function. The goal is to download them the fastest way possible.
We can safely round the floating-point values off to the hundredths position, maybe even the tenths position - but I still need to experiment to see which one works best.
So far I have multiplied each array value by 1,000 and rounded off to get just three digits:
wholeNbrValue = Math.round(floatingPointValue * Math.pow(10, 3));
So, for example, 0.123456 would become 123. Then each set of digits is appended to a string:
returnString += sprintfJs('%03d', wholeNbrValue);
I end up with a rather long ASCII string of digits. Then I might use something like fs.createWriteStream to store the string on the server as an ordinary file, and later use jQuery AJAX to fetch it on the client side.
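In code, the scale-and-concatenate step looks roughly like this (assuming each value rounds to at most three digits after scaling; I use padStart here instead of a sprintf library):

function floatsToDigitString(floats) {
    return floats
        .map(f => Math.round(f * 1000))           // 0.123456 -> 123
        .map(n => String(n).padStart(3, '0'))     // keep a fixed width of three digits
        .join('');
}

floatsToDigitString([0.123456, 0.05, 0.9]);       // "123050900"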
My question: what might be the optimum way to store a numbers-only string? I am tempted to loop back through the string and use something like charCodeAt() to grab every two positions as an ASCII value, or even grab every 64 digits and convert them to a four-byte hex value.
Or perhaps there is some way, using Node, to actually store a binary floating-point array and later retrieve it with jQuery AJAX?
Thank you very much.