System.OverflowException when converting JS to C# - javascript

var pin = parseInt(form.mac.value.slice(-6), 16) % 10000000;
I'm converting the JS to C# like this:
var pin = Convert.ToInt16(Networks[NetworkIndex, 0].Substring(Networks[NetworkIndex, 0].Length - 6)) % 10000000;
and then I get this error
An unhandled exception of type 'System.OverflowException' occurred in
mscorlib.dll Additional information: Value was either too large or too
small for an Int16.

The value is too big for an Int16. Use Convert.ToInt32 instead:
var pin = Convert.ToInt32(Networks[NetworkIndex, 0].Substring(Networks[NetworkIndex, 0].Length - 6)) % 10000000;

Use Convert.ToInt32 instead of Convert.ToInt16. The value is too big to fit in an Int16.
The Int16 value type represents signed integers ranging from -32,768 through 32,767. 307650 is far bigger than 32,767, so you need a larger type to store the value. Int16 uses 2 bytes of memory to store an integral value; Int32 uses 4 bytes and can hold a much wider range. Int32 is an immutable value type that represents signed integers with values ranging from -2,147,483,648 through 2,147,483,647.
Try this one
var pin = Convert.ToInt32(Networks[NetworkIndex, 0].Substring(Networks[NetworkIndex, 0].Length - 6)) % 10000000;

You can also use int.TryParse("your number", out int result). It does not throw an exception (even when the string is null). If it parses, the value is valid; otherwise you can explicitly throw an exception from your own code.
Take a look:
Int.TryParse

Related

How to initialise a variable in Javascript to INFINITE value?

How can I initialise a variable in javascript to the biggest possible number? I am looking for an equivalent of:
Integer.MAX_VALUE --> Java
INT_MAX --> C
int.MaxValue --> C#
With the newly added ECMAScript feature MAX_SAFE_INTEGER:
console.log(Number.MAX_SAFE_INTEGER)
var i = Number.POSITIVE_INFINITY
console.log("infinity:", i)
var max = Number.MAX_SAFE_INTEGER
console.log("max:", max)
According to the MDN documentation, Number.MAX_VALUE is the biggest number possible in JavaScript short of infinity.
Since it is larger than the maximum safe integer (Number.MAX_SAFE_INTEGER = 2^53 - 1, versus roughly 2^1024 for Number.MAX_VALUE), it is best represented using the BigInt object, which appends n to the end of the value, i.e.:
const maxSafeValue = BigInt(Number.MAX_VALUE);
// returns 17976...n
If you want to type less:
var i = 1/0
console.log(i) // Shows 'Infinity'

Reading signed 16 bit data in Javascript

I have been banging my head to solve this:
I received raw data from an embedded device. From the documentation, the way to read it into a single value is:
Every two bytes of data can be combined into a single raw wave value. Its value is a signed 16-bit integer that ranges from -32768 to 32767. The first byte of the value represents the high-order byte of the two's-complement value, while the second byte represents the low-order byte. To reconstruct the full raw wave value, simply shift the first byte left by 8 bits and bitwise-OR it with the second byte.
short raw = (Value[0]<<8) | Value[1];
One of the 2 bytes that I received is "ef". When I used the bitwise operation above, the result does not seem right, as I noticed I never get a single negative value (it's ECG data, so negative values are normal). I believe doing this in Javascript is not straightforward.
The way I did it was like this:
var raw = "ef"; // just to show one; the actual data is an array of these 2-byte strings
var value = raw.charAt(0) << 8 | raw.charAt(1)
Please advise. Thanks!
EDIT:
I also did like this:
let first = new Int8Array(len);   // len is the length of the raw data array
let second = new Int8Array(len);
let values = new Int16Array(len); // to hold the converted values
for (var i = 0; i < len; i++)
{
    // arr is the array that contains every two "characters"
    first[i] = arr[i].charAt(0);
    second[i] = arr[i].charAt(1);
    values[i] = first[i] << 8 | second[i];
}
But still every result is positive, no negatives. Can someone verify whether I am doing this correctly, just in case the values actually are all positive? :p
It's two's complement: check the top bit of the high byte - byte[high]>>7. If it's 0, do byte[high]<<8 | byte[low]. If it's 1, do -((byte[high]^0xff)<<8 | byte[low]^0xff) - 1. See https://en.wikipedia.org/wiki/Two%27s_complement for an explanation.
Also check out https://developer.mozilla.org/en-US/docs/Web/JavaScript/Typed_arrays. It has Int16 arrays, which are what you want. It might also be a ton faster.
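A minimal sketch of that typed-array route (mine, not from the answer above), assuming each sample arrives as a two-character string whose char codes are the high and low bytes, as in the question; a DataView does the shift, OR, and sign extension in one call:
function toSignedSamples(arr) {
    var view = new DataView(new ArrayBuffer(2));
    return arr.map(function (pair) {
        view.setUint8(0, pair.charCodeAt(0)); // high-order byte
        view.setUint8(1, pair.charCodeAt(1)); // low-order byte
        return view.getInt16(0, false);       // read back as big-endian signed 16-bit
    });
}
console.log(toSignedSamples(["\u0000\u00ef", "\u00ff\u00ef"])); // [239, -17]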
You can use the fact that the string characters are already 16-bit and then make the value signed.
Also, instead of reading 8 bits at a time, just read one unsigned 16-bit value using charCodeAt.
var raw = "\u00ef"; // original example
var buf = new Int16Array(1);
buf[0] = raw.charCodeAt(0); // buf[0] now holds a typed 16-bit integer
// returns 239; for "\uffef" it returns -17
var raw = "\uffef"; // example with the sign bit set
var buf = new Int16Array(1);
buf[0] = raw.charCodeAt(0); // buf[0] now holds a typed 16-bit integer
console.log(buf[0])
For the first byte, take the two's complement first and then shift by 8 bits:
let x = raw.charCodeAt(0); // character code of the first character
Then flip the bits of x for the one's complement, add 1 for the two's complement, and finally do
var value = x << 8 | bytevalueof(raw.charCodeAt(1))
This is a question about the raw wave data coming from a Neurosky Mindwave Mobile EEG headset.
You should find three values in the buffer when you read from the device. Perform this operation on the second two to get a correct reading:
var b = reader.buffer(3);
var raw = b[1]*256 + b[2];
if (raw >= 32768) {
    raw = raw - 65536;
}

Portable hashCode implementation for binary data

I am looking for a portable algorithm for creating a hashCode for binary data. None of the binary data is very long -- I am Avro-encoding keys for use in kafka.KeyedMessages -- we're probably talking anywhere from 2 to 100 bytes in length, but most of the keys are in the 4 to 8 byte range.
So far, my best solution is to convert the data to a hex string, and then do a hashCode of that. I'm able to make that work in both Scala and JavaScript. Assuming I have defined b: Array[Byte], the Scala looks like this:
b.map("%02X" format _).mkString.hashCode
It's a little more elaborate in JavaScript -- luckily someone already ported the basic hashCode algorithm to JavaScript -- but the point is that by creating a hex string to represent the binary data, I can ensure the hashing algorithm works off the same inputs.
On the other hand, I have to create an object twice the size of the original just to create the hashCode. Luckily most of my data is tiny, but still -- there has to be a better way to do this.
Instead of padding the data as its hex value, I presume you could just coerce the binary data into a String so the String has the same number of bytes as the binary data. It would be all garbled, more control characters than printable characters, but it would be a string nonetheless. Do you run into portability issues though? Endian-ness, Unicode, etc.
Incidentally, if you got this far reading and don't already know this -- you can't just do:
val b: Array[Byte] = ...
b.hashCode
Luckily I already knew that before I started, because I ran into that one early on.
Update
Based on the first answer given, it appears at first blush that java.util.Arrays.hashCode(Array[Byte]) would do the trick. However, if you follow the javadoc trail, you'll see that this is the algorithm behind it, which is based on the algorithm for List combined with the algorithm for byte:
int hashCode = 1;
for (byte e : list) hashCode = 31*hashCode + (e==null ? 0 : e.intValue());
As you can see, all it's doing is creating a Long representing the value. At a certain point, the number gets too big and it wraps around. This is not very portable. I can get it to work for JavaScript, but you have to import the npm module long. If you do, it looks like this:
function bufferHashCode(buffer) {
    const Long = require('long');
    var hashCode = new Long(1);
    for (var value of buffer.values()) { hashCode = hashCode.multiply(31).add(value); }
    return hashCode;
}
bufferHashCode(new Buffer([1,2,3]));
// hashCode = Long { low: 30817, high: 0, unsigned: false }
And you do get the same results when the data wraps around, sort of, though I'm not sure why. In Scala:
java.util.Arrays.hashCode(Array[Byte](1,2,3,4,5,6,7,8,9,10))
// res30: Int = -975991962
Note that the result is an Int. In JavaScript:
bufferHashCode(new Buffer([1,2,3,4,5,6,7,8,9,10]));
// hashCode = Long { low: -975991962, high: 197407, unsigned: false }
So I have to take the low bytes and ignore the high, but otherwise I get the same results.
This functionality is already available in the Java standard library; look at the Arrays.hashCode() method.
Because your binary data are Array[Byte], here is how you can verify it works:
println(java.util.Arrays.hashCode(Array[Byte](1,2,3))) // prints 30817
println(java.util.Arrays.hashCode(Array[Byte](1,2,3))) // prints 30817
println(java.util.Arrays.hashCode(Array[Byte](2,2,3))) // prints 31778
Update: It is not true that the Java implementation boxes the bytes. Of course, there is conversion to int, but there's no way around that. This is the Java implementation:
public static int hashCode(byte a[]) {
    if (a == null) return 0;
    int result = 1;
    for (byte element : a) result = 31 * result + element;
    return result;
}
Update 2
If what you need is a JavaScript implementation that gives the same results as a Scala/Java implementation, then you can extend the algorithm by, e.g., taking only the rightmost 31 bits:
def hashCode(a: Array[Byte]): Int = {
  if (a == null) {
    0
  } else {
    var hash = 1
    var i: Int = 0
    while (i < a.length) {
      hash = 31 * hash + a(i)
      hash = hash & Int.MaxValue // taking only the rightmost 31 bits
      i += 1
    }
    hash
  }
}
and JavaScript:
var hashCode = function(arr) {
  if (arr == null) return 0;
  var hash = 1;
  for (var i = 0; i < arr.length; i++) {
    hash = hash * 31 + arr[i];
    hash = hash % 0x80000000; // taking only the rightmost 31 bits in integer representation
  }
  return hash;
}
Why do the two implementations produce the same results? In Java, integer overflow behaves as if the arithmetic were performed without loss of precision and then any bits above the lowest 32 were thrown away, and & Int.MaxValue throws away the 32nd bit. In JavaScript, there is no loss of precision for integers up to 2^53, a limit the expression 31 * hash + a(i) never exceeds. % 0x80000000 then behaves as taking the rightmost 31 bits. The case without overflow is obvious.
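To make that 2^53 bound concrete, here is a quick sanity check (my sketch, not part of the original answer), assuming byte values of at most 255:
// After the 31-bit mask, hash is at most 2^31 - 1, and a byte adds at most 255,
// so the largest intermediate value stays far below 2^53 - 1.
var maxIntermediate = 31 * 0x7FFFFFFF + 255; // about 6.66e10
console.log(maxIntermediate < Number.MAX_SAFE_INTEGER); // true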
This is the meat of the algorithm used in the Java library:
int result = 1;
for (byte element : a) result = 31 * result + element;
You comment:
this algorithm isn't very portable
Incorrect. If we are talking about Java, then, provided that we all agree on the type of the result, the algorithm is 100% portable.
Yes the computation overflows, but it overflows exactly the same way on all valid implementations of the Java language. A Java int is specified to be 32 bits signed two's complement, and the behavior of the operators when overflow occurs is well-defined ... and the same for all implementations. (The same goes for long ... though the size is different, obviously.)
I'm not an expert, but my understanding is that Scala's numeric types have the same properties as Java's. JavaScript is different, being based on IEEE 754 double-precision floating point. However, with care you should be able to code the Java algorithm portably in JavaScript. (I think #Mifeet's version is wrong ...)
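For what it's worth, here is a sketch (mine, not from the answer above) of coding the Java algorithm portably in JavaScript: Math.imul returns the low 32 bits of the product as a signed 32-bit integer, | 0 truncates the addition the same way Java's int arithmetic does, and << 24 >> 24 sign-extends each byte the way Java's byte type would.
function javaArrayHashCode(bytes) {
    var hash = 1;
    for (var i = 0; i < bytes.length; i++) {
        hash = (Math.imul(31, hash) + (bytes[i] << 24 >> 24)) | 0; // 32-bit wrap-around, like Java
    }
    return hash;
}
console.log(javaArrayHashCode([1, 2, 3]));                      // 30817
console.log(javaArrayHashCode([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])); // -975991962
Both results match the java.util.Arrays.hashCode values quoted earlier in the thread, including the case that overflows.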

Node.js and 64-bit varints

I'm in the process of writing a Node.js based application which talks via TCP to a C++ based server. The server speaks a binary protocol, quite similar to Protocol Buffers, but not exactly the same.
One data type the server returns is an unsigned 64-bit integer (uint64_t), serialized as a varint, where the most significant bit is used to indicate whether the next byte is also part of the int.
I am unable to parse this out in Javascript currently due to the 32-bit limitation on bitwise operations, and also the fact that JS doesn't do 64-bit ints natively. Does anyone have any suggestions on how I could do this?
My varint reading code is very similar to that shown here: https://github.com/chrisdickinson/varint/blob/master/decode.js
I thought I could use node-bignum to represent the number, but I'm unsure how to turn a Buffer consisting of varint bytes into this.
Cheers,
Nathan
Simply took the existing varint read module and modified it to yield a Bignum object instead of a regular number:
var Bignum = require('bignum');
module.exports = read;
var MSB = 0x80
  , REST = 0x7F;
function read(buf, offset) {
  var res = Bignum(0)
    , offset = offset || 0
    , counter = offset
    , b
    , shift = 0
    , l = buf.length;
  do {
    if (counter >= l) {
      read.bytes = 0;
      return undefined;
    }
    b = buf[counter++];
    res = res.add(Bignum(b & REST).shiftLeft(shift));
    shift += 7;
  } while (b >= MSB);
  read.bytes = counter - offset;
  return res;
}
Use it exactly the same way as you would have used the original decode module.
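A small usage sketch (the module filename is my assumption, not from the answer): 300 is encoded as the two varint bytes 0xAC 0x02.
var readVarint = require('./read-varint-bignum'); // the snippet above saved as a module
var value = readVarint(Buffer.from([0xAC, 0x02]));
console.log(value.toString()); // "300"
console.log(readVarint.bytes); // 2 bytes consumed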

JavaScript Max unsigned int 32?

I failed to find any constant in the JS language which represents MAX UINT32.
Does it exist? I could hardcode the number itself, but I'd prefer to go down the more appropriate coding path.
For integers, Number.MAX_SAFE_INTEGER would be appropriate, as it's the maximum safe integer in JavaScript (2^53 - 1). The power of 53 comes from how double-precision floating-point numbers work; those are what JavaScript uses to store all numbers.
// In the safe integers zone:
const a = Number.MAX_SAFE_INTEGER - 1;
const b = Number.MAX_SAFE_INTEGER - 0;
console.log(a); // 9007199254740990
console.log(b); // 9007199254740991 (a + 1)
console.log(a === b); // false
// Outside the safe integers zone:
const x = Number.MAX_SAFE_INTEGER + 1;
const y = Number.MAX_SAFE_INTEGER + 2;
console.log(x); // 9007199254740992
console.log(y); // Also 9007199254740992, because precision....
console.log(x === y); // true
By the way, imagine what would happen if your iteration hits this kind of unsafe zone: an infinite loop.
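A small sketch of that hazard (my example, not from the answer): past 2^53 the counter can no longer advance, so a loop whose exit condition sits in that zone never terminates.
var i = Number.MAX_SAFE_INTEGER + 1; // 9007199254740992
console.log(i + 1 === i); // true - incrementing no longer changes the value
// for (var n = i; n <= Number.MAX_SAFE_INTEGER + 2; n++) {} // would spin forever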
See also:
Number.EPSILON for the difference between 1 and the smallest floating point number greater than 1;
Number.MAX_VALUE for the maximal number representable in JavaScript - not an integer, but floating point.
Number.MIN_SAFE_INTEGER - for the minimal safe integer (negative) in JavaScript.
Number.MIN_VALUE - for the smallest positive number representable (floating point, closest to zero).
In some cases it's nicer to just use Number.POSITIVE_INFINITY (or Number.NEGATIVE_INFINITY for negative), for example when finding max/min values - for an empty set you get this clearly-not-valid numeric value, which you can more easily notice and understand; see the sketch below.
On the linked pages you can also find other interesting things, like the Number.isSafeInteger function to check whether a number is a safe integer.
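A short sketch of both points (mine, not from the answer): seeding a running minimum with POSITIVE_INFINITY makes an empty input obvious, and Number.isSafeInteger tells you when you have left the safe zone.
function smallest(values) {
    var min = Number.POSITIVE_INFINITY;
    for (var i = 0; i < values.length; i++) {
        if (values[i] < min) min = values[i];
    }
    return min; // Infinity for an empty array - easy to notice
}
console.log(smallest([5, 2, 9])); // 2
console.log(smallest([]));        // Infinity
console.log(Number.isSafeInteger(Math.pow(2, 53) - 1)); // true
console.log(Number.isSafeInteger(Math.pow(2, 53)));     // false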
It does not exist; however, you can get the maximum numeric value from the Number object.
You can see it here:
alert(Number.MAX_VALUE);
Reference
JavaScript has no ints; every number is a floating-point number, which is of class Number. The max value of that is Number.MAX_VALUE, but that is almost certainly not what you are looking for (Number.MAX_VALUE = 1.7976931348623157e+308).
Try This:
<script>
function myFunction()
{
    document.getElementById("demo").innerHTML = Number.MAX_VALUE;
}
</script>
Description
The MAX_VALUE property has a value of approximately 1.79E+308. Values larger than MAX_VALUE are represented as "Infinity".
Because MAX_VALUE is a static property of Number, you always use it as Number.MAX_VALUE, rather than as a property of a Number object you created.
Example: Using MAX_VALUE
The following code multiplies two numeric values. If the result is less than or equal to MAX_VALUE, the func1 function is called; otherwise, the func2 function is called.
if (num1 * num2 <= Number.MAX_VALUE) {
    func1();
} else {
    func2();
}
