2’s complement checksum - javascript

I am trying to implement a protocol which uses a checksum calculation that I am unable to reproduce.
The specification says the checksum should be "7 bit, 2’s complement sum of command and message field
(m.s.b. = 0)".
As I understand it, that should be possible to calculate with:
const data = [0x04, 0x00, 0x10, 0x10, 0x18, 0x57, 0x05]
let sum = 0x00
for (let value of data) {
  sum += value
}
const chk = 256 - sum // OR (~sum + 1) & 0xFF
console.log('0x' + chk.toString(16).padStart(2, '0'))
See https://repl.it/repls/UntidySpotlessInternalcommand.
However, the result I get is 0x68, while the example I have says it should be 0x78.
Am I misunderstanding something about calculating a 2's complement sum?
The example is taken from a successfully executed command, as shown in a console window provided by the manufacturer.
It breaks down into:
SOM 10 02
CMD 04 (CONNECTED)
DATA 00 10 10 18 57
BTC 05
CHK 78
EOM 10 03

You should contact the manufacturer. Even using a programming calculator and making sure to use only 7 bits, the checksum comes out to 0x68. I'm not entirely sure your calculation is correct, since, as another comment points out, it might not actually be a 7-bit sum; but the 7-bit and 8-bit calculations happen to give the same result (0x68) for this data, so in the example you gave it shouldn't matter. It might matter for other data, though. But definitely contact the company, because the correct checksum does seem to be 0x68.
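For reference, here is a minimal sketch (my own, not from the spec or the manufacturer's tooling) that computes both the 8-bit and the 7-bit two's complement sums so the two variants can be compared against the console output:

const data = [0x04, 0x00, 0x10, 0x10, 0x18, 0x57, 0x05]
const sum = data.reduce((acc, value) => acc + value, 0)

// 8-bit two's complement of the sum
const chk8 = (-sum) & 0xFF
// 7-bit two's complement of the sum (m.s.b. forced to 0)
const chk7 = (-sum) & 0x7F

console.log('8-bit: 0x' + chk8.toString(16).padStart(2, '0')) // 0x68
console.log('7-bit: 0x' + chk7.toString(16).padStart(2, '0')) // 0x68 for this particular frame

Both variants come out to 0x68 here, which is why the 7-bit vs. 8-bit question alone doesn't explain the expected 0x78.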

Related

Exponential growth one-liner function fails on one test but not others?

Can anyone debug this for me? I've been staring at it and testing things for hours but can't seem to get it working. I'm starting to wonder whether the coding challenge web app is wrong and I'm right!
The task is this:
Find the number of generations it would take for a population p0 to surpass a max population p given that the population increases by percent% plus aug every generation
My one-liner is as follows:
nbYear = (p0, percent, aug, p) => {
  return Array(999).fill().filter((_,i)=>Array(i+1).fill().map((_,j)=>j==0?p0:0).reduce((y,_)=>Math.round(y*percent/100+y+aug))<=p).length;
}
The code I've written passes 100 tests but fails on one in particular (they don't disclose the input parameters on which it failed). All it says is that the function gave the output 51 when it should have been 50.
I've concluded that the one test for which it failed must have been for a value over 999.
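For comparison, here is a minimal iterative reference (my own sketch; it assumes the intended semantics are "count whole years until the population reaches or exceeds p" and keeps the same Math.round step as the one-liner):

const nbYearLoop = (p0, percent, aug, p) => {
  let pop = p0;
  let years = 0;
  while (pop < p) {                                    // assumed: reaching p exactly counts as done
    pop = Math.round(pop + pop * percent / 100 + aug); // same rounding as the original one-liner
    years++;
  }
  return years;
};

Comparing its output with the one-liner's over a range of inputs should expose the disagreeing case; a likely candidate is the boundary where the population lands exactly on p, since the filter's <= p keeps that year as well.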

Hex to ASCII conversion going wrong in node?

Okay first of all thank you for your time. This has been driving me crazy.
So after a lot of digging I'm now properly "talking" with a scale through RS232, which means talking to it using hex.
So I've been able to send data to the scale and get it back as needed. What I'm getting when I process it in node.js, though, is driving me crazy.
The raw data coming through after concatenation is
<Buffer 06 02 30 32 1b 33 1b 31 31 34 34 35 1b 30 30 30 31 32 30 1b 30 30 31 33 37 33 03>
Which I'm properly converting to string like this:
var coiso = Buffer.from(Buffer.concat(porquinho), 'ascii').toString('hex');
And getting as a result this:
060230321b331b31313434351b3030303132301b30303133373303
If I put this value into any online hex-to-ASCII website (for example this), the result I get is the correct one, which should be:
02311445000120001373
However, if I use any JavaScript function in node for the conversion, including the same one that website uses:
function OnConvert(doom)
{
  var hex = doom;
  hex = hex.match(/[0-9A-Fa-f]{2}/g);
  var len = hex.length;
  if (len == 0) return;
  var txt = '';
  for (var i = 0; i < len; i++)
  {
    var h = hex[i];
    var code = parseInt(h, 16);
    var t = String.fromCharCode(code);
    txt += t;
  }
  return txt;
}
The result I get without exception is this:
0214450012001373
Which is completely different, as I'm losing one digit of the weight on the scale as well as one digit of the calculated price!
What the f*ck am I doing wrong here?
Please Help me out... It's driving me nuts!
Thank you in advance,
Kind Regards:
João Moreira
UPDATE: As pointed out in the comments by ChrisG, if you use the exact same function in the browser, the result is correct (check it -> codepen)! Is this some node.js quirk? I'm using node v8.9.3.
So after A LOT of testing and digging I noticed the results in node were different in the Windows command line (I was on a Mac).
So after I tested for the length of the string instead of its content, I found out that it was exactly the same everywhere, which basically means this is a problem with the content of the string being interpreted differently by the command line... thank you very much, Apple!
So if you work with serial ports and communication protocols that involve hex or ASCII, maybe it's best if you don't rely on the command line to inspect the data.
Just a small piece of advice, people!
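To make that concrete, here is a small sketch (mine, under the assumption that the culprit is the terminal swallowing the ESC (0x1b) control characters when the decoded string is printed) that inspects the string without relying on how the terminal renders it:

const raw = Buffer.from('060230321b331b31313434351b3030303132301b30303133373303', 'hex');
const txt = raw.toString('latin1');

console.log(JSON.stringify(txt));              // control characters shown as \uXXXX escapes
console.log(txt.replace(/[^\x20-\x7e]/g, '')); // printable characters only: 02311445000120001373
console.log(txt.length);                       // 27, the same everywhere, as noted above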

Portable hashCode implementation for binary data

I am looking for a portable algorithm for creating a hashCode for binary data. None of the binary data is very long -- I am Avro-encoding keys for use in kafka.KeyedMessages -- we're probably talking anywhere from 2 to 100 bytes in length, but most of the keys are in the 4 to 8 byte range.
So far, my best solution is to convert the data to a hex string, and then do a hashCode of that. I'm able to make that work in both Scala and JavaScript. Assuming I have defined b: Array[Byte], the Scala looks like this:
b.map("%02X" format _).mkString.hashCode
It's a little more elaborate in JavaScript -- luckily someone already ported the basic hashCode algorithm to JavaScript -- but the point is that by creating a hex string to represent the binary data, I can ensure the hashing algorithm works off the same inputs.
On the other hand, I have to create an object twice the size of the original just to create the hashCode. Luckily most of my data is tiny, but still -- there has to be a better way to do this.
Instead of padding the data as its hex value, I presume you could just coerce the binary data into a String so the String has the same number of bytes as the binary data. It would be all garbled, more control characters than printable characters, but it would be a string nonetheless. Do you run into portability issues though? Endian-ness, Unicode, etc.
Incidentally, if you got this far reading and don't already know this -- you can't just do:
val b: Array[Byte] = ...
b.hashCode
Luckily I already knew that before I started, because I ran into that one early on.
Update
Based on the first answer given, it appears at first blush that java.util.Arrays.hashCode(Array[Byte]) would do the trick. However, if you follow the javadoc trail, you'll see that this is the algorithm behind it, which is based on the algorithm for List and the algorithm for Byte combined:
int hashCode = 1;
for (byte e : list) hashCode = 31*hashCode + (e==null ? 0 : e.intValue());
As you can see, all it's doing is creating a Long representing the value. At a certain point, the number gets too big and it wraps around. This is not very portable. I can get it to work for JavaScript, but you have to import the npm module long. If you do, it looks like this:
function bufferHashCode(buffer) {
  const Long = require('long');
  var hashCode = new Long(1);
  for (var value of buffer.values()) { hashCode = hashCode.multiply(31).add(value) }
  return hashCode
}
bufferHashCode(new Buffer([1,2,3]));
// hashCode = Long { low: 30817, high: 0, unsigned: false }
And you do get the same results when the data wraps around, sort of, though I'm not sure why. In Scala:
java.util.Arrays.hashCode(Array[Byte](1,2,3,4,5,6,7,8,9,10))
// res30: Int = -975991962
Note that the result is an Int. In JavaScript:
bufferHashCode(new Buffer([1,2,3,4,5,6,7,8,9,10]));
// hashCode = Long { low: -975991962, high: 197407, unsigned: false }
So I have to take the low bytes and ignore the high, but otherwise I get the same results.
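As a small follow-up (my own sketch, assuming the npm long package): its toInt() method returns just the low 32 bits as a signed int, which is the part that lines up with Java's Int result.

const Long = require('long');

function bufferHashCode32(buffer) {
  let hashCode = Long.fromInt(1);
  for (const value of buffer.values()) {
    // Caveat: Buffer values are unsigned (0-255), while Java bytes are signed,
    // so inputs containing bytes >= 0x80 would need a sign adjustment to match Java.
    hashCode = hashCode.multiply(31).add(value);
  }
  return hashCode.toInt(); // low 32 bits as a signed int, like Java's result
}

console.log(bufferHashCode32(Buffer.from([1, 2, 3])));                       // 30817
console.log(bufferHashCode32(Buffer.from([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))); // -975991962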
This functionality is already available in the Java standard library; look at the Arrays.hashCode() method.
Because your binary data are Array[Byte], here is how you can verify it works:
println(java.util.Arrays.hashCode(Array[Byte](1,2,3))) // prints 30817
println(java.util.Arrays.hashCode(Array[Byte](1,2,3))) // prints 30817
println(java.util.Arrays.hashCode(Array[Byte](2,2,3))) // prints 31778
Update: It is not true that the Java implementation boxes the bytes. Of course, there is conversion to int, but there's no way around that. This is the Java implementation:
public static int hashCode(byte a[]) {
    if (a == null) return 0;
    int result = 1;
    for (byte element : a) result = 31 * result + element;
    return result;
}
Update 2
If what you need is a JavaScript implementation that gives the same results as a Scala/Java implementation, then you can adapt the algorithm by, e.g., taking only the rightmost 31 bits:
def hashCode(a: Array[Byte]): Int = {
  if (a == null) {
    0
  } else {
    var hash = 1
    var i: Int = 0
    while (i < a.length) {
      hash = 31 * hash + a(i)
      hash = hash & Int.MaxValue // taking only the rightmost 31 bits
      i += 1
    }
    hash
  }
}
and JavaScript:
var hashCode = function(arr) {
  if (arr == null) return 0;
  var hash = 1;
  for (var i = 0; i < arr.length; i++) {
    hash = hash * 31 + arr[i]
    hash = hash % 0x80000000 // taking only the rightmost 31 bits in integer representation
  }
  return hash;
}
Why do the two implementations produce the same results? In Java, integer overflow behaves as if the arithmetic were performed without loss of precision and then the bits above the lowest 32 were thrown away, and & Int.MaxValue throws away the 32nd bit (the sign bit). In JavaScript, there is no loss of precision for integers up to 2^53, a limit the expression 31 * hash + a(i) never exceeds. % 0x80000000 then behaves as taking the rightmost 31 bits. The case without overflows is obvious.
This is the meat of the algorithm used in the Java library:
int result = 1;
for (byte element : a) result = 31 * result + element;
You comment:
this algorithm isn't very portable
Incorrect. If we are talking about Java, then provided that we all agree on the type of the result, the algorithm is 100% portable.
Yes the computation overflows, but it overflows exactly the same way on all valid implementations of the Java language. A Java int is specified to be 32 bits signed two's complement, and the behavior of the operators when overflow occurs is well-defined ... and the same for all implementations. (The same goes for long ... though the size is different, obviously.)
I'm not an expert, but my understanding is that Scala's numeric types have the same properties as Java's. JavaScript is different, being based on IEEE 754 double precision floating point. However, with care you should be able to code the Java algorithm portably in JavaScript. (I think @Mifeet's version is wrong ...)
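For what it's worth, here is one way that might look (a sketch of mine, not part of either answer), using Math.imul and | 0 to emulate Java's 32-bit signed overflow exactly:

function javaArrayHashCode(bytes) {
  var result = 1;
  for (var i = 0; i < bytes.length; i++) {
    var signed = (bytes[i] << 24) >> 24;           // treat 0..255 as a signed Java byte
    result = (Math.imul(result, 31) + signed) | 0; // wrap to 32-bit signed, like Java's int
  }
  return result;
}

console.log(javaArrayHashCode([1, 2, 3]));                       // 30817
console.log(javaArrayHashCode([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])); // -975991962, matching java.util.Arrays.hashCode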

I successfully compiled my program. Now how do I run it?

I want to solve Project Euler Problem 1:
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
Here's my code:
\documentclass[10pt,a4paper]{article}
\usepackage{hyperref}
\newcommand*\rfrac[2]{{}^{#1}\!/_{#2}}
\title{Solution to Project Euler Problem 1}
\author{Aadit M Shah}
\begin{document}
\maketitle
We want to find the sum of all the multiples of 3 or 5 below 1000. We can use the formula of the $n^{th}$ triangular number\footnote{\url{http://en.wikipedia.org/wiki/Triangular_number}} to calculate the sum of all the multiples of a number $m$ below 1000. The formula of the $n^{th}$ triangular number is:
\begin{equation}
T_n = \sum_{k = 1}^n k = 1 + 2 + 3 + \ldots + n = \frac{n (n + 1)}{2}
\end{equation}
If the last multiple of $m$ below 1000 is $x$ then $n = \rfrac{x}{m}$. The sum of all the multiples of $m$ below 1000 is therefore:
\begin{equation}
m \times T_{\frac{x}{m}} = m \times \sum_{k = 1}^{\frac{x}{m}} k = \frac{x (\frac{x}{m} + 1)}{2}
\end{equation}
Thus the sum of all the multiples of 3 or 5 below 1000 is equal to:
\begin{equation}
3 \times T_{\frac{999}{3}} + 5 \times T_{\frac{995}{5}} - 15 \times T_{\frac{990}{15}} = \frac{999 \times 334 + 995 \times 200 - 990 \times 67}{2}
\end{equation}
\end{document}
I compiled it successfully using pdflatex:
$ pdflatex Problem1.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.15 (TeX Live 2014/Arch Linux) (preloaded format=pdflatex)
.
.
.
Output written on Problem1.pdf (1 page, 106212 bytes).
Transcript written on Problem1.log.
It generated an output PDF file, along with a bunch of other files with scary extensions.
How do I run this PDF file so that it computes the solution? I know the solution to the problem but I want to know how to execute the PDF file to compute the solution.
The reason why I prefer LaTeX over other programming languages is because it supports literate programming, an approach to programming introduced by Donald Knuth, the creator of TeX and one of the greatest computer scientists of all time.
Edit: It would also be nice to be able to print the computed solution either on the screen or on paper. Computing the solution without printing it is useful for heating the room but it is so hot already with the onset of summer and global warming. In addition, printing the solution would teach me how to write a hello world program in LaTeX.
So, today seems to be a safe day to tackle this problem...
The OP does not seem to be quite so PDF-savvy.
However, he obviously is quite a literate LaTeX guy.
Which means he must also know TeX very well, given he is so much of a Donald Knuth admirer...
So much for the preliminaries.
Now for the real meat.
First, to quote the official PDF-1.7 specification document:
PDF is not a programming language, and a PDF file is not a program.
(p. 92, Section 7.10.1)
However, the predecessor of the PDF format, PostScript, IS a Turing-complete programming language... Turing-complete, just as TeX is, the creation of Donald Knuth, one of the greatest computer scientists of all time.
PostScript files, on the other hand, ARE programs, and can easily be executed by PostScript printers (though their execution time cannot reliably be determined in advance).
Hence, and second, the OP should be able to find a way to convert his high-level LaTeX code to low-level TeX code.
That code needs to emit a PostScript program, which in turn can be executed by a PostScript printer.
Writing that TeX code should be trivial for somebody like the OP, once he is given the PostScript code that should be the result of his TeX code.
I myself am not so well-versed with the TeX aspect of that problem solving procedure.
However, I can help with the PostScript.
The PostScript which the OP's TeX code should produce goes like this (there are for sure more optimized versions possible -- this is only a first, quick'n'dirty shot at it):
%!PS
% define variables
/n1 999 def
/t1 334 def
/n2 995 def
/t2 200 def
/n3 990 def
/s1 67 def
/t3 2 def
% run the computational code
n1 t1 mul
n2 t2 mul
n3 s1 mul
sub
add
t3 div
% print result on printer, not on <stdout>
/Helvetica findfont
24 scalefont
setfont
30 500 moveto
(Result for 'Project Euler Problem No. 1' :) show
/Helvetica-Bold findfont
48 scalefont
setfont
80 400 moveto
(                ) cvs show   % cvs needs a string buffer large enough to hold the result
showpage
Send this PostScript code to a PostScript printer, and it will compute and print the solution.
Update
To answer one of the comments: If you replace the last section of PostScript code starting with /Helvetica findfont with a simple print statement, it will not do what you might imagine.
print does not cause the printer to output paper. Instead it asks the PostScript interpreter to write the topmost item on the stack (which must be a (string)!) to the standard output channel. (If the topmost item on the stack is not of type (string), it will trigger a typecheck PostScript error.)
So sending a modified PostScript file (where print has replaced the last section of my PS code) to the printer will not work (unless that printer supports the interactive executive PostScript mode -- which is not a standard part of the PostScript language). It will work however if you feed that file to Ghostscript in a terminal or cmd.exe window.
You can't run PDF files. You need to use the latex command instead of pdflatex, e.g.
latex Problem1.tex
Here's some documentation

Firefox performance lagging for large integers

Backstory
I was playing around with finding large powers of 2. My method was splitting the large numbers into an array of smaller values and then multiplying each of them to get the next value. The size I chose for the smaller chunks was a maximum of 1e15. Then I decided to see how performance changed if I used the new array buffers, which meant reducing the maximum chunk size to 1e9. Something weird happened: I got a performance boost, not from using the array buffers, but from using a smaller integer. This doesn't make sense, since the larger the chunk size, the fewer times the function has to cycle through the array.
Code
var i = 3;
function run() {
  loop(50000, Math.pow(10, i), i++);
  if (i < 16) setTimeout(run, 100);
}
run();

function loop(pin, lim, id) {
  var pow = 110503;
  pow = pin || pow;
  var l = pow - 1, val = [2];
  var t1, t2;
  //console.time(id);
  t1 = new Date();
  while (l--) {
    val = multiply(val, lim);
  }
  //console.timeEnd(id);
  t2 = new Date();
  console.log(id, ' ', t2 - t1);
}

function multiply(a, lim) {
  var l = a.length, val = 0, carry = 0;
  while (l--) {
    val = a[l] * 2 + carry;
    carry = 0;
    if (val > lim - 1) {
      var b = val % lim;
      carry = (val - b) / lim;
      val = b;
    }
    a[l] = val;
  }
  if (carry > 0) { a.unshift(carry); }
  return a;
}
Results (the first column is the exponent n of the 10^n chunk limit; the second is the elapsed time in ms)
IE10
3 5539
4 4213
5 3329
6 2720
7 2341
8 2153
9 1948
10 1699
11 1508
12 1401
13 1309
14 1208
15 1133
Chrome
3 5962
4 4385
5 3851
6 3242
7 2533
8 2207
9 1940
10 1794
11 1542
12 1604
13 1560
14 1414
15 1331
Firefox
3 3651
4 2732
5 2279
6 1853
7 1615
8 1408
9 1256
10 2375
11 2034
12 1874
13 1723
14 1600
15 1504
Question
As you can see, Firefox outperforms both IE10 and Chrome up until numbers that are 9 digits long, and then it takes a sudden jump in time. So why does it do this? I guess it may have something to do with switching between numbers that can be stored in 32 bits. I suppose 32-bit numbers are more efficient to work with, so for smaller numbers the engines use them and then switch to a larger numeric type when they need to. But if that is true, why does Firefox never catch up to Chrome's and IE's performance, and would switching cause that much of a performance penalty?
Both SpiderMonkey (Firefox) and V8 (Chrome) use 32-bit integers when they can, switching to doubles (not larger integers) when the numbers no longer fit into a 32-bit int. But note that "can" is determined heuristically, because there are costs to mixing doubles and ints due to having to convert back and forth, so it's possible that V8 is not deciding to specialize to ints here.
Edit: deleted the part about needing the source, since it's all right there in the original post!
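As a rough illustration of where that threshold falls (my own numbers, not from the answer above): with the 1e9 chunk limit, every value the loop stores or computes still fits in a signed 32-bit int, while with 1e10 the stored chunk values themselves no longer fit, which is roughly where the Firefox timings jump:

console.log(2 * (1e9 - 1) + 1 <= 2147483647); // true  -> all intermediate values fit in int32
console.log(1e10 - 1 <= 2147483647);          // false -> chunk values need doubles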
