Understanding Cyclic Redundancy Code algorithm for beginners - javascript

At section 5.5 of the PNG Specification, it discusses a concept in the PNG file format called "CRC" or "Cyclic Redundancy Code". I've never heard of it before, so I'm trying to understand it.
The CRC polynomial employed is
x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
In PNG, the 32-bit CRC is initialized to all 1's, and then the data
from each byte is processed from the least significant bit (1) to the
most significant bit (128). After all the data bytes are processed,
the CRC is inverted (its ones complement is taken). This value is
transmitted (stored in the datastream) MSB first. For the purpose of
separating into bytes and ordering, the least significant bit of the
32-bit CRC is defined to be the coefficient of the x^31 term.
So let me tell you what I understand and what I don't understand about this.
I've heard of polynomials, but in this context I'm a bit confused about how they are implemented here.
In this case, what is "x" supposed to represent? The current bit in the 32-bit loop? Which brings us to the next part:
So it says to make a 32-bit number with every bit set to 1 (so 32 1s), and then it says the data is "processed from the least significant bit (1) to the most significant bit (128)", but the question is: the "least...most...significant bit" of what?
Of the other data in the chunk?
How does that work, if the chunk is set in bytes, and this is only 32 bits? What if there are more than 32 bits in the chunk data (which there definitely are?)
Does it mean "least..most..significant bit" of the "polynomial"?
But what does the polynomial represent exactly? What is x^32?
What is x represented by?
Any help with the above questions, and perhaps a simple example with the example IDAT chunk (i.e., calculating the CRC for it with basic explanations), would be great:
0 0 2 3 IDAT 0 1 0 1 0 1 0 1 0 1 0 C
where the last byte "C" should be replaced with that 32-bit CRC thing it was talking about.
Can someone provide me with a practical example?

I would recommend reading Ross Williams' classic "A Painless Guide to CRC Error Detection Algorithms". Therein you will find in-depth explanations and examples.
The polynomial is simply a different way to interpret a string of bits. When you have n bits in a register, they are most commonly interpreted either as just that, a list of n independent bits, or they are interpreted as an integer, where you multiply each bit by two raised to the powers 0 to n-1 and add them up. The polynomial representation is where you instead interpret each bit as the coefficient of a polynomial. Since a bit can only be a 0 or a 1, the resulting polynomials never actually show the 0 or 1. Instead the x^n term is either there or not. So the four bits 1011 can be interpreted to be 1·x^3 + 0·x^2 + 1·x^1 + 1·x^0 = x^3 + x + 1. Note that I made the choice that the most significant bit was the coefficient of the x^3 term. That is an arbitrary choice, where I could have chosen the other direction.
As for what x is, it is simply a placeholder for the coefficient and the power of x. You never set x to some value, nor determine anything about x. What it does is allow you to operate on those bit strings as polynomials. When doing operations on these polynomials, you treat them just like the polynomials you had in algebra class, except that the coefficients are constrained to the field GF(2), where the coefficients can only be 0 or 1. Multiplication becomes the and operation, and addition becomes the exclusive-or operation. So 1 plus 1 is 0. You get a new and different way to add, multiply, and divide strings of bits. That different way is key to many error detection and correction schemes.
It is interesting, but ultimately irrelevant, that if you set x to 2 in the polynomial representation of a string of bits (with the proper ordering choice), you get the integer interpretation of that string of bits.
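As a tiny illustration of that arithmetic (my example, not from the answer above): adding two bit-string polynomials over GF(2) is just XOR of the bit strings, so matching terms cancel.

// (x^3 + x + 1) + (x^2 + 1) = x^3 + x^2 + x   (the two x^0 terms cancel)
const a = 0b1011;                   // x^3 + x + 1
const b = 0b0101;                   // x^2 + 1
console.log((a ^ b).toString(2));   // "1110"  ->  x^3 + x^2 + x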

The spec includes a link to example code:
https://www.w3.org/TR/2003/REC-PNG-20031110/#D-CRCAppendix
The spec has errors or is confusing.
That should be "data from each byte is processed from the least significant bit (0) to the most significant bit (7)".
The CRC is a 33 term polynomial, where each term has a one bit coefficient, 0 or 1, with the 0 coefficients ignored when describing the polynomial.
Think of the CRC as being held in a 32 bit register. The sequence is to xor a byte of data into the rightmost byte of the CRC register, bits 7 through 0 (which technically correspond to the polynomial coefficients of x^24 to x^31). Then the CRC is "cycled" to the right for 8 bits (via table lookup). Once all data bytes have gone through this cycle, as Mark Adler notes in a comment, the CRC is appended to the data most significant byte first: (CRC>>24)&0xff, (CRC>>16)&0xff, (CRC>>8)&0xff, (CRC)&0xff.
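To make that byte-at-a-time cycle concrete, here is a minimal JavaScript sketch of the CRC-32 that PNG uses, done bit-by-bit rather than with the lookup table (the table version just precomputes the 8 inner iterations for each possible byte value). 0xEDB88320 is the bit-reversed form of the polynomial, and the function name is mine:

function crc32(bytes) {
    let crc = 0xFFFFFFFF;                // register initialized to all 1s
    for (const b of bytes) {
        crc ^= b;                        // xor the data byte into the low byte
        for (let i = 0; i < 8; i++) {
            // if the bit shifted out is 1, xor in the (reflected) polynomial
            crc = (crc & 1) ? (crc >>> 1) ^ 0xEDB88320 : crc >>> 1;
        }
    }
    return (crc ^ 0xFFFFFFFF) >>> 0;     // final ones' complement, as an unsigned value
}

For instance, crc32(new TextEncoder().encode('IEND')) equals 0xAE426082, which is the CRC you will find at the very end of every PNG file.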
The wiki article may help. For the example in the computation section, the dividend would be an array of data bytes with the bits of each byte reversed, the bits of the 33 bit polynomial would be non-reversed (0x104C11DB7). After doing the computation, the bits of the remainder would be reversed and appended to the data bytes.
https://en.wikipedia.org/wiki/Cyclic_redundancy_check
Mark Adler's answer includes a link to a good tutorial for a CRC. His answer also explains the x's used in a polynomial. It's just like a polynomial in algebra, except the coefficients can only be 0 or 1, and addition (or subtraction) is done using XOR.
what is x
From the wiki example:
data = 11010011101100 = x^13 + x^12 + x^10 + x^7 + x^6 + x^5 + x^3 + x^2
divisor = 1011 = x^3 + x + 1
Three 0 bits are appended to the data, effectively multiplying it by x^3:
dividend = 11010011101100000 = x^16 + x^15 + x^13 + x^10 + x^9 + x^8 + x^6 + x^5
Then the crc = dividend % divisor, with coefficients restricted to 0 or 1.
(x^16 + x^15 + x^13 + x^10 + x^9 + x^8 + x^6 + x^5) % (x^3 + x + 1) = x^2
11010011101100000 % 1011 = 100
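Under the same assumptions, a small sketch of that mod-2 long division (the function name is mine, and it only works while the values fit in 32 bits):

// repeatedly xor the divisor, aligned under the dividend's leading 1
function mod2Remainder(dividend, divisor) {
    const divisorBits = divisor.toString(2).length;
    while (dividend.toString(2).length >= divisorBits && dividend !== 0) {
        const shift = dividend.toString(2).length - divisorBits;
        dividend ^= divisor << shift;    // "subtraction" in GF(2) is xor
    }
    return dividend;
}
console.log(mod2Remainder(0b11010011101100000, 0b1011).toString(2)); // "100"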

Beware: If you use (00000000)_2 and (00000001)_2 as the binary representations of the 0s and 1s in your example IDAT chunk, you will compute the CRC incorrectly. The ASCII values of '0' and '1' are 48 = (00110000)_2 and 49 = (00110001)_2; similarly, the ASCII values of 'I', 'D', 'A', and 'T' are 73 = (01001001)_2, 68 = (01000100)_2, 65 = (01000001)_2, and 84 = (01010100)_2. So, assuming you meant the values 0 and 1 rather than the characters ‘0’ and ‘1’, what you must compute the CRC of is (01001001 01000100 01000001 01010100 00000000 00000001 00000000 00000001 00000000 00000001 00000000 00000001 00000000 00000001 00000000)_2.
Inconsequential to the CRC but consequential to the validity of the chunk: the length field (i.e., the first 4 bytes) of the chunk should contain the length in bytes of the data only, which is 11. As a byte value, 11 is the ASCII value of a vertical tab (VT), a nonprinting character that can be represented in strings by the hexadecimal escape sequence \x0B (in which (B)_16 = 11). Similarly, the first 3 bytes must contain the character whose ASCII value is 0 (rather than 48), which is null (NUL) and can be represented in strings by the hexadecimal escape sequence \x00. So, the length field must contain something like "\x00\x00\x00\x0B".
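Putting the two answers together, here is a sketch (my own, reusing the crc32 sketch from earlier) of computing the CRC for the example chunk, assuming the data bytes are the numeric values 0 and 1 rather than the characters '0' and '1':

// The CRC covers the chunk type code plus the data bytes, but not the length field.
const typeAndData = new Uint8Array([
    0x49, 0x44, 0x41, 0x54,              // "IDAT"
    0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0      // the 11 data bytes
]);
const crc = crc32(typeAndData);
// Stored in the datastream most significant byte first:
const crcBytes = [crc >>> 24, (crc >>> 16) & 0xFF, (crc >>> 8) & 0xFF, crc & 0xFF];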

Related


I'm in a computer systems course and have been struggling, in part, with two's complement. I want to understand it, but everything I've read hasn't brought the picture together for me. I've read the Wikipedia article and various other articles, including my text book.
What is two's complement, how can we use it and how can it affect numbers during operations like casts (from signed to unsigned and vice versa), bit-wise operations and bit-shift operations?
Two's complement is a clever way of storing integers so that common math problems are very simple to implement.
To understand, you have to think of the numbers in binary.
It basically says,
for zero, use all 0's.
for positive integers, start counting up, with a maximum of 2^(number of bits - 1) - 1.
for negative integers, do exactly the same thing, but switch the role of 0's and 1's and count down (so instead of starting with 0000, start with 1111 - that's the "complement" part).
Let's try it with a mini-byte of 4 bits (we'll call it a nibble - 1/2 a byte).
0000 - zero
0001 - one
0010 - two
0011 - three
0100 to 0111 - four to seven
That's as far as we can go in positives. 2^3 - 1 = 7.
For negatives:
1111 - negative one
1110 - negative two
1101 - negative three
1100 to 1000 - negative four to negative eight
Note that you get one extra value for negatives (1000 = -8) that you don't for positives. This is because 0000 is used for zero. This can be thought of as the computer's number line.
Distinguishing between positive and negative numbers
Doing this, the first bit gets the role of the "sign" bit, as it can be used to distinguish between nonnegative and negative decimal values. If the most significant bit is 1, then the binary can be said to be negative, whereas if the most significant bit (the leftmost) is 0, you can say the decimal value is nonnegative.
"Sign-magnitude" negative numbers just have the sign bit flipped of their positive counterparts, but this approach has to deal with interpreting 1000 (one 1 followed by all 0s) as "negative zero" which is confusing.
"Ones' complement" negative numbers are just the bit-complement of their positive counterparts, which also leads to a confusing "negative zero" with 1111 (all ones).
You will likely not have to deal with Ones' Complement or Sign-Magnitude integer representations unless you are working very close to the hardware.
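A small sketch of that nibble table in code (the helper name is mine): JavaScript's 32-bit shift operators can sign-extend a 4-bit pattern, so each pattern above comes back with the signed value listed.

// shift the nibble into the top 4 bits, then arithmetic-shift back down,
// which copies the sign bit across the upper bits
const asSignedNibble = v => (v << 28) >> 28;
console.log(asSignedNibble(0b0111));   //  7
console.log(asSignedNibble(0b1111));   // -1
console.log(asSignedNibble(0b1000));   // -8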
I wonder if it could be explained any better than the Wikipedia article.
The basic problem that you are trying to solve with two's complement representation is the problem of storing negative integers.
First, consider an unsigned integer stored in 4 bits. You can have the following
0000 = 0
0001 = 1
0010 = 2
...
1111 = 15
These are unsigned because there is no indication of whether they are negative or positive.
Sign Magnitude and Excess Notation
To store negative numbers you can try a number of things. First, you can use sign magnitude notation which assigns the first bit as a sign bit to represent +/- and the remaining bits to represent the magnitude. So using 4 bits again and assuming that 1 means - and 0 means + then you have
0000 = +0
0001 = +1
0010 = +2
...
1000 = -0
1001 = -1
1111 = -7
So, you see the problem there? We have positive and negative 0. The bigger problem is adding and subtracting binary numbers. The circuits to add and subtract using sign magnitude will be very complex.
What is
0010
1001 +
----
?
Another system is excess notation. It can store negative numbers and it gets rid of the two-zeros problem, but addition and subtraction remain difficult.
So along comes two's complement. Now you can store positive and negative integers and perform arithmetic with relative ease. There are a number of methods to convert a number into two's complement. Here's one.
Convert Decimal to Two's Complement
Convert the number to binary (ignore the sign for now)
e.g. 5 is 0101 and -5 is 0101
If the number is a positive number then you are done.
e.g. 5 is 0101 in binary using two's complement notation.
If the number is negative then
3.1 find the complement (invert 0's and 1's)
e.g. -5 is 0101 so finding the complement is 1010
3.2 Add 1 to the complement 1010 + 1 = 1011.
Therefore, -5 in two's complement is 1011.
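A quick sketch of those steps (the function name is mine; masking to the bit width does the invert-and-add-1 work in one go):

function toTwosComplement(n, bits) {
    const mask = (1 << bits) - 1;        // e.g. 0b1111 for 4 bits (fine for bits <= 31)
    return ((n & mask) >>> 0).toString(2).padStart(bits, '0');
}
console.log(toTwosComplement(5, 4));     // "0101"
console.log(toTwosComplement(-5, 4));    // "1011"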
So, what if you wanted to do 2 + (-3) in binary? 2 + (-3) is -1.
What would you have to do if you were using sign magnitude to add these numbers? 0010 + 1011 = ?
Using two's complement consider how easy it would be.
2 = 0010
-3 = 1101 +
-------------
-1 = 1111
Converting Two's Complement to Decimal
Converting 1111 to decimal:
The number starts with 1, so it's negative, so we find the complement of 1111, which is 0000.
Add 1 to 0000, and we obtain 0001.
Convert 0001 to decimal, which is 1.
Apply the sign = -1.
Tada!
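And the reverse direction as a sketch (again, the helper name is mine):

function fromTwosComplement(bitString) {
    const value = parseInt(bitString, 2);
    // if the leading bit is 1, the pattern represents value - 2^bits
    return bitString[0] === '1' ? value - (1 << bitString.length) : value;
}
console.log(fromTwosComplement('1111'));   // -1
console.log(fromTwosComplement('0101'));   //  5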
Like most explanations I've seen, the ones above are clear about how to work with 2's complement, but don't really explain what they are mathematically. I'll try to do that, for integers at least, and I'll cover some background that's probably familiar first.
Recall how it works for decimal: 2345 is a way of writing 2 × 10^3 + 3 × 10^2 + 4 × 10^1 + 5 × 10^0.
In the same way, binary is a way of writing numbers using just 0 and 1 following the same general idea, but replacing those 10s above with 2s. Then in binary, 1111 is a way of writing 1 × 2^3 + 1 × 2^2 + 1 × 2^1 + 1 × 2^0, and if you work it out, that turns out to equal 15 (base 10). That's because it is 8 + 4 + 2 + 1 = 15.
This is all well and good for positive numbers. It even works for negative numbers if you're willing to just stick a minus sign in front of them, as humans do with decimal numbers. That can even be done in computers, sort of, but I haven't seen such a computer since the early 1970's. I'll leave the reasons for a different discussion.
For computers it turns out to be more efficient to use a complement representation for negative numbers. And here's something that is often overlooked. Complement notations involve some kind of reversal of the digits of the number, even the implied zeroes that come before a normal positive number. That's awkward, because the question arises: all of them? That could be an infinite number of digits to be considered.
Fortunately, computers don't represent infinities. Numbers are constrained to a particular length (or width, if you prefer). So let's return to positive binary numbers, but with a particular size. I'll use 8 digits ("bits") for these examples. So our binary number would really be 00001111, or 0 × 2^7 + 0 × 2^6 + 0 × 2^5 + 0 × 2^4 + 1 × 2^3 + 1 × 2^2 + 1 × 2^1 + 1 × 2^0.
To form the 2's complement negative, we first complement all the (binary) digits to form 11110000 and add 1 to form 11110001. But how are we to understand that to mean -15?
The answer is that we change the meaning of the high-order bit (the leftmost one). This bit will be a 1 for all negative numbers. The change will be to change the sign of its contribution to the value of the number it appears in. So now our 11110001 is understood to represent -1 × 2^7 + 1 × 2^6 + 1 × 2^5 + 1 × 2^4 + 0 × 2^3 + 0 × 2^2 + 0 × 2^1 + 1 × 2^0. Notice that "-" in front of that expression? It means that the sign bit carries the weight -2^7, that is -128 (base 10). All the other positions retain the same weight they had in unsigned binary numbers.
Working out our -15, it is -128 + 64 + 32 + 16 + 1. Try it on your calculator; it's -15.
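The same weighted reading, as a small sketch (the helper name is mine): the top bit of an 8-bit value carries the weight -128, and the remaining bits keep their usual weights.

function valueOfByte(bits) {             // bits: a string of 8 '0'/'1' characters
    let v = bits[0] === '1' ? -128 : 0;  // the sign bit contributes -2^7
    for (let i = 1; i < 8; i++) {
        if (bits[i] === '1') v += 1 << (7 - i);
    }
    return v;
}
console.log(valueOfByte('11110001'));    // -15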
Of the three main ways that I've seen negative numbers represented in computers, 2's complement wins hands down for convenience in general use. It has an oddity, though. Since it's binary, there have to be an even number of possible bit combinations. Each positive number can be paired with its negative, but there's only one zero. Negating a zero gets you zero. So there's one more combination, the number with 1 in the sign bit and 0 everywhere else. The corresponding positive number would not fit in the number of bits being used.
What's even more odd about this number is that if you try to form its positive by complementing and adding one, you get the same negative number back. It seems natural that zero would do this, but this is unexpected and not at all the behavior we're used to because computers aside, we generally think of an unlimited supply of digits, not this fixed-length arithmetic.
This is like the tip of an iceberg of oddities. There's more lying in wait below the surface, but that's enough for this discussion. You could probably find more if you research "overflow" for fixed-point arithmetic. If you really want to get into it, you might also research "modular arithmetic".
2's complement is very useful for finding the value of a binary, however I thought of a much more concise way of solving such a problem(never seen anyone else publish it):
Take a binary number, for example 1101, which [assuming the leading "1" is the sign] is equal to -3.
using 2's complement we would do this...flip 1101 to 0010...add 0001 + 0010 ===> gives us 0011. 0011 in positive binary = 3. therefore 1101 = -3!
What I realized:
instead of all the flipping and adding, you can just use the basic method for solving for a positive binary (let's say 0101): (2^3 * 0) + (2^2 * 1) + (2^1 * 0) + (2^0 * 1) = 5.
Do exactly the same concept with a negative!(with a small twist)
take 1101, for example:
for the first number, instead of 2^3 * 1 = 8, do -(2^3 * 1) = -8.
then continue as usual, doing -8 + (2^2 * 1) + (2^1 * 0) + (2^0 * 1) = -3
Imagine that you have a finite number of bits/trits/digits/whatever. You define 0 as all digits being 0, and count upwards naturally:
00
01
02
..
Eventually you will overflow.
98
99
00
We have two digits and can represent all numbers from 0 to 99. All those numbers are positive! Suppose we want to represent negative numbers too?
What we really have is a cycle. The number before 2 is 1. The number before 1 is 0. The number before 0 is... 99.
So, for simplicity, let's say that any number over 50 is negative. "0" through "49" represent 0 through 49. "99" is -1, "98" is -2, ... "50" is -50.
This representation is ten's complement. Computers typically use two's complement, which is the same except using bits instead of digits.
The nice thing about ten's complement is that addition just works. You do not need to do anything special to add positive and negative numbers!
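A tiny sketch of that two-digit wraparound (the helper names are mine): everything is arithmetic modulo 100, with the values 50..99 read back as -50..-1.

const wrap = n => ((n % 100) + 100) % 100;       // keep only the two low digits
const asSigned = n => (n >= 50 ? n - 100 : n);   // top half of the cycle is negative
console.log(asSigned(wrap(0 - 1)));    // -1  (stored as "99")
console.log(asSigned(wrap(3 + 98)));   //  1  (98 stands for -2, and 3 + (-2) = 1)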
I read a fantastic explanation on Reddit by jng, using the odometer as an analogy.
It is a useful convention. The same circuits and logic operations that
add / subtract positive numbers in binary still work on both positive
and negative numbers if using the convention, that's why it's so
useful and omnipresent.
Imagine the odometer of a car, it rolls around at (say) 99999. If you
increment 00000 you get 00001. If you decrement 00000, you get 99999
(due to the roll-around). If you add one back to 99999 it goes back to
00000. So it's useful to decide that 99999 represents -1. Likewise, it is very useful to decide that 99998 represents -2, and so on. You have
to stop somewhere, and also by convention, the top half of the numbers
are deemed to be negative (50000-99999), and the bottom half positive
just stand for themselves (00000-49999). As a result, the top digit
being 5-9 means the represented number is negative, and it being 0-4
means the represented number is positive - exactly the same as the top bit
representing sign in a two's complement binary number.
Understanding this was hard for me too. Once I got it and went back to
re-read the books articles and explanations (there was no internet
back then), it turned out a lot of those describing it didn't really
understand it. I did write a book teaching assembly language after
that (which did sell quite well for 10 years).
Two's complement is found by adding one to the ones' complement of the given number.
Let's say we have to find the two's complement of 10101: first find its ones' complement, which is 01010, then add 1 to that result: 01010 + 1 = 01011, which is the final answer.
Let's get the answer 10 – 12 in binary form using 8 bits:
What we will really do is 10 + (-12)
We need to get the complement of 12 to subtract it from 10.
12 in binary is 00001100.
10 in binary is 00001010.
To get the complement of 12 we just invert all the bits and then add 1.
12 in binary with the bits inverted is 11110011. This is also the inverse code (ones' complement).
Now we need to add one, which gives 11110100.
So 11110100 is the complement of 12! Easy when you think of it this way.
Now you can solve the above question of 10 - 12 in binary form.
00001010
11110100
-----------------
11111110
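You can check the worked example with an 8-bit typed array (a sketch, not part of the original answer):

const r = new Int8Array(1);                // 8-bit signed storage
r[0] = 10 - 12;
console.log(r[0]);                         // -2
console.log((r[0] & 0xFF).toString(2));    // "11111110", the bit pattern above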
Looking at the two's complement system from a math point of view it really makes sense. In ten's complement, the idea is to essentially 'isolate' the difference.
Example: 63 - 24 = x
We add the complement of 24 which is really just (100 - 24). So really, all we are doing is adding 100 on both sides of the equation.
Now the equation is: 100 + 63 - 24 = x + 100, that is why we remove the 100 (or 10 or 1000 or whatever).
Due to the inconvenient situation of having to subtract one number from a long chain of zeroes, we use a 'diminished radix complement' system, in the decimal system, nine's complement.
When we are presented with a number subtracted from a big chain of nines, we just need to reverse the numbers.
Example: 99999 - 03275 = 96724
That is the reason, after nine's complement, we add 1. As you probably know from childhood math, 9 becomes 10 by 'stealing' 1. So basically it's just ten's complement that takes 1 from the difference.
In Binary, two's complement is equatable to ten's complement, while one's complement to nine's complement. The primary difference is that instead of trying to isolate the difference with powers of ten (adding 10, 100, etc. into the equation) we are trying to isolate the difference with powers of two.
It is for this reason that we invert the bits. Just like how our minuend is a chain of nines in decimal, our minuend is a chain of ones in binary.
Example: 111111 - 101001 = 010110
Because chains of ones are 1 below a nice power of two, they 'steal' 1 from the difference like nine's do in decimal.
When we are using negative binary numbers, we are really just saying:
0000 - 0101 = x
1111 - 0101 = 1010
1111 + 0000 - 0101 = x + 1111
In order to 'isolate' x, we need to add 1 because 1111 is one away from 10000 and we remove the leading 1 because we just added it to the original difference.
1111 + 1 + 0000 - 0101 = x + 1111 + 1
10000 + 0000 - 0101 = x + 10000
Just remove 10000 from both sides to get x, it's basic algebra.
The word complement derives from completeness. In the decimal world the numerals 0 through 9 provide a complement (complete set) of numerals or numeric symbols to express all decimal numbers. In the binary world the numerals 0 and 1 provide a complement of numerals to express all binary numbers. In fact, the symbols 0 and 1 must be used to represent everything (text, images, etc) as well as positive (0) and negative (1).
In our world the blank space to the left of number is considered as zero:
35=035=000000035.
In a computer storage location there is no blank space. All bits (binary digits) must be either 0 or 1. To use memory efficiently, numbers may be stored in 8-bit, 16-bit, 32-bit, 64-bit, or 128-bit representations. When a number that is stored as an 8-bit number is transferred to a 16-bit location, the sign and magnitude (absolute value) must remain the same. Both 1's complement and 2's complement representations facilitate this.
As a noun:
Both 1's complement and 2's complement are binary representations of signed quantities where the most significant bit (the one on the left) is the sign bit. 0 is for positive and 1 is for negative.
2's complement does not mean negative. It means a signed quantity. As in decimal, the magnitude is represented as the positive quantity. The structure uses sign extension to preserve the quantity when promoting to a register with more bits:
[0101]=[00101]=[00000000000101]=5 (base 10)
[1011]=[11011]=[11111111111011]=-5(base 10)
As a verb:
2's complement means to negate. It does not mean make negative. It means if negative make positive; if positive make negative. The magnitude is the absolute value:
if a >= 0 then |a| = a
if a < 0 then |a| = -a = 2's complement of a
This ability allows efficient binary subtraction using negate then add.
a - b = a + (-b)
The official way to take the 1's complement is for each digit subtract its value from 1.
1'scomp(0101) = 1010.
This is the same as flipping or inverting each bit individually. This results in a negative zero, which is not well loved, so adding one to the 1's complement gets rid of the problem.
To negate or take the 2s complement first take the 1s complement then add 1.
                      Example 1    Example 2
original number         0101         1101
1's complement          1010         0010
add 1                   0001         0001
2's comp (negated)      1011         0011
In the examples the negation works as well with sign extended numbers.
Adding:

     1110  carry                     111110  carry
      0110      is the same as        000110
    + 1111                          + 111111
      ----                            ------
      0101  sum                       000101  sum
Subtracting:

     1110  borrow                     00000  carry
      0110      is the same as         00110
    - 0111                           + 11001
      ----                             -----
      1111                             11111

Notice that when working with 2's complement, blank space to the left of the number is filled with zeros for positive numbers but is filled with ones for negative numbers. The carry is always added and must be either a 1 or a 0.
Cheers
2's complement is essentially a way of coming up with the additive inverse of a binary number. Ask yourself this: given a number in binary form (present at a fixed-length memory location), what bit pattern, when added to the original number (at the same fixed-length memory location), would make the result all zeros (at that same fixed-length memory location)? If we could come up with this bit pattern, then that bit pattern would be the -ve representation (additive inverse) of the original number, as by definition adding a number to its additive inverse always results in zero.

Example: take 5, which is 101, held inside a single 8-bit byte. Now the task is to come up with a bit pattern which, when added to the given bit pattern (00000101), would result in all zeros at the memory location used to hold this 5, i.e. all 8 bits of the byte should be zero.

To do that, start from the rightmost bit of 101 and, for each individual bit, again ask the same question: what bit should I add to the current bit to make the result zero? Continue doing that, taking into account the usual carry. After we are done with the 3 rightmost places (the digits that define the original number without regard to the leading zeros), the last carry goes into the bit pattern of the additive inverse. Furthermore, since we are holding the original number in a single 8-bit byte, all other leading bits in the additive inverse should also be 1's, so that (and this is important) when the computer adds "the number" (represented using the 8-bit pattern) and its additive inverse using "that" storage type (a byte), the result in that byte would be all zeros.
1 1 1
----------
1 0 1
1 0 1 1 ---> additive inverse
---------
0 0 0
Many of the answers so far nicely explain why two's complement is used to represent negative numbers, but do not tell us what a two's complement number is, particularly not why a '1' is added, and in fact it is often added in a wrong way.
The confusion comes from a poor understanding of the definition of a complement number. A complement is the missing part that would make something complete.
The radix complement of an n digit number x in radix b is, by definition, b^n-x.
In binary 4 is represented by 100, which has 3 digits (n=3) and a radix of 2 (b=2). So its radix complement is b^n-x = 2^3-4=8-4=4 (or 100 in binary).
However, in binary, obtaining a radix complement is not as easy as getting its diminished radix complement, which is defined as (b^n - 1) - x, just 1 less than the radix complement. To get a diminished radix complement, you simply flip all the digits.
100 -> 011 (diminished (one's) radix complement)
to obtain the radix (two's) complement, we simply add 1, as the definition requires.
011 +1 ->100 (two's complement).
Now with this new understanding, let's take a look at the example given by Vincent Ramdhanie (in an earlier answer above):
Converting 1111 to decimal:
The number starts with 1, so it's negative, so we find the complement of 1111, which is 0000.
Add 1 to 0000, and we obtain 0001.
Convert 0001 to decimal, which is 1.
Apply the sign = -1.
Tada!
Should be understood as:
The number starts with 1, so it's negative. So we know it is a two's complement of some value x. To find the x represented by its two's complement, we first need to find its 1's complement.
two's complement of x: 1111
one's complement of x: 1111-1 ->1110;
x = 0001, (flip all digits)
Apply the sign -, and the answer is -x = -1.
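A one-line sketch of that definition (the names are mine):

// the radix complement of x with n digits in base b is b^n - x
const radixComplement = (x, n, b) => Math.pow(b, n) - x;
console.log(radixComplement(4, 3, 2));                // 4, i.e. 100 in binary
console.log(radixComplement(5, 4, 2).toString(2));    // "1011", the two's complement of 0101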
I liked lavinio's answer, but shifting bits adds some complexity. Often there's a choice of moving bits while respecting the sign bit or while not respecting the sign bit. This is the choice between treating the numbers as signed (-8 to 7 for a nibble, -128 to 127 for bytes) or full-range unsigned numbers (0 to 15 for nibbles, 0 to 255 for bytes).
It is a clever means of encoding negative integers in such a way that approximately half of the bit combinations of a data type are reserved for negative integers, and the addition of most of the negative integers with their corresponding positive integers results in a carry overflow that leaves the result as binary zero.
So, in 2's complement, if one is 0001 then -1 is 1111, because adding them results in a combined sum of 0000 (with an overflow of 1).
2's complement: when we add one to the 1's complement of a number, we get the 2's complement. For example: for 100101, its 1's complement is 011010 and its 2's complement is 011010 + 1 = 011011 (by adding one to the 1's complement). For more information,
this article explains it graphically.
Two's complement is mainly used for the following reasons:
To avoid multiple representations of 0
To avoid keeping track of a carry bit (as in one's complement) in case of an overflow.
Carrying out simple operations like addition and subtraction becomes easy.
Two's complement is one of the ways of expressing a negative number and most of the controllers and processors store a negative number in two's complement form.
In simple terms, two's complement is a way to store negative numbers in computer memory, whereas positive numbers are stored as normal binary numbers.
Let's consider this example,
The computer uses the binary number system to represent any number.
x = 5;
This is represented as 0101.
x = -5;
When the computer encounters the - sign, it computes its two's complement and stores it.
That is, 5 = 0101 and its two's complement is 1011.
The important rules the computer uses to process numbers are,
If the first bit is 1 then it must be a negative number.
There is no -0 in this system: the pattern with a 1 in the first bit and 0s everywhere else (1000) is not negative zero; it is the most negative value (-8 for 4 bits).
If all the bits are 0 then it is 0.
Else it is a positive number.
To bitwise complement a number is to flip all the bits in it. To two’s complement it, we flip all the bits and add one.
Using 2’s complement representation for signed integers, we apply the 2’s complement operation to convert a positive number to its negative equivalent and vice versa. So using nibbles for an example, 0001 (1) becomes 1111 (-1) and applying the op again, returns to 0001.
The behaviour of the operation at zero is advantageous in giving a single representation for zero without special handling of positive and negative zeroes. 0000 complements to 1111, which, when 1 is added, overflows to 0000, giving us one zero rather than a positive and a negative one.
A key advantage of this representation is that the standard addition circuits for unsigned integers produce correct results when applied to them. For example adding 1 and -1 in nibbles: 0001 + 1111, the bits overflow out of the register, leaving behind 0000.
For a gentle introduction, the wonderful Computerphile have produced a video on the subject.
The question is 'What is “two's complement”?'
The simple answer for those wanting to understand it theoretically (and me seeking to complement the other more practical answers): 2's complement is the representation for negative integers in the binary system that does not require additional characters, such as + and -.
The two's complement of a given number is the number obtained by adding 1 to the ones' complement of that number.
Suppose, we have a binary number: 10111001101
Its 1's complement is: 01000110010
And its two's complement will be: 01000110011
Reference: Two's Complement (Thomas Finley)
I invert all the bits and add 1. Programmatically:
// In C++11
// _powers[n] holds 2^n, so _powers[n_bits] - 1 is a mask of n_bits ones:
// xor with the mask inverts the low bits, then we add 1.
int _powers[] = { 1, 2, 4, 8, 16, 32, 64, 128 };

int value = 3;
int n_bits = 4;
int twos_complement = (value ^ (_powers[n_bits] - 1)) + 1;   // 13, i.e. 1101 = -3 in 4 bits
You can also use an online calculator to calculate the two's complement binary representation of a decimal number: http://www.convertforfree.com/twos-complement-calculator/
The simplest answer:
1111 + 1 = (1)0000. So 1111 must be -1. Then -1 + 1 = 0.
That is what finally made all of this make sense to me.

javascript: Why is Number.MAX_SAFE_INTEGER + 2.000000000000001 different than Number.MAX_SAFE_INTEGER + 2.0000000000000001?

It is said on Wikipedia that the representable numbers between 2⁵³ and 2⁵⁴ are all even integers.
If we let N = Number.MAX_SAFE_INTEGER (which is 2⁵³ - 1 = 9007199254740991), and calculate the following numbers: N+2, N+2.0000000000000001, N+2.000000000000001, I would expect that they are all the same, but I was wrong:
N + 2 === 9007199254740992
N + 2.0000000000000001 === 9007199254740992
N + 2.000000000000001 === 9007199254740994
Does anyone know why?
The first number has a sub-decimal-point component that requires a 2^-56 exponent to express:
<<< '2.0000000000000001' gawk -v PREC=2000000 -nMbe '{
printf("\n\t0x%.16A\n\n",($0)%1) }'
0x 7.34ACA5F6226F0ADAp-56
the final 1 at the tail will be lost even during parsing, which means you're only asking it to handle [ 4^3^3/2 - 1 ] (ps: this is my favorite way to express 2^53 - 1) + 2,
which I'm guessing double-precision floating point logic defaults to not rounding it up. But conversely, the 2nd example you've provided has a sub-decimal component that is
0x 4.80EBE7B9D58566C8 p-52
i.e.
0.0000000000000010000000000
0.0000000000000002220446049   (~2^-52)
Combining both, what the floating point system sees is
0x 2.000000000000 0 p+0   <~~~ 1st one
0x 2.000000000000 5 p+0   <~~~ 2nd one
                  ^
This 5 made all the difference: now the mantissa has something to use to round things up.
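In other words (a small sketch of my own, not from the original answer; the results are what a standard double-precision JS engine gives):

console.log(2.0000000000000001 === 2);   // true: this tail is too small to survive parsing near 2
console.log(2.000000000000001 === 2);    // false: this tail is large enough to survive
const N = Number.MAX_SAFE_INTEGER;       // 2^53 - 1
console.log(N + 2);                      // 9007199254740992, rounds to the nearest even integer
console.log(N + 2.000000000000001);      // 9007199254740994, the surviving tail tips the rounding up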

Unexpected negative value in Int32Array

const x = new Int32Array(1);
x[0] = 699044815921;
console.log(x[0]);
-1034853327
Can anyone explain why there is a negative number?
Int32Array allows 32 bits per value, with the 32nd bit (counting from the right) reserved to specify the sign of the number. The number you're trying to fit is (699044815921).toString(2).length == 40 bits long, so the 8 leftmost bits are discarded, the 32nd bit is interpreted as a sign bit, and you get what you get as a result.
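A small sketch of that truncation (my addition; BigInt.asIntN and | 0 both reduce a number to a signed 32-bit value):

const n = 699044815921;
console.log(n.toString(2).length);                   // 40, too wide for 32 bits
console.log(n | 0);                                  // -1034853327, the low 32 bits read as signed
console.log(Number(BigInt.asIntN(32, BigInt(n))));   // -1034853327, the same thing spelled out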

Math.pow() not giving precise answer

I've used Math.pow() to calculate the exponential value in my project.
Now, for specific values like Math.pow(3,40), it returns 12157665459056929000.
But when I tried the same value using a scientific calculator, it returns 12157665459056928801.
Then I tried looping up to the exponent:
function calculateExpo(base, power){
    base = parseInt(base);
    power = parseInt(power);
    var output = 1;
    gameObj.OutPutString = ''; //base + '^' + power + ' = ';
    for(var i = 0; i < power; i++){
        output *= base;
        gameObj.OutPutString += base + ' x ';
    }
    // to remove the last comma
    gameObj.OutPutString = gameObj.OutPutString.substring(0, gameObj.OutPutString.lastIndexOf('x'));
    gameObj.OutPutString += ' = ' + output;
    return output;
}
This also returns 12157665459056929000.
Is there any restriction on the Int type in JS?
This behavior is highly dependent on the platform you are running this code on. Interestingly, even the browser matters, even on the very same machine.
<script>
document.write(Math.pow(3,40));
</script>
On my 64-bit machine Here are the results:
IE11: 12157665459056928000
FF25: 12157665459056929000
CH31: 12157665459056929000
SAFARI: 12157665459056929000
52 bits of JavaScript's 64-bit double-precision number values are used to store the "fraction" part of a number (the main part of the calculations performed), while 11 bits are used to store the "exponent" (basically, the position of the decimal point), and the 64th bit is used for the sign. (Update: see this illustration: http://en.wikipedia.org/wiki/File:IEEE_754_Double_Floating_Point_Format.svg)
There are slightly more than 63 bits worth of significant figures in the base-two expansion of 3^40 (63.3985... in a continuous sense, and 64 in a discrete sense), so hence it cannot be accurately computed using Math.pow(3, 40) in JavaScript. Only numbers with 52 or fewer significant figures in their base-two expansion (and a similar restriction on their order of magnitude fitting within 11 bits) have a chance to be represented accurately by a double-precision floating point value.
Take note that how large the number is does not matter as much as how many significant figures are used to represent it in base two. There are many numbers as large or larger than 3^40 which can be represented accurately by JavaScript's 64-bit double-precision number values.
Note:
3^40 = 1010100010111000101101000101001000101001000111111110100000100001 (base two)
(The length of the largest substring beginning and ending with a 1 is the number of base-two significant figures, which in this case is the entire string of 64 digits.)
Haskell (ghci) gives
Prelude> 3^40
12157665459056928801
Erlang gives
1> io:format("~f~n", [math:pow(3,40)]).
12157665459056929000.000000
2> io:format("~p~n", [crypto:mod_exp(3,40,trunc(math:pow(10,21)))]).
12157665459056928801
JavaScript
> Math.pow(3,40)
12157665459056929000
You get 12157665459056929000 because it uses IEEE floating point for computation. You get 12157665459056928801 because it uses arbitrary precision (bignum) for computation.
JavaScript can only represent distinct integers up to 2^53 (or ~16 significant digits). This is because all JavaScript numbers have an internal representation of IEEE-754 base-2 doubles.
As a consequence, the result from Math.pow (even if it were accurate internally) is brutally "rounded" so that it is still representable as a JavaScript number - and the resulting number is thus not the correct value, but the closest approximation of it JavaScript can handle.
I have put underscores above the digits that don't [entirely] make the "significant digit" cutoff so it can be seen how this would affect the results.
................____
12157665459056928801 - correct value
12157665459056929000 - closest JavaScript integer
Another way to see this is to run the following (which results in true):
12157665459056928801 == 12157665459056929000
From the The Number Type section in the specification:
Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type ..
.. but not all integers with large magnitudes are representable.
The only way to handle this situation in JavaScript (such that information is not lost) is to use an external number encoding and pow function. There are a few different options mentioned in https://stackoverflow.com/questions/287744/good-open-source-javascript-math-library-for-floating-point-operations and Is there a decimal math library for JavaScript?
For instance, with big.js, the code might look like this fiddle:
var z = new Big(3)
var r = z.pow(40)
var str = r.toString()
// str === "12157665459056928801"
Can't say I know for sure, but this does look like a range problem.
I believe it is common for mathematics libraries to implement exponentiation using logarithms. This requires that both values are turned into floats and thus the result is also technically a float. This is most telling when I ask MySQL to do the same calculation:
> select pow(3, 40);
+-----------------------+
| pow(3, 40) |
+-----------------------+
| 1.2157665459056929e19 |
+-----------------------+
It might be a courtesy that you are actually getting back a large integer.

Math.random and arithmetic shift

If I have the following code in JavaScript:
var index1 = (Math.random() * 6) >> 0;
var index2 = Math.floor(Math.random() * 6);
The results for index1 or index2 are anywhere between 0 and 6.
I must be confused with my understanding of the >> operator. I thought that by using arithmetic shift the results for index1 would be anywhere between 1 and 6.
I am noticing, however that I don't need to use Math.floor() or Math.round() for index1 if I use the >> operator.
I know I can achieve this by adding 1 to both indexes, but I was hoping there was a better way of ensuring results are from 1 to 6 instead of adding 1.
I'm aware that bitwise operators treat their operands as a sequence of 32 bits (zeros and ones), rather than as decimal, hexadecimal, or octal numbers. For example, the decimal number nine has a binary representation of 1001. Bitwise operators perform their operations on such binary representations, but they return standard JavaScript numerical values.
UPDATE:
I saw the original usage in this CAAT tutorial on line 26 and was wondering whether that would actually return a random number between 1 and 6 and it seems it would only ever return a random number between 0 and 6. So you would never actually see the anim1.png fish image!
Thank you in advance for any enlightenment.
Wikipedia says '(Arithmetic right shifts for negative numbers would be equivalent to division using rounding towards 0 in one's complement representation of signed numbers as was used by some historic computers.)'
Not exactly an answer, but the idea is that >> 0 is really specific and shouldn't be used in general for getting a range between 1 and 6.
Most people would tell you to do
Math.floor((Math.random()*10)+1);
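For what it's worth, a small sketch (my own) of why >> 0 only truncates and does not shift the range; the +1 is still needed to land on 1 to 6:

console.log(4.7 >> 0);                          // 4: converting to a 32-bit integer drops the fraction
console.log(Math.floor(4.7));                   // 4: same result for non-negative numbers
const roll = ((Math.random() * 6) >> 0) + 1;    // 1..6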
