Is there an accepted way to represent high resolution timestamps in JSON and/or JavaScript?
Ideally, I would like it to support at least 100 ns resolution, which would make the server code a bit simpler (the .NET DateTime type has 100 ns resolution as well).
I found a lot of questions about high-resolution timers and measuring time at this resolution (which is apparently not possible in JavaScript), but I simply need to represent such a timestamp somehow in an application API.
This is for an API built in REST style using JSON, so actually measuring time in this resolution is not required. However, I would like to transfer and use (potentially in JavaScript) the timestamp in its full resolution (100 ns, since this is .NET).
In light of your clarification, isn't your timestamp just a really big integer? The current Unix timestamp is 1329826212. If you want nanosecond precision, that is just nine more digits: 1329826212000000000. JavaScript can hold a number that large, with one important caveat (see below). Just send it over as:
myobject = {
"time": 1329826212000000000
}
The caveat: JavaScript supports a huge range of floating-point numbers, but it guarantees integer accuracy only from -2^53 to 2^53 (about ±9×10^15). A nanosecond timestamp is around 1.3×10^18, well outside that range, so while this particular value happens to be representable, adjacent nanosecond values are not, and arithmetic at full resolution can silently lose precision. If every 100 ns tick must survive the round trip, send the value as a string and parse it with BigInt on the JavaScript side.
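A minimal sketch of the pitfall and one workaround (BigInt is standard in modern JavaScript; the names here are just for illustration):

const ns = 1329826212000000001;           // intended value: one nanosecond later
console.log(Number.isSafeInteger(ns));    // false, since this is beyond 2^53
console.log(ns === 1329826212000000000);  // true: the last digit was silently lost

// Safer: transmit the timestamp as a string and parse it with BigInt
const fromWire = BigInt("1329826212000000001");
console.log(fromWire + 1n);               // 1329826212000000002n, exact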
UPDATE
I'm sorry, I just re-read your question. You wish to represent it? Well, one thing I can think of is to extract the last nine digits (the extra precision beyond second granularity) and show them as the decimal part of the number. You may even wish to extract only the last six digits (the extra precision beyond millisecond granularity).
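A sketch of that split, done on the string form of the number so the 2^53 limit mentioned above never comes into play (the variable names are illustrative):

const raw = "1329826212000000000";      // nanoseconds since the epoch, as a string
const seconds = raw.slice(0, -9);       // "1329826212"
const fraction = raw.slice(-9);         // "000000000"
console.log(seconds + "." + fraction);  // "1329826212.000000000"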
The JavaScript Date object only has millisecond precision.
However, if you are just looking for a standard format to encode nanosecond precision times, an ISO 8601 format string will allow you to define nanoseconds as fractions of seconds:
Decimal fractions may also be added to any of the three time elements. A decimal point, either a comma or a dot (without any preference as stated most recently in resolution 10 of the 22nd General Conference CGPM in 2003), is used as a separator between the time element and its fraction. A fraction may only be added to the lowest order time element in the representation. To denote "14 hours, 30 and one half minutes", do not include a seconds figure. Represent it as "14:30,5", "1430,5", "14:30.5", or "1430.5". There is no limit on the number of decimal places for the decimal fraction. However, the number of decimal places needs to be agreed to by the communicating parties.
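For example, the full 100 ns resolution of a .NET DateTime fits in seven fractional digits, so a JSON payload could carry a string like the one below. The field name, example value, and parsing code are just a sketch; since a JS Date keeps only milliseconds, the fraction is pulled out separately here:

// {"time": "2012-02-21T12:30:12.1234567Z"}  (7 fractional digits = 100 ns units)
const iso = "2012-02-21T12:30:12.1234567Z";
const [whole, rest] = iso.split(".");
const date = new Date(whole + "Z");                         // accurate to the second
const ticks = Number(rest.replace("Z", "").padEnd(7, "0")); // 1234567 ticks of 100 ns
console.log(date.toISOString(), ticks);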
There is a trick used by jsperf.com and benchmarkjs.com that uses a small Java applet that exposes Java's nanosecond timer.
See Stack Overflow question Is there any way to get current time in nanoseconds using JavaScript?.
Related
I'm trying to write a function that returns the number of decimal places after the decimal point in a floating-point literal, using the answer from here as a reference.
Although this seems to work fine in browser consoles, in the Node.js environment, while running test cases, the value is truncated to 14 decimal places.
let data = 123.834756380650877834678
console.log(data) // 123.83475638065088
And the function returns 14 as the answer.
Why does this rounding happen when the value is stored? Is it default behavior?
The floating-point format used in JavaScript (and presumably Node.js) is IEEE-754 binary64. When 123.834756380650877834678 is used in source code, it is converted to the nearest representable value, which is 123.834756380650873097692965529859066009521484375.
When this is converted to a string with default formatting, JavaScript uses just enough digits to uniquely distinguish the value. For 123.834756380650873097692965529859066009521484375, this should produce “123.83475638065087”. If you are getting “123.83475638065088”, which differs in the last digit, then the software you are using does not conform to the JavaScript specification (ECMAScript).
In any case, the binary64 format does not have sufficient precision to preserve the information that the original numeral, “123.834756380650877834678”, has 21 digits after the decimal point.
The code you link to also does not and cannot compute the number of digits in an original numeral. It computes the number of digits needed to uniquely distinguish the value represented after conversion to binary64. For sufficiently short numerals without trailing zeros after the decimal point, this is the same as the number of digits after the decimal point in the original numeral. For others, it may not be.
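For reference, a minimal sketch of such a counter (this mirrors the kind of code the question links to; it counts the decimal places of the value JavaScript actually stored, not of the source literal):

function decimalPlaces(n) {
  const text = String(n);  // shortest string that round-trips the double
  const dot = text.indexOf(".");
  return dot === -1 ? 0 : text.length - dot - 1; // note: ignores exponent forms like 1e-7
}

console.log(decimalPlaces(123.834756380650877834678)); // 14, the literal was narrowed first
console.log(decimalPlaces(0.5));                       // 1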
It is the default behavior of JavaScript, and I think it will be the same in Node.js.
In JavaScript, a number is displayed with at most 17 significant digits.
For more details, take a look here.
(Duplicate of the Stack Overflow question Is floating point math broken?)
Using the parseFloat method, I converted a string value from the DB to a float and multiplied it by 100, but the output looks odd. The following is the piece of code I've used:
parseFloat(upliftPer) * 100 //upliftPer value read from DB and its value is 0.0099
When it is multiplied by 100, I get 0.9900000000000001. I expected 0.99, but some junk digits are appended. I went ahead and tried the same thing in the Chrome browser console, with the same result. What I need is for 0.0099 * 100 to give 0.99. I can't just apply round/toFixed, since I need more precision.
This is because JavaScript numbers are binary floating-point (doubles): a value like 0.0099 cannot be represented exactly, so there is always a certain degree of noise due to floating-point inaccuracy. Here's some information on that.
You can just round it using toFixed(x) with as many decimal places of precision as you want (or as many as JavaScript will allow you).
This is not specific to JavaScript; it happens in any programming language.
It's because of the conversion from decimal to binary representation: many decimal fractions have an infinite binary expansion, which has to be truncated to the limited number of bits available for storing the number. Just as 1/3 has no finite decimal expansion, 0.0099 has no finite binary expansion, so the stored value is only an approximation.
Check out the process of conversion, and take a look at this converter, which tells you how much error is introduced during conversion.
As @sebasaenz pointed out in his answer, you can use toFixed(x) to round your number and get rid of the junk digits.
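Two common workarounds, sketched below (the variable names are just for illustration): round once after the arithmetic, or do the arithmetic on scaled integers.

const upliftPer = parseFloat("0.0099");  // value read from the DB

// Option 1: round the result to a known number of decimals
console.log(Number((upliftPer * 100).toFixed(2))); // 0.99

// Option 2: work in scaled integers and divide once at the end
const basisPoints = Math.round(upliftPer * 10000); // 99
console.log(basisPoints / 100);                    // 0.99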
This may be a strange question, but I'm using a Decimal to store a timestamp with more than millisecond precision. For example:
The time of 00:01:15 would be 75 seconds / 86400 seconds in a day = 0.00086805556.
The date of "2014-01-01" ==> 41638 (days since 1900-01-01).
The datetime of "2014-01-01 00:01:15.12222" would be 41638.00086947014.
Is there any possible way at all to include the timezone (such as "-6") in this data type? I think the answer is no, but I was wondering if there might be any tricks to be able to store that amount in the decimal.
I don't think it's possible, and perhaps I'll need to use either a String or an Array of [<datetime>, <timezone>] to get past JavaScript's millisecond precision (using a library such as Decimal.js for high-precision decimals), but I was wondering what might be possible here.
The reason I'm using a decimal here is to be able to store a time or date in a similar format and to add them together for certain operations (much as Excel or Google Sheets store dates/times as numbers). But the timezone complicates things quite a bit.
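One possible shape, sketched as a serial number plus an explicit offset field rather than a single decimal (all names here are illustrative):

// Serial date-time as in the question, with the timezone kept alongside it
const stamp = {
  serial: 41638.00086947014, // days since 1900-01-01; the fraction is the time of day
  offsetHours: -6            // timezone, e.g. "-6"
};

// Arithmetic still works on the serial part; normalize to UTC when comparing
const utcSerial = stamp.serial - stamp.offsetHours / 24;
console.log(utcSerial); // 41638.25086947014, the same instant expressed in UTC days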
Hello guys, I'm working on a JavaScript file that does astronomical calculations, so I need more precision.
I have a lot of operations on each line, and I want more precision in the results (ten or even twenty decimal places after the comma).
In JavaScript, if I do not declare the number of decimals (for example using .toFixed(n)), how many positions after the comma does the language consider during a calculation?
For example:
var a= 1.123456789*3.5; // result 3.9320987615
var b= a/0.321;
var c= b*a;
// c => 48.16635722800571
Will that be the result?
Does JavaScript use all the decimals after the comma for each variable, or does it approximate?
And why do users in other questions suggest using decimal.js or other libraries?
I'm sorry if my question seems stupid, but this is important to me.
I hope you can help me.
Sorry for my English!
JavaScript uses IEEE-754 double precision, which means it only carries numbers to about 15-17 significant decimal digits; anything beyond that gets cut off. If you need more precision, you have to use decimal.js or another similar library. Other questions recommend decimal.js or other libraries because they are easy to put into your program and can provide as much precision as you want.
The reason higher precision isn't the default is that it takes a lot more effort to calculate to 20 digits than to 15: the hardware floating-point unit works on fixed 64-bit values, so anything beyond that has to be simulated in software. If you want to read more about it, I would recommend the Wikipedia article on arbitrary-precision arithmetic.
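A quick sketch of the difference; the decimal.js part assumes the library is installed (npm install decimal.js) and uses its documented Decimal constructor, set, times, and div methods:

// Native doubles: precision tops out around 15-17 significant digits
const a = 1.123456789 * 3.5;
console.log(a); // 3.9320987615

const Decimal = require("decimal.js");
Decimal.set({ precision: 30 });  // carry 30 significant digits

const A = new Decimal("1.123456789").times("3.5");
const C = A.div("0.321").times(A);
console.log(C.toString());       // about 48.1663572..., kept to 30 significant digits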
Note that the toFixed() method converts a number into a string, keeping a specified number of decimals; the returned value will be a string, not a number.
In JavaScript, there's only one type for all different kinds of numbers. Does the amount of decimals in the numbers used (precision) affect performance especially in JavaScript? If it does, how?
How about saving numbers in MongoDB: Do precise numbers take more space than less precise ones?
Generally no. There are some possible performance implications when a number doesn't fit in a 31-bit signed integer.
A tour of V8: object representation explains:
According to the spec, all numbers in JavaScript are 64-bit floating point doubles. We frequently work with integers though, so V8 represents numbers with 31-bit signed integers whenever possible (the low bit is always 0; this helps the garbage collector distinguish numbers from pointers). So objects with the fast small integers elements kind only contain this type of number. If we want to store a fractional number or a larger integer or a special value like -0, then we need to upgrade the array to fast doubles. This involves a potentially expensive copy-and-convert operation, but it doesn't happen often in practice. fast doubles objects are still pretty fast because all of the numbers are stored in an unboxed representation. If we want to store any other kind of value, e.g., a string or an object, we must upgrade to a general array of fast elements.
Does the amount of decimals in the numbers used (precision) affect performance especially in JavaScript? If it does, how?
No. The number type in JavaScript is a 64-bit base-2 floating-point value and always has the same precision. The computer works on that data bit by bit, and it doesn't matter whether the data represents something that looks simple to a human, like 1.0, or something seemingly complicated, like 123423.5645632. In fact, for base-2 floats, the 'human' values are just as 'hard', because 1.1 is really stored as a much longer number (something like 1.1000000000000000888...). None of this matters, because the computer truly operates on 64 ones and zeros. There are always some arcane exceptions in floating point, but those usually don't matter in practice.
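You can see the stored value directly; toPrecision with a large argument prints the digits that are actually in the double:

console.log((1.1).toPrecision(21)); // "1.10000000000000008882"
console.log((1.0).toPrecision(21)); // "1.00000000000000000000"
// Both are ordinary 64-bit doubles, so operations on them cost the same.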
How about saving numbers in MongoDB: Do precise numbers take more space than less precise ones?
Decimal numbers are stored as doubles (64-bit), and it doesn't matter whether that is 1.0 or 1.1221234423. Again, the number of bits is constant for these data types.
The same is true for ints, but MongoDB has support for both 32- and 64-bit ints. So NumberLong is indeed larger than regular 32-bit ints and just as large as doubles.