I've updated Chrome to version 67.
new Date(1924,4,1,0,0,0,0).getTime()
returns -1441245724000, but I expected -1441249200000.
Since the milliseconds, seconds, and minutes are all zero, the value from getTime() should end in at least five zeros.
There can be only one explanation: You are in Ukraine.
Allow me to explain:
When passing individual components to the Date constructor, those values are based on the local time zone of the computer where the code is running. Keeping in mind that months are zero based, new Date(1924,4,1,0,0,0,0) is asking for 1924-05-01 00:00:00.000 local time.
.getTime() is asking for a Unix timestamp in milliseconds, which are based on UTC - so there is an implicit conversion from local time to UTC. Therefore, anyone who runs this code will get different results depending on their own time zone.
Time zones are a relatively modern invention. They have not always existed in the way we use them today. The data that most computers keep about time zones comes from the IANA time zone database. In this data, for most time zones, the earliest entry is based on the solar local mean time (LMT) for the latitude and longitude associated with the city used to identify the time zone.
In this case, your value -1441245724000 translates to 1924-04-30 21:57:56 UTC. Since it was derived from a local time of midnight, the offset from UTC in effect at that local time must have been +02:02:04.
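For illustration, the arithmetic above can be reproduced directly; this sketch is time-zone independent because both values are UTC-based:
const reported = -1441245724000;                   // the value from the question
console.log(new Date(reported).toISOString());     // "1924-04-30T21:57:56.000Z"
const wallClockAsUtc = Date.UTC(1924, 4, 1);       // 1924-05-01 00:00:00 treated as UTC
console.log((wallClockAsUtc - reported) / 1000);   // 7324 seconds, i.e. +02:02:04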
The only time zone in the TZDB with an LMT value of +02:02:04 is Europe/Kiev. For reasons I'm not certain of exactly, the TZDB also assigns the abbreviation KMT (Kiev Mean Time) from 1880 to 1924.
As to why you are seeing this on newer versions of Chrome - it is likely that older versions did not take the entire TZDB into consideration, but had truncated it at some point in the past. Indeed, the ECMAScript 5.1 standard used to require only the current time zone rule be applied as if it was in effect for all time. This was removed in ECMAScript 6, and most browsers now use the correct rule that was in effect for the timestamp provided.
TL;DR: Local time in Ukraine before 1 May 1924 was determined by the sun - not by the government. At least - that is the best known information that your computer has.
I have a system where users from Washington DC can create a post. The post is saved in my system in UTC. I can then use a reporting system that gives me information about every post created in a certain date range. Let's say I select a date range from March 21st 00:00:00 to March 28th 23:59:59, but someone created a post on March 28th 22:30:00 Washington DC time. Washington DC is several hours behind UTC, so this post is saved at around March 29th 02:30:00 UTC. When I generate the report for March 21st to March 28th, I don't get the correct result, because that one post was created on March 28th Washington time but on March 29th in UTC.
I first solved this by obtaining the client's UTC offset, sending it to the server, and applying that offset to my date range:
// JavaScript (sent to the server as request headers)
"offsetHours": parseInt(new Date().getTimezoneOffset() / -60),
"offsetMinutes": (new Date().getTimezoneOffset() / -60) % 1 * 60

# Python (applied to the range on the server)
_range["from"] = strToDate(_range["from"]) - datetime.timedelta(
    hours=int(request.headers["offsetHours"]),
    minutes=int(request.headers["offsetMinutes"]))
This solved the issue, but it raised another. Now if I generate a report for the same date range (March 21st to 28th) from two different time zones, I get different results, because the two users have different offsets and so they shift the range boundaries by different amounts.
Is there any solution to this problem?
You're not necessarily describing a problem, but rather a side effect of how local times around the world work.
At any given time, there is usually more than one "date" in effect somewhere in the world. If you are saving the timestamp of an event that took place, and you have customers around the world, you're not necessarily saving it with the same date that the user thought it was in their own time zone. This is true whether you align the timestamps to UTC or to a specific time zone.
Therefore, you must make a business decision about how your application is intended to work. Do you want your daily reports to reflect posts that were made within a UTC day, or within the day according to the time zone of your business's headquarters? Then store the timestamp in UTC and (optionally) adjust to your business's time zone before or during reporting.
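As a minimal sketch of that reporting-side conversion (assuming, purely for illustration, that the business time zone is America/New_York and that the moment-timezone library is available; neither is part of the question):
const moment = require('moment-timezone');
const zone = 'America/New_York';                              // assumed business time zone
const fromUtc = moment.tz('2021-03-21', zone).utc().format(); // start of Mar 21 local, as UTC
const toUtc   = moment.tz('2021-03-29', zone).utc().format(); // exclusive end of Mar 28 local, as UTC
// then query: created_at >= fromUtc AND created_at < toUtc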
If, however, you want the daily reports to reflect the date in the user's time zone, then you might want to also store the user's time zone ID (such as America/New_York, not a numeric offset) so that you can convert to it. Keep in mind that if users are in different time zones, your reports might look strange when examined from a single time zone's perspective.
Another technique that is often used (primarily for performance reasons, but also for clarity of logic), is to keep both a UTC-based timestamp and a separate field for the "business date" that applies. Usually such a field is just a date field, storing a value such as 2021-03-29 without any time or time zone. This field can be pre-converted to a time zone according to whatever rules you decide are applicable for your business. It then becomes a great candidate for an index and works well for range queries for daily reports.
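A minimal sketch of deriving such a business date in JavaScript, assuming (again, only for illustration) that the business time zone is America/New_York:
const createdUtc = new Date('2021-03-29T02:30:00Z');
// the en-CA locale typically formats dates as an ISO-like YYYY-MM-DD string
const businessDate = createdUtc.toLocaleDateString('en-CA', { timeZone: 'America/New_York' });
console.log(businessDate); // "2021-03-28" - store this alongside the UTC timestamp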
In the end - there is no one "right" way to do it. You have to decide what works best for your use case. If you are working for a larger company and unsure of the business requirements, then ask someone who might already perform a similar activity manually. (Often this is an accounting or sales person in a larger organization.)
So, a little context: I have an array of 24 arrays, one for every hour in the day.
For example, midnight (index 0) might be [133.00, 234.00], which would indicate 133 actions from 12:00 to 12:30 am and 234 actions from 12:30 to 1:00 am.
I need to adjust these indexed arrays in the browser with JS to account for the user's time zone, so that if the user is in New York, index 0 (midnight on the user's home turf) is displayed at China's offset (12 pm tomorrow, from the user's perspective).
I've been trying to think of a solution; here is a simple function with what I've come up with so far:
function offsetHourIndex(hourIndex, dataCenterTimeZone) {
  let userTime = new Date().setHours(hourIndex);
  return moment(userTime).tz(dataCenterTimeZone).hour();
}
How reliable would this approach be?
Your approach has a few problems:
You are assuming that the current date in the local time zone is the correct date for the target time zone. Most of the time, there are two dates active somewhere around the world. For example, 2019-04-02 04:00 in London is 2019-04-01 23:00 in New York. If you just take hour 4 from London but apply it to the current date in New York, you've created a whole new point in time, a day too early.
You assume there will be exactly 24 hours in every day. In time zones that have transitions for daylight saving time or changes in standard time, there may be more or fewer hours of local time on the day of the transition (see the sketch after this list).
In the case of a backward transition, there is a period of ambiguous local time. For example, when US Pacific Time moves from PDT to PST in November, the hour from 1:00-1:59 is repeated. If data from both hours are summarized into array element 1, then you will have higher than normal results for that hour. The opposite is true for forward transitions - you will have an hour with no data.
The time zone setting of the server can be a fickle thing. What if you change data centers? What if you move to the cloud? What if you are operating multiple data centers? What if a server administrator thinks all they are affecting by changing the system time zone is the readout on the taskbar or front panel, etc., and then it affects your application? In general one should avoid these things by never relying on the server's local time zone setting.
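Here is a small sketch of the second point, using moment-timezone (which your function already relies on); the date and zone are arbitrary examples:
const moment = require('moment-timezone');
const start = moment.tz('2019-11-03', 'America/Los_Angeles'); // the 2019 fall-back day
const end = start.clone().add(1, 'day');
console.log(end.diff(start, 'hours'));                        // 25 - this local day has 25 hours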
You can avoid all of these problems by basing everything on Coordinated Universal Time (UTC) - especially the array. Then you can be ignorant of any server time zone setting, and just base everything off the current UTC day, which is the same all over the world.
This will give you the local hour from the given UTC hour in your index:
var localHour = moment.utc({hour: hourIndex}).local().hour();
You do not need moment-timezone for the above recommendation.
However, if you really feel like you need to convert from a specific time zone to the browser local time, then you would use moment-timezone like this:
var localHour = moment.tz({hour: hourIndex}, timeZoneName).local().hour();
Note when you do this, you also have another problem - not every time zone is offset by a whole number of hours. For example, India uses UTC+05:30. There are many that are :30 and a few that are :45. By tracking hours only, you're not providing enough information to properly convert to the correct local hour. Your results may be off by one.
It seems reasonable, and the code should work as long as you have properly formatted inputs. I like the brevity and clarity of the function. Any reason you are concerned about reliability?
You might mention in your question that you are using the moment and moment-timezone packages here to derive your data via their functions (moment & tz) on this line of code:
return moment(userTime).tz(dataCenterTimeZone).hour();
Without the imports, your function may appear a bit cryptic to folks reading here; for example:
import * as moment from 'moment';
import 'moment-timezone';
I realize this is a commonly asked question, but I couldn't find any posts that point out the disadvantages of using/storing an offset for the user's time zone. Isn't this a better and more efficient way?
Long drop-down lists of time zones are not user friendly, and most such lists don't have all the cities anyway. They also require the user to specify their time zone. I feel it may be much better to simply detect it. In my case, my app is an ASP.NET Core app with a React front end, and it's pretty easy to capture the user's time zone offset via JavaScript.
Any reason why storing the offset of user's timezone is NOT a good idea?
Any reason why storing the offset of user's timezone is NOT a good idea?
Yes. Many. A time zone and an offset are not the same thing. A time zone represents a geographical area in which local time is aligned. A time zone may undergo several different changes in its offset from UTC. Some of which are regular (like daylight saving time), and some of which are irregular (like when a government changes its standard time or dst rules).
... In my case, I simply want to display all date time values in user’s current time zone so that they’re meaningful to the user.
Ok, so let's say you check the user's current time zone offset and it is UTC-7. So you apply that to some dates and times in your application and done - so you think. Except that you didn't take into account that the user is in California, and one of your dates is in December when the offset should be UTC-8.
So you try to correct for that, and work out the rules of "when I see -7, it might be -8 sometimes". Except now you have a user come along who is in Colorado, where it is -7 during the winter and -6 during the summer. Or another user from Arizona, where most of the state is in -7 for the whole year. How do you know which set of rules to follow? Without referencing an actual time zone, it cannot be done.
This gets even more complex worldwide. For example, the number of variations for UTC+2 is just crazy. Even for countries that switch between UTC+2 and UTC+3 - they don't all switch on the same dates or at the same time of day!
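If the goal is simply to display times meaningfully to the user, a minimal sketch is to capture the browser's IANA time zone ID (rather than the numeric offset) and store or send that instead:
const timeZone = Intl.DateTimeFormat().resolvedOptions().timeZone;
console.log(timeZone); // e.g. "America/Denver" - a zone ID carries all of its offset rules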
See also: The Problem with Time & Timezones - Computerphile (YouTube)
and the StackOverflow timezone tag wiki.
Wouldn't these values (getUTCSeconds and getUTCMilliseconds) always be the same as getSeconds and getMilliseconds?
The adjustment between local time and UTC is based on an offset specified as a number of milliseconds.
http://es5.github.com/#x15.9.1.7 says
15.9.1.7 Local Time Zone Adjustment
An implementation of ECMAScript is expected to determine the local time zone adjustment. The local time zone adjustment is a value LocalTZA measured in milliseconds which when added to UTC represents the local standard time. Daylight saving time is not reflected by LocalTZA. The value LocalTZA does not vary with time but depends only on the geographic location.
As to when this is useful, http://bugs.python.org/issue5288 explains an API problem that arose from assuming that timezone offsets were an integral number of minutes:
The Olson time zone database (used by most UNIX systems and Mac OS X) has a number of time zones with historic offsets that use second resolution (from before those locations switched to a rounded offset from GMT).
Once you get down to second resolution, not having a getUTCMilliseconds just seems an odd asymmetry.
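A minimal sketch of the effect, assuming a runtime whose time zone data still includes those old LMT entries and whose local zone is set to one of them (e.g. TZ=Europe/Kiev); results will differ elsewhere:
const d = new Date(1900, 0, 1, 12, 0, 0);  // local noon, when the offset was +02:02:04
console.log(d.getSeconds());               // 0  - local wall-clock seconds
console.log(d.getUTCSeconds());            // 56 - the offset is not a whole number of minutes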
Because there are leap seconds.
I want to use shell to get an epoch time, and later use JavaScript on an HTML page to get another epoch time, and then compute the difference between them. But I'm afraid that the epoch time may not be consistent between the two, which would make the difference useless. So I want to know: if I use shell and JavaScript to get the epoch time at the very same moment, will the results be the same or not? If not, how big is the difference? Thanks!
If you mean the number of seconds since the Unix epoch (1970-01-01T00:00:00Z), it is governed by that very definition. The only differences you should be able to see are caused by:
different times of invocation of the system call that returns it*);
unsynchronized clocks on different systems.
and possibly also:
unsynchronized clocks on different processor cores/physical processors;
implementation-dependent handling of the function that returns the current time (e.g. a JS engine might cache the value for a short time so as not to have to do the actual syscall, although I doubt this).
Depending on the time resolution you need, some of these are not a problem. My guess is that if you don't need granularity finer than 1 s, you should be more than fine (on the same machine).
*) Also note that on a single-core system you can't really get the same time (at least at ns resolution) from two different syscalls, unless the kernel caches it, simply because they have to happen one after another.
According to ECMA-262 section 15.9.1.1:
Time is measured in ECMAScript in milliseconds since 01 January, 1970 UTC. In time values leap seconds are ignored. It is assumed that there are exactly 86,400,000 milliseconds per day.
So, yes, every JavaScript implementation that adheres to the standard must return exactly the same values, taking care of any physical clock quirks by itself. Barring a wrongly set system clock, you will have the same value everywhere.
Note that the definition of "epoch" and "seconds since" in other languages and systems can differ (at the very least, most other systems use seconds rather than milliseconds, and some handle leap seconds differently), so you can't guarantee that the JS time, even divided by 1000, will match a timestamp from another platform or OS.
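A minimal sketch of the comparison, assuming both run against the same, correctly set system clock:
// shell:      date +%s   -> whole seconds since the Unix epoch
// JavaScript: truncate the millisecond value to whole seconds
const jsEpochSeconds = Math.floor(Date.now() / 1000);
console.log(jsEpochSeconds); // should match `date +%s` run at the same moment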