This June 30th, 23:59:59 UTC will be followed immediately by 23:59:60 UTC.

What will /your/ devices do?

http://www.marketplace.org/topics/world/leap-second-deep-space-and-how-we-ke...

Cheers,
-- jra
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
On 1/25/2015 at 9:37 AM Jay Ashworth wrote:
| This June 30th, 23:59:59 UTC will be followed immediately by 23:59:60 UTC.
|
| What will /your/ devices do?
=============
I've always wondered why the leap second is such a big issue, and why it's handled as it is.

In UNIX, for instance, time is measured as the number of seconds since the UNIX epoch. IMO, the counting of seconds should not be "adjusted" unless there's a time warp of some sort. The leap-second adjustment should live in the display of the time, i.e., similar to how time zones are handled.

FWIW
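[A minimal sketch of that display-time idea, in Python. The table entries and names below are purely illustrative, not the real IERS data; a real table needs one entry per leap second announced since 1972.]

    import time

    # Cumulative leap-second offset, keyed by the true-second count at
    # which each offset took effect. Illustrative values, NOT the real
    # IERS table.
    LEAP_TABLE = [
        (63072010, 10),      # around 1972-01-01 (illustrative)
        (1435708826, 26),    # just after the 2015-06-30 leap (illustrative)
    ]

    def display_utc(true_seconds):
        """Render a monotonic SI-second count as civil UTC time.

        The leap table is consulted only here, at display time,
        exactly the way a time-zone offset would be."""
        offset = 0
        for effective_at, cumulative in LEAP_TABLE:
            if true_seconds >= effective_at:
                offset = cumulative
        return time.strftime("%Y-%m-%d %H:%M:%S UTC",
                             time.gmtime(true_seconds - offset))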
Hi,

Java had issues with 100% CPU usage when NTP was running during the additional second in 2012:
http://blog.wpkg.org/2012/07/01/java-leap-second-bug-30-june-1-july-2012-fix...

Google did something different to get the extra second in:
http://googleblog.blogspot.de/2011/09/time-technology-and-leaping-seconds.ht...

Most devices probably don't even know about the coming leap second, as that would require a firmware upgrade.

Karsten
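[Google's post describes "smearing" the extra second across a window of NTP responses so no client ever sees 23:59:60. A toy version of the idea follows; their 2011 smear actually used a cosine-shaped curve, and the linear ramp and 20-hour window here are just for illustration.]

    LEAP_EPOCH = 1435708800   # 2015-07-01 00:00:00 UTC, just after the leap
    WINDOW = 20 * 3600        # illustrative smear window, in seconds

    def smeared_offset(unix_time):
        """Fraction of the leap second already applied at unix_time.

        Served time = true time minus this offset, so the clock runs
        imperceptibly slow through the window instead of stepping."""
        start = LEAP_EPOCH - WINDOW
        if unix_time <= start:
            return 0.0
        if unix_time >= LEAP_EPOCH:
            return 1.0
        return (unix_time - start) / WINDOW   # linear ramp, 0 to 1 second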
I think devices would likely be fine, unless they're concerned with reconciling a leap-second-updated NTP source and one that's not. Who wins?

For most NTP clients I would guess they're slaves to whatever feed and just 'believe' whatever they're told. (Sounds like a security hole waiting for high-frequency-trader types, q.v. http://www.theverge.com/2013/10/3/4798542/whats-faster-than-a-light-speed-tr... )

Can't we just subscribe to a leap-smeary NTP feed if we care to have no big leap (I don't mind)? Isn't NIST offering this?

/kc
-- Ken Chase - math@sizone.org Toronto
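[The 'believe whatever they're told' part is easy to see on the wire: the first byte of every NTP packet carries a two-bit Leap Indicator (0 = no warning, 1 = insert a second, 2 = delete one, 3 = unsynchronized), and a plain unauthenticated client simply trusts it. A minimal SNTP query that reads the field; the server name is just an example.]

    import socket

    def query_leap_indicator(server="pool.ntp.org"):
        """Send one SNTP request and return the server's Leap Indicator."""
        packet = b"\x1b" + 47 * b"\0"   # LI=0, VN=3, Mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(5)
            s.sendto(packet, (server, 123))
            data, _ = s.recvfrom(512)
        return data[0] >> 6             # top two bits of the first byte

    if __name__ == "__main__":
        print("Leap Indicator:", query_leap_indicator())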
I spoke on time hacking and NTP three years ago at ShmooCon.
In article <201501251019290550.005C05BC@smtp.24cl.home> you write:
I've always wondered why this is such a big issue, and why it's done as it is.
A lot of people don't think the current approach is so great.
In UNIX, for instance, time is measured as the number of seconds since the UNIX epoch. imo, the counting of the number of seconds should not be "adjusted", unless there's a time warp of some sort. The leap second adjustment should be in the display of the time, i.e., similar to how time zones are handled.
It shares with time zones the problem that you cannot tell what the UNIX timestamp will be for a particular future time. If you want to have something happen at, say, July 2 2025 at 12:00 UTC, you can guess what the timestamp for that will be, but if there's another leap second or two, you'll be wrong.

Life would be a lot easier for everyone except a handful of astronomers if we forgot about leap seconds and adjusted by a full minute every couple of centuries.
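[To make that concrete, a sketch of the ambiguity under the proposed true-second count; POSIX time itself dodges this by pretending leap seconds don't exist, and the numbers are illustrative.]

    import calendar

    # The POSIX-style count for 2025-07-02 12:00:00 UTC is deterministic,
    # precisely because POSIX ignores leap seconds:
    civil = calendar.timegm((2025, 7, 2, 12, 0, 0))

    # A clock counting true SI seconds must also add every leap second
    # announced between now and then -- a number nobody knows in advance
    # (the seconds already accumulated to date are a constant, omitted):
    for future_leaps in (0, 1, 2):
        print("with", future_leaps, "more leap second(s):", civil + future_leaps)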
On 25 Jan 2015 17:29:25 +0000, "John Levine" said:
It shares with time zones the problem that you cannot tell what the UNIX timestamp will be for a particular future time. If you want to have something happen at, say, July 2 2025 at 12:00 UTC, you can guess what the timestamp for that will be, but if there's another leap second or two, you'll be wrong.
It shares another problem - that doing calculations across a boundary is difficult. If you have a recurring timer that pops at 23:58:30 on June 30, and you want another one in 2 minutes, do you want the next pop at 00:00:30 - or 00:00:29? The operating system can't tell whether the desired semantic is "as close to every 120 elapsed seconds as possible" or "as close to the half-minute tick as possible".

And of course doing interval math across several years where you cross multiple leap seconds is even more problematic - for some corner cases that have an endpoint near midnight, doing a naive "timestamp in seconds +/- 86400 * number of days" can land you on the wrong *day*, with possibly serious consequences...
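[Here is that corner case sketched on a clock that truly counts every second; the leap-second bookkeeping constants are illustrative.]

    import calendar, time

    CUM_LEAPS_BEFORE = 25   # cumulative leaps before 2015-06-30 23:59:60
    CUM_LEAPS_AFTER = 26    # ... and just after it (illustrative values)

    # Midnight starting June 30 on a true-second clock:
    start_true = calendar.timegm((2015, 6, 30, 0, 0, 0)) + CUM_LEAPS_BEFORE

    # Naive "+1 day" -- but June 30, 2015 had 86401 seconds:
    naive_next_midnight = start_true + 86400

    civil = naive_next_midnight - CUM_LEAPS_AFTER
    print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(civil)))
    # -> 2015-06-30 23:59:59: one second short of midnight, i.e. the
    #    arithmetic lands on the wrong *day*.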
The quote from the GNU coreutils manual on Date Input Formats comes to mind:

"Our units of temporal measurement, from seconds on up to months, are so complicated, asymmetrical and disjunctive so as to make coherent mental reckoning in time all but impossible. Indeed, had some tyrannical god contrived to enslave our minds to time, to make it all but impossible for us to escape subjection to sodden routines and unpleasant surprises, he could hardly have done better than handing down our present system. It is like a set of trapezoidal building blocks, with no vertical or horizontal surfaces, like a language in which the simplest thought demands ornate constructions, useless particles and lengthy circumlocutions. Unlike the more successful patterns of language and science, which enable us to face experience boldly or at least level-headedly, our system of temporal calculation silently and persistently encourages our terror of time. ... It is as though architects had to measure length in feet, width in meters and height in ells; as though basic instruction manuals demanded a knowledge of five different languages. It is no wonder then that we often look into our own immediate past or future, last Tuesday or a week from Sunday, with feelings of helpless confusion." -Robert Grudin, Time and the Art of Living.

http://www.gnu.org/software/coreutils/manual/coreutils.html#Date-input-forma...
/kc -- Ken Chase - math@sizone.org Toronto
On 01/25/2015 10:15 AM, Valdis.Kletnieks@vt.edu wrote:
It shares another problem - that doing calculations across a boundary is difficult. If you have a recurring timer that pops at 23:58:30 on June 30, and you want another one in 2 minutes. do you want a timer that the next pop is at 00:00:30 - or 00:00:29? The operating system can't tell whether the desired semantic is "as close to every 120 elapsed seconds as possible" or "as close to the half-minute tick as possible".
I have automation code for a "classroom" full of Cisco routers, and the way I deal with that sort of issue is to say that anything that has to be synchronized to the wall clock uses cron(8), but for actions tied to a fixed interval I use sleep(3) and let the operating system sort out the issues, if any. I did this to sidestep the issues with Daylight Saving Time (DST) cusps; it also works for leap seconds, as the OS interval timing is not tied to the real-time clock.

Funny you should mention ticks versus elapsed time. In every specification I've written since 1970, I've differentiated between the two. I got started doing that because in the computers of that era the real-time clock was tied to power-line frequency, while the interval timers were based on counts of a crystal oscillator. The crystal was good for 1000 parts per million, adequate for short intervals. The power-line clock was pulled back and forth by the power company, which would fine-tune the time so electric clocks would stay at the right time long-term, at the expense of short-term jitter.

Today's computers don't use clocks derived from 50- or 60-hertz power-line frequency. The last computer I remember seeing with such a clock was the IBM System/360. The System/370 used a motor-generator set for the power supply, so it had to get its real-time clock time source another way.
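[The same split, sketched in Python: wall-clock work goes to cron (e.g. a crontab line like "30 4 * * * /usr/local/bin/report"), while fixed-interval work paces itself against the monotonic clock, which ticks straight through leap seconds, DST cusps, and NTP steps because it is not tied to the real-time clock. Function and interval names are examples only.]

    import time

    INTERVAL = 120.0    # elapsed seconds, not wall-clock ticks

    def run_every_interval(task):
        """Run task() every INTERVAL elapsed seconds, indefinitely."""
        next_run = time.monotonic()
        while True:
            task()
            next_run += INTERVAL
            # Sleep relative to the monotonic clock: steps in the
            # wall clock cannot shift this cadence.
            time.sleep(max(0.0, next_run - time.monotonic()))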
On Sun, Jan 25, 2015 at 02:24:52PM -0800, Stephen Satchell wrote:
Today's computers don't use clocks derived from 50- or 60-hertz power-line frequency. The last computer I remember seeing with such a clock was the IBM System/360. The System/370 used a motor-generator set for the power supply, so it had to get its real-time clock time source another way.
The 360/95 and /91 also used 400 Hz from a motor/gen. Water cooled, too. One of my fondest war stories is when the power was turned off for July 4th weekend but the water was left on.
On Jan 25, 2015, at 6:06 PM, Barney Wolff <barney@databus.com> wrote:
On Sun, Jan 25, 2015 at 02:24:52PM -0800, Stephen Satchell wrote:
Today's computers don't use clocks derived from 50- or 60-hertz power-line frequency. The last computer I remember seeing with such a clock was the IBM System/360. The System/370 used a motor-generator set for the power supply, so it had to get its real-time clock time source another way.
The 360/95 and /91 also used 400 Hz from a motor/gen. Water cooled, too. One of my fondest war stories is when the power was turned off for July 4th weekend but the water was left on.
That made the transformers smaller/cooler and more efficient. I seem to remember a 195 as well but maybe it is just CRS.
On Sun, Jan 25, 2015 at 06:42:51PM -0500, TR Shaw wrote:
That made the transformers smaller/cooler and more efficient. I seem to remember a 195 as well but maybe it is just CRS.
Google says the 360/195 did exist. But my baby was the 360/95, where the first megabyte of memory was flat-film at 60ns, which made it faster than the 195 for some things. It was incredibly expensive to build - we heard rumors of $30 million in 1967 dollars, and sold to NASA at a huge loss, which is why there were only two built. I used to amuse myself by climbing into the flats memory cabinet, and was amused again some years later when I could have ingested a megabyte without harm. Ours sat directly above Tom's Restaurant, of Seinfeld fame. Very early climate modeling was done on that machine, along with a lot of astrophysics.
Barney Wolff <barney@databus.com> wrote:
On Sun, Jan 25, 2015 at 06:42:51PM -0500, TR Shaw wrote:
That made the transformers smaller/cooler and more efficient. I seem to remember a 195 as well but maybe it is just CRS.
Google says the 360/195 did exist. But my baby was the 360/95, where the first megabyte of memory was flat-film at 60ns, which made it faster than the 195 for some things. ...
The /95 was a /91 with a megabyte of thin film memory, which was both much faster than core (120 vs 780 ns cycle time) and much more expensive (7c rather than 1.6c per bit.) The /195 was a /91 reimplemented in slightly faster logic with a 54ns rather than 60ns cycle time, and a cache adapted from the /85. I can easily believe that for programs that didn't cache well, the /95 with the fast memory would be faster. IBM lost money on all of them and eventually stopped trying to compete with CDC in that niche. See alt.folklore.computers (yes, usenet, reports of its death are premature) for endless discussion of topics like this. R's, John
I'm pretty sure University College London (UCL) had a 360/195 on the net in the late 1970s.

I remember it had open login to, I guess it was, TSO? I'd play with it but couldn't really figure out anything interesting to do, lacking all documentation and, by and large, motivation other than that it was kind of cool in like 1978 to be typing at a computer in London, even if it was just saying "do something or go away!" I guess you had to be there.

-Barry Shein

The World | bzs@TheWorld.com | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD | Dial-Up: US, PR, Canada
Software Tool & Die | Public Access Internet | SINCE 1989 *oo*
participants (11)

- Barney Wolff
- Barry Shein
- Jay Ashworth
- Joe Klein
- John Levine
- Karsten Elfenbein
- Ken Chase
- Mike.
- Stephen Satchell
- TR Shaw
- Valdis.Kletnieks@vt.edu