On 04/04/14 21:48, Saku Ytti wrote:
> On (2014-04-04 20:37 +1100), Julien Goodwin wrote:
>>> Meinberg[0] pegs rubidium at ±8 ms per year; if you need NTP to do, say, single-direction backbone SLA measurement, you want microsecond precision.
>> Those two statements don't go together.
> The point I was making is that a free-running rubidium is not accurate enough for QoS measurement of an IP core.
>> Free-running oscillators are fine as long as you know what the actual specs are (and have the unit measured to that).
>> Also, outside the HFT crowd most of us don't care about a few milliseconds (sure, an extra 50 ms can be a pain, but it's trivial to measure).
> Jitter in the backbone is low tens of microseconds; if you want to measure how that changes over time, a free-running rubidium is not going to cut it.
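
To put the two quoted figures on one scale, here is a back-of-the-envelope sketch in Python, assuming the ±8 ms/year figure is a constant worst-case frequency offset of a free-running unit (an assumption; real oscillators wander with temperature and age):

    # Convert the quoted holdover spec into accumulated time error.
    SECONDS_PER_YEAR = 365 * 24 * 3600

    drift_per_year = 8e-3                      # seconds/year, the quoted Meinberg figure
    frac_freq_error = drift_per_year / SECONDS_PER_YEAR
    print("fractional frequency error: %.1e" % frac_freq_error)   # ~2.5e-10

    for label, secs in (("hour", 3600), ("day", 86400), ("week", 7 * 86400)):
        print("offset after one %s: %.1f us" % (label, frac_freq_error * secs * 1e6))

    print("backbone jitter scale: ~20 us")     # "low tens of microseconds" above

At that spec a free-running unit walks off by roughly a microsecond per hour, so it crosses the tens-of-microseconds jitter scale within a day or so of holdover; whether that matters in practice is exactly what is being argued here.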
What you'd actually measure is a side effect of buffer depth at any point along the path. Show me anything short of a classic SONET transmission system (or perhaps sync-E) where you actually have something with jitter that low. And what, that sends IP packets, are you using to *measure* it? I can imagine an FPGA hard-clocked to a reference oscillator (even a TCXO would be good enough) doing it, but I'm not aware of any actual implementation of this. Again, short of the HFT world I just can't imagine how this could actually matter.
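
For what it's worth, a minimal sketch (Python, with hypothetical probe functions, not any particular vendor's implementation) of why the clock question leaks into one-way measurement at all:

    # One-way delay probe: the measured value is the true path delay PLUS
    # whatever offset exists between the sender's and receiver's clocks.
    import socket
    import struct
    import time

    def send_probe(sock, dst, seq):
        t_tx = time.time_ns()                      # sender's clock
        sock.sendto(struct.pack("!Iq", seq, t_tx), dst)

    def recv_probe(sock):
        data, _ = sock.recvfrom(64)
        seq, t_tx = struct.unpack("!Iq", data)
        t_rx = time.time_ns()                      # receiver's clock
        owd_ns = t_rx - t_tx                       # = path delay + clock offset
        return seq, owd_ns

    # Usage (two hosts): the receiver binds a UDP socket and loops over
    # recv_probe(); the sender calls send_probe(sock, (receiver_ip, port), i).

Any relative drift between the two boxes lands directly in owd_ns, so if they free-run apart by a microsecond an hour, a long-running "jitter trend" is largely a graph of the oscillators rather than of buffer depth.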