
triggering things with ethernet

Started by John Larkin April 17, 2023
On Mon, 17 Apr 2023 19:26:22 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

>On Mon, 17 Apr 2023 21:38:52 -0400, Joe Gwinn <joegwinn@comcast.net>
>wrote:
>
>>On Mon, 17 Apr 2023 17:49:33 -0700, John Larkin
>><jlarkin@highlandSNIPMEtechnology.com> wrote:
>>
>>>On Tue, 18 Apr 2023 01:39:18 +0200, Klaus Vestergaard Kragelund
>>><klauskvik@hotmail.com> wrote:
>>>
>>>>On 18-04-2023 01:38, Klaus Vestergaard Kragelund wrote:
>>>>> On 17-04-2023 21:18, John Larkin wrote:
>>>>>>
>>>>>> Suppose one were to send a broadcast packet from a PC to multiple
>>>>>> boxes over regular 100 Mbit ethernet, without any fancy time protocols
>>>>>> like PTP or anything. Each box would accept the packet as a trigger.
>>>>>> Assume a pretty small private network and a reasonable number of
>>>>>> switches to fan out to many boxes.
>>>>>>
>>>>>> Any guess as to how much the effective time trigger to various boxes
>>>>>> would skew? I've seen one estimate of 125 usec, for cameras, with
>>>>>> details unclear.
>>>>>>
>>>>> If you are connected to the PHY directly with a high-priority ISR, I
>>>>> think you can typically do less than 1 us.
>>>>>
>>>>> The problem is loading on the bus, or retransmissions; then it could
>>>>> be way longer.
>>>>>
>>>>> If you need precise timing, you can use real-time ethernet.
>>>>
>>>><https://www.cisco.com/c/dam/en/us/solutions/collateral/industry-solutions/white-paper-c11-738950.pdf>
>>>
>>>I was wondering what sort of time skew I might get with an ordinary PC
>>>shooting out commands and fairly ordinary receivers and some switches.
>>>The PC could send a message to each of the boxes in the field, like
>>>"set your voltage to 17 when I say GO" and things like that to various
>>>boxes. Then it could broadcast a UDP message to all the boxes: GO.
>>
>>With Windows <anything>, it's going to be pretty bad, especially when
>>something like Java is running. There will be startling gaps, into
>>the tens of milliseconds, sometimes longer.
>>
>
>All I want to know is the destination time skews after the PC sends
>the GO packet. Windows doesn't matter.
>
>My customers mostly use some realtime Linux, but that doesn't matter
>either.
Probably RHEL.
>>Which is not a criticism - Windows is intended for desktop
>>applications, not embedded realtime. So, wrong tool.
>>
>>With RHEL (Red Hat Enterprise Linux) with a few fancy extensions, it's
>>going to be on the order of hundreds of microseconds, so long as you
>>run the relevant applications at a sufficiently urgent realtime
>>priority and scheduling policy.
>>
>>To do better, one goes to partly hardware (with firmware) solutions.
>>
>>
>>>The boxes would have to be able to accept the usual TCP commands at
>>>unique IP addresses, and a UDP packet with some common IP address, and
>>>process the GO command rapidly, but I was wondering what the inherent
>>>time uncertainty might be with the ethernet itself.
>>
>>How good that stack is depends on what the host computer is optimized
>>for.
>>
>>
>>>I guess some switches are better than others, so if I found some good
>>>ones I could recommend them. I'd have to understand how a switch can
>>>handle a broadcast packet too. I think the GO packet is just sent to
>>>some broadcast address.
>>
>>Modern network switches are typically far faster than RHEL.
>
>I want numbers!
Ten years ago, 20 microseconds first-bit-in to last-bit-out latency
was typical, because the switch ingested the entire incoming packet
before even thinking about transmitting it on. It would wait until
the entire packet was in a buffer before trying to decode it.

Nowadays, cut-through handling is common, and transmission begins when
the header part has been received and can be parsed, so first-bit-in
to first-bit-out is more like a microsecond, maybe far less in the
bigger faster switches. These switches are designed to do wirespeed
in and out, so the buffering delay is proportional to a short bit of
the wire in question. There is less blockage due to big packets ahead
in line. It all depends.

But when compared with RHEL churn, at least 200 microseconds, the
switch is not important.

The modern equivalent of a "hub" is an "unmanaged switch". They are
just that, but are internally buffered. If one chooses a
gigabit-capable unit, the latency will be quite small. For instance,
consider a NETGEAR 5-Port Multi-Gigabit (2.5G) Ethernet Unmanaged
Switch:

<https://www.downloads.netgear.com/files/GDC/MS105/MS105_DS.pdf>

The datasheet specifies the latency for 64-byte packets as less than
2.5 microseconds. Again, this ought to suffice. Web price from
Netgear is $150. Slower units are cheaper, with increased buffering
latency.

The unspoken assumption in the above is that the ethernet network is
lightly loaded, with few big packets getting underfoot.

Also unmentioned is that non-blocking switches are not required to
preserve packet reception order. If the key packets are spaced far
enough apart, this won't cause reordering.
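[To put rough numbers on store-and-forward versus cut-through, a
back-of-envelope sketch in C. This computes serialization times only;
the rates and frame sizes are illustrative assumptions, not
measurements of any particular switch.]

    /* Serialization delay: how long a frame occupies the wire.
       A store-and-forward switch pays this once per hop before it can
       begin transmitting; cut-through forwards after roughly the
       header plus the address lookup. */
    #include <stdio.h>

    int main(void)
    {
        const double rates[]  = { 100e6, 1e9, 2.5e9 };   /* bits/s */
        const int    frames[] = { 64, 1500, 9000 };      /* bytes  */

        for (int r = 0; r < 3; r++)
            for (int f = 0; f < 3; f++)
                printf("%4d-byte frame at %4.0f Mbit/s: %8.2f us on the wire\n",
                       frames[f], rates[r] / 1e6,
                       frames[f] * 8.0 / rates[r] * 1e6);
        return 0;
    }

[A 64-byte GO frame at 100 Mbit/s takes 5.12 us just to serialize, so
a store-and-forward hop costs at least that much per switch;
cut-through only waits for the header.]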
>>>The fancy time protocols, ethercat and PTP and TSN (and others!) are
>>>complex on both ends. I might invent a new one, but that's another
>>>story.
>>
>>It's been done, many times. Guilty. But PTP over ethernet is
>>sweeping all that stuff away.
The wider world is going to PTPv2.1, which provides tens of
nanoseconds (random jitter) and maybe 50-100 nanoseconds average
offset error (can be plus or minus, depending on details of the cable
plant et al). But all this is quite complex and expensive. But in
five or ten years, it'll be common and dirt cheap.

Joe Gwinn
On 4/18/2023 5:45 AM, Martin Brown wrote:
> My suggestion would be to measure it experimentally on a modest sized
> configuration with the depth of switches you intend to use and have the
> triggered devices send back a step function to the master box on an
> identical length of coax. That should give you a good idea of the delay
> per additional switch added and how the jitter increases with depth.
Put together a little bit of *hardware* (small FPGA) with N inputs.
Each input trips a latch; the first input starts a timer. When the
timer expires, the number of tripped latches is totalled (logged?),
the latches are reset and the timer is reset. Adjust the duration of
the timer to set the size of the "window" to the smallest that allows
ALL latches to be reliably tripped.

Note that this ignores the effect of latency between trigger issuance
and reception; it just measures how *tightly* the arriving pulses are
clustered.

Let it run for days to reassure yourself. Then, prove to yourself that
this behavior is repeatable -- in the presence of other traffic, etc.
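[A minimal software model of that latch-and-window bookkeeping,
assuming the FPGA timestamps each input edge. The arrival times below
are made up, and the real logic of course lives in the FPGA fabric;
this just pins down the arithmetic.]

    #include <stdio.h>

    #define N_INPUTS 8

    /* One trial: arrival time of each input edge, in nanoseconds,
       as the FPGA would timestamp them. */
    static long spread_ns(const long t[N_INPUTS])
    {
        long lo = t[0], hi = t[0];
        for (int i = 1; i < N_INPUTS; i++) {
            if (t[i] < lo) lo = t[i];
            if (t[i] > hi) hi = t[i];
        }
        return hi - lo;
    }

    int main(void)
    {
        /* Made-up arrival times for two trials of one broadcast each. */
        long trials[2][N_INPUTS] = {
            { 120, 340,  95, 410, 200, 310, 150, 275 },
            { 130, 360, 105, 455, 190, 305, 160, 280 },
        };
        long worst = 0;

        /* In use this loop runs for days; the window that reliably
           catches every latch is the worst spread ever seen. */
        for (int k = 0; k < 2; k++) {
            long s = spread_ns(trials[k]);
            if (s > worst) worst = s;
        }
        printf("window must be at least %ld ns\n", worst);
        return 0;
    }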
> It will obviously depend a lot on the software and stack at the
> receiver - if you can control that and/or put it into a quick response
> state then you might be able to do quite well. That or have a means to
> calibrate out the systematic delays using the same length of coax as a
> reference. Depends a lot on good behaviour from the switches so you
> might have to be careful about which chipsets you specify.
If you want small times with small variances, then you'll code on bare
metal. If this has to coexist with some other (e.g., FOSS) code,
you'll have to sort out how the two might potentially interact (e.g.,
the packet scheduler will obviously need a tweaking).

Run ping(1) for an hour and note the statistics regarding the echoes.
This typically drags the stack into the picture. Vary the length of
cable connecting the *two* devices. Add another switch in the chain.
Add some background chatter (e.g., if someone else wants to broadcast
a datagram, that will tie up ALL ports just like your broadcast
would).

Similarly, set up a node as an NTP master. Periodically, send messages
to each node (using a "reliable" protocol), tallying the number of
tries and the maximum delay for all nodes to have been successfully
notified of your "scheduled event". Then, let each toggle that wire to
that same bit of (FPGA) hardware. Compare the distribution to the
distribution of times between NTP-sync'ed slaves.

Lots of ways to get information from commodity hardware. Then, figure
out how to *beat* those figures (or, SETTLE for them).

At the very least, you'll likely encounter many of the same problems
that customers will encounter: why am I not getting a reply from this
device? why is the delay so long? why are packets being dropped? how
did these runts come into the picture? yikes! where did that jumbo
frame come from?? etc.
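[In the same spirit, a crude round-trip prober, offered as a sketch
only: POSIX sockets, no timeouts or error handling, and it assumes
something on the far end echoes UDP -- the classic port-7 echo service
or a few lines of your own. The target address is a placeholder.]

    /* Crude UDP round-trip prober: min/avg/max over COUNT probes. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    #define COUNT 1000

    static double now_us(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
    }

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in peer = { 0 };
        char buf[64] = "probe";
        double min = 1e12, max = 0, sum = 0;

        peer.sin_family = AF_INET;
        peer.sin_port   = htons(7);              /* inetd echo service */
        inet_pton(AF_INET, "192.168.1.50", &peer.sin_addr);  /* target */
        connect(s, (struct sockaddr *)&peer, sizeof peer);

        for (int i = 0; i < COUNT; i++) {
            double t0 = now_us();
            send(s, buf, sizeof buf, 0);
            recv(s, buf, sizeof buf, 0);         /* blocks for the echo */
            double rtt = now_us() - t0;
            if (rtt < min) min = rtt;
            if (rtt > max) max = rtt;
            sum += rtt;
            usleep(10000);                       /* 10 ms between probes */
        }
        printf("rtt us: min %.1f  avg %.1f  max %.1f\n",
               min, sum / COUNT, max);
        return 0;
    }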
On Tue, 18 Apr 2023 16:42:19 -0400, Joe Gwinn <joegwinn@comcast.net>
wrote:

<snip>

>Nowadays, cut-through handling is common, and transmission begins when
>the header part has been received and can be parsed, so first-bit-in
>to first-bit-out is more like a microsecond, maybe far less in the
>bigger faster switches.
>
>The datasheet specifies the latency for 64-byte packets as less than
>2.5 microseconds. Again, this ought to suffice.
That's encouraging. Thanks.

I like the idea of the switch forwarding the packet in microseconds,
before it's actually over.

A short UDP packet should get through fast.
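[For what it's worth, the GO sender itself can be tiny. A minimal
sketch using POSIX sockets; the port number and payload are arbitrary
placeholders.]

    /* Broadcast a short "GO" datagram to every box on the segment. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        int on = 1;
        struct sockaddr_in dst = { 0 };

        /* Broadcast needs explicit permission on the socket. */
        setsockopt(s, SOL_SOCKET, SO_BROADCAST, &on, sizeof on);

        dst.sin_family      = AF_INET;
        dst.sin_port        = htons(5555);              /* placeholder */
        dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);  /* 255.255.255.255 */

        sendto(s, "GO", 2, 0, (struct sockaddr *)&dst, sizeof dst);
        close(s);
        return 0;
    }

[The receivers just bind the same port and block in recvfrom(); the
interesting number is how fast each box can act once the datagram
lands.]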
>The unspoken assumption in the above is that the ethernet network is
>lightly loaded, with few big packets getting underfoot.
My users usually have a private network for data acquisition and control, and I can tell them what the rules are.
<snip>

>The wider world is going to PTPv2.1, which provides tens of
>nanoseconds (random jitter) and maybe 50-100 nanoseconds average
>offset error (can be plus or minus, depending on details of the cable
>plant et al). But all this is quite complex and expensive. But in
>five or ten years, it'll be common and dirt cheap.
I don't need nanoseconds for power supplies and motors. If I were to
try to phase coordinate, say, 400 Hz AC sources, 10s of usec would be
nice.

The clock on the Raspberry Pi is a cheap crystal and is not tunable.
It might be interesting to do a DDS sort of thing to make a variable
that is a local calibrated time counter. We could occasionally send
out a packet to declare the time of day, and the little boxes could
both sync to that and tweak their DDS cal factors to stay pretty close
until the next correction. All software.
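[One possible shape for that cal-factor bookkeeping, as a sketch; the
linear clock model and every name here are assumptions. Each
time-of-day broadcast yields a (master time, local counter) pair, and
the ratio of deltas between successive pairs is the rate correction.]

    /* On each time-of-day broadcast, update the local clock model:
           master_time ~= offset + rate * local_ticks
       rate absorbs the crystal's ppm error; the offset is re-anchored
       at every broadcast. */
    #include <stdint.h>

    static double   rate = 1.0;     /* master ns per local tick      */
    static double   offset_ns;      /* master time at last anchor    */
    static uint64_t anchor_ticks;   /* local counter at last anchor  */

    void on_time_packet(uint64_t master_ns, uint64_t local_ticks)
    {
        static uint64_t prev_master, prev_local;

        if (prev_local)             /* need two packets for a rate */
            rate = (double)(master_ns - prev_master) /
                   (double)(local_ticks - prev_local);
        prev_master = master_ns;
        prev_local  = local_ticks;

        offset_ns    = (double)master_ns;
        anchor_ticks = local_ticks;
    }

    double now_master_ns(uint64_t local_ticks)
    {
        return offset_ns + rate * (double)(local_ticks - anchor_ticks);
    }

[Note that re-anchoring the offset at each packet steps the local
timescale; folding the correction into the rate instead keeps it
monotonic -- essentially the phase-continuity trick mentioned below.]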
On 4/18/2023 8:02 AM, Dimiter_Popoff wrote:
>> [I think mail is hosed, again  :< ]
>
> I did email you earlier today (nothing worth a second thought if it
> gets lost).
Not here. (spam or otherwise)
On 4/19/2023 0:41, Don Y wrote:
> On 4/18/2023 8:02 AM, Dimiter_Popoff wrote:
>>> [I think mail is hosed, again  :< ]
>>
>> I did email you earlier today (nothing worth a second thought if it
>> gets lost).
>
> Not here.  (spam or otherwise)
Sent 3 copies: one exact, one with your address within <> (originally sent without these as usual by my mistake), and one like the second but Cc-ed to an address of mine. I got the Cc.
On 2023-04-18 17:40, John Larkin wrote:
<snip>

> The clock on the Raspberry Pi is a cheap crystal and is not tunable.
> It might be interesting to do a DDS sort of thing to make a variable
> that is a local calibrated time counter. We could occasionally send
> out a packet to declare the time of day, and the little boxes could
> both sync to that and tweak their DDS cal factors to stay pretty close
> until the next correction. All software.
There's an algo for that in the guts of NTP since before the Flood, I
believe. It even dorks the cal factor to ensure phase continuity in
the timer as it slews to the new offset value.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
On 4/18/2023 3:19 PM, Dimiter_Popoff wrote:
> On 4/19/2023 0:41, Don Y wrote:
>> On 4/18/2023 8:02 AM, Dimiter_Popoff wrote:
>>>> [I think mail is hosed, again  :< ]
>>>
>>> I did email you earlier today (nothing worth a second thought if it
>>> gets lost).
>>
>> Not here.  (spam or otherwise)
>
> Sent 3 copies: one exact, one with your address within <> (originally
> sent without these as usual by my mistake), and one like the second
> but Cc-ed to an address of mine.
> I got the Cc.
I *just* received these two -- timestamped 4:37AM. The first must still be stuck in the ether... I'll reply a bit later (we're watching the last few episodes...)
On Tue, 18 Apr 2023 14:40:35 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

<snip>

>That's encouraging. Thanks.
Welcome.
>I like the idea of the switch forwarding the packet in microseconds,
>before it's actually over.
>
>A short UDP packet should get through fast.
Yes. The shortest UDP packet on the wire is ~64 bytes -- the Ethernet minimum frame size, headers plus padding.
>>The unspoken assumption in the above is that the ethernet network is
>>lightly loaded, with few big packets getting underfoot.
>
>My users usually have a private network for data acquisition and
>control, and I can tell them what the rules are.
Ahh. The usual dodge is to have a "realtime" LAN (no big trucks or coal trains allowed), plus an everything-goes LAN where latency is uncontrolled. These two LANs are logical, and may both be created by partitioning one or more network switches, so long as those switches are hunky enough.
<snip>

>I don't need nanoseconds for power supplies and motors. If I were to
>try to phase coordinate, say, 400 Hz AC sources, 10s of usec would be
>nice.
OK.
>The clock on the Raspberry Pi is a cheap crystal and is not tunable.
>It might be interesting to do a DDS sort of thing to make a variable
>that is a local calibrated time counter. We could occasionally send
>out a packet to declare the time of day, and the little boxes could
>both sync to that and tweak their DDS cal factors to stay pretty close
>until the next correction. All software.
I don't know that Raspberry Pi units are all that good as clocks.

The logic clocks in computers are pretty temperature-sensitive, but
one can certainly implement a kind of DDS.

Phil H mentioned the antediluvian frequency-lock-loop algorithm from
NTP, which I have in the past adapted for a like purpose.

Basically, one counts the DDS output cycles between 1PPS pips and
changes the DDS tuning word to steer towards zero frequency error.
But this is done like steering a sailboat - steer to a place far ahead
and readjust far slower than the response time of the boat to the
helm. If one gets too eager, the boat swings wildly instead of
proceeding steadily towards the distant objective.

Joe Gwinn
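[In code, that low-gain steering is only a few lines. A sketch: the
nominal cycle count, the gain, and the tuning-word scaling are all
assumptions to be tuned against the real loop dynamics.]

    /* Frequency-lock steering of a software DDS from 1PPS pips. */
    #include <stdint.h>

    #define CYCLES_NOMINAL 1000000L   /* expected DDS cycles per pip  */
    #define GAIN           16         /* apply 1/16 of each reading   */

    /* Tuning-word step equivalent to one cycle/second of error. */
    #define WORD_PER_CYCLE (0x80000000uL / CYCLES_NOMINAL)

    static uint32_t tuning_word = 0x80000000u;   /* nominal rate */

    /* Called once per 1PPS pip with the DDS cycles counted since the
       last pip.  Only a fraction of the correction is applied -- the
       low "helm gain" that keeps the loop from hunting. */
    void on_pps(long cycles_this_second)
    {
        long err = cycles_this_second - CYCLES_NOMINAL;  /* + = fast */

        tuning_word -= (uint32_t)((err * (long)WORD_PER_CYCLE) / GAIN);
    }

[A larger GAIN divisor steers more gently; making it too small is the
"too eager" helmsman that sets the boat swinging.]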
On Wed, 19 Apr 2023 18:46:53 -0400, Joe Gwinn <joegwinn@comcast.net>
wrote:

<snip>

>>The clock on the Raspberry Pi is a cheap crystal and is not tunable.
>
>I don't know that Raspberry Pi units are all that good as clocks.
No, that's the point of doing a DDS sort of correction to the event timebase. The Pico has a crystal and two caps, the classic CMOS oscillator, and I'd suspect it could be off by 100 PPM maybe.
>The logic clocks in computers are pretty temperature-sensitive, but
>one can certainly implement a kind of DDS.
>
>Phil H mentioned the antediluvian frequency-lock-loop algorithm from
>NTP, which I have in the past adapted for a like purpose.
>
>Basically, one counts the DDS output cycles between 1PPS pips and
>changes the DDS tuning word to steer towards zero frequency error.
>But this is done like steering a sailboat - steer to a place far ahead
>and readjust far slower than the response time of the boat to the
>helm. If one gets too eager, the boat swings wildly instead of
>proceeding steadily towards the distant objective.
It deserves to be simulated. But if it seesaws the effective clock frequency by some tens of PPM yet is long-term correct, that would do.
On Tue, 18 Apr 2023 14:40:35 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

<snip>

>>The datasheet specifies the latency for 64-byte packets as less than
>>2.5 microseconds.
The 64-byte minimum Ethernet frame size dates from 10Base5 vampire-tap Ethernet, so that collisions could be reliably detected.
>That's encouraging. Thanks.
>
>I like the idea of the switch forwarding the packet in microseconds,
>before it's actually over.
>
>A short UDP packet should get through fast.
The problem is that if another big frame has already started
transmitting when the "GO" frame arrives, that frame is transmitted
fully before the GO packet. Things get catastrophic if 9-Kbyte jumbo
frames are allowed on the network: at 100 Mbit/s, a 1500-byte frame
ties up the wire for 120 us, and a 9-Kbyte jumbo for 720 us. IIRC the
maximum IP packet size can be limited to 576 bytes, reducing the
maximum Ethernet frame size from 1500 to under 600 bytes (about 46 us
of blocking).

<snip>
>The clock on the Raspberry Pi is a cheap crystal and is not tunable.
>It might be interesting to do a DDS sort of thing to make a variable
>that is a local calibrated time counter. We could occasionally send
>out a packet to declare the time of day, and the little boxes could
>both sync to that and tweak their DDS cal factors to stay pretty close
>until the next correction. All software.
If the crystal has reasonable short-term stability but the frequency
is seriously inaccurate, some DDS principle can be applied.

Assume the crystal drives a timer interrupt, say nominally every
millisecond, and the ISR updates a nanosecond counter. If it has been
determined that the ISR is actually activated every 1.001234
milliseconds, the ISR adds 1001234 to the nanosecond counter. Each
time the million rolls over, a new millisecond period is declared.
Using two or more slightly different adders, even fractional
nanoseconds can be counted.

Of course, using a binary counter with, say, 0x8000 for no frequency
error would simplify things.
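[A sketch of that ISR in the binary flavor (the 0x8000 idea), keeping
the adder in 48.16 fixed-point nanoseconds so the fractional
nanoseconds accumulate for free. The 1.001234 ms measured period is
the example figure from above; the names are placeholders.]

    #include <stdint.h>

    /* Measured tick period, 48.16 fixed-point nanoseconds (low 16
       bits are the fraction; 0x10000 would mean exactly 1 ns).
       Calibration just rewrites this one variable. */
    static uint64_t tick_adder = (uint64_t)1001234 << 16;

    static uint64_t time_ns16;   /* running time, 48.16 fixed point  */
    static uint32_t millis;      /* whole milliseconds, for readers  */

    void timer_isr(void)         /* fires every nominal ~1 ms */
    {
        static uint32_t last_ms;

        time_ns16 += tick_adder;

        /* "Each time the million changes, declare a new millisecond." */
        uint32_t now_ms = (uint32_t)((time_ns16 >> 16) / 1000000u);
        if (now_ms != last_ms) {
            last_ms = now_ms;
            millis  = now_ms;
        }
    }

[The counter itself never steps; only the adder changes when a new
calibration arrives, which keeps the local timescale monotonic.]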