
triggering things with ethernet

Started by John Larkin April 17, 2023
On 4/17/2023 2:40 PM, John Walliker wrote:
>> Hmmm, interesting question. Since switches do "buffer then transmit"
>> this might go either way; if the same copy is retransmitted to
>> all rj-45s this can be done with practically no skew (apart from
>> that at the receiving side, clock phases, software latencies etc.).
>> But I doubt many - if any - switches do that, my bet would be that they
>> transmit one cable at a time.
>> On the old coaxial Ethernet the answer is obvious but not much
>> use.
>
> Whatever the skew, it would surely be lower with 1Gbit/s ethernet. If the
That doesn't follow. The delay to any outgoing port will be governed by the traffic flowing *to* that port/device. So, for a set of N ports (each with different traffic queued), one port may be "ready" at a given moment while each of the others can be in varying states of transmitting packets. Note, also, that the switch only sets the *upper* bound for throughput; if nodes have negotiated lower rates (deliberately or because of, e.g., duplexing issues), that will further limit the throughput TO THAT NODE.
> broadcast packets are sent out sequentially then they will be sent faster
> if the data rate is ten times higher. The cost is hardly any higher and the
> maximum range is the same.
>
> John
>
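To put rough numbers on the wire-serialization part of that argument, a back-of-napkin sketch (it assumes a maximum-size 1518-byte frame plus 8 bytes of preamble/SFD and a 12-byte inter-frame gap; adjust to taste):

# Time to clock one maximum-size Ethernet frame onto the wire -- roughly the
# skew added for every full-size frame queued ahead of the GO on a given port.
FRAME_BITS = (1518 + 8 + 12) * 8   # max frame + preamble/SFD + inter-frame gap

for name, rate_bps in (("100BASE-TX", 100e6), ("1000BASE-T", 1e9)):
    print(f"{name}: {FRAME_BITS / rate_bps * 1e6:.0f} us per queued frame")

That prints roughly 123 us at 100 Mbit/s and 12 us at 1 Gbit/s, which is the floor the queuing argument above sits on.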
On Mon, 17 Apr 2023 17:49:33 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

>On Tue, 18 Apr 2023 01:39:18 +0200, Klaus Vestergaard Kragelund
><klauskvik@hotmail.com> wrote:
>
>>On 18-04-2023 01:38, Klaus Vestergaard Kragelund wrote:
>>> On 17-04-2023 21:18, John Larkin wrote:
>>>>
>>>>
>>>> Suppose one were to send a broadcast packet from a PC to multiple
>>>> boxes over regular 100 Mbit ethernet, without any fancy time protocols
>>>> like PTP or anything. Each box would accept the packet as a trigger.
>>>> Assume a pretty small private network and a reasonable number of
>>>> switches to fan out to many boxes.
>>>>
>>>> Any guess as to how much the effective time trigger to various boxes
>>>> would skew? I've seen one estimate of 125 usec, for cameras, with
>>>> details unclear.
>>>>
>>> If you are connected to the Phy directly with high priority ISR, I think
>>> you can do typical less than 1us.
>>>
>>> problem is loading on the bus, or retransmissions, then it could be way
>>> longer
>>>
>>> If you need precise timing, you can use real time ethernet
>>
>>https://www.cisco.com/c/dam/en/us/solutions/collateral/industry-solutions/white-paper-c11-738950.pdf
>
>I was wondering what sort of time skew I might get with an ordinary PC
>shooting out commands and fairly ordinary receivers and some switches.
>The PC could send a message to each of the boxes in the field, like
>"set your voltage to 17 when I say GO" and things like that to various
>boxes. Then it could broadcast a UDP message to all the boxes GO .
With Windows <anything>, it's going to be pretty bad, especially when something like Java is running. There will be startling gaps, into the tens of milliseconds, sometimes longer.

Which is not a criticism - Windows is intended for desktop applications, not embedded realtime. So, wrong tool.

With RHEL (Red Hat Enterprise Linux) with a few fancy extensions, it's going to be on the order of hundreds of microseconds, so long as you run the relevant applications at a sufficiently urgent realtime priority and scheduling policy.

To do better, one goes to partly hardware (with firmware) solutions.
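A minimal sketch of the "sufficiently urgent realtime priority" part, assuming Linux and a process running as root or with CAP_SYS_NICE; the priority value and CPU number are arbitrary placeholders:

import os

# Put this process under SCHED_FIFO so ordinary timesharing work can't preempt it.
# Needs root or CAP_SYS_NICE; priority 80 is arbitrary.
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))

# Pinning to one CPU (CPU 2 here, also arbitrary) helps keep scheduling jitter down.
os.sched_setaffinity(0, {2})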
>The boxes would have to be able to accept the usual TCP commands at >unique IP addresses, and a UDP packet with some common IP address, and >process the GO command rapidly, but I was wondering what the inherent >time uncertainty might be with the ethernet itself.
How good that stack is depends on what the host computer is optimized for.
>I guess some switches are better than others, so if I found some good >ones I could recommend them. I'd have to understand how a switch can >handle a broadcast packet too. I think the GO packet is just sent to >some broadcast address.
Modern network switches are typically far faster than RHEL.
>The fancy time protocols, ethercat and PTP and TSN (and others!) are >complex on both ends. I might invent a new one, but that's another >story.
It's been done, many times. Guilty. But PTP over ethernet is sweeping all that stuff away.

Joe Gwinn
On Mon, 17 Apr 2023 21:38:52 -0400, Joe Gwinn <joegwinn@comcast.net>
wrote:

>On Mon, 17 Apr 2023 17:49:33 -0700, John Larkin
><jlarkin@highlandSNIPMEtechnology.com> wrote:
>
>>On Tue, 18 Apr 2023 01:39:18 +0200, Klaus Vestergaard Kragelund
>><klauskvik@hotmail.com> wrote:
>>
>>>On 18-04-2023 01:38, Klaus Vestergaard Kragelund wrote:
>>>> On 17-04-2023 21:18, John Larkin wrote:
>>>>>
>>>>>
>>>>> Suppose one were to send a broadcast packet from a PC to multiple
>>>>> boxes over regular 100 Mbit ethernet, without any fancy time protocols
>>>>> like PTP or anything. Each box would accept the packet as a trigger.
>>>>> Assume a pretty small private network and a reasonable number of
>>>>> switches to fan out to many boxes.
>>>>>
>>>>> Any guess as to how much the effective time trigger to various boxes
>>>>> would skew? I've seen one estimate of 125 usec, for cameras, with
>>>>> details unclear.
>>>>>
>>>> If you are connected to the Phy directly with high priority ISR, I think
>>>> you can do typical less than 1us.
>>>>
>>>> problem is loading on the bus, or retransmissions, then it could be way
>>>> longer
>>>>
>>>> If you need precise timing, you can use real time ethernet
>>>
>>>https://www.cisco.com/c/dam/en/us/solutions/collateral/industry-solutions/white-paper-c11-738950.pdf
>>
>>I was wondering what sort of time skew I might get with an ordinary PC
>>shooting out commands and fairly ordinary receivers and some switches.
>>The PC could send a message to each of the boxes in the field, like
>>"set your voltage to 17 when I say GO" and things like that to various
>>boxes. Then it could broadcast a UDP message to all the boxes GO .
>
>With Windows <anything>, it's going to be pretty bad, especially when
>something like Java is running. There will be startling gaps, into
>the tens of milliseconds, sometimes longer.
>
All I want to know is the destination time skews after the PC sends the GO packet. Windows doesn't matter. My customers mostly use some realtime Linux, but that doesn't matter either.
>Which is not a criticism - Windows is intended for desktop
>applications, not embedded realtime. So, wrong tool.
>
>With RHEL (Red Hat Enterprise Linux) with a few fancy extensions, it's
>going to be on the order of hundreds of microseconds, so long as you run
>the relevant applications at a sufficiently urgent realtime priority
>and scheduling policy.
>
>To do better, one goes to partly hardware (with firmware) solutions.
>
>
>>The boxes would have to be able to accept the usual TCP commands at
>>unique IP addresses, and a UDP packet with some common IP address, and
>>process the GO command rapidly, but I was wondering what the inherent
>>time uncertainty might be with the ethernet itself.
>
>How good that stack is depends on what the host computer is optimized
>for.
>
>
>>I guess some switches are better than others, so if I found some good
>>ones I could recommend them. I'd have to understand how a switch can
>>handle a broadcast packet too. I think the GO packet is just sent to
>>some broadcast address.
>
>Modern network switches are typically far faster than RHEL.
I want numbers!
>
>
>>The fancy time protocols, ethercat and PTP and TSN (and others!) are
>>complex on both ends. I might invent a new one, but that's another
>>story.
>
>It's been done, many times. Guilty. But PTP over ethernet is
>sweeping all that stuff away.
>
>Joe Gwinn
On 4/17/2023 6:38 PM, Joe Gwinn wrote:
> It's been done, many times. Guilty. But PTP over ethernet is
> sweeping all that stuff away.
PTP, itself, does nothing to eliminate the instantaneous variabilities; it is intended to be used to discipline a local clock (with filtering above the packet level).

One *could* use 1588 *hardware* to try to quantify the instantaneous arrival times of packets. But, you'd still not be sure when YOUR packet arrived relative to other (broadcast) packets arriving at your peers.

I.e., you still need *a* protocol built atop that.
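The crudest way to put a number on arrival times is a software timestamp taken right after the receive call -- a sketch (the port is hypothetical; this measures NIC + stack + scheduler, and getting true 1588 hardware timestamps takes considerably more plumbing than this):

import socket, time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 5005))               # hypothetical trigger port; receives broadcasts too

data, peer = sock.recvfrom(1500)
t_arrival = time.clock_gettime(time.CLOCK_REALTIME)   # software timestamp only
print(f"packet from {peer} logged at {t_arrival:.6f}")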
On 4/17/23 22:26, John Larkin wrote:
> I want numbers!
I think pinging between random pairs of PCs on your in-house network should give you an estimate somewhere between the number you want and about twice that number, depending on how the trip time compares to the PC response times. It's at least a start.

--
Regards,
Carl
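A quick way to collect that, assuming a Linux box with the usual iputils ping (the host list is hypothetical); half the reported round-trip time is a crude proxy for the one-way delivery skew:

import re, subprocess

for host in ("192.168.1.20", "192.168.1.21", "192.168.1.22"):   # hypothetical in-house hosts
    out = subprocess.run(["ping", "-c", "20", "-q", host],
                         capture_output=True, text=True).stdout
    m = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/", out)        # "rtt min/avg/max/mdev = ..." line
    if m:
        mn, avg, mx = (float(x) for x in m.groups())
        print(f"{host}: RTT min/avg/max = {mn}/{avg}/{mx} ms")
    else:
        print(f"{host}: no reply")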
In article <u1l0ur$3cn42$1@dont-email.me>,
Don Y  <blockedofcourse@foo.invalid> wrote:
>On 4/17/2023 6:38 PM, Joe Gwinn wrote:
>> It's been done, many times. Guilty. But PTP over ethernet is
>> sweeping all that stuff away.
>
>PTP, itself, does nothing to eliminate the instantaneous variabilities;
>it is intended to be used to discipline a local clock (with filtering
>above the packet level).
>
>One *could* use 1588 *hardware* to try to quantify the instantaneous
>arrival times of packets. But, you'd still not be sure when YOUR
>packet arrived relative to other (broadcast) packets arriving at your
>peers.
>
>I.e., you still need *a* protocol built atop that.
I agree. The simplest reliable approach I can think of, is to use PTP or something of similar intent to distribute a well-synchronized clock to all of the devices in the house.

Then, send out a broadcast "Turn on, at time X" packet to all nodes, where "X" is far enough in the future that you can reasonably ensure that all nodes will receive the packet before time "X" arrives.

Each device sets a timer when it receives the packet.

The skew in the actual "turn on" times will depend on the success of your time-synchronization protocol, and the accuracy of each node's internal timer. It should be possible to get it down to much less than the skew in transmission times.
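A minimal sketch of the sending side of that scheme, assuming the clocks are already disciplined by PTP or similar; the port, broadcast address and 50 ms lead time are all arbitrary placeholders:

import socket, struct, time

GO_PORT = 5005          # hypothetical trigger port
LEAD_TIME = 0.050       # margin; comfortably larger than any expected delivery skew

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

# "Turn on at time X": send an *absolute* CLOCK_REALTIME fire time, far enough
# out that every node should have the packet in hand before X arrives.
fire_at = time.clock_gettime(time.CLOCK_REALTIME) + LEAD_TIME
sock.sendto(b"GO" + struct.pack("!d", fire_at), ("192.168.1.255", GO_PORT))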
On 4/17/2023 10:15 PM, Dave Platt wrote:
> In article <u1l0ur$3cn42$1@dont-email.me>,
> Don Y <blockedofcourse@foo.invalid> wrote:
>> On 4/17/2023 6:38 PM, Joe Gwinn wrote:
>>> It's been done, many times. Guilty. But PTP over ethernet is
>>> sweeping all that stuff away.
>>
>> PTP, itself, does nothing to eliminate the instantaneous variabilities;
>> it is intended to be used to discipline a local clock (with filtering
>> above the packet level).
>>
>> One *could* use 1588 *hardware* to try to quantify the instantaneous
>> arrival times of packets. But, you'd still not be sure when YOUR
>> packet arrived relative to other (broadcast) packets arriving at your
>> peers.
>>
>> I.e., you still need *a* protocol built atop that.
>
> I agree. The simplest reliable approach I can think of, is to use PTP
> or something of similar intent to distribute a well-synchronized clock
> to all of the devices in the house.
>
> Then, send out a broadcast "Turn on, at time X" packet to all nodes,
> where "X" is far enough in the future that you can reasonably ensure that
> all nodes will receive the packet before time "X" arrives.
And, that they *acknowledge* it (e.g., TCP or a bastardized UDP-based protocol). Just because you mailed 235 invitations to your wedding -- much in advance of the actual appointed time -- doesn't mean that everyone actually *received* them!
> Each device sets a timer when it receives the packet.
s/timer/alarm/... you want the future event to happen at time 'X' not necessarily "some time units from now". I.e., any changes that the time protocol makes to the notion of current time should affect the point *in* time that you've called 'X'. (timers measure *relative* time so 1 minute from now is "1 minute from now", not current_time+1minute -- because current_time is an arbitrary reference that the protocol will manipulate)
> The skew in the actual "turn on" times will depend on the success of
> your time-synchronization protocol, and the accuracy of each node's
> internal timer. It should be possible to get it down to much less
> than the skew in transmission times.
I can use PTP-ish protocols to synchronize *my* "clocks" to << 1us -- without 1588 hardware. But, I can only do that because I have complete control over ALL the hardware/software AND traffic.

The jitter between synchronization events/messages falls out of the equation because higher protocol levels cause it to do so. The fact that a message may take 100us to transit (while another takes 1us) is removed from the calculus.

You still have to sort out what happens when things lose sync or messages get lost. If Penny doesn't make it to the wedding because her invitation got lost...

The trivial days of "RS-style" serial protocols (worst-case analysis on the back of a napkin) are long past. Remember, most protocols are designed to share *information*, not *timing*.
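Putting the "turn on at time X" scheme and the alarm-not-timer point together, a sketch of the receiving end (same hypothetical port and payload as the sender sketch earlier in the thread; the print at the end stands in for whatever GO actually does on the box):

import socket, struct, time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 5005))                            # hypothetical trigger port

data, _ = sock.recvfrom(64)
fire_at = struct.unpack("!d", data[2:10])[0]     # absolute CLOCK_REALTIME fire time

# Wait for the *point in time* X, not for "X-minus-now seconds from now":
# re-reading the clock each pass means NTP/PTP corrections move the alarm too.
while True:
    remaining = fire_at - time.clock_gettime(time.CLOCK_REALTIME)
    if remaining <= 0:
        break
    time.sleep(min(remaining, 0.001))

print("GO fired at", time.clock_gettime(time.CLOCK_REALTIME))   # stand-in for the real action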
On 4/17/2023 12:44 PM, Dimiter_Popoff wrote:
> Hmmm, interesting question. Since switches do "buffer then transmit"
> this might go either way; if the same copy is retransmitted to
> all rj-45s this can be done with practically no skew (apart from
> that at the receiving side, clock phases, software latencies etc.).
Remember, the switch isn't a wire. Instead, it can be seen as a buffer upstream of each port. So, if port 4 is busy delivering a packet to its client (from port 7, e.g.) but port 3 is idle (because no one loves him!), then a message intended for port 3 will be delivered before a message (possibly the same BROADCAST message) is delivered to port 4. Depending on the switch architecture, a backlog at port 4 may cause the "next" message intended for it to be dropped, *in* the switch -- while port 3 manages to receive it.

> But I doubt many - if any - switches do that, my bet would be that they
> transmit one cable at a time.
> On the old coaxial Ethernet the answer is obvious but not much
> use.
10Base2/5 preferable to <any>BaseT (*wire* instead of hub/switch). But, even there, you have differences in the PHYs/NICs, stack, etc. that add to the variability. If you don't control the hardware *and* software in each node, you're just pissing in the wind and hoping not to get wet! So, you have to resort to standards' compliance and figure out how to get the performance you want/need *in* that framework (and, then, insist that everything you talk to is compliant).
On 2023-04-17 21:18, John Larkin wrote:
>
>
> Suppose one were to send a broadcast packet from a PC to multiple
> boxes over regular 100 Mbit ethernet, without any fancy time protocols
> like PTP or anything. Each box would accept the packet as a trigger.
> Assume a pretty small private network and a reasonable number of
> switches to fan out to many boxes.
>
> Any guess as to how much the effective time trigger to various boxes
> would skew? I've seen one estimate of 125 usec, for cameras, with
> details unclear.
When using the QoS field in the packet header, with switches that actually honor it, I've seen jitter below a few usec even in the presence of video streams (sent with a lower QoS value).
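For reference, marking the GO packet so that QoS-aware switches can expedite it is one setsockopt on the sender, assuming the switches are actually configured to honor DSCP (EF, code point 46, is the conventional "expedited" marking; the address and port are placeholders):

import socket

# Mark the GO datagram as DSCP EF (46); 46 << 2 == 0xB8 is what goes in the IP TOS byte.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(b"GO", ("192.168.1.255", 5005))   # hypothetical broadcast address/port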
On a sunny day (Mon, 17 Apr 2023 22:15:44 -0700) it happened
dplatt@coop.radagast.org (Dave Platt) wrote in
<0ll1hj-m4kn2.ln1@coop.radagast.org>:

>In article <u1l0ur$3cn42$1@dont-email.me>,
>Don Y <blockedofcourse@foo.invalid> wrote:
>>On 4/17/2023 6:38 PM, Joe Gwinn wrote:
>>> It's been done, many times. Guilty. But PTP over ethernet is
>>> sweeping all that stuff away.
>>
>>PTP, itself, does nothing to eliminate the instantaneous variabilities;
>>it is intended to be used to discipline a local clock (with filtering
>>above the packet level).
>>
>>One *could* use 1588 *hardware* to try to quantify the instantaneous
>>arrival times of packets. But, you'd still not be sure when YOUR
>>packet arrived relative to other (broadcast) packets arriving at your
>>peers.
>>
>>I.e., you still need *a* protocol built atop that.
>
>I agree. The simplest reliable approach I can think of, is to use PTP
>or something of similar intent to distribute a well-synchronized clock
>to all of the devices in the house.
>
>Then, send out a broadcast "Turn on, at time X" packet to all nodes,
>where "X" is far enough in the future that you can reasonably ensure that
>all nodes will receive the packet before time "X" arrives.
>
>Each device sets a timer when it receives the packet.
>
>The skew in the actual "turn on" times will depend on the success of
>your time-synchronization protocol, and the accuracy of each node's
>internal timer. It should be possible to get it down to much less
>than the skew in transmission times.
Yes, this is about what I do in one project. I also send a time sync every now and then to make sure the receiver clock is still running correctly (it has no battery backup):

raspberrypi: ~ # cat time_to_ethernet_color_pic
#!/bin/bash
# hour and minutes to ethernet_color_pic
hour=$(/bin/date +"%H")
minute=$(/bin/date +"%M")
echo hour=$hour
echo minute=$minute
/bin/echo H$hour M$minute | /bin/netcat -w 0 -u 192.168.178.157 1024
sleep 1
/bin/echo H$hour M$minute | /bin/netcat -w 0 -u 192.168.178.157 1024
sleep 1
/bin/echo H$hour M$minute | /bin/netcat -w 0 -u 192.168.178.157 1024

Link and source code:
https://panteltje.nl/panteltje/pic/ethernet_color_pic/index.html

netcat is your friend for all network stuff. I use UDP here as the receiving end uses UDP, and send the time 3 times to make sure it arrives. That script is called from crontab 19 minutes past each hour.

Entry in crontab:
# synchronise the LED lighting timers.
19 * * * * /usr/local/sbin/time_to_ethernet_color_pic &

The actual times the ethernet_color_pic uses to switch things were programmed into its EEPROM via the same UDP link.

I can 'talk' to that ethernet_color_pic using netcat too; once you have called it from the IP address you are at, it will reply to that address. I use 2 xterms for that, one to start up the communication and another one to set events.

First xterm, to initiate a UDP link and ask for status or any other command:

raspi95: ~ # netcat -u 192.168.178.157 1024
v

Second xterm:

raspi95: ~ # netcat -u -l -p 1024
v
11:32 ADC 2=262 4=117 5=49 6=295 Light level=75 setpoint=128 RGB 0 0 0 l 128
Timers
0 23:59 0 0 0
1 18:0 180 0 47
2 22:0 180 0 47
3 255:255 255 255 255
4 255:255 255 255 255
T1 reload 175
h
Panteltje (c) ethernet_color_pic-0.3
UDP commands:
a n                    print adc.
B nnn                  set blue.
c n                    clear digital output.
G nnn                  set green.
H nn                   set hours.
h                      help, this help.
i n                    print digital input.
K nnn                  set clock calibration.
L nnn                  set light sensitivity.
l                      print light sensitivity.
M nn                   set minutes.
R nnn                  set red.
S                      save settings.
s n                    set digital output.
T n hh mm rrr ggg bbb  set timers (n = 0-4).
v                      print status.

The UDP protocol even allows me to make a disco light show by streaming audio data.
https://panteltje.nl/panteltje/xpequ/index.html