Electronics-Related.com
Forums

Universal Parallel Bus -- why not?

Started by GreenXenon March 20, 2010
On Mon, 5 Dec 2016 05:14:44 -0800 (PST), jrwalliker@gmail.com wrote:

>On Saturday, 3 December 2016 10:47:53 UTC, upsid...@downunder.com wrote:
>
>> One problem with high speed parallel connections is the need to keep
>> the path lengths exactly the same for each connection to be able to
>> direct sample full words. At high speeds, you need to use some self
>> clocking on each wire and after clock skew elimination to build a full
>> word.
>
>Even if the path lengths are exactly the same there is dispersion to
>deal with. The propagation velocity is frequency dependent, so
>data-dependent timing errors creep in.
>
>Long ago I was involved in transmitting digital TV signals around a
>building. To cut costs the designer tried using twisted pair
>telephone wire with one bit per pair plus a clock pair. It was
>impossible to get reliable operation at 50m range because of dispersion
>no matter how much tweaking of timing delays was done.
>
>John
This is a well known phenomenon. I saw it while working with (physically)
big computers in the 1970s, which used 1/2 inch 9-track tape (8 data bits
plus odd parity). The 800 BPI (bits per inch) drives were quite
unreliable. The assumption was that odd parity with 8 data bits would
generate a clean sync for all tracks. Depending on the read head azimuth
angle, this was seldom the case. In reality, you had to manually adjust
the azimuth angle for each tape written on a separate drive. The 1600 BPI
tapes were much better due to the channel-specific self-clocking encoding.
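The parity-as-clock assumption is easy to sketch: with odd parity, every 9-bit frame (8 data bits plus parity) has an odd number of 1 bits, so no frame can be all zeros, and NRZI recording then guarantees at least one flux transition per frame for the drive to clock on. A minimal illustration:

```python
# With odd parity, the 9-track frame (8 data + 1 parity) always has an
# odd number of 1 bits, so it can never be all zeros: NRZI recording then
# yields at least one flux transition per frame for the drive to clock on.

def odd_parity_bit(byte: int) -> int:
    """Parity-track bit that makes the 9-bit frame have odd weight."""
    ones = bin(byte & 0xFF).count("1")
    return 0 if ones % 2 == 1 else 1

print(odd_parity_bit(0x00))  # 1 -- even an all-zero data byte gives a transition

# Every possible frame has odd total weight:
assert all((bin(b).count("1") + odd_parity_bit(b)) % 2 == 1 for b in range(256))
```

As the post notes, the guarantee is only one transition per frame across nine tracks; with head azimuth error the tracks arrive skewed, which is exactly why the per-track self-clocking codes at 1600 BPI worked better.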
jrwalliker@gmail.com wrote:
> On Saturday, 3 December 2016 10:47:53 UTC, upsid...@downunder.com wrote:
> [...]
> Even if the path lengths are exactly the same there is dispersion to
> deal with. The propagation velocity is frequency dependent, so
> data-dependent timing errors creep in.
> [...]
> John
Which gives a good lesson in what not to do and what to look for.
So a UPB needs a relatively low dispersion data path and a clock rate
that is limited by frequency dependent issues.
If speed is of the essence, use of different wavelengths for each
channel would most likely limit total path length to an "impractical"
amount (i.e. design cost not worth it).
Still, a CPU board using light instead of wires/traces could be
designed that would "kick ass" for parallel, multi-processing.
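A back-of-envelope number shows why dispersion killed the 50 m link. The two velocities below are illustrative assumptions, not measured cable data; the order of magnitude is the point:

```python
# Illustrative only: assume propagation velocity on cheap telephone pair
# varies from 0.55c at the low end of the signal spectrum to 0.60c near
# the clock frequency (assumed numbers, not cable measurements).
C = 299_792_458.0   # m/s
LENGTH = 50.0       # m, as in the failed digital-TV link

t_slow = LENGTH / (0.55 * C)
t_fast = LENGTH / (0.60 * C)
skew = t_slow - t_fast
print(f"data-dependent skew ~ {skew * 1e9:.0f} ns")
# Tens of ns of pattern-dependent timing error eats the whole bit cell
# well below ~40 Mbit/s, and no fixed delay tweak can remove it.
```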
On Monday, 5 December 2016 23:01:28 UTC, Robert Baer  wrote:
> jrwalliker@gmail.com wrote:
> [...]
> Which gives a good lesson of what not to do and what to look for.
> So a UPB needs a relatively low dispersion data path and a clock rate
> that is limited by frequency dependent issues.
> If speed is of the essense, use of different wavelengths for each
> channel would most likely limit total path length to an "impractical"
> amount (ie design cost not worth t).
> Still, a CPU board using light instead of wires/traces could be
> designed that would "kick ass" for parallel, multi-processing.
All you need do is run your parallel lines as several serial lines in
parallel. Low cost is what rules now though.

NT
On Monday, December 5, 2016 at 3:04:25 PM UTC-8, tabb...@gmail.com wrote:
> On Monday, 5 December 2016 23:01:28 UTC, Robert Baer wrote:
> All you need do is run your parallel lines as several serial lines in parallel. Low cost is what rules now though.
That's what gigabit Ethernet does (1000baseT) with its four pairs. It
qualifies as low cost after you amortize the (relatively complex) design
of the dedicated ASIC hardware. The best way to do a fast parallel port
nowadays might be to bond a multiplicity of gigabit wired Ethernet ports
(but the switch fabric requires multiport Ethernet switches to do some
nonstandard things to make a one-to-many node).

Outside of a server room, does anyone really have a need for such? There
were four-bytes-at-a-time SCSI parallel ports, but not a lot of use for
them.
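The "several serial lines in parallel" idea is just byte striping with per-lane reassembly, roughly what 1000BASE-T does across its four pairs. A toy sketch (per-lane framing and skew recovery are omitted):

```python
# Byte-striping a stream across N independent serial lanes. In a real
# link each lane carries its own framing, so lane-to-lane skew is
# absorbed at reassembly rather than matched in the cabling.

def stripe(data: bytes, lanes: int) -> list[bytes]:
    """Round-robin the byte stream across `lanes` serial streams."""
    return [data[i::lanes] for i in range(lanes)]

def unstripe(streams: list[bytes], total_len: int) -> bytes:
    """Interleave the lane streams back into the original byte order."""
    out = bytearray(total_len)
    n = len(streams)
    for i, s in enumerate(streams):
        out[i::n] = s
    return bytes(out)

payload = bytes(range(10))
lanes = stripe(payload, 4)
assert unstripe(lanes, len(payload)) == payload
```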
On Mon, 5 Dec 2016 15:38:26 -0800 (PST), whit3rd <whit3rd@gmail.com>
wrote:

>On Monday, December 5, 2016 at 3:04:25 PM UTC-8, tabb...@gmail.com wrote:
>[...]
>That's what gigabit Ethernet does (1000baseT) with its four pairs.
>[...]
>Outside of a server room, does anyone really have a need for such? There
>were four-bytes-at-a-time SCSI parallel ports, but not a lot of use for them.
Uncompressed video sucks up a lot of bandwidth.
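The arithmetic backs that up: uncompressed 1080p60 RGB alone would saturate about three bonded gigabit ports.

```python
# Uncompressed 1080p60 video, 8 bits per R, G and B component:
width, height, fps, bits_per_pixel = 1920, 1080, 60, 24
rate = width * height * fps * bits_per_pixel   # bits per second
print(f"{rate / 1e9:.2f} Gb/s")  # 2.99 Gb/s -- roughly three GigE links
```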
On 2016-12-06 00:01, Robert Baer wrote:
[...]
> Still, a CPU board using light instead of wires/traces could be
> designed that would "kick ass" for parallel, multi-processing.
Why? What advantage do you imagine that might have?

Jeroen Belleman
On 2016-12-05, whit3rd <whit3rd@gmail.com> wrote:
> On Monday, December 5, 2016 at 3:04:25 PM UTC-8, tabb...@gmail.com wrote:
> [...]
> The best way to do a fast parallel port nowadays might be
> to bond a multiplicity of gigabit wired Ethernet ports (but the switch fabric
> requires multiport Ethernet switches to do some nonstandard things to make a
> one-to-many node).
Sounds like a poor imitation of 16x PCIe.
> Outside of a server room, does anyone really have a need for such? There
> were four-bytes-at-a-time SCSI parallel ports, but not a lot of use for them.
every gamer.

--
This email has not been checked by half-arsed antivirus software
On Mon, 05 Dec 2016 15:01:24 -0800, Robert Baer
<robertbaer@localnet.com> wrote:

> jrwalliker@gmail.com wrote:
> [...]
> Which gives a good lesson of what not to do and what to look for.
> So a UPB needs a relatively low dispersion data path and a clock rate
> that is limited by frequency dependent issues.
> If speed is of the essense, use of different wavelengths for each
> channel would most likely limit total path length to an "impractical"
> amount (ie design cost not worth t).
> Still, a CPU board using light instead of wires/traces could be
> designed that would "kick ass" for parallel, multi-processing.
SFPs
https://en.wikipedia.org/wiki/Small_form-factor_pluggable_transceiver
are available up to at least 10 Gbit/s (SFP+), so you get a decent
throughput with a single fiber.

OTOH, CWDM SFPs are available with different wavelengths (every 20 nm),
so you can put 8 lambdas into a single fiber with simple passive
combiners and splitters.

Unfortunately the SFP is quite big, if you want to put 8 of these into
the edge of the PCB.
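For reference, the common 8-channel CWDM plan on that 20 nm grid:

```python
# 8-channel CWDM plan: 20 nm spacing, 1470-1610 nm (the band most
# 8-lambda SFP kits use; ITU-T G.694.2 defines the full grid).
channels = [1470 + 20 * i for i in range(8)]
print(channels)   # [1470, 1490, 1510, 1530, 1550, 1570, 1590, 1610]
```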
On Tue, 6 Dec 2016 08:41:40 +0100, Jeroen Belleman
<jeroen@nospam.please> wrote:

>On 2016-12-06 00:01, Robert Baer wrote:
>[...]
>> Still, a CPU board using light instead of wires/traces could be
>> designed that would "kick ass" for parallel, multi-processing.
>
>Why? What advantage do you imagine that might have?
If you are building something similar to the Connection Machine
hypercube computer, using CAT6 cabling would quickly consume most of
the cube module volume.

Using fiber optic cables with multiple fibers and using WDM on each
fiber will give a huge throughput for a specific cable volume. It might
even make possible a direct connection between any two nodes in the
cube.

Most massively parallel computers have direct connections to only a few
nearby nodes in each dimension, relying on mesh networking further
away. This greatly increases latency, since latency-prone
serializing/deserializing is applied in each hop. Getting to any node
with a single SerDes pair would speed things up greatly.
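Some rough link counts show why full any-to-any wiring is avoided. For a 4096-node machine (d = 12, roughly CM-2 class):

```python
# A d-dimensional hypercube of n = 2**d nodes gives each node only d
# direct neighbours; full any-to-any wiring needs n*(n-1)/2 links.
d = 12
n = 2 ** d                        # 4096 nodes
hypercube_links = n * d // 2      # each link is shared by two nodes
full_mesh_links = n * (n - 1) // 2
print(hypercube_links, full_mesh_links)   # 24576 8386560
```

Roughly 340 times more cable for the full mesh, which is why the hops (and their per-hop serialize/deserialize latency) are tolerated in copper but might be avoidable with dense WDM fiber.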
On 12/06/2016 05:55 AM, upsidedown@downunder.com wrote:
> On Tue, 6 Dec 2016 08:41:40 +0100, Jeroen Belleman
> <jeroen@nospam.please> wrote:
>
>> [...]
>> Why? What advantage do you imagine that might have?
>
> If you are building something similar to the Connection Machines
> Hypercube computer, using CAT6 cabling would quickly consumes most of
> the cube module volume.
>
> Using fiber optic cables with multiple fibers and using WDM on each
> fiber will have a huge throughput for a specific cable volume. It
> might even make possible direct connection between any two nodes in
> the cube.
WDM isn't too helpful in general except for long links, but fibre has been used inside large computers and data centers for yonks.
> Most massively parallel computers have direct connection to only a few
> nearby nodes in each dimension and relying on mesh networking further
> away. This greatly increases latency, when latency prone
> serializing/deserialisation is applied in each hop.
>
> Getting to any node with a single pair of desers would speed up things
> greatly.
On-chip optics is the win. Turns out that you should be able to do
communication faster (~c/4 for silicon photonics vs. ~c/10 for long
lines with repeaters), and at lower power. I calculated that my
antenna-coupled tunnel junction modulators would come in at about
60 uW/(Gb/s). (I didn't get them working, unfortunately.)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics
160 North State Road #203
Briarcliff Manor NY 10510
hobbs at electrooptical dot net
http://electrooptical.net
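That 60 uW/(Gb/s) figure is just an energy per bit:

```python
# 60 uW per Gb/s expressed as energy per bit:
joules_per_bit = 60e-6 / 1e9   # W per (bit/s) = J/bit
print(f"{joules_per_bit * 1e15:.0f} fJ/bit")   # 60 fJ/bit
# At that figure, a full 10 Gb/s lane would need only 0.6 mW.
```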