Electronics-Related.com
Forums

What's Your Favorite Processor on an FPGA?

Started by rickman April 20, 2013
On 4/23/2013 10:03 AM, John Larkin wrote:
> On Tue, 23 Apr 2013 02:50:10 -0400, "Michael A. Terrell"
> <mike.terrell@earthlink.net> wrote:
>
>> John Larkin wrote:
>>>
>>> On 22 Apr 2013 12:59:27 GMT, Allan Herriman <allanherriman@hotmail.com> wrote:
>>>
>>>> On Sun, 21 Apr 2013 09:05:49 -0700, John Larkin wrote:
>>>>
>>>>> The annoying thing is the CPU-to-FPGA interface. It takes a lot of FPGA
>>>>> pins and it tends to be async and slow. It would be great to have an
>>>>> industry-standard LVDS-type fast serial interface, with hooks like
>>>>> shared memory, but transparent and easy to use.
>>>>
>>>> You've just described PCI Express.
>>>
>>> No. PCIe is insanely complex and has horrible latency. It takes something like 2
>>> microseconds to do an 8-bit read over gen1 4-lane PCIe. It was designed for
>>> throughput, not latency.
>>>
>>> We've done three PCIe projects so far, and it's the opposite of "transparent and
>>> easy to use." The PCIe spec reads like the tax code and Obamacare combined.
>>>
>>> Next up is Thunderbolt, probably worse.
>>
>> Have you ever worked with PCI-X?
>
> No, but it's mostly dead, as PCI will soon be. Intel busses only last a few
> years each.
A *few* years? PCI has been around for 20 years! -- Rick
On Tue, 23 Apr 2013 19:45:07 -0400, rickman <gnuarm@gmail.com> wrote:

>On 4/23/2013 10:03 AM, John Larkin wrote:
>> [...]
>> No, but it's mostly dead, as PCI will soon be. Intel busses only last a few
>> years each.
>
>A *few* years? PCI has been around for 20 years!
But mobos seldom have PCI slots any more. It's all PCIe now. And Thunderbolt
will displace PCIe. Motherboard slots are going away. Hell, motherboards are
going away!

--
John Larkin         Highland Technology, Inc
jlarkin at highlandtechnology dot com
http://www.highlandtechnology.com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom laser drivers and controllers
Photonics and fiberoptic TTL data links
VME thermocouple, LVDT, synchro acquisition and simulation
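[A back-of-the-envelope check on the latency figure quoted in this subthread.
Assuming roughly 2 us round trip per single-byte read on a gen1 x4 link, as
stated above, the effective throughput of latency-bound reads works out orders
of magnitude below the link's raw payload rate:]

```python
# Sketch: why "designed for throughput, not latency" bites for small reads.
# Assumes the ~2 us per 8-bit read figure quoted above (gen1, 4 lanes).
READ_LATENCY_S = 2e-6                      # round-trip time per 1-byte read

line_rate_gbps = 4 * 2.5                   # 4 lanes x 2.5 GT/s (gen1)
payload_gbps = line_rate_gbps * 8 / 10     # 8b/10b encoding: 80% efficiency

reads_per_second = 1 / READ_LATENCY_S      # latency-bound read rate
bytes_per_second = reads_per_second * 1    # 1 byte per read

print(reads_per_second)                    # 500000.0 reads/s
print(bytes_per_second)                    # ~0.5 MB/s effective, vs ~1 GB/s raw
```

So single-byte reads use roughly 1/2000 of the link's payload bandwidth, which
is consistent with the complaint that PCIe trades latency for throughput.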
John Larkin wrote:
>
> On Tue, 23 Apr 2013 02:48:53 -0400, "Michael A. Terrell"
> <mike.terrell@earthlink.net> wrote:
>
> >John Larkin wrote:
> >>
> >> On Mon, 22 Apr 2013 07:49:11 -0400, "Michael A. Terrell"
> >> <mike.terrell@earthlink.net> wrote:
> >>
> >> >John Larkin wrote:
> >> >>
> >> >> DEC wrote operating systems (TOPS10, VMS, RSTS) that ran for months between
> >> >> power failures, time-sharing multiple, sometimes hostile, users. We are now in
> >> >> the dark ages of computing, overwhelmed by bloat and slop and complexity. No
> >> >> wonder people are buying tablets. DEC understood things that Intel and Microsoft
> >> >> never really got, like: don't execute data.
> >> >
> >> > I've had Win2K run over nine months, between power failures that
> >> >lasted longer than the UPS batteries. DEC had more control over the
> >> >computers, and a tiny fraction of the number running Windows or Linux.
> >> >
> >> > If DEC was so damned good, why were they unable to survive? Their
> >> >IBM 'clone' (Rainbow 100) was very overpriced, not compatible, and did a
> >> >very quick death spiral. Admit it. It was a dinosaur company with a
> >> >very tiny customer base.
> >>
> >> It was *the* minicomputer company and that changed the world.
> >
> > Really? Could it have handled any modern application, let alone
> >dozens or hundreds of them at once.
>
> As I recall, Unix and C were invented for the PDP11. As was Arpanet and the
> Internet. The PDP8 was the first "personal" computer, a computer that one person
> could buy and use all by himself, to automate a lab experiment or (in my case)
> simulate a steamship power train. That changed everything.
>
> DECs RT11 OS was cloned to become CPM and Microsoft DOS, and lives on in the
> Windows command line.
CP/M <> DOS, and it never was. 'Control Program for Microcomputers' was written
by Gary Kildall of Digital Research, Inc., and the later MP/M for multiple
users was written for the 8080 from scratch. If that is a clone, so is every
other OS.
> What sort of computing system did you have in 1969? I had a PDP8 running Focal.
Did you own it?
> What did you compute on in 1975? I had a PDP11 timeshare system with around 20 > users.
I was still in high school in 1969, but a few students at the OTHER high school got some time on an IBM 360 that belonged to a local bank. I got my first computer in 1983, but I was troubleshooting & repairing the boards & software for a pair of Exorcisor bus based 'Metrodata' computers with six NTSC video outputs per system before I owned a computer. I had to repair them, since the OEM was out of business. A whopping 48 KB of RAM, and one 6800 MPU per system. One system had a SMS dual 8" floppy disk drive system, and they shared a single TI computer terminal. I wrote a terminal program for a Commodore 64 to replace the flaky TI, and added a menu system to reduce the 100+ commands the operator had to use to access different functions.
John Larkin wrote:
>
> On Tue, 23 Apr 2013 02:50:10 -0400, "Michael A. Terrell"
> <mike.terrell@earthlink.net> wrote:
>
> > [...]
> > Have you ever worked with PCI-X?
>
> No, but it's mostly dead, as PCI will soon be. Intel busses only last a few
> years each.
It's alive & well in real servers for their RAID controllers and Ethernet or FC ports. I've never seen it used in a computer that sold for under $3K.
On Tue, 23 Apr 2013 22:38:23 -0400, "Michael A. Terrell"
<mike.terrell@earthlink.net> wrote:

>
>John Larkin wrote:
>> [...]
>> What sort of computing system did you have in 1969? I had a PDP8 running Focal.
>
> Did you own it?
No, but my employer bought it for me; it cost $12,800 with 4K 12-bit words of
core and a teletype, when you could buy a Chevy for a tenth of that. It was
mine in the sense that I was the main, usually only, user.

A couple of years later, 1972 I think, we got one of the first PDP-11s. The
PDP-11 was a wonderful architecture; it taught a lot of people, including me,
how to think. x86 is a pig by comparison.
On Tue, 23 Apr 2013 22:40:51 -0400, "Michael A. Terrell"
<mike.terrell@earthlink.net> wrote:

>
>John Larkin wrote:
>> [...]
>> > Have you ever worked with PCI-X?
>>
>> No, but it's mostly dead, as PCI will soon be. Intel busses only last a few
>> years each.
>
>
> It's alive & well in real servers for their RAID controllers and
>Ethernet or FC ports. I've never seen it used in a computer that sold
>for under $3K.
Do current Intel chip sets support PCI-X? Or even PCI?
On Tue, 23 Apr 2013 20:00:06 -0700, John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

>On Tue, 23 Apr 2013 22:40:51 -0400, "Michael A. Terrell"
><mike.terrell@earthlink.net> wrote:
>
>> [...]
>> It's alive & well in real servers for their RAID controllers and
>>Ethernet or FC ports. I've never seen it used in a computer that sold
>>for under $3K.
>
>Do current Intel chip sets support PCI-X? Or even PCI?
I think PCI was just a hard intermediary bus. All the other busses were
tertiary to it, and went through it to get to the CPU.

I think PCIe is a hard intermediary bus too, but it practically has its own
API, and I would call that pretty advanced. PCI-X is likely fully superseded
by PCIe, but elements of the original PCI paradigm decidedly must remain. PCI
only had about a dozen command codes.
On 2013-04-23, rickman <gnuarm@gmail.com> wrote:

> Who else made fatal mistakes in the computer industry? Who at
> Osborne decided to promote the next generation before they were ready to
> ship, killing the current sales?
>
> Why did the Alpha die? Was that more an issue of DEC going away? I
> don't recall who ended up with it.
> Was it Intel who let it die a lingering death?
HP http://h18002.www1.hp.com/alphaserver
On Tue, 23 Apr 2013 22:12:08 -0700, DecadentLinuxUserNumeroUno
<DLU1@DecadentLinuxUser.org> wrote:

>On Tue, 23 Apr 2013 20:00:06 -0700, John Larkin
><jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:
>
>> [...]
>>Do current Intel chip sets support PCI-X? Or even PCI?
>
> I think PCI was a *just* hard intermediary bus. All the other busses
>were tertiary to it, and went though it to get to the CPU.
>
> I think PCIe is a hard intermediary bus too but it has it's own API
>practically,
PCIe was designed to be totally transparent against PCI. In theory, a BIOS or
a driver can't tell if it's talking to PCI or to PCIe. Thunderbolt is supposed
to be the same. The only differences may be the hot-plug provisions.
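[The transparency claim above rests on PCIe keeping conventional PCI's
configuration-space layout: the first 64 bytes of the header (vendor ID,
device ID, command/status, BARs, ...) are laid out identically, so legacy
software parses both the same way. A minimal sketch, using made-up header
bytes (vendor 0x8086, device 0x1234 are hypothetical examples):]

```python
import struct

# First 4 bytes of a PCI/PCIe config-space header: vendor ID then device ID,
# both 16-bit little-endian. The same parse works for either bus, which is
# the compatibility John describes. Example bytes are hypothetical.
cfg = bytes([0x86, 0x80, 0x34, 0x12]) + bytes(60)  # pad to the 64-byte header

vendor_id, device_id = struct.unpack_from("<HH", cfg, 0)
print(hex(vendor_id), hex(device_id))  # 0x8086 0x1234
```

PCIe then extends, rather than replaces, this space with its capability
structures, which is how drivers that ignore them keep working.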
On 4/24/2013 5:19 AM, Jasen Betts wrote:
> On 2013-04-23, rickman <gnuarm@gmail.com> wrote:
>
>> [...]
>> Why did the Alpha die? Was that more an issue of DEC going away? I
>> don't recall who ended up with it.
>
> HP http://h18002.www1.hp.com/alphaserver
Ah, it was HP who ended up with it. It lasted until 2008; that's not too bad,
really. -- Rick