
What's Your Favorite Processor on an FPGA?

Started by rickman April 20, 2013
On Sun, 21 Apr 2013 16:40:22 -0400, rickman <gnuarm@gmail.com> wrote:

>On 4/21/2013 4:22 PM, John Larkin wrote:
>> On Sun, 21 Apr 2013 17:34:12 GMT, Ralph Barone<address_is@invalid.invalid>
>> wrote:
>>>
>>> and end up doing making new and innovative mistakes (just channeling Murphy
>>> here).
>>
>> DEC wrote operating systems (TOPS10, VMS, RSTS) that ran for months between
>> power failures, time-sharing multiple, sometimes hostile, users. We are now in
>> the dark ages of computing, overwhelmed by bloat and slop and complexity. No
>> wonder people are buying tablets. DEC understood things that Intel and Microsoft
>> never really got, like: don't execute data.
>
>You really should stick to things you understand. Every Intel processor
>since the 8086 has included protection mechanism to prevent the
>execution of data. But they have to be used properly... Blame
>Microsoft and all the other software vendors, but don't blame Intel.
The Intel memory protection is primitive. And Intel writes C compilers for their processors, which cheerfully mix data, code, and stacks in the same space. Which is why a simple buffer or stack overflow can plant and run hostile code in an application. After decades of chasing buffer overflow exploits, Wintel has STILL not managed to make them impossible. The common NOP SLED exploit works if the data on the stack can be executed!
>Actually, this is an issue just like so many that are determined by the
>market place. When users put value on these features and spend their
>money accordingly, the market will respond. So don't buy Windows
>anymore if you don't like it. Microsoft will either respond or go out
>of business. But that's not going to happen. People just like to
>complain about MS while they continue giving them their money.
Windows sales are down, and with luck will continue to decline.

--

John Larkin                  Highland Technology Inc
www.highlandtechnology.com   jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators
On Mon, 22 Apr 2013 00:07:47 +0300, upsidedown@downunder.com wrote:

>On Sun, 21 Apr 2013 13:22:12 -0700, John Larkin
><jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:
>
>>
>>DEC wrote operating systems (TOPS10, VMS, RSTS) that ran for months between
>>power failures, time-sharing multiple, sometimes hostile, users.
>
>
>I was responsible for some VMS-11 systems and I forgot to boot the
>system every summer, when no-one was around. Booting the system the
>system next year and everyone were happy :-).
>
>>We are now in
>>the dark ages of computing, overwhelmed by bloat and slop and complexity. No
>>wonder people are buying tablets. DEC understood things that Intel and Microsoft
>>never really got, like: don't execute data.
>
>PDP-11/RSX-11M+ (early 1970's) had separate I/D spaces, VAX/VMS (mid
>70's) had executable program sections.
RSTS mapped code pages to be execute-only (apps couldn't write to code space)
and data/stack pages were non-executable. Stack overflow and buffer overrun
exploits were prevented by hardware. Both Intel and Microsoft were out of the
mainstream of computing, which is why we have such a mess today.
On Apr 21, 10:24 pm, John Larkin
<jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
> On Sun, 21 Apr 2013 11:09:02 -0700 (PDT), "langw...@fonz.dk" <langw...@fonz.dk>
> wrote:
>
> >On Apr 21, 6:05 pm, John Larkin
> ><jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
> >> On Sun, 21 Apr 2013 08:23:37 -0500, Vladimir Vassilevsky <nos...@nowhere.com>
> >> wrote:
>
> >> >On 4/20/2013 5:42 PM, rickman wrote:
> >> >> I have been working on designs of processors for FPGAs for quite a
> >> >> while. I have looked at the uBlaze, the picoBlaze, the NIOS, two from
> >> >> Lattice and any number of open source processors. Many of the open
> >> >> source designs were stack processors since they tend to be small and
> >> >> efficient in an FPGA. J1 is one I had pretty much missed until lately.
> >> >> It is fast and small and looks like it wasn't too hard to design
> >> >> (although looks may be deceptive), I'm impressed. There is also the b16
> >> >> from Bernd Paysan, the uCore, the ZPU and many others.
>
> >> >> Lately I have been looking at a hybrid approach which combines features
> >> >> of addressing registers in order to access parameters of a stack CPU. It
> >> >> looks interesting.
>
> >> >> Anyone else here doing processor designs on FPGAs?
>
> >> >Soft core is fun thing to do, but otherwise I see no use.
> >> >Except for very few special applications, standalone processor is better
> >> >than FPGA soft core in every point, especially the price.
>
> >> >Vladimir Vassilevsky
> >> >DSP and Mixed Signal Designs
> >> >www.abvolt.com
>
> >> The annoying thing is the CPU-to-FPGA interface. It takes a lot of FPGA pins and
> >> it tends to be async and slow. It would be great to have an industry-standard
> >> LVDS-type fast serial interface, with hooks like shared memory, but transparent
> >> and easy to use.
>
> >> Something like ARM internal to an FPGA could have a synchronous, maybe shared
> >> memory, interface into one of those SOPC type virtual bus structures without
> >> wasting FPGA pins.
>
> >xilinx Zynq, arm9 with an fpga on the side
>
> >-Lasse
>
> We gave up on Xilinx a few years ago: great silicon, horrendous software tools.
> Altera is somewhat less horrendous.

at one point it did crash a lot, but I haven't had many problems with it
for the past few years

-Lasse
John Larkin wrote:
>
> DEC wrote operating systems (TOPS10, VMS, RSTS) that ran for months between
> power failures, time-sharing multiple, sometimes hostile, users. We are now in
> the dark ages of computing, overwhelmed by bloat and slop and complexity. No
> wonder people are buying tablets. DEC understood things that Intel and Microsoft
> never really got, like: don't execute data.
I've had Win2K run over nine months, between power failures that lasted longer than the UPS batteries. DEC had more control over the computers, and a tiny fraction of the number running Windows or Linux. If DEC was so damned good, why were they unable to survive? Their IBM 'clone' (Rainbow 100) was very overpriced, not compatible, and did a very quick death spiral. Admit it. It was a dinosaur company with a very tiny customer base.
John Larkin wrote:
>
> On Mon, 22 Apr 2013 00:07:47 +0300, upsidedown@downunder.com wrote:
>
> >On Sun, 21 Apr 2013 13:22:12 -0700, John Larkin
> ><jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:
> >
> >>
> >>DEC wrote operating systems (TOPS10, VMS, RSTS) that ran for months between
> >>power failures, time-sharing multiple, sometimes hostile, users.
> >
> >
> >I was responsible for some VMS-11 systems and I forgot to boot the
> >system every summer, when no-one was around. Booting the system the
> >system next year and everyone were happy :-).
> >
> >>We are now in
> >>the dark ages of computing, overwhelmed by bloat and slop and complexity. No
> >>wonder people are buying tablets. DEC understood things that Intel and Microsoft
> >>never really got, like: don't execute data.
>
> RSTS mapped code pages to be execute-only (apps couldn't write to code space)
> and data/stack pages were non-executable. Stack overflow and buffer overrun
> exploits were prevented by hardware. Both Intel and Microsoft were out of the
> mainstream of computing, which is why we have such a mess today.
Intel & Microsoft are the mainstream today, and DEC is in the scrapyard of computing history.
On Sun, 21 Apr 2013 09:05:49 -0700, John Larkin wrote:

> The annoying thing is the CPU-to-FPGA interface. It takes a lot of FPGA
> pins and it tends to be async and slow. It would be great to have an
> industry-standard LVDS-type fast serial interface, with hooks like
> shared memory, but transparent and easy to use.
You've just described PCI Express.

- Industry standard fast serial interface.
- AC-coupled CML (rather than LVDS, but still differential).
- scalable bandwidth:
  - 2.5, 5.0, 8.0 Gbps per lane.
  - 1, 2, 4, 8 or 16 lanes.
- allows single access as well as bursts.
- multi-master (allows DMA).
- Fabric can be point-to-point (e.g. CPU-FPGA) or can use switches for
  larger networks.
- in-band interrupts (saves pins).
- Peripherals (typically) just appear as chunks of memory in the CPU
  address space.
- Widely supported by operating systems.
- Supports hot plug.
- Many FPGAs have hard cores for PCIe.
- Supported by ARM SoCs (but not the very cheapest ones).
- compatible with loads of off the shelf chips and cards.
- Easy to use (although that might be an "eye of the beholder" type of
  thing).

I wouldn't recommend PCIe for the lowest cost or lowest power products,
but it's great for the stuff that I do.

Regards,
Allan
On Mon, 22 Apr 2013 08:08:56 -0400, "Michael A. Terrell"
<mike.terrell@earthlink.net> wrote:

>
>John Larkin wrote:
>>
>> On Mon, 22 Apr 2013 00:07:47 +0300, upsidedown@downunder.com wrote:
>>
>> >On Sun, 21 Apr 2013 13:22:12 -0700, John Larkin
>> ><jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:
>> >
>> >>
>> >>DEC wrote operating systems (TOPS10, VMS, RSTS) that ran for months between
>> >>power failures, time-sharing multiple, sometimes hostile, users.
>> >
>> >
>> >I was responsible for some VMS-11 systems and I forgot to boot the
>> >system every summer, when no-one was around. Booting the system the
>> >system next year and everyone were happy :-).
>> >
>> >>We are now in
>> >>the dark ages of computing, overwhelmed by bloat and slop and complexity. No
>> >>wonder people are buying tablets. DEC understood things that Intel and Microsoft
>> >>never really got, like: don't execute data.
>> >
>> >PDP-11/RSX-11M+ (early 1970's) had separate I/D spaces, VAX/VMS (mid
>> >70's) had executable program sections.
>>
>> RSTS mapped code pages to be execute-only (apps couldn't write to code space)
>> and data/stack pages were non-executable. Stack overflow and buffer overrun
>> exploits were prevented by hardware. Both Intel and Microsoft were out of the
>> mainstream of computing, which is why we have such a mess today.
>
>
> Intel & Microsoft are the mainstream today, and DEC is in the
>scrapyard of computing history.
Tragically so. The thing that Intel and Microsoft had in common was brutal
rapaciousness. That often overcomes quality. But DEC's PDP-11 begat Unix,
which begat Linux, then MacOS, then Android...
On Mon, 22 Apr 2013 07:49:11 -0400, "Michael A. Terrell"
<mike.terrell@earthlink.net> wrote:

>
>John Larkin wrote:
>>
>> DEC wrote operating systems (TOPS10, VMS, RSTS) that ran for months between
>> power failures, time-sharing multiple, sometimes hostile, users. We are now in
>> the dark ages of computing, overwhelmed by bloat and slop and complexity. No
>> wonder people are buying tablets. DEC understood things that Intel and Microsoft
>> never really got, like: don't execute data.
>
>
> I've had Win2K run over nine months, between power failures that
>lasted longer than the UPS batteries. DEC had more control over the
>computers, and a tiny fraction of the number running Windows or Linux.
>
>
> If DEC was so damned good, why were they unable to survive? Their
>IBM 'clone' (Rainbow 100) was very overpriced, not compatible, and did a
>very quick death spiral. Admit it. It was a dinosaur company with a
>very tiny customer base.
It was *the* minicomputer company and that changed the world.
On 22 Apr 2013 12:59:27 GMT, Allan Herriman <allanherriman@hotmail.com> wrote:

>On Sun, 21 Apr 2013 09:05:49 -0700, John Larkin wrote:
>
>> The annoying thing is the CPU-to-FPGA interface. It takes a lot of FPGA
>> pins and it tends to be async and slow. It would be great to have an
>> industry-standard LVDS-type fast serial interface, with hooks like
>> shared memory, but transparent and easy to use.
>
>You've just described PCI Express.
No. PCIe is insanely complex and has horrible latency. It takes something like 2 microseconds to do an 8-bit read over gen1 4-lane PCIe. It was designed for throughput, not latency. We've done three PCIe projects so far, and it's the opposite of "transparent and easy to use." The PCIe spec reads like the tax code and Obamacare combined. Next up is Thunderbolt, probably worse.
On Mon, 22 Apr 2013 07:09:40 -0700, John Larkin wrote:

> On 22 Apr 2013 12:59:27 GMT, Allan Herriman <allanherriman@hotmail.com>
> wrote:
>
>>On Sun, 21 Apr 2013 09:05:49 -0700, John Larkin wrote:
>>
>>> The annoying thing is the CPU-to-FPGA interface. It takes a lot of
>>> FPGA pins and it tends to be async and slow. It would be great to have
>>> an industry-standard LVDS-type fast serial interface, with hooks like
>>> shared memory, but transparent and easy to use.
>>
>>You've just described PCI Express.
>
> No. PCIe is insanely complex and has horrible latency. It takes
> something like 2 microseconds to do an 8-bit read over gen1 4-lane PCIe.
> It was designed for throughput, not latency.
I agree about it being designed for throughput, not latency. However, with a fairly simple design, we can do 32 bit non-bursting reads or writes in about 350ns over a single lane of gen 1 through 1 layer of switching. I suspect there's some problem with your implementation (unless your 2 microsecond figure was just hyperbole).
> We've done three PCIe projects so far, and it's the opposite of
> "transparent and easy to use." The PCIe spec reads like the tax code and
> Obamacare combined.
I found the spec clear. It's rather large, though, and a textbook serves as a
more friendly introduction to the subject than the spec itself.

One of my co-workers was confused by the way addresses come most significant
octet first, whilst the data come least significant octet first. It makes
sense on a little endian machine, once you get over the WTF.

Hot plug is the only thing that gives us headaches. PCIe hot plug is needed
when reconfiguring the FPGA while the system is running, and OS support for
hot plug is patchy. Partial FPGA reconfiguration is one workaround (leaving
the PCIe up while reconfiguring the rest of the FPGA), although I haven't
tried that in any production design yet.

Regards,
Allan