Electronics-Related.com
Forums

What's Your Favorite Processor on an FPGA?

Started by rickman April 20, 2013
I have been working on designs of processors for FPGAs for quite a while. I have looked at the uBlaze, the picoBlaze, the NIOS, two from Lattice, and any number of open-source processors. Many of the open-source designs were stack processors, since they tend to be small and efficient in an FPGA. The J1 is one I had pretty much missed until lately. It is fast and small, and it looks like it wasn't too hard to design (although looks may be deceptive); I'm impressed. There are also the b16 from Bernd Paysan, the uCore, the ZPU, and many others.

Lately I have been looking at a hybrid approach that combines register-style addressing with a stack CPU, so that parameters can be accessed directly. It looks interesting.
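For readers who haven't met one, the appeal of a stack machine in an FPGA is that operands come implicitly from the top of a small hardware stack, so instructions are tiny and no register-file addressing logic is needed. Here is a rough software model in that spirit; the opcode names and the (op, arg) encoding are invented for this sketch (the real J1 packs everything into 16-bit instruction words and has separate data and return stacks in hardware):

```python
# Illustrative software model of a tiny J1-style stack machine.  The
# opcode names and (op, arg) encoding are invented for this sketch.

def run(program):
    data, ret = [], []            # data stack and return stack
    pc = 0
    while pc < len(program):
        op, *arg = program[pc]
        pc += 1
        if op == "lit":           # push a literal onto the data stack
            data.append(arg[0])
        elif op == "add":         # ALU ops take operands implicitly from the top
            b, a = data.pop(), data.pop()
            data.append(a + b)
        elif op == "dup":         # stack shuffling stands in for register addressing
            data.append(data[-1])
        elif op == "call":        # subroutine linkage uses the return stack
            ret.append(pc)
            pc = arg[0]
        elif op == "ret":
            pc = ret.pop()
    return data

# run([("lit", 2), ("lit", 3), ("add",)]) leaves [5] on the data stack
```

The limitation a hybrid approach targets is visible here: a plain stack machine has no way to reach the Nth stack entry by name, so getting at subroutine parameters takes dup/swap-style shuffling, whereas register-style addressing would let an instruction reach an entry below the top directly.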

Anyone else here doing processor designs on FPGAs?

-- 

Rick
On Sunday, 21 April 2013 08:42:07 UTC+10, rickman  wrote:
> [...] Anyone else here doing processor designs on FPGAs?
Sounds like something where you'd get more responses on comp.arch.fpga. Are you cross-posting?

--
Bill Sloman, Sydney
On Sat, 20 Apr 2013 18:42:07 -0400, rickman <gnuarm@gmail.com> wrote:

> [...] Anyone else here doing processor designs on FPGAs?
My guys have been ragging me for years to do designs that have soft-core CPUs in FPGAs, but I've been able to convince them (well, I am the boss) that they haven't made sense so far. They use up too much FPGA resources to make a mediocre, hard-to-use CPU. So we've been using separate ARM processors, and using a bunch of pins to get the CPU bus into the FPGA, usually with an async static-RAM sort of interface.

There's supposed to be a Cyclone coming soon, with dual hard-core ARM processors and enough dedicated program RAM to run useful apps. When that's real, we may go that way. That will save pins and speed up the CPU-to-FPGA logic handshake.

If the programs get too big for the on-chip SRAM, I guess the fix would be external DRAM with a CPU cache. There goes the pin savings. At that point, an external ARM starts to look good again.

--
John Larkin
Highland Technology Inc
www.highlandtechnology.com
jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators
On 4/20/2013 7:12 PM, John Larkin wrote:
> My guys have been ragging me for years to do designs that have soft-core
> CPUs in FPGAs [...] At that point, an external ARM starts to look good again.
The choice of an internal vs. an external CPU is a systems design decision. If you need so much memory that external memory is warranted, then I guess an external CPU is warranted. But that all depends on your app. Are you running an OS, and if so, why?

The sort of stuff I typically do doesn't need a USB or Ethernet interface, both great reasons to use an ARM... free, working software that comes with an OS like Linux. (By free I mean you don't have to spend all that time writing or debugging a TCP/IP stack, etc.)

But there are times when an internal CPU works even for high-level interfaces. In fact, the J1 was written because they needed a processor to stream video over Ethernet and the uBlaze wasn't so great at it.

I get the impression your projects are about other things than the FPGA/CPU you use, and cost/size really aren't so important. Then you have less reason to squeeze on size, power, and unit costs, but rather to minimize development cost. If so, that only makes sense.

My next project will be similar in hardware requirements to a digital watch, but with more processing...

--
Rick
On Sat, 20 Apr 2013 19:47:07 -0400, rickman <gnuarm@gmail.com> wrote:

> The choice of an internal vs. an external CPU is a systems design
> decision. If you need so much memory that external memory is warranted,
> then I guess an external CPU is warranted. But that all depends on your
> app. Are you running an OS, if so, why?
FPGA RAM is expensive compared to the SRAM or flash that comes on a small ARM, like an LPC1754. Something serious, like an LPC3250, has stuff like hardware vector floating point and runs 32-bit instructions at 260 MHz. Both of the ARMs have UARTs, timers, ADCs, DACs, and Ethernet, for $4 and $7 respectively.

We generally run bare-metal: a central state machine and some ISR stuff. I've written three RTOSs in the past but haven't really needed one lately.
> The sort of stuff I typically do doesn't need a USB or Ethernet
> interface, both great reasons to use an ARM... free, working software
> that comes with an OS like Linux. (by free I mean you don't have to
> spend all that time writing or debugging a TCP/IP stack, etc)
Yeah, we use the GCC compilers. Stuff like Ethernet and USB stacks are available and work without much hassle. I don't know what the tool chains are like for the soft cores.
> But there are times when an internal CPU works even for high level
> interfaces. In fact, the J1 was written because they needed a processor
> to stream video over Ethernet and the uBlaze wasn't so great at it.
>
> I get the impression your projects are about other things than the
> FPGA/CPU you use and cost/size really aren't so important. Then you
> have less reason to squeeze on size, power, unit costs, but rather
> minimize development cost. If so, that only makes sense.
We do a fair amount of "computing": stuff like signal filtering, calibration with flash cal tables, serial and Ethernet communications, sometimes driving LEDs and LCDs. There have been a minority of apps simple enough to use a MicroBlaze, and I didn't think that acquiring/learning/archiving another whole tool chain was worth it for those few apps, what with an LPC1754 costing $4.
> My next project will be similar in hardware requirements to a digital
> watch, but with more processing...
Sometimes you can just do the computing "in hardware" in the FPGA and not even need a procedural language. So the use case gets even smaller.

I am looking forward to having a serious ARM or two (or, say, 16) inside an FPGA. With enough CPUs, you don't need an RTOS.

--
John Larkin
Highland Technology Inc
On 4/20/2013 8:16 PM, John Larkin wrote:
> FPGA ram is expensive compared to the SRAM or flash that comes on a small ARM,
> like an LPC1754. Something serious, like an LPC3250, has stuff like hardware
> vector floating point and runs 32-bit instructions at 260 MHz. Both the ARMs
> have uarts, timers, ADCs, DACs, and Ethernet, for $4 and $7 respectively.
That is not a useful way to look at RAM unless you are talking about buying a larger chip than you otherwise need just to get more RAM. That is like saying the routing in an FPGA is "expensive" compared to the PCB. It is there as part of the device; use it or it goes to waste.

If you need Ethernet, then Ethernet is useful. But adding Ethernet to an FPGA is no big deal. Likewise for nearly any peripheral.

No point in discussing this very much. Every system has its own requirements. If external ARMs are what works for you, great!
> We generally run bare-metal, a central state machine and some ISR stuff. I've
> written three RTOSs in the past but haven't really needed one lately.
What do you do for the networking code? If you write your own, then you are typically doing a lot of work for naught, unless you have special requirements.
> > The sort of stuff I typically do doesn't need a USB or Ethernet
> > interface, both great reasons to use an ARM [...]
>
> Yeah, we use the GCC compilers. Stuff like Ethernet and USB stacks are available
> and work without much hassle. I don't know what the tool chains are like for the
> soft cores.
So you are using networking code, but no OS?

The soft cores I work with don't bother with that sort of stuff. The apps are much smaller and don't need that level of complexity. In fact, that is what they are all about: getting rid of unneeded complexity.
> We do a fair amount of "computing", stuff like signal filtering, calibrations
> with flash cal tables, serial and Ethernet communications, sometimes driving
> leds and lcds. There have been a minority of apps simple enough to use a
> microblaze, and I didn't think that acquiring/learning/archiving another whole
> tool chain was worth it for those few apps, what with an LPC1754 costing $4.
Ethernet comms can be a hunk of code, but the rest of what you describe is pretty simple stuff. I'm not sure there is even a need for a processor. Lots of designers are just so used to doing everything in software that they think it is simple.

Actually, I think everything you listed above is simple enough for a uBlaze. What is the issue with that?

I find HDL to be the "simple" way to do stuff like I/O and serial comms, even signal processing. In fact, my bread and butter is a product with signal processing in an FPGA, not because of speed; it is just an audio app. But the FPGA *had* to be there. An MCU would just be a waste of board space, of which this board has very little.
> > My next project will be similar in hardware requirements to a digital
> > watch, but with more processing...
>
> Sometimes you can just do the computing "in hardware" in the FPGA and not even
> need a procedural language. So the use case gets even smaller.
>
> I am looking forward to having a serious ARM or two (or, say, 16) inside an
> FPGA. With enough CPUs, you don't need an RTOS.
Xilinx has that now, you know. What do they call it, Z-something? Zync maybe?

How about 144 processors running at 100's of MIPS each? Enough processing power that you can devote one to a serial port, one to an SPI port, one to flashing a couple of LEDs, and still have 140 left over. Check out the GreenArrays GA144. Around $14 the last time I asked. You won't like the development system, though. It is the processor equivalent of an FPGA; I call it an FPPA, a Field Programmable Processor Array. It can be *very* low power too, if you let the nodes idle when they aren't doing anything.

--
Rick
On Sat, 20 Apr 2013 21:14:55 -0400, rickman <gnuarm@gmail.com> wrote:

> What do you do for the networking code. If you write your own, then you
> are doing a lot of work for naught typically, unless you have special
> requirements.
We got an Ethernet stack somewhere. It's flag-driven into the central state machine. If there's an input buffer full, we get a flag, and the state machine processes it when it gets around to it. It may build an outgoing message and queue that into the stack, which runs mostly at interrupt level.

A typical system would parse incoming commands and generate replies. It's awfully simple. We usually share the whole parser/executor/reply-generator code among multiple ports concurrently, like USB and Ethernet and serial; the buffers and flags all look alike.
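That flag-driven pattern can be sketched roughly as follows (all names here are invented for illustration; in the real firmware this would be C, with receive ISRs filling the buffers and setting the flags):

```python
# Sketch of a flag-driven main loop: ISRs (not shown) deposit a complete
# command into port.buf and set port.flag; the main loop polls the flags
# and feeds one shared parser no matter which port the command came in on.
# All names are invented for this illustration.

class Port:
    """One communications port (USB, Ethernet, serial): a buffer plus a flag."""
    def __init__(self, name):
        self.name = name
        self.flag = False         # set when buf holds a complete command
        self.buf = b""

def parse_and_reply(cmd):
    """One parser/executor/reply generator shared by every port."""
    if cmd == b"*IDN?":
        return b"ACME,WIDGET,0,1.0"
    return b"ERR"

def main_loop(ports, steps):
    replies = []
    for _ in range(steps):        # real firmware would loop forever
        for p in ports:           # poll each port's flag in turn
            if p.flag:            # input buffer full: handle it now
                p.flag = False
                replies.append((p.name, parse_and_reply(p.buf)))
    return replies
```

Because the buffers and flags all look alike, adding another port is just another entry in the list the loop polls.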
> So you are using networking code, but no OS?
Right.
> Ethernet comms can be a hunk of code, but the rest of what you describe
> is pretty simple stuff. I'm not sure there is even a need for a
> processor. Lots of designers are just so used to doing everything in
> software they think it is simple.
Imagine a 16-channel thermocouple simulator.
> How about 144 processors running at 100's of MIPS each? Enough
> processing power that you can devote one to a serial port, one to an SPI
> port, one to flash a couple of LEDs and still have 140 left over. Check
> out the GreenArrays GA144.
My ideal computer would have one CPU that's just the OS, and a hundred or so assignable cores, one for each device, file system, program, or thread. The OS would be a few thousand lines of code, if that. With serious hardware protection, it would be totally virus/trojan/crash immune.

--
John Larkin
Highland Technology Inc
On 4/20/2013 9:37 PM, John Larkin wrote:
> My ideal computer would have one CPU that's just the OS, and a hundred or so
> assignable cores, one for each device, file system, program, or thread. The OS
> would be a few thousand lines of code, if that. With serious hardware
> protection, it would be totally virus/trojan/crash immune.
Something like this: http://www.tilera.com/sites/default/files/productbriefs/TILE-Gx8072_PB041-02.pdf ?
On Sunday, April 21, 2013 4:14:55 AM UTC+3, rickman wrote:

> Xilinx has that now you know. What do they call it, Z-something? Zync maybe?
ZYNQ. There is a rather low-cost eval board named ZedBoard (www.zedboard.org, $395), which comes with Linux pre-installed on an SD card. The ZYNQ chip on board contains a hard dual-core Cortex-A9 and about a million gates' worth of 7th-generation programmable logic.

Regards,
Mikko
On 4/20/2013 5:42 PM, rickman wrote:
> I have been working on designs of processors for FPGAs for quite a
> while. [...] Anyone else here doing processor designs on FPGAs?
A soft core is a fun thing to do, but otherwise I see no use for it. Except for very few special applications, a standalone processor is better than an FPGA soft core on every point, especially price.

Vladimir Vassilevsky
DSP and Mixed Signal Designs
www.abvolt.com