
Electronic components aging

Started by Piotr Wyderski October 15, 2013
krw@attt.bizz writes:

> On Sun, 20 Oct 2013 08:43:29 -0700 (PDT), edward.ming.lee@gmail.com wrote:
>
>> On Sunday, October 20, 2013 8:26:44 AM UTC-7, k...@attt.bizz wrote:
>>> On Sun, 20 Oct 2013 07:57:08 -0700 (PDT), edward.ming.lee@gmail.com wrote:
>>>
>>>>> US military is interested in 6502 programmers. 8085 is offered in
>>>>> a rad-hardened version. etc. This when folks are walking around
>>>>> with thousands of times the processing power in their phones! :-/
>>>>
>>>> They are also used in ASIC, simple cores with low Qs. 4000 for
>>>> 6502. 6000 for 8085. A modern chip like ARM7 needs at least
>>>> 700,000 transistors. I am looking into a mid range core like
>>>> BM32, around 200,000 Q, 32 bits C machine.
>>>
>>> Why? Qs are free. Pins are expensive.
>>
>> But they are better used elsewhere. Qs also complicate the
>> synthesizer and often run into tool limits. I don't need VM,
>> pipelines, predictive branching, etc., just a bare-bones C machine.
>
> Synthesizer? Tool limits? Are you trying to reinvent the wheel on a
> shoestring? Why would you bother? It's all been done for you and
> it's cheap. An M0 goes for about half a buck, these days.
Can you get one as a "soft core" though? One you can integrate as part
of the firmware on a FPGA? Without paying the million dollar license
that is.

--
John Devereux
>>> Why? Qs are free. Pins are expensive.
>>
>> But they are better used elsewhere. Qs also complicate the
>> synthesizer and often run into tool limits. I don't need VM,
>> pipelines, predictive branching, etc., just a bare-bones C machine.
>
> Synthesizer? Tool limits? Are you trying to reinvent the wheel on a
> shoestring? Why would you bother? It's all been done for you and
> it's cheap. An M0 goes for about half a buck, these days.
Let's say, for prototyping, the XC6SLX9, since they have a cheap enough startup tool package. However, the LX9 can only implement around 100K Q, not even a bare-bones BM32. We can probably strip some instructions, such as floating-point multiply and divide.

We need 32-bit data, perhaps 24-bit addresses.
> Can you get one as a "soft core" though? One you can integrate as part
> of the firmware on a FPGA?
Yes, that's what we are trying to find: the right soft core on the right FPGA.

For example, the BM32 is a 32-bit CPU with 16 registers. The first 9 are general purpose; the others are special registers such as AP, FP, SP, PC, PSW and PCB. I don't think we need the Process Control Block pointer, so we will change it to a Port Control Block pointer. Any port I/O should be relative to the PCB pointer. AP & FP could be general pointers as well.

R0-R8: GP
R9:    FP
R10:   AP
R11:   PSW
R12:   SP
R13:   PCB
R14:   ISP
R15:   PC

Immediate Mode
MOVW &0x12345678,%r2
84 4F 78 56 34 12 42

Deferred Displacement Mode
MOVB *0x30(%r2),%r3
87 D2 30 43
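As a rough illustration of the Port Control Block idea in C terms (the struct layout, port names, bit meanings and base address below are invented for the sake of the example, not taken from any BM32 documentation):

/* Sketch: all port I/O is done relative to a single "Port Control Block"
   base pointer, so moving the peripheral window means changing one
   register.  Layout, offsets and bits below are invented for illustration. */

#include <stdint.h>

typedef struct {
    volatile uint32_t uart_data;    /* hypothetical UART data port   */
    volatile uint32_t uart_status;  /* hypothetical UART status port */
    volatile uint32_t gpio_out;     /* hypothetical GPIO output port */
    volatile uint32_t gpio_in;      /* hypothetical GPIO input port  */
} port_block_t;

/* In the scheme described above this pointer would live in the
   repurposed PCB register; in plain C it is just a base pointer. */
static port_block_t *pcb = (port_block_t *)0x00F00000u;  /* made-up base */

static void uart_putc(char c)
{
    while ((pcb->uart_status & 0x1u) == 0u)   /* wait for TX ready (bit 0, assumed) */
        ;
    pcb->uart_data = (uint32_t)(uint8_t)c;    /* write relative to the PCB base */
}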
On Sun, 20 Oct 2013 18:22:42 +0100, John Devereux
<john@devereux.me.uk> wrote:

>krw@attt.bizz writes:
>
>> Synthesizer? Tool limits? Are you trying to reinvent the wheel on a
>> shoestring? Why would you bother? It's all been done for you and
>> it's cheap. An M0 goes for about half a buck, these days.
>
>Can you get one as a "soft core" though? One you can integrate as part
>of the firmware on a FPGA?
Not for $.50 worth of FPGA fabric.
>Without paying the million dollar license that is.
No.
On Sun, 20 Oct 2013 10:54:47 -0700 (PDT), edward.ming.lee@gmail.com
wrote:

>> Synthesizer? Tool limits? Are you trying to reinvent the wheel on a
>> shoestring? Why would you bother? It's all been done for you and
>> it's cheap. An M0 goes for about half a buck, these days.
>
>Let's say, for prototyping, the XC6SLX9, since they have a cheap enough
>startup tool package. However, the LX9 can only implement around 100K Q,
>not even a bare-bones BM32. We can probably strip some instructions,
>such as floating-point multiply and divide.
>
>We need 32-bit data, perhaps 24-bit addresses.
>
>> Can you get one as a "soft core" though? One you can integrate as part
>> of the firmware on a FPGA?
>
>Yes, that's what we are trying to find: the right soft core on the
>right FPGA.
It's a loser, all the way around. You might find an acceptable hard core but soft cores are a loser, for many reasons.
On Sunday, October 20, 2013 9:40:24 AM UTC-7, Don Y wrote:

[on squeezing performance from small CPUs]

> I particularly favor good counter/timer modules. With just a few
> "little" features you can enhance a tiny processor's capabilities
> far beyond what a larger, "bloated" processor could do...
Speaking of which, what IS available in off-the-shelf counter/timer support? I've still got a few DAQ cards with AMD's 9513 counter chips, which I KNOW are obsolete, but what's the modern replacement? The 9513 had five 16-bit counters, lots of modes, and topped out at 10 MHz; you could make an 80-bit counter, and test it once during the next many-lifetimes-of-the-universe.
On 10/20/2013 6:26 PM, whit3rd wrote:
> On Sunday, October 20, 2013 9:40:24 AM UTC-7, Don Y wrote:
>
> [on squeezing performance from small CPUs]
>
>> I particularly favor good counter/timer modules. With just a few
>> "little" features you can enhance a tiny processor's capabilities
>> far beyond what a larger, "bloated" processor could do...
>
> Speaking of which, what IS available in off-the-shelf counter/timer
> support? I've still got a few DAQ cards with AMD's 9513 counter
> chips, which I KNOW are obsolete, but what's the modern replacement?
I don't think there is a "free-standing" counter/timer "chip" anymore. Nowadays, most MCUs have counters of varying degrees of capability/bugginess built in. So, we're supposed to learn to live with <whatever>.
> The 9513 had five 16-bit counters, lots of modes, and topped out
> at 10 MHz; you could make an 80-bit counter, and test it once
> during the next many-lifetimes-of-the-universe.
But many of its modes were silly. I.e., configuring it as a time-of-day clock/calendar? Sheesh! What idiot decided that a MICROPROCESSOR PERIPHERAL needed that capability? Can you spell "software"? It also had some funky bugs, was a *huge* die (for its time and functionality), etc.

A lot of counter/timers are really "uninspired" designs, lately. It's as if the designer had *one* idea about how it should be used, and that's how you're going to use it!

The Z80's CTC, crippled as it was (not bad for that era), could be coaxed to add significant value -- if you thought carefully about how you *used* it!

E.g., you could set it up to "count down once armed", initialize the "count" to 1 (so it "times out" almost immediately after being armed), set it to arm on a rising (or falling) edge, AND PRELOAD THE NEXT COUNT VALUE AS '00' (along with picking an appropriate prescaler and enabling that interrupt source).

As a result, when the desired edge comes along, the timer arms at that instant (neglecting synchronization issues). Then, "one" cycle (depends on prescaler) later, it times out and generates an interrupt. I.e., you now effectively have an edge-triggered interrupt input -- but one that has a FIXED, built-in latency before it is signalled to the processor.

The magic happens when the counter reloads on this timeout and, instead of reloading with '1', uses that nice *big* number that you preloaded in the "reload register" -- AND STARTS COUNTING on that very same timebase! So, when your ISR comes along, it can read the current count and know exactly how long ago (to the precision of the timebase) the actual edge event occurred. EVEN IF THE PROCESSOR HAD INTERRUPTS DISABLED for a big portion of this time! (Or was busy serving a competing ISR, etc.)

The naive way of doing this would configure the device as a COUNTER, preload the count at '1' and program the input to the desired polarity. An edge comes along, the counter hits '0'/terminal_count and generates an IRQ. Then you *hope* you get around to noticing it promptly (or, at least, *consistently*).

The "smarter" approach lets you actually measure events with some degree of precision without having to stand on your head trying to keep interrupt latencies down to <nothing>.
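Sketched in C against a generic memory-mapped timer (the register layout, address and control bits below are invented for illustration; the real CTC is programmed through I/O ports), the trick looks roughly like this:

/* Sketch of the "edge arms a down-counter" trick described above.
   Register names, address and bit assignments are made up for a
   generic timer; this shows the idea, not a driver for any real part. */

#include <stdint.h>

#define RELOAD_VALUE  0xFFFFu   /* big reload so the counter keeps running after timeout */

typedef struct {
    volatile uint16_t count;    /* current count (also written to set the initial preload) */
    volatile uint16_t reload;   /* value reloaded at each terminal count                   */
    volatile uint16_t control;  /* mode bits: arm-on-edge, count-down, IRQ-enable (assumed)*/
} edge_timer_regs;

static edge_timer_regs *const TMR = (edge_timer_regs *)0x40001000u;  /* made-up address */

static volatile uint32_t edge_age_ticks;  /* ticks from the edge to ISR entry */

void timer_setup(void)
{
    TMR->count   = 1u;             /* first timeout fires one tick after the edge arms it */
    TMR->reload  = RELOAD_VALUE;   /* ...then the counter restarts from a big value       */
    TMR->control = 0x0007u;        /* assumed bits: arm on rising edge, count down, IRQ on zero */
}

void timer_isr(void)
{
    /* Elapsed ticks since the edge = how far the reloaded counter has
       already counted down, plus the one tick before the first timeout.
       This stays valid even if the ISR was delayed by other interrupts. */
    edge_age_ticks = (uint32_t)(RELOAD_VALUE - TMR->count) + 1u;
}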
I first designed high-reliability products for aerospace in 1975 using MIL-HDBK-217. It was based on generic components with stress factors for environment and design stress levels, or margins based on actual field reliability data.

It assumes that the design is defect-free and proven by test validation methods, and that the material quality is representative of the field data collected, which would be validated by vendor and component qualification. The overall product would then be validated for reliability with Highly Accelerated Stress Screening (HASS) and Highly Accelerated Life Test (HALT) methods to investigate the weak links in the design or components.

Failures in Test (FIT) must be fixed by design to prevent future occurrences, and MTBF hours are recorded with confidence levels.

The only thing that prevents a design from meeting a 50-year goal is lack of experience in knowing how to design and verify the above assumptions for design, material and process quality.

You have to know how to predict every stress that a product will see, and test it with an acceptable margin requirement for aging, which means you must have the established failure rate of each part.

This means you cannot use new components without an established reliability record. COTS parts must be tested and verified with HALT/HASS methods.

In the end, premature failures occur due to oversights in awareness of bad parts, design or process, and in the statistical process used to measure reliability.
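For a sense of scale, the constant-failure-rate arithmetic behind a 50-year goal fits in a few lines of C (the 1000 FIT figure below is only an example number, not anyone's real data):

/* Back-of-envelope: under the constant-failure-rate (exponential)
   assumption used in handbook predictions, probability of surviving
   a 50-year mission for a given system FIT rate. Example numbers only. */

#include <math.h>
#include <stdio.h>

int main(void)
{
    double fit_total   = 1000.0;                 /* example: 1000 failures per 1e9 device-hours */
    double lambda      = fit_total * 1e-9;       /* failures per hour */
    double mission_hrs = 50.0 * 365.25 * 24.0;   /* 50 years of continuous operation */

    double mtbf_hours  = 1.0 / lambda;
    double p_survive   = exp(-lambda * mission_hrs);

    printf("MTBF                    : %.3e hours\n", mtbf_hours);
    printf("P(no failure in 50 yr)  : %.3f\n", p_survive);
    return 0;
}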
On 10/21/2013 02:24 AM, Anthony Stewart wrote:
> I first designed high-reliability products for aerospace in 1975
> using MIL-HDBK-217. It was based on generic components with stress
> factors for environment and design stress levels, or margins based
> on actual field reliability data.
>
> You have to know how to predict every stress that a product will see,
> and test it with an acceptable margin requirement for aging, which
> means you must have the established failure rate of each part.
The '217 methodology has been discredited pretty thoroughly since then, though, as has the Arrhenius model for failures. (It's still in use, because the alternative is waiting 50 years, but AFAICT nobody trusts the numbers much. The IBM folks I used to work with sure don't.)

It's pretty silly when your calculation predicts that reliability will go _down_ when you add input protection components or a power supply crowbar.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics
160 North State Road #203
Briarcliff Manor NY 10510
hobbs at electrooptical dot net
http://electrooptical.net
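For reference, the Arrhenius model mentioned above reduces to a single acceleration-factor formula; a quick numerical sketch, with a typical textbook activation energy and temperatures chosen only for illustration:

/* Arrhenius acceleration factor between a stress temperature and a
   use temperature: AF = exp( (Ea/k) * (1/T_use - 1/T_stress) ).
   The activation energy and temperatures below are illustrative. */

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double k_eV = 8.617e-5;     /* Boltzmann constant, eV/K            */
    double Ea       = 0.7;            /* assumed activation energy, eV       */
    double T_use    = 55.0  + 273.15; /* use temperature, K                  */
    double T_stress = 125.0 + 273.15; /* burn-in / accelerated-test temp, K  */

    double af = exp((Ea / k_eV) * (1.0 / T_use - 1.0 / T_stress));
    printf("Arrhenius acceleration factor: %.1f\n", af);   /* ~78 with these numbers */
    return 0;
}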
On Mon, 21 Oct 2013 12:24:45 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

>The '217 methodology has been discredited pretty thoroughly since then,
>though, as has the Arrhenius model for failures. (It's still in use,
>because the alternative is waiting 50 years, but AFAICT nobody trusts
>the numbers much. The IBM folks I used to work with sure don't.)
>
>It's pretty silly when your calculation predicts that reliability will
>go _down_ when you add input protection components or a power supply
>crowbar.
>
>Cheers
>
>Phil Hobbs
Our gear is, in the field, many times more reliable than 217 or Bellcore calculations, and the failures tend to be point issues, not random component failures. Once noticed, most of the failures can be understood and the products improved.

So if our designs were perfect, MTBF would be much better than what we are seeing.

--
John Larkin         Highland Technology Inc
www.highlandtechnology.com   jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators
Phil Hobbs wrote:
> The '217 methodology has been discredited pretty thoroughly since then,
> though, as has the Arrhenius model for failures. (It's still in use,
> because the alternative is waiting 50 years, but AFAICT nobody trusts
> the numbers much. The IBM folks I used to work with sure don't.)
>
> It's pretty silly when your calculation predicts that reliability will
> go _down_ when you add input protection components or a power supply
> crowbar.
>
> Cheers
>
> Phil Hobbs
Well.... adding parts reduces OVERALL reliability, due to the fact that they can (and will) fail. Some parts, when they fail, can induce spikes or surges that will stress the "protected" parts. So, in some (specific) cases it is not silly.
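The handbook arithmetic behind that prediction is just failure rates adding in series, as in this sketch (the FIT numbers are made up for illustration):

/* Series-system handbook arithmetic: every part's predicted failure
   rate adds, so a crowbar or TVS lowers the *predicted* MTBF even if
   it saves the product in real life. FIT numbers below are made up. */

#include <stdio.h>

int main(void)
{
    double base_fit    = 2000.0;  /* predicted FIT of the bare design       */
    double crowbar_fit = 40.0;    /* predicted FIT of an added SCR crowbar  */
    double tvs_fit     = 15.0;    /* predicted FIT of an added TVS diode    */

    double with_protection = base_fit + crowbar_fit + tvs_fit;

    printf("Predicted MTBF without protection: %.0f h\n", 1e9 / base_fit);
    printf("Predicted MTBF with protection   : %.0f h\n", 1e9 / with_protection);
    return 0;
}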