
Exceeding Vgs rating

Started by Pimpom April 6, 2018
On 4/11/2018 8:01 PM, John Larkin wrote:
> On Wed, 11 Apr 2018 18:17:58 -0700, mike <ham789@netzero.net> wrote:
>
>> On 4/11/2018 1:02 PM, jrwalliker@gmail.com wrote:
>>> On Wednesday, 11 April 2018 17:54:49 UTC+1, John Larkin wrote:
>>>
>>>> The way that you tell if an FPGA is fast enough is to crank up the
>>>> speed until it breaks, and then back off some.
>>>
>>> A long time ago I needed to use a TMS320C52 DSP outside its rated
>>> specs. TI were kind enough to give me some test code that exercised
>>> the known critical timing paths so that I could test the devices
>>> myself. I put the code in the boot path, so that each device was
>>> auto-tested every time it was used. This worked very nicely.
>>>
>>> John
>>
>> THAT'S HOW IT'S DONE! You GUARANTEE that each device works in the
>> application.
>>
>> The marginal cost of that test is almost zero.
>>
>> YOU take responsibility for the vendor's ability to supply parts that
>> work. That calculated risk may cost you in the long run, but the
>> customer gets a quality product.
>>
>> You consider the consequences early in the design phase and ENGINEER
>> ways to make it work. That's the mindset you want in all of your
>> design engineers.
>>
>> That's a far cry from the context of this thread as detailed in the
>> subject line: asking random internet denizens if it's OK to exceed
>> some voltage breakdown spec on some unspecified component by some
>> nondescript amount.
>>
>> I stand by my original statements in this context, "NO, it ain't OK!"
>
> Well, arrest the OP. I think he's in India.
I don't see anything warranting arrest. Consequences likely result in
civil suits.

The OP asked for input. I gave input.

Most of the rest of this thread is about contributors overreacting and
implying that they often practice incompetent engineering. Let me
hasten to add that I don't believe that the problem is nearly as
serious as implied by the very general nature of their statements. The
answer is always, "it depends..."

The implication that you can arbitrarily disregard device
specifications that conflict with your needs is irresponsible. People
come here for learned advice. That's what they should get. The DSP
story on this page is an example of learned, relevant advice.

Doesn't matter at all to me, unless I unwittingly purchase the result
of one of their designs and it causes me harm.
On Thursday, 12 April 2018 05:20:18 UTC+1, mike  wrote:
> [snip]
>
> Most of the rest of this thread is about contributors
> overreacting and
> implying that they often practice incompetent engineering.
All that tells me is you're missing a useful engineering skill.

NT
On Wed, 11 Apr 2018 21:19:04 -0700, mike <ham789@netzero.net> wrote:

>On 4/11/2018 8:01 PM, John Larkin wrote:
>
>[snip]
>
>Most of the rest of this thread is about contributors
>overreacting and
>implying that they often practice incompetent engineering.
There's nothing incompetent about testing and understanding parts.

There are some opamps that I wouldn't run anywhere close to the data
sheet abs max supply voltage, and there are caps that I run at 1/4
spec sheet voltage, and others at twice. I tested some polymer
aluminum caps at 4x rated voltage, and at (totally unspecified)
reverse voltage... cycled a bunch of them for over a month. Now I
know.

RF parts are woefully unspecified for time-domain applications. So I
test them and derate my own measurements.

--
John Larkin   Highland Technology, Inc
picosecond timing   precision measurement
jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
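As a rough Python sketch of that test-and-derate bookkeeping: measure a
bench sample, take the worst unit, and set an in-house limit well below
it. The sample values and the 50% derating factor below are
hypothetical, not anything stated in the thread.

# Minimal sketch of "test them and derate my own measurements".
# All numbers and the 50% derating policy are hypothetical placeholders.

measured_breakdown_v = [41.2, 39.8, 43.5, 40.1, 42.7, 38.9]  # volts, hypothetical bench data
derating_factor = 0.5                                        # assumed in-house policy

worst_case_v = min(measured_breakdown_v)
in_house_limit_v = worst_case_v * derating_factor

print(f"worst measured breakdown: {worst_case_v:.1f} V")
print(f"derated in-house limit:   {in_house_limit_v:.1f} V")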
>"Pimpom" wrote in message news:cAHxC.142146$cC7.99542@fx19.ams1...
>I'm designing a small simple circuit in which a MOSFET drives a
>low-power load. The very low frequency gate drive may, on rare
>occasions, exceed the max Vgs rating of 12V by about 1V, possibly 2V.
>
>There are a number of ways to limit the gate voltage but I want to
>avoid them and keep the circuit as simple as possible, and it won't
>cause a disaster if the transistor fails. What do you think?
There is "by the book" design and there is reality. Manufacturers have
to build in a safety factor. The probability that a MOSFET rated for
12V will blow at 14V is very low.

For example, a typical IC process for a "5.5V" rated Vgs (a nominal 5V
part at 10% tolerance) will typically spec a typical breakdown of
around 14V on the data sheet, with 7.5V as the minimum.

Even the ESD devices have to operate at a nominal voltage higher than
the rated voltage of the part, otherwise they might activate when the
system is sitting at its rated voltage, due to the tolerance of the
ESD protector.

So, don't worry, shit don't always happen...

--
Kevin Aylward
http://www.anasoft.co.uk - SuperSpice
http://www.kevinaylward.co.uk/ee/index.html
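A back-of-the-envelope way to see why a volt or two over the rating
rarely matters, using the example numbers above (typical breakdown
around 14V, data-sheet minimum 7.5V). Modelling breakdown as a normal
distribution, and treating the data-sheet minimum as a 4-sigma point,
are assumptions purely for illustration, not vendor data.

# Rough illustration of the breakdown-margin argument above.
from statistics import NormalDist

typ_breakdown_v = 14.0                              # "typical" breakdown from the example
min_breakdown_v = 7.5                               # data-sheet minimum from the example
sigma = (typ_breakdown_v - min_breakdown_v) / 4.0   # assume the minimum is ~4 sigma down

breakdown = NormalDist(mu=typ_breakdown_v, sigma=sigma)

for applied_v in (5.5, 7.0, 8.0, 10.0):
    frac_failing = breakdown.cdf(applied_v)   # est. fraction of parts with breakdown below this
    print(f"{applied_v:4.1f} V applied -> ~{frac_failing:.1e} of parts at or below breakdown")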
On 4/12/2018 10:29 AM, John Larkin wrote:
snip
> There's nothing incompetent about
Split the line so you couldn't miss it:
testing and understanding parts.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Never said anything about testing/qualifying parts and taking a
calculated risk that your samples will continue to perform beyond
their specification for the life of the product (or the vendor of
said product).

I suggested that it was irresponsible to encourage someone to IGNORE
a Vgs spec on some unspecified part in some vague application that
may or may not exceed the spec by one, maybe two,
whoknowswhatthehellthenumbermightbe volts.
> There are some opamps that I wouldn't run anywhere close to the data
> sheet abs max supply voltage, and there are caps that I run at 1/4
> spec sheet voltage, and others at twice.
>
> I
There's that word again...
tested

some polymer aluminum caps at 4x rated voltage, and at
> (totally unspecified) reverse voltage... cycled a bunch of them for
> over a month. Now I know.
>
> RF parts are woefully unspecified for time-domain applications. So I
See a trend here?
> test

them and
Oooo, be still my heart...
derate

my own measurements.
We appear to be in complete agreement. Pissing contest not required.

Now that we've established that testing is required, we can look at
the cost/benefit ratio.

YOU seem to work at or beyond the state of the art. Testing and
creative application of components can yield significantly better
performance that's worth the risk.

If the OP can buy a better FET, or a protection diode, for a penny or
three, how much testing can he afford? How much risk should he take?

I submit that the cost of engineering resources wasted on this thread
might have bought a lifetime supply of diodes.

We know nothing about the system that started this thread. A holistic
approach can often yield significant benefit.

I once attended a design review of two subsystems designed by
different engineers. It took about 10 seconds to ask the question,
"What happens if the assembler gets distracted and fails to plug in
that cable?" It took another ten minutes to get them to appreciate
that it stressed a component, and another ten minutes for them to
appreciate that I wasn't gonna allow release to manufacturing with
that design defect. I knew how to fix it, so I insisted that they
work together to achieve a solution. They were pissed, but eventually
realized that all they had to do was move a component from one board
to the other. And they became better engineers for the exercise.

Your holistic vision becomes more acute when you can expect your
manager to come around with clip leads and short a power supply to
ground. Or if there was a plug that could reach two sockets of the
same format, I was gonna plug it onto the wrong one and hit the power
switch. Cascading failures become less common. ;-)

I almost never got the call from manufacturing to send some engineers
over to clean up their mess.
"Kevin Aylward" <kevinRemovAT@kevinaylward.co.uk> wrote in message 
news:_72dnVx2OIDxIVLHnZ2dnUU7-Y3NnZ2d@giganews.com...
> [snip]
>
> Even the ESD devices have to operate at a nominal voltage higher than
> the rated voltage of the part, otherwise they might activate when the
> system is sitting at its rated voltage, due to the tolerance of the
> ESD protector.
>
> So, don't worry, shit don't always happen...
I've always wondered if manufacturers intend you to select TVSs based
on the max rating and that's that, or using a more responsible minmax
constraint. Common sense supposes that the average user will take the
simpler option, wrong though it may be...

Doesn't matter much for today's logic, what with TVS diodes being
useless under 5V. I suppose they would've worked just fine back in the
day: a "5V" TVS protecting TTL or HC CMOS (or a 12V TVS protecting
CD4000, but probably not a 15 or 18V TVS!).

Tim

--
Seven Transistor Labs, LLC
Electrical Engineering Consultation and Contract Design
Website: https://www.seventransistorlabs.com/
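A sketch of the minmax selection check being described: the TVS must
stay quiet at the highest normal rail voltage, yet clamp below the
protected part's absolute maximum. The part values below are
hypothetical, not from any particular data sheet.

# Sketch of a TVS min/max constraint check (hypothetical values).

def tvs_ok(rail_max_v, protected_abs_max_v, tvs_standoff_v, tvs_clamp_max_v):
    """True only if the TVS satisfies both ends of the constraint."""
    stays_quiet = tvs_standoff_v >= rail_max_v              # won't trigger in normal operation
    protects = tvs_clamp_max_v <= protected_abs_max_v       # clamps low enough during an event
    return stays_quiet and protects

# Hypothetical 5 V rail at +10% tolerance, logic abs-max of 7 V, and a
# nominal "5 V" TVS (5.0 V standoff, 9.2 V max clamp at rated current):
print(tvs_ok(rail_max_v=5.5, protected_abs_max_v=7.0,
             tvs_standoff_v=5.0, tvs_clamp_max_v=9.2))
# -> False: it fails both checks, which is the "useless under 5 V" complaint.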
On Thu, 12 Apr 2018 14:37:47 -0700, mike <ham789@netzero.net> wrote:

>On 4/12/2018 10:29 AM, John Larkin wrote:
>
>[snip]
>
>I suggested that it was irresponsible to encourage someone
>to IGNORE a Vgs spec on some unspecified part in some vague
>application that may or may not exceed the spec by one, maybe two,
>whoknowswhatthehellthenumbermightbe volts.
>
>[snip]
If a mosfet is rated for 12 volts max Vgs, I am completely confident
that an occasional excursion to 14 or 15 will not affect reliability.
As I noted, mosfet gates tend to blow out around 70 volts.

I would not run an aluminum electrolytic past specified Vmax, even
after testing some. Bad stuff can happen long-term.

--
John Larkin   Highland Technology, Inc
picosecond timing   precision measurement
jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
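The arithmetic behind that confidence, using only the figures quoted in
the post (12V rating, 14-15V excursion, roughly 70V observed blow-out);
the comparison itself is just illustrative.

# Compare the excursion to the data-sheet rating and to the reported failure level.
rated_vgs_v = 12.0   # V, data-sheet maximum from the example
excursion_v = 15.0   # V, worst occasional excursion discussed
blowout_v = 70.0     # V, rough level at which gates reportedly fail

print(f"excursion vs. rating : {excursion_v / rated_vgs_v:.2f}x the data-sheet max")
print(f"excursion vs. blowout: {excursion_v / blowout_v:.2f}x the observed failure level")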
On 4/12/2018 2:59 PM, John Larkin wrote:
> On Thu, 12 Apr 2018 14:37:47 -0700, mike <ham789@netzero.net> wrote:
>
> [snip]
>
> If a mosfet is rated for 12 volts max Vgs, I am completely confident
> that an occasional excursion to 14 or 15 will not affect reliability.
>
> As I noted, mosfet gates tend to blow out around 70 volts.
That's good information, but I'd still verify by using the actual parts.
> I would not run an aluminum electrolytic past specified Vmax, even
> after testing some. Bad stuff can happen long-term.
Sounds like you have a good mindset for an engineer.

I'm bitching about general statements implying that you can ignore
specs without testing on the ACTUAL parts you're gonna use.

Nobody has mentioned binning. Many manufacturing processes have great
variability, and the parts that test better can be sold at a higher
price as a different part number, or selected from the same part
number. What happens when the vendor decides to obsolete a process
and produce the part on a different machine that still meets the
specs? Or someone else special-orders high-spec parts. Or my
favorite... some purchasing agent decides to save half a cent.

Back in the day, when the ink on my diploma was still wet, I needed a
1% resistor for a test. I had a few dozen carbon resistors in the
drawer and expected I'd be able to select one close enough for the
test. Well, there weren't any. So I went to the engineering stockroom
and borrowed the whole supply of them. There were far fewer than a
bell-shaped curve would suggest. So I took a few minutes to bin a
bunch. There was a deep hole in the distribution around 5%. "So
that's where 5% resistors come from." That was my first realization
that expecting parts to be better than spec was risky. Browsing the
approved parts catalog confirmed that we had quite a few parts under
different internal part numbers, selected by the vendor for specific
characteristics. And the same hole in the distribution showed up in
the generic ones.

What would you find if you ordered a bunch of 2N3904 transistors from
different sources? I'd bet that most of them exceed specification,
but in very different ways.

If you're lucky, the parts you get tomorrow will still work in the
design you did yesterday. The probability of that goes way up when
you stay within ALL the specifications. Don't worry, there will still
be plenty of unspecified issues to keep you busy.

I believe that calculated risks are a critical component of a
business strategy, but only if you actually do the calculations.
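A quick Monte Carlo sketch of the binning effect in the resistor
story: pull the tightest parts out of a production run as a premium
grade, and the leftover wide-tolerance population ends up with a hole
in the middle of its distribution. The normal spread used here is
assumed purely for illustration.

# Illustration of how vendor binning hollows out the generic population.
import random

random.seed(0)
nominal_ohms = 1000.0
population = [random.gauss(nominal_ohms, 0.02 * nominal_ohms) for _ in range(100_000)]

tight_bin = [r for r in population if abs(r - nominal_ohms) / nominal_ohms <= 0.01]
leftover = [r for r in population if abs(r - nominal_ohms) / nominal_ohms > 0.01]

still_tight = sum(abs(r - nominal_ohms) / nominal_ohms <= 0.01 for r in leftover)
print(f"pulled as the premium bin: {len(tight_bin)}")
print(f"left in the generic bin:   {len(leftover)}")
print(f"generic parts within 1%:   {still_tight}")   # zero -- the hole in the distribution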
On Thursday, April 12, 2018 at 5:38:53 PM UTC-4, mike wrote:
> [snip]
>
> Your holistic vision becomes more acute when you can expect your
> manager to come around with clip leads and short a power supply to
> ground. Or if there was a plug that could reach two sockets of the
> same format, I was gonna plug it onto the wrong one and hit the power
> switch. Cascading failures become less common. ;-)
> I almost never got the call from manufacturing to send some engineers
> over to clean up their mess.
Nice stories; all the ways things can f'up should be talked about
more. Plugging everything into every other similar plug is how I
discovered that you can fry an LM395 with a big inductor (8" OD
Helmholtz coils, ~10 ohms, 100 mT at 3 A... I think). Well, students
at college physics labs discovered it. Fortunately, our customers are
mostly happy to do field repairs and updates.

George H.
On Thu, 12 Apr 2018 16:11:09 -0700, mike <ham789@netzero.net> wrote:

>On 4/12/2018 2:59 PM, John Larkin wrote:
>[snip]
>>
>> If a mosfet is rated for 12 volts max Vgs, I am completely confident
>> that an occasional excursion to 14 or 15 will not affect reliability.
>>
>> As I noted, mosfet gates tend to blow out around 70 volts.
>That's good information, but I'd still verify by using the actual parts.
>>
>> I would not run an aluminum electrolytic past specified Vmax, even
>> after testing some. Bad stuff can happen long-term.
>
>Sounds like you have a good mindset for an engineer.
I'm sure John, and his customers, are tickled pink that you approve of his mindset.
>I'm bitching about general statements implying that you can ignore
>specs without testing on the ACTUAL parts you're gonna use.
But you just said that John had a good mindset for an engineer (or was
that a pejorative?). That's exactly what he does. He doesn't test
every device that goes on the board; rather, he'll test a sample of
parts and, if they exceed the specs, use that information in the
design. He doesn't use *those* parts (they're probably toast) in the
product.
>Nobody has mentioned binning.
>Many manufacturing processes have great variability. And the
>parts that test better can be sold at a higher price as a different
>part number, or selected from the same part number.
>What happens when the vendor decides to obsolete a process
>and produce the part on a different machine that still meets
>the specs? Or someone else special orders high spec parts.
>Or my favorite...some purchasing agent decides to save half a cent.
How would a "purchasing agent" change the specs on a part?
>Back in the day, when the ink on my diploma was still wet,
>I needed a 1% resistor for a test.
>[snip]
>So, I took a few minutes to bin a bunch. There was a deep hole
>in the distribution around 5%. "So that's where 5% resistors come from."
>That was my first realization that expecting parts to be better than
>spec was risky.
>[snip]
They didn't teach you much in school. Of *course* parts are binned. I bet you didn't take the temperature coefficient into account in your design.
>What would you find if you ordered a bunch of 2N3904 transistors
>from different sources? I'd bet that most of them exceed specification,
>but in very different ways.
But John would test a bunch from one supplier and figure out where they exceeded the spec and use that information.
>If you're lucky, the parts you get tomorrow will still work in the
>design you did yesterday. The probability of that goes way up
>when you stay within ALL the specifications. Don't worry,
>there will still be plenty of unspecified issues to keep you busy.
>
>I believe that calculated risks are a critical component
>of a business strategy,
>but only if you actually do the calculations.
John makes high-value, low-volume products that push the performance
envelope. His needs are apparently different from yours. I'm on the
other end of that spectrum, too. We worry about making a million
widgets a year, for five years or more. It's a very different world.