
bit about transistor cost

Started by Unknown December 6, 2021
On 07/12/2021 13:47, Dimiter_Popoff wrote:
> On 12/7/2021 3:40, Sylvia Else wrote:
>> On 07-Dec-21 12:14 pm, Anthony William Sloman wrote:
>>>
>>> The rest of us exploit those chips to serve different - smaller, but
>>> more numerous - markets. We don't need few-nm chips to do that, but
>>> if we can buy one more or less suitable for our application, we will
>>> do it, because it's going to be faster and use less current than its
>>> predecessor. The interface to produce the outputs we can sell is
>>> always a mess, but it's been like that forever.
>>>
>>
>> A few nm is not many silicon atoms, so I have to wonder about the
>> longevity of these chips.
>>
>> People generally may recycle their phones every couple of years
>> (though I don't), and manufacturers may be willing just to replace
>> those that die during the warranty period, but for most things one
>> wants the electronics to work for a reasonable time.
I think it depends a lot on how warm you run them. It may ultimately put a hard limit on just how small the features can go before chip lifetime becomes a serious problem for fast machines doing heavy computation. It is astonishing how fine the features have become on modern chips.
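To put a rough number on the temperature point (a sketch only: the 0.7 eV activation energy is a generic ballpark for thermally activated silicon wear-out mechanisms, not a figure for any particular chip or process), the usual Arrhenius acceleration factor looks like this:

# Illustrative Arrhenius acceleration factor for thermally activated
# wear-out. 0.7 eV is an assumed, commonly used ballpark value.
import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV per kelvin

def acceleration_factor(t_cool_c, t_hot_c, ea_ev=0.7):
    """How much faster a thermally activated failure mechanism runs at t_hot_c than at t_cool_c."""
    t_cool = t_cool_c + 273.15
    t_hot = t_hot_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1 / t_cool - 1 / t_hot))

for hot in (60, 80, 100):
    print(f"45 C -> {hot} C: wear-out roughly {acceleration_factor(45, hot):.1f}x faster")

With those assumptions, running a die at 80 C rather than 45 C speeds up wear-out by roughly an order of magnitude, which is why sustained heavy computation is the worst case.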
> I think I saw something somewhere about longevity, figures were not
> great (if my memory is real, far from being sure).
My instinct is that the OLEDs in the displays will quite likely be the weakest link in the chain rather than the silicon CPU itself. The chemistry of light emission and living in direct sunlight both take their toll.
> My older phone lasted for 5 years and its micro-USB got broken so
> I replaced the phone.
> The current one is 4 years old and still works, though
> about a year (or was it 18 months) ago its battery got swollen
> (bad micro USB again, probably it damaged the battery by perpetual
> power cycling) but I managed to buy locally both the connector and
> a new battery at some negligible cost and replaced these so it still
> works.
Batteries again are complex chemistry and so prone to premature failure, especially if you don't look after them quite right. My laptops tend to kill their batteries by being used as portable desktops more often than not and left on power crunching numbers. Speed is maximised when on mains power, but it slowly damages the battery.

--
Regards,
Martin Brown
On Monday, December 6, 2021 at 8:14:43 PM UTC-5, Martin Brown wrote:
>
> Likewise with PC's. I'm in the market for a new one right now but I'm
> not convinced that any of them offer single threaded performance that is
> 3x better than the ancient i7-3770 I have now. That has always been my
> upgrade heuristic (used to be every 3 years). Clock speeds have maxed
> out and now they are adding more cores (many of which are idle most of
> the time). Performance cores and efficient cores is the new selling
> point. It looks on paper like the i5-12600K might just pass this test.
I bought an i5 machine and it was a real dog. I said something to the effect that they ran out of ways to add transistors to improve the speed of CPUs a few years ago, and someone listed a number of architectural improvements they've added for a 20-30% boost.

It has been quite some time since you could expect significant speed improvements by adding transistors or faster clock speeds. I think it was the Pentium 4 where the clock rate peaked at about 3 GHz by adding pipeline stages for shorter gate delays. But the cost of pipeline stalls pretty much negated that advantage. I believe people could overclock the Pentium 3 to run faster than the 4.
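For what it's worth, a back-of-the-envelope sketch of why ever-deeper pipelines stopped paying off; the stage counts, branch frequency and misprediction rate below are illustrative assumptions, not measured Pentium data:

# Rough model: a deeper pipeline lets you raise the clock, but every
# mispredicted branch costs more cycles to refill the pipe.
# All numbers below are illustrative assumptions, not measurements.

def effective_speed(stages, base_stages=10, base_ghz=1.4,
                    branch_freq=0.2, mispredict_rate=0.1):
    """Return relative instruction throughput for a given pipeline depth."""
    clock_ghz = base_ghz * (stages / base_stages)   # clock scales with depth
    stall_penalty = stages                          # cycles lost per mispredict
    cycles_per_instr = 1 + branch_freq * mispredict_rate * stall_penalty
    return clock_ghz / cycles_per_instr             # instructions per ns, roughly

for stages in (10, 20, 31):   # roughly P6-, Willamette- and Prescott-class depths
    print(f"{stages:2d} stages -> relative throughput {effective_speed(stages):.2f}")

Tripling the pipeline depth triples the clock in this toy model, but the misprediction penalty grows with it, so throughput rises far more slowly; cache misses make the real picture worse still.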
> It may yet swing the other way when simulations are so good that the
> conversion to masks is essentially error free. Where it gets tricky is
> when the AI is designing new chips for us that no-one understands.
>
> This year's BBC Reith lectures are about the rise of AI and the future, by
> Stuart Russell of Berkeley (starts this Wednesday).
>
> https://www.bbc.co.uk/programmes/articles/1N0w5NcK27Tt041LPVLZ51k/reith-lectures-2021-living-with-artificial-intelligence
> It is still at least partially holding for number density of transistors
> if not for actual computing performance.
It was never about performance, it was just the number of transistors doubling every 18 to 24 months.
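A quick sanity check on that doubling claim (the 1971 starting point of roughly 2,300 transistors for the Intel 4004 is the commonly quoted figure; the projections are purely illustrative):

# Sanity check on "the number of transistors doubling every 18 to 24 months".
# Starting point (~2,300 transistors, 1971) is the usual textbook figure.

START_YEAR, START_COUNT = 1971, 2300

def projected_count(year, doubling_months):
    doublings = (year - START_YEAR) * 12 / doubling_months
    return START_COUNT * 2 ** doublings

for year in (1981, 2001, 2021):
    lo = projected_count(year, 24)   # doubling every 24 months
    hi = projected_count(year, 18)   # doubling every 18 months
    print(f"{year}: {lo:,.0f} to {hi:,.0f} transistors per chip")

The 24-month curve lands near the tens of billions of transistors seen on the biggest 2021 dies; the 18-month curve overshoots by orders of magnitude, which is why the longer doubling period is the one that has roughly held.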
> We must be very close to the
> limits where quantum effects mess things up (but 3D stacks allow some
> alternative ways of gaining number density on a chip).
We keep hearing that the limit is just ahead, and they keep finding ways of working around it. I'm flabbergasted they have reached single-digit nm. How big are silicon atoms? The coronavirus is 70 nm or more across; we could build a whole bunch of transistors on one virus.

I seem to recall a Ball Semiconductor which wanted to print ICs on balls. I don't recall the advantages. They ended up providing some services from the processing technologies they developed.
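Putting rough numbers on the atom question (standard textbook values for silicon; the smaller feature sizes are just example node labels, and modern node names are marketing figures rather than a literal gate length, so treat them as scale only):

# Quick scale check for the "how big are silicon atoms?" question.
# Lattice constant and covalent radius are standard textbook values;
# the feature sizes are example labels, not measured gate dimensions.

SI_LATTICE_NM = 0.543       # silicon lattice constant
SI_ATOM_DIAMETER_NM = 0.22  # roughly twice the covalent radius (~0.11 nm)

for feature_nm in (70, 14, 5, 3):
    cells = feature_nm / SI_LATTICE_NM
    atoms = feature_nm / SI_ATOM_DIAMETER_NM
    print(f"{feature_nm:3d} nm ~ {cells:5.1f} unit cells, ~{atoms:5.1f} atom diameters")

So a 5 nm feature is only about twenty atoms across, while the 70 nm virus is a few hundred; there genuinely isn't much room left below single digits.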
> It was originally specified in terms of transistors per chip.
Yeah, it was just an observation back when the number was still in the thousands. It is interesting that the advancement wasn't a lot faster in the earlier stages, but that may have had to do with finances, since the semiconductor market was so much smaller then. Less R&D money available.

I think the physics has been developed along with the miniaturization push. They couldn't go faster because they didn't have the science to design smaller transistors at any given point. They needed to build smaller transistors to study before they could put them into production. I took one semiconductor course and that's basically what the guy said. They first had heuristics which let them build devices, and the understanding came as they worked with them.

--
Rick C.

-- Get 1,000 miles of free Supercharging
-- Tesla referral code - https://ts.la/richard11209
On Tuesday, December 7, 2021 at 11:32:09 PM UTC+1, gnuarm.del...@gmail.com wrote:
> On Monday, December 6, 2021 at 8:14:43 PM UTC-5, Martin Brown wrote:
> >
> > Likewise with PC's. I'm in the market for a new one right now but I'm
> > not convinced that any of them offer single threaded performance that is
> > 3x better than the ancient i7-3770 I have now. That has always been my
> > upgrade heuristic (used to be every 3 years). Clock speeds have maxed
> > out and now they are adding more cores (many of which are idle most of
> > the time). Performance cores and efficient cores is the new selling
> > point. It looks on paper like the i5-12600K might just pass this test.
> I bought an i5 machine and it was a real dog. I said something to the effect that they ran out of ways to add transistors to improve the speed of CPUs a few years ago and someone listed a number of architectural improvements they've added for a 20-30% boost.
>
> It has been quite some time since you could expect significant speed improvements by adding transistors or faster clock speeds. I think it was the Pentium 4 where the clock rate peaked at about 3 GHz by adding pipeline stages for shorter gate delays. But the cost of pipeline stalls pretty much mitigated that advantage. I believe people could overclock the Pentium 3 to run faster than the 4.
>
https://www.cpubenchmark.net/compare/Intel-i7-3770-vs-Intel-i9-12900KF-vs-Intel-Pentium-4-3.60GHz/896vs4611vs1079
On Mon, 06 Dec 2021 09:42:29 -0800, jlarkin@highlandsniptechnology.com
wrote:

> https://www.fabricatedknowledge.com/p/the-rising-tide-of-semiconductor
>
> I've also heard that the cost of one next-gen euv scanner is well over
> $200M, and that the design and mask set for a high-end chip costs a
> billion dollars.
>
> We just don't need few-nm chips.
Aside from smart phones and maybe some IoT stuff in the future, I think you're right. The penalty is a bit of power and speed. Of course it's possible some amazing new market will come along out of left field that will generate demand, but it's hard to imagine something brand new on the global scale of smart phones (~1.5bn units/year) that is also power-sensitive.

I think the mask costs are more of the order of $1M or $2M, not including design, obviously.

--
Best regards,
Spehro Pefhany
Rick C wrote:
> On Monday, December 6, 2021 at 8:14:43 PM UTC-5, Martin Brown wrote:
>>
>> Likewise with PC's. I'm in the market for a new one right now but I'm
>> not convinced that any of them offer single threaded performance that is
>> 3x better than the ancient i7-3770 I have now. That has always been my
>> upgrade heuristic (used to be every 3 years). Clock speeds have maxed
>> out and now they are adding more cores (many of which are idle most of
>> the time). Performance cores and efficient cores is the new selling
>> point. It looks on paper like the i5-12600K might just pass this test.
>
> I bought an i5 machine and it was a real dog. I said something to the effect that they ran out of ways to add transistors to improve the speed of CPUs a few years ago and someone listed a number of architectural improvements they've added for a 20-30% boost.
>
> It has been quite some time since you could expect significant speed improvements by adding transistors or faster clock speeds. I think it was the Pentium 4 where the clock rate peaked at about 3 GHz by adding pipeline stages for shorter gate delays. But the cost of pipeline stalls pretty much mitigated that advantage. I believe people could overclock the Pentium 3 to run faster than the 4.
>
>
>> It may yet swing the other way when simulations are so good that the
>> conversion to masks is essentially error free. Where it gets tricky is
>> when the AI is designing new chips for us that no-one understands.
>>
>> This year's BBC Reith lectures are about the rise of AI and the future by
>> Stuart Russell of Berkeley (starts this Wednesday).
>>
>> https://www.bbc.co.uk/programmes/articles/1N0w5NcK27Tt041LPVLZ51k/reith-lectures-2021-living-with-artificial-intelligence
>
>> It is still at least partially holding for number density of transistors
>> if not for actual computing performance.
>
> It was never about performance, it was just the number of transistors doubling every 18 to 24 months.
Not so. Back in the Mead-Conway-Dennard days (late 1970s to early 2000s), everything was driven by litho. Once the litho folks figured out how to make the next node with decent yield, you were golden--the materials system was almost unchanged (apart from going to copper wiring), and the speed and power consumption per transistor improved automatically as the feature sizes got smaller, so the power density stayed pretty well constant and the performance went up roughly quadratically.

The 65-nm node was about where that ended, and of course analogue performance peaked around 0.5 um to 0.18 um depending on what you care about.

Since then, transistors have been getting slower, leakier, and noisier with each node. BITD you could leave all of the logic on your processor running all the time. Now if you did that, it'd overheat rapidly. We live in the Dark Silicon era--lots of huge chips, most parts of which are powered down most of the time.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
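A compact restatement of the constant-field (Dennard) scaling rules described in the post above, as the idealized first-order textbook model rather than data for any real process node:

# First-order Dennard (constant-field) scaling: shrink linear dimensions
# and supply voltage by a factor k, and to first order power density stays
# constant while per-area performance rises. Idealized textbook relations,
# not figures for any real process.

def dennard_scale(k):
    """k > 1 is the shrink factor between nodes (e.g. k = 1.4 per node)."""
    return {
        "transistor density":  k ** 2,      # area per device falls as 1/k^2
        "gate delay":          1 / k,       # so frequency rises ~k
        "power per device":    1 / k ** 2,  # C*V^2*f with C~1/k, V~1/k, f~k
        "power density":       1.0,         # the two effects cancel
        "per-area throughput": k ** 3,      # k^2 more devices, each ~k faster
    }

for name, factor in dennard_scale(1.4).items():
    print(f"{name:>20}: x{factor:.2f}")

The "performance up roughly quadratically at constant power density" era was exactly this cancellation; once voltage could no longer be scaled down with the geometry, the power-density term stopped cancelling, which is what pushed designs into the dark-silicon regime.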
On Tuesday, December 7, 2021 at 6:21:37 PM UTC-5, lang...@fonz.dk wrote:
> On Tuesday, December 7, 2021 at 11:32:09 PM UTC+1, gnuarm.del...@gmail.com wrote:
> > On Monday, December 6, 2021 at 8:14:43 PM UTC-5, Martin Brown wrote:
> > >
> > > Likewise with PC's. I'm in the market for a new one right now but I'm
> > > not convinced that any of them offer single threaded performance that is
> > > 3x better than the ancient i7-3770 I have now. That has always been my
> > > upgrade heuristic (used to be every 3 years). Clock speeds have maxed
> > > out and now they are adding more cores (many of which are idle most of
> > > the time). Performance cores and efficient cores is the new selling
> > > point. It looks on paper like the i5-12600K might just pass this test.
> > I bought an i5 machine and it was a real dog. I said something to the effect that they ran out of ways to add transistors to improve the speed of CPUs a few years ago and someone listed a number of architectural improvements they've added for a 20-30% boost.
> >
> > It has been quite some time since you could expect significant speed improvements by adding transistors or faster clock speeds. I think it was the Pentium 4 where the clock rate peaked at about 3 GHz by adding pipeline stages for shorter gate delays. But the cost of pipeline stalls pretty much mitigated that advantage. I believe people could overclock the Pentium 3 to run faster than the 4.
> >
> https://www.cpubenchmark.net/compare/Intel-i7-3770-vs-Intel-i9-12900KF-vs-Intel-Pentium-4-3.60GHz/896vs4611vs1079
Not sure if you wanted to say something about this page?

CPU                            First Seen on Chart   Single Thread Rating
Intel Core i7-3770 @ 3.40GHz   Q1 2012               2074
Intel Core i9-12900KF          Q4 2021               4229
Intel Pentium 4 3.60GHz        Q4 2008                605

From 2008 to 2012 the single-thread rating increased more than 3-fold. From 2012 to 2021 it increased barely 2-fold. The last 9 years provided less improvement than the previous 4 years.

--
Rick C.

-+ Get 1,000 miles of free Supercharging
-+ Tesla referral code - https://ts.la/richard11209
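Turning those ratings into annualized rates (using only the figures and chart dates quoted above, with quarters approximated as fractional years):

# Annualized single-thread improvement from the ratings quoted above.
# Chart dates approximated: Q4 2008 -> 2008.75, Q1 2012 -> 2012.0, Q4 2021 -> 2021.75.

ratings = {2008.75: 605, 2012.0: 2074, 2021.75: 4229}

points = sorted(ratings.items())
for (y0, r0), (y1, r1) in zip(points, points[1:]):
    years = y1 - y0
    ratio = r1 / r0
    annual = ratio ** (1 / years) - 1
    print(f"{y0:.0f} -> {y1:.0f}: {ratio:.2f}x overall, ~{annual:.0%} per year")

Roughly 45% per year of single-thread gain up to 2012, and under 10% per year since, which is the slowdown being argued about in this thread.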
On Tuesday, December 7, 2021 at 8:42:18 PM UTC-5, Phil Hobbs wrote:
> Rick C wrote:
> > On Monday, December 6, 2021 at 8:14:43 PM UTC-5, Martin Brown wrote:
> >>
> >> Likewise with PC's. I'm in the market for a new one right now but I'm
> >> not convinced that any of them offer single threaded performance that is
> >> 3x better than the ancient i7-3770 I have now. That has always been my
> >> upgrade heuristic (used to be every 3 years). Clock speeds have maxed
> >> out and now they are adding more cores (many of which are idle most of
> >> the time). Performance cores and efficient cores is the new selling
> >> point. It looks on paper like the i5-12600K might just pass this test.
> >
> > I bought an i5 machine and it was a real dog. I said something to the effect that they ran out of ways to add transistors to improve the speed of CPUs a few years ago and someone listed a number of architectural improvements they've added for a 20-30% boost.
> >
> > It has been quite some time since you could expect significant speed improvements by adding transistors or faster clock speeds. I think it was the Pentium 4 where the clock rate peaked at about 3 GHz by adding pipeline stages for shorter gate delays. But the cost of pipeline stalls pretty much mitigated that advantage. I believe people could overclock the Pentium 3 to run faster than the 4.
> >
> >
> >> It may yet swing the other way when simulations are so good that the
> >> conversion to masks is essentially error free. Where it gets tricky is
> >> when the AI is designing new chips for us that no-one understands.
> >>
> >> This year's BBC Reith lectures are about the rise of AI and the future by
> >> Stuart Russell of Berkeley (starts this Wednesday).
> >>
> >> https://www.bbc.co.uk/programmes/articles/1N0w5NcK27Tt041LPVLZ51k/reith-lectures-2021-living-with-artificial-intelligence
> >
> >> It is still at least partially holding for number density of transistors
> >> if not for actual computing performance.
> >
> > It was never about performance, it was just the number of transistors doubling every 18 to 24 months.
> Not so.
You don't understand what I said.
> Back in the Mead-Conway-Dennard days (late 1970s to early
> 2000s), everything was driven by litho. Once the litho folks figured
> out how to make the next node with decent yield, you were golden--the
> materials system was almost unchanged (apart from going to copper
> wiring), and the speed and power consumption per transistor improved
> automatically as the feature sizes got smaller, so the power density
> stayed pretty well constant and the performance went up roughly
> quadratically.
>
> The 65-nm node was about where that ended, and of course analogue
> performance peaked around 0.5 um to 0.18 um depending on what you care
> about.
>
> Since then, transistors have been getting slower, leakier, and noisier
> with each node. BITD you could leave all of the logic on your processor
> running all the time. Now if you did that, it'd overheat rapidly. We
> live in the Dark Silicon era--lots of huge chips, most parts of which
> are powered down most of the time.
All of that may be true, but Moore's observation was simply about the trend in the number of transistors on a die. That's all.

None of it matters. Larkin has deemed improvements in lithography to be pointless. So I expect the industry will stop advancing immediately. They just needed someone to explain things to them.

--
Rick C.

+- Get 1,000 miles of free Supercharging
+- Tesla referral code - https://ts.la/richard11209
On Wednesday, December 8, 2021 at 12:39:28 AM UTC+11, Dimiter Popoff wrote:
> On 12/7/2021 3:25, Anthony William Sloman wrote:
> > On Tuesday, December 7, 2021 at 12:14:43 PM UTC+11, Martin Brown wrote:
> >> On 06/12/2021 19:04, Dimiter_Popoff wrote:
> >>> On 12/6/2021 20:47, John Larkin wrote:
> >>>> On Mon, 6 Dec 2021 20:36:17 +0200, Dimiter_Popoff <d...@tgi-sci.com> wrote:
> >>>>> On 12/6/2021 19:42, jla...@highlandsniptechnology.com wrote:
> >
> > <snip>
> >
> >> It may yet swing the other way when simulations are so good that the conversion to masks is essentially error free.
> >
> > That happened around 1990. The electron beam tester I was working on back then was the next generation of a unit which famously trimmed three months off the development time of the Motorola 68k processor chip set.
> >
> > The project wasn't canned because our machine didn't work - we did get it working quite well enough to demonstrate that it was an order of magnitude faster than its predecessor - but because simulation had got good enough that most mask sets produced chips that worked.
> >
> > The older, slower, machines were quite fast enough to check out that the simulation software was predicting what actually happened on the chip and that killed our market.
>
> It is a shame such advanced machinery has been lost (or did it
> survive for some niche applications?).
It didn't, or at least not that I know of. We had a couple of working prototypes, but they did depend on Gigabit Logic's GaAs integrated circuits, and Gigabit got merged with a couple of other GaAs suppliers at the same time, partly because they couldn't produce the logic with a decent yield.

If the machine had gone into production it would probably have had to be re-worked within a year or so to use Motorola/ON-Semiconductor ECLinPS parts for the quick bits (which would probably have performed better in consequence).

--
Bill Sloman, Sydney
John Larkin wrote:
> On Mon, 6 Dec 2021 20:36:17 +0200, Dimiter_Popoff <dp@tgi-sci.com>
> wrote:
>
>> On 12/6/2021 19:42, jlarkin@highlandsniptechnology.com wrote:
>>> https://www.fabricatedknowledge.com/p/the-rising-tide-of-semiconductor
>>>
>>> I've also heard that the cost of one next-gen euv scanner is well
>>> over $200M, and that the design and mask set for a high-end chip
>>> costs a billion dollars.
>>>
>>> We just don't need few-nm chips.
>>>
>>
>> Gradually electronics design without having access to a silicon
>> factory becomes useless, hopefully the process is slow enough so we
>> don't see that in full.
>> Sort of like nowadays you can somehow master an internal combustion
>> engine if you have a lathe and a milling machine but you have no
>> chance to make it comparable to those car makers make, not to speak
>> about cost.
>
> Some things have got good enough. Hammers, spoons, beds, LED lights,
> microwave ovens. Moore's Law can't go on forever, and is probably at
> or in some cases past its practical limit.
Emissions requirements used to get tighter every year until it served no purpose to make 1000 cars emit less than one BBQ. Then they started subsidizing EV's. When everyone has an EV - or actually when no one has an ICE and a few have EV's and most have to take a bus - then they'll force us to change to something else.
> We don't need 3 nm chips to text and twitter. I can't imagine my cell
> phone needing to be better hardware.
First there was a century of advances in transportation, then it was communication. When something is good enough we have to find something else to make better and nobody has found it yet.

--
Defund the Thought Police
Andiamo Brandon!
On Tue, 7 Dec 2021 22:56:47 -0500, "Tom Del Rosso"
<fizzbintuesday@that-google-mail-domain.com> wrote:

> John Larkin wrote:
>> On Mon, 6 Dec 2021 20:36:17 +0200, Dimiter_Popoff <dp@tgi-sci.com>
>> wrote:
>>
>>> On 12/6/2021 19:42, jlarkin@highlandsniptechnology.com wrote:
>>>> https://www.fabricatedknowledge.com/p/the-rising-tide-of-semiconductor
>>>>
>>>> I've also heard that the cost of one next-gen euv scanner is well
>>>> over $200M, and that the design and mask set for a high-end chip
>>>> costs a billion dollars.
>>>>
>>>> We just don't need few-nm chips.
>>>>
>>>
>>> Gradually electronics design without having access to a silicon
>>> factory becomes useless, hopefully the process is slow enough so we
>>> don't see that in full.
>>> Sort of like nowadays you can somehow master an internal combustion
>>> engine if you have a lathe and a milling machine but you have no
>>> chance to make it comparable to those car makers make, not to speak
>>> about cost.
>>
>> Some things have got good enough. Hammers, spoons, beds, LED lights,
>> microwave ovens. Moore's Law can't go on forever, and is probably at
>> or in some cases past its practical limit.
>
> Emissions requirements used to get tighter every year until it served no
> purpose to make 1000 cars emit less than one BBQ. Then they started
> subsidizing EV's. When everyone has an EV - or actually when no one has
> an ICE and a few have EV's and most have to take a bus - then they'll
> force us to change to something else.
>
>> We don't need 3 nm chips to text and twitter. I can't imagine my cell
>> phone needing to be better hardware.
>
> First there was a century of advances in transportation, then it was
> communication. When something is good enough we have to find something
> else to make better and nobody has found it yet.
The giant advances will be in biology.

--
Father Brown's figure remained quite dark and still; but in that instant
he had lost his head. His head was always most valuable when he had lost it.