Electronics-Related.com
Forums

supercomputer progress

Started by Unknown April 26, 2022
On Tue, 26 Apr 2022 13:53:08 -0700, John Larkin
<jlarkin@highland_atwork_technology.com> wrote:

>On Tue, 26 Apr 2022 12:04:44 -0700, John Robertson <spam@flippers.com>
>wrote:
>
>>
>>On 2022/04/26 8:44 a.m., jlarkin@highlandsniptechnology.com wrote:
>>> Lawrence Berkeley Lab announced the results from a new supercomputer
>>> analysis of climate change. They analyzed five west coast "extreme
>>> storms" from 1982 to 2014.
>>
>>https://www.greenbiz.com/article/berkeley-lab-tensilica-collaborate-energy-efficient-climate-modeling-supercomputer
>>
>>---------<quote>-----------------
>>Lawrence Berkeley National Laboratory scientists are looking to make
>>highly detailed, 1 kilometer scale cloud models to improve climate
>>predictions. Using current supercomputer designs of combining
>>microprocessors used in personal computers, a system capable of making
>>such models would cost about $1 billion and use up 200 megawatts of
>>energy. A supercomputer using 20 million embedded processors, on the
>>other hand, would cost about $75 million and use less than 4 megawatts
>>of energy, according to Lawrence Berkeley National Laboratory researchers.
>>-------------<end quote>--------------
>>
>>4 megawatts/200 megawatts - do the computers factor in their heat
>>generation in the climate models?
>>
>>John ;-#)#
>
>Does LBL measure energy in megawatts?
>
>Do bigger computers predict climate better?
>
>Oh dear.
I think the jury has already returned: there is climate change/global warming, and it is probably already too late to do much about it, given the short time countries and people have to react. Especially with all the global-warming denialists who don't care about it, and the current state of the art and science of generating non-greenhouse-gas energy. I suppose I won't be around to see how bad it will get, which could be a good thing.

I would love to have a super computer to run LTspice.

boB
On Thu, 28 Apr 2022 09:26:40 -0700, boB <boB@K7IQ.com> wrote:

>On Tue, 26 Apr 2022 13:53:08 -0700, John Larkin
><jlarkin@highland_atwork_technology.com> wrote:
>
>[...]
>
>>Does LBL measure energy in megawatts?
>>
>>Do bigger computers predict climate better?
>>
>>Oh dear.
>
>I think the jury has already returned that there is climate
>change/global warming and it is probably already too late to do much
>about it with the short time needed for countries and people to react.
At last! We'll all be dead in 8 years.

I'd rather be drowned or blown away than bored to death.

--

Anybody can count to one.
- Robert Widlar
On 4/28/22 11:26, boB wrote:

> I would love to have a super computer to run LTspice.
>
I thought one of the problems with LTspice (and spice in general) performance is that the algorithms don't parallelize very well.
On 2022-04-28 18:26, boB wrote:
[...]
> I would love to have a super computer to run LTspice.
>
> boB
>
In fact, what you have on your desk *is* a super computer, in the 1970's meaning of the words. It's just that it's bogged down running bloatware.

Jeroen Belleman
On Thu, 28 Apr 2022 12:01:59 -0500, Dennis <dennis@none.none> wrote:

>On 4/28/22 11:26, boB wrote:
>
>> I would love to have a super computer to run LTspice.
>>
>I thought one of the problems with LTspice (and spice in general)
>performance is that the algorithms don't parallelize very well.
LT runs on multiple cores now. I'd love the next gen LT Spice to run on an Nvidia card. 100x at least.

--

If a man will begin with certainties, he shall end with doubts, but if he will be content to begin with doubts he shall end in certainties.
Francis Bacon
On Thursday, April 28, 2022 at 1:47:11 PM UTC-4, Jeroen Belleman wrote:
> On 2022-04-28 18:26, boB wrote:
> [...]
>> I would love to have a super computer to run LTspice.
>>
>> boB
>>
> In fact, what you have on your desk *is* a super computer,
> in the 1970's meaning of the words. It's just that it's
> bogged down running bloatware.
Even supercomputers from the 80s were not as fast as many of today's computers, and the memory was often 16,000 times smaller than a typical laptop today.

--

Rick C.

- Get 1,000 miles of free Supercharging
- Tesla referral code - https://ts.la/richard11209
John Larkin wrote:
> On Thu, 28 Apr 2022 12:01:59 -0500, Dennis <dennis@none.none> wrote:
>
>> On 4/28/22 11:26, boB wrote:
>>
>>> I would love to have a super computer to run LTspice.
>>>
>> I thought one of the problems with LTspice (and spice in general)
>> performance is that the algorithms don't parallelize very well.
>
> LT runs on multiple cores now. I'd love the next gen LT Spice to run
> on an Nvidia card. 100x at least.
>
The "number of threads" setting doesn't do anything very dramatic, though, at least last time I tried. Splitting up the calculation between cores would require all of them to communicate a couple of times per time step, but lots of other simulation codes do that.

The main trouble is that the matrix defining the connectivity between nodes is highly irregular in general.

Parallelizing that efficiently might well need a special-purpose compiler, sort of similar to the profile-guided optimizer in the guts of the FFTW code for computing DFTs. Probably not at all impossible, but not that straightforward to implement.

Cheers

Phil Hobbs

--

Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

> John Larkin wrote:
>> On Thu, 28 Apr 2022 12:01:59 -0500, Dennis <dennis@none.none> wrote:
>> [...]
>> LT runs on multiple cores now. I'd love the next gen LT Spice to run
>> on an Nvidia card. 100x at least.
>>
> The "number of threads" setting doesn't do anything very dramatic,
> though, at least last time I tried. Splitting up the calculation
> between cores would require all of them to communicate a couple of times
> per time step, but lots of other simulation codes do that.
>
> The main trouble is that the matrix defining the connectivity between
> nodes is highly irregular in general.
>
> Parallellizing that efficiently might well need a special-purpose
> compiler, sort of similar to the profile-guided optimizer in the guts of
> the FFTW code for computing DFTs. Probably not at all impossible, but
> not that straightforward to implement.
>
> Cheers
>
> Phil Hobbs
Supercomputers have thousands or hundreds of thousands of cores.

Quote:

"Cerebras Systems has unveiled its new Wafer Scale Engine 2 processor with a record-setting 2.6 trillion transistors and 850,000 AI-optimized cores. It's built for supercomputing tasks, and it's the second time since 2019 that Los Altos, California-based Cerebras has unveiled a chip that is basically an entire wafer."

https://venturebeat.com/2021/04/20/cerebras-systems-launches-new-ai-supercomputing-processor-with-2-6-trillion-transistors/

Man, I wish I were back living in Los Altos again.

--

MRM
On 28/04/2022 18:47, Jeroen Belleman wrote:
> On 2022-04-28 18:26, boB wrote:
> [...]
>> I would love to have a super computer to run LTspice.
>>
>> boB
>
> In fact, what you have on your desk *is* a super computer,
> in the 1970's meaning of the words. It's just that it's
> bogged down running bloatware.
Indeed. The Cray X-MP in its 4-CPU configuration had a 105MHz clock and a whopping-for-the-time 128MB of fast core memory with 40GB of disk. The one I used had an amazing-for-the-time 1TB tape cassette backing store. It did 600 MFLOPs with the right sort of parallel vector code.

That was back in the day when you needed special permission to use more than 4MB of core on the timesharing IBM 3081 (approx 7 MIPS).

Current Intel 12th-gen desktop CPUs are ~4GHz with 16GB RAM and >1TB of disk (and the upper limits are even higher). That combo does ~66,000 MFLOPS.

Spice simulation doesn't scale particularly well to large-scale multiprocessor environments due to too many long-range interactions.

--

Regards,
Martin Brown
On 29/04/2022 07:09, Phil Hobbs wrote:
> John Larkin wrote:
>> On Thu, 28 Apr 2022 12:01:59 -0500, Dennis <dennis@none.none> wrote:
>>
>>> On 4/28/22 11:26, boB wrote:
>>>
>>>> I would love to have a super computer to run LTspice.
>>>>
>>> I thought one of the problems with LTspice (and spice in general)
>>> performance is that the algorithms don't parallelize very well.
>>
>> LT runs on multiple cores now. I'd love the next gen LT Spice to run
>> on an Nvidia card. 100x at least.
>>
>
> The "number of threads" setting doesn't do anything very dramatic,
> though, at least last time I tried. Splitting up the calculation
> between cores would require all of them to communicate a couple of times
> per time step, but lots of other simulation codes do that.
If it is anything like chess problems, then memory bandwidth will saturate long before all cores and threads are used to optimum effect. Past that point, additional threads merely cause it to run hotter.

I found setting max threads to about 70% of those notionally available produced the most computing power with the least heat. Beyond that, the performance gain per thread was negligible but the extra heat was not. Having everything running full bore was actually slower and much hotter!
>
> The main trouble is that the matrix defining the connectivity between
> nodes is highly irregular in general.
>
> Parallellizing that efficiently might well need a special-purpose
> compiler, sort of similar to the profile-guided optimizer in the guts of
> the FFTW code for computing DFTs. Probably not at all impossible, but
> not that straightforward to implement.
I'm less than impressed with profile-guided optimisers in compilers. The only time I tried one in anger, the instrumentation code interfered with the execution of the algorithms to such an extent as to make the results meaningless.

One gotcha I have identified in the latest MSC is that when it uses the wider SSE2, AVX, and AVX-512 types implicitly in its code generation, it does not align them on the stack properly, so that sometimes they are split across two cache lines. I see two distinct speeds for each benchmark code segment depending on how the cache alignment falls. Basically, the compiler forces stack alignment to 8 bytes, cache lines are 64 bytes, and the compiler-generated objects in play are 16, 32 or 64 bytes: alignment failure fractions of 1:4, 2:4 and 3:4 respectively.

If you manually allocate such objects you can use pragmas to force optimal alignment, but when the code generator chooses to use them internally you have no such control.

Even so, the MS compiler does generate blisteringly fast code compared to either Intel or GCC.

--

Regards,
Martin Brown