
new spice

Started by John Larkin September 28, 2021
On 2021-10-01 11:15, Gerhard Hoffmann wrote:
> On 01.10.21 at 07:56, Don Y wrote:
>
>> Technological speedups are a red herring. What you're really
>> concerned with is the TOTAL time to perform a particular action.
>> If you speed up some portion of it 100-fold... but, that was just
>> 20% of the entire process, what have your *real* gains been?
>
> Amdahl's law.
>
> <https://en.wikipedia.org/wiki/Amdahl%27s_law>
>
> Cheers, Gerhard
There's also Wirth's observation: "Software gets slower faster
than hardware gets faster."

Jeroen Belleman
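[Putting numbers on Don's example above: Amdahl's law with accelerated
fraction p and local speedup s gives an overall speedup of

    S = \frac{1}{(1 - p) + p/s}
      = \frac{1}{0.8 + 0.2/100} \approx 1.25

for p = 0.2 and s = 100 -- about 25% faster overall, for a 100-fold
speedup of the hot spot.]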
On a sunny day (Fri, 01 Oct 2021 12:04:18 +0200) it happened Jeroen Belleman
<jeroen@nospam.please> wrote in <sj6mf2$heo$1@gioia.aioe.org>:

> On 2021-10-01 11:15, Gerhard Hoffmann wrote:
>> On 01.10.21 at 07:56, Don Y wrote:
>>
>>> Technological speedups are a red herring. What you're really
>>> concerned with is the TOTAL time to perform a particular action.
>>> If you speed up some portion of it 100-fold... but, that was just
>>> 20% of the entire process, what have your *real* gains been?
>>
>> Amdahl's law.
>>
>> <https://en.wikipedia.org/wiki/Amdahl%27s_law>
>>
>> Cheers, Gerhard
>
> There's also Wirth's observation: "Software gets slower faster
> than hardware gets faster."
>
> Jeroen Belleman
Especially with bloated programs that need 20,000 lines to say "Hello world".
On 10/1/2021 3:04 AM, Jeroen Belleman wrote:
> There's also Wirth's observation: "Software gets slower faster
> than hardware gets faster."
You've got to have a bit of sympathy for folks who write desktop
applications; they have virtually no control over the environment in
which their code executes.

They can "recommend" a particular hardware/OS configuration. But, few
actually *enforce* those minimum requirements. And, even if "The
System" meets those requirements, there's no guarantee that the
current (or typical) load on the system will be "light enough" to not
impact the performance of their code. Or, the actual "application" of
their software to the user's particular needs. (How many devices in
the circuit you're hoping to simulate?)

What's the sign-off process? Is there a formal spec that says: "On a
system with hardware characteristics of XXX running OS OOO and having
a load factor of < LFmax, the following circuit simulation must
complete in N seconds over the YYY constraints."?

And, if it *doesn't*, does the Marketing guy pressure to ship it,
regardless? ("We'll FIX it after release!") Then, deal with the gripes
from the user who figured he'd run it despite NOT having the minimum
required system...
On 01.10.21 at 12:04, Jeroen Belleman wrote:


> There's also Wirth's observation: "Software gets slower faster
> than hardware gets faster."
When asked how his name should be pronounced he said:

   "You can call me by name: that's Wirth
    or you can call me by value: that's worth"

Cheers, Gerhard
On 2021-10-01 12:56, Don Y wrote:
> On 10/1/2021 3:04 AM, Jeroen Belleman wrote:
>> There's also Wirth's observation: "Software gets slower faster
>> than hardware gets faster."
>
> You've got to have a bit of sympathy for folks who write desktop
> applications; they have virtually no control over the environment
> in which their code executes.
>
[Snip!]

Just last week, I rebooted a Linux machine that I set up 20 years
ago, and which had been sleeping in the attic. It's actually much
snappier than my newest machine.

Software *is* getting more and more bloated, and I don't really have
the impression that the functionality is there to justify it. Mostly
we are irritated by the addition of pointless animations and snazzy
effects. There is something rotten in the state of modern software.

Jeroen Belleman
On 2021-10-01 14:11, Gerhard Hoffmann wrote:
> On 01.10.21 at 12:04, Jeroen Belleman wrote:
>
>> There's also Wirth's observation: "Software gets slower faster
>> than hardware gets faster."
>
> When asked how his name should be pronounced he said:
>
>    "You can call me by name: that's Wirth
>     or you can call me by value: that's worth"
>
> Cheers, Gerhard
I still don't get why C++ had to add call by reference. Big mistake,
in my view.

Jeroen Belleman
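[The usual form of the complaint, as a minimal sketch (names mine):
in C, a callee that can modify its argument must be handed a pointer,
so the & at the call site warns you; with C++ references the two
calls below are indistinguishable at the call site.

    #include <iostream>

    void by_value(int v)      { v += 1; }  // caller's x is untouched
    void by_reference(int& v) { v += 1; }  // silently modifies the caller's x

    int main() {
        int x = 0;
        by_value(x);       // looks exactly like...
        by_reference(x);   // ...this, but only this one changes x
        std::cout << x << "\n";  // prints 1
    }

Whether that implicitness is a convenience or a trap is exactly the
disagreement here.]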
Gerhard Hoffmann wrote:
> On 30.09.21 at 21:24, John Larkin wrote:
>> On Thu, 30 Sep 2021 19:58:50 +0100, "Kevin Aylward"
>> <kevinRemoveandReplaceATkevinaylward.co.uk> wrote:
>>
>>>> "John Larkin" wrote in message
>>>> news:bdc7lgdap8o66j8m92ullph1nojbg9c5ni@4ax.com...
>>>
>>>> https://www.linkedin.com/in/mike-engelhardt-a788a822
>>>
>>> ...but why ......?
>>>
>>
>> Maybe he enjoys it. I'm sure he's enormously wealthy and could do
>> anything he wants.
>>
>> I want a Spice that uses an nvidia board to speed it up 1000:1.
>>
>
> Hopeless. That has already been tried in IBM AT times with these
> Weitek coprocessors and NS 32032 processor boards; it never lived
> up to the expectations.
Parallelizing sparse matrix computation is an area of active
research. The best approach ATM seems to be runtime profiling that
generates a JIT-compiled FPGA image for the actual crunching, sort of
like the way FFTW generates optimum FFT routines.
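FFTW's plan/execute split is the pattern worth copying: planning with
FFTW_MEASURE actually runs and times candidate routines on your
machine, once, and the tuned plan is then reused cheaply. A minimal
sketch of that (real) API -- the transform size here is arbitrary:

    #include <fftw3.h>

    int main() {
        const int n = 4096;
        fftw_complex* in  = fftw_alloc_complex(n);
        fftw_complex* out = fftw_alloc_complex(n);

        // Planning with FFTW_MEASURE profiles candidate algorithms at
        // runtime -- the expensive, once-per-problem-shape step.
        fftw_plan plan = fftw_plan_dft_1d(n, in, out,
                                          FFTW_FORWARD, FFTW_MEASURE);

        // ... fill in[] here (after planning; FFTW_MEASURE clobbers
        // the arrays while it experiments) ...
        fftw_execute(plan);    // the cheap, reusable step

        fftw_destroy_plan(plan);
        fftw_free(in);
        fftw_free(out);
    }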
> That's no wonder. When in time domain integration the next result
> depends on the current value and maybe a few in the past, you cannot
> compute more future timesteps in parallel. Maybe some speculative
> versions in parallel and then selecting the best. But that is no
> work for 1000 processors.
SPICE isn't a bad application for parallelism, if you can figure out
how to do it--you wouldn't bother for trivial things, where the run
times are less than 30s or so, but for longer calculations the
profiling would be a small part of the work. The inner loop is
time-stepping the same matrix topology all the time (though the
coefficients change with the time step).

Since all that horsepower would be spending most of its time waiting
for us to dork the circuit and dink the fonts, it could be running
the profiling in the background during editing. You might get 100x
speedup that way, ISTM.
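The threading side of that background-profiling idea is the easy
part; a sketch, with the actual profiling pass (symbolic
factorization, solver selection, or whatever) left as a stub:

    #include <atomic>
    #include <chrono>
    #include <thread>

    std::atomic<bool> editing{true};

    // Stand-in for re-profiling the circuit matrix while the user is
    // still dorking the circuit and dinking the fonts.
    void profile_in_background() {
        while (editing.load()) {
            // e.g. redo the symbolic factorization for the current netlist
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    }

    int main() {
        std::thread profiler(profile_in_background);
        // ... user edits the schematic in the foreground ...
        editing.store(false);  // user hits Run: stop profiling, start solving
        profiler.join();
    }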
> The inversion of the nodal matrix might use some improvement since
> it is NP complete, like almost everything that is interesting.
Matrix inversion is NP-complete? Since when? It's actually not even
cubic, asymptotically--the best known complexity bound is below
O(N**2.4).
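For the record, the sub-cubic bound comes from reducing inversion to
matrix multiplication via blockwise (Schur complement) inversion:

    \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1}
    =
    \begin{pmatrix}
      A^{-1} + A^{-1} B S^{-1} C A^{-1} & -A^{-1} B S^{-1} \\
      -S^{-1} C A^{-1}                  &  S^{-1}
    \end{pmatrix},
    \qquad S = D - C A^{-1} B

Two half-size inversions plus a constant number of half-size
multiplications gives T(n) = 2T(n/2) + O(n^\omega) = O(n^\omega),
where \omega < 2.38 is the best known matrix-multiplication exponent.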
> Its size grows with the number of nodes and the matrix is sparse since
> most nodes have no interaction. Dividing the circuit into subcircuits,
> solving these separately and combining the results could provide
> a speedup, for problems with many nodes. That would be a MAJOR change.
>
> Spice has not made much progress since Berkeley is no longer involved.
> Some people make some local improvements and when they lose interest
> after 15 years their improvements die. There is no one to integrate
> that stuff in one open official version. Maybe NGspice comes closest.
>
> Keysight ADS has an option to run it on a bunch of workstations but
> that helps probably most for electromagnetics which has not much
> in common with spice.
>
It has more than you might think. EM simulators basically have to
loop over all of main memory twice per time step, and all the
computational boundaries have to be kept time-coherent. With
low-latency interconnects and an OS with a thread scheduler that
isn't completely brain-dead (i.e. anything except Linux AFAICT), my
EM code scales within 20% or so of linearly up to 15 compute nodes,
which is as far as I've tried.

So I'm more optimistic than you, if still rather less than JL. ;)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com
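[For flavor, the inner loop of such a code is nothing exotic: a
couple of sweeps over all the field arrays per time step, with a
barrier between them so the fields stay time-coherent. A toy 1D
leapfrog sketch, not anyone's production code, with an arbitrary 0.5
update coefficient:

    #include <vector>

    int main() {
        const int N = 1 << 20;   // one big pass over memory per sweep
        std::vector<double> e(N, 0.0), h(N, 0.0);

        for (int step = 0; step < 1000; ++step) {
            #pragma omp parallel for   // implicit barrier at loop end
            for (int i = 1; i < N; ++i)
                h[i] += 0.5 * (e[i] - e[i - 1]);

            #pragma omp parallel for   // keeps e and h time-coherent
            for (int i = 0; i < N - 1; ++i)
                e[i] += 0.5 * (h[i + 1] - h[i]);
        }
    }

Memory bandwidth and the cost of those barriers, not arithmetic, are
what the scheduler can make or break.]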
Jeroen Belleman wrote:
> On 2021-10-01 11:15, Gerhard Hoffmann wrote:
>> On 01.10.21 at 07:56, Don Y wrote:
>>
>>> Technological speedups are a red herring. What you're really
>>> concerned with is the TOTAL time to perform a particular action.
>>> If you speed up some portion of it 100-fold... but, that was just
>>> 20% of the entire process, what have your *real* gains been?
>>
>> Amdahl's law.
>>
>> <https://en.wikipedia.org/wiki/Amdahl%27s_law>
>>
>> Cheers, Gerhard
>
> There's also Wirth's observation: "Software gets slower faster
> than hardware gets faster."
>
> Jeroen Belleman
Or as the old saying has it, "Intel giveth and Microsoft taketh
away." (Apologies to Job.)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com
On 10/1/2021 5:44 AM, Jeroen Belleman wrote:
> On 2021-10-01 12:56, Don Y wrote:
>> On 10/1/2021 3:04 AM, Jeroen Belleman wrote:
>>> There's also Wirth's observation: "Software gets slower faster
>>> than hardware gets faster."
>>
>> You've got to have a bit of sympathy for folks who write desktop
>> applications; they have virtually no control over the environment
>> in which their code executes.
>>
> [Snip!]
>
> Just last week, I rebooted a Linux machine that I set up
> 20 years ago, and which had been sleeping in the attic.
> It's actually much snappier than my newest machine.
I used to run Brief (tm) as a text editor. It was reasonably fast on
a 16MHz 386 -- decades ago.

Run it *today* (in a DOS box) and it is completely unusable; in the
time it takes you to strike a key, it will have filled the screen
with repeated copies of that key (i.e., trying to navigate in a file
using the arrow keys is useless).

Run a modern version of an equivalent product and it feels like the
"brakes are dragging" :<
> Software *is* getting more and more bloated, and I don't
> really have the impression that the functionality is
> there to justify it. Mostly we are irritated by the addition
> of pointless animations and snazzy effects. There is
> something rotten in the state of modern software.
Developers seem more interested in adding features than in fixing
bugs and/or improving performance. To be fair, how many folks would
pay for a new version if its sole claim to fame was that it FIXED the
bugs in the previous version?!

I've learned to pick *a* version of each tool, identify all of its
shortcomings and, if you can live with them (or workarounds), just
don't bother to update it.

[This works, in my case, as the machines aren't routed so I can
tolerate any "vulnerabilities" in the existing implementations]

There's also a very different mindset between the Windows world and
the Eunices: in the latter case, you'd plumb different tools together
to achieve a particular goal; in the former case, someone would add
code to an existing application to provide that functionality. (How
many features can you *imagine* adding to each and every
application?)

So, apps grow because Windows users aren't capable of gluing
"utilities" together to fit their needs. As a result, you can write a
piece of code ("utility") once under a Eunice and repeatedly (for
each potential application!) under Windows.

[And, of course, deal with slight differences in features,
capabilities, correctness, etc.]
On 01/10/21 14:05, Phil Hobbs wrote:
> Jeroen Belleman wrote:
>> On 2021-10-01 14:11, Gerhard Hoffmann wrote:
>>> On 01.10.21 at 12:04, Jeroen Belleman wrote:
>>>
>>>> There's also Wirth's observation: "Software gets slower faster
>>>> than hardware gets faster."
>>>
>>> When asked how his name should be pronounced he said:
>>>
>>>    "You can call me by name: that's Wirth
>>>     or you can call me by value: that's worth"
>>>
>>> Cheers, Gerhard
>>
>> I still don't get why C++ had to add call by reference. Big
>> mistake, in my view.
>>
>> Jeroen Belleman
>
> Why? Smart pointers didn't exist in 1998 iirc, and reducing the number
> of bare pointers getting passed around has to be a good thing, surely?
"Smart pointer" is a much older concept given a new name. Why? Either because newbies triumphantly (but unwittingly) reinvented the wheel, or because a marketeer wanted to make their product sound new/shiny.