
new spice

Started by John Larkin September 28, 2021
>"John Larkin" wrote in message >news:bdc7lgdap8o66j8m92ullph1nojbg9c5ni@4ax.com...
>https://www.linkedin.com/in/mike-engelhardt-a788a822
...but why ......? -- Kevin Aylward http://www.anasoft.co.uk/ SuperSpice http://www.kevinaylward.co.uk/ee/index.html
On Thu, 30 Sep 2021 19:58:50 +0100, "Kevin Aylward"
<kevinRemoveandReplaceATkevinaylward.co.uk> wrote:

>>"John Larkin" wrote in message >>news:bdc7lgdap8o66j8m92ullph1nojbg9c5ni@4ax.com... > >>https://www.linkedin.com/in/mike-engelhardt-a788a822 > >...but why ......? >
Maybe he enjoys it. I'm sure he's enormously wealthy and could do anything he wahts. I want a Spice that uses an nvidia board to speed it up 1000:1. -- If a man will begin with certainties, he shall end with doubts, but if he will be content to begin with doubts he shall end in certainties. Francis Bacon
On 30.09.21 at 21:24, John Larkin wrote:
> On Thu, 30 Sep 2021 19:58:50 +0100, "Kevin Aylward"
> <kevinRemoveandReplaceATkevinaylward.co.uk> wrote:
>
>>> "John Larkin" wrote in message
>>> news:bdc7lgdap8o66j8m92ullph1nojbg9c5ni@4ax.com...
>>
>>> https://www.linkedin.com/in/mike-engelhardt-a788a822
>>
>> ...but why ......?
>>
>
> Maybe he enjoys it. I'm sure he's enormously wealthy and could do
> anything he wants.
>
> I want a Spice that uses an nvidia board to speed it up 1000:1.
>
Hopeless. That has already been tried in IBM AT times with these
Weitek coprocessors and NS 32032 processor boards; it never lived
up to the expectations.

That's no wonder. In time-domain integration the next result depends
on the current value and maybe a few in the past, so you cannot
compute future timesteps in parallel. Maybe some speculative versions
in parallel, then selecting the best. But that is no work for 1000
processors.

The factorization of the nodal matrix might see some improvement;
finding the optimal ordering for it is NP-complete, like almost
everything that is interesting. Its size grows with the number of
nodes, and the matrix is sparse since most nodes have no interaction.
Dividing the circuit into subcircuits, solving these separately and
combining the results could provide a speedup for problems with many
nodes. That would be a MAJOR change.

Spice has not made much progress since Berkeley is no longer involved.
Some people make local improvements, and when they lose interest after
15 years their improvements die. There is no one to integrate that
stuff into one open official version. Maybe NGspice comes closest.

Keysight ADS has an option to run it on a bunch of workstations, but
that helps probably most for electromagnetics, which has not much in
common with spice.

Cheers, Gerhard
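To make the timestep dependency concrete, here is a minimal sketch of
a backward-Euler transient loop in Python. It is illustrative only:
the 3-node RC ladder, the component values, and all names are invented
for this example, not taken from any real SPICE codebase. Each step
solves a sparse nodal system whose right-hand side depends on the
previous solution, which is exactly the serialization Gerhard
describes.

    # Backward Euler on a 3-node RC ladder (1 V source behind R).
    # Solves (G + C/dt) * v[n+1] = i_src + (C/dt) * v[n] each step.
    # Requires numpy and scipy; values are arbitrary illustrations.
    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import splu

    R, C, dt = 1e3, 1e-9, 1e-9       # ohms, farads, seconds
    g, c = 1.0 / R, C / dt

    # Sparse nodal matrix: conductances plus C/dt on the diagonal.
    A = csc_matrix(np.array([
        [2*g + c,    -g,      0.0  ],
        [   -g,    2*g + c,   -g   ],
        [   0.0,     -g,     g + c ],
    ]))
    lu = splu(A)                     # factor once, reuse every step

    v = np.zeros(3)                  # node voltages at t = 0
    for step in range(1000):
        # The RHS needs v from the *previous* step -- this data
        # dependency is why timesteps cannot run in parallel.
        b = np.array([g * 1.0, 0.0, 0.0]) + c * v
        v = lu.solve(b)

    print(v)                         # nodes charging toward 1 V

Note that the parts which do parallelize (device-model evaluation, the
sparse factorization itself) sit *inside* each step, which is one
reason GPUs help far less than 1000:1.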
On 9/30/2021 5:12 PM, Gerhard Hoffmann wrote:
>> I want a Spice that uses an nvidia board to speed it up 1000:1.
>
> Hopeless. That has already been tried in IBM AT times with these
> Weitek coprocessors and NS 32032 processor boards; it never lived
> up to the expectations.
>
> That's no wonder. In time-domain integration the next result depends
> on the current value and maybe a few in the past, so you cannot
> compute future timesteps in parallel. Maybe some speculative versions
> in parallel, then selecting the best. But that is no work for 1000
> processors.
I think a factor of two *overall* is possible with a bit of work. I've
seen proposals that suggest a factor of 4 is likely the upper limit on
any such work. Some models may hit sweet spots -- but not the sort of
thing you're going to boast about in sales literature!

These are small enough that it's easier just to twiddle your thumbs
and wait for next year's PC update cycle to get that improvement "for
free" (i.e., for no development work cuz new work means new bugs).
> The factorization of the nodal matrix might see some improvement;
> finding the optimal ordering for it is NP-complete, like almost
> everything that is interesting. Its size grows with the number of
> nodes, and the matrix is sparse since most nodes have no interaction.
> Dividing the circuit into subcircuits, solving these separately and
> combining the results could provide a speedup for problems with many
> nodes. That would be a MAJOR change.
>
> Spice has not made much progress since Berkeley is no longer involved.
> Some people make local improvements, and when they lose interest after
> 15 years their improvements die. There is no one to integrate that
> stuff into one open official version. Maybe NGspice comes closest.
A big part of the problem is folks want these tools for free.
That attitude suggests they place no value on the time they save
running simulations (or, that their time *has* no value).

Someone will come up with a breakthrough and sell it as a *service*.
So, you'll submit your circuit and get a bill for the analysis. Of
course, this will likely cut down on the "what if" use of such a
tool (cuz changing a portion of the circuit will still require a
complete REanalysis... likely hard to leverage any prior computations
to reduce the cost of those efforts).

Then again, software-as-a-service is the obvious future.
> Keysight ADS has an option to run it on a bunch of workstations, but
> that helps probably most for electromagnetics, which has not much in
> common with spice.
On 01.10.21 at 05:47, Don Y wrote:
> On 9/30/2021 5:12 PM, Gerhard Hoffmann wrote:
>>> I want a Spice that uses an nvidia board to speed it up 1000:1.
>>
>> Hopeless. That has already been tried in IBM AT times with these
>> Weitek coprocessors and NS 32032 processor boards; it never lived
>> up to the expectations.
>>
>> That's no wonder. In time-domain integration the next result depends
>> on the current value and maybe a few in the past, so you cannot
>> compute future timesteps in parallel. Maybe some speculative
>> versions in parallel, then selecting the best. But that is no work
>> for 1000 processors.
>
> I think a factor of two *overall* is possible with a bit of work.
> I've seen proposals that suggest a factor of 4 is likely the upper
> limit on any such work. Some models may hit sweet spots -- but not
> the sort of thing you're going to boast about in sales literature!
You are probably right here.
> These are small enough that it's easier just to twiddle your thumbs
> and wait for next year's PC update cycle to get that improvement "for
> free" (i.e., for no development work cuz new work means new bugs).
>
>> The factorization of the nodal matrix might see some improvement;
>> finding the optimal ordering for it is NP-complete, like almost
>> everything that is interesting. Its size grows with the number of
>> nodes, and the matrix is sparse since most nodes have no interaction.
>> Dividing the circuit into subcircuits, solving these separately and
>> combining the results could provide a speedup for problems with many
>> nodes. That would be a MAJOR change.
>>
>> Spice has not made much progress since Berkeley is no longer
>> involved. Some people make local improvements, and when they lose
>> interest after 15 years their improvements die. There is no one to
>> integrate that stuff into one open official version. Maybe NGspice
>> comes closest.
>
> A big part of the problem is folks want these tools for free.
> That attitude suggests they place no value on the time they save
> running simulations (or, that their time *has* no value).
No. The big part of the problem is that spice 2g4 or 3.x was always
free, and some people added minuscule things like a GUI and expected
they would rule the world from then on, since THEY had written THE
simulation program. For a time that worked for a few of them, like
pspice, which got swallowed by Orcad, which got swallowed by Cadence.
And where are they now? When have you seen the last result from pspice
and its offspring?
> Someone will come up with a breakthrough and sell it as a *service*.
> So, you'll submit your circuit and get a bill for the analysis. Of
> course, this will likely cut down on the "what if" use of such a
> tool (cuz changing a portion of the circuit will still require a
> complete REanalysis... likely hard to leverage any prior computations
> to reduce the cost of those efforts).
>
> Then again, software-as-a-service is the obvious future.
You mean, like H-Spice some decades ago?

OMG, I'm growing old.

Some customers of mine don't like it when I have their stuff on MY
computer, much less in a cloud.

Cheers, Gerhard
On 9/30/2021 9:32 PM, Gerhard Hoffmann wrote:
> On 01.10.21 at 05:47, Don Y wrote:
>> On 9/30/2021 5:12 PM, Gerhard Hoffmann wrote:
>>>> I want a Spice that uses an nvidia board to speed it up 1000:1.
>>>
>>> Hopeless. That has already been tried in IBM AT times with these
>>> Weitek coprocessors and NS 32032 processor boards; it never lived
>>> up to the expectations.
>>>
>>> That's no wonder. In time-domain integration the next result
>>> depends on the current value and maybe a few in the past, so you
>>> cannot compute future timesteps in parallel. Maybe some speculative
>>> versions in parallel, then selecting the best. But that is no work
>>> for 1000 processors.
>>
>> I think a factor of two *overall* is possible with a bit of work.
>> I've seen proposals that suggest a factor of 4 is likely the upper
>> limit on any such work. Some models may hit sweet spots -- but not
>> the sort of thing you're going to boast about in sales literature!
>
> You are probably right here.
Technological speedups are a red herring. What you're really concerned
with is the TOTAL time to perform a particular action. If you speed up
some portion of it 100-fold... but that was just 20% of the entire
process, what have your *real* gains been?

There's also a "sour spot" (if something is the opposite of "sweet
spot", I would imagine "sour spot" to be it?) where things get
faster... but still not fast *enough*.

E.g., decades ago, I would render 3D models and it would eat 100% of a
machine's time for 36 hours. It would be *silly* to sit and wait for
the result! I'd simply power off the monitor, put a sign on the machine
reminding me NOT to turn it off, then move on to some other task. Some
time later (36, 40, 72 hours??), I'd come back to the machine and,
hopefully, find the result that I desired. But, in the meantime, I'd
have worked on something *else*, making uninterrupted progress, there.

If rendering took 10 minutes, I'd likely have twiddled my thumbs for
those 10 minutes -- too little time to do much of anything beyond get a
drink or bite to eat. So, I've spent 10 additional minutes of "personal
time" on the work (that wasn't present in the original approach).

Worse, if the cost of waiting gets small enough, you change your work
strategy from one of thoughtful preparation to "hit-and-miss" ("Let's
try this and see how it works?"). So, you end up performing more
iterations than you might, otherwise.

[I know many engineers who are perpetually tweaking designs instead of
DESIGNING them from the outset!]
>> These are small enough that it's easier just to twiddle your thumbs
>> and wait for next year's PC update cycle to get that improvement
>> "for free" (i.e., for no development work cuz new work means new
>> bugs).
>>
>>> The factorization of the nodal matrix might see some improvement;
>>> finding the optimal ordering for it is NP-complete, like almost
>>> everything that is interesting. Its size grows with the number of
>>> nodes, and the matrix is sparse since most nodes have no
>>> interaction. Dividing the circuit into subcircuits, solving these
>>> separately and combining the results could provide a speedup for
>>> problems with many nodes. That would be a MAJOR change.
>>>
>>> Spice has not made much progress since Berkeley is no longer
>>> involved. Some people make local improvements, and when they lose
>>> interest after 15 years their improvements die. There is no one to
>>> integrate that stuff into one open official version. Maybe NGspice
>>> comes closest.
>>
>> A big part of the problem is folks want these tools for free.
>> That attitude suggests they place no value on the time they save
>> running simulations (or, that their time *has* no value).
>
> No. The big part of the problem is that spice 2g4 or 3.x was always
> free, and some people added minuscule things like a GUI and expected
> they would rule the world from then on, since THEY had written THE
> simulation program. For a time that worked for a few of them, like
> pspice, which got swallowed by Orcad, which got swallowed by Cadence.
> And where are they now? When have you seen the last result from
> pspice and its offspring?
So, they were "spoiled" by a boon -- and now are expecting that "good will" to continue? What would you *pay* for 1000X speedup in your simulations? Would you offer up the value of the (your!) time saved over, say, a calendar year? Or, would you grumble at what you perceived as "price gouging"? People have funny ideas of what items are worth -- when they have to open their own wallets! :>
> JT from s.e.d. is RIP now for some years, but do you think he missed
> any of the new developments? No, not really. Which new developments?
Some of that can be cultural. I write all my code on a *BSD box
running X -- with a "root weave" wallpaper. No naked ladies, no
pictures of fields of flowers, no tweets and bops and whistles.
Doesn't change the quality of the code I produce to be absent those
things.

OTOH, I use some costly tools to help me test my codebase as they are
more reliable than "other eyes". And, I can "ask" them to test over
and over again without "overstaying my welcome".
>> Someone will come up with a breakthrough and sell it as a *service*.
>> So, you'll submit your circuit and get a bill for the analysis. Of
>> course, this will likely cut down on the "what if" use of such a
>> tool (cuz changing a portion of the circuit will still require a
>> complete REanalysis... likely hard to leverage any prior
>> computations to reduce the cost of those efforts).
>>
>> Then again, software-as-a-service is the obvious future.
>
> You mean, like H-Spice some decades ago?
You have to offer genuine value, not just an alternate user interface.

Autodesk offers SfM as a service -- they sell you a GUI that
essentially just acts as an input terminal for your data and a
presentation portal for their results. You can do roughly the same
thing on your own workstation -- but with many, many MIPS (and some
dubious results).

Some tools are "free" -- but often with blemishes that you won't
discover until you've *invested* (!) some time -- for which you may
not see any return!
> OMG, I'm growing old.
Heh heh heh... a common malady!
> Some customers of mine don't like it when I have their stuff on MY
> computer, much less in a cloud.
That's one reason all of my machines (save this one) are air-gapped.
There's no time wasted reassuring clients (or myself) that I've not
been hacked! And, no *effort* spent keeping my defenses "current"!

OTOH, bean counters see services as a cost saving. Why have your own
IT department to perpetually update and service bought (ahem,
"licensed") tools when you can have someone else sort all that out for
you?!

I've noticed (time tracking) that I tend to spend about a day a week
on "support" issues -- keeping equipment running, looking for
upgrades, ditto software tools, backups, chasing down bugs/workarounds,
etc. It's easy to see how outsourcing those things (or parts of them)
can be attractive.

[OTOH, *not* outsourcing means I have control over how and when those
tools change/evolve. I can opt to "live with" some known problem
instead of having it replaced by a new set of UNknown problems without
my consent.]

Part of the appeal of web-based services is that all the user (client)
needs to do is keep a current browser running. Even an IT department
made of dweebs can likely do this (esp. if they run diskless clients).
On 2021-10-01 05:47, Don Y wrote:
[...]
> Then again, software-as-a-service is the obvious future.
I'll be kicking and screaming the whole way.

Jeroen Belleman
On 9/30/2021 11:03 PM, Jeroen Belleman wrote:
> On 2021-10-01 05:47, Don Y wrote:
> [...]
>> Then again, software-as-a-service is the obvious future.
>
> I'll be kicking and screaming the whole way.
Just say "no"! What you have *today*, works. Keep that in mind for tomorrow... and the day after... and... (Do you really *need* version N+m of whatever tool is working for you *now*?)
On 01.10.21 at 07:56, Don Y wrote:

> Technological speedups are a red herring. What you're really
> concerned with is the TOTAL time to perform a particular action. If
> you speed up some portion of it 100-fold... but that was just 20% of
> the entire process, what have your *real* gains been?
Amdahl's law.

< https://en.wikipedia.org/wiki/Amdahl%27s_law >

Cheers, Gerhard
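For the numbers in Don's example (speed up a 20% portion 100-fold),
the law works out as follows; p is the standard symbol for the
fraction of the job that benefits and s the speedup of that fraction:

    S(s) = \frac{1}{(1 - p) + p/s}

    S(100)\big|_{p=0.2} = \frac{1}{0.8 + 0.2/100} \approx 1.25

That is, a 100-fold speedup of one fifth of the process gains only
about 25% overall.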
On 10/1/2021 2:15 AM, Gerhard Hoffmann wrote:
> On 01.10.21 at 07:56, Don Y wrote:
>
>> Technological speedups are a red herring. What you're really
>> concerned with is the TOTAL time to perform a particular action. If
>> you speed up some portion of it 100-fold... but that was just 20% of
>> the entire process, what have your *real* gains been?
>
> Amdahl's law.
>
> < https://en.wikipedia.org/wiki/Amdahl%27s_law >
Misses the point. There's still an organic being *in* the process.
More cores doesn't make that being more productive in *its* actions,
thought processes, etc.

E.g., if you have to *think* about how you want to change (or create)
a design between iterations, that's a cost that a faster processor
won't help reduce. Likewise with actually editing the design.

(Unless you just POKE at the design until you see results that you
like -- because you don't really understand how it actually works and
are relying on the simulation to give you a "feel" for that.)