Forums

LTspice Loafing?

Started by rickman March 9, 2017
On 10.03.2017 at 17:20, John Larkin wrote:
> On Fri, 10 Mar 2017 04:51:03 +0100, Gerhard Hoffmann
> <ghf@hoffmann-hochfrequenz.de> wrote:
>> LTspice 4: Linux, virtual Win7 machine, VMware Workstation 12: 42 sec
>> Same machine, in a Linux window, LTspice 4 with Wine under Linux: 22 sec
>>
>> Machine is a Dell Precision "portable workstation", 3.5 years old.
>> "Portable" has to be taken with a grain of salt: 240 W power supply.
> On my Dell, that sim runs about 10% slower if I plot the waveform
> during the run. I can imagine that PCs with slow graphics could take a
> bigger hit. Do VMs add graphics overhead?
I suppose they do. There must be a lot of exchanging of graphics state when switching tasks, and there must be a lot of task switches. When I move the cursor from a Win7 window to the Linux desktop, that looks harmless, but someone has to decide which operating system reacts to the next click. And that click could trigger actions in a 3D game or a 3D layout in Altium Designer.

I also think that the second 2560*1440-pixel monitor attached to my laptop slows things down somewhat. Waveform plotting was ON in my test.
> We once rented a compute farm from Amazon to run a slow sim. It wasn't > any faster than our 4-core Dells.
Spice is well known for being hard to parallelize. It has been tried over and over again. It is not even really floating-point limited. There was HSPICE on the Cray, trying to vectorize it. Do you remember all those Weitek/NS32032 coprocessor boards that were sold as Spice accelerators? No one ever made a breakthrough.

Gerhard
On 03/09/2017 07:11 PM, John Larkin wrote:
> On Thu, 9 Mar 2017 20:25:42 -0500, bitrex
> <bitrex@de.lete.earthlink.net> wrote:
>
>> On 03/09/2017 07:13 PM, John Larkin wrote:
>>> Xeon E5-1603 v3 2.8 GHz
>>
>> Ah, maybe one of the Facebook specials!
>>
>> <http://www.techspot.com/review/1155-affordable-dual-xeon-pc/>
>
> Well, it's a Dell box.
>
>> They're available refurb on NewEgg for $38, woah.
>>
>> <https://www.newegg.com/Product/Product.aspx?Item=9SIA4GH4R61733&cm_re=Intel_Xeon_E5-1603-_-9SIA4GH4R61733-_-Product>
>
> Refurbished?
Does that mean "pulled off a board & re-balled"?
On Fri, 10 Mar 2017 19:06:04 +0100, Gerhard Hoffmann
<ghf@hoffmann-hochfrequenz.de> wrote:

>On 10.03.2017 at 17:20, John Larkin wrote:
>
>[snip]
>
>Spice is well known for being hard to parallelize. It has been tried
>over and over again. It is not even really floating-point limited.
>There was HSPICE on the Cray, trying to vectorize it. Do you
>remember all those Weitek/NS32032 coprocessor boards that were sold
>as Spice accelerators? No one ever made a breakthrough.
>
>Gerhard
I'd buy one of those Nvidia boards if it would speed up Spice by, say, 100:1.

--
John Larkin
Highland Technology, Inc
picosecond timing   precision measurement
jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
"Gerhard Hoffmann"  wrote in message 
news:eig88oFdvpkU1@mid.individual.net...

On 10.03.2017 at 17:20, John Larkin wrote:

[snip]

>Spice is well known for being hard to parallelize. It has been tried
>over and over again. It is not even really floating-point limited.
>There was HSPICE on the Cray, trying to vectorize it. Do you
>remember all those Weitek/NS32032 coprocessor boards that were sold
>as Spice accelerators? No one ever made a breakthrough.
Not sure where you get your information from. Spice can be, and has been, parallelised quite effectively. The core function of parallel-processing all the elements in a matrix has been around for a long time in physics applications, e.g. solving nuclear bomb equations. Those systems you read about that have petaflop performance at, er... ahmm, 10 MW of power basically do the same sort of calculations that Spice does.

Gaussian elimination of a matrix is a classic software problem that can be parallelised pretty easily. All elements in a row, and all rows, can be multiplied at once. Each reduction of the matrix size does not depend on any other calculation. Many other software algorithms are very difficult to parallelise like this.

I use Cadence Spectre every day, and with its APS multicore features the speedups are really good. There is also Mentor's FastSPICE, acquired with Berkeley Design Automation, that runs 20-million-transistor analog circuits.

--
Kevin Aylward
http://www.anasoft.co.uk - SuperSpice
http://www.kevinaylward.co.uk/ee/index.html
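[Moderation note: the row-independence Kevin describes can be seen in a minimal pure-Python sketch, illustrative only and not taken from Spectre or FastSPICE. In each elimination step k, every row below the pivot is updated using only row k and itself, so those updates could in principle all run at once; only the outer loop over k is sequential.]

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    Within one elimination step k, each row i > k is updated from row k
    and itself only -- no cross-row dependency, so the inner 'for i'
    loop is what a parallel solver distributes across cores."""
    n = len(A)
    A = [row[:] for row in A]      # work on copies
    b = b[:]
    for k in range(n):
        # partial pivot: bring the largest remaining pivot to row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # these row updates are mutually independent
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution (inherently sequential)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

The catch for circuit simulation, as Gerhard notes, is that circuit matrices are extremely sparse and the per-timestep systems are small, so the parallel win is far less dramatic than on the dense matrices physics codes solve.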
makolber@yahoo.com wrote:
>>> I was running some simulations on LTspice and it is not even using
>>> 5% of my CPU. Nothing else is topping it and the entire computer
>>> is pretty much at idle. What could be limiting the speed?
>
> Make sure the hard drives are running in DMA mode, not PIO mode.
>
> They automatically and permanently fall back to PIO mode if there
> are hardware errors.
Now that's some mighty fine information to know.

FWIW, Larkin's simulation takes ~80 s when run from a Remote Desktop session and a mapped Samba share on my ancient 32-bit Pentium 4, 4 GB, W2003 server with LTSpice v4.20. "If it ain't broke, don't fix it."

Thank you,

--
Don Kuenz KB7RPU
On 3/10/2017 12:42 PM, John Larkin wrote:
> On Fri, 10 Mar 2017 19:06:04 +0100, Gerhard Hoffmann
> <ghf@hoffmann-hochfrequenz.de> wrote:
>
> [snip]
>
> I'd buy one of those Nvidia boards if it would speed up Spice by, say,
> 100:1.
In your example LTSpice listing, just change your maximum time step from 10 ns to 10 us. That will give you great speed with no detectable loss of detail.
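[Moderation note: for readers unfamiliar with the syntax, the change John S describes is the Tmax field of the `.tran` card. Values below are illustrative only; the actual netlist from the thread is not shown.]

```
* .tran <Tstep> <Tstop> <Tstart> <Tmax>   (Tmax = maximum time step)
.tran 0 1m 0 10n   ; Tmax = 10 ns: ~100,000 forced steps per ms, slow
.tran 0 1m 0 10u   ; Tmax = 10 us: ~1000x fewer forced steps, fast
```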
"John S"  wrote in message news:o9vn80$nvt$1@dont-email.me...


[snip]

>> I'd buy one of those Nvidia boards if it would speed up Spice by, say,
>> 100:1.

>In your example LTSpice listing, just change your maximum time step from
>10 ns to 10 us. That will give you great speed with no detectable loss
>of detail.
Yes. I do find it surprising that many don't even *try* to understand the most basic aspects of Spice when running their simulations. Like, read the manual! Lots of points mean slower simulations; few points mean a jagged graph. Even with essentially zero knowledge of equation solvers, one should surely expect high accuracy (reltol=1u) to slow the simulation.

The forgivable one is TRTOL. Many might not understand how to set this parameter. Setting this option can change the speed by a factor of around 3 or so. The Spice3 default is 7. However, in something like an SMPS that value can often produce incorrect results. I default it to 1 in SuperSpice to make sure results are always accurate. For a whole class of circuits, though, increasing it to say 3 or more will be OK, with a significant speedup. It's worth doing a few runs when setting up your design to find an optimum value for it.

--
Kevin Aylward
http://www.anasoft.co.uk - SuperSpice
http://www.kevinaylward.co.uk/ee/index.html
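[Moderation note: in netlist terms the knobs Kevin names go on `.options` lines. The values below are simply the defaults quoted in his post, shown for illustration; tune them per circuit.]

```
.options reltol=1e-3   ; stock default; reltol=1u is far more accurate, much slower
.options trtol=1       ; Kevin's SuperSpice default: always accurate, slower
* .options trtol=7     ; Spice3 default: roughly 3x faster, can misbehave on SMPS
```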
On Fri, 10 Mar 2017 20:29:16 -0600, John S <Sophi.2@invalid.org>
wrote:

>On 3/10/2017 12:42 PM, John Larkin wrote:
>
>[snip]
>
>> I'd buy one of those Nvidia boards if it would speed up Spice by, say,
>> 100:1.
>
>In your example LTSpice listing, just change your maximum time step from
>10 ns to 10 us. That will give you great speed with no detectable loss
>of detail.
I deliberately set the time step short to slow down this simple simulation, so it could be timed in seconds.

But for LC oscillators, the default/automatic time step will create considerable frequency error, several per cent typically. What's awful is simulating high-Q circuits accurately in the time domain, like crystal oscillators.

--
John Larkin
Highland Technology, Inc
lunatic fringe electronics
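[Moderation note: the frequency error Larkin mentions is easy to reproduce outside LTspice. The throwaway Python sketch below (not LTspice itself; L = C = 1 chosen so the exact angular frequency is 1 rad/s) integrates an ideal LC tank with the trapezoidal rule that SPICE transient analysis defaults to. Trapezoidal integration turns each step into a rotation by 2*atan(w*dt/2) instead of w*dt, so the simulated period stretches by roughly (w*dt)^2/12 -- about 2% at dt = 0.5, which is the "several per cent" regime.]

```python
import math

def lc_period(dt, n_steps, L=1.0, C=1.0):
    """Trapezoidal integration of an ideal LC tank; returns the period
    measured between the first two upward zero crossings of the
    capacitor voltage (linear interpolation between samples)."""
    v, i = 1.0, 0.0                      # charged cap, no inductor current
    w2 = dt * dt / (4.0 * L * C)
    crossings, t = [], 0.0
    for _ in range(n_steps):
        # closed-form trapezoidal update of dv/dt = -i/C, di/dt = v/L
        v_new = ((1.0 - w2) * v - (dt / C) * i) / (1.0 + w2)
        i_new = ((1.0 - w2) * i + (dt / L) * v) / (1.0 + w2)
        if v < 0.0 <= v_new:             # upward zero crossing
            crossings.append(t + dt * (-v) / (v_new - v))
            if len(crossings) == 2:
                return crossings[1] - crossings[0]
        v, i, t = v_new, i_new, t + dt
    raise RuntimeError("not enough steps to see two crossings")

exact = 2.0 * math.pi                    # w0 = 1/sqrt(LC) = 1 rad/s
err_coarse = abs(lc_period(0.5, 1000) - exact) / exact    # big time step
err_fine = abs(lc_period(0.005, 10000) - exact) / exact   # small time step
```

Since the error shrinks quadratically with the step, a maximum-timestep limit well below the oscillation period keeps LC frequency honest; for crystal oscillators the enormous cycle count at full Q is what makes this painful.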
On Sat, 11 Mar 2017 08:55:22 -0000, "Kevin Aylward"
<kevinRemovAT@kevinaylward.co.uk> wrote:

[snip]

>>In your example LTSpice listing, just change your maximum time step from
>>10 ns to 10 us. That will give you great speed with no detectable loss
>>of detail.
>
>Yes. I do find it surprising that many don't even *try* to understand the
>most basic aspects of Spice when running their simulations. Like, read the
>manual!
My sim was intended to run slow.

--
John Larkin
Highland Technology, Inc
lunatic fringe electronics
"John Larkin"  wrote in message 
news:k2a8cc1jfblh5j3treis244mojipt6et25@4ax.com...

On Fri, 10 Mar 2017 20:29:16 -0600, John S <Sophi.2@invalid.org>
wrote:

>On 3/10/2017 12:42 PM, John Larkin wrote:
>
>[snip]
>
>In your example LTSpice listing, just change your maximum time step from
>10 ns to 10 us. That will give you great speed with no detectable loss
>of detail.
>I deliberately set the time step short to slow down this simple
>simulation, so it could be timed in seconds.
I have an option in SS to add variable wait states, so that if you are using the "change component values on the schematic in real time" feature, the sim doesn't finish before you can do so :-)
>But for LC oscillators, the default/automatic time step will create
>considerable frequency error, several per cent typically. What's awful
>is simulating high-Q circuits accurately in the time domain, like crystal
>oscillators.
Yes. A real pain. As noted in my other post, though, I find most design work can be done with a de-Q'ed xtal. However, for accurate, full-Q phase noise, at my day job it's the $100k-per-seat, per-year Cadence Periodic Steady State phase noise analysis. I usually use the shooting method rather than harmonic balance. I often have to include extra resistance in series with caps to get it to converge, though.

Phase noise results in Cadence are very impressive, usually within the 1 dB or so error range. It's a complicated calculation: at every point in the cycle, the small-signal noise is changing due to the different instantaneous currents. It handles it all in the wash, and does really well on up-conversion of the 1/f noise of devices and resistors.

I don't know of any freebie PSS/PNoise tool. It requires a totally different calculation engine than Spice.

--
Kevin Aylward
http://www.anasoft.co.uk - SuperSpice
http://www.kevinaylward.co.uk/ee/index.html