
LTspice speed

Started by dalai lamah September 21, 2023
On Thu, 21 Sep 2023 19:20:55 +0200, dalai lamah
<antonio12358@hotmail.com> wrote:

>One fine day John Larkin typed:
>
>>>>> As you probably know, in many occasions LTspice cannot take advantage of
>>>>> multiple CPU cores because many operations are not easily parallelizable.
>>>>> In fact, most simulations I make use less than 20/25% of CPU (intel i5, 4
>>>>> cores/8 threads).
>>>>>
>>>>> However, running more processes of LTspice to execute different simulations
>>>>> at the same time should overcome this limitation: each simulation is
>>>>> distinct, they can be fully paralleled. If I run two simulations that
>>>>> individually would use the 20% of CPU and last 10 minutes, I should see a
>>>>> 40% CPU occupation but they still should take 10 minutes to complete. Maybe
>>>>> a little more for the Windows scheduler overhead.
>>>>>
>>>>> Instead, what I'm seeing in reality is indeed a 40% CPU occupation, but
>>>>> both simulations would take almost exactly twice as much to complete, 20
>>>>> minutes.
>>>>>
>>>>> I've already tried to manually fiddle with Task Manager and the processor
>>>>> affinities, for example assigning two cores to a process and two other
>>>>> cores to the other process. No difference.
>>>>>
>>>>> Why? Is this some crappy Windows scheduler behavior, or do I miss something
>>>>> else?
>>>>
>>>> My bet: each sim is causing the other's data to be evicted from the cache.
>>>
>>> Yes, I think this is it: cache misses and probably also I/O overhead. In
>>> absolute terms the disk write speed is moderate (not more than 1 or 2 MB/s)
>>> but the I/O operations are in the millions.
>>>
>>> Moreover, I've just noticed that every LTspice process uses a lot of
>>> threads, even if you limit the "max threads" parameter from the LTspice
>>> control panel. At least ten. Right now I'm running three simulations at
>>> once, and in total there are 46 LTspice threads running...
>>>
>>> I think that LTspice is quite similar to AAA games: the number of cores
>>> does not matter much, and clock speed is king.
>>
>> A biggish circuit generates gigabytes of .RAW file and can bog down a
>> slow hard drive. SS drives help, as does limiting the data that is
>> saved.
>
>Yes, I have a SSD and each RAW file grows around 15 GB. Unfortunately I
>need all the data and also some precision; I've set the maximum timestep to
>10 ns, it's still slightly inadequate, but I need the simulations to end
>within a day. :)
Yikes. I whine about 20-minute sims. Humans learn from rapid feedback, and even 20 minutes is too slow.
>
>> .SAVE has the disadvantage that you can't freely probe after the sim
>> is done. .SAVE V(*) will save only voltages.
>>
>> LT Spice doesn't allow a fixed or minimum time step, does it?
>
>There would be the spice option "dtmin", but I don't know if LTspice
>supports it. I've never tried it.
It doesn't seem to allow a min time step. If we make a product with 1% or 5% parts, we don't need PPB sim accuracy, so a bigger time step could make sense.
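For reference, LTspice's transient command takes the maximum timestep as its fourth positional parameter, so the 10 ns ceiling mentioned above would look something like this (the 1 ms stop time is only a placeholder, not a value from this thread):

.tran 0 1m 0 10n   ; Tstep Tstop Tstart dTmax -- caps the internal step at 10 ns

There does not appear to be a corresponding minimum-step directive, which matches the exchange above about "dtmin". As a rough sanity check on the 15 GB figure (assuming, purely for illustration, a 1 s transient): 1 s at a 10 ns step ceiling is on the order of 1e8 time points, and if the binary .raw file stores time as an 8-byte double plus roughly 4 bytes per saved trace per point, then about 35 saved traces gives 1e8 x (8 + 4 x 35) bytes, i.e. roughly 15 GB. That is why trimming the saved set with .save cuts the file size almost linearly.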
On Thu, 21 Sep 2023 14:04:29 -0400, bitrex <user@example.net> wrote:

>On 9/21/2023 1:31 PM, Martin Brown wrote:
>> On 21/09/2023 13:22, dalai lamah wrote:
>>> As you probably know, in many occasions LTspice cannot take advantage of
>>> multiple CPU cores because many operations are not easily parallelizable.
>>> In fact, most simulations I make use less than 20/25% of CPU (intel i5, 4
>>> cores/8 threads).
>>
>> Even with code that is optimised for multiprocessor operation like chess
>> engines a rule of thumb is that about 75% of fast cores running flat out
>> you saturate memory bandwidth and so allowing more than 6 cores out of 8
>> to run merely increases power consumption and may even slow down the
>> computation. Chess is even more insidious in that certain pruning
>> techniques don't lend themselves to parallelism so you lose both ways.
>>>
>>> However, running more processes of LTspice to execute different
>>> simulations
>>> at the same time should overcome this limitation: each simulation is
>>> distinct, they can be fully paralleled. If I run two simulations that
>>> individually would use the 20% of CPU and last 10 minutes, I should see a
>>> 40% CPU occupation but they still should take 10 minutes to complete.
>>> Maybe
>>> a little more for the Windows scheduler overhead.
>>>
>>> Instead, what I'm seeing in reality is indeed a 40% CPU occupation, but
>>> both simulations would take almost exactly twice as much to complete, 20
>>> minutes.
>>
>> The computation is almost certainly memory constrained. The matrix
>> solver needs to have plenty of cache to solve the sparse equations and
>> is likely making assumptions about cache lines remaining in cache.
>>
>> Two processes trying to do the same sort of thing will fight like hell
>> for the available resources. I expect LT Spice is very cache aware even
>> if it is only single processor friendly.
>
>What about disk access? AFAIK an LTSpice instance by default saves its
>work to disk as it goes along, see e.g.
>
><https://groups.google.com/g/sci.electronics.cad/c/EnqyB0hUSvo/m/QGxt1uTN1AkJ>
>
I have seen .save, limiting disk access, double sim speed. But then you can't freely probe the results, or calculate power dissipation, unless you plan that in advance.
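As a minimal sketch of that planning step (node and component names here are placeholders, not from any circuit discussed above), the idea is to save both the voltage and the current of any device whose dissipation you may want later, for example one of:

.save V(*) I(Q1) I(L1)        ; all node voltages plus a few device currents
.save V(out) V(sw) I(Q1)      ; or, tighter still, only the signals of interest

Either way, something like V(sw)*I(Q1) can still be plotted after the run, because both factors are in the .raw file, while everything else is left out of it.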
On 9/21/2023 2:39 PM, John Larkin wrote:
> On Thu, 21 Sep 2023 14:04:29 -0400, bitrex <user@example.net> wrote:
>
>> On 9/21/2023 1:31 PM, Martin Brown wrote:
>>> On 21/09/2023 13:22, dalai lamah wrote:
>>>> As you probably know, in many occasions LTspice cannot take advantage of
>>>> multiple CPU cores because many operations are not easily parallelizable.
>>>> In fact, most simulations I make use less than 20/25% of CPU (intel i5, 4
>>>> cores/8 threads).
>>>
>>> Even with code that is optimised for multiprocessor operation like chess
>>> engines a rule of thumb is that about 75% of fast cores running flat out
>>> you saturate memory bandwidth and so allowing more than 6 cores out of 8
>>> to run merely increases power consumption and may even slow down the
>>> computation. Chess is even more insidious in that certain pruning
>>> techniques don't lend themselves to parallelism so you lose both ways.
>>>>
>>>> However, running more processes of LTspice to execute different
>>>> simulations
>>>> at the same time should overcome this limitation: each simulation is
>>>> distinct, they can be fully paralleled. If I run two simulations that
>>>> individually would use the 20% of CPU and last 10 minutes, I should see a
>>>> 40% CPU occupation but they still should take 10 minutes to complete.
>>>> Maybe
>>>> a little more for the Windows scheduler overhead.
>>>>
>>>> Instead, what I'm seeing in reality is indeed a 40% CPU occupation, but
>>>> both simulations would take almost exactly twice as much to complete, 20
>>>> minutes.
>>>
>>> The computation is almost certainly memory constrained. The matrix
>>> solver needs to have plenty of cache to solve the sparse equations and
>>> is likely making assumptions about cache lines remaining in cache.
>>>
>>> Two processes trying to do the same sort of thing will fight like hell
>>> for the available resources. I expect LT Spice is very cache aware even
>>> if it is only single processor friendly.
>>
>> What about disk access? AFAIK an LTSpice instance by default saves its
>> work to disk as it goes along, see e.g.
>>
>> <https://groups.google.com/g/sci.electronics.cad/c/EnqyB0hUSvo/m/QGxt1uTN1AkJ>
>>
>
> I have seen .save, limiting disk access, double sim speed. But then
> you can't freely probe the results, or calculate power dissipation,
> unless you plan that in advance.
>
On this older i7 laptop, which has two physical cores with two logical cores each, I tried setting the thread priority to medium and max threads to two in each LTspice instance to see if I could get them to load-share more evenly. They do seem to: CPU and disk utilization both go up, but the two sims still complete slower. At least on this machine, for this test case, just letting each instance take turns hogging everything for a while seems to be the optimal way to get it done.
On 21/09/2023 19:04, bitrex wrote:
> On 9/21/2023 1:31 PM, Martin Brown wrote:
>> On 21/09/2023 13:22, dalai lamah wrote:
>>> Instead, what I'm seeing in reality is indeed a 40% CPU occupation, but
>>> both simulations would take almost exactly twice as much to complete, 20
>>> minutes.
>>
>> The computation is almost certainly memory constrained. The matrix
>> solver needs to have plenty of cache to solve the sparse equations and
>> is likely making assumptions about cache lines remaining in cache.
>>
>> Two processes trying to do the same sort of thing will fight like hell
>> for the available resources. I expect LT Spice is very cache aware
>> even if it is only single processor friendly.
>
> What about disk access? AFAIK an LTSpice instance by default saves its
> work to disk as it goes along, see e.g.
>
> <https://groups.google.com/g/sci.electronics.cad/c/EnqyB0hUSvo/m/QGxt1uTN1AkJ>
Quite likely it is also a factor and putting the machine on a UPS and
using the more dangerous disk write caching strategy might speed it up.

I'm assuming that anyone half serious about doing this will have the
fastest possible SSD and on the fastest interface (which is very good
when compared to spinning rust). You can gain almost another factor of
two by having a matched RAID pair if your hardware supports it.

But first you need to identify which bottleneck is the real problem and
holding back performance. Doubling physical ram is fairly cheap.

--
Martin Brown
On 9/21/2023 12:21 PM, Martin Brown wrote:
> On 21/09/2023 19:04, bitrex wrote:
>> On 9/21/2023 1:31 PM, Martin Brown wrote:
>>> On 21/09/2023 13:22, dalai lamah wrote:
>
>>>> Instead, what I'm seeing in reality is indeed a 40% CPU occupation, but
>>>> both simulations would take almost exactly twice as much to complete, 20
>>>> minutes.
>>>
>>> The computation is almost certainly memory constrained. The matrix solver
>>> needs to have plenty of cache to solve the sparse equations and is likely
>>> making assumptions about cache lines remaining in cache.
>>>
>>> Two processes trying to do the same sort of thing will fight like hell for
>>> the available resources. I expect LT Spice is very cache aware even if it is
>>> only single processor friendly.
>>
>> What about disk access? AFAIK an LTSpice instance by default saves its work
>> to disk as it goes along, see e.g.
>>
>> <https://groups.google.com/g/sci.electronics.cad/c/EnqyB0hUSvo/m/QGxt1uTN1AkJ>
>
> Quite likely it is also a factor and putting the machine on a UPS and using the
> more dangerous disk write caching strategy might speed it up.
>
> I'm assuming that anyone half serious about doing this will have the fastest
> possible SSD and on the fastest interface (which is very good when compared to
> spinning rust). You can gain almost another factor of two by having a matched
> RAID pair if your hardware supports it.
If the OP is only seeing 1-2 MB/s on the disk, it's not the medium that's
the problem (I can easily move 100 MB/s on four spindles concurrently with
"old hardware").

If the application is foolishly flushing buffers all the time, then it's
just wasting CPU cycles. (Are you afraid YOU are going to crash? A
simulation can always be restarted, so there's no "precious" data at stake.)
> But first you need to identify which bottleneck is the real problem and
> holding back performance. Doubling physical ram is fairly cheap.
It's a win in that it helps EVERYTHING on the machine.

You can see the effect of having SPICE run alongside some other (e.g.
disk-intensive) application: does the disk app complete in the same time
as it would "solo"? What impact does it have on the sim? (I.e., run apps
that you know make specific types of demands on the hardware and see which
ones "annoy" the sim.)

[Given that you can't really instrument anything beyond what's already
available for inspection.]

But I suspect exhausting the cache will prove to be the real culprit.