Reply by John Doe June 12, 2022
The 1 TB is $155. Great price.
Reply by John Doe June 12, 2022
Samsung 980 Pro NVMe SSD 1TB PCIe 4.0 (Amazon)

Oom, baby.

The 2 TB version jumps to $270. But if money is no object, those two would 
make an amazing primary and secondary drive combination. Primary being 
smaller.

I'm only using half of my 960 Pro (primary drive) and I already have the 980 
original (secondary drive).

John Doe <always.look@message.header> wrote:

> John Larkin wrote:
>
>> I'll need to get a new PC soon. People say that a screaming CPU and lots
>> of ram and solid-state C drive would really speed things up.
>
> I have had NVMe drives for ages. Get good ones, like from Samsung. Spend
> more than you used to spend on hard drives, it's well worth it. Primary
> (and secondary) storage is not whizbang fun, but as IBM used to put it, it
> affects "throughput" more than anything else. NVMe is what I have always
> wanted.
>
> If you use a secondary drive, the transfer rate between two premium NVMe
> drives is OUTRAGEOUS. Transferring movies from your downloads folder to
> the secondary drive is quick. Making backups of Windows is also quick.
Reply by Martin Brown June 4, 2022
On 03/06/2022 15:30, Phil Hobbs wrote:
> Chris Jones wrote:
>> On 02/06/2022 23:33, jlarkin@highlandsniptechnology.com wrote:
>>> There have been attempts to use computers to actually design
>>> circuits, or at least to optimize values in a given topology. They
>>> tended to be ludicrous failures.
>>
>> I used an optimizer for some chip designs, it was very very good for
>> things like choosing the size of the transistors in flipflops for
>> best toggle frequency per supply current, and optimising a low pass
>> filter for best noise and in-band error-vector-magnitude and stop-band
>> rejection etc. all at the same time. It did way better than
>> I could have.
>
> Sounds like a super useful tool. I did something similar for optimizing
> plasmonic nanoantennas 15 or so years ago, and like yours, it found good
> solutions that weren't at all obvious. So I'm a fan of the general
> approach.
Simulated annealing or simplex are pretty good for that sort of thing. Then conjugate gradients once you get somewhere near an optimum. You may never be sure you have the global optimum but it will probably find a solution that is better than any human can in a reasonable time.
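[Editor's aside: a minimal sketch of the two-stage search described above, assuming Python with NumPy and SciPy. The "circuit" is just a hypothetical first-order RC low-pass fitted to a target response; the target corner, bounds and penalty weighting are invented purely for illustration, not taken from the thread.]

import numpy as np
from scipy.optimize import dual_annealing, minimize

freqs = np.logspace(2, 6, 50)                  # 100 Hz .. 1 MHz
target_fc = 10e3                               # assumed -3 dB corner frequency
target_db = 20 * np.log10(1.0 / np.sqrt(1.0 + (freqs / target_fc) ** 2))

def penalty(log_x):
    # Sum-of-squares dB error between the RC response and the target.
    # Parameters are log10(R) and log10(C) so the search space is well scaled.
    R, C = 10.0 ** log_x[0], 10.0 ** log_x[1]
    fc = 1.0 / (2.0 * np.pi * R * C)
    model_db = 20 * np.log10(1.0 / np.sqrt(1.0 + (freqs / fc) ** 2))
    return float(np.sum((model_db - target_db) ** 2))

bounds = [(2.0, 6.0), (-12.0, -6.0)]           # R: 100 ohm .. 1 Mohm, C: 1 pF .. 1 uF

coarse = dual_annealing(penalty, bounds, maxiter=200)   # global, stochastic stage
fine = minimize(penalty, coarse.x, method="CG")         # conjugate-gradient polish near the optimum

R, C = 10.0 ** fine.x[0], 10.0 ** fine.x[1]
print(f"R = {R:.3g} ohm, C = {C:.3g} F, residual = {fine.fun:.3g}")

[As with any stochastic global search, there is no guarantee the annealing stage has found the true global optimum, only a good basin for the local polish to finish in.]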
>> The trick was to write a script that runs the right simulations and
>> results in an expression (or several) that correctly describes how
>> well a circuit meets the goals. Once you've done that, it can twiddle
>> the knobs much better than any human, and I don't mean because it
>> could do it faster and spam the simulations across a thousand CPUs
>> whilst you could look at only one at a time, it was also better in
>> that it could remember many sets of parameters that were good in
>> various ways, and combine them more efficiently than a human. It was
>> a company internal tool and they will surely have kept it that way.
>
> Numerical optimization based on merit / penalty functions has been well
> known for 200 years, since Gauss iirc. That's a far cry from actual
> computer-based design.
>
> Even lenses, which you'd think would be a natural application, have been
> resistant to fully-automated design--it's all about finding a suitable
> starting point.
Optical lens designs have been pretty much solved now, at least in the domains that I frequent. There could only be a handful of truly weird configurations remaining that haven't been tried already. The last interesting one, in terms of being very different to orthodoxy, was Willstrop's three-mirror telescope (no chromatic or spherical aberration, precise focus and very fast).

https://www.ast.cam.ac.uk/about/three-mirror.telescope

To the best of my knowledge no full-scale one has ever been built. I expect the odd novelty still lurks in the shadows.

The search for eyepieces with ever more wide-angle views remains the Holy Grail - they are getting a bit ridiculous now, with some offering a 120 degree AFOV.

https://www.telescopehouse.com/Telescope-Accessories/EXPLORE-SCIENTIFIC-120-Ar-Eyepiece-9mm-2.html

I think that is the current record holder but I could be wrong on that. Price and weight are both a bit on the high side (lots of glass in it).
> There are various approaches that modify topologies, of which the best > known are genetic algorithms.
Modifying the topology is best done by humans. Optimising the components against some library of available materials and shapes is now the domain of sophisticated ray tracing programs. Zemax is probably the best known:

https://www.zemax.com/pages/try-opticstudio-for-free

Learning curve could best be described as STEEP...

--
Regards,
Martin Brown
Reply by Phil Hobbs June 4, 2022
legg wrote:
> On Fri, 3 Jun 2022 10:30:51 -0400, Phil Hobbs
> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>
<snip>
>
> You can do the same thing with a list of components to generate a
> simple figure of merit for an application. Can include such esoteric
> issues as cost (book, real estate, process).
Sure. My EM simulator can optimize on literally anything expressible in its input files. The issue isn't figuring out how good a design is, it's generating good ones from a blank sheet of paper.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com
Reply by legg June 4, 2022
On Fri, 3 Jun 2022 10:30:51 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

<snip>
>
>Numerical optimization based on merit / penalty functions has been well
>known for 200 years, since Gauss iirc. That's a far cry from actual
>computer-based design.
>
>Even lenses, which you'd think would be a natural application, have been
>resistant to fully-automated design--it's all about finding a suitable
>starting point.
>
>There are various approaches that modify topologies, of which the best
>known are genetic algorithms.
>
>Cheers
>
>Phil Hobbs
You can do the same thing with a list of components to generate a simple figure of merit for an application. Can include such esoteric issues as cost (book, real estate, process).

Don't stock market analysis programs try to do this in real time? Muddying their own pool . . . . .

RL
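[Editor's aside: a toy sketch of the suggestion above, assuming Python. Performance and "esoteric" factors such as unit cost and board area are folded into one scalar figure of merit per candidate part; the part data and weights are made up purely for illustration.]

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    ripple_mv: float      # electrical performance metric
    cost_usd: float       # book cost
    area_mm2: float       # board real estate

def figure_of_merit(c: Candidate, w_ripple=1.0, w_cost=5.0, w_area=0.2) -> float:
    # Lower is better: a weighted penalty over performance, cost and area.
    return w_ripple * c.ripple_mv + w_cost * c.cost_usd + w_area * c.area_mm2

parts = [
    Candidate("inductor A", ripple_mv=12.0, cost_usd=0.80, area_mm2=25.0),
    Candidate("inductor B", ripple_mv=8.0,  cost_usd=1.60, area_mm2=36.0),
    Candidate("inductor C", ripple_mv=20.0, cost_usd=0.35, area_mm2=16.0),
]

for p in sorted(parts, key=figure_of_merit):
    print(f"{p.name}: FOM = {figure_of_merit(p):.1f}")

[The weights are where all the engineering judgement lives; change them and the ranking can flip.]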
Reply by Anthony William Sloman June 4, 2022
On Saturday, June 4, 2022 at 9:02:09 AM UTC+10, Clifford Heath wrote:
> On 4/6/22 01:38, Joe Gwinn wrote:
> > On Fri, 3 Jun 2022 10:30:51 -0400, Phil Hobbs <pcdhSpamM...@electrooptical.net> wrote:
> >> Chris Jones wrote:
> >>> On 02/06/2022 23:33, jla...@highlandsniptechnology.com wrote:
<snip>
> > Yes. The part about "it could remember many sets of parameters that
> > were good in various ways, and combine them more efficiently than a
> > human" sounds very much like a genetic programming algorithm, which
> > are very good at improving from a valid starting point.
>
> It sounds like multi-dimensional slope-descent to me.
I actually used non-linear multi-parameter curve fitting in my Ph.D. work back around 1968. It did rely on having a finite set of continuous data to create the surface that it crawled across. I used the Fletcher-Powell algorithm rather than Marquardt, but there are plenty of others.

https://www.sciencedirect.com/science/article/abs/pii/0167715283900494
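[Editor's aside: a minimal sketch of that kind of non-linear multi-parameter fit, assuming Python with NumPy and SciPy. SciPy offers BFGS rather than the older Davidon-Fletcher-Powell update, but it is the same quasi-Newton family; the exponential-decay model and the synthetic data are invented for illustration.]

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 60)
true = (3.0, 0.7, 0.5)                               # A, k, b used to fake the data
y = true[0] * np.exp(-true[1] * t) + true[2] + rng.normal(0.0, 0.05, t.size)

def sum_sq(params):
    # The surface the optimiser crawls across: residual sum of squares.
    A, k, b = params
    return float(np.sum((A * np.exp(-k * t) + b - y) ** 2))

fit = minimize(sum_sq, x0=[1.0, 1.0, 0.0], method="BFGS")
print("A, k, b =", np.round(fit.x, 3), "  SSQ =", round(fit.fun, 4))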
> These are very good at finding local optima for a given design. Design
> from scratch requires finding global optima, something that
> slope-descent isn't very good at. Simulated annealing and genetic
> programming might have more luck.
That does sound right.

--
Bill Sloman, Sydney
Reply by Clifford Heath June 3, 2022
On 4/6/22 01:38, Joe Gwinn wrote:
> On Fri, 3 Jun 2022 10:30:51 -0400, Phil Hobbs
> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>
<snip>
>
> Yes. The part about "it could remember many sets of parameters that
> were good in various ways, and combine them more efficiently than a
> human" sounds very much like a genetic programming algorithm, which
> are very good at improving from a valid starting point.
It sounds like multi-dimensional slope-descent to me.

These are very good at finding local optima for a given design. Design from scratch requires finding global optima, something that slope-descent isn't very good at. Simulated annealing and genetic programming might have more luck.

CH
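[Editor's aside: a compact sketch of that local-versus-global point, assuming Python with NumPy and SciPy. The multimodal test function and starting point are arbitrary, chosen only to show slope-descent getting trapped while a stochastic global method escapes.]

import numpy as np
from scipy.optimize import minimize, dual_annealing

def f(x):
    # A deliberately multimodal one-dimensional function; its global minimum
    # is near x = -0.5, with shallower local minima elsewhere.
    x = np.atleast_1d(x)[0]
    return x * x + 10.0 * np.sin(3.0 * x) + 10.0

# Slope descent from a poor start usually lands in a nearby local minimum.
local = minimize(f, x0=[4.0], method="BFGS")

# A stochastic global method gets a chance to escape it.
global_search = dual_annealing(f, bounds=[(-10.0, 10.0)])

print("local search : x =", float(local.x[0]), " f =", float(local.fun))
print("global search: x =", float(global_search.x[0]), " f =", float(global_search.fun))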
Reply by Phil Hobbs June 3, 2022
Joe Gwinn wrote:
> On Fri, 3 Jun 2022 10:30:51 -0400, Phil Hobbs
> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>
<snip>
>
> Yes. The part about "it could remember many sets of parameters that
> were good in various ways, and combine them more efficiently than a
> human" sounds very much like a genetic programming algorithm, which
> are very good at improving from a valid starting point.
>
> Joe Gwinn
Not as far as I know. The point of genetic algos is to change the topology, not just the values. Unless I'm misunderstanding, Chris's optimizer was the usual sort that tweaks parameters to minimize some penalty function. Most of those remember previous values too--for instance my usual go-to algo, the Nelder-Mead downhill simplex method ('amoeba()' in Numerical Recipes). For N variables, it keeps N+1 sets.

I like Nelder-Mead because most of the things I need to optimize are either discontinuous themselves, like the number and placement of rectangular boxes of metal in a nanoantenna, or else need to be constrained to physically realizable values, as in a filter design code where the component values need to be positive. (I usually use mirroring to constrain that sort of thing, which avoids the tendency of the simplex to collapse along the discontinuity like water along a curb.)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com
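[Editor's aside: a rough sketch of that mirroring trick, assuming Python with NumPy and SciPy rather than the Numerical Recipes amoeba(). Nelder-Mead runs on an unconstrained parameter vector, and the merit function takes the absolute value before evaluating, so component values stay positive without erecting a wall the simplex can collapse against. The deliberately trivial RC "filter", target and starting values are invented for illustration.]

import numpy as np
from scipy.optimize import minimize

freqs = np.logspace(3, 7, 80)
f_c = 100e3                                          # assumed target corner frequency

def rc_lowpass_db(R, C):
    fc = 1.0 / (2.0 * np.pi * R * C)
    return 20.0 * np.log10(1.0 / np.sqrt(1.0 + (freqs / fc) ** 2))

target_db = 20.0 * np.log10(1.0 / np.sqrt(1.0 + (freqs / f_c) ** 2))

def merit(x):
    # Penalty evaluated on the mirrored (always-positive) component values.
    R, C = np.abs(x)                                 # the mirroring step
    return float(np.sum((rc_lowpass_db(R, C) - target_db) ** 2))

# Two parameters, so the simplex keeps N + 1 = 3 vertices.
result = minimize(merit, x0=[1e4, 1e-10], method="Nelder-Mead",
                  options={"maxiter": 5000})
R_opt, C_opt = np.abs(result.x)
print(f"R = {R_opt:.3g} ohm, C = {C_opt:.3g} F, penalty = {result.fun:.3g}")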
Reply by Joe Gwinn June 3, 2022
On Fri, 3 Jun 2022 10:30:51 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

<snip>
>
>There are various approaches that modify topologies, of which the best
>known are genetic algorithms.
Yes. The part about "it could remember many sets of parameters that were good in various ways, and combine them more efficiently than a human" sounds very much like a genetic programming algorithm, which are very good at improving from a valid starting point.

Joe Gwinn
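[Editor's aside: a bare-bones sketch of the genetic-style idea described above, assuming Python with NumPy. A population of parameter sets is scored, the best are remembered, and new candidates are bred by combining (crossover) and perturbing (mutation) them. The two-parameter RC merit function is a toy invented for illustration.]

import numpy as np

rng = np.random.default_rng(3)

def merit(params):
    # Lower is better: log-distance of the RC corner from an assumed 10 kHz target.
    R, C = params
    fc = 1.0 / (2.0 * np.pi * R * C)
    return abs(np.log10(fc) - np.log10(10e3))

# Initial population: random log-spaced R (ohm) and C (farad) values.
pop = np.column_stack([10 ** rng.uniform(2, 6, 40), 10 ** rng.uniform(-12, -6, 40)])

for generation in range(30):
    scores = np.array([merit(p) for p in pop])
    parents = pop[np.argsort(scores)][:10]              # remember the best sets
    children = []
    for _ in range(30):
        a, b = parents[rng.integers(0, 10, 2)]
        mask = rng.random(2) < 0.5                      # uniform crossover
        child = np.where(mask, a, b)
        child = child * 10 ** rng.normal(0.0, 0.05, 2)  # multiplicative mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([merit(p) for p in pop])]
print(f"best R = {best[0]:.3g} ohm, C = {best[1]:.3g} F, merit = {merit(best):.3g}")

[Note this only breeds component values within a fixed topology; as discussed elsewhere in the thread, full genetic programming would also mutate the topology itself.]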
Reply by Phil Hobbs June 3, 2022
Chris Jones wrote:
> On 02/06/2022 23:33, jlarkin@highlandsniptechnology.com wrote:
>> There have been attempts to use computers to actually design
>> circuits, or at least to optimize values in a given topology. They
>> tended to be ludicrous failures.
>
> I used an optimizer for some chip designs, it was very very good for
> things like choosing the size of the transistors in flipflops for
> best toggle frequency per supply current, and optimising a low pass
> filter for best noise and in-band error-vector-magnitude and
> stop-band rejection etc. all at the same time. It did way better than
> I could have.
Sounds like a super useful tool. I did something similar for optimizing plasmonic nanoantennas 15 or so years ago, and like yours, it found good solutions that weren't at all obvious. So I'm a fan of the general approach.

Of course, that sort of thing has been done automatically since the 1940s (or earlier, using manual methods--see e.g. <https://en.wikipedia.org/wiki/Linear_programming#History>).
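[Editor's aside: linear programming of the sort alluded to above is readily available today, for instance in SciPy. A toy example, assuming Python with SciPy; the objective and constraints are made up.]

from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0], [1.0, 3.0]]
b_ub = [4.0, 6.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("x, y =", res.x, " objective =", -res.fun)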
> The trick was to write a script that runs the right simulations and
> results in an expression (or several) that correctly describes how
> well a circuit meets the goals. Once you've done that, it can twiddle
> the knobs much better than any human, and I don't mean because it
> could do it faster and spam the simulations across a thousand CPUs
> whilst you could look at only one at a time, it was also better in
> that it could remember many sets of parameters that were good in
> various ways, and combine them more efficiently than a human. It was
> a company internal tool and they will surely have kept it that way.
Numerical optimization based on merit / penalty functions has been well known for 200 years, since Gauss iirc. That's a far cry from actual computer-based design.

Even lenses, which you'd think would be a natural application, have been resistant to fully-automated design--it's all about finding a suitable starting point.

There are various approaches that modify topologies, of which the best known are genetic algorithms.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com