
new spice

Started by John Larkin September 28, 2021
On Fri, 1 Oct 2021 17:52:22 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

>John Larkin wrote:
[snip]
>>> Some famous person said "If you really need to use floating point,
>>> you don't understand the problem."
>
> Probably one of Rick Collins's FORTH pals. ;) (BITD FORTH didn't have
> FP--it seems to now. MacFORTH circa 1984 sure didn't--that's the last
> time I used it.)
I remember that. I don't think many telescope controller computers
possessed FP hardware, and software FP libraries were too slow. But at
least in my world, we went to floating point ASAP because it greatly
reduced the programming effort, and the finding and fixing of bugs due
to scaling errors. Having big enough computer words helped a lot too,
but the key was when FP arithmetic became roughly as fast as integer
hardware, or at least fast enough for the task at hand.

War story: My best-selling memo in the early 1980s documented a
crank-and-grind procedure for converting a math formula into the
equivalent scaled binary. The CPUs were 68000-family. Before that,
people couldn't figure out when to multiply and when to divide.

Joe Gwinn
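To make the scaled-binary idea concrete: a minimal sketch in C++,
assuming a Q16.16 format (the memo's actual procedure and formats are
not reproduced here; all names are illustrative). The rescaling shifts
are exactly the "when to multiply and when to divide" bookkeeping that
tripped people up:

#include <cstdint>
#include <cstdio>

// Q16.16 fixed point: value = raw / 2^16.
using q16 = int32_t;
constexpr int FRAC = 16;

constexpr q16    to_q16(double x) { return static_cast<q16>(x * (1 << FRAC)); }
constexpr double from_q16(q16 x)  { return static_cast<double>(x) / (1 << FRAC); }

// Multiply: a Q16.16 product is Q32.32, so shift right by 16 to
// rescale -- the "when to divide" step.
q16 mul_q16(q16 a, q16 b) {
    return static_cast<q16>((static_cast<int64_t>(a) * b) >> FRAC);
}

// Divide: pre-shift the dividend left by 16 so the quotient lands
// back in Q16.16 -- the "when to multiply" step.
q16 div_q16(q16 a, q16 b) {
    return static_cast<q16>((static_cast<int64_t>(a) << FRAC) / b);
}

int main() {
    q16 x = to_q16(3.25), y = to_q16(1.5);
    std::printf("%f\n", from_q16(mul_q16(x, y)));  // 4.875000
    std::printf("%f\n", from_q16(div_q16(x, y)));  // 2.166656 (truncated)
}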
On 1/10/21 10:11 pm, Gerhard Hoffmann wrote:
> On 01.10.21 at 12:04, Jeroen Belleman wrote:
>
>> There's also Wirth's observation: "Software gets slower faster
>> than hardware gets faster."
>
> When asked how his name should be pronounced he said:
>
>   "You can call me by name: that's Wirth,
>    or you can call me by value: that's worth."
Specifically, "Niklaus Wirth" (by name, spoken as a European would) versus "nickel's worth" (by value, what Americans called him). CH
On 1/10/21 10:46 pm, Jeroen Belleman wrote:
> I still don't get why C++ had to add call by reference. Big
> mistake, in my view.
Because without references, you can't implement operator overloading
with the same semantics as the built-ins. CH
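A minimal sketch of the point, with illustrative names (not from the
thread): to give a user-defined type the same lvalue and no-copy
semantics as a built-in, the overloaded operators have to take and
return references.

#include <cstddef>
#include <vector>

struct Vec {
    std::vector<double> d;
    explicit Vec(std::size_t n) : d(n) {}

    // Must return a reference so "v[i] = 3.0" assigns into the object,
    // exactly like a built-in array element (an lvalue).
    double& operator[](std::size_t i) { return d[i]; }

    // Must take its argument by reference: pass-by-value would copy the
    // whole vector, and the copy constructor itself cannot take its
    // argument by value (that would require a copy -- infinite regress).
    Vec& operator+=(const Vec& rhs) {
        for (std::size_t i = 0; i < d.size(); ++i) d[i] += rhs.d[i];
        return *this;  // reference return allows chaining, like built-ins
    }
};

int main() {
    Vec a(3), b(3);
    a[0] = 1.0;   // works only because operator[] returns double&
    b[0] = 2.0;
    a += b;       // no copies, like "x += y" on an int
}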
On 2/10/21 4:20 am, Jeroen Belleman wrote:
> On 2021-10-01 17:51, Phil Hobbs wrote:
>> See Chisnall's classic 2018 paper,
>> "C is not a low-level language. Your computer is not a fast PDP-11."
>> <https://dl.acm.org/doi/abs/10.1145/3212477.3212479>
>>
>> It's far from a straightforward process.
>
> He concludes with "C doesn't map to modern hardware very well".
>
> Given that a very large fraction of all software is written in
> some dialect of C, one may wonder how this could happen.
C did not envisage massive pipelining and caches for memory that is several orders of magnitude slower than the CPUs, so there was no real necessity to avoid aliasing and other things that make it difficult for a compiler to re-order operations. CH
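To illustrate the aliasing point, a minimal sketch; the __restrict__
qualifier in the second version is a GCC/Clang extension (an assumption
about the toolchain -- ISO C has "restrict", ISO C++ does not):

// Since 'sum' might point into 'a', the compiler must store *sum to
// memory on every iteration and cannot vectorize the loop: each a[i]
// load may depend on the previous store.
void accumulate(float* a, float* sum, int n) {
    for (int i = 0; i < n; ++i)
        *sum += a[i];
}

// __restrict__ is the programmer's promise that the pointers don't
// overlap, so the compiler may keep the accumulator in a register and
// write it back once.
void accumulate_no_alias(float* __restrict__ a, float* __restrict__ sum, int n) {
    float s = *sum;
    for (int i = 0; i < n; ++i)
        s += a[i];
    *sum = s;
}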
On 2/10/21 3:03 am, Phil Hobbs wrote:
> jlarkin@highlandsniptechnology.com wrote:
>> On Fri, 1 Oct 2021 12:34:25 -0400, Phil Hobbs
>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>
>>> jlarkin@highlandsniptechnology.com wrote:
>>>> On Fri, 1 Oct 2021 12:13:40 -0400, Phil Hobbs
>>>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>>
>>> Large sparse matrices don't map well onto purely general-purpose
>>> hardware.
>>
>> Then stop thinking of circuit simulation in terms of matrix math.
>
> Oh, come _on_.  The problem is a large sparse system of nonlinear ODEs
> with some bags hung onto the side for Tlines and such.  How you write it
> out doesn't change what has to be done--the main issue is the
> irregularity and unpredictability of the circuit topology.
Describing the matrix as sparse ignores the potential advantages of the topology. A matrix allows anything to connect to anything with equal cost, but in actual circuits, connections are far more likely to be to "nearby" components - where nearby means there is another short path to the same component. JIT FPGA synthesis could use that property to make use of local interconnect instead of needing an escalating amount of scarce global interconnect. CH
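For concreteness on the sparse-matrix subthread: a minimal
compressed-sparse-row (CSR) sketch, with illustrative names (not
SPICE's actual code), showing why a solver touches only the actual
connections rather than N^2 entries:

#include <cstddef>
#include <vector>

// A nodal matrix for N nodes has only a few nonzeros per row -- one
// per connected neighbour -- so only those entries are stored.
struct Csr {
    std::vector<int>    row_start; // N+1 offsets into cols/vals
    std::vector<int>    cols;      // column index of each nonzero
    std::vector<double> vals;      // value of each nonzero
};

// y = A*x touching only the stored entries: cost is O(nonzeros),
// not O(N^2).
std::vector<double> spmv(const Csr& A, const std::vector<double>& x) {
    std::size_t n = A.row_start.size() - 1;
    std::vector<double> y(n, 0.0);
    for (std::size_t r = 0; r < n; ++r)
        for (int k = A.row_start[r]; k < A.row_start[r + 1]; ++k)
            y[r] += A.vals[k] * x[A.cols[k]];
    return y;
}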
On 10/1/2021 5:27 PM, Dimiter_Popoff wrote:
>> The wider public effectively selects what *we* can/will use.
>> Companies that can't produce in quantity can't afford to support
>> their products.
>>
>> And, those products go away. Witness how Zilog disappeared,
>> despite having the king of the 8-bitters (yet failed to evolve
>> to anything bigger/wider).
>>
>> So, "merit" doesn't figure into the calculation.
>
> Like you said in another post here,
> "The Market didn't select for the "best" -- but *did* "select". "
> Of course I know how life works. But I am not a marketeer, and
> I would have felt I had wasted my life had I chosen to join
> the herd as herded by the marketeers. It has been anything but
> an easy ride but I don't regret a minute of it.
I am thankful never to have had to deal with the hacks in the 80x86
family of devices. Thankfully, they were never cost-effective for any
of my designs. Likewise, I'm glad never to have designed a *product*
under any of the OSs that run on it. <shudder>

The ARM devices don't feel much "cleaner"; it feels like you're working
with a bunch of hand-me-down furniture and trying to lay out a nice
apartment, but are constrained by all of the quirks that the "pieces"
present -- the end tables are early-American cherry but the coffee
table is southwestern oak.

But at least they aren't a "sole source" solution; if I get annoyed
with an ARM offering from vendor A, I can move over to vendor B with
just a revision to the PCB artwork -- considerably cheaper than having
to rewrite any amount of the codebase for a processor with different
features/characteristics.

It would be delightful to return to the '70s and tinker with all those
different architectures, but with more modern tools (so you can
actually finish a design before the devices are obsolete!)
PS
 https://arstechnica.com/science/2021/09/understanding-neuromorphic-computing-and-why-intels-excited-about-it/2/

and 128 cores -- sort of J. Larkin's thoughts, perhaps.


On a sunny day (Fri, 01 Oct 2021 20:20:25 +0200) it happened Jeroen Belleman
<jeroen@nospam.please> wrote in <sj7jha$cg6$1@gioia.aioe.org>:

> On 2021-10-01 17:51, Phil Hobbs wrote:
>> Jan Panteltje wrote:
>>> On a sunny day (Fri, 1 Oct 2021 09:05:31 -0400) it happened Phil Hobbs
>>> <pcdhSpamMeSenseless@electrooptical.net> wrote in
>>> <95332466-d835-f22c-a8f3-dfc5cd15d1a7@electrooptical.net>:
>>>
>>>> Jeroen Belleman wrote:
>>>>> On 2021-10-01 14:11, Gerhard Hoffmann wrote:
>>>>>> On 01.10.21 at 12:04, Jeroen Belleman wrote:
>>>>>>
>>>>>>> There's also Wirth's observation: "Software gets slower faster
>>>>>>> than hardware gets faster."
>>>>>>
>>>>>> When asked how his name should be pronounced he said:
>>>>>>
>>>>>>   "You can call me by name: that's Wirth
>>>>>>    or you can call me by value: that's worth"
>>>>>>
>>>>>> Cheers, Gerhard
>>>>>
>>>>> I still don't get why C++ had to add call by reference. Big
>>>>> mistake, in my view.
>>>>>
>>>>> Jeroen Belleman
>>>>
>>>> Why? Smart pointers didn't exist in 1998 IIRC, and reducing the number
>>>> of bare pointers getting passed around has to be a good thing, surely?
>>>
>>> C++ is a crime against humanity.
>>>
>>> New languages are created almost daily because people are not willing
>>> to learn about the hardware and what really happens.
>>
>> I sort of doubt that you or anyone on this group actually knows "what
>> really happens" when a 2021-vintage compiler maps your source code onto
>> 2010s- or 2020s-vintage hardware. See Chisnall's classic 2018 paper,
>> "C is not a low-level language. Your computer is not a fast PDP-11."
>> <https://dl.acm.org/doi/abs/10.1145/3212477.3212479>
>>
>> It's far from a straightforward process.
>
> He concludes with "C doesn't map to modern hardware very well".
>
> Given that a very large fraction of all software is written in
> some dialect of C, one may wonder how this could happen.
>
> Jeroen Belleman
So many ideas. "C does not map to modern hardware very well" is
complete bollocks. Lots of code I wrote in C runs perfectly on ARM and
AMD and Intel right now, including the text I just wrote on a core i5,
posted via this newsreader that I wrote in C.

If I wanted to see the asm gcc created, no problem. But gcc (the free
open-source C compiler) is so good these days that viewing the
generated code is almost never needed. Maybe I have out of curiosity,
but not because of problems. And quite honestly I would not WANT to
look at the ARM asm and register stuff gcc generates. Maybe I
encountered 1 or 2 gcc bugs in 20 years?

All the GUI stuff and libraries are written in C.
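For anyone who does want to peek, gcc's standard -S flag emits the
generated assembly; a trivial sketch (the file and function names are
made up):

// scale.cpp -- inspect gcc's output with:
//   g++ -O2 -S scale.cpp     (assembly lands in scale.s)
// Comparing -O0 against -O2 shows how far the mapping is from 1:1.
int scale(int x) {
    return x * 9;   // at -O2 on x86-64, typically one lea, not an imul
}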
On a sunny day (Sat, 2 Oct 2021 00:37:15 +0300) it happened Dimiter_Popoff
<dp@tgi-sci.com> wrote in <sj7v2c$oi2$1@dont-email.me>:

> On 10/2/2021 0:17, Joe Gwinn wrote:
>> On Fri, 1 Oct 2021 20:07:11 +0300, Dimiter_Popoff <dp@tgi-sci.com>
>> wrote:
>>
>> Umm. It's too late. Vanilla C is the universal assembler. That
>> actually was the original intent.
>
> Probably so, for my lifetime.
> The entire Roman empire spoke Latin -- and used Roman numerals....
> Survived for many centuries in spite of the Roman numerals.
> C won't live that long, but neither will I.
The Linux kernel is written in C, Linus does not want anything else in
it, and Linux is in everything from TVs (open source at that; my
Samsung, for example) to satellites to most embedded stuff (WiFi
modems, for example) and many, many other things. It <Unix> has become
like nuts and bolts; it won't go away for a long time.
On a sunny day (Sat, 2 Oct 2021 11:31:36 +1000) it happened Clifford Heath
<no.spam@please.net> wrote in
<16aa12a0e62982ea$1$3213637$e0ddea62@news.thecubenet.com>:

> On 1/10/21 10:46 pm, Jeroen Belleman wrote:
>> I still don't get why C++ had to add call by reference. Big
>> mistake, in my view.
>
> Because without references, you can't implement operator overloading
> with the same semantics as the built-ins.
C++ and operator overloading are a crime against humanity. Bjarne Stroustrup should have learned to program.