new spice

Started by John Larkin September 28, 2021
On 2021-10-02 03:31, Clifford Heath wrote:
> On 1/10/21 10:46 pm, Jeroen Belleman wrote:
>> I still don't get why C++ had to add call by reference. Big
>> mistake, in my view.
>
> Because without them, you can't implement operator overloading with
> the same semantics as the built-ins.
>
> CH
You mean the same syntax, surely. Semantics of programmer-defined operators are all over the place. It seemed like a good idea at the time, but gets abused so much that it ends up being a liability.

Jeroen Belleman
On 2021-10-02 03:36, Clifford Heath wrote:
> On 2/10/21 4:20 am, Jeroen Belleman wrote:
>> On 2021-10-01 17:51, Phil Hobbs wrote:
>>> See Chisnall's classic 2018 paper, "C is not a low-level
>>> language. Your computer is not a fast PDP-11."
>>> <https://dl.acm.org/doi/abs/10.1145/3212477.3212479>
>>>
>>> It's far from a straightforward process.
>>
>> He concludes with "C doesn't map to modern hardware very well".
>>
>> Given that a very large fraction of all software is written in some
>> dialect of C, one may wonder how this could happen.
>
> C did not envisage massive pipelining and caches for memory that is
> several orders of magnitude slower than the CPUs, so there was no
> real necessity to avoid aliasing and other things that make it
> difficult for a compiler to re-order operations.
>
> CH
You don't think processors should be designed to the needs of the code? I don't want to be bothered by the irrelevant details of system architecture. Why should I have to consider the memory hierarchy or pipelining gotchas when programming in C, or any HLL?

Jeroen Belleman
On 02.10.21 at 03:47, Don Y wrote:
> On 10/1/2021 5:27 PM, Dimiter_Popoff wrote:
>>> The wider public effectively selects what *we* can/will use.
>>> Companies that can't produce in quantity can't afford to support
>>> their products.
>>>
>>> And, those products go away. Witness how Zilog disappeared,
>>> despite having the king of the 8-bitters (yet failed to evolve
>>> to anything bigger/wider).
Oh, the Z8000 was a really good 16-bit machine. We had a large room at the university that we used to call the zoo, with about every CPU available. The professor's pet project was to have Tanenbaum's Experimental Machine running on each one; that was some kind of p-code engine. The Z8000 held its own against everyone else. The 32-bit Z80000 simply came too late, also because AMD promised the moon and failed to deliver. By the time they shouted "Me too", the 32-bit cake was already eaten and Intel had the slogan "Intel Delivers!"
>>> So, "merit" doesn't figure into the calculation. >> >> Like you said in another post here, >> "The Market didn't select for the "best" -- but *did* "select". " >> Of course I know how life works. But I am not a marketeer, and >> I would have felt to have wasted my life had I chosen to join >> the herd as herded by the marketeers. It has been anything but >> an easy ride but I don't regret a minute of it. > > I am thankful never to have had to deal with the hacks > in the 80x86 family of devices.&nbsp; Thankfully, they were > never cost-effective for any of my designs.
Oh, it could be fun.

< https://www.flickr.com/photos/137684711@N07/50651112722/in/dateposted-public/ >

I didn't even forget the Transputer link. :-)

And Intel had solutions for everything around the CPU: dual-ported DRAM controller, cache, Multibus II (which is the grandpa of PCI), DMA/timer.
> Likewise, glad to never have designed a *product* under
> any of the OSs that run on it. <shudder>
We had QNX, that was quite OK.

Gerhard
On 10/2/2021 1:57 AM, Gerhard Hoffmann wrote:
> On 02.10.21 at 03:47, Don Y wrote:
>> On 10/1/2021 5:27 PM, Dimiter_Popoff wrote:
>>>> The wider public effectively selects what *we* can/will use.
>>>> Companies that can't produce in quantity can't afford to support
>>>> their products.
>>>>
>>>> And, those products go away. Witness how Zilog disappeared,
>>>> despite having the king of the 8-bitters (yet failed to evolve
>>>> to anything bigger/wider).
>
> Oh, the Z8000 was a really good 16-bit machine. We had a large room
> at the university that we used to call the zoo, with about every CPU
> available. The professor's pet project was to have Tanenbaum's
> Experimental Machine running on each one; that was some kind of
> p-code engine. The Z8000 held its own against everyone else.
> The 32-bit Z80000 simply came too late, also because AMD promised
> the moon and failed to deliver. By the time they shouted "Me too",
> the 32-bit cake was already eaten and Intel had the slogan
> "Intel Delivers!"
There were lots of "interesting" machines -- esp when you looked at their I/Os (native instruction set would be hidden by a HLL). But, too many just "didn't make the cut". Not all of which could be attributable to bad marketing, performance/delivery issues, etc. The 99K was an interesting approach (in hindsight, exactly *wrong*!). As were some of the Z[123]80 devices -- esp when considering leveraging existing codebases (I did quite a lot with 180's which taught me that there's a huge set of applications that need "big code" but don't need "wide data"). 65816, etc. There were several 68K parts (specialized I/O and instruction sets) that never made it into production. TI had some novel graphics processors that could have been revolutionary -- save for their architectural limitations.
>>>> So, "merit" doesn't figure into the calculation. >>> >>> Like you said in another post here, >>> "The Market didn't select for the "best" -- but *did* "select". " >>> Of course I know how life works. But I am not a marketeer, and >>> I would have felt to have wasted my life had I chosen to join >>> the herd as herded by the marketeers. It has been anything but >>> an easy ride but I don't regret a minute of it. >> >> I am thankful never to have had to deal with the hacks >> in the 80x86 family of devices. Thankfully, they were >> never cost-effective for any of my designs. > > Oh, it could be fun. > < https://www.flickr.com/photos/137684711@N07/50651112722/in/dateposted-public/ > > > > I didn't even forget the Transputer link. :-) > > And Intel had solutions for everything around the CPU: > Dual ported DRAM controller, cache, MultibusII which is the > prandpa of PCI, DMA/Timer
Intel was expensive (relatively speaking). I'd looked into using the MPC on a design and realized I'd lose half my hardware budget on just *that*! :< "Time to be creative!" Counter/timers have become less capable, over time (remember the 9513?). And, for low-cost I/O-DAS hardware, there's nothing that beats a counter/timer in terms of cost, complexity, ease of interface, etc.
>> Likewise, glad to never have designed a *product* under
>> any of the OSs that run on it. <shudder>
>
> We had QNX, that was quite OK.
Much too heavyweight. We'd write very "slim" OSs that imposed little-to-no overhead on the system -- because overhead meant (hardware) cost. Hardware (DM+DL) was ALWAYS the driving force.

[I designed a box with a bank of x1 DRAM devices. It could be populated with any combination of 16K or 64K devices. The software would detect the configuration and essentially treat it as a large bit-array (onto which it mapped a byte-wide presentation layer). I.e., the cost of accumulating bits into bytes was less than the cost of universally using 64Kb devices. This "made sense" because the memory wasn't used heavily but needed to be relatively large. Remember 32Kx1 DRAMs?? :> ]
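A minimal sketch of the kind of byte-wide "presentation layer" over a bit array described in the bracketed aside above. Nothing here is from the original design: the names, sizes, and the in-memory simulation of the bit store are invented for illustration; in the real box the two low-level helpers would poke the hardware instead.

// Hypothetical: a byte-wide view over storage that is logically just one
// big array of bits (as with a bank of x1 DRAMs). The bit store is
// simulated here with a vector<bool> so the sketch is self-contained.
#include <cstdint>
#include <cstdio>
#include <vector>

static std::vector<bool> bit_store(64 * 1024);   // made-up size

static bool bit_read(std::size_t bit_addr)          { return bit_store[bit_addr]; }
static void bit_write(std::size_t bit_addr, bool v) { bit_store[bit_addr] = v; }

// Assemble 8 consecutive bits into one byte (LSB first).
static std::uint8_t read_byte(std::size_t byte_addr) {
    std::uint8_t value = 0;
    for (int i = 0; i < 8; ++i)
        if (bit_read(byte_addr * 8 + i))
            value |= std::uint8_t(1u << i);
    return value;
}

// Scatter one byte back out as 8 individual bit writes.
static void write_byte(std::size_t byte_addr, std::uint8_t value) {
    for (int i = 0; i < 8; ++i)
        bit_write(byte_addr * 8 + i, (value >> i) & 1u);
}

int main() {
    write_byte(42, 0xA5);
    std::printf("read back: 0x%02X\n", read_byte(42));   // prints 0xA5
    return 0;
}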
On 2/10/21 5:50 pm, Jeroen Belleman wrote:
> On 2021-10-02 03:31, Clifford Heath wrote:
>> On 1/10/21 10:46 pm, Jeroen Belleman wrote:
>>> I still don't get why C++ had to add call by reference. Big
>>> mistake, in my view.
>>
>> Because without them, you can't implement operator overloading with
>> the same semantics as the built-ins.
>>
>> CH
>
> You mean the same syntax, surely.
No, unfortunately I mean semantics. For example, if "a" and "b" have different (but convertible) types, which is the type of "(a = b)"? There are lots of things like that - things that should only matter for folk implementing library classes, such as complex numbers. But they do matter.
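A minimal sketch of what's at stake; the class and its names are invented for illustration. For built-in ints, (a = b) is an lvalue referring to a, and a user-defined type can only reproduce that semantics if its operator= returns a reference.

// Hypothetical illustration: assignment on a user-defined type mimics the
// built-in semantics (the result refers to the assigned-to object) only
// because operator= returns a reference rather than a value.
#include <cstdio>

struct Fixed {                              // toy fixed-point-ish wrapper
    int raw = 0;
    Fixed() = default;
    Fixed(double d) : raw(int(d * 256)) {}  // convertible from double
    Fixed& operator=(const Fixed& rhs) {    // returns *this by reference
        raw = rhs.raw;
        return *this;
    }
};

int main() {
    int   x = 0, y = 0, z = 7;
    Fixed a,     b,     c(7.0);

    x = y = z;        // built-in: (y = z) is an lvalue of type int
    a = b = c;        // user-defined: chains just like the built-in
    (a = b) = c;      // modifies a (as it would for ints) only because
                      // operator= returns Fixed&; a by-value return would
                      // hand back a temporary instead

    std::printf("%d %d\n", x, a.raw);   // 7 1792
    return 0;
}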
> Semantics of programmer-defined
> operators are all over the place. It seemed like a good idea at the
> time, but gets abused so much that it ends up being a liability.
You won't find me disagreeing. Programmers are an undisciplined mob. But there was at least a fairly good rationale for references, and they get used well and correctly by folk who know what they are doing.

The whole C++ language has spun completely out of control in the 15 years since I used it every day. It's not that there aren't nice additions in the mix, but the result is truly awful.

CH
On 2/10/21 5:57 pm, Jeroen Belleman wrote:
> On 2021-10-02 03:36, Clifford Heath wrote:
>> On 2/10/21 4:20 am, Jeroen Belleman wrote:
>>> On 2021-10-01 17:51, Phil Hobbs wrote:
>>>> See Chisnall's classic 2018 paper, "C is not a low-level
>>>> language. Your computer is not a fast PDP-11."
>>>> <https://dl.acm.org/doi/abs/10.1145/3212477.3212479>
>>>>
>>>> It's far from a straightforward process.
>>>
>>> He concludes with "C doesn't map to modern hardware very well".
>>>
>>> Given that a very large fraction of all software is written in some
>>> dialect of C, one may wonder how this could happen.
>>
>> C did not envisage massive pipelining and caches for memory that is
>> several orders of magnitude slower than the CPUs, so there was no
>> real necessity to avoid aliasing and other things that make it
>> difficult for a compiler to re-order operations.
>>
>> CH
>
> You don't think processors should be designed to the needs
> of the code?
The processors had to evolve unforeseen features to take advantage of on-chip speeds radically outpacing off-chip speeds. Many of those features are incompatible with languages like C, because those languages create intractable problems for compilers. There are other languages such as Haskell which do not pose such problems. They're more like mathematics and less like train timetables, so programmers need to be taught to think in mathematics instead of logistics. And of course all the ancient experts in logistics are gonna scream about that. I'm one - but I have partially made the transition instead of screaming about it.
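To make the "intractable problems" concrete, the canonical one is the pointer aliasing mentioned in the quote above. A minimal sketch, with a function invented purely for illustration:

// Hypothetical illustration of the aliasing problem. Because 'out' might
// point at the same double as 'scale', a conforming compiler must generally
// reload *scale on every iteration rather than hoist it into a register,
// and it cannot freely reorder or vectorize the stores without checks.
#include <cstddef>

void scale_all(double* out, const double* in, const double* scale, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * *scale;   // *scale re-read each time: out may alias scale
}

// The same loop with the possible aliasing removed by hand: copy *scale
// into a local first, so the compiler can keep it in a register.
// (C has 'restrict' for this; many C++ compilers offer __restrict as a
// non-standard extension.)
void scale_all_noalias(double* out, const double* in, const double* scale, std::size_t n) {
    const double k = *scale;
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * k;
}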
> I don't want to be bothered by the irrelevant details of
> system architecture.
And yet that is exactly what you have to think about, when you choose a language for which a compiler cannot make those details vanish. And C is such a language.

CH
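A minimal, self-contained sketch of one such detail leaking through; the array size is made up for illustration. The two loops below do identical arithmetic and differ only in memory-access pattern, yet on cached hardware they typically run at very different speeds.

// Hypothetical demo: sum a matrix row-by-row vs. column-by-column.
// Same number of operations; only the access pattern differs, which the
// cache hierarchy rewards or punishes. Compile with optimization and time it.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const int N = 4096;
    std::vector<double> m(static_cast<std::size_t>(N) * N, 1.0);

    auto time_sum = [&](bool row_major) {
        auto t0 = std::chrono::steady_clock::now();
        double sum = 0.0;
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                sum += row_major ? m[std::size_t(i) * N + j]   // unit stride
                                 : m[std::size_t(j) * N + i];  // stride N
        auto t1 = std::chrono::steady_clock::now();
        long long ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
        std::printf("sum=%.0f  %s: %lld ms\n", sum,
                    row_major ? "row-major   " : "column-major", ms);
    };

    time_sum(true);    // typically several times faster...
    time_sum(false);   // ...than this, purely due to the memory hierarchy
    return 0;
}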
On 01/10/2021 22:44, Jeroen Belleman wrote:
> On 2021-10-01 12:56, Don Y wrote:
>> On 10/1/2021 3:04 AM, Jeroen Belleman wrote:
>>> There's also Wirth's observation: "Software gets slower faster
>>> than hardware gets faster."
>>
>> You've got to have a bit of sympathy for folks who write desktop
>> applications; they have virtually no control over the environment
>> in which their code executes.
>>
> [Snip!]
>
> Just last week, I rebooted a Linux machine that I set up
> 20 years ago, and which had been sleeping in the attic.
> It's actually much snappier than my newest machine.
>
> Software *is* getting more and more bloated, and I don't
> really have the impression that the functionality is
> there to justify it. Mostly we are irritated by the addition
> of pointless animations and snazzy effects. There is
> something rotten in the state of modern software.
A big part of the problem is that software developers have much newer computers than a large proportion of their users. This makes them not notice when they have implemented something in a needlessly inefficient manner, whereas if they had a slower computer they would notice that an algorithm was inefficient as soon as they ran it, and would be incentivised to fix it at a time when they still remembered what it was supposed to do.

Ideally there would be some way to make sure that software developers have slower computers than, say, 80% of users. Given some incentive, I'm sure they would still be able to make the software do what it needs to on their machines, and it would run really nicely for the users. I don't know how to bring about that situation.
On 10/2/2021 5:29 AM, Chris Jones wrote:
> A big part of the problem is that software developers have much newer computers
> than a large proportion of their users. This makes them not notice when they
> have implemented something in a needlessly inefficient manner, whereas if they
> had a slower computer they would notice that an algorithm was inefficient as
> soon as they ran it, and would be incentivised to fix it at a time when they
> still remembered what it was supposed to do.
"Bloat" doesn't just pertain to execution speed, load time, etc. It also applies to the complexity of an app -- often above what is necessary *in* that app. Should I be able to draw illustrations *in* my word processor? (the document will likely have illustrations *in* it so what better place to create them, eh?) Should I be able to design typefaces there, as well? I can embed animations in a PDF. Should I be able to *create* them while in (e.g.) Acrobat? In the windows world, there seems to be a tendency to put everything you might need for a particular task into the "app" that targets that task. Instead of developing different apps oriented towards those specific subtasks.
> Ideally there would be some way to make sure that software developers have
> slower computers than, say, 80% of users. Given some incentive, I'm sure they
> would still be able to make the software do what it needs to on their machines,
> and it would run really nicely for the users. I don't know how to bring about
> that situation.
How do you quantify the resources available to those "80% users"? I've seen folks with little 8GB boxes running 20 apps at the same time. Should the OS ensure that you can never run in a configuration above some particular load factor? IME, developers tend to be focused on what they are "developing", not watching YouTube videos, browsing the web, checking mail, AND "developing". I may have dozens of windows open -- but, most are just idling, not actually doing anything (that consumes resources beyond memory).
On Fri, 01 Oct 2021 20:20:25 +0200, Jeroen Belleman
<jeroen@nospam.please> wrote:

>On 2021-10-01 17:51, Phil Hobbs wrote:
>> Jan Panteltje wrote:
>>> On a sunny day (Fri, 1 Oct 2021 09:05:31 -0400) it happened Phil Hobbs
>>> <pcdhSpamMeSenseless@electrooptical.net> wrote in
>>> <95332466-d835-f22c-a8f3-dfc5cd15d1a7@electrooptical.net>:
>>>
>>>> Jeroen Belleman wrote:
>>>>> On 2021-10-01 14:11, Gerhard Hoffmann wrote:
>>>>>> On 01.10.21 at 12:04, Jeroen Belleman wrote:
>>>>>>
>>>>>>> There's also Wirth's observation: "Software gets slower faster
>>>>>>> than hardware gets faster."
>>>>>>
>>>>>> When asked how his name should be pronounced he said:
>>>>>>
>>>>>>    "You can call me by name: that's Wirth
>>>>>>    or you can call me by value: that's worth"
>>>>>>
>>>>>> Cheers, Gerhard
>>>>>
>>>>> I still don't get why C++ had to add call by reference. Big
>>>>> mistake, in my view.
>>>>>
>>>>> Jeroen Belleman
>>>>
>>>> Why? Smart pointers didn't exist in 1998 iirc, and reducing the number
>>>> of bare pointers getting passed around has to be a good thing, surely?
>>>
>>> C++ is a crime against humanity.
>>>
>>> New languages are created almost daily because people are not willing
>>> to learn about the hardware and what really happens.
>>
>> I sort of doubt that you or anyone on this group actually knows "what
>> really happens" when a 2021-vintage compiler maps your source code onto
>> 2010s or 2020s-vintage hardware. See Chisnall's classic 2018 paper,
>> "C is not a low-level language. Your computer is not a fast PDP-11."
>> <https://dl.acm.org/doi/abs/10.1145/3212477.3212479>
>>
>> It's far from a straightforward process.
>>
>
>He concludes with "C doesn't map to modern hardware very well".
>
>Given that a very large fraction of all software is written in
>some dialect of C, one may wonder how this could happen.
The way it's generally compiled, it's not secure. I/D/stack space are all tangled.

--
Father Brown's figure remained quite dark and still; but in that instant he had lost his head. His head was always most valuable when he had lost it.
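A minimal, hypothetical illustration of the "all tangled" point above: on a conventional C/C++ implementation, instructions, static data, and the stack all live in one flat address space, reachable through ordinary pointers. The names below are invented for the demo.

// Hypothetical demo: code (I), static data (D), and the stack share one
// address space, so in principle a stray pointer can reach any of them.
// (Read-only text pages, W^X, etc. mitigate this; the language model itself
// offers no separation.)
#include <cstdio>

static int a_static = 42;          // "D space"

void some_function() {}            // "I space"

int main() {
    int a_local = 7;               // stack

    // Function-pointer-to-void* is implementation-defined but works on
    // common flat-address-space targets.
    std::printf("code   : %p\n", reinterpret_cast<void*>(&some_function));
    std::printf("static : %p\n", static_cast<void*>(&a_static));
    std::printf("stack  : %p\n", static_cast<void*>(&a_local));
    // All three are plain addresses in the same space; once in a void*,
    // nothing in the language distinguishes them.
    return 0;
}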
On 01/10/21 19:20, Jeroen Belleman wrote:
> On 2021-10-01 17:51, Phil Hobbs wrote:
>> Jan Panteltje wrote:
>>> On a sunny day (Fri, 1 Oct 2021 09:05:31 -0400) it happened Phil Hobbs
>>> <pcdhSpamMeSenseless@electrooptical.net> wrote in
>>> <95332466-d835-f22c-a8f3-dfc5cd15d1a7@electrooptical.net>:
>>>
>>>> Jeroen Belleman wrote:
>>>>> On 2021-10-01 14:11, Gerhard Hoffmann wrote:
>>>>>> On 01.10.21 at 12:04, Jeroen Belleman wrote:
>>>>>>
>>>>>>> There's also Wirth's observation: "Software gets slower faster
>>>>>>> than hardware gets faster."
>>>>>>
>>>>>> When asked how his name should be pronounced he said:
>>>>>>
>>>>>>    "You can call me by name: that's Wirth
>>>>>>    or you can call me by value: that's worth"
>>>>>>
>>>>>> Cheers, Gerhard
>>>>>
>>>>> I still don't get why C++ had to add call by reference. Big
>>>>> mistake, in my view.
>>>>>
>>>>> Jeroen Belleman
>>>>
>>>> Why? Smart pointers didn't exist in 1998 iirc, and reducing the number
>>>> of bare pointers getting passed around has to be a good thing, surely?
>>>
>>> C++ is a crime against humanity.
>>>
>>> New languages are created almost daily because people are not willing to
>>> learn about the hardware and what really happens.
>>
>> I sort of doubt that you or anyone on this group actually knows "what really
>> happens" when a 2021-vintage compiler maps your source code onto 2010s or
>> 2020s-vintage hardware. See Chisnall's classic 2018 paper,
>> "C is not a low-level language. Your computer is not a fast PDP-11."
>> <https://dl.acm.org/doi/abs/10.1145/3212477.3212479>
>>
>> It's far from a straightforward process.
>>
>
> He concludes with "C doesn't map to modern hardware very well".
>
> Given that a very large fraction of all software is written in
> some dialect of C, one may wonder how this could happen.
Easy: processors have advanced to include out-of-order and speculative execution, multiple levels of cache, non-uniform main memory, multicore processors. None of those were in the original concept of C, where only a single processor would mutate memory.

Recent versions of C have /finally/ been forced to include a memory model. Java got that right a quarter of a /century/ earlier. IMHO it remains to be seen how effective the C memory model is. Even when starting with a clean slate, the Java memory model had to be subtly tweaked about a decade after introduction.
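A minimal sketch of the sort of thing those memory models pin down; the variable names are invented for illustration, and C11's <stdatomic.h> offers essentially the same facilities as the C++ atomics used here. One thread publishes ordinary data behind a release store; the other waits on an acquire load and is then guaranteed to see the data.

// Hypothetical sketch of what a language-level memory model buys you:
// without the ordering guarantees on 'ready', the compiler and the CPU
// would both be free to reorder the plain store to 'payload' past the
// flag, and the reader could observe ready==true with a stale payload.
#include <atomic>
#include <cstdio>
#include <thread>

static int payload = 0;                 // ordinary, non-atomic data
static std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                   // 1: write the data
    ready.store(true, std::memory_order_release);   // 2: publish it
}

void consumer() {
    while (!ready.load(std::memory_order_acquire))  // wait for the publish
        ;                                           // (busy-wait; fine for a demo)
    std::printf("%d\n", payload);                   // guaranteed to print 42
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
    return 0;
}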