
new spice

Started by John Larkin September 28, 2021
On 2021-10-01, Dimiter_Popoff <dp@tgi-sci.com> wrote:
> On 10/2/2021 1:40, Lasse Langwadt Christensen wrote:
>> On Saturday, 2 October 2021 at 00.15.53 UTC+2, Dimiter Popoff wrote:
>>> On 10/2/2021 1:00, Gerhard Hoffmann wrote:
>>>> On 01.10.21 at 23:14, Dimiter_Popoff wrote:
>>>>
>>>>>>>> You may be able to design (or subcontract) a compatible device, by
>>>>>>>> then. Nothing to stop someone from altering the microcode in
>>>>>>>> an "x86" to make it emulate the PPC's instruction set. ("Nothing"
>>>>>>>> other than NDAs, etc.)
>>>>>>>
>>>>>>> Hopefully so. The clunkiest part in those little-endian cores they
>>>>>>> make is the fact they don't have an opcode to access memory as if
>>>>>>> they were big-endian (power can do both, and in vpa it is just
>>>>>>> move.size vs. mover.size). They do have the muxes and all needed,
>>>>>>> they must put only a little bit of extra logic to implement it;
>>>>>>> but they don't, for whatever reason.
>>>>>>
>>>>>> which of the various ways of arranging bytes in a word do you want?
>>>>>
>>>>> Just the natural one, highest byte first and so on down for
>>>>> as long as the word is.
That's not at all natural. When the Indians invented their numbering system
they wrote the ones digit first: the first digit you read, you know its
multiplier is one, the next has multiplier ten, and so on. The Arabs copied
this. We Europeans copied the layout, but as their writing runs in the
opposite direction to ours, we ended up writing the ones digit last. This is
perverse: you need to read the whole number before you can interpret any of
it. The later invention of decimal fractions would probably not have
happened without this perversion, so it's not all bad.

--
Jasen.
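What Dimiter is asking for is a one-instruction byte-reversed load. Done in
software it looks like the minimal sketch below, assuming a GCC/Clang-style
toolchain (the __builtin_bswap32 intrinsic). PowerPC gets the same effect in
hardware with lwbrx/stwbrx, and recent x86 parts with MOVBE; on other cores
the compiler typically emits a load plus a single bswap/rev for this pattern.

#include <cstdint>
#include <cstdio>
#include <cstring>

// Read a 32-bit big-endian value from memory on a little-endian core.
static uint32_t load_be32(const unsigned char *p) {
    uint32_t v;
    std::memcpy(&v, p, sizeof v);   // native-order load
    return __builtin_bswap32(v);    // reinterpret as big-endian
}

int main() {
    const unsigned char wire[4] = {0x12, 0x34, 0x56, 0x78};
    std::printf("%#x\n", load_be32(wire));   // prints 0x12345678
}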
Phil Hobbs wrote:
> Jan Panteltje wrote:
>> On a sunny day (Fri, 1 Oct 2021 09:05:31 -0400) it happened Phil Hobbs
>> <pcdhSpamMeSenseless@electrooptical.net> wrote in
>> <95332466-d835-f22c-a8f3-dfc5cd15d1a7@electrooptical.net>:
>>
>>> Jeroen Belleman wrote:
>>>> On 2021-10-01 14:11, Gerhard Hoffmann wrote:
>>>>> On 01.10.21 at 12:04, Jeroen Belleman wrote:
>>>>>
>>>>>> There's also Wirth's observation: "Software gets slower faster
>>>>>> than hardware gets faster."
>>>>>
>>>>> When asked how his name should be pronounced he said:
>>>>>
>>>>>    "You can call me by name: that's Wirth
>>>>>    or you can call me by value: that's worth"
>>>>>
>>>>> Cheers, Gerhard
>>>>
>>>> I still don't get why C++ had to add call by reference. Big
>>>> mistake, in my view.
>>>>
>>>> Jeroen Belleman
>>>
>>> Why? Smart pointers didn't exist in 1998 iirc, and reducing the number
>>> of bare pointers getting passed around has to be a good thing, surely?
>>
>> C++ is a crime against humanity.
>>
>> New languages are created almost daily because people are not willing
>> to learn about the hardware and what really happens.
>
> I sort of doubt that you or anyone on this group actually knows "what
> really happens" when a 2021-vintage compiler maps your source code onto
> 2010s or 2020s-vintage hardware. See Chisnall's classic 2018 paper,
> "C is not a low-level language. Your computer is not a fast PDP-11."
> <https://dl.acm.org/doi/abs/10.1145/3212477.3212479>
>
> It's far from a straightforward process.
It is quite far. Chisnall's not wrong, but he's right in a fundamentally
uninteresting way. How modern chips even branch is a mind-bogglingly complex
thing. Jim Keller did a Lex Fridman interview that's valuable enough to be
worth tolerating Lex for that long :)

https://www.youtube.com/watch?v=Nb2tebYAaOA

But a well-written (as in mostly-correct in a non-formal sense) C program
will chirp merrily along as long as the box has power. I have a Blu-ray
player that must be powered off now and again; the GC in its Java front end
eventually fails.

Almost all of programming is bent towards the demographics of "the
population of developers doubles every five years." Nobody really wants to
be a programmer any more. It is starting to look like mental health in that
domain is declining and becoming the pacing element.

<snip>
> Cheers
>
> Phil Hobbs
-- Les Cargill
jlarkin@highlandsniptechnology.com wrote:
> On Fri, 01 Oct 2021 20:20:25 +0200, Jeroen Belleman
> <jeroen@nospam.please> wrote:
>
>> On 2021-10-01 17:51, Phil Hobbs wrote:
<snip>
>>> I sort of doubt that you or anyone on this group actually knows "what
>>> really happens" when a 2021-vintage compiler maps your source code onto
>>> 2010s or 2020s-vintage hardware. See Chisnall's classic 2018 paper,
>>> "C is not a low-level language. Your computer is not a fast PDP-11."
>>> <https://dl.acm.org/doi/abs/10.1145/3212477.3212479>
>>>
>>> It's far from a straightforward process.
>>
>> He concludes with "C doesn't map to modern hardware very well".
>>
>> Given that a very large fraction of all software is written in
>> some dialect of C, one may wonder how this could happen.
>
> The way it's generally compiled, it's not secure. I/D/stack space are
> all tangled.
If you use the Green Hills tools, they're kept quite separate. I'm not
familiar with how (or even if) that's done in other toolchains.

The discipline necessary to write an informally-correct C program has other
advantages. Programs written in higher-level languages are closer to
software BOMs than programs. And there's nothing wrong with that.

-- Les Cargill
whit3rd wrote:
> On Friday, October 1, 2021 at 9:05:42 AM UTC-4, Phil Hobbs wrote:
>> Jeroen Belleman wrote:
>
>>> I still don't get why C++ had to add call by reference. Big
>>> mistake, in my view.
>
>> Why? Smart pointers didn't exist in 1998 iirc, and reducing the number
>> of bare pointers getting passed around has to be a good thing, surely?
>
> Apple's MacOS used 'handles' even back in the eighties.
>
> <https://en.wikipedia.org/wiki/Classic_Mac_OS_memory_management>
At its core, Unix/Linux is nothing but ioctl() calls, and the identity of
the thing being ioctl()-ed is an integer that is also a handle.

-- Les Cargill
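To make that concrete, a minimal sketch, assuming a Linux box with a
terminal on stdin: the handle is just the small integer 0, and ioctl() is
the catch-all verb for whatever kernel object happens to be bound to it.

#include <cstdio>
#include <sys/ioctl.h>   // ioctl(), TIOCGWINSZ, struct winsize
#include <unistd.h>      // STDIN_FILENO

int main() {
    // The process holds only the integer; the kernel knows what it names.
    winsize ws{};
    if (ioctl(STDIN_FILENO, TIOCGWINSZ, &ws) == 0)
        std::printf("terminal is %d cols x %d rows\n", ws.ws_col, ws.ws_row);
    else
        std::perror("ioctl(TIOCGWINSZ)");  // e.g. stdin is not a tty
}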
Clifford Heath wrote:
> On 2/10/21 5:50 pm, Jeroen Belleman wrote:
>> On 2021-10-02 03:31, Clifford Heath wrote:
>>> On 1/10/21 10:46 pm, Jeroen Belleman wrote:
>>>> I still don't get why C++ had to add call by reference. Big
>>>> mistake, in my view.
>>>
>>> Because without them, you can't implement operator overloading with
>>> the same semantics as the built-ins.
>>>
>>> CH
>>
>> You mean the same syntax, surely.
>
> No, unfortunately I mean semantics. For example, if "a" and "b" have
> different (but convertible) types, which is the type of "(a = b)"? There
> are lots of things like that - things that should only matter for folk
> implementing library classes, such as complex numbers. But they do matter.
>
>> Semantics of programmer-defined
>> operators are all over the place. It seemed like a good idea at the
>> time, but gets abused so much that it ends up being a liability.
>
> You won't find me disagreeing. Programmers are an undisciplined mob. But
> there was at least a fairly good rationale for references, and they get
> used well and correctly by folk who know what they are doing.
>
> The whole C++ language has spun completely out of control in the 15
> years since I used it every day. It's not that there aren't nice
> additions in the mix, but the result is truly awful.
>
> CH
Eh. "C with classes" plus "std:*" where "*" means {map,vector,string} is quite pleasant. Nobody seems to actually understand what "constexpr" actually means :) -- Les Cargill
On 10/2/2021 5:37 PM, Les Cargill wrote:
> Nobody really wants to be a programmer any more. It is starting
> to look like mental health in that domain is declining and becoming
> the pacing element.
Programmers are technicians. How many folks aspire to be technicians?
On 10/2/2021 5:46 PM, Les Cargill wrote:
> At its core, Unix/Linux is nothing but ioctl() calls, and the
> identity of the thing being ioctl()-ed is an integer that is also a
> handle.
OS-mediated handles are a powerful tool in structuring and enforcing
"interaction rules" in software.

You know the fd for stdin on process A (you being process B). Yet knowing
that "small integer" doesn't *give* you anything. You can't even sort out
the file mapped to it in process A. If you knew A had 15 fds active, you
couldn't even guess random integers with the hope of stumbling on *its*
actual files.

[Extend this and you can see how you can create completely disjoint
namespaces that isolate the named resources accessed by each process from
each other.]

Yet A can tell the OS to share that resource with you just by referencing
*its* fd value. You will eventually receive a new fd (via IPC) that grants
you access to it -- IF the original process decides to perform that sharing.
I.e., they determine *who* can access a resource.

Handles also let you restrict *how* a resource can be accessed. The resource
isn't exposed, so an intermediary (the OS and possibly agents) has to
"handle" (pun not intended) your access. So you can't do stuff that doesn't
make sense (like reading past the end of an object). And if you appear to be
doing something that you shouldn't (due to permissions *or* semantics), the
mediating agent knows you are *broken* -- or trying to subvert the system.
"Why are you trying to access that stack segment?"

But all this comes at a cost -- because the underlying hardware doesn't
(typically) have the mechanisms to do all of these things without software
assist.
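That fd-sharing step is worth seeing once. A minimal sketch of the sending
side, assuming Linux and an already-connected AF_UNIX socket; send_fd is an
illustrative name and error handling is mostly elided.

#include <cstring>
#include <sys/socket.h>

// Pass an open descriptor to a peer over an AF_UNIX socket. The kernel
// installs a *new* fd number in the receiver's table; the sender's
// integer value means nothing outside its own process.
bool send_fd(int sock, int fd_to_share) {
    char dummy = 'x';                 // must carry at least one data byte
    iovec iov{&dummy, 1};

    alignas(cmsghdr) char ctrl[CMSG_SPACE(sizeof(int))] = {};
    msghdr msg = {};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof ctrl;

    cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_RIGHTS;       // "rights" == descriptors
    cm->cmsg_len = CMSG_LEN(sizeof(int));
    std::memcpy(CMSG_DATA(cm), &fd_to_share, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1;
}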
On Sat, 2 Oct 2021 17:43:55 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

> On 01/10/21 19:20, Jeroen Belleman wrote:
>> On 2021-10-01 17:51, Phil Hobbs wrote:
<snip>
>>> I sort of doubt that you or anyone on this group actually knows "what
>>> really happens" when a 2021-vintage compiler maps your source code onto
>>> 2010s or 2020s-vintage hardware. See Chisnall's classic 2018 paper,
>>> "C is not a low-level language. Your computer is not a fast PDP-11."
>>> <https://dl.acm.org/doi/abs/10.1145/3212477.3212479>
>>>
>>> It's far from a straightforward process.
>>
>> He concludes with "C doesn't map to modern hardware very well".
>>
>> Given that a very large fraction of all software is written in
>> some dialect of C, one may wonder how this could happen.
>
> Easy: processors have advanced to include out of order
> and speculative execution, multiple levels of cache,
> non-uniform main memory, multicore processors. None
> of those were in the original concept of C, where only
> a single processor would mutate memory.
>
> Recent versions of C have /finally/ been forced to include
> a memory model. Java got that right a quarter of a /century/
> earlier.
>
> IMHO it remains to be seen how effective the C memory model
> is. Even when starting with a clean slate, the Java memory
> model had to be subtly tweaked about a decade after introduction.
The Java memory model is crippling in embedded realtime (ERT), where nobody
can tolerate random hard blinks while the garbage collector (GC) runs and
nothing else does. This was a central problem in the efforts to develop
Realtime Java in the 1990s. I don't recall that RT Java succeeded; it
vanished from ERT for sure.

In ERT, memory allocation and release is part of the software design, and
the built-in allocator (like malloc) is used only at startup to grab a big
memory area, which is then directly managed by mission code. This is true in
both C and C++. Typical schemes are the buddy system and queue-based
fixed-block allocators, none of which ever have GC blinks, slow down as
memory is obligated, or crash if memory is exhausted.

Joe Gwinn
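A queue-based fixed-block allocator of the kind Joe describes fits on a
page. A minimal single-threaded sketch, with illustrative names (BlockPool,
arena) rather than any particular library: alloc() and release() are O(1),
there is no fragmentation, and there is nothing to pause for.

#include <cstddef>

class BlockPool {
    union Block {
        Block *next;
        alignas(std::max_align_t) unsigned char payload[64];
    };
    Block *free_ = nullptr;
public:
    // Carve one big region into equal blocks threaded onto a free list.
    BlockPool(void *region, std::size_t bytes) {
        Block *blocks = static_cast<Block *>(region);
        for (std::size_t i = 0; i < bytes / sizeof(Block); ++i) {
            blocks[i].next = free_;          // push onto the free list
            free_ = &blocks[i];
        }
    }
    void *alloc() {                          // O(1), never scans, never blocks
        if (!free_) return nullptr;          // exhaustion is a design error here
        Block *b = free_;
        free_ = b->next;
        return b->payload;
    }
    void release(void *p) {                  // O(1)
        Block *b = static_cast<Block *>(p);
        b->next = free_;
        free_ = b;
    }
};

int main() {
    alignas(std::max_align_t) static unsigned char arena[64 * 1024];
    BlockPool pool(arena, sizeof arena);     // "grab a big area" once, at startup
    void *msg = pool.alloc();
    pool.release(msg);
}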
On Sat, 2 Oct 2021 17:43:55 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

> On 01/10/21 19:20, Jeroen Belleman wrote:
>> On 2021-10-01 17:51, Phil Hobbs wrote:
<snip>
>> He concludes with "C doesn't map to modern hardware very well".
>>
>> Given that a very large fraction of all software is written in
>> some dialect of C, one may wonder how this could happen.
>
> Easy: processors have advanced to include out of order
> and speculative execution, multiple levels of cache,
> non-uniform main memory, multicore processors. None
> of those were in the original concept of C, where only
> a single processor would mutate memory.
<snip>
The very concept of programmer-level "unchecked buffer" is astounding.

Microsoft: when in doubt, execute it.

--
Father Brown's figure remained quite dark and still; but in that instant he
had lost his head. His head was always most valuable when he had lost it.
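For anyone who hasn't met one: an "unchecked buffer" is nothing more than
the deliberately broken sketch below. Anything past 15 characters lands on
whatever follows buf on the stack -- classically the saved return address,
which is exactly how "data" ends up being executed.

#include <cstring>

void parse(const char *input) {
    char buf[16];
    std::strcpy(buf, input);   // the unchecked copy: no length limit at all
    // the checked spelling: std::snprintf(buf, sizeof buf, "%s", input);
}

int main() {
    parse("short and safe");
    // parse("anything much longer than sixteen characters") -- undefined behaviour
}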
On 03/10/21 20:58, Joe Gwinn wrote:
> On Sat, 2 Oct 2021 17:43:55 +0100, Tom Gardner
> <spamjunk@blueyonder.co.uk> wrote:
>
>> On 01/10/21 19:20, Jeroen Belleman wrote:
<snip>
>> Recent versions of C have /finally/ been forced to include
>> a memory model. Java got that right a quarter of a /century/
>> earlier.
>>
>> IMHO it remains to be seen how effective the C memory model
>> is. Even when starting with a clean slate, the Java memory
>> model had to be subtly tweaked about a decade after introduction.
>
> The Java memory model is crippling in embedded realtime (ERT), where
> nobody can tolerate random hard blinks while the garbage collector (GC)
> runs and nothing else does. This was a central problem in the efforts
> to develop Realtime Java in the 1990s. I don't recall that RT Java
> succeeded; it vanished from ERT for sure.
You are confusing the JVM (the virtual machine) with the JMM (the memory
model); they are independent. There is only one JMM, by definition, but
there are many JVMs. Java is indeed unsuitable for embedded hard-realtime
systems, but that is a property of garbage-collected JVM implementations,
not of the Java Memory Model.
> In ERT, memory allocation and release is part of the software design,
> and the built-in allocator (like malloc) is used only at startup to
> grab a big memory area, which is then directly managed by mission
> code. This is true in both C and C++. Typical schemes are the buddy
> system and queue-based fixed-block allocators, none of which ever
> have GC blinks, slow down as memory is obligated, or crash if
> memory is exhausted.
True, but irrelevant w.r.t. a memory model. That doesn't change the point I
was making about the need for a memory model - which C has only very
belatedly recognised.
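What a memory model actually buys you is easiest to see with the
release/acquire idiom. A minimal C++11 sketch; C gained the equivalent
facility, <stdatomic.h>, only in C11 -- which is the belated recognition in
question.

#include <atomic>
#include <thread>

int payload = 0;
std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                  // plain write...
    ready.store(true, std::memory_order_release);  // ...published by the release
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {}  // pairs with the release
    // The memory model guarantees payload == 42 is visible here. With a
    // plain bool instead of an atomic this program has a data race: the
    // compiler and the out-of-order hardware are both free to break it.
    int v = payload;
    (void)v;
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}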