
new spice

Started by John Larkin September 28, 2021
On 10/3/2021 4:35 PM, Tom Gardner wrote:
> It that doesn't change the point I was making about the > need for a memory model - which C has only very belatedly > recognised.
I think you're overstating the problem. I.e., code written "years ago" still manages to run on "more modern" hardware. If the need for the memory model came about because of changes/advances in hardware, then code compiled before such a model existed would "suffer" when running on that modern hardware. (i.e., the "old assumptions" still allow functional code despite hardware advances)
On 04/10/21 02:07, Don Y wrote:
> On 10/3/2021 4:35 PM, Tom Gardner wrote: >> It that doesn't change the point I was making about the >> need for a memory model - which C has only very belatedly >> recognised. > > I think you're overstating the problem. > > I.e., code written "years ago" still manages to run on > "more modern" hardware.  If the need for the memory > model came about because of changes/advances in hardware, > then code compiled before such a model existed would > "suffer" when running on that modern hardware. > > (i.e., the "old assumptions" still allow functional code > despite hardware advances)
By that reasoning, old code would suffer if recompiled with a "new" C compiler incorporating the C memory model. Well-written old code should continue to work. Poorly written code may indeed suffer where the inbuilt presumptions are invalidated by new hardware. The language lawyers would say that is correct behaviour. If by "more modern hardware" you mean newer versions of x86 architecture, then you would need to take into account that they take extraordinary efforts to ensure previous code continues to work. They are largely but not entirely successful. If by "more modern hardware" you mean a different architecture or an architecture where that is not a design objective, e.g. ARM and/or embedded, then old code may indeed suffer.
On 10/4/2021 12:46 AM, Tom Gardner wrote:
> On 04/10/21 02:07, Don Y wrote: >> On 10/3/2021 4:35 PM, Tom Gardner wrote: >>> It that doesn't change the point I was making about the >>> need for a memory model - which C has only very belatedly >>> recognised. >> >> I think you're overstating the problem. >> >> I.e., code written "years ago" still manages to run on >> "more modern" hardware. If the need for the memory >> model came about because of changes/advances in hardware, >> then code compiled before such a model existed would >> "suffer" when running on that modern hardware. >> >> (i.e., the "old assumptions" still allow functional code >> despite hardware advances) > > By that reasoning, old code would suffer if recompiled > with a "new" C compiler incorporating the C memory model.
That doesn't necessarily follow. But, if C was so hopelessly flawed before adopting a model, then code compiled before such a model existed would surely not perform correctly?
> Well-written old code should continue to work. Poorly > written code may indeed suffer where the inbuilt presumptions > are invalidated by new hardware. The language lawyers > would say that is correct behaviour.
It is. If you've relied on something that wasn't nailed down ("unspecified behavior") then you may have "got lucky". You could just as easily have been UNlucky! But, presumably, if you'd been, you would have noticed and done something about it.
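For concreteness, a minimal sketch of "relying on something that wasn't nailed down" (a hypothetical illustration, not code from the thread): the order in which the two argument expressions below are evaluated is unspecified in C, so a program that depends on f() running before g() may "get lucky" with one compiler and quietly change behaviour with another.

/* Hypothetical illustration: unspecified order of evaluation.
 * Both f() and g() are called, but C does not say which runs first,
 * so the printed result may differ between compilers or options. */
#include <stdio.h>

static int counter = 0;

static int f(void) { return ++counter; }     /* the call "expected" to run first  */
static int g(void) { return counter * 10; }  /* the call "expected" to run second */

int main(void)
{
    /* May print "1 10" (f first) or "1 0" (g first); both are conforming. */
    printf("%d %d\n", f(), g());
    return 0;
}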
> If by "more modern hardware" you mean newer versions of > x86 architecture, then you would need to take into account > that they take extraordinary efforts to ensure previous > code continues to work. They are largely but not entirely > successful.
Yet, the compiler made no such assumptions, at the time the code was originally compiled! The resulting binaries *were* correct and continue to be correct. The more detailed the model of the execution environment, the more you can (correctly) *exploit* capabilities of newer devices. Writing for SMP 40 years ago would be a crap shoot. Today, writing to exploit multiple concurrently executing cores is possible -- for folks skilled in the art. For folks NOT so skilled, you write as if running on one core and live with the performance constraints that imposes on your code's execution. Or, hope for a magical compiler that can mind-read! The point is, there are countless applications (embedded, RT and desktop) that were written in C before any of these issues were formalized (or even recognized). So, that's got to say something for the power of the language and people to *use* it.
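As a concrete (hypothetical) sketch of what "writing to exploit multiple concurrently executing cores" looks like once the language has a memory model: with C11's <stdatomic.h> the required ordering is stated in the source instead of being left to whatever the compiler and cache happen to do. The names here are invented for illustration; build with something like cc -std=c11 -O2 -pthread.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int payload;        /* ordinary data handed from one core to another */
static atomic_int ready;   /* publication flag with defined ordering        */

static void *producer(void *arg)
{
    (void)arg;
    payload = 42;                                             /* write the data  */
    atomic_store_explicit(&ready, 1, memory_order_release);   /* then publish it */
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                               /* spin until the producer has published */
    printf("payload = %d\n", payload);  /* release/acquire pairing guarantees 42 */
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}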
> If by "more modern hardware" you mean a different architecture > or an architecture where that is not a design objective, > e.g. ARM and/or embedded, then old code may indeed suffer
On 04/10/21 09:27, Don Y wrote:
> On 10/4/2021 12:46 AM, Tom Gardner wrote: >> On 04/10/21 02:07, Don Y wrote: >>> On 10/3/2021 4:35 PM, Tom Gardner wrote: >>>> It that doesn't change the point I was making about the >>>> need for a memory model - which C has only very belatedly >>>> recognised. >>> >>> I think you're overstating the problem. >>> >>> I.e., code written "years ago" still manages to run on >>> "more modern" hardware.  If the need for the memory >>> model came about because of changes/advances in hardware, >>> then code compiled before such a model existed would >>> "suffer" when running on that modern hardware. >>> >>> (i.e., the "old assumptions" still allow functional code >>> despite hardware advances) >> >> By that reasoning, old code would suffer if recompiled >> with a "new" C compiler incorporating the C memory model. > > That doesn't necessarily follow. > > But, if C was so hopelessly flawed before adopting a model, > then code compiled before such a model existed would surely > not perform correctly?
C wasn't hopelessly flawed; it is just that processors aren't PDP-11s any more.
>> Well-written old code should continue to work. Poorly >> written code may indeed suffer where the inbuilt presumptions >> are invalidated by new hardware. The language lawyers >> would say that is correct behaviour. > > It is.  If you've relied on something that wasn't nailed > down ("unspecified behavior") then you may have "got lucky". > You could just as easily have been UNlucky!  But, presumably, > if you'd been, you would have noticed and done something > about it.
Trying to spot unreproducible errors due to races between processors, caches and the like is non-trivial and not deterministic. Most C programmers didn't even realise that it was impossible to build a threading library in C. To do so you had to rely on the behaviour of a particular compiler and processor. The memory model should have removed that restriction.
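A rough sketch of why (hypothetical code, the usual textbook illustration): nothing in pre-C11 C stops the compiler from hoisting the load of the flag out of the loop, or the hardware from making the data visible after the flag, so a hand-rolled handshake like this only ever "worked" by courtesy of a particular compiler and processor.

/* Pre-C11 hand-rolled synchronisation that the language never guaranteed. */
extern void use(int value);

static int data;
static int ready;        /* plain int: no atomicity, no ordering guarantees */

void writer(void)
{
    data  = 42;
    ready = 1;           /* may become visible to another core before data does */
}

void reader(void)
{
    while (!ready)       /* the load may legally be hoisted and cached forever */
        ;
    use(data);           /* may observe a stale value of data even if the loop exits */
}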
>> If by "more modern hardware" you mean newer versions of >> x86 architecture, then you would need to take into account >> that they take extraordinary efforts to ensure previous >> code continues to work. They are largely but not entirely >> successful. > > Yet, the compiler made no such assumptions, at the time the > code was originally compiled!  The resulting binaries > *were* correct and continue to be correct.
Only if the original presumptions about the hardware's behaviour are unchanged - and that certainly isn't guaranteed.
> The point is, there are countless applications (embedded, RT > and desktop) that were written in C before any of these > issues were formalized (or even recognized).  So, that's > got to say something for the power of the language and people > to *use* it.
It says /something/, but it doesn't invalidate my points.
On 10/4/2021 2:14 AM, Tom Gardner wrote:
>>>> I think you're overstating the problem. >>>> >>>> I.e., code written "years ago" still manages to run on >>>> "more modern" hardware. If the need for the memory >>>> model came about because of changes/advances in hardware, >>>> then code compiled before such a model existed would >>>> "suffer" when running on that modern hardware. >>>> >>>> (i.e., the "old assumptions" still allow functional code >>>> despite hardware advances) >>> >>> By that reasoning, old code would suffer if recompiled >>> with a "new" C compiler incorporating the C memory model. >> >> That doesn't necessarily follow. >> >> But, if C was so hopelessly flawed before adopting a model, >> then code compiled before such a model existed would surely >> not perform correctly? > > C wasn't hopelessly flawed, it is just that processors > aren't PDP-11s any more.
They haven't been for a very long time! Yet the world hasn't come falling down around us!
>>> Well-written old code should continue to work. Poorly >>> written code may indeed suffer where the inbuilt presumptions >>> are invalidated by new hardware. The language lawyers >>> would say that is correct behaviour. >> >> It is. If you've relied on something that wasn't nailed >> down ("unspecified behavior") then you may have "got lucky". >> You could just as easily have been UNlucky! But, presumably, >> if you'd been, you would have noticed and done something >> about it. > > Trying to spot un-reproduceable errors due to races > between processors, caches and the like is non-trivial > and not deterministic.
You don't have to identify where the problems are occurring; just that the results were "wrong". I am amused (dismayed!) at how many times I've watched a developer WITNESS a fault and rationalize it away with "that can't happen!" (Yes, but it DID!!! Now, what are you going to do about it? The fact that it's "hard" to track down or infrequent doesn't mean it can be ignored. Do you have some GUARANTEE that it will REMAIN infrequent?)
> Most C programmers didn't even realise that it was > impossible to build a threading library in C. To do so you > had to rely behaviour of a particular compiler and processor. > The memory model should have removed that restriction.
Yet we've all used such libraries and, gee, they work! :>
>>> If by "more modern hardware" you mean newer versions of >>> x86 architecture, then you would need to take into account >>> that they take extraordinary efforts to ensure previous >>> code continues to work. They are largely but not entirely >>> successful. >> >> Yet, the compiler made no such assumptions, at the time the >> code was originally compiled! The resulting binaries >> *were* correct and continue to be correct. > > Only if the original presumptions about the hardware's > behaviour is unchanged - and that certainly isn't guaranteed.
Because folks designing hardware know that they can't just introduce something that invalidates all previously "working" code. At least, not if they want to make any claims to market share! How many people would discard all of their tools/utilities and their personal experience with those just because BigChipCo claimed their new processor was 10 times more performant than the processors currently running? <shrug> Gee, that's nice!
>> The point is, there are countless applications (embedded, RT >> and desktop) that were written in C before any of these >> issues were formalized (or even recognized). So, that's >> got to say something for the power of the language and people >> to *use* it. > > It says /something/, but it doesn't invalidate my points.
On 04/10/21 11:11, Don Y wrote:
> On 10/4/2021 2:14 AM, Tom Gardner wrote: >>>>> I think you're overstating the problem. >>>>> >>>>> I.e., code written "years ago" still manages to run on >>>>> "more modern" hardware. If the need for the memory >>>>> model came about because of changes/advances in hardware, >>>>> then code compiled before such a model existed would >>>>> "suffer" when running on that modern hardware. >>>>> >>>>> (i.e., the "old assumptions" still allow functional code >>>>> despite hardware advances) >>>> >>>> By that reasoning, old code would suffer if recompiled >>>> with a "new" C compiler incorporating the C memory model. >>> >>> That doesn't necessarily follow. >>> >>> But, if C was so hopelessly flawed before adopting a model, >>> then code compiled before such a model existed would surely >>> not perform correctly? >> >> C wasn't hopelessly flawed, it is just that processors >> aren't PDP-11s any more. > > They haven't been for a very long time! Yet the world > hasn't come falling down around us!
There are many many many "random" errors observed in deployed systems. Read comp.risks! Many, where the cause can be determined, are due to false assumptions made by programmers at all levels.
>>>> Well-written old code should continue to work. Poorly >>>> written code may indeed suffer where the inbuilt presumptions >>>> are invalidated by new hardware. The language lawyers >>>> would say that is correct behaviour. >>> >>> It is. If you've relied on something that wasn't nailed >>> down ("unspecified behavior") then you may have "got lucky". >>> You could just as easily have been UNlucky! But, presumably, >>> if you'd been, you would have noticed and done something >>> about it. >> >> Trying to spot un-reproduceable errors due to races >> between processors, caches and the like is non-trivial >> and not deterministic. > > You don't have to identify where the problems are occurring; > just that the results were "wrong".
Spotting rare events is a matter of luck.
> I am amused (dismayed!) at how many times I've watched > a developer WITNESS a fault and rationalize it away > with "that can't happen!" > > (Yes, but it DID!!! Now, what are you going to do about it? > The fact that it's "hard" to track down or infrequent doesn't > mean it can be ignored. Do you have some GUARANTEE that it > will REMAIN infrequent?)
Yup. Happens to the best, e.g. aborting the first space shuttle launch.
>> Most C programmers didn't even realise that it was >> impossible to build a threading library in C. To do so you >> had to rely behaviour of a particular compiler and processor. >> The memory model should have removed that restriction. > > Yet we've all used such libraries and, gee, they work! :>
Note the important caveats. Don't forget that in early systems there was probably assembly code in there.
>>>> If by "more modern hardware" you mean newer versions of >>>> x86 architecture, then you would need to take into account >>>> that they take extraordinary efforts to ensure previous >>>> code continues to work. They are largely but not entirely >>>> successful. >>> >>> Yet, the compiler made no such assumptions, at the time the >>> code was originally compiled!&nbsp; The resulting binaries >>> *were* correct and continue to be correct. >> >> Only if the original presumptions about the hardware's >> behaviour is unchanged - and that certainly isn't guaranteed. > > Because folks designing hardware know that they can't just > introduce something that invalidates all previously "working" > code. > > At least, not if they want to make any claims to market share!
That's not important for the embedded market, and it doesn't happen; consider all the ARM variants. There the SOP is that maintenance is done using the same toolset, even though updated tools are available. Virtual machines are a godsend there.
> How many people would discard all of their tools/utilities > and their personal experience with those just because > BigChipCo claimed their new processor was 10 times more > performant than the processors currently running? > > <shrug> Gee, that's nice!
Indeed; it is a problem. The best hope for the server market is the Mill Architecture, which is radically different but still able to run C efficiently.
>>> The point is, there are countless applications (embedded, RT >>> and desktop) that were written in C before any of these >>> issues were formalized (or even recognized). So, that's >>> got to say something for the power of the language and people >>> to *use* it. >> >> It says /something/, but it doesn't invalidate my points. >
On a sunny day (Mon, 4 Oct 2021 10:14:51 +0100) it happened Tom Gardner
<spamjunk@blueyonder.co.uk> wrote in <sjegmb$vah$2@dont-email.me>:

>Most C programmers didn't even realise that it was >impossible to build a threading library in C. To do so you >had to rely behaviour of a particular compiler and processor. >The memory model should have removed that restriction.
No idea what you are on about, but why not download this from my site (it should be in parts on your Linux system too, but this is unzipped in one piece): http://panteltje.com/pub/libc.info Search in it for pthread_create; it is around line 54361. All the threading functions are in libc. I have used them many times and they run on x86 and ARM. IF you ever start programming in C then reading libc.info is a MUST, else you will remain in the dark, so to speak.
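For anyone who wants to try it, here is a minimal (hypothetical) example of the pthread_create/pthread_join calls documented there, built with something like "cc -O2 -pthread example.c":

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    printf("hello from a thread, arg = %s\n", (const char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int err = pthread_create(&tid, NULL, worker, "x86 or arm");
    if (err != 0) {
        fprintf(stderr, "pthread_create failed: %d\n", err);
        return 1;
    }
    pthread_join(tid, NULL);   /* wait for the worker to finish */
    return 0;
}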
On 10/4/2021 3:58 AM, Tom Gardner wrote:
> On 04/10/21 11:11, Don Y wrote: >> On 10/4/2021 2:14 AM, Tom Gardner wrote: >>>>> By that reasoning, old code would suffer if recompiled >>>>> with a "new" C compiler incorporating the C memory model. >>>> >>>> That doesn't necessarily follow. >>>> >>>> But, if C was so hopelessly flawed before adopting a model, >>>> then code compiled before such a model existed would surely >>>> not perform correctly? >>> >>> C wasn't hopelessly flawed, it is just that processors >>> aren't PDP-11s any more. >> >> They haven;t been for a very long time! Yet the world >> hasn't come falling down around us! > > There are many many many "random" errors observed > in deployed systems. Read comp.risks! > > Many, where the cause can be determined, are due > to false assumptions made by programmers at all levels.
That's my point. It *did* screw up. YOU saw it! We're not relying on some dubious report from a (clueless?) end user. So, why aren't you taking it more seriously? Robotron 2084 has an annoying bug that typically manifests when you're in the middle of a REALLY good game! Shooting an "enforcer" in a corner (so it explodes to areas off-screen) ends up immediately crashing the machine. Sometimes. Often enough that it is REALLY ANNOYING! <shrug> "Ship it!"
>>>>> Well-written old code should continue to work. Poorly >>>>> written code may indeed suffer where the inbuilt presumptions >>>>> are invalidated by new hardware. The language lawyers >>>>> would say that is correct behaviour. >>>> >>>> It is. If you've relied on something that wasn't nailed >>>> down ("unspecified behavior") then you may have "got lucky". >>>> You could just as easily have been UNlucky! But, presumably, >>>> if you'd been, you would have noticed and done something >>>> about it. >>> >>> Trying to spot un-reproduceable errors due to races >>> between processors, caches and the like is non-trivial >>> and not deterministic. >> >> You don't have to identify where the problems are occurring; >> just that the results were "wrong". > > Spotting rare events is a matter of luck.
Unless they aren't particularly rare -- see above. You should be crafting test cases that prove the correctness of your code at all levels, not relying on some high level exercise of the device to assure you that all is well. (If it isn't, how will that high level testing help you FIND it?)
>>> Most C programmers didn't even realise that it was >>> impossible to build a threading library in C. To do so you >>> had to rely behaviour of a particular compiler and processor. >>> The memory model should have removed that restriction. >> >> Yet we've all used such libraries and, gee, they work! :> > > Note the important caveats. > > Don't forget that in early systems there was probably > assembly code in there.
But you can coax the compiler to generate the same assembly. The key difference is (was?) that compilers didn't try to outsmart the developer. So, if the developer WROTE a particular sequence of instructions, he had a pretty good idea that they would generate a particular set of opcodes. Nowadays, looking at the generated code is like trying to read hieroglyphs: "Where the hell did THIS come from??"
>>>>> If by "more modern hardware" you mean newer versions of >>>>> x86 architecture, then you would need to take into account >>>>> that they take extraordinary efforts to ensure previous >>>>> code continues to work. They are largely but not entirely >>>>> successful. >>>> >>>> Yet, the compiler made no such assumptions, at the time the >>>> code was originally compiled! The resulting binaries >>>> *were* correct and continue to be correct. >>> >>> Only if the original presumptions about the hardware's >>> behaviour is unchanged - and that certainly isn't guaranteed. >> >> Because folks designing hardware know that they can't just >> introduce something that invalidates all previously "working" >> code. >> >> At least, not if they want to make any claims to market share! > > That's not important for the embedded market, and it > doesn't happen; consider all the ARM variants.
But appliances don't see their *delivered* binaries reused in future devices -- as is the case with PCs. They also tend not to see as much revision as desktop environments. How many software updates has your TV downloaded? Microwave oven? Washer/dryer? And, the bar to replacing them isn't too high. You don't even consider holding onto your refrigerator because it's got "such great firmware"! <rolls eyes>
> There the SOP is that maintenance is done using the > same toolset, even though updated tools are available. > Virtual machines are a godsend there.
Yup. I deliberately run VMs back to W98 for this very reason. Clients think nothing of updating their tools. I'll be damned if I'm going to waste time fighting some quirk in a new tool when I can keep the old ones around for a bit of disk space (I have 96T of rust on my ESXi server).
>> How many people would discard all of their tools/utilities >> and their personal experience with those just because >> BigChipCo claimed their new processor was 10 times more >> performant than the processors currently running? >> >> <shrug> Gee, that's nice! > > Indeed; it is a problem. > > The best hope for the server market is the Mill Architecture, > which is radically different but still able to run C efficiently.
I'm not sure how stable the server market is going to be, going forward. I see people clinging to old models (to avoid updating software, tools, processes, etc.). And, I imagine there will be folks jumping on bleeding-edge (qubit) offerings. Regardless, servers tend to have resources ($$ and people) to throw at their problems. Considerably more so than someone building a desktop (or embedded) application. Most web sites spend more on salaries than many embedded projects spend on their entire development effort. For a "bunch of pages"??
On 04/10/21 12:58, Don Y wrote:
> On 10/4/2021 3:58 AM, Tom Gardner wrote: >> On 04/10/21 11:11, Don Y wrote: >>> On 10/4/2021 2:14 AM, Tom Gardner wrote: >>>>>> Well-written old code should continue to work. Poorly >>>>>> written code may indeed suffer where the inbuilt presumptions >>>>>> are invalidated by new hardware. The language lawyers >>>>>> would say that is correct behaviour. >>>>> >>>>> It is. If you've relied on something that wasn't nailed >>>>> down ("unspecified behavior") then you may have "got lucky". >>>>> You could just as easily have been UNlucky! But, presumably, >>>>> if you'd been, you would have noticed and done something >>>>> about it. >>>> >>>> Trying to spot un-reproduceable errors due to races >>>> between processors, caches and the like is non-trivial >>>> and not deterministic. >>> >>> You don't have to identify where the problems are occurring; >>> just that the results were "wrong". >> >> Spotting rare events is a matter of luck. > > Unless they aren't particularly rare -- see above. > > You should be crafting test cases that prove the correctness > of your code at all levels, not relying on some high level > exercise of the device to assure you that all is well. > > (If it isn't, how will that high level testing help you FIND it?)
I'd like to see your test cases that prove a database's ACID properties! Even proving the absence of ACID behaviour is probabilistic!
> >>>> Most C programmers didn't even realise that it was >>>> impossible to build a threading library in C. To do so you >>>> had to rely behaviour of a particular compiler and processor. >>>> The memory model should have removed that restriction. >>> >>> Yet we've all used such libraries and, gee, they work! :> >> >> Note the important caveats. >> >> Don't forget that in early systems there was probably >> assembly code in there. > > But you can coax the compiler to generate the same assembly. > > The key difference is (was?) that compilers didn't try > to outsmart the developer. So, if the developer WROTE > a particular sequence of instructions, he had a pretty good > idea that they would generate a particular set of opcodes. > > Nowadays, looking at the generated code is like trying to > read hieroglyphs: "Where the hell did THIS come from??"
That's because compilers have more complex optimisation algorithms. Nowadays programmers have to ensure those optimisations /aren't/ unwittingly invoked - and that relies on a good knowledge of what /isn't/ guaranteed by the language.
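A well-known (hypothetical) example of an optimisation being "unwittingly invoked": signed overflow is undefined behaviour, so a modern compiler is entitled to assume that x + 1 is always greater than x and delete this "overflow check" entirely; the programmer's presumption of wrap-around was never guaranteed by the language.

#include <limits.h>
#include <stdio.h>

/* Intended as an overflow test, but since signed overflow is undefined,
 * the compiler may fold the expression to 0 at higher optimisation levels. */
static int wrapped(int x)
{
    return x + 1 < x;
}

int main(void)
{
    /* Typically prints 1 at -O0 (wrap-around happens to occur) and 0 at -O2. */
    printf("%d\n", wrapped(INT_MAX));
    return 0;
}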
> >>>>>> If by "more modern hardware" you mean newer versions of >>>>>> x86 architecture, then you would need to take into account >>>>>> that they take extraordinary efforts to ensure previous >>>>>> code continues to work. They are largely but not entirely >>>>>> successful. >>>>> >>>>> Yet, the compiler made no such assumptions, at the time the >>>>> code was originally compiled!&nbsp; The resulting binaries >>>>> *were* correct and continue to be correct. >>>> >>>> Only if the original presumptions about the hardware's >>>> behaviour is unchanged - and that certainly isn't guaranteed. >>> >>> Because folks designing hardware know that they can't just >>> introduce something that invalidates all previously "working" >>> code. >>> >>> At least, not if they want to make any claims to market share! >> >> That's not important for the embedded market, and it >> doesn't happen; consider all the ARM variants. > > But appliances don't see their *delivered* binaries reused > in future devices -- as is the case with PCs.
You've got it :) That's why the "continue to run code unaltered" contention is misleading in many important cases.
> They also tend not to see as much revision as desktop > environments. How many software updates has your TV > downloaded? Microwave oven? Washer/dryer? > > And, the bar isn't too high to replacing them. You don't > even consider holding onto your refrigerator because it's > got "such great firmware"! <rolls eyes>
The problem is becoming that the manufacturer's servers are decommissioned, so the IoT device fails by design.
>> The best hope for the server market is the Mill Architecture, >> which is radically different but still able to run C efficiently. > > I'm not sure how stable the server market is going to be, going > forward. I see people clinging to old models (to avoid updating > software, tools, processes, etc.). And, imagine there will be > folks jumping on bleeding edge (Qbit) offerings.
I expect that to continue for run-of-the-mill application code, for the same reasons COBOL is still executed. Products such as databases will be continually updated as new hardware becomes available, e.g. Oracle on Itanic.
> Regardless, servers tend to have resources ($$ and people) to > throw at their problems. Considerably moreso than someone > building a desktop (or embedded) application. > > Most web sites spend more on salaries than many embedded > projects spend on their entire development effort. For > a "bunch of pages"??
On 04/10/21 12:31, Jan Panteltje wrote:
> On a sunny day (Mon, 4 Oct 2021 10:14:51 +0100) it happened Tom Gardner > <spamjunk@blueyonder.co.uk> wrote in <sjegmb$vah$2@dont-email.me>: > >> Most C programmers didn't even realise that it was >> impossible to build a threading library in C. To do so you >> had to rely behaviour of a particular compiler and processor. >> The memory model should have removed that restriction. > > No idea what you are on about,
It is in K&R v1. If I could find my copy, I'd tell you what page. For a more modern statement from someone that /does/ understand C, see http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf "Threads Cannot be Implemented as a Library", Hans-J. Boehm