
Good hardware, poor software

Started by bitrex April 16, 2017
On Sun, 16 Apr 2017 23:36:20 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

>On 16/04/17 19:36, bitrex wrote:
>> On 04/16/2017 01:25 PM, Don Y wrote:
>>> On 4/16/2017 6:13 AM, bitrex wrote:
>>>> The graveyard of audio production equipment is littered with the
>>>> corpses of products that had rock-solid hardware designed by
>>>> professionals, but firmware written by nincompoops.
>>>
>>> That's true of many application domains.
>>>
>>>> It's really frustrating when you buy a blinky-light box that some
>>>> engineers somewhere likely spent thousands of man-hours on designing
>>>> the best sounding DAC structure they could for the budget, but then
>>>> the OS crashes when you accidentally try to load a file that's the
>>>> wrong format.
>>>
>>> Because most (naive) developers seem to "test" by throwing things
>>> that *should* work at the code and verifying that they *do*, in fact,
>>> work -- instead of throwing things that *don't* work (i.e., bad file
>>> formats, out-of-bounds parameters, etc.) and verifying proper handling
>>> of those conditions.
>>
>> <https://en.wikipedia.org/wiki/Test-driven_development#/media/File:TDD_Global_Lifecycle.png>
>
>One of the major changes since I started engineering
>is that it is no longer deemed necessary to design
>software. Nowadays it is possible to "test quality
>into products".
Sounds like you got your MBA.
>
>Or not.
;-)
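Don Y's point about exercising the failure paths is easy to make concrete. Below is a minimal sketch in C++ (the loadWav() routine, its LoadResult codes, and the 44-byte header rule are all hypothetical, invented for illustration): the suite deliberately feeds a wrong-format buffer and a truncated one, and checks for a clean error instead of only confirming that good input loads.

#include <algorithm>
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

enum class LoadResult { Ok, BadMagic, Truncated };

// Hypothetical loader: accepts only buffers that start with "RIFF"
// and are at least 44 bytes long (a plausible WAV header size).
LoadResult loadWav(const std::vector<uint8_t>& buf) {
    if (buf.size() < 44) return LoadResult::Truncated;
    static const std::string magic = "RIFF";
    if (!std::equal(magic.begin(), magic.end(), buf.begin()))
        return LoadResult::BadMagic;
    return LoadResult::Ok;
}

int main() {
    std::vector<uint8_t> good(44, 0);
    good[0] = 'R'; good[1] = 'I'; good[2] = 'F'; good[3] = 'F';
    assert(loadWav(good) == LoadResult::Ok);              // happy path

    // The cases naive suites skip: wrong format and truncated input
    // must produce a clean error, not a crash.
    std::vector<uint8_t> wrong(44, 'X');
    assert(loadWav(wrong) == LoadResult::BadMagic);
    assert(loadWav({'R', 'I'}) == LoadResult::Truncated);
}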
On 17/04/17 08:36, Tom Gardner wrote:
> On 16/04/17 19:36, bitrex wrote:
>> [...]
>
> One of the major changes since I started engineering
> is that it is no longer deemed necessary to design
> software. Nowadays it is possible to "test quality
> into products".
That's not what TDD does.

A well-written specification should be written like code, and it should be machine-processable (to be run as a test suite). That's what TDD does.

As you write the specification, you *test* each part by implementing it. If you ever find yourself implementing anything that's not specified, you're no longer doing TDD. If you find yourself specifying something that cannot be tested, you're no longer doing TDD.

It's all very easy to criticize cowboy coders, but how do you write design specifications? How do you know that the spec is (a) correct and (b) implementable?

Very few people are willing to accept the discipline that TDD actually demands. They might say they're doing TDD, but they're often breaking the rules.
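For readers who haven't seen the discipline in action, here is the cycle Clifford describes in miniature, in C++ with a bare assert harness (the clamp() routine is a hypothetical example, not anything from the thread): the assertions are written first and constitute the specification; the implementation exists only to turn them green.

#include <cassert>

// Step 1 ("red"): the tests are written first. They are the spec;
// until clamp() exists they don't even compile.
int clamp(int v, int lo, int hi);

int main() {
    assert(clamp(5, 0, 10) == 5);    // in range: passed through
    assert(clamp(-3, 0, 10) == 0);   // below range: pinned to lo
    assert(clamp(42, 0, 10) == 10);  // above range: pinned to hi
}

// Step 2 ("green"): write only enough code to satisfy the assertions.
// By the rules stated above, anything clamp() did beyond what the
// tests demand would be implementation without specification.
int clamp(int v, int lo, int hi) {
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}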
On 17/04/17 09:53, Clifford Heath wrote:
> As you write the specification, you *test* each part by
> implementing it.
I just realized that people might read this in a way that I did not intend.

I mean that the *specification* gets tested by being implemented. The code does too, but that's incidental. It's more important that the code is used to test the specification, than the other way around.

Most specifications contain many statements that cannot be tested, and/or cannot be implemented. Perhaps they set no "pass level", or are simply ambiguous, reflecting contradictory requirements from committee members, or from the same person at different times.
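The "no pass level" problem is easy to illustrate. A clause like "the audio engine shall respond quickly" cannot be tested; give it a threshold and it becomes executable. A minimal sketch in C++ (processBlock() and the 1 ms budget are hypothetical stand-ins):

#include <cassert>
#include <chrono>

// Stand-in for real DSP work on one buffer of samples.
void processBlock(float* buf, int n) {
    for (int i = 0; i < n; ++i) buf[i] *= 0.5f;
}

int main() {
    // Untestable clause: "shall respond quickly" -- no pass level.
    // Testable clause:   "a 64-sample block completes in under 1 ms
    //                     on the reference hardware".
    float buf[64] = {};
    auto t0 = std::chrono::steady_clock::now();
    processBlock(buf, 64);
    auto dt = std::chrono::steady_clock::now() - t0;
    assert(dt < std::chrono::milliseconds(1));
}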
On 17/04/17 00:53, Clifford Heath wrote:
> On 17/04/17 08:36, Tom Gardner wrote:
>> [...]
>>
>> One of the major changes since I started engineering
>> is that it is no longer deemed necessary to design
>> software. Nowadays it is possible to "test quality
>> into products".
>
> That's not what TDD does.
>
> A well-written specification should be written like code,
> and it should be machine-processable (to be run as a test
> suite). That's what TDD does.
>
> As you write the specification, you *test* each part by
> implementing it. If you ever find yourself implementing
> anything that's not specified, you're no longer doing TDD.
> If you find yourself specifying something that cannot be
> tested, you're no longer doing TDD.
>
> It's all very easy to criticize cowboy coders, but how do
> you write design specifications? How do you know that the
> spec is (a) correct and (b) implementable?
>
> Very few people are willing to accept the discipline that
> TDD actually demands. They might say they're doing TDD,
> but they're often breaking the rules.
As you get around to implying, TDD is like fracking: safe *when done properly*.

The relevant questions are:
- do the people using the tool understand the theory behind it
- is the specific task somewhere where the tool can be used safely
- are the people given sufficient resources to use the tool safely
- are the individuals capable of doing it properly

Too many times I've seen people emphatically state "there's a green light so it works"!

Too many times I've seen unit tests that aren't failable!

Too many times I've seen unit tests that serve *only* to test irrelevant implementation details, thus freezing the codebase!

TDD is better than some alternative testing regimes, but it is not the panacea claimed by some zealots.
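Both of those unit-test failure modes are worth seeing side by side. A sketch in C++ (the Mixer class is hypothetical): the first assertion can never go red no matter how the code behaves; the second pins an internal representation, so any refactor breaks it even when observable behaviour is unchanged.

#include <cassert>

struct Mixer {
    int gain = 0;                          // internal representation
    void setGain(int g) { gain = g; }
    int getGain() const { return gain; }
};

int main() {
    Mixer m;
    m.setGain(3);

    // 1. An unfailable test: compares a value with itself, so it is
    //    green regardless of what setGain()/getGain() actually do.
    assert(m.getGain() == m.getGain());

    // 2. A test of an irrelevant implementation detail: it reaches
    //    into the internals instead of the observable behaviour, so
    //    storing gain in, say, fixed-point dB would "fail" it even
    //    though every user-visible property is preserved.
    assert(m.gain == 3);
}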
On 17/04/17 05:24, Clifford Heath wrote:
> On 17/04/17 09:53, Clifford Heath wrote:
>> As you write the specification, you *test* each part by
>> implementing it.
>
> I just realized that people might read this in a way that I did
> not intend.
>
> I mean that the *specification* gets tested by being implemented.
>
> The code does too, but that's incidental. It's more important
> that the code is used to test the specification, than the other
> way around.
>
> Most specifications contain many statements that cannot be tested,
> and/or cannot be implemented. Perhaps they set no "pass level",
> or are simply ambiguous, reflecting contradictory requirements
> from committee members, or from the same person at different
> times.
Good luck finding a way to use TDD to test "mission critical" properties such as
- liveness
- ACID
- high availability

OK, the latter can be tickled to a useful extent; not the former.
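The liveness case is worth spelling out: a test must terminate, so the strongest statement it can make is "responded within this run's timeout", which is availability being tickled, not liveness being proven. A sketch in C++ (slowWorker() is a hypothetical stand-in for a call into the system under test):

#include <cassert>
#include <chrono>
#include <future>
#include <thread>

// Hypothetical stand-in for a call into the system under test.
int slowWorker() {
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    return 42;
}

int main() {
    auto fut = std::async(std::launch::async, slowWorker);
    // Passing shows only that the worker answered within 100 ms on
    // this run; "always eventually answers" (liveness) cannot be
    // falsified by any finite test.
    assert(fut.wait_for(std::chrono::milliseconds(100))
           == std::future_status::ready);
    assert(fut.get() == 42);
}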
On 17/04/17 17:40, Tom Gardner wrote:
> On 17/04/17 05:24, Clifford Heath wrote:
>> [...]
>>
>> Most specifications contain many statements that cannot be tested,
>> and/or cannot be implemented. Perhaps they set no "pass level",
>> or are simply ambiguous, reflecting contradictory requirements
>> from committee members, or from the same person at different
>> times.
>
> Good luck finding a way to use TDD to test "mission
> critical" properties such as
> - liveness
> - ACID
> - high availability
You can't test things that cannot even be proven analytically. That's not a weakness of TDD.
On 17/04/17 17:38, Tom Gardner wrote:
> On 17/04/17 00:53, Clifford Heath wrote:
>> [...]
>>
>> Very few people are willing to accept the discipline that
>> TDD actually demands. They might say they're doing TDD,
>> but they're often breaking the rules.
>
> As you get around to implying, TDD is like
> fracking: safe *when done properly*.
>
> The relevant questions are:
> - do the people using the tool understand the
>   theory behind it
> - is the specific task somewhere where the tool
>   can be used safely
> - are the people given sufficient resources to
>   use the tool safely
> - are the individuals capable of doing it
>   properly
>
> Too many times I've seen people emphatically state
> "there's a green light so it works"!
>
> Too many times I've seen unit tests that aren't
> failable!
>
> Too many times I've seen unit tests that serve
> *only* to test irrelevant implementation details,
> thus freezing the codebase!
>
> TDD is better than some alternative testing
> regimes, but it is not the panacea claimed by
> some zealots.
I suspect that we're in strong agreement here.

There are many practitioners who think that TDD solves the correctness problem, only because they do not understand the philosophical problem of incorrect specification.

If you do not understand that a specification can be unsatisfiable (cannot be met) or incomplete (can be met, but with behavior that is neither desirable nor intended), then all bets are off. You can't write correct software until you can define correctness in the specific instance. That is *much* harder than it seems, and most programmers are quite simply unaware of that.

TDD is valuable not mainly because it helps you implement software correctly, but because it helps you figure out what you mean by "correct".

Furthermore, a formal definition of "correct" can only be expressed in code. Unless you're using proof tools, that is equivalent to saying "the program should behave the way the code says it will". Only the users can say whether the behavior is desirable. By keeping unspecified behavior to the bare minimum (which TDD does), it's more likely that the users will observe (and reject) any possible behavior they deem undesirable.

Clifford Heath.
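An unsatisfiable specification becomes obvious the moment it is written as tests. A sketch in C++ (roundHalf() and the two "committee" requirements are invented for illustration); the second assertion fails by construction, which is the point: the contradiction surfaces before anyone argues about the implementation.

#include <cassert>

int roundHalf(double v);   // hypothetical rounding routine

int main() {
    // Requirement A (one committee member): round half away from zero.
    assert(roundHalf(2.5) == 3);
    // Requirement B (another member): round half to even.
    // No implementation can satisfy both; this assertion fails by
    // construction, exposing the unsatisfiable spec.
    assert(roundHalf(2.5) == 2);
}

int roundHalf(double v) { return static_cast<int>(v + 0.5); }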
On 17/04/17 10:00, Clifford Heath wrote:
> On 17/04/17 17:38, Tom Gardner wrote:
>> [...]
>>
>> TDD is better than some alternative testing
>> regimes, but it is not the panacea claimed by
>> some zealots.
>
> I suspect that we're in strong agreement here.
>
> There are many practitioners who think that TDD solves the
> correctness problem, only because they do not understand
> the philosophical problem of incorrect specification.
>
> If you do not understand that a specification can be
> unsatisfiable (cannot be met) or incomplete (can be met,
> but with behavior that is neither desirable nor intended),
> then all bets are off. You can't write correct software
> until you can define correctness in the specific instance.
> That is *much* harder than it seems, and most programmers
> are quite simply unaware of that.
Up to here we are in strong agreement.
> TDD is valuable not mainly because it helps you implement
> software correctly, but because it helps you figure out
> what you mean by "correct".
Ideally, yes. But in practice TDD can be used to provide a veneer of progress (especially to PHBs), where lots of simple little things are tested in isolation. But then "emergent behaviour", um, emerges. And people don't test the ensemble because each piece is known to be working on its own; hence the whole must work (mustn't it?) :(

One can reasonably argue that such perpetrators shouldn't be allowed near a keyboard, but back on planet Earth...
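A miniature of that emergent failure, sketched in C++ (both modules and their unit tests are invented): each half passes its own test, and the ensemble is still wrong because nobody tested the seam.

#include <cassert>

// Module A, unit-tested in isolation: reports level in millivolts.
int readLevel_mV() { return 1500; }        // its test: == 1500, green

// Module B, unit-tested in isolation: expects volts, trips above 2 V.
bool overloadTripped(double volts) { return volts > 2.0; }

int main() {
    // The ensemble: A's millivolts fed straight into B as volts, so a
    // healthy 1.5 V signal reads as 1500 V and the protection trips.
    // Both unit suites are green; this integration assertion fails.
    assert(!overloadTripped(readLevel_mV()));
}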
> Furthermore, a formal definition of "correct" can only be
> expressed in code. Unless you're using proof tools, that
> is equivalent to saying "the program should behave the way
> the code says it will". Only the users can say whether the
> behavior is desirable.
You are assuming that the users know what they want before/during implementation. If only :(

Many don't, and even fewer can think in terms of "non-functional behaviour" such as availability. (Yes, I realise that loathsome term is oxymoronic.)
> By keeping unspecified behavior to
> the bare minimum (which TDD does), it's more likely that
> the users will observe (and reject) any possible behavior
> they deem undesirable.
Too often TDD is used to specify irrelevant details, e.g. a unit test for each getter/setter. Those are easy to spot, but after 5 years the new team members won't know which behaviour is *required/desired* and which is merely an *artefact* of one particular implementation.

And at that point TDD has turned the codebase into concrete.
On 17/04/17 09:36, Clifford Heath wrote:
> On 17/04/17 17:40, Tom Gardner wrote:
>> On 17/04/17 05:24, Clifford Heath wrote:
>>> [...]
>>
>> Good luck finding a way to use TDD to test "mission
>> critical" properties such as
>> - liveness
>> - ACID
>> - high availability
>
> You can't test things that cannot even be proven analytically.
> That's not a weakness of TDD.
Of course. But many people overstate the capabilities of TDD. A typical manifestation of that is "but the TDD test suite light is green, so the software works". Heard that on more than one occasion, in more than one company :(
On 04/16/2017 07:00 PM, Lasse Langwadt Christensen wrote:
> On Sunday, 16 April 2017 at 16:51:46 UTC+2, bitrex wrote:
>> On 04/16/2017 10:25 AM, Tim Wescott wrote:
>>> On Sun, 16 Apr 2017 09:17:36 -0400, bitrex wrote:
>>>> On 04/16/2017 09:13 AM, bitrex wrote:
>>>>> The graveyard of audio production equipment is littered with the
>>>>> corpses of products that had rock-solid hardware designed by
>>>>> professionals, but firmware written by nincompoops.
>>>>
>>>> To be clear, what I mean by that is they let EEs write the software.
>>>
>>> HEY! There's a lot of profoundly good embedded software engineers out
>>> there with EE degrees.
>>
>> They sometimes seem scared of newfangled stuff like OOP, "design
>> patterns", "test-driven development", and "agile development."
>
> there is also the opposite, where using the latest greatest half-done
> pre-alpha languages, libraries, tools and buzzword development models
> they only understand half of gets priority over making stuff that
> actually works
There are probably four or five languages that came out in the past decade that have as their "mission statement" something like "To develop a language which has the ease of use, expressiveness, and safety of an interpreted language, and the execution speed of a compiled language." People still use C++ a lot...