PIC/dsPIC development

Started by bitrex November 4, 2018
On 05/11/2018 02:07, gnuarm.deletethisbit@gmail.com wrote:
> On Sunday, November 4, 2018 at 6:41:34 PM UTC-5, bitrex wrote:
>> On 11/04/2018 05:45 PM, speff wrote:
>>> On Sunday, 4 November 2018 15:12:28 UTC-5, bitrex wrote:
>>>> I might have to work on a project involving dsPIC code which I
>>>> don't have a lot of experience with, I'm primarily an AVR and
>>>> ARM guy.
>>>>
>>>> What's the hip current toolchains/dev boards y'all like to use
>>>> for PIC development on Linux or Windows desktops nowatimes?
>>>
>>> MPLAB-X with their compiler XC-16. It's a bit greedy with
>>> resources but okay on a modern machine. There is a free version
>>> with no optimization. I am not sure how well the DSP
>>> functionality is supported by the compiler, we just used it as a
>>> relatively fast 16-bit micro.
>>>
>>> The compiler is $995 but there is a 50% off discount coupon good
>>> to the end of this month. Use Coupon Code : TP1932
>>>
>>> This family is a bit long in the tooth, but if your customer
>>> wants it..
>>>
>>> --Spehro Pefhany
>>>
>>
>> I see a lot of requests on freelance sites, etc. for assistance
>> maintaining/debugging/future-proofing PIC code. I see very few
>> requests for maintenance or debugging of "legacy" 8 bit AVR code.
>>
>> Maybe like the Maytag repairman they just worked right first time
>> and didn't need any further assistance, most of my AVR projects are
>> like that nowatimes. ;)
>
> That may well be more about the people doing the design than anything
> inherent in the design process. The people I've met who like PICs
> seem to do so just because they "like" them rather than being able to
> explain any particular features or benefits of the PICs.
At a particular point in time they did have advantages for low-power, relatively simple control systems requiring a few I/O pins. I was quite taken with the 16F877 because it had enough pins to drive a 4-digit seven-segment LCD directly at bare-metal level, with almost no power consumption to speak of when run on a 32 kHz clock crystal. Cheapness and robustness usually tended to outweigh any disadvantages.
> While devices like the AVR often are justified because of a "modern"
> instruction set (which is not really an engineering evaluation)
> advocates can usually point to some real advantages.
Today I would be more inclined to go with an ARM core for HLL use.
> I'm not saying the PICs don't have advantages. I'm saying that when
> advocates can't justify their preference it makes me suspect other
> aspects of their work.
They were pretty handy price/performance around the turn of the century and very easy for hobbyists to get into without expensive kit. You could practically program one using the printer port of a PC and bit-bashing.

Today there are any number of really rather impressive sub-$20 single board computers kicking around that show off the features of the various chipsets that have been mass-produced for consumer routers and the like.

Not sure I would want to start a new project using a PIC today. YMMV

--
Regards, Martin Brown
Elliott 803/503?

The real programmers used Autocode or octal machine code.

Been there - done that, in the late 1960s.

-- 

-TV


On 6.11.18 17:58, Tom Gardner wrote:
> On 06/11/18 15:10, TTman wrote:
>>
>>>
>>> But then I started coding in binary long time ago,
>>> so for me asm is a high level language.
>>
>> Me too... with front panel switches on an 8 bit mini computer in the
>> early 70's....
>
> In that timeframe I was using a 39-bit(!) computer, still to be
> seen working at the best museum I've come across.
>
> My first assembler program was the triumphant reinvention of
> an FSM to convert one 5-channel paper tape code into another.
> Worked first time.
>
>
>>> I started with PICs cracking TV smartcards that had those in it.
>>> So the dirty secrets I should know... Was still legal back then.
>>
>> Same again, me too :)
>> I hate high level languages.. could never get the hang of them...
>
> I used Algol60 on that machine. I still remember the
> epiphany when I realised what the compiler was actually
> doing.
>
On 6.11.18 18:11, Tom Gardner wrote:
> On 06/11/18 15:05, David Brown wrote:
>> On 06/11/18 15:38, Tom Gardner wrote:
>>> On 06/11/18 14:01, Phil Hobbs wrote:
>>>> On 11/6/18 7:36 AM, Tom Gardner wrote:
>>>>> On 06/11/18 11:28, David Brown wrote:
>>>>>> On 05/11/18 19:44, Clive Arthur wrote:
>>>>>>> On 05/11/2018 18:01, Tom Gardner wrote:
>>
>> <snipped>
>>
>>>>>
>>>>> That complexity is a serious issue. If given a piece
>>>>> of code, most developers won't understand which
>>>>> combination of compiler flag must/mustn't be used.
>>>>
>>>> Code that works on some compiler settings and not others gives me the
>>>> heebie-jeebies. People often talk about "optimizer bugs" that really
>>>> aren't anything of the sort. Of course vaguely-defined language
>>>> features such as 'volatile' and old-timey thread support don't help.
>>>> (Things have been getting better on that front, I think.)
>>>
>>> Me too, but it is unclear to me that Things
>>> Are Getting Better. If they are it is /very/
>>> slow and will in many cases be constrained by
>>> having to use old variants of a language.
>>>
>>
>> One thing that I can think of that is "getting better" is threading in
>> C11 and C++11. I don't see it being particularly popular in C11 -
>> people use other methods, and embedded developers are often still using
>> older C standards. C++11 gives more useful threading features, which
>> have been extended in later versions - they give more reasons to use the
>> language's threading functionality rather than external libraries.
>>
>> The other new feature (again, from C++11 and C11) is atomic support.
>>
>> These are nice, but I think that most people who understand and use
>> "atomic" probably already understood how to use "volatile" correctly.
>
> Yes, but /very/ slowly.
>
> The core multiprocessing concepts were known in the mid 70s,
> but quite reasonably they didn't make it into K&R C.
>
> They first made their way into systems in the early/mid 80s,
> in Cedar/Mesa and Occam. They were, unfortunately, completely
> ignored in C++, because it was addressing (and creating)
> different problems.
>
> They were used in mainstream languages/environments in the
> mid 90s, i.e. almost a quarter of a century ago.
>
> Some of the higher-level concepts continue to be included
> in newer languages.
>
> So by a very generous definition, C/C++ is catching up with
> techniques that were known to be good 30 years ago, and
> that have been in widespread use for 20 years.
> Not impressive.
>
> The main reason C/C++ continues to be used is history:
> there's a lot of code out there, and people are familiar
> with (old versions of) it. Ditto COBOL.
There is another reason: C is a bloody good substitute for assembly code. I have written a dozen real-time kernels in it.

Programming modern RISC processors (SPARC, ARM etc.) in assembler is like sitting on a coil of NATO barbed wire.

--
-TV
On 06.11.18 20:24, Tauno Voipio wrote:
> On 6.11.18 18:11, Tom Gardner wrote:
>> So by a very generous definition, C/C++ is catching up with
>> techniques that were known to be good 30 years ago, and
>> that have been in widespread use for 20 years.
>> Not impressive.
>>
>> The main reason C/C++ continues to be used is history:
>> there's a lot of code out there, and people are familiar
>> with (old versions of) it. Ditto COBOL.
>
> There is another reason: C is a bloody good substitute
> for assembly code. I have written a dozen of real-time
> kernels with it.
>
> Programming modern RISC processors (Sparc, ARM etc) on
> assembler is like sitting on a coil of NATO barbed wire.
Exactly. Yesterday I wrote some data acquisition code that reads an ADC into a Beaglebone Black using one of its two PRUs. These are 200 MHz 32-bit RISCs without pipelining, but with predictable timing. I could control setup and hold times in 5 ns increments by bit-banging the ports, all in C, using "volatile" to make sure that all transfers are actually executed. I would never even have tried that in asm - there is absolutely no point in learning the instruction set of yet another I/O processor.

Everything was edited and compiled locally on the BBB's Debian Linux. :-)

Gerhard
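As a minimal sketch of the idea (the register address and bit assignments are invented placeholders, not the real PRU register map), bit-banging through a volatile pointer looks like this; on a 200 MHz PRU each store is one 5 ns cycle, which is where the timing control comes from:

#include <stdint.h>

/* Hypothetical memory-mapped GPIO output register. */
static volatile uint32_t* const GPIO_OUT =
    (volatile uint32_t*)0x4804C13C;

#define CLK_PIN  (1u << 3)   /* made-up clock bit */
#define DATA_PIN (1u << 4)   /* made-up data bit  */

/* Shift one byte out MSB-first. Because GPIO_OUT is volatile,
   every assignment below is a real store, executed in program
   order - the compiler may not merge, drop, or reorder them. */
void shift_out(uint8_t byte)
{
    for (int i = 7; i >= 0; --i) {
        uint32_t v = ((byte >> i) & 1u) ? DATA_PIN : 0u;
        *GPIO_OUT = v;            /* data valid, clock low (setup) */
        *GPIO_OUT = v | CLK_PIN;  /* clock high: peer samples data */
        *GPIO_OUT = v;            /* clock low again (hold)        */
    }
}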
On 11/06/2018 12:42 PM, Martin Brown wrote:

>> While devices like the AVR often are justified because of a "modern"
>> instruction set (which is not really an engineering evaluation)
>> advocates can usually point to some real advantages.
>
> Today I would be more inclined to go with an ARM core for HLL use.
>
>> I'm not saying the PICs don't have advantages. I'm saying that when
>> advocates can't justify their preference it makes me suspect other
>> aspects of their work.
>
> They were pretty handy price performance around the turn of the century
> and very easy for hobbyists to get into without expensive kit. You could
> practically program one using the printer port of a PC and bit bashing.
>
> Today there are any number of really rather impressive sub $20 single
> board computers kicking around that show off the features of the various
> chipsets that have been mass produced for consumer routers and the like.
>
> Not sure I would want to start a new project using a PIC today. YMMV
>
Also, the 8-bit AVR at 16 or 20 MHz is a speed demon with a very good power-consumption-to-milliMIPS ratio. Some might argue "you can't do DSP on a 50 cent 8-bit microcontroller" - oh yes you can. It's not your grandpa's 8-bit.
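For instance, a one-pole low-pass filter in Q15 fixed point fits easily in an ATmega's 16x16 hardware multiplier - a generic sketch, not taken from any particular project:

#include <stdint.h>

/* One-pole low-pass: y += alpha * (x - y), in Q15 fixed point.
   ALPHA_Q15 is roughly 0.1; pick to taste. Relies on arithmetic
   right shift of negative values (true for gcc/avr-gcc). */
#define ALPHA_Q15 3277

int16_t lowpass_q15(int16_t x)
{
    static int16_t y = 0;
    int32_t diff = (int32_t)x - y;              /* fits easily in 32 bits */
    y = (int16_t)(y + ((diff * ALPHA_Q15) >> 15));
    return y;
}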
On 06/11/18 17:11, Tom Gardner wrote:
> On 06/11/18 15:05, David Brown wrote:
>> On 06/11/18 15:38, Tom Gardner wrote:
>>> On 06/11/18 14:01, Phil Hobbs wrote:
>>>> On 11/6/18 7:36 AM, Tom Gardner wrote:
>>>>> On 06/11/18 11:28, David Brown wrote:
>>>>>> On 05/11/18 19:44, Clive Arthur wrote:
>>>>>>> On 05/11/2018 18:01, Tom Gardner wrote:
>>
>> <snipped>
>>
>>>>>
>>>>> That complexity is a serious issue. If given a piece
>>>>> of code, most developers won't understand which
>>>>> combination of compiler flag must/mustn't be used.
>>>>
>>>> Code that works on some compiler settings and not others gives me the
>>>> heebie-jeebies. People often talk about "optimizer bugs" that really
>>>> aren't anything of the sort. Of course vaguely-defined language
>>>> features such as 'volatile' and old-timey thread support don't help.
>>>> (Things have been getting better on that front, I think.)
>>>
>>> Me too, but it is unclear to me that Things
>>> Are Getting Better. If they are it is /very/
>>> slow and will in many cases be constrained by
>>> having to use old variants of a language.
>>>
>>
>> One thing that I can think of that is "getting better" is threading in
>> C11 and C++11. I don't see it being particularly popular in C11 -
>> people use other methods, and embedded developers are often still using
>> older C standards. C++11 gives more useful threading features, which
>> have been extended in later versions - they give more reasons to use the
>> language's threading functionality rather than external libraries.
>>
>> The other new feature (again, from C++11 and C11) is atomic support.
>>
>> These are nice, but I think that most people who understand and use
>> "atomic" probably already understood how to use "volatile" correctly.
>
> Yes, but /very/ slowly.
>
Agreed. I think that, especially in C, the multi-threading and atomic stuff was too little, too late. Anyone wanting this is already using it via libraries. In C++, the case is a bit different, as the standard C++11 threading features give you many advantages over OS calls and macros. It is much nicer to simply create a local lock object and know that you hold the lock for as long as that object is in scope, than to need a call to get the lock and then matching releases on all possible exit points from the function (including exceptions). At least some of this could have come earlier, of course, but language improvements in C++11 such as "auto" and better templates gave an overall better system.
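A minimal illustration of that point with the standard C++11 facilities - std::lock_guard releases the mutex on every exit path, including exceptions:

#include <mutex>

std::mutex m;
int shared_counter = 0;

void increment()
{
    std::lock_guard<std::mutex> lock(m);  // acquires m here
    ++shared_counter;
    // m is released automatically when 'lock' goes out of scope,
    // whether we return normally or an exception is thrown.
}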
> The core multiprocessing concepts were known in the mid 70s,
> but quite reasonably they didn't make it into K&R C.
>
> They first made their way into systems in the early/mid 80s,
> in Cedar/Mesa and Occam. They were, unfortunately, completely
> ignored in C++, because it was addressing (and creating)
> different problems.
>
(I don't know Cedar/Mesa, but Occam was for handling completely different kinds of problems. It was for massively parallel systems, often SIMD, rather than multiple independent threads.)

C and C++ lived in a serial world, and you used OS features to work with multiple threads or synchronisation. Part of this was, I think, a chicken-and-egg effect of C being tightly bound with *nix, and the *nix world being slow to use threads. In *nix, processes are cheap and inter-process communication is easy and efficient. There was little incentive to have threads and more efficient (but harder) communication and synchronisation between threads. Windows needed threads because processes were, and still are, hugely expensive in comparison.

Without threads in the OS, there is no need for threading support in a language. Without threading support (and a memory model, atomics, and synchronisation instructions), it is hard to use threads in code and therefore no point in having them in the OS.
> They were used in mainstream languages/environments in the
> mid 90s, i.e. almost a quarter of a century ago.
>
> Some of the higher-level concepts continue to be included
> in newer languages.
>
> So by a very generous definition, C/C++ is catching up with
> techniques that were known to be good 30 years ago, and
> that have been in widespread use for 20 years.
> Not impressive.
>
> The main reason C/C++ continues to be used is history:
> there's a lot of code out there, and people are familiar
> with (old versions of) it. Ditto COBOL.
The main reason C and C++ continue to be used is that they are the best choice available for many tasks.

(Note that there is no "C/C++" language - they are different languages, and they have diverged more and more in both style and usage. There is plenty of cooperation between them, and the tools used are often the same, but they are different types of language.)

For low-level work - for small embedded systems, for key parts of OS's, for libraries that need maximal efficiency, and as a "lingua franca" of software - nothing comes close to C. Part of this comes precisely from its stability and resistance to change.

For mid-level work, C++ is still a top choice. And it continues to evolve and gain new features - modern C++ is not the same language as it was a decade ago.

This does not mean that C or C++ is the best choice for all the tasks for which they are used. It also does not mean we could not have better languages. But they are good enough for a great many uses, and combined with the history - the existing code, the existing developers, the existing tools - you would have to have a /much/ better language in order to replace them.
On 06/11/18 17:10, bitrex wrote:
> On 11/06/2018 10:57 AM, Phil Hobbs wrote:
>> On 11/6/18 9:38 AM, Tom Gardner wrote:
>>> On 06/11/18 14:01, Phil Hobbs wrote:
>>
>> Nah. I understand some people like template metaprogramming, but C++
>> is now more a family of languages than a single language. Few of us
>> use all of it, and most (including meself) don't use most of it.
>
> Ya but if you're not doing any meta-programming C++ is then hard to
> recommend as a modern language at all why not just use C99. or Pascal or
> some other HLL that intrinsically supports objects or object-like
> abstractions and is let's be real here a lot more pleasant to work with
> syntactically.
No, not at all. C++ is not a "generic/template programming language" any more than it is an "object oriented programming language". It is a multi-paradigm language, and can be used in many ways.

It is absolutely the case that few people understand or use all of the language. The kind of stuff that goes into the implementation of parts of the standard library is not remotely accessible to most C++ programmers, and requires a very different sort of thinking than, say, gui application programming. People can - and should - use high-level container types, strings, and anything else from the standard library without ever having to understand how it works.

I used to program a lot in Pascal - for DOS and for Windows, especially Delphi. Objects are much harder to work with in Pascal. In particular, all construction and destruction is manual and therefore easily forgotten. And your objects are all on the heap, meaning you can't have small and efficient objects. (Pascal as a language has some nicer features, however, such as ranged integer types and decent arrays. It also has horrible stuff, such as the separation of variable declarations from their use.) I would not consider picking Pascal over C++.

And it is also absolutely fine to use C++ as "A Better C" - as an imperative procedural programming language, but with better typing, better const, and better structuring (like namespaces) compared to C99 or C11.
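A sketch of that "Better C" style (the names here are invented for illustration) - scoped, typed constants and enum class instead of #define and bare ints:

#include <cstdint>

namespace motor {
    constexpr uint16_t MAX_RPM = 6000;          // typed, scoped constant

    enum class Direction { Forward, Reverse };  // no implicit int conversion

    inline void set_speed(uint16_t rpm, Direction dir)
    {
        // (stub - real code would program a timer/PWM here)
        (void)rpm; (void)dir;
    }
}

int main()
{
    motor::set_speed(motor::MAX_RPM / 2, motor::Direction::Forward);
    // motor::set_speed(3000, 1);  // compile error: 1 is not a Direction
}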
> > The "core" of C++ is a statically-typed 1970s style C-like language > that's not intrinsically that remarkable, the other half of the language > is a powerful compile-time metaprogramming language you use to fashion > the bits of the "core" into zero-overhead abstractions. >
Nope. You are missing a great deal.

One feature that C++ has - that Object Pascal, Java, Python and many other languages do not - is that the effects of object construction and destruction are controlled by the class definition, and they are executed at clearly defined times. This lets you make invariants for your classes - established in the constructor, and kept by your public methods for the class. There is no way - without "cheating" - that code using the class gets access to inconsistent or half-made objects.

And because your objects always get destructed in a predictable manner, you can use RAII for all sorts of resources. This is completely different from languages where objects get tidied up some time by a garbage collector, assuming there are no complications like circular references. With C++, you can use a class constructor/destructor to hold a lock, or as an "interrupt disabler" in an embedded system (sketched below) - you know /exactly/ when you have the lock.

Your objects in C++ can also be as efficient as possible. There is no problem with minimal objects or classes - they can be allocated in registers or optimised as well as any native types. (As an example, it is quite possible to make an Int8_t and UInt8_t class for the AVR that is more efficient than the native int8_t and uint8_t types, because it is not subject to integer promotions.)
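The "interrupt disabler" as a tiny RAII class - a sketch in which the three state functions are stand-ins for the real MCU primitives (on AVR they would wrap SREG and cli()):

#include <cstdint>

// Stand-ins for real MCU primitives; here they just model the flag.
static uint8_t interrupt_flag = 1;
static uint8_t read_interrupt_state()             { return interrupt_flag; }
static void    disable_interrupts()               { interrupt_flag = 0; }
static void    restore_interrupt_state(uint8_t s) { interrupt_flag = s; }

class InterruptGuard {
    uint8_t saved_;
public:
    InterruptGuard() : saved_(read_interrupt_state()) {
        disable_interrupts();              // critical section begins here
    }
    ~InterruptGuard() {
        restore_interrupt_state(saved_);   // and ends here, guaranteed
    }
    InterruptGuard(const InterruptGuard&) = delete;
    InterruptGuard& operator=(const InterruptGuard&) = delete;
};

volatile uint16_t tick_count;  // e.g. updated from a timer ISR

uint16_t read_ticks()
{
    InterruptGuard guard;  // interrupts off for exactly this scope
    return tick_count;     // consistent 16-bit read on an 8-bit MCU
}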
> If you're not using any of that you're missing out, for example
> templates can be used to implement static polymorphism and have many of
> the advantages of inheritance-based runtime polymorphism with none of
> the overhead.
>
Certainly templates are extremely useful. And their reputation for "code bloat", especially in small embedded systems, is both unfair and outdated. But you don't /need/ templates in order to take advantage of C++ in your programming. After all, why worry about static polymorphism compared to runtime polymorphism, when you can do a great deal of useful programming with classes without polymorphism at all?
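For completeness, the static polymorphism mentioned above is often done with the CRTP pattern - a sketch with invented driver names, where dispatch is resolved at compile time and no vtable exists:

#include <cstdio>

// CRTP: the base knows the concrete type at compile time,
// so poll() dispatches with no vtable and no indirection.
template <typename Derived>
class DriverBase {
public:
    void poll() {
        static_cast<Derived*>(this)->poll_impl();  // static dispatch
    }
};

class UartDriver : public DriverBase<UartDriver> {
public:
    void poll_impl() { std::puts("uart polled"); }
};

class SpiDriver : public DriverBase<SpiDriver> {
public:
    void poll_impl() { std::puts("spi polled"); }
};

template <typename D>
void service(DriverBase<D>& d) { d.poll(); }  // works for any driver

int main() {
    UartDriver u;
    SpiDriver s;
    service(u);
    service(s);
}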
> Virtual methods have their place even in uP development but it makes
> sense to use them sparingly particularly on the Harvard architecture
> with limited SRAM; the standard insists on copying all vtables out of
> Flash into SRAM at runtime even though in theory this should not be
> necessary.
On 06/11/18 17:43, Tom Gardner wrote:
> On 06/11/18 16:10, bitrex wrote:
>> The "core" of C++ is a statically-typed 1970s style C-like language
>> that's not intrinsically that remarkable, the other half of the
>> language is a powerful compile-time metaprogramming language you use
>> to fashion the bits of the "core" into zero-overhead abstractions.
>
> The variables are typed. The data is untyped. That's necessary to
> allow you to cast a Camel into a PaintPot.
>
>
>> Virtual methods have their place even in uP development but it makes
>> sense to use them sparingly particularly on the Harvard architecture
>> with limited SRAM; the standard insists on copying all vtables out of
>> Flash into SRAM at runtime even though in theory this should not be
>> necessary.
>
> Wow. Really? Yuck.
No, in general it is not the case. But in specific implementations it can be. In particular, Bitrex is concerned about gcc's C++ on the AVR - an 8-bit microcontroller with separate address spaces for ram and flash.

A key point is that the vtable is read as data, and thus has to be in ram rather than flash. The same limitation applies to things like strings and tables of constants - unless you go out of your way to handle them specially (using the "address space" extensions supported by the compiler), these are also copied to ram.

There is nothing stopping a more dedicated C++ implementation keeping the vtables in flash. (I don't know if there /are/ any C++ implementations for the AVR that do this - I have not looked at the details of any other tools.) And for microcontrollers with a single address space, this is simply not an issue - the vtables and other read-only data go in flash.
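For plain constant data, the "handle them specially" route with avr-gcc looks like this (assuming the usual avr-libc PROGMEM machinery; whether any AVR toolchain does the equivalent for vtables, I don't know):

#include <avr/pgmspace.h>
#include <stdint.h>

/* Stored in flash only - not copied to RAM at startup. */
const uint8_t sine_table[8] PROGMEM = {
    128, 218, 255, 218, 128, 37, 0, 37
};

uint8_t sine_lookup(uint8_t i)
{
    /* Explicit flash read; a plain sine_table[i] would read the
       same numeric address in the RAM space and return garbage. */
    return pgm_read_byte(&sine_table[i & 7]);
}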
On 06/11/2018 19:21, Tauno Voipio wrote:
> Elliott 803/503?
>
> The real programmers used Autocode or octal machine code.
>
> Been there - done that, in late 1960's.
>
No, a MINIC 1, designed by guys from Sussex(?) Uni... First used on CNC machine tools with Herbert Machine Tools in Coventry. Octal M/C too. Diode-array microprogram, 8/16K ferrite core store, 1 MHz clock. PMSL!
On 06/11/18 17:30, Tom Gardner wrote:
> On 06/11/18 15:44, David Brown wrote:
>> On 06/11/18 16:16, Tom Gardner wrote:
>>> On 06/11/18 14:24, David Brown wrote:
>>>> On 06/11/18 13:36, Tom Gardner wrote:
>>>
>>> And there you touch on one of the irresolvable dilemmas that
>>> has bedeviled C/C++ since the early 90s, viz should they be:
>>> - low-level, i.e. very close to the hardware and what it does
>>> - high-level, i.e. closer to the abstract specification of the
>>> specification.
>>>
>>> Either is perfectly valid and useful, but there is a problem
>>> when there is insufficient distinction between the two.
>>>
>>
>> This would all have been so much easier if people had learned what C is
>> and what it is for, before learning how to use it.
>>
>> C has /never/ been a "high level assembler". It has /never/ been a
>> "portable assembler". It has /never/ been an "alternative assembler".
>>
>> C was designed to cover a range of uses. It was designed to be useful
>> as a portable application and systems programming language. It was also
>> designed to be useful for low-level code, with implementation-specific
>> features, extensions and details. It was intended to be suitable for a
>> fair proportion of code that would otherwise need to be written in
>> assembler - it was intended to replace the /need/ for assembler for a
>> lot of code, not to /be/ assembler.
>>
>> C is a high level programming language - it is the lowest level high
>> level programming language. But it is not a low level language - it is
>> defined in terms of an abstract machine, not in terms of the underlying
>> hardware.
>>
>> Once you understand that, the "dilemma" disappears.
>
> I defined my use of high and low level. You are using different
> definitions, and to *that* extent it is a strawman argument.
>
You defined "low level" as "close to the hardware and what it does", and "high level" as "closer to the abstract specification". /I/ defined "low level" as "defined in terms of the underlying hardware", and "high level" as "defined in terms of an abstract machine". I really cannot see any non-negligible difference between these definitions. But there is a dilemma in how C is used - because many people don't understand that it is a high-level language that can often be used instead of a low-level language.
>
>
>> You write your code in C, using the features of the high-level abstract
>> machine. You use - if you want - non-portable features for greater
>> efficiency on a particular target. And you let the compiler worry about
>> the details of the low-level efficiency. (But where it matters, you
>> should learn to understand how your compiler works and what high-level
>> code you need to get the results you want.)
>>
>>>
>>>> The people that get in trouble with undefined behaviour like this are -
>>>> for the most part - the smart arses. It's the people who think "I'm not
>>>> going to test ((x > -5) && (x < 5)) - it's more efficient to test ((x -
>>>> 2147483644) > 0)".
>>>
>>> That used not to be a problem, long ago.
>>>
>>
>> It was never a good idea - now or long ago.
>>
>> But it is true that some types of incorrect code worked with the
>> compilers of long ago, and gave more efficient results than correct
>> code. And those incorrect versions fail on modern tools, while the
>> correct versions are more efficient than the incorrect versions ever
>> were. This makes it hard to write code that is correct, works, and is
>> efficient on both old and new tools.
>>
>> The answer, of course, is to write correct code regardless of the
>> efficiency - "correct" is always more important than "fast".
>
> I definitely agree with that attitude. Too many people don't.
>
> The next question is "what is the easiest and most productive
> way to write correct code?", and the answer to that isn't C/C++.
>
That is a bad way to phrase your answer - because your question is too broad to be useful. It is like asking "what is the best method of transport?".

/Sometimes/ the easiest and most productive way to write code is with C. /Sometimes/ it is with C++. /Sometimes/ it is with a different language. Trying to say it is never C or C++ is just as bad as saying it is always C or C++. And talking about C/C++ makes it look like you don't understand either C or C++, or where they might be the right choice.
>
>>> It is more of a problem since compilers have advanced to
>>> work around the language deficiencies, especially those
>>> related to aliasing, caches, and multiprocessors.
>>>
>>
>> What "language deficiencies" ?
>>
>>> And those points are only going to get worse now that
>>> Moore's "law" has run out of puff.
>>>
>>>
>>>>> So, despite their best intentions, it is unlikely that
>>>>> such users will fully comprehend the C/C++ tools limitations
>>>>> nor their own limitations, let alone that of other people's
>>>>> code and libraries.
>>>>>
>>>>
>>>> True. But it is rarely a problem - except for the smart-arses, who are
>>>> always a problem regardless of the language or its behaviour.
>>>>
>>>> People often talk about C being an "unsafe" language. That is, in many
>>>> ways, true - it is easy to get things wrong, and write code with bugs,
>>>> leaks, security holes, etc. But these mostly have nothing to do with
>>>> the quirks of C, or how compilers optimise. It is just thoughtless or
>>>> careless coding, because C requires more manual effort than most
>>>> languages. Buffer overflows, missing or duplicate "free" calls, etc.,
>>>> are nothing to do with misunderstandings about the oddities of the
>>>> language - it is simply not thinking carefully enough about the code,
>>>> and the results are the same regardless of optimisation settings.
>>>
>>> Ah yes, the "guns don't kill people" argument. Being right-pondian
>>> (and showered with glass when a local hero took a potshot at a
>>> Pittsburgh tram!) that has never impressed me.
>>>
>>
>> I too am right-pondian, and have never been a "guns don't kill people"
>> fan.
>>
>> But in this analogy, C is a gun. You need to learn to use it safely -
>> or you should not use it at all. (I have often said that the majority
>> of C programmers would be better off using other languages - and the
>> majority of programs written in C would be better if they were written
>> in other languages.)
>
> You have, and we agree there.
>
> But back in the real world, too many people think they are good
> drivers / gun owners / chainsaw swingers etc :(
Agreed.
>
> Now, what's the best way to achieve less danger? Just saying
> "don't do that" doesn't seem very effective.
>
Yes, I agree. I am open to suggestions here.
>
>>>>>>> I find it frustrating that you're using a tool where the maker won't
>>>>>>> tell you what the little check boxes actually do.
>>>>>>>
>>>>>>
>>>>>> Compiler manuals often (but not always) have a lot of information about
>>>>>> what the optimisations do. It is rarely possible to give complete
>>>>>> details, however - it's just too complicated to fit in user
>>>>>> documentation. gcc has summaries for all the different optimisation
>>>>>> flags - there are /many/ of them - but it doesn't cover everything.
>>>>>
>>>>> That complexity is a serious issue. If given a piece
>>>>> of code, most developers won't understand which
>>>>> combination of compiler flag must/mustn't be used.
>>>>>
>>>>
>>>> For the most part, if the code requires a particular choice of flags to
>>>> be used or omitted, the code is wrong. There are exceptions, of course
>>>> - flags to pick a particular version of the C or C++ standards are
>>>> important.
>>>
>>> It is often required in order to achieve the touted
>>> performance advantages of C/C++. Without using the
>>> optimisation flags it isn't unreasonable to think of
>>> C/C++ compilers as being /pessimising/ compilers!
>>>
>>
>> Sure, it is usually pointless using a C or C++ compiler without
>> optimisation enabled. And trying to use one for development without
>> warnings enabled is as smart as typing your code with your arms tied
>> behind your back.
>>
>> But optimisations and warnings are not there for correctness - they are
>> there for efficiency (and to help you get correct code). If the code
>> requires particular flags to be /correct/, then usually you have a
>> problem in the code. (Again, excluding obvious ones like the choice of
>> standard, or the choice of target processor details.)
>
> The problem is that injudicious choice of compiler flags
> often makes the application (cf code) fail subtly.
>
No, the problem is that people sometimes write bad code. Particular compiler flags (or particular compilers) can result in visible symptoms of the bad code, rather than the problems being hidden. But the cause is the bad code - /not/ the compiler flags, or the compiler. Picking particular compiler flags is papering over the problem.
> At that point it doesn't matter if the code or compiler
> is the problem. It reminds me of the apocryphal joke I
> first heard said with an Irish accent: "Well, sir. If
> you want to get /there/, I wouldn't start from /here/".
>
It matters for two reasons.

One is that if you ever hope to fix the situation, you need to deal with the problem - not the cover-up. As long as people write crap code, no amount of compiler limitations will give you assurance of working end products. You have to either stop these people writing code, make them write good code, or make sure that the crap code they write does not escape to bother other people.

Secondly, limiting compilers makes them less useful to the people writing good code. I /want/ signed integer overflow to be undefined behaviour in C, and I /want/ the compiler to assume it never happens - that leads to better code generation for my correct code, plus better warnings and static error checking (because the compiler can report any overflows it finds as an error, rather than as defined but pointless behaviour).
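To make the overflow point concrete: the first test below relies on undefined signed wrap-around, so the compiler may fold it to a constant (and can warn about it); the correct range check costs nothing.

#include <climits>

// Wrong: relies on signed wrap-around, which is undefined behaviour.
// An optimising compiler may reduce this function to 'return true;'.
bool will_not_overflow_wrong(int x)
{
    return x + 1 > x;
}

// Right: compare against the limit before doing the arithmetic.
bool will_not_overflow_right(int x)
{
    return x < INT_MAX;
}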
>
>>>>> Now, given that the principal actors are the users, the
>>>>> tools and the user's organisation, what is an appropriate
>>>>> behaviour for each actor?
>>>>>
>>>>> Take the time and money to become an expert?
>>>>>
>>>>> Choose simple(r) tools that do the job adequately?
>>>>>
>>>>> Ship products even though there is, despite best intentions,
>>>>> improper understanding of the tools and code?
>>>>>
>>>>> Refuse to use unnecessarily complex tools?
>>>>
>>>> Stick to what you know, and try to write code that makes sense. If you
>>>> don't know the details of how C (and your particular compiler) treats
>>>> overflows, or casts with different pointer types, or other "dark
>>>> corners" of the language, then stay out of the dark corners.
>>>>
>>>> Ask people when you are stuck.
>>>
>>> That presumes you *know* the dark corners.
>>> See your points above :(
>>>
>>
>> Programming in C requires responsibility (as does any serious
>> programming). With that, comes the requirement of a certain amount of
>> insight into your own skills. Qualified people doing useful and
>> productive work in programming should not be limited by amateurs
>> bumbling about. You don't insist that carpenters work with rubber
>> mallets because some muppet might hit his thumb - why do you think
>> programmers should be hobbled by people who don't know what they are
>> doing?
>
> Many environments have the equivalent: with HV electricity you
> work with one hand in your pocket, with industrial knives you
> wear chainmail gloves and/or have one hand tethered to the wall
> behind you!
>
> But most environments find ways avoid the need for such measures.
>
There is no way to legislate against stupidity!

The real challenge with programming is that management can't see the problems. The slaughterhouse manager can understand the advantage of chainmail gloves, and can see if his employees are not wearing them. The programming team manager doesn't understand that you need to check your pointers for null before using them, not afterwards, and has no way to confirm it.
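Spelling out the pointer example (a hypothetical function, for illustration): because the dereference comes first, an optimising compiler may assume the pointer is non-null and delete the later check.

// Wrong order: dereference, then check.
int get_length(const int* p)
{
    int len = p[0];     // dereference: compiler may now assume p != nullptr
    if (p == nullptr)   // ...so this check can be optimised away entirely
        return 0;
    return len;
}

// Right order: check first, then use.
int get_length_safe(const int* p)
{
    if (p == nullptr)
        return 0;
    return p[0];
}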
>
>>>> Test thoroughly.
>>>
>>> Ah, the "inspect quality into a product" mentality :(
>>>
>>
>> No. Testing can help find flaws - it can't find a lack of flaws. But
>> that does not mean you should skip it!
>
> Just so :)
>
>
>>>> Learn to use and love compiler warnings.
>>>
>>> Always!
>>
>> Agreement at last :-)
>
> We agree on many things, but not necessarily the means
> of achieving them.
>
I am not sure we have really discussed how to achieve improvements here.