Forums

PIC/dsPIC development

Started by bitrex November 4, 2018
On 06/11/18 19:21, Tauno Voipio wrote:
> Elliott 803/503?
>
> The real programmers used Autocode or octal machine code.
>
> Been there - done that, in late 1960's.
803 :)
- 8Kwords architectural maximum.
- 576us instruction time (~2kIPS)
- serial logic
- 35mm magnetic film for backing storage, with sprocket holes
- and an excellent influential Algol60 compiler by Tony Hoare
On 06/11/18 20:21, Gerhard Hoffmann wrote:
> Am 06.11.18 um 20:24 schrieb Tauno Voipio: >> On 6.11.18 18:11, Tom Gardner wrote: > >>> So by a very generous definition, C/C++ is catching up with >>> techniques that were known to be good 30 years ago, and >>> that have been in widespread use for 20 years. >>> Not impressive. >>> >>> The main reason C/C++ continues to be used is history: >>> there's a lot of code out there, and people are familiar >>> with (old versions of) it. Ditto COBOL. >> >> There is another reason: C is a bloody good substitute >> for assembly code. I have written a dozen of real-time >> kernels with it. >> >> Programming modern RISC processors (Sparc, ARM etc) on >> assembler is like sitting on a coil of NATO barbed wire. > > > Exactly. Yesterday, I have written some data aquisition > from an ADC into a Beaglebone Black using one of its two > PRUs. These are 200 MHz 32-bit RISCs without pipelining, > but predictable timing. I could control setup and hold > times in 5 nsec increments by bit-banging the ports, all > in C and "volatile" to make sure that all transfers are > actually executed. I would never even have tried that in > .asm. Absolutely no point to learn the instruction set > of yet another IO processor.
Last time I needed to have 4ns guaranteed timings, I also needed to have a front panel interface and USB comms running simultaneously. With a different processor I could have had 100Mb/s ethernet instead of the USB. That was surprisingly easy with the right processors, language and dev environment. The latter /guaranteed/ cycle accurate timings /without/ running the code and hoping you had spotted the worst case.
> Everything was edited and compiled locally on the BBB's > Debian Linux. :-)
This was cross-compiled in Linux/Eclipse.
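Gerhard's bit-banging-with-`volatile` approach described above can be sketched roughly like this. Everything here is illustrative: the register names, pin masks, and the simulated latches stand in for real memory-mapped PRU/GPIO registers, which on hardware would be `reinterpret_cast`s of fixed addresses from the chip's reference manual.

```cpp
#include <cassert>
#include <cstdint>

// Simulated memory-mapped registers (illustrative; on real hardware these
// would be fixed addresses). 'volatile' guarantees every access happens,
// in order, exactly as written - nothing is cached or optimised away.
static volatile std::uint32_t gpio_out = 0;  // output latch
static volatile std::uint32_t gpio_in  = 0;  // input latch (simulated)

constexpr std::uint32_t CS_PIN  = 1u << 0;   // chip select (illustrative bit)
constexpr std::uint32_t SCK_PIN = 1u << 1;   // serial clock

// Bit-bang one clock cycle and sample one data bit. On a PRU-class core
// with no pipeline, each volatile access takes a fixed, known number of
// cycles, so setup/hold times can be met by counting statements.
inline std::uint32_t clock_in_bit(std::uint32_t data_mask) {
    gpio_out |= SCK_PIN;                                   // clock high
    std::uint32_t bit = (gpio_in & data_mask) ? 1u : 0u;   // sample input
    gpio_out &= ~SCK_PIN;                                  // clock low
    return bit;
}

// Clock in an n-bit word, MSB first, under chip select.
std::uint32_t read_word(int nbits, std::uint32_t data_mask) {
    gpio_out &= ~CS_PIN;                      // assert chip select (active low)
    std::uint32_t word = 0;
    for (int i = 0; i < nbits; ++i)
        word = (word << 1) | clock_in_bit(data_mask);
    gpio_out |= CS_PIN;                       // release chip select
    return word;
}
```

The point of the sketch is the `volatile` qualifier: it forbids the compiler from merging, reordering, or deleting the port accesses, which is what makes cycle-counted timing on an unpipelined core meaningful.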
On 07/11/18 08:51, David Brown wrote:
> On 06/11/18 17:11, Tom Gardner wrote: >> On 06/11/18 15:05, David Brown wrote: >>> On 06/11/18 15:38, Tom Gardner wrote: >>>> On 06/11/18 14:01, Phil Hobbs wrote: >>>>> On 11/6/18 7:36 AM, Tom Gardner wrote: >>>>>> On 06/11/18 11:28, David Brown wrote: >>>>>>> On 05/11/18 19:44, Clive Arthur wrote: >>>>>>>> On 05/11/2018 18:01, Tom Gardner wrote: >>> >>> <snipped> >>> >>>>>> >>>>>> That complexity is a serious issue. If given a piece >>>>>> of code, most developers won't understand which >>>>>> combination of compiler flag must/mustn't be used. >>>>> >>>>> Code that works on some compiler settings and not others gives me the >>>>> heebie-jeebies. People often talk about "optimizer bugs" that really >>>>> aren't anything of the sort. Of course vaguely-defined language >>>>> features such as 'volatile' and old-timey thread support don't help. >>>>> (Things have been getting better on that front, I think.) >>>> >>>> Me too, but it is unclear to me that Things >>>> Are Getting Better. If they are it is /very/ >>>> slow and will in many cases be constrained by >>>> having to use old variants of a language. >>>> >>> >>> One thing that I can think of that is "getting better" is threading in >>> C11 and C++11. I don't see it being particularly popular in C11 - >>> people use other methods, and embedded developers are often still using >>> older C standards. C++11 gives more useful threading features, which >>> have been extended in later versions - they give more reasons to use the >>> language's threading functionality rather than external libraries. >>> >>> The other new feature (again, from C++11 and C11) is atomic support. >>> >>> These are nice, but I think that most people who understands and uses >>> "atomic" probably already understood how to use "volatile" correctly. >> >> Yes, but /very/ slowly. >> > > Agreed. I think especially in C, the multi-threading and atomic stuff > was too little, too late. Anyone wanting this is already using it via > libraries.
I'm from the time when C explicitly regarded that as a library issue, not a language issue - and explicitly avoided giving libraries the necessary primitives. At least things have moved on (a bit) since then!
> In C++, the case is a bit different as the standard C++11 threading > things give you many advantages over OS calls and macros. It is much > nicer to simply create a local lock object and then know that you hold > the lock for as long as that object is in scope, than to have to have a > call to get the lock, then have matching releases on all possible exit > points from the function (including exceptions). At least some of this > could have come earlier, of course, but language improvements in C++11 > with "auto" and better templates gave an overall better system.
Agreed. An OS call to do a context switch might be OK on a PDP11 "workstation", but beyond that it has always sucked.
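The scope-bound locking described above is standard C++11; a minimal sketch using `std::lock_guard` (the counter and mutex names are just for illustration):

```cpp
#include <mutex>

static std::mutex counter_mutex;
static int counter = 0;

// The lock is acquired in std::lock_guard's constructor and released in
// its destructor - on every exit path from the function, including
// exceptions. There are no matching unlock calls to forget.
int increment() {
    std::lock_guard<std::mutex> lock(counter_mutex);
    return ++counter;   // lock released here, however we leave scope
}
```

Compare this with the C-style pattern of pairing an explicit acquire with a release on every return path: the RAII version cannot leak the lock.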
>> The core multiprocessing concepts were known in the mid 70s, >> but quite reasonably they didn't make it into K&R C. >> >> They first made their way into systems in the early/mid 80s, >> in Cedar/Mesa and Occam. They were, unfortunately, completely >> ignored in C++, because it was addressing (and creating) >> different problems. >> > > (I don't know Cedar/Mesa, but Occam was for handling completely > different kinds of problems. It was for massively parallel systems, > often SIMD, rather than multiple independent threads.)
I don't know Cedar/Mesa either, but Occam heartily embraced parallelism whereas C/C++ was always (at best) ambivalent about it. Yes, Occam had/has its limitations, but it showed what was possible. Perhaps with the demise of "clock-rate inflation" people will pay more attention to parallel behaviour and semantics. Some languages are already going down that path.
> C and C++ lived in a serial world, and you used OS features to work with > multiple threads or synchronisation.
Agreed. Having used hardware and C for my embedded real-time applications, that attitude always frustrated me.
> Part of this was, I think, a chicken-and-egg effect of C being tightly > bound with *nix, and the *nix world being slow to use threads. In *nix, > processes are cheap and inter-process communication is easy and > efficient. There was little incentive to have threads and more > efficient (but harder) communication and synchronisation between > threads. Windows needed threads because processes were, and still are, > hugely expensive in comparison. > > Without threads in the OS, there is no need for threading support in a > language. Without threading support (and a memory model, atomics, and > synchronisation instructions), it is hard to use threads in code and > therefore no point in having them in the OS.
Agreed. A chicken and egg situation. What has irritated me is some people's belief that parallelism at the OS process level is necessary, sufficient, and the most beneficial solution.
>> They were used in mainstream languages/environments in the >> mid 90s, i.e. almost a quarter of a century ago. >> >> Some of the higher-level concepts continue to be included >> in newer languages. >> >> So by a very generous definition, C/C++ is catching up with >> techniques that were known to be good 30 years ago, and >> that have been in widespread use for 20 years. >> Not impressive. >> >> The main reason C/C++ continues to be used is history: >> there's a lot of code out there, and people are familiar >> with (old versions of) it. Ditto COBOL. > > The main reason C and C++ continue to be used is that they are the best > choice available for many tasks.
Yes, but that's like saying Ford Transits are used because they are the best choice available for many tasks - and then extending that claim to bicycle-delivery tasks, off-road tasks, and racing tasks.
> (Note that there is no "C/C++" > language - they are different, they have diverged more and more in both > style and usage. There is plenty of cooperation between them, and the > tools used are often the same, but they are different types of language.)
Agreed. I use the notation for simplicity where the two have common characteristics. I continue to be surprised at the number of people that still think C is a subset of C++.
> For low-level work - for small embedded systems, for key parts of OS's, > for libraries that need maximal efficiency, and as a "lingua francais" > of software, nothing comes close to C. Part of this comes precisely > because of its stability and resistance to change.
Now that's a far more contentious statement.
> For mid-level work, C++ is still a top choice. And it continues to > evolve and gain new features - modern C++ is not the same language as it > was a decade ago.
Apparently even Scott Meyers has given up trying to keep abreast of the C++ changes!
> This does not mean that C or C++ are the best choices for all the tasks > for which they are used. It also does not mean we could not have better > languages. But they are good enough for a great many uses, and combined > with the history - the existing code, the existing developers, the > existing tools - you have to have a /much/ better language in order to > replace them.
Agreed. There are languages that are much better in significant domains.

One of the key insights I had a couple of decades ago was that papers on C/C++ always referred to other C/C++ papers. In contrast, papers on other languages referred to many other languages[1]. The net result was that the C/C++ community was smugly self-satisfied, believing there was nothing it could learn from wider theory and practical experience.

[1] An excellent example is Gosling's Java whitepaper: http://www.stroustrup.com/1995_Java_whitepaper.pdf (Yes, that domain tends to weaken my point!)
Am 07.11.18 um 16:34 schrieb Tom Gardner:
> On 06/11/18 20:21, Gerhard Hoffmann wrote: >> Am 06.11.18 um 20:24 schrieb Tauno Voipio: >>> On 6.11.18 18:11, Tom Gardner wrote: >> >>>> So by a very generous definition, C/C++ is catching up with >>>> techniques that were known to be good 30 years ago, and >>>> that have been in widespread use for 20 years. >>>> Not impressive. >>>> >>>> The main reason C/C++ continues to be used is history: >>>> there's a lot of code out there, and people are familiar >>>> with (old versions of) it. Ditto COBOL. >>> >>> There is another reason: C is a bloody good substitute >>> for assembly code. I have written a dozen of real-time >>> kernels with it. >>> >>> Programming modern RISC processors (Sparc, ARM etc) on >>> assembler is like sitting on a coil of NATO barbed wire. >> >> >> Exactly. Yesterday, I have written some data aquisition >> from an ADC into a Beaglebone Black using one of its two >> PRUs. These are 200 MHz 32-bit RISCs without pipelining, >> but predictable timing. I could control setup and hold >> times in 5 nsec increments by bit-banging the ports, all >> in C and "volatile" to make sure that all transfers are >> actually executed. I would never even have tried that in >> .asm. Absolutely no point to learn the instruction set >> of yet another IO processor. > > Last time I needed to do have 4ns guaranteed timings, I > also needed to have a front panel interface and USB > comms running simultaneously. With a different processor > I could have had 100Mb/s ethernet instead of the USB.
The Ethernet is taken care of by the Linux stack on the 1GHz ARM; the PRU has nothing to do but watch the ADC. The PRU and the ARM talk to each other via a fixed 16 KB of shared RAM, most of which is used as a ping-pong buffer for ADC data. That also removes the need to understand how to set up DMA transfers to huge buffers in the ARM's virtual address space that may move around because of paging. I don't want to dive into kernel drivers. PRU programming is on the bare metal. I can simply open port 5025 on 192.168.178.111 on my laptop and stream the data from the 24 bit/1MSPS LTC2500-32 ADC. Code for both processors together is just a few pages of C.
> That was surprisingly easy with the right processors, > language and dev environment. The latter /guaranteed/ > cycle accurate timings /without/ running the code and > hoping you had spotted the worst case.
The BBB tries to be everybody's darling, so almost all of the work was understanding the IO-pin multiplexer I/F to Linux and undoing the default setup for things I don't need and don't want, but which steal pins I do need.

cheers, Gerhard
On 11/07/2018 04:43 AM, David Brown wrote:
> On 06/11/18 17:10, bitrex wrote: >> On 11/06/2018 10:57 AM, Phil Hobbs wrote: >>> On 11/6/18 9:38 AM, Tom Gardner wrote: >>>> On 06/11/18 14:01, Phil Hobbs wrote: > >>> >>> Nah. I understand some people like template metaprogramming, but C++ >>> is now more a family of languages than a single language. Few of us >>> use all of it, and most (including meself) don't use most of it. >> >> Ya but if you're not doing any meta-programming C++ is then hard to >> recommend as a modern language at all why not just use C99. or Pascal or >> some other HLL that intrinsically supports objects or object-like >> abstractions and is let's be real here a lot more pleasant to work with >> syntactically. > > No, not at all. C++ is not a "generic/template programming language" > any more than it is an "object oriented programming language". It is a > multi-paradigm language, and can be used in many ways. > > It is absolutely the case that few people understand or use all of the > language. The kind of stuff that goes into the implementation of parts > of the standard library is not remotely accessible to most C++ > programmers, and requires a very different sort of thinking than, say, > gui application programming. People can - and should - use high-level > container types, strings, and anything else from the standard library > without ever having to understand how it works. > > I used to program a lot in Pascal - for DOS and for Windows, especially > Delphi. Objects are much harder to work with in Pascal. In particular, > all construction and destruction is manual and therefore easily > forgotten. And your objects are all on the heap, meaning you can't have > small and efficient objects. (Pascal as a language has some nicer > features, however, such as ranged integer types and decent arrays. It > also has horrible stuff, such as the separation of variable declarations > from their use.) I would not consider picking Pascal over C++. 
> > And it is also absolutely fine to use C++ as "A Better C" - as an > imperative procedural programming language, but with better typing, > better const, and better structuring (like namespaces) compared to C99 > or C11. > > >> >> The "core" of C++ is a statically-typed 1970s style C-like language >> that's not intrinsically that remarkable, the other half of the language >> is a powerful compile-time metaprogramming language you use to fashion >> the bits of the "core" into zero-overhead abstractions. >> > > Nope. You are missing a great deal. > > One feature that C++ has - that Object Pascal, Java, Python and many > other languages do not - is that the effects of object construction and > destruction are controlled by the class definition, and they are > executed at clearly defined times. This lets you make invariants for > your classes - established in the constructor, and kept by your public > methods for the class. There is no way - without "cheating" - that code > using the class gets access to inconsistent or half-made objects. > > And because your objects always get destructed in a predictable manner, > you can use RAII for all sorts of resources. This is completely > different from languages where objects get tidied up some time by a > garbage collector, assuming there are no complications like circular > references. With C++, you can use a class constructor/destructor to > hold a lock, or as an "interrupt disabler" in an embedded system - you > know /exactly/ when you have the lock.
I guess the perspective I'm coming at it from is that yeah, RAII is definitely a strength of C++ - in a desktop/managed-memory environment. On a bare-metal platform with e.g. 2k of SRAM I almost never use anything but the default destructor, unless a custom allocator is being used, which is overkill for most tasks. Any memory resources needed for a particular class's operation are acquired at program start - sure, often via the constructor - or shortly thereafter, and never freed. If you have something that allocates and you end up calling "free" in some destructor, even indirectly via an object-holding-an-object by value which itself frees a resource in its destructor, and you inadvertently do that repeatedly, ya dead!

Generalized copy constructors and copy assignment operators are mostly useless for classes which acquire heap memory resources in that environment. Yes, it's a violation of the rule that objects which acquire resources must have a well-defined copy constructor or copy assignment operator, but what can you do? You can't just go around copying and assigning/freeing new heap memory willy-nilly. Move semantics in modern C++ do make the situation somewhat easier to cope with.

All this is easy to avoid if you have "objects" which hold everything by-value and never actually acquire any persistent resources: everything of transient lifetime gets allocated on the stack, and whatever persistent state you have (most programs which do anything useful need some kind) is stored in some heap globals or their effective equivalent. There's a term for that: it's working with structs and functions, like in C. You're just doing procedural/functional programming in a dress, which is fine I guess if it makes your life easier, but I don't feel it really leverages C++'s main advantages.
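The "no willy-nilly copying" policy described above can be stated directly in the class itself: delete the copy operations and allow only moves, so an accidental duplicate owner (and hence a double free) won't even compile. A minimal sketch, assuming a hypothetical static pool as the "acquired at program start, never freed" resource:

```cpp
#include <cstddef>
#include <utility>

// A buffer handle owning a slot in a static pool - a stand-in for
// resources acquired once at startup on a small embedded target.
class PoolBuffer {
public:
    explicit PoolBuffer(std::size_t n) : data_(pool_), size_(n) {}

    // Copying would mean two owners of one slot and a double-release
    // risk. Deleting these turns the mistake into a compile error.
    PoolBuffer(const PoolBuffer&) = delete;
    PoolBuffer& operator=(const PoolBuffer&) = delete;

    // Moves transfer ownership; the moved-from handle is left empty.
    PoolBuffer(PoolBuffer&& other) noexcept
        : data_(other.data_), size_(other.size_) {
        other.data_ = nullptr;
        other.size_ = 0;
    }

    std::size_t size() const { return size_; }
    bool owns() const { return data_ != nullptr; }

private:
    static unsigned char pool_[64];  // illustrative fixed pool
    unsigned char* data_;
    std::size_t size_;
};

unsigned char PoolBuffer::pool_[64];
```

With this shape, ownership can still be handed from one scope to another via `std::move`, but there is never more than one live owner.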
IMO it's because C++ allows you to add "real" OOP on an embedded platform, with its template meta-programming capabilities leveraged to implement modularity, generic programming, and customizable "plug-in" behavior. Okay, as you say there are probably syntactic-sugar-like advantages to using C++ over C, as in the lock-acquiring example, but it's not anything that can't be accomplished with careful programming in C just as well. It's sugar, not cake.
> Your objects in C++ can also be as efficient as possible. There is no > problem with minimal objects or classes - they can be allocated in > registers or optimised as well as any native types. (As an example, it > is quite possible to make an Int8_t and UInt8_t class for the AVR that > is more efficient than the native int8_t and uint8_t types, because it > is not subject to integer promotions.)
Can you give an example of how that works in practice? I don't see how you could define a custom "char" type that performs any better than the native "char" type. What generalized runtime overhead is there to the "existential" capability of integer promotions in the language? That stuff is all figured out at compile-time; I don't see how any particular operation on a char type with another char type which doesn't involve any promotions, at the particular instance the operation occurs, could generate any more instructions than its C equivalent. If you don't want the overhead of a promotion to a different type at some other instance, then just don't do it!
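A sketch of what the earlier claim may have in mind (the class shape here is my guess, not David's actual code): with native `uint8_t`, the operands of `a + b` are promoted to `int` - 16-bit on AVR - by the usual arithmetic conversions, and the compiler must then prove the high byte is dead before it can drop the 16-bit work. A wrapper class can define its arithmetic to stay in 8 bits, so there is nothing to prove; whether it actually beats the native type depends on the compiler.

```cpp
#include <cstdint>

// Wrapper whose arithmetic never leaves 8 bits. Native uint8_t operands
// are promoted to int before '+'; here operator+ is defined to wrap
// modulo 256 directly, with no intermediate wider type in the language
// semantics.
class UInt8 {
public:
    constexpr explicit UInt8(std::uint8_t v) : v_(v) {}
    constexpr std::uint8_t value() const { return v_; }

    friend constexpr UInt8 operator+(UInt8 a, UInt8 b) {
        return UInt8(static_cast<std::uint8_t>(a.v_ + b.v_));
    }
    friend constexpr bool operator==(UInt8 a, UInt8 b) {
        return a.v_ == b.v_;
    }

private:
    std::uint8_t v_;
};
```

For example, `UInt8(200) + UInt8(100)` is defined to wrap to 44, with no 16-bit intermediate anywhere in the abstract machine.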
>> If you're not using any of that you're missing out, for example >> templates can be used to implement static polymorphism and have many of >> the advantages of inheritance-based runtime polymorphism with none of >> the overhead. >> > > Certainly templates are extremely useful. And their reputation for > "code bloat", especially in small embedded systems, is both unfair and > outdated. But you don't /need/ templates in order to take advantage of > C++ in your programming. After all, why worry about static polymorphism > compared to runtime polymorphism, when you can do a great deal of useful > programming with classes without polymorphism at all? > >> Virtual methods have their place even in uP development but it makes >> sense to use them sparingly particularly on the Harvard architecture >> with limited SRAM; the standard insists on copying all vtables out of >> Flash into SRAM at runtime even though in theory this should not be >> necessary. >
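The static polymorphism mentioned above is often implemented with CRTP (the curiously recurring template pattern): the "base" calls into the derived class through a compile-time cast, so dispatch is resolved statically with no vtable in flash or SRAM. A minimal sketch (driver names and the transfer operation are invented for illustration):

```cpp
// CRTP: Driver<D> forwards to D's implementation directly - resolved at
// compile time, fully inlineable, no vtable anywhere.
template <typename Derived>
class Driver {
public:
    int transfer(int byte) {
        return static_cast<Derived*>(this)->transfer_impl(byte);
    }
};

class LoopbackDriver : public Driver<LoopbackDriver> {
public:
    int transfer_impl(int byte) { return byte; }        // echoes input
};

class InvertingDriver : public Driver<InvertingDriver> {
public:
    int transfer_impl(int byte) { return byte ^ 0xFF; } // inverts bits
};

// Generic code over any driver - still no runtime dispatch.
template <typename D>
int send(Driver<D>& d, int byte) { return d.transfer(byte); }
```

The trade-off against virtual functions: each driver type produces its own instantiation of the generic code, and the concrete type must be known at compile time - usually true for on-chip peripherals.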
On 7.11.18 17:26, Tom Gardner wrote:
> On 06/11/18 19:21, Tauno Voipio wrote: >> Elliott 803/503? >> >> The real programmers used Autocode or octal machine code. >> >> Been there - done that, in late 1960's. > > 803 :) > - 8Kwords architectural maximum. > - 576us instruction time (~2kIPS) > - serial logic > - 35mm magnetic film for backing storage, with > sprocket holes > - and an excellent influential Algol60 compiler by Tony Hoare
And a nice speaker input from the top bit of the instruction register. One quickly learned the different gurgles and beeps from it, so it was possible to have coffee ( -- oops, tea -- ) in another room and still be aware of what was going on.

It had delay-line registers (nickel spirals) which made the CPU sensitive to temperature and clock frequency variations. The logic was done with ferrite memory cores, with fan-in and fan-out of 3 (IIRC). There was a three-phase clocking scheme to keep the data flowing in the proper direction.

-- -TV
On 7.11.18 17:34, Tom Gardner wrote:
> On 06/11/18 20:21, Gerhard Hoffmann wrote: >> Am 06.11.18 um 20:24 schrieb Tauno Voipio: >>> On 6.11.18 18:11, Tom Gardner wrote: >> >>>> So by a very generous definition, C/C++ is catching up with >>>> techniques that were known to be good 30 years ago, and >>>> that have been in widespread use for 20 years. >>>> Not impressive. >>>> >>>> The main reason C/C++ continues to be used is history: >>>> there's a lot of code out there, and people are familiar >>>> with (old versions of) it. Ditto COBOL. >>> >>> There is another reason: C is a bloody good substitute >>> for assembly code. I have written a dozen of real-time >>> kernels with it. >>> >>> Programming modern RISC processors (Sparc, ARM etc) on >>> assembler is like sitting on a coil of NATO barbed wire. >> >> >> Exactly. Yesterday, I have written some data aquisition >> from an ADC into a Beaglebone Black using one of its two >> PRUs. These are 200 MHz 32-bit RISCs without pipelining, >> but predictable timing. I could control setup and hold >> times in 5 nsec increments by bit-banging the ports, all >> in C and "volatile" to make sure that all transfers are >> actually executed. I would never even have tried that in >> .asm. Absolutely no point to learn the instruction set >> of yet another IO processor. > > Last time I needed to do have 4ns guaranteed timings, I > also needed to have a front panel interface and USB > comms running simultaneously. With a different processor > I could have had 100Mb/s ethernet instead of the USB. > > That was surprisingly easy with the right processors, > language and dev environment. The latter /guaranteed/ > cycle accurate timings /without/ running the code and > hoping you had spotted the worst case. > > >> Everything was edited and compiled locally on the BBB's >> Debian Linux. :-) > This was cross-compiled in Linux/Eclipse.
Eclipse is just a very blown-up editor, not a compiler, though there's a compiler for Java. This does not mean that I did not like it - it is the best tool since hand punch and paste rings for paper tape. -- -TV
On 7.11.18 11:43, David Brown wrote:
> On 06/11/18 17:10, bitrex wrote: >> On 11/06/2018 10:57 AM, Phil Hobbs wrote: >>> On 11/6/18 9:38 AM, Tom Gardner wrote: >>>> On 06/11/18 14:01, Phil Hobbs wrote: > >>> >>> Nah. I understand some people like template metaprogramming, but C++ >>> is now more a family of languages than a single language. Few of us >>> use all of it, and most (including meself) don't use most of it. >> >> Ya but if you're not doing any meta-programming C++ is then hard to >> recommend as a modern language at all why not just use C99. or Pascal or >> some other HLL that intrinsically supports objects or object-like >> abstractions and is let's be real here a lot more pleasant to work with >> syntactically. > > No, not at all. C++ is not a "generic/template programming language" > any more than it is an "object oriented programming language". It is a > multi-paradigm language, and can be used in many ways. > > It is absolutely the case that few people understand or use all of the > language. The kind of stuff that goes into the implementation of parts > of the standard library is not remotely accessible to most C++ > programmers, and requires a very different sort of thinking than, say, > gui application programming. People can - and should - use high-level > container types, strings, and anything else from the standard library > without ever having to understand how it works. > > I used to program a lot in Pascal - for DOS and for Windows, especially > Delphi. Objects are much harder to work with in Pascal. In particular, > all construction and destruction is manual and therefore easily > forgotten. And your objects are all on the heap, meaning you can't have > small and efficient objects. (Pascal as a language has some nicer > features, however, such as ranged integer types and decent arrays. It > also has horrible stuff, such as the separation of variable declarations > from their use.) I would not consider picking Pascal over C++. 
> > And it is also absolutely fine to use C++ as "A Better C" - as an > imperative procedural programming language, but with better typing, > better const, and better structuring (like namespaces) compared to C99 > or C11. > > >> >> The "core" of C++ is a statically-typed 1970s style C-like language >> that's not intrinsically that remarkable, the other half of the language >> is a powerful compile-time metaprogramming language you use to fashion >> the bits of the "core" into zero-overhead abstractions. >> > > Nope. You are missing a great deal. > > One feature that C++ has - that Object Pascal, Java, Python and many > other languages do not - is that the effects of object construction and > destruction are controlled by the class definition, and they are > executed at clearly defined times. This lets you make invariants for > your classes - established in the constructor, and kept by your public > methods for the class. There is no way - without "cheating" - that code > using the class gets access to inconsistent or half-made objects. > > And because your objects always get destructed in a predictable manner, > you can use RAII for all sorts of resources. This is completely > different from languages where objects get tidied up some time by a > garbage collector, assuming there are no complications like circular > references. With C++, you can use a class constructor/destructor to > hold a lock, or as an "interrupt disabler" in an embedded system - you > know /exactly/ when you have the lock. > > Your objects in C++ can also be as efficient as possible. There is no > problem with minimal objects or classes - they can be allocated in > registers or optimised as well as any native types. (As an example, it > is quite possible to make an Int8_t and UInt8_t class for the AVR that > is more efficient than the native int8_t and uint8_t types, because it > is not subject to integer promotions.) 
> >> If you're not using any of that you're missing out, for example >> templates can be used to implement static polymorphism and have many of >> the advantages of inheritance-based runtime polymorphism with none of >> the overhead. >> > > Certainly templates are extremely useful. And their reputation for > "code bloat", especially in small embedded systems, is both unfair and > outdated. But you don't /need/ templates in order to take advantage of > C++ in your programming. After all, why worry about static polymorphism > compared to runtime polymorphism, when you can do a great deal of useful > programming with classes without polymorphism at all? > >> Virtual methods have their place even in uP development but it makes >> sense to use them sparingly particularly on the Harvard architecture >> with limited SRAM; the standard insists on copying all vtables out of >> Flash into SRAM at runtime even though in theory this should not be >> necessary.
C++ is a sorry bastard, neither an object language nor a procedural one. For object uses, there are far better languages than C++. C++ is far too opaque for small embedded processor use. There are seemingly innocent constructs that blow up the code in surprising ways. -- -TV
On 11/07/2018 03:47 PM, Tauno Voipio wrote:
> On 7.11.18 11:43, David Brown wrote: >> On 06/11/18 17:10, bitrex wrote: >>> On 11/06/2018 10:57 AM, Phil Hobbs wrote: >>>> On 11/6/18 9:38 AM, Tom Gardner wrote: >>>>> On 06/11/18 14:01, Phil Hobbs wrote: >> >>>> >>>> Nah. I understand some people like template metaprogramming, but C++ >>>> is now more a family of languages than a single language. Few of us >>>> use all of it, and most (including meself) don't use most of it. >>> >>> Ya but if you're not doing any meta-programming C++ is then hard to >>> recommend as a modern language at all why not just use C99. or Pascal or >>> some other HLL that intrinsically supports objects or object-like >>> abstractions and is let's be real here a lot more pleasant to work with >>> syntactically. >> >> No, not at all. C++ is not a "generic/template programming language" >> any more than it is an "object oriented programming language". It is a >> multi-paradigm language, and can be used in many ways. >> >> It is absolutely the case that few people understand or use all of the >> language. The kind of stuff that goes into the implementation of parts >> of the standard library is not remotely accessible to most C++ >> programmers, and requires a very different sort of thinking than, say, >> gui application programming. People can - and should - use high-level >> container types, strings, and anything else from the standard library >> without ever having to understand how it works.
>> >> I used to program a lot in Pascal - for DOS and for Windows, especially >> Delphi. Objects are much harder to work with in Pascal. In particular, >> all construction and destruction is manual and therefore easily >> forgotten. And your objects are all on the heap, meaning you can't have >> small and efficient objects. (Pascal as a language has some nicer >> features, however, such as ranged integer types and decent arrays. It >> also has horrible stuff, such as the separation of variable declarations >> from their use.) I would not consider picking Pascal over C++. >> >> And it is also absolutely fine to use C++ as "A Better C" - as an >> imperative procedural programming language, but with better typing, >> better const, and better structuring (like namespaces) compared to C99 >> or C11. >> >> >>> >>> The "core" of C++ is a statically-typed 1970s style C-like language >>> that's not intrinsically that remarkable, the other half of the language >>> is a powerful compile-time metaprogramming language you use to fashion >>> the bits of the "core" into zero-overhead abstractions. >>> >> >> Nope. You are missing a great deal. >> >> One feature that C++ has - that Object Pascal, Java, Python and many >> other languages do not - is that the effects of object construction and >> destruction are controlled by the class definition, and they are >> executed at clearly defined times. This lets you make invariants for >> your classes - established in the constructor, and kept by your public >> methods for the class. There is no way - without "cheating" - that code >> using the class gets access to inconsistent or half-made objects.
>> >> And because your objects always get destructed in a predictable manner, >> you can use RAII for all sorts of resources. This is completely >> different from languages where objects get tidied up some time by a >> garbage collector, assuming there are no complications like circular >> references. With C++, you can use a class constructor/destructor to >> hold a lock, or as an "interrupt disabler" in an embedded system - you >> know /exactly/ when you have the lock. >> >> Your objects in C++ can also be as efficient as possible. There is no >> problem with minimal objects or classes - they can be allocated in >> registers or optimised as well as any native types. (As an example, it >> is quite possible to make an Int8_t and UInt8_t class for the AVR that >> is more efficient than the native int8_t and uint8_t types, because it >> is not subject to integer promotions.) >> >>> If you're not using any of that you're missing out, for example >>> templates can be used to implement static polymorphism and have many of >>> the advantages of inheritance-based runtime polymorphism with none of >>> the overhead. >>> >> >> Certainly templates are extremely useful. And their reputation for >> "code bloat", especially in small embedded systems, is both unfair and >> outdated. But you don't /need/ templates in order to take advantage of >> C++ in your programming. After all, why worry about static polymorphism >> compared to runtime polymorphism, when you can do a great deal of useful >> programming with classes without polymorphism at all? >> >>> Virtual methods have their place even in uP development but it makes >>> sense to use them sparingly particularly on the Harvard architecture >>> with limited SRAM; the standard insists on copying all vtables out of >>> Flash into SRAM at runtime even though in theory this should not be >>> necessary. > > > C++ is a sorry bastard, not an object language nor a procedural one.
> > For object uses, there are far better languages than C++. > > C++ is far too opaque for small embedded processor use. There are > seemingly innocent constructs that blow up the code in surprising > ways. >
Nah. C++ is eminently usable for small devices and has virtually no overhead in practice, compared to what you'd have to write in C to achieve the same flexibility, ease-of-use, and performance. With modern code-analysis tools, or even plug-ins that let you monitor the compiler's asm output in real time as you write, there's not much excuse for not knowing which constructs are "innocent" and which are not.
On 07/11/18 20:41, Tauno Voipio wrote:
> On 7.11.18 17:34, Tom Gardner wrote: >>> Everything was edited and compiled locally on the BBB's >>> Debian Linux. :-) >> This was cross-compiled in Linux/Eclipse. > > Eclipse is just a very blown-up editor, not a compiler, though > there's a compiler for Java. This does not mean that I did not > like it - it is the best tool since hand punch and paste rings > for paper tape.
Eclipse is a framework for plugins. Plugins include compilers for many, many languages - and much more. Where it exceeds an editor's capabilities is in its plugins that allow code introspection and analysis across /all/ the files in its projects.

I never used hand-punches, though there were a few manual hand-held punch card machines. The main benefit of those was to prepare me for the multiple-keys-pressed-simultaneously chords used to good effect in emacs :)

OTOH, I did use 5-channel paper tape. I never dreamed that letter-shift and figure-shift keys would rematerialise (as the shift, number, and control-character keyboards on modern "smart"phones).

Plus ça change, plus c'est la même chose - or "history doesn't repeat, but it does rhyme".