Electronics-Related.com
Forums

PIC/dsPIC development

Started by bitrex November 4, 2018
Tauno Voipio wrote
>Eclipse is just a very blown-up editor, not a compiler, though
>there's a compiler for Java. This does not mean that I did not
>like it - it is the best tool since hand punch and paste rings
>for paper tape.
It is just more bloat. I use rxvts and 'joe' as editor. Switch with ctrl-cursor between about 6 rxvts with documentation (datasheets), source, compile window, test window, say ssh to other stuff, world wide too, and 'joe' beats any editor for speed.

What Linux? I have several versions of Linux, but the ones I use for development all run X and the fvwm window manager. The interface has not changed in > 20 years. The rest is bloat, crap, sales, make believe, kids stuff, as is 'object oriented' programming. There is plenty of code on my site to see what it does. C++, Java, and the snake language, plus a whole lot of other attempts to recreate what already existed, all are crap; C++ is a crime against humanity. Recreated by those who were too lazy or stupid to learn the basics. Now look at the world: one big project after the other done in those shit languages, and system after system fails. But it sells. Stupid as stupid goes, stupid for the stupid. Need a 2 GHz quad core to read an email that takes a megabyte or 10 for 10 lines of ASCII text to convey... nothing. Many of you will disagree, but at least ye know where I stand.

As to the asm versus whatever malformed dialect your speech disabilities allow you to utter nonsense in: this was about PIC asm (OP), and that is actually a very clever instruction set. It can be learned in a few minutes, does not need all that giggle-byte-size slow crap, and IMNSHO is often the faster way, as in the case of PIC it is about the peripherals. Maybe those who oppose it should try programming some FPGA too some day... It is just hardware. I like the bit set and bit test instructions in the 18F, the organization of the chip, the reliability, the availability, the cost, and, as somebody mentioned, the robustness. Only ever damaged one: broke off a pin, inserted into the programmer one too many times.
It is very easy to program in 18F asm - the basic instruction set, not even the advanced one. This was a fun project: http://panteltje.com/panteltje/pic/scope_pic/ written in a few afternoons or a weekend, in asm, and a first. A bit of math is no problem in PIC asm. The hard part was using the bits twice... limited RAM space.

As to ARM, yes, I do not program raspberries in asm, but in C. I had a good book on the ARM instruction set, lost it somewhere, already back in the nineties of last century. I trust gcc on that. OTOH on a multitasker there are other issues such as task switching; if you want to go fast, then you will need extra hardware, buffers, like here: http://panteltje.com/panteltje/raspberry_pi_dvb-s_transmitter/ Does this help anyone? I dunno, but design something, publish it; blah blah is worth nothing.
On 08/11/18 00:45, Tom Gardner wrote:
> On 07/11/18 20:41, Tauno Voipio wrote:
>> On 7.11.18 17:34, Tom Gardner wrote:
>>>> Everything was edited and compiled locally on the BBB's
>>>> Debian Linux. :-)
>>> This was cross-compiled in Linux/Eclipse.
>>
>> Eclipse is just a very blown-up editor, not a compiler, though
>> there's a compiler for Java. This does not mean that I did not
>> like it - it is the best tool since hand punch and paste rings
>> for paper tape.
>
> Eclipse is a framework for plugins.
I'd rather describe it as an extensible IDE - project manager and editor for multiple programming languages. It may be that the editor is implemented as a plugin, but since it is part of the base Eclipse package, it doesn't really make sense to imagine Eclipse without it. Many other things are found as plugins, such as debugging, version control integration, additional languages, etc.
> > Plugins include compilers for many, many languages - and > much more.
Plugins for Eclipse do /not/ include compilers. It has plugins for /support/ for compilers - such as project settings pages where you can choose compiler and linker options for particular compilers, and error parsers so that when the project is built, there is a list of generated errors and easy navigation to the relevant source code lines. For many microcontrollers there can be plugins for debugging, flash programming, etc. But there are no compilers in Eclipse.
> > Where it exceeds an editors capabilities is in its plugins > that allow code introspection and analysis across /all/ the > files in its projects. >
On 07/11/18 16:56, Tom Gardner wrote:
> On 07/11/18 08:51, David Brown wrote: >> On 06/11/18 17:11, Tom Gardner wrote: >>> On 06/11/18 15:05, David Brown wrote: >>>> On 06/11/18 15:38, Tom Gardner wrote: >>>>> On 06/11/18 14:01, Phil Hobbs wrote: >>>>>> On 11/6/18 7:36 AM, Tom Gardner wrote: >>>>>>> On 06/11/18 11:28, David Brown wrote: >>>>>>>> On 05/11/18 19:44, Clive Arthur wrote: >>>>>>>>> On 05/11/2018 18:01, Tom Gardner wrote: >>>> >>>> <snipped> >>>> >>>>>>> >>>>>>> That complexity is a serious issue. If given a piece >>>>>>> of code, most developers won't understand which >>>>>>> combination of compiler flag must/mustn't be used. >>>>>> >>>>>> Code that works on some compiler settings and not others gives me the >>>>>> heebie-jeebies. People often talk about "optimizer bugs" that really >>>>>> aren't anything of the sort. Of course vaguely-defined language >>>>>> features such as 'volatile' and old-timey thread support don't help. >>>>>> (Things have been getting better on that front, I think.) >>>>> >>>>> Me too, but it is unclear to me that Things >>>>> Are Getting Better. If they are it is /very/ >>>>> slow and will in many cases be constrained by >>>>> having to use old variants of a language. >>>>> >>>> >>>> One thing that I can think of that is "getting better" is threading in >>>> C11 and C++11. I don't see it being particularly popular in C11 - >>>> people use other methods, and embedded developers are often still using >>>> older C standards. C++11 gives more useful threading features, which >>>> have been extended in later versions - they give more reasons to use >>>> the >>>> language's threading functionality rather than external libraries. >>>> >>>> The other new feature (again, from C++11 and C11) is atomic support. >>>> >>>> These are nice, but I think that most people who understands and uses >>>> "atomic" probably already understood how to use "volatile" correctly. >>> >>> Yes, but /very/ slowly. >>> >> >> Agreed. 
I think especially in C, the multi-threading and atomic stuff >> was too little, too late. Anyone wanting this is already using it via >> libraries. > > I'm from the time when C explicitly regarded that as a > library issue not a language issue - and explicitly avoided > giving libraries the necessary primitives. > > At least things have moved on (a bit) since then! >
In small systems embedded programming, most RTOS's are written in C90, with a bit of implementation-specific (compiler and target) parts. No, things have not moved on - not nearly fast enough for my liking, at least in some areas. A lot of software like this is written to be portable, and to support the lowest common denominator - and that means ancient C90 compilers on brain-dead 8-bit architectures. Embedded development - both hardware and software - is often a curious mixture of wanting the latest and greatest, fastest, smallest and cheapest, while simultaneously wanting tried and tested technology that has proven its worth over a decade or two.
> > >> In C++, the case is a bit different as the standard C++11 threading >> things give you many advantages over OS calls and macros. It is much >> nicer to simply create a local lock object and then know that you hold >> the lock for as long as that object is in scope, than to have to have a >> call to get the lock, then have matching releases on all possible exit >> points from the function (including exceptions). At least some of this >> could have come earlier, of course, but language improvements in C++11 >> with "auto" and better templates gave an overall better system. > > Agreed. An OS call to do a context switch might be OK > on a PDP11 "workstation", but beyond that it has always > sucked. >
Do you mean cooperative multi-tasking as distinct from pre-emptive multi-tasking? Each method has its advantages and disadvantages.

Multi-tasking and context switches are always going to boil down to OS calls in the end, but the interface can make a big difference. In C11, using a mutex to protect some data might look like this (ignoring errors):

mtx_t lock;
uint64_t counter;

void init(void)  { mtx_init(&lock, mtx_plain); }
void finit(void) { mtx_destroy(&lock); }

void increment(void) {
    mtx_lock(&lock);
    counter++;
    mtx_unlock(&lock);
}

You need manual creation and destruction of the mutex, and manual lock and unlock. Good luck keeping track if you need to pass a locked mutex on to other functions. C code using OS interfaces directly, rather than C11, will look very similar. In C++, you have:

std::mutex lock;
uint64_t counter;

void increment(void) {
    std::lock_guard<std::mutex> guard(lock);
    counter++;
}

As long as the lock_guard is in scope, the lock is held. These kinds of things make it easier to get code right, and harder to get it wrong.
> > >>> The core multiprocessing concepts were known in the mid 70s, >>> but quite reasonably they didn't make it into K&R C. >>> >>> They first made their way into systems in the early/mid 80s, >>> in Cedar/Mesa and Occam. They were, unfortunately, completely >>> ignored in C++, because it was addressing (and creating) >>> different problems. >>> >> >> (I don't know Cedar/Mesa, but Occam was for handling completely >> different kinds of problems. It was for massively parallel systems, >> often SIMD, rather than multiple independent threads.) > > I don't know Cedar/Mesa either, but Occam heartily embraced > parallelism whereas C/C++ was always (at best) ambivalent > about it. > > Yes, Occam had/has its limitations, but it showed what was > possible. Perhaps with the demise of "clock-rate inflation" > people will pay more attention to parallel behaviour and > semantics. Some languages are already going down that path. > > >> C and C++ lived in a serial world, and you used OS features to work with >> multiple threads or synchronisation. > > Agreed. Having used hardware and C for my embedded real-time > applications, that attitude always frustrated me. > > > >> Part of this was, I think, a chicken-and-egg effect of C being tightly >> bound with *nix, and the *nix world being slow to use threads. In *nix, >> processes are cheap and inter-process communication is easy and >> efficient. There was little incentive to have threads and more >> efficient (but harder) communication and synchronisation between >> threads. Windows needed threads because processes were, and still are, >> hugely expensive in comparison. >> >> Without threads in the OS, there is no need for threading support in a >> language. Without threading support (and a memory model, atomics, and >> synchronisation instructions), it is hard to use threads in code and >> therefore no point in having them in the OS. > > Agreed. A chicken and egg situation. 
> > What has irritated me is some people's belief that > parallelism at the OS process level is necessary, > sufficient, and the most beneficial solution. >
I don't know quite what you mean here. Some languages have support for parallelism, but it is done in coordination with the OS. The underlying mechanism might be processes, threads, or fibres, or even different computer nodes, but the OS needs to be in control of where things run. Languages with parallel support just make it all simpler to use. So when you use C++17 and say that a search through a large container is to use the "parallel execution policy", it is the OS that handles the threads and the C++17 library that splits the search across the threads. But if you mean processes vs. threads, then for many tasks it is easier to keep the interaction between the parts clean and clear, lowering the risks of deadlocks and other problems, with processes than threads. This is primarily because the more efficient, but riskier, synchronisation and sharing methods available in threads are not possible or not as simple with processes.
> >>> They were used in mainstream languages/environments in the >>> mid 90s, i.e. almost a quarter of a century ago. >>> >>> Some of the higher-level concepts continue to be included >>> in newer languages. >>> >>> So by a very generous definition, C/C++ is catching up with >>> techniques that were known to be good 30 years ago, and >>> that have been in widespread use for 20 years. >>> Not impressive. >>> >>> The main reason C/C++ continues to be used is history: >>> there's a lot of code out there, and people are familiar >>> with (old versions of) it. Ditto COBOL. >> >> The main reason C and C++ continue to be used is that they are the best >> choice available for many tasks. > > Yes, but that's like saying Ford Transits are used because > they are the best choice available for tasks. And then > extending that to bicycle-delivery tasks, off-road tasks, > and racing tasks. >
No, that is not a fair comparison. There are rarely any uses for which a Ford Transit is significantly better than any other similarly sized van. The appropriate comparison is that the /vans/ are used because they are often the best choice available for certain tasks - even if bicycles are better for other purposes.
> >> (Note that there is no "C/C++" >> language - they are different, they have diverged more and more in both >> style and usage. There is plenty of cooperation between them, and the >> tools used are often the same, but they are different types of language.) > > Agreed. I use the notation for simplicity where the > two have common characteristics. > > I continue to be surprised at the number of people that > still think C is a subset of C++. >
C is close to a subset of C++, but not entirely. A good deal of C code does not use the features of C that are not in C++ - at most, small modifications would be needed to make it equally valid C++.
> > >> For low-level work - for small embedded systems, for key parts of OS's, >> for libraries that need maximal efficiency, and as a "lingua francais" >> of software, nothing comes close to C. Part of this comes precisely >> because of its stability and resistance to change. > > Now that's a far more contentious statement. >
Which part do you contend? I covered quite a few things in that paragraph.
> >> For mid-level work, C++ is still a top choice. And it continues to >> evolve and gain new features - modern C++ is not the same language as it >> was a decade ago. > > Apparently even Scott Meyers has given up trying to > keep abreast of the C++ changes! >
It has grown into a big language, with a big library. The same applies to lots of things. Do you think anyone knows all the Java libraries, or Python libraries? Do you think anyone knows all the details of the Python language, never mind the libraries?
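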
> > >> This does not mean that C or C++ are the best choices for all the tasks >> for which they are used. It also does not mean we could not have better >> languages. But they are good enough for a great many uses, and combined >> with the history - the existing code, the existing developers, the >> existing tools - you have to have a /much/ better language in order to >> replace them. > > Agreed. > > There are languages that are much better in significant domains. > > One of the key insights I had a couple of decades ago was > that the papers on C/C++ always referred to other C/C++ papers. > In contrast the papers on other languages referred to many other > languages[1]. The net result was that the C/C++ community was smugly > self-satisfied and that there was nothing they could learn from > wider theory and practical experience. > > [1] an excellent example is Gosling's Java whitepaper. > http://www.stroustrup.com/1995_Java_whitepaper.pdf > (Yes, that domain tends to weaken my point!)
That may have been the case two decades ago, but it is not the case now. C++ has looked far and wide for inspiration, and there is a good deal of "language X can do /this/ in /that/ way - is that something C++ should copy?".
On Tuesday, November 6, 2018 at 10:10:20 AM UTC-5, TTman wrote:
> > > > But then I started coding in binary long time ago, > > so for me asm is a high level language. > > Me too... with front panel switches on an 8 bit mini computer in the > early 70's....
Being a human paper tape machine isn't the same as coding in binary. I know that the instruction set of the PDP-11 was simple enough and had fields that aligned to 3 bit octal digits making binary (or I guess octal technically) coding possible. Is that what you are talking about?
> > I started with PICs cracking TV smartcards that had those in it. > > So the dirty secrets I should know... Was still legal back then. > > Same again, me too :) > I hate high level languages.. could never get the hang of them...
Funny. HLLs make coding so much easier... if you pick the right HLL. I work in Forth, which allows you to work as close to the metal as you like or abstract to any level you can construct. Pretty nice really. Not your typical HLL at all. Rick C.
On Tuesday, November 6, 2018 at 10:16:42 AM UTC-5, Tom Gardner wrote:
> On 06/11/18 14:24, David Brown wrote: > > > > But usually, they can simple avoid the quirks of C. Do you know what > > happens if your signed integers overflow? The real answer is "it does > > not matter". If your calculations are overflowing, then bar a very few > > special cases, your code is broken. At best, a discussion of integer > > overflow behaviour is a discussion of what the pieces will look like > > when your program falls apart. > > And there you touch on one of the irresolvable dilemmas that > has bedeviled C/C++ since the early 90s, viz should they be: > - low-level, i.e. very close to the hardware and what it does > - high-level, i.e. closer to the abstract specification of the > specification. > > Either is perfectly valid and useful, but there is a problem > when there is insufficient distinction between the two.
Huh? Can you explain that? Why is a distinction needed? Nothing wrong with supporting both in my book. Rick C.
>> >> Me too... with front panel switches on an 8 bit mini computer in the >> early 70's.... > > Being a human paper tape machine isn't the same as coding in binary. I know that the instruction set of the PDP-11 was simple enough and had fields that aligned to 3 bit octal digits making binary (or I guess octal technically) coding possible. Is that what you are talking about?
Yes, address and data entered using switches in octal format by way of a 'load' switch. Mostly a boot loader was entered that then loaded the application program from a tape reader.....
SNIP...

>>
>> Certainly templates are extremely useful. And their reputation for
>> "code bloat", especially in small embedded systems, is both unfair and
>> outdated. But you don't /need/ templates in order to take advantage of
>> C++ in your programming. After all, why worry about static polymorphism
>> compared to runtime polymorphism, when you can do a great deal of useful
>> programming with classes without polymorphism at all?
>>
>>> Virtual methods have their place even in uP development but it makes
>>> sense to use them sparingly particularly on the Harvard architecture
>>> with limited SRAM; the standard insists on copying all vtables out of
>>> Flash into SRAM at runtime even though in theory this should not be
>>> necessary.
>>
>
All the above is why I just never got to grips with HLLs... My programmer that I employed back in the day used Delphi/Borland Pascal to great effect on PCs... but then he had an honors degree in Maths...
On 07/11/18 17:40, bitrex wrote:
> On 11/07/2018 04:43 AM, David Brown wrote: >> On 06/11/18 17:10, bitrex wrote: >>> On 11/06/2018 10:57 AM, Phil Hobbs wrote: >>>> On 11/6/18 9:38 AM, Tom Gardner wrote: >>>>> On 06/11/18 14:01, Phil Hobbs wrote: >> >>>> >>>> Nah. I understand some people like template metaprogramming, but C++ >>>> is now more a family of languages than a single language. Few of us >>>> use all of it, and most (including meself) don't use most of it. >>> >>> Ya but if you're not doing any meta-programming C++ is then hard to >>> recommend as a modern language at all why not just use C99. or Pascal or >>> some other HLL that intrinsically supports objects or object-like >>> abstractions and is let's be real here a lot more pleasant to work with >>> syntactically. >> >> No, not at all. C++ is not a "generic/template programming language" >> any more than it is an "object oriented programming language". It is a >> multi-paradigm language, and can be used in many ways. >> >> It is absolutely the case that few people understand or use all of the >> language. The kind of stuff that goes into the implementation of parts >> of the standard library is not remotely accessible to most C++ >> programmers, and requires a very different sort of thinking than, say, >> gui application programming. People can - and should - use high-level >> container types, strings, and anything else from the standard library >> without ever having to understand how it works. >> >> I used to program a lot in Pascal - for DOS and for Windows, especially >> Delphi. Objects are much harder to work with in Pascal. In particular, >> all construction and destruction is manual and therefore easily >> forgotten. And your objects are all on the heap, meaning you can't have >> small and efficient objects. (Pascal as a language has some nicer >> features, however, such as ranged integer types and decent arrays. It >> also has horrible stuff, such as the separation of variable declarations >> from their use.) 
I would not consider picking Pascal over C++. >> >> And it is also absolutely fine to use C++ as "A Better C" - as an >> imperative procedural programming language, but with better typing, >> better const, and better structuring (like namespaces) compared to C99 >> or C11. >> >> >>> >>> The "core" of C++ is a statically-typed 1970s style C-like language >>> that's not intrinsically that remarkable, the other half of the language >>> is a powerful compile-time metaprogramming language you use to fashion >>> the bits of the "core" into zero-overhead abstractions. >>> >> >> Nope. You are missing a great deal. >> >> One feature that C++ has - that Object Pascal, Java, Python and many >> other languages do not - is that the effects of object construction and >> destruction are controlled by the class definition, and they are >> executed at clearly defined times. This lets you make invariants for >> your classes - established in the constructor, and kept by your public >> methods for the class. There is no way - without "cheating" - that code >> using the class gets access to inconsistent or half-made objects. >> >> And because your objects always get destructed in a predictable manner, >> you can use RAII for all sorts of resources. This is completely >> different from languages where objects get tidied up some time by a >> garbage collector, assuming there are no complications like circular >> references. With C++, you can use a class constructor/destructor to >> hold a lock, or as an "interrupt disabler" in an embedded system - you >> know /exactly/ when you have the lock. > > I guess the perspective I'm coming at it from is that yeah, RAII is > definitely a strength of C++ - on a desktop environment/managed memory > environment. > > On a bare-metal platform with e.g. 2k of SRAM I almost never use > anything but the default destructor unless a custom allocator is being > used which is overkill for most tasks. >
RAII is not just about memory - nor is memory just about "malloc" or "new". You can have something like:

class CriticalSection {
public:
    CriticalSection() {
        asm volatile ("" ::: "memory");
        sreg = SREG;
        asm volatile ("cli" ::: "memory");
    }
    ~CriticalSection() {
        asm volatile ("" ::: "memory");
        SREG = sreg;
        asm volatile ("" ::: "memory");
    }
private:
    uint8_t sreg;
};

(Protection against copying, assignment, etc., is left as an exercise.) This lets you make a section of code run with interrupts disabled just by declaring a CriticalSection variable in it. If you use I²C, you could have a class for that which sends the start bit on construction, and on destruction sends the stop bit and ensures the lines are freed. It doesn't matter if your code exits the I²C routines early - the bus will be restored to a free state. RS-485 could have a transmitter class that sets the drive enable on construction and turns it off on destruction. That way you know your drive enable is always active when you are transmitting. There are many possibilities.
> Any memory resources needed for a particular class's operation are > acquired at program start, sure often via the constructor, or shortly > thereafter and never freed; if you have something that allocates and you > end up calling "free" in some destructor, even indirectly via an > object-holding-an-object by value which itself frees a resource in your > destructor, and you inadvertently do that repeatedly, ya dead!
Memory allocation in small systems is mostly static. But you might have, for example, a set of 3 buffers in a statically allocated array, and a system to let you pick a free one as needed.
> > Generalized copy constructors and copy assignment operators are mostly > useless for classes which acquire heap memory resources in that > environment; yes it's a violation of the rule that objects which acquire > resources must have a well-defined copy constructor or copy assignment > operator but what can you do? You can't just go around copying and > assigning/freeing new heap memory willy-nilly. Move semantics in modern > C++ do make the situation somewhat easier to cope with. > > All this is easy to avoid if you have "objects" which hold everything > by-value and never actually acquire any persistent resources, everything > of transient lifetime gets allocated on the stack, whatever persistent > state you have (most programs which do anything useful need some of some > kind) is stored in some heap globals or their effective equivalent. > > There's a term for that it's working with structs and functions like in > C. You're just doing procedural/functional programming in a dress which > is fine I guess if it makes your life easier but I don't feel really > leverages C++ main advantages.
You are still using some of C++'s advantages - just not all of them. I realise that there is more available with other C++ features (like templates) - but that does not mean that classes by themselves, or RAII, are not useful tools even if you don't use templates.
> IMO it's because C++ allows you add > "real" OOP on an embedded platform with its template meta-programming > capabilities leveraged to implement modularity and generic programming > and customizable "plug-in" behavior. > > Okay as you say there are probably syntactic-sugar-like advantages to > using C++ over C as in the lock-acquiring example but it's not anything > that can't be accomplished with careful programming in C just as well. > It's sugar, not cake. > > >> Your objects in C++ can also be as efficient as possible. There is no >> problem with minimal objects or classes - they can be allocated in >> registers or optimised as well as any native types. (As an example, it >> is quite possible to make an Int8_t and UInt8_t class for the AVR that >> is more efficient than the native int8_t and uint8_t types, because it >> is not subject to integer promotions.) > > Can you give an example how that works in practice? I don't see how you > could define a custom "char" type that performs any better than the > native "char" type. What generalized runtime overhead is there to the > "existential" capability of integer promotions in the language?
Basically, if you have "uint8_t x, y, z;" and you write "z = x + y", the C integer promotion rules say that "x" and "y" first get turned into int16_t (since the AVR has 16-bit ints), then the addition is carried out as 16-bit, then the result is converted to a uint8_t for the assignment. And it is up to the compiler's optimiser to remove the extra work here. User-defined classes are not subject to the same kind of integer promotion, and can (depending on the details of the code, the compiler, the optimisation, etc.) give marginally more efficient code.
> > That stuff is all figured out at compile-time, I don't see how any > particular operations on a char type with another char type which > doesn't involve any promotions, at the particular instance the operation > occurs, could generate any more instructions than their C equivalent. If > you don't want the overhead of a promotion to a different type at some > other instance then just don't do it! > >>> If you're not using any of that you're missing out, for example >>> templates can be used to implement static polymorphism and have many of >>> the advantages of inheritance-based runtime polymorphism with none of >>> the overhead. >>> >> >> Certainly templates are extremely useful. And their reputation for >> "code bloat", especially in small embedded systems, is both unfair and >> outdated. But you don't /need/ templates in order to take advantage of >> C++ in your programming. After all, why worry about static polymorphism >> compared to runtime polymorphism, when you can do a great deal of useful >> programming with classes without polymorphism at all? >> >>> Virtual methods have their place even in uP development but it makes >>> sense to use them sparingly particularly on the Harvard architecture >>> with limited SRAM; the standard insists on copying all vtables out of >>> Flash into SRAM at runtime even though in theory this should not be >>> necessary. >> >
On 8/11/18 9:35 pm, David Brown wrote:
> On 07/11/18 16:56, Tom Gardner wrote: >> At least things have moved on (a bit) since then! > In small systems embedded programming, most RTOS's are written in C90, > with a bit of implementation-specific (compiler and target) parts. No, > things have not moved on - not nearly fast enough for my liking, at > least in some areas.
Agreed. Even the C++ of 20 years ago was much better than C for embedded work, if you use it well. Too few developers seem to have realised this.
> The multi-tasking and context switches are always going to boil down to > OS calls in the end, but the interface can make a big difference.
"Always"? Not if you use a lock-free approach, which only drops to the kernel to yield the CPU. I implemented a lock-free library as C++ templates (with embedded #asm!) on *old* C++ compilers, and that code is still running on tens of millions of enterprise computers as I write this. Clifford Heath.
On 08/11/18 23:05, Clifford Heath wrote:
> On 8/11/18 9:35 pm, David Brown wrote: >> On 07/11/18 16:56, Tom Gardner wrote: >>> At least things have moved on (a bit) since then! >> In small systems embedded programming, most RTOS's are written in C90, >> with a bit of implementation-specific (compiler and target) parts. No, >> things have not moved on - not nearly fast enough for my liking, at >> least in some areas. > > Agreed. Even the C++ of 20 years ago was much better than C for embedded > work, if you use it well. Too few developers seem to have realised this. > >> The multi-tasking and context switches are always going to boil down to >> OS calls in the end, but the interface can make a big difference. > > "Always"? Not if you use a lock-free approach, which only drops to the > kernel to yield the CPU. I implemented a lock-free library as C++ > templates (with embedded #asm!) on *old* C++ compilers and that code is > still running on tens of millions of enterprise computers as I wrote this. >
You are talking about shared data structures here. Sure, it is possible to implement many data structures as lock-free, and thus use them safely without leaving your user context - as distinct from calling OS routines for acquiring and releasing a lock. But what I said was "multi-tasking and context switches" need OS calls, which is exactly what you wrote: "drops to the kernel to yield the CPU". As far as I can see, we are in agreement here.