
new spice

Started by John Larkin September 28, 2021
On 2021-10-01 17:51, Phil Hobbs wrote:
> Jan Panteltje wrote:
>> On a sunny day (Fri, 1 Oct 2021 09:05:31 -0400) it happened Phil Hobbs
>> <pcdhSpamMeSenseless@electrooptical.net> wrote in
>> <95332466-d835-f22c-a8f3-dfc5cd15d1a7@electrooptical.net>:
>>
>>> Jeroen Belleman wrote:
>>>> On 2021-10-01 14:11, Gerhard Hoffmann wrote:
>>>>> On 01.10.21 at 12:04, Jeroen Belleman wrote:
>>>>>
>>>>>> There's also Wirth's observation: "Software gets slower faster
>>>>>> than hardware gets faster."
>>>>>
>>>>> When asked how his name should be pronounced he said:
>>>>>
>>>>>   "You can call me by name: that's Wirth,
>>>>>    or you can call me by value: that's worth."
>>>>>
>>>>> Cheers, Gerhard
>>>>
>>>> I still don't get why C++ had to add call by reference. Big
>>>> mistake, in my view.
>>>>
>>>> Jeroen Belleman
>>>
>>> Why? Smart pointers didn't exist in 1998 IIRC, and reducing the
>>> number of bare pointers getting passed around has to be a good
>>> thing, surely?
>>
>> C++ is a crime against humanity.
>>
>> New languages are created almost daily because people are not willing
>> to learn about the hardware and what really happens.
>
> I sort of doubt that you or anyone on this group actually knows "what
> really happens" when a 2021-vintage compiler maps your source code onto
> 2010s- or 2020s-vintage hardware. See Chisnall's classic 2018 paper,
> "C is not a low-level language. Your computer is not a fast PDP-11."
> <https://dl.acm.org/doi/abs/10.1145/3212477.3212479>
>
> It's far from a straightforward process.
He concludes with "C doesn't map to modern hardware very well". Given
that a very large fraction of all software is written in some dialect
of C, one may wonder how this could happen.

Jeroen Belleman
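To make the call-by-reference debate above concrete, here is a minimal
C++ sketch of the three parameter-passing styles at issue. The function
names are hypothetical, and C++14 is assumed for std::make_unique:

#include <memory>
#include <string>

// Call by value: the callee gets its own copy.
void by_value(std::string s) { s += "!"; }   // caller never sees the '!'

// Call by reference (the C++ addition Jeroen dislikes): no copy and no
// null to check, but the call site doesn't show that mutation happens.
void by_reference(std::string& s) { s += "!"; }

// Bare pointer (the C way): explicit at the call site, but may be null.
void by_pointer(std::string* s) { if (s) *s += "!"; }

int main() {
    std::string msg = "hi";
    by_value(msg);        // msg still "hi"
    by_reference(msg);    // msg now "hi!"
    by_pointer(&msg);     // msg now "hi!!"

    // Smart pointers (post-1998, as Phil notes) tie lifetime to
    // ownership instead of passing bare pointers around.
    auto owned = std::make_unique<std::string>("hello");
    by_reference(*owned);
    return 0;
}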
On 2021-10-01 20:03, whit3rd wrote:
> On Friday, October 1, 2021 at 9:05:42 AM UTC-4, Phil Hobbs wrote:
>> Jeroen Belleman wrote:
>>> I still don't get why C++ had to add call by reference. Big
>>> mistake, in my view.
>>
>> Why? Smart pointers didn't exist in 1998 iirc, and reducing the
>> number of bare pointers getting passed around has to be a good thing,
>> surely?
>
> Apple's MacOS used 'handles' even back in the eighties.
>
> <https://en.wikipedia.org/wiki/Classic_Mac_OS_memory_management>
Ah, Macs. I'd been told that programming Macs was so very efficient,
compact and elegant. I believed it. What a disappointment when I
finally got to try it myself! Compared to programming on Unix, to which
I'd already had some exposure, Macs were awful, horrible! Developing
code on Macs was a nightmare.

Jeroen Belleman
Jeroen Belleman wrote:
> On 2021-10-01 17:51, Phil Hobbs wrote:
>> Jan Panteltje wrote:
>>> [...]
>>> C++ is a crime against humanity.
>>>
>>> New languages are created almost daily because people are not
>>> willing to learn about the hardware and what really happens.
>>
>> I sort of doubt that you or anyone on this group actually knows "what
>> really happens" when a 2021-vintage compiler maps your source code
>> onto 2010s- or 2020s-vintage hardware. See Chisnall's classic 2018
>> paper, "C is not a low-level language. Your computer is not a fast
>> PDP-11."
>> <https://dl.acm.org/doi/abs/10.1145/3212477.3212479>
>>
>> It's far from a straightforward process.
>
> He concludes with "C doesn't map to modern hardware very well".
>
> Given that a very large fraction of all software is written in
> some dialect of C, one may wonder how this could happen.
>
> Jeroen Belleman
See the rest of the paper. ;)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com
whit3rd wrote:
> On Friday, October 1, 2021 at 9:05:42 AM UTC-4, Phil Hobbs wrote:
>> [...]
>> Why? Smart pointers didn't exist in 1998 iirc, and reducing the
>> number of bare pointers getting passed around has to be a good thing,
>> surely?
>
> Apple's MacOS used 'handles' even back in the eighties.
>
> <https://en.wikipedia.org/wiki/Classic_Mac_OS_memory_management>
A handle is an opaque way to specify some object that the OS maintains.
Not the same thing at all--pointers can refer to all sorts of things
the OS doesn't know about, e.g. the guts of some peripheral driver.

As the wise man said, "All problems can be solved with an additional
level of indirection and some extra run time." ;)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com
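For readers who never met them, a minimal sketch of how a classic
Mac-style relocatable handle works. The function names below are
simplified stand-ins, not the real Toolbox calls (NewHandle, HLock,
etc.), which did considerably more:

#include <cstdlib>
#include <cstring>

// A handle is a pointer to a master pointer. The OS owns the master
// pointer, so it can relocate the underlying block during heap
// compaction and just update the master pointer; every outstanding
// handle stays valid.
typedef char** Handle;

Handle new_handle(size_t n) {
    char** master = static_cast<char**>(std::malloc(sizeof(char*)));
    *master = static_cast<char*>(std::malloc(n));
    return master;
}

// The memory manager could move the block like this behind your back,
// which is why code had to "lock" a handle before keeping a bare
// pointer across any call that might allocate.
void compact(Handle h, size_t n) {
    char* fresh = static_cast<char*>(std::malloc(n));
    std::memcpy(fresh, *h, n);
    std::free(*h);
    *h = fresh;            // handle holders never notice the move
}

int main() {
    Handle h = new_handle(32);
    std::strcpy(*h, "hello");  // dereference twice to reach the data
    compact(h, 32);            // block moves; *h still works
    return 0;
}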
On Fri, 1 Oct 2021 13:03:33 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

>jlarkin@highlandsniptechnology.com wrote:
>> On Fri, 1 Oct 2021 12:34:25 -0400, Phil Hobbs
>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>
>>> jlarkin@highlandsniptechnology.com wrote:
>>>> On Fri, 1 Oct 2021 12:13:40 -0400, Phil Hobbs
>>>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>>>
>>>>> jlarkin@highlandsniptechnology.com wrote:
>>>>>> On Fri, 1 Oct 2021 11:07:48 -0400, Phil Hobbs
>>>>>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>>>>>
>>>>>>> jlarkin@highlandsniptechnology.com wrote:
>>>>>>>> On Fri, 1 Oct 2021 08:55:01 -0400, Phil Hobbs
>>>>>>>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>>>>>>>
>>>>>>>>> Gerhard Hoffmann wrote:
>>>>>>>>>> On 30.09.21 at 21:24, John Larkin wrote:
>>>>>>>>>>> On Thu, 30 Sep 2021 19:58:50 +0100, "Kevin Aylward"
>>>>>>>>>>> <kevinRemoveandReplaceATkevinaylward.co.uk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>>> "John Larkin" wrote in message
>>>>>>>>>>>>> news:bdc7lgdap8o66j8m92ullph1nojbg9c5ni@4ax.com...
>>>>>>>>>>>>
>>>>>>>>>>>>> https://www.linkedin.com/in/mike-engelhardt-a788a822
>>>>>>>>>>>>
>>>>>>>>>>>> ...but why ......?
>>>>>>>>>>>
>>>>>>>>>>> Maybe he enjoys it. I'm sure he's enormously wealthy and
>>>>>>>>>>> could do anything he wants.
>>>>>>>>>>>
>>>>>>>>>>> I want a Spice that uses an nvidia board to speed it up
>>>>>>>>>>> 1000:1.
>>>>>>>>>>
>>>>>>>>>> Hopeless. That has already been tried in IBM AT times with
>>>>>>>>>> these Weitek coprocessors and NS 32032 processor boards; it
>>>>>>>>>> never lived up to the expectations.
>>>>>>>>>
>>>>>>>>> Parallelizing sparse matrix computation is an area of active
>>>>>>>>> research. The best approach ATM seems to be runtime profiling
>>>>>>>>> that generates a JIT-compiled FPGA image for the actual
>>>>>>>>> crunching, sort of like the way FFTW generates optimum FFT
>>>>>>>>> routines.
>>>>>>>>>
>>>>>>>>>> That's no wonder. When in time domain integration the next
>>>>>>>>>> result depends on the current value and maybe a few in the
>>>>>>>>>> past, you cannot compute more future timesteps in parallel.
>>>>>>>>>> Maybe some speculative versions in parallel and then selecting
>>>>>>>>>> the best. But that is no work for 1000 processors.
>>>>>>>>>
>>>>>>>>> SPICE isn't a bad application for parallelism, if you can
>>>>>>>>> figure out how to do it--you wouldn't bother for trivial
>>>>>>>>> things, where the run times are less than 30 s or so, but for
>>>>>>>>> longer calculations the profiling would be a small part of the
>>>>>>>>> work. The inner loop is time-stepping the same matrix topology
>>>>>>>>> all the time (though the coefficients change with the time
>>>>>>>>> step).
>>>>>>>>>
>>>>>>>>> Since all that horsepower would be spending most of its time
>>>>>>>>> waiting for us to dork the circuit and dink the fonts, it could
>>>>>>>>> be running the profiling in the background during editing. You
>>>>>>>>> might get 100x speedup that way, ISTM.
>>>>>>>>>
>>>>>>>>>> The inversion of the nodal matrix might use some improvement
>>>>>>>>>> since it is NP complete, like almost everything that is
>>>>>>>>>> interesting.
>>>>>>>>>
>>>>>>>>> Matrix inversion is NP-complete? Since when? It's actually not
>>>>>>>>> even cubic, asymptotically--the lowest-known complexity bound
>>>>>>>>> is less than O(N**2.4).
>>>>>>>>>
>>>>>>>>>> Its size grows with the number of nodes and the matrix is
>>>>>>>>>> sparse since most nodes have no interaction. Dividing the
>>>>>>>>>> circuit into subcircuits, solving these separately and
>>>>>>>>>> combining the results could provide a speedup, for problems
>>>>>>>>>> with many nodes. That would be a MAJOR change.
>>>>>>>>>>
>>>>>>>>>> Spice has not made much progress since Berkeley is no longer
>>>>>>>>>> involved. Some people make some local improvements and when
>>>>>>>>>> they lose interest after 15 years their improvements die.
>>>>>>>>>> There is no one to integrate that stuff in one open official
>>>>>>>>>> version. Maybe NGspice comes closest.
>>>>>>>>>>
>>>>>>>>>> Keysight ADS has an option to run it on a bunch of
>>>>>>>>>> workstations, but that helps probably most for
>>>>>>>>>> electromagnetics, which has not much in common with spice.
>>>>>>>>>
>>>>>>>>> It has more than you might think. EM simulators basically have
>>>>>>>>> to loop over all of main memory twice per time step, and all
>>>>>>>>> the computational boundaries have to be kept time-coherent.
>>>>>>>>> With low-latency interconnects and an OS with a thread
>>>>>>>>> scheduler that isn't completely brain-dead (i.e. anything
>>>>>>>>> except Linux AFAICT), my EM code scales within 20% or so of
>>>>>>>>> linearly up to 15 compute nodes, which is as far as I've tried.
>>>>>>>>>
>>>>>>>>> So I'm more optimistic than you, if still rather less than JL.
>>>>>>>>> ;)
>>>>>>>>>
>>>>>>>>> Cheers
>>>>>>>>>
>>>>>>>>> Phil Hobbs
>>>>>>>>
>>>>>>>> Since a schematic has a finite number of nodes, why not have one
>>>>>>>> CPU per node?
>>>>>>>
>>>>>>> Doing what, exactly?
>>>>>>
>>>>>> Computing the node voltage for the next time step.
>>>>>
>>>>> Right, but exactly how?
>>>>>
>>>>>>> Given that the circuit topology forms an irregular sparse matrix,
>>>>>>> there would be a gigantic communication bottleneck in general.
>>>>>>
>>>>>> Shared ram. Most nodes only need to see a few neighbors, plainly
>>>>>> visible on the schematic.
>>>>>
>>>>> "Shared ram" is all virtual, though--you don't have N-port memory
>>>>> really.
>>>>
>>>> FPGAs do.
>>>>
>>>>> It has to be connected somehow, and all the caches kept coherent.
>>>>> That causes communications traffic that grows very rapidly with the
>>>>> number of cores--about N**4 if it's done in symmetrical (SMP)
>>>>> fashion.
>>>>
>>>> Then don't cache the node voltages; put them in sram. Mux the ram
>>>> accesses cleverly.
>>>>
>>>>>> The ram memory map could be clever that way.
>>>>>
>>>>> Sure, that's what the JIT FPGA approach does, but the memory layout
>>>>> doesn't solve the communications bottleneck with a normal CPU or
>>>>> GPU.
>>>>>
>>>>>>> Somebody has to decide on the size of the next time step, for
>>>>>>> instance, which is a global property that has to be properly
>>>>>>> disseminated after computation.
>>>>>>
>>>>>> Step when the slowest CPU is done processing its node.
>>>>>
>>>>> But then it has to decide what to do next. The coefficients of the
>>>>> next iteration depend on the global time step, so there's no purely
>>>>> node-local method for doing adaptive step size.
>>>>
>>>> Proceed when all the nodes are done their computation. Then each
>>>> reads the global node ram to get its inputs for the next step.
>>>>
>>>> This would all work in a FPGA that had a lot of CPUs on chip. Let
>>>> the FPGA hardware do the node ram and access paths.
>>>
>>> Yeah, the key is that the circuit topology gets handled by the FPGA,
>>> which is more or less my point. (Not that I'm the one doing it.)
>>>
>>> Large sparse matrices don't map well onto purely general-purpose
>>> hardware.
>>
>> Then stop thinking of circuit simulation in terms of matrix math.
>
> Oh, come _on_. The problem is a large sparse system of nonlinear ODEs
> with some bags hung onto the side for Tlines and such. How you write it
> out doesn't change what has to be done--the main issue is the
> irregularity and unpredictability of the circuit topology.
>
>>>> I've advocated for such a chip as a general OS host. One CPU per
>>>> process, with absolute hardware protections.
>>>
>>> Still has the SMP problem if the processes need to know about each
>>> other at all.
>>
>> There needs to be the clever multiport common SRAM, and one global
>> DONE line.
>
> Yes, "clever" in the sense of "magic happens here."
Multiport rams and multiplexers aren't magic. Each node needs to share a few variables with connected nodes. Global things like DC voltage sources don't even need to be shared. They can be compile-time constants.
>> One shared register could be readable by all CPUs. It could have some
>> management bits.
>
> For some reasonable number of "all CPUs", sure. Not an unlimited
> number.
Just one per circuit node, and a manager maybe. If one used soft cores,
some could be really dumb, not even with floating point. One could
compile the soft cores as needed.

Some famous person said "If you really need to use floating point, you
don't understand the problem."
>> I sure hope we're not still running Windows on x86 and Linux on ARM a
>> hundred years from now.
>
> I certainly won't be. ;)
>
> Cheers
>
> Phil Hobbs
--
If a man will begin with certainties, he shall end with doubts; but if
he will be content to begin with doubts, he shall end in certainties.

Francis Bacon
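For concreteness, here is a toy version of the inner loop this
subthread argues about: a fixed-step backward-Euler transient solve of
the nodal equations for a two-node RC ladder. Production SPICE performs
the same (G + C/h)x = b solve, but sparse, nonlinear (Newton
iteration), and with adaptive step size; this dense, linear sketch only
shows the shape of the computation:

#include <cstdio>

// Toy backward-Euler transient analysis of a 2-node RC ladder:
// Vs --R1-- v0 --R2-- v1, with C1 at v0 and C2 at v1 to ground.
int main() {
    const double Vs = 1.0, R1 = 1e3, R2 = 1e3, C1 = 1e-6, C2 = 1e-6;
    const double h = 1e-5;            // fixed 10 us time step
    double v[2] = {0.0, 0.0};         // initial node voltages

    // Conductance matrix G (constant here: the circuit is linear).
    const double G[2][2] = {{1/R1 + 1/R2, -1/R2},
                            {-1/R2,        1/R2}};

    for (int n = 0; n < 1000; ++n) {
        // A = G + C/h;  b = (C/h)*v_old + Norton source currents.
        double A[2][2] = {{G[0][0] + C1/h, G[0][1]},
                          {G[1][0],        G[1][1] + C2/h}};
        double b[2] = {(C1/h)*v[0] + Vs/R1, (C2/h)*v[1]};

        // 2x2 solve by Cramer's rule (real codes use sparse LU).
        double det = A[0][0]*A[1][1] - A[0][1]*A[1][0];
        double v0 = (b[0]*A[1][1] - A[0][1]*b[1]) / det;
        double v1 = (A[0][0]*b[1] - b[0]*A[1][0]) / det;
        v[0] = v0; v[1] = v1;
    }
    std::printf("v0 = %.4f V, v1 = %.4f V after 10 ms\n", v[0], v[1]);
    return 0;
}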
On 10/1/2021 10:07 AM, Dimiter_Popoff wrote:
> On 10/1/2021 13:56, Don Y wrote:
>> On 10/1/2021 3:04 AM, Jeroen Belleman wrote:
>>> There's also Wirth's observation: "Software gets slower faster
>>> than hardware gets faster."
>>
>> You've got to have a bit of sympathy for folks who write desktop
>> applications; they have virtually no control over the environment
>> in which their code executes.
>
> Given what today's popular OS-s look like - and the amount of space
> they *need* to work - I have more than sympathy for them. So much
> human resource wasted on coping with legacies built on half-baked
> ideas etc. But this is how life works, I suppose.
No one (developer, end user) wants to throw away their previous
creations -- just to support a new/better OS concept.

And, sadly, "OS" is now taking on many other bits of code that you'd
previously consider "supplemental libraries"; they are so ingrained in
so many apps that they effectively become a core component/service of
the OS.
>> They can "recommend" a particular hardware/OS configuration.
>> But, few actually *enforce* those minimum requirements.
>
> And what have they to choose from? Windows or Linux, x86 and ARM...
> Soon that RISC-V will come into the picture, same thing of course.
There's an advantage to that -- they don't have to code for a variety
of platforms and *guess* which ones will be the most popular. Even with
just a few, there are still issues that make code that *could* be
portable considerably less so (unless you ignore all but the key
features of each platform).

You and I are spoiled in that we can start from scratch with each new
design; there's no legacy user base with code that "must" continue to
run on our devices. We are free to reimplement each system in the way
that best suits the application(s) it is intended to host.

E.g., my disk sanitizer has system calls that let me
"write_random_data()" and "read_random_data()" instead of just "read()"
and "write()". Not the sort of thing you'd encounter in a COTS OS,
where the data being read and written are intended to be more
unconstrained. (So, when THEY want to implement similar functionality,
they have to do it in userland atop the OS -- which makes things
slower... an issue when you're trying to saturate 60 spindles!)
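One plausible mechanism behind a write_random_data()/read_random_data()
pair, sketched in userland. The names are Don's; the per-block seeded
PRNG below is an assumption about how such a verify could work, not a
description of his OS:

#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// Seed a fast PRNG from the block number, so the verify pass can
// regenerate the exact "random" stream instead of storing it anywhere.
std::vector<uint64_t> pattern_for_block(uint64_t block, size_t words) {
    std::mt19937_64 gen(block ^ 0x5eed5eed5eed5eedULL);  // per-block seed
    std::vector<uint64_t> buf(words);
    for (auto& w : buf) w = gen();
    return buf;
}

int main() {
    const uint64_t block = 12345;
    const size_t words = 512;   // one 4 KiB block

    auto written  = pattern_for_block(block, words);  // what got written
    auto expected = pattern_for_block(block, words);  // regenerated later

    std::printf("block %llu verifies: %s\n",
                (unsigned long long)block,
                written == expected ? "yes" : "no");
    return 0;
}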
> Power - by far the most advanced and "fully baked" architecture humans
> have produced - is well hidden from the wider public. Fortunately I
> can still get some processors to do what I want, but what if it
> dies? The only processors left will be this or that version of
> some little-endian half-baked "shite".
You may be able to design (or subcontract) a compatible device, by then. Nothing to stop someone from altering the microcode in an "x86" to make it emulate the PPC's instruction set. ("Nothing" other than NDAs, etc.)
> And don't get me started with programming languages which went
> totally C - which most if not all kids nowadays deem as "low level"...
Many of the more modern languages aren't C-ish, at all. But, I suspect their "staying power" will prove to be considerably less.
> How this came to be is known, so what. I have been going my own path
> for nearly 30 years now; I have vpa (to be renamed MIA, for Machine
> Independent Assembly, once I have its 64 bit version) and the dps
> environment with its object maintenance system etc. - a world of its
> own, incomparably more efficient than the popular bloats - but I am
> unlikely to live long enough to be able to win with it against the
> rest of the world... Not that I care much about it lately.
While I lament "bloat", I think it is a necessary evil in many designs. *Not* for creeping featurism but, rather, for building more robust systems. And, processors are dirt cheap, nowadays. Having *one* in a design is almost crippling. This, because most systems can actually benefit from walking-and-chewing-gum at the same time. (E.g., I dedicate an entire core to running my network interface as virtually all communications runs through it -- or *can* run through it)
On 01/10/21 20:07, Phil Hobbs wrote:
> As the wise man said, "All problems can be solved with an additional
> level of indirection and some extra run time." ;)
And all signal processing problems can be solved by integrating for longer :)
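There is real math behind the quip: for white, uncorrelated noise,
averaging N samples shrinks the error roughly as 1/sqrt(N). A minimal
demonstration (C++11 <random> assumed):

#include <cmath>
#include <cstdio>
#include <random>

// Average N noisy measurements of a constant signal and watch the
// residual error track 1/sqrt(N) -- "integrating for longer" in action.
int main() {
    std::mt19937 gen(42);
    std::normal_distribution<double> noise(0.0, 1.0);  // sigma = 1
    const double signal = 1.0;

    for (int n = 1; n <= 1000000; n *= 10) {
        double sum = 0.0;
        for (int i = 0; i < n; ++i)
            sum += signal + noise(gen);
        double err = std::fabs(sum / n - signal);
        std::printf("N = %7d   error ~ %.4f   1/sqrt(N) = %.4f\n",
                    n, err, 1.0 / std::sqrt((double)n));
    }
    return 0;
}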
On 10/1/2021 22:40, John Larkin wrote:
> ....
> Some famous person said "If you really need to use floating point,
> you don't understand the problem."
Hah, this is certainly true on many occasions - though not universally,
of course. I myself must have done so at times when I just needed some
result and moved on (who has not done it; we all use calculators etc.).

There are times when I have used FP because it was the only way to get
out of the silicon what I needed (e.g. DSP-ing on a 32 bit machine with
a 64 bit FPU). And then there can come the moment when you just have to
trade off accuracy for dynamic range, which is when you do need FP and
you do understand the problem :-).
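Dimiter's accuracy-versus-dynamic-range tradeoff in miniature: a Q16.16
fixed-point sketch (hypothetical helper names). Within its roughly
+/-32768 range it carries a constant 2^-16 resolution everywhere;
floating point gives up some of those mantissa bits to buy exponent
range:

#include <cstdint>
#include <cstdio>

// Q16.16 fixed point: 16 integer bits, 16 fractional bits.
typedef int32_t q16_16;

constexpr q16_16 from_double(double x) {
    return static_cast<q16_16>(x * 65536.0);
}
constexpr double to_double(q16_16 x) {
    return static_cast<double>(x) / 65536.0;
}
// Multiply in 64 bits, then shift back down to Q16.16 so the
// binary point stays where it belongs.
constexpr q16_16 mul(q16_16 a, q16_16 b) {
    return static_cast<q16_16>((static_cast<int64_t>(a) * b) >> 16);
}

int main() {
    q16_16 r = from_double(2.5);
    q16_16 i = from_double(1.25);
    std::printf("2.5 * 1.25 = %f\n", to_double(mul(r, i)));  // 3.125
    return 0;
}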
On 10/1/2021 22:50, Don Y wrote:
> On 10/1/2021 10:07 AM, Dimiter_Popoff wrote:
> [...]
> You and I are spoiled in that we can start from scratch with each new
> design; there's no legacy user base with code that "must" continue to
> run on our devices. We are free to reimplement each system in the way
> that best suits the application(s) it is intended to host.
Hmmm, I wish I could easily join you on that :-). I did have to put in
plenty of work to keep my old code running after the 68k -> ppc
migration some 20 years ago (it survived with negligible interventions
into its sources, but the work on vpa prior to that took me a year...).
Having written all of the code - apps, objects etc. - running under my
dps does not change the fact that this has been my output for nearly
30 years, and I have been sort of productive... :-).

Recently I finished a "device driver" I had started as something I
thought I'd be done with in a week or two (it took me 6 months). It is
a "Distributed File System" driver, via which dps apps can access data
on foreign hosts as if on a local disk. (In dps, a device driver allows
the apps to access a number of bytes starting at a certain logical
block (e.g. sector) to/from a certain address; byte starting
granularity is one level higher, in the OS, not the driver.)

For some parts, the fact that I had written it all so far helped only
because I knew *where* to look for what, or where to *start* looking...
(having kept the comments consistent was of huge help, of course). But
still, I suspect I am spoiled as you suggest; just the thought of
digging through someone else's mess instead of through mine fills me
with resignation :-).
>> Power - by far the most advanced and "fully baked" architecture
>> humans have produced - is well hidden from the wider public.
>> Fortunately I can still get some processors to do what I want, but
>> what if it dies? The only processors left will be this or that
>> version of some little-endian half-baked "shite".
>
> You may be able to design (or subcontract) a compatible device, by
> then. Nothing to stop someone from altering the microcode in
> an "x86" to make it emulate the PPC's instruction set. ("Nothing"
> other than NDAs, etc.)
Hopefully so. The clunkiest part in those little-endian cores they make
is the fact they don't have an opcode to access memory as if they were
big-endian (power can do both, and in vpa it is just move.size vs.
mover.size). They do have the muxes and everything needed; it would
take only a little extra logic to implement, but they don't, for
whatever reason.
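In portable C or C++ the usual workaround is to spell out the byte
shuffle and let the compiler recognize the idiom (PowerPC's lwbrx does
the reversed load in one instruction, which is what vpa's mover.size
maps to; the sketch below assumes nothing beyond standard C++):

#include <cstdint>

// Read a 32-bit big-endian value on a little-endian host. Written as
// shifts and ORs; optimizing compilers generally spot this pattern and
// emit a byte-swapped load where the ISA has one.
uint32_t load_be32(const uint8_t* p) {
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
           (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
}

int main() {
    const uint8_t wire[4] = {0x12, 0x34, 0x56, 0x78};  // big-endian 0x12345678
    return load_be32(wire) == 0x12345678u ? 0 : 1;     // 0 = success
}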
On Friday, October 1, 2021 at 10:34:29 PM UTC+2, Dimiter Popoff wrote:
> On 10/1/2021 22:50, Don Y wrote:
>> [...]
>> You may be able to design (or subcontract) a compatible device, by
>> then. Nothing to stop someone from altering the microcode in
>> an "x86" to make it emulate the PPC's instruction set. ("Nothing"
>> other than NDAs, etc.)
>
> Hopefully so. The clunkiest part in those little-endian cores they
> make is the fact they don't have an opcode to access memory as if
> they were big-endian (power can do both, and in vpa it is just
> move.size vs. mover.size). They do have the muxes and everything
> needed; it would take only a little extra logic to implement, but
> they don't, for whatever reason.
which of the various ways of arranging bytes in a word do you want?