
new spice

Started by John Larkin September 28, 2021
On 10/1/2021 23:50, Lasse Langwadt Christensen wrote:
> On Friday, October 1, 2021 at 22.34.29 UTC+2, Dimiter Popoff wrote:
>> On 10/1/2021 22:50, Don Y wrote:
>>> On 10/1/2021 10:07 AM, Dimiter_Popoff wrote:
>>>> On 10/1/2021 13:56, Don Y wrote:
>>>>> On 10/1/2021 3:04 AM, Jeroen Belleman wrote:
>>>>>> There's also Wirth's observation: "Software gets slower faster
>>>>>> than hardware gets faster."
>>>>>
>>>>> You've got to have a bit of sympathy for folks who write desktop
>>>>> applications; they have virtually no control over the environment
>>>>> in which their code executes.
>>>>
>>>> Given what today's popular OS-s look like - and the amount of space
>>>> they *need* to work - I have more than sympathy for them. So much
>>>> human resource wasted on coping with legacies built on half-baked
>>>> ideas etc. But this is how life works I suppose.
>>>
>>> No one (developer, end user) wants to throw away their previous
>>> creations -- just to support a new/better OS concept.
>>>
>>> And, sadly, "OS" is now taking on many other bits of code that
>>> you'd previously consider "supplemental libraries"; they are
>>> so ingrained in so many apps that they effectively become a
>>> core component/service of the OS.
>>>
>>>>> They can "recommend" a particular hardware/OS configuration.
>>>>> But, few actually *enforce* those minimum requirements.
>>>>
>>>> And what have they to choose from? Windows or Linux, x86 and ARM....
>>>> Soon that RISC-V will come into the picture, same thing of course.
>>>
>>> There's an advantage to that -- they don't have to code for a variety of
>>> platforms and *guess* which ones will be the most popular. Even with
>>> just a few, there are still issues that make code that *could* be portable
>>> considerably less so (unless you ignore all but the key features of
>>> each platform).
>>>
>>> You and I are spoiled in that we can start from scratch with each new
>>> design; there's no legacy user base with code that "must" continue to
>>> run on our devices. We are free to reimplement each system in the way
>>> that best suits the application(s) it is intended to host.
>>
>> Hmmm, I wish I could easily join you on that :-). I did have to put in
>> plenty of work to keep my old code running after the migration
>> 68k -> ppc some 20 years ago (it did with negligible interventions
>> into its sources, but the work on vpa prior to that took me a year...).
>> Having written all of the code - apps, objects etc. - running under
>> my dps does not change the fact that this has been my output for
>> nearly 30 years, and I have been sort of productive... :-).
>> Recently I finished a "device driver" I had started as something
>> I thought I'd be done with in a week or two (took me 6 months).
>> [It is a "Distributed File System" driver, via which dps apps can
>> access data on foreign hosts as if on a local disk [[in dps a
>> device driver allows the apps to access a number of bytes starting
>> at a certain logical block (e.g. sector) to/from a certain address;
>> byte starting granularity is one level higher, in the OS, not the
>> driver]]].
>> For some parts the fact that I had written it all so far helped
>> only because I knew *where* to look for what or where to *start*
>> looking... (having kept the comments consistent was of huge help
>> of course).
>> But still I suspect I am spoiled as you suggest, just the thought
>> of digging through someone else's mess instead of through mine
>> fills me with resignation :-).
>>>
>>>> Power - by far the most advanced and "fully baked" architecture humans
>>>> have produced - is well hidden from the wider public. Fortunately I
>>>> can still get some processors to do what I want to but what if it
>>>> dies? The only processors left will be this or that version of
>>>> some little-endian half-baked "shite".
>>>
>>> You may be able to design (or subcontract) a compatible device, by
>>> then. Nothing to stop someone from altering the microcode in
>>> an "x86" to make it emulate the PPC's instruction set. ("Nothing"
>>> other than NDAs, etc.)
>>
>> Hopefully so. The clunkiest part in those little-endian cores they
>> make is the fact they don't have an opcode to access memory as if
>> they were big-endian (power can do both, and in vpa it is just
>> move.size vs. mover.size). They do have the muxes and all needed,
>> they must put only a little bit of extra logic to implement it;
>> but they don't, for whatever reason.
>
> which of the various ways of arranging bytes in a word do you want?
Just the natural one, highest byte first and so on down for as long as the word is.
On Fri, 1 Oct 2021 20:07:11 +0300, Dimiter_Popoff <dp@tgi-sci.com>
wrote:

> On 10/1/2021 13:56, Don Y wrote:
>> On 10/1/2021 3:04 AM, Jeroen Belleman wrote:
>>> There's also Wirth's observation: "Software gets slower faster
>>> than hardware gets faster."
>>
>> You've got to have a bit of sympathy for folks who write desktop
>> applications; they have virtually no control over the environment
>> in which their code executes.
>
> Given what today's popular OS-s look like - and the amount of space
> they *need* to work - I have more than sympathy for them. So much
> human resource wasted on coping with legacies built on half-baked
> ideas etc. But this is how life works I suppose.
>
>> They can "recommend" a particular hardware/OS configuration.
>> But, few actually *enforce* those minimum requirements.
>
> And what have they to choose from? Windows or Linux, x86 and ARM....
> Soon that RISC-V will come into the picture, same thing of course.
>
> Power - by far the most advanced and "fully baked" architecture humans
> have produced - is well hidden from the wider public. Fortunately I
> can still get some processors to do what I want to but what if it
> dies? The only processors left will be this or that version of
> some little-endian half-baked "shite".
>
> And don't get me started with programming languages which went
> totally C - which most if not all kids nowadays deem as "low level" ...
> How this came to be is known, so what. I have been going my path
> for nearly 30 years now, I have vpa (to be renamed to MIA,
> for Machine Independent Assembly, once I have its 64 bit version)
> and the dps environment with its object maintenance system etc. etc.,
> a world of its own incomparably more efficient than the popular
> bloats - but I am unlikely to live long enough to be able to
> win with it against the rest of the world... Not that I care much
> about it lately.
Umm. It's too late. Vanilla C is the universal assembler. That
actually was the original intent.

The historical context was that prior to UNIX, all operating systems
were written in assembly code, and thus were totally non-portable. UNIX
was intended from inception to be a portable OS, so people at Bell Labs
could move from platform to platform without losing all their work and
starting over each time. They were very proud that UNIX was written
mostly in C (96%) and a little assembly (4%). UNIX and descendants
eventually wiped out all the closed platforms (other than in the
desktop market). As I recall, it says this in the original C manual, in
the Introduction.

War story: In the transition era, while on a weather-radar project, we
had minicomputers that used a proprietary LAN, and carried a big price.
In the new effort, we made our first moves to UNIX and Ethernet. When
the old platform vendor's salesmen came by, I showed them the storeroom
where the empty boxes were piled to the ceiling, and told them that this
was the future. Didn't help - the vendor didn't survive. Nobody noticed.

Joe Gwinn
On 10/2/2021 0:17, Joe Gwinn wrote:
> On Fri, 1 Oct 2021 20:07:11 +0300, Dimiter_Popoff <dp@tgi-sci.com>
> wrote:
>
>> On 10/1/2021 13:56, Don Y wrote:
>>> On 10/1/2021 3:04 AM, Jeroen Belleman wrote:
>>>> There's also Wirth's observation: "Software gets slower faster
>>>> than hardware gets faster."
>>>
>>> You've got to have a bit of sympathy for folks who write desktop
>>> applications; they have virtually no control over the environment
>>> in which their code executes.
>>
>> Given what today's popular OS-s look like - and the amount of space
>> they *need* to work - I have more than sympathy for them. So much
>> human resource wasted on coping with legacies built on half-baked
>> ideas etc. But this is how life works I suppose.
>>
>>> They can "recommend" a particular hardware/OS configuration.
>>> But, few actually *enforce* those minimum requirements.
>>
>> And what have they to choose from? Windows or Linux, x86 and ARM....
>> Soon that RISC-V will come into the picture, same thing of course.
>>
>> Power - by far the most advanced and "fully baked" architecture humans
>> have produced - is well hidden from the wider public. Fortunately I
>> can still get some processors to do what I want to but what if it
>> dies? The only processors left will be this or that version of
>> some little-endian half-baked "shite".
>>
>> And don't get me started with programming languages which went
>> totally C - which most if not all kids nowadays deem as "low level" ...
>> How this came to be is known, so what. I have been going my path
>> for nearly 30 years now, I have vpa (to be renamed to MIA,
>> for Machine Independent Assembly, once I have its 64 bit version)
>> and the dps environment with its object maintenance system etc. etc.,
>> a world of its own incomparably more efficient than the popular
>> bloats - but I am unlikely to live long enough to be able to
>> win with it against the rest of the world... Not that I care much
>> about it lately.
>
> Umm. It's too late. Vanilla C is the universal assembler. That
> actually was the original intent.
Probably so for my lifetime. The entire Roman Empire spoke Latin - and
used Roman numerals.... It survived for many centuries in spite of the
Roman numerals. C won't live that long, but neither will I.
On Friday, October 1, 2021 at 3:07:33 PM UTC-4, Phil Hobbs wrote:
> whit3rd wrote:
>> On Friday, October 1, 2021 at 9:05:42 AM UTC-4, Phil Hobbs wrote:
>>> Jeroen Belleman wrote:
>>>> I still don't get why C++ had to add call by reference. Big
>>>> mistake, in my view.
>>>
>>> Why? Smart pointers didn't exist in 1998 iirc...
>>
>> Apple's MacOS used 'handles' even back in the eighties.
>>
>> <https://en.wikipedia.org/wiki/Classic_Mac_OS_memory_management>
>
> A handle is an opaque way to specify some object that the OS maintains.
> Not the same thing at all--pointers can refer to all sorts of things the
> OS doesn't know about, e.g. the guts of some peripheral driver.
But, I've fixed/customized peripheral drivers by editing a resource, never recompiled nothin. If you USE that OS framework well, all sorts of problems dissolve; language localization for instance. That was the big win for Apple's handles and resources in those early years.
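To make the handle/pointer distinction concrete: a Classic Mac handle is
a pointer to a master pointer owned by the memory manager, so the
manager can relocate a block and update the one master pointer without
having to find every copy the application holds. Here is a minimal
sketch of the idea in C, with a toy allocator (the my_* names and the
16-entry table are made up for illustration, not the real Toolbox API):

    /* A handle is a pointer to a master pointer that the manager owns. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef void **Handle;            /* pointer to a master pointer */

    static void *master_table[16];    /* the manager's master pointers */

    static Handle my_new_handle(size_t n)
    {
        for (int i = 0; i < 16; i++) {
            if (master_table[i] == NULL) {
                master_table[i] = malloc(n);
                return &master_table[i];
            }
        }
        return NULL;
    }

    /* The manager may move the block, e.g. to compact the heap; only
     * the master pointer changes, so every Handle stays valid. */
    static void my_compact(Handle h, size_t n)
    {
        void *moved = malloc(n);
        memcpy(moved, *h, n);
        free(*h);
        *h = moved;
    }

    int main(void)
    {
        Handle h = my_new_handle(32);
        strcpy((char *)*h, "hello");  /* dereference twice to reach data */
        my_compact(h, 32);            /* block moves behind our back... */
        printf("%s\n", (char *)*h);   /* ...but the handle still works */
        return 0;
    }

The price is the double dereference on every access, plus the rule that
you must not cache *h across anything that might move memory - which is
exactly the comm-port-buffer trouble that comes up below.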
John Larkin wrote:
> On Fri, 1 Oct 2021 13:03:33 -0400, Phil Hobbs
> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>
>> jlarkin@highlandsniptechnology.com wrote:
>>> On Fri, 1 Oct 2021 12:34:25 -0400, Phil Hobbs
>>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>>
>>>> jlarkin@highlandsniptechnology.com wrote:
>>>>> On Fri, 1 Oct 2021 12:13:40 -0400, Phil Hobbs
>>>>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>>>>
>>>>>> jlarkin@highlandsniptechnology.com wrote:
>>>>>>> On Fri, 1 Oct 2021 11:07:48 -0400, Phil Hobbs
>>>>>>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>>>>>>
>>>>>>>> jlarkin@highlandsniptechnology.com wrote:
>>>>>>>>> On Fri, 1 Oct 2021 08:55:01 -0400, Phil Hobbs
>>>>>>>>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>>>>>>>>
>>>>>>>>>> Gerhard Hoffmann wrote:
>>>>>>>>>>> On 30.09.21 at 21:24, John Larkin wrote:
>>>>>>>>>>>> On Thu, 30 Sep 2021 19:58:50 +0100, "Kevin Aylward"
>>>>>>>>>>>> <kevinRemoveandReplaceATkevinaylward.co.uk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>>> "John Larkin" wrote in message
>>>>>>>>>>>>>> news:bdc7lgdap8o66j8m92ullph1nojbg9c5ni@4ax.com...
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> https://www.linkedin.com/in/mike-engelhardt-a788a822
>>>>>>>>>>>>>
>>>>>>>>>>>>> ...but why ......?
>>>>>>>>>>>>
>>>>>>>>>>>> Maybe he enjoys it. I'm sure he's enormously wealthy and could do
>>>>>>>>>>>> anything he wants.
>>>>>>>>>>>>
>>>>>>>>>>>> I want a Spice that uses an nvidia board to speed it up 1000:1.
>>>>>>>>>>>
>>>>>>>>>>> Hopeless. That has already been tried in IBM AT times with these
>>>>>>>>>>> Weitek coprocessors and NS 32032 processor boards; it never lived
>>>>>>>>>>> up to the expectations.
>>>>>>>>>>
>>>>>>>>>> Parallelizing sparse matrix computation is an area of active research.
>>>>>>>>>> The best approach ATM seems to be runtime profiling that generates a
>>>>>>>>>> JIT-compiled FPGA image for the actual crunching, sort of like the way
>>>>>>>>>> FFTW generates optimum FFT routines.
>>>>>>>>>>
>>>>>>>>>>> That's no wonder. When in time domain integration the next result
>>>>>>>>>>> depends on the current value and maybe a few in the past, you cannot
>>>>>>>>>>> compute more future timesteps in parallel. Maybe some speculative
>>>>>>>>>>> versions in parallel and then selecting the best. But that is no
>>>>>>>>>>> work for 1000 processors.
>>>>>>>>>>
>>>>>>>>>> SPICE isn't a bad application for parallelism, if you can figure out
>>>>>>>>>> how to do it--you wouldn't bother for trivial things, where the run
>>>>>>>>>> times are less than 30s or so, but for longer calculations the profiling
>>>>>>>>>> would be a small part of the work. The inner loop is time-stepping the
>>>>>>>>>> same matrix topology all the time (though the coefficients change with
>>>>>>>>>> the time step).
>>>>>>>>>>
>>>>>>>>>> Since all that horsepower would be spending most of its time waiting for
>>>>>>>>>> us to dork the circuit and dink the fonts, it could be running the
>>>>>>>>>> profiling in the background during editing. You might get 100x speedup
>>>>>>>>>> that way, ISTM.
>>>>>>>>>>
>>>>>>>>>>> The inversion of the nodal matrix might use some improvement since
>>>>>>>>>>> it is NP complete, like almost everything that is interesting.
>>>>>>>>>>
>>>>>>>>>> Matrix inversion is NP-complete? Since when? It's actually not even
>>>>>>>>>> cubic, asymptotically--the lowest-known complexity bound is less than
>>>>>>>>>> O(N**2.4).
>>>>>>>>>>
>>>>>>>>>>> Its size grows with the number of nodes and the matrix is sparse since
>>>>>>>>>>> most nodes have no interaction. Dividing the circuit into subcircuits,
>>>>>>>>>>> solving these separately and combining the results could provide
>>>>>>>>>>> a speedup, for problems with many nodes. That would be a MAJOR change.
>>>>>>>>>>>
>>>>>>>>>>> Spice has not made much progress since Berkeley is no longer involved.
>>>>>>>>>>> Some people make some local improvements and when they lose interest
>>>>>>>>>>> after 15 years their improvements die. There is no one to integrate
>>>>>>>>>>> that stuff in one open official version. Maybe NGspice comes closest.
>>>>>>>>>>>
>>>>>>>>>>> Keysight ADS has an option to run it on a bunch of workstations but
>>>>>>>>>>> that helps probably most for electromagnetics which has not much
>>>>>>>>>>> in common with spice.
>>>>>>>>>>
>>>>>>>>>> It has more than you might think. EM simulators basically have to loop
>>>>>>>>>> over all of main memory twice per time step, and all the computational
>>>>>>>>>> boundaries have to be kept time-coherent. With low-latency
>>>>>>>>>> interconnects and an OS with a thread scheduler that isn't completely
>>>>>>>>>> brain-dead (i.e. anything except Linux AFAICT), my EM code scales within
>>>>>>>>>> 20% or so of linearly up to 15 compute nodes, which is as far as I've
>>>>>>>>>> tried.
>>>>>>>>>>
>>>>>>>>>> So I'm more optimistic than you, if still rather less than JL. ;)
>>>>>>>>>>
>>>>>>>>>> Cheers
>>>>>>>>>>
>>>>>>>>>> Phil Hobbs
>>>>>>>>>
>>>>>>>>> Since a schematic has a finite number of nodes, why not have one CPU
>>>>>>>>> per node?
>>>>>>>>
>>>>>>>> Doing what, exactly?
>>>>>>>
>>>>>>> Computing the node voltage for the next time step.
>>>>>>
>>>>>> Right, but exactly how?
>>>>>>
>>>>>>>> Given that the circuit topology forms an irregular
>>>>>>>> sparse matrix, there would be a gigantic communication bottleneck in
>>>>>>>> general.
>>>>>>>
>>>>>>> Shared ram. Most nodes only need to see a few neighbors, plainly
>>>>>>> visible on the schematic.
>>>>>>
>>>>>> "Shared ram" is all virtual, though--you don't have N-port memory
>>>>>> really.
>>>>>
>>>>> FPGAs do.
>>>>>
>>>>>> It has to be connected somehow, and all the caches kept
>>>>>> coherent. That causes communications traffic that grows very rapidly
>>>>>> with the number of cores--about N**4 if it's done in symmetrical (SMP)
>>>>>> fashion.
>>>>>
>>>>> Then don't cache the node voltages; put them in sram. Mux the ram
>>>>> accesses cleverly.
>>>>>
>>>>>>> The ram memory map could be clever that way.
>>>>>>
>>>>>> Sure, that's what the JIT FPGA approach does, but the memory layout
>>>>>> doesn't solve the communications bottleneck with a normal CPU or GPU.
>>>>>>
>>>>>>>> Somebody has to decide on the size of the next time step, for
>>>>>>>> instance, which is a global property that has to be properly
>>>>>>>> disseminated after computation.
>>>>>>>
>>>>>>> Step when the slowest CPU is done processing its node.
>>>>>>
>>>>>> But then it has to decide what to do next. The coefficients of the next
>>>>>> iteration depend on the global time step, so there's no purely
>>>>>> node-local method for doing adaptive step size.
>>>>>
>>>>> Proceed when all the nodes have finished their computation. Then each
>>>>> reads the global node ram to get its inputs for the next step.
>>>>>
>>>>> This would all work in a FPGA that had a lot of CPUs on chip. Let the
>>>>> FPGA hardware do the node ram and access paths.
>>>>
>>>> Yeah, the key is that the circuit topology gets handled by the FPGA,
>>>> which is more or less my point. (Not that I'm the one doing it.)
>>>>
>>>> Large sparse matrices don't map well onto purely general-purpose hardware.
>>>
>>> Then stop thinking of circuit simulation in terms of matrix math.
>>
>> Oh, come _on_. The problem is a large sparse system of nonlinear ODEs
>> with some bags hung onto the side for Tlines and such. How you write it
>> out doesn't change what has to be done--the main issue is the
>> irregularity and unpredictability of the circuit topology.
>>
>>>>> I've advocated for such a chip as a general OS host. One CPU per
>>>>> process, with absolute hardware protections.
>>>>
>>>> Still has the SMP problem if the processes need to know about each other
>>>> at all.
>>>
>>> There needs to be the clever multiport common SRAM, and one global
>>> DONE line.
>>
>> Yes, "clever" in the sense of "magic happens here."
>
> Multiport rams and multiplexers aren't magic. Each node needs to share
> a few variables with connected nodes.
A 733-port asynchronous-access RAM would be a pretty good trick, especially since when you move one wire on the schematic, it might need to be 737 ports next time you hit F5. It would also have to handle a thundering herd of accesses at the beginning of each time step. I'm not saying it's impossible, because I don't know that. I rather expect it might be hard, though.
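For what the "proceed when all the nodes are done" scheme looks like in
software terms, here is a minimal sketch assuming POSIX threads on an
ordinary SMP host. The node update rule is a placeholder (just a
relaxation toward the neighbors, not real circuit math); the global
DONE line becomes a barrier, and the two barrier waits per step keep
reads and writes of the shared node array from colliding:

    #include <pthread.h>
    #include <stdio.h>

    #define NODES 4
    #define STEPS 3

    static double v_now[NODES], v_next[NODES];
    static pthread_barrier_t done;        /* the "global DONE line" */

    static void *node_cpu(void *arg)
    {
        int n = (int)(long)arg;
        for (int t = 0; t < STEPS; t++) {
            /* placeholder update: relax toward the neighbor average */
            double left  = v_now[(n + NODES - 1) % NODES];
            double right = v_now[(n + 1) % NODES];
            v_next[n] = 0.5 * v_now[n] + 0.25 * (left + right);

            pthread_barrier_wait(&done);  /* all nodes computed */
            v_now[n] = v_next[n];         /* publish for next step */
            pthread_barrier_wait(&done);  /* all publishes visible */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t th[NODES];
        v_now[0] = 1.0;                   /* initial condition */
        pthread_barrier_init(&done, NULL, NODES);
        for (long n = 0; n < NODES; n++)
            pthread_create(&th[n], NULL, node_cpu, (void *)n);
        for (int n = 0; n < NODES; n++)
            pthread_join(th[n], NULL);
        for (int n = 0; n < NODES; n++)
            printf("node %d: %f\n", n, v_now[n]);
        pthread_barrier_destroy(&done);
        return 0;
    }

On an FPGA the barrier would be a wired-AND DONE signal rather than a
kernel object, but the synchronization structure is the same.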
> Global things like DC voltage sources don't even need to be shared.
> They can be compile-time constants.
Nah, you want to be able to measure the supply current, for sure.
>>> One shared register could be readable by all CPUs. It could have some
>>> management bits.
>>
>> For some reasonable number of "all CPUs", sure. Not an unlimited number.
>
> Just one per circuit node, and a manager maybe.
But if you have to talk to all 733 at once, without getting killed by latency, it's more difficult.
> If one used soft cores, some could be really dumb, not even with
> floating point. One could compile the soft cores as needed.
Given enough communications resources, probably so. I'm not sure how the various interconnect layers in big FPGAs are implemented, but from what I know about semiconductor processing, there are a lot fewer fatwires (long distance/upper level) than L1/L2 interconnects.
> Some famous person said "If you really need to use floating point, you
> don't understand the problem."
Probably one of Rick Collins's FORTH pals. ;) (BITD FORTH didn't have
FP--it seems to now. MacFORTH circa 1984 sure didn't--that's the last
time I used it.)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com
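Apropos of the floating-point quip: a minimal fixed-point sketch in C,
using the common Q16.16 format (16 integer bits, 16 fraction bits),
which is roughly the kind of arithmetic FP-less FORTHs got by with. The
Q16/q16_* names are made up for illustration:

    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t q16_t;              /* stores value * 65536 */
    #define Q16(x)  ((q16_t)((x) * 65536.0))

    static q16_t q16_mul(q16_t a, q16_t b)
    {
        return (q16_t)(((int64_t)a * b) >> 16);  /* widen, rescale */
    }

    static q16_t q16_div(q16_t a, q16_t b)
    {
        return (q16_t)(((int64_t)a << 16) / b);  /* pre-scale, divide */
    }

    int main(void)
    {
        q16_t x = Q16(3.25), y = Q16(1.5);
        printf("3.25 * 1.5 = %f\n", q16_mul(x, y) / 65536.0);
        printf("3.25 / 1.5 = %f\n", q16_div(x, y) / 65536.0);
        return 0;
    }

Addition and subtraction work unchanged on the raw integers; only
multiply and divide need the rescaling, which is why fixed point maps
so well onto plain integer ALUs.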
Tom Gardner wrote:
> On 01/10/21 20:07, Phil Hobbs wrote:
>> As the wise man said, "All problems can be solved with an additional
>> level of indirection and some extra run time." ;)
>
> And all signal processing problems can be solved by integrating for
> longer :)
Unless there's drift, in which case you have to integrate for shorter. ;)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com
whit3rd wrote:
> On Friday, October 1, 2021 at 3:07:33 PM UTC-4, Phil Hobbs wrote:
>> whit3rd wrote:
>>> On Friday, October 1, 2021 at 9:05:42 AM UTC-4, Phil Hobbs wrote:
>>>> Jeroen Belleman wrote:
>>>>> I still don't get why C++ had to add call by reference. Big
>>>>> mistake, in my view.
>>>>
>>>> Why? Smart pointers didn't exist in 1998 iirc...
>>>
>>> Apple's MacOS used 'handles' even back in the eighties.
>>>
>>> <https://en.wikipedia.org/wiki/Classic_Mac_OS_memory_management>
>>
>> A handle is an opaque way to specify some object that the OS maintains.
>> Not the same thing at all--pointers can refer to all sorts of things the
>> OS doesn't know about, e.g. the guts of some peripheral driver.
>
> But, I've fixed/customized peripheral drivers by editing a resource,
> never recompiled nothin. If you USE that OS framework well, all sorts
> of problems dissolve; language localization for instance.
> That was the big win for Apple's handles and resources in those early
> years.
For sufficiently simple uses, it's fine. For implementing a circular
buffer for a comm port, you might need a few local pointers, at which
point allowing the OS to move stuff without telling you might be a bit,
um, inconvenient.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com
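For the comm-port case meant above, a minimal single-producer /
single-consumer ring buffer sketch in C (bare-metal style, names
hypothetical): the interrupt side and the main loop each own one index
into a fixed block, and the whole scheme rests on that block never
moving - exactly what a purely handle-based, relocating heap would not
promise.

    #include <stdint.h>

    #define BUF_SIZE 256                    /* power of two: cheap wrap */

    static volatile uint8_t buf[BUF_SIZE];  /* fixed, never moves */
    static volatile unsigned head, tail;    /* writer / reader indices */

    /* called from the receive interrupt: drops the byte if full */
    void rx_isr_put(uint8_t c)
    {
        unsigned next = (head + 1) & (BUF_SIZE - 1);
        if (next != tail) {                 /* not full */
            buf[head] = c;
            head = next;
        }
    }

    /* called from the main loop: returns -1 if empty */
    int rx_get(void)
    {
        if (tail == head)
            return -1;
        uint8_t c = buf[tail];
        tail = (tail + 1) & (BUF_SIZE - 1);
        return (int)c;
    }

Because only the ISR writes head and only the reader writes tail, no
lock is needed on a single core; but both sides hold raw addresses into
buf between calls, so a memory manager silently relocating it would be,
as noted, a bit inconvenient.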
On 01.10.21 at 23:14, Dimiter_Popoff wrote:

>>>> You may be able to design (or subcontract) a compatible device, by
>>>> then. Nothing to stop someone from altering the microcode in
>>>> an "x86" to make it emulate the PPC's instruction set. ("Nothing"
>>>> other than NDAs, etc.)
>>>
>>> Hopefully so. The clunkiest part in those little-endian cores they
>>> make is the fact they don't have an opcode to access memory as if
>>> they were big-endian (power can do both, and in vpa it is just
>>> move.size vs. mover.size). They do have the muxes and all needed,
>>> they must put only a little bit of extra logic to implement it;
>>> but they don't, for whatever reason.
>>
>> which of the various ways of arranging bytes in a word do you want?
>
> Just the natural one, highest byte first and so on down for
> as long as the word is.
No, NEVER!!!! Never EVER!!!!1!11!!ELEVEN!!!!

No one would start eating an egg from the big end. There is only air.
The older, the more!

The lowest address belongs to the lowest bit in a field, the lowest
nibble, the lowest byte, the lowest half word, the lowest word,
the lowest double word, the lowest array element and so on
up as much as needed.

And for computing offsets and the like, I want to start when the
lowest bits are available. No carry chain in adders goes top-down.

Gerhard
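Gerhard's carry-chain point, in code: multi-word (bignum) addition has
to start at the least significant word so the carries can ripple
upward, and a little-endian limb layout lets the loop walk addresses in
increasing order. A minimal sketch, with limbs stored least significant
first:

    #include <stdint.h>

    /* a[] and b[] each hold n 32-bit limbs, least significant first */
    void add_multiword(uint32_t *sum, const uint32_t *a,
                       const uint32_t *b, int n)
    {
        uint64_t carry = 0;
        for (int i = 0; i < n; i++) {   /* low limb first: carry ripples up */
            uint64_t s = (uint64_t)a[i] + b[i] + carry;
            sum[i] = (uint32_t)s;
            carry = s >> 32;
        }
    }

With a big-endian layout the same loop has to run from the high address
down, which is harmless in software but is the asymmetry Gerhard is
gesturing at.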
On 10/2/2021 1:00, Gerhard Hoffmann wrote:
> On 01.10.21 at 23:14, Dimiter_Popoff wrote:
>
>>>>> You may be able to design (or subcontract) a compatible device, by
>>>>> then. Nothing to stop someone from altering the microcode in
>>>>> an "x86" to make it emulate the PPC's instruction set. ("Nothing"
>>>>> other than NDAs, etc.)
>>>>
>>>> Hopefully so. The clunkiest part in those little-endian cores they
>>>> make is the fact they don't have an opcode to access memory as if
>>>> they were big-endian (power can do both, and in vpa it is just
>>>> move.size vs. mover.size). They do have the muxes and all needed,
>>>> they must put only a little bit of extra logic to implement it;
>>>> but they don't, for whatever reason.
>>>
>>> which of the various ways of arranging bytes in a word do you want?
>>
>> Just the natural one, highest byte first and so on down for
>> as long as the word is.
>
> No, NEVER!!!! Never EVER!!!!1!11!!ELEVEN!!!!
>
> No one would start eating an egg from the big end. There is only air.
> The older, the more!
>
> The lowest address belongs to the lowest bit in a field, the lowest
> nibble, the lowest byte, the lowest half word, the lowest word,
> the lowest double word, the lowest array element and so on
> up as much as needed.
>
> And for computing offsets and the like, I want to start when the
> lowest bits are available. No carry chain in adders goes top-down.
>
> Gerhard
It is not as laughable as people have been made to accept, and you seem
to agree. For one, check what "network byte order" stands for. While
the carry chain example is laughable indeed, consider whether sending
the most significant data first is not the better idea over a
not-so-fast link.

But anyway, I am not going to go into a dispute like that. Or about the
Earth being flat etc.
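Network byte order is indeed big-endian; the standard sockets helpers
htonl()/ntohl() exist for exactly that conversion. The portable way to
serialize, independent of the host's own layout, is with shifts, most
significant byte first. A minimal sketch:

    #include <stdint.h>

    void put_be32(uint8_t *out, uint32_t v)
    {
        out[0] = (uint8_t)(v >> 24);   /* most significant byte first */
        out[1] = (uint8_t)(v >> 16);
        out[2] = (uint8_t)(v >> 8);
        out[3] = (uint8_t)(v);
    }

    uint32_t get_be32(const uint8_t *in)
    {
        return ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16) |
               ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
    }

Written this way the code never needs to know which endianness it is
running on; the shifts define the wire order.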
On Saturday, October 2, 2021 at 00.15.53 UTC+2, Dimiter Popoff wrote:
> On 10/2/2021 1:00, Gerhard Hoffmann wrote:
>> On 01.10.21 at 23:14, Dimiter_Popoff wrote:
>>
>>>>>> You may be able to design (or subcontract) a compatible device, by
>>>>>> then. Nothing to stop someone from altering the microcode in
>>>>>> an "x86" to make it emulate the PPC's instruction set. ("Nothing"
>>>>>> other than NDAs, etc.)
>>>>>
>>>>> Hopefully so. The clunkiest part in those little-endian cores they
>>>>> make is the fact they don't have an opcode to access memory as if
>>>>> they were big-endian (power can do both, and in vpa it is just
>>>>> move.size vs. mover.size). They do have the muxes and all needed,
>>>>> they must put only a little bit of extra logic to implement it;
>>>>> but they don't, for whatever reason.
>>>>
>>>> which of the various ways of arranging bytes in a word do you want?
>>>
>>> Just the natural one, highest byte first and so on down for
>>> as long as the word is.
>>
>> No, NEVER!!!! Never EVER!!!!1!11!!ELEVEN!!!!
>>
>> No one would start eating an egg from the big end. There is only air.
>> The older, the more!
>>
>> The lowest address belongs to the lowest bit in a field, the lowest
>> nibble, the lowest byte, the lowest half word, the lowest word,
>> the lowest double word, the lowest array element and so on
>> up as much as needed.
>>
>> And for computing offsets and the like, I want to start when the
>> lowest bits are available. No carry chain in adders goes top-down.
>>
>> Gerhard
>
> It is not as laughable as people have been made to accept, and you seem
> to agree. For one, check what "network byte order" stands for. While
> the carry chain example is laughable indeed, consider whether sending
> the most significant data first is not the better idea over a
> not-so-fast link.
More like: back in prehistoric times, telephone numbers were routed with
relays one digit at a time, starting from the most significant digit,
because that was most convenient. The network guys copied it and now it
is too late to fix it ;)

Big endian does have one thing going for it: if you read a byte where
you should have read a word, it won't work at all, instead of working
for numbers less than 255 ;)
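That last point is easy to demonstrate. A small C check, reading one
byte where a 32-bit word lives: on a little-endian host the first byte
is the low byte, so a small value appears to "work"; on a big-endian
host it reads 0 and the bug shows up immediately:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        uint32_t word = 100;
        uint8_t first;
        memcpy(&first, &word, 1);   /* read one byte where a word lives */
        printf("first byte = %u (%s-endian host)\n",
               first, first == 100 ? "little" : "big");
        return 0;
    }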