Electronics-Related.com
Forums

new spice

Started by John Larkin September 28, 2021
On 10/2/2021 1:40, Lasse Langwadt Christensen wrote:
> Saturday, 2 October 2021 at 00.15.53 UTC+2, Dimiter Popoff wrote:
>> On 10/2/2021 1:00, Gerhard Hoffmann wrote:
>>> On 01.10.21 at 23:14, Dimiter_Popoff wrote:
>>>
>>>>>>> You may be able to design (or subcontract) a compatible device, by
>>>>>>> then. Nothing to stop someone from altering the microcode in
>>>>>>> an "x86" to make it emulate the PPC's instruction set. ("Nothing"
>>>>>>> other than NDAs, etc.)
>>>>>> Hopefully so. The clunkiest part in those little-endian cores they
>>>>>> make is the fact they don't have an opcode to access memory as if
>>>>>> they were big-endian (power can do both, and in vpa it is just
>>>>>> move.size vs. mover.size). They do have the muxes and all needed,
>>>>>> they must put only a little bit of extra logic to implement it;
>>>>>> but they don't, for whatever reason.
>>>>>
>>>>> which of the various ways of arranging bytes in a word do you want?
>>>>>
>>>> Just the natural one, highest byte first and so on down for
>>>> as long as the word is.
>>>
>>> No, NEVER!!!! Never EVER!!!!1!11!!ELEVEN!!!!
>>>
>>> No one would start eating an egg from the big end. There is only air.
>>> The older, the more!
>>>
>>> The lowest address belongs to the lowest bit in a field, the lowest
>>> nibble, the lowest byte, the lowest half word, the lowest word,
>>> the lowest double word, the lowest array element and so on
>>> up as much as needed.
>>>
>>> And for computing offsets and the like, I want to start when the
>>> lowest bits are available. No carry chain in adders goes top-down.
>>>
>>> Gerhard
>>>
>> It is not as laughable as people have been made to accept and you
>> seem to agree.
>> For one, check what "network byte order" stands for.
>> While the carry chain example is laughable indeed think if
>> sending the most relevant data first is not the better idea
>> over a not so fast link.
>
> more like back in prehistoric times telephone numbers were routed
> with relays one digit at a time starting from the most significant digit
> because that was most convenient. The network guys copied it and
> now it is too late to fix it ;)
Yeah, in today's world a second of RTT is unheard of and what is a second, any user will wait for as many seconds as the *fixed* system in front of them requires.

Then it is too late to fix the way we read and write left to right, too.

Astonishing how easy it has been to teach the public that black does not differ from white as long as they are warm and cozy.

And now on a more serious note - it does not matter how well little endian machines have been working, most of the time the internal byte ordering is of negligible consequences anyway. But having made a sloppy choice of byte ordering is telling a lot about how much thought has been put into the design - and whoever can compare say x86 to power or 68k can see that quite well.
On 02.10.21 at 00:56, Dimiter_Popoff wrote:

> And now on a more serious note - it does not matter how well little
> endian machines have been working, most of the time the internal byte
> ordering is of negligible consequences anyway.
> But having made a sloppy choice of byte ordering is telling a lot
> about how much thought has been put into the design - and whoever
> can compare say x86 to power or 68k can see that quite well.
To make it short, X86 won. It was more RISKy than the 68K, as the Moto people noticed when they threw in the towel. It's not so funny when you have double memory indirect addressing: that gives you instructions that never end, because they can trap forever.

I have an Agilent logic analyzer with a 68020 or better. Use that when you want to remove all the haste from your life. The user interface alone will do it; no handling of complicated hardware needed.

Gerhard
Saturday, 2 October 2021 at 00.56.45 UTC+2, Dimiter Popoff wrote:
> On 10/2/2021 1:40, Lasse Langwadt Christensen wrote:
[snip]
> And now on a more serious note - it does not matter how well little
> endian machines have been working, most of the time the internal byte
> ordering is of negligible consequences anyway.
> But having made a sloppy choice of byte ordering is telling a lot
> about how much thought has been put into the design - and whoever
> can compare say x86 to power or 68k can see that quite well.
if you have to, say, add 32 bit numbers in memory on an 8bit machine it seems rather convenient that you read memory at increasing addresses
On Fri, 1 Oct 2021 17:52:22 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

>John Larkin wrote:
>> On Fri, 1 Oct 2021 13:03:33 -0400, Phil Hobbs
>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>
>>> jlarkin@highlandsniptechnology.com wrote:
>>>> On Fri, 1 Oct 2021 12:34:25 -0400, Phil Hobbs
>>>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>>>
>>>>> jlarkin@highlandsniptechnology.com wrote:
>>>>>> On Fri, 1 Oct 2021 12:13:40 -0400, Phil Hobbs
>>>>>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>>>>>
>>>>>>> jlarkin@highlandsniptechnology.com wrote:
>>>>>>>> On Fri, 1 Oct 2021 11:07:48 -0400, Phil Hobbs
>>>>>>>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>>>>>>>
>>>>>>>>> jlarkin@highlandsniptechnology.com wrote:
>>>>>>>>>> On Fri, 1 Oct 2021 08:55:01 -0400, Phil Hobbs
>>>>>>>>>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>>>>>>>>>
>>>>>>>>>>> Gerhard Hoffmann wrote:
>>>>>>>>>>>> On 30.09.21 at 21:24, John Larkin wrote:
>>>>>>>>>>>>> On Thu, 30 Sep 2021 19:58:50 +0100, "Kevin Aylward"
>>>>>>>>>>>>> <kevinRemoveandReplaceATkevinaylward.co.uk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>>> "John Larkin" wrote in message
>>>>>>>>>>>>>>> news:bdc7lgdap8o66j8m92ullph1nojbg9c5ni@4ax.com...
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> https://www.linkedin.com/in/mike-engelhardt-a788a822
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ...but why ......?
>>>>>>>>>>>>>>
>>>>>>>>>>>>> Maybe he enjoys it. I'm sure he's enormously wealthy and could do
>>>>>>>>>>>>> anything he wants.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I want a Spice that uses an nvidia board to speed it up 1000:1.
>>>>>>>>>>>>>
>>>>>>>>>>>> Hopeless. That has already been tried in IBM AT times with these
>>>>>>>>>>>> Weitek coprocessors and NS 32032 processor boards; it never lived
>>>>>>>>>>>> up to the expectations.
>>>>>>>>>>>
>>>>>>>>>>> Parallelizing sparse matrix computation is an area of active research.
>>>>>>>>>>> The best approach ATM seems to be runtime profiling that generates a
>>>>>>>>>>> JIT-compiled FPGA image for the actual crunching, sort of like the way
>>>>>>>>>>> FFTW generates optimum FFT routines.
>>>>>>>>>>>
>>>>>>>>>>>> That's no wonder. When in time domain integration the next result
>>>>>>>>>>>> depends on the current value and maybe a few in the past, you cannot
>>>>>>>>>>>> compute more future timesteps in parallel. Maybe some speculative
>>>>>>>>>>>> versions in parallel and then selecting the best. But that is no
>>>>>>>>>>>> work for 1000 processors.
>>>>>>>>>>>
>>>>>>>>>>> SPICE isn't a bad application for parallelism, if you can figure out
>>>>>>>>>>> how to do it--you wouldn't bother for trivial things, where the run
>>>>>>>>>>> times are less than 30s or so, but for longer calculations the profiling
>>>>>>>>>>> would be a small part of the work. The inner loop is time-stepping the
>>>>>>>>>>> same matrix topology all the time (though the coefficients change with
>>>>>>>>>>> the time step).
>>>>>>>>>>>
>>>>>>>>>>> Since all that horsepower would be spending most of its time waiting for
>>>>>>>>>>> us to dork the circuit and dink the fonts, it could be running the
>>>>>>>>>>> profiling in the background during editing. You might get 100x speedup
>>>>>>>>>>> that way, ISTM.
>>>>>>>>>>>
>>>>>>>>>>>> The inversion of the nodal matrix might use some improvement since
>>>>>>>>>>>> it is NP complete, like almost everything that is interesting.
>>>>>>>>>>>
>>>>>>>>>>> Matrix inversion is NP-complete? Since when? It's actually not even
>>>>>>>>>>> cubic, asymptotically--the lowest-known complexity bound is less than
>>>>>>>>>>> O(N**2.4).
>>>>>>>>>>>
>>>>>>>>>>>> Its size grows with the number of nodes and the matrix is sparse since
>>>>>>>>>>>> most nodes have no interaction.
>>>>>>>>>>>> Dividing the circuit into subcircuits,
>>>>>>>>>>>> solving these separately and combining the results could provide
>>>>>>>>>>>> a speedup, for problems with many nodes. That would be a MAJOR change.
>>>>>>>>>>>>
>>>>>>>>>>>> Spice has not made much progress since Berkeley is no longer involved.
>>>>>>>>>>>> Some people make some local improvements and when they lose interest
>>>>>>>>>>>> after 15 years their improvements die. There is no one to integrate
>>>>>>>>>>>> that stuff in one open official version. Maybe NGspice comes closest.
>>>>>>>>>>>>
>>>>>>>>>>>> Keysight ADS has an option to run it on a bunch of workstations but
>>>>>>>>>>>> that helps probably most for electromagnetics which has not much
>>>>>>>>>>>> in common with spice.
>>>>>>>>>>>>
>>>>>>>>>>> It has more than you might think. EM simulators basically have to loop
>>>>>>>>>>> over all of main memory twice per time step, and all the computational
>>>>>>>>>>> boundaries have to be kept time-coherent. With low-latency
>>>>>>>>>>> interconnects and an OS with a thread scheduler that isn't completely
>>>>>>>>>>> brain-dead (i.e. anything except Linux AFAICT), my EM code scales within
>>>>>>>>>>> 20% or so of linearly up to 15 compute nodes, which is as far as I've
>>>>>>>>>>> tried.
>>>>>>>>>>>
>>>>>>>>>>> So I'm more optimistic than you, if still rather less than JL. ;)
>>>>>>>>>>>
>>>>>>>>>>> Cheers
>>>>>>>>>>>
>>>>>>>>>>> Phil Hobbs
>>>>>>>>>>
>>>>>>>>>> Since a schematic has a finite number of nodes, why not have one CPU
>>>>>>>>>> per node?
>>>>>>>>>
>>>>>>>>> Doing what, exactly?
>>>>>>>>
>>>>>>>> Computing the node voltage for the next time step.
>>>>>>>
>>>>>>> Right, but exactly how?
>>>>>>>>
>>>>>>>>> Given that the circuit topology forms an irregular
>>>>>>>>> sparse matrix, there would be a gigantic communication bottleneck in
>>>>>>>>> general.
>>>>>>>>
>>>>>>>> Shared ram. Most nodes only need to see a few neighbors, plainly
>>>>>>>> visible on the schematic.
>>>>>>>
>>>>>>> "Shared ram" is all virtual, though--you don't have N-port memory
>>>>>>> really.
>>>>>>
>>>>>> FPGAs do.
>>>>>>
>>>>>>> It has to be connected somehow, and all the caches kept
>>>>>>> coherent. That causes communications traffic that grows very rapidly
>>>>>>> with the number of cores--about N**4 if it's done in symmetrical (SMP)
>>>>>>> fashion.
>>>>>>
>>>>>> Then don't cache the node voltages; put them in sram. Mux the ram
>>>>>> accesses cleverly.
>>>>>>
>>>>>>>> The ram memory map could be clever that way.
>>>>>>>
>>>>>>> Sure, that's what the JIT FPGA approach does, but the memory layout
>>>>>>> doesn't solve the communications bottleneck with a normal CPU or GPU.
>>>>>>>
>>>>>>>>> Somebody has to decide on the size of the next time step, for
>>>>>>>>> instance, which is a global property that has to be properly
>>>>>>>>> disseminated after computation.
>>>>>>>>
>>>>>>>> Step when the slowest CPU is done processing its node.
>>>>>>>
>>>>>>> But then it has to decide what to do next. The coefficients of the next
>>>>>>> iteration depend on the global time step, so there's no purely
>>>>>>> node-local method for doing adaptive step size.
>>>>>>
>>>>>> Proceed when all the nodes are done their computation. Then each reads
>>>>>> the global node ram to get its inputs for the next step.
>>>>>>
>>>>>> This would all work in a FPGA that had a lot of CPUs on chip. Let the
>>>>>> FPGA hardware do the node ram and access paths.
>>>>>
>>>>> Yeah, the key is that the circuit topology gets handled by the FPGA,
>>>>> which is more or less my point. (Not that I'm the one doing it.)
>>>>>
>>>>> Large sparse matrices don't map well onto purely general-purpose hardware.
>>>>
>>>> Then stop thinking of circuit simulation in terms of matrix math.
>>>
>>> Oh, come _on_. The problem is a large sparse system of nonlinear ODEs
>>> with some bags hung onto the side for Tlines and such.
>>> How you write it
>>> out doesn't change what has to be done--the main issue is the
>>> irregularity and unpredictability of the circuit topology.
>>>
>>>>>> I've advocated for such a chip as a general OS host. One CPU per
>>>>>> process, with absolute hardware protections.
>>>>>
>>>>> Still has the SMP problem if the processes need to know about each other
>>>>> at all.
>>>>
>>>> There needs to be the clever multiport common SRAM, and one global
>>>> DONE line.
>>>
>>> Yes, "clever" in the sense of "magic happens here."
>>
>> Multiport rams and multiplexers aren't magic. Each node needs to share
>> a few variables with connected nodes.
>
>A 733-port asynchronous-access RAM would be a pretty good trick,
>especially since when you move one wire on the schematic, it might need
>to be 737 ports next time you hit F5. It would also have to handle a
>thundering herd of accesses at the beginning of each time step.
Each node only connects to a few other nodes. So it only needs to see a few-port fast SRAM. You can have lots of small SRAMs on an FPGA. Most lay out their rams that way.
>
>I'm not saying it's impossible, because I don't know that. I rather
>expect it might be hard, though.
Sure, some really smart people would have to sprain their brains for a while. It might be possible to program a big FPGA to be a multi-block, block-per-node Spice engine.
>
>>
>> Global things like DC voltage sources don't even need to be shared.
>> They can be compile-time constants.
>
>Nah, you want to be able to measure the supply current, for sure.
>
>>>
>>>> One shared register could be readable by all CPUs. It could have some
>>>> management bits.
>>>
>>> For some reasonable number of "all CPUs", sure. Not an unlimited number.
>>
>> Just one per circuit node, and a manager maybe.
>
>But if you have to talk to all 733 at once, without getting killed by
>latency, it's more difficult.
Don't do that. When I breadboard a circuit, I don't connect all the parts to one another.
>
>>
>> If one used soft cores, some could be really dumb, not even with
>> floating point. One could compile the soft cores as needed.
>
>Given enough communications resources, probably so. I'm not sure how
>the various interconnect layers in big FPGAs are implemented, but from
>what I know about semiconductor processing, there are a lot fewer
>fatwires (long distance/upper level) than L1/L2 interconnects.
>
>>
>> Some famous person said "If you really need to use floating point, you
>> don't understand the problem."
>
>Probably one of Rick Collins's FORTH pals. ;) (BITD FORTH didn't have
>FP--it seems to now. MacFORTH circa 1984 sure didn't--that's the last
>time I used it.)
>
>Cheers
>
>Phil Hobbs
I wrote a saturating math library for the 68332. It used signed 32.32 format, which is all any physical system needs. It was fast because nothing needed to be normalized. Divide was the worst, but a Spice node would be unlikely to need to divide.

32.32 would work for Spice too.

--

If a man will begin with certainties, he shall end with doubts, but if
he will be content to begin with doubts he shall end in certainties.
Francis Bacon
On 10/2/2021 2:27, Lasse Langwadt Christensen wrote:
> Saturday, 2 October 2021 at 00.56.45 UTC+2, Dimiter Popoff wrote:
[snip]
>> But having made a sloppy choice of byte ordering is telling a lot
>> about how much thought has been put into the design - and whoever
>> can compare say x86 to power or 68k can see that quite well.
>
> if you have to say add 32 bit numbers in memory on an 8bit machine
> is seems rather convenient that you have to read memory at increasing addresses
And if you have been thoughtful enough back when designing that 8 bit machine you would have seen that 32 bit machines are to become ubiquitous before too long and you can decrement the address as easily as increment it - and not switch to the Arabic alphabet and all the related nonsense, we have been writing/thinking left to right for quite some time. And yes, a 32 bit register has the bytes inside it ordered big endian on little endian systems, too.
Saturday, 2 October 2021 at 01.37.40 UTC+2, Dimiter Popoff wrote:
> On 10/2/2021 2:27, Lasse Langwadt Christensen wrote:
[snip]
> > if you have to say add 32 bit numbers in memory on an 8bit machine
> > is seems rather convenient that you have to read memory at increasing addresses
> >
> And if you have been thoughtful enough back when designing that 8
> bit machine you would have seen that 32 bit machines are to become
> ubiquitous before too long
and then 64 bit ...
> and you can decrement the address as easily
> as increment it
but some memories have ways to skip address cycles for reads at incrementing addresses
> - and not switch to the Arabic alphabet and all
> the related nonsense, we have been writing/thinking
> left to right for quite some time. And yes, a 32 bit
> register has the bytes inside it ordered big endian on little endian
> systems, too.
how about 64bit in two 32bit registers?
On 10/1/2021 2:17 PM, Joe Gwinn wrote:
> Umm. It's too late. Vanilla C is the universal assembler. That
> actually was the original intent.
PL/M? (sad by comparison)
> The historical context was that prior to UNIX, all operating systems
> were written in assembly code, and thus were totally non-portable.
Um, MULTICS? Its non-portability is largely related to the oddness of the targeted host!
> UNIX was intended from inception to be a portable OS, so people at
> Bell Labs could move from platform to platform without losing all
> their work and starting over each time.
>
> They were very proud that UNIX was written mostly in C (96%) and a
> little assembly (4%).
But *any* HLL would suffice (e.g., PL/1) to afford portability. How *well* it expresses particular abstractions is an issue along with whether or not it provides features that rely on mechanisms that *must* be supported by the underlying hardware.
> UNIX and descendants eventually wiped out all the closed platforms
> (other than in the desktop market). As I recall, it says this in the
> original C manual, in the Introduction.
I suspect BSD was more of a reason, there, than "UNIX in C". I recall using VMS well into the '80s. And writing code on bare metal for Novas. UN*X gained a foothold when it was affordable enough to be widely used, typically by exposing university students to it.
> War story: In the transition era, while on a weather-radar project,
> we had minicomputers that used a proprietary LAN, and a big price. In
> the new effort, we made our first moves to UNIX and Ethernet. When
> the old platform vendor salesmen came by, I showed them the storeroom
> where the empty boxes were piled to the ceiling, and told them that
> this was the future. Didn't help - vendor didn't survive. Nobody
> noticed.
You can say the same about dozens (scores?) of microprocessors. The Market didn't select for the "best" -- but *did* "select".
On 10/1/2021 4:30 PM, Dimiter_Popoff wrote:
> On 10/2/2021 2:18, Gerhard Hoffmann wrote:
>> On 02.10.21 at 00:56, Dimiter_Popoff wrote:
>>
>>> And now on a more serious note - it does not matter how well little
>>> endian machines have been working, most of the time the internal byte
>>> ordering is of negligible consequences anyway.
>>> But having made a sloppy choice of byte ordering is telling a lot
>>> about how much thought has been put into the design - and whoever
>>> can compare say x86 to power or 68k can see that quite well.
>>
>> To make it short, X86 won.
>
> Making it short is for the wider public, here we are supposed to
> make the difference between things we know and things we are told.
The wider public effectively selects what *we* can/will use. Companies that can't produce in quantity can't afford to support their products. And, those products go away. Witness how Zilog disappeared, despite having the king of the 8-bitters (yet failed to evolve to anything bigger/wider). So, "merit" doesn't figure into the calculation.
Saturday, 2 October 2021 at 02.15.27 UTC+2, Don Y wrote:
> On 10/1/2021 4:30 PM, Dimiter_Popoff wrote:
[snip]
> And, those products go away. Witness how Zilog disappeared,
> despite having the king of the 8-bitters (yet failed to evolve
> to anything bigger/wider).
>
> So, "merit" doesn't figure into the calculation.
"first best" never happens, "second best" is too late, so "third best" it is ...
On 10/2/2021 3:15, Don Y wrote:
> On 10/1/2021 4:30 PM, Dimiter_Popoff wrote:
[snip]
> The wider public effectively selects what *we* can/will use.
> Companies that can't produce in quantity can't afford to support
> their products.
>
> And, those products go away. Witness how Zilog disappeared,
> despite having the king of the 8-bitters (yet failed to evolve
> to anything bigger/wider).
>
> So, "merit" doesn't figure into the calculation.
Like you said in another post here, "The Market didn't select for the "best" -- but *did* "select"."

Of course I know how life works. But I am not a marketeer, and I would have felt I had wasted my life had I chosen to join the herd as herded by the marketeers. It has been anything but an easy ride, but I don't regret a minute of it.