Reply by Joe Gwinn July 29, 2021
On Thu, 29 Jul 2021 15:43:15 -0700 (PDT), Rick C
<gnuarm.deletethisbit@gmail.com> wrote:

>On Thursday, July 29, 2021 at 5:58:29 PM UTC-4, Joe Gwinn wrote:
>> On Thu, 29 Jul 2021 13:11:13 -0700 (PDT), Rick C
>> <gnuarm.del...@gmail.com> wrote:
>>
>> >On Sunday, July 25, 2021 at 12:57:00 PM UTC-4, Joe Gwinn wrote:
>> >> This article from 2014 is Kanerva's 7-page summary of the then-current
>> >> state of Sparse Distributed Memory, Spatter Codes, and related areas,
>> >> with an extensive reference list.
>> >>
>> >> The computational engine Kanerva describes here is something that the
>> >> denizens of S.E.D. could implement in a small FPGA, should it be
>> >> useful to do so.  A likely use would be content-addressed memory that
>> >> can tolerate considerable noise.
>> >>
>> >> The 10,000 bits is the minimum that works reliably, and Kanerva speaks
>> >> of using 65,000 bits, which will occupy ~8 Kbytes, trivial by
>> >> present-day hardware standards.  This would fit in SRAM.
>> >>
>> >> .<http://www.rctn.org/vs265/Kanerva-allerton2014.pdf>
>> >
>> >What does it mean, "tolerate considerable noise"?  Does that mean matching a lot of bits but not all?  That would probably be implemented by counting the matched bits, which can be fairly slow compared to other operations.  I suppose the various speedup methods can be used to improve that time.  I've looked at 12-bit adders and I've looked at adding lots of 1s, but never 12 bits' worth of 1s.
>> >
>> >But this depends on what is meant by the "noise".
>>
>> This is a form of true associative memory, and so is far less fussy
>> than traditional content-addressed memory in that it will return the
>> nearest item in memory, not insisting that the match be exact.
>>
>> SDM is built upon sparse random vectors in a vector space of, say,
>> 100,000 dimensions.
>>
>> By "sparse", they mean that typically only 2% of the vector elements
>> are non-zero.
>>
>> The arithmetic operators (add, multiply, ...) are from abstract
>> algebra, and while they often resemble the operators of integer
>> arithmetic, they need not be exactly that.
>>
>> In Abstract Algebra (also known as Modern Algebra), one defines and
>> studies algebras as mathematical objects in themselves, the algebra we
>> learned in high school being a special case.  It's pretty simple -
>> there are a handful of things that an algebra must possess or meet.
>> Zero and Unity (with specific properties) must exist.  Addition and
>> multiplication must be defined to meet the commutative and associative
>> laws, et al.  And so on.
>>
>> I've invented special-purpose algebras to solve specific problems.
>>
>> Anyway, with this in mind, reread the section about how the math
>> operations are defined.
>>
>> It's all pretty simple, and one could implement this in an
>> ordinary FPGA.  And part of the design is deciding just how fussy the
>> association should be - how far apart can a lookup vector be from
>> something in the memory and still be returned as a match (with an
>> estimate of closeness), versus saying not found.
>
>Amazing.  You managed to use so many words and yet didn't even attempt to answer my question.
>
>So how is "closeness" of a match defined?  Can you explain that?
Number of bits that are the same, called coincidences.  I assumed that you would reread that part of the article.

Joe Gwinn
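For concreteness, a minimal C sketch of that coincidence count, assuming ~10,000-bit vectors packed into 64-bit words; the width, the names, and the use of the GCC/Clang __builtin_popcountll builtin are illustrative choices, not anything from Kanerva's paper:

    #include <stdint.h>

    #define WORDS 157   /* 157 * 64 = 10,048 bits, roughly Kanerva's 10,000 */

    /* Coincidences between a and b: a XOR b marks the positions that
       differ, so per word the agreements are 64 minus the popcount. */
    int coincidences(const uint64_t *a, const uint64_t *b)
    {
        int same = 0;
        for (int i = 0; i < WORDS; i++)
            same += 64 - __builtin_popcountll(a[i] ^ b[i]);
        return same;
    }

Equivalently, closeness is the vector width minus the Hamming distance; a perfect match scores 10,048 here.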
Reply by Rick C July 29, 2021
On Thursday, July 29, 2021 at 5:58:29 PM UTC-4, Joe Gwinn wrote:
> On Thu, 29 Jul 2021 13:11:13 -0700 (PDT), Rick C
> <gnuarm.del...@gmail.com> wrote:
>
> >On Sunday, July 25, 2021 at 12:57:00 PM UTC-4, Joe Gwinn wrote:
> >> This article from 2014 is Kanerva's 7-page summary of the then-current
> >> state of Sparse Distributed Memory, Spatter Codes, and related areas,
> >> with an extensive reference list.
> >>
> >> The computational engine Kanerva describes here is something that the
> >> denizens of S.E.D. could implement in a small FPGA, should it be
> >> useful to do so.  A likely use would be content-addressed memory that
> >> can tolerate considerable noise.
> >>
> >> The 10,000 bits is the minimum that works reliably, and Kanerva speaks
> >> of using 65,000 bits, which will occupy ~8 Kbytes, trivial by
> >> present-day hardware standards.  This would fit in SRAM.
> >>
> >> .<http://www.rctn.org/vs265/Kanerva-allerton2014.pdf>
> >
> >What does it mean, "tolerate considerable noise"?  Does that mean matching a lot of bits but not all?  That would probably be implemented by counting the matched bits, which can be fairly slow compared to other operations.  I suppose the various speedup methods can be used to improve that time.  I've looked at 12-bit adders and I've looked at adding lots of 1s, but never 12 bits' worth of 1s.
> >
> >But this depends on what is meant by the "noise".
>
> This is a form of true associative memory, and so is far less fussy
> than traditional content-addressed memory in that it will return the
> nearest item in memory, not insisting that the match be exact.
>
> SDM is built upon sparse random vectors in a vector space of, say,
> 100,000 dimensions.
>
> By "sparse", they mean that typically only 2% of the vector elements
> are non-zero.
>
> The arithmetic operators (add, multiply, ...) are from abstract
> algebra, and while they often resemble the operators of integer
> arithmetic, they need not be exactly that.
>
> In Abstract Algebra (also known as Modern Algebra), one defines and
> studies algebras as mathematical objects in themselves, the algebra we
> learned in high school being a special case.  It's pretty simple -
> there are a handful of things that an algebra must possess or meet.
> Zero and Unity (with specific properties) must exist.  Addition and
> multiplication must be defined to meet the commutative and associative
> laws, et al.  And so on.
>
> I've invented special-purpose algebras to solve specific problems.
>
> Anyway, with this in mind, reread the section about how the math
> operations are defined.
>
> It's all pretty simple, and one could implement this in an
> ordinary FPGA.  And part of the design is deciding just how fussy the
> association should be - how far apart can a lookup vector be from
> something in the memory and still be returned as a match (with an
> estimate of closeness), versus saying not found.
Amazing.  You managed to use so many words and yet didn't even attempt to answer my question.

So how is "closeness" of a match defined?  Can you explain that?

--

Rick C.

+ Get 1,000 miles of free Supercharging
+ Tesla referral code - https://ts.la/richard11209
Reply by Joe Gwinn July 29, 2021
On Thu, 29 Jul 2021 13:11:13 -0700 (PDT), Rick C
<gnuarm.deletethisbit@gmail.com> wrote:

>On Sunday, July 25, 2021 at 12:57:00 PM UTC-4, Joe Gwinn wrote:
>> This article from 2014 is Kanerva's 7-page summary of the then-current
>> state of Sparse Distributed Memory, Spatter Codes, and related areas,
>> with an extensive reference list.
>>
>> The computational engine Kanerva describes here is something that the
>> denizens of S.E.D. could implement in a small FPGA, should it be
>> useful to do so.  A likely use would be content-addressed memory that
>> can tolerate considerable noise.
>>
>> The 10,000 bits is the minimum that works reliably, and Kanerva speaks
>> of using 65,000 bits, which will occupy ~8 Kbytes, trivial by
>> present-day hardware standards.  This would fit in SRAM.
>>
>> .<http://www.rctn.org/vs265/Kanerva-allerton2014.pdf>
>
>What does it mean, "tolerate considerable noise"?  Does that mean matching a lot of bits but not all?  That would probably be implemented by counting the matched bits, which can be fairly slow compared to other operations.  I suppose the various speedup methods can be used to improve that time.  I've looked at 12-bit adders and I've looked at adding lots of 1s, but never 12 bits' worth of 1s.
>
>But this depends on what is meant by the "noise".
This is a form of true associative memory, and so is far less fussy
than traditional content-addressed memory in that it will return the
nearest item in memory, not insisting that the match be exact.

SDM is built upon sparse random vectors in a vector space of, say,
100,000 dimensions.

By "sparse", they mean that typically only 2% of the vector elements
are non-zero.

The arithmetic operators (add, multiply, ...) are from abstract
algebra, and while they often resemble the operators of integer
arithmetic, they need not be exactly that.

In Abstract Algebra (also known as Modern Algebra), one defines and
studies algebras as mathematical objects in themselves, the algebra we
learned in high school being a special case.  It's pretty simple -
there are a handful of things that an algebra must possess or meet.
Zero and Unity (with specific properties) must exist.  Addition and
multiplication must be defined to meet the commutative and associative
laws, et al.  And so on.

I've invented special-purpose algebras to solve specific problems.

Anyway, with this in mind, reread the section about how the math
operations are defined.

It's all pretty simple, and one could implement this in an
ordinary FPGA.  And part of the design is deciding just how fussy the
association should be - how far apart can a lookup vector be from
something in the memory and still be returned as a match (with an
estimate of closeness), versus saying not found.

Joe Gwinn
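As a rough illustration of that last design decision, here is a C sketch of a thresholded nearest-match lookup; the sizes, the names, and the MAX_MISMATCH knob are invented for illustration, and the XOR "multiplication" shown is the usual binding operator of Kanerva's binary spatter codes, not something specific to this thread:

    #include <stdint.h>

    #define WORDS        157     /* ~10,000 bits per vector */
    #define N_ITEMS      1000    /* how many vectors are stored */
    #define MAX_MISMATCH 2000    /* design knob: how fussy the association is */

    static uint64_t memory[N_ITEMS][WORDS];   /* stored vectors, filled elsewhere */

    /* Hamming distance: number of bit positions where a and b differ. */
    int hamming(const uint64_t *a, const uint64_t *b)
    {
        int d = 0;
        for (int i = 0; i < WORDS; i++)
            d += __builtin_popcountll(a[i] ^ b[i]);
        return d;
    }

    /* "Multiplication" in binary spatter codes is commonly bitwise XOR,
       which is its own inverse: bind(x, bind(x, y)) recovers y. */
    void bind(uint64_t *out, const uint64_t *a, const uint64_t *b)
    {
        for (int i = 0; i < WORDS; i++)
            out[i] = a[i] ^ b[i];
    }

    /* Return the index of the nearest stored vector if it lies within
       MAX_MISMATCH differing bits, else -1 for "not found".  *dist
       receives the winner's distance, the estimate of closeness. */
    int lookup(const uint64_t *query, int *dist)
    {
        int best = -1, best_d = MAX_MISMATCH + 1;
        for (int i = 0; i < N_ITEMS; i++) {
            int d = hamming(query, memory[i]);
            if (d < best_d) { best_d = d; best = i; }
        }
        if (best >= 0) *dist = best_d;
        return best;
    }

MAX_MISMATCH is exactly the fussiness knob described above: raise it and the memory tolerates more noise, lower it and more lookups come back not-found.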
Reply by Rick C July 29, 2021
On Sunday, July 25, 2021 at 12:57:00 PM UTC-4, Joe Gwinn wrote:
> This article from 2014 is Kanerva's 7-page summary of the then-current
> state of Sparse Distributed Memory, Spatter Codes, and related areas,
> with an extensive reference list.
>
> The computational engine Kanerva describes here is something that the
> denizens of S.E.D. could implement in a small FPGA, should it be
> useful to do so.  A likely use would be content-addressed memory that
> can tolerate considerable noise.
>
> The 10,000 bits is the minimum that works reliably, and Kanerva speaks
> of using 65,000 bits, which will occupy ~8 Kbytes, trivial by
> present-day hardware standards.  This would fit in SRAM.
>
> .<http://www.rctn.org/vs265/Kanerva-allerton2014.pdf>
What does it mean, "tolerate considerable noise"?  Does that mean matching a lot of bits but not all?  That would probably be implemented by counting the matched bits, which can be fairly slow compared to other operations.  I suppose the various speedup methods can be used to improve that time.  I've looked at 12-bit adders and I've looked at adding lots of 1s, but never 12 bits' worth of 1s.

But this depends on what is meant by the "noise".

--

Rick C.

- Get 1,000 miles of free Supercharging
- Tesla referral code - https://ts.la/richard11209
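For what it's worth, the standard speedup for counting matched bits is a divide-and-conquer (SWAR) population count: an adder tree that collapses 64 one-bit fields in six steps per 64-bit word instead of looping over bits, and the same tree structure maps naturally onto FPGA logic. A sketch in C; none of this is from the article:

    #include <stdint.h>

    /* Sum adjacent fields of doubling width: 64 1-bit fields -> 32
       2-bit sums -> 16 4-bit sums -> 8 byte sums; the final multiply
       accumulates all eight byte sums into the top byte. */
    uint64_t popcount64(uint64_t x)
    {
        x = x - ((x >> 1) & 0x5555555555555555ULL);
        x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
        x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
        return (x * 0x0101010101010101ULL) >> 56;
    }

    /* Matched bits between two words: 64 minus the count of differing
       bits (a XOR b marks the positions that differ). */
    uint64_t matched_bits(uint64_t a, uint64_t b)
    {
        return 64 - popcount64(a ^ b);
    }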
Reply by John Doe July 29, 2021
Joe Gwinn <joegwinn@comcast.net> wrote: 

> This article from 2014 is Kanerva's 7-page summary of the then-current
> state of Sparse Distributed Memory, Spatter Codes, and related areas,
> with an extensive reference list.
>
> The computational engine Kanerva describes here is something that the
> denizens of S.E.D. could implement in a small FPGA, should it be
> useful to do so.  A likely use would be content-addressed memory that
> can tolerate considerable noise.
>
> The 10,000 bits is the minimum that works reliably, and Kanerva speaks
> of using 65,000 bits, which will occupy ~8 Kbytes, trivial by
> present-day hardware standards.  This would fit in SRAM.
>
> .<http://www.rctn.org/vs265/Kanerva-allerton2014.pdf>
Apparently the Chinese have a size problem with computing in their tonal language. A visiting Japanese programmer told me something to that effect, a long time ago.
Reply by Joe Gwinn July 25, 2021
This article from 2014 is Kanerva's 7-page summary of the then-current
state of Sparse Distributed Memory, Spatter Codes, and related areas,
with an extensive reference list.

The computational engine Kanerva describes here is something that the
denizens of S.E.D. could implement in a small FPGA, should it be
useful to do so.  A likely use would be content-addressed memory that
can tolerate considerable noise.

The 10,000 bits is the minimum that works reliably, and Kanerva speaks
of using 65,000 bits, which will occupy ~8 Kbytes, trivial by
present-day hardware standards.  This would fit in SRAM.
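A quick check of the storage arithmetic, rounding the 65,000 bits up to 65,536 so the width is a power of two (the type name is invented):

    #include <stdint.h>

    /* 65,536 bits packed into 64-bit words: 1024 words = 8192 bytes,
       i.e. 8 Kbytes per vector, easily held in SRAM or FPGA block RAM. */
    typedef struct {
        uint64_t w[65536 / 64];
    } hypervector;

    _Static_assert(sizeof(hypervector) == 8192, "one vector is 8 Kbytes");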


.<http://www.rctn.org/vs265/Kanerva-allerton2014.pdf>


Joe Gwinn