Ask objects to group themselves by similar meanings of words....


Ask objects to group themselves by similar meanings of words....

Squeak - Dev mailing list
Hi folks

I have extracted the various Greek and Latin roots from https://en.wikipedia.org/wiki/List_of_Greek_and_Latin_roots_in_English/A–G into Squeak objects.
Each object corresponds to one row of the various tables at that link.

For example, I have one object for:

<tr>
<td><b>abac-</b><sup id="cite_ref-2" class="reference"><a href="#cite_note-2">[2]</a></sup></td>
<td>slab</td>
<td>Greek</td>
<td><span lang="grc"><a href="https://en.wiktionary.org/wiki/%E1%BC%84%CE%B2%CE%B1%CE%BE#Ancient_Greek" class="extiw" title="wikt:ἄβαξ">ἄβαξ, ἄβακος</a></span> (<span title="Ancient Greek transliteration" lang="grc-Latn"><i>ábax, ábakos</i></span>), <span lang="grc"><a href="https://en.wiktionary.org/wiki/%E1%BC%80%CE%B2%CE%B1%CE%BA%CE%AF%CF%83%CE%BA%CE%BF%CF%82#Ancient_Greek" class="extiw" title="wikt:ἀβακίσκος">ἀβακίσκος</a></span> (<span title="Ancient Greek transliteration" lang="grc-Latn"><i>abakískos</i></span>)</td>
<td>abaciscus, <a href="/wiki/Abacus" title="Abacus">abacus</a>, <a href="/wiki/Abax" class="mw-redirect" title="Abax">abax</a>
</td></tr>


The cells are put into accessors corresponding to the headers of the table:

Root, Meaning, Origin, Etymology, English examples.

MyObject
      root -> abac
      meaning -> slab
      language -> greek
      etymology -> blah
      examples -> more-blah


Focusing on "English examples", I am interested in something like:

LatinRoots select: [:each | each englishExamples "have the same or similar meanings"]
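For illustration only: assuming #root and #meaning accessors like the ones above, and treating shared words between meanings as a crude stand-in for a real similarity measure, a first cut could look like this in a Workspace:

    | target |
    target := LatinRoots detect: [:each | each root = 'abac'].
    LatinRoots select: [:each |
        each ~~ target and: [
            each meaning substrings anySatisfy: [:word |
                target meaning substrings includes: word]]]

The interesting part is, of course, what to put in place of that word-overlap test.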

If anybody has pointers to projects that have grappled with that problem, I would appreciate a link.

answers like "Your question is completely nonsensical" are ok, too (:

thanks for your time.








Re: Ask objects to group themselves by similar meanings of words....

Christoph Thiede

So if I understand you correctly, your question is not actually related to Squeak/Smalltalk at all, but rather to the general problem of comparing English vocables by their semantics? Off-topic, but still an interesting topic :)


I can only give you a few rough keywords; maybe one of them can help you, and maybe you are already ten steps ahead of me :-)


If you only care about similarity by letters, the simplest solution might be something like calculating the longest common prefix of two strings and comparing its length with a threshold. (That term is googlable :)) However, this won't help you with pairs such as "acentric - acrocentric" unless you use some kind of fuzzy matching.
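As a minimal Workspace sketch of that idea (plain string comparison, nothing library-specific):

    lcp := [:a :b |
        | limit i |
        limit := a size min: b size.
        i := 1.
        [i <= limit and: [(a at: i) = (b at: i)]] whileTrue: [i := i + 1].
        i - 1].
    lcp value: 'centrifugal' value: 'centripetal'.  "6, so a threshold of, say, 4 would group them"
    lcp value: 'acentric' value: 'acrocentric'.     "only 2, so this pair slips through"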


If you actually care about semantic similarity, one approach could be a gigantic dictionary of synonyms. I'm sure there are some relevant databases on the web.

The problem with synonyms is that they compare words only pairwise, as either synonymous or not. But are "centrifugal" and "centripetal" actually synonyms? It totally depends on the perspective. Maybe you won't be happy with this approach.

A more sophisticated approach is word embeddings. The rough idea is to map each vocable to a large vector in which each component quantifies how related the vocable is to a specific topic. There's a lot of research around this field ...
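To make the vector idea concrete: given two such vectors (obtaining them is the hard part and would need pre-trained embedding data from outside Squeak), the usual comparison is cosine similarity, for example:

    cosine := [:v :w |
        (v with: w collect: [:x :y | x * y]) sum /
            ((v inject: 0 into: [:s :x | s + (x * x)]) sqrt *
             (w inject: 0 into: [:s :x | s + (x * x)]) sqrt)].
    cosine value: #(0.9 0.1 0.3) value: #(0.8 0.2 0.4)  "close to 1.0 for related words"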


PS: What are you trying to do with these results, eventually? :-)


Best,

Christoph


Carpe Squeak!

Re: Ask objects to group themselves by similar meanings of words....

Squeak - Dev mailing list

Hi Christoph

Thanks for the reply.

"Embeddings" is the interesting idea I was looking for. I was sort of hoping Squeak already had something like that. (:


What I am doing is teaching myself Latin. See. Write. Hear. Say. All via a Seaside app.

The imported objects will take care of the "See" part.
I can probably get them to import, or link to, how they sound via Wiktionary.

I would like the objects I imported to organize themselves in different ways. Alphabetical order is the obvious one and the one the Wikipedia pages use.

I think having these things "cluster around meaning/concept" and "cluster around sound" would be a useful pedagogical approach.


For example:
spherical things: ball, sun, planet, potato
colors
animals
big things
small things ...



When I import the medical roots (https://en.wikipedia.org/wiki/List_of_medical_roots,_suffixes_and_prefixes) and you are, say, studying the skeletal system in anatomy, you would get stuff clustered around bones.


Thanks for your reply.



Re: Ask objects to group themselves by similar meanings of words....

Stéphane Rollandin
In reply to this post by Squeak - Dev mailing list
Just a wild guess: self-organizing maps?

https://en.wikipedia.org/wiki/Self-organizing_map

Stef



Re: Ask objects to group themselves by similar meanings of words....

Squeak - Dev mailing list
That's interesting and I will read up on it as I get time.

It is an interesting problem, isn't it?

What I might try is having each LatinRoot object visit each of the others and, by some heuristic, have them determine whether they are "close" to each other.

For giggles, I can randomly assign an integer weight to each one, and if the absolute value of their difference is within X, then they are close.

That is an interesting problem in itself: how to efficiently (or not) have 800 objects visit the other 799, and how to store the set of "other close objects" in each object.

Then, as other heuristics of "close" are developed, I can re-use that machinery.
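For what it's worth, that random-weight version fits in a Workspace; #weight: and #closeOnes: are made-up accessors on the LatinRoot objects, and the comparison block is where a smarter heuristic could later be plugged in:

    | threshold |
    threshold := 5.
    LatinRoots do: [:each | each weight: 100 atRandom].
    LatinRoots do: [:each |
        each closeOnes: (LatinRoots select: [:other |
            other ~~ each and: [(other weight - each weight) abs <= threshold]])].
    "800 objects each visiting the other 799 is only ~640,000 cheap comparisons"

Swapping the comparison block for a meaning-based or sound-based one would leave the rest unchanged.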


Thanks for your reply!



Re: Ask objects to group themselves by similar meanings of words....

Herbert König
Hi,

I used this for mapping comments from testers to error causes (soldering, supplier, ...) in the production of electronics. I achieved a recognition rate of 80%, which was good because the computer looked at 100% of the comments while humans (= me :-) looked at 1%.

It's shaky and slow, and it depends a lot on the choice of similarity criterion and on how the text input is mapped to floats. 1250 training samples and 1200 neurons with 280 inputs (known words) each took 15 minutes to train. The code is from 2007, and I remember running it overnight because hardware was slower back then.

Also, it was not a pure SOFM. Lots of fun, though.

Anyway, gettimothy, if you want to give it a try, we can talk and I can share code.

Cheers,

Herbert


Re: Ask objects to group themselves by similar meanings of words....

Squeak - Dev mailing list
Herbert, that looks like fun and a wonderful addition to the mental toolbox.

I am at my factory job until Monday. I will touch base then or on Tuesday.

Thx




Re: Ask objects to group themselves by similar meanings of words....

Christoph Thiede
In reply to this post by Squeak - Dev mailing list

Sounds like an interesting project! A few years ago, I had a lot of fun with a somewhat similar project, setting up an object model for Latin grammar. It was really pleasant, and Latin is such a delightfully logical language! Unfortunately, I did not yet know Squeak/Smalltalk when I was working on that project ...


I wish you much joy and success with your project. Carpe Squicum! :-)


Best,

Christoph


Carpe Squeak!

Re: Ask objects to group themselves by similar meanings of words....

Squeak - Dev mailing list
In reply to this post by Herbert König
Hi Herbert,

I will get back to you after I read up on neural networks.

I found this on the web, and it looks interesting and challenging:

https://dai.fmph.uniba.sk/courses/NN/haykin.neural-networks.3ed.2009.pdf
cheers,





Re: Ask objects to group themselves by similar meanings of words....

Herbert König
Hi TTY,

I haven't yet found the proper way to address you :-)

Back then I got all my knowledge from Ulrich Nehmzow's "Mobile Robotics" (I found it fun) and from experiments.

Here is what is left from that (and I may be wrong):
  • The base element is a neuron (named after, but quite unlike, the billions of tiny things in the brain). It is simply multiply-and-add, which any DSP or graphics card can do much better than a PC. Audio guys like me call it a FIR filter: outputOfEachNeuron := (lotsOfCoefficients * sameNumberOfInputs) sum. FloatArray does that quickly. (There is a small sketch after this list.)
  • To get a limited output of the neuron you pass it through a so-called activation function: output := activationFunction value: output. There are tons of implementations, but a simple clamp will do: output := (output min: 1.0) max: -1.0.
  • Then you take, e.g., as many neurons as you have inputs, connect each neuron to all inputs, and you get as many outputs as you have inputs (or any number of outputs you desire); call this a layer. Start with random coefficients. This is the single-layer perceptron, which cannot even learn the XOR function.
  • Then you start teaching this thing by putting a gazillion samples with known desired outputs through the network, comparing the desired output to the actual output, and changing the coefficients (all numberOfInputs * numberOfNeurons of them) by gradient descent, scaled by a so-called learning rate, so that you get closer to the desired output. Repeat with the next training sample; one pass over all samples is an epoch. When done, start over with the first sample with a diminished learning rate (the next epoch). Use a ton of knowledge about training in batches etc. so that the network finds a general solution instead of learning your samples by heart and producing utterly stupid outputs on the first unknown sample. This needs inputs * neurons * samples * epochs multiplications, with each factor >> 1000.
  • Oops, so much work for an AI that cannot even learn a simple XOR?
  • More helps more, so after the first layer you put another layer, with every neuron of the second layer connected to all outputs of the first. Throw in much more computing power and use even more layers. Be clever and vary the number of neurons per layer, choose more complicated connection paths, etc.
  • All of the above is called supervised learning.
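A tiny Workspace sketch of such a single neuron, with made-up numbers (plain Arrays here; a FloatArray would do the elementwise product faster, as noted above):

    | inputs weights output clamp |
    inputs := #(0.2 -0.7 1.0).
    weights := #(0.5 0.1 -0.3).
    "multiply and add: the FIR-filter step"
    output := (weights with: inputs collect: [:w :x | w * x]) sum.
    "a crude clamping activation function"
    clamp := [:x | (x min: 1.0) max: -1.0].
    clamp value: output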


  • Here comes the SOFM suggested by Stef (a small sketch follows after this list):

  • Take neurons as above, which you may organize linearly, as a ring, as a torus, or in 3D. Take samples without knowing the desired outputs for them.
  • Feed the first sample to all neurons and find out which one has the strongest output. Define a neighbourhood depending on the topology chosen above.
  • Train the neuron with the strongest output and its neighbours on this sample.
  • Repeat for all samples.
  • Lower the learning rate and take a smaller neighbourhood.
  • Again train on all samples (this is the second epoch).
  • Repeat for many epochs.
  • Make it more complicated, e.g. by using samples with desired outputs, and put layers of other networks around it. I did.
  • In my perception, the SOFM has fallen out of favour.

This is (a) unsupervised learning (no desired outputs are known for the samples), and (b) similar new samples fire neurons in the same neighbourhood --> clustering.
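Here is roughly that recipe in a Workspace, with arbitrary sizes, random samples just to make it runnable, a linear topology, and the winner picked by smallest distance rather than strongest output (made-up details, not my production code):

    | rand neurons samples learnRate radius |
    rand := Random new.
    "10 neurons on a line, each holding a 4-component weight vector"
    neurons := (1 to: 10) collect: [:i | (1 to: 4) collect: [:j | rand next]].
    samples := (1 to: 25) collect: [:i | (1 to: 4) collect: [:j | rand next]].
    learnRate := 0.5.  radius := 2.
    samples do: [:sample | | winner |
        "best matching unit: the neuron whose weights are closest to the sample"
        winner := (1 to: neurons size) detectMin: [:i |
            ((neurons at: i) with: sample collect: [:w :x | (w - x) squared]) sum].
        "nudge the winner and its neighbours towards the sample"
        (winner - radius max: 1) to: (winner + radius min: neurons size) do: [:i |
            neurons at: i put: ((neurons at: i) with: sample
                collect: [:w :x | w + ((x - w) * learnRate)])]].
    "that was one epoch; lower learnRate and radius and run the samples again"

For real use, the samples would be vectors derived from the roots, and the learning rate and neighbourhood would shrink between epochs.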

Anybody, feel free to correct me without significantly complicating it; I'm no AI expert.

That is the so-called AI: a gigantic GIGO machine (garbage in, garbage out). You don't know what the network learns. So in image recognition they (Google?) made a heatmap of the pixels of an image that contributed most to the decision. Enter tons of tagged images scraped from the net. Most horses came from one page showing the text 'Copyright horsephotographer.com' at the bottom. This was what the net learned to be the image of a horse. That guy also had a photo of a cat: must be a horse. His children: ... horses. Horses without a copyright notice: ... cats :-)

Huskies were mostly photographed in snow, which got to be the main criterion for huskies.

Recruiting decisions from biased people were used to train networks for pre-selecting job applicants. --> Guess who got no interview.

So make sure you understand why your network learns what it learns; I have implemented a SOFM learn watcher to help me with that.


Cheers,


Herbert



Re: Ask objects to group themselves by similar meanings of words....

Stéphane Rollandin
In reply to this post by Squeak - Dev mailing list
> https://dai.fmph.uniba.sk/courses/NN/haykin.neural-networks.3ed.2009.pdf

This one seems very deep, and not the easiest resource to start with...
Maybe also have a look at this page:

https://www.superdatascience.com/blogs/the-ultimate-guide-to-self-organizing-maps-soms

Stef


Re: Ask objects to group themselves by similar meanings of words....

Herbert König
Hi Stef,

great resource, but I was too impatient to go through it. In the Nehmzow book the SOFM is 1.5 pages, including an image, and ready to implement. For training it uses plain distances instead of the squared distances in the article you mention (I just skimmed it for 5 minutes :-)), so that may lead to faster convergence. Definitely worth a try and easy to implement. The usual thing with neural nets (and AI algorithms in general) is that you have to implement them and try what works best for your problem. Unless that has changed since the last time I looked :-)

Very curious to see (and help) how it performs on the original problem of this thread!

Cheers,

Herbert


Re: Ask objects to group themselves by similar meanings of words....

Squeak - Dev mailing list
In reply to this post by Stéphane Rollandin
Thanks, Stef. Much appreciated.



Re: Ask objects to group themselves by similar meanings of words....

Squeak - Dev mailing list
In reply to this post by Herbert König

If you folks want the source and objects, I can get them to you via FTP.

I just copy-and-pasted the HTML tables from the pages here: https://en.wikipedia.org/wiki/List_of_Greek_and_Latin_roots_in_English

I pasted them into a class, then used a simple PEG grammar and PEG actor to extract the table rows into objects.

The objects are stored in class-variable OrderedCollections and, from my experience, these are not transferable via Monticello.

So anyone loading the .mcz will have to re-run the (very fast) import process.
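The storage pattern could look roughly like this, with all names invented for illustration (Roots as a class variable of a LatinRoot class, and the actual row parsing left to the PEG actor mentioned above):

    LatinRoot class >> roots
        "Lazily rebuild the class-variable collection on first access,
        since the instances themselves do not travel inside the .mcz."
        ^ Roots ifNil: [Roots := self importFromPastedHtml]

    LatinRoot class >> resetRoots
        "Force the next access to re-run the (very fast) import."
        Roots := nil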

Let me know, I will be happy to help.

t


