Hi TTY,
haven't yet found the proper way to address you :-)
Back then I got all my knowledge from Ulrich Nehmzow, "Mobile
Robotics" (I found it fun), and from experiments.
Here is what is left of that; I may be wrong:
- The base element is a neuron (named after, but quite unlike, the
billions of tiny things in the brain). It is simply multiply-and-add,
which any DSP or graphics card can do much better than a PC. Audio
guys like me call it an FIR filter. outputOfEachNeuron :=
(lotsOfCoefficients * sameNumberOfInputs) sum. FloatArray does
that quickly.
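
In Squeak that is a one-liner on FloatArray. A minimal sketch (the
numbers and names are made up by me):

  | coefficients inputs output |
  coefficients := FloatArray withAll: #(0.2 -0.5 0.1).  "the neuron's weights"
  inputs := FloatArray withAll: #(1.0 0.5 -1.0).
  "elementwise multiply, then sum: the same multiply-accumulate an FIR filter does"
  output := (coefficients * inputs) sum.
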
- To get a limited output of the neuron you apply a so-called
activation function to that sum, so output :=
activationFunction(weightedSum). There are tons of implementations,
but you could simply use output := min(1.0, max(output, -1.0))
(sorry for the Python syntax).
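
As a Squeak block, that clamp variant would be something like:

  | activation |
  activation := [:x | (x min: 1.0) max: -1.0].  "clamp to [-1.0 .. 1.0]"
  activation value: 2.7.   "answers 1.0"
  activation value: -0.3.  "answers -0.3"
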
- Then you take e.g. as many neurons as you have inputs, connect
each neuron to all inputs, and get as many outputs as you have
neurons (or any number of outputs you desire); call this a layer.
Start with random coefficients. This is the single-layer
perceptron, which cannot learn the XOR function.
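
A minimal sketch of such a layer in Squeak (three neurons for three
inputs, random start coefficients; all the names are mine):

  | rand inputs layer outputs |
  rand := Random new.
  inputs := FloatArray withAll: #(0.3 -0.7 1.0).
  "one weight row per neuron, same length as the inputs, random start"
  layer := (1 to: inputs size) collect: [:i |
      FloatArray withAll: ((1 to: inputs size) collect: [:j | 1.0 - (2.0 * rand next)])].
  "each neuron: weighted sum of all inputs, then the clamp activation"
  outputs := layer collect: [:weights | ((weights * inputs) sum min: 1.0) max: -1.0].
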
- Then you start teaching this thing by feeding it a gazillion
samples with known desired outputs, comparing the desired output to
the actual output, and changing the coefficients (all
numberOfInputs * numberOfNeurons of them) by gradient descent,
scaled by a so-called learn rate, so that you get closer to the
desired output. Repeat with the next training sample. When done
with all samples (call this an epoch), start over with the first
sample and a diminished learn rate (the next epoch). Use a ton of
knowledge on how to train in batches etc. so that the network finds
a general solution instead of learning your samples by heart and
producing utterly stupid outputs on the first unknown sample. This
needs inputs * neurons * samples * epochs multiplications, with
each factor >> 1000.
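
One such update step for a single neuron might look like this in
Squeak. This is the plain delta rule, assuming a linear output;
with an activation function its derivative would enter the error
term too:

  | weights inputs desired learnRate error |
  weights := FloatArray withAll: #(0.1 0.1 0.1).
  inputs := FloatArray withAll: #(1.0 0.0 1.0).
  desired := 1.0.
  learnRate := 0.05.
  error := desired - (weights * inputs) sum.  "how far off we are on this sample"
  "nudge every coefficient a little in the direction that lowers the error"
  weights := weights + (inputs * (learnRate * error)).
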
- Oops, so much work for an AI that cannot even learn a
simple XOR?
- More helps more, so after the first layer you put another layer,
with each input of every second-layer neuron connected to all
outputs of the first layer. Throw in much more computing power
and use even more layers. Be clever and vary the number of
neurons per layer, choose more complicated connection paths, etc.
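
Stacked, the forward pass looks like this. A sketch with random,
untrained weights: two inputs, two hidden neurons, one output,
which is the smallest shape that can learn XOR once trained:

  | rand mkLayer forward inputs hidden output |
  rand := Random new.
  mkLayer := [:neuronCount :inputCount |
      (1 to: neuronCount) collect: [:i |
          FloatArray withAll: ((1 to: inputCount) collect: [:j | 1.0 - (2.0 * rand next)])]].
  forward := [:layer :in |
      FloatArray withAll:
          (layer collect: [:weights | ((weights * in) sum min: 1.0) max: -1.0])].
  inputs := FloatArray withAll: #(0.0 1.0).
  hidden := forward value: (mkLayer value: 2 value: 2) value: inputs.
  output := forward value: (mkLayer value: 1 value: 2) value: hidden.
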
- All the above is called supervised learning.
- Here comes the SOFM (self-organizing feature map) suggested by Stef:
- Take neurons as above, which you may organize linearly, as a
ring, as a torus, or in 3D. Take samples without knowing the
desired outputs for them.
- Feed the first sample to all neurons and find out which one has
the strongest output. Define a neighbourhood of that neuron,
depending on the topology chosen above.
- Train the neuron with the strongest output and its neighbours
on this sample.
- Repeat for all samples.
- Lower the learn rate and take a smaller neighbourhood.
- Again train on all samples (this is the second epoch).
- Repeat for many epochs. (A sketch of this whole loop follows below.)
- Make it more complicated, e.g. by using samples with desired
outputs and putting layers of other networks around it. I did.
- In my perception the SOFM has fallen out of favour.
This is (a) unsupervised learning (no desired outputs known for
the samples) and (b) similar new samples fire neurons in the same
neighbourhood --> clustering.
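
Putting the steps above together, a minimal SOFM sketch in Squeak:
ring topology, 2-element samples, and all the numbers (counts,
rates, decay) made up by me:

  | rand neurons samples learnRate radius |
  rand := Random new.
  "8 neurons on a ring, each holding a 2-element weight vector"
  neurons := (1 to: 8) collect: [:i | FloatArray withAll: {rand next. rand next}].
  "two crude clusters of samples, no desired outputs anywhere"
  samples := {
      FloatArray withAll: #(0.1 0.2).
      FloatArray withAll: #(0.2 0.1).
      FloatArray withAll: #(0.9 0.8).
      FloatArray withAll: #(0.8 0.9) }.
  learnRate := 0.5.
  radius := 2.
  1 to: 20 do: [:epoch |
      samples do: [:s |
          | winner |
          "winner = neuron whose weights are closest to the sample"
          winner := (1 to: neurons size) detectMin: [:i |
              | d | d := (neurons at: i) - s. (d * d) sum].
          "train the winner and its ring neighbours toward the sample"
          winner - radius to: winner + radius do: [:k |
              | idx w |
              idx := k - 1 \\ neurons size + 1.  "wrap around the ring"
              w := neurons at: idx.
              neurons at: idx put: w + ((s - w) * learnRate)]].
      "next epoch: lower the learn rate, shrink the neighbourhood"
      learnRate := learnRate * 0.9.
      radius := radius - 1 max: 0].

After a few epochs neurons that sit near each other on the ring end
up near the same cluster of samples, which is the clustering effect
from (b) above.
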
Anybody feel free to correct me without significantly
complicating it. I'm no AI expert.
That is the so-called AI: a gigantic GIGO machine (garbage in,
garbage out). You don't know what the network learns. So in image
recognition they (Google?) made a heatmap of the pixels of an
image which contributed most to the decision. In come tons of
tagged images scraped from the net. Most horses came from one page,
showing the text 'Copyright horsephotgrapher.com' at the bottom.
This was what the net learned to be the image of a horse. That guy
also had a photo of a cat. Must be a horse. His children .... a
horse. Horses without a copyright notice ... cats :-)
Huskies were mostly photographed in snow, which became the main
criterion for huskies.
Recruiting decisions from biased people were used to train
networks for pre-selecting job applicants. --> Guess who got no
interview.
So make sure you understand why your network learns what it learns;
I have implemented a SOFM learn watcher to help me with that.
Cheers,
Herbert
On 08.04.2020 at 17:41,
gettimothy via Squeak-dev wrote:
Hi Herbert,
I will get back to you after I read up on Neural Networks.
I found this on the web, and it looks interesting and
challenging.
cheers,
t