Neural Networks in Pharo


Neural Networks in Pharo

Oleksandr Zaitsev
Hello!

Several weeks ago I announced my NeuralNetworks project. Thank you very much for your ideas and feedback. As suggested, I wrote examples for every class and tested my perceptron on linearly separable logical functions.
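For illustration, here is roughly what one of those tests looks like: a minimal sketch of training on the AND truth table (learn:target: and value: are hypothetical message names, not necessarily the ones in the project):

  | perceptron |
  perceptron := SLPerceptron new.
  "Train on the AND truth table, which is linearly separable."
  100 timesRepeat: [
    #( #(0 0 0) #(0 1 0) #(1 0 0) #(1 1 1) ) do: [ :row |
      perceptron learn: (row first: 2) target: row last ] ].
  "After training, (perceptron value: #(1 1)) should answer 1."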

I have just completed a post about my implementation of a single-layer perceptron: https://medium.com/@i.oleks/single-layer-perceptron-in-pharo-5b13246a041d. It has a detailed explanation of every part of the design and illustrates different approaches to implementation.

Please tell me what you think.

Are my class diagrams correct or did I mess something up?
Is there a design pattern that I should consider?
Do you think that I should do something differently?
Should I improve the quality of my code?

Yours sincerely,
Oleksandr

Re: Neural Networks in Pharo

Ben Coman


Hi Oleks,

(Sorry for the delayed response. I saw your other post in pharo-dev and found this sitting in my drafts from last week.)

Nice article and an interesting read. I only did neural networks in my undergrad 25 years ago, so my knowledge is a bit vague.
I think your design reasoning is fine for this stage. Down the track you might consider that OO is about hiding implementation details. So, just a vague idea: you might have SLPerceptron store neuron data internally in arrays that a GPU can process efficiently, but when "SLPerceptron>>#at:" is asked for a neuron, it constructs a "real" neuron object whose methods forward to the arrays in the SLPerceptron.
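Something vaguely like this (a sketch only; NeuronView and the selectors are made-up names, just to show the forwarding):

  SLPerceptron >> at: anIndex
    "Answer a lightweight facade over the internal arrays."
    ^ NeuronView on: self index: anIndex

  NeuronView >> weights
    "The view holds no state of its own; every request is
    forwarded back to the owning perceptron's flat arrays."
    ^ perceptron weightsAt: index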

cheers -ben

Re: Neural Networks in Pharo

Johann Hibschman
I definitely agree with this. Performance-wise, I expect it to be terrible to model each individual neuron as an object. The logical unit (IMHO) should be a layer of neurons, with matrix weights, vector biases, and vector output.

Similarly, I think you'd be better off keeping the bias as a separate value, rather than concatenating a 1 to the input vector. I know it's what they do when presenting the math, but it means you'll be allocating a new vector each time through.
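Concretely, a whole layer's forward pass then collapses to one matrix-vector product plus a bias vector. Just a sketch; the Layer class and its variables are hypothetical, and I'm assuming a matrix library that understands * and +, with #activate: applying the activation element-wise:

  Layer >> forward: inputVector
    "weights is an n-by-m matrix and bias an n-element vector;
    keeping the bias separate means no 1 is prepended to the
    input and no new vector is allocated on each pass."
    ^ self activate: (weights * inputVector) + bias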

Finally, I suspect that you'll eventually want to move the learning rate (and maybe even the learn methods) out of the neuron and into a dedicated "training"/"learning"/"optimizing" object. Perhaps overkill for the perceptron, but for a multilayer network, you definitely want to control the learning rate from the outside.
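Shaped something like this (hypothetical names throughout):

  Trainer >> step: aNetwork input: x target: y
    "The trainer, not the neuron, owns the learning rate, so a
    schedule or optimizer can change it between iterations."
    | gradients |
    gradients := aNetwork gradientsForInput: x target: y.
    aNetwork applyCorrection: gradients * learningRate negated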

I've been working in TensorFlow, so my perceptions may be a bit colored by that framework.

Cheers,
Johann


Re: Neural Networks in Pharo

Oleksandr Zaitsev
Hello,

Thanks a lot for your advice! It was very helpful and educational (for example, I thought that we store biases in the weight matrix and prepend a 1 to the input to make things faster, but now I see why it's actually slower that way).

I've implemented a multi-layer neural network as a linked list of layers that propagate the input and the error from one to another, similar to the Chain of Responsibility pattern. Also, I now represent biases as separate vectors. The LearningAlgorithm is a separate class with Backpropagation as its subclass (at this point the network can only learn through backpropagation, but I'm planning to change that). I'm trying to figure out how the activation and cost functions should be connected; for example, cross-entropy works best with logistic sigmoid activation. I would like to give users the freedom to use whatever they want (plug in whatever you like and see what happens), but that can be very inefficient, because some time-consuming parts of the activation and cost derivatives cancel each other out.
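The sigmoid/cross-entropy pair is the classic example: the derivative factors cancel, so the output-layer delta reduces to a subtraction, while a naively pluggable pairing would compute both derivatives in full. A sketch with hypothetical names:

  OutputLayer >> deltaFor: target
    "Fused sigmoid activation + cross-entropy cost: the derivative
    terms cancel and the delta is simply (output - target). The
    generic pluggable form would multiply the cost derivative by
    the activation derivative, which costs more per example."
    ^ output - target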

Also, there is an interface for setting the learning rate for the whole network, which can be used both to choose the learning rate prior to learning and to change it after each iteration. I am planning to implement some optimization algorithms that would automate the choice of learning rate (AdaGrad, for example), but that would require a somewhat different design (maybe I will implement the Optimizer, as you suggested).
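For AdaGrad, for instance, the optimizer would keep a running sum of squared gradients and shrink each weight's effective rate accordingly. A sketch; the names are hypothetical, and I'm assuming element-wise vector arithmetic:

  AdaGradOptimizer >> correctionFor: gradient
    "Accumulate squared gradients; the effective learning rate
    decays as baseRate / sqrt(squaredSum + epsilon)."
    squaredSum := squaredSum + (gradient * gradient).
    ^ gradient * baseRate / (squaredSum + 1.0e-8) sqrt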

I'm attaching two images with UML diagrams describing my current implementation. Could you please tell me what you think about this design? The first image is a class diagram that shows the whole architecture, and the second one is a sequence diagram of backpropagation.

mlnn.png <http://forum.world.st/file/n4943698/mlnn.png>
backprop.png <http://forum.world.st/file/n4943698/backprop.png>

Sincerely yours,
Oleksandr

Re: Neural Networks in Pharo

abergel
Continue to push that topic, Oleks. You are on the right track!

Alexandre


Re: Neural Networks in Pharo

francescoagati
Hi Oleks,
Is there a way to install the neural network package via Metacello?


Re: Neural Networks in Pharo

Oleksandr Zaitsev
Hello,

There isn't one yet, but I will try to create it today. I will let you know.

Cheers,
Oleks


Re: Neural Networks in Pharo

francescoagati
thanks ;-)


Re: Neural Networks in Pharo

Oleksandr Zaitsev
Hello,

I have finally added a configuration to the NeuralNetwork project. Now you can use this Metacello script to load it into your Pharo image:

Metacello new
  repository: 'http://smalltalkhub.com/mc/Oleks/NeuralNetwork/main';
  configuration: 'MLNeuralNetwork';
  version: #development;
  load.

Sorry for the delay.

Oleks


Re: Neural Networks in Pharo

francescoagati
Now everything works well with the latest version of PolyMath.
Thanks :-)


Re: Neural Networks in Pharo

Oleksandr Zaitsev
I have also added a PolyMath dependency to the configuration file, so the Metacello script should now load the latest stable versions of everything automatically.
Thanks for pointing it out.
