Neural Networks in Pharo


Neural Networks in Pharo

Oleksandr Zaitsev
Hello.

I'm implementing neural networks in Pharo as part of my thesis (object-oriented approaches to neural network implementation). It would be really nice to receive some feedback, so if you have any ideas, recommendations, or critique, please write to me. What existing projects or common pitfalls should I consider? Maybe you could recommend a good book or paper on a related topic. I would be grateful for any kind of feedback.

Here is the repository: http://smalltalkhub.com/#!/~Oleks/NeuralNetwork. It's not much, but I'm working on it.

Yours sincerely,
Oleksandr

Re: Neural Networks in Pharo

SergeStinckwich

You should talk to Guillermo Polito. He is working on neural networks in Pharo.
Are you implementing Deep Learning algorithms?

Regards,

--
Serge Stinckwich
UCN & UMI UMMISCO 209 (IRD/UPMC)
Every DSL ends up being Smalltalk
http://www.doesnotunderstand.org/


Re: Neural Networks in Pharo

Oleksandr Zaitsev
In reply to this post by Oleksandr Zaitsev
I started by implementing some simple threshold neurons. The current goal is a multilayer perceptron (similar to the one in scikit-learn), and maybe other kinds of networks, such as self-organizing maps or radial basis networks.
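
For illustration, a minimal sketch of such a threshold neuron could look like this in Pharo (the class and selector names below are made up, they are not taken from the package above):

Object subclass: #ThresholdNeuron
    instanceVariableNames: 'weights bias'
    classVariableNames: ''
    package: 'NeuralNetwork-Sketch'.

ThresholdNeuron >> weights: aCollection bias: aNumber
    weights := aCollection.
    bias := aNumber

ThresholdNeuron >> value: inputs
    "Weighted sum of the inputs followed by a hard threshold."
    | sum |
    sum := (inputs with: weights collect: [ :x :w | x * w ]) sum + bias.
    ^ sum > 0 ifTrue: [ 1 ] ifFalse: [ 0 ]

With hand-picked weights it already computes boolean AND:

    neuron := ThresholdNeuron new weights: #(1 1) bias: -1.5.
    neuron value: #(1 1). "=> 1"
    neuron value: #(1 0). "=> 0"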

I could try to implement a deep learning algorithm, but the big issue there is time complexity. It would probably require the use of a GPU or some advanced "tricks", so I should start with something smaller.

Also, I want to try different design approaches, including ones that are not based on highly optimized vector algebra (I know that it might not be the best idea, but I want to try it and see what happens). For example, a network where each neuron is an object (normally the whole network is represented as a collection of weight matrices). It might turn out to be very slow, but more object-friendly. For now it's just an idea, but to try something like that I would only need a small network with 1-100 neurons.
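
A rough sketch of that "each neuron is an object" idea, with connections as first-class objects instead of entries in a weight matrix (all names below are hypothetical, nothing here comes from an existing package):

Object subclass: #NeuronConnection
    instanceVariableNames: 'source weight'
    classVariableNames: ''
    package: 'NeuralNetwork-Sketch'.

NeuronConnection >> source: aNeuron weight: aNumber
    source := aNeuron.
    weight := aNumber

NeuronConnection >> contribution
    "Weighted signal coming from the source neuron."
    ^ source output * weight

Object subclass: #ObjectNeuron
    instanceVariableNames: 'incoming bias output'
    classVariableNames: ''
    package: 'NeuralNetwork-Sketch'.

ObjectNeuron >> incoming: aCollection bias: aNumber
    incoming := aCollection.
    bias := aNumber

ObjectNeuron >> output
    ^ output

ObjectNeuron >> output: aNumber
    "Used for input neurons whose value is set from outside."
    output := aNumber

ObjectNeuron >> compute
    "Sigmoid of the bias plus the contributions of all incoming connections."
    | sum |
    sum := incoming inject: bias into: [ :acc :c | acc + c contribution ].
    ^ output := 1.0 / (1.0 + sum negated exp)

Wiring a tiny network by hand then looks like:

    in1 := ObjectNeuron new output: 1.
    in2 := ObjectNeuron new output: 0.
    hidden := ObjectNeuron new
        incoming: { NeuronConnection new source: in1 weight: 2.0.
                    NeuronConnection new source: in2 weight: -1.0 }
        bias: -0.5.
    hidden compute. "=> about 0.82"

The matrix-based design would instead keep one weight matrix per layer and compute a whole layer with a single matrix-vector product; the object design trades that speed for the ability to inspect, script, and rewire individual neurons.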

Yours sincerely,
Oleksandr

Re: Neural Networks in Pharo

Guillermo Polito
Hi Oleksandr,

I'm working half-time on a team doing simulations of spiking neural networks (in Scala). Since the topic is "new" to me, I started following a MOOC on traditional machine learning and putting some of my code here:


Also, I wanted to experiment on handwritten characters with the MNIST dataset (yann.lecun.com/exdb/mnist/), so I wrote a reader for the IDX format to load the dataset.
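
For reference, the IDX header is just two zero bytes, a type code, the number of dimensions, and then one 4-byte big-endian integer per dimension, followed by the raw data. A rough, self-contained sketch of reading it (this is not Guillermo's reader; the file name assumes you have downloaded and gunzipped the MNIST training images):

    | stream readUint32 magic nDimensions sizes pixels |
    stream := 'train-images-idx3-ubyte' asFileReference binaryReadStream.
    readUint32 := [ (stream next: 4) inject: 0 into: [ :acc :byte | acc * 256 + byte ] ].
    magic := readUint32 value. "16r00000803 for the image file"
    nDimensions := magic \\ 256. "the last byte of the magic number"
    sizes := (1 to: nDimensions) collect: [ :each | readUint32 value ].
    pixels := stream next: (sizes inject: 1 into: [ :a :b | a * b ]). "one byte per pixel"
    stream close.
    sizes "=> #(60000 28 28) for the training images"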


If you want I can take a look and we can discuss further :)

Guille




Re: Neural Networks in Pharo

abergel
In reply to this post by Oleksandr Zaitsev
Hi Oleksandr!

I had a look at your code a couple of weeks ago, as I also have some interest in neural networks and genetic algorithms (I will start a lecture here at my university on this topic).

I think that your code needs examples. Maybe you can add some simple ones, such as learning boolean expressions, or a more complete example on recognizing handwriting. There is a Python implementation here that does exactly that: http://neuralnetworksanddeeplearning.com/chap1.html

My code is available here: http://smalltalkhub.com/#!/~abergel/NeuralNetworks
I wrote this code to support some aspects of my lecture. Do not take it as an absolute answer. Having concrete and relevant examples is important, and my code does not have such examples.
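
As a sketch of the simplest kind of example mentioned above, here is a self-contained script (using no names from either package) that trains a single perceptron on the boolean AND function with the classic perceptron learning rule:

    | samples weights bias rate predict |
    samples := { #(0 0) -> 0. #(0 1) -> 0. #(1 0) -> 0. #(1 1) -> 1 }.
    weights := #(0 0).
    bias := 0.
    rate := 0.1.
    predict := [ :inputs |
        ((inputs with: weights collect: [ :x :w | x * w ]) sum + bias) > 0
            ifTrue: [ 1 ] ifFalse: [ 0 ] ].
    30 timesRepeat: [
        samples do: [ :assoc |
            | error |
            error := assoc value - (predict value: assoc key).
            weights := weights with: assoc key collect: [ :w :x | w + (rate * error * x) ].
            bias := bias + (rate * error) ] ].
    samples collect: [ :assoc | predict value: assoc key ] "=> #(0 0 0 1)"

With a step activation a single neuron only handles linearly separable functions, so XOR would already need the hidden layer of the multilayer perceptron Oleksandr mentions.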

Push push your code! We need it!

Cheers,
Alexandre

-- 
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.





Re: Neural Networks in Pharo

SergeStinckwich
On Tue, Mar 21, 2017 at 2:38 PM, Alexandre Bergel
<[hidden email]> wrote:
> Hi Oleksandr!

Hi all,

> I had a look at your code a couple of weeks ago, as I also have some interest
> in neural networks and genetic algorithms (I will start a lecture here at my
> university on this topic).

This is great!

> I think that your code needs examples. Maybe you can add some simple ones,
> such as learning boolean expressions, or a more complete example on
> recognizing handwriting. There is a Python implementation here that does
> exactly that: http://neuralnetworksanddeeplearning.com/chap1.html
>
> My code is available here:
> http://smalltalkhub.com/#!/~abergel/NeuralNetworks
> I wrote this code to support some aspects of my lecture. Do not take it as
> an absolute answer. Having concrete and relevant examples is important, and
> my code does not have such examples.

It would be nice to have something similar to scikit-learn in Python:
http://scikit-learn.org/stable/

> Push push your code! We need it!

Yes, we definitely need something like that!
Have a look at PolyMath, which already provides a lot of math functionality:
https://github.com/PolyMathOrg/PolyMath

I just released v0.85 of PolyMath.

Regards,
--
Serge Stinckwich
UCN & UMI UMMISCO 209 (IRD/UPMC)
Every DSL ends up being Smalltalk
http://www.doesnotunderstand.org/


Re: Neural Networks in Pharo

Offray Vladimir Luna Cárdenas
In reply to this post by abergel

Nice to see this development. On the examples issue raised by Alexandre, maybe Grafoscopio [1] could be useful to combine prose with code. I have written a new user manual [2] and I'm going to work on it this Summer of Code [3].

[1] http://mutabit.com/grafoscopio/index.en.html
[2] http://mutabit.com/repos.fossil/grafoscopio/doc/tip/Docs/En/Books/Manual/manual.pdf
[3] http://gsoc.pharo.org/#topic-grafoscopio-literate-computing-and-reproducible-research-for-pharo

Cheers,

Offray




Re: Neural Networks in Pharo

abergel
In reply to this post by Oleksandr Zaitsev
Having a neuron as an object is exactly what I have in my implementation.
Sounds exciting!

Share your code when ready! Eager to try it!

Alexandre
--
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.






Re: Neural Networks in Pharo

abergel
In reply to this post by Guillermo Polito
Excellent Guillermo!
I also wanted to play with the MNIST dataset. I will try your code.

Alexandre
-- 
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.


