DhbVector: implementation and extensions

Re: DhbVector: implementation and extensions

SergeStinckwich
Enjoy your holidays, Stéphane!

On Thu, Apr 7, 2016 at 8:14 AM, stepharo <[hidden email]> wrote:

> Hello Nicolas
>
> If you want, we could add/include Smallapack into PolyMath.
> In the long term I would like to split the huge configurations into many
> smaller ones, and the user should be able to pick what he wants.
> Now I'm on holidays, trying to get far from a keyboard.
> Stef
>
> On 5/4/16 17:44, Nicolas Cellier wrote:
>
>
>
> 2016-04-05 11:52 GMT+02:00 Alexey Cherkaev <[hidden email]>:
>>
>> Hi Nicolas, Werner!
>>
>> I must admit, Pharo's lack of distinction at the naming level between
>> double-float Float numbers and single-float FloatArray took me by
>> surprise.
>>
>> Werner: #scaleBy: does not produce a new vector as a result, but modifies
>> the receiver. Whereas, I assume, it is expected that 'aNumber * aVector'
>> doesn't modify aVector and answers a newly allocated vector. #scaleBy: in
>> that sense is a lower level operation. Which brings me to the next point:
>>
>> Nicolas: yes, these operations are primitives. The reason for them is that
>> they are used repeatedly in iterative solvers (CG, BiCGStab), which can be
>> used in non-linear solvers (e.g. Newton's method), which can be used in ODE
>> solvers... You get the picture. But, I was thinking about those primitives:
>> they are not really useful to the user of the code. The user would want to
>> operate using #+, #* and wouldn't really care about producing (and trashing)
>> some intermediate results.
>>
>> So, here is my dilemma:
>>
>> I would want to make CG, BiCGStab and GMRES (this one is a bit trickier)
>> efficient, so that they can be used inside non-linear solvers. While I've
>> been playing with CG and BiCGStab in Common Lisp, I found that the solution
>> timing is dominated by matrix to vector multiplication (or equivalently,
>> linear operator to vector application), even for sparse systems. So, actual
>> vector arithmetic is fast. Yet, I'm afraid that producing lots of
>> intermediate short-lived results will strain garbage collector, once these
>> methods become a part of non-linear solver.
>>
>> Should I just implement these methods using existing vector arithmetic and
>> worry about optimisation later? I.e. make it work, make it right, make it
>> fast approach. (I do feel the answer to this question should be 'yes').
>>
>> Secondly, in terms of the future of PolyMath, should DhbVector include
>> BLAS-like primitives? It might feel like over-engineering at this point
>> though. Nice thing about these primitives, however, is that higher-level
>> operations can be easily implemented using these primitives. So, it
>> introduces nice level of abstraction. (and, let's say in the future, these
>> primitives can be implemented with parallelisation on GPU)
>>
>> Thanks for the help!
>> Alexey
>>
>
> BLAS certainly makes sense if you want basic linear algebra operations
> on numerical data to scale better
> (in terms of operations/second, especially for large matrices; it's not
> clear it pays off at 3x3 dimensions).
> If we want to compete with, say a basic Matlab interpreter or Numpy or R or
> etc..., then it's mandatory from my POV.
>
> If we are using a BLAS backend, then an idea is to create proxies and defer
> any operation until it can be matched to a native BLAS operation.
> This is implemented in C++ for example in BOOST ublas
> http://www.boost.org/doc/libs/1_60_0/libs/numeric/ublas/doc/
>
> May I remind you that I have developed a rather rough and dumb interface to
> BLAS/LAPACK for Smalltalk: Smallapack.
> see https://github.com/nicolas-cellier-aka-nice/smallapack and
> https://github.com/nicolas-cellier-aka-nice/smallapack/wiki
> Its status on the latest Pharo is unknown; the package depends on the old
> Compiler, but if there's interest in it I can inquire.
>
> Smallapack is not integrated in PolyMath because maybe it's too huge, and
> because interaction with dhb matrices/vector would have to be thought out
> first.
> I see it as a concurrent implementation, so one could choose either dhb or
> blas/lapack based matrix/vector implementation depending on the needs, and
> we could maintain a kind of minimal common API between the two.
>
> Nicolas
>
>>
>> On Tuesday, April 5, 2016 at 10:57:55 AM UTC+2, werner kassens wrote:
>>>
>>> Hi,
>>> re #scaleBy:
>>> A long time ago I made #* commutatively usable with any mixture of scalars,
>>> dhbvectors and dhbmatrices, and replaced all uses of #scaleBy: in dhb by #*.
>>> I thought using #* is conceptually simpler, and one could perhaps one day
>>> deprecate #scaleBy: or so.
>>> werner
>>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "SciSmalltalk" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [hidden email].
>> For more options, visit https://groups.google.com/d/optout.



--
Serge Stinckwich
UCBN & UMI UMMISCO 209 (IRD/UPMC)
Every DSL ends up being Smalltalk
http://www.doesnotunderstand.org/


Re: DhbVector: implementation and extensions

Nicolas Cellier
Hi, Serge, Stephane,
I think I need holidays too ;)

I can't say that Smallapack is integration ready. But it would be worth inquiring.
If you try, you will see that there are a lot of areas that duplicate DHB.
You will probably find incompatible protocols too.
That's normal: they both more or less solve the same linear algebra problems, just with different solutions
(pure Smalltalk vs an external library).

IMO, it's good to have both, but integrating Smallapack means refactoring like mad :) maybe in both libraries.
The first goal would be to converge on a common protocol in order to provide at least minimal interoperability.
But that's of course not enough.

The external library brings some advantages like
- space*time efficiency,
- man*years of already existing algorithms,
- carefully written with numerical accuracy in mind, etc.
But it costs extra complexity versus a Smalltalk-based implementation, for example:
- handling of double/single precision floats
- handling of specific memory layout (space optimization)
- accounting for different flavours of external library (the FORTRAN interface, the C interface, ...)

This mixes with intrinsic complexity like
- providing different algorithms for solving in real or complex domain
- applying different algorithms depending on matrix properties (symmetric, triangular, etc...)

In Smallapack, I used a simple inheritance scheme to solve these problems, mixed with some more or less complex dispatch rules.
If we want to add more properties, it might not scale, and there are other solutions, e.g. composition-based ones like that of ublas.
That means reifying the memory-layout properties and delegating to them for the memory-access-oriented protocol,
and maybe doing the same with mathematical properties, though not all combinations are possible...
I would not call this refactoring but a complete rewrite, so it ain't going to happen any time soon,
especially if DHB has nothing in this direction, and maybe we ain't gonna need it.
But it's worth keeping in mind when judging whether the quality is OK for eligibility in PolyMath.
Ideally these implementation details should be neutral with respect to the API.

There are other design decisions that are questionable and more easily accessible.
For example, I tried to provide compatibility with a lot of existing Smalltalk Matrix libraries.
That bloats the protocol and would better be split into separate compatibility packages.

In all cases, it's good to isolate modular units and try to have something composable.

So in a word, yes, we really should try, and yes, it's going to be a lot of work.
I'm interested in helping, but I want to avoid just doing things alone.
Every ounce of positive criticism is welcome :)

Stephane, keep this damned keyboard away for a while and enjoy your holidays.
We'll see when you're back.

Nicolas


Re: DhbVector: implementation and extensions

Didier Besset
In reply to this post by Alexey Cherkaev
Just a quick remark. The idea behind implementing operators is readability: "aScalar * aVector" is more readable than "aVector scaleBy: aScalar", at least to a mathematician or a physicist (like me ;-), for two reasons: 1) it reads naturally at a glance, and 2) the conventional (mathematical) order of operands is preserved. The latter argument is IMHO the most important.
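The distinction between the allocating "aScalar * aVector" and the mutating #scaleBy: can be sketched outside Smalltalk too. Here is a minimal, language-neutral illustration in Python (class and method names are mine, not PolyMath's):

```python
class Vector:
    """Illustrative stand-in for DhbVector (names are hypothetical)."""
    def __init__(self, values):
        self.values = list(values)

    def __mul__(self, scalar):
        # aVector * aScalar: answers a NEW vector; the receiver is untouched
        return Vector(v * scalar for v in self.values)

    # aScalar * aVector dispatches here, preserving the conventional
    # mathematical operand order that matters for readability
    __rmul__ = __mul__

    def scale_by(self, scalar):
        # the lower-level, mutating counterpart of #scaleBy:
        for i in range(len(self.values)):
            self.values[i] *= scalar

v = Vector([1.0, 2.0, 3.0])
w = 2.0 * v        # new vector [2.0, 4.0, 6.0]; v is unchanged
v.scale_by(2.0)    # v itself becomes [2.0, 4.0, 6.0]
```

In Pharo the commutativity is obtained by coercion/double dispatch between Number and the vector class rather than by `__rmul__`, but the observable behaviour is the same.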

Cheers,

Didier

On Mon, Apr 4, 2016 at 10:46 PM, Alexey Cherkaev <[hidden email]> wrote:
Hi Werner and Didier,

Thanks for the replies! I didn't expect the precision to deteriorate in FloatArray... Do you know what the reason is?

So, my understanding of DhbVector is that it represents a) a mathematical vector (conceptually); b) an array of some kind of numbers (including Float, DualNumber, etc.). It is a different entity from Array, as Array is a more generic collection of things (ordered and indexed). Am I correct? (My question whether a separate class was needed was based on the idea of using FloatArray, before realising it was losing precision.)

As for extra operations, I find myself using (while implementing the CG method):

y <- a * y          (1)
y <- y + a * x    (2)
y <- a * y + x    (3)
y <- y + x         (4)

with x, y - vectors and a - scalar.

And (1) is already in: the #scaleBy: method.
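In BLAS vocabulary, (1) is scal and (2) is axpy. A minimal sketch of all four as allocation-free, in-place updates (the function names are hypothetical, and plain Python lists stand in for vectors):

```python
def scal(a, y):          # (1) y <- a * y
    for i in range(len(y)):
        y[i] *= a

def axpy(a, x, y):       # (2) y <- y + a * x
    for i in range(len(y)):
        y[i] += a * x[i]

def aypx(a, x, y):       # (3) y <- a * y + x
    for i in range(len(y)):
        y[i] = a * y[i] + x[i]

def add(x, y):           # (4) y <- y + x
    for i in range(len(y)):
        y[i] += x[i]

# example: update y in place, no intermediate vector allocated
y = [1.0, 2.0]
x = [10.0, 20.0]
axpy(0.5, x, y)          # y is now [6.0, 12.0]
```

The point of the in-place form is precisely that an iterative solver can reuse the same few workspace vectors on every iteration.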


On Monday, April 4, 2016 at 9:26:35 PM UTC+2, Didier Besset wrote:
The class DhbVector has been introduced to implement common linear algebra operations: product with a scalar, addition, dot product and matrix products. Using float is not good enough for precision, as has already been addressed. As you pointed out, there was no need for additional instance variables; only additional methods and operators were added. Those methods are needed in all linear algebra algorithms.

I would also add that the methods and operators have been designed to improve readability.

For further explanation, I refer you to my book: Object-Oriented Implementation of Numerical Methods.

Cheers,

Didier

On Mon, Apr 4, 2016 at 4:10 PM, Alexey Cherkaev <[hidden email]> wrote:
Hi all,

I've been looking at the DhbVector implementation and a couple of questions came up:
  1. DhbVector is a subclass of Array. Why not FloatArray? I see PolyMath has a package for automatic differentiation. If AD is used, the user will need arrays of DualNumbers or Tapes (not implemented yet). In this case FloatArray might get in the way.
  2. Even more, why do we need a separate class for a vector at all? DhbVector doesn't have any instance or class variables. Vector operations can be implemented as extensions of Array/FloatArray.
How I would like to see it:
  • Have FloatArray implement all the necessary operations for Floats only. The idea is that this would be an optimised implementation. In Common Lisp I would implement operations as multi-methods with type hints inside specialisations for floating-point arrays. I don't know how to achieve this in Pharo and what kind of optimisation it can offer.
  • Have some kind of NumberArray, whose members are known to be some kind of numbers. So, it will be able to deal with DualNumbers, for example. Thus, it's less optimised, but more universal.
I am not sure if this is achievable and what kind of design should be chosen to implement this with the minimum code (once again, in Common Lisp I would probably resort to macros as the code is essentially the same and only type hints are changing).

On top of this, to efficiently implement some extra linear algebra methods, more BLAS-like operations on vectors are needed, i.e. operations that don't allocate memory but modify an existing vector. This can be used, for example, in the implementation of the Conjugate Gradients (CG) linear solver for positive-definite symmetric matrices, and of BiCGStab and GMRES for general matrices; these methods are efficient for sparse systems and can solve linear equations where the matrix is expressed only as a function. I'm keen to add these operations: I can start adding them to DhbVector, and if we decide to change the whole vector design, they can be moved to an appropriate class.
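To make the "matrix expressed as a function" point concrete, here is a compact matrix-free CG sketch (Python used purely for illustration; names such as apply_A are mine, not PolyMath's):

```python
def cg(apply_A, b, x0, tol=1e-10, max_iter=100):
    """Conjugate Gradients for a symmetric positive-definite operator,
    supplied only as a function x -> A*x (matrix-free)."""
    x = list(x0)
    r = [bi - axi for bi, axi in zip(b, apply_A(x))]   # r = b - A x
    p = list(r)
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]     # x <- x + a*p
        r = [ri - alpha * api for ri, api in zip(r, Ap)]  # r <- r - a*Ap
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# solve [[4, 1], [1, 3]] x = [1, 2] without ever materializing the matrix
apply_A = lambda v: [4 * v[0] + v[1], v[0] + 3 * v[1]]
x = cg(apply_A, [1.0, 2.0], [0.0, 0.0])
```

The two updates marked in the loop body are exactly the axpy-style primitives discussed in this thread; an in-place implementation would let them reuse the same workspace vectors on every iteration.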

Cheers,
Alexey

PS: scmutils, the extension of MIT-Scheme for studying Classical Mechanics, developed by G. Sussman, A. Radul and others from MIT, uses an interesting idea for vectors: there are two kinds of vectors, columns (vectors, with indices in superscripts) and rows (co-vectors, with indices in subscripts). The scalar (dot) product is obtained by multiplying a row by a column; if a column is multiplied by a row, this gives the outer product. Furthermore, element-wise operations are only allowed between vectors of the same kind.
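The row/column distinction is easy to mimic with two small classes whose product dispatches on the operand kinds (an illustrative Python sketch, not scmutils' actual API):

```python
class Row:
    """Co-vector (indices in subscripts, in scmutils' convention)."""
    def __init__(self, v):
        self.v = list(v)
    def __mul__(self, col):
        # row * column: inner (dot) product, yielding a scalar
        return sum(a * b for a, b in zip(self.v, col.v))

class Col:
    """Vector (indices in superscripts)."""
    def __init__(self, v):
        self.v = list(v)
    def __mul__(self, row):
        # column * row: outer product, yielding a matrix
        return [[a * b for b in row.v] for a in self.v]

dot = Row([1, 2]) * Col([3, 4])      # 1*3 + 2*4 = 11
outer = Col([1, 2]) * Row([3, 4])    # [[3, 4], [6, 8]]
```

Restricting element-wise operations to operands of the same kind would then be a simple type check in #+ and friends.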


Re: DhbVector: implementation and extensions

stepharo
In reply to this post by Nicolas Cellier


On 7/4/16 14:28, Nicolas Cellier wrote:
Hi, Serge, Stephane,
I think I need holidays too ;)

take them :)
I haven't stopped since August and I felt it.

I can't say that Smallapack is integration ready. But it would be worth inquiring.
If you try, you will see that there are a lot of areas that duplicate DHB.
You will probably find incompatible protocols too.
That's normal: they both more or less solve the same linear algebra problems, just with different solutions
(pure Smalltalk vs an external library).

IMO, it's good to have both, but integrating Smallapack means refactoring like mad :) maybe in both libraries.
The first goal would be to converge on a common protocol in order to provide at least minimal interoperability.
But that's of course not enough.
We can do that slowly.
First clean PolyMath, and then we can have fun refactoring (or not) :)


Re: DhbVector: implementation and extensions

Didier Besset
In reply to this post by Alexey Cherkaev
"producing lots of intermediate short-lived results will strain garbage collector"

My (admittedly old) knowledge about GCs is the contrary. If you are creating and deleting a lot of objects, they are just discarded at the next flip. Flip space is constant, so temporary objects do not strain the GC. What is a problem for the GC are the long-lived objects which get promoted out of flip space.

Cheers,

Didier
(sorry to be out of sync, but I am catching up on a week's load of messages...)

On Tue, Apr 5, 2016 at 11:52 AM, Alexey Cherkaev <[hidden email]> wrote:
Hi Nicolas, Werner!

I must admit, Pharo's non-distinction on the naming level between double-float for Float numbers and single-float FloatArray took me by surprise.

Werner: #scaleBy: does not produce a new vector as a result, but modifies the receiver. Whereas, I assume, it is expected that 'aNumber * aVector' doesn't modify aVector and answers a newly allocated vector. #scaleBy: in that sense is a lower level operation. Which brings me to the next point:

Nicolas: yes, these operations are primitive. The reason for them is that they are used repeatedly in iterative solvers (CG, BiCGStab) which can be used in non-linear solvers (e.g. Newton's method), which can be used in ODE sovlers.... You get the picture. But, I was thinking about those primitives: they are not really useful to the user of the code. The user would want to operate using #+, #* and wouldn't really care about producing (and trashing) some intermediate results.

So, here is my dilemma:

I would want to make CG, BiCGStab and GEMRES (this one is a bit trickier) efficient, so that they can be used inside non-linear solvers. While I've been playing with CG and BiCGStab in Common Lisp, I found that the solution timing is dominated by matrix to vector multiplication (or equivalently, linear operator to vector application), even for sparse systems. So, actual vector arithmetic is fast. Yet, I'm afraid that producing lots of intermediate short-lived results will strain garbage collector, once these methods become a part of non-linear solver.

Should I just implement these methods using existing vector arithmetic and worry about optimisation later? I.e. make it work, make it right, make it fast approach. (I do feel the answer to this question should be 'yes').

Secondly, in terms of the future of PolyMath, should DhbVector include BLAS-like primitives? It might feel like over-engineering at this point, though. A nice thing about these primitives, however, is that higher-level operations can be easily implemented on top of them, so they introduce a nice level of abstraction. (And, say in the future, these primitives could be implemented with parallelisation on a GPU.)
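[The archetypal such primitive is an in-place AXPY, y ← y + a·x. A minimal sketch as a DhbVector extension — the selector #addScaled:vector: is invented here for illustration, not part of DHB:

```smalltalk
"In-place y := y + a*x; allocates nothing, so repeated calls inside
 a solver loop create no garbage. Selector name is hypothetical."
DhbVector >> addScaled: aNumber vector: aVector
	1 to: self size do: [ :i |
		self at: i put: (self at: i) + (aNumber * (aVector at: i)) ]
```

The allocating operations the user sees (#+, #*) could then be layered on top, e.g. by copying the receiver and applying the primitive to the copy.]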

Thanks for the help!
Alexey


On Tuesday, April 5, 2016 at 10:57:55 AM UTC+2, werner kassens wrote:
Hi,
re #scaleBy:
long time ago i made #* commutatively usable with any mixture of scalars, dhbvectors and dhbmatrices and replaced all uses of #scaleBy: in dhb by #*. i thought using #* is conceptually simpler and one could perhaps one day deprecate #scaleBy: or so.
werner
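[For reference, the usual Smalltalk way to make #* accept any mixture of operand types is double dispatch; a schematic sketch, written from memory of Didier Besset's book, so the selector names may not match dhb exactly:

```smalltalk
"The receiver of #* defers to the argument, which knows how to
 combine itself with a vector: a Number answers a scaled copy,
 another vector could answer the dot product, a matrix a product,
 and so on. Selector names are illustrative."
DhbVector >> * anArithmeticObject
	^ anArithmeticObject productWithVector: self

Number >> productWithVector: aVector
	^ aVector collect: [ :each | each * self ]
```

Scalar-on-the-left expressions (3 * aVector) need an additional hook on Number's own #*, which is a separate piece of plumbing.]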

--
You received this message because you are subscribed to the Google Groups "SciSmalltalk" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
For more options, visit https://groups.google.com/d/optout.

Re: DhbVector: implementation and extensions

Didier Besset
In reply to this post by werner kassens-2
Please see my comment on GCs...

Also, I remember timing things with really huge matrices and vectors (dimension 1000) and did not notice much degradation: time goes like the square of the dimension, as expected. This was back in the last century (1999!), at a time when a 1GB machine was not even considered ;-), so machines of today should not have this problem.
Cheers,
Didier
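[Didier's quadratic scaling is easy to re-check in a current image with plain collections (no DHB classes assumed); doubling n should roughly quadruple the time:

```smalltalk
"Micro-benchmark for a dense matrix-vector product at dimension n;
 run with n := 400 and n := 800 and compare the two timings."
| n rows v |
n := 400.
rows := (1 to: n) collect: [ :i | Array new: n withAll: 1.0 ].
v := Array new: n withAll: 1.0.
Time millisecondsToRun: [
	rows collect: [ :row |
		(1 to: n) inject: 0 into: [ :s :j |
			s + ((row at: j) * (v at: j)) ] ] ]
```
]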

On Tue, Apr 5, 2016 at 12:19 PM, werner kassens <[hidden email]> wrote:
On Tuesday, April 5, 2016 at 11:52:47 AM UTC+2, Alexey Cherkaev wrote:
 #scaleBy: does not produce a new vector as a result, but modifies the receiver. Whereas, I assume, it is expected that 'aNumber * aVector' doesn't modify aVector and answers a newly allocated vector. #scaleBy: in that sense is a lower level operation. Which brings me to the next point:

Hi Alexey,
yes, that is why i did not silently deprecate #scaleBy: <stupid grin>. in dhb Didier often uses in-place calcs instead of producing intermediary objects that then get thrown away, but he often puts them in a bigger method where several of these object updates are done in one iteration. wouldn't this be a possibility instead of a method that just updates y ← y + a * x etc?
werner
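[A fused version of werner's suggestion, as it might appear inside one CG step — x, r, p, q, alpha are the usual CG quantities, hypothetical variable names here:

```smalltalk
"One loop updates the solution and the residual in place, instead
 of the two allocating expressions x + (alpha * p) and r - (alpha * q)."
1 to: x size do: [ :i |
	x at: i put: (x at: i) + (alpha * (p at: i)).
	r at: i put: (r at: i) - (alpha * (q at: i)) ]
```
]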


Re: DhbVector: implementation and extensions

Alexey Cherkaev
Hi Didier,

Thanks for your reply! I think the best way to proceed for me would be to implement everything using standard operations and optimise it only if necessary.

Best regards,
Alexey

