Enjoy your holidays Stéphane !
On Thu, Apr 7, 2016 at 8:14 AM, stepharo <[hidden email]> wrote:
> Hello nicolas
>
> if you want we could add/include Smallapack into polymath.
> In the long term I would like to split the huge configurations into many
> smaller ones, and the user should be able to pick what he wants.
> Now I'm on holidays trying to get far from a keyboard.
> Stef
>
> On 5/4/16 17:44, Nicolas Cellier wrote:
>
> 2016-04-05 11:52 GMT+02:00 Alexey Cherkaev <[hidden email]>:
>>
>> Hi Nicolas, Werner!
>>
>> I must admit, Pharo's non-distinction at the naming level between
>> double-float Float numbers and single-float FloatArray took me by
>> surprise.
>>
>> Werner: #scaleBy: does not produce a new vector as a result, but modifies
>> the receiver, whereas, I assume, it is expected that 'aNumber * aVector'
>> doesn't modify aVector and answers a newly allocated vector. #scaleBy: is
>> in that sense a lower-level operation. Which brings me to the next point:
>>
>> Nicolas: yes, these operations are primitive. The reason for them is that
>> they are used repeatedly in iterative solvers (CG, BiCGStab), which can be
>> used in non-linear solvers (e.g. Newton's method), which can be used in
>> ODE solvers... You get the picture. But I was thinking about those
>> primitives: they are not really useful to the user of the code. The user
>> would want to operate using #+ and #*, and wouldn't really care about
>> producing (and trashing) some intermediate results.
>>
>> So, here is my dilemma:
>>
>> I would want to make CG, BiCGStab and GMRES (this one is a bit trickier)
>> efficient, so that they can be used inside non-linear solvers. While I've
>> been playing with CG and BiCGStab in Common Lisp, I found that the
>> solution time is dominated by matrix-vector multiplication (or
>> equivalently, application of a linear operator to a vector), even for
>> sparse systems. So actual vector arithmetic is fast. Yet I'm afraid that
>> producing lots of intermediate short-lived results will strain the
>> garbage collector once these methods become part of a non-linear solver.
>>
>> Should I just implement these methods using the existing vector
>> arithmetic and worry about optimisation later? I.e. the "make it work,
>> make it right, make it fast" approach. (I do feel the answer to this
>> question should be 'yes'.)
>>
>> Secondly, in terms of the future of PolyMath, should DhbVector include
>> BLAS-like primitives? It might feel like over-engineering at this point,
>> though. A nice thing about these primitives, however, is that
>> higher-level operations can easily be implemented on top of them, so they
>> introduce a nice level of abstraction. (And, let's say in the future,
>> these primitives could be implemented with parallelisation on a GPU.)
>>
>> Thanks for the help!
>> Alexey
>
> BLAS certainly makes sense if you want basic linear algebra operations on
> numerical data to scale better (in terms of operations per second,
> especially for large matrices; it's not really sure it applies to 3x3
> dimensions). If we want to compete with, say, a basic Matlab interpreter,
> or NumPy, or R, etc., then it's mandatory from my POV.
>
> If we are using a BLAS backend, then one idea is to create proxies and not
> perform any operation until it better matches a native BLAS operation.
> This is implemented in C++, for example, in Boost uBLAS:
> http://www.boost.org/doc/libs/1_60_0/libs/numeric/ublas/doc/
>
> May I remind you that I have developed a rather rough and dumb interface
> to BLAS/LAPACK for Smalltalk: Smallapack. See
> https://github.com/nicolas-cellier-aka-nice/smallapack and
> https://github.com/nicolas-cellier-aka-nice/smallapack/wiki
> Its status on the latest Pharo is unknown; the package depends on the old
> Compiler, but if there's interest in it I can inquire.
>
> Smallapack is not integrated in PolyMath, maybe because it's too huge, and
> because interaction with the dhb matrices/vectors would have to be thought
> out first. I see it as a concurrent implementation: one could choose
> either the dhb or the BLAS/LAPACK-based matrix/vector implementation
> depending on the needs, and we could maintain a kind of minimal common API
> between the two.
>
> Nicolas
>
>> On Tuesday, April 5, 2016 at 10:57:55 AM UTC+2, werner kassens wrote:
>>>
>>> Hi,
>>> re #scaleBy:
>>> a long time ago i made #* commutatively usable with any mixture of
>>> scalars, dhbvectors and dhbmatrices, and replaced all uses of #scaleBy:
>>> in dhb by #*. i thought using #* is conceptually simpler and one could
>>> perhaps one day deprecate #scaleBy: or so.
>>> werner

--
Serge Stinckwich
UCBN & UMI UMMISCO 209 (IRD/UPMC)
Every DSL ends up being Smalltalk
http://www.doesnotunderstand.org/

--
You received this message because you are subscribed to the Google Groups "SciSmalltalk" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
For more options, visit https://groups.google.com/d/optout.
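The "make it work, make it right, make it fast" option Alexey describes can be sketched as a CG step written only with allocating vector operations. Pharo syntax, and a sketch only: `DhbVector` exists in DHB, but the method shape and the #dot: selector are illustrative assumptions, not actual PolyMath code. Every #+, #- and #* below answers a freshly allocated vector — exactly the temporary garbage under discussion — while the cost is still dominated by the a * p matrix-vector product.

```smalltalk
cgSolve: a with: b
	"Conjugate gradient for a symmetric positive definite matrix a.
	 Sketch: each arithmetic message allocates a new intermediate vector;
	 #dot: is an assumed selector answering the scalar product."
	| x r p ap rsOld rsNew alpha |
	x := DhbVector new: b size withAll: 0.
	r := b - (a * x).
	p := r copy.
	rsOld := r dot: r.
	b size timesRepeat: [
		ap := a * p.                       "matrix-vector product dominates the cost"
		alpha := rsOld / (p dot: ap).
		x := x + (alpha * p).              "two short-lived temporaries"
		r := r - (alpha * ap).
		rsNew := r dot: r.
		p := r + ((rsNew / rsOld) * p).
		rsOld := rsNew ].
	^ x
```

Written this way, the solver uses nothing beyond the ordinary vector protocol; the BLAS-like primitives would only change the inner loop, not the algorithm.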
Hi Serge, Stephane,

I think I need holidays too ;)

The first goal would be to converge to a common protocol in order to provide, at a minimum, inter-operability. But that's of course not enough. What an external BLAS/LAPACK backend brings is:
- space*time efficiency,
- man*years of already existing algorithms,
- code carefully written with numerical accuracy in mind, etc.

But it costs extra complexity versus a Smalltalk-based implementation, for example:
- accounting for different flavours of the external library (the FORTRAN interface, the C interface, ...).

This mixes with intrinsic complexity, like:
- providing different algorithms for solving in the real or complex domain,
- applying different algorithms depending on matrix properties (symmetric, triangular, etc.).

If we want to add more properties, it might not scale, and there are other solutions, i.e. composition-based ones like that of uBLAS. That means reifying the memory-layout properties and delegating to them for the memory-access-oriented protocol, and maybe doing the same with mathematical properties, though not all combinations are possible... I would not call this refactoring but a complete rewrite, so it ain't going to happen any time soon, especially since DHB has nothing in this direction, and maybe we ain't gonna need it. But it's worth keeping in mind when judging whether quality is OK or not for eligibility in PolyMath. Ideally these implementation details should be neutral for the API.

There are other design decisions that are questionable and more easily accessible. For example, I tried to provide compatibility with a lot of existing Smalltalk matrix libraries. That bloats the protocol and would better be split into separate compatibility packages.

In all cases, it's good to isolate modular units and try to have something composable.

So in a word: yes, we really should try, and yes, it's going to be a lot of work. I'm interested to help, but I want to avoid just doing things alone. Every ounce of positive criticism is welcome :)

Stephane, keep this damned keyboard away for a while and enjoy your holidays. We'll see when you're back.

Nicolas

2016-04-07 8:19 GMT+02:00 Serge Stinckwich <[hidden email]>:
> Enjoy your holidays Stéphane !
In reply to this post by Alexey Cherkaev
Just a quick remark. The idea behind implementing operators is readability: "aScalar * aVector" is more readable than "aVector scaleBy: aScalar", at least to a mathematician or a physicist (like me ;-), for two reasons: 1) just looking at it, and 2) the (mathematical) conventional order of operands is preserved. The latter argument is IMHO the most important.

Cheers,
Didier

On Mon, Apr 4, 2016 at 10:46 PM, Alexey Cherkaev <[hidden email]> wrote:
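The distinction Didier and Alexey are pointing at can be shown in a short workspace snippet. Pharo syntax; #asDhbVector and #scaleBy: follow DHB conventions, but treat the snippet as illustrative rather than verbatim PolyMath behaviour:

```smalltalk
| v w |
v := #(1 2 3) asDhbVector.

"Operator style: answers a newly allocated vector, leaves the receiver
 untouched, and preserves the conventional mathematical operand order."
w := 2 * v.

"Low-level style: mutates the receiver in place, allocating nothing."
v scaleBy: 2.

"Both v and w now hold (2 4 6), but only #scaleBy: modified v."
```

So #* reads like the mathematics, while #scaleBy: is the allocation-free building block the operators can be implemented with.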
In reply to this post by Nicolas Cellier
On 7/4/16 14:28, Nicolas Cellier wrote:
Take them :) I haven't stopped since August, and I felt it. We can do that slowly: first clean PolyMath, and then we can have fun refactoring (or not) :)
In reply to this post by Alexey Cherkaev
"producing lots of intermediate short-lived results will strain garbage collector"

My (admittedly old) knowledge about GCs is the contrary. If you are creating and discarding a lot of objects, they are simply reclaimed at the next flip. Flip space is constant, so temporary objects do not strain the GC. What is a problem for the GC are the long-lived objects which get promoted out of flip space.

Cheers,
Didier

(sorry to be out of sync, but I am catching up on a week's load of messages...)

On Tue, Apr 5, 2016 at 11:52 AM, Alexey Cherkaev <[hidden email]> wrote:
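Didier's generational argument (short-lived temporaries die cheaply at the next flip; only survivors promoted out of the young space are costly) suggests the allocating style is usually fine. For a genuinely hot inner loop, an in-place axpy-like primitive would still remove the allocations entirely. Both styles side by side, in Pharo syntax, with the in-place selector being a hypothetical name rather than an existing DHB method:

```smalltalk
"Allocating style: two short-lived temporaries per update.
 With a generational GC these normally die young and cost little."
x := x + (alpha * p).

"In-place style: a hypothetical BLAS axpy analogue, x := x + (alpha * p),
 mutating x and allocating no intermediate vector."
x add: p scaledBy: alpha.
```

The second form only pays off if profiling shows the temporaries actually hurt, which is the "optimise later" point made below.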
In reply to this post by werner kassens-2
Please see my comment on GCs... Also, I remember timing things with really huge matrices and vectors (dimension 1000) and did not notice much degradation: time goes like the square of the dimension, as expected. This was back in the last century (1999!), at a time when a 1GB machine was not even considered (;-), so machines of today should not have this problem.

Cheers,
Didier

On Tue, Apr 5, 2016 at 12:19 PM, werner kassens <[hidden email]> wrote:
Hi Didier,
Thanks for your reply! I think the best way to proceed for me would be to implement everything using standard operations and optimise it only if necessary.

Best regards,
Alexey

On 8 April 2016 at 09:49, Didier Besset <[hidden email]> wrote: