Mathematically, we might expect 1-dimensional vectors to act like scalars and nx1 matrices to act like vectors. There are some important places where the current implementation seems to violate these expectations:
1-dimensional vectors are not scalars with respect to vectors:

    2 * #(1 2 3) asPMVector. "a PMVector(2 4 6)"
    #(2) asPMVector * #(1 2 3) asPMVector. "Error"

1-dimensional vectors are not scalars with respect to matrices, and in fact have undefined behavior:

    2 * (PMMatrix rows: #((1 2) (3 4))). "a PMVector(2 4) a PMVector(6 8)"
    #(2) asPMVector * (PMMatrix rows: #((1 2) (3 4))). "a PMVector(2 4)"

n-dimensional vectors are row vectors when right-multiplied by an nx1 matrix, but column vectors when left-multiplied by a 1xn matrix, meaning they cannot be used as either a 1xn or an nx1 matrix in code that expects one or the other: the dimensions will be unpredictable.

    #(1 2) asPMVector * (PMMatrix rows: #((1) (2))). "a PMVector(5)"
    (PMMatrix rows: #((1 2))) * #(1 2) asPMVector. "a PMVector(5)"

1x1 matrices are not scalars with respect to vectors:

    (PMMatrix rows: #((2))) * #(1 2) asPMVector. "Error"

1xn * nx1 matrix multiplication produces a matrix, not a scalar, which behaves differently from a scalar, as per the above:

    (PMMatrix rows: #((1 2))) * (PMMatrix rows: #((1) (2))). "a PMVector(5) <-- actually a 1x1 matrix"

As long as one works only with scalars and vectors, OR only with scalars and matrices, things seem fine. It seems like either matrix-vector operations should throw errors, or vectors should behave consistently as nx1 matrices during matrix math. It may also make sense to make 1-dimensional vectors and 1x1 matrices either just convert to scalar Numbers, or make them polymorphic with #= etc., but I'm not sure.

You received this message because you are subscribed to the Google Groups "SciSmalltalk" group. To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email]. For more options, visit https://groups.google.com/d/optout.
One possible solution is to reduce a size-1 vector to a scalar, exactly like a Fraction with denominator 1 is reduced to an Integer (for example, 2/1 -> 2). But in this case, the scalar would have to be polymorphic with a vector (understand the whole vector protocol...).

2017-05-18 20:56 GMT+02:00 Evan Donahue <[hidden email]>:
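To make the idea concrete, the reduction could look something like this Python sketch (the function name is invented for illustration; nothing here is PolyMath API):

```python
def as_reduced(v):
    """Reduce a size-1 vector to its single element, by analogy with
    Smalltalk, where 2/1 answers the Integer 2 rather than a Fraction."""
    return v[0] if len(v) == 1 else v

print(as_reduced([5]))     # 5
print(as_reduced([1, 2]))  # [1, 2]
```

The catch raised above remains: code that receives the reduced result may still send it vector messages, so the scalar would have to understand the whole vector protocol.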
Hi Evan,

reducing 1-dimensional vectors to scalars is not such a good idea:

1. the whole vector protocol also includes the Array protocol, plus the ArrayedCollection, SequenceableCollection and Collection protocols.
2. a 1-dimensional vector _is_ something different from a scalar! eg a whole lot of methods take an Array as argument, even if it has only one dimension, but usually they wouldn't work with a scalar. hence you can't "expect 1-dimensional vectors to act like scalars".

re: #(2) asPMVector * (PMMatrix rows: #((1 2) (3 4))). "a PMVector(2 4)"

yes, this is in a way a problem, and the problem is here:

    #(2 3) asPMVector * #(1) asPMVector. "2"

there exists the occasional warning here & there that for speed reasons no size check is made - well, <g> probably more there than here - but unfortunately this warning does not at the moment exist in PMVector>>productWithVector:. if it had a size check, this result would not happen. but it would slow down the calculations a tiny bit, since productWithVector: is called very, very often. i would prefer if these things were not made slower than necessary; one just has to pay attention and not produce nonsense code. so that your (and my) example also produces an error, you would have to change PMVector>>productWithVector: to raise a SizeMismatch error or some such. ok, i think this is unnecessary, but if i'm the only one who wants these things to be fast, then just change it (i can also use earlier versions for real-world calculations if things get too slow for me because of all the kinds of error checks one can think of).

"n-dimensional vectors...cannot be used as either a 1xn or an nx1 matrix in code that expects one or the other" - <g> you can 'practically' specify that by using "( )" around a matrix & vector multiplication with the correct sequence (or simply by using a matrix instead). where else would you really need to make that distinction for a vector?
at the moment, i have to admit, i don't really see the problem.

werner

On Thu, May 18, 2017 at 8:56 PM, Evan Donahue <[hidden email]> wrote:
Hi Werner,
Thanks for the reply. The context for this is that I was trying to design a neural network library. I made the assumption that it made sense to accept vectors, apply matrix operations to them, and return vectors. My initial experiments with matrices and vectors seemed to suggest this would work. However, it then took several days of experimenting and debugging to uncover the precise semantics of matrix-vector interactions. These semantics were not obvious to me from my initial experiments. Furthermore, I ultimately discovered that, as best I could tell, the operations I wanted to perform were not possible at all with the standard API, and that I needed to rewrite the whole critical path to use only matrices. Since vectors are not consistently 1xn or nx1, I could not simply change my vector inputs to matrices; I had to rewrite the entire critical path to get the orientations right.

Overall, this seemed to suggest to me that vectors and matrices should be made either fully compatible or entirely incompatible, but should not work sometimes and not others. Based on the difficulties you and others have pointed out, I think you are right that it would be difficult to unify the protocols, and that matrices and vectors should just not interoperate at all. Serge seemed to suggest that that was a basic assumption of the library. See my specific comments inline below.

On Saturday, May 20, 2017 at 6:33:29 PM UTC-4, werner kassens wrote:
Since I'm not sure what a "1-dimensional vector" means mathematically, I'm not sure how it should behave, even though we can clearly define the concept in PolyMath. Fortunately, I don't think it will come up as long as matrices and vectors inhabit different worlds.
That is a good point. We can probably make the default math safe and add clearly marked unsafe operations for speed. I aspire every day not to write buggy, nonsense code, but alas, that day has not yet come. Would that work for everyone? You shouldn't need to use an old library to get performant math. For what it's worth, I had all my tests passing for most of my development even though I was comparing matrices of different dimensions, just because the check only looked at one row!
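The split between a safe default and clearly marked unsafe operations might be sketched like this (Python for illustration only; the names `dot` and `unsafe_dot` are invented, not PolyMath selectors):

```python
def unsafe_dot(a, b):
    """Unchecked inner product: fast, but silently truncates to the
    shorter operand, like the size-check-free behavior in the thread."""
    return sum(x * y for x, y in zip(a, b))

def dot(a, b):
    """Size-checked inner product: the proposed safe default."""
    if len(a) != len(b):
        raise ValueError("size mismatch: %d vs %d" % (len(a), len(b)))
    return unsafe_dot(a, b)

print(dot([1, 2], [3, 4]))      # 11
print(unsafe_dot([2, 3], [1]))  # 2, the surprising silent result
```

Callers on a hot path could opt into the unchecked variant explicitly, so the speed cost of the check is only paid where safety matters.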
Can you? I could not find a way to use any sort of order or grouping that resulted in the calculation I wanted.

    a := PMMatrix rows: #((1 2)).
    b := PMMatrix rows: #((1) (2)).
    c := #(1 2) asPMVector.

Using a and b, I can get a 2x2 matrix:

    b * a. "a PMVector(1 2) a PMVector(2 4)"

How can I perform that multiplication with c and either a or b? This was the show stopper that finally made me stop trying to get vectors to work and rewrite everything with matrices:

    a * c. "a PMVector(5)"
    c * a. "Error"
    c * b. "a PMVector(5)"
    b * c. "Error"

Hope that helps clarify things,

Cheers,
Evan
Hi Evan,

no, the problem is not yet clear <g>. are a and b vectors in disguise? if yes, you could do this (sorry, i just used an old version; 'Dhb' should of course be 'PM'):

    a := DhbMatrix rows: #((1 3)).
    b := DhbMatrix rows: #((1) (2)).
    b * a. "a DhbVector(1 3) a DhbVector(2 6)"

    c := #(1 2) asDhbVector.
    c tensorProduct: (a rowAt: 1). "a DhbVector(1 3) a DhbVector(2 6)"

werner

On Sun, May 21, 2017 at 5:28 PM, Evan Donahue <[hidden email]> wrote:
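For readers unfamiliar with tensorProduct:, it is the outer product of two vectors. A minimal Python equivalent of Werner's example (illustration only, not PolyMath code):

```python
def tensor_product(u, v):
    """Outer product: result[i][j] = u[i] * v[j]."""
    return [[x * y for y in v] for x in u]

# c tensorProduct: (a rowAt: 1) from the example above:
print(tensor_product([1, 2], [1, 3]))  # [[1, 3], [2, 6]]
```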
Hi Werner,

Thanks for the clarification. I think that example gets at the heart of the matter. In my mind, the ideal linear algebra library would be polymorphic with Number in the sense originally suggested (collapsing 1-vectors to Numbers, etc). As a result, none of my code depends on the underlying array protocol: I treat vectors as opaque mathematical objects, not sequences. This is why, to me, suggesting that 1-vectors should just coerce to Numbers made sense. At the same time, this requires vectors that have orientations, which is why the current vector class so stumped me.

On Sun, May 21, 2017 at 1:09 PM, werner kassens <[hidden email]> wrote:
On Mon, May 22, 2017 at 12:34 AM, Apocalypse Mystic <[hidden email]> wrote:
Hi Evan,

ok, i'll say something about this; i'm not sure i understand your suggestion correctly, but i will try to explain the problem anyway. you say "we might expect 1-dimensional vectors to act like scalars and nx1-dimensional matrices to act like vectors". i understand it this way: if i have a 1x1 matrix, then it should act like a 1-dim vector and as such act like a scalar. since i can do 'aScalar * aMatrix', i can then expect that 'a1x1Matrix * a42x43Matrix' is a valid expression. well, perhaps i misunderstand it?

take for example the rank concept: there exists eg this theorem (eg https://en.wikipedia.org/wiki/Rank_(linear_algebra) under properties): the rank of a matrix product cannot exceed the rank of any factor. this theorem would fall overboard in your algebra. what would you do with it: throw the whole rank concept away, let the theorem fall by the wayside, or change the definition of rank? if the last one, how? rank can be helpful when arguing about linear independence, so you'd need to think about this too (and quite a few other things). and i'm sure it is just _one_ example of how this could change how math operates; i'm just too lazy to find others.

it's the same thing with vectors: you seem not to like '#(2) asPMVector * #(1 2 3) asPMVector. "Error"', and it is my understanding (?) that you would prefer 'a PMVector(2 4 6)' as the result instead. think block matrices and build a matrix out of several vectors: you'll get _exactly_ the same problems with your result. (just in case my assumption, that a1x1Matrix * a42x43Matrix is a valid expression, was wrong.) or iow, changing the algebra of matrices in a computer language that is intended for general use could mean trouble; please be careful with such ideas.

werner
Hi Werner,

I've already closed the suggestion. I needed higher-order tensors anyway, and I am writing my own outside of PolyMath. I do not think there is anything more to discuss.

On Mon, May 22, 2017 at 11:43 AM, werner kassens <[hidden email]> wrote:
Hi Evan,

please excuse, but i have <g> yet another question: may i ask why outside polymath? it's only the dhb part of polymath that differentiates between row vectors & column vectors by their positions as receiver or argument of a method, and imho it would not make much sense to change that. nevertheless it would not be difficult to translate its matrices and vectors to a tensor object and vice versa.

werner

On Mon, May 22, 2017 at 5:54 PM, Apocalypse Mystic <[hidden email]> wrote:
Hi Werner,
Maybe I misunderstood what you were saying? Perhaps you could clarify. All I was suggesting is that linear algebra in PolyMath should work like linear algebra in e.g. Matlab. In Matlab:

1-vectors are scalars:

    >> [42] == 42
    ans = 1

1x1 matrices are scalars:

    >> [[42]] == 42
    ans = 1

1xn * nx1 matrix products yield scalars:

    >> ([1, 2] * [3; 4]) == 11
    ans = 1

scalars, 1-vectors, and 1x1 matrices all have rank 1:

    >> rank(42) == rank([42]) == rank([[42]]) == 1
    ans = 1

a 1x1 * 42x43 multiplication yields a 42x43 matrix, just like multiplication by a scalar:

    >> size(zeros(1,1) * ones(42,43))
    ans = 42 43

This is all well defined and shouldn't require us to, e.g., abandon the concept of rank. Currently, in PolyMath, these equalities do not hold, or at least not without a lot of manual boxing and unboxing of the internal representations of the datatypes. My suggestion was that we just find a way to make scalars, vectors, and matrices all appropriately polymorphic.

I understood you, or perhaps misunderstood you, to be saying that the current behavior of the PolyMath linear algebra classes, with respect to the above inequalities, should not be changed. If that is the case, I would prefer to just write a new tensor object that lets me write down equations in the style of Matlab and have them work without manual unboxing. This made sense since I needed tensors of dimension >2 anyway, which I don't *think* PMMatrix currently supports, although perhaps I am wrong. Consequently, I was going to have to write a new matrix class anyway. I have no problem if PolyMath wanted to incorporate such a tensor class, but I just assumed that having 3 separate and mutually incompatible linear algebra classes running around in the same library would be confusing. Is there a good reason to keep them all around? Is PMVector used in a lot of legacy code or something, and should it not be considered part of the standard matrix algebra interface?
I'm just not sure how the library is *supposed* to be used, and mixing vectors and matrices has caused me a lot of pain so far. I'm open to anything, but I'm not quite clear on what _you_ think the ideal PM linear algebra interface should be. What do you think a finished, polished PM linear algebra library would consist of?

Cheers,
Evan

On Thursday, May 25, 2017 at 3:34:36 AM UTC-4, werner kassens wrote:
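The Matlab-style semantics described above can be prototyped in a few lines (a Python sketch with invented names, not a proposed PolyMath API): perform the ordinary matrix product, then collapse a 1x1 result to a plain number.

```python
def matmul(a, b):
    """Plain matrix product of two lists-of-rows."""
    assert len(a[0]) == len(b), "inner dimensions must agree"
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def matlab_mul(a, b):
    """Product with Matlab-like semantics: a 1x1 result is a scalar."""
    out = matmul(a, b)
    return out[0][0] if len(out) == 1 and len(out[0]) == 1 else out

print(matlab_mul([[1, 2]], [[3], [4]]))  # 11: a scalar, not [[11]]
```

This mirrors the thread's "1xn * nx1 matrix products yield scalars" example; an nx1 * 1xn product still returns an ordinary nxn matrix.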
Hi Evan,

thanks for your reply. 1x1 matrices of course have a rank of 1, and 42x43 matrices can have ranks up to 42. let's say you multiply a 42x43 matrix with a rank of 9 by a 1x1 matrix with a rank of 1: the resulting 42x43 matrix will again have a rank of 9. according to the mentioned theorem (the rank of a matrix product cannot exceed the rank of any factor), its rank should not be higher than one: an obvious contradiction. now i'm definitely not married to such a theorem; of course one could change something about it.

the reason i brought that up is more or less this: when i was young, people had no problems manipulating matrices & vectors, iow they could immediately identify problems such as these, but with tensors they often had some difficulties bending their minds around them. if these days young people generally think in tensor terms, and a vector is just an order-1 tensor and a matrix an order-2 tensor for them, one should of course make polymath futureproof and make your changes. i just intended to point out that old people like me can obviously have some difficulties in thinking in those tensor terms. but if that is the way young people think these days, then i would no doubt politely ignore those stupid old people like me (i <g> can adapt).

generally in pharo it is not uncommon that somebody thinks about a new way to do something, eg a new workspace, and pharo then has two versions for some time, just to see how people adapt to the new way, and whether in the new way there are things that are more difficult to do than in the old way, so that one should make some changes if users complain too much. iow i see no problem if there are two versions in polymath. but then i initially thought you would have a general tensor object usable also for vectors (iow i thought things would not be too difficult to implement). in that case i would assume it would not be as fast as dhb.
if otoh an order-1 tensor would have its own implementation, then there would probably be much code duplication in polymath and the whole work would not be worthwhile. to be honest, in that case i would simply ask Serge and very probably Nicolas, and then just do the necessary changes. in my experience this approach generally works rather smoothly.

werner

On Fri, May 26, 2017 at 6:38 PM, Evan Donahue <[hidden email]> wrote:
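Werner's rank argument can be checked concretely. Below is a small Python sketch (exact Fraction arithmetic, nothing PolyMath-specific): the theorem rank(AB) <= min(rank A, rank B) holds for genuine matrix products, while multiplication by a scalar preserves rank, so treating a scalar as a rank-1 1x1 matrix factor would indeed contradict the theorem.

```python
from fractions import Fraction

def rank(m):
    """Rank of a list-of-rows matrix via Gaussian elimination,
    using exact Fraction arithmetic to avoid pivot round-off."""
    m = [[Fraction(x) for x in row] for row in m]
    rnk = 0
    for col in range(len(m[0])):
        pivot = next((r for r in range(rnk, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[rnk], m[pivot] = m[pivot], m[rnk]
        for r in range(len(m)):
            if r != rnk and m[r][col] != 0:
                f = m[r][col] / m[rnk][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[rnk])]
        rnk += 1
    return rnk

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

a = [[1, 0], [0, 1], [1, 1]]   # 3x2, rank 2
b = [[1, 1], [1, 1]]           # 2x2, rank 1
print(rank(matmul(a, b)))      # 1 <= min(2, 1): the theorem holds

scaled = [[5 * x for x in row] for row in a]   # scalar multiplication
print(rank(scaled))            # 2: scaling keeps rank, yet rank([[5]]) is 1
```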