Float should not implement #to:, #to:by:, etc...


Julien Delplanque-2
Hello,

I realised that it is possible to create an interval of floats.

I think this is bad: since intervals are computed by successively adding a number, precision errors can creep in.

(0.0 to: 1.0 by: 0.1) asArray >>> #(0.0 0.1 0.2 0.30000000000000004 0.4 0.5 0.6000000000000001 0.7000000000000001 0.8 0.9 1.0)

The correct (precise) way to do it would be to use ScaledDecimal:

(0.0s1 to: 1.0s1 by: 0.1s1) asArray >>> #(0.0s1 0.1s1 0.2s1 0.3s1 0.4s1 0.5s1 0.6s1 0.7s1 0.8s1 0.9s1 1.0s1)
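
A quick way to see the root cause in a playground (plain Pharo, nothing beyond the standard number classes):

    0.1 + 0.1 + 0.1.          "0.30000000000000004: 1/10 has no exact binary representation"
    0.1s1 + 0.1s1 + 0.1s1.    "0.3s1: ScaledDecimal keeps exact decimal fractions"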


And I’d like to discuss this with you.

What do you think?

Cheers,

Julien

---
Julien Delplanque
PhD student at the Université de Lille
Bâtiment B 40, Avenue Halley, 59650 Villeneuve d'Ascq
Phone: +333 59 35 86 40


Re: [rmod] Float should not implement #to:, #to:by:, etc...

Guillermo Polito


On Tue, Sep 18, 2018 at 11:06 AM Julien <[hidden email]> wrote:

> I realised that it is possible to create an interval of floats. [...] What do you think?

Well, I think it's a matter of balance :)

#to:by: is defined in Number, so we could, for example, cancel it in Float.
However, people would still be able to write

1 to: 1.0 by: 0.1

which would still show the problem.
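
For concreteness, a minimal sketch of what such a cancellation could look like; the override below is purely illustrative, not a proposal (#shouldNotImplement is the standard refusal idiom):

    Float >> to: stop by: step
        "Hypothetical override: refuse to build Float-driven intervals."
        ^ self shouldNotImplement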

And moreover, we could try to do

1 to: 7 by: (Margin fromNumber: 1)

And even worse

1 to: Object new by: (Margin fromNumber: 1)

I think adding type-validations all over the place is not a good solution, and is kind of opposite to our philosophy...

So we should
 - document the good usages
 - document the bad ones
 - and live with the fact that we have a relaxed type system that will fail at runtime :)

Guille

Re: [rmod] Float should not implement #to:, #to:by:, etc...

EstebanLM


On 18 Sep 2018, at 11:13, Guillermo Polito <[hidden email]> wrote:

> #to:by: is defined in Number, so we could, for example, cancel it in Float.
> However, people would still be able to write
>
> 1 to: 1.0 by: 0.1
>
> which would still show the problem.

Nevertheless, I have seen this a lot of times:

0.0 to: 1.0 by: 0.1

is a common use case.


> [...]
> So we should
>  - document the good usages
>  - document the bad ones
>  - and live with the fact that we have a relaxed type system that will fail at runtime :)

yup. 
But not cancel.

Esteban




Re: [rmod] Float should not implement #to:, #to:by:, etc...

Guillaume Larcheveque
Maybe #to:by: should convert its parameters to Fractions to avoid Float problems (not sure, just an idea).

--
Guillaume Larcheveque


Re: [rmod] Float should not implement #to:, #to:by:, etc...

Eliot Miranda-2


On Sep 18, 2018, at 2:52 AM, Guillaume Larcheveque <[hidden email]> wrote:

> Maybe #to:by: should convert its parameters to Fractions to avoid Float problems (not sure, just an idea).

There is no need to convert. One can simply write

    0 to: 1 by: 1/10

The issue with 0 to: 1 by: 0.1 is a problem with floating-point arithmetic, not with intervals, and one does not cure a disease by putting band-aids on its symptoms. Instead we should teach the pitfalls of floating-point representations so that people are not astonished by 1/10.0*10. Avoid simplifying language. Teach literacy.
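
With an exact Fraction step, each element is a rational number and conversion rounds only once per element. A small sketch of the idea (plain Pharo; the printed forms are what I would expect, not verified output):

    (0 to: 1 by: 1/10) asArray.
        "exact rationals: #(0 1/10 1/5 3/10 2/5 1/2 3/5 7/10 4/5 9/10 1)"
    (0 to: 1 by: 1/10) collect: [:each | each asFloat].
        "each value rounded once, so it should print as 0.0 0.1 0.2 0.3 ... 1.0"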




Re: [rmod] Float should not implement #to:, #to:by:, etc...

Nicolas Cellier
Hi Julien,
You are right, Float intervals are not to be encouraged!
But if a knowledgeable person wants to use them, why forbid it?
Why not forbid Float altogether, then?

In the same vein, I have seen a C compiler warn me about using Float equality (== in C).
Great! Now I cannot use -Wall -Werror, though I know perfectly well where I can use == and where I cannot...
I understand the intentions, but in French, don't we say that hell is paved with good intentions?

Warning about = while not warning about < <= > >= != is the kind of common knowledge that has not been very well assimilated...
Like telling me that I should use a tolerance, (a-b) abs < eps, as a kind of miraculous workaround. Err...
The problem then becomes choosing eps and dealing with false positives/negatives, and bad news: there is no such thing as a universal eps.

So in this spirit, I would refrain from forbidding anything.
Rather, you could add a rule checking for that construct, for the purpose of educating.
And please, if activating the rule on some continuous-integration Cerberus bot, then provide a way to explicitly bypass it in exceptional conditions!

One last thing: Interval enumeration is not performed via cumulative additions
    start + step + step + step + ...
but rather with a multiplication, which behaves much better with respect to loss of accuracy:
    start + (index - 1 * step)
See Interval>>do:
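
A minimal sketch contrasting the two strategies in a playground (plain Pharo; the variable names are mine):

    | step cumulative scaled |
    step := 0.1.
    cumulative := OrderedCollection new.
    scaled := OrderedCollection new.
    0 to: 10 do: [:i |
        cumulative add: (cumulative isEmpty
            ifTrue: [0.0]
            ifFalse: [cumulative last + step]).    "rounding error piles up at every addition"
        scaled add: 0.0 + (i * step)].             "one multiplication, one rounding per element"
    { cumulative asArray. scaled asArray }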


Unfortunately, #to:do:, #to:by:do: and their inlined versions do not follow this pattern, but use cumulative additions
(at least in Squeak; to be confirmed in Pharo...)
If I were to change something, it would be that.
Remember, "You can cheat as long as you don't get caught"; I think I caught the inliner with this snippet:


Beware, most loops are over Integers and should not be de-optimized too much (cheating can be a tough problem)...



Re: [rmod] Float should not implement #to:, #to:by:, etc...

Nicolas Cellier


On Tue, Sep 18, 2018 at 11:53, Guillaume Larcheveque <[hidden email]> wrote:
> Maybe #to:by: should convert its parameters to Fractions to avoid Float problems (not sure, just an idea).


Hi Guillaume,
Yes, possibly...
But if the author explicitly requested a loop on Floats, why not honour the request, as bad as it may be?
If the answer is "well, it could be legitimate in some cases, but we just don't know how to recognize these legitimate cases", then it is better not to alter the request at all, and to let the author deal with the responsibility (and all the possible consequences).

You could argue for an intermediate solution:
maybe we could perform the increment arithmetic with Fractions but convert back with asFloat just before evaluating the block...
But in that case (0 to: 1 by: 0.1 asFraction) collect: #asFloat would probably be more surprising than (0 to: 1 by: 0.1) asArray w.r.t. naive expectations:
#(0.0 0.1 0.2 0.30000000000000004 0.4 0.5 0.6000000000000001 0.7000000000000001 0.8 0.9)
#(0.0 0.1 0.2 0.30000000000000004 0.4 0.5 0.6000000000000001 0.7000000000000001 0.8 0.9 1.0)
(Note that the first array has only ten elements: 0.1 asFraction is slightly greater than 1/10, so ten exact steps already overshoot 1 and the final element is dropped.)

We could try to work around that by converting start/step with asMinimalDecimalFraction, just as a wild guess at the author's intentions...
But then again, if we are not sure of the intentions, it is better not to alter the request at all.

We have Renraku in Pharo, so we could use it for the trivial cases (usage of a literal Float as start/step).
And with the instruction-reification capability of the Pharo compiler, we could even instrument some code and implement runtime checks when static analysis cannot infer the types, like the cases submitted by Guillermo, if it really matters.
It would be very much like clang's undefined-behavior runtime checks, for example, and a nice subject for an advanced student.
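
As a very rough sketch of that runtime-check idea, a hypothetical guarded variant (the selector #checkedTo:by:do: and the warning text are invented for illustration; only #isFloat and #inform: are standard):

    Number >> checkedTo: stop by: step do: aBlock
        "Hypothetical wrapper: warn when Float bounds or steps are involved, then delegate."
        (self isFloat or: [stop isFloat or: [step isFloat]])
            ifTrue: [self inform: 'Float-driven interval: results may drift'].
        ^ self to: stop by: step do: aBlock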



Re: [rmod] Float should not implement #to:, #to:by:, etc...

Nicolas Cellier


On Tue, Sep 18, 2018 at 22:40, Nicolas Cellier <[hidden email]> wrote:

> But in that case (0 to: 1 by: 0.1 asFraction) collect: #asFloat would probably be more surprising than (0 to: 1 by: 0.1) asArray w.r.t. naive expectations: [...]

I also forgot the inlined and non-inlined #to:by:do:, which currently agree with each other (but not with Interval>>do:). In the second expression the trailing #yourself defeats the compiler's inlining, forcing the real Number>>to:by:do: method:
    Array new: 11 streamContents: [:s | 0 to: 1 by: 0.1 do: [:e | s nextPut: e]].
    Array new: 11 streamContents: [:s | 0 to: 1 by: 0.1 do: ([:e | s nextPut: e] yourself)].
#(0 0.1 0.2 0.30000000000000004 0.4 0.5 0.6 0.7 0.7999999999999999 0.8999999999999999 0.9999999999999999)



Re: [rmod] Float should not implement #to:, #to:by:, etc...

Davide Grandi-2
> 0.0 to: 1.0 by: 0.1
The receiver and arguments are lexically Floats (because of the dot), BUT they are written as decimal numbers (zero, one, one tenth).

I think that in a text you can ONLY write "decimal" numbers or, at worst (in bases other than 10, or with factors other than 2^x and 5?), repeating decimals (e.g. 0.1 in base 3 = 1/3 ≈ 0.333...), which are ultimately fractions.

So, maybe, if the receiver or an argument is a float literal, the compiler could issue a warning and compile to a non-float; if the receiver or arguments are computed... there should be a default behaviour.

Best regards,

    Davide Grandi
(PS: I work mainly in an ERP that has only integers... and doubles)




Re: [rmod] Float should not implement #to:, #to:by:, etc...

Julien Delplanque-2
Hello,

Ok, I have read all the mails. I see your point about not cancelling the possibility to use #to:by: on any Number.

However, the remark from Davide Grandi seems really relevant to me.

You can NOT write anything other than a *rational number* when writing a literal using the XXX.XXX pattern.

I think it would be legitimate to generate ScaledDecimals by default from such literals, and to get a Float only by either asking for it explicitly (#asFloat, etc.) or through a computation that leads to a Float (e.g. the need to approximate an irrational number?).

I would be curious to see the impact of such a compiler change on the system.

Maybe a first step is indeed to implement a rule in Renraku to encourage people to use ScaledDecimals.
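
A small illustration of the two readings of a decimal literal, both already expressible today (plain Pharo):

    0.1 isFloat.      "true: inexact binary representation, today's default"
    0.1s1 class.      "ScaledDecimal: exact decimal, the proposed default"
    0.1s1 asFloat.    "0.1: the explicit opt-in to Float under the proposal"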

Cheers,

Julien

---
Julien Delplanque
PhD student at the Université de Lille
Bâtiment B 40, Avenue Halley, 59650 Villeneuve d'Ascq
Phone: +333 59 35 86 40




Re: [rmod] Float should not implement #to:, #to:by:, etc...

Davide Grandi-2
> *rational number*
Correct, for any BASE and any string:
> at worst, repeating decimals (e.g. 0.1 in base 3 = 1/3 ≈ 0.333...), that are ultimately fractions.
(fixing the typo in my earlier mail, where I wrote "0,1" for "0.1")

But for bases 10, 2 and 5 and combinations thereof (Z = 2r, 5r, 10r, 20r, ...), any string ZrXXX.YYY leads to a decimal number with a finite, precise number of digits.
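
If I remember the literal grammar correctly, radix literals with fractional parts already exist in Smalltalk, so this distinction can be poked at directly (each of these should evaluate to a Float today):

    2r0.1.     "0.5: exact, base 2"
    16rA.8.    "10.5: exact, base 16 = 2^4"
    3r0.1.     "0.3333333333333333: 1/3, repeating in base 10 as well"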

... or maybe I'm wrong, and insisting on faulty premises and conclusions in front of everybody...

    Davide Grandi



-- 
Ing. Davide Grandi
email  : [hidden email]
mobile : +39 339 7468 778

Re: [rmod] Float should not implement #to:, #to:by:, etc...

Nicolas Cellier
Hi Davide,
Using decimals is a possibility that already exists: just add an s at the end of the literal.
What you suggest is to change the syntax and use decimals as the default, with another notation (or no literal notation at all, just an asFloat message send) for floating-point numbers.

It's worth trying, but IMO you will encounter these limitations:
- decimals are currently implemented as unlimited-precision Fractions. For long calculations they tend to grow in size and slow down the system (I developed a symbolic calculation library in Smalltalk 30 years ago and used Fractions extensively, so I know what it can cost).
- decimals are currently not decimals but arbitrary fractions (for example, 2.0s / 3 will be represented as the Fraction 2/3 internally; see the snippet below). But they print as a rounded decimal representation which cannot be converted back to the same number, which is not the most convenient for REPL-style work.
- most decimal operations are slower than floating-point operations.
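
A hedged check of that second point in a playground (standard Pharo; the exact printed forms may vary):

    (2.0s2 / 3) class.          "ScaledDecimal"
    (2.0s2 / 3) = (2/3).        "true: internally an exact fraction"
    (2.0s2 / 3) printString.    "'0.67s2': printed rounded, so it does not read back to 2/3"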

Suggested notations like aaa.bbb(cccc) for a repeating pattern cccc could indeed be used to deal with the REPL problem.
Or we could emulate some limited-precision decimals (preferably with floating point).
But it's hard to address all the issues above together.

For simple geometry problems we soon need algebraic numbers, and then transcendental ones, so the usefulness of unlimited fractions is limited anyway.
In practice we rapidly need approximations, and that's where highly optimized and carefully thought-out floating point shines...
Personally, if Pharo wants to go down that road, I would at least keep a literal notation for floating point, given the universality and usefulness of such numbers.
