float & fraction equality bug


Re: float & fraction equality bug

Nicolas Cellier


2017-11-09 20:10 GMT+01:00 Raffaello Giulietti <[hidden email]>:
On 2017-11-09 19:04, Nicolas Cellier wrote:


2017-11-09 18:02 GMT+01:00 Raffaello Giulietti <[hidden email] <mailto:[hidden email]>>:




        Anyway, relying upon Float equality should always be subject to
        extreme caution and examination.

        For example, what do you expect with plain old arithmetic in mind:

              a := 0.1.
              b := 0.3 - 0.2.
              a = b

        This will lead to (a - b) reciprocal = 3.602879701896397e16
        If it is in a Graphics context, I'm not sure that it's the
        expected scale...



    a = b evaluates to false in this example, so no wonder (a - b)
    evaluates to a big number.
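The same numbers fall out of any IEEE 754 binary64 arithmetic; here is the example transcribed to Python (whose floats are binary64 doubles, like Pharo's Float), purely for illustration:

```python
# 0.1 and 0.3 - 0.2 are each "the closest binary64 float", but not the same one
a = 0.1
b = 0.3 - 0.2

print(a == b)        # False
print(1 / (a - b))   # 3.602879701896397e+16, i.e. exactly 2**55
```

The difference a - b is exactly 2**-55, one unit in the last place of 0.1, which is why the reciprocal is the suspiciously round 2**55.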


Writing a = b with floating point is rarely a good idea, so asking about the context which could justify such an approach makes sense IMO.


Simple contexts, like the one which is the subject of this thread, are the ones we should strive for, because they are the ones most likely used in day-to-day work. Having useful properties and regularity for simple cases might perhaps cover 99% of everyday usage (just a dishonestly biased estimate ;-) )

Complex contexts, with heavy arithmetic, are best dealt with by numericists when Floats are involved, or with unlimited-precision numbers like Fractions by other programmers.


This differs from my experience.
Float strikes in the simplest places, where we put false expectations because of a different mental representation.



    But the example is not plain old arithmetic.

    Here, 0.1, 0.2, 0.3 are just shorthand for "the Floats closest
    to 0.1, 0.2, 0.3" (if implemented correctly, as it seems to be in
    Pharo). Every user of Floats should be fully aware of the implicit
    loss of precision that using Floats entails.


Yes, it makes perfect sense!
But precisely because you are aware that 0.1e0 is "the Float closest to 0.1" and not exactly 1/10, you should then not be surprised that they are not equal.


Indeed, I'm not surprised. But then
    0.1 - (1/10)
shall not evaluate to 0. If it evaluates to 0, then the numbers shall compare as being equal.

The surprise lies in the inconsistency between the comparison and the subtraction, not in the isolated operations.
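For what it's worth, this mixed-mode inconsistency is not unique to Pharo: Python's fractions module makes exactly the same pair of choices (exact comparison, float-converting arithmetic), so the same surprise can be reproduced there, shown here only as an illustration:

```python
from fractions import Fraction

tenth = Fraction(1, 10)

# Comparison is exact: the float 0.1 is not the rational 1/10
print(tenth == 0.1)   # False

# But subtraction first converts the Fraction to a float...
print(tenth - 0.1)    # 0.0

# ...so two "unequal" numbers have a zero difference
```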




I agree that the following assertion holds:
     self assert: a ~= b & a isFloat & b isFloat & a isFinite & b isFinite ==> (a - b) isZero not


The arrow ==> is bidirectional even for finite Floats:

self assert: (a - b) isZero not & a isFloat & b isFloat & a isFinite & b isFinite ==> a ~= b
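A minimal check of this biconditional for finite Floats, in Python for convenience; gradual underflow (subnormals) is what guarantees that the difference of two distinct finite floats never rounds to zero:

```python
# The smallest positive double is subnormal; even there, a ~= b <=> (a - b) ~= 0
a = 5e-324          # 2 ** -1074, the minimal positive subnormal
b = 0.0
print(a != b, (a - b) != 0.0)   # True True

# Neighbouring floats near 1.0: the difference is one ulp, never zero
c = 1.0
d = 1.0 + 2 ** -52  # the next representable double above 1.0
print(c != d, (c - d) != 0.0)   # True True
```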



But (1/10) is not a Float, and there is no Float that can represent it exactly, so you simply cannot apply the rules of floating point to it.

When you write (1/10) - 0.1, you implicitly perform (1/10) asFloat - 0.1.
It is the rounding operation asFloat that makes the operation inexact, so it's no more surprising than the usual floating-point behavior.
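The rounding step can be isolated: once 1/10 has been rounded to a Float, equality with 0.1 does hold. A Python rendering of that single asFloat step:

```python
from fractions import Fraction

tenth = Fraction(1, 10)

# The implicit conversion is the only inexact step:
print(float(tenth) == 0.1)    # True: after rounding, they are the same float

# Before rounding they differ, by exactly the representation error of 0.1:
print(Fraction(0.1) - tenth)  # 1/180143985094819840, i.e. 1/(5 * 2**55)
```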

See above my observation about what I consider surprising.

As already said, it's a false expectation in the context of mixed arithmetic.






    In the case of mixed-mode Float/Fraction operations, I personally
    prefer reducing the Fraction to a Float because other commercial
    Smalltalk implementations do so, so there would be less pain porting
    code to Pharo, perhaps attracting more Smalltalkers to Pharo.

Mixed arithmetic is problematic, and from my experience it mostly happens in graphics code in Smalltalk.

If I were ever to change something according to this principle (but I'm not convinced it's necessary; it might lead to other strange side effects),
maybe it would be how mixed arithmetic is performed...
Something like an exact difference, as Martin suggested, then converting to the nearest Float because the result is inexact:
     ((1/10) - 0.1 asFraction) asFloat

This way, you would have a less surprising result in most cases.
But I could craft a fraction such that the difference underflows, and the assertion a ~= b ==> (a - b) isZero not would still not hold.
Is it really worth it?
Will it be adopted in other dialects?
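The underflow objection can be made concrete (a Python sketch with one such crafted fraction): take the smallest positive subnormal float and the rational exactly half of it; their exact difference is nonzero, but rounding it back to the nearest float underflows to zero.

```python
from fractions import Fraction

a = 5e-324                  # 2 ** -1074, minimal positive subnormal double
b = Fraction(1, 2 ** 1075)  # exactly half of a, as an exact rational

# Exact comparison says they differ...
print(a == b)                        # False

# ...but the exact difference, rounded to the nearest float, underflows to 0
exact_difference = Fraction(a) - b   # 1/2**1075, nonzero
print(float(exact_difference))       # 0.0 (halfway case, ties to even)
```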



As an alternative, the Float>>asFraction method could return the Fraction with the smallest denominator that would convert to the receiver by the Fraction>>asFloat method.

So, 0.1 asFraction would return 1/10 rather than the beefy Fraction it currently returns. To return the beast, one would have to intentionally invoke asExactFraction or something similar.

This might cause less surprising behavior. But I have to think more.


No, the goal here was to have a non-null difference, because we need to preserve inequality for other features.

Answering anything but a Float, at a high computational price, goes against the primary purpose of Float (speed, efficiency).
If that's what we want, then we should not use Float in the first place.
That's why I don't believe in such a proposal.

The minimal Fraction algorithm is an interesting challenge though. Not sure how to find it...
Coming back to a bit of code, we currently have only the minimal decimal fraction (with only powers of 2 and 5 in the denominator):

 {[Float pi asFraction]. [Float pi asMinimalDecimalFraction]} collect: #bench.
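The minimal decimal fraction is what shortest round-trip printing produces. For illustration, in Python the repr of a float is exactly that shortest decimal, so an equivalent of asMinimalDecimalFraction (the function name here is hypothetical) is a one-liner:

```python
import math
from fractions import Fraction

def minimal_decimal_fraction(x):
    # repr(x) is the shortest decimal string that round-trips to x,
    # so parsing it as an exact rational gives the minimal decimal fraction
    # (denominator made of powers of 2 and 5 only)
    return Fraction(repr(x))

print(minimal_decimal_fraction(0.1))      # 1/10
print(minimal_decimal_fraction(math.pi))  # 3141592653589793/1000000000000000
```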




    But the main point here, I repeat myself, is to be consistent and to
    have as much regularity as intrinsically possible.



I think we have as much as possible already.
Non-equality resolves more surprising behavior than it creates.
It makes the implementation more mathematically consistent (understand: preserving more properties).
Tell me how you are going to sort these 3 numbers:

{1.0 . 1<<60+1/(1<<60).  1<<61+1/(1<<61)} sort.

Tell me the expected result of:

{1.0 . 1<<60+1/(1<<60). 1<<61+1/(1<<61)} asSet size.
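The point of these two snippets: the three values are genuinely distinct, and only exact comparison sees that; rounded to Floats, both Fractions collapse onto 1.0. Python's exact Fraction comparisons give the answers one would expect here (transcribed for illustration):

```python
from fractions import Fraction

# The Pharo expressions 1<<60+1/(1<<60) etc., written out:
numbers = [1.0,
           Fraction(2 ** 60 + 1, 2 ** 60),   # 1 + 2**-60
           Fraction(2 ** 61 + 1, 2 ** 61)]   # 1 + 2**-61

# Exact comparison orders them: 1.0 < 1 + 2**-61 < 1 + 2**-60
print(sorted(numbers))

# ...and keeps all three distinct in a set
print(len(set(numbers)))            # 3

# Rounded to floats, all three become indistinguishable:
print({float(n) for n in numbers})  # {1.0}
```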


A clearly stated rule, consistently applied and known to everybody, helps.

In the presence of heterogeneous numbers, the rule should state the common denominator, so to speak. Hence, the numbers involved in mixed-mode arithmetic are either all converted to one representation or all to the other, whether they are compared, added, subtracted, divided, etc. One rule for mixed-mode conversions, not two.


Having an economy of rules is always a good idea.
If you can obtain a consistent system with 1 single rule rather than 2, then go for it.
But if it's at the price of sacrificing higher expectations, that's another matter.

Languages that have a simpler arithmetic model (bounded integers, no Fractions) may stick to a single rule.
More sophisticated models, like the ones you'll find in Lisp and Scheme, have exactly the same logic as Squeak/Pharo.

We don't have 2 rules gratuitously, as already explained.
- A total order relation on non-NaN values, so as to be a good Magnitude citizen, implies non-equality
- Producing a Float in the case of mixed arithmetic serves a practical purpose: speed
  (What are those damn Floats for otherwise?)
  It's also justified a posteriori by (exact op: inexact) -> inexact

What are you ready to sacrifice/trade?




tell me why = is not an equivalence relation anymore (not transitive)



Ensuring that equality is an equivalence is always a problem when the entities involved are of different nature, like here. This is not a new problem and not inherent in numbers. (Logicians and set theorists would have much to tell.) Even comparing Points and ColoredPoints is problematic, so I have no final answer.

In Smalltalk, furthermore, implementing equality makes it necessary to (publicly) expose much more internal details about an object than in other environments.


Let's focus on Number.
Losing equivalence is losing the ability to mix Numbers in a Set.
But not only Numbers... anything having a Number somewhere in an inst var, like (1/10)@0 and 0.1@0.
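This Set point can be seen concretely in Python, which keeps one value-based hash across int, float and Fraction precisely so heterogeneous numbers can coexist in a set (illustrative sketch; tuples stand in for Points):

```python
from fractions import Fraction

# Equal numbers of different classes must hash alike to coexist in a set
print(len({1, 1.0, Fraction(1, 1)}))     # 1: all equal, one element

# With exact comparison, 0.1 and 1/10 are unequal, so both survive
print(len({0.1, Fraction(1, 10)}))       # 2

# The same requirement propagates to any object embedding numbers,
# e.g. pairs standing in for the Points (1/10)@0 and 0.1@0
print((Fraction(1, 10), 0) == (0.1, 0))  # False
```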


Re: float & fraction equality bug

Nicolas Cellier


2017-11-09 21:55 GMT+01:00 Nicolas Cellier <[hidden email]>:

More sofisticated models like you'll find in Lisp and Scheme have exact same logic as Squeak/Pharo.

sophisticated... (i'm on my way copying/pasting that one a thousand times)



Re: float & fraction equality bug

raffaello.giulietti
In reply to this post by Stephane Ducasse-3
On 2017-11-09 21:49, Stephane Ducasse wrote:

> On Thu, Nov 9, 2017 at 5:34 PM, Nicolas Cellier
> <[hidden email]> wrote:
>> Note that this started a long time ago and comes up episodically
>> http://forum.world.st/Fraction-equality-and-Float-infinity-problem-td48323.html
>> http://forum.world.st/BUG-Equality-should-be-transitive-tc1404335.html
>> https://lists.gforge.inria.fr/pipermail/pharo-project/2009-July/010496.html
>>
>> A bit like a "marronnier" (in French, a subject that is treated periodically
>> by newspapers and magazines)
>
> Hi nicolas, except that now we could write a chapter describing some of the answers :)
>

It's easy to make = behave as an equivalence if the two objects it
operates on are of the same class.

But it is well-known that it's hard when their classes are different,
short of implementing = trivially in this case.

This is to say that there's no real hope of making = an equivalence if the
wish is to make it useful in the presence of different kinds of objects.

In the context of numeric computations, luckily, there is some hope when
conversions, if possible at all, are all performed consistently. But the
same conversions should then be applied for all operations, whether
comparisons or subtractions, etc., to ensure more familiar properties.
This is not currently the case in Pharo.



Raffaello


Re: float & fraction equality bug

raffaello.giulietti
In reply to this post by Nicolas Cellier
On 2017-11-09 22:11, Nicolas Cellier wrote:


>     The minimal Fraction algorithm is an interesting challenge though.
>     Not sure how to find it...

I'm thinking of a continued fraction expansion of the exact fraction
until the convergent falls inside the rounding interval of the Float.

Heavy, but doable.

Not sure, however, if it always meets the stated criterion.
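A sketch of that idea in Python, for illustration: limit_denominator already walks the continued-fraction convergents, so one can search for the smallest denominator bound whose best approximation still rounds back to the receiver. The name minimal_fraction is hypothetical, and the binary search assumes the round-trip property is monotone in the bound (which holds away from rounding-tie boundaries), so this is a sketch, not a proof.

```python
from fractions import Fraction

def minimal_fraction(x):
    """Smallest-denominator Fraction that converts back to the float x (sketch)."""
    exact = Fraction(x)
    # Grow the denominator bound until the best rational approximation
    # (a continued-fraction convergent or semiconvergent) rounds back to x...
    bound = 1
    while float(exact.limit_denominator(bound)) != x:
        bound *= 2
    # ...then binary-search the smallest bound that still works.
    lo, hi = max(bound // 2, 1), bound
    while lo < hi:
        mid = (lo + hi) // 2
        if float(exact.limit_denominator(mid)) == x:
            hi = mid
        else:
            lo = mid + 1
    return exact.limit_denominator(lo)

print(minimal_fraction(0.1))   # 1/10
```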





Re: float & fraction equality bug

Henrik Sperre Johansen
In reply to this post by raffaello.giulietti
raffaello.giulietti wrote
> In the context of numeric computations, luckily, there is some hope when
> conversions, if possible at all, are all performed consistently. But the
> same conversions should then be applied for all operations, whether
> comparisons or subtractions, etc., to ensure more familiar properties.
> This is not currently the case in Pharo.

Personally, I rather like = being transitive across different types of
objects, which is the case with the current implementation.*
It would not be if, like you suggest, multiple Fractions = the same Float.
Take a moment to also reflect on what it would mean if, as the implication
goes, a float represents a range of numbers on the rational number line,
rather than a single point.
Taken to its logical conclusion, it follows that you'd need to also redefine the
other mathematical operators to reflect this, and that conversion to exact
numbers like Integers and Fractions would be lossy, rather than the other way
around.
You could certainly build an interesting system of floats with such a
property (I can swear I've read a paper on one somewhere...), but it
wouldn't be the world of IEEE754 Floats we live in.

Other properties one might deem beneficial, such as
a + b > a if b > 0,
or  
(a + b) + c = a + (b + c)
are not true in the context of floats. Does that mean we need to fix them
too?
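Both failures are easy to exhibit with binary64 doubles (Python used here just to spell them out):

```python
# a + b > a can fail for b > 0: one ulp at 1e16 is 2.0, so adding 1 is absorbed
a, b = 1e16, 1.0
print(a + b > a)       # False: 1e16 + 1.0 rounds back to 1e16

# Addition is not associative either
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)   # False
print(left, right)     # 0.6000000000000001 0.6
```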
My 2c: It feels like you are asking Floats to be something they're not, and
the answer simply isn't to paint over the issues to make them look like
something they're not.

Cheers,
Henry



--
Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html


Re: float & fraction equality bug

Nicolas Cellier
In reply to this post by raffaello.giulietti


2017-11-09 23:50 GMT+01:00 <[hidden email]>:

I'm thinking of a continued fraction expansion of the exact fraction
until the convergent falls inside the rounding interval of the Float.

Heavy, but doable.

Not sure, however, if it always meets the stated criterion.


Then look at asApproximateFraction and change the termination condition.


Re: float & fraction equality bug

raffaello.giulietti
In reply to this post by Nicolas Cellier
On 2017-11-09 22:11, Nicolas Cellier wrote:

>     We don't have 2 rules gratuitously, as already explained.
>     - A total order relation on non-NaN values, so as to be a good
>     Magnitude citizen, implies non-equality
>     - Producing a Float in the case of mixed arithmetic serves a
>     practical purpose: speed
>
>     What are you ready to sacrifice/trade?

Let me check if I correctly understand the reason for the dual rule
regime for mixed computations:

(1) preservation of = as an equivalence and of total ordering. This is
ensured by converting Floats to Fractions.

(2) performance in the case of the 4 basic operations, which is the reason
for the second conversion rule, from Fractions to Floats.



Now, the gain in speed by exercising (2) really depends on how the
numbers "mix" in a long chain of operations. I guess for most uses of
mixed arithmetic it doesn't make any noticeable difference with respect
to a pure Fraction computation. Besides, correctly converting a Fraction
to a Float requires more computation than the opposite.

So, to answer your question: if preservation of total order and = is
worthwhile even in the case of mixed numbers, I would sacrifice speed for
the sake of the principle of least surprise. One rule, Float->Fraction,
slightly less speed.

But for those cases where the gain in speed from using Floats would make
a noticeable difference, we are entering the hard, counter-intuitive
realm of limited-precision arithmetic anyway. We had better be experts in
the first place. And as experts we will find a way out of the one-rule
regime by performing explicit Fraction->Float conversions where needed,
and won't face surprises.



The only reason to prefer the opposite one-rule regime, which would always
convert Fractions to Floats in mixed computations, is compatibility with
the commercial Smalltalk implementations. Granted, it's not a sound
reason but a pragmatic one.









Re: float & fraction equality bug

raffaello.giulietti
In reply to this post by Henrik Sperre Johansen
On 2017-11-10 00:02, Henrik Sperre Johansen wrote:

> Personally, I rather like = being transitive across different types of
> objects, which is the case with the current implementation.*
> It would not be if, like you suggest, multiple Fractions = the same Float.
> Take a moment to also reflect on what it would mean if, as the implication
> goes, a float represents a range of numbers on the rational number line,
> rather than a single point.
>
> My 2c: It feels like you are asking Floats to be something they're not, and
> the answer simply isn't to paint over the issues to make them look like
> something they're not.
>
> Cheers,
> Henry

Personally, I could live without mixed arithmetic between unlimited- and
limited-precision numbers: I would always be explicit in the
conversions. They are as different in nature as apples and oranges.

This also entails that I would accept that 1 = 1.0 evaluates to false,
that 1 <= 1.0 throws an exception, and all the consequences of this.

But this would be unacceptable for most Smalltalkers for many good
reasons. And, as you point out, = and total ordering are easily
preserved if Floats are converted to Fractions. But then please convert
Floats to Fractions even when adding, multiplying, etc.

To me a Float does not stand for an interval. It's just a real number
that happens to have strange, unfamiliar operations that resemble the
pure ones. Some familiar properties also hold for these strange
operations, others do not.




Re: float & fraction equality bug

Nicolas Cellier
In reply to this post by Henrik Sperre Johansen


2017-11-10 0:02 GMT+01:00 Henrik Sperre Johansen <[hidden email]>:
raffaello.giulietti wrote
> On 2017-11-09 21:49, Stephane Ducasse wrote:
>> On Thu, Nov 9, 2017 at 5:34 PM, Nicolas Cellier
>> &lt;

> nicolas.cellier.aka.nice@

> &gt; wrote:
>>> Note that this started a long time ago and comes up episodically
>>> http://forum.world.st/Fraction-equality-and-Float-infinity-problem-td48323.html
>>> http://forum.world.st/BUG-Equality-should-be-transitive-tc1404335.html
>>> https://lists.gforge.inria.fr/pipermail/pharo-project/2009-July/010496.html
>>>
>>> A bit like a "marronnier" (in French, a subject that is treated
>>> periodically
>>> by newspapers and magazines)
>>
>> Hi Nicolas, except that now we could write a chapter describing some of
>> the answers :)
>>
>
> It's easy to make = behave as an equivalence if the two objects it
> operates on are of the same class.
>
> But it is well-known that it's hard when their classes are different,
> short of implementing = trivially in this case.
>
> This is to say that there's no real hope to make = an equivalence if the
> wish is to make it useful in the presence of different kind of objects.
>
> In the context of numeric computations, luckily, there is some hope when
> conversions, if possible at all, are all performed consistently. But the
> same conversions should then be applied for all operations, whether
> comparisons or subtractions, etc., to ensure more familiar properties.
> This is not currently the case in Pharo.
>
>
>
> Raffaello

Personally, I rather like = being transitive across different types of
objects, which is the case with the current implementation.*
It would not be if, like you suggest, multiple Fractions = the same Float.
Take a moment to also reflect on what it would mean if, as the implication
goes, a float represents a range of numbers on the rational number line,
rather than a single point.
Taken to its logical conclusion, it follows you'd need to also redefine the
other mathematical operators to reflect this, as well as conversion to exact
numbers like Integers and Fractions being lossy, rather than the other way
around.
You could certainly build an interesting system of floats with such a
property (I can swear I've read a paper on one somewhere...), but it
wouldn't be the world of IEEE754 Floats we live in.

Other properties one might deem beneficial, such as
a + b > a if b > 0,
or
(a + b) + c = a + (b + c)
are not true in the context of floats. Does that mean we need to fix them
too?
My 2c: It feels like you are asking Floats to be something they're not, and
the answer simply isn't to try and paint over issues and try and make them
look like something they're not.

Cheers,
Henry


On the other hand, his original expectations do hold when restricted to the world of floating point:

    (a isFloat & b isFloat & a isFinite & b isFinite) ==> (((a - b) = 0) = (a = b))

so the astonishment is legitimate.
It's just that we have a more complex model, in which the above Smalltalk expression cannot hold with mixed arithmetic without breaking higher expectations...
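Nicolas's finite-float property is easy to spot-check outside the image; a minimal sketch of mine (not from the thread) in Python, relying on IEEE 754 doubles behaving the same there:

```python
import itertools

# For finite IEEE 754 doubles, (a - b) = 0 exactly when a = b
# (gradual underflow guarantees a nonzero difference never rounds to 0).
samples = [0.0, -0.0, 0.1, 0.3 - 0.2, 1.0, 5e-324, 1e-308, 1e308, -2.5]
for a, b in itertools.product(samples, repeat=2):
    assert ((a - b) == 0) == (a == b), (a, b)
print("property holds on all sampled pairs")
```

The property fails as soon as non-floats enter the mix, which is exactly the thread's point.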



--
Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html



Re: float & fraction equality bug

Nicolas Cellier
In reply to this post by raffaello.giulietti


2017-11-10 1:18 GMT+01:00 <[hidden email]>:
On 2017-11-10 00:02, Henrik Sperre Johansen wrote:
> [...]

Personally, I could live without mixed arithmetic between unlimited and
limited precision numbers: I would always be explicit in the
conversions. They are so different in nature, like apples and oranges are.

This also entails that I would accept that 1 = 1.0 evaluates to false,
that 1 <= 1.0 throws an exception and all consequences of this.

But this would be unacceptable for most Smalltalkers for many good
reasons. And, as you point out, = and total ordering are easily
preserved if Floats are converted to Fractions. But then please convert
Floats to Fractions even when adding, multiplying, etc.

To me a Float does not stand for an interval. It's just a real number
that happens to have strange, unfamiliar operations that resemble the
pure ones. Some familiar properties also hold for these strange
operations, others do not.



It would be tempting, but a priori I don't believe it would be sustainable.
What is nice in Smalltalk is that we can just try.
(a bit less easy for int+float because it's hardwired in the primitives and maybe the JIT too)

So in current Pharo, if I just define

    Fraction>>adaptToFloat: f andSend: m
        ^f isFinite
            ifTrue: [f asFraction perform: m with: self]
            ifFalse: [f perform: m with: self asFloat]

and symmetric in

    Float>>adaptToFraction: f andSend: m
        ^self isFinite
             ifTrue: [f perform: m with: self asFraction]
             ifFalse: [f asFloat perform: m with: self]

then World does not seem to fall apart...
We would need to measure whether a noticeable slowdown occurs.



Re: summary of "float & fraction equality bug"

raffaello.giulietti
In reply to this post by raffaello.giulietti
I would like to summarize my perspective on what emerged from the
discussions in the "float & fraction equality bug" thread.

The topic is mixed operations involving both Fractions and Floats, and
it can be restated as the question of whether it is better to
automagically convert the Float to a Fraction or the Fraction to a
Float before performing the operation.

AFAIK, Pharo currently implements a dual-conversion strategy:
(1) it applies Float->Fraction for comparison like =, <=, etc.
(2) it applies Fraction->Float for operations like +, *, etc.
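For what it's worth, Python's standard library makes the same dual choice, so the strategy can be observed there (a sketch of mine, not from the original post; assumes CPython 3's `fractions` module):

```python
from fractions import Fraction

# (1) Comparison converts the Float side exactly, like Pharo's =:
assert Fraction(1, 10) != 0.1        # 0.1 is not exactly one tenth
assert Fraction(0.1) == 0.1          # exact Float->Fraction round trip

# (2) Arithmetic converts the Fraction side to a Float:
result = Fraction(1, 10) + 0.1
assert isinstance(result, float)
```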

The reason for (1) is preservation of = as an equivalence and of <= as a
total ordering. This is an important point for most Smalltalkers.

The reason for (2), however, is supposedly better performance. While it
is true that Floats perform better than Fractions, I'm not sure the
difference is noticeable in everyday use. Further, the Fraction->Float
conversion might even cost more than the gain of using Floats for the
real work, the operation itself. The Float->Fraction conversion, on the
contrary, is simpler.

But the major disadvantage of (2) is that it enters the world of limited
precision computation (e.g., Floats), which is much harder to
understand, less intuitive, and more surprising for most of us.



So, it might be worthwhile to suppress (2) and consistently apply
Float->Fraction conversions whenever needed. It won't make daily
computations noticeably slower, and it preserves more enjoyable
properties than the current dual-conversion regime.

Also, it won't prevent numericists or other practitioners from doing
floating point computations in mixed contexts: just apply explicit
Fraction->Float conversions when so desired.

This will be at odds with other Smalltalk implementations but might end
up being a safer environment.



I would like to thank Nicolas in particular for being so quick in
answering back and for the good points he raised.

Greetings
Raffaello


Re: summary of "float & fraction equality bug"

henry
Good summary. To comparison and operations I'd add encoding: ASN.1 Reals MUST be converted to Float before being encoded as mantissa, exponent, etc. That is another Fraction -> Float conversion, as is ScaledDecimal -> Float.


Sent from ProtonMail Mobile


On Fri, Nov 10, 2017 at 06:59, <[hidden email]> wrote:
> [...]

Re: summary of "float & fraction equality bug"

Tudor Girba-2
In reply to this post by raffaello.giulietti
Thanks indeed for the summary. I like this.

Doru


> On Nov 10, 2017, at 12:59 PM, [hidden email] wrote:
>
> [...]

--
www.tudorgirba.com
www.feenk.com

"Yesterday is a fact.
 Tomorrow is a possibility.
 Today is a challenge."






Re: summary of "float & fraction equality bug"

Florin Mateoc
I think we should also mention that the literal 0.1 is not the number in base ten that we all learned in school,
even though both are printed the same way in base 10 - this is the crux of the problem.

But there is such a number in the system, called ScaledDecimal, for which the equality holds:

0.1s = (1/10)

We could have chosen ScaledDecimal as the backing for our decimal literals (the literal 0.1 being translated by the
parser/compiler to a ScaledDecimal, with only, say, the literal 0.1f translated to an imprecise Float) and we
would not have had this problem. We could probably still do it.
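Florin's split between exact decimal literals and explicitly imprecise floats exists in Python as well, which gives a feel for the proposal (my comparison, not from the post; uses the standard `decimal` and `fractions` modules):

```python
from decimal import Decimal
from fractions import Fraction

# Decimal("0.1") holds exactly one tenth, like the ScaledDecimal 0.1s,
# so Florin's equality holds; the float literal 0.1 does not.
assert Decimal("0.1") == Fraction(1, 10)
assert 0.1 != Fraction(1, 10)
```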

Florin

On 11/10/2017 7:18 AM, Tudor Girba wrote:

> Thanks indeed for the summary. I like this.
>
> Doru
>
>
>> [...]



Re: summary of "float & fraction equality bug"

Florin Mateoc
On 11/10/2017 9:39 AM, Florin Mateoc wrote:

> I think we should also mention that the literal 0.1 is not the number in base ten that we all learned in school, despite
> both using base 10 for printing and despite both being printed the same way - this is the crux of the problem.
>
> But there is such a number in the system, called ScaledDecimal, for which the equality stands:
>
> 0.1s = (1/10)
>
> We could have chosen ScaledDecimal as backing for our decimal literals (the literal 0.1 being translated by the
> parser/compiler as a ScaledDecimal and only have say the literal 0.1f being translated to an imprecise float) and we
> would not have had this problem. We could probably still do it
>
> Florin

Sorry to follow up on myself, but the more I think about it, the more I like my own proposal.
It would make the choice to be fast but imprecise an explicit compromise.



>
> On 11/10/2017 7:18 AM, Tudor Girba wrote:
>> [...]



Re: summary of "float & fraction equality bug"

Martin McClure-2
In reply to this post by raffaello.giulietti
On 11/10/2017 03:59 AM, [hidden email] wrote:

> I would like to summarize my perspective of what emerged from the
> discussions in the "float & fraction equality bug" trail.
>
> The topic is all about mixed operations when both Fractions and Floats
> are involved in the mix and can be restated as the question of whether
> it is better to automagically convert the Float to a Fraction or the
> Fraction to a Float before performing the operation.
>
> AFAIK, Pharo currently implements a dual-conversion strategy:
> (1) it applies Float->Fraction for comparison like =, <=, etc.
> (2) it applies Fraction->Float for operations like +, *, etc.
>
[...]

Thanks for the summary, Raffaello.
One can choose to convert Float -> Fraction (because fraction is the
more general format, in that it can represent more of the real numbers)
or to convert Fraction->Float (because a Float is considered an
approximation of a real number).

But more important, I think, is that whichever choice is made, the
*same* choice must be made in *all* operations that involve both Floats
and Fractions.

Regards,
-Martin


Re: summary of "float & fraction equality bug"

Tudor Girba-2

> On Nov 10, 2017, at 7:45 PM, Martin McClure <[hidden email]> wrote:
>
> On 11/10/2017 03:59 AM, [hidden email] wrote:
>> [...]
> [...]
>
> Thanks for the summary, Raffaello.
> One can choose to convert Float -> Fraction (because fraction is the more general format, in that it can represent more of the real numbers) or to convert Fraction->Float (because a Float is considered an approximation of a real number).
>
> But more important, I think, is that whichever choice is made, the *same* choice must be made in *all* operations that involve both Floats and Fractions.

Indeed, this was the source of my original confusion.

Doru


> Regards,
> -Martin
>

--
www.tudorgirba.com
www.feenk.com

"Presenting is storytelling."



Re: summary of "float & fraction equality bug"

raffaello.giulietti
In reply to this post by Martin McClure-2
On 2017-11-10 19:45, Martin McClure wrote:

> [...]
>
> Thanks for the summary, Raffaello.
> One can choose to convert Float -> Fraction (because fraction is the
> more general format, in that it can represent more of the real numbers)
> or to convert Fraction->Float (because a Float is considered an
> approximation of a real number).
>
> But more important, I think, is that whichever choice is made, the
> *same* choice must be made in *all* operations that involve both Floats
> and Fractions.
>
> Regards,
> -Martin



Doing only Fraction->Float conversions in mixed mode won't preserve = as
an equivalence relation and won't enable a consistent ordering with <=,
which probably most Smalltalkers consider important and enjoyable
properties. Nicolas gave some convincing examples of why most
programmers might want to rely on them.

Also, as I mentioned, most Smalltalkers might prefer to keep away from
the complex properties of Floats. Doing automatic, implicit
Fraction->Float conversions behind the scenes only increases the
likelihood of encountering Floats and of having to deal with their
weird and unfamiliar arithmetic.
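Concretely, the equivalence breaks because distinct Fractions round to the same Float; a small check (my illustration, in Python, whose `fractions` module mirrors the exact arithmetic):

```python
from fractions import Fraction

# Two distinct rationals whose nearest double is the same Float 0.1.
# Under Fraction->Float comparison both would "equal" 0.1, yet they
# differ from each other: transitivity of = would be lost.
f1 = Fraction(1, 10)
f2 = Fraction(3602879701896397, 36028797018963968)  # exact value of 0.1

assert f1 != f2                         # distinct as rationals
assert float(f1) == float(f2) == 0.1    # same nearest Float
assert f2 == Fraction(0.1)              # f2 is exactly the Float 0.1
```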





Re: summary of "float & fraction equality bug"

Martin McClure-2
On 11/10/2017 11:33 AM, [hidden email] wrote:
> Doing only Fraction->Float conversions in mixed mode won't preserve = as
> an equivalence relation and won't enable a consistent ordering with <=,
> which probably most Smalltalkers consider important and enjoyable
> properties.
Good point. I agree that Float -> Fraction is the more desirable mode
for implicit conversion, since it can always be done without changing
the value.
> Nicolas gave some convincing examples on why most
> programmers might want to rely on them.
>
>
> Also, as I mentioned, most Smalltalkers might prefer keeping away from
> the complex properties of Floats. Doing automatic, implicit
> Fraction->Float conversions behind the scenes only exacerbates the
> probability of encountering Floats and of having to deal with their
> weird and unfamiliar arithmetic.
One problem is that we make it easy to create Floats in source code
(0.1), and we print Floats in a nice decimal format but by default print
Fractions in their reduced fractional form. If we didn't do this,
Smalltalkers might not be working with Floats in the first place, and if
they did not have any Floats in their computation they would never run
into an implicit conversion to *or* from Float.

As it is, if we were to uniformly do Float -> Fraction conversion on
mixed-mode operations, we would get things like

(0.1 * (1/1)) printString  -->  '3602879701896397/36028797018963968'

Not incredibly friendly.

Regards,
-Martin


Re: summary of "float & fraction equality bug"

Nicolas Cellier


2017-11-10 20:58 GMT+01:00 Martin McClure <[hidden email]>:
On 11/10/2017 11:33 AM, [hidden email] wrote:
[...]
One problem is that we make it easy to create Floats in source code (0.1), and we print Floats in a nice decimal format but by default print Fractions in their reduced fractional form. If we didn't do this, Smalltalkers might not be working with Floats in the first place, and if they did not have any Floats in their computation they would never run into an implicit conversion to *or* from Float.

As it is, if we were to uniformly do Float -> Fraction conversion on mixed-mode operations, we would get things like

(0.1 * (1/1)) printString                       -->
'3602879701896397/36028797018963968'

Not incredibly friendly.

For those not practicing litotes: definitely a no-go.


Regards,
-Martin


At the risk of repeating myself, a unique choice for all operations is a nice-to-have, but not a goal per se.

I mostly agree with Florin: having 0.1 representing a decimal rather than a Float might be a better path meeting more expectations.

But thinking that it will magically eradicate the problems is a myth.

Shall precision be limited, or shall we use ScaledDecimals?

With limited precision, we'll be back to having several Fractions converting to the same LimitedDecimal, so it won't solve anything with respect to the original problem.
With unlimited precision, we'll have the bad property that what we print does not re-interpret to the same ScaledDecimal (0.1s / 0.3s), but that's a detail.
The worse thing is that long chains of operations will tend to produce monster numerators and denominators.
And we will all have long chains when resizing a morph with proportional layout in scalable graphics.
We will then have to insert rounding operations manually to mitigate the problem, somehow reinventing a more inconvenient Float...
It's boring to always play the role of Cassandra, but why do you think Scheme and Lisp did not choose that path?
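The "monster numerators and denominators" are easy to reproduce; a small sketch (mine, in Python, with exact `Fraction` arithmetic standing in for unlimited-precision decimals):

```python
from fractions import Fraction

# Iterating an exact rational map: the denominator roughly squares at
# each step, so its digit count explodes, where a Float would stay
# within 64 bits (at the price of rounding).
x = Fraction(1, 10)
for _ in range(10):
    x = 4 * x * (1 - x)   # logistic map, computed exactly

assert x.denominator > 10 ** 100   # hundreds of digits after 10 steps
```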
