float & fraction equality bug


Re: summary of "float & fraction equality bug"

Florin Mateoc
On 11/10/2017 4:18 PM, Nicolas Cellier wrote:


2017-11-10 20:58 GMT+01:00 Martin McClure <[hidden email]>:
On 11/10/2017 11:33 AM, [hidden email] wrote:
Doing only Fraction->Float conversions in mixed mode won't preserve = as
an equivalence relation and won't enable a consistent ordering with <=,
which probably most Smalltalkers consider important and enjoyable
properties.
Good point. I agree that Float -> Fraction is the more desirable mode for implicit conversion, since it can always be done without changing the value.
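A workspace sketch of the equivalence failure that Fraction->Float conversion causes (the long fraction below is the exact value of the double-precision Float 0.1; the first two results assume the Fraction->Float comparison under discussion):

    (1/10) = 0.1.                                   "true: 1/10 rounds to the Float 0.1"
    (3602879701896397/36028797018963968) = 0.1.     "true: this Fraction is exactly the Float 0.1"
    (1/10) = (3602879701896397/36028797018963968).  "false: exact comparison, so = loses transitivity"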
Nicolas gave some convincing examples of why most
programmers might want to rely on them.


Also, as I mentioned, most Smalltalkers might prefer to keep away from
the complex properties of Floats. Doing automatic, implicit
Fraction->Float conversions behind the scenes only increases the
probability of encountering Floats and of having to deal with their
weird and unfamiliar arithmetic.
One problem is that we make it easy to create Floats in source code (0.1), and we print Floats in a nice decimal format but by default print Fractions in their reduced fractional form. If we didn't do this, Smalltalkers might not be working with Floats in the first place, and if they did not have any Floats in their computation they would never run into an implicit conversion to *or* from Float.

As it is, if we were to uniformly do Float -> Fraction conversion on mixed-mode operations, we would get things like

(0.1 * (1/1)) printString                       -->
'3602879701896397/36028797018963968'

Not incredibly friendly.

For those not versed in litotes: definitely a no-go.


Regards,
-Martin


At the risk of repeating myself: a single conversion choice for all operations is nice to have, but not a goal per se.

I mostly agree with Florin: having 0.1 represent a decimal rather than a Float might be a better path, meeting more expectations.

But thinking that it will magically eradicate the problems is a myth.

Shall precision be limited, or shall we use ScaledDecimals?

With limited precision, we'll be back to having several Fractions converting to the same LimitedDecimal, so it won't solve anything with respect to the original problem.
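For instance, sketching a hypothetical two-place LimitedDecimal with the standard Number>>roundTo: method:

    (1/3) roundTo: 1/100.     "33/100"
    (33/100) roundTo: 1/100.  "33/100: after conversion, 1/3 and 33/100 are indistinguishable"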

I don't think people have problems understanding limited precision. We do know that when we write 3.14... we did not write pi.
As long as it does not contradict the intuition that we developed early on (and that's the problem with the binary representation of floats: not that it is limited)...

With unlimited precision, we'll have the bad property that what we print does not re-interpret to the same ScaledDecimal (0.1s / 0.3s), but that's a detail.
The worst thing is that long chains of operations will tend to produce monster numerators and denominators.
And we will all have long chains when resizing a morph with proportional layout in scalable graphics.
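A quick workspace sketch of that growth (the starting Fraction is the exact value of the Float 0.1):

    | x |
    x := 3602879701896397/36028797018963968.
    4 timesRepeat: [x := x * x + x].
    x denominator printString size.  "265: hundreds of digits after only four steps"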

But I did not propose to drop floats, just to change the visible representation of literals: 0.1 would mean a ScaledDecimal literal and 0.1f would mean a float literal. The extra "f" would be a constant reminder that something special is going on (the dangers of interpreting the literal as an exact representation in base 10).

We will then have to insert rounding operations manually to mitigate the problem, and somehow reinvent a more inconvenient Float...

I don't think we would have to reinvent Float: this one would stay exactly the same (other than printing the extra f (or d)).
But I agree that you raise an important point with precision/scale. We would indeed need to make ScaledDecimal more complicated.
Because today 0.1s * 0.1s is printed as 0.0s1, even though 0.1s * 0.1s = (1/100) evaluates to true, which is nice.
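Concretely (current Squeak/Pharo behavior; the last comment describes the scale-adding fix proposed later in the thread, not what images do today):

    (0.1s * 0.1s) printString.  "'0.0s1': the scale-1 result prints as if the product were zero"
    0.1s * 0.1s = (1/100).      "true: internally the value is the exact Fraction 1/100"
    "If multiplication added the scales, the product would carry scale 2 and print as '0.01s2'."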

It's boring to always play the role of Cassandra, but why do you think that Scheme and Lisp did not choose that path?

Oh, come on: if Scheme and Lisp did it all already, why are we all here? :)


Re: summary of "float & fraction equality bug"

Florin Mateoc
On 11/10/2017 4:42 PM, Florin Mateoc wrote:
[...]
But I did not propose to drop floats, just to change the visible representation of literals: 0.1 would mean a ScaledDecimal literal and 0.1f would mean a Float literal.


And there would still be a lot of places where we would continue to judiciously use floats, such as in Morphic. Just like today, people don't use Fractions when they don't need the extra precision... ah, who am I kidding?
There are a lot of developers out there who would go to great lengths to save an extra keystroke. :)
Well, I think it could be made to work (e.g. one of the low-hanging fixes would be that multiplication of ScaledDecimals would add their scales), and it would nicely complement Smalltalk's seamless and intuitive LargeIntegers: just as LargeIntegers liberate us from the tyranny of the hardware representation of integers, we should not be prisoners of the hardware representation of floats.
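That liberation is easy to see in any workspace:

    (2 raisedTo: 64) printString.  "'18446744073709551616': just past what an unsigned 64-bit word can hold, computed exactly"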




Re: summary of "float & fraction equality bug"

Nicolas Cellier


2017-11-10 22:42 GMT+01:00 Florin Mateoc <[hidden email]>:
[...]
Oh, come on: if Scheme and Lisp did it all already, why are we all here? :)

Not at all; here is what it means:
We must analyze why some decisions were taken, and what conditions have changed to enable different solutions.
Maths did not change so much, so it must be something else...
Fast CPUs and huge memory could be a game changer, but how do LargeInteger operations scale?
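One rough way to probe that in a workspace (a sketch; timings are machine-dependent, and assuming the schoolbook O(n^2) multiplication most images ship):

    | a |
    a := (10 raisedTo: 10000) + 1.  "a 10001-digit LargeInteger"
    [a * a] timeToRun.              "milliseconds in Squeak (a Duration in recent Pharo); roughly quadruples when the digit count doubles"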

Re: summary of "float & fraction equality bug"

Nicolas Cellier


2017-11-10 23:04 GMT+01:00 Florin Mateoc <[hidden email]>:
[...]
I don't think people have problems understanding limited precision. We do know that when we write 3.14... we did not write pi.
As long as it does not contradict the intuition that we developed early on (and that's the problem with the binary representation of floats: not that it is limited)...

The problem is about wanting to preserve equivalence: with limited precision, forget about ((a - b) isZero) = (a = b), whatever base we use for printing numbers.
[...]
And there would still be a lot of places where we would continue to judiciously use floats, such as in Morphic. Just like today, people don't use Fractions when they don't need the extra precision... ah, who am I kidding?

Of course; most of the places where we use Float today, in fact.
Doesn't that mean that ScaledDecimals would essentially solve the problems we don't have?

The major outcome would be having some notation for Floats that tells about the rounding.
Currently, with syntax coloring, we could already mark Floats that are not exactly the decimal they pretend to be, without changing the syntax at all.
So let's do that immediately.
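The check such a colorizer needs is cheap (a sketch, assuming the Float>>asExactFraction method found in Squeak-family images):

    0.1 asExactFraction = (1/10).  "false: highlight it; the literal is not the decimal it pretends to be"
    0.5 asExactFraction = (1/2).   "true: exact, leave it alone"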

There are a lot of developers out there who would go to great lengths to save an extra keystroke. :)
Well, I think it could be made to work (e.g. one of the low-hanging fixes would be that multiplication of ScaledDecimals would add their scales), and it would nicely complement Smalltalk's seamless and intuitive LargeIntegers: just as LargeIntegers liberate us from the tyranny of the hardware representation of integers, we should not be prisoners of the hardware representation of floats.

But then division is the little grain of sand in the cog that keeps ScaledDecimals from scaling.

All fractions have a finite sequence of digits that repeats, so it's not completely unsolvable.
We could write 1.2357575757... as 1.23(57) for example, meaning 57 repeated ad libitum.
In order to preserve the nice properties of the REPL, we would throw the scale away completely and just have Decimals.
The scale would be what it should be: just a parameter passed to one of several printing variants.
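A sketch of such a printer, as a hypothetical helper method on Fraction (cycle detection by remembering the remainders of the long division; not production code):

printRepeating
    "Answer my decimal expansion with the repeating digits in parentheses,
    e.g. (407/330) printRepeating = '1.2(3)' and (1/4) printRepeating = '0.25'."
    | n d r digits seen out |
    n := self numerator abs.
    d := self denominator.
    r := n \\ d.
    digits := OrderedCollection new.
    seen := Dictionary new.  "remainder -> 1-based index of the digit it produced"
    [r ~= 0 and: [(seen includesKey: r) not]] whileTrue:
        [seen at: r put: digits size + 1.
        r := r * 10.
        digits add: (Character digitValue: r // d).
        r := r \\ d].
    out := WriteStream on: String new.
    self negative ifTrue: [out nextPut: $-].
    out print: n // d.
    digits isEmpty ifFalse: [out nextPut: $.].
    r = 0
        ifTrue: [digits do: [:c | out nextPut: c]]
        ifFalse:
            [| start |
            start := seen at: r.
            1 to: start - 1 do: [:i | out nextPut: (digits at: i)].
            out nextPut: $(.
            start to: digits size do: [:i | out nextPut: (digits at: i)].
            out nextPut: $)].
    ^ out contents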





Re: summary of "float & fraction equality bug"

raffaello.giulietti
On 2017-11-10 22:18, Nicolas Cellier wrote:

> [...]
> It's boring to always play the role of Cassandra, but why do you think
> that Scheme and Lisp did not choose that path?

I don't know why the Lispers went this way. They justify the conversion choice
for comparison (equivalence of = and total ordering of <=), but for the
arithmetic operations there does not seem to be a clearly stated rationale.

Are they happy with their choice? I haven't the foggiest idea.

Don't get me wrong: I understand the Fraction->Float choice for the sake
of more speed.

But this increase in speed is not functionally transparent: it is not
like an increase in the clock rate of a CPU, nor like a better-performing
division algorithm, a faster sorting implementation, or a JIT compiler.

This increase in speed, rather, comes at the cost of a shift in
functionality and cognitive load, because even + or 0.1 no longer have
their conventional semantics. It's not that Floats are wrong by themselves;
it's that their operations seem weird at first and at odds with the
familiar behavior.
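The canonical one-liner that trips newcomers up:

    0.1 + 0.2 = 0.3.            "false: every literal here is already rounded to binary"
    (1/10) + (2/10) = (3/10).   "true: the exact arithmetic they expected"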



Again, I'm not addressing the experts in floating point computation
here: they know how to deal with it. Thus, I expect that the developers
of numerically intensive libraries or packages, like 2D or 3D graphics,
are fully aware of the trade-offs and are in a position to make an
explicit choice: either monster denominators in fractions or more care
with floats, but always in control.

Here, I'm targeting the John Doe Smalltalker, who usually uses
well-crafted, well-performing libraries and probably performs only a few
thousand mixed Fraction/Float computations of his own, where speed
trade-offs do not matter. He is better served with implicit, automatic
Float->Fraction conversions in both comparisons and operations.

For him, less cognitive load and fewer surprises are probably an added value.



Re: summary of "float & fraction equality bug"

Tudor Girba-2
Hi,

Indeed, I agree with this point of view.

Can we distill a concrete path to action from this?

Cheers,
Doru


> On Nov 11, 2017, at 11:58 AM, [hidden email] wrote:
> [...]
> For him, less cognitive load and fewer surprises are probably an added value.

--
www.tudorgirba.com
www.feenk.com

"Every now and then stop and ask yourself if the war you're fighting is the right one."






Re: summary of "float & fraction equality bug"

wernerk
Hi,
I don't agree with this point of view. For example, consider this mixed-mode calculation:
aFraction ln + aNumber.
Fraction>>#ln returns a Float, and I suppose you don't want to change that,
e.g. by returning a quasi-NaN like
"ResultNonRepresentableInTheSetOfFractionsError".
If you suppress (2) and aNumber is a Float, you'd get a Float as the
result. If aNumber is a Fraction, you'd get a Fraction as the result, and
perhaps somebody could expect that result to be precise, but actually it
can be less precise than in the first case (two roundings instead of
one). This change just opens a can of worms.
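Concretely (a workspace sketch; the printed value is approximate):

    (1/3) ln.          "-1.0986122886681098: a Float, because ln leaves the rationals"
    (1/3) ln + (1/4).  "today a Float; with forced Float->Fraction conversion it would become an exact-looking Fraction built from an already-rounded Float"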
werner

On 11/12/2017 08:15 AM, Tudor Girba wrote:

> Hi,
>
> Indeed, I agree with this point of view.
>
> Can we distill a concrete path to action from this?
>
> Cheers,
> Doru
> [...]

