fdlibm 1.0 exp is not the closest approximation of E


fdlibm 1.0 exp is not the closest approximation of E

Nicolas Cellier
It seems that
    1.0 exp = (Float classPool at: #E)
is now false with the fdlibm version, at least on Cog/MacOSX.

(Float classPool at: #E) is the closest Float approximation of e,
which you can check with:
(1.0 asArbitraryPrecisionFloatNumBits: 200) exp asFloat = (Float
classPool at: #E)

It's bad that fdlibm is able to compute (1.0e32 cos) with less than
1/2 ulp error, but computes (1.0 exp) with more than 1/2 ulp!

Nicolas


Re: fdlibm 1.0 exp is not the closest approximation of E

Igor Stasenko
On 27 December 2010 20:09, Nicolas Cellier
<[hidden email]> wrote:
> It seems that
>    1.0 exp = (Float classPool at: #E)
> is now false with the fdlibm version, at least on Cog/MacOSX.
>

I tried both John's 5.8b12
and my own-built Cog VM; it returns true in both cases.
Maybe my image is obsolete and E is different there?

(Float classPool at: #E) at: 1
1074118410

(Float classPool at: #E) at: 2
 2333366121

> (Float classPool at: #E) is the closest Float approximation of e,
> which you can check with:
> (1.0 asArbitraryPrecisionFloatNumBits: 200) exp asFloat = (Float
> classPool at: #E)
>
> It's bad that fdlibm is able to compute (1.0e32 cos) with less than
> 1/2 ulp error, but computes (1.0 exp) with more than 1/2 ulp!
>

Nicolas, how about writing your own float math plugin?
You, among all of us, seem to care about float math the most. So how about
getting your hands dirty with some C hacking?
:)

> Nicolas
>
>



--
Best regards,
Igor Stasenko AKA sig.


Re: fdlibm 1.0 exp is not the closest approximation of E

Andres Valloud-4
In reply to this post by Nicolas Cellier
Something I learned is that basically all platforms claim wonderful IEEE
functionality, but in practice the standard is followed only to a
certain extent.  In addition, there are wonderful examples of how your
mileage may vary.  For example...

* The only platform on which I saw sensible arcTan functionality was
IRIX on RISC.

* On HPUX, doing something like 10d "or a Squeak Float" raisedTo:
-306, followed by floorLog10, results in 306.  But it fails with 307.
It works on Windows, Mac, Linux, AIX, IRIX/RISC (when we supported it
--- the platform is being phased out by Silicon Graphics), etc.  Why?

* Some platforms provide .h files with handy definitions for IEEE
related functions such as isinf(), isnan(), isfinite() and the like.
But different versions of the operating system will provide different
compilation environments.  Consequently, whether you can use such things
in your code depends on the compilation environment.  And if you upgrade
the compilation environment too aggressively, you stop supporting
versions of the OS that are still officially supported.

* x86 FPUs are known to have a somewhat imprecise approximation of pi,
which in turn affects the trigonometric transcendentals.  This problem
was fixed back in the early nineties by AMD, but it was rolled back
because of compatibility concerns.  The 32 bit x86 FPU was never fixed.
  Supposedly, the 64 bit FPU uses the new pi approximation.  Of course,
that may mean that you won't get the same floating point results in 32
and 64 bits.

* Also, on x86, you will get different results if you let the FPU use
the extended double format that enlarges the mantissa to 64 bits.  So,
if you keep doing calculations in the FPU, the effects of rounding will
be different than if you put in some value in the FPU, do the
calculation, and get the intermediate value out, thus truncating the
extended double format to the normal double format.  Of course leaving
the doubles in the FPU is faster, but then you get different results
unless you twiddle the control bits of the FPU.  And, by the way, 64 bit
Windows defaults to the extended format.

* On the Mac, you get different trigonometric transcendental results
between 10.4 and 10.6.

And this is just a sample of the wonderful world of IEEE and floating
point arithmetic.



Re: fdlibm 1.0 exp is not the closest approximation of E

Andres Valloud-4
A couple of extra things.  Floating point operations that result in IEEE
special values (INF, NaN) behave differently on different platforms.  For
example, on some platforms you may get INF, and on others you may get
NaN.  On some platforms, you may actually get a very large value, but
not INF.  Testing for conformance is a big pain because you have to
account for all the disparate behavior.  In addition, Intel loosely
defines a particular NaN bit configuration to mean "indeterminate".
The IEEE standard has no such thing.

Finally, in the HPUX example in my earlier message, the exponents
should read -306, -306, and -307.



Re: fdlibm 1.0 exp is not the closest approximation of E

Nicolas Cellier
In reply to this post by Igor Stasenko
2010/12/27 Igor Stasenko <[hidden email]>:

> On 27 December 2010 20:09, Nicolas Cellier
> <[hidden email]> wrote:
>> It seems that
>>    1.0 exp = (Float classPool at: #E)
>> is now false with the fdlibm version, at least on Cog/MacOSX.
>>
>
> tried on both John's 5.8b12
> and own-built Cog VM , it returns true in both cases.
> Maybe my image is obsolete and E is different there?
>
> (Float classPool at: #E) at: 1
> 1074118410
>
> (Float classPool at: #E) at: 2
>  2333366121
>

Hi Igor,
your E seems correct.
You can simply use (Float classPool at: #E) hex.

Do you have the FloatMathPlugin plugged in your image ?
In my trunk, Float>>exp is defined like this:

exp
        "Answer E raised to the receiver power.
         Optional. See Object documentation whatIsAPrimitive."

        <primitive: 'primitiveExp' module: 'FloatMathPlugin'>
        self isNaN ifTrue:[SignalNaN ifTrue:[NaNError signal]. ^self].
        "For now, fall back to the Squeak version of exp if FloatMathPlugin is absent"
        ^self primitiveExp

The old primitive works OK for exp(1.0):
    1.0 primitiveExp hex = (Float classPool at: #E) hex

>> (Float classPool at: #E) is the closest Float approximation of e,
>> which you can check with:
>> (1.0 asArbitraryPrecisionFloatNumBits: 200) exp asFloat = (Float
>> classPool at: #E)
>>
>> It's bad that fdlibm is able to compute (1.0e32 cos) with less than
>> 1/2 ulp error, but computes (1.0 exp) with more than 1/2 ulp!
>>
>
> Nicolas, how about writing own float math plugin?
> You , among all of us seems care about float math most. So how about
> getting your hands dirty with some C hacking?
> :)
>

You mean rewriting fdlibm? Less fun than writing in Smalltalk!

Nicolas


Re: fdlibm 1.0 exp is not the closest approximation of E

Andres Valloud-4
In reply to this post by Andres Valloud-4
Oh, and finally (yes, the final time this time), if your C code ever
does something like

   *((sixtyFourBitInt*)&someDouble)

then you're asking for trouble: that expression violates the pointer
aliasing rules specified in the C99 standard...



Re: fdlibm 1.0 exp is not the closest approximation of E

Igor Stasenko
In reply to this post by Nicolas Cellier
On 27 December 2010 23:04, Nicolas Cellier
<[hidden email]> wrote:

> 2010/12/27 Igor Stasenko <[hidden email]>:
>> On 27 December 2010 20:09, Nicolas Cellier
>> <[hidden email]> wrote:
>>> It seems that
>>>    1.0 exp = (Float classPool at: #E)
>>> is now false with the fdlibm version, at least on Cog/MacOSX.
>>>
>>
>> tried on both John's 5.8b12
>> and own-built Cog VM , it returns true in both cases.
>> Maybe my image is obsolete and E is different there?
>>
>> (Float classPool at: #E) at: 1
>> 1074118410
>>
>> (Float classPool at: #E) at: 2
>>  2333366121
>>
>
> Hi Igor,
> your E seems correct.
> You can simply use (Float classPool at: #E) hex.
>
> Do you have the FloatMathPlugin plugged in your image ?
> In my trunk, Float>>exp is defined like this:
>
> exp
>        "Answer E raised to the receiver power.
>         Optional. See Object documentation whatIsAPrimitive."
>
>        <primitive: 'primitiveExp' module: 'FloatMathPlugin'>
>        self isNaN ifTrue:[SignalNaN ifTrue:[NaNError signal]. ^self].
>        "For now, fall back to the Squeak version of exp if FloatMathPlugin is absent"
>        ^self primitiveExp
>
> The old primitive works OK for exp(1.0) :
>    1.0 primitiveExp hex = (Float classPool at: #E) hex
>
I found that I was using the old primitives, not the fdlibm ones.

Indeed, after switching to the FloatMathPlugin primitive, I get false from:

1.0 primitiveExp = (Float e)

>>> (Float classPool at: #E) is the closest Float approximation of e,
>>> which you can check with:
>>> (1.0 asArbitraryPrecisionFloatNumBits: 200) exp asFloat = (Float
>>> classPool at: #E)
>>>
>>> It's bad that fdlibm is able to compute (1.0e32 cos) with less than
>>> 1/2 ulp error, but computes (1.0 exp) with more than 1/2 ulp!
>>>
>>
>> Nicolas, how about writing own float math plugin?
>> You , among all of us seems care about float math most. So how about
>> getting your hands dirty with some C hacking?
>> :)
>>
>
> You mean rewriting fdlibm ? Less fun than writing in Smalltalk !
>

Yeah, but then you could solve the problems :)

> Nicolas
>
>



--
Best regards,
Igor Stasenko AKA sig.