On Fri, Nov 21, 2014 at 02:30:59PM +0100, Bert Freudenberg wrote:
> To be abstract, or to be concrete, that is the question.
> 
> Coming back to Eliot's proposal:
> 
> > modify class Float to be an abstract class, and add two subclasses,
> > BoxedFloat and SmallFloat, such that existing boxed instances of Float
> > outside the SmallFloat range will become instances of BoxedFloat and
> > instances within that range will be replaced by references to the
> > relevant SmallFloat.
> > [...]
> > An alternative [...] is to add a superclass, e.g. LimitedPrecisionReal,
> > move most of the methods into it, and keep Float as Float, and add
> > SmallFloat as a subclass of LimitedPrecisionReal.
> 
> Float
>  |
>  +------- BoxedFloat
>  |
>  +------- SmallFloat
> 
> LimitedPrecisionReal
>  |
>  +------- Float
>  |
>  +------- SmallFloat
> 
> The actual question was if the class named "Float" (as used in expressions
> like "Float pi") should be concrete or abstract.
> 
> I strongly agree with Eliot's assessment that making Float the abstract
> superclass is best. What we name the two concrete subclasses is
> bikeshedding, and I trust Eliot to pick something not too unreasonable.

I also agree. The name "Float" suggests the concept of floating point
arithmetic. There are many different ways to implement that concept (*). But
for all of the possible concrete implementations of floating point numbers,
the name "Float" makes sense in the abstract.

In Squeak, all instances of "Float" (in the abstract sense) are currently
implemented as 64-bit doubles (instances of class Float) or 32-bit singles
(hidden within FloatArray). Spur-64 will provide an immediate implementation.
Maybe somebody will come up with a class to represent the 32-bit floating
point values in a FloatArray. And maybe someone else will come up with a
128-bit floating point representation, or something else entirely.

But in any case, it seems natural to have an abstract "Float" to represent
all of the concrete implementations that may prove necessary or useful over
time. So +1 for making Float be the abstract superclass.

Dave

(*) As a former field service engineer for Harris Computer Systems, I still
consider the 48-bit floating point format of the H800 series to be superior
to the awkward compromises of 32-bit and 64-bit floating point
representations ;-) See pages 2-2 and 6-1 of the manual for descriptions of
the floating point data formats (I think I have a paper copy of this
moldering away in my basement).

http://bitsavers.informatik.uni-stuttgart.de/pdf/harris/0830007-000_Series_800_Reference_Man_Aug79.pdf
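As an aside, the abstract/concrete split being discussed can be sketched executably. The JavaScript below (matching the only executable code in this thread) classifies a 64-bit double as "immediate" or "boxed" by inspecting its biased exponent. The 8-bit exponent window used here (a single-precision-like range) is an assumption for illustration only, not a confirmed detail of Spur-64's actual SmallFloat encoding:

    // Illustrative sketch only: pick a concrete "class" for a double based on
    // whether it fits a hypothetical immediate-float exponent window.
    function concreteFloatClassFor(value) {
        var bits = new DataView(new ArrayBuffer(8));
        bits.setFloat64(0, value); // big-endian by default
        var biasedExponent = (bits.getUint32(0) >>> 20) & 0x7FF; // 11-bit field
        // zero is always immediate; otherwise require the biased exponent to
        // fit an 8-bit window (assumed here: 896..1151, i.e. 2^-127..2^128)
        var fitsImmediate = value === 0 ||
            (biasedExponent >= 896 && biasedExponent <= 1151);
        return fitsImmediate ? "SmallFloat" : "BoxedFloat";
    }

    console.log(concreteFloatClassFor(3.141592653589793)); // "SmallFloat"
    console.log(concreteFloatClassFor(1e300));             // "BoxedFloat"

Either way, all senders see only the abstract Float; the concrete representation is an implementation detail.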
On 24.11.2014, at 05:09, David T. Lewis <[hidden email]> wrote:
> (*) As a former field service engineer for Harris Computer Systems, I still
> consider the 48-bit floating point format of the H800 series to be superior to
> the awkward compromises of 32-bit and 64-bit floating point representations ;-)
> See pages 2-2 and 6-1 of the manual for descriptions of the floating point data
> formats (I think I have a paper copy of this moldering away in my basement).
> 
> http://bitsavers.informatik.uni-stuttgart.de/pdf/harris/0830007-000_Series_800_Reference_Man_Aug79.pdf

Oh, I got excited for a moment there, thinking that maybe this could be the
origin of Smalltalk-78's weird 48-bit floating point format. But it's
completely different. I had to reverse-engineer it because Dan could not
remember (only later we got a printout of the VM's 8086 assembly source code).
It's optimized for a software implementation with the mantissa on a 16-bit
word boundary. Not sure why the exponent's sign bit is in the LSB though. But
16 bits of exponent, can you imagine the range? Luckily there were no insanely
large instances in the snapshot.
They get converted to modern floats when parsing the original object space dump:

wordsAsFloat: function() {
    // layout of NoteTaker Floats (from MSB):
    // 15 bits exponent in two's complement without bias, 1 bit sign
    // 32 bits mantissa including its highest bit (which is implicit in IEEE 754)
    if (this.words[1] == 0) return 0.0; // if high-bit of mantissa is 0, then it's all zero
    var nt0 = this.words[0], nt1 = this.words[1], nt2 = this.words[2],
        ntExponent = nt0 >> 1, ntSign = nt0 & 1,
        ntMantissa = (nt1 & 0x7FFF) << 16 | nt2, // drop high bit of mantissa
        ieeeExponent = (ntExponent + 1022) & 0x7FF, // IEEE: 11 bit exponent, biased
        ieee = new DataView(new ArrayBuffer(8));
    // IEEE is 1 sign bit, 11 bits exponent, 52 bits mantissa omitting the
    // highest bit (which is always 1, except for 0.0)
    ieee.setInt32(0, ntSign << 31 | ieeeExponent << (31-11) | ntMantissa >> 11); // top 20 bits of ntMantissa
    ieee.setInt32(4, ntMantissa << (32-11)); // remaining 11 bits of ntMantissa, rest filled up with 0
    // why not use setInt64()? Because JavaScript does not have 64 bit ints
    return ieee.getFloat64(0);
}

- Bert -
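The conversion can be sanity-checked standalone with a few hand-built word triples. The function below restates wordsAsFloat without the surrounding snapshot-parsing object; the word values are hypothetical test inputs constructed from the documented layout, not values from the actual NoteTaker snapshot:

    // Standalone restatement of wordsAsFloat above. Layout assumed as
    // documented there: words[0] = 15-bit two's-complement exponent (no bias)
    // followed by 1 sign bit; words[1..2] = 32-bit mantissa, high bit explicit.
    function noteTakerWordsToFloat(words) {
        if (words[1] == 0) return 0.0; // high mantissa bit clear => zero
        var ntExponent = words[0] >> 1,
            ntSign = words[0] & 1,
            ntMantissa = (words[1] & 0x7FFF) << 16 | words[2], // drop explicit high bit
            ieeeExponent = (ntExponent + 1022) & 0x7FF, // rebias; the mask also handles negative exponents
            ieee = new DataView(new ArrayBuffer(8));
        ieee.setInt32(0, ntSign << 31 | ieeeExponent << 20 | ntMantissa >> 11);
        ieee.setInt32(4, ntMantissa << 21);
        return ieee.getFloat64(0);
    }

    // 1.0 is 0.5 * 2^1: exponent 1, sign 0, mantissa 0x80000000
    console.log(noteTakerWordsToFloat([0x0002, 0x8000, 0x0000])); // 1
    // 0.25 is 0.5 * 2^-1: exponent -1 is 0x7FFF in 15-bit two's complement
    console.log(noteTakerWordsToFloat([0xFFFE, 0x8000, 0x0000])); // 0.25

Note that the "(ntExponent + 1022) & 0x7FF" trick works for negative exponents precisely because two's-complement wraparound and the 11-bit mask compose correctly, so no explicit sign extension of the 15-bit field is needed.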
On Mon, Nov 24, 2014 at 11:51:06AM +0100, Bert Freudenberg wrote:
> On 24.11.2014, at 05:09, David T. Lewis <[hidden email]> wrote:
> > (*) As a former field service engineer for Harris Computer Systems, I still
> > consider the 48-bit floating point format of the H800 series to be superior to
> > the awkward compromises of 32-bit and 64-bit floating point representations ;-)
> > See pages 2-2 and 6-1 of the manual for descriptions of the floating point data
> > formats (I think I have a paper copy of this moldering away in my basement).
> > 
> > http://bitsavers.informatik.uni-stuttgart.de/pdf/harris/0830007-000_Series_800_Reference_Man_Aug79.pdf
> 
> Oh, I got excited for a moment there, thinking that maybe this could be
> the origin of Smalltalk-78's weird 48 bit floating point format. But it's
> completely different.

It's just a coincidence, I'm sure. A 48-bit float makes a lot of sense.
Minicomputers were typically 16-bit machines, but the H800 was a 24-bit
machine with 48- and 96-bit floating point data types and 24-bit registers.
24 bits was a lot of address space, and a 48-bit float was far superior to
32 bits for scientific and numeric computing. In those days, "Super
Minicomputer" was a market category, and the H800 was marketed that way,
targeting applications such as finite element analysis.

I would not be surprised if 16-bit Smalltalk systems arrived at similar
conclusions for general purpose floating point representation. 32 bits would
have been too small, and 64 bits too big. 48 bits was just about the right
size to be useful for serious work on a small machine. Very large exponent
ranges also make sense for iterative numeric work, where I expect that they
would reduce the need to keep track of numeric overflow in some kinds of
calculations (just guessing, but I'm sure that was the reason).

> I had to reverse-engineer it because Dan could not remember (only later
> we got a printout of the VM's 8086 assembly source code). It's optimized
> for a software implementation with the mantissa on a 16-bit word boundary.
> Not sure why the exponent's sign bit is in the LSB though. But 16 bits of
> exponent, can you imagine the range? Luckily there were no insanely large
> instances in the snapshot. They get converted to modern floats when parsing
> the original object space dump:
> 
> wordsAsFloat: function() {
>     // layout of NoteTaker Floats (from MSB):
>     // 15 bits exponent in two's complement without bias, 1 bit sign
>     // 32 bits mantissa including its highest bit (which is implicit in IEEE 754)
>     if (this.words[1] == 0) return 0.0; // if high-bit of mantissa is 0, then it's all zero
>     var nt0 = this.words[0], nt1 = this.words[1], nt2 = this.words[2],
>         ntExponent = nt0 >> 1, ntSign = nt0 & 1,
>         ntMantissa = (nt1 & 0x7FFF) << 16 | nt2, // drop high bit of mantissa
>         ieeeExponent = (ntExponent + 1022) & 0x7FF, // IEEE: 11 bit exponent, biased
>         ieee = new DataView(new ArrayBuffer(8));
>     // IEEE is 1 sign bit, 11 bits exponent, 52 bits mantissa omitting the
>     // highest bit (which is always 1, except for 0.0)
>     ieee.setInt32(0, ntSign << 31 | ieeeExponent << (31-11) | ntMantissa >> 11); // top 20 bits of ntMantissa
>     ieee.setInt32(4, ntMantissa << (32-11)); // remaining 11 bits of ntMantissa, rest filled up with 0
>     // why not use setInt64()? Because JavaScript does not have 64 bit ints
>     return ieee.getFloat64(0);
> }

Cool! I had no idea that you were dialing your wayback machine this far back
in time. Very impressive indeed.

Dave
"Float" is also the name of an IEEE entity in a multitude of languages other
than ours... if we needed to implement single precision floating point
numbers, generally called "float", where would that fit?

On 11/21/14 10:08 , Eliot Miranda wrote:
> Good. I think I'll go with
> 
> Float
>  |
>  +------- BoxedDouble
>  |
>  +------- SmallDouble
On 28.11.2014, at 01:24, Andres Valloud <[hidden email]> wrote:
> "Float" is also the name of an IEEE entity in a multitude of languages other
> than ours... if we needed to implement single precision floating point
> numbers, generally called "float", where would that fit?
> 
> On 11/21/14 10:08 , Eliot Miranda wrote:
> > Good. I think I'll go with
> > 
> > Float
> >  |
> >  +------- BoxedDouble
> >  |
> >  +------- SmallDouble

You missed the decision: we're going with BoxedFloat64 and SmallFloat64 now.

- Bert -