Memory and performance-problems in Object Lens


Memory and performance-problems in Object Lens

Mülheims, Klaus

Hello,

The ObjectLens registers its persistent objects in a special dictionary class, LensWeakRegistry. This dictionary grows when it has to cache many objects, but when the cache empties because unreferenced objects are garbage collected, the method finalizeElements only rehashes the dictionary; it never shrinks it. This can lead to memory and performance problems.

If a program loads many persistent objects into the image, e.g. "give me all objects in the table", these objects fill the registry up to, let's say, 700,000 entries. When the objects are garbage collected, the dictionary doesn't shrink. A dictionary of this size is a waste of memory and we think accessing this very big empty dictionary is not as performant as a smaller one due to inefficient hashing algorithms.

We changed the finalizeElements method so that it now uses #trim instead of #rehash. This change lets the dictionary shrink as expected.
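
In other words, the only edited line is the final one of LensWeakRegistry>>finalizeElements (sketch only; the existing scan that removes the garbage-collected entries is untouched and merely hinted at by the comment):

finalizeElements
        "Sketch of our change. The scan that removes garbage-collected
        entries stays exactly as it is and is not shown here."
        "... remove garbage-collected entries as before ..."
        self trim       "previously: self rehash"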

Question: Is it a good idea to do so?

Klaus Mülheims

Collogia AG
Ubierring 11
 
50678 Köln
Germany
+49 221 336080
http://www.collogia.de


This message is confidential. If you are not the intended recipient, please notify the sender. Unauthorized copying or forwarding is not permitted. Because e-mails can be manipulated, we accept no liability for their content.


AW: Memory and performance-problems in Object Lens

Georg Heeg
Dear Klaus Mülheims,

You write: "A dictionary of this size is a waste of memory and we think accessing this very big empty dictionary is not as performant as a smaller one due to inefficient hashing algorithms."

Whether it is a good idea to shrink a dictionary cannot be answered without analysing the concrete application. This holds both for memory consumption and for speed.

Memory consumption: If the large set of objects is used only once in a long-running application, it is a good idea to free the memory for other objects. If large sets of objects are used over and over again, it is a bad idea, because shrinking and growing the dictionary over and over again fragments the memory and thus leads to higher memory consumption.

Time usage: Dictionaries use hash methods to access entries. It depends on the concrete hash function whether it works better with small dictionaries (risk of collisions) or with large dictionaries (risk that the hash method does not spread widely enough).

Thus I recommend measuring your application using ATProfiling.

Georg Heeg


RE: Memory and performance-problems in Object Lens

Terry Raymond
Considering that WeakDictionary uses rehash to remove nilled
elements I think it would be more efficient if the rehash
could be done within the current dictionary instead of
creating a new one.

Terry
 
===========================================================
Terry Raymond       Smalltalk Professional Debug Package
Crafted Smalltalk
80 Lazywood Ln.
Tiverton, RI  02878
(401) 624-4517      [hidden email]
<http://www.craftedsmalltalk.com>
===========================================================



Re: Memory and performance-problems in Object Lens

Andres Valloud
In reply to this post by Mülheims, Klaus
Hello Mülheims,


Well... a 700k-element dictionary should occupy less than 10 MB of memory. Considering you are loading 700k objects into the image, is 10 MB a concern worth taking care of?
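
As a rough back-of-envelope check (assuming a 32-bit image and a table capacity of about one million slots each for the key array and the value array):

        1000000 * 2 * 4 / (1024 * 1024.0)       "two pointer arrays, 4 bytes per slot -> about 7.6 MB"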

Note that once the dictionary has grown, even after it becomes empty it won't have to regrow the next time you load 700k objects (hint: growth is very expensive).

If hashing is inefficient, can you tell where the inefficiency is?

--
Best regards,
 Andres                            mailto:[hidden email]


Pollock and new tools session at Smalltalk Solutions?

Carl Gundel
Hey guys,

I don't see any mention of any Pollock-related material on the conference sessions list. I would really appreciate an update from Sam and Vassili (I
hope they're going) on where Pollock is at and where it's headed, and also
how the new VW tools are coming along.  Also, could Sam bring some more of
those matches?  ;-)

If there won't be an official session, a BOF would suffice.

Thanks,

-Carl Gundel, author of Liberty BASIC
http://www.libertybasic.com



Re: Memory and performance-problems in Object Lens

Joachim Geidel
In reply to this post by Terry Raymond
Georg Heeg wrote:

>>You write: "A dictionary of this size is a waste of memory and we think accessing this very big empty dictionary is not as performant as a smaller one due to inefficient hashing algorithms."
>>
>>Whether it is a good idea to shrink a dictionary cannot be answered without analysing the concrete application. This holds both for memory consumption and for speed.
>>
>>Memory consumption: If the large set of objects is used only once in a long-running application, it is a good idea to free the memory for other objects. If large sets of objects are used over and over again, it is a bad idea, because shrinking and growing the dictionary over and over again fragments the memory and thus leads to higher memory consumption.

Using #trim instead of #rehash doesn't change much about memory consumption in this special case. #rehash copies the whole registry without changing its size, and that's even worse than shrinking it. The drawback of trimming is that the registry has to be grown again when new objects are loaded. One could use #trim only when the registry size will be reduced by a significant amount, and #rehash otherwise. The cost of growth could also be addressed by manually growing the capacity of the registry before loading a large number of objects from the database, instead of relying on the default growth strategy. Of course, one has to know how many objects will be loaded.

Klaus Mülheims wrote:
M> A dictionary of
M> this size is a waste of memory and we think accessing this very big
M> empty dictionary is not as performant as a smaller one due to
M> inefficient hashing algorithms.
Georg Heeg wrote:
>>Time usage: Dictionaries use hash methods to access entries. It depends on the concrete hash function whether it works better with small dictionaries (risk of collisions) or with large dictionaries (risk that the hash method does not spread widely enough).

The LensObjectRegistry has arbitrary objects as keys and uses their
identityHashes in its hash table. identityHashes have an upper bound
of "ObjectMemory maximumIdentityHashValue" = 16383 (in VW 7.4).
Dictionary lookup performance degrades quickly when the Dictionaries are
large, despite the attempt at correcting the problem in
Set>>initialIndexFor:boundedBy:. The capacity of the hash table is not
important for lookup performance, only the number of actual elements in
the Set / Dictionary is (due to collisions in the hash table). We are
using special Set and Dictionary classes for very large collections,
which override this method with a more efficient implementation. I'd
love to post the code, but it's our customer's property, so I can't.
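
The general shape of such an override, purely as an illustration (this is not that code, and the multiplier is an arbitrary large odd constant):

initialIndexFor: aHashValue boundedBy: length
        "Illustrative sketch: spread the 14-bit identity hash over the whole
        table instead of clustering every lookup in the first 16384 slots."
        ^aHashValue * 16807 \\ length + 1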

However, if I understand Klaus Mülheims right, the main problem seems to be the #rehash after garbage collection. LensWeakRegistry>>finalizeElements does a linear scan of the complete key array. It seems to be implemented efficiently, using a primitive for finding the next element, but in the case described it will iterate over a million slots every time one of the registry's elements is garbage collected. Shrinking the registry should be a good idea if it really changes the size by a sufficiently large amount, but a bad one otherwise. See below.

Terry Raymond wrote:
> Considering that WeakDictionary uses rehash to remove nilled
> elements I think it would be more efficient if the rehash
> could be done within the current dictionary instead of
> creating a new one.

I agree. The current implementation does a rehash (with a second iteration over the key array) as the final step of every finalization, even if only one element was removed and none of the other elements would actually be moved around in the hash table. As there is a key array in addition to the value array, this means allocating 5-10 MB for a 700,000-element collection, computing 700,000 identity hashes, collision detection for many thousands of elements, etc.

Terry's suggestion would mean looking up which elements have to be moved after finalization, and moving only those into the now empty slots. There is already a method #removeAndCleanUpAtIndex: which does this for one removed element, and which is used in #removeKey:ifAbsent:. It should be possible to adapt this for removing multiple elements, but it won't be easy, and it might be too expensive when too many elements have been finalized and/or the number of collisions in the hash table is too high. You could do a #trim when the registry size changes such that it is less than half of its capacity (i.e. leaving room for loading new objects without growing), and an in-place cleanup otherwise - but that's only a starting point, which would need thorough performance testing to find the right point for switching strategies.
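
An untested sketch of that switching strategy (removeDeadEntries and cleanUpInPlace are placeholders for the existing scan and for the in-place repair, not existing methods, and basicSize stands in for the capacity of the table):

finalizeElements
        "Sketch: shrink when the table has become less than half full,
        otherwise repair it in place."
        self removeDeadEntries.
        self size < (self basicSize // 2)
                ifTrue: [self trim]
                ifFalse: [self cleanUpInPlace]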

In the situation described by Klaus Mülheims, there may be another strategy worth looking at: if the usage pattern is "load a large list, do something with it, forget everything", then you could actively remove the unneeded objects from the registry and force it to shrink, instead of waiting for the elements to be garbage collected. This should cause only one pass through the registry at a known location in the control flow of the application, instead of one or more uncontrollable finalizations later. This might also be better from an end user's point of view.

Best regards,
Joachim Geidel


RE: Memory and performance-problems in Object Lens

Terry Raymond
Here is the "untested" algorithm I had in mind. It looks to me as if, for a large sparse dictionary, it would perform as well as a rehash, and for a small full dictionary it would be faster than a rehash. This is because it skips adjacent elements that have the same hash.

rehashAfterFinalize
        | length idx |
        length := idx := self basicSize.
        [idx > 0]
                whileTrue:
                        [| ele hashIdx newLoc |
                        ele := self basicAt: idx.
                        hashIdx := self initialIndexFor: ele identityHash boundedBy: length.
                        hashIdx = idx
                                ifFalse:
                                        [newLoc := self findLocationForHashIndex: hashIdx andKey: ele boundedBy: length.
                                        newLoc = idx
                                                ifTrue: "every slot between idx and hashIdx is used"
                                                        [idx > hashIdx
                                                                ifTrue: [idx := hashIdx]
                                                                ifFalse: [idx := 1]. "We wrapped around, all done"]
                                                ifFalse: [self swap: idx with: newLoc]].
                        idx := idx - 1].

findLocationForHashIndex: index andKey: key boundedBy: length
        | probe |
        [(probe := self basicAt: index) == nil or: [probe == key]]
                whileFalse:
                        [(index := index + 1) > length
                                ifTrue: [index := 1]].
        ^index

Terry
 
===========================================================
Terry Raymond       Smalltalk Professional Debug Package
Crafted Smalltalk
80 Lazywood Ln.
Tiverton, RI  02878
(401) 624-4517      [hidden email]
<http://www.craftedsmalltalk.com>
===========================================================



Re: Memory and performance-problems in Object Lens

Joachim Geidel
Terry Raymond wrote:
> Here is the "untested" algorithm I had in mind. It looks to me as if, for a large sparse dictionary, it would perform as well as a rehash, and for a small full dictionary it would be faster than a rehash. This is because it skips adjacent elements that have the same hash.

I think that the algorithm has a flaw. Please check the following example. Maybe I'm wrong; it's possible that I made a mistake somewhere.

Let's assume that the hash table is populated like this before garbage
collection:

1   2   3   4   5   6
b1  b2  a1  a2  b3  a3

Let's further assume that all "a" elements have initial index 3, and all
"b" elements have initial index 1. Of course, a real hash table would
not be 100% full, but it's sufficient that a pattern like this occurs
somewhere in the hash table.

Now let b2 be garbage collected, which leads to the following after
finalization and before rehashing:

1   2   3   4   5   6
b1  nil a1  a2  b3  a3

The algorithm starts at slot 6, finding a3 with initial index 3. It will
try to find an empty slot between 3 and 5, finding them all used.
Therefore, idx will be set to 2 at the end of the loop, thus skipping
slot 5 which contains b3. b3 should be moved to slot 2, but isn't.

I still think that it will be difficult to get the details of the algorithm right without worsening the algorithmic complexity. Actually, the problem is not large sparse or small dense dictionaries, but large dense ones, which have lots of collisions when the hash function is based on identity hashes. With such a dictionary, an attempt to repair the hash table after finalization may have quadratic worst-case complexity.

Even if the algorithm works, finalization leads to two enumerations of the complete hash table, one for removing dead elements and one for rehashing. It would be good if these two passes could be merged into one. On the other hand, a decision whether it's worthwhile to try to shrink the table can only be made once you know how many elements have been garbage collected.

It would help if one could use a hash function with a larger spread than identityHash, thus avoiding collisions and improving performance in the first place. Maybe using a different kind of hash table, e.g. with collision buckets instead of open addressing with linear probing, similar to LensLinkedDictionary, would help. Collision buckets should be easier to clean up than an out-of-order hash table. If the number of collisions is not too high, one could also skip rehashing and modify the hash table lookup such that it continues searching when it encounters a 0. Rehashing could then be postponed until a certain percentage of the slots is filled with 0.
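
Roughly, the probe loop would then treat a 0 left behind by finalization like any occupied slot and keep searching (sketch only; the selector is made up, and it assumes finalized slots hold 0 while never-used slots hold nil):

probeIndexFor: key
        "Tombstone-aware probe: step over 0 (finalized), stop only at a nil
        slot (never used) or at the key itself."
        | length index probe |
        length := keyArray size.
        index := self initialIndexFor: key identityHash boundedBy: length.
        [(probe := keyArray at: index) == nil or: [probe == key]]
                whileFalse:
                        [(index := index + 1) > length ifTrue: [index := 1]].
        ^index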

OTOH, before investing lots of time in low-level optimizations, it might
be better to check if the design of the application can be changed such
that it does not load such large numbers of objects. If that does not
help, one could check if it's really necessary to put all the objects in
the Lens object registry. If the Lens just loads them and never touches
them again (for updating, queries etc.), it's not necessary to register
them. Again, how to do this is a question of design. It could be done by
redesigning the application and the data model, or by adding support at
the Lens level.

BTW, there is an unnecessary waste of performance and memory in
LensWeakRegistry>>setTally. It calls "super setTally". Therefore the
instance variable keyArray will be filled with an Array, which is then
immediately replaced by a WeakArray. For a large Array, this is costly.
It should be
        tally := 0.
instead of
        super setTally.

Best regards,
Joachim Geidel



RE: Memory and performance-problems in Object Lens

Terry Raymond
Joachim

I agree with your analysis of the algorithm, and I also agree with your suggestion of using a different type of dictionary. As for redesigning the application, though, I don't think that should be necessary, because VW should still handle this case properly without too much overhead.

It has been stated here before that dictionary performance
in VW is lacking and should be improved.

Terry
 
===========================================================
Terry Raymond       Smalltalk Professional Debug Package
Crafted Smalltalk
80 Lazywood Ln.
Tiverton, RI  02878
(401) 624-4517      [hidden email]
<http://www.craftedsmalltalk.com>
===========================================================



RE: Memory and performance-problems in Object Lens

Steven Kelly
In reply to this post by Mülheims, Klaus
From: Joachim Geidel [mailto:[hidden email]]
> It would help if one could use a hash function with a larger
> spread than identityHash, thus avoiding collisions and
> improving performance in the first place.

In the 64-bit versions of VW, the identity hash value is increased from
14 bits to 20 bits: 16,384 to 1,048,576.
http://www.cincomsmalltalk.com/blog/blogView?entry=3281185978
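
To put the collision pressure in perspective for a registry of the size discussed in this thread (simple workspace arithmetic):

        700000 / 16384.0.       "about 43 keys per identity hash value with 14 bits"
        700000 / 1048576.0      "about 0.7 with 20 bits"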

The older hash discussion in the UIUC Wiki is also interesting:
http://wiki.cs.uiuc.edu/VisualWorks/Hashing+methods+of+VW

Steve


AW: Memory and performance-problems in Object Lens

Mülheims, Klaus
In reply to this post by Mülheims, Klaus
Hi to all,

Thanks for the very interesting discussion of my problem. I think I understand it better than before.
I think the trim solution is not as bad as it looks for our applications. I have no ambition to solve one of the many problems the Lens still has, so the solution should be easy and very low-risk.
Every call of finalizeElements rehashes the dictionary anyway, and trim does the same while also adjusting the size of the registry. Our scenario is to load a big number of objects, work on them, forget them, and then do something else for a long time without needing such a big registry. Therefore the "keep the registry small" strategy is simple and does what we want.

Thanks to all

Klaus Mülheims


Collogia AG
Ubierring 11
 
50678 Köln
Germany
+49 221 336080
http://www.collogia.de



This message is confidential. If you are not the intended recipient, please notify the sender. Unauthorized copying or forwarding is not permitted. Because e-mails can be manipulated, we accept no liability for their content.

Re: Pollock and new tools session at Smalltalk Solutions?

Alan Knight-2
In reply to this post by Carl Gundel
There is not an official session. I believe both Sam and Vassili will be attending, though, and I think it's likely that there will be a BOF or something similar. Fortunately, organizing that is someone else's problem.


--
Alan Knight [|], Cincom Smalltalk Development
[hidden email]
[hidden email]
http://www.cincom.com/smalltalk

"The Static Typing Philosophy: Make it fast. Make it right. Make it run." - Niall Ross