Squeak Foundation Board 2007 Candidates


Re: election details *PLEASE READ*

Bryce Kampjes
karl writes:

 > Another issue that will come up in the near future is multi core
 > processors. What must be done with Squeak to utilize all the processors
 > and what are the benefits and drawback of the  different approaches ?
 > What can  the board do to help in this effort ?

The only gain from a multithreaded VM is speed. It seems sensible
to aim to compete with C for scalar performance before moving to
parallelization.

That said, the biggest difficulty with moving the VM to multithreaded
execution is the garbage collector and write barrier. Replacing the
write barrier with a card-marking scheme designed for fast scalar
performance could easily be combined with adding a mostly parallel
old-space collector. A mostly parallel old-space collector would
eliminate pauses longer than a few milliseconds, which would
be nice for multimedia-type soft real-time systems. With a mostly
parallel collector in place, adding multiple parallel mutators (threads)
should be relatively easy, but it will expose exciting concurrency bugs
in the image.
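
For readers who haven't met card marking: instead of remembering every individual store, the heap is divided into fixed-size "cards" and the barrier merely flags the card containing the updated object; the collector later rescans only the dirty cards. A rough Slang-style sketch of the barrier itself - purely illustrative, with made-up names (cardTable, CardShift, startOfOldSpace), not the actual Squeak ObjectMemory code:

    noteStoreInto: oop
        "Mark the card covering oop as dirty so the next old-space
         scan revisits it. Illustrative only."
        | cardIndex |
        cardIndex := (oop - startOfOldSpace) bitShift: CardShift negated.
        cardTable at: cardIndex + 1 put: 1

The attraction for scalar code is that the store check is a couple of arithmetic operations and one table write, with no remembered set to grow on the fast path.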

For practical use running multiple images is probably the way to go if
you want to scale up. Swarms of Spoons.  SMP machines don't scale that
far. Non-symmetric machines scale further. Grids scale
furthest. Running a GCed system on hardware where you care about how
close the memory is to the CPU you're on is going to create
interesting performance characteristics.

That said, I think we should go multithreaded, but there are other things
we should do first. Some of those things will provide most of the
plumbing needed to go multithreaded.

Bryce


Re: election details *PLEASE READ*

johnmci

On Feb 22, 2007, at 3:21 PM, <[hidden email]> wrote:

> karl writes:
>
>> Another issue that will come up in the near future is multi core
>> processors. What must be done with Squeak to utilize all the  
>> processors
>> and what are the benefits and drawback of the  different approaches ?
>> What can  the board do to help in this effort ?
>
> That said, the biggest difficulty with moving the VM to multithreaded
> execution is the garbage collector and write barrier.

In thinking about that in the past, I've noticed there are some interesting
places in the Smalltalk code where we don't consider that a VM could be
executing the same logic twice when referring to a class variable and the
like. Right now this code is "safe" because the VM only switches processes
at known times: we do a, b, and c, and it's "safe" because a process switch
doesn't happen, and since there aren't multiple concurrent threads we don't
have an issue. Process switches only happen at certain times, mmm, and the
list is where?

But if you ran multiple threads, that same execution path could fail if one
process does a and b while the other attempts a, but c on the other thread
was required to finish and fix up 'a'.

In general the Smalltalk code is not truly fine-grained thread safe.
A fun task I'm sure...
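
A concrete (made-up) illustration of the kind of code John is describing - lazy initialization of a class variable, written the way much image code is written today:

    SomeRegistry class >> default
        "Effectively safe today: same-priority Squeak processes are not
         preemptively timesliced, so nothing runs between the test and the
         assignment. With two truly parallel threads, both could see nil
         and both install a fresh instance."
        Default isNil ifTrue: [Default := self new].
        ^ Default

Under a multithreaded VM every idiom like this would need a lock, an atomic check-and-set, or a redesign.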

--
===========================================================================
John M. McIntosh <[hidden email]>
Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
===========================================================================




Re: election details *PLEASE READ*

J J-6
In reply to this post by Karl-19
>From: karl <[hidden email]>
>Reply-To: The general-purpose Squeak developers
>list<[hidden email]>
>To: The general-purpose Squeak developers
>list<[hidden email]>
>Subject: Re: election details *PLEASE READ*
>Date: Thu, 22 Feb 2007 22:44:07 +0100
>
>>I think we should try to take things from other languages that make sense
>>for smalltalk, but I don't think there will be a time when one language
>>(even smalltalk!) will be perfect for every task.  We will still need at
>>least Haskell. :)
>Another issue that will come up in the near future is multi core
>processors. What must be done with Squeak to utilize all the processors and
>what are the benefits and drawback of the  different approaches ? What can  
>the board do to help in this effort ?

Afaik there are 3 ways of handling true concurrent execution (i.e. not green
threads):

1) Fine-grained locking/shared thread state:

The old way of running heavyweight threads, sharing memory across threads
and using some kind of locking to protect against race conditions.

Positive:  Hrm, well I guess there is the most support for this, since it is
probably the most common.  If you don't use any locking but only read the
shared data, this is a very fast approach.

Negative: It simply doesn't scale well.  It also doesn't compose well.  You
can't simply put two independently created pieces of code together that use
locking and expect it to work.  Stated another way, fine-grained locking is
the manual memory management of concurrency methodologies [1].  If any part
of your code is doing fine-grain locking, you can never "just use it"
somewhere else.  You have to dig deep down in every method to make sure you
aren't going to cause a deadlock.

This one would probably be very hard to add to Squeak based on what John
said.

2) Shared state, transactional memory:

Think relational database.  You stick a series of statements in an "atomic"
block and the system does what it has to do to make it appear as if the memory
changes occurred atomically.

Positive:  This approach affords some composability.  You still should know
if the methods you're calling are going to operate on memory, but in the case
that you put two pieces of code together that you know will, you can just
slap an atomic block around them and it works.  The system can also ensure
that nested atomic blocks work as expected to further aid composability.
This approach can often require very few changes to existing code to make it
thread safe.  And finally, you can still have all (most?) of the benefits of
thread-shared memory without having to give up so much abstraction (i.e.
work at such a low level).

Negative:  To continue the above analogy, I consider this one the "reference
counted memory management" of options.  That is, it works as expected, but
can end up taking more resources and time in the end.  My concern with this
approach is that it still needs some knowledge of what the code you are
calling does at a lower level.  And most people aren't going to want to
worry about it, so they will just stick "atomic" everywhere.  That probably
won't hurt anything, but it forces the system to keep track of a lot more
things than it should, and this bookkeeping is not free.

This one would also require some (probably very big) VM changes to support
and could be tough to get right.
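
To make the composability point concrete, here is roughly what such an atomic block could look like. This is a purely hypothetical API sketch - there is no Transaction class or #atomic: in the stock image - and the interesting property is only that independently written atomic blocks can be nested:

    "hypothetical STM API, for illustration only"
    Transaction atomic: [
        checking withdraw: 10.
        savings deposit: 10].

    transferEverything
        "Composes two operations that are each atomic on their own;
         the system is expected to flatten the nested transactions."
        Transaction atomic: [
            self moveChecking.
            self moveSavings]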

3) Share nothing message passing:

Basically, no threads, only independent processes that send messages between
each other to get work done.

Positive:  This approach also allows a high level of composability.  If you
get new requirements, you typically add new processes to deal with them.  
And at last, you don't have to think about what the other "guy" is going to
do.  A system designed in this manner is very scalable; in Erlang for
example, a message send doesn't have to worry if it is sending to a local
process or a totally different computer.  A message send is a message send.  
There is no locking at all in this system, so no process is sleeping waiting
for some other process to get finished with a resource it wants (low level
concerns).  Instead a process will block waiting for another process to give
him work (high level concerns).

Negative:  This requires a new way of architecting in the places that use
it.  What we are used to is: call a function and wait for an answer.  An
approach like this works best if your message senders never care about
answers.  The "main loop" sends out work, the consumers consume it and
generate output that they send to other consumers (i.e. not the main loop).
In some cases, what we would normally do in a method is done in a whole
other process.  Code that uses this in Smalltalk will also have to take
care, as we *do* have state that could leak to local processes.  We would
either have to make a big change to how #fork and co. work today to ensure no
information can be shared, or we would have to take care in our coding that
we don't make changes to data that might be shared.

I think this one would be, by far, the easiest to add to Squeak (unless we
have to change #fork and co, of course).  I think the same code that writes
out objects to a file could be used to serialize them over the network.  The
system/package/whatever can check the recipient of a message send to decide
if it is a local call that doesn't need to be serialized or not.
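
Within a single image you can already play with this style today using green threads. The sketch below is ordinary Squeak (SharedQueue, #fork); a real share-nothing design would replace the queue with a serialized stream between images, but the shape of the code is the same:

    | mailbox worker |
    mailbox := SharedQueue new.
    worker := [
        [| msg |
         msg := mailbox next.    "blocks until a message arrives"
         Transcript show: 'got ', msg printString; cr] repeat] fork.
    mailbox nextPut: 42.
    mailbox nextPut: 'hello'.
    "when finished:  worker terminate"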

[1] The big win usually cited for GCs is something to the effect of "well,
people forget to clean up after themselves and this frees up their time by
not making them".  But really, the big win was composability.  In any
GC-less system, it is always a nightmare of who has responsibility for
deleting what, when.  You can't just use a new vendor API; you have to know:
does it clean up after itself, do I have to do it, is there some API I call?
With a GC you just forget about it, use the API and everything works.




Re: election details *PLEASE READ*

J J-6
Oh, I forgot to mention, Haskell is also experimenting with a system where
the language environment figures out the dependencies between functions and
automatically moves the portions it can into separate threads.  I don't
really see this as viable for Smalltalk (and I could of course be wrong in
this), so I didn't include it, but I did intend to mention it. :)


> [previous message quoted in full; snipped]



Re: election details *PLEASE READ*

Andreas.Raab
In reply to this post by Adrian Lienhard
Adrian Lienhard wrote:
> I don't think you need to understand its internal details to be able to
> build tools. You do not need to enhance kernel but use the interface
> that is there already. E.g., Trait>>#users will return you all classes
> and traits that use a trait. Its just that this kind of information is
> lacking in the UI.

That's certainly part of it, but not all of it. For anything that is
truly useful you will need to understand a lot about the implementation.
Consider a simple example, like removing a method. If you want to build
a useful tool, you need to know whether the method you remove originates
from the class or from a trait and, if so, which one. As far as I know
there is no method directly answering that question, so you need to
know the implementation of traits to write that method. Etc.

Often you need to understand the implementation to know how and
where a tool is allowed to temporarily violate (or extend) the
constraints. The browser, for example, pre-parses class definitions and
checks whether actually executing them could cause harm (like changing a
class definition not currently visible to the user). If the browser
didn't do that it would be a good deal less usable. But for that you
need to know how classes interact and what can go wrong if you execute the
class definition without pre-parsing it.

So I think you really do need to understand the implementation to write
a useful tool. It is possible to get to a certain point without that, but
usually these points are trivial and not very useful. Case in point:
Traits>>users. It is no problem for me to change the class browser to
show that bit of information. I have done it. But that is not enough by far.

>> I've been poking around in the traits implementation myself (fairly
>> well documented in [1], and [2]) and although I have a very good
>> understanding about the metaclass relationships in Squeak < 3.9 I
>> found the traits implementation basically impenetrable. If I look at
>> who implements a method and get ten implementors thrown at me where
>> there used to be one or two, it's just not helpful. I stopped digging
>> into it for that reason - the traits class kernel has become
>> completely inaccessible to me.
>
> Just to make that clear. This contradicts to your mail [3]. Since [2]
> the traits structure of the kernel was simplified. You even looked at
> this and you said in [3] that it indeed helps you to understand how this
> all works. I do not want to start a new endless discussion.

I think you misunderstand that message. I didn't say (and I didn't mean
to say) that suddenly traits became all clear and simple and obvious ;-)
I said (and I meant to say) that I now understand some aspects better
(like the dual hierarchical structure), which led me to conclude that it
also makes me understand a lot better what I dislike about traits.

But really, I haven't grokked the implementation by a long shot. To give
you an example, here is a simple question: Where do I find the
addSelector:* and removeSelector:* family of methods, and why? I can give
you the answer for 3.8 (without looking at the code): Those methods must be
in Behavior with a few overrides in ClassDescription (where they
manipulate the organization). They can't be higher in the hierarchy
since they affect behaviors, and they can't be lower since they should be
shared between classes and metaclasses. So without looking at the code
I can deduce where these methods must be found. In 3.9 I can't give you
that answer - I have no idea why addSelector:withMethod:notifying: is in
one set of traits and addSelector:withMethod: is in another set of them,
or what the organizing principle behind it is.

Anyway, you are right, we had that discussion already. All I'm saying is
that if you read that message of mine as saying "oh, now I got it" then
you read it wrong.

Cheers,
   - Andreas


Re: election details *PLEASE READ*

Andreas.Raab
In reply to this post by J J-6
J J wrote:
> Namespaces ok, but the way they have been done is not always that
> great.  Modules are pretty much the same thing, and interfaces?  No.  
> Traits are much better then interfaces imo.

Modules is one area where IMO Java shines. Not necessarily because of
the language part but because of the whole ClassLoader/JAR/Applet
pipeline (just run Dan's SqueakOnJava in the browser and then think
about what must be true to be able to handle that in a reasonably secure
fashion). It's the best deployment solution I am aware of. In the
namespace area I think Python wins hands-down, because they manage to
tie namespaces (and modules) into the object model so nicely. Traits
better than Interfaces? Well that remains to be seen.

Cheers,
   - Andreas


Re: Future of smalltalk (was Re: election details *PLEASE READ*)

Andreas.Raab
In reply to this post by J J-6
J J wrote:
> Touches on a bit?  Java interfaces are nothing more then c++ base
> classes that have virtual methods that deriving classes must implement.  
> Traits is already better then this by at least allowing you to specify
> what the default code implementation is.

So you have used traits? In which project? Can I see the code? I've been
constantly on the lookout for good examples of traits use but so far I
have only found a few toy academic projects that look beautiful (and go
and scale nowhere) and one realistic real-world use (the traits
implementation itself) which to me is a bunch of spaghetti code and
where I am very curious how maintenance will work out over the next years.

Cheers,
   - Andreas


Re: election details *PLEASE READ*

Milan Zimmermann-2
In reply to this post by Karl-19
On 2007 February 22 16:44, karl wrote:

> J J skrev:
> >> From: Roel Wuyts <[hidden email]>
> >> Reply-To: The general-purpose Squeak developers
> >> list<[hidden email]>
> >> To: The general-purpose Squeak developers
> >> list<[hidden email]>
> >> Subject: Re: election details *PLEASE READ*
> >> Date: Wed, 21 Feb 2007 22:52:45 +0100
> >>
> >> I'll drink to that. Cheers Andreas.
> >>
> >> It's fun that especially a very open and reflective language like
> >> Smalltalk actually is not extended very much (or only within small
> >> research projects not taken up by the community). Where are the
> >> macro  systems ? Variable length argument lists ? Nifty versioning
> >> and  packaging systems ? Monads ? Usable typing systems ? etc. etc.
> >
> > But before we decide a feature is missing from Squeak because someone
> > else has it, we must first think *why* some other place has it.  For
> > an extreme example, C has pointers and Squeak doesn't.  Does anyone
> > think Squeak needs pointers?
> >
> > Likewise, Haskell has Monads.  But that is because there is no other
> > language supported way to represent state change.  It is a very nice
> > language, but for Monads to work right, I think you kind of need the
> > whole thing (i.e. partial application, type checking, the things that
> > make Haskell Haskell).
> >
> > So far, any time I would have reached for variable length argument
> > lists in other languages, I used meta-programming in Smalltalk.  And
> > Macro systems?  It can be put in easily, and I have code that
> > generates code, but due to the nature of Smalltalk, we can add
> > language constructs without having to resort to macros as Lisp does.
> >
> > Having said all that, I think Squeak could use more formal Lazy
> > evaluation support, and I published some classes for it.  And the
> > package system is a known open issue.  I personally believe change
> > sets could be made more advanced to help out in a lot of areas (for
> > one, to help with documenting that elusive "why").
> >
> > I think we should try to take things from other languages that make
> > sense for smalltalk, but I don't think there will be a time when one
> > language (even smalltalk!) will be perfect for every task.  We will
> > still need at least Haskell. :)
>
> Another issue that will come up in the near future is multi core
> processors. What must be done with Squeak to utilize all the processors
> and what are the benefits and drawback of the  different approaches ?

I agree this is an important point. I was in fact thinking of adding a question
about it to the list the election team is organizing; you asked it first
(below) :)

Milan
> What can  the board do to help in this effort ?
>
> Karl


Re: election details *PLEASE READ*

Andreas.Raab
In reply to this post by J J-6
Nice summary. The only thing I have to add is that your approach (3) is
how Croquet works. We currently don't do it inside a single image but we
have been talking about it and at some point we will get around to it.

Cheers,
   - Andreas

J J wrote:

>> [J J's message on the three approaches to concurrency, quoted in full; snipped]



Traits vs interfaces and a trivial idea (was Re: Future of smalltalk (was Re: election details *PLEASE READ*))

Göran Krampe
In reply to this post by J J-6
Hi!

>>From: Göran Krampe <[hidden email]>
>>- Interfaces. Yes, might be neat to have. Traits touch on this a bit and
>>we also have SmallInterfaces (which I never have looked at). I really
>>don't know if it would hurt more than it would help.
>
> Touches on a bit?  Java interfaces are nothing more then c++ base classes
> that have virtual methods that deriving classes must implement.  Traits is
> already better then this by at least allowing you to specify what the
> default code implementation is.

AFAIK Traits are not intended as a "specification of protocol" that you can
check against (at runtime or, in some interesting way, at compile time).
Sure, they *are* made up of a bunch of methods - and sure - we could
probably (mis)use them as interfaces/protocol-specifications - but... I
couldn't really say at this point because alas, I haven't used them yet.
:)

Ok, let me take the opportunity to flesh out a trivial idea:

As most people know, a Java interface is a "named bunch of messages" that a
class (or its superclasses, of course) can declare it implements. But this
is early binding, right?

I would be more interested in "late" binding where I could do:

   someThingy respondsToProtocol: aProtocol

...where aProtocol is more or less just a bunch of selectors. The main
difference is of course that the class (or any of its superclasses) of
someThingy doesn't need to *declare* that it implements aProtocol - it
just has to actually do it. :)

This means in practice that the code using someThingy (written later by
developer X) can declare what messages it actually intends to send to
someThingy instead of the other way around. IMHO this decouples the
writing of someThingy from the code using it *in time*.
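
A minimal sketch of what that could look like, assuming a protocol is nothing more than a collection of selectors (illustrative only - respondsToProtocol: is not an existing Squeak method):

   Object >> respondsToProtocol: aCollectionOfSelectors
       "Answer whether the receiver understands every selector in the protocol."
       aCollectionOfSelectors do: [:sel |
           (self respondsTo: sel) ifFalse: [^ false]].
       ^ true

   "e.g.   3 respondsToProtocol: #(printOn: negated between:and:)   -> true"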

Now... one place where this might be useful is in unit tests. Or hey, I
dunno, just felt like a Smalltalkish way of using "interfaces". :)

regards, Göran

PS. Btw, anyone recall the syntactic changes that were introduced in one of
the older VWs? Perhaps around version 2.5 or so. The Compiler could
actually parse parameter type checks using some <...>-syntax IIRC, but
when I looked closer at the code it didn't actually do anything with it
yet. Eh... have no idea why this popped up in my head right now. ;)



Re: Traits vs interfaces and a trivial idea (was Re: Future of smalltalk (was Re: election details *PLEASE READ*))

Andreas.Raab
Göran Krampe wrote:
> I would be more interested in "late" binding where I could do:
>
>    someThingy respondsToProtocol: aProtocol
>
> ...where aProtocol is more or less just a bunch of selectors. The main
> difference is of course that the class (or any of its superclasses) of
> someThingy doesn't need to *declare* that it implements aProtocol - it
> just has to actually do it. :)

Funny you should mention this. I had the same idea a while ago, but the
trouble is that you want a really, REALLY fast check (as fast as isFoo,
effectively, so that you can say "true isA: Boolean" and have that be the
speedy equivalent of "true isBoolean"), and to do this you need some way
of caching the result effectively (and invalidating it as the class
changes). Alas, I could never come up with a scheme that was as fast as
I needed it to be (if you have any ideas, I'm all ears).
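
For what it's worth, a purely image-side cache along these lines illustrates the invalidation problem without solving the speed one (ProtocolCache is an assumed class-side IdentityDictionary that would have to be flushed whenever a class or any superclass gains or loses a method, and the lookup is still nowhere near an isFoo send):

   Behavior >> respondsToProtocol: aProtocol
       "Illustration only: cache the answer per (class, protocol) pair."
       ^ (ProtocolCache at: self ifAbsentPut: [IdentityDictionary new])
           at: aProtocol
           ifAbsentPut: [aProtocol inject: true
               into: [:ok :sel | ok and: [self canUnderstand: sel]]]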

Cheers,
   - Andreas


Re: Traits vs interfaces and a trivial idea (was Re: Future of smalltalk (was Re: election details *PLEASE READ*))

Klaus D. Witzel
Hi Andreas,
on Fri, 23 Feb 2007 08:52:52 +0100, you wrote:

> Göran Krampe wrote:
>> I would be more interested in "late" binding where I could do:
>>     someThingy respondsToProtocol: aProtocol
>>  ...where aProtocol is more or less just a bunch of selectors. The main
>> difference is of course that the class (or any of its superclasses) of
>> someThingy doesn't need to *declare* that it implements aProtocol - it
>> just has to actually do it. :)
>
> Funny you should mention this. I had the same idea a while ago but the  
> trouble is that you want a really, REALLY fast check (as fast as isFoo  
> effectively so that you can say: "true isA: Boolean" and be that the  
> speedy equivalent of "true isBoolean") and to do this you need some way  
> of caching the result effectively (and invalidate it as the class  
> changes). Alas, I could never come up with a scheme that was as fast as  
> I needed it to be (if you have any ideas, I'm all ears).

Here's the scheme that is as fast as I needed it to be:

during method lookup, treat aProtocol as the object you want to cache.

The rest follows immediately.

/Klaus

> Cheers,
>    - Andreas
>
>




Concurrent Squeak (was Re: election details *PLEASE READ*)

David T. Lewis
In reply to this post by Andreas.Raab
You can do approach (3) with an ordinary Squeak image if you are
using a unix platform (including OS X, I think) with OSProcess. Use
#forkHeadlessSqueakAndDo: to start the "threads", and connect them
with OSPipes. The endpoints of an OSPipe are FileStreams, so you
can read and write serialized objects between the images. The
#forkSqueak turns out to be surprisingly fast and memory efficient.

I'm not up to speed on Croquet, but some variant of this technique
might be a convenient way to start up a large number of cooperating
Croquet images, presumably using sockets instead of OSPipe.

Dave

On Thu, Feb 22, 2007 at 10:21:42PM -0800, Andreas Raab wrote:

> Nice summary. The only thing I have to add is that your approach (3) is
> how Croquet works. We currently don't do it inside a single image but we
> have been talking about it and at some point we will get around to it.
>
> Cheers,
>   - Andreas
>
> J J wrote:
> > [J J's message quoted in full; snipped]


Re: election details (was "Squeak Foundation Board 2007 Candidates")

garduino
In reply to this post by Klaus D. Witzel
Yes, and this doesn't help so much...


2007/2/22, Klaus D. Witzel <[hidden email]>:
> Thank you Giovanni,
>
> so now we can see that some of the candidates' pages are still blank...
>
> /Klaus
>


RE: election details (was "Squeak Foundation Board 2007 Candidates")

Tansel Ersavas
Hi,

If someone would kindly forward me the usercode/password to edit my page or
pages in the swiki, I'd gladly put some info up, answer the questions and place
them on the page. I can't seem to find any emails that refer to the u/c
password pair.

Thanks

Tansel

-----Original Message-----
From: [hidden email]
[mailto:[hidden email]] On Behalf Of Germán
Arduino
Sent: Friday, 23 February 2007 2:38 PM
To: The general-purpose Squeak developers list
Subject: Re: election details (was "Squeak Foundation Board 2007
Candidates")

Yes, and this doesn't help so much...


2007/2/22, Klaus D. Witzel <[hidden email]>:
> Thank you Giovanni,
>
> so now we can see that some of the candidates' pages are still blank...
>
> /Klaus
>



Re: election details (was "Squeak Foundation Board 2007 Candidates")

garduino
Sent by private mail.

2007/2/23, Tansel <[hidden email]>:

> [Tansel's message quoted in full; snipped]


--
Germán S. Arduino
http://www.arsol.biz
http://www.arsol.net


RE: Concurrent Squeak (was Re: election details *PLEASE READ*)

J J-6
In reply to this post by David T. Lewis
Yes, I think it could be done fairly easily.  To make it easier I would want
some kind of "spawn" to spawn a process (calling OSPluginFork where
supported), and a package that lets me do something like:

ConcurrencySystem sendMsg: message to: somePID  "where somePID can be local
or off box"

And the process receives it with something like:

ConcurrencySystem
  receiveMsgOfType: someMessageDescriptionThing
  orType: someOtherOne
  orType: systemNetworkFailure
  orType: systemProcessFailure

Something like that, the way Erlang has. :)

>From: "David T. Lewis" <[hidden email]>
>Reply-To: The general-purpose Squeak developers
>list<[hidden email]>
>To: The general-purpose Squeak developers
>list<[hidden email]>
>Subject: Concurrent Squeak (was Re: election details *PLEASE READ*)
>Date: Fri, 23 Feb 2007 07:09:45 -0500
>
>You can do approach (3) with an ordinary Squeak image if you are
>using a unix platform (including OS X, I think) with OSProcess. Use
>#forkHeadlessSqueakAndDo: to start the "threads", and connect them
>with OSPipes. The endpoints of an OSPipe are FileStreams, so you
>can read and write serialized objects between the images. The
>#forkSqueak turns out to be surprisingly fast and memory efficient.
>
>I'm not up to speed on Croquet, but some variant of this technique
>might be a convenient way to start up a large number of cooperating
>Croquet images, presumably using sockets instead of OSPipe.
>
>Dave
>
>On Thu, Feb 22, 2007 at 10:21:42PM -0800, Andreas Raab wrote:
> > Nice summary. The only thing I have to add is that your approach (3) is
> > how Croquet works. We currently don't do it inside a single image but we
> > have been talking about it and at some point we will get around to it.
> >
> > Cheers,
> >   - Andreas
> >
> > J J wrote:
> > > [message quoted in full; snipped]




Re: election details *PLEASE READ*

J J-6
In reply to this post by Andreas.Raab
>From: Andreas Raab <[hidden email]>
>Reply-To: The general-purpose Squeak developers
>list<[hidden email]>
>To: The general-purpose Squeak developers
>list<[hidden email]>
>Subject: Re: election details *PLEASE READ*
>Date: Thu, 22 Feb 2007 22:05:50 -0800
>
>J J wrote:
>>Namespaces ok, but the way they have been done is not always that great.  
>>Modules are pretty much the same thing, and interfaces?  No.  Traits are
>>much better then interfaces imo.
>
>Modules is one area where IMO Java shines. Not necessarily because of the
>language part but because of the whole ClassLoader/JAR/Applet pipeline
>(just run Dan's SqueakOnJava in the browser and then think about what must
>be true to be able to handle that in a reasonably secure fashion). It's the
>best deployment solution I am aware of.

Ah, I wasn't thinking about them from that angle.  Good point.

>In the namespace area I think Python wins hands-down, because they manage
>to tie namespaces (and modules) into the object model so nicely. Traits
>better than Interfaces? Well that remains to be seen.

Interfaces in Java are mostly just a pain IMO.  They provide a few extra
protections here and there, and give the programmer hints as he browses the
code, but since you can't add a default implementation to them they just
enforce a certain level of code duplication.  At least in my experience and
that of most of the Java programmers I know (admittedly not the whole world).

So from a usefulness point of view I think Traits should already be better,
and once the tools support them they will at least give the same hints the
Java implementation does.




Interfaces and dynamic binding [was: election details *PLEASE READ*]

Klaus D. Witzel
Hi JJ,

on Fri, 23 Feb 2007 16:51:04 +0100, you wrote:

>> From: Andreas Raab <[hidden email]>
>> Reply-To: The general-purpose Squeak developers  
>> list<[hidden email]>
>> To: The general-purpose Squeak developers  
>> list<[hidden email]>
>> Subject: Re: election details *PLEASE READ*
>> Date: Thu, 22 Feb 2007 22:05:50 -0800
>>
>> J J wrote:
>>> Namespaces ok, but the way they have been done is not always that  
>>> great.  Modules are pretty much the same thing, and interfaces?  No.  
>>> Traits are much better then interfaces imo.
>>
>> Modules is one area where IMO Java shines. Not necessarily because of  
>> the language part but because of the whole ClassLoader/JAR/Applet  
>> pipeline (just run Dan's SqueakOnJava in the browser and then think  
>> about what must be true to be able to handle that in a reasonably  
>> secure fashion). It's the best deployment solution I am aware of.
>
> Ah, I wasn't thinking about them from that angle.  Good point.
>
>> In the namespace area I think Python wins hands-down, because they  
>> manage to tie namespaces (and modules) into the object model so nicely.  
>> Traits better than Interfaces? Well that remains to be seen.
>
> Interfaces in Java are mostly just a pain IMO.  They provide a few extra  
> protections here and there, and give the programmer hints as he browses  
> the code, but since you can't add a default implementation to them they  
> just enforce a certain level of code duplication.  At least in my  
> experience and most of the Java programmers I know (admittedly not the  
> whole world).

You can use Java's interfaces for modeling the dynamic binding[1] of
Smalltalk ;-) At runtime, no change is needed in the JVM. But the existing
gcj's require you to cast message sends. To get rid of the cost of the
casts I run a small utility which patches the bytecodes in the .class
files with noops. May sound crazy at first sight, but it works and I know
of no bytecode verifier which complains (so much for the static checking
== safe software mythodology ;-)

Here's how:

- create an empty interface, name it smalltalk
- make a distinct new interface for each method
- the latter extends the former

Then when you want to expose an implementor, just use " implements " as  
before, for example "implements smalltalk.doIt, smalltalk.printIt".

You can extend all the existing public non-final library classes with the  
above. The JVM will only throw an exception if an implementor is actually  
missing (like DNU does). And your instance variables and args and temps  
will be of the singular type smalltalk.

When time permits (rare) I work on a compiler which reads Smalltalk source  
code and emits JVM bytecode (already compiles itself). But blocks and  
exceptions will be a pain :(

> So from a usefulness point of view I think Traits already should be  
> better, and once the tools support them they will give the same hints  
> the Java implementation does at least.

IMO Java's interfaces and Traits do not compare. To give a no-go example:
with traits you can mix required methods and provided methods. Translated to
Java, this means that your library classes would be abstract (and as a
consequence could not be instantiated within their own library). So people
use newInstance() and then cast (sometimes preceded by Class.forName and
other expensive reflection), all the way down the bloat.

/Klaus

P.S. from the doc of newInstance() "Use of this method effectively  
bypasses the compile-time exception checking that would otherwise be  
performed by the compiler". Not compile-time safe, that is.

-----------
[1] http://www.ibm.com/developerworks/java/library/j-cb11076.html



Re: Traits vs interfaces and a trivial idea (was Re: Future of smalltalk (was Re: election details *PLEASE READ*))

tblanchard
In reply to this post by Göran Krampe

On Feb 22, 2007, at 11:46 PM, Göran Krampe wrote:

> As most people know a java interface is a "named bunch of messages"  
> that a
> class (or its superclasses of course) can declare it implements.  
> But this
> is early binding, right?
>
> I would be more interested in "late" binding where I could do:
>
>    someThingy respondsToProtocol: aProtocol
>
> ...where aProtocol is more or less just a bunch of selectors. The main
> difference is of course that the class (or any of its superclasses) of
> someThingy doesn't need to *declare* that it implements aProtocol - it
> just has to actually do it. :)
>

Get a mac. :-)

Objective-C has Protocols (which I think were the inspiration for
Java's interfaces). Protocols can be formal (the class declares that it
implements a protocol explicitly and the compiler whines if you leave
something out) or informal - you implement it but you don't declare it.

There is a lovely method in Cocoa, conformsToProtocol:, that does just
what you're asking.  I don't know how it works, but if I were to
implement such a thing in Squeak in a cheap and cheerful way, I
would reify a protocol as a class and attach protocols to existing
classes much the way PoolDictionaries work now, only instead of
importing shared constants you would be using it to enforce a set of
methods.  How tools prompt you to make sure you conform to the
protocol you declare is an exercise left to the implementer.

-Todd Blanchard


