Traits are a major feature added in 3.9, and I think they should STAY.
I don't see any good reason why the next releases should not support traits. Instead of thinking about how to get rid of them, I think we should think about what in the traits implementation prevents people from loading 3.8-based code without much stress, and, based on this knowledge, refactor traits to be more compatible with other, non-traitified code. But they should stay! -- Best regards, Igor Stasenko AKA sig. |
Hi Igor
Thanks. Apparently I got 300 emails blocked somewhere; I thought I was not on the mailing list anymore. So I will summarize all the emails in this one.

About traits:
- I would love to see a simpler, loadable-on-demand BUT ROBUST implementation of traits. I will have a look at Andreas' code. Thanks for that.
- I'm not sure that sharing methods is a good idea.
- I'm not sure the tradeoff traits introduce is worth it, but if we do not try, how can we learn? As for traits in the kernel: there is duplicated code in the kernel, so it was natural to try. We especially designed traits to be backward compatible, so that we can learn and still backtrack if they turn out to be unneeded or a bad idea.
- It seems to me that the Smalltalk spirit of inventing the future is lost in Squeak. Maybe we should wait for Newspeak and live there...
- We will work on adding state to traits (not following the design of stateful traits, which is too complex for what it buys). I like it a lot when people think that we are "researchers", you know, the guys that don't give a shit and do not know what real development is. Seriously, there are a **lot** of cool experiments we did (changeboxes, classboxes...) and we never even thought about putting them in the language. Why did we do it for traits? Because it was simple, backward compatible, and good. Now it is clear that it introduces noise: you have to choose between traits and subclassing, and you have to fix the tools. I wish we had a language without an IDE, but we are serious about our meta-concerns; we are eating our own dog food.
- I'm convinced, after the experience with Nathanael on collections and Damien on streams, that traits are good.
- We will run a new large analysis and build a new collection hierarchy (if I get funding) from September.
- I find it a bit strange that people get so annoyed by traits when they can simply ignore them. Look at Lukas with Seaside: it does not use them, and he does not see them.
- Now, if the people really against traits would only make real arguments, for example: the reflective interface is not good enough; when I ask for the selectors I get all of them, and it is not clear how to get the local ones versus the ones coming from traits (and you can find other examples).
- Instead of bashing traits, try to use them. The OB version of David Röthlisberger supports traits.
- There is a lot of work to do to get a better Squeak (see my previous email, where I gave such a list to Edgar).

Stef

On May 15, 2008, at 6:33 PM, Igor Stasenko wrote:
> Traits is major feature added since 3.9. And i think it should STAY. [...]
|
I am genuinely curious to see a simple example that clearly demonstrates the advantages of traits.
It is my perception (or misperception) that traits are an attempt (perhaps unknowingly) to bring multiple inheritance into the language. As I see it, this confuses

  Y Is-A X (i.e. Y subclasses X)

with

  Y Has-A-Part X (i.e. Y has a member variable / subpart / trait X)

I've seen this in GUI toolkits - e.g. someone has a Bitmap class and a ClickableArea class. They want to make a Button - i.e. something that has both Bitmap and ClickableArea traits. So they create Button by inheriting from both Bitmap and ClickableArea classes (usually in C++). The better solution is to say that Button has both Bitmap and ClickableArea parts (i.e. member variables). This confusion becomes ridiculously clear if we attempt to create e.g. a Human class by inheriting from Eye, Leg, Arm etc. There is very likely much confusion among all of the members that get dragged in and try to compete with each other.

So, if I have a mistaken view of traits, I would in all honesty love to see an example that shows this to me clearly.

I have a strong view that software should be designed with a very high regard for orthogonality. I will often reject non-orthogonal approaches in my own software without much thought, simply because I have been down that road before and seen the trouble that ensues. That's why I would suggest that traits should be rejected IFF they turn out to be a non-orthogonal design (i.e. their utility is duplicated by e.g. member variables). I am not yet convinced one way or the other. |
please read our papers!
Stef

On May 16, 2008, at 10:52 AM, Ryan Mitchley wrote:
> I am genuinely curious to see a simple example that clearly demonstrates
> the advantages of traits. [...]
|
Hi Stef

Could you please post a link to your papers, and possibly point me to a page that has a simple example? Thanks.

Ryan |
On Fri, May 16, 2008 at 11:51 AM, Ryan Mitchley <[hidden email]> wrote:
> Could you please post a link to your papers, and possibly point me to a
> page that has a simple example.

For a simple example, you can read our latest paper, which contains an appendix with a small presentation of traits (page 32 and following):
http://www.iam.unibe.ch/~scg/Archive/Papers/Cass08a-NileNewKernel-ComputerLanguages.pdf

The complete traits bibliography is at:
http://www.iam.unibe.ch/~scg/cgi-bin/scgbib.cgi?query=traits

I advise you to read «Traits: Composable Units of Behavior»:
http://www.iam.unibe.ch/~scg/Archive/Papers/Scha03aTraits.pdf

-- Damien Cassou
Peter von der Ahé: «I'm beginning to see why Gilad wished us good luck». (http://blogs.sun.com/ahe/entry/override_snafu) |
In reply to this post by Ryan Mitchley
If you would like an example, look at the collections hierarchy. It is a perfect one: it is full of "diamond problems". And what options do you have to solve them? Here are my top approaches (worst first):

- Code duplication. No comment needed.
- Delegation. Adds a lot of one-line methods; the code grows and is error-prone.
- Moving the implementation up the hierarchy. This leads to something like

    ArrayedCollection>>add: newObject
        self shouldNotImplement

- Traits. Not easy to understand, and they add complexity.

I'm not sure which of the last two is better. And I'm also not sure whether collections are the only useful example for traits. For me it is a language option I can use or not. Every approach has drawbacks you have to think about; for me, even inheritance can be seen that way.

This discussion raises some tensions. For me it is important to distinguish between "my opinion about traits" and "whether traits should stay in the image". And I hope we don't discuss banning traits from the image. To make traits a real option they should be removable, but only if they are also reloadable; removing them without being able to reload them would be a big loss. So, where's the problem? Even Stef agrees that having them reloadable would be a good choice. I think it's just that this has to be proven and be done.

Norbert

On Fri, 2008-05-16 at 01:52 -0700, Ryan Mitchley wrote:
> I am genuinely curious to see a simple example that clearly demonstrates the
> advantages of traits. [...]
|
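Norbert's list of options is easy to see in miniature. Below is a rough sketch in Python (an illustration only: Python mixins stand in loosely for traits, and every class and method name here is invented for the sketch, not taken from Squeak):

```python
# Trait-style composition: each mixin is a stateless unit of behavior;
# the composing class supplies the state both units operate on.

class ReadableMixin:
    """Reading behavior; expects the user to provide 'items' and 'position'."""
    def peek(self):
        return self.items[self.position]

    def next(self):
        item = self.peek()
        self.position += 1
        return item

class WritableMixin:
    """Writing behavior; expects the user to provide 'items'."""
    def next_put(self, item):
        self.items.append(item)
        return item

class ReadWriteStream(ReadableMixin, WritableMixin):
    """Composes both behaviors: no duplicated method bodies, and none of
    the one-line forwarding methods the delegation option would need."""
    def __init__(self):
        self.items = []
        self.position = 0
```

With delegation instead, ReadWriteStream would hold a separate reader and writer object and forward every single message to one of them; with "moving implementation up", one of the two behaviors would have to be cancelled in some subclasses, which is exactly the shouldNotImplement smell Norbert points at.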
On Fri, May 16, 2008 at 12:00 PM, Norbert Hartl <[hidden email]> wrote:
> If you like to have an example look at the collections hierarchy. That
> is a perfect example for that. These are full of "diamond problems". [...]

For a discussion of these questions and the Stream hierarchy, have a look at «Traits at Work: the design of a new trait-based stream library»:
http://www.iam.unibe.ch/~scg/Archive/Papers/Cass08a-NileNewKernel-ComputerLanguages.pdf
Here, we identify all the problems with the current Stream hierarchy, design a new one based on traits, and discuss the differences and drawbacks.

-- Damien Cassou |
In reply to this post by Ryan Mitchley
"Ryan Mitchley" <[hidden email]> wrote in message
> As I see it, this confuses
> Y Is-A X (i.e. Y subclasses X)
> with
> Y Has-A-Part X (i.e. Y has a member variable / subpart / trait X)

What about "Y plays role R1 and role R2"? E.g. a Car plays the roles of TransportationVehicle and CollateralAsset. The state related to these two roles is not independent: as you rack up miles on the transportation vehicle, the collateral value of the asset drops; when the asset is hauled away for non-payment, the transportation vehicle is not available.

Doing this with has-a becomes quite intricate, unless you systematically set up a notification channel between the parts and the whole. Perhaps traits are cleaner for these cases (within their limitations, e.g. static rather than dynamic; you cannot rename callback methods, only alias them).

Sophie |
It is precisely Car's function to mediate between its parts (and provide a useful interface). The containing class provides the "glue". If the parts have good interfaces and have been well designed, the interaction will be quite manageable. A fundamentally complex relationship requires fundamentally complex interactions.

The interaction between TransportationVehicle>>miles and CollateralAsset>>value would be radically different for, say, a Ferrari F430 and a Fiat Seicento (apologies to anyone offended!). It would be a function of the particular Car instance. I think we could agree that seizure would be physically constrained to the Car! (That the seizor cared chiefly about the CollateralAsset is irrelevant...) |
In reply to this post by Damien Cassou-3
Damien Cassou wrote:
> For a discussion about these questions and the Stream hierarchy, have
> a look at «Traits at Work: the design of a new trait-based stream
> library». Here, we identify all problems with the current Stream
> hierarchy, design a new one based on traits and discuss the
> differences and drawbacks.

This is a very nice paper, but it falls into the same trap that other traits papers fall into: it conflates ideas that are just as applicable to single inheritance with traits, and claims (just because traits were used to implement the original idea) that traits are superior.

For example: in section 7.3, metrics are given that compare the Squeak implementation with Nile's implementation of internal streams. Based on the (superior) version in Nile it says that "This means we avoided reimplementation of a lot of methods by putting them in the right traits. Finally, we can deduce from the last metrics that the design of Nile is better: there is neither cancelled methods nor methods implemented too high and there are only three methods reimplemented for speed reasons compared to the fourteen of the Squeak version." with the strong implication that it was traits that helped achieve the majority of the improvements.

But is that so? Let's look at section 4.1 and note that here it is pointed out that "The library implements all collection-oriented methods in a single class: CollectionStream. This class replaces the three classes ReadStream, ReadWriteStream and WriteStream of the traditional Smalltalk stream hierarchy (See Figure 1)." This approach is definitely an improvement, but one every bit as applicable to a single-inheritance implementation of streams.

It is interesting to do a quick check to see how much this might change matters. First, combining these three classes into one means that the traits version now has twice the number of entities vs. the non-traits version (3 vs 6). This view is also supported by counting the "backward compatible" part of Figure 12 (which is directly comparable with the Squeak version), which results in 11 classes and traits (compared to 5 classes in Squeak).

Next, if we take the total number of methods in these three classes:

    ReadStream selectors size +
    WriteStream selectors size +
    ReadWriteStream selectors size
    -----------------
    68

(the measure was taken in 3.9 to be roughly comparable with the paper, and I'm not sure why the paper claims 55 methods) and compare this with the number of unique selectors (discounting all re-implemented methods):

    (Set withAll:
        (ReadStream selectors asArray),
        (WriteStream selectors asArray),
        (ReadWriteStream selectors asArray)) size
    -------------------
    59

what we get is a 15% improvement *minimum* by folding these three classes into one (very likely more if one looks in detail).

Next, let's look at "cancelled methods" (those that use #shouldNotImplement). The paper lists 2 cancelled methods, which happen to be WriteStream>>next and ReadStream>>nextPut:. And of course those wouldn't exist in a single-inheritance implementation either. Etc.

In other words, the measures change *dramatically* as soon as we apply the original idea, regardless of whether traits are used or not. This speaks clearly for the original idea of folding these three classes into one; concluding that traits had anything to do with it would require a very different comparison.

If the paper wants to make any claims regarding traits, it really needs to distinguish improvements that are due to traits from general improvements (i.e., improvements that are just as applicable to single-inheritance implementations). Otherwise it is comparing apples to oranges and can't be taken seriously in this regard.

Cheers,
  - Andreas |
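The folding measurement itself is independent of Smalltalk. Here is a sketch of the same arithmetic in Python (the selector sets below are invented placeholders, not the real stream protocols): summing the three protocol sizes versus taking the size of their union is the `selectors size` versus `Set withAll:` comparison in miniature.

```python
# Invented stand-ins for the three classes' method dictionaries.
read_protocol = {"next", "peek", "upToEnd"}
write_protocol = {"nextPut:", "nextPutAll:"}
read_write_protocol = {"next", "nextPut:", "contents"}

# Analogous to: ReadStream selectors size + WriteStream selectors size + ...
total = len(read_protocol) + len(write_protocol) + len(read_write_protocol)

# Analogous to: (Set withAll: ...) size -- each selector counted once.
unique = len(read_protocol | write_protocol | read_write_protocol)

saving = 1 - unique / total  # the minimum improvement from folding
```

The point of the comparison is that this saving comes from folding alone and says nothing about traits one way or the other.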
In reply to this post by stephane ducasse
Hi Stef--

> please read our papers!

The papers are very useful, of course, but it would also be useful to have a short summary, suitable for the infamous "elevator pitch". :)

thanks,
-C |
In reply to this post by Andreas.Raab
Yes. As far as using streams as an example, I never understood why support for write-only streams was ever needed. What's wrong with just assuming all streams are readable? Then this classic dilemma just goes away. It seems to me that whoever wrote the first internal streams implementation for Smalltalk simply got that part wrong, and no one questioned it for a long time. I wouldn't make this a primary motivating example for traits (hopefully there's a better one). -C |
On Fri, 16 May 2008 17:01:08 -0700, Craig Latta <[hidden email]> wrote:
> Yes. As far as using streams as an example, I never understood why > support for write-only streams was ever needed. What's wrong with just > assuming all streams are readable? Not all streams =are= readable. I'm not sure that's a serious problem, but it is a reality. |
In reply to this post by ccrraaiigg
2008/5/17 Craig Latta <[hidden email]>:
> Yes. As far as using streams as an example, I never understood why
> support for write-only streams was ever needed. What's wrong with just
> assuming all streams are readable? [...]

There is a lot of hardware with write-only capabilities. Yes, such devices can be bidirectional, and software built on top of them can imitate read/write behavior. Devices like a COM port or a network card are bidirectional and support both reading and writing, but that does not mean you can read back what you have just written to them, or seek.

One example from my real experience. I was involved in a project to develop VoIP software (like Skype). I was hired just after the main design decisions were made. A principal design failure was that communication between parties was built on bidirectional stream principles (say, two hosts connecting and starting a conversation by exchanging voice data streams). When we came to the point of implementing 'conference calls' (more than two people involved in a conversation), it became clear to everyone (not only me, who had warned about it ;) ) that it is better to represent a conversation as a set of independent listeners and independent media sources.

And you may see the analogy to read-only/write-only streams in the example above: a listener is a read-only stream, while a media source is a write-only stream, and they should stay decoupled from each other.

-- Best regards, Igor Stasenko AKA sig. |
In reply to this post by Andreas.Raab
Hi,
On Fri, May 16, 2008 at 4:38 PM, Andreas Raab <[hidden email]> wrote:
> It is interesting to do a quick check to see how much this might change
> matters: First, combining these three classes into one means that the traits
> version has now twice the number of entities vs. the non-traits version (3
> vs 6). [...]

Fewer entities are not necessarily better than more, as I'm sure you know. Generally, more classes, each with a clear responsibility, are better than fewer, harder-to-understand classes.

> If the paper wants to make any claims regarding traits, it really needs to
> distinguish improvements that are due to traits from general improvements
> (i.e., improvements that are just as applicable to single-inheritance
> implementations). Otherwise it is comparing apples to oranges and can't be
> taken seriously in this regard.

But there *are* limits to what you can achieve with single inheritance, and it is not very hard to come up with an example: the Magnitude class is a perfect candidate for being converted into a trait, if you ask me. Here is its class comment:

    I'm the abstract class Magnitude that provides common protocol for objects
    that have the ability to be compared along a linear dimension, such as
    dates or times. Subclasses of Magnitude include Date, ArithmeticValue,
    and Time, as well as Character and LookupKey.

    My subclasses should implement
        < aMagnitude
        = aMagnitude
        hash

Subclasses of Magnitude, by implementing #< #= #hash, gain the methods #<= #> #>= #between:and: #hashMappedBy: #max: #min: #min:max:. The subclasses of Magnitude are Number, Character, DateAndTime, etc. String does not subclass Magnitude (it subclasses ArrayedCollection), and yet it does implement #< #= and #hash. It could clearly benefit from using Magnitude as a trait (indeed, it implements #hashMappedBy: exactly as Magnitude does). Having traits like Magnitude leaves you more options for defining a better inheritance hierarchy.

Regards,
Víctor Rodríguez. |
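Victor's Magnitude argument translates almost directly into a mixin sketch. In Python (mixins as a loose stand-in for traits; CaselessString is an invented example class), a class whose superclass slot is already "used up" can still gain the whole comparison protocol by implementing only the two base methods:

```python
class MagnitudeMixin:
    """Users implement __lt__ and __eq__; the rest comes for free,
    mirroring Magnitude's #<= #> #>= #between:and: protocol."""
    def __le__(self, other):
        return self < other or self == other

    def __gt__(self, other):
        return not (self <= other)

    def __ge__(self, other):
        return not (self < other)

    def between(self, lo, hi):
        return lo <= self and self <= hi

class CaselessString(MagnitudeMixin, str):
    """String-like: its superclass is str (as String's is
    ArrayedCollection), yet it still acquires the Magnitude protocol."""
    def __lt__(self, other):
        return self.lower() < other.lower()

    def __eq__(self, other):
        return self.lower() == other.lower()

    def __hash__(self):
        return hash(self.lower())
```

This is the shape of Victor's point: the comparison behavior is composed in, so the inheritance slot stays free for the hierarchy that actually fits.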
In reply to this post by Igor Stasenko
> > ...I never understood why support for write-only streams was ever
> > needed. What's wrong with just assuming all streams are readable?
>
> There are lot of hardware having a write-only capabilities...

I was speaking of internal streams; but even in the external stream case, you can just leave it to the external resource to complain if an inappropriate reading operation is attempted (not to the stream itself).

-C

--
Craig Latta
improvisational musical informaticist
www.netjam.org
Smalltalkers do: [:it | All with: Class, (And love: it)] |
In reply to this post by Victor Rodriguez-5
Victor Rodriguez wrote:
>> If the paper wants to make any claims regarding traits, it really needs to
>> distinguish improvements that are due to traits from general improvements
>> (i.e., improvements that are just as applicable to single-inheritance
>> implementations). Otherwise it is comparing apples to oranges and can't be
>> taken seriously in this regard.
>
> But there *are* limits to what you can achieve with single
> inheritance. It is not very hard to come up with an example:

I'm not saying that there are no limits to single inheritance. I'm saying the paper compares apples and oranges, and it shouldn't do that; rather, it should be clear about which improvements are due to the use of MI and which aren't.

Cheers,
  - Andreas |
In reply to this post by Ryan Mitchley
On Fri, 16 May 2008 10:52:57 +0200, Ryan Mitchley wrote:
> I am genuinely curious to see a simple example that clearly demonstrates
> the advantages of traits.
>
> It is my perception (or misperception) that traits are an attempt
> (perhaps unknowingly) to bring multiple inheritance into the language.

I think you can easily judge for yourself whether or not a system gives you multiple inheritance (MI):

o in Smalltalk, every class has at most one superclass
o every MI class has at least two superclasses
o in Smalltalk, a method can send to self or super
o a method in an MI class can choose to which super it sends

Compare Squeak's traits to these statements ...

/Klaus |
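Klaus's four criteria are easy to check against a language that plainly has MI. A Python sketch (reusing Ryan's Bitmap/ClickableArea names purely as an illustration):

```python
class Bitmap:
    def describe(self):
        return "bitmap"

class ClickableArea:
    def describe(self):
        return "clickable"

class Button(Bitmap, ClickableArea):
    """Two superclasses (Klaus's second criterion), and the method below
    chooses which super it sends to (his fourth), rather than having a
    single implicit 'super'."""
    def describe(self):
        return Bitmap.describe(self) + "+" + ClickableArea.describe(self)
```

Whether Squeak's traits meet the same four points is exactly the comparison Klaus invites.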
Klaus D. Witzel wrote:
> I think you can easily judge for yourself whether or not a system gives
> you multiple inheritance (MI):
>
> o in Smalltalk every class has at most one superclass
> o every MI class has at least two superclasses
> o in Smalltalk a method can send to self or super
> o a method in an MI class can choose to which super it sends
>
> Compare Squeak's traits to these statements ...

I don't think this is a particularly good definition of MI, but the above certainly applies to traits. When a class uses a trait, it effectively has multiple superclasses for all its observable behavior: when a method in the trait is changed, the behavior of the class changes unless the method has been reimplemented locally, which is exactly how superclasses behave.

As for "super sends", the only reason for aliasing in traits that I'm aware of is to give a trait user access to a "particular version of the superclass method". Even in the earliest papers there was an example of a "colored rectangle" derived from TColor and TRectangle that used aliases to get to specific implementations in its "superclasses".

In other words, even though it may be named a little differently, the concepts you describe are all present. Which is why traits are generally treated as a form of MI.

Cheers,
  - Andreas |