Hi Michael--

> Modularization will help to a degree the novice user as well, but not
> as much. Knowing that certain methods were added by a module informs
> the purpose of the method a bit more, but does not really help in the
> slow introduction of complexity from a novice's point of view.

Oh, I think it will. Knowing where a method came from, a novice can reason about the contributing module itself ("why does it exist? how are the methods in it similar?"). The novice can start to reason about the system's composition at ever-larger levels of organization, which aids understanding.

-C

--
Craig Latta
improvisational musical informaticist
www.netjam.org
Smalltalkers do: [:it | All with: Class, (And love: it)]
Don't get me wrong. I think modular layered systems are FAR easier to learn than a huge soup of objects/methods. And you are correct that novices will start to understand why B3D exists by seeing what it does to the system.

I was just saying that there is no substitute for the deliberate presentation of information. That is why a book has value over code comments: I have yet to see a system carrying as much documentation within itself as would exist in even a moderately well-written book on that system. Since system authors often forget what it was like to be novices, it would really take an organization committed to training new project members to put such a culture in place.

Michael
In reply to this post by Chris Muller
Hi Chris--

> ...how did you ever solve your issue about methods implemented in
> superclasses..
>
> i.e., in the master image I have MyDomainClass>>#name, but
> Object>>#name has already been faulted down, so I'm not getting the
> correct implementation because DNU was not invoked...

Ah, I love this trick. :)

In my changes to the garbage collector, I made it so that inert methods get replaced with nil, but the corresponding associations persist in their method dictionaries. And I changed method lookup in the virtual machine so that, when it finds one of those nils, it behaves as if there's no matching method at all anywhere in the class hierarchy.

So great, you already brought in Object>>name. But there's still an entry for >>name in the method dictionary of MyDomainClass, and it refers to nil instead of a compiled method. So when you send name to an instance of MyDomainClass, a message-not-understood results, and MyDomainClass>>name gets swapped in.

At some point, I'll remove all remaining "placeholders" from the kernel image (remember that a lot of them get nuked anyway when their entire class gets collected, due to a lack of references). But probably not until I've personally gone over every single "real" method remaining, written a comment for it, commented its class, its module, etc. :)

With my previous two kernel images, I was able to point to any individual byte and justify why it was there (with the aid of the visualization tools[1]). I'm confident I'll be able to do the same with the latest kernel image, even though it does a little more than the other ones.

thanks,

-C

--
Craig Latta
improvisational musical informaticist
www.netjam.org
Smalltalkers do: [:it | All with: Class, (And love: it)]
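A rough sketch of the adjusted lookup rule described above, expressed as ordinary Smalltalk for readability; the real change lives inside the virtual machine, and the selector name below is invented purely for illustration:

    lookupSelector: aSelector startingAt: aClass
        "Answer the compiled method to run, or nil to signal doesNotUnderstand:.
         A nil placeholder left by the modified garbage collector is returned
         as-is, so the send fails just as if no class in the hierarchy
         implemented aSelector, and the real method can then be faulted in
         from the master image."
        | class |
        class := aClass.
        [class notNil] whileTrue: [
            (class methodDict includesKey: aSelector)
                ifTrue: [^ class methodDict at: aSelector].
            class := class superclass].
        ^ nil

In the example above, the placeholder entry for #name in MyDomainClass's method dictionary is found before Object>>name is ever reached, the lookup gives up, message-not-understood fires, and MyDomainClass>>name is brought over from the master image.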
Oops, I forgot a footnote in the last message. It was a reference to the object memory visualization tools, at:

    http://netjam.org/spoon/viz

-C

--
Craig Latta
improvisational musical informaticist
www.netjam.org
Smalltalkers do: [:it | All with: Class, (And love: it)]
Very cool tool. You could also turn this into an educational/application tool to visualize some set of connected objects using a series of snapshots. If you knew the root objects, you could hide all objects not referenced in that sub-graph.

Michael
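One way to compute that sub-graph, sketched in ordinary Smalltalk; the selector name is invented, and special object formats such as CompiledMethod are ignored for brevity:

    reachableFrom: rootObjects
        "Answer the set of objects transitively referenced from rootObjects;
         a visualization could then dim or hide everything outside this set."
        | seen todo obj |
        seen := IdentitySet new.
        todo := OrderedCollection withAll: rootObjects.
        [todo isEmpty] whileFalse: [
            obj := todo removeFirst.
            (seen includes: obj) ifFalse: [
                seen add: obj.
                1 to: obj class instSize do: [:i |
                    todo add: (obj instVarAt: i)].
                (obj class isVariable and: [obj class isPointers]) ifTrue: [
                    1 to: obj basicSize do: [:i |
                        todo add: (obj basicAt: i)]]]].
        ^ seen

Computing such a set once per snapshot would give the series of filtered views Michael describes.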
In reply to this post by Klaus D. Witzel
Hi Andreas et al.,
Note that we (I think I speak for the other authors as well; if not, they will without any doubt speak up) do not claim that traits are "modules" -- that term is very overloaded anyway, but it typically assumes some kind of 'black-box'-ness (strong encapsulation) that Traits do not satisfy. When you see Traits as black-box modules, you get the problems Andreas mentions in his mail (with which I agree up to a point).

But Traits are more of a white-box reuse model: fine-grained composition of groups of methods, as you would do with inheritance, but with composition operators. Required methods, in that respect, are more like abstract methods: an implementation has to be given when the trait is used in a class. Note that, compared to subclassing, trait composition is actually less brittle, since more things are checked. And that is exactly the point Traits try to address.

That said, more applications and feedback are absolutely welcome.
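To make the required-methods point concrete, here is a minimal sketch using the Squeak 3.9 trait syntax; the trait, class, and selector names are invented for illustration:

    Trait named: #TGreeting
        uses: {}
        category: 'Example-Traits'.

    "A method added to TGreeting in the browser. It sends #name, which the
     trait itself does not implement, so #name becomes a required method --
     effectively an abstract method that every class using the trait must
     supply."
    greet
        ^ 'Hello, ' , self name

    Object subclass: #Person
        uses: TGreeting
        instanceVariableNames: 'name'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Example-Traits'.

    "In Person, satisfying the requirement:"
    name
        ^ name ifNil: ['world']

    Person new greet.   "==> 'Hello, world'"

    "Composition operators combine, exclude and alias trait methods, e.g.
        uses: TGreeting + (TPrinting - {#printOn:})
        uses: TGreeting @ {#salute -> #greet}
     (TPrinting is likewise an invented example trait)."

If Person did not implement #name, the tools would show the requirement as unsatisfied on the class, which is the sense in which more is checked than with plain subclassing.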
On 02 May 2006, at 15:23, Klaus D. Witzel wrote:

> Hi Andreas,
>
> on Tue, 02 May 2006 11:28:49 +0200, you <[hidden email]> wrote:
>
>> Klaus D. Witzel wrote:
>>> Have a look at the current effort for the 3.9 release, perhaps
>>> after looking at
>>> - http://www.iam.unibe.ch/~scg/Research/Traits/
>>> to find out if no one really knows how to modularize Squeak.
>>
>> Interesting. Can you elaborate on how you think traits deal with/
>> help modularization?
>
> Sure. And apologies in advance if this becomes too theoretical.
> O.K. base things first, from Wikipedia:
>
> "A module can be defined variously, but generally must be a
> component of a larger system, and operate within that system
> independently from the operations of the other components."
>
> The emphasis, IMO, is on "...operate within that system
> independently from the operations of the other components".
>
> Every Trait is such a module, and the composition of two or more
> Traits also qualifies (IMO without any doubt).
>
> I use a simple judgement to see if something qualifies for being a
> module, a raw translation from the work of
> - http://scholar.google.com/scholar?q=Anne+Berry+separator
>
> a) there are at least two modules in any non-monolithic system which
> deserves to be called "modular"
> b) there are pairs of modules which are independent of the
> operations of each other
> c) if a) and b), then there are one or more things (the users of
> modules) which separate pairs of modules
>
>> I would actually (from my current point of view, which is based on
>> the available examples) claim the opposite - traits being a tool
>> for "more reuse" seem to make relationships between classes and
>> their (compositions of) traits even more intricate than before.
>
> This is quite possible, and I agree that the example of traitified
> Traits (TBasicCategorisingDescription et al.) can be seen this way.
>
> But I think that at this stage of the Traits story we do not have a
> convincing view of the relationships between Traits and users of
> Traits (and between Traits themselves). We just use the GOFBrowser as
> before, but what is needed (IMO) is a shift in the paradigm of "...a
> query path into the class descriptions, the software of the
> system...". Not that I have a concrete idea of how such a thing
> would look, but without any doubt I'd recognize it if someone
> showed it to me (no, it won't look like http://www.eclipse.org/ ;)
>
>> At least that's the feeling I get when I look at their current use
>> for defining the metaclass hierarchy (if anyone has a better real-
>> world example I'd be interested in studying it; but please no toy
>> examples ;-)
>>
>> The relations between the fifteen or so traits that ultimately
>> make up a class seem to be fairly elaborate and (in my eyes) much
>> more fragile than I would have expected. To give an example, have
>> a look at TBasicCategorisingDescription (which I chose at random;
>> the same applies pretty much to all of these traits) - even by
>> roughly glancing over the trait I find several methods that are
>> implemented nowhere in sight but that have very specific
>> requirements (#organization, #organization:, #includesSelector:,
>> #isClassSide, #classSide), which strikes me as very unmodular indeed.
>
> This is, again in my eyes, a direct consequence of the
> modularization effort of the author of these requirements: it tells
> me that the author didn't want to know anything about how a
> Behavior decides #includesSelector: and how a Behavior manages
> #isClassSide and #classSide, and that {#organization.
> #organization:} are just getters/setters.
>
>> It seems in particular problematic that there is no information
>> about which of these selectors are actual (computational) requirements,
>> which ones are (assumed to be) state, and which ones are simply
>> bugs (by elimination).
>
> Out of curiosity, why do you want to know which ones are state?
>
>> There is zero information about the interface of these
>> requirements (arguments taken, return values, error conditions),
>> whether the methods are assumed to come from a common (required)
>> trait or whether it's just a loose collection of random methods
>> (like, for example, a utility trait for implementing unrelated
>> math functions).
>
> Don't get me wrong, but isn't this the case for *most* methods in
> Squeak? If you can agree, then we can put this issue aside (until
> "somebody" fixes the documentation problem, I mean ;)
>
>> All of which I think are critical if you want to build a modular
>> system (and btw, much of that information *is* readily available
>> as soon as a trait is used in a functioning class, but that's
>> exactly the point here - traits themselves definitely lose some
>> modularity when they try to stand on their own -modular- feet).
>
> I agree; this (your comment) comes close to my a)-c) view from
> above, in the sense that modules are "nothing" in the absence of
> their user(s).
>
>> So to me it really does feel as if, yes, reuse is being maximized,
>> but modularity is actually being sacrificed in that process.
>
> Well, my impression is quite the opposite: only because the author
> *did* this (IMO high) level of modularization was he/she able to
> lay the ground for maximizing reuse of these particular Traits. Of
> course, if there is only *one* user, then... But I also think the
> author has given Traits traits as a high-level example of how to do
> "it".
>
>> Also, remember that duplication of code is not necessarily a bad
>> thing if one tries to minimize dependencies, which is typically a
>> good idea for modularity.
>
> This is the point at which I view users as separators: have a look
> at the average non-traitified classes and how they are used: the
> more "glue" code a user needs, the less the value of
> modularization. Take for example Http-readable content, to be
> paired with CrLf conventions: no way, unless one accepts reading the
> entire contents before being able to apply CrLf conventions by
> using an *existing* module (this is just an example, no one take
> this personally, please).
>
>> Contrary to which, one of the primary goals of traits is to
>> maximize reuse and minimize duplication, which makes the goals at
>> least somewhat incompatible. And although one could argue that the
>> expected gains of traits offset the loss of decoupling, I get the
>> distinct feeling this is not true for the examples we currently
>> have - if I wanted to reuse any of these traits I really wouldn't
>> know how (other than in precisely the same form they're already
>> being used in). That, I think, was the most surprising result when
>> looking at actual code, but it may partly be related to tools
>> issues (although I doubt that, because no tool will tell you what
>> a non-existent method's expected return value is if you load a
>> trait and have no existing composition at hand to look at).
>
> Agreed; this amounts (again) to a shift in the GOFBrowser paradigm.
>
>> Now it may be that this is not a good example, but it's the only
>> real-world traits use that I've seen so far (toy examples
>> discounted).
>
> I'd say the example you used is a good one. It is fairly complex
> and is part of something which everybody on squeak-dev understands:
> behavior.
>
>> Anyway, if you think that example is badly chosen, or if you have
>> another example that shows how traits help modularity (or even if
>> you just have gut feelings ;-), I'd be interested in hearing more
>> about it.
>
> I hope it was possible to demonstrate my view on modularity and
> Traits.
>
> /Klaus