Following requests for further information, posted since the original
seems hard to find on the internet ----

Copyright 2000, by Nevin Pratt

In Smalltalk, as we all know, nil is the distinguished object which is typically used as the initial value of all variables. It answers true to the message #isNil, and throws a "Does Not Understand" exception when almost any other message is sent to it.

Do all dynamic languages have a similar concept, and a similar nil? No, they do not. In this article, I am going to briefly compare Smalltalk's nil behavior to that of another dynamic language: Objective-C. Even though it has been about six years since I have done any programming in Objective-C (because I switched to Smalltalk), I have found some Objective-C techniques to be helpful and useful to my Smalltalk career. In particular, there are certain situations where I actually prefer Objective-C's concept of a nil over Smalltalk's. This article explores some of those situations.

If you answer nil in Objective-C, the nil that is returned is, in some respects, a lot like Smalltalk's nil, except that instead of the nil generating exceptions when you message it, it just silently eats the messages. Thus, in Objective-C, nil acts more like a "black hole". It is emptiness. It is nothingness. If you send anything to nil, nil is all you get back. No exceptions. No return values. Nothing.

Obviously, if we wanted this same behavior in Smalltalk, we would simply override the #doesNotUnderstand: instance message of the UndefinedObject class to just answer self (a one-line change, sketched below). But would making this change be a good idea in Smalltalk? No it wouldn't, but I'm getting ahead of myself in saying that. So, let's look at it a bit closer, because in doing so it will eventually lead us to what I really do like!

In Objective-C, nil didn't always have this "message eating" behavior. Brad Cox, the inventor of Objective-C, originally gave nil behavior that more closely modeled Smalltalk's nil. That is, messaging nil originally generated a runtime exception, as is evidenced in his release (via StepStone Corporation) of his "ICPack 101" class library and accompanying compiler and runtime. Then, beginning with "ICPack 201" (and accompanying compiler and runtime), this behavior was changed to the current "message eating" behavior. And, NeXT Computers followed suit with their Objective-C implementation, in which they gave their nil a "message eating" behavior, as did the Free Software Foundation with their GNU Objective-C compiler. But it wasn't always that way. So why did it change?

As you might guess, this change created two diverging camps among the programmers. On one side sat the programmers that preferred the original "exception throwing" behavior, and on the other side sat the programmers who preferred the new "message eating" behavior. And they each gave their best arguments to try and illustrate why the philosophy of their side was superior to the other. It was a lively and interesting debate that had no victors, other than the de facto victor voiced by the compiler implementers themselves; namely NeXT Computers, StepStone Corp, and later the FSF, all of which chose the "message eating" behavior.

But the opinions voiced in the "pro" and "con" arguments were interesting, and especially interesting in what they both actually agreed upon! Both sides agreed that the "message eating" behavior tended to create more elegant code!
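For concreteness, the one-line change mentioned earlier would look something like this (a sketch of the change under discussion, not a recommendation):

    UndefinedObject>>doesNotUnderstand: aMessage
        "Silently eat any message sent to nil, Objective-C style."
        ^self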
But of course, the "exception throwing" side responded by saying, "seemingly more elegant code, yes, but potentially troublesome and unreliable", and they gave their reasons for asserting this. We will look at some of those arguments, but first I will demonstrate how the "message eating" behavior can tend to make the code more elegant looking. Suppose, for example, that we wanted to find out the last telephone number that you dialed from your office telephone, and we wanted to save this last number into a variable called, say, `lastNumber'. Suppose further that we wanted to save it as a string so that we could display it in a GUI widget (as well as so we could use it later). If you are the `person' in the message sequence below, would the message sequence to accomplish this request be as follows? lastNumber := person office phone lastNumberDialed asString. widget setStringValue: lastNumber. Maybe. But then, what if you don't have an office? Or, what if you have an office, but the office doesn't have a phone? Or, what if it is new phone, and the phone has never been dialed yet? In any of these cases, using the "exception throwing" nil convention, an exception will be thrown, thus potentially halting the program if an exception handler hasn't been created to handle that exception. But, what if nil has the "message eating" behavior? In this case, `lastNumber' could potentially have a final value of nil, but the code above works just fine for this. Even passing nil as an argument to the #setStringValue:1 method doesn't hurt, because the "message eating" nil convention is used by the widgetry as well. It doesn't matter that the argument is nil. Everything still works, and there's no exception thrown, and no immediately apparent strange side-effects (but we'll analyze that one some more later). To contrast this, how then would you have to code it if nil has the "exception throwing" behavior? You would do it similar to this: | tmp | tmp := person office. tmp notNil ifTrue: [tmp := tmp phone]. tmp notNil ifTrue: [tmp := tmp lastNumberDialed]. tmp notNil ifTrue: [lastNumber := tmp asString]. widget setStringValue: lastNumber. Yuck...all those explicit tests for nil are really ugly! Of course, you could have instead wrapped the original code in an exception handler, and thus avoided the nil tests, something like as follows: [lastNumber := person office phone lastNumberDialed asString. widget setStringValue: lastNumber] on: Object messageNotUnderstoodSignal do: []. This looks a bit simpler than the previous example, but even this example contrasts poorly to the first example. The first example of these three is much simpler! You just "do it", without worrying about exceptions, exception handlers, or explicit tests. But, is this kind of code common? With the "exception throwing" nil, do we really end up typically testing for nil like this, or else setting up exception handlers like this? Yes, it is common. While the above example was contrived, let's look at a real-life example, from the #objectWantingControl method of the VisualPart class of VisualWorks. The VisualPart class is a superclass of the View class, and views in VisualWorks are coded to generally expect to have a collaborating controller object for processing user input (mouse and keyboard events). Thus, the #objectWantingControl method is a method of the view object, and it asks it's controller if it wants the user focus. If it does, #objectWantingControl answers self, otherwise it answers nil. 
If nil is answered, then the view is considered to be read-only, and will not process user input. The actual implementation of #objectWantingControl is as follows:

    objectWantingControl
        | ctrl |
        ctrl := self getController.
        ctrl isNil ifTrue: [^nil].
        "Trap errors occurring while searching for the object wanting control."
        ^Object errorSignal
            handle: [:ex | Controller badControllerSignal
                raiseErrorString: 'Bad controller in objectWantingControl']
            do: [ctrl isControlWanted ifTrue: [self] ifFalse: [nil]]

Notice that this method has both an explicit #isNil test, as well as an exception handler. How would this method instead be written if the system had a "message eating" nil throughout? While there are several variations of possibilities, including at least one variation that is shorter (but not necessarily clearer), we would probably write it as follows:

    objectWantingControl
        self getController isControlWanted ifTrue: [^self].
        ^nil

Notice how much simpler it suddenly became. The programmer's intentions are much clearer. No #isNil checks, no exceptions, no extra code to confuse the issue.

Furthermore, with the "exception throwing" nil, even when the code is written to avoid #isNil tests and/or exception handlers, the coding style is usually altered in other ways to compensate. And, invariably, these style alterations don't produce code as elegant as if you had a "message eating" nil. "Message eating" nil creates simpler, more elegant code. This was almost the unanimous opinion of both camps of the Objective-C debate on this.

But of course, the "exception throwing" camp argued that this "simpler" code was also potentially more troublesome, and sometimes even buggy. And, they gave examples to illustrate. But their examples also all seemed to fall into one of two arguments.

The first argument boiled down to the observation that, with a "message eating" nil, if a message sequence produces nil as the final result, it is more difficult to determine exactly where the breakdown occurred. In other words, what was the message that produced the first nil? And, the response to this argument was: the programmer typically doesn't care what message produced the first nil, and even if he did, he would explicitly test for it. And of course, the response to this response was: the programmer should care, but typically won't care, therefore the "message eating" nil is a feature which promotes bad programming habits. And of course, this in turn elicited a response that essentially just disagreed with their conclusions and challenged their statements, such as why the programmer should care, etc. And so the debate raged.

But nonetheless, both sides seemed to admit that the "message eating" nil tended to create simpler, more elegant looking code. And, the existing code base tended to substantiate this conclusion. And simple code is good code, as long as it is also accurate code.

So, with a "message eating" nil, is the resulting code accurate? Or does the "message eating" nil tend to introduce subtle bugs? To this question, the "exception throwing" crowd said it introduces subtle bugs, and they gave specific examples. Interestingly enough, all of their examples that I looked at were with statically declared variables, and those examples typically illustrated the platform-dependent idiosyncrasies that developed when a nil was coerced into a static type.
One specific example was illustrated via the following code snippet (which I have modified to conform to Smalltalk syntax instead of Objective-C syntax):

    value := widget floatValue

In this example, if `value' is statically declared to be a variable of type float (floats are not objects in Objective-C, but are instead native data types), and the #floatValue[2] message returns nil instead of a valid float number, then after the assignment has completed, `value' will equal zero on the Motorola M68K family of processors, but on the Intel processor family, it ends up being a non-zero value, because of the peculiarities of implicit casting of a nil to a native float datatype. This is clearly an undesirable result, and can lead to subtle bugs.

But, while those examples might be relevant for Objective-C, they are totally irrelevant for Smalltalk. There is no static declaration of variable types in Smalltalk, nor are there native data types (non-objects). It's a non-issue in Smalltalk.

So, should the semantics of nil in Smalltalk be changed such that it eats messages? This is easy to do by changing #doesNotUnderstand: to just answer self. Should we do it? No, I don't think so. There has been too much code already written that is now expecting nil to throw exceptions. To change the semantics of nil from "exception throwing" to "message eating" at this time would likely break a large body of that code. It could be a very painful change, indeed.

Furthermore, even in Smalltalk, the first objection to a "message eating" nil still stands; to wit, in a given message sequence whose final value is nil, it is difficult to determine what object first returned the nil. While it is purely a subjective opinion as to how important that objection really is, I don't know how anyone could not agree that it is indeed a valid objection. Minor perhaps (and perhaps not), but valid.

So, instead of modifying Smalltalk's nil, let's now briefly look at an alternative, that of sending back a specialized Null object that has message-eating semantics. The first public document that I am aware of that explored this alternative is Bobby Woolf's excellent white paper, "The Null Object Pattern"[3], although earlier works likely do exist. In that paper (which is now about five years old, almost an eternity in computer time), he also uses the VisualPart example from above to illustrate his pattern. In fact, that is precisely why I also chose to illustrate the results of using a "message eating" null via this same VisualPart example. That way, I could keep things simple and consistent, without introducing too much additional code for all of the illustrations.

The "Null Object Pattern" essentially recommends the creation of a "do nothing" null object which implements the same protocol as the original object, but does nothing in response to that protocol. For the VisualPart example above, this pattern requires the creation of a class called NoController which implements the protocol of a controller, but does nothing in response to it. Doing nothing, however, means something special to a controller. For example, the NoController is expected to answer false to the #isControlWanted message. Why is this important? Because clients of NoController expect a boolean result to the #isControlWanted message, and they might in turn try sending #ifTrue: or #ifFalse: to that result, and only booleans (and perhaps "message eating" nils) respond to #ifTrue: and #ifFalse:.
The NoController has to return something that will respond to these boolean messages, or else the NoController is not going to be plug-compatible with a real controller.

But, suppose we instead had #isControlWanted return a "message eating" nil? Or better yet, what if the #getController method of the VisualPart returned a "message eating" nil? I believe everything would still "just work", and that this also would be a simple way to generalize the "Null Object Pattern" of Woolf's paper.

Interestingly enough, in Woolf's paper, he describes an advantage of the "Null Object Pattern" by saying it...

    ...simplifies client code. Clients can treat real collaborators and null
    collaborators uniformly. Clients normally don't know (and shouldn't care)
    whether they're dealing with a real or a null collaborator. This simplifies
    client code, because it avoids having to write special testing code to
    handle the null collaborator.

This testimony dovetails nicely with the NeXTSTEP community's assertion that the "message eating" nil behavior of Objective-C appears to simplify code, as I have already demonstrated.

But, to implement the "Null Object Pattern", do we create a NoController class, and a NoOffice, and a NoPhone, and a NoLastNumberDialed class? Where does it end? Indeed, this potential class explosion of the "Null Object Pattern" is also mentioned by Woolf, as follows:

    [One of] the disadvantages of the Null Object pattern [is]...class explosion.
    The pattern can necessitate creating a new NullObject class for every new
    AbstractObject class.

A "message eating" nil would avoid this class explosion, as it is protocol-general instead of protocol-specific. I personally feel that this difference is even a bit reminiscent of the static typing vs. dynamic typing differences (and ensuing debates), as the following chart illustrates:

    Static Typing vs. Dynamic Typing: Should we allow any type of object to be
    handled by (assigned to) this variable, or only objects of a specific type?

    Null Objects vs. Message Eating Nil: Should we allow any type of message to
    be handled by (sent to) this object, or only messages of a specific type
    (protocol)?

I make no secret that I prefer dynamic typing over static typing. And, I also believe that often a general "message eating" nil is more desirable than the more specific "Null Object Pattern", provided of course that the "message eating" nil is implemented correctly. What follows is my implementation of a "message eating" nil, which I call a null, which is an instance of my class Null.

Recall that the first objection against the null was that in a given message sequence whose final value is null, it is difficult to determine what object first returned the null. How do we handle that objection? Simple. Just ask it. The Null class should have an originator instance variable that records who originally invoked the `Null new', as well as a sentFromMethod instance variable that records from what method of the originator the `Null new' was invoked.

But does that mean that the creator of the null must now tell the null, so that the null can tell you? That sounds like a lot of work! And, what if someone forgets those extra steps? Simple. Don't require the extra steps. Anybody should be able to send `Null new', and the Null class itself should be able to figure this information out. But that is not possible to do unless the Null class can somehow automatically determine who is calling one of its instance creation methods. In other words, we need to detect who the sender of the message is.
How do we do that? It is not part of standard Smalltalk! Well, here is how to do it in VisualWorks:

    Object>>sender
        ^thisContext sender sender receiver

    Object>>sentFromMethod
        ^thisContext sender sender selector

Now, in your other code, anytime you want to return a nil that also has message-eating semantics (which I call a null), you use `^Null new' from your code instead of `^nil'. Then, your caller can easily discover the originator of the null if it wishes, simply by asking.

If you are concerned about the potential proliferation of nulls with such a scheme, another trick you might try is to create a default null using a `Default' class variable:

    Null class>>default
        Default == nil
            ifTrue:
                [Default := super new initialize.
                Default originator: Null.
                Default fromMethod: #default].
        ^Default

The default null can then later be accessed via `Null default' instead of `Null new'. I actually use this quite often for automatic instance variable initialization in my abstract DomainModel class, which is the superclass of all of my domain objects:

    DomainObject>>initialize
        1 to: self class instSize do:
            [:ea |
            (self instVarAt: ea) isNil
                ifTrue: [self instVarAt: ea put: Null default]].
        ^self

I have found that such a scheme does indeed simplify the domain logic, just as this article indicates that it should. In fact, sometimes it has dramatically simplified things. And, I have never had any problems with this scheme, as long as I have limited its use to the domain layer only. I have, however, had problems trying to integrate some of these ideas into the GUI layer, and decided long ago that it was a bad idea in that layer.

My own implementation of the Null class was originally written in VisualAge, and was originally part of a much larger domain-specific class library. This class library originally tried a number of ideas on an experimental basis, to see if problems resulted from their use. The use of the Null pattern described in this article was one of those experimental ideas. Even though it is actually a small idea, its widespread use in the domain layer was encouraged by my previous Objective-C experience, but I still didn't know if I would run into other subtle issues while using it in Smalltalk. But I feel comfortable with it now in the domain layer (but not in the GUI layer).

Some time after creating the class library I mentioned above, the entire class library was ported to GemStone, and then finally the entire class library was moved to VisualWorks. A filein of the VisualWorks implementation of the Null class follows. Email me at [hidden email] if you want either the VisualAge or GemStone versions, and I'll try to dig them out.

If you instead decide to create your own Null class in VisualAge, another thing to realize is that #isNil is inlined in VA (but not in VW). Thus, something like `Null new isNil' will always answer false in VA, even though your Null>>isNil method explicitly answers true. Hence, in your domain code, with VA you probably want to create an #isNull method and use that instead of #isNil. That is what I originally did, and that convention carried to the GemStone version, but I have since broken that convention in the VisualWorks version.

1. As a sidebar, one could also argue about the appropriateness of a #setStringValue: method in this example, and its implied limitation of only setting, or showing, strings, rather than having perhaps a more generic #show: method that can show other types as well.
To this, I have three things to say: first, consider the commonly used #show: method of the Transcript class in Smalltalk, and the argument type it expects (strings); second, #setStringValue: is the actual method name used for TextField widgets in NeXTSTEP; third, who cares, this is just a contrived example anyway.

2. #floatValue is also an actual message implemented by TextFields under NeXTSTEP, just as the #setStringValue: of the earlier code snippets is.

3. Published in Pattern Languages of Program Design. Addison-Wesley, James Coplien and Douglas Schmidt (editors). Reading, MA, 1995; http://www.awl.com/cseng/titles/0-201-60734-4.
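The filein mentioned above is not reproduced in this repost. As a rough reconstruction (not Nevin's actual code), a minimal VisualWorks-flavored Null class along the lines the article describes might look like this:

    Object subclass: #Null
        instanceVariableNames: 'originator sentFromMethod'
        classVariableNames: 'Default'
        poolDictionaries: ''
        category: 'Null-Support'

    Null class>>new
        "Record the creator automatically, using the Object>>sender helpers above."
        | null |
        null := super new.
        null originator: self sender.
        null fromMethod: self sentFromMethod.
        ^null

    Null>>originator: anObject
        originator := anObject

    Null>>fromMethod: aSelector
        sentFromMethod := aSelector

    Null>>originator
        "Answer the object that created me; this addresses the traceability objection."
        ^originator

    Null>>sentFromMethod
        "Answer the selector of the method that created me."
        ^sentFromMethod

    Null>>isNil
        ^true

    Null>>notNil
        ^false

    Null>>printOn: aStream
        aStream nextPutAll: 'a Null (from '.
        aStream print: originator; nextPutAll: '>>'; print: sentFromMethod.
        aStream nextPutAll: ')'

    Null>>doesNotUnderstand: aMessage
        "The message-eating behavior itself: silently answer self to everything else."
        ^self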
One thing: what I like about the null pattern is having the information about where the null object was first created, since this potentially helps track down errors. This is not so easy with nils, because exceptions can be thrown in places far, far away from the code which originally returned the nil.

Assume some message returns nil or an object:

    a := b someMessage.

Later `a' is put somewhere in a dictionary:

    dict at: #someKey put: a.

and only after some more assignments does some code need to send a message to whatever `b someMessage' first returned. So, finding the original source of an object makes a big difference between using nil and using null. And I think nil and null can coexist fine in Smalltalk.

For those who want to try: to make null work, we simply need to slightly modify the parser, bytecode, and interpreter, so that every time the parser sees a #null symbol in method code, it generates a special bytecode which tells the interpreter to create a new instance of the Null class and fill its ivars with the current receiver and method name. So, every time I write something like:

    myMethod
        ^ null

this will be equivalent to:

    myMethod
        ^ Null receiver: self method: thisContext method
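A sketch of the class-side constructor this scheme assumes (hypothetical, reusing the originator/fromMethod accessors from Nevin's article) could be:

    Null class>>receiver: anObject method: aMethod
        "Hypothetical constructor: remember where this null was born.
        Note aMethod here is a CompiledMethod (thisContext method), not a selector."
        ^self basicNew
            originator: anObject;
            fromMethod: aMethod;
            yourself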
In reply to this post by keith1y
> Date: Wed, 25 Jul 2007 16:58:49 +0100
> From: [hidden email]
> To: [hidden email]
> Subject: Message Eating Null - article
>
> To contrast this, how then would you have to code it if nil has the
> "exception throwing" behavior? You would do it similar to this:
>
> | tmp |
> tmp := person office.
> tmp notNil ifTrue: [tmp := tmp phone].
> tmp notNil ifTrue: [tmp := tmp lastNumberDialed].
> tmp notNil ifTrue: [lastNumber := tmp asString].
> widget setStringValue: lastNumber.
>
> Yuck...all those explicit tests for nil are really ugly! Of course, you
> could have instead wrapped the original code in an exception handler,
> and thus avoided the nil tests, something like as follows:
>
> [lastNumber := person office phone lastNumberDialed asString.
> widget setStringValue: lastNumber]
> on: Object messageNotUnderstoodSignal do: [].
>
> This looks a bit simpler than the previous example, but even this
> example contrasts poorly to the first example. The first example of
> these three is much simpler! You just "do it", without worrying about
> exceptions, exception handlers, or explicit tests.

Yuck, I wouldn't do either of those. I would do:

    widget setStringValue: #(office phone lastNumberDialed asString)
        inject: person into: [:obj :sel| o == nil ifTrue: [ nil ] ifFalse: [ obj sel ] ]
>
> Yuck, I wouldn't do either of those. I would do:
>
> widget setStringValue: #(office phone lastNumberDialed asString)
> inject: person into: [:obj :sel| o == nil ifTrue: [ nil ] ifFalse: [
> obj sel ] ]

you would?

To me the above looks as close to perl as smalltalk is hopefully ever likely to get!

;-)

Keith

p.s. Someone once accused me of being a PL/1 programmer in a former life.
In reply to this post by keith1y
What on earth are you talking about????? That is #inject:into:, known in functional programming as a fold. Perl doesn't even have a fold operator. The only part that looks bad is the block, and only because I don't know what #ifNotNil: returns if the receiver is nil. Plus with this I have the option of doing a home return when I first encounter the nil, while your Null will have to keep chomping until the end. :)
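For instance, the home-return option might look like this when wrapped in a helper method (a sketch with hypothetical names, not tested):

    lastNumberFor: person
        "Fold across the selectors, bailing out of this method at the first nil."
        ^#(office phone lastNumberDialed asString)
            inject: person
            into: [:obj :sel |
                obj == nil ifTrue: [^nil].    "home return: stop folding early"
                obj perform: sel]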
In reply to this post by keith1y
oh, and I forgot that I actually needed: obj perform: sel
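With that fix, the `o'/`obj' typo corrected, and parentheses added so the fold's result (rather than the literal array) is what gets passed to #setStringValue:, the expression presumably intended reads:

    widget setStringValue: (#(office phone lastNumberDialed asString)
        inject: person
        into: [:obj :sel | obj == nil ifTrue: [nil] ifFalse: [obj perform: sel]])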
In reply to this post by keith1y
Hello,
I would like to point out that both of these claims are unrealistic.

Specifically, Keith's examples all use low-level control-flow messages like "notNil ifTrue: [" whereas non-novice Smalltalkers should be using "ifNotNil:" and "ifNotNilDo: [:obj |". The example would best be:

    widget setStringValue: (#(office phone lastNumberDialed asString)
        inject: person into: [:obj :sel | obj ifNotNil: [obj perform: sel]])

without the message-eating Null pattern. Or if #perform: irks you, you can use:

    widget setStringValue: (person office ifNotNilDo: [:office |
        office phone ifNotNilDo: [:phone |
            phone lastNumberDialed ifNotNilDo: [:number |
                number asString]]])

Like Keith, I find this a problem to have to write and maintain in code, but I'm not compelled to write long essays on it. I tend to think that chained message-sends represent a way in which distantly-related code can reach across protocols a bit too far. And Keith is being a bit unfair in that a several-page-long essay (with a really wide column justification width) will not be adequately rebutted in an email forum, thus keeping the discussion from balance.

My own view is that message-eating null is usable in controlled circumstances, but that in general one cannot control the circumstances (you can't know who *won't* receive a null as a parameter, so code written without that in mind fails in strange ways) in Smalltalk-80.

It is also worth pointing out that "message-eating null" most closely resembles the apposition of type T to NIL in Lisp, where one is the supertype of every type and the other is the subtype of every type. In Lisp, you do not interchange these, for good reason that I believe applies here but don't have time to gather evidence in support.

I think his packages which use message-eating-null would be a lot more palatable if they didn't...

On Jul 26, 2007, at 12:06 PM, Keith Hodges wrote:

>> Yuck, I wouldn't do either of those. I would do:
>>
>> widget setStringValue: #(office phone lastNumberDialed asString)
>> inject: person into: [:obj :sel| o == nil ifTrue: [ nil ] ifFalse:
>> [ obj sel ] ]
> you would?
>
> To me the above looks as close to perl as smalltalk is hopefully
> ever likely to get!
>
> ;-)

That is disingenuous. Please use honest arguments.

> Keith
>
> p.s. Someone once accused me of being a PL/1 programmer in a former
> life.

--
-Brian
http://briantrice.com
The article was posted as a historical artifact; it is not really in need of rebuttal. The example quoted unfortunately provides critics with an easy straw man. Personally, however, I don't think this use case is typical at all. I think that there are cases where null is very useful to have around.

cheers

Keith
In reply to this post by J J-6
>
> > > widget setStringValue: #(office phone lastNumberDialed asString)
> > > inject: person into: [:obj :sel| o == nil ifTrue: [ nil ] ifFalse: [
> > > obj sel ] ]

Not being that clever, I find #inject:into: to be the most obfuscated message in the image. Whenever I see it I have to think quite hard to work out what is going on. I was confused long before I got to the block. The original code has the advantage that it works without me needing to know exactly how the effect is achieved.

cheers

Keith
In reply to this post by keith1y
I am not decided against the generalized null pattern just yet [1]. I just wanted to point out that no one is going to put 5 temps in a row like that. The exception might be more common but I never see it.
[1] I am not going to say it's bad, nor am I convinced it is a sure win. Personally I can't recall a single time I have needed something like this. I do tend to chain messages often, but I guess I only do that when I don't expect a nil. If one comes up I want to see it right then, to track it down. But perhaps in other domains than I have been programming in so far it would come up more. At any rate I'm glad it's out there to try out if I need it. Thanks for that Keith.
In reply to this post by J J-6
On Thu, Jul 26, 2007 at 07:23:42PM +0000, J J wrote:
> > > widget setStringValue: #(office phone lastNumberDialed asString)
> > > inject: person into: [:obj :sel| o == nil ifTrue: [ nil ] ifFalse: [
> > > obj sel ] ]
> >
> > To me the above looks as close to perl as smalltalk is hopefully ever
> > likely to get!
>
> What on earth are you talking about????? That is #inject:into:, known in
> functional programming as a fold.

I agree; that is a seriously obfuscated way to access the last number dialed. I don't know how often 4-level chaining would arise in practice, but I would hope it would be as simple as:

    ^ self office phone lastNumberDialed asString

Anything more complex is not lazy enough to get diligently applied everywhere it should.

--
Matthew Fulmer -- http://mtfulmer.wordpress.com/
Help improve Squeak Documentation: http://wiki.squeak.org/squeak/808
In reply to this post by J J-6
I think proper exception handling combined with reflection can produce
a very readable result without the flakiness of message-eating nil:

    lastNumber := [person office phone lastNumberDialed asString]
        ifNilShowsUp: ['']

with #ifNilShowsUp: defined as

    BlockContext>>ifNilShowsUp: aBlock
        ^self
            on: MessageNotUnderstood
            do: [:ex |
                (ex receiver isNil and: [ex signalerContext sender = self])
                    ifTrue: [ex return: aBlock value]
                    ifFalse: [ex pass]]

Or in other words, the second block is evaluated to produce the final result if one of the messages inside the first block returns nil and that nil doesn't understand the following message. All other failures, including MNUs by nil inside the messages sent from the block, fail "properly".

Cheers,

--Vassili

P.S. "ex signalerContext sender = self" would fail to capture a relevant MNU in some cases. The 100% solution is an exercise for the reader. :)
> I think proper exception handling combined with reflection can produce
> a very readable result without the flakiness of message-eating nil:

I think that this misses the point. The power of the message-eating null is in the fact that you have a real object to pass around a system. It is an item that you can use to model things, or more precisely non-things, with. I used it to model empty slots in a model of telecoms equipment, for example, and yes, in the right place it can simplify implementation. Using exception handling around a long calling chain is just not worth the effort.

As for flakiness, a generic message-eating null that does its job could be less flaky than maintaining specific null objects for different domain models.

best regards

Keith
In reply to this post by Vassili Bykov-2
Unfortunately nil will understand some messages...
In Squeak, nil asString -> 'nil'.

Nicolas
In reply to this post by keith1y
On 7/26/07, Keith Hodges <[hidden email]> wrote:
> As for flakiness, a generic message eating null, that does its job could
> be less flaky than maintianing specific null-objects for different
> domain models.

The problem with message-eating null is that it will happily do *more* than its job. This is what I find objectionable. If you are somewhat familiar with Haskell, a message-eating null is very much like the Maybe monad. But unlike Haskell, without a type system to contain it, all object references throughout the system effectively become maybe-values, and all implicit continuation calls turn into bind operators.

In other words, this means that a null accidentally leaking outside the area where it was expected can silently cause some actions to not happen. What you deal with in that case is not just a null of unknown origin, which Nevin focused on in his write-up, but things breaking because some code didn't run in the past because of a stray null value. When we write ifTrue:ifFalse:, we expect that each time it runs, one of the branches is taken. In a system with message-eating null, this is no longer an invariant.

Perhaps we disagree on what constitutes flaky. Flaky in my book is poor locality of failures with respect to their causes. If I open a door and a window falls out, that's flaky. So is it when I change a method and things stop working in an entirely different place because a null value leaked, ended up as the receiver of ifTrue:ifFalse:, and disabled both execution branches.

Indeed, maintaining specialized null objects is more work (however, more often than not they are part of a hierarchy of classes whose protocols you have to coordinate anyway), but that work in the end produces a program that is more predictable and is more likely to break where it's broken. I value that in my programs.

Cheers,

--Vassili
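A hypothetical two-liner (invented names) showing the kind of distant failure being described:

    account := customer account.    "quietly a null if any upstream send was eaten"
    account recordDeposit: 100.     "also eaten: no error raised, and no deposit made"
    "The missing deposit surfaces much later, far from the code at fault."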
Nicely said Vassili
Sean Glazier
In reply to this post by Vassili Bykov-2
Vassili Bykov wrote:
> On 7/26/07, Keith Hodges <[hidden email]> wrote:
>> As for flakiness, a generic message eating null, that does its job could
>> be less flaky than maintianing specific null-objects for different
>> domain models.
>
> The problem with message-eating null is that it will happily do *more*
> than its job. This is what I find objectionable. If you are somewhat
> familiar with Haskell, a message-eating null is very much like the
> Maybe monad. But unlike Haskell, without a type system to contain it
> all object references throughout the system effectively become
> maybe-values, and all implicit continuation calls turn into bind
> operators.
>
> In other words this means that a null accidentally leaking outside the
> area where it was expected can silently cause some actions to not
> happen. What you deal with in that case is not just a null of an
> unknown origin that Nevin focused on in his write-up, but things
> breaking because some code didn't run in the past because of a stray
> null value. When we write ifTrue:ifFalse:, we expect that each time it
> runs one of the branches is taken. In a system with message-eating
> null, this is no longer an invariant.

Have you ever found something not working because a value you expected to be initialized was not, some time in the past? Furthermore, do you have any idea where in the past the initialization failed to happen? With null you can find out where it started.

I am not suggesting indiscriminate use of null. It is useful in some situations. In actual fact, in versions prior to my latest package releases, null would not have been able to ignore ifTrue:/ifFalse: anyway; it would have to throw an error. Neither can it in other languages such as Ruby, since the equivalents of ifTrue:/ifFalse: are somewhat more wired into the underlying runtime than in Smalltalk.

best regards

Keith
In reply to this post by keith1y
On Thu, 26 Jul 2007 12:06:32 -0700, Keith Hodges
<[hidden email]> wrote:

> p.s. Someone once accused me of being a PL/1 programmer in a former life.

Hey, I resemble that remark. And, technically, it's PL/I. <s>

I think the philosophy of its design was correct: make the compiler do the work instead of the programmer. Ultimately such things are less useful than a more minimal language with lots of flexibility, but I'd still rather use it than C, K&R's snark about PL/I's data type conversion aside.
In reply to this post by Brian Rice
On 7/26/07, Brian Rice <[hidden email]> wrote:
> > To me the above looks as close to perl as smalltalk is hopefully
> > ever likely to get!
> >
> > ;-)
>
> That is disingenuous. Please use honest arguments.

I think it *is* an honest argument. One of the strong points of Smalltalk is its readability, and that code block was awful in that respect.

FYI - I used the message-eating nil pattern in one of the largest systems I wrote in Smalltalk, and I loved it. We did restrict ourselves to the business layer only; the couple of experiments we did with it in other layers quickly found their way to the wastebin.

Just my two cents, and now I'm off on holiday :)

--
"Human beings make life so interesting. Do you know, that in a universe so full of wonders, they have managed to invent boredom." - Death, in "The Hogfather"
In reply to this post by keith1y
> Date: Thu, 26 Jul 2007 13:14:52 -0700
> From: [hidden email]
> To: [hidden email]
> Subject: Re: Message Eating Null - article
>
> > What on earth are you talking about????? That is #inject:into:, known in
> > functional programming as a fold.
>
> I agree; that is a seriously obfuscated way to access the last
> number dialed. I don't know how often 4-level chaining would
> arise in practice, but I would hope it would be as simple as:
>
> ^ self office phone lastNumberDialed asString
>
> Anything more complex is not lazy enough to get diligently
> applied everywhere it should

Yes, but it isn't quite that simple. Normally I would do exactly what you suggested. But in that case I would want a debugger if any of those messages returned nil. If you want different handling for nil than the default, you have to do a test after each message send. This can be done either by making a special object as Keith suggested, by using a fold as I suggested, or by brute-forcing your way through as in Keith's example of what not to do. :)