Hi all--

Now that I've been living with it for a while, here are a few details on the shrinking technique I mentioned a couple of months ago (where methods that haven't been run recently get reclaimed by the garbage collector). Also, at the end, are some notes on what I'm doing now and my next steps.

When I did the earlier imprinting work, I added two bytes to the compiled method trailer format. One bit in those bytes indicates whether the virtual machine has run the method (the other fifteen are for recording the method's linear version). I changed the virtual machine to set that bit every time it runs a method. By clearing that bit on all methods, then examining them later after running the system for some duration, one can identify all the methods that were run over that duration. A method without that bit set is "inert".

I extended the garbage collector with an alternative mark phase that doesn't mark or trace inert methods (with the exception of methods associated with currently-active contexts). References to inert methods are replaced with nil. This leaves some method dictionaries with nils where methods used to be.

My typical development mode so far has been to use a "full" object memory equipped with remote-messaging tools to control a target object memory over a network. This lets me make changes to the target with impunity. In particular, I don't have to worry about the target getting wedged because I've broken the user interface, because I'm using another system's user interface. I've got a remote system browser, debugger, process browser, and workspace. The remote system browser uses the master system's compiler, and transfers methods directly into the target, so the target need not have a compiler (or ClassBuilder). No class names are ever exchanged between master and target, and source code is completely optional.

I changed the target system's Object>>doesNotUnderstand: so that, before raising an exception, it first attempts to install the missing method from the connected master system. If installation is successful, it resends the message and carries on. I changed method lookup in the virtual machine so that when a nil is encountered where an inert method used to be, it is handled as a message-not-understood. So, the master system effectively acts as a virtual memory for the target, providing missing methods inline as they are encountered.

In the past (before the garbage collector changes), a new target system was created as a copy of the master system. I would then connect master and target, and use the remote tools in the master to shrink the target manually. I've done two passes that way; the first took three months of work in 2003, the second took two weeks of work in 2005. The reason I've made multiple passes is that I kept realizing significant system features I'd forgotten to include before I started shrinking (e.g., remote debugging), and it'd be much more work to retrofit them into the target than to just shrink a new master copy. Since there will probably always be new fundamental things to add, I decided I ought to just make the shrinking process more automatic.

Now I've got a class in the master that can automatically gut the target in 30 minutes. It invokes code in the target that can throw out entire "inert classes" (classes which have no references and no non-inert methods). I got rid of the entirety of Morphic this way, for example, without having to understand anything about how Morphic works.
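(For concreteness, here is a minimal Smalltalk sketch of the two image-side pieces just described. The names MasterConnection and installMethodFor:class:, and the exact position of the "has run" bit in the trailer, are assumptions for illustration rather than the actual Spoon code.)

    CompiledMethod>>hasRun
        "Answer whether the VM has run this method since the imprint bits were
         last cleared. This sketch assumes the flag is the high bit of the last
         trailer byte; the real bit position may differ."
        ^ ((self at: self size) bitAnd: 16r80) ~= 0

    Object>>doesNotUnderstand: aMessage
        "Before raising the usual exception, try to fault the missing method in
         from the connected master system; if that succeeds, resend and carry on."
        | exception |
        (MasterConnection default installMethodFor: aMessage selector class: self class)
            ifTrue: [^ self perform: aMessage selector withArguments: aMessage arguments].
        exception := MessageNotUnderstood new.
        exception message: aMessage.
        ^ exception signal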
***

In January I wrote a module-aware webserver for the target, to which the web-based Spoon installer will redirect the user after downloading and starting the system. The user will then be able to discover and select modules to load and unload, make snapshots, quit, etc. Currently I'm working on a Naiad module which installs the ClassBuilder (for manipulating existing classes in the target). (Naiad is Spoon's module system.)

After that I plan to:

- make a module which installs every last bit of 3.8 final, just to show that one can recreate familiar old systems

- make modules which install the VM construction tools

- make a VM from a Spoon system with those modules in it

- throw away the VMs and object memories that I started with!

- make modules for other things I want to use (Quoth, Chronos, Weather on Display, Tweak, etc. etc.)

Ideally, I'd like the new modules to contain well-factored and highly-readable expressions of the ideas of the old subsystems, rather than just blind repackagings, but we'll see... it's tempting to just imprint things to save time. :) (For more about imprinting, see [1].)

thanks,

-C

[1] http://lists.squeakfoundation.org/pipermail/spoon/2004-October/000061.html

--
Craig Latta
improvisational musical informaticist
www.netjam.org
Smalltalkers do: [:it | All with: Class, (And love: it)]
Hi Craig,
Thanks for the detailed description. Any plan for "Shrinking a Language" ? ;-)

Please keep up the great work. I am looking forward to a round-trip (shrinking-growing) working copy of Spoon.

Cheers,

PhiHo

----- Original Message -----
From: "Craig Latta" <[hidden email]>
To: <[hidden email]>
Sent: Friday, April 14, 2006 7:33 PM
Subject: Spoon progress 15 April 2005: inert method deletion details and next steps
In reply to this post by ccrraaiigg
This is really important work. You create an opportunity to evolve a better
and even readable Squeak to the great benefit of all.

I remember that there was a study a long time ago to compare how long it took to become proficient in C++ and Smalltalk. The answer was the same in both cases - about one year. The reason for the unexpected Smalltalk result was that a person needs this long to become familiar with the class library.

Congratulations, Craig. Keep up the good works

--Trygve
--
Trygve Reenskaug          mailto: [hidden email]
Morgedalsvn. 5A           http://heim.ifi.uio.no/~trygver
N-0378 Oslo               Tel: (+47) 22 49 57 27
Norway
In reply to this post by ccrraaiigg
On Fri, Apr 14, 2006 at 04:33:49PM -0700, Craig Latta wrote:
> Hi all--
>
> Now that I've been living with it for a while, here are a few details
> on the shrinking technique I mentioned a couple of months ago (where
> methods that haven't been run recently get reclaimed by the garbage
> collector). Also, at the end, are some notes on what I'm doing now and
> my next steps.

Wow! This is truly quite remarkable work.

Dave
In reply to this post by ccrraaiigg
> After that I plan to:
>
> - make a module which installs every last bit of 3.8 final, just to show
>   that one can recreate familiar old systems
>
> - make modules which install the VM construction tools
>
> - make a VM from a Spoon system with those modules in it
>
> - throw away the VMs and object memories that I started with!
>
> - make modules for other things I want to use (Quoth, Chronos, Weather
>   on Display, Tweak, etc. etc.)

Craig, it sounds like you will have a 3.8-compatible system that can "breathe" code in and out; methods can be GC'd or dynamically re-installed on demand. I think this "upgrade path" could really help Spoon become the basis for future Squeaks. Bravo!

Can one master image serve multiple target images? Magma test cases run in four images: client1, client2, server and test-conductor. Let's say I wanted to have all four of these start out "gutted". I would kick off the test suite in the conductor image, which would require it and the other three to dynamically download only the needed code from the master Spoon image. At the end of the cases, the conductor image would have all of the test code, the two clients would have most of the client code, and the server should only have the server code.

Does it preserve the package semantics (i.e., class categories) when it brings methods so I can save new streamlined Monticello packages? Does Naiad use class categories in its packaging scheme or something different?

Is there a way to know if any methods were installed *recently*? I could imagine someone deploying a gutted target image to "production" and leave the master image connected for the first couple of months. There might be some "nervousness" to disconnect the master image unless assured no methods had faulted in the last month under heavy usage..

- Chris
> Is there a way to know if any methods were installed *recently*? I
> could imagine someone deploying a gutted target image to "production"
> and leave the master image connected for the first couple of months.
> There might be some "nervousness" to disconnect the master image unless
> assured no methods had faulted in the last month under heavy usage..

Of course, the promoters of TDD would claim that if there isn't a test covering it, the code might as well not exist ;-) So if you ship all the methods that have been touched by your test suite you should be good, eh? ;-)

Cheers,
  - Andreas
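(For concreteness, a test-driven imprinting pass along those lines might look roughly like this. It is only a sketch: resetImprintBit and reclaimInertMethods are hypothetical names standing in for whatever Spoon actually calls these steps, and the SUnit entry point may differ.)

    "Clear all the 'has run' bits, exercise the system through the test suite,
     then let the alternative mark phase reclaim whatever stayed inert."
    CompiledMethod allInstancesDo: [:each | each resetImprintBit].
    TestCase buildSuite run.
    Smalltalk reclaimInertMethods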
Andreas: "Of course, the promoters of TDD would claim that if there isn't a
test covering it, the code might as well not exist ;-)"

Ya know, that sentiment reminds me way too much of the arguments made by the static typing priesthood. At the very least, it's a dogmatic overstatement of the case.

--Alan
In reply to this post by Andreas.Raab
Alan Lovejoy wrote:
> Andreas: "Of course, the promoters of TDD would claim that if there isn't a
> test covering it, the code might as well not exist ;-)"
>
> Ya know, that sentiment reminds me way too much of the arguments made by the
> static typing priesthood. At the very least, it's a dogmatic overstatement
> of the case.

Which -I thought- the smiley at the end made clear. Sorry to see it doesn't. Yes, of course, that was a dogmatic overstatement, it was meant to be. But there is a grain of truth that's worthwhile to discuss - namely that, if anything, tests should be used as "a" primary source for imprinting (I'm putting the "a source" in quotes to point out that I don't mean it to be the sole source just in case someone else is inclined to interpret this as another dogmatic overstatement which in that case it's not supposed to be ;-) <-- and please notice smiley here; this wasn't supposed to be taken too seriously ;-) <-- etc.

Cheers,
  - Andreas
On Apr 16, 2006, at 7:35 PM, Andreas Raab wrote:

> But there is a grain of truth that's worthwhile to discuss - namely
> that, if anything, tests should be used as "a" primary source for
> imprinting

The really nice thing about Craig's scheme is that the marking of methods as active is separate from the specialized GC that cleans out the inactive ones. I'd be interested in a method marker that uses type inference to mark all the methods in a given package, as well as all the methods it depends on, even indirectly. Then you could easily build images for specific purposes without worrying about test coverage, or providing the complete range of expected inputs during imprinting.

Colin
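(For illustration, such a marker might look roughly like the following sketch, with sent-selector reachability standing in for real type inference, so it over-approximates. Here, aPackage methods and markAsActive are hypothetical hooks, and class-side methods are ignored for brevity.)

    | worklist seen |
    worklist := OrderedCollection withAll: aPackage methods.
    seen := IdentitySet new.
    [worklist isEmpty] whileFalse: [
        | method |
        method := worklist removeFirst.
        (seen includes: method) ifFalse: [
            seen add: method.
            method markAsActive. "hypothetical: set the method's 'has run' trailer bit"
            method messages do: [:selector |
                Smalltalk allClasses do: [:class |
                    (class includesSelector: selector) ifTrue: [
                        worklist add: (class compiledMethodAt: selector)]]]]]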
In reply to this post by Andreas.Raab
Andreas:
Actually, I didn't intend to imply that you were endorsing the statement without qualification. Sorry about that. I've been on the warpath against mindless dogma, lately.

And I agree that there's a kernel of truth to the statement, and the argument behind it. Of course, the same can be said about the case for static typing. It's often the case that "sound byte" statements, pro or con, oversimplify the situation.

--Alan
In reply to this post by Chris Muller
Hi all--

Wow, it took me over a day to realize I'd written the wrong year in the original subject line. :)

Thanks to all for the encouragement!

Chris Muller writes:

> Craig, it sounds like you will have a 3.8-compatible system that can
> "breathe" code in and out; methods can be GC'd or dynamically
> re-installed on demand. I think this "upgrade path" could really help
> Spoon become the basis for future Squeaks.

I hope so!

> Can one master image serve multiple target images?

Yes. One of the original scenarios I envisioned was broadcasting a demo to an audience at a conference. Yours sounds like a good one, too.

> Does it preserve the package semantics (i.e., class categories) when
> it brings methods so I can save new streamlined Monticello packages?

Yes. It's all done by modules synchronizing with each other. The target modules can be functionally equivalent to the sources, or completely new, as one wishes.

> Does Naiad use class categories in its packaging scheme or something
> different?

They're optional, rather like source code. Currently, each version of a class object (independently of the methods) is associated with a module, and each version of a compiled method is associated with a module. Everything else is optional annotation. This is a good time to talk about design wishes.

> Is there a way to know if any methods were installed *recently*?

Yes, you can get each method's module to tell you when the method was installed, from which host, etc.

> I could imagine someone deploying a gutted target image to
> "production" and leave the master image connected for the first couple
> of months. There might be some "nervousness" to disconnect the master
> image unless assured no methods had faulted in the last month under
> heavy usage...

I must admit (not surprisingly :) that I'm a fan of using test cases to establish coverage, and improving the test cases over time. I also think this is markedly different than static typing advocacy; in fact, I think they're near polar opposites. The static typing people seem to want to use it in large part as a hedge against testing. I think testing is critical and unavoidable, and we may as well use the information we get from it to the fullest. But sure, there are still pitfalls to deal with.

On the subject of type characterization, Colin writes:

> I'd be interested in a method marker that uses type inference to mark
> all the methods in a given package, as well as all the methods it
> depends on, even indirectly. Then you could easily build images for
> specific purposes without worrying about test coverage, or providing
> the complete range of expected inputs during imprinting.

That would be interesting, but I'm more interested in the reverse: using method marking to inform a type inferencer. :)

thanks again,

-C

--
Craig Latta
improvisational musical informaticist
www.netjam.org
Smalltalkers do: [:it | All with: Class, (And love: it)]
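(As an illustration of that last point, such a query might look something like this; the accessors module, installationTime, and installationHost are assumptions made for the example, not actual Naiad protocol.)

    | module |
    module := (OrderedCollection >> #add:) module.
    Transcript
        show: module installationTime printString; cr;
        show: module installationHost printString; cr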
On Apr 16, 2006, at 10:19 PM, Craig Latta wrote:

> Hi all--
>
> Wow, it took me over a day to realize I'd written the wrong year
> in the original subject line. :)

If you keep improving Spoon at that rate, you'll be "there" in no time!

Josh
In reply to this post by ccrraaiigg
Hi Craig
This is cool. Do you have a description of Naiad? What is a Naiad module? I'm really interested in knowing that, and what we could do with them, and how to improve them if necessary.

In my new lab, I would like to push work on remodularizing OO systems, and I would like to use Squeak as a case study (of course). So we could do more traditional code analysis, or use your imprinting approach.

Are other people welcome to use and build on top of Spoon and Naiad? How do you see possible collaboration?

> Ideally, I'd like the new modules to contain well-factored and
> highly-readable expressions of the ideas of the old subsystems,
> rather than just blind repackagings, but we'll see... it's tempting
> to just imprint things to save time. :)

You mean that you would like to rewrite some parts?

Stef
In reply to this post by Andreas.Raab
But indeed this would be an excellent incentive to get more tests.
And a good coverage tool... coming soon in a pre-pre-pre-alpha version.

Stef
In reply to this post by Andreas.Raab
Riiigggghhht... :-D
In reply to this post by Andreas.Raab
Although Alan didn't have an explicit smiley face, I took it as wry humor, not to mention an enlightening allusion. It made me smile anyway.. Love this list!
In reply to this post by ccrraaiigg
> I must admit (not surprisingly :) that I'm a fan of using test cases to
> establish coverage, and improving the test cases over time. I also think
> this is markedly different than static typing advocacy; in fact, I think
> they're near polar opposites. The static typing people seem to want to
> use it in large part as a hedge against testing. I think testing is
> critical and unavoidable, and we may as well use the information we get
> from it to the fullest. But sure, there are still pitfalls to deal with.

Well, I've never met anyone, even in the Java world, who was really serious when they said, "hey, it compiles, let's go to production..!"

What I took from Alan's allusion was that tests are a static declaration of the desired behavior and how, in the "ultimate" dynamic system, this would just get in the way of "nowness", because you already have ultimate malleability anyway. If you ever run into a problem, just fix it right then; don't declare it in the test and then fix it.

I like to think of the old Enterprise computer of "Star Trek". Jim Kirk didn't need no stinkin' tests any more than he needed a class diagram, because it was quicker to just tell the computer what he wanted.

  (Kirk) "Computer."
  "w e r k i n g ..."
  "Calculate how to break through the Tholian web."

Of course, don't take any of this to mean I don't approve of tests; I have to rely on them heavily in my projects, and we have quite a ways to go before we reach that level of dynamism..

Cheers..
Right; tests just protect against regression. They're not an assurance of completeness. Requirements keep changing, so no system is ever complete.

-C

--
Craig Latta
improvisational musical informaticist
www.netjam.org
Smalltalkers do: [:it | All with: Class, (And love: it)]
On 17-Apr-06, at 2:12 PM, Craig Latta wrote:

> Right; tests just protect against regression. They're not an
> assurance of completeness. Requirements keep changing, so no system
> is ever complete.

"The software is only complete when the last customer dies"

tim

--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
CChheecckk yyoouurr dduupplleexx sswwiittcchh..
In reply to this post by stéphane ducasse-2
> Ideally, I'd like the new modules to contain well-factored and
> highly-readable expressions of the ideas of the old subsystems,
> rather than just blind repackagings, but we'll see... it's tempting
> to just imprint things to save time. :)

I think it's a fascinating question. On the one hand, with the former we'd find an abstract class with

    someMethod
        self subclassResponsibility

which, it could be argued, provides a level of documentation about intent. On the other hand, it could be argued it's just static and fat.

It's intriguing to think about how an imprinting system could facilitate dynamic construction of complex software. Today, we mostly create software like Swiss watchmakers: hand-assembling a fine instrument carefully under a microscope. The instrument usually either performs a fairly narrow function well, or is a more general system that requires too much work to get immediate results. There will always be appreciation for these, but we'll want to be able to sling dynamic assemblies of these instruments quickly. With something like Spoon able to do this dynamically, one might be "enabled" to think at the higher level of these software-instrument abstractions, rather than down so much in classes and methods..