Daniel Vainsencher writes:
> When we ask how important the update stream is, I think we need to
> separate two questions:
> - How important is it to be able to update every image forever across
>   versions?
> - How important is it to be able to update most of the time, and only get
>   a new image once or twice per release when things break?

Personally, I doubt that it's that important to update every image forever, because it currently never works. Well, at least not if you've loaded a few packages which have extensions.

For a release team customer like me, a Monticello-based system has a better chance of allowing updates because it provides better tools for dealing with conflicting changes. The update stream lets the last version win, which can easily create problems if there are any changes from outside the stream.

Bryce
Cees de Groot wrote:
> Could you explain a bit about the Gemstone model you are referring to?
> I only have marginal GS exposure, and I'm sure that I'm not the only
> one.

It's been a while, so I may be missing some of the details... GemStone keeps multiple versions of classes in the image, each class in a linear collection of versions. Instances can hop from one version to the next lazily (or be pushed), and custom methods are written to specify how variable values should change. Such a conversion happens in a transaction, so every instance is consistent - in one version or the other.

So this gives a framework for making class evolution happen in a nice, orderly manner, without data loss or even significant downtime/delays. This is clearly a priority if we remember GS is a database, but never-ending image transformations or recompilations are a bane for us too. Certainly when stopped half way through... ;-)

These are not our top issues to solve, and almost certainly not the techniques we need to use, but it's worth remembering that solutions are out there. Alan Lovejoy's recent message sounds more directly relevant.

Daniel Vainsencher
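A rough sketch of the "custom conversion method" idea mentioned above. The names here (Account, balanceInCents, migrateFrom:) are purely illustrative and are not GemStone's actual protocol; the point is only that the new class version gets to decide how an old instance's variable values carry over:

    Account >> migrateFrom: oldAccount
        "called lazily (or on demand) when an instance hops from the old
         class version to this one"
        balance := oldAccount balanceInCents / 100.   "store whole currency units instead of cents"
        currency := #USD                              "new ivar gets a real default, not nil"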
Daniel said
"...just wanted to remind ourselves that systematic treatment of live system updates is possible at least in the sense that GemStone does it. " Cees de Groot wrote: > Could you explain a bit about the Gemstone model you are referring to? > I only have marginal GS exposure, and I'm sure that I'm not the only > one. I certainly don't want to talk on Daniel's behalf... but I have some exposure to GS and can maybe suggest a few things of interest. Apologies if this is off-topic, way off, etc... One thing that GemStone provides is a concept of 'class version'. This is separate from the concept of 'class'. If we think about the operation of a Smalltalk environment we have quite a simple view of the changing of the shape of a class. Say we have a class User, with an instance variable firstName. Say we make a few users: #(fred wilma barney betty) collect: [:each | User named: each] Now we have 4 instances of User. Say we now add another instance variable to User, familyName. By adding this new instance variable we change the shape of the class and on our behalf all existing instances are populated with the new variable familyName, referencing nil. We can go on to make more Users and provide the existing instances with suitable values for familyName if we wish. I say simple 'view' above because it's not necessarily a simple operation. We do, however, just watch it happen. So on the concept of class version in our image, we can define the shapes A and B of User, and observe the transition from A to B but we have all our instances moved from A to B. This is after all a very convenient feature of our environment... In GemStone we can create a similar class User, with the same initial instance variable firstName, and make the same four instances. If we commit these changes within a transaction we make them available to all users of the database. We can now add the second instance variable to our GS class. At this point GemStone provides us with this additional concept of class version. By adding the second instance variable we alter the creation of *new* instances of User. So we can make User firstName: #pebbles familyName: #flintstone We do not, however, force this shape change onto existing GS instances of User. This is because the original User class object is still available and providing the class of the original four instances. So you have the following situation in a pseudo fashion: User[1] instanceVariableNames: 'firstName ' inspecting User[1] allInstances --> (an array of size 4, fred wilma...) User[2] instanceVariableNames: 'firstName lastName ' inspecting User[2] allInstances --> (an array of size 1, pebbles...) In the GemStone class browser you can view these two distinct versions of User [1] and [2]. You can then GS evaluate User classHistory which answers a class history object (basically an array) containing the two versions of the class, which are respectively the first and second class objects we made. What you can then do is migrate all the instances of the first class version to instances of the second class version. You can do this at any point you decide as long as you have class versions to migrate to. You can then hook into this migration to perform any necessary upgrade steps. So GemStone provides the ability to have a number of versions of a class, and all associated instances, all running in the same DB, at the same time. It would be an understatement to say it's cool... Of course, I've only talked about state. 
Of course, I've only talked about state so far. As classes reference the method dictionaries that define their instances' behaviour, these are versioned by the same process. You can evolve the state and behaviour of your classes and keep older instances around that still behave 'correctly'. It is up to you to define what 'correctly' means, because you have to decide what protocol the clients of your objects expect in the presence of instances of different versions of the class.

From the point of view of updating a live object system - we started off there, didn't we! - the ability to do any of this at all, and within a transaction, is particularly powerful; not least because you can abort. You can try out the migration, and even run unit tests against it, but not actually commit the changes.

Cheers,
Mike
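A rough sketch of the try-it-then-abort pattern described above, using GemStone's System transaction protocol (beginTransaction / abortTransaction); the migration selector and the test case name are illustrative only:

    System beginTransaction.
    (User classHistory first) allInstances
        do: [:each | each migrateTo: User classHistory last].
    UserMigrationTest suite run.     "hypothetical SUnit tests against the migrated objects"
    System abortTransaction          "throw the whole attempt away; nothing is committed"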
Adrian Lienhard wrote:
> What I wanted to say is: let's identify what is most critical and
> encourage people to work on this now, rather than talking about what
> would be the perfect solution that solves all problems (although that's
> nice too).

That's why I said "start here" ;-) There is a good list of things to choose from, all of which are (in my understanding) necessary. If there is overlap with your (or anyone else's) needs, great.

> Of course, then, everybody has different needs (for example, it seems
> you care less about speed, whereas I would rather like to have more
> speed than having correctly working overrides because I can work around
> the latter but not the former) but I have the impression that with some
> concrete steps we could already go a step forward. Let's take the atomic
> load. It seems it would solve a couple of problems (e.g., load order
> dependencies, moving methods/classes between packages), no?

Depends on what you mean by "atomic load". If atomic means you load and compile everything, and when finished you install everything in one big become, then yes, this would fix many problems. If you mean "load packages together" instead of one after the other, then no, it wouldn't.

Cheers,
  - Andreas
On Mar 15, 2006, at 1:17 PM, Andreas Raab wrote:

> Depends on what you mean by "atomic load". If atomic means you load
> and compile everything, and when finished you install everything in
> one big become, then yes, this would fix many problems. If you mean
> "load packages together" instead of one after the other, then no,
> it wouldn't.

Earlier Adrian mentioned SystemEditor, so I think he's referring to the first option. SystemEditor does exactly that - it builds new classes and compiles methods in a sandbox, and then does a big become at the end. (Well, actually, a #become: and a #becomeForward:.) SystemEditor is mostly complete and works pretty well, but there are still some subtleties to be worked out. I've been able to use it to atomically load some packages in Monticello2, but it's still tripping over edge cases.

The other aspect of SystemEditor is that it's more flexible than ClassBuilder about migrating instances. It currently has one migration strategy - the standard one that maps values according to ivar names and assigns nil to any new ivars - but other migrators can be used as well. I plan to take advantage of this in Monticello2 with a migrator that can handle class and ivar renames, and of course custom migrators can be built as well. The only thing missing to allow this is a protocol for specifying the migrator to use for a given class.

Colin
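Not SystemEditor's actual code - just a sketch of the "build everything aside, then install with one big become" shape described above. The editedClasses collection, originalClass and buildReplacement are placeholder names; elementsForwardIdentityTo: is Squeak's bulk become on Array:

    | originals replacements |
    originals := editedClasses collect: [:each | each originalClass].
    replacements := editedClasses collect: [:each | each buildReplacement].
    "nothing in the running system has changed yet; this one call swaps
     every old class for its rebuilt copy at once"
    originals asArray elementsForwardIdentityTo: replacements asArray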
Colin Putney wrote:
> The other aspect of SystemEditor is that it's more flexible than
> ClassBuilder about migrating instances. It currently has one migration
> strategy - the standard one that maps values according to ivar names and
> assigns nil to any new ivars. But other migrators can be used as well. I
> plan to take advantage of this in Monticello2 with a migrator that can
> handle class and ivar renames, and of course custom migrators can be
> built as well. The only thing missing to allow this is a protocol for
> specifying the migrator to use for a given class.

Remember that migrating instances is really done by the class itself via #updateInstancesFrom:, and probably shouldn't be hardcoded in either ClassBuilder or SystemEditor.

Cheers,
  - Andreas
On Mar 15, 2006, at 2:44 PM, Andreas Raab wrote:

> Colin Putney wrote:
>> The other aspect of SystemEditor is that it's more flexible than
>> ClassBuilder about migrating instances. It currently has one
>> migration strategy - the standard one that maps values according
>> to ivar names and assigns nil to any new ivars. But other
>> migrators can be used as well. I plan to take advantage of this in
>> Monticello2 with a migrator that can handle class and ivar
>> renames, and of course custom migrators can be built as well. The
>> only thing missing to allow this is a protocol for specifying the
>> migrator to use for a given class.
>
> Remember that migrating instances is really done by the class
> itself via #updateInstancesFrom: and probably shouldn't be
> hardcoded in either ClassBuilder or SystemEditor.

Yeah. SystemEditor doesn't use #updateInstancesFrom:, and that's a bit of a change from the way ClassBuilder works. On the other hand, it's not hardcoded, as I was explaining in the paragraph above.
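For anyone who hasn't seen the hook Andreas mentions, a minimal sketch of a class-side override in Squeak. The familyName accessors and the 'unknown' default are assumed for illustration, and the exact moment ClassBuilder sends this during a rebuild is glossed over:

    User class >> updateInstancesFrom: oldClass
        "let the standard migration copy values across by ivar name first..."
        super updateInstancesFrom: oldClass.
        "...then give the freshly added ivar a real default instead of nil"
        self allInstances
            do: [:each | each familyName ifNil: [each familyName: 'unknown']]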