Hi guys
MethodContext allInstances size seems to loop forever or even to crash my VM.

Stef

_______________________________________________
Pharo-project mailing list
[hidden email]
http://lists.gforge.inria.fr/cgi-bin/mailman/listinfo/pharo-project
On 05.08.2009, at 11:30, Stéphane Ducasse wrote:
> MethodContext allInstances size seems to loop forever or even to
> crash my VM.

This might be because new instances are created while executing the
expression.

Normally contexts are recycled. But the list of contexts available for
recycling in the VM is flushed (gc), or it can simply have none left to
recycle. Therefore, while executing the code, new instances of
MethodContext are created, leading to an endless loop for "allInstances".

That's my theory...

Marcus
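Marcus's theory amounts to a scan that can never catch up: if visiting each object allocates at least one fresh object, the end of the heap keeps receding ahead of the scan pointer. A rough Python analogy of the mechanism (hypothetical names, not Pharo code; the real scan walks the object memory, not a list):

```python
def scan_growing(initial, steps):
    """Scan a 'heap' that grows by one object per object visited.

    The scan index can never reach len(heap); we cap the walk at
    `steps` visits only so the demonstration terminates.
    """
    heap = list(initial)          # stands in for the object memory
    seen = []
    i = 0
    while i < len(heap) and len(seen) < steps:
        seen.append(heap[i])
        heap.append(object())     # a new "context" allocated per iteration
        i += 1
    return i, len(heap)

i, n = scan_growing(range(10), steps=1000)
# after 1000 visits the heap holds 1010 objects: the scan never catches up
```

Without the `steps` cap this loop is exactly the "allInstances loops forever" symptom.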
Yes, stupid me.

Still, in Squeak or Cuis the equivalent expression on MethodContext was
not crashing, so there is something strange. In any case, maybe we should
then pay attention to allInstancesDo: and friends.

Stef
I did that in Pharo 211 and it worked:

	MethodContext allInstances size   "2309"

Stef
On 05.08.2009, at 12:00, Stéphane Ducasse wrote:
> I did that in pharo 211 and it worked.

Yes, that was before closures.

> MethodContext allInstances size   "2309"

With the BlockClosures, creation of method contexts happens far more
frequently than it used to (as both closures and methods use the same
context objects). It could of course be a sign that it actually happens
too much.

Marcus

--
Marcus Denker - http://marcusdenker.de
PLEIAD Lab - Computer Science Department (DCC) - University of Chile
> With the blockClosures, creation of method contexts happens far more
> frequently than it used to.
> It of course could be a sign that it happens actually too much.

In the image that did not crash yet, I stopped it at a couple of
billion :)
2009/8/5 Marcus Denker <[hidden email]>:
> With the blockClosures, creation of method contexts happens far more
> frequently than it used to.
> It of course could be a sign that it happens actually too much.

Hmm, if that is so, then we would expect to see an interpreter speed
degradation. But there is none. :)

I think it is correct behavior that you can't get #allInstances of a
class while you are creating instances of it in the very loop that serves
to collect them all. The solution would be to collect #allInstances
primitively (by a single primitive call); then there is no chance that
anything happens to the object memory during the memory scan by the
primitive.

--
Best regards,
Igor Stasenko AKA sig.
On 05.08.2009, at 12:08, Stéphane Ducasse wrote:
> I stopped in the image that did not crash yet at a couple of
> billions :)

Yes, but not "too much" in this sense. When creating new instances while
counting them, the cases "no problem" and "endless loop" are very close
together... a very tiny increase, or even just a change in
context-creation behavior, could trigger the case where it creates too
many contexts.

This is one possibility: the change in the creation pattern causes the
problem, and there is nothing to worry about. So let's see.

It used to be that block contexts were created by sending blockCopy: to
thisContext:

	13 <89> pushThisContext:
	14 <75> pushConstant: 0
	15 <C8> send: blockCopy:
	16 <A4 02> jumpTo: 20
	18 <73> pushConstant: nil
	19 <7D> blockReturn
	20 <87> pop
	21 <78> returnSelf

Now, the same code reads like this:

	13 <8F 00 00 02> closureNumCopied: 0 numArgs: 0 bytes 17 to 18
	17 <73> pushConstant: nil
	18 <7D> blockReturn
	19 <87> pop
	20 <78> returnSelf

and the closureNumCopied: bytecode creates a BlockClosure object which is
*not* the context. The context is created later, when the closure is
evaluated. So the time *when* an allocation of a new context may happen
has changed fundamentally.

If we now look at the code for getting all instances:

	allInstances
		"Answer a collection of all current instances of the receiver."
		| all |
		all := OrderedCollection new.
		self allInstancesDo: [:x | x == all ifFalse: [all add: x]].
		^ all asArray

(allInstancesDo: is carefully written to not use any real blocks, just
inlined ones.)

So the context for the block was created just once, before calling
#allInstancesDo:. Bytecode:

	29 <40> pushLit: OrderedCollection
	30 <CC> send: new
	31 <68> popIntoTemp: 0
	32 <70> self
	33 <89> pushThisContext:
	34 <76> pushConstant: 1
	35 <C8> send: blockCopy:
	36 <A4 0B> jumpTo: 49
	.... and so on.

Now with the closures, it evaluates the closure inside allInstancesDo:,
leading, for the closure VM, to a potential allocation of a context
during the execution of allInstancesDo: (and it is evaluated once per
iteration of the loop). So we now have potential allocations of new
instances of MethodContext while iterating over the existing ones
--> race condition.

Marcus
On 05.08.2009, at 12:30, Igor Stasenko wrote:
> Hmm, if that so, then we would expect to see an interpreter speed
> degradation. But its not. :)

Yes, see the other mail. The reason is that the allocation of contexts in
the case of blocks now happens at #value, whereas it used to be done at
the definition point (#blockCopy:). So nothing to worry about.

> I think it is correct behavior, that you can't get #allInstances of
> class when you creating them in a loop which serves to collect them
> all.
> The solution would be to collect #allInstances primitively (by a
> single primitive call), then there is no chance that anything happen
> with object memory during memory scan by primitive.

Yes, and the real solution of course is to change the VM to allocate
fewer contexts and use the C stack instead...

Marcus
On 05.08.2009, at 12:42, Marcus Denker wrote:
> Yes, see the other mail. The reason is that the allocation of contexts
> in case of blocks happens now at #value, whereas it used to be just
> done at the definiton point (#blockCopy).
>
> So nothing to worry about.

Eliot says:

| So you can either
| - wait for the stack VM or
| - inline allInstancesDo: into allInstances or
| - implement allInstancesDo: specially in ContextPart or
| - implement a pair of primitives to answer allInstances and allObjects
|   atomically.
|
| This latter approach allows much more flexibility in implementing the
| garbage collector subsequently; for example segmenting the heap and
| adding and freeing segments as required, which complicates the simple
| object ordering provided by the single heap but has much better
| memory usage.

I vote for inlining for now... this fixes the problem:

	allInstances
		"Answer a collection of all current instances of the receiver."
		| all inst next |
		all := OrderedCollection new.
		inst := self someInstance.
		[inst == nil]
			whileFalse: [
				next := inst nextInstance.
				inst == all ifFalse: [all add: inst].
				inst := next].
		^ all asArray

Marcus
Soon I will be able to be a fly on your shoulder and learn much faster
than via emails :)

Stef
> Yes, see the other mail. The reason is that the allocation of contexts
> in case of blocks happens now at #value, whereas it used to be just
> done at the definiton point (#blockCopy).
>
> So nothing to worry about.

Still, it crashes my image when doing MethodContext allInstances. Maybe a
temporary solution would be to redefine allInstances on MethodContext.

> Yes, and the real solution of course is to change the VM to allocate
> less contexts and use the C-stack instead...
Yes. Will do that in the next integration phase.

Stef
2009/8/5 Marcus Denker <[hidden email]>:
> Yes, see the other mail. The reason is that the allocation of contexts
> in case of blocks happens now at #value, whereas it used to be just
> done at the definiton point (#blockCopy).
>
> So nothing to worry about.

Could it be rewritten to something like this (for all classes except
UndefinedObject)?

	allInstances
		| all inst next |
		all := OrderedCollection new.
		inst := self someInstance.
		inst ifNil: [ ^ #() ].
		[inst == 0] whileFalse: [
			next := inst nextObject.
			next class == self ifTrue: [ all add: next ].
			next == all ifTrue: [ ^ all ].
			inst := next].
		^ all

Here I added the stop rule 'next == all ifTrue: [ ^ all ]', which should
return from the loop once the heap scan reaches the last allocated
object. The caller of #allInstances expects to see all instances of the
class that existed before the invocation of this method, and is not
interested in those which were created as a side effect of running it.
Right?

> Yes, and the real solution of course is to change the VM to allocate
> less contexts and use the C-stack instead...

Hmm, doesn't that break the Smalltalk introspection capabilities, since
then you can't see all the context objects which the interpreter uses for
running your code? I want to point out that the way the VM allocates
contexts is an implementation detail, and if it is done correctly then
you should still be able to access all context objects.
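Igor's stop rule can be sketched outside Smalltalk as well. A rough Python model (all names hypothetical, not the Pharo implementation) of a linear heap scan that uses the freshly allocated result collection itself as the end marker, so objects allocated as a side effect of the scan are never visited:

```python
def all_instances(heap, klass, allocate):
    """Collect every pre-existing instance of klass from a linear 'heap'."""
    result = []
    heap.append(result)          # marker: the last object that existed at call time
    i = 0
    while i < len(heap):
        obj = heap[i]
        if obj is result:        # scan reached our own marker: done
            heap.pop(i)          # drop the marker again to keep the model tidy
            return result
        if isinstance(obj, klass):
            result.append(obj)
        allocate(heap)           # side-effect allocations during the scan
        i += 1
    return result

heap = [1, 'a', 2]
out = all_instances(heap, int, lambda h: h.append(3.14))
# out is [1, 2]: the floats allocated mid-scan sit past the marker and are skipped
```

The termination argument is the same as in Igor's version: the marker is allocated before the scan starts, every later allocation lands behind it, and the scan stops the moment it sees the marker.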
The problem now is that

	allInstancesDo: aBlock
		"Evaluate the argument, aBlock, for each of the current instances
		of the receiver.
		Because aBlock might change the class of inst (for example, using
		become:), it is essential to compute next before aBlock value: inst."
		| inst next |
		self == UndefinedObject ifTrue: [^ aBlock value: nil].
		inst := self someInstance.
		[inst == nil]
			whileFalse: [
				next := inst nextInstance.
				aBlock value: inst.
				inst := next]

is looping too.

Stef
	allInstancesDo: aBlock
		"Evaluate the argument, aBlock, for each of the current instances
		of the receiver.
		Because aBlock might change the class of inst (for example, using
		become:), it is essential to compute next before aBlock value: inst."
		self == UndefinedObject ifTrue: [^ aBlock value: nil].
		self allInstances do: [:each | aBlock value: each]

Works.

On Aug 5, 2009, at 8:58 PM, Stéphane Ducasse wrote:
> the problem now is that allInstancesDo: [...] is looping too.
On 05.08.2009, at 13:18, Igor Stasenko wrote:
> hmm, doesn't that breaks the smalltalk introspection capabilities,
> since then you can't see all the context objects which is used by
> interpreter for running your code?
> I want to point, that the way how VM allocating contexts is
> implementation details, and if it done correctly then you're still
> should be able to access all context objects.

The magic is to create these objects as soon as you look at them. E.g., a
description for VisualWorks:

	http://pages.cs.wisc.edu/~cymen/misc/interests/oopsla99-contexts.pdf

Of course, even though as soon as I look at thisContext there is an
object, I will not see all possible contexts with #allInstances, but only
those that were created because they were needed. In that way, the
implementation-level optimization is not completely hidden. And of
course, #allInstances then works even for MethodContext, as no
MethodContexts are created during a normal execution of a method like
allInstancesDo:.

Marcus
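The "create on demand" trick Marcus describes can be sketched in a few lines. A toy Python model (all names hypothetical, loosely after the idea in the linked paper): the VM runs methods on plain internal stack frames, and a first-class context object is only materialized when reflection asks for one, so most executions never allocate a context at all:

```python
class Frame:
    """A VM-internal activation record: cheap, not a first-class object."""

    def __init__(self, method, receiver):
        self.method = method
        self.receiver = receiver
        self._context = None            # no heap object until reflection looks

    def context(self):
        """Reify a context object on demand (what touching thisContext triggers)."""
        if self._context is None:
            # materialized lazily, and cached so repeated lookups answer
            # the identical object
            self._context = {"method": self.method, "receiver": self.receiver}
        return self._context

f = Frame("printOn:", "anObject")
# f._context is still None here: plain execution allocated no context object
c = f.context()                          # reflection reifies it on first access
```

This is why an allInstances-style enumeration in such a VM only sees the contexts that were actually demanded: the optimization is visible exactly to the degree Marcus notes.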
Am I right in thinking this is the same issue? It appears to loop
forever.

...Stan

	SystemNavigation obsoleteBehaviors
		"SystemNavigation default obsoleteBehaviors inspect"
		"Find all obsolete behaviors including meta classes"
		| obs |
		obs := OrderedCollection new.
		Smalltalk garbageCollect.
		self allObjectsDo: [:cl |
			(cl isBehavior and: [cl isObsolete]) ifTrue: [obs add: cl]].
		^ obs asArray
Probably.

Stef

On Aug 7, 2009, at 2:20 PM, Stan Shepherd wrote:
> Am I right in thinking this is the same issue? It appears to loop
> forever.