Here is the idea, which came from Chris: put a delay into the #finalizationProcess loop:

finalizationProcess
    [true] whileTrue:
        [WeakFinalizationList initTestPair.
        FinalizationSemaphore wait.
        FinalizationLock critical:
            [WeakFinalizationList checkTestPair.
            FinalizationDependents do:
                [:weakDependent |
                weakDependent ifNotNil:
                    [weakDependent finalizeValues]]]
            ifError:
                [:msg :rcvr | rcvr error: msg].
        5 seconds asDelay wait].

And here is a simple benchmark which triggers GC often:

    [ Array new: 100 ] bench

without delay:

    '2,450,000 per second.'
    '2,490,000 per second.'
    '2,490,000 per second.'
    '2,480,000 per second.'
    '2,530,000 per second.'

with delay:

    '2,670,000 per second.'
    '2,680,000 per second.'
    '2,690,000 per second.'
    '2,730,000 per second.'

roughly ~8% faster :)

But now let's put something big into the weak array:

    | dict b |
    dict := WeakKeyDictionary new
        addAll: ((1 to: 1000) collect: [:i | i -> i]);
        yourself.
    WeakArray addWeakDependent: dict.
    b := [ Array new: 100 ] bench.
    WeakArray removeWeakDependent: dict.
    b

without delay:

    '1,840,000 per second.'
    '2,060,000 per second.'
    '2,130,000 per second.'

with delay:

    '3,030,000 per second.'
    '2,880,000 per second.'
    '2,890,000 per second.'

Do not forget to do:

    WeakArray restartFinalizationProcess

when you change the #finalizationProcess method, otherwise you won't see the real numbers.

So, I like the idea of putting a delay there. Finalization is eventual, and there is no hard guarantee that it will happen within microseconds of an object becoming garbage. So whether the delay is 5 seconds or 1000 seconds does not really matter. What matters is that with the delay we win much more, by not wasting time in the finalization process too often.

--
Best regards,
Igor Stasenko AKA sig.
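A quick check of that "~8% faster" figure, using nothing but arithmetic on the run averages reported above (not a new measurement):

    "mean of the with-delay runs divided by mean of the without-delay runs"
    ((2670000 + 2680000 + 2690000 + 2730000) / 4)
        / ((2450000 + 2490000 + 2490000 + 2480000 + 2530000) / 5.0)
            "=> about 1.08, i.e. roughly 8% faster on average"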
Why does finalization do any work when running a block which, I think, just creates garbage? Is finalization on a per-class basis (i.e. the VM notifies the image that some objects might be notified, and the image just enumerates through them), or on a per-object basis (i.e. the VM maintains a finalization queue, as in VisualWorks)?
On 26 October 2010 10:30, Andres Valloud <[hidden email]> wrote:
> Why does finalization do any work when running a block which, I think, just
> creates garbage? Is finalization on a per-class basis (i.e. the VM notifies
> the image that some objects might be notified, and the image just enumerates
> through them), or on a per-object basis (i.e. the VM maintains a finalization
> queue, as in VisualWorks)?

There is no per-object or per-class finalization in Squeak. One must either register an object with a WeakRegistry, or add one's own object to the weak dependents (it should answer #finalizeValues). The Squeak VM signals the semaphore each time a GC happens, and the finalization process can react to that event in an arbitrary way. Putting a delay there ensures that even if you put a very inefficient finalizer among the weak dependents, it won't affect performance too much.

--
Best regards,
Igor Stasenko AKA sig.
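For reference, a minimal sketch of the first mechanism mentioned above, per-object finalization through a WeakRegistry (the registered object is only a placeholder here, and exact registry behaviour differs between Squeak versions); the second mechanism, WeakArray addWeakDependent:, is exactly what the benchmark above demonstrates:

    "Register an object for per-object finalization. Once 'handle' is no longer
     strongly referenced and a GC has run, the finalization process asks the
     registry to send #finalize on the object's behalf."
    | registry handle |
    registry := WeakRegistry default.
    handle := Object new.    "placeholder for a real resource such as a file or socket"
    registry add: handle.
    handle := nil.           "drop the strong reference; finalization is now eventual"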
In reply to this post by Igor Stasenko
Hi Igor -
I'm having a hard time understanding how adding a delay would improve performance. In my code, adding delays generally does the opposite :-) Can you explain *why* you see a performance improvement? The work done for each finalized object doesn't differ, does it? So how come you see an improvement?

Cheers,
  - Andreas
On 26 October 2010 11:49, Andreas Raab <[hidden email]> wrote:
> Hi Igor -
>
> I'm having a hard time understanding how adding a delay would improve
> performance. In my code, adding delays generally does the opposite :-) Can
> you explain *why* you see a performance improvement? The work done for each
> finalized object doesn't differ, does it? So how come you see an improvement?

Finalization is triggered at each GC cycle. Putting a 5-second delay there means the finalization process scavenges the weak containers at most once every 5 seconds, instead of after every GC.

The amount of work needed to finalize the dead objects is the same, but you are forgetting the cost of scanning the weak dictionaries to find the objects to be finalized. The code above actually illustrates this overhead:

    | dict b |
    dict := WeakKeyDictionary new
        addAll: ((1 to: 1000) collect: [:i | i -> i]);
        yourself.
    WeakArray addWeakDependent: dict.

Here, at each GC cycle, 'dict finalizeValues' will be sent, which means looping over 1000 entries. Add a dict with 10000 entries, and you will loop over 10000 entries after each GC.

--
Best regards,
Igor Stasenko AKA sig.
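One way to see that per-GC scanning cost in isolation (illustrative only; absolute times will vary by machine and image) is to time a single #finalizeValues pass over such a dictionary, which is exactly the work the finalization process repeats after every GC when no delay is used:

    "Build a 1000-entry weak-key dictionary and time one scan of it."
    | dict |
    dict := WeakKeyDictionary new.
    (1 to: 1000) do: [:i | dict at: Object new put: i].
    [dict finalizeValues] timeToRun    "milliseconds for one full scan of the entries"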
On Tue, 26 Oct 2010, Igor Stasenko wrote:
> On 26 October 2010 11:49, Andreas Raab <[hidden email]> wrote:
>> Can you explain *why* you see a performance improvement? The work done for
>> each finalized object doesn't differ, does it? So how come you see an
>> improvement?
>
> Finalization is triggered at each GC cycle. Putting a 5-second delay there
> means the finalization process scavenges the weak containers at most once
> every 5 seconds, instead of after every GC.
>
> The amount of work needed to finalize the dead objects is the same, but you
> are forgetting the cost of scanning the weak dictionaries to find the
> objects to be finalized.

This is exactly what your new WeakRegistry implementation avoids, isn't it?

> The code above actually illustrates this overhead:
>
>     | dict b |
>     dict := WeakKeyDictionary new
>         addAll: ((1 to: 1000) collect: [:i | i -> i]);
>         yourself.
>     WeakArray addWeakDependent: dict.
>
> Here, at each GC cycle, 'dict finalizeValues' will be sent, which means
> looping over 1000 entries. Add a dict with 10000 entries, and you will loop
> over 10000 entries after each GC.

You should never register WeakKeyDictionaries with the finalization process. They are not thread safe.

Since the next VMs will support your new finalization scheme, I see no benefit from the delay.

Levente
2010/10/26 Levente Uzonyi <[hidden email]>:
> On Tue, 26 Oct 2010, Igor Stasenko wrote:
>> The amount of work needed to finalize the dead objects is the same, but
>> you are forgetting the cost of scanning the weak dictionaries to find the
>> objects to be finalized.
>
> This is exactly what your new WeakRegistry implementation avoids, isn't it?

Yes.

>> Here, at each GC cycle, 'dict finalizeValues' will be sent, which means
>> looping over 1000 entries. Add a dict with 10000 entries, and you will
>> loop over 10000 entries after each GC.
>
> You should never register WeakKeyDictionaries with the finalization process.
> They are not thread safe.

Yeah.. still, there is some code which uses it.

And still, WeakArray provides a mechanism to add weak dependents, so potentially you could add anything there. :)

> Since the next VMs will support your new finalization scheme, I see no
> benefit from the delay.

Yes. But because Chris has no official VM with the new finalization, he is using this trick to reduce the CPU hog in the finalization process.

--
Best regards,
Igor Stasenko AKA sig.
On Tue, 26 Oct 2010, Igor Stasenko wrote:
> 2010/10/26 Levente Uzonyi <[hidden email]>:
>> You should never register WeakKeyDictionaries with the finalization
>> process. They are not thread safe.
>
> Yeah.. still, there is some code which uses it.
>
> And still, WeakArray provides a mechanism to add weak dependents, so
> potentially you could add anything there. :)

Yes, but it's your responsibility to add appropriate objects.

>> Since the next VMs will support your new finalization scheme, I see no
>> benefit from the delay.
>
> Yes. But because Chris has no official VM with the new finalization, he is
> using this trick to reduce the CPU hog in the finalization process.

According to the current schedule, the new VMs will be released in December.

Levente
In reply to this post by Levente Uzonyi-2
>> Here, at each GC cycle, 'dict finalizeValues' will be sent, which means
>> looping over 1000 entries. Add a dict with 10000 entries, and you will
>> loop over 10000 entries after each GC.
>
> You should never register WeakKeyDictionaries with the finalization process.
> They are not thread safe.
>
> Since the next VMs will support your new finalization scheme, I see no
> benefit from the delay.

Igor, I know I'm very late to this discussion, but.. the other benefit of the Delay, which your finalization scheme does not address, is that there could be a lot of _registrants_ in WeakArray that need to be enumerated after every GC.

Correct me if I'm wrong, but your finalization fix only allows each individual registrant to "clean quickly" rather than do a full enumeration; is that right?

If there are 5000 registrants in WeakArray, the delay would prevent enumerating those 5000 elements after every GC unless 5 seconds have elapsed. So are we not best off with _both_ your finalization fix _and_ the delay...?

- Chris
On 4 November 2010 21:58, Chris Muller <[hidden email]> wrote:
> Correct me if I'm wrong, but your finalization fix only allows each
> individual registrant to "clean quickly" rather than do a full
> enumeration; is that right?
>
> If there are 5000 registrants in WeakArray, the delay would prevent
> enumerating those 5000 elements after every GC unless 5 seconds have
> elapsed. So are we not best off with _both_ your finalization fix _and_
> the delay...?

Yes. Usually the weak array is populated by WeakRegistry instances, and usually there are only a few of them. Of course it may pose a problem if you put it under stress, like adding 5000 registrants. And of course, the new finalization does not address this problem, because weak dependents serve a different purpose: to be notified upon each GC cycle. A particular registrant can do anything while handling such a notification, and it can be something completely unrelated to weak references.

The VM just signals a semaphore when a GC is done. The rest is up to the image. I don't see what can be improved here.

--
Best regards,
Igor Stasenko AKA sig.
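To make the "weak dependent as a general GC hook" point concrete, here is a minimal sketch (GCLogger and the global it is stored in are made up for this example; note that the dependents are held weakly, so something must keep a strong reference to the listener):

    "A weak dependent only has to answer #finalizeValues; the finalization
     process sends it after each GC (or, with the patch above, at most every
     5 seconds), and the receiver may do anything it likes with the event."
    Object subclass: #GCLogger
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Example-Finalization'.

    GCLogger compile: 'finalizeValues
        "React to the GC notification; nothing weak-related has to happen here."
        Transcript show: ''GC cycle noticed''; cr'.

    "Install it, keeping a strong reference so the weakly held dependent
     is not itself garbage collected."
    Smalltalk at: #GCLoggerInstance put: GCLogger new.
    WeakArray addWeakDependent: (Smalltalk at: #GCLoggerInstance).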
Hi Igor,
I already explained the ratchet technique we use in VisualWorks which involves no delay and simply takes advantage of the fact that Smalltalk processes are cooperatively scheduled within priorities. This both avoids creating a process per finalized object and avoids the finalization process stalling if there is an error in a finalizer. What's not to like?
cheers Eliot
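Eliot does not restate the ratchet technique in this thread, so the following is only a guess at its general shape, not VisualWorks code, and #finalizationLoop / #pendingFinalizersDo: are placeholder selectors: fork a backup process at the same priority before running user finalizers; because same-priority processes are scheduled cooperatively, the backup gets to run only if the active process blocks or dies inside a finalizer.

    runFinalizersSafely
        "Sketch only, under the assumptions stated above."
        | backup |
        backup := [self finalizationLoop] newProcess.
        backup priority: Processor activeProcess priority.
        backup resume.    "queued behind the active process; it cannot preempt us"
        self pendingFinalizersDo: [:each | each finalize].
        "If a finalizer raised an error or blocked, this process stopped running
         and the backup took over the loop; otherwise we retire the backup here."
        backup terminate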
In reply to this post by Chris Muller-3
On Thu, 4 Nov 2010, Chris Muller wrote:
> Igor, I know I'm very late to this discussion, but.. the other benefit of
> the Delay, which your finalization scheme does not address, is that there
> could be a lot of _registrants_ in WeakArray that need to be enumerated
> after every GC.
>
> If there are 5000 registrants in WeakArray, the delay would prevent
> enumerating those 5000 elements after every GC unless 5 seconds have
> elapsed. So are we not best off with _both_ your finalization fix _and_
> the delay...?

5000 WeakRegistries sounds unrealistic.

Btw, the delay would cause failures in most Weak* tests. It would also make constructs like #retryWithGC:until:forFileNamed: or #repeatWithGCIf: pointless.

Levente
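Those constructs force a garbage collection and then expect finalization to have run before they retry; a multi-second delay defeats exactly that expectation. A rough illustration of the pattern (this is a stand-in, not the actual #retryWithGC:until:forFileNamed: implementation, and #tryToReopenSharedFile is a made-up helper):

    "Force a GC, then retry an action that can only succeed once finalizers
     have released the underlying resource. With an immediate finalization
     process one or two iterations suffice; with a 5-second delay the loop
     below gives up before the stale handle has been finalized."
    | done attempts |
    done := false.
    attempts := 0.
    [done or: [attempts >= 3]] whileFalse:
        [Smalltalk garbageCollect.
        done := [self tryToReopenSharedFile. true] on: Error do: [:e | false].
        attempts := attempts + 1].
    done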
In reply to this post by Eliot Miranda-2
On 4 November 2010 22:32, Eliot Miranda <[hidden email]> wrote:
> Hi Igor,
>
> I already explained the ratchet technique we use in VisualWorks, which
> involves no delay and simply takes advantage of the fact that Smalltalk
> processes are cooperatively scheduled within priorities. This both avoids
> creating a process per finalized object and avoids the finalization process
> stalling if there is an error in a finalizer. What's not to like?

I remember your explanation quite well, and I think this is the way to go. By saying that the VM just signals a semaphore and that "I don't see what can be improved here", I meant that there is no need to complicate things further on the VM side. On the image side, however, there is room for improvement, and I plan to address that. Give it time :)

--
Best regards,
Igor Stasenko AKA sig.