Hi guys,
I use RcIdentityBag as the "collection" class for the typical storage of domain persistent objects. I am now hitting an issue where I emptied such a collection but its former elements are not being GCed. I have one object whose print string is "aRcIdentityBag( )", and "self size" does answer zero. However:

self instVarNamed: 'components'
 -> anArray( anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( aDpOfxPriceRepository), anIdentityBag( aDpOfxPriceRepository), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ), anIdentityBag( ))

WTF? The collection is empty, but the internal 'components' instVar is still holding the already-deleted objects that were in the collection (DpOfxPriceRepository instances).

BTW, I remember a similar issue with ObjectLogEntry, whose object queue is an RcQueue. Objects were not being GCed while I was using:

    ObjectLogEntry class >> emptyLog
        "expect the caller to abort, acquire lock, and commit if necessary"
        self objectQueue removeAll.
        ObjectLog := nil

That was not GCing. Once I ran #initialize instead, things suddenly got GCed. Since the code there does a #removeAll too, I wonder if it wasn't the same issue. Not sure, since RcQueue does not have a 'components' instVar... but just wondering.

_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass
Mariano,
What happens if you run #cleanupBag on the instance? To manage reduced conflict, there are separate internal bags for each possible session: one for adding and one for removing. If you add in one session and remove in another, the collection will not report the object, because the number of removals matches the number of adds. To provide the reduced-conflict behavior, a session does not modify the cache of another session unless you do a cleanup (which is documented to risk a conflict).

James
On Mon, Aug 24, 2015 at 4:49 PM, James Foster <[hidden email]> wrote:
Nothing, I just tried: 'components' answers the same, and an MFC does not GC those objects.

I do not follow: when should I send #cleanupBag? Every time I remove/add, or as part of my maintenance script? Also, in the Programmer's Guide I found NO reference to #cleanupBag. Where should I read more about it?

Thanks,
I’d suggest that #centralizeSessionElements and #cleanupBag be part of a maintenance process, run when other sessions are not logged in.
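For anyone finding this in the archives: a minimal sketch of such a maintenance step. The domain accessor (MyDomainRoot / #priceRepositories) is a hypothetical placeholder, not from this thread, and #cleanupBag is documented to risk a commit conflict, so this assumes no other sessions are logged in:

```smalltalk
"Sketch only: consolidate the per-session component bags of an RcIdentityBag.
MyDomainRoot and #priceRepositories are hypothetical placeholders for your
own domain collection. Run while no other sessions are logged in."
| bag |
System abortTransaction.
bag := MyDomainRoot priceRepositories.
bag cleanupBag.
System commitTransaction.
```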
Good point. The docs briefly mention cleanup for RcQueue, but should discuss it for the other reduced-conflict classes as well. I’ll put in a docs request.
James
On Mon, Aug 24, 2015 at 5:34 PM, James Foster <[hidden email]> wrote:
Hi James,

This is a quick email just to say that #_unsafeCleanupBag did indeed get rid of those objects. Thanks!
I will have to read the comments of those methods carefully and figure out how to run them from maintenance scripts. I was no longer recycling gems in the daily cleanup (prune at 100%)... but it seems I may have to do it again... grrrrrr.

Quick question: do you recommend sending such cleanup methods only to my DOMAIN-specific RC instances, or is it better to go over #allInstances so as to also clean up the RC collections used by other parts of the system (GemStone internals, Seaside, etc.)?

Hmmm... should I then also send #cleanupMySession to RcQueue instances as part of my daily maintenance (like the ObjectLogEntry queue)? If so, same question as above: should I only clean up MY instances or go via #allInstances?

Thanks James. Tomorrow I will look at this, check the code a bit more, and read the comments.

Cheers,
Mariano,
I don't think you need to clean up RcBags/RcQueues on a daily basis unless you are running tight on space... If you can live with a bit of repo growth, then you can do the cleanup on a weekly or monthly basis. You might have a pattern like this: daily, cleanup and MFC; weekly, take a 15-minute system shutdown where you shut down the system, run the single-user cleanup tasks, restart the system, and run an MFC; monthly, take a longer shutdown where you stop the stone, shrink the extents, and do the weekly cleanup...

Dale

On 08/24/2015 02:28 PM, Mariano Martinez Peck via Glass wrote:
On Mon, Aug 24, 2015 at 6:48 PM, Dale Henrichs via Glass <[hidden email]> wrote:
Hi Dale/James,

Thanks for your answers. I do understand that a daily cleanup of those doesn't make sense in most cases. However, I've been thinking, and in my case I believe it does make sense. We deploy on Linode, where we have SSD disks and the disk space is normally not very big. Also, on the same Linode we run a couple of stones. In addition, we keep daily backups of each stone. And finally, we have some background processes that end up adding/editing lots of stuff in RC collections. Sometimes we have migrations, etc. So if I can do this automagically and forget about it in the future, all the better; otherwise I will again be fighting with objects not being GCed. The size of the repository matters a bit more to me than average.

So... I just measured, in a typical repository of ours, the time to clean all those RC classes while all Seaside gems are down, and it's only a minute. Furthermore, it helps to recycle gems. So I very much prefer one minute of downtime in favor of RC cleaning and gem recycling, though I understand this is uncommon for most cases.

BTW, the code I am running at daily cleanup is this (with all Seaside and background-job gems down):

    | array |
    System commit.
    array := SystemRepository listInstances: { RcQueue. RcIdentityBag. RcCounter }.
    (array at: 1) do: [:each | each cleanupQueue].
    (array at: 2) do: [:each | each cleanupBag].
    (array at: 3) do: [:each | each cleanupCounter].

It may help others in the same scenario as me.
I just wanted to point out that I was also not aware of these problems, and therefore I will add your cleanup code to my code generator. Thanks for the discussion... assuming that there are more points like this in the GemStone infrastructure that I am not aware of :-)

Marten

--
Marten Feldtmann
On Wed, Aug 26, 2015 at 3:59 AM, [hidden email] via Glass <[hidden email]> wrote:

Hi Marten,

Are you aware of the rest of the classic cleanup code? (Monticello cache, some globals, object log, etc., etc.)

Some general related questions:

1) How many gems are you running with the Zinc server?
2) What are the values of SHR_PAGE_CACHE_SIZE_KB and the real GEM_TEMPOBJ_CACHE_SIZE those gems get?
3) What is the value of GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE?
4) You said you were doing nothing about GC, right? Then you need to start running an MFC periodically, and optionally a #reclaimAll, in order to clean your repository. Are you doing this?
5) Do you recycle gems periodically? That is, do the gems running the Zinc servers go down and come up again periodically? Note that the code I pasted above about the RC classes must be run with all gems down.

Cheers,
On 26.08.2015 at 15:23, Mariano Martinez Peck wrote:

> 1) how many gems are you running with zinc server?

Around 10 gems in our first production system - but that may change. All our REST calls are put into categories named "normal", "memory" and "long". We have around 6 "normal" gems, 2 "memory" gems and 2 "long" gems. Scheduling is done via Apache2.

> 2) what are the values of SHR_PAGE_CACHE_SIZE_KB and the real GEM_TEMPOBJ_CACHE_SIZE those gems get?

That depends on the classification of "normal", "memory" and "long" and might change...

> 3) what is the value of GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE ?

Nothing done with that :-)))

> 4) You said you were doing nothing about GC right? So you need to start running MFC periodically and optionally a #reclaimAll in order to clean your repository. Are you doing this?

Yes, I've built this in right now... after reading it here. I thought that this was done by the admingcgem, reclaimgcgem and symbolgem tasks.

> 5) Do you recycle gems periodically? That is, the gems running the Zinc servers, do they go down and up again periodically?

Not now... have to think about how to do it.

> Note that the code I pasted above about RC classes must be run with all gems down.

Yes...

--
Marten Feldtmann
On Wed, Aug 26, 2015 at 11:16 AM, [hidden email] via Glass <[hidden email]> wrote:
That makes sense. I also have normal gems for Seaside and "service gems" for heavy background jobs.
Well, SHR_PAGE_CACHE_SIZE_KB should always be the same. I guess you simply change GEM_TEMPOBJ_CACHE_SIZE for those 3 types of gems, correct? If so, would you mind telling me all those 4 values?
So... search the mailing list for this; there is a recent discussion I took part in. You may want to set that to 100% (all the more so if you do not plan to recycle gems periodically).
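For reference, in config-file terms the suggestion amounts to a one-line entry in the gem's configuration file (a sketch; pick the value to match your gem-recycling policy):

```
# Gem configuration sketch: percentage of the pom generation to prune when
# voting during an MFC. 100 is the aggressive setting suggested above.
GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE = 100;
```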
OK, let me know if you want me to share my cleanup code (which involves all this).
No. Those gems ARE needed for the reclaim and MFC to work, but an MFC NEVER happens automatically. You MUST explicitly run an MFC in order to get garbage collected. How frequently depends on the type of application, but if you never run it, your extents will grow forever. You may also want to read about the epoch GC in the System Administration Guide.
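A hedged sketch of what "explicitly running an MFC" looks like from a maintenance session, using the standard Repository protocol (login and scheduling details are left out):

```smalltalk
"Sketch only: explicit GC cycle from a dedicated maintenance session.
Abort any work in progress first, so the MFC is not held back by this
session's view of the repository."
System abortTransaction.
SystemRepository markForCollection.
"Optionally force the reclaim of the dead objects found by the MFC:"
SystemRepository reclaimAll.
```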
It is not a MANDATORY step, but it may help a bit with the cleaning, depending on the app.

BTW... I also found that GemStone by default does NOT enable native code generation (JIT). I enable it in all my gems and the difference in performance is huge. So you may want to enable it if you don't have it ;) (GEM_NATIVE_CODE_ENABLED=TRUE)

Cheers,
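For completeness, the setting mentioned is a single line in the gem configuration file (in newer GemStone versions the JIT may already be on by default, so check your version's System Administration Guide first):

```
# Gem configuration sketch: enable native code generation (JIT).
GEM_NATIVE_CODE_ENABLED = TRUE;
```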
On 26.08.2015 at 16:26, Mariano Martinez Peck wrote:

> BTW.... I also found that Gemstone by default does NOT enable native code generation (JIT). I do enable it in all my gems and the difference in performance is huge. So you may want to enable it if you don't have it ;) (GEM_NATIVE_CODE_ENABLED=TRUE)

Reading the documentation, I think that this is enabled by default (System Administration Guide for 3.2, page 278 - default = 2 means JIT enabled).

Marten

--
Marten Feldtmann
On Wed, Aug 26, 2015 at 12:34 PM, [hidden email] <[hidden email]> wrote:

OK, it changed then between GemStone 3.1 and 3.2. If you tell me your SPC and temp space config, I can tell you whether the workaround we did with Dale (so that "idle" gems use less memory) would make sense for you.
My suspicion is that a config file disabled native code to allow for debugging. In an earlier version (3.1?), if you tried to debug native code it failed; in later versions the VM will convert the method back to interpreted code and allow debugging. I vaguely recall that GemTools turned native code off to address this problem. Jade, instead, has a menu to turn it off after login. I'm not sure about tODE.
James
On 8/26/15 10:15 AM, James Foster via Glass wrote:

The GLASS/GsDevKit default system.conf files turned off native code generation to make it possible to use the remote breakpoint facilities in production gems. Remote breakpoints allow a developer to arrange to set a breakpoint in a server gem; when the breakpoint is encountered, snap off a continuation to the object log and then resume execution without the end user ever seeing an error. There is a similar facility for handling halts in production code as well.

For a range of releases between 2.4 and 3.x it was necessary to be in interpreted mode for breakpoints to take effect. Remote breakpoints are functional in 2.4.x. In the 3.x line remote breakpoints have had a more checkered history: between changes in gem-to-gem signalling and various issues in the behavior related to catching and resuming Breakpoint exceptions, the feature isn't usable in all 3.x releases. In the more recent versions of GemStone (3.2.x sometime), new threads are automatically run in interpreted mode when a breakpoint is set in the gem, so I've since removed the conf file entry; by default, everything now runs in native mode.

Dale
Can you tell if we need a workaround? We have a big problem with Seaside service gems eating up RAM:

    GEM_PRIVATE_PAGE_CACHE_KB = 960KB;
    GEM_TEMPOBJ_CACHE_SIZE = 400000KB;
    GEM_TEMPOBJ_MESPACE_SIZE = 0KB;
    GEM_TEMPOBJ_OOPMAP_SIZE = 0;
    GEM_TEMPOBJ_SCOPES_SIZE = 2000;
    GEM_TEMPOBJ_POMGEN_SIZE = 0KB;
    GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE = 50;
    GEM_TEMPOBJ_POMGEN_SCAVENGE_INTERVAL = 1800;
    STN_PRIVATE_PAGE_CACHE_KB = 2000KB;
    SHR_PAGE_CACHE_SIZE_KB = 2000000KB;

Thanks

On Wed, Aug 26, 2015 at 6:40 PM, Mariano Martinez Peck via Glass <[hidden email]> wrote:
On Thu, Aug 27, 2015 at 7:10 AM, Otto Behrens <[hidden email]> wrote:

Hi Otto,

I think in your case it was something related to the processes you called from System performOnServer:, wasn't it? 400MB of temp space doesn't seem like much, and I imagine you do not have that many service VMs. What about the temp space of the Seaside VMs?

What I explain below is just MY understanding; for the real and precise words you should read the thread where we discussed this. Anyway, what I found out reading the docs is that the mark/sweep of a gem's temp space runs ONLY upon memory pressure. Imagine (this is just an example, I don't know the exact threshold) that it runs only when temp space reaches 90% usage. Then gems whose temp space is 70% or 80% occupied won't be mark-and-swept until they reach that 90%. If those gems are not used much, they can stay like that for many hours or days, holding unnecessary memory.

The workaround we found with Dale is to simply run a mark/sweep (which is very fast, so no impact) every minute or so. I am doing this for all my Seaside gems; that way I found I could minimize the memory usage of idle gems. The thread in question is: "GC on Gems only fired when temp space is over?"

Good luck,
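A sketch of that workaround as I understand it: a background process in each gem that forces an in-memory mark/sweep every minute. (#_vmMarkSweep is a private method; verify that it exists and behaves this way in your GemStone version before relying on it.)

```smalltalk
"Sketch only: trim an idle gem's temp space by periodically forcing an
in-memory mark/sweep. Fork this once at gem startup."
[ [ true ] whileTrue: [
    (Delay forSeconds: 60) wait.
    System _vmMarkSweep ] ] fork.
```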
Otto,

The problem that Mariano is referring to is that with a large TOC, excessive RAM may be consumed by an idle gem. When the VM runs a GC and unused objects are found, the "newly freed" RAM is returned to the system. You have a large TOC, so you could be exposed to the problem. The large TOC itself is not a problem, but if you frequently consume big chunks of the TOC and then go idle on a bunch of gems, regularly running the in-VM GC could help. You can probably tell by looking at VSD: see whether an idle gem is consuming a big chunk of RAM and whether the RAM drops off once an in-memory GC is run (I'm not completely certain whether the system stats in VSD represent allocated RAM or RAM that could be allocated).

Dale

On 8/27/15 3:10 AM, Otto Behrens via Glass wrote: