Garbage collector eating up my processing time?

Garbage collector eating up my processing time?

Wouter Gazendam
Hi,

I'm time-profiling a piece of code on a vw64 image using 2+ GB of memory. It seems that of the 16 seconds of execution, almost 15 seconds are used by the garbage collector. I have installed the LargeMemoryPolicy and have tweaked the object memory as follows:

    ObjectMemory sizesAtStartup: #(500.0 500.0 10.0 10.0 10.0 10.0 10.0)

When profiling on an idle image I get the following result:

56 samples, 302.19 average ms/sample, 115 scavenges, 2 incGCs,
1.24s active, 15.65s other processes,
16.92s real time, 0.03s profiling overhead

The profiled code copies a lot of objects and doesn't use anything external such as databases, UI, or disk.
Tweaking #sizesAtStartup: got me from an initial '37s other processes' down to as low as '8s other processes'.

 - Is the 'other processes' GC time?
 - Is it possible to suspend the garbage collector during execution?

Thanks,

Wouter

RE: Garbage collector eating up my processing time?

Andres Valloud-6
Wouter,
 
Without looking at what the code is doing, it is hard to tell if what you are seeing is reasonable behavior or not.
 
Note that the rather large multipliers for the new space may or may not be beneficial, depending on the application.  For example, if you are creating many objects, it may be better to have a large eden but small survivor spaces to force quick tenuring of objects into old space.
 
While you can certainly prevent the old space GC by configuring the memory policy accordingly (growth regime limit), the VM allocates objects in new space and will therefore be periodically forced to make room there by triggering a new space GC which is referred to as a scavenge.
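
A minimal sketch of what "configuring the memory policy accordingly" could look like. The accessor `ObjectMemory currentMemoryPolicy` and the keyword setters are assumed to match the parameter names used in this thread, and the values are purely illustrative:

```smalltalk
"Illustrative only: raise the growth regime limit so old space keeps
 growing instead of triggering a global GC. Selector names are assumed;
 check them against your VisualWorks version."
| policy |
policy := ObjectMemory currentMemoryPolicy.
policy growthRegimeUpperBound: 3000 * 1024 * 1024.  "allow growth to ~3 GB"
policy memoryUpperBound: 3500 * 1024 * 1024.        "hard ceiling above that"
```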
 
Thanks,
Andres.



Re: Garbage collector eating up my processing time?

Eliot Miranda-2
In reply to this post by Wouter Gazendam


On 10/1/07, Wouter Gazendam <[hidden email]> wrote:

 - Is the 'other processes' GC time?

Perhaps finalization, but not GC time. GC time due to scavenging and tenuring is incurred by the VM and gets charged to the current process.  The incremental and scan-mark GCs are invoked through primitives and so could be invoked from other processes.  But your profile output only shows two incremental GCs so it doesn't look like GC activity is abnormally high.

However, "other processes" might cover finalization, which is run in another process. So if you are creating lots of finalizable objects that are getting collected and finalized, this could account for the high "other processes" time. Use the multi-process profiler and it will tell you what the other processes are actually doing.

 - Is it possible to suspend the garbage collector during execution?

No, and generally this is a very bad idea. The GC is quite efficient. There are very few circumstances in which you might legitimately want to disable GC (e.g. ensuring extremely low-latency responses in some real-time application). But before you accuse the GC, try using both the multi-process profiler and the OEProfiler. The latter shows a histogram of performance against PC in the VM. It's quite hard for someone unfamiliar with the VM to read, but it will tell you where time is being spent on a per-activity basis (running Smalltalk code, invoking specific primitives, garbage collection, process scheduling, blocking for input).



Re: Garbage collector eating up my processing time?

Joachim Geidel
In reply to this post by Wouter Gazendam
Wouter, Andres,

Be careful about forcing tenuring. If the copied objects are needed for some time, forcing the tenuring may be okay. If they are just temporary objects with a short lifetime, moving them to OldSpace would be bad, because this would cause a lot of additional work for the incremental garbage collector. Currently, there are only two incremental GCs in the profile. If this number goes up when the SurvivorSpace size (the second of the 500.0s) is reduced, it could be a sign that this was a bad move. OTOH, the extremely large sizes of Eden and SurvivorSpace will cause rather lengthy scavenges in NewSpace. It might actually be better to reduce them and look for other parameters to tweak.
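
For reference, here is the original tuning expression annotated with the slots this thread identifies (the second entry as SurvivorSpace and the sixth as OldSpaceHeadroom, per the text above and below; the first slot as Eden is inferred from the discussion of "Eden and SurvivorSpace" — treat the labels as informal):

```smalltalk
"Annotated copy of the poster's expression. Only the commented slots are
 named in this thread; the labels are informal, not official documentation."
ObjectMemory sizesAtStartup:
    #(500.0    "Eden (new space) size multiplier"
      500.0    "SurvivorSpace size multiplier"
      10.0 10.0 10.0
      10.0     "OldSpaceHeadroom multiplier"
      10.0)
```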

The best way to reduce garbage collection is not producing garbage, of course, and if you see more than 90% of the time spent in garbage collection, there certainly is a lot of garbage. I would check where this garbage is produced and try to avoid that, e.g. by doing in-situ manipulations, reusing Collection objects, etc.

If you have a 2 GB image, you probably won't want to suspend garbage collection - if lots of new temporary objects are created, the size of the image might increase quite fast.

Besides configuring the sizesAtStartup, you have to tune the MemoryPolicy. It's not only growthRegimeUpperBound and memoryUpperBound which have to be tuned. When your image has to grow fast, you also have to set preferredGrowthIncrement to a higher value (at least 10 or 20 MB for a large image), and freeMemoryUpperBound to a value larger than preferredGrowthIncrement (I suggest 1.5 times larger). Also, set growthRetryDecrement to 1 MB instead of the very small default size. You can also set these parameters to really high values temporarily.
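
Illustrative settings for the parameters named above, for a large (~2 GB) image. The accessor names are assumed to be keyword setters matching the instance-variable names, and the values follow the rough guidance in the text, not measured tuning:

```smalltalk
"Example values only; verify the selectors against your MemoryPolicy class."
| policy |
policy := ObjectMemory currentMemoryPolicy.
policy preferredGrowthIncrement: 20 * 1024 * 1024.  "grow in 20 MB steps"
policy freeMemoryUpperBound: 30 * 1024 * 1024.      "~1.5 x the increment"
policy growthRetryDecrement: 1024 * 1024.           "retry in 1 MB steps"
```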

If you want to make the image start up faster, and if 2 GB is the normal size of the beast, you should also increase OldSpaceHeadroom (the sixth entry in sizesAtStartup), e.g. to 500.0, which corresponds to a 500 MB chunk of OldSpace allocated when the image starts.

And once this is done, you might want to tune the incremental garbage collector - but that's the point where you need professional help. ;-)  We had such a situation in our project several years ago, and what really helped was hiring John McIntosh for the job.

BTW, how large is the image when this happens? If it's in the area between growthRegimeUpperBound and memoryUpperBound, it will run a full garbage collection whenever it needs more memory. The only way to avoid this would be to further increase growthRegimeUpperBound (and memoryUpperBound, if needed).

HTH,
Joachim Geidel


Re: Garbage collector eating up my processing time?

Wouter Gazendam
I've been twiddling a bit more on the knobs of MemoryPolicy. Increasing incrementalAllocationThreshold improved my situation drastically:

incrementalAllocationThreshold : 10000000.

67 samples, 44.07 average ms/sample, 115 scavenges, 0 incGCs,
1.62s active, 1.31s other processes,
2.95s real time, 0.02s profiling overhead

incrementalAllocationThreshold : 100000000.

82 samples, 30.2 average ms/sample, 115 scavenges, 0 incGCs,
2.01s active, 0.43s other processes,
2.48s real time, 0.03s profiling overhead

Only thing is, I don't understand exactly why. Is this a good thing, or will I run into other problems when I increase the incrementalAllocationThreshold to 100M?

Thanks,

Wouter


Re: Garbage collector eating up my processing time?

Eliot Miranda-2


On 10/2/07, Wouter Gazendam <[hidden email]> wrote:

Only thing is, I don't understand exactly why. Is this a good thing, or will I run into other problems when I increase the incrementalAllocationThreshold to 100M?


What the incrementalAllocationThreshold determines is the newSoftLowSpaceLimit, which in turn decides the rate at which the incremental GC is invoked.  Provided that your application produces garbage that can be collected by the incremental GC (likely a safe assumption), you won't want to completely disable the incremental GC.  You'll want it to be invoked at a rate where it is able to keep up with the amount of garbage you create.  If the incremental GC is invoked too frequently, you'll see it take processing time.  If it is invoked too infrequently, then it may fail to collect any garbage at all, failing to sweep old space completely, and instead the MemoryPolicy/ObjectMemory subsystem will be forced either to grow old space or, once the growthRegimeUpperBound is reached, to do a stop-the-world scan-mark garbage collection at some point to satisfy some allocation request.

So you want to tune things such that the IGC is being invoked just frequently enough that in the steady state old-space memory doesn't grow.  You could determine this by instrumenting (installing a subclass of) MemoryPolicy to log oldSpace growth (e.g. by writing to the Transcript).  Then run with a very high growthRegimeUpperBound and run the application.  Once your application has reached its natural working set, you'd want the IGC to be invoked frequently enough that it collects garbage in old space sufficiently that further growth of old space isn't required.
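
The instrumentation suggested here might look roughly like the sketch below. The selector a MemoryPolicy uses to grow old space varies by VisualWorks version, so #growOldSpaceBy: is an assumed name; override whichever growth method your MemoryPolicy (or LargeMemoryPolicy) actually defines, and install the subclass in place of the current policy using the installation mechanism of your version:

```smalltalk
"Hypothetical sketch: a MemoryPolicy subclass that logs old-space growth
 to the Transcript. #growOldSpaceBy: is an assumed selector name."
MemoryPolicy subclass: #LoggingMemoryPolicy
    instanceVariableNames: ''
    classVariableNames: ''
    category: 'GC-Tuning'

"In LoggingMemoryPolicy:"
growOldSpaceBy: anAmount
    "Report each old-space growth, then grow as usual."
    Transcript showCr: 'OldSpace grew by ', anAmount printString, ' bytes'.
    ^super growOldSpaceBy: anAmount
```

Once old space stops growing in the steady state, the log goes quiet, which is the tuning target described above.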




RE: Garbage collector eating up my processing time?

Terry Raymond

Wouter,

You should also be aware that you should tune the IGC on a machine whose performance is close to the machine the image will run on.

Rant: the IGC is much more difficult to tune than it should be. It should be designed so it is adaptable and so the parameters relate to the user's application requirements, such as memory bounds, acceptable overhead, and pause time. (Pause time is a time slice during which image processing is suspended while the IGC runs.)

Terry

===========================================================
Terry Raymond
Crafted Smalltalk
80 Lazywood Ln.
Tiverton, RI  02878
(401) 624-4517      [hidden email]
<http://www.craftedsmalltalk.com>
===========================================================




Re: Garbage collector eating up my processing time?

Joachim Geidel
In reply to this post by Wouter Gazendam
Yes, indeed. Turning off the incremental garbage collector by raising the threshold which triggers it to run is usually a bad idea. Raising the threshold a bit such that it doesn't interfere with other actions may be a good idea in some special situations as described by Eliot, but I don't think this is the case here.

What I suspect from the numbers is that the application creates lots of new objects which are referenced long enough to be moved to OldSpace (tenured) instead of being garbage collected by the NewSpace scavenger. This would explain why the performance is better after stopping IGC - it simply doesn't run, and the objects will still exist in OldSpace. Running a full garbage collection will remove them, of course. Depending on the requirements of the application, it might be an option to defer the work by stopping the IGC during the operation, and running a full GC afterwards. That way, the action will be completed earlier. The downside is that a full GC in such a large image (2GB) will take a lot of time afterwards.

The best way to solve problems with garbage collection is not producing garbage in the first place (I can't help but repeat this). I'd first use the allocation profiler and get rid of any unneeded object allocations before investing more work in GC tuning. If this doesn't help, call John McIntosh. :-)

Apart from this, Terry is right: We need tool support (and even more documentation, although what's there is already quite good). Andres will work on that (see the second comment):
http://blogten.blogspot.com/2007/09/native-stack-tuning-in-vw.html
:-)

Joachim

