The current default eden size for 64-bit images is 8MB, which is too small. The performance cost of generation scavenging on 64 bits differs radically at 8MB vs e.g. 64MB. For example, here are the GC stats and the time taken to recompile the Morphic package in a trunk Squeak 64-bit image, collected by running
FileStream stdout print: [(PackageInfo named: 'Morphic') methods do:
	[:ref | ref actualClass recompile: ref selector]] timeToRun
Aeolus.image$ spur64cfvm -eden 60m spurreader-64. <RecompileMorphicAndReport.st
While the run-time varies very little (7 seconds vs 6.97 seconds), the number of scavenges goes down enormously. The run-time is relatively unaffected because the generation scavenger is quite efficient. At an 8MB young space we see 195 scavenges, with about a 1.1% run-time overhead. At a 64MB young space we see only 23 scavenges (a ratio of about 8, which mirrors the ratio of the sizes), and the run-time overhead drops to 0.41%: 30ms of overhead vs 82ms. So a larger young space is a good idea.
The suggestion here is that on 64-bit platforms we tailor the default young space size to the available memory. See e.g. https://stackoverflow.com/questions/2513505/how-to-get-available-memory-c-g, which gives code based on sysconf on Unix and GlobalMemoryStatusEx on WIN32 for obtaining the total amount of physical memory. We could then use a larger young space on systems with more memory; for example, 2MB of young space per GB of physical memory would give a 64MB young space on 32GB machines such as my fully loaded 2018 MacBook Pro Core i9.
Does this also mean memory consumption is increased by 1.65x?
I kinda like the idea of autoscaling GC settings depending on the memory available. However, this could easily turn into yet another benchmarking pitfall one needs to keep in mind.