Hi All,

Marcus is at my place and we're working together on adaptive optimization/speculative inlining, and he wondered how big the Cog code zone is and how much time goes into reclamation. Reclamation is the process of throwing away some number of jitted methods to make room for methods that need to be jitted. Currently Cog's policy is hard-coded: it always throws away a quarter of the jitted methods, attempting to discard the least-recently used, based on a simple 3-bit usage count per method derived from marking methods on the stack and the methods they directly reference. On my 2.66 GHz Intel Core i7 MacBook Pro the system seems to compact the code zone once every three or four seconds.
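To make the policy concrete, here is a rough sketch in Smalltalk; the names (compactCodeZone, jittedMethods, usageCount, freeMethod:, compactSurvivors) are mine for illustration, not the actual Slang source:

	compactCodeZone
		"Sketch only: evict roughly the least-used quarter of the jitted
		 methods, ordering by each method's 3-bit usage count, then slide
		 the survivors together to leave contiguous free space."
		| byUsage victims |
		byUsage := jittedMethods asSortedCollection:
					[:a :b| a usageCount < b usageCount].
		victims := byUsage asArray first: jittedMethods size // 4.
		victims do: [:cogMethod| self freeMethod: cogMethod].
		self compactSurvivors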
Here is a more precise measurement for recompiling the world in my current development image:

num compiled methods 93138
code size 1048576
compile all time (seconds) 48.48
compile all num compactions 20
compile all compaction period (seconds) 2.42
milliseconds per compaction 0.85
percentage of runtime 0.035067

So a few things follow. A mere megabyte of code seems to be adequate; reclamation rates are low and the latency introduced is of the order of a youngSpace reclamation (I just measured an average of 0.825 ms per young space collection in this image); and the cost is extremely low, a mere 0.04% of entire execution time. (Note that the VM is using microsecond-resolution times internally and answering totals in milliseconds.)
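As a quick sanity check on those totals (my arithmetic, not part of the VM's output):

	20 compactions * 0.85 ms/compaction ~= 17 ms total compaction time
	17 ms / 48480 ms * 100 ~= 0.035%

which agrees with the reported percentage of runtime.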
Here are the numbers repeated for a 2 megabyte code zone:

num compiled methods 93138
code size 2097152
compile all time (seconds) 54.63
compile all num compactions 3
compile all compaction period (seconds) 18.21
milliseconds per compaction 1.333
percentage of runtime 0.007322

So, interestingly, the overall runtime is higher, which could be because of poorer cache density due to the larger code zone and hence larger working set; the latency is a little higher (1.333 ms per compaction instead of 0.85); and the rate of compactions and the total cost are much lower. It would be interesting to measure costs across a range of code sizes for some realistic applications, but clearly a one megabyte code cache isn't that bad a choice; the code density advantages of a small working set could trump all other considerations. Of course, much more thorough measurements would be required; for example, the variation in times could be heavily influenced by the other programs I've got open at the moment.
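Comparing the two runs (again my arithmetic, not VM output): the larger zone saves roughly

	(20 * 0.85 ms) - (3 * 1.333 ms) ~= 13 ms

of compaction time over the run, but, if the runtime difference is real and not noise, costs 54.63 - 48.48 = 6.15 seconds of extra wall time, several hundred times more than it saves. That is at least consistent with the cache density explanation.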
best,
Eliot

| ms nbefore tbefore nafter tafter size ncm |
nbefore := Smalltalk vmParameterAt: 62.
tbefore := Smalltalk vmParameterAt: 63.
ms := [Compiler recompileAll] timeToRun.
nafter := Smalltalk vmParameterAt: 62.
tafter := Smalltalk vmParameterAt: 63.
size := Smalltalk vmParameterAt: 46.
ncm := 0.
SystemNavigation new allSelect: [:m| ncm := ncm + 1. false].
String new writeStream
	ensureCr;
	nextPutAll: 'num compiled methods '; print: ncm; cr;
	nextPutAll: 'code size '; print: size; cr;
	nextPutAll: 'compile all time (seconds) '; print: (ms / 1000 roundTo: 0.01); cr;
	nextPutAll: 'compile all num compactions '; print: nafter - nbefore; cr;
	nextPutAll: 'compile all compaction period (seconds) '; print: (ms / 1000 / (nafter - nbefore) roundTo: 0.01); cr;
	nextPutAll: 'milliseconds per compaction '; print: ((tafter - tbefore) asFloat / (nafter - nbefore) roundTo: 0.001); cr;
	nextPutAll: 'percentage of runtime '; print: ((tafter - tbefore) * 100.0 / ms roundTo: 0.000001); cr;
	contents
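To reproduce, print the above in a workspace. As I read the script, vmParameterAt: 62 answers the number of code zone compactions so far, vmParameterAt: 63 the total time spent compacting (in milliseconds), and vmParameterAt: 46 the size of the code zone in bytes; ncm simply counts compiled methods via the allSelect: enumeration.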