Hi,
I have written a little recursive method:

    rec: anInteger
        ^anInteger < 1
            ifTrue: ['ready']
            ifFalse: [self rec: anInteger - 1]

and ran it for several values of the argument. I measured the runtime with expressions like this:

    ObjectMemory globalGarbageCollect.
    Time microsecondsToRun: [RecTest new rec: 1000]

The results (using 7.3.1 on a PowerBook G4, 1.67 GHz):

    argument    runtime (microseconds)
    1000        300
    2000        1000
    4000        7500
    8000        17000
    16000       33000
    32000       71000
    64000       175000

There is a strong non-linearity between the values 2000 and 4000. Could anybody give me a reason for this behavior?

Johannes
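A minimal sketch of how the whole series of measurements above could be scripted in one expression, reusing the RecTest class and the ObjectMemory / Time calls from the message:

    "Answer argument -> microseconds associations, forcing a full
     garbage collection before each timed run."
    #(1000 2000 4000 8000 16000 32000 64000) collect: [:n |
        ObjectMemory globalGarbageCollect.
        n -> (Time microsecondsToRun: [RecTest new rec: n])]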
Johannes Brauer wrote:
> There is a strong non-linearity between the values 2000 and 4000.
> Could anybody give me a reason for this behavior?

Somewhere between those values is the point where the call stack gets deep enough that the VM starts converting stack frames to "real" heap objects to preserve the illusion of unlimited stack.

--Vassili
Hi Vassili,
thank you for the quick response (on Easter Sunday).

Johannes
Yes, but the VM should not create activation records at all, because
the method is tail recursive.

Johannes

Am 17.04.2006 um 05:14 schrieb Boris Popov:
> Heh, this just goes to show that one should avoid deep recursion,
> but it's not like we didn't know that already ;)
>
> Cheers!
>
> --
> -Boris
> http://bpopov.wordpress.com
Johannes Brauer writes:
> Yes, but the VM should not create activation records at all,
> because the method is tail recursive.

Smalltalks generally don't do tail recursion optimisation, so it'll consume stack. There's also the chance that you're overflowing a CPU cache as well. This can be measured if you've got CPU counter profilers; oprofile is a good one for Linux, but I'm not sure whether such tools are easily available for the Mac. Unfortunately, performance isn't simple anymore, even at the machine code level.

Bryce
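For comparison, a minimal sketch of the same computation with the tail call eliminated by hand, i.e. rewritten as a loop so that only one activation is ever live; the selector recIterative: is made up for illustration:

    recIterative: anInteger
        "Iterative equivalent of rec: -- the recursive tail send becomes
         a loop, so the stack depth stays constant whatever the argument."
        | n |
        n := anInteger.
        [n < 1] whileFalse: [n := n - 1].
        ^'ready'

Timed with the same Time microsecondsToRun: expression, this version should scale roughly linearly with the argument, since no call stack is built up and no frames need to be turned into heap contexts.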
I heard about it, but I don't know the rationale behind it.
Johannes
Am 17.04.2006 um 14:48 schrieb Bryce Kampjes:
> Smalltalks generally don't do tail recursion optimisation so
> it'll consume stack.
The rationale is called "why bother". :) On the one hand, Smalltalk
programming culture never focused on list processing and, on the other, Smalltalk-80 from the start used heap-based activation records, so running out of stack was never a critical issue. Not more so than running out of heap, anyway. All that made tail recursion not an integral part of the language culture, as it is in Scheme, but only a potential optimization for not all that frequent use cases.

By saying this I'm of course ignoring those implementations not derived from ST-80, where you do run out of stack at a few hundred or a few thousand activations. But they didn't bother to implement tail recursion either, so I figure they didn't see it as an important enough issue anyway.

--
Vassili Bykov
Hi Vassili,
thank you, once more, for the detailed explanation.

Johannes
In reply to this post by jb
This is to keep the code debuggable at the Smalltalk semantics level.

I forgot the details, but it has to do with losing context information in such optimized runs, information that cannot always be reconstructed for showing and manipulating in the debugger (which by design has to show Smalltalk semantics while stepping, not the optimizations).

R
-
Reinout Heeck wrote:
> This is to keep the code debuggable at the Smalltalk semantics level.

Ah yes, and that too. But with many programmers happily trading convenience for performance, that's hardly the first reason.

The details are simple. A tail call is essentially a goto to the beginning of the function. Local state from the prior activation gets overwritten by each new run through the function. If you stop after a thousand tail-recursive calls and look at the stack, all you see is the frame for the thousandth activation, with what appears to be its caller really being the caller of the function from a thousand activations ago.

--
Vassili Bykov
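To see the difference Vassili describes, here is a small sketch that counts the live activations from inside the recursion, assuming an ST-80 style implementation that exposes thisContext with a sender chain (as VisualWorks does); the selector stackDepth is made up for illustration:

    stackDepth
        "Answer the number of activations currently on the call chain."
        | context depth |
        context := thisContext.
        depth := 0.
        [context notNil] whileTrue:
            [depth := depth + 1.
             context := context sender].
        ^depth

Changing the ifTrue: branch of rec: to [self stackDepth] and evaluating RecTest new rec: 1000 should answer a number a little over 1000: one live frame per recursive send. With tail-call elimination, the chain above rec: would collapse to a single frame.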