Re: Why is the method context size fixed to two magical numbers


Re: Why is the method context size fixed to two magical numbers

Max Leske
 
Thanks Clément, that explains it.


On 07 Jul 2016, at 13:52, [hidden email] wrote:

Hi,

In the interpreter VM, contexts are recycled and you have 2 pools of
contexts, one for each size. Contexts represent 30% of allocations in that
VM.
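The two recycling pools Clément describes can be sketched roughly like this (in C rather than the VM's actual source; the names, the struct layout, and the slot counts 16/56 are illustrative assumptions, not the interpreter's real identifiers). A freed context is pushed onto the free list for its size and reused on the next allocation instead of going back to the allocator:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical sketch of two per-size context recycling pools. */
enum { SMALL_SLOTS = 16, LARGE_SLOTS = 56 };

typedef struct Context {
    struct Context *nextFree;   /* link in the recycling pool */
    size_t numSlots;            /* SMALL_SLOTS or LARGE_SLOTS */
    /* ... named inst vars and stack slots would follow ... */
} Context;

static Context *smallPool = NULL;   /* freed small contexts */
static Context *largePool = NULL;   /* freed large contexts */

Context *allocateContext(int large)
{
    Context **pool = large ? &largePool : &smallPool;
    if (*pool) {                    /* recycle from the matching pool */
        Context *ctx = *pool;
        *pool = ctx->nextFree;
        return ctx;
    }
    /* pool empty: fall back to a fresh allocation of the fixed size */
    Context *ctx = malloc(sizeof(Context)
                          + (large ? LARGE_SLOTS : SMALL_SLOTS) * sizeof(void *));
    ctx->nextFree = NULL;
    ctx->numSlots = large ? LARGE_SLOTS : SMALL_SLOTS;
    return ctx;
}

void recycleContext(Context *ctx)
{
    Context **pool = (ctx->numSlots == LARGE_SLOTS) ? &largePool : &smallPool;
    ctx->nextFree = *pool;
    *pool = ctx;
}
```

With only two fixed sizes, "which pool?" is a single test, which is why the per-method size flag can be a single bit.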

In the Stack and Cog VM, the size is used to detect stack page overflow and
to allocate contexts; we could use a precise value instead of this bit
flag. Using a single bit saves memory, as you need 1 bit per compiled method
instead of 1 byte (or even 1 word!).
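As a rough illustration of that single-bit encoding (the bit position and all names here are assumptions for the sketch, not the actual CompiledMethod header layout): one header bit selects between the two fixed sizes, so no per-method frame size needs to be stored.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit position for the "large context flag". */
#define LARGE_FRAME_BIT (1u << 17)

enum { SmallContextSlots = 16, LargeContextSlots = 56 };

/* One bit in the method header picks one of two fixed context sizes,
   instead of storing an exact frame size per method. */
static unsigned contextSlotsFor(uint32_t methodHeader)
{
    return (methodHeader & LARGE_FRAME_BIT) ? LargeContextSlots
                                            : SmallContextSlots;
}
```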

Cheers

On Thu, Jul 7, 2016 at 1:02 PM, Max Leske <[hidden email]> wrote:


Hi,

I’ve been looking at the code that creates new method contexts (in the
image and in the VM) and I can’t figure out why it would be beneficial to
fix the size of the context to (currently) 16 (22) or 56 (62) bytes.
Clément says in one of his blog entries that this is for performance
reasons but I don’t see where the performance gain is (the Blue Book doesn’t
mention this either). At allocation time the VM simply takes the number of
bytes to allocate from the “large context flag”. Maybe the performance gain
comes from the idea that the size does not have to be calculated? But then
I ask: why not simply store the frame size instead of this weird flag?

Cheers,
Max


Re: Why is the method context size fixed to two magical numbers

Eliot Miranda-2
 
Hi All,

On Jul 7, 2016, at 10:29 AM, Max Leske <[hidden email]> wrote:

Thanks Clément, that explains it.


On 07 Jul 2016, at 13:52, [hidden email] wrote:

Hi,

In the interpreter VM, contexts are recycled and you have 2 pools of
contexts, one for each size. Contexts represent 30% of allocations in that
VM.

In the Stack and Cog VM, the size is used to detect stack page overflow and
to allocate contexts; we could use a precise value instead of this bit
flag. Using a single bit saves memory, as you need 1 bit per compiled method
instead of 1 byte (or even 1 word!).

And if you look at the Blue Book definition you'll see that a large context's body (because object table entries were separate) is exactly twice the length of a small context's body.  Small = size, class, 6 named inst vars, 12 stack slots = 20 16-bit slots.  Large = size, class, 6 named inst vars, 32 stack slots = 40 16-bit slots.  This is to avoid fragmentation, allowing a freed large context to be divided in the allocator into two small context bodies.
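A quick sanity check of that arithmetic (the enum names here are descriptive, not the original field names): the large body is exactly twice the small one, so a freed large body splits cleanly into two small bodies.

```c
#include <assert.h>

/* Blue Book context body sizes, in 16-bit slots. */
enum {
    SizeAndClassSlots = 2,    /* size word + class pointer */
    NamedInstVarSlots = 6,    /* the 6 named instance variables */
    SmallStackSlots   = 12,
    LargeStackSlots   = 32,
    SmallBodySlots = SizeAndClassSlots + NamedInstVarSlots + SmallStackSlots, /* 20 */
    LargeBodySlots = SizeAndClassSlots + NamedInstVarSlots + LargeStackSlots  /* 40 */
};
```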

None of that is compelling now with a two-space allocator and lazy context creation.  Hence our intent to increase the large context size to allow for high argument count methods.  Given lazy creation we might even make large contexts stretchy.

Cheers

On Thu, Jul 7, 2016 at 1:02 PM, Max Leske <[hidden email]> wrote:


Hi,

I’ve been looking at the code that creates new method contexts (in the
image and in the VM) and I can’t figure out why it would be beneficial to
fix the size of the context to (currently) 16 (22) or 56 (62) bytes.
Clément says in one of his blog entries that this is for performance
reasons but I don’t see where the performance gain is (the Blue Book doesn’t
mention this either). At allocation time the VM simply takes the number of
bytes to allocate from the “large context flag”. Maybe the performance gain
comes from the idea that the size does not have to be calculated? But then
I ask: why not simply store the frame size instead of this weird flag?

Cheers,
Max