thisContext in direct-call implementation (was: Re: [amber-lang] Sourcemaps: the holy grail for Amber?)


Herby Vojčík


Nicolas Petton wrote:
> Well, if we go this path, thisContext will for sure go away. We are
> still thinking about it though, not sure yet which path we'll take :)

It depends. There are solutions that preserve thisContext as well as
speed, but they are more complicated.

One solution I see boils down to having optimized as well as deoptimized
code. When someone needs thisContext, the optimized code path must be
interrupted, stack info collected, and replayed using deoptimized code,
which gives you a proper context.

This could be implemented, for example, by putting the body of every
optimized method in a try/catch and updating a local variable $pc to
denote where in the body execution actually is. Otherwise, the code can
stay pretty much as it is (for $pc to be accurate, there must be no
chaining or embedding of calls - every single call needs to be done in
isolation).
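The try/catch plus $pc idea could look roughly like this in the generated JS. This is a minimal sketch under my own assumptions: send, DeoptimizeRequest, and the frame shape are illustrative names, not Amber's actual codegen.

```javascript
// Hypothetical sketch, not actual Amber output.
function DeoptimizeRequest() { this.frames = []; }

// Stand-in for a message send; illustrative only.
function send(receiver, selector, args) {
  return receiver[selector].apply(receiver, args);
}

function exampleMethod(receiver, arg) {
  var $pc = 0, tmp;
  try {
    $pc = 1; tmp = send(receiver, "foo", []);   // one isolated call per step,
    $pc = 2; tmp = send(tmp, "bar", [arg]);     // no chaining or embedding
    return tmp;
  } catch (e) {
    if (e instanceof DeoptimizeRequest) {
      // collect this frame's locals and resume point, then keep unwinding
      e.frames.push({ method: "exampleMethod", pc: $pc,
                      locals: { arg: arg, tmp: tmp } });
    }
    throw e;
  }
}
```

Because $pc is assigned immediately before each call, the catch clause always knows which step was interrupted.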

Overall, this imposes very little overhead, except, sadly, for local
returns, which now pay the penalty of being caught and rethrown in every
frame of the stack, instead of today, where methods without try/catch
are simply skipped.


When thisContext is needed, a DeoptimizeRequest is thrown, and in the
catch clauses all local vars, including $pc, are collected, until it is
finally caught somewhere deep down; there the deoptimized code is called
with the newly collected context.

The deoptimized code will be slow and complicated, but context-friendly,
so it is able to skip the already processed part of the code and continue
straight from the place of the last call. Thus, the stack is
reconstructed, but now with deoptimized functions in it, until the
thisContext getter is finally invoked again; this time it succeeds and
the code proceeds.
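One way a deoptimized twin could skip the already processed part is a switch on the recovered $pc with deliberate fall-through. Again a sketch under my own assumptions about the frame shape; the three arithmetic steps stand in for three isolated calls.

```javascript
// Hypothetical deoptimized twin of a method that computes ((x + 1) * 2) - 3
// in three recorded steps. It takes a recovered frame and falls through the
// switch, so steps completed before the interruption are not redone.
function compute$deopt(frame) {
  var x = frame.locals.x, tmp = frame.locals.tmp;
  switch (frame.pc) {           // deliberate fall-through: resume mid-body
    case 0:
    case 1: tmp = x + 1;        // step 1
    case 2: tmp = tmp * 2;      // step 2
    case 3: tmp = tmp - 3;      // step 3
  }
  return tmp;
}
```

A frame recorded at pc 2 resumes with the recovered tmp and never re-executes step 1, which is exactly the "continue straight from the place of the last call" behaviour described above.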

I think the deoptimized code at the top of the stack _can_ call
optimized code again; there is no need to cripple the speed from that
point on. If that optimized code wants thisContext again, the
DeoptimizeRequest is caught not deep down inside boot.js, but at the
deoptimized/optimized boundary.
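That boundary could be a small guard that catches the DeoptimizeRequest locally instead of letting it travel all the way down to boot.js. The names here are hypothetical, as before.

```javascript
// Hypothetical boundary guard between deoptimized and optimized code.
function DeoptimizeRequest() { this.frames = []; }

// Call the fast version; only if it asks to be deoptimized, replay via the
// slow, context-friendly version. Unrelated errors pass through untouched.
function callOptimized(optimized, deoptimized, args) {
  try {
    return optimized.apply(null, args);
  } catch (e) {
    if (e instanceof DeoptimizeRequest) {
      return deoptimized(e.frames, args);   // replay at the boundary
    }
    throw e;
  }
}
```

With such a guard, each re-entry into optimized code costs one extra try/catch, and deoptimization stays local to the frame that requested it.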

The devil is in the details, of course; the question is whether the
replaying idea can work. The big risk I see is that there may be some
values in local vars that would not be reusable after replaying; but
this is only a risk when the values are tightly bound to the actual
execution context (like a self-reference to the method itself:
arguments, arguments.callee etc.). If these are banned (I'd like to see
Amber move to strict mode), maybe the risk could be avoided.
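For illustration, strict mode already bans exactly this kind of execution-bound value:

```javascript
"use strict";
// In strict mode, reading arguments.callee throws a TypeError, so a local
// variable could never capture this execution-bound value in the first place.
function calleeIsBanned() {
  try {
    return arguments.callee !== undefined;  // access throws before comparison
  } catch (e) {
    return e instanceof TypeError;
  }
}
```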

Herby

> Nico
>
> Denis Kudriashov<[hidden email]>  writes:
>
>> Hello
>>
>> 2012/12/11 Nicolas Petton<[hidden email]>
>>
>>> Herby Vojčík<[hidden email]>  writes:
>>>>> The advantages:
>>>>>
>>>>> - we get the speed of JavaScript as we would compile to straight JS
>>>>>     (yeah!!)
>>>> We sort-of do now. I don't get this.
>>> the message_send branch is a step forward, yes. But the idea was to
>>> remove SmalltalkContext completely. Then we would get the speed of JS.
>>>
>>> Nico
>>>
>> Does this mean that resumable exceptions, continuations and other thisContext
>> stuff can't be implemented in Amber?
>>
>> Is it possible to make SmalltalkContext configurable? So code which needs
>> it could be compiled specifically.