Debugging and optimizations


Debugging and optimizations

Torsten Bergmann
Hi,

for tracing and logging one could easily write:

foo
  ...
  Log enabled ifTrue: [ "do some tracing here" ].
  ...

but this always requires some message sends/checks to see if the debug mode is active.
This is OK while debugging/tracing to find errors - but at runtime one wishes to have the
best performance without any effect from tracing/debug code.

Smalltalk/MT had a nice optimization feature that allowed mixing in debug code without
additional overhead when in non-debug mode. If I remember correctly the basic idea was the
following: there is a pool constant _DEBUG_INTERNAL that you compare to another pool constant
TRUE (representing the value 1).

So with an expression like _DEBUG_INTERNAL == TRUE this ends up as a 1 == 1 comparison,
which is always true if the pool constants/values have the same value of 1.
A follow-up message send of #ifTrue: will then always be executed - which tells the compiler
to include the (debug) code:

foo
   ...
   _DEBUG_INTERNAL == TRUE ifTrue: [
       Processor outputDebugLine: 'Trace something'
   ].
   ...

So in this case (always true) this was optimized by the compiler to

foo
   ...
   Processor outputDebugLine: 'Trace something'
   ...

When _DEBUG_INTERNAL was set to FALSE (represented by the value 0, as in C/C++) this
ends up as 0 == 1, which is always false. So the follow-up message
send of #ifTrue: was optimized away (removed as dead code that is never reached).

foo
   ...
   ...

So depending on the pool flag _DEBUG_INTERNAL and a recompilation of (all) methods, one
could add debug/trace/logging code without runtime overhead. Yes, this is very C-like
conditional compilation - but it is very effective in keeping the influence of debug/tracing
code low.

I know we usually have the debugger in front of us - but sometimes a trace or log is
required to see where code crashes. Think of a headless situation in a webserver or
on a small device like the Pi where you want to reduce the performance overhead.

This leads to several questions:
- Is something like this feasible/already possible in Pharo?
- Any pointers on how Opal does optimizations? Do we have similar optimizations (removing
  code that could not be reached) that could be used?
- How do you usually deal with additional tracing code when you do not want to
  have too much runtime overhead?

Thanks
T.


Re: Debugging and optimizations

Eliot Miranda-2
Hi Torsten,

On Jun 12, 2015, at 2:49 PM, "Torsten Bergmann" <[hidden email]> wrote:

> This leads to several questions:
> - Is something like this feasible/already possible in Pharo?
> - Any pointers on how Opal does optimizations? Do we have similar optimizations (removing
>  code that could not be reached) that could be used.
> - How do you usually deal with additional tracing code when you do not want to
>  have too much runtime overhead.

One way is to hold the logger in an inst var and have two loggers, one null and one not.  Then the overhead in the null case is marshalling arguments and a frameless send.  If there are no expensive arguments to marshal (such as a block or an expression) then I expect this will be quite cheap.
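
A minimal sketch of that idea - the class and selector names (NullLogger, TranscriptLogger, #log:) are placeholders, not an existing library:

   NullLogger >> log: aString
       "deliberately do nothing"

   TranscriptLogger >> log: aString
       Transcript show: aString; cr

   MyService >> foo
       "logger is an inst var holding either NullLogger new or TranscriptLogger new"
       ...
       logger log: 'Trace something'.
       ...

Switching between tracing and production is then just a matter of assigning a different logger to the inst var - no recompilation needed.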

The analogue of this approach that I use in the VM is the many, many assertions, which compile to a macro invocation that in production code compiles to nothing.



Re: Debugging and optimizations

Ben Coman
In reply to this post by Torsten Bergmann
On Sat, Jun 13, 2015 at 5:49 AM, Torsten Bergmann <[hidden email]> wrote:

> Smalltalk/MT had a nice optimization feature that allowed to mix in debug code without
> additional overhead when in non-debug mode. If I remember correctly the basic idea was the
> following: there is a pool constant _DEBUG_INTERNAL that you compare to another pool constant
> TRUE (representing the value 1).
>
> This leads to several questions:
> - Is something like this feasible/already possible in Pharo?
> - Any pointers on how Opal does optimizations? Do we have similar optimizations (removing
>   code that could not be reached) that could be used.
> - How do you usually deal with additional tracing code when you do not want to
>   have too much runtime overhead.

Does Pharo have pool *constants* or only pool *variables* ?  I don't
think a variable can/should affect compilation.

I like Eliot's approach of a variable holding a NullLogger, but if you
have expensive arguments to marshal and you need absolute
performance, perhaps an alternative proposal would be to consider
expanding the semantics so that pragmas can affect compilation, such that you
can use angle-bracket blocks throughout the method that look up a
variable during compilation:
   <compileThis: [Log this: 'error'] when: DebugFlag>

Another alternative may be to use MetaLinks to install & remove the
debugging code, plus something like the pragma syntax so these can be
written inline like this...
   myMethod
       x := y + 1.
       <  metalink: #debugging installWhenActive: [Log this: y]  >
       z := x + y.
Then you can enable debugging with...    MetaLink activate: #debugging.
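
For contrast, a rough sketch of installing and removing such a link programmatically with the MetaLink / Reflectivity machinery - MyClass>>#myMethod and the tracing block are just placeholders, and the exact protocol is an assumption that may differ between versions:

   | link |
   link := MetaLink new
       metaObject: [ Transcript show: 'myMethod reached'; cr ];
       selector: #value;
       control: #before.
   "install the link on the method's AST node"
   (MyClass >> #myMethod) ast link: link.
   "and later, to take the debugging code out again"
   link uninstall.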

cheers -ben


Re: Debugging and optimizations

Marcus Denker-4
>>
>> This leads to several questions:
>> - Is something like this feasible/already possible in Pharo?
>> - Any pointers on how Opal does optimizations? Do we have similar optimizations (removing
>>  code that could not be reached) that could be used.
>> - How do you usually deal with additional tracing code when you do not want to
>>  have too much runtime overhead.
>
> Does Pharo have pool *constants* or only pool *variables* ?  I don't
> think a variable can/should affect compilation.
>
They are the same as Globals: there is an association in the literal frame
whose value is pushed to read or stored into for write.


> I like Eliot's approach of a variable holding a NullLogger, but if you
> have expensive arguments to marshall and you need absolute
> performance, perhaps an alternative proposal would be to consider
> expanding the semantic that pragmas affect compilation, such that you
> can use angle bracket blocks throughout the method that look up a
> variable during compilation
>   <compileThis: [Log this: 'error'] when: DebugFlag>
>
> Another alternative may be to use MetaLinks to install & remove the
> debugging code, plus something like the pragma syntax so these can be
> written inline like this...
>   myMethod
>       x := y + 1.
>       <  metalink: #debugging installWhenActive: [Log this: y]  >
>       z := x + y.
> Then you can enable debugging with...    MetaLink activate: #debugging.


I am a bit sceptical about having meta links show up as syntax… I see two possibilities
one could experiment with:

1) just add a Global  + put a link.

someMethod

        ...
        LogHere
        …


Opal does not compile the pushLiteralVariable/pop; it only puts the association in the literal frame.
Then you could put a link on this global:

LogHere link: myLink.

after which all methods that reference LogHere would be recompiled with the code of the link compiled
in. The link could reify everything needed.

2) we could add the concept of a #delete link and just annotate all sends to the Logger with that.

        Marcus






Re: Debugging and optimizations

Marcus Denker-4
In reply to this post by Torsten Bergmann
>
> I know we usually have the debugger in front of us - but sometimes a trace or log is
> required to see where code crashes. Think of a headless situation in a webserver or
> a small device like the pi where you want to reduce the performance overhead.
>
> This leads to several questions:
> - Is something like this feasible/already possible in Pharo?
> - Any pointers on how Opal does optimizations? Do we have similar optimizations (removing
>  code that could not be reached) that could be used.

The only optimisation we do is that we do not compile a

push
pop

for variables (and even blocks) that are compiled for “effect” as there is no effect…

One could of course add support for a special case like that.

But what I don’t really like is that all this would require explicit recompilation… and
it would always be global…

        Marcus

Re: Debugging and optimizations

Clément Béra
This is an interesting problem. There is currently no simple way of executing a message at compile-time instead of at runtime in Pharo, which would be useful for having settings with no runtime overhead.

I did a simple extension to the Opal compiler for this purpose, adding the message Cvalue. Using AST manipulation before the semantic analysis, Opal compiles it either to the result of evaluating its receiver block at compile time or to the inlined body of that block, depending on the result of #allowPrecompilation sent to the class.

Example:

MyClass>>#example
    ^ [ Smalltalk vm class ] Cvalue

If MyClass allowPrecompilation answers true, it is compiled to:
    ^ VirtualMachine
If MyClass allowPrecompilation answers false, it is compiled to:
    ^ Smalltalk vm class
If MyClass does not use the compiler extension, Cvalue is implemented as ^ self value on BlockClosure.

In your case it's slightly different. If you write

[ Processor outputDebugLine: 'Trace something' ] Cvalue.

It would either compile to the body of the block or evaluate the expression at compile-time, and you don't want to evaluate the expression at compile-time but just delete it.

I believe something similar to what I did could solve your problem. However, it's difficult to do something easy to read and to use without extending the Smalltalk language too much... Here I added a special selector, which is an important constraint, so I don't think such code should be in the base image.

If you're interested, the code is on Smalltalkhub under ClementBera/NeoCompiler. You need to recompile the NeoCompiler examples once they're loaded to get them working; then you can look at the bytecode generated in NeoCompilerExample>>#example and NeoCompilerExample2>>#example to easily understand what is compiled. If you change NeoCompiler>>#precompile: to replace the 'aBlock CValue' expression with (RBLiteralNode value: nil) instead of the result of the expression, it should do what you want.

Now, as I said, this is not difficult to implement; the problem is how to do it without adding extensions and constraints to the Smalltalk semantics.
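
A rough sketch of what that change could look like - this is an assumption for illustration, not the actual NeoCompiler code; it just shows the idea of dropping the guarded send before semantic analysis:

   precompile: aMethodNode
       "Sketch: replace every '[ ... ] Cvalue' send by a nil literal,
        so the guarded code simply disappears from the compiled method."
       aMethodNode nodesDo: [ :node |
           (node isMessage and: [ node selector = #Cvalue ]) ifTrue: [
               node replaceWith: (RBLiteralNode value: nil) ] ].
       ^ aMethodNode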




Re: Debugging and optimizations

Thierry Goubier
On 13/06/2015 14:39, Clément Bera wrote:
> This is an interesting problem. There is currently no simple way of
> executing a message at compile-time instead of at runtime in Pharo,
> which is useful to have settings but no runtime overhead.

This is the discussion I wanted to have about pragmas... It didn't turn
out this way ;)

Returning to the original problem, would something like aspect-oriented
programming [1][2] implemented via RB rewriting or MetaLinks be a solution
for modeling the problem?

It is easy in Smalltalk/Pharo to search for specific pragmas or comments
to trigger the rewriting on or off, and easy to do the rewriting even
without MetaLink or RB support[3].

Regards,

Thierry

[1] https://en.wikipedia.org/wiki/Aspect-oriented_programming
[2] http://www.eclipse.org/aspectj/
[3] https://github.com/ThierryGoubier/Jejak


Re: Debugging and optimizations

Eliot Miranda-2
In reply to this post by Torsten Bergmann
Hi Torsten, Hi Ben,


On Jun 12, 2015, at 2:49 PM, "Torsten Bergmann" <[hidden email]> wrote:

> This leads to several questions:
> - Is something like this feasible/already possible in Pharo?

Ben's reply reminded me of a prototype I did in VW that we never released that could be adapted.

I added a ConstantBinding class alongside VariableBinding.  The compiler accessed the value of the ConstantBinding directly instead of going through it.  But the compiler included the ConstantBinding in the method's literals just as it includes the selectors of optimized messages such as ifTrue:.  There was a convention whereby code could be recompiled after the values of ConstantBindings were evaluated during class initialization.  It was something like

    initialize
        self recompileChangedConstantsAfter:
            [...]

Assigning to a ConstantBinding raised a proceedable exception that was caught by recompileChangedConstantsAfter:, which would allow the assignment to proceed and add the ConstantBinding to a set.  After the block had evaluated, recompileChangedConstantsAfter: would use the ConstantBindings' scopes (they need back pointers to their defining scopes) to find the set of methods that referred to ConstantBindings modified in the block and recompile them.

Now if one used this scheme one could also modify the compiler to compile certain idioms specially.  For example a ConstantBinding assigned a block, or a ConstantBinding assigned a Boolean flag.  The key here is that the ConstantBinding included in the method's literals provides a way to quickly distinguish between a method merely making use of true or false and a method making optimized use of a flag in a ConstantBinding.
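
Under such a scheme Torsten's guard could stay written exactly as in his example, say with DebugFlag being a hypothetical ConstantBinding that holds false in production:

   foo
       ...
       DebugFlag ifTrue: [ Processor outputDebugLine: 'Trace something' ].
       ...

and a recompilation triggered after DebugFlag is reassigned in a class-side initialize could then either compile the tracing code in or fold the branch away.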



> - Any pointers on how Opal does optimizations? Do we have similar optimizations (removing
>  code that could not be reached) that could be used.

Remember that Opal, just like most Smalltalk compilers, optimizes ifTrue:, whileTrue:, and: et al.  And just like my closure extension to the Squeak compiler, Opal optimizes temp vars accessed in blocks but defined in outer scopes into copied variables if possible.



Re: Debugging and optimizations

Holger Freyther
In reply to this post by Clément Béra

> On 13 Jun 2015, at 14:39, Clément Bera <[hidden email]> wrote:


Dear Clement,

> This is an interesting problem. There is currently no simple way of executing a message at compile-time instead of at runtime in Pharo, which is useful to have settings but no runtime overhead.
>
> I did a simple extension for opal compiler for this purpose, adding the message Cvalue, which is compiled by Opal either to its receiver's block value or the result of its value depending on the result of #allowPrecompilation sent to its class using AST manipulation before the semantic analysis.

Are you aware of the compile-time constants in GNU Smalltalk? The syntax that was
picked is ##(EXPR). E.g. the below is an example of its usage in GNU Smalltalk:


someMethod
        ^##(Character value: 8)

the compiler will create a CompiledMethod that holds a literal ($<8>) and will return
it. In general I find that syntax quite nice.
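
Applied to Torsten's guard that might read like this (MyPackageSettings debugEnabled is just a placeholder for whatever compile-time flag one uses):

   foo
       ##(MyPackageSettings debugEnabled) ifTrue: [
           Processor outputDebugLine: 'Trace something' ]

Note that the flag's value is frozen into the method as a literal, so changing it still means recompiling the method, and the ifTrue: on a literal false stays very cheap but is only removed entirely if the compiler also folds constant branches.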

cheers
        holger

Re: Debugging and optimizations

Thierry Goubier
On 14/06/2015 18:39, Holger Freyther wrote:

> are you aware of the compile time constants in GNU Smalltalk. The syntax that was
> picked is ##(EXPR). E.g. the below is an example for usage in GNU Smalltalk.
>
>
> someMethod
> ^##(Character value: 8)
>
> the compiler will create a CompiledMethod that holds a literal ($<8>) and will return
> it. In general I find that syntax quite nice.

When porting SmaCC from the Dolphin version, I found that optimisation
in many places, as well as methods returning once blocks:
^ [ Dictionary new add: .... ] once.

So, I'd say that Cvalue is the same as once :)

Regards,

Thierry


Re: Debugging and optimizations

Ben Coman
On Mon, Jun 15, 2015 at 2:34 AM, Thierry Goubier
<[hidden email]> wrote:

> When porting SmaCC from the Dolphin version, I found that optimisation in
> many places, as well as methods returning once blocks:
> ^ [ Dictionary new add: .... ] once.
>
> So, I'd say that Cvalue is the same as once :)
>

Does #once evaluate at compile time?
Or the first pass at run-time?
The latter would have a different runtime overhead to CValue.

There are probably valid arguments for not introducing new syntax
(including cross-dialect compatibility), but for something that occurs
at compile-time rather than run-time, I think I'd prefer syntax to a
keyword.  To reuse existing elements that are not compiled, that is
either comment quotes or pragma angle brackets.  Maybe combine pragma
with an executable block...
     <[ doThis at compileTime ]>

Marcus said:
> I am a bit sceptical to have meta links show up as syntax…
> I see two possibilities one could experiment with:

btw, Are meta links planned to show up as highlighting in code views?

btw2, I guess if debugging code is a meta link, it won't get filed out
or saved by Monticello.  This would have the benefit of minimising ad hoc
logging accidentally ending up in packages, but wouldn't suit a case
where the logging is meant to permanently remain spread throughout the
code, ready to be enabled when needed.  Otherwise you'd need
code in a separate location that installs the debugging metalinks,
which may be susceptible to getting out of sync with the main code it
references.

cheers -ben


Re: Debugging and optimizations

Thierry Goubier
Hi Ben,

On 15/06/2015 01:09, Ben Coman wrote:

> Does #once evaluate at compile time?
> Or the first pass at run-time?
> The latter would have a different runtime overhead to CValue.

I don't know. It could even be at first run with a recompilation of the
method, to add another solution to the mix :)

> There are probably valid arguments for not introducing new syntax
> (including cross-dialect compatibility), but for something that occurs
> at compile-time rather than run-time, I think I'd prefer syntax to a
> keyword.  To reuse existing elements that are not compiled, that is
> either comment quotes or pragma angle brackets.  Maybe combine pragma
> with an executable block...
>       <[ doThis at compileTime ]>

#define and friends it is :) In short, a dedicated syntax for compile
time... aka macros.

Thierry



Re: Debugging and optimizations

Marcus Denker-4
In reply to this post by Ben Coman

>
> btw, Are meta links planned to show up as highlighting in code views?
>

Yes, we will soon have the new editor everywhere, which will make this possible.

The most important thing is to visualize breakpoints. I want to have dedicated visuals
for that.

Then, later, I want to show meta links in general, but in a non-intrusive way
(maybe optionally?). There should be a “meta link editor”, too.

But all that needs experiments...

> btw2, I guess if debugging code is a meta link, it won't get filed out
> or saved by Monticello.  This would have a benefit to minimise adhoc
> logging accidentally ending up in packages, but wouldn't suit a case
> where the logging is meant to permanently remain spread throughout the
> code ready for all to be enabled when needed.  Otherwise you'd need
> code in a separate location that installs the debugging metalinks,
> which may be susceptible to getting out sync with the main code it
> references.
>
Yes, meta links and breakpoints are never saved with the code.

        Marcus

Re: Debugging and optimizations

Clément Béra
In reply to this post by Thierry Goubier
Hey all,

Holger Freyther, I didn't know about ##() and once in GNU Smalltalk, but that was definitely what I was trying to experiment with.

As Thierry mentioned, these elements come from C/C++, where you have macros that are very useful when writing high-performance code, so your code can be both flexible and very efficient.

Ben, you're right, normal users should not be confused by those features or new syntax elements. For Cvalue, I added this on the class side of the classes using it:

MyClass class>>#compiler
    | comp |
    comp := super compiler.
    comp compilationContext semanticAnalyzerClass: NeoCompiler.
    ^ comp

I think the solution for this problem is to add another Opal compilation option for precompilation. This way, if you want to use your compiler extension, you have either to:
- override #compiler on the class side, for a class-hierarchy granularity
- add a pragma such as <compilationOptions: + optionPrecompilation>, for a per-method granularity

Then in the methods affected by the compiler extension, you can either use a different parser with new syntax elements such as ##() or use new special selectors such as #once or #Cvalue.





Re: Debugging and optimizations

Ben Coman
In reply to this post by Clément Béra
Whoops, please excuse my previous click-too-fast.

On Mon, Jun 15, 2015 at 3:51 PM, Clément Bera <[hidden email]> wrote:

> I think the solution for this problem is to do another Opal compilation
> options for precompilation. This way, if you want to use your compiler
> extension, you have either to:
> - override #compiler class side for a hierachy of classes granularity
> - add a pragma such as <compilationOptions: + optionPrecompilation> for a
> per method granularity

To make it more explicit, possibly consider requiring the special
selector to be mentioned in the pragma.
    <precompile: #once>  or   <precompile: #Cvalue>

Now I wonder how this precompiled code might look or operate within a debugger?
cheers -ben




Re: Debugging and optimizations

Guillermo Polito
Well, we already have an in-image #once-like solution based on memoization.

block := [ :key | key factorial ] memoized.

"first time"
[1000 to: 1500 do: [:index |block value: index]] timeToRun. 
 "0:00:00:01.598"

"second time, cached"
[1000 to: 1500 do: [:index |block value: index]] timeToRun.
 "0:00:00:00"

And it requires no compiler or syntax changes.
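
For reference, a minimal sketch of what such a #memoized for one-argument blocks could look like (an assumption for illustration - the actual implementation in the memoization package may differ):

   BlockClosure >> memoized
       "Answer a one-argument block that caches results per argument.
        Sketch only; assumes the receiver takes exactly one argument."
       | cache |
       cache := Dictionary new.
       ^ [ :arg | cache at: arg ifAbsentPut: [ self value: arg ] ]

The cache lives in the returned closure, so the first evaluation per key pays the full cost and later ones are just a dictionary lookup.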

In any case, changing the compiler would be good as:
  - an example of how to write Opal extensions
  - an additional package for those who would like to play and do hacky stuff (why not huh?)

So I would not discourage it (but it would be good to have it as an extra loadable package instead of an in-image one).

Guille




Re: Debugging and optimizations

Clément Béra
In reply to this post by Ben Coman


2015-06-15 10:07 GMT+02:00 Ben Coman <[hidden email]>:
> To make it more explicit, possibly consider requiring the special
> selector to be mentioned in the pragma.
>     <precompile: #once>  or   <precompile: #Cvalue>
>
> Now I wonder how this precompiled code might look or operate within a debugger?

The debugger works fine, don't worry.
It either ignores the Cvalue block if precompiled, or highlights the inner statements of the block if not precompiled, while debugging the outer context.
As my code edits the AST and the AST is kept in a cache, there might be some issues with the tools, but nothing hard to solve.

As Guille said, I think this is an interesting experiment, and you can use it and hack it if you want, but I am not sure it should be part of the base image.


