Materializing BlockClosure's outerContext?

fniephaus
 
Hi all,

I'm trying to understand when the outerContext of a BlockClosure is materialized in Cog. I'm assuming Cog does much better than materializing the context at allocation time of the BlockClosure, but I wasn't able to find that out by browsing VMMaker code.

Example: In `BlockClosureTest>>#setUp`, a BlockClosure is stored in an inst var and then accessed later in some tests. Assuming contexts are only materialized when necessary, how does Cog determine that the context of this closure has escaped? In the previous example, thisContext is also stored in an inst var. Whenever a context is stored like this, we must consider it escaped. Does the same apply for closures? As in, must we consider their home context escaped as soon as closures are stored somewhere?
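
For reference, the pattern looks roughly like this (a sketch from memory, not necessarily the exact test code):

    BlockClosureTest >> setUp
        "store a closure and the current context in instance variables"
        aBlockClosure := [ 100@100 corner: 200@200 ].
        homeOfABlockClosure := thisContext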

Cheers,
Fabio

Re: Materializing BlockClosure's outerContext?

Clément Béra
 
Hi,

There is no such thing as escape analysis. There is also no such thing as materializing contexts on demand; what we have are married contexts, which are materialized upon divorce (typically, a store into the context).

At BlockClosure allocation time Cog allocates a Context object large enough to hold the context's contents, but sets only the fields available in a dead context (args, method, receiver, etc.). At this point the context is married to a frame. All accesses to contexts are caught by the VM: if the context is still married to its frame, the frame is read instead of the context, and the frame and context divorce if the context is written to. Upon divorce the context's fields are set and the context becomes single; single contexts are normal objects. If the frame dies (returns), the context becomes a dead context and only the fields written at creation time are available. The tricky bit is to check whether a married context refers to the frame it is married to, or to random data on the stack: upon return, frames do not mark their context as dead (that would cost too much execution time, while contexts almost never access their frame). Aside from the reflective API, debugging and so on, contexts are accessed only for non-local returns, which need to check whether the married frame is still alive. If you want to go deeper into this topic, read Eliot's two blog posts on context/frame wedding and divorce.
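
For illustration, the non-local return check can be observed from the image side with a sketch like the following (the class and method names are made up):

    SomeClass >> escapingBlock
        "answer a block whose ^-return targets this very activation"
        ^[ ^42 ]

    "evaluated later, for example in a workspace:"
    SomeClass new escapingBlock value.
    "by now the block's home frame has returned, so the aliveness check finds
     a dead context and the image signals BlockCannotReturn instead of
     performing the non-local return"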

What you are describing, with escape analysis, is performed by other Smalltalks in the bytecode compiler. The idea is that a block is marked as clean (no escaping variable, no non-local return), copying (copies enclosing values, but no non-local return) or full (a non-local return is present). In that scheme, clean blocks do not require any allocation, copying blocks never require their outerContext, and full blocks do. That speeds up execution, but forbids some uncommon debugging operations. Clean blocks also mess up closure identity.
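
In source form, the three kinds look roughly like this (a sketch; the class, method and variable names are made up):

    ExampleClass >> classifyBlocks
        | offset limit |
        offset := 10.
        limit := 100.
        "clean: no outer temps, no ^-return"
        #(1 2 3) collect: [:each | each * each].
        "copying: reads the enclosing temp offset, but still no ^-return"
        #(1 2 3) collect: [:each | each + offset].
        "full: the ^-return needs the enclosing method's context (the outerContext)"
        #(1 2 3) do: [:each | each > limit ifTrue: [^each]].
        ^nil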

Closures themselves are always allocated, except in the Sista JIT, where inlining can prove that they don't escape.

Best!


Re: Materializing BlockClosure's outerContext?

Eliot Miranda-2
In reply to this post by fniephaus
 
Hi Fabio,  Hi All,

On Fri, Jan 4, 2019 at 9:11 AM Fabio Niephaus <[hidden email]> wrote:
 
Hi all,

I'm trying to understand when the outerContext of a BlockClosure is materialized in Cog. I'm assuming Cog does much better than materializing the context at allocation time of the BlockClosure, but I wasn't able to find that out by browsing VMMaker code.

Unless Sista is being used, the outer context is always materialized whenever a closure is created.  In the StackInterpreter the two methods called by the actual bytecode routines are pushClosureNumArgs:copiedValues:blockSize: and pushFullClosureNumArgs:copiedValues:compiledBlock:receiverIsOnStack:ignoreContext:.  You'll see that the first thing pushClosureNumArgs:copiedValues:blockSize: does is ensure that the current frame is married, i.e. it materializes the context if it isn't already.  In pushFullClosureNumArgs:copiedValues:compiledBlock:receiverIsOnStack:ignoreContext: the first thing that happens is the same, unless the ignoreContext flag has been set, in which case the outerContext will be nil.
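
You can observe this from the image side, e.g. in a workspace (a sketch; the last line should normally answer true with the default compiler, since the block's outer context and the doit's own married, now materialized, context are one and the same object):

    | block |
    block := [ 42 ].
    block outerContext.                "non-nil: materialized when the closure was created"
    block outerContext == thisContext  "normally true: same home context"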

In Cog the JIT generates code to materialize contexts as quickly as possible.  See CogObjectRepresentationForSpur>>#genGetActiveContextLarge:inBlock:, which is sent four times to create the four trampolines ceSmallMethodContext, ceSmallBlockContext, ceSmallFullBlockContext, and ceLargeMethodContext.  These are used in turn by CogObjectRepresentationForSpur>>#genGetActiveContextNumArgs:large:inBlock:, which is used by CogObjectRepresentationForSpur>>#genNoPopCreateClosureAt:numArgs:numCopied:contextNumArgs:large:inBlock: & CogObjectRepresentationForSpur>>#genCreateFullClosure:numArgs:numCopied:ignoreContext:contextNumArgs:large:inBlock:, which generate the JIT code that parallels the StackInterpreter bytecodes.

Some rationale:
In VW there is no adaptive optimization and so no closure inlining.  Instead there are three different kinds of blocks:

clean: created at compile time; no copied values, only arguments, and hence no need for an outer context.  If one is in a clean closure in a debugger, there is no information as to where the closure was created; only its static method name is known.
copying: created at run-time, but since there is no up-arrow return there is no need to create an outer context.  If one is in a copying closure in a debugger, there is no information as to where the closure was created; only its static method name is known.
full: created at run-time; requires the outerContext to be materialized.

When I came to add closures to Squeak (good in itself but also essential in implementing a sane context-to-stack mapping scheme) there were only 8 unused bytecodes, and two of these were already being used in a Newspeak implementation I had affection for.  Qwaq wanted to keep things as simple as possible, as did I.  Therefore the logical thing to do was to only provide closures that always had an outerContext; KISS.  Implementing closures used 5 bytecodes (create closure, create indirection vector, push/store/pop indirect), leaving three to spare (at this time I had no idea about a Sista bytecode set or about multiple bytecode sets). Knowing that at some stage adaptive optimization would be a much better approach than simply micro-optimizing closure creation, with all the extra complexity and infidelity in the debugger, and that I could optimize materialization aggressively, I think I made the right decision.  We can still implement the equivalent of clean blocks in the compiler (and indeed someone has done this in Pharo, and with full blocks it would be easy to do in Squeak).  But much more interesting is to finish Sista/Scorch.  To this end I'm currently frustrated by not being able to load Clément's tonel Scorch repository into Squeak.

Why do I want to be able to load Sista in Squeak right now? (Forgive me if I've already told you this.)  Clément has been developing Scorch (Sista is the name for the overall architecture, Scorch is the name of the image-level optimizing compiler) in a live Pharo image.  That means any bugs crash his system and progress is slow.  But the simulator can be modified to intercept the counter callback and, instead of delivering it into the image being simulated, deliver it to Scorch in the current image.  The simulator can provide "facade" proxy context objects for contexts in the simulation.  These appear to be contexts, but are actually proxies for objects in the simulation (we do the same for methods so we can use InstructionPrinter, StackDepthFinder et al on methods in the simulation).  And we can rewrite the back end of Scorch, the part that installs methods into method dictionaries, to use a mirror object.  Then we can substitute a mirror that materializes a method in the simulation and invokes code in the simulation to install the method therein.

So this allows Scorch to be developed as intended, in the current image, while affecting only the simulation, and therefore without any risk of crashing the current system.  This should speed up productization a lot, and will allow Sophie and me to work on register allocation in the Sista JIT.  So any energy anyone can put into getting Scorch to load into Squeak would be much appreciated.  I shall return to it as soon as I've finished assuring myself I can determine conditional branch dominators with a simple FSM over bytecode (this is part of the register allocation problem).
 
Example: In `BlockClosureTest>>#setUp`, a BlockClosure is stored in an inst var and then accessed later in some tests. Assuming contexts are only materialized when necessary, how does Cog determine that the context of this closure has escaped? In the previous example, thisContext is also stored in an inst var. Whenever a context is stored like this, we must consider it escaped. Does the same apply for closures? As in, must we consider their home context escaped as soon as closures are stored somewhere?

As Clément stated, there is no escape analysis at the standard bytecode compiler/JIT level.  Escape analysis is done in the Scorch optimizer (and if it isn't done there yet, that's where we will do it when we get to it).
 

_,,,^..^,,,_
best, Eliot

Re: Materializing BlockClosure's outerContext?

Ben Coman
 


On Sat, 5 Jan 2019 at 03:08, Eliot Miranda <[hidden email]> wrote:
 
So any energy anyone can put into getting Scorch to load into Squeak would be much appreciated. 

What is the work breakdown required for this?  

cheers -ben

Re: Materializing BlockClosure's outerContext?

Eliot Miranda-2
 
Hi Ben,

On Fri, Jan 4, 2019 at 5:30 PM Ben Coman <[hidden email]> wrote:
 
What is the work breakdown required for this?  

Basically get the following to work in 64-bit Squeak trunk.  For me none of the Scorch load attempts work.

Installer ensureRecentMetacello.
(Smalltalk classNamed: #Metacello) new
  repository: 'github://j4yk/tonel:squeak'; "Loads Jakob's squeak branch"
  baseline: 'Tonel';
  load.

Metacello new
  repository: 'github://clementbera/Scorch:master/repository';
  baseline: 'Scorch';
  onWarningLog;
  load.

"or maybe..."
Metacello new
  repository: 'github://clementbera/Scorch:repository';
  baseline: 'Scorch';
  onWarningLog;
  load.

"or maybe..."
Metacello new
  githubUser: 'clementbera' project: 'Scorch' commitish: 'master' path: 'repository';
  baseline: 'Scorch';
  onWarningLog;
  load.

_,,,^..^,,,_
best, Eliot

Re: Materializing BlockClosure's outerContext?

Jakob Reschke
 
Quick note on the Tonel port to Squeak: the current state is that its tests are green and it can be used through the Monticello tools. I was able to load Scorching, except for one method that has an ifFalse: block with an argument and, of course, without the extension methods for classes and traits that do not exist in Squeak. I have not verified whether the loaded code actually works.
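
For the record, the offending method presumably has a shape like this (illustrative only); Squeak's compiler rejects it because it inlines ifFalse: and expects a zero-argument block:

    check: aBoolean
        "does not compile in Squeak: the inlined ifFalse: needs a niladic block"
        aBoolean ifFalse: [:reason | self error: reason]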

The next steps, besides verifying the load result, would be to investigate whether Metacello or Gofer need to be changed in Squeak to notice that this is a Tonel repository, not a FileTree one.


Fwd: Materializing BlockClosure's outerContext?

Jakob Reschke
 

Hi Eliot,

On Sat, Jan 5, 2019 at 4:48 PM, Eliot Miranda <[hidden email]> wrote:

Great news.  Thank you!  Could you post to the list the exact load expression you used to load Scorching?  Also post how you update the Tonel code.  I am so excited!  I want this so badly :-)  Thank you!!

It is not an integrated workflow yet. I cloned the Scorch repository using the git command line.
Then, with the Tonel port loaded,
    Installer ensureRecentMetacello.
    (Smalltalk classNamed: #Metacello) new
      repository: 'github://j4yk/tonel:squeak';
      baseline: 'Tonel';
      load.
you can add tonel:// repositories in the Monticello browser.
Choose the "repository" subdirectory of the Scorch clone there, as in the "old days" with FileTree.
Then you can open the Tonel repository in Monticello; it should list the Scorching packages, each with -tonel.1 as the version indication, and you can load the packages as usual.
As with metadata-less FileTree, you won't have any history in Monticello. The history is only in Git. 
To store back in Tonel format, save the packages with Monticello to the Tonel repository. Then commit with Git to actually make it a version.
I still have to do the integration with the "Git browser" tool and its underlying framework (which you can install via the Do menu), and I also still haven't had time to look into the proper cooperation with Metacello. Once that is done, I shall post a more discoverable announcement (not hidden in another thread) on squeak-dev.

Also note that there won't be any method timestamps. I suppose in Pharo/Iceberg they are derived from the Git history, so the timestamps are not stored with the code.

Best,
Jakob