Re: [squeak-dev] Context status 2015-01-16


Re: [squeak-dev] Context status 2015-01-16

Chris Muller-3
That sounds awesome, Craig -- congratulations.  If you ever do a video of
a reasonably-sized imprinting experiment onto your core-classes image
from a modern Squeak image, I will be very interested to see it!

On Fri, Jan 16, 2015 at 2:01 PM, Craig Latta <[hidden email]> wrote:

>
> Hoi all--
>
>      Context[1] is the umbrella project for Naiad (a distributed module
> system for all Smalltalks[2]), Spoon (a minimal object memory that
> provides the starting point for Naiad), and Lightning (a
> remote-messaging framework which performs live serialization, used by
> Naiad for moving methods and other objects between systems). I intend
> for it to be a future release of Squeak, and a launcher and module
> system for all the other Smalltalks. I'm writing Context apps for cloud
> computing, web services, and distributed computation.
>
>      Commits b7676ba2cc and later of the Context git repo[3] have:
>
> -    Support for installable object memories as git submodule repos.
>
> -    Submodule repos for memories for each of the known Smalltalk
>      dialects, with Naiad support pre-loaded. I'm currently working on
>      the submodules for Squeak[4] and Pharo[5].
>
> -    A web-browser-based console for launching and managing object
>      memories.
>
> -    A WebDAV-based virtual filesystem that enables Smalltalk to appear
>      as a network-attached storage device, and mappings of the system
>      to that filesystem that make Smalltalk accessible from external
>      text editors (e.g., for editing code, managing processes and
>      object memories).
>
> -    Remote code and process browsers.
>
>      Live discussion at [6]. Mailing list at [7]. The newsgroup is
> gmane.comp.lang.smalltalk.squeak.context.
>
>
>      Thanks for checking it out!
>
> -C
>
> [1] http://thiscontext.com
> [2] http://thiscontext.com/a-detailed-naiad-description
> [3] https://github.com/ccrraaiigg/context
> [4] https://github.com/ccrraaiigg/3EAD9A45-F65F-445F-89C1-4CA0A9D5C2F8
> [5] https://github.com/ccrraaiigg/CFE10A14-D883-4ACE-990A-0DDA86AA362B
> [6] http://squeak.slack.com
> [7] mailto:[hidden email]
>
> --
> Craig Latta
> netjam.org
> +31 6 2757 7177 (SMS ok)
> + 1 415 287 3547 (no SMS)
>
>


Re: [squeak-dev] re: Context status 2015-01-16

Chris Muller-3
>      Sure, right now you can browse the kernel memory from Squeak 4.5.
> Every time you accept a method, you're compiling it in Squeak 4.5 and
> imprinting it onto the kernel. But you're referring to imprinting driven
> by method execution?

Yes, exactly!

> (Each method that is run is imprinted elsewhere, as
> a side-effect.) What code would you like to see imprinted?

Test suites.  To me, the holy grail of deployment scaling is this: first I develop and configure in a big, luxurious Cadillac image loaded with tools until I'm ready to deploy.  When I'm ready to deploy, I fire up a copy of your 1MB core image as the "target", with my luxury image as the "source", and then I simply bring over a top-level test-suite method into the target and run it.

As execution reaches methods that are missing from the 1MB target, they are brought over from the source.

By the time the tests are done, /all/ and /only/ the methods which were needed to run them have been brought over by Spoon / Naiad.

Now I want to save that target image (maybe 5MB now) and deploy it.  I will want to run many copies of that 5MB image, but since my tests probably don't have 100% coverage, it would be too risky to run in production unless each of those 5MB images could have backup Spoon access to a single running copy of my mother Cadillac image, in case one more method is found to be needed...

So I see Spoon as a way of learning about systems like yours, but also as a solution for scaling -- from the smallest single-core applications on limited hardware to the largest multi-core ones -- because one can run MORE images concurrently due to their smaller size.
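A sketch of how that workflow might read in Smalltalk, purely as an illustration -- the imprinter class and its messages (PassiveImprinter, #toProviderAt:port:, #fetch:) are assumed names, not the actual Spoon / Naiad API:

    "In the 1MB target image: connect back to the luxury source image, then run
     the top-level test suite; any method the tests reach that is missing here
     gets fetched from the source and installed as a side effect."
    | imprinter testClass |
    imprinter := PassiveImprinter toProviderAt: 'caddy.example.com' port: 9091.
    testClass := imprinter fetch: #MyAppTests.     "bring over just the suite's entry point"
    testClass buildSuite run.                      "execution pulls in /all/ and /only/ what it needs"
    Smalltalk snapshot: true andQuit: false.       "save the ~5MB deployable target image"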



>      In any case, I do plan to make videos showing all the top-level
> features.
>
>      thanks,
>
> -C
>
> --
> Craig Latta
> netjam.org
> +31 6 2757 7177 (SMS ok)
> + 1 415 287 3547 (no SMS)




Re: [squeak-dev] re: Context status 2015-01-16

stepharo
For your information, Guille produced Pharo images of 11k for simple
addition and 18k for large numbers, as well as a Seaside counter
application of about 500k.
We will continue to work on this architecture.
Of course, this is done in Pharo, so it will not really be exciting for you.

Stef



Re: Context status 2015-01-16

ccrraaiigg

Hi Stef--

> For your information guille produced pharo images of 11k for simple
> addition, 18k for large numbers. Also Seaside counter application of
> about 500k.

     Why so large? The simple addition image I made is 1k.

> Of course this is done in Pharo so it will not really be exciting for
> you.

     The minimal kernel isn't the exciting part, it's just a
prerequisite. The exciting part is the distributed module system
(Naiad). You need a minimal kernel for that to be practical. There is
nothing like Naiad for Pharo yet.

     As always, I'm very interested in collaborating with you.


     thanks!

-C

--
Craig Latta
netjam.org
+31   6 2757 7177 (SMS ok)
+ 1 415  287 3547 (no SMS)



Re: [squeak-dev] re: Context status 2015-01-16

Chris Muller-4
In reply to this post by stepharo
Hi Stef,

On Sun, Jan 18, 2015 at 5:15 AM, stepharo <[hidden email]> wrote:
> For your information guille produced pharo images of 11k for simple
> addition, 18k for large numbers.
> Also Seaside counter application of about 500k.

Of course, the real trick is whether those tiny images can actually be
grown into something relevant and useful...  That's the dream I have
for Craig's Spoon -- letting the _machine_ form the "application"
rather than a human.

No doubt, a 500K Seaside image is /super cool/, and I think it's past
time I congratulated you and your entire team on Pharo.  What y'all
are doing takes a lot of patience and skill; your perseverance after
this many years is inspiring.  You seem to be doing all the things
necessary to put "Smalltalk" onto the radar of young and cool
developers, and I hope it works, for the sake of Smalltalk.

> We will continue to work on this architecture.
> Of course this is done in Pharo so it will not really be exciting for you.

Well, if Craig's dream is realized, then it will be exciting for
both Squeak and Pharo.  I do find Pharo very interesting to observe,
even though I'm motivated by different goals...


Re: Context status 2015-01-16

Guillermo Polito
In reply to this post by ccrraaiigg


On Sun, Jan 18, 2015 at 1:53:51 PM, Craig Latta <[hidden email]> wrote:


> Hi Stef--
>
> > For your information guille produced pharo images of 11k for simple
> > addition, 18k for large numbers. Also Seaside counter application of
> > about 500k.
>
>      Why so large? The simple addition image I made is 1k.

Haha, this already looks like a competition. I don't want to compete ^^.

One explanation is that the tailoring process I use is generic and cannot anticipate whether the VM uses an object or not. That is, I start the tailoring process from an almost empty special objects array and an expression to execute. But that expression is arbitrary: it may contain a simple addition or start a Seaside server. So I always include the objects that the VM uses directly in any execution, which makes for ~10k of common denominator for almost any application (so 11k is ~10k of common denominator + ~1k of integer-addition code ;) ).

For example, I have to add the character table to the special objects array just in case a string is used in the starting-point expression. The character table is 256 entries long:

character table slots = 256 * 4 bytes = 1k
characters = (8 bytes header + 4 bytes slot) * 256 = 3k

which already makes 4k.
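
As a quick check of that arithmetic, in plain Smalltalk (using the 4-byte slot and 8-byte header sizes assumed above):

    | tableSlots characterObjects |
    tableSlots := 256 * 4.               "the character table's own slots: 1024 bytes, ~1k"
    characterObjects := (8 + 4) * 256.   "256 Characters, each an 8-byte header plus one 4-byte slot: 3072 bytes, ~3k"
    tableSlots + characterObjects        "=> 4096 bytes, ~4k"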
 
Of course, for a simple addition you just need
- a special objects array
- a processor
- a process
- a context
- the method for the context doing the addition
- some SmallInteger class for the VM to check
 
And voilà. That should be even smaller than 1k. But I lack support for detecting whether the VM accesses an object directly or not, and adding that to the VM for a couple of kilobytes... I don't know, I have enough on my plate writing my thesis right now :).

Then there are some other implementation details that make my images a bit bigger. For example, I use a block to handle exceptions. I could remove that and make my object memories smaller, at the cost of being less robust...

In any case, the image that just does an addition is simply the example that pushes the limits of *my* particular tailoring framework... Beyond that, it has no other real use :).


> > Of course this is done in Pharo so it will not really be exciting for
> > you.
>
>      The minimal kernel isn't the exciting part, it's just a
> prerequisite. The exciting part is the distributed module system
> (Naiad). You need a minimal kernel for that to be practical. There is
> nothing like Naiad for Pharo yet.

Well, not only that. As Chris says, what the minimal kernel should ensure is that you can grow it. That may be through a distributed module system, or through another way to install code inside it. Since we don't have such a module system yet, our first step is to generate a small image that contains only a compiler and a class builder.
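
A minimal sketch of what growing such a kernel locally could look like, assuming the usual Pharo class-definition and compilation messages survive in the small image (the class and package names here are made up):

    "Use the class builder to define a new class, then the compiler to add a method to it."
    Object subclass: #Greeter
        instanceVariableNames: ''
        classVariableNames: ''
        package: 'Grown-Demo'.
    Greeter compile: 'greet
        ^ ''hello from a grown kernel'''.
    Greeter new greet.    "=> 'hello from a grown kernel'"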

Cheers,
Guille

Re: Context status 2015-01-16

ccrraaiigg

Hi Guille--

> > > For your information guille produced pharo images of 11k for
> > > simple addition...
>
> > Why so large? The simple addition image I made is 1k.
>
> Haha, this already looks like a competition. I don't want to compete
> ^^.

     Oh, that wasn't my intention. :)  I really wanted to know what
objects are in there.

> Of course, for a simple addition you just need
> - a special objects array
> - a processor
> - a process
> - a context
> - the method for the context doing the addition
> - some SmallInteger class for the VM to check
>
> And voilá. That should be even smaller than 1K.

     By my count, you need fifteen objects. I wrote a summary of them
(including a visualization) at [1].

> In any case, the image just doing an addition is just the example that
> pushes the limits of *my* particular tailoring framework... But
> afterwards it has no other real usage :).

     Indeed so. The value of making it is showing that you can account
for each and every bit in the result, and that's just easiest to do if
you go for the least amount of code.

> As Chris says, what the minimal kernel should ensure is that you can
> grow it.

     Exactly, the definition of "minimal" depends on your intended use.


     thanks,

-C

[1] http://netjam.org/context/smallest

--
Craig Latta
netjam.org
+31   6 2757 7177 (SMS ok)
+ 1 415  287 3547 (no SMS)



Re: [squeak-dev] re: Context status 2015-01-16

Nicolas Cellier
In reply to this post by Chris Muller-3


2015-01-19 15:41 GMT+01:00 Craig Latta <[hidden email]>:

>      Okay, I'll add both of the execution-driven imprinters to the
> repository. They're called "active" and "passive" imprinting.
>
>      Active imprinting is directed by the system that initially has the
> desired code. An ActiveImprintingServer has clients in the systems which
> will receive the code. Every time the server system runs a method in a
> certain process, it imprints that method onto each of the clients. One
> use case for this is giving the code of a demo to an audience as you run it.
>
>      Passive imprinting is directed by the system that wants the code.
> The target system makes a remote-messaging connection to a system which
> has the code, and runs an expression which will use the code. Every time
> a method is missing from the target system (in any process), the target
> system requests the missing method from the provider system, installs
> it, and retries running that method.
>
>      I have imprinted the exception-handling system, the compiler, and
> the class builder with both approaches.
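
A rough sketch of that passive-imprinting loop in Smalltalk; the signal and connection classes here (MethodMissing, ProviderConnection) and their messages are illustrative assumptions, not the actual Naiad code:

    "Target side: evaluate an expression and, whenever a method turns out to be
     missing, fetch it from the provider, install it, and retry."
    | provider |
    provider := ProviderConnection toHost: 'provider.example.com' port: 9091.
    [3 factorial printString]
        on: MethodMissing                 "assumed signal raised when a method is absent"
        do: [:missing |
            | method |
            method := provider methodNamed: missing selector inClassNamed: missing className.
            missing targetClass
                addSelectorSilently: missing selector
                withMethod: method.
            missing retry]                "re-run the protected expression, now that the method is installed"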



In this scheme, something strikes me.
Some images share some code (classes, CompiledMethods).
But what about code mutations/updates?

Without such mutations, is it still Smalltalk?

Without active imprinting, such mutations might not be obvious to propagate (for example, when a subclass now overrides a message of its superclass).

And since you import the class builder and compiler into the target, for what purpose? Is the target going to change a class locally? What if it then imports incompatible methods from the provider? Or is the goal just to replicate some mutations from the provider?

Maybe the scheme is more interesting for deployment of static code, but I'm curious to know whether live updates would still be possible...

 
>      thanks,
>
> -C
>
> --
> Craig Latta
> netjam.org
> +31 6 2757 7177 (SMS ok)
> + 1 415 287 3547 (no SMS)




Re: [squeak-dev] re: Context status 2015-01-16

Chris Muller-3
On Tue, Jan 20, 2015 at 7:39 PM, Nicolas Cellier
<[hidden email]> wrote:

>
>
> 2015-01-19 15:41 GMT+01:00 Craig Latta <[hidden email]>:
>>
>>
>>      Okay, I'll add both of the execution-driven imprinters to the
>> repository. They're called "active" and "passive" imprinting.
>>
>>      Active imprinting is directed by the system that initially has the
>> desired code. An ActiveImprintingServer has clients in the systems which
>> will receive the code. Every time the server system runs a method in a
>> certain process, it imprints that method onto each of the clients. One
>> use case for this is giving the code of a demo to an audience as you run
>> it.
>>
>>      Passive imprinting is directed by the system that wants the code.
>> The target system makes a remote-messaging connection to a system which
>> has the code, and runs an expression which will use the code. Every time
>> a method is missing from the target system (in any process), the target
>> system requests the missing method from the provider system, installs
>> it, and retries running that method.
>>
>>      I have imprinted the exception-handling system, the compiler, and
>> the class builder with both approaches.
>>
>>
>
> In this scheme, something is striking me.
> Some images share some code (classes, compiledMethods)
> But what about code mutations/updates?
>
> Without such mutations, is it still Smalltalk?
>
> Without active imprinting, such mutations might not be obvious to propagate
> (for example a subclass now overrides a message of super)
>
> And since you import class builder and compiler in the target, on what
> purpose? Is the target going to change a class locally? What if it then
> imports incompatible methods from provider? Or is the goal to just replicate
> some mutations from the provider?
>
> Maybe the scheme is more interesting for deployment of static code, but I'm
> curious to know if ever live updates would still be possible...

The particular scenario I was dreaming of would never have changes
made to the imprinted images.  At worst, the source Cadillac image
might have a patch applied in production, in which case it would
need some way to invalidate the updated methods in all clients...