Concurrent Futures


Re: Thoughts on a concurrent Squeak VM

Igor Stasenko
OK, here is some code to illustrate what I have in mind.
There's nothing sophisticated about it, and the VM can be modified
relatively easily to support it.
We can do the same for primitives and inter-image sends.

We can use the same simple struct in C to encapsulate context state, like:

struct context {
  void (*fn)(void *param);  // function pointer
  void *param;              // arbitrary parameter
  struct context *sender;   // pointer to the context which will receive
                            // the result and be activated after fn is done
  void *result;             // can hold fn's result, if fn returns non-void
};

On 01/11/2007, Rob Withers <[hidden email]> wrote:

>
> ----- Original Message -----
> From: "Igor Stasenko" <[hidden email]>
> To: "The general-purpose Squeak developers list"
> <[hidden email]>
> Sent: Thursday, November 01, 2007 8:21 AM
> Subject: Re: Thoughts on a concurrent Squeak VM
>
>
> > On 01/11/2007, Andreas Raab <[hidden email]> wrote:
> >> 2) Implement the "failure mode" for calling primitives from non-primary
> >> threads and possibly implement the first few plugins that are
> >> multi-threaded (files, sockets, ffi come to mind).
> >>
> >
> > Writing a generic threading framework comes in mind.
> > A brief description:
> > - each object in system should have a way how to get it's assigned
> > thread id (and how assign it , of course).
> > - if we see, that object having assigned thread, that means, that this
> > object is 'captured' by given thread for processing, and we need to
> > schedule new messages to that thread.
> >
> > Early i proposed this approach only for smalltalk objects and their
> > active contexts, but this concept can be easily expanded to virtually
> > any objects in system (Interpreter/GC/primitives).
>
> +1.
>
> Please expose the ability for a use to do a scheduled move of an object from
> one thread to another thread.
>
>
> > You can see that with this concept we can virtually mix an interpreter
> > 'sends' with VM low-level 'sends'.
>
> Yes, indeed.
>
>
>
>

--
Best regards,
Igor Stasenko AKA sig.



Attachment: ObjectsMT.st (5K)

Re: Thoughts on a concurrent Squeak VM

Igor Stasenko
Forgot to add:

Futures/promises fit well with this.
A future is just an action (a message send) which should be enqueued
after the result of some operation is received.
This means that a future can simply be a special kind of context, which
activates after the original send completes and then uses the result to
send a new message to it.
This means that futures can be handled seamlessly at the VM level.


--
Best regards,
Igor Stasenko AKA sig.


Re: Thoughts on a concurrent Squeak VM

Rob Withers
Hi Igor,

As I wait for SystemTracer to do its thing...

I've been reading Mark Miller's thesis on E, over at erights.org.  It's very
interesting.  He is a proponent of non-shared-memory event loops.  He
describes each Vat as being a stack, a queue and a heap: a stack of
immediate calls for the currently executing event, a queue of pending
events, and a heap of objects the Vat controls.  On the other hand, he
talks about a BootStrapComm (I think this is what it is called) system which
allows Vats in the same address space to pass references back and forth, so
E supports shared-memory event loops as well.  I thought you'd find this
interesting.

You have yourself a queue and a stack (as you activate a pending context).
I think of a future/promise more as a pending action that gets scheduled
before it has a value in its continuation.  It just so happens that the
resolve: action for activating the continuation of an eventual message send
is also an eventual message send, but with no continuation of its own.

> This means that future can be simply a special kind of context, which
> activates after original send and then uses result to send new message
> for it.

That sounds right, although I am unclear on what "uses result to send new
message for it" means.

The other thought I had was that garbage collection in Squeak seems to
happen when needed, immediately.  Things may be so dire for memory that it
has to do or die.  This would give us a problem scheduling it as
another event, wouldn't it?

cheers,
Rob



Re: Thoughts on a concurrent Squeak VM

Igor Stasenko
On 03/11/2007, Rob Withers <[hidden email]> wrote:

> Hi Igor,
>
> As I wait for SystemTracer to do its thing...
>
> I've been reading Mark Miller's thesis on E, over at erights.org.  It's very
> interesting.  He is a proponent of non-shared memory event loops.  He
> describes each Vat as being a stack, a queue and a heap.  Its a stack of
> immediate calls, of the currently executing event, a queue of pending
> events, and a heap of objects this Vat controls.  On the other hand, he
> talks about a BootStrapComm (I think this is what it is called) system which
> allows Vats in the same address space to pass references back and forth, so
> E supports shared-memory event loops as well.  I thought you'd find this
> interesting.
>
I found another thing, which may be interesting:
http://www.stackless.com/


> You have yourself a queue and a stack (as you activate a pending context).
> I think of a future/promise more as a pending action that get's scheduled,
> before it has a value in its continuation.  It just so happens that the
> resolve: action for activating the continuation of a eventual message send,
> is also an eventual message send, but with no continuation of its own.
>
> > This means that future can be simply a special kind of context, which
> > activates after original send and then uses result to send new message
> > for it.
>
> That sounds right, although I am unclear on what "uses result to send new
> message for it" means.
>
I meant that a future send needs a receiver to send its message to. So when
it activates, the receiver (the result) is now known, and thus the
message send can be performed.

> The other thought I had was that garbage collection in squeak seems to
> happen when needed, immediately.  Things may be so dire for memory that it
> has to do or die.  This would give us a problem making it scheduled as
> another event, wouldn't it?
>

What makes you think that futures will die upon GC?
To work properly, a reference to the future is placed as the 'sender
context' in the context of interest. So, when that context is done
working and returns its result, the sender context will be activated - which
is our future message send.

I'm personally much more worried about non-local returns.

Suppose we build a chain of future message sends, as in:

object future message1 message2 message3 ... messageN.

If an error occurs in message1 (or there is a non-local return), the
whole chain of futures awaiting activation (message2 ... messageN) has to
be thrown overboard.
It seems that building long chains of futures is impractical.

Of course, in this case it's better to ask the developer why he uses futures
with methods which can do non-local returns. :)

> cheers,
> Rob
>


--
Best regards,
Igor Stasenko AKA sig.


Re: Thoughts on a concurrent Squeak VM

johnmci
In reply to this post by Rob Withers

On Nov 2, 2007, at 5:53 PM, Rob Withers wrote:

> The other thought I had was that garbage collection in squeak seems  
> to happen when needed, immediately.  Things may be so dire for  
> memory that it has to do or die.  This would give us a problem  
> making it scheduled as another event, wouldn't it?
>
> cheers,
> Rob

Well, it happens when there is no free space to allocate something, or when
some limit is reached,
like free space becoming too low, or the number of allocated objects
exceeding some limit.  Certainly, having no memory at all is critical; the
others are soft conditions.

however...

"GC scheduled as another event"
I know of one Smalltalk implementation that would signal when memory
was becoming low, thus letting some process deal with the situation;
however,
as machines got really fast the amount of headroom became too small,
and the VM memory allocator would run out of memory before the GC
process could wake up...
Usually, increasing the headroom by as little as 10K would solve the
problem.


--
========================================================================
John M. McIntosh <[hidden email]>
Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
========================================================================




Re: Thoughts on a concurrent Squeak VM

Igor Stasenko
Found it using google-fu :)

http://www.slideshare.net/Arbow/stackless-python-in-eve

I'd like to hear your comments on the concepts presented in Stackless
Python. I haven't yet had time to study it deeply, but the slides at the
link above speak for themselves.


--
Best regards,
Igor Stasenko AKA sig.


Re: Thoughts on a concurrent Squeak VM

Rob Withers
In reply to this post by Igor Stasenko

----- Original Message -----
From: "Igor Stasenko" <[hidden email]>

> I found another thing, which may be interesting:
> http://www.stackless.com/

Ok, I took a look and I decided I do not like Python.  :)   What a horrible
syntax.  On the topic of microstacks in Stackless, I figure we already do
this in Squeak.  Each method has a stack, and the msg sending is like the
channel comms.

>> The other thought I had was that garbage collection in squeak seems to
>> happen when needed, immediately.  Things may be so dire for memory that
>> it
>> has to do or die.  This would give us a problem making it scheduled as
>> another event, wouldn't it?
>>
>
> What makes you think that futures will die upon GC?
> For working properly, a reference to the future are placed as 'sender
> context' in context of our interest. So, when such context will done
> working and return result, a sender context will be activated  - which
> is our future message send.

Ok, you are sending messages to futures, as am I.   No, the comments about
GC had to do with an earlier comment that GC actions would be scheduled with
normal image activities, and I thought they might not be executed when needed.


> I'm personally much more worrying about non-local returns.

Me too.

>
> If we suppose that we built a chain of future message sends in:
>
> object future message1 message2 message3 ... messageN.
>
> then if an error occurs in message1 (or there is non-local return), it
> means that all chain of futures, which awaits for activation (message2
> ... messageN) should be thrown overboard.

Actually, the error should propagate through all future msgs, not be thrown
overboard.

> It seems building long chains of futures is impractical.
>
> Of course, in this case its better ask developer, why he uses futures
> with methods which can do non-local returns. :)

He wants to because the capability is there.  He must use them everywhere.
:)

Rob



Re: Thoughts on a concurrent Squeak VM

Igor Stasenko
On 04/11/2007, Rob Withers <[hidden email]> wrote:
>
> ----- Original Message -----
> From: "Igor Stasenko" <[hidden email]>
>
> > I found another thing, which may be interesting:
> > http://www.stackless.com/
>
> Ok, I took a look and I decided I do not like Python.  :)   What a horrible
> syntax.

I don't like Python myself; two weeks of programming in it was
enough for the rest of my life ;) But I was pointing more at the concepts
which allow concurrency in the Python VM, and I think we can learn from
their experience.

> On the topic of microstacks in stackless, I figure we already do
> this in Squeak.  Each method has a stack and the msg sending is like the
> channel comms.

Not exactly. The one main difference, I think, is that we can't
look up a method before all arguments are evaluated, even for a known
receiver.

I mean that in the code:

object1 method1: object2 method2

we can't do the lookup for #method1 before we are done evaluating method2,
because in method2 there can be things which change the class of
object1.
Thus, we need to push the args on the stack, or create an argument vector to
collect all arguments, and then, only after having all of them, can we
actually do the lookup and create a context for the receiver's method.

> >> The other thought I had was that garbage collection in squeak seems to
> >> happen when needed, immediately.  Things may be so dire for memory that
> >> it
> >> has to do or die.  This would give us a problem making it scheduled as
> >> another event, wouldn't it?
> >>
> >
> > What makes you think that futures will die upon GC?
> > For working properly, a reference to the future are placed as 'sender
> > context' in context of our interest. So, when such context will done
> > working and return result, a sender context will be activated  - which
> > is our future message send.
>
> Ok, you are sending messages to futures, as am I.   No, the comments about
> GC had to do with a comment made where GC actions are scheduled with normal
> image activities, and I thought they might not be executed when needed.
>

Ah, I see. A run-time GC should be designed in a way that guarantees
marking will be finished before the application allocates N more bytes
of memory.

>
> > I'm personally much more worrying about non-local returns.
>
> Me too.
>
> >
> > If we suppose that we built a chain of future message sends in:
> >
> > object future message1 message2 message3 ... messageN.
> >
> > then if an error occurs in message1 (or there is non-local return), it
> > means that all chain of futures, which awaits for activation (message2
> > ... messageN) should be thrown overboard.
>
> Actually, the error should propogate through all future msgs, not thrown
> overboard.
>

Err, why? IIRC, an error (exception) initiates stack unwinding,
looking for a context which can handle it.


> > It seems building long chains of futures is impractical.
> >
> > Of course, in this case its better ask developer, why he uses futures
> > with methods which can do non-local returns. :)
>
> He wants to because the capability is there.  He must use them everywhere.
> :)
>
> Rob
>
>
>


--
Best regards,
Igor Stasenko AKA sig.


Re: Thoughts on a concurrent Squeak VM

Rob Withers

----- Original Message -----
From: "Igor Stasenko" <[hidden email]>
To: "The general-purpose Squeak developers list"
<[hidden email]>
Sent: Saturday, November 03, 2007 4:57 PM
Subject: Re: Thoughts on a concurrent Squeak VM


> On 04/11/2007, Rob Withers <[hidden email]> wrote:
>>
>> ----- Original Message -----
>> From: "Igor Stasenko" <[hidden email]>
>>
>> > I found another thing, which may be interesting:
>> > http://www.stackless.com/
>>
>> Ok, I took a look and I decided I do not like Python.  :)   What a
>> horrible
>> syntax.
>
> I don't like a python myself, a two weeks of programming on it was
> enough for the rest of my life ;) But i pointed more on concepts which
> allowed concurrency in Python VM , and i think we can learn on their
> experience.
>
>> On the topic of microstacks in stackless, I figure we already do
>> this in Squeak.  Each method has a stack and the msg sending is like the
>> channel comms.
>
> Not exactly. See, the one main difference, i think, that we can't
> lookup for a method before all arguments are evaluated even for known
> receiver.
>
> I mean that in code:
>
> object1 method1: object2 method2
>
> we can't do lookup for #method1 before done evaluating a method2,
> because in method2 there can't be things which can change the class of
> object.
> Thus, we need to push args on stack or create an argument-vector  to
> collect all arguments, and then, only after having all of them, we can
> actually do lookup and create a context for receiver's method.

Unless you are using futures. :)  In that case the invocation schedules the
message for later, and we move on to sending method1: before method2 is
evaluated.  Even if method2 changes the class of object1, it just happened
too late to be of importance.


>> >
>> > If we suppose that we built a chain of future message sends in:
>> >
>> > object future message1 message2 message3 ... messageN.
>> >
>> > then if an error occurs in message1 (or there is non-local return), it
>> > means that all chain of futures, which awaits for activation (message2
>> > ... messageN) should be thrown overboard.
>>
>> Actually, the error should propogate through all future msgs, not thrown
>> overboard.
>>
>
> Err,  why? IIRC, an error(exception) initiates a stack unwinding
> looking for context which can handle it.

object future message1 throws an error.  message2 will then be sent to this
error and will itself break with an error, and so on up the line.  It is
another way of propagating errors.

Rob



Non-local returns with promises (was: Re: Thoughts on a concurrent Squeak VM)

Rob Withers
In reply to this post by Igor Stasenko

----- Original Message -----
From: "Igor Stasenko" <[hidden email]>

> I'm personally much more worrying about non-local returns.

As an example of the problem of non-local return, let's look at this simple
method:

foo: bar

    bar ifTrue: [^ 1].
    self snafu.
    ^ 0

If bar is eventual, we don't know at the time of invocation whether the
method will exit through the non-local return or through the return at the
end of the method.

How to best deal with this?

My thought is that the context needs to become eventual, be sent as a lambda
to the bar promise, and the context returns a promise.  The return will check
whether there is a resolver on the context and #resolve: it with the
appropriate return value.  The lambda needs to be all the sequential code from
the point where a message is sent to the promise that includes a block as an
argument, to the end of the context.  So the lambda would be:

f := [:barVow |
    barVow ifTrue: [^1].
    self snafu.
    ^ 0]

and we would send:
    ^ bar whenResolved: f.

So the challenge here would be in capturing the lambda and in effect
rewriting the code for this method, on the fly.  The lambda is just a one
arg blockContext, where the ip points to the pushTemp: 0.

17 <10> pushTemp: 0
18 <99> jumpFalse: 21
19 <76> pushConstant: 1
20 <7C> returnTop
21 <70> self
22 <D0> send: snafu
23 <87> pop
24 <75> pushConstant: 0
25 <7C> returnTop

and the two return bytecodes would each need to resolve the promise
returned.  This needs to become something like the following, where lines
28-36 are the original block with these rewrites:
    first, pushTemp: 1 instead of pushTemp: 0, since the block arg is a
separate temp;
    second, the two returnTops need to become a jumpTo: and a blockReturn,
so we can resolve the promise.

21 <10> pushTemp: 0
22 <89> pushThisContext:
23 <76> pushConstant: 1
24 <C8> send: blockCopy:
25 <A4 0A> jumpTo: 37
27 <69> popIntoTemp: 1
28 <11> pushTemp: 1
29 <99> jumpFalse: 32
30 <76> pushConstant: 1
31 <93> jumpTo: 36
32 <70> self
33 <D1> send: snafu
34 <87> pop
35 <75> pushConstant: 0
36 <7D> blockReturn
37 <E0> send: whenResolved:
38 <7C> returnTop

Of course, we don't want to modify the original method, since another call
to it may not involve a promise.  So we are looking at creating a new
MethodContext, which we rewrite to create a new BlockContext defined from
lines 22-25 through 36 above, for every non-local-return method when called
with a promise.

What do you think?

Rob



Re: Thoughts on a concurrent Squeak VM

Igor Stasenko
In reply to this post by Rob Withers
On 04/11/2007, Rob Withers <[hidden email]> wrote:

>
> ----- Original Message -----
> From: "Igor Stasenko" <[hidden email]>
> To: "The general-purpose Squeak developers list"
> <[hidden email]>
> Sent: Saturday, November 03, 2007 4:57 PM
> Subject: Re: Thoughts on a concurrent Squeak VM
>
>
> > On 04/11/2007, Rob Withers <[hidden email]> wrote:
> >>
> >> ----- Original Message -----
> >> From: "Igor Stasenko" <[hidden email]>
> >>
> >> > I found another thing, which may be interesting:
> >> > http://www.stackless.com/
> >>
> >> Ok, I took a look and I decided I do not like Python.  :)   What a
> >> horrible
> >> syntax.
> >
> > I don't like a python myself, a two weeks of programming on it was
> > enough for the rest of my life ;) But i pointed more on concepts which
> > allowed concurrency in Python VM , and i think we can learn on their
> > experience.
> >
> >> On the topic of microstacks in stackless, I figure we already do
> >> this in Squeak.  Each method has a stack and the msg sending is like the
> >> channel comms.
> >
> > Not exactly. See, the one main difference, i think, that we can't
> > lookup for a method before all arguments are evaluated even for known
> > receiver.
> >
> > I mean that in code:
> >
> > object1 method1: object2 method2
> >
> > we can't do lookup for #method1 before done evaluating a method2,
> > because in method2 there can't be things which can change the class of
> > object.
> > Thus, we need to push args on stack or create an argument-vector  to
> > collect all arguments, and then, only after having all of them, we can
> > actually do lookup and create a context for receiver's method.
>
> Unless you are using futures. :)  In that case the invocation schedules the
> message for later, and we move to sending method1: before method2 is
> evaluated.  Even if method2 changes the class of object1, it just happened
> to late to be of importance.
>
>
> >> >
> >> > If we suppose that we built a chain of future message sends in:
> >> >
> >> > object future message1 message2 message3 ... messageN.
> >> >
> >> > then if an error occurs in message1 (or there is non-local return), it
> >> > means that all chain of futures, which awaits for activation (message2
> >> > ... messageN) should be thrown overboard.
> >>
> >> Actually, the error should propogate through all future msgs, not thrown
> >> overboard.
> >>
> >
> > Err,  why? IIRC, an error(exception) initiates a stack unwinding
> > looking for context which can handle it.
>
> object future message1 throws an error.  message2 will now be sent to this
> error and will itself break with an error, and so on up the line.  It is
> another way of propogating errors.
>
True, but not for non-local returns.

> Rob
>
>
>


--
Best regards,
Igor Stasenko AKA sig.


Re: Non-local returns with promises (was: Re: Thoughts on a concurrent Squeak VM)

Igor Stasenko
In reply to this post by Rob Withers
On 04/11/2007, Rob Withers <[hidden email]> wrote:

>
> ----- Original Message -----
> From: "Igor Stasenko" <[hidden email]>
>
> > I'm personally much more worrying about non-local returns.
>
> As an example of the problem of non-local return, let's look at this simple
> method:
>
> foo: bar
>
>     bar ifTrue: [^ 1].
>     self snafu.
>     ^ 0
>
> If bar is eventual, we don't know at the time of invocation whether the
> method will exit through the non-local
> return or through the return at the end of the method.
>
> How to best deal with this?
>
> My thought is that the context needs to become eventual, be sent as a lambda
> to the bar promise, and the context return a promise.  The return will check
> to see if there is a resolver on the context and #resolve: it with the
> appropriate return value.  The lambda needs to be all sequential code from
> the point where a message is sent to the promise that includes a block as an
> argument to the end of the context.  So the lambda would be:
>
> f := [:barVow |
>     barVow ifTrue: [^1].
>     self snafu.
>     ^ 0]
>
> and we would send:
>     ^ bar whenResolved: f.
>
> So the challenge here would be in capturing the lambda and in effect
> rewriting the code for this method, on the fly.  The lambda is just a one
> arg blockContext, where the ip points to the pushTemp: 0.
>
> 17 <10> pushTemp: 0
> 18 <99> jumpFalse: 21
> 19 <76> pushConstant: 1
> 20 <7C> returnTop
> 21 <70> self
> 22 <D0> send: snafu
> 23 <87> pop
> 24 <75> pushConstant: 0
> 25 <7C> returnTop
>
> and the two return bytecodes, would each need to resolve the promise
> returned.  This needs to become something like the following, where lines
> 28-36 is the original block with three rewrites:
>     first, pushTemp: 1 instead of pushTemp: 0, since the block arg is a
> separate temp.
>     second, the two returnTops needs to become a jumpTo: and a blockReturn,
> so we can resolve the promise.
>
> 21 <10> pushTemp: 0
> 22 <89> pushThisContext:
> 23 <76> pushConstant: 1
> 24 <C8> send: blockCopy:
> 25 <A4 0A> jumpTo: 37
> 27 <69> popIntoTemp: 1
> 28 <11> pushTemp: 1
> 29 <99> jumpFalse: 32
> 30 <76> pushConstant: 1
> 31 <93> jumpTo: 36
> 32 <70> self
> 33 <D1> send: snafu
> 34 <87> pop
> 35 <75> pushConstant: 0
> 36 <7D> blockReturn
> 37 <E0> send: whenResolved:
> 38 <7C> returnTop
>
> Of course, we don't want to modify the original method, since another call
> to it may not involve a promise.  So we are looking at creating a new
> MethodContext, which we rewrite to create a new BlockContext defined from
> lines 22-25 through 36 above.  For every non-local return method when called
> with a promise.
>
> What do you think?
>
Hmm.. I think you've given a bad example. It depends on the 'knowledge' that
#ifTrue: is optimized by the compiler into an execution branch, rather than
being a regular message send.
For the general case we should consider passing a block as an argument, like:

bar someMethod: [^1].

But this again doesn't cover the cases where blocks are assigned to
temps/ivars within a context and never passed as arguments to a
message.

I thought a bit about marking a method context containing blocks with
non-local returns with a special flag, which would force the context not to
activate until all free variables in the method are resolved
(arguments/temps). This means that before any activation (entering the
method, and each send in the method) it would wait for all the promises it
contains to resolve.
But I still doubt whether this gives us anything.

There are many methods which operate on blocks by sending #value.. to
them. And those methods don't actually care whether the blocks contain
non-local returns or not. They simply do their job.

--
Best regards,
Igor Stasenko AKA sig.


Re: Non-local returns with promises (was: Re: Thoughts on a concurrent Squeak VM)

Rob Withers

----- Original Message -----
From: "Igor Stasenko" <[hidden email]>

> Hmm.. i think, you given a bad example. It depends on 'knowledge' that
> #ifTrue: is optimized by compiler as an execution branch, but not a
> regular message send.

My example was specifically about what to do when #ifTrue: is optimized.  But
you are right to consider the general case first.

> For general case we should consider passing a block as a argument, like:
>
> bar someMethod: [^1].
>
> But this again, not includes cases, when blocks are assigned to
> temps/ivars within context and never passed as arguments to
> message(s).

But we do know that these blocks are actually created in the home context
(unlike the optimized example I gave, where no blocks are created).  So if it
were possible to mark a block context as including an explicit return,
then those are the blocks that should cause synchronization with promises.
We can know at compile time that a block has a return, so we would just need
a new block-creation bytecode that marks the block:
#bytecodePrimBlockCopyWithReturn.

>
> I thought a bit about marking a method context containing blocks with
> non-local returns with special flag, which will force context to not
> activate until all free variables in method are resolved
> (agruments/temps). This means that before any activation (entering
> method and each send in method it will wait for resolving all promises
> it contains).
> But i'm still doubt if this gives us anything.

Where does a block return from if it has an explicit return?  The home
context or the calling context?   I think it is the calling context.

So, about what you just said on marking a method context if it contains
blocks with non-local returns: how do we know that a block has a non-local
return?  As I mentioned, we could mark the blocks.  We could mark the methods
that CREATE blocks that return.  At runtime, we could look at the method flag
for non-local return, look at all args and temps for blocks with the flag, and
synchronize.  I agree with you that it will have to be before every send.

>
> There is many methods which operate with blocks by sending #value.. to
> them. And these methods actually don't care if those blocks contain
> non-local returns or not. They simply do their job.

What do you mean here?


Cheers,
Rob



Re: Preliminary new Yaxo version

Michael Rueger-4
In reply to this post by Michael Rueger-4
Michael Rueger wrote:
> Boris.Gaertner wrote:

>> 2. In Squeak 3.10beta #7158 I could try your code. The
>>    surprising result is, that a structured xml element with
>>    subelements can be parsed, but in the parse tree, all
>>    subelements are missing. Problem analysis:
>
> Thanks for tracking this down. I will look at your proposed changes.

Fixes are in
http://source.impara.de/infrastructure/XML-Parser-mir.11.mcz

Remember that the fixed parser will only work with 3.8.2 (not released
yet) and 3.10.

Michael


Re: Preliminary new Yaxo version

NorbertHartl

On Wed, 2007-11-14 at 17:38 +0100, Michael Rueger wrote:

> Michael Rueger wrote:
> > Boris.Gaertner wrote:
>
> >> 2. In Squeak 3.10beta #7158 I could try your code. The
> >>    surprising result is, that a structured xml element with
> >>    subelements can be parsed, but in the parse tree, all
> >>    subelements are missing. Problem analysis:
> >
> > Thanks for tracking this down. I will look at your proposed changes.
>
> Fixes are in
> http://source.impara.de/infrastructure/XML-Parser-mir.11.mcz
>
> Remember that the fixed parser will only work with 3.8.2 (not released
> yet) and 3.10.
>
Is there a short summary you can give why it won't run in 3.9?

thanks,

Norbert



Re: Preliminary new Yaxo version

Michael Rueger-4
Norbert Hartl wrote:
> Is there a short summary you can give why it won't run in 3.9?

It misses the character- and string-related fixes that are in 3.10 and 3.8.2.

Michael


Re: Preliminary new Yaxo version

Giovanni Corriga
Il giorno mer, 14/11/2007 alle 21.24 +0100, Michael Rueger ha scritto:
> Norbert Hartl wrote:
> > Is there a short summary you can give why it won't run in 3.9?
>
> it misses the character and string related fixes that are in 3.10 and 3.8.2.

Maybe it's time to think about Squeak 3.9.1?

        Giovanni



Re: Preliminary new Yaxo version

Karl-19
Giovanni Corriga wrote:

> Il giorno mer, 14/11/2007 alle 21.24 +0100, Michael Rueger ha scritto:
>  
>> Norbert Hartl wrote:
>>    
>>> Is there a short summary you can give why it won't run in 3.9?
>>>      
>> it misses the character and string related fixes that are in 3.10 and 3.8.2.
>>    
>
> Maybe it's time to think about Squeak 3.9.1?
>
> Giovanni
>
>
>  
Stephane and Marcus both left after the release, so I think 3.9.1 is up
for grabs.
Karl


Re: Preliminary new Yaxo version

keith1y
In reply to this post by Giovanni Corriga
Giovanni Corriga wrote:

> Il giorno mer, 14/11/2007 alle 21.24 +0100, Michael Rueger ha scritto:
>  
>> Norbert Hartl wrote:
>>    
>>> Is there a short summary you can give why it won't run in 3.9?
>>>      
>> it misses the character and string related fixes that are in 3.10 and 3.8.2.
>>    
>
> Maybe it's time to think about Squeak 3.9.1?
>
> Giovanni
>  
I did think about it for a long while; there are/were scripts available,
with bugs to be harvested for this...
but to be honest I concede that 3.10 is as good a 3.9.1 as you are
likely to get.

best regards

Keith


Re: Preliminary new Yaxo version

Andreas.Raab
Keith Hodges wrote:

> Giovanni Corriga wrote:
>>
>> Maybe it's time to think about Squeak 3.9.1?
>>
>> Giovanni
>>  
> I did think about it for a long while, there are/were scripts available
> with bugs to be harvested for this...
> but to be honest I conceded that 3.10 is as good a 3.9.1 as you are
> likely to get.

That is not true for someone who uses 3.9 in production settings. For
anyone using a particular version of Squeak in production, the step to the
next version needs to be very carefully evaluated. Providing a small set of
fixes addressing the most important problems, without the fear of "what
else might it break", is a very valuable service. Unfortunately, few
people in the Squeak community seem to understand that.

Cheers,
   - Andreas

