Concurrent Futures

pwl
Hi,

This link will be of interest to those wanting easy concurrency. It works
best for simple cases and can be used as a primitive in building up more
complex concurrency controls. However, use it with caution, especially when
larger amounts of code that could conflict with other threads are being
executed. This applies even in the environment of most Smalltalks: a single
native thread running multiple lightweight "green" Smalltalk processes.

http://onsmalltalk.com/programming/smalltalk/smalltalk-concurrency-playing-with-futures/

Cheers,

Peter


Re: Concurrent Futures

Rob Withers
That's a nice example.  The difference with E's promises is that a Promise
doesn't block for its value; it sends the next message eventually and
returns a second promise.  The way I have this implemented in SqueakElib is:
you send #eventual to get an eventual ref, then you can send a message to
the eventual ref, returning a promise, to which you can then send a second
message, returning a second promise.

| promise1 promise2 |
promise1 := anObject eventual foo.
promise2 := promise1 bar.
promise1 whenResolved: [:val | Transcript show: ' foo: ', val printString].
promise2 whenResolved: [:val | Transcript show: ' bar: ', val printString].
Transcript show: 'sent foo bar...'

This prevents deadlocks.  This is also very interesting when looked at in a
distributed system, using Promise Pipelining.  Let us say that anObject
resides in Vat B on a different machine.  Instead of having a two-round-trip
communications situation, we have one round trip.  So instead of
    1) send #foo, 2) receive fooResult, 3) send #bar, 4) receive barResult
we have:
    1) send #foo, 2) send #bar to fooResultPromise, 3) receive barResult.
#foo and #bar are both sent to the objects in Vat B; they are sent in
the same packet.
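
Written as one chained expression the pipelining is even more direct: #bar
goes straight to the promise answered by #foo, so in the distributed case
both messages can leave for Vat B together (a sketch in the same style):

| promise2 |
promise2 := anObject eventual foo bar.
promise2 whenResolved: [:val | Transcript show: ' bar: ', val printString]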

Rob


Re: Concurrent Futures

pwl
Hi Rob,

That sounds quite interesting - chains of promises until the results
start flowing. It should be easy to adapt the
http://onsmalltalk.com/programming/smalltalk/smalltalk-concurrency-playing-with-futures 
example implementation to do the same kind of deferral.

Of all the classes of possible deadlocks, which do the E promises help
resolve? (Please don't say "all" unless you back up such a general
statement with a proof. Thanks.)

All the best,

Peter


Re: Concurrent Futures

Rob Withers

----- Original Message -----
From: "Peter William Lount" <[hidden email]>


> Hi Rob,
>
> That sounds quite interesting - chains of promises until the results start
> flowing. It should be easy to adapt the
> http://onsmalltalk.com/programming/smalltalk/smalltalk-concurrency-playing-with-futures 
> example implementation to do the same kind of deferral.

It is easy.  The Promise just needs to hold onto pending messages, which are
sent to the result when the promise resolves.  The hard part is getting the
Squeak environment to behave in the presence of promises.  To that end I
added the protocol #immediate, which, you guessed it, waits on a Semaphore.
I'll be removing that in Phase 3.
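
A minimal sketch of that deferral, assuming a hypothetical SimplePromise
class (the names are illustrative, not SqueakElib's): an unresolved promise
queues incoming messages via #doesNotUnderstand: and replays them on
resolution, answering a fresh promise for each.

Object subclass: #SimplePromise
    instanceVariableNames: 'value resolved pendingMessages'
    classVariableNames: ''
    poolDictionaries: ''
    category: 'Futures-Sketch'

SimplePromise>>initialize
    resolved := false.
    pendingMessages := OrderedCollection new

SimplePromise>>doesNotUnderstand: aMessage
    "While unresolved, queue the message; answer a promise for its result."
    | next |
    resolved ifTrue:
        [^value perform: aMessage selector withArguments: aMessage arguments].
    next := SimplePromise new.
    pendingMessages add: aMessage -> next.
    ^next

SimplePromise>>resolve: anObject
    "Store the value, then replay the queued messages in order,
     resolving each pending promise with its result."
    value := anObject.
    resolved := true.
    pendingMessages do: [:assoc |
        assoc value resolve: (value perform: assoc key selector
                                    withArguments: assoc key arguments)]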


> Of all the classes of possible deadlocks, which do the E promises help
> resolve? (Please don't say "all" unless you back up such a general
> statement with a proof. Thanks.)

What are all the classes of possible deadlocks? (Please provide simple
Squeak examples demonstrating each class of deadlock).   I'll simply refer
you to http://www.erights.org/elib/concurrency/event-loop.html, see towards
the bottom.

Cheers,
Rob




Re: Concurrent Futures

Rob Withers
In reply to this post by pwl
Hi Peter,  It struck me that I should clarify something I said that
probably misled you.

> Rob Withers wrote:
>> That's a nice example.  The difference with E's promises is that a
>> Promise doesn't block for value, it sends the new msg eventually and
>> returns a second promise.  The way I have this implemented in SqueakElib,
>> is you send #eventual to get an eventual ref, then you can send a msg to
>> the eventual ref, returning a promise, which you can then send a second
>> msg to, returning a second promise.
>>
>> This prevents deadlocks.

From the first paragraph, I talk about both E and SqueakElib.  When I make
this claim about eventual refs and promises preventing deadlocks, I mean it
does this for E, not for SqueakElib.  Since E processes message sends in an
event loop, it has no Processes or Semaphores, and since it restricts scope
when compiling and executing code, you can't even reference them.  There is
no way to block in E.

This is what I aspire to with SqueakElib, but it won't make it, except
perhaps in a very constrained environment, namely a restricted-scope Island:
an Island that likewise does not include Processes or Semaphores (nor lots
of other stuff that may use them internally).  In the general case, which I
am interested in, I want to integrate refs and promises in the general
image, with its Processes and Semaphores.  I may need to block for immediate
values, when processing primitives with promises and when processing method
contexts which include promises with multiple points of return.  I am going
to be blocking, and I make, or intend to make, no claim that I don't block.

Sorry for the confusion,
Rob



Re: Concurrent Futures

Ralph Johnson
In reply to this post by Rob Withers
> What are all the classes of possible deadlocks?

The first one I thought of was the Dining Philosophers problem.
Google showed me
http://www.erights.org/e/satan/index.html

This is an interesting article.  It uses capabilities to ensure
freedom from deadlock.  If a philosopher has a fork for too long, the
system will revoke his capability for the fork.

If E was automatically free from deadlock, why was this solution so complex?

-Ralph


Re: Concurrent Futures

Andreas.Raab
Ralph Johnson wrote:
> If E was automatically free from deadlock, why was this solution so complex?

Because it's trying to illustrate three issues at once (security,
capabilities, concurrency) and consequently fails to illustrate any of
them very well :-(

It is trivial to solve this problem via event-loop concurrency though.
Consider a class Table with operations "grabLeftFork:", "grabRightFork:"
(which take a seat index as argument and return the fork or nil). You
can now implement the dining philosophers "Croquet style" as follows:

Philosopher>>eat
   "Start eating"
   | leftForkPromise rightForkPromise |

   "Try to aquire the left fork first"
   leftForkPromise := table future grabLeftFork: seat.
   leftForkPromise whenResolved:[:leftFork|

     "If we couldn't get the left fork, we need to decide
     what to do. Simply retry later but use a random amount
     of time to avoid live-lock"
     leftFork ifNil:[^(self future: self randomMSecs) eat].

     rightForkPromise := table future grabRightFork: seat.
     rightForkPromise whenResolved:[:rightFork|

       rightFork ifNil:[
         "Same as before, but return the fork first"
         table future returnLeftFork: seat.
         ^(self future: self randomMSecs) eat "still hungry"
       ] ifNotNil:[
         "We got both forks, eat"
         state := #eating.

         "And return the forks when time is up"
         ^(self future: self timeToEat)
            returnFork: leftFork and: rightFork
       ].
     ].
   ].

Note that there is no wait anywhere in the above - the only things that
exist are #future sends, which schedule messages to be executed at some
later point. Because of this, the philosopher never really "waits" for
anything; he will happily keep thinking (or grumbling ;-) until he
obtains the forks. Even *while* he is eating, one could schedule an
emergency request to "drop those forks" if that were necessary.
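
For a complete picture, one way the Table side could look (just a sketch;
assume forks is an Array with one fork per seat, nil while a fork is
checked out):

Table>>grabLeftFork: seatIndex
    "Answer the fork to the left of seatIndex, or nil if it is taken."
    | fork |
    fork := forks at: seatIndex.
    fork ifNotNil: [forks at: seatIndex put: nil].
    ^fork

Table>>grabRightFork: seatIndex
    "The right fork is the left fork of the next seat, wrapping around."
    ^self grabLeftFork: seatIndex \\ forks size + 1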

Cheers,
   - Andreas


Re: Concurrent Futures

Rob Withers
In reply to this post by Ralph Johnson

----- Original Message -----
From: "Ralph Johnson" <[hidden email]>
To: "The general-purpose Squeak developers list"
<[hidden email]>
Sent: Saturday, October 27, 2007 9:00 PM
Subject: Re: Concurrent Futures


>> What are all the classes of possible deadlocks?
>
> The first one I thought of was the Dining Philosophers problem.
> Google showed me
> http://www.erights.org/e/satan/index.html

This is the first time I've read that link.

> This is an interesting article.  It uses capabilities to ensure
> freedom from deadlock.  If a philosopher has a fork for too long, the
> system will revoke his capability for the fork.

This problem is much more than a deadlock problem.  It is a resource-sharing
problem which includes a liveness problem.  It seems to be classically
solved with a semaphore representing each fork, as well as a strategy to
ensure liveness.  See the section on page 93 in
http://www.greenteapress.com/semaphores/downey05semaphores.pdf
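
In Squeak terms the classical formulation is a Semaphore per fork, each
signaled once (a sketch, not Downey's code).  The left-then-right grab is
exactly where the deadlock lives: if every philosopher holds his left fork,
all five block forever, which is why a liveness strategy has to be layered
on top:

| forks philosophers |
forks := (1 to: 5) collect: [:i | Semaphore new signal; yourself].
philosophers := (1 to: 5) collect: [:i |
    [[(forks at: i) wait.                  "left fork"
      (forks at: i \\ 5 + 1) wait.         "right fork - all may block here"
      "eat for a while, then put both forks back"
      (forks at: i) signal.
      (forks at: i \\ 5 + 1) signal] repeat] fork]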

> If E was automatically free from deadlock, why was this solution so
> complex?

There is no danger of deadlock.  It is impossible, since there are no
Semaphores.  They still run into the problems of shared resource utilization
and livelock/starvation/liveness issues.  E doesn't claim to address those
in its language.  Perhaps its claim to avoid deadlock is reduced in
importance since it can experience other concurrency problems, but I still
think it's a neat feature.

I agree it seems complex, but they are revoking the fork after a delay.  The
classical problem states that each philosopher eats for a finite time (see
the above reference).  They also provide an implementation for the
philosopher and the act of eating, by getting shrimp from the plate with the
forks, and they prevent unauthorized access.  Finally, they restrict access
to references to the classes, as with the protections for access to the
plate:  only the fork knows the plate, not the philosopher.

Note that they say this was done with a previous version of E, so it may be
somewhat different in current E.  I really don't know the language that
well, so I couldn't say.

Here is a pared-down version that ignores a lot of other stuff and just
shows the resource acquisition, but also includes the revocation after
timeout.  It is 5 methods, 3 on the Philosopher and 2 on the
ForkDispenserMaker, which could easily be seen as methods in Smalltalk.  I
don't see how a language could make it easier, but a class library could.

They are doing these jobs, it seems:

    ResourceDispenser                "ForkDispenserMaker"
        - resourceRequestCallback        "serveOther"
        - resourceRequest                "forkPlease"
    ResourceConsumer                 "Philosopher"
        - resourceAcquisition            "startEating"
        - acquisitionCallback            "hereIsYourFork"
        - useResources                   "eat"
They also have the following referenced in the code:
    Resource
        - revoke

Below is my scratch code (it is incomplete, because I eliminated state
variables and other methods).

Regards,
Rob

def ForkDispenserMaker(mySource) {
  to forkPlease(who) {
    if (nowServing == null) {
      nowServing := who
      nowServing <- hereIsYourFork(theFork)
    } else if (nowServing == who) {
      nowServing <- hereIsYourFork(theFork)
    } else {
      myTimer after(10_000, def listener noticeTimeout() {
        theFork <- revoke
        serveOther()
      })
    }
  }
  def serveOther() {
    if (nowServing == firstPhilosopher) {
      nowServing := secondPhilosopher
    } else {
      nowServing := firstPhilosopher
    }
    nowServing <- hereIsYourFork(theFork)
  }
}


def NicePhilosopherMaker() {
  to startEating() {
    firstForkDispenser  <- forkPlease(self)
    secondForkDispenser <- forkPlease(self)
  }
  to hereIsYourFork(theFork) {
    if (firstFork == null) {
      firstFork := theFork
    } else {
      secondFork := theFork
      eat()
    }
  }
  def eat() {
    ***
  }
}




Re: Concurrent Futures

Ralph Johnson
On 10/28/07, Rob Withers <[hidden email]> wrote:
> There is no danger of deadlock.  It is impossible, since there are no
> Semaphores.

This does not compute.  It is like saying "There is no danger of
murder, since there are no guns."  There are many ways of having
deadlock without semaphores.  Deadlock is possible in an asynchronous
messaging system.  An algorithm might be waiting for an event that
will never occur.

I think the real answer is that there is no way to have a deadlock,
because there is no way to wait.  Programs can only busy-wait, so
deadlock problems all get converted into starvation problems.  This
might not be very efficient on a machine with a large number of
processes running on a small number of processors, but it becomes
efficient as the process/processor ratio drops.

In Andreas' solution, a philosopher who can't pick up a fork will drop
one that he already has, and will wait a random amount of time.  This
has the possibility of starvation, but with probability 0 as time goes
to infinity: if each retry succeeds with some probability p > 0, the
chance of failing n retries in a row is (1 - p)^n, which vanishes as n
grows.

-Ralph


Re: Concurrent Futures

Rob Withers

----- Original Message -----
From: "Ralph Johnson" <[hidden email]>


> On 10/28/07, Rob Withers <[hidden email]> wrote:
>> There is no danger of deadlock.  It is impossible, since there are no
>> Semaphores.
>
> This does not compute.  It is like saying "There is no danger of
> murder, since there are no guns."  There are many ways of having
> deadlock without semaphores.  Deadlock is possible in an asynchronous
> messaging system.  An algorithm might be waiting for an event that
> will never occur.

My assumption is that the only mechanism for waiting is using a semaphore.
Is this incorrect?

I know that E talks about converting control-flow problems into data-flow
problems, using a continuation-passing style, and that deadlocks go away
but datalocks become possible.  It seems to me that an algorithm pending
on an event is in a datalock situation.
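
A toy illustration of such a datalock, using #whenResolved: and an assumed
#resolve: as in the promise examples earlier in this thread: two promises,
each resolved only from the other's resolver block.  No process ever blocks,
yet neither block can ever run - a pure data-flow cycle with no initial
value.

| p q |
p := Promise new.
q := Promise new.
p whenResolved: [:v | q resolve: v + 1].
q whenResolved: [:v | p resolve: v + 1]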

> I think the real answer is that there is no way to have a deadlock,
> because there is no way to wait.

This is a clearer way of stating it.

Rob



Re: Concurrent Futures

Andreas.Raab
In reply to this post by Ralph Johnson
Ralph Johnson wrote:
> I think the real answer is that there is no way to have a deadlock,
> because there is no way to wait.  Programs can only busy-wait, so
> deadlock problems all get converted into starvation problems.

The first sentence is correct, but the second is wrong. It's wrong
because there is no wait ;-) so even "busy-waiting" becomes meaningless.
(Simulation) time, for example, does not advance in Croquet while a
message is being executed. In this sense busy-waiting (here being the
idea of performing computations until a measurable amount of time has
passed) can't be done.

As for "deadlock problems all get converted into starvation problems"
let's not forget that the problem we were looking at is over-constraint
by definition. I *chose* to accept starvation mostly because this is the
solution that is closest to the original. There are most certainly other
(but more complex) solutions which could avoid starvation. The simplest
one (which dodges the question of "and how exactly would that work") is
to post requests for pairs of forks to the table and have the table
implement a scheduling algorithm for the handing out forks.
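
One shape such a table-side scheduler could take (a sketch against a
hypothetical protocol, not Croquet code; #bothForksFreeAt: and
#takeForkPairAt: are assumed helpers): a philosopher asks for the pair and
gets a promise back, and the table resolves requests strictly
first-come-first-served, which also removes the starvation case.

Table>>requestForksFor: seatIndex
    "Answer a promise that resolves with both forks once they are free."
    | promise |
    promise := Promise new.
    pending addLast: seatIndex -> promise.
    self serveWaiting.
    ^promise

Table>>serveWaiting
    "Serve the queue in order; stop at the first request we cannot serve."
    [pending notEmpty and: [self bothForksFreeAt: pending first key]]
        whileTrue: [| request |
            request := pending removeFirst.
            request value resolve: (self takeForkPairAt: request key)]

Returning forks would trigger #serveWaiting again.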

Cheers,
   - Andreas



Re: Concurrent Futures

Igor Stasenko
In reply to this post by Andreas.Raab
On 28/10/2007, Andreas Raab <[hidden email]> wrote:


A small correction: the dining philosophers problem is about how to
share limited resources (forks) among consumers (philosophers)
effectively, so as to prevent starvation of any of them. And your solution
is based on the 'nice' behavior of the philosopher, who drops the first
fork when he's unable to obtain the second one.
Now imagine a greedy philosopher, who grabs the first fork and never
hands it over until he has obtained the second one and is done eating -
see the sketch below. This is a good illustration that you must not pass
the responsibility for resource management to the consumers - you must
manage it yourself (if, of course, you want to prevent any kind of
deadlock/resource starvation).
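
Spelled out in the same Croquet style as Andreas' example (reusing his
assumed Table protocol), the greedy philosopher might look like this; two
greedy neighbours can now starve each other forever, even though nothing
ever blocks in the classic sense:

GreedyPhilosopher>>eat
    (table future grabLeftFork: seat) whenResolved: [:leftFork |
        leftFork ifNil: [^(self future: self randomMSecs) eat].
        self tryRightForkKeeping: leftFork]

GreedyPhilosopher>>tryRightForkKeeping: leftFork
    (table future grabRightFork: seat) whenResolved: [:rightFork |
        rightFork ifNil: [
            "Greedy: keep the left fork and simply retry for the right"
            ^(self future: self randomMSecs) tryRightForkKeeping: leftFork].
        state := #eating.
        ^(self future: self timeToEat) returnFork: leftFork and: rightFork]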

'Bad' behavior is much more probable, because most developers tend
to write code in a simple imperative style: do this, and after it's done,
do that. The understanding that their code can contain problems
(deadlocks or whatever) comes to mind only after a problem shows itself.
And considering that deadlocks are very hard to track down and reproduce,
they'll never know what may have caused it :)

--
Best regards,
Igor Stasenko AKA sig.


Re: Concurrent Futures

Andreas.Raab
Igor Stasenko wrote:
> A small correction: the dining philosophers problem is about how to
> share limited resources (forks) among consumers (philosophers)
> effectively, so as to prevent starvation of any of them. And your solution
> is based on the 'nice' behavior of the philosopher, who drops the first
> fork when he's unable to obtain the second one.

Right. I was trying to keep things simple for the benefit of
illustrating the concurrency aspects only. If you look at the full
version in E it addresses that and probably half a dozen problems
that none of us has ever thought about (these guys really are amongst
the smartest people that I've ever met).

> 'Bad' behavior is much more probable, because most developers tend
> to write code in a simple imperative style: do this, and after it's done,
> do that. The understanding that their code can contain problems
> (deadlocks or whatever) comes to mind only after a problem shows itself.
> And considering that deadlocks are very hard to track down and reproduce,
> they'll never know what may have caused it :)

Indeed. However, this is where E really helps you. Everything you can
express in E is by definition deadlock-free (in the classic sense), so
you stop worrying about these things and focus more on the solution to
the problem at hand. Not so different from manual memory management, where
before the advent of GC you always had that nagging feeling in the back
of your head saying "and who's gonna know when and where to free that
reference?". The point is that just as GC issues become tractable by not
causing a segfault, concurrency issues become tractable by not having
your system come to a screeching halt every time something goes wrong.

Cheers,
   - Andreas


Re: Concurrent Futures

Igor Stasenko
On 29/10/2007, Andreas Raab <[hidden email]> wrote:

> Indeed. However, this is where E really helps you. Everything you can
> express in E is by definition deadlock-free (in the classic sense), so
> you stop worrying about these things and focus more on the solution to
> the problem at hand. Not so different from manual memory management, where
> before the advent of GC you always had that nagging feeling in the back
> of your head saying "and who's gonna know when and where to free that
> reference?". The point is that just as GC issues become tractable by not
> causing a segfault, concurrency issues become tractable by not having
> your system come to a screeching halt every time something goes wrong.
>
I'm not really sure that the GC example is a good parallel for 'automatic
deadlock freedom'.
What prevents me (or anyone) from writing a deadlock like the following?
[ self grabFork ] whileFalse: [ "do nothing" ]

There is no language (except ones which don't have loops/branches)
which prevents you from writing this.  And the example above is 'busy
waiting', which is even worse than waiting using semaphores, because it
wastes CPU resources evaluating the same expression until some state is
changed by an external entity.
So, unless a developer writes better code, we will have the same issues.
As for GC - you have automatic memory management instead of manual.
But there's no automatic algorithm management, and never will be, in
any language :)



--
Best regards,
Igor Stasenko AKA sig.


Re: Concurrent Futures

Andreas.Raab
Igor Stasenko wrote:

>> Indeed. However, this is where E really helps you. Everything you can
>> express in E is by definition deadlock-free (in the classic sense), so
>> you stop worrying about these things and focus more on the solution to
>> the problem at hand. Not so different from manual memory management, where
>> before the advent of GC you always had that nagging feeling in the back
>> of your head saying "and who's gonna know when and where to free that
>> reference?". The point is that just as GC issues become tractable by not
>> causing a segfault, concurrency issues become tractable by not having
>> your system come to a screeching halt every time something goes wrong.
>>
> I'm not really sure that the GC example is a good parallel for 'automatic
> deadlock freedom'.
> What prevents me (or anyone) from writing a deadlock like the following?
> [ self grabFork ] whileFalse: [ "do nothing" ]

I know this can be hard to grok, but the above is sort of the equivalent
of asking "how do I free an object in Smalltalk". It doesn't make sense
in the way you are asking the question, because in the above you assume
that #grabFork is a remote message returning a value. In E (and Croquet)
remote message invocations *do not return values*; they return promises
(futures) which get resolved only *after* the current message is
completed. In other words, writing a loop like this:

   p := self future doSomething.
   [p isResolved] whileFalse.

will *never* get past that line. Not once, not ever. So in order to do
what you are trying to do you need to write it like here:

   p := self future doSomething.
   p whenComplete:[:value| ...].

Which allows the message invoking the above to complete, and once the
promise is resolved, the associated resolver block will be executed as a
new message. BTW, the above *does* allow you to write a recursion
similar to what you were trying to do in the above, e.g.,

getMyFork
   table future grabFork whenComplete:[:haveIt|
     haveIt ifFalse:[self getMyFork]. "retry"
   ].

but it is still deadlock-free since there is no wait involved (neither
"classic" nor "busy-wait").  In fact, we use recursions like the above in
various places in Croquet.

> There is no such language (except from ones, which don't have
> loops/branches) which prevents from writing this.  And the example
> above is 'busy waiting' which is even worser than waiting using
> semaphores, because it wasting CPU resources for evaluating same
> expression until some state will change by external entity.

You can *write* it, but it makes about as much sense as writing "Object
new free". Because the promise will not resolve, not ever, while you are
"busy-waiting" (just as the object will not get freed when you send the
free message or assign nil to a slot) so you'll very quickly abstain
from that practice ;-)

> So, unless a developer writes better code, we will have the same issues.

Absolutely not. See above. There is no deadlock in it.

> As for GC - you have automatic memory management instead of manual.
> But there's no automatic algorithm management, and never will be, in
> any language :)

And what's that supposed to mean?

Cheers,
   - Andreas


Re: Concurrent Futures

Igor Stasenko
On 29/10/2007, Andreas Raab <[hidden email]> wrote:

> I know this can be hard to grok, but the above is sort of the equivalent
> of asking "how do I free an object in Smalltalk". It doesn't make sense
> in the way you are asking the question, because in the above you assume
> that #grabFork is a remote message returning a value. In E (and Croquet)
> remote message invocations *do not return values*; they return promises
> (futures) which get resolved only *after* the current message is
> completed. In other words, writing a loop like this:
>
>    p := self future doSomething.
>    [p isResolved] whileFalse.
>
> will *never* get past that line. Not once, not ever. So in order to do
> what you are trying to do you need to write it like here:
>
>    p := self future doSomething.
>    p whenComplete:[:value| ...].
>
> Which allows the message invoking the above to complete, and once the
> promise is resolved, the associated resolver block will be executed as a
> new message. BTW, the above *does* allow you to write a recursion
> similar to what you were trying to do in the above, e.g.,
>
> getMyFork
>    table future grabFork whenComplete:[:haveIt|
>      haveIt ifFalse:[self getMyFork]. "retry"
>    ].
>
> but it is still deadlock-free since there is no wait involved (neither
> "classic" nor "busy-wait").  In fact, we use recursions like the above in
> various places in Croquet.
>

See, unless you make all message sends in the language futures, you
can't guarantee that some code won't end up with locking semantics.
This is exactly what GC does - it revokes all manual memory-management
control from the developer. But can we do the same with all
message sends?

Let's see what happens if we have only future sends.  Then the code:

a print.
b print.

will not guarantee that a will be printed before b. Now you must
ensure that you preserve imperative semantics, which may be done like
the following:
futureA := a print.
futureA whenComplete: [ b print ].

Yes, we can make the 'futureA whenComplete:' check implicit (by
modifying the VM); then we can preserve old code. But do we really need
futures everywhere?

As I personally see it, an imperative coding style is something that
stands on the opposite side from future message sends. If we use the
first, then it's difficult to effectively support the second, and vice
versa.

Either we give up the imperative style and use something different
which fits better with futures, or we give up futures.
Or we use both of them, mixing (which I think is most appropriate).
But then stating that such a system can be really lock-free is wrong,
because it depends on the decisions of the concrete developer and his
code.

> > As for GC - you have automatic memory management instead of manual.
> > But there's no automatic algorithm management, and never will be, in
> > any language :)
>
> And what's that supposed to mean?
>
I was pointing out that futures as an 'automatic lock-free' approach are
not quite parallel to 'automatic memory management by GC'.



--
Best regards,
Igor Stasenko AKA sig.


Re: Concurrent Futures

Rob Withers

----- Original Message -----
From: "Igor Stasenko" <[hidden email]>


> Let's see what happens if we have only future sends.  Then the code:
> a print.
> b print.
>
> will not guarantee that a will be printed before b.

In SqueakElib, it will guarantee order if a and b are in the same Vat and
already resolved (not Promises/Futures).  If a is a promise and b is
resolved, then they will print out of order.  Likewise, if a is remote and b
is local, they will print out of order.  Messages to objects in the same Vat
are scheduled in order.

> Yes, we can make the 'futureA whenComplete:' check implicit (by
> modifying the VM); then we can preserve old code. But do we really need
> futures everywhere?

This is what I am trying to do with SqueakElib.  Any old object referenced
in the system is an eventual local ref, but the system should handle
promises or non-local refs anywhere.

> Or we use both of them, mixing (which I think is most appropriate).
> But then stating that such a system can be really lock-free is wrong,
> because it depends on the decisions of the concrete developer and his
> code.

We may be able to rewrite Semaphores in terms of promises, and modify
Processes to run on a particular vat as events, thus removing possible
deadlocks from old code.
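
A hedged sketch of that rewrite (the class and its Promise protocol are
hypothetical, not existing SqueakElib code): #wait answers a promise
instead of blocking the calling process, and #signal resolves the oldest
waiter.

PromiseSemaphore>>wait
    "Answer a promise that resolves once a signal is available."
    excessSignals > 0 ifTrue: [
        excessSignals := excessSignals - 1.
        ^Promise resolvedWith: true].
    ^waiters addLast: Promise new

PromiseSemaphore>>signal
    "Hand a pending signal to the oldest waiter, or bank it for later."
    waiters isEmpty
        ifTrue: [excessSignals := excessSignals + 1]
        ifFalse: [waiters removeFirst resolve: true]

Old code of the form 'sema wait. self doStuff' would then become
'sema wait whenResolved: [:x | self doStuff]'.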

Rob



Re: Concurrent Futures

Igor Stasenko
On 29/10/2007, Rob Withers <[hidden email]> wrote:

> > Yes, we can make the 'futureA whenComplete:' check implicit (by
> > modifying the VM); then we can preserve old code. But do we really need
> > futures everywhere?
>
> This is what I am trying to do with SqueakElib.  Any old object referenced
> in the system is an eventual local ref, but the system should handle
> promises or non-local refs anywhere.
>
> > Or we use both of them, mixing (which I think is most appropriate).
> > But then stating that such a system can be really lock-free is wrong,
> > because it depends on the decisions of the concrete developer and his
> > code.
>
> We may be able to rewrite Semaphores in terms of promises, and modify
> Processes to run on a particular vat as events, thus removing possible
> deadlocks from old code.
>

Yes we do, but what prevents others from implementing their own locking
semantics based on direct message sends (not futures)?



--
Best regards,
Igor Stasenko AKA sig.


Re: Concurrent Futures

Rob Withers

----- Original Message -----
From: "Igor Stasenko" <[hidden email]>
To: "The general-purpose Squeak developers list"
<[hidden email]>
Sent: Monday, October 29, 2007 2:16 PM
Subject: Re: Concurrent Futures


> Yes we do, but what prevents others from implementing their own locking
> semantics based on direct message sends (not futures)?

Remove #primitiveWait from the VM.

Rob



Re: Concurrent Futures

Andreas.Raab
In reply to this post by Igor Stasenko
Igor Stasenko wrote:
>> but it is still deadlock-free since there is no wait involved (neither
>> "classic" nor "busy-wait").  In fact, we use recursions like the above in
>> various places in Croquet.
>
> See, unless you make all message sends in the language futures, you
> can't guarantee that some code won't end up with locking
> semantics.

Not "all messages sends". Only messages between concurrent entities
(islands). This is the main difference to the all-out actors model
(where each object is its own unit of concurrency) and has the advantage
that you can reuse all of todays single-threaded code.
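
In code the distinction looks like this (names hypothetical): within an
island, a send stays an ordinary synchronous Smalltalk send; only a send
that crosses an island boundary goes through #future and answers a promise.

Order>>process
    | total |
    total := self computeTotal.           "same island: ordinary synchronous send"
    remoteLedger future record: total     "other island: scheduled, answers a promise"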

> Let's see what happens if we have only future sends.  Then the code:
> a print.
> b print.
>
> will not guarantee that a will be printed before b.

Actually it will, if and only if a and b are in the same unit of
concurrency (island). Your example is a bit misleading because it has
different receivers - if those were in different islands then indeed
there would be no guarantee that a prints before b. So for simplicity
let's change this to:

   Transcript future print: a.
   Transcript future print: b.

Do we need a whenResolved: block to serialize execution? No, we don't,
because messages between two islands are executed in the order in which
they were scheduled. Everything else would be a straight ticket to
insanity ;-)

> Yes, we can make the 'futureA whenComplete:' check implicit (by
> modifying the VM); then we can preserve old code. But do we really need
> futures everywhere?

No, we don't. See above.

> Either we give up the imperative style and use something different
> which fits better with futures, or we give up futures.

The nice thing about futures is that they can be put on top of
everything else. We use them in Croquet today.

> Or we use both of them, mixing (which I think is most appropriate).
> But then stating that such a system can be really lock-free is wrong,
> because it depends on the decisions of the concrete developer and his
> code.

This may be the outcome for an interim period. The good thing here is
that you can *prove* that your program is deadlock-free simply by not
using waits. And ain't that a nice property to have.

>>> As for GC - you have automatic memory management instead of manual.
>>> But there's no automatic algorithm management, and never will be, in
>>> any language :)
>> And what's that supposed to mean?
>>
> I was pointing out that futures as an 'automatic lock-free' approach are
> not quite parallel to 'automatic memory management by GC'.

The similarity is striking, both in terms of the tradeoffs (trading
low-level control for better productivity) and in the style of arguments
made against it ;-) Not that I mind, by the way - I find these discussions
necessary.

Cheers,
   - Andreas
