Multiple processes using #nextPutAll:


Re: Multiple processes using #nextPutAll:

Bert Freudenberg

On May 26, 2007, at 22:00 , J J wrote:

>> From: Bert Freudenberg <[hidden email]>
>> Reply-To: The general-purpose Squeak developers list<squeak-
>> [hidden email]>
>> To: The general-purpose Squeak developers list<squeak-
>> [hidden email]>
>> Subject: Re: Multiple processes using #nextPutAll:
>> Date: Sat, 26 May 2007 21:55:10 +0200
>>
>> Perhaps we're talking past each other. Anyway, this shouldn't  
>> matter  for the problem at hand.
>
> Or I'm not being clear enough. :)
>
>>> Yes, much like how modern OS'es work.  It's just that I was  
>>> under  the impression that once the current process is  
>>> interrupted that  another at that same priority would be given a  
>>> chance to run.
>>
>> Yes, that's what I wrote.
>
> What I meant here is (and why this is relevant for the problem at  
> hand):
>
> If he forks the first one it is at some priority.  Then he forks  
> the next at the same priority.  If the first one takes enough time  
> (presumably around 40 ms) it will get preempted by the UI handlers,  
> timer handlers or something.  Now, when the higher priority  
> processes are finished if it goes back to the one it was running  
> before (i.e. the first one that was forked) then yes, you're right  
> that it won't matter.  But if it picks another from that list then
> he can cause the second thread to run while the first is still in  
> the loop.  This is what I was trying to say. :)

Okay - I was trying to say that it will indeed pick the next process,  
not the first one.

Btw, if you just #fork it, it will run at the same priority as the
current process (which will be the UI process if you do this in a
workspace). And at least in the default image there is no higher-
priority process that loops at 40 ms intervals. There is an "event
tickler" at 500 ms (see your process browser).
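
For illustration, a minimal workspace sketch of the two ways to start
a process; the priority accessor is Processor's standard
userBackgroundPriority, and the Transcript output is only there to
make the difference visible:

"Runs at the priority of the forking process (the UI process when done in a workspace)."
[Transcript show: 'same priority'; cr] fork.

"Runs at an explicitly lower priority, so it only gets the CPU when the UI process is idle."
[Transcript show: 'background'; cr] forkAt: Processor userBackgroundPriority.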

- Bert -




Re: Multiple processes using #nextPutAll:

Nicolas Cellier-3

So Damien just has to fork a process at a higher priority first, one
that will repeatedly wake up.

[100 timesRepeat: [(Delay forMilliseconds: 50) wait]] forkAt:
etc...

If the individual nextPutAll: processes last longer than 50 ms, he
will get what he wanted to see. Otherwise, reduce the delay.
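
A fuller sketch of that tickler, with a priority filled in just for
illustration (Processor userInterruptPriority is only one plausible
choice; anything above the priority of the writing processes will do):

[100 timesRepeat: [(Delay forMilliseconds: 50) wait]]
    forkAt: Processor userInterruptPriority.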

But, question: what is the minimum delay (delay resolution)?

Nicolas




Re: Multiple processes using #nextPutAll:

Ramiro Diaz Trepat
Hello Damien,
   Back in 2005, I asked the list why no one was talking about
implementing native threads in Squeak. I ranted a little, and got
seriously beaten up by some heavyweights of the community.
   The important thing is that, in spite of the passionate discussion
around the subject, VERY illustrative stuff about Squeak's threading
model came up in that thread.
   So if you care to read a little about the problems of native
threads and green threads in Squeak, and on different platforms as
well, my original post is here:

   http://lists.squeakfoundation.org/pipermail/squeak-dev/2005-April/090791.html

   You can probably follow the thread from there.
   Cheers.

   r.


Re: Multiple processes using #nextPutAll:

timrowledge
In reply to this post by Nicolas Cellier-3

On 26-May-07, at 1:51 PM, nicolas cellier wrote:

> But, question: what is the minimum delay (delay resolution)?

It *tries* to be 1 ms. No guarantees though; a long-running blocking
primitive (a complex BitBlt copying and transforming a 10 MP 32 bpp
image would probably count, or a large GC run, for example) will
almost certainly cause a longer delay. We spent a lot of effort about
three years ago trying to get the system to be reasonably reliable
wrt the ms timer.
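
A hedged workspace sketch for checking that on a given VM (Time
millisecondClockValue is the standard millisecond clock; the average
will come out well above 1 ms if the VM or OS cannot honour the
resolution):

| t |
t := Time millisecondClockValue.
100 timesRepeat: [(Delay forMilliseconds: 1) wait].
Transcript show: ((Time millisecondClockValue - t) / 100.0) printString, ' ms per 1 ms delay'; cr.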

If you fork several processes at the same priority as the forker,
they will get added to the quiescent queue for that priority and will
only get to run when
a) some higher-priority process has preempted the scheduler, OR the
active process waits on a semaphore OR otherwise yields, and
b) they get to the front of the queue.
In practice you will often find that they run sequentially. As
mentioned, it is possible to fork a tickler process to stir things up
on a regular (or even irregular) basis if you want.
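
A small workspace sketch of that behaviour (hedged: the exact
interleaving depends on timing, but it shows the tendency):

| out |
out := WriteStream on: String new.
[5 timesRepeat: [out nextPutAll: 'A'. Processor yield]] fork.
[5 timesRepeat: [out nextPutAll: 'B'. Processor yield]] fork.
(Delay forMilliseconds: 100) wait.
Transcript show: out contents; cr.
"Typically 'ABABABABAB'; drop the yields and the two loops usually run back to back, giving 'AAAAABBBBB'."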

And I'm sure I will have forgotten some detail or other.

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Strange OpCodes: NNI: Neglect Next Instruction




Re: Multiple processes using #nextPutAll:

Lex Spoon-3
In reply to this post by Ramiro Diaz Trepat


Also, some good wiki pages have emerged after mailing-list
discussions.  See:

   http://wiki.squeak.org/squeak/382


Lex



Re: Multiple processes using #nextPutAll:

"Martin v. Löwis"
In reply to this post by Damien Cassou-3
>  queue := self newQueue.
>  writingBlock := [queue nextPutAll: (1 to: 10000)].
>  writingBlock
>    fork;
>    fork;
>    fork.
>  Processor yield.
>  self assert: (queue next: 10000) asArray = (1 to: 10000) asArray.
>  self assert: (queue next: 10000) asArray = (1 to: 10000) asArray.
>  self assert: (queue next: 10000) asArray = (1 to: 10000) asArray.
>  self assert: queue atEnd.
>
>
> I assume it's because when I fork, the work is done before the
> following fork starts.
>
> Can somebody help me write this test?

I assume that nextPutAll: relies on do: to fetch all elements of
the collection. So you could define a BlockingCollection, which
contains a collection and a semaphore as instance variables.
Then, do: would do

do: aBlock
  collection do:[:each|
     semaphore wait.
     aBlock value: each.
     semaphore signal.
  ].

Alternatively, if it relies on at:, also implement

at: index
  | result |
  semaphore wait.
  result := collection at: index.
  semaphore signal.
  ^result.

Then, in the test, you do

sem := Semaphore new.
writingBlock := [queue nextPutAll:
   (BlockingCollection on: (1 to: 10000) with: sem)].
sem signal.
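
For completeness, a hedged sketch of the BlockingCollection class
assumed above; the class name, instance variables and on:with:
constructor are illustrative, not something that already exists in
the image. Together with the do: and at: methods given above, it
should be enough for nextPutAll: to enumerate it:

Object subclass: #BlockingCollection
    instanceVariableNames: 'collection semaphore'
    classVariableNames: ''
    poolDictionaries: ''
    category: 'Temp'

BlockingCollection class>>on: aCollection with: aSemaphore
    "Answer a wrapper that enumerates aCollection under aSemaphore."
    ^self new setCollection: aCollection semaphore: aSemaphore

BlockingCollection>>setCollection: aCollection semaphore: aSemaphore
    collection := aCollection.
    semaphore := aSemaphore

BlockingCollection>>size
    ^collection size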

HTH,
Martin



Re: Multiple processes using #nextPutAll:

Andreas.Raab
Yes, that is a very clever solution. You could tidy this up a little by
using:

Array subclass: #YieldingArray
     instanceVariableNames: ''
     classVariableNames: ''
     poolDictionaries: ''
     category: 'Temp'

YieldingArray>>do: aBlock
    "Evaluate aBlock with each of the receiver's elements, yielding after each one."
     1 to: self size do: [:i |
         aBlock value: (self at: i).
         Processor yield.
     ].

#yield will take care of things without a semaphore due to Squeak's
process scheduling rules.
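
A hedged usage sketch for Damien's test, assuming the queue's
nextPutAll: enumerates its argument via do: (withAll: comes from
ArrayedCollection's standard protocol):

writingBlock := [queue nextPutAll: (YieldingArray withAll: (1 to: 10000))].
writingBlock
    fork;
    fork;
    fork.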

Cheers,
   - Andreas




Re: Multiple processes using #nextPutAll:

timrowledge

On 29-May-07, at 12:40 AM, Andreas Raab wrote:

> Yes, that is a very clever solution. You could tidy this up a  
> little by using:
Even better, yield is a very small, fast prim, so using it would have
little chance of significantly affecting the overall performance of
such an algorithm. Much better than the rather long-winded version of
yield that used to exist.


tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Useful random insult:- A gross ignoramus -- 144 times worse than an  
ordinary ignoramus.


