"Smalltalk" and "Real-time"


"Smalltalk" and "Real-time"

KJ
Are the terms "Smalltalk" and "Real-time application development"
incompatible with each other?



Re: "Smalltalk" and "Real-time"

Steven T Abell-2
KJ wrote:
> Are the terms "Smalltalk" and "Real-time application development"
> incompatible with each other?

Depends how real your time is.
As your time granularity shrinks,
you'll have to take increasingly aggressive measures,
but that's pretty much par for the course in any realtime programming.
The basic issue is to preallocate as many objects as you can at startup,
then manage them explicitly, attempting to prevent gc on them.
Another approach is hybrid code,
where your Smalltalk code prepares a realtime action plan,
and then hands the plan off to some executor code
that's either linked with your image,
or running elsewhere on your CPU,
or running on another CPU.
These strategies work very well,
assuming you've done your homework and chosen the right one.
For many realtime systems, the amount of actual realtime work is small,
so make your life easy and do most of the work in Smalltalk.
Factoring your application into these two parts
will not only give you a product that works well,
it will give you a product that you can work *on* well.
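
For instance, here is a minimal sketch of that preallocation idea as a
workspace snippet (the pool size, and the plain Arrays standing in for the
real preallocated objects, are just placeholders):

| pool guard buffer |
pool := OrderedCollection new.
guard := Semaphore forMutualExclusion.

"At startup: preallocate the working objects."
1 to: 1000 do: [:i | pool add: (Array new: 64)].

"In the time-critical code: borrow an object, use it, and always give it back.
 Nothing new is allocated, so there is nothing new for the collector to chase."
buffer := guard critical: [pool removeLast].
["... fill and use buffer here ..."]
    ensure: [guard critical: [pool addLast: buffer]].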

Steve



Re: "Smalltalk" and "Real-time"

James T. Savidge
In reply to this post by KJ
Greetings,

KJ wrote:
> Are the terms "Smalltalk" and "Real-time application development"
> incompatible with each other?

Nope.

Check out the VisualWorks add-on called ControlWORKS. It is a "machine control
software framework designed for semiconductor processing equipment."

        http://www.adventact.com/products/controlworks/

I spent several years working on projects in ControlWORKS, and it is much better
for controlling semiconductor equipment than any of the other frameworks I've
worked with.

Also, check out Mr. Robertson's blog posts about Esmertec (formerly known as
Resilient), a "real-time secure software platform for extremely memory
constrained devices." It is an embedded Smalltalk:

http://www.cincomsmalltalk.com/blog/blogView?search=Resilient&searchTitle=true&searchText=true

James T. Savidge, Saturday, March 5, 2005



Re: "Smalltalk" and "Real-time"

jarober
In reply to this post by KJ
Nope.  We have customers doing real-time work in VisualWorks.



Re: "Smalltalk" and "Real-time"

Janko Mivšek
In reply to this post by KJ
KJ wrote:
> Are the terms "Smalltalk" and "Real-time application development"
> incompatible with each other?

This is my observation from field experience with Smalltalk systems:

If real-time means to you a guaranteed response within:
- 1 second: definitely achievable
- 100 ms: easily achievable
- 10 ms: achievable with some tweaking of the garbage collector and process
priorities (a small sketch follows)
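
For the 10 ms case, the tweaking can be as simple as something like the
following sketch (the worker block is a placeholder, and Smalltalk
garbageCollect is the Squeak spelling; other dialects expose GC control
differently):

| worker |
"Placeholder block standing in for the real time-critical job."
worker := [ "read sensor, compute, drive actuator ..." ].

"Collect up front so the collector is less likely to run inside the critical work."
Smalltalk garbageCollect.

"Run the job above normal user priority so ordinary processes cannot delay it."
worker forkAt: Processor userInterruptPriority.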

Regards
Janko



Re: "Smalltalk" and "Real-time"

Randy A. Ynchausti-3
In reply to this post by KJ
KJ,

> Are the terms "Smalltalk" and "Real-time application development"
> incompatible with each other?

There is one sticky wicket about VisualWorks that I had to overcome when
building the architecture of the KnowledgeScape real-time adaptive
optimization software.  That sticky wicket is how the VisualWorks
ProcessorScheduler works.  Anyone building and deploying "multi-process"
real-time applications in VisualWorks should thoroughly study how processes
are handled by the ProcessorScheduler -- there are "major" implications
about how you architect your application based on that handling of
processes.

Regards,

Randy



Re: "Smalltalk" and "Real-time"

pax
Randy,

> Anyone building and deploying "multi-process"
> real-time applications in VisualWorks should thoroughly study how
> processes are handled by the ProcessorScheduler -- there are "major"
> implications about how you architect your application based on that
> handling of processes.

Can you be more specific about your assertion? I have constructed a
Server that handles transactions from client apps. As transactions
arrive at the server, a new Smalltalk thread is created and I let the
ProcessorScheduler handle the details. A major concern is the use of
Mutex to protect the resources within an executing thread. If other
processes need said resources, they must wait until the current thread
completes and releases the resources. No problems to report as of yet,
but it is a concern. Server threads just update shared objects and, when
necessary, perform CRUD transactions against a persistent data store.
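
Roughly, the pattern looks like this sketch (the shared dictionary and the
sample transactions are placeholders for the real server code):

| mutex sharedState handleTransaction |
mutex := Semaphore forMutualExclusion.
sharedState := Dictionary new.

"One forked Smalltalk process per incoming transaction; the mutex serializes
 access to the shared objects while each process runs its update."
handleTransaction := [:transaction |
    [mutex critical: [sharedState at: transaction key put: transaction value]]
        fork].

"Stand-ins for real client requests."
handleTransaction value: ('order-1' -> 42).
handleTransaction value: ('order-2' -> 99).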

Any details regarding "major" implications would be most appreciated.

Pax



Re: "Smalltalk" and "Real-time"

keith
In reply to this post by Steven T Abell-2
Good points.
Realtime applications can usually be divided into "hard" realtime and
"soft" realtime, so some approaches may or may not be applicable
depending on the actual realtime constraints. Also note that there is
ongoing work on realtime garbage collection (mainly Java-focused, of
course). In addition, Lars Bak recently founded (and sold) a company
focused on a realtime Smalltalk variant. See www.oovm.com for more
information (I think the purchasing company is Esmertec).

KeithB

Steven T Abell wrote:
> KJ wrote:
> > Are the terms "Smalltalk" and "Real-time application development"
> > incompatible with each other?
>
> Depends how real your time is.
> As your time granularity shrinks,
> you'll have to take increasingly aggressive measures,
> but that's pretty much par for the course in any realtime programming.
> The basic issue is to preallocate as many objects as you can at startup,
> then manage them explicitly, attempting to prevent gc on them.
> Another approach is hybrid code,
> where your Smalltalk code prepares a realtime action plan,
> and then hands the plan off to some executor code
> that's either linked with your image,
> or running elsewhere on your CPU,
> or running on another CPU.
> These strategies work very well,
[snip]



Re: "Smalltalk" and "Real-time"

Reinout Heeck-3
In reply to this post by pax
Pax wrote:
> I have constructed a
> Server that handles transactions from client apps. As transactions
> arrive at the server, a new Smalltalk thread is created and I let the
> ProcessScheduler handle the details. A major concern is the use of
> Mutex to protect the resources within an executing thread. If other
> processes need said resources, they must wait until the current thread
> completes and releases the resources.

I found the NAMOS paper by David Reed (the one that Alan Kay regularly
promotes) very instructive with regard to this type of question.

   http://lispmeister.com/blog/citations/dp-reed-thesis.html



R
-



Re: "Smalltalk" and "Real-time"

Tim Rowledge-2
In reply to this post by keith
In message <[hidden email]>
          "keith" <[hidden email]> wrote:

> Good points.
> Realtime applications can usually be divided into "hard" realtime and
> "soft" realtime, so some approaches may or may not be applicable
> depending on actual realtime constraints.
Some years ago at Interval Research we made a fairly hard real-time OS
based on Squeak. Some info about it is at
http://sumeru.stanford.edu/tim/pooters/RTOSinSmalltalk.html 


tim
--
Tim Rowledge, [hidden email], http://sumeru.stanford.edu/tim
Strange OpCodes: DMZ: Divide Memory by Zero



Re: "Smalltalk" and "Real-time"

Nevin Pratt
In reply to this post by keith
Realtime Definition:  Where "X" probability of a "response" can be
guaranteed within "Y" microseconds, where "X" and "Y" define how real
the time is, and "response" is also properly defined.

Without suitable definitions of "X", "Y", and "response", I've always
maintained that the term "realtime" is ambiguous and meaningless.

But that's just me.

Nevin



Re: "Smalltalk" and "Real-time"

Randy A. Ynchausti-3
In reply to this post by pax
Pax

>
>> Anyone building and deploying "multi-process"
>> real-time applications in VisualWorks should thoroughly study how
>> processes are handled by the ProcessorScheduler -- there are "major"
>> implications about how you architect your application based on that
>> handling of processes.
>
> Can you be more specific about your assertion?

Yes, you should play around with the following code snippet:

| aBlock1 aBlock2 aProc1 aProc2 aStream aDelay |
aStream := WriteStream on: String new.
aBlock1 := [
    | aMessage1 |
    aMessage1 := 'Hello from: Proc1'.
    1 to: 100 do: [:i | aStream nextPutAll: (i printString, ' : ', aMessage1); cr]].
aBlock2 := [
    | aMessage2 |
    aMessage2 := 'Hello from: Proc2'.
    1 to: 100 do: [:j | aStream nextPutAll: (j printString, ' : ', aMessage2); cr]].
"Fork both blocks at the same priority."
aProc1 := aBlock1 forkAt: 50.
aProc2 := aBlock2 forkAt: 50.
"Poll until both processes have terminated, then show the collected output."
aDelay := Delay forMilliseconds: 500.
[aProc1 isTerminated and: [aProc2 isTerminated]] whileFalse: [aDelay wait].
Transcript show: aStream contents.
^Array with: aProc1 with: aProc2 with: aStream

One might expect to get something like:
...
25 : Hello from: Proc1
25 : Hello from: Proc2
26 : Hello from: Proc1
26 : Hello from: Proc2
...

But what really happens is:

1 : Hello from: Proc1
...
100 : Hello from: Proc1
1 : Hello from: Proc2
...
100 : Hello from: Proc2

You should figure out how to make the first scenario happen, and why you need
to do what has to be done to guarantee that the ordering is always the same.
That is a good beginning to understanding.  Note that the WriteStream is used
in the code to avoid the problems of writing to the Transcript from different
processes.  In other words, writing to the Transcript is not thread-safe.
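
One way (there are others) to get the interleaved ordering is to have each
process yield after every line, so the two equal-priority processes take
turns; a sketch along the lines of the snippet above:

| aStream aBlock aProc1 aProc2 aDelay |
aStream := WriteStream on: String new.
"Each process yields after every line, handing control to the other
 runnable process at the same priority, so the output interleaves."
aBlock := [:label |
    1 to: 100 do: [:i |
        aStream nextPutAll: (i printString, ' : ', label); cr.
        Processor yield]].
aProc1 := [aBlock value: 'Hello from: Proc1'] forkAt: 50.
aProc2 := [aBlock value: 'Hello from: Proc2'] forkAt: 50.
aDelay := Delay forMilliseconds: 500.
[aProc1 isTerminated and: [aProc2 isTerminated]] whileFalse: [aDelay wait].
Transcript show: aStream contents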

> I have constructed a
> Server that handles transactions from client apps. As transactions
> arrive at the server, a new Smalltalk thread is created and I let the
> ProcessScheduler handle the details.

This will probably work as long as the transaction processing is really
simple and short.  It will probably behave as you want until you start
getting large numbers of client app transactions hitting the server; at
that point, client transactions can take a really long time to even begin
processing if the process handling the transaction requests is running at a
higher priority than the client transaction processes.  Otherwise, the
client transaction processes will run, but the requests will not be
processed in a timely manner.

> A major concern is the use of
> Mutex to protect the resources within an executing thread.

If there is only one mutex then the code is likely protected, assuming that
all of your client app transactions are forked at the same priority and
there are no higher-priority processes that run and consume the available
processing capacity of the CPU that the code is running on.

> If other
> processes need said resources, they must wait until the current thread
> completes and releases the resources.

The mutex only guarantees that other processes that block on it will
enter a wait state.  This does not guarantee that the client app
transactions will receive any processing time.

> No problems to report as of yet,
> but it is a concern. Server threads just update shared objects and when
> necessary, perform CRUD transactions against a persistent data store.

From your brief description, the most probable opportunity for deadlock is
the heavy-load scenario given above.  The next most probable
opportunity for deadlock is if a client app transaction process can die
without signaling the mutex.  The next most probable opportunity for
deadlock is when a client app transaction process does not get a chance to
run because a higher-priority process is constantly consuming the available
CPU, etc.

Regards,

Randy



Re: "Smalltalk" and "Real-time"

Mike Hales-3
Understanding ProcessorScheduler is important, so...

>
> One might expect to get something like:
> ...
> 25 : Hello from: Proc1
> 25 : Hello from: Proc2
> 26 : Hello from: Proc1
> 26 : Hello from: Proc2
> ...

One shouldn't expect this behaviour, because VW doesn't work that way:
processes of the same priority are non-preemptive.

>
> But what really happens is:
>
> 1 : Hello from: Proc1
> ...
> 100 : Hello from: Proc1
> 1 : Hello from: Proc2
> ...
> 100 : Hello from: Proc2
>
>
This is what should happen.  If you want to synchronize two processes,
use a semaphore.
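
For example, a pair of semaphores forces two equal-priority processes to
alternate; a minimal sketch:

| sem1 sem2 |
sem1 := Semaphore new.
sem2 := Semaphore new.
[1 to: 10 do: [:i |
    Transcript show: 'Proc1 ', i printString; cr.
    "Let Proc2 take a turn, then wait for it to hand control back."
    sem2 signal.
    sem1 wait]] fork.
[1 to: 10 do: [:j |
    "Wait for Proc1's signal before printing, then hand control back."
    sem2 wait.
    Transcript show: 'Proc2 ', j printString; cr.
    sem1 signal]] fork.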

Mike



Re: "Smalltalk" and "Real-time"

pax
In reply to this post by Randy A. Ynchausti-3
Randy,

Thanks for the input; most informative. On my server, all client
transaction threads are forked at the same priority, so the
ProcessorScheduler will see this and determine which waiting thread will
run based on waiting time.

All transactions are quite simple. Under normal conditions (well,
normal for my system) a transaction updates or creates a tuple in the
database. At other times, the transaction just reads some number
of tuples from the database. There is one "heavy transaction" from a
certain type of client that performs a commit which writes to several
database tables during the context of the transaction, several being
about 7 to 10 database tables. This process has been optimized as much
as possible to ensure that it's quick and safe. There are different
applications (processes) that run on the server, so even if a given
thread locks resources via a mutex, another thread may still run as it
is requesting services from a different Smalltalk process on the
server. Having different server applications handle discrete
transactions does provide flexibility and helps to keep backlog on the
low side. Each server process has its own persistence manager and thus
talks to a different database. So there isn't one big queue of
transactions waiting to run.

Your input is most helpful as it allows me to focus on server behavior
when the number of client requests increases. So far, I have been able
to watch multiple client requests via a "Live Transaction Inspector". I
can see waiting threads, but they don't wait very long. More attention
needs to be paid to the possibility of a thread dying and not releasing
the mutex via a semaphore signal.
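
The usual guard for that is to pair the wait with an ensure: block, so the
semaphore is signalled on normal completion and also when the process is
unwound or terminated; a sketch (the work block is a placeholder):

| mutex |
mutex := Semaphore forMutualExclusion.
[
    mutex wait.
    ["... the real transaction work goes here ..."]
        ensure: [mutex signal]  "runs on normal exit and when the process is unwound"
] fork.

In most Smalltalks, Semaphore>>critical: packages the same wait / ensure: /
signal pairing, so using critical: everywhere is usually the simpler spelling.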

Therefore, the new agenda for testing will be directed at the following
scenarios:

1. Heavy Load Testing
2. The unexpected death of a given transaction thread
3. The unexpected death of a given thread as it waits for a subsidiary
thread running on another server. (Yes, my servers can talk to each
other and request distributed services and other units of work.) Some
of the clients are remote as well, so it makes for a really nasty
situation when things don't communicate or a comm link gets dropped.

Based on your email addy, it looks like you may be running GemStone
transactions for OOCL, based out of Hong Kong... Interesting...

Thanks again mate...

Pax



Re: "Smalltalk" and "Real-time"

Randy A. Ynchausti-3
In reply to this post by Mike Hales-3
Mike,

> Understanding ProcessorScheduler is important, so...

>> One might expect to get something like:
>> ...
>> 25 : Hello from: Proc1
>> 25 : Hello from: Proc2
>> 26 : Hello from: Proc1
>> 26 : Hello from: Proc2
>> ...

> One shouldn't expect this behaviour because VW doesn't work that way,
> processes of the same priority are non-preemptive.

This is a curious comment and illustrates the reasoning for my previous
post.

One can reasonably expect a system to provide infrastructure and
architecture (preemption) to prevent a thread from being starved.  Let me
see if I can clear up any misunderstanding by providing an example in
another VM-based system, C#/.Net:

private void button1_Click(object sender, System.EventArgs e)
{
    Thread t1 = new Thread(new ThreadStart(ThreadProc1));
    Thread t2 = new Thread(new ThreadStart(ThreadProc2));
    t1.Start( );
    t2.Start( );
}

public static void ThreadProc1()
{
    for (int i = 0; i < 100; i++)
    {
        Console.WriteLine("ThreadProc1: {0}", i);
     }
}

public static void ThreadProc2()
{
    for (int i = 0; i < 100; i++)
    {
        Console.WriteLine("ThreadProc2: {0}", i);
    }
}

This example basically mimics the example I provided earlier in Smalltalk.
The output is:

ThreadProc1: 0
...
ThreadProc1: 60
ThreadProc2: 0
...
ThreadProc2: 60
ThreadProc1: 61
...
ThreadProc1: 99
ThreadProc2: 61
...
ThreadProc2: 99

Notice that preemption keeps a thread from being starved.  It was expected
and does occur.  My point was specifically that VW does not provide that
preemption, even though one would expect it to do so in an
effort to keep a process (thread) from being starved.  This preemption does
not occur in VW when forking off processes, and therefore the developer must
take extra care when developing "real-time" software that uses more than one
process (although this statement in and of itself is very loose).  The
problems I pointed out to Pax are a few of the gotchas.

>> But what really happens is:
>>
>> 1 : Hello from: Proc1
>> ...
>> 100 : Hello from: Proc1
>> 1 : Hello from: Proc2
>> ...
>> 100 : Hello from: Proc2

> This is what should happen.  If you want to synchronize two processes,
> use a semaphore.

A process synchronization semaphore does not compensate for, nor does it
overcome, the problems present in a system that starves one process when
another process (even of the same priority) is unyielding.  The developer
must add additional infrastructure and architecture to deal with many of
these issues.

Regards,

Randy



Re: "Smalltalk" and "Real-time"

Randy A. Ynchausti-3
In reply to this post by pax
Pax,

[snip]

> Therefore, the new agenda for testing will be directed at the following
> scenarios:
>
> 1. Heavy Load Testing
> 2. The unexpected death of a given transaction thread
> 3. The unexpected death of a given thread as it waits for a subsidiary
> thread running on another server. (Yes, my servers can talk to each
> other and request distributed services and other units of work)... Some
> of the clients are remote as well so, it makes for a really nasty
> situation when things don't communicate or a comm link gets dropped.

Great plan!

> Based on your email addy, it looks like you may be running GemStone
> transactions for OOCL based on out Hong Kong... Interesting...

You are very observant.  Keep up the good work and I hope to chat again
soon!

Regards,

Randy



Re: "Smalltalk" and "Real-time"

jas
In reply to this post by Randy A. Ynchausti-3
Randy A. Ynchausti wrote:

> Mike,
>
>
>>Understanding ProcessorScheduler is important, so...
>
>
>>>One might expect to get something like:
>>>...
>>>25 : Hello from: Proc1
>>>25 : Hello from: Proc2
>>>26 : Hello from: Proc1
>>>26 : Hello from: Proc2
>>>...
>
>
>>One shouldn't expect this behaviour because VW doesn't work that way,
>>processes of the same priority are non-preemptive.
>
>
> This is a curious comment and illustrates the reasoning for my previous
> post.
>
> One can reasonable expect a system to provide infrastructure and
> architecture (preemption) to prevent a thread from being starved.


Hi Randy.

I'll have to disagree here.
Yes, preemptive scheduling is a pretty reasonable assumption,
but it has more to do with minimizing the skew between prioritization as
declared and as scheduled, and only indirectly to do with 'thread' (i.e.
process, for most Smalltalks) starvation.


> Let me
> see if I can clear up any misunderstanding by providing an example in
> another VM-based system, that is C#/.Net:
>
> private void button1_Click(object sender, System.EventArgs e)
> {
>     Thread t1 = new Thread(new ThreadStart(ThreadProc1));
>     Thread t2 = new Thread(new ThreadStart(ThreadProc2));
>     t1.Start( );
>     t2.Start( );
> }
>
> public static void ThreadProc1()
> {
>     for (int i = 0; i < 100; i++)
>     {
>         Console.WriteLine("ThreadProc1: {0}", i);
>      }
> }
>
> public static void ThreadProc2()
> {
>     for (int i = 0; i < 100; i++)
>     {
>         Console.WriteLine("ThreadProc2: {0}", i);
>     }
> }
>
> This example basically mimics the example I provided earlier in Smalltalk.
> The output is:
>
> ThreadProc1: 0
> ...
> ThreadProc1: 60
> ThreadProc2: 0
> ...
> ThreadProc2: 60
> ThreadProc1: 61
> ...
> ThreadProc1: 99
> ThreadProc2: 61
> ...
> ThreadProc2: 99
>
> Notice that preemption keeps a thread from being starved.


Nope - this demonstrates a 'round robin' scheduling policy.
Processes are scheduled into 'time-slices' - when a process
gets scheduled, it is only allowed to run for at most one slice,
and is then forcibly switched out, and requeued.

There are various ways of determining how big each 'time-slice'
is going to be, giving a range of possible 'round-robin' schedulers.

Smalltalk, on the other hand,  uses a real-time scheduling policy.
Once a process gets scheduled, the active process can run to completion,
or yield at its own discretion.  Only the presence of a *higher*
priority runnable process can 'take priority'.

For (hard) real-time applications, you need a real-time scheduler
AND the ability to calculate the maximum running time of
each process in the mix, AND the maximum servicing time of
every interrupt handler.


> It was expected and does occur.


Yes, it does occur.  But it is expected BECAUSE it is
running on a system with round-robin scheduling, rather
than BECAUSE that is the only "correct" behavior.


> My point was specifically that VW does not provide the
> preemption; even though


some might mistakenly


> expect it to provide

the same result they have become accustomed to seeing,
not having been exposed to any other scheduling policies.


> This does not occur in VW when forking off processes


Not sure what you mean here.
In scheduling terms, when a process is forked at the same priority as
the active process, there is a design choice:

a) queue the forked process
b) switch to the forked process immediately


To me, the most natural to 'expect' is (a),
but the most natural to use is actually (b),
noting that programmers are more likely to
think, and thus code, in terms of processes
'forked' at the point they are intended to
run, so it is pretty convenient if the designer
chooses (b).



> and therefore,
> the developer must take extra care when developing "real-time" software
> that uses more than one process (although, this statement in and of itself
> is very loose).


Yes, a little bit too loose, I think.
Even "soft" real-time requires a real-time scheduling policy.
If one loosens up the notion all the way to the point that
even round-robin scheduling is allowed, the value of making
a distinction is lost.




> The problems I pointed out to Pax are a few of the gotchas.


Right - they are important issues to be aware of.
They become 'problems' if you haven't seen the impact
before (first hill to climb), or mis-predict it (the
ever-present hill).  Part of what makes concurrency "hard".


> A process synchronization semaphore does not compensate for nor does it
> overcome the problems present in a system that starves one process when
> another process (even of the same priority) is unyielding.


Correct, a semaphore by itself does not compensate.
If you want to program with the model you're used to,
you'll need a round-robin scheduling policy.

Fortunately, it is easy to change from the real-time policy to round-robin:

You just need a high priority process to wake up every "time-slice",
and do something that will have the same effect as the active process
yielding (to the end of the queue), rather than simply being preempted
(to the front of the queue).


And consider this: What if it were the reverse?
You got round-robin for free, but up to you
to construct real-time on your own
(with just application code).


> The developer
> must add additional infrastructure and architecture to deal with many of
> these issues.


Yup.

Sometimes, you need some real mental gymnastics
to get the whole architecture right.  Sometimes,
the gymnastics are a sign of phase-error, and a
quick couple of lines will restore the phasing.

Smalltalk very often gives you a worst-case of the latter.
Almost as if the designers really knew what they were doing...

   ;-)


Regards,

-cstb



Re: "Smalltalk" and "Real-time"

Randy A. Ynchausti-3
Hello Everyone,

For those trying to understand my comments (quoted below), I should note
that I have tried to use terminology consistent with standard industry usage.
Please refer to the following links for appropriate definitions for the
discussion:

Preemptive multitasking:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconThreadsThreading.asp

http://en.wikipedia.org/wiki/Preemptive_multitasking


Scheduling:

http://en.wikipedia.org/wiki/Scheduling


Round Robin Scheduling:

http://en.wikipedia.org/wiki/Round-robin_scheduling


Starvation:

http://en.wikipedia.org/wiki/Resource_starvation


============================================================


>> One might expect to get something like:
>> ...
>> 25 : Hello from: Proc1
>> 25 : Hello from: Proc2
>> 26 : Hello from: Proc1
>> 26 : Hello from: Proc2
>> ...

One can reasonably expect a system to provide infrastructure and
architecture (preemption) to prevent a thread from being starved.  Let me
see if I can clear up any misunderstanding by providing an example in
another VM-based system, C#/.Net:

private void button1_Click(object sender, System.EventArgs e)
{
    Thread t1 = new Thread(new ThreadStart(ThreadProc1));
    Thread t2 = new Thread(new ThreadStart(ThreadProc2));
    t1.Start( );
    t2.Start( );
}

public static void ThreadProc1()
{
    for (int i = 0; i < 100; i++)
    {
        Console.WriteLine("ThreadProc1: {0}", i);
     }
}

public static void ThreadProc2()
{
    for (int i = 0; i < 100; i++)
    {
        Console.WriteLine("ThreadProc2: {0}", i);
    }
}

This example basically mimics the example I provided earlier in Smalltalk.
The output is:

ThreadProc1: 0
...
ThreadProc1: 60
ThreadProc2: 0
...
ThreadProc2: 60
ThreadProc1: 61
...
ThreadProc1: 99
ThreadProc2: 61
...
ThreadProc2: 99

Notice that preemption keeps a thread from being starved.  It was expected
and does occur.  My point was specifically that VW does not provide that
preemption, even though one would expect it to do so in an
effort to keep a process (thread) from being starved.  This preemption does
not occur in VW when forking off processes, and therefore the developer must
take extra care when developing "real-time" software that uses more than one
process (although this statement in and of itself is very loose).  The
problems I pointed out to Pax are a few of the gotchas.

>> But what really happens is:
>>
>> 1 : Hello from: Proc1
>> ...
>> 100 : Hello from: Proc1
>> 1 : Hello from: Proc2
>> ...
>> 100 : Hello from: Proc2

> This is what should happen.  If you want to synchronize two processes,
> use a semaphore.

A process synchronization semaphore does not compensate for, nor does it
overcome, the problems present in a system that starves one process when
another process (even of the same priority) is unyielding.  The developer
must add additional infrastructure and architecture to deal with many of
these issues.

Regards,

Randy



Re: "Smalltalk" and "Real-time"

Volker Zink
VisualWorks is missing preemption. But this is not really important. You
can easily write something yourself. At least I wrote a ProcessManager
for my scheduled tasks (each task runs in its own process) which just
waits some time and then yields the active task. The only problem we had
is that a task may be in a transaction and its process may not be
preempted during that time. This made things a bit more complex. But a
simple round-robin is easy to implement. And I assume you only have to
manage processes you started (forked). Otherwise you would have to change
system classes.

But we are not really "real-time" (if it takes some milliseconds more it
doesn't really matter), so maybe our solution is not good enough for you.

A more complete infrastructure for processes in VisualWorks would be
nice though (especially, I would like inter-process exceptions, but luckily
I found an article on "Cross-process exception handling" by Ken Auer and
Barry Oglesby in an old Smalltalk Report).

Volker


Randy A. Ynchausti wrote:

> Hello Everyone,
>
> For those trying to understand my comments (quoted below), I should note
> that I have tried to use terminology consistent with standard industry usage.
>
> [snip]



Re: "Smalltalk" and "Real-time"

Blair McGlashan-3
In reply to this post by jas
"cstb" <[hidden email]> wrote in message news:[hidden email]...
> ...
> Fortunately, it is easy to change from the real-time policy to round-robin:
>
> You just need a high priority process to wake up every "time-slice",
> and do something that will have the same effect as the active process
> yielding (to the end of the queue), rather than simply being preempted
> (to the front of the queue).
> ...

Indeed, and since Dolphin puts pre-empted threads to the back of the queue
at their priority level this could be as simple as:

    [[Processor sleep: 50] repeat] forkAt: Processor timingPriority

This will be safe in Dolphin because the system is written with the
assumption that scheduling is time-slicing at the same priority level, even
if the standard Smalltalk-80 scheduling policy is used. However, it is worth
bearing in mind that an assumption of non-preemption is present in quite a
lot of Smalltalk code, and certainly I have come across a number of cases in
other Smalltalk systems where "critical sections" are implemented by simply
raising the priority of execution to the highest possible level. Of course
this works if time-slicing of threads at the same priority level does not
occur, but it could break if you introduce time-slicing.

Regards

Blair

