Resuming a SocketStream after a ConnectionClosed exception?

Resuming a SocketStream after a ConnectionClosed exception?

timrowledge
My MQTT stuff needs to handle the occasional ConnectionClosed type error.

I don’t see anything in SocketStream code that suggests any sort of reconnect, and indeed the ConnectionClosed exception is non-resumable. What on earth does one do when a socket dies?

My two i/o loops are basically just
[[outboundSharedQueue next outputOn: socketStream] repeat]
and
[[self handleIncoming: (MQTTPacket readFrom: socketStream)] repeat]
forked off as separate processes.

The socketStream can raise the ConnectionClosed when either doing the #next as part of the reading, or in the #nextPut: whilst writing; obviously. Right now I’m trying to make it work with the read process modified to

[[[packet := [MQTTPacket readFrom: socketStream] on: NetworkError do:[:e|
        some stuff to reconnect.
        e retry].
self handleIncoming: packet] repeat] forkThing

What ought to happen is the socket stream gets reconnected/replaced, the MQTT connection is re-established and then the [MQTTPacket readFrom: socketStream] is restarted. There’s some extra fun for dealing with interrupted multi-part handshaking sequences too, but that’s a different colour of fish-kettle.

Clearly the ‘some stuff to reconnect’ is not currently doing the job I thought it was since it doesn’t seem to reconnect and hookup again. I’m beginning to think that I ought to suspend the other process before doing the reconnect, and also that I’ve stared at this so much I’m not seeing the wood for the trees.

I’m not picking up any particular inspiration from WebClient or WebServer and haven’t googled into anything that seems relevant. Any ideas out there?

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
How many of you believe in telekinesis? Raise my hands....




Re: Resuming a SocketStream after a ConnectionClosed exception?

Ben Coman
Before the retry maybe do...
* a Delay; and/or
* a garbage collection; and/or
* assign a new SocketStream to the socket stream variable
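
A hedged sketch of what those three steps might look like inside the handler (the `host`/`port` variables are placeholders, not from Tim's actual code):

```smalltalk
"Sketch only: Ben's suggestions combined in the NetworkError handler."
[[packet := [MQTTPacket readFrom: socketStream] on: NetworkError do: [:e |
	(Delay forSeconds: 2) wait.                     "back off before retrying"
	socketStream ifNotNil: [socketStream destroy].  "drop the dead stream"
	Smalltalk garbageCollect.                       "help reclaim old descriptors"
	socketStream := SocketStream openConnectionToHostNamed: host port: port.
	e retry].
self handleIncoming: packet] repeat] fork
```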

cheers -ben

On Sat, Feb 11, 2017 at 2:01 PM, tim Rowledge <[hidden email]> wrote:

> My MQTT stuff needs to handle the occasional ConnectionClosed type error.
>
> I don’t see anything in SocketStream code that suggests any sort of reconnect, and indeed the ConnectionClosed exception is non-resumable. What on earth does one do when a socket dies?
>
> My two i/o loops are basically  just
> [[outboundSharedQueue next outputOn: socketStream] repeat]
> and
> [[self handleIncoming: (MQTTPacket readFrom: socketStream)] repeat]
> forked off as separate processes.
>
> The socketStream can raise the ConnectionClosed when either doing the #next as part of the reading, or  in the #nextPut: whilst writing; obviously. Right now I’m trying to make it work with  the read process modified to
>
> [[[packet := [MQTTPacket readFrom: socketStream] on: NetworkError do:[:e|
>         some stuff to reconnect.
>         e retry].
> self handleIncoming: packet] repeat] forkThing
>
> What ought to happen is the socket stream gets reconnected/replaced, the MQTT connection is re-established and then the [MQTTPacket readFrom: socketStream] is restarted. There’s some extra fun for dealing with interrupted multi-part handshaking sequences too, but that’s a different colour of fish-kettle.
>
> Clearly the ‘some stuff to reconnect’ is not currently doing the job I thought it was since it doesn’t seem to reconnect and hookup again. I’m beginning to think that I ought to suspend the other process before doing the reconnect, and also that I’ve stared at this so much I’m not seeing the wood for the trees.
>
> I’m not picking up any particular inspiration from WebClient or WebServer and haven’t googled into anything that seems relevant. Any ideas out there?
>
> tim
> --
> tim Rowledge; [hidden email]; http://www.rowledge.org/tim
> How many of you believe in telekinesis? Raise my hands....
>
>
>


Re: Resuming a SocketStream after a ConnectionClosed exception?

David T. Lewis
In reply to this post by timrowledge
On Fri, Feb 10, 2017 at 10:01:46PM -0800, tim Rowledge wrote:
> My MQTT stuff needs to handle the occasional ConnectionClosed type error.
>
> I don’t see anything in SocketStream code that suggests any sort of reconnect, and indeed the ConnectionClosed exception is non-resumable. What on earth does one do when a socket dies?
>

I'm not looking at SocketStream as I write this, but the general answer
to this question is that you should close the old socket and open a
new one. Close the old one to ensure that you do not leak file descriptors,
and open a new one because there is almost never anything else to be
done anyway.
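
As a rough sketch, that close-and-reopen step might be packaged in a method on the object owning the stream (the method and variable names here are hypothetical):

```smalltalk
"Sketch: close the dead stream first, then replace it with a fresh one."
reconnect
	socketStream ifNotNil: [socketStream destroy].  "close old socket; don't leak descriptors"
	socketStream := SocketStream openConnectionToHostNamed: host port: port.
	self reestablishSession  "hypothetical hook: redo the MQTT CONNECT handshake"
```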

> My two i/o loops are basically  just
> [[outboundSharedQueue next outputOn: socketStream] repeat]
> and
> [[self handleIncoming: (MQTTPacket readFrom: socketStream)] repeat]
> forked off as separate processes.
>
> The socketStream can raise the ConnectionClosed when either doing the #next as part of the reading, or in the #nextPut: whilst writing; obviously. Right now I’m trying to make it work with the read process modified to
>
> [[[packet := [MQTTPacket readFrom: socketStream] on: NetworkError do:[:e|
> some stuff to reconnect.
> e retry].
> self handleIncoming: packet] repeat] forkThing
>
> What ought to happen is the socket stream gets reconnected/replaced, the MQTT connection is re-established and then the [MQTTPacket readFrom: socketStream] is restarted. There’s some extra fun for dealing with interrupted multi-part handshaking sequences too, but that’s a different colour of fish-kettle.
>

If you have some kind of object that represents "MQTT connection", that may
be the right place to put the reconnect logic. You'll probably want some kind
of delay to avoid endlessly retrying a connection that is not going to work.

Dave


> Clearly the ‘some stuff to reconnect’ is not currently doing the job I thought it was since it doesn’t seem to reconnect and hookup again. I’m beginning to think that I ought to suspend the other process before doing the reconnect, and also that I’ve stared at this so much I’m not seeing the wood for the trees.
>
> I’m not picking up any particular inspiration from WebClient or WebServer and haven’t googled into anything that seems relevant. Any ideas out there?
>
> tim
> --
> tim Rowledge; [hidden email]; http://www.rowledge.org/tim
> How many of you believe in telekinesis? Raise my hands....
>
>
>


Re: Resuming a SocketStream after a ConnectionClosed exception?

timrowledge
The good news is that I’ve got the reconnect basically working, at least from the perspective of the connection getting broken by the mqtt server whilst attempting to read data. I’m not sure right now how I can meaningfully test a problem that happens when the write process tries to send data.

All in all I’m not at all sure that having two forked processes working with the same socket stream is the best way to do this but it seems to work tolerably for now. Might it be better to have the socket reading process at a higher priority? I can’t think of a way to merge the two, which might be a better technique.

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Strange OpCodes: RC: Rewind Core




Re: Resuming a SocketStream after a ConnectionClosed exception?

David T. Lewis
On Mon, Feb 13, 2017 at 06:27:54PM -0800, tim Rowledge wrote:
> The good news is that I’ve got the reconnect basically working, at least from the perspective of the connection getting broken by the mqtt server whilst attempting to read data. I’m not sure right now how I can meaningfully test a problem that happens when the write process tries to send data.
>

It is not likely that you will ever encounter an error on write. Once you
have (or think that you have) a connection established, any small packet of
data that you write is going to at least make its way out to the network
regardless of whether anybody at the other end actually receives it.

What you will more likely see instead is a read error because nobody is at
the other end to talk to you any more, possibly because they did not like
whatever it was that you last sent to them and decided to close their end
of the connection, or just because of some kind of network issue.

Given that you are working with small data packets, your writes will always
appear to succeed immediately without blocking, and you will spend a lot of
time waiting on blocking reads that will either give you the MQTT packet you
are looking for, or will fail with an I/O error of some kind.

> All in all I’m not at all sure that having two forked processes working with the same socket stream is the best way to do this but it seems to work tolerably for now. Might it be better to have the socket reading process at a higher priority? I can’t think of a way to merge the two, which might be a better technique.
>

If you have to ask if raising priority is a good idea, then it isn't  ;-)

I really don't know if it is safe to share a socket stream between a reader
process and a writer process. At a low level (the socket) it should be
perfectly all right, but I don't know if there is any shared state in the
SocketStream. In any case, if it's working that's a good sign.

Dave



Re: Resuming a SocketStream after a ConnectionClosed exception?

Ben Coman


On Tue, Feb 14, 2017 at 11:14 AM, David T. Lewis <[hidden email]> wrote:
On Mon, Feb 13, 2017 at 06:27:54PM -0800, tim Rowledge wrote:
> The good news is that I’ve got the reconnect basically working, at least from the perspective of the connection getting broken by the mqtt server whilst attempting to read data. I’m not sure right now how I can meaningfully test a problem that happens when the write process tries to send data.
>

It is not likely that you will ever encounter an error on write. Once you
have (or think that you have) a connection established, any small packet of
data that you write is going to at least make its way out to the network
regardless of whether anybody at the other end actually receives it.

What you will more likely see instead is a read error because nobody is at
the other end to talk to you any more, possibly because they did not like
whatever it was that you last sent to them and decided to close their end
of the connection, or just because of some kind of network issue.

Given that you are working with small data packets, your writes will always
appear to succeed immediately without blocking, and you will spend a lot of
time waiting on blocking reads that will either give you the MQTT packet you
are looking for, or will fail with an I/O error of some kind.

> All in all I’m not at all sure that having two forked processes working with the same socket stream is the best way to do this but it seems to work tolerably for now. Might it be better to have the socket reading process at a higher priority? I can’t think of a way to merge the two, which might be a better technique.
>

If you have to ask if raising priority is a good idea, then it isn't  ;-)

Why?    
My general impression is that it's easier/safer to manage multi-threading at the application level, where you can see exactly what is going on, than at the system level if that is not *guaranteed* to be thread safe.

You could copy the pattern used by Delay>>schedule:, Delay>>unschedule:, Delay class>>handleTimerEvent:

cheers -ben
 

I really don't know if it is safe to share a socket stream between a reader
process and a writer process. At a low level (the socket) it should be
perfectly all right, but I don't know if there is any shared state in the
SocketStream. In any case, if it's working that's a good sign.

Dave






Resuming a SocketStream after a ConnectionClosed exception?

Louis LaBrunda
In reply to this post by timrowledge
Hi Tim,

On Mon, 13 Feb 2017 18:27:54 -0800, tim Rowledge <[hidden email]> wrote:

>The good news is that I’ve got the reconnect basically working, at least from the perspective of the connection getting broken by the mqtt server whilst attempting to read data. I’m not sure right now how I can meaningfully test a problem that happens when the write process tries to send data.

>All in all I’m not at all sure that having two forked processes working with the same socket stream is the best way to do this but it seems to work tolerably for now.

I think your instincts about having two forked processes working with the same socket stream at
the same time are correct.  As for working tolerably for now, this may be an illusion as there
can be timing problems that seem to show up for no reason.

>Might it be better to have the socket reading process at a higher priority?

This probably won't do any good because once you have issued the read, higher priority won't
read any faster, and if there isn't any data to read, it won't show up any faster.  Also,
waiting for the data may relinquish the CPU.

I don't know much about sockets and streams on Squeak.  I'm no expert but from trial and error
I have learned more than I have ever wanted to about sockets on VA Smalltalk.  That said, I
suggest two things.

1) Use a semaphore to prevent both processes from accessing the socket at the same time.
2) Test to see if there is data waiting to be read before trying to read it (before committing
to a read that will wait for data).  In other words, make the read non-blocking.  If there is
nothing to read, give up the CPU (do a delay or sleep somehow).  If there is data, read as
usual.

With two processes, we need the semaphore.  If keeping two processes is easier than combining
them into one, that's fine.  With one process, we can just alternate between reading and writing.
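
A minimal sketch of Lou's two suggestions combined, assuming the underlying Socket is reachable via a `socket` accessor for a `dataAvailable` test and that `accessMutex` is a shared Mutex (all names hypothetical):

```smalltalk
"Sketch: guard stream access with a mutex and avoid committing to a blocking read."
readLoop
	[socketStream socket dataAvailable
		ifTrue: [| packet |
			packet := accessMutex critical: [MQTTPacket readFrom: socketStream].
			self handleIncoming: packet]
		ifFalse: [(Delay forMilliseconds: 50) wait]] repeat
```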

Hope this helps.

Lou
--
Louis LaBrunda
Keystone Software Corp.
SkypeMe callto://PhotonDemon



Re: Resuming a SocketStream after a ConnectionClosed exception?

Göran Krampe
In reply to this post by timrowledge
Hi guys!

On 14/02/17 03:27, tim Rowledge wrote:
> All in all I’m not at all sure that having two forked processes
> working with the same socket stream is the best way to do this but it
> seems to work tolerably for now. Might it be better to have the
> socket reading process at a higher priority? I can’t think of a way
> to merge the two, which might be a better technique.

I wrote the current incarnation of SocketStream and I intentionally did
not add any semaphore/mutex for protections. And yes, the SocketStream
has internal state to know positions in buffers etc etc - so NO, you
should not use two Processes with the same SocketStream.

Having said that...  if you have one process only writing and one only
reading - you may get away with it - IIRC (no promises) the inBuffer and
outBuffer (and associated ivars) may be 100% separated.

regards, Göran


Re: Resuming a SocketStream after a ConnectionClosed exception?

David T. Lewis
In reply to this post by Ben Coman
On Tue, Feb 14, 2017 at 11:56:12AM +0800, Ben Coman wrote:

> On Tue, Feb 14, 2017 at 11:14 AM, David T. Lewis <[hidden email]>
> wrote:
>
> > On Mon, Feb 13, 2017 at 06:27:54PM -0800, tim Rowledge wrote:
> > > The good news is that I’ve got the reconnect basically working, at
> > least from the perspective of the connection getting broken by the mqtt
> > server whilst attempting to read data. I’m not sure right now how I can
> > meaningfully test a problem that happens when the write process tries to
> > send data.
> > >
> >
> > It is not likely that you will ever encounter an error on write. Once you
> > have (or think that you have) a connection established, any small packet of
> > data that you write is going to at least make its way out to the network
> > regardless of whether anybody at the other end actually receives it.
> >
> > What you will more likely see instead is a read error because nobody is at
> > the other end to talk to you any more, possibly because they did not like
> > whatever it was that you last sent to them and decided to close their end
> > of the connection, or just because of some kind of network issue.
> >
> > Given that you are working with small data packets, your writes will always
> > appear to succeed immediately without blocking, and you will spend a lot of
> > time waiting on blocking reads that will either give you the MQTT packet
> > you
> > are looking for, or will fail with an I/O error of some kind.
> >
> > > All in all I’m not at all sure that having two forked processes
> > working with the same socket stream is the best way to do this but it seems
> > to work tolerably for now. Might it be better to have the socket reading
> > process at a higher priority? I can’t think of a way to merge the two,
> > which might be a better technique.
> > >
> >
> > If you have to ask if raising priority is a good idea, then it isn't  ;-)
> >
>
> Why?

Hi Ben,

I meant that comment with a smiley. I was just saying that adjusting
priorities to address performance is the kind of thing that can cause more
problems than it fixes (and often does). So a good rule of thumb is don't
do it without a good reason. The case that Tim is describing might be
an example of this, because changing process priority would likely have
been of no benefit, but could have added risk of other kinds of issues.

Dave

> My general impression is that its easier/safer to manage multi-threading at
> the application level where you can see exactly what is going on, than at
> the system level if that is not *guaranteed* to be thread safe.
>
> You could copy the pattern used by Delay>>schedule:, Delay>>unschedule:,
> Delay-class>>handleTimerEvent:
>
> cheers -ben
>




Resuming a SocketStream after a ConnectionClosed exception?

Louis LaBrunda
In reply to this post by Louis LaBrunda
Hi Tim,

>>Might it be better to have the socket reading process at a higher priority?

>This probably won't do any good because once you have issued the read, higher priority won't
>read any faster, and if there isn't any data to read, it won't show up any faster.  Also,
>waiting for the data may relinquish the CPU.

On second thought, there could be a reason for either process to have a higher priority than
the other but it has little to do with the sockets.  If the code that processes the "read" data
takes a lot longer to work on it than the "write" process takes to create the data, then it
should have a higher priority.  Or if the "write" process takes longer to create the data than
the "read" code takes to work on what it has read, then it should have a higher priority.

I'm sure this is nothing new to you, it is just that when mixed in with sockets, things get
confusing.

Lou
--
Louis LaBrunda
Keystone Software Corp.
SkypeMe callto://PhotonDemon



Re: Resuming a SocketStream after a ConnectionClosed exception?

Ben Coman
In reply to this post by David T. Lewis


On Wed, Feb 15, 2017 at 8:51 PM, David T. Lewis <[hidden email]> wrote:
On Tue, Feb 14, 2017 at 11:56:12AM +0800, Ben Coman wrote:
> On Tue, Feb 14, 2017 at 11:14 AM, David T. Lewis <[hidden email]>
> wrote:
>
> > On Mon, Feb 13, 2017 at 06:27:54PM -0800, tim Rowledge wrote:
> > > The good news is that I’ve got the reconnect basically working, at
> > least from the perspective of the connection getting broken by the mqtt
> > server whilst attempting to read data. I’m not sure right now how I can
> > meaningfully test a problem that happens when the write process tries to
> > send data.
> > >
> >
> > It is not likely that you will ever encounter an error on write. Once you
> > have (or think that you have) a connection established, any small packet of
> > data that you write is going to at least make its way out to the network
> > regardless of whether anybody at the other end actually receives it.
> >
> > What you will more likely see instead is a read error because nobody is at
> > the other end to talk to you any more, possibly because they did not like
> > whatever it was that you last sent to them and decided to close their end
> > of the connection, or just because of some kind of network issue.
> >
> > Given that you are working with small data packets, your writes will always
> > appear to succeed immediately without blocking, and you will spend a lot of
> > time waiting on blocking reads that will either give you the MQTT packet
> > you
> > are looking for, or will fail with an I/O error of some kind.
> >
> > > All in all I’m not at all sure that having two forked processes
> > working with the same socket stream is the best way to do this but it seems
> > to work tolerably for now. Might it be better to have the socket reading
> > process at a higher priority? I can’t think of a way to merge the two,
> > which might be a better technique.
> > >
> >
> > If you have to ask if raising priority is a good idea, then it isn't  ;-)
> >
>
> Why?

Hi Ben,

I meant that comment with a smiley.

I saw that. I just had the wrong smiley interpreter.  :) :)
Thanks for the expansion. 
cheers -ben

I was just saying that adjusting
priorities to address performance is the kind of thing that can cause more
problems than it fixes (and often does). So a good rule of thumb is don't
do it without a good reason. The case that Tim is describing might be
an example of this, because changing process priority would likely have
been of no benefit, but could have added risk of other kinds of issues.

Dave

> My general impression is that its easier/safer to manage multi-threading at
> the application level where you can see exactly what is going on, than at
> the system level if that is not *guaranteed* to be thread safe.
>
> You could copy the pattern used by Delay>>schedule:, Delay>>unschedule:,
> Delay-class>>handleTimerEvent:
>
> cheers -ben
>







Re: Resuming a SocketStream after a ConnectionClosed exception?

timrowledge
In reply to this post by Göran Krampe
Hi Göran!

> On 15-02-2017, at 3:50 AM, Göran Krampe <[hidden email]> wrote:
>

> I wrote the current incarnation of SocketStream and I intentionally did not add any semaphore/mutex for protections. And yes, the SocketStream has internal state to know positions in buffers etc etc - so NO, you should not use two Processes with the same SocketStream.
>
> Having said that...  if you have one process only writing and one only reading - you may get away with it - IIRC (no promises) the inBuffer and outBuffer (and associated ivars) may be 100% separated.

A single process to write and another to read, both at the same priority so only one can be doing stuff at once. The reader will wait on data coming in, the writer on packets being available in a shared queue. I *think* that makes it safe from what I can make out of the code.

If anyone has suggestions on a better way to do this I’d be very happy to learn.

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Strange OpCodes: RSC: Rewind System Clock




Re: Resuming a SocketStream after a ConnectionClosed exception?

Chris Muller-3
Hi Tim,

> If anyone has suggestions on a better way to do this I’d be very happy to learn.

Not sure if it fits your model, but you might be interested in the
design employed by Ma client server (available on SM), which was
originally from John McIntosh.  Take a look at the MaServerSocket
class.

The idea is to never block any request processing based on network
activity with clients, whether sending or receiving, since that can be
slow.  All sends and receives are done in background Processes (but
never at the same time with a single client).

The server creates one Process per client for extracting request bytes from
them.  When all bytes of a request have been received from a client,
the chunk is added to a common 'requestQueue' (a SharedQueue) for all
clients.

Unlike other servers, Ma client server processes the requestQueue
serially, which saves the overhead of forking an extra Process for
every request, and since Smalltalk is single-threaded anyway, there's
no loss of throughput performance doing that.  After processing the
request, the sending of the response back to the client is done in a
separately forked background process.

It works well.  The Smalltalk code is kept as busy as it can be, so that it
is always the network being waited on; this maximizes the server's
throughput.
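
The shape of that design, very roughly (SharedQueue is real; the method and variable names here are placeholders, not MaServerSocket's actual API):

```smalltalk
"Sketch: one reader Process per client feeds a common SharedQueue;
 a single Process drains it serially; responses go out in a forked Process."
addClient: aSocketStream
	[[requestQueue nextPut: (self readRequestFrom: aSocketStream)] repeat] fork

serveLoop
	[| request response |
	request := requestQueue next.        "blocks until any client has a request"
	response := self process: request.   "handled serially, no fork per request"
	[self send: response] fork] repeat
```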

 - Chris


Re: Resuming a SocketStream after a ConnectionClosed exception?

Göran Krampe
In reply to this post by timrowledge
On 15/02/17 22:34, tim Rowledge wrote:

> Hi Göran!
>
>> On 15-02-2017, at 3:50 AM, Göran Krampe <[hidden email]> wrote:
>>
>
>> I wrote the current incarnation of SocketStream and I intentionally
>> did not add any semaphore/mutex for protections. And yes, the
>> SocketStream has internal state to know positions in buffers etc
>> etc - so NO, you should not use two Processes with the same
>> SocketStream.
>>
>> Having said that...  if you have one process only writing and one
>> only reading - you may get away with it - IIRC (no promises) the
>> inBuffer and outBuffer (and associated ivars) may be 100%
>> separated.
>
> A single process to write and another to read, both at same priority
> so only one can be  doing stuff at once.

Mmmm, those two will not preempt each other - but other processes with
higher prio preempt them (right?), so perhaps I am daft but doesn't that
mean they will switch anyway (potentially in the middle of a method etc)?

regards, Göran


Re: Resuming a SocketStream after a ConnectionClosed exception?

timrowledge

> On 16-02-2017, at 3:54 AM, Göran Krampe <[hidden email]> wrote:
>
> On 15/02/17 22:34, tim Rowledge wrote:
>> Hi Göran!
>>
>>> On 15-02-2017, at 3:50 AM, Göran Krampe <[hidden email]> wrote:
>>>
>>
>>> I wrote the current incarnation of SocketStream and I intentionally
>>> did not add any semaphore/mutex for protections. And yes, the
>>> SocketStream has internal state to know positions in buffers etc
>>> etc - so NO, you should not use two Processes with the same
>>> SocketStream.
>>>
>>> Having said that...  if you have one process only writing and one
>>> only reading - you may get away with it - IIRC (no promises) the
>>> inBuffer and outBuffer (and associated ivars) may be 100%
>>> separated.
>>
>> A single process to write and another to read, both at same priority
>> so only one can be  doing stuff at once.
>
> Mmmm, those two will not preempt each other - but other processes with higher prio preempt them (right?), so perhaps I am daft but doesn't that mean they will switch anyway (potentially in the middle of a method etc)?

If I’ve remembered right, the scheduling sticks a suspended process at the *front* of the queue these days, not the back as was the case when we dinosaurs first roamed the Earth. The idea being to make sure that a process interrupted by a quick timer job gets back to its work sooner rather than later.

But I’ve been wrong before...


tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Real Daleks don't climb stairs - they level the building




Re: Resuming a SocketStream after a ConnectionClosed exception?

Eliot Miranda-2


On Thu, Feb 16, 2017 at 10:06 AM, tim Rowledge <[hidden email]> wrote:

> On 16-02-2017, at 3:54 AM, Göran Krampe <[hidden email]> wrote:
>
> On 15/02/17 22:34, tim Rowledge wrote:
>> Hi Göran!
>>
>>> On 15-02-2017, at 3:50 AM, Göran Krampe <[hidden email]> wrote:
>>>
>>
>>> I wrote the current incarnation of SocketStream and I intentionally
>>> did not add any semaphore/mutex for protections. And yes, the
>>> SocketStream has internal state to know positions in buffers etc
>>> etc - so NO, you should not use two Processes with the same
>>> SocketStream.
>>>
>>> Having said that...  if you have one process only writing and one
>>> only reading - you may get away with it - IIRC (no promises) the
>>> inBuffer and outBuffer (and associated ivars) may be 100%
>>> separated.
>>
>> A single process to write and another to read, both at same priority
>> so only one can be  doing stuff at once.
>
> Mmmm, those two will not preempt each other - but other processes with higher prio preempt them (right?), so perhaps I am daft but doesn't that mean they will switch anyway (potentially in the middle of a method etc)?

If I’ve remembered right, the scheduling sticks a suspended process at the *front* of the queue these days, not the back as was the case when we dinosaurs first roamed the Earth. The idea being to make sure that a process interrupted by a quick timer job gets back to its work sooner rather than later.

That is indeed the case.  See 

    SmalltalkImage current processPreemptionYields = false 

and ProcessorScheduler class>>#startUp:
 
_,,,^..^,,,_
best, Eliot



Re: Resuming a SocketStream after a ConnectionClosed exception?

Chris Muller-3
>> >> A single process to write and another to read, both at same priority
>> >> so only one can be  doing stuff at once.
>> >
>> > Mmmm, those two will not preempt each other - but other processes with
>> > higher prio preempt them (right?), so perhaps I am daft but doesn't that
>> > mean they will switch anyway (potentially in the middle of a method etc)?

In essence, they do preempt each other.  They will not preempt each
other when they are blasting bytes over the network, but as soon as
the net buffer fills up due to waiting for data transmission in either
the send or the receive, the other one, if data is available, will
take off.


Re: Resuming a SocketStream after a ConnectionClosed exception?

Eliot Miranda-2
Hi Chris,

   forgive my pedantry, but this is important.

On Mon, Feb 27, 2017 at 11:35 AM, Chris Muller <[hidden email]> wrote:
>> >> A single process to write and another to read, both at same priority
>> >> so only one can be  doing stuff at once.
>> >
>> > Mmmm, those two will not preempt each other - but other processes with
>> > higher prio preempt them (right?), so perhaps I am daft but doesn't that
>> > mean they will switch anyway (potentially in the middle of a method etc)?

> In essence, they do preempt each other.  They will not preempt each
> other when they are blasting bytes over the network, but as soon as
> the net buffer fills up due to waiting for data transmission in either
> the send or the receive, the other one, if data is available, will
> take off.

No they will *not* preempt each other.  Preemption means an arbitrary suspension of the preempted thread by another thread.  It does /not/ mean a thread blocking and hence yielding to another thread.  So what will happen with the two threads reading and writing sockets is that they will only suspend when they wait on some semaphore that is to be signalled when some activity (e.g. reading or writing) is complete.  And these suspension points are (if properly written) safe points.  

So to the question: "doesn't that mean they will switch anyway (potentially in the middle of a method etc)?"

With the standard Smalltalk-80 scheduler, yes.  Any preemption by a higher priority process puts the preempted process to the back of its run queue, hence causing the next process at that priority to also preempt the preempted process.  This destroys cooperative multi-threading.

But the current Cog scheduler does /not/ put a process to the back of its run queue provided it is in the right mode (see SmalltalkImage current processPreemptionYields = false  and ProcessorScheduler class>>#startUp: as mentioned elsewhere). In this mode the lower-priority process gets preempted by the higher-priority process, but remains at the head of the run queue at its priority and so as soon as the higher priority process blocks the lower priority process will resume where it left off.  

So the new scheduler mode (a mode that's been in the VisualWorks scheduler by default for many decades) preserves cooperative scheduling amongst processes of the same priority.  It /does/ prevent processes at the same priority from preempting each other.
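The mode Eliot references can be inspected directly in a workspace, using the expression from his message:

```smalltalk
"Check the scheduler mode: false means a preempted process stays at
the head of its run queue and resumes where it left off (the Cog
behavior Eliot describes); true is the old Smalltalk-80
back-of-queue behavior."
SmalltalkImage current processPreemptionYields
```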

Hope this clarifies things.
_,,,^..^,,,_
best, Eliot



Re: Resuming a SocketStream after a ConnectionClosed exception?

Chris Muller-3
Hi Eliot, it's nice to know the strict definition of "preempt" vs.
yield -- I think I had loosely equated them in my mind as just a
process switch while still obeying the scheduling rules, and that was
the context in which I was responding to Tim.

Yes, I remember the new processPreemptionYields preference and its new
default value, which switched the behavior from sending preempted
processes to the back of the queue to putting them onto the front of
the queue, permitting cooperative multi-threading OOTB.  I still don't
know if I could bring myself to employ such subtlety...  I like the
explicitness and safety of explicit control (via Semaphores, Mutexes,
etc.), but it's nice to know I *could* use cooperative scheduling,
which might even perform better by not push/popping all those extra
blocks.
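For the explicit-control style Chris mentions, a minimal sketch (loop names borrowed from Tim's original post; the Mutex itself is an assumption, not part of the MQTT code) might look like:

```smalltalk
"Guard the shared SocketStream with a Mutex so a whole packet write
is atomic with respect to any other process touching the stream.
The queue wait happens outside the critical section, so the lock is
never held while blocked waiting for something to send."
lock := Mutex new.
[[ | packet |
	packet := outboundSharedQueue next.	"block here, outside the lock"
	lock critical: [packet outputOn: socketStream] ] repeat] fork
```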



On Mon, Feb 27, 2017 at 2:30 PM, Eliot Miranda <[hidden email]> wrote:

> Hi Chris,
>
>    forgive my pedantry, but this is important.
>
> On Mon, Feb 27, 2017 at 11:35 AM, Chris Muller <[hidden email]> wrote:
>>
>> >> >> A single process to write and another to read, both at same priority
>> >> >> so only one can be  doing stuff at once.
>> >> >
>> >> > Mmmm, those two will not preempt each other - but other processes
>> >> > with
>> >> > higher prio preempt them (right?), so perhaps I am daft but doesn't
>> >> > that
>> >> > mean they will switch anyway (potentially in the middle of a method
>> >> > etc)?
>>
>> In essence, they do preempt each other.  They will not preempt each
>> other when they are blasting bytes over the network, but as soon as
>> the net buffer fills up due to waiting for data transmission in either
>> the send or the receive, the other one, if data is available, will
>> take off.
>
>
> No they will *not* preempt each other.  Preemption means an arbitrary
> suspension of the preempted thread by another thread.  It does /not/ mean a
> thread blocking and hence yielding to another thread.  So what will happen
> with the two threads reading and writing sockets is that they will only
> suspend when they wait on some semaphore that is to be signalled when some
> activity (e.g. reading or writing) is complete.  And these suspension points
> are (if properly written) safe points.
>
> So to the question: "doesn't that mean they will switch anyway (potentially
> in the middle of a method etc)?"
>
> With the standard Smalltalk-80 scheduler, yes.  Any preemption by a higher
> priority process puts the preempted process to the back of its run queue,
> hence causing the next process at that priority to also preempt the
> preempted process.  This destroys cooperative multi-threading.
>
> But the current Cog scheduler does /not/ put a process to the back of its
> run queue provided it is in the right mode (see SmalltalkImage current
> processPreemptionYields = false  and ProcessorScheduler class>>#startUp: as
> mentioned elsewhere). In this mode the lower-priority process gets preempted
> by the higher-priority process, but remains at the head of the run queue at
> its priority and so as soon as the higher priority process blocks the lower
> priority process will resume where it left off.
>
> So the new scheduler mode (a mode that's been in the VisualWorks scheduler
> by default for many decades) preserves cooperative scheduling amongst
> processes of the same priority.  It /does/ prevent processes at the same
> priority from preempting each other.
>
> Hope this clarifies things.
> _,,,^..^,,,_
> best, Eliot
>
>
>
