A matter with Delays


A matter with Delays

Igor Stasenko
Hi.

There is one thing which is, IMO, an over-engineering artifact:
- when the system goes down (image shutdown), all currently scheduled
delays are "saved", and when the image starts up again they are
rescheduled to keep waiting for whatever time is left on the delay.

But the problem is that this does not take into account the total time
the image was frozen, and the requirement is quite ambiguous:

 - if you put a process on a delay for 5 minutes, then immediately
save the image, and restart it 10 minutes (or 1 year) later, should
this delay keep waiting for the 4-plus minutes which are left? Or
should we consider this delay utterly expired?
(And as you can see, the answer is different depending on whether we
count time using real, physical time, or just image uptime.)

And why count image uptime? Consider use cases like a connection
timeout: it is all about real time, right here, right now. Will it
matter to get a socket connection timeout error when you restart some
image 1 year later? Please give me a scenario illustrating that we
cannot live without it and should count image uptime for delays,
because I can't find one.

If not, then in my opinion, and to simplify all the logic inside the
delay code, I would go straight ahead and declare the following:
 - when a new image session starts, all delays, no matter how long
they are scheduled to wait, are considered expired (and therefore all
waiting processes are automatically resumed).

Because, as I tried to demonstrate, the meaning of a delay which spans
multiple image sessions is really fuzzy, and I would be really
surprised to find code which relies on such behavior.

This change can also be helpful for terminating all processes which
were put on wait for far too long (6304550344559763 milliseconds) by
mistake or such.


--
Best regards,
Igor Stasenko.


Re: A matter with Delays

Sven Van Caekenberghe-2
Yes, I think you are right: the ambiguity is plainly wrong, and even so, not saving and continuing them sounds reasonable.

But what about the inverse: you schedule something that should happen every hour or every day, you save/restart the image, and with the new approach these would all run immediately, right?

So not only would you kill delays that are too short (reasonable), but also collapse the longer ones.

In terms of design: I think the different behaviors should be implemented with different objects.
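One way to get Sven's hourly/daily case to survive under the proposed "expire on restart" semantics is to stop trusting a single saved Delay and recompute the remaining wall-clock time in a loop. A minimal sketch, using only standard Pharo messages (DateAndTime now, Duration arithmetic, Delay forSeconds:); the 60-second re-check slice is an arbitrary choice, not anything from the thread:

```smalltalk
"Sketch: wait until a wall-clock target, robust to image restarts.
Instead of one long Delay, sleep in short slices and recompute the
remaining time from the clock on each iteration."
| target remaining |
target := DateAndTime now + 1 hour.
[ (remaining := (target - DateAndTime now) asSeconds) > 0 ]
    whileTrue: [ (Delay forSeconds: (remaining min: 60)) wait ].
"reaching here means the wall-clock target has passed,
even if the image was saved and restarted in between"
```

Under Igor's proposal each short slice would simply expire on restart, and the loop would then re-derive what is left from real time, which is the behavior a repeating wall-clock task actually wants.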

On 21 Feb 2013, at 12:28, Igor Stasenko <[hidden email]> wrote:




Re: A matter with Delays

philippeback
I'd second that.

There are two responsibilities mixed into one, which both currently
use one tool: Delay.

BTW, Delay could have:

>>elapsedImageTime
>>elapsedRealWorldTime
>>elapsedSinceImageStartTime

which are three different things.

Now, the user of a Delay should be able to specify the intent.

Is this for managing a timeout?
Is this for managing a repeating task?
... other use cases ...

One thing is sure: there are a tad too many Delay instances in the process list.

Phil

2013/2/21 Sven Van Caekenberghe <[hidden email]>:



Re: A matter with Delays

Igor Stasenko
In reply to this post by Sven Van Caekenberghe-2
On 21 February 2013 12:36, Sven Van Caekenberghe <[hidden email]> wrote:
> Yes, I think you are right: the ambiguity is plain wrong, and even then not saving and continuing them sounds reasonable.
>
> But what about the inverse: you schedule something that should happen every hour or every day, you save/restart the image and with the new approach these would all run immediately, right ?
>

Well, in this case, once you know that all delays are released on
image restart, you can always check for a session change before doing
any action. For example:

[
    | session |
    session := Smalltalk session.

    1 hour asDelay wait.

    "has the session changed?"
    session == Smalltalk session
        ifFalse: [ "perhaps we should abandon the loop here and
            reload/reinitialize stuff in some higher layers of code" ].

    self doSomething.
] repeat.


Why the remark "perhaps we should abandon the loop here and
reload/reinitialize stuff in some higher layers of code"? Because any
resident code (like a forked process with an infinite loop) is a
common source of nasty problems and unreliable behavior, and is
usually hard to debug (especially across multiple sessions, or when
you change the code it should run).

And to my thinking, writing properly session-aware code is the way to
go, instead of relying on ambiguous things.

>



--
Best regards,
Igor Stasenko.


Re: A matter with Delays

philippeback
Nice to know about this Smalltalk session. I didn't know about that.

BTW, I did debug it, and it froze my image in BlockClosure>>newProcess.

Phil

2013/2/21 Igor Stasenko <[hidden email]>:



Re: A matter with Delays

Igor Stasenko
In reply to this post by philippeback
On 21 February 2013 12:49, [hidden email] <[hidden email]> wrote:

> I'd second that.
>
> There are two responsibilities mixed into one, which both currently
> use one tool: Delay.
>
> BTW, Delay could have:
>
>>>elapsedImageTime
>>>elapsedRealWorldTime
>>>elapsedSinceImageStartTime
>
> which are three different things.
>
Do you mean:

delay waitImageTime
delay waitRealTime
delay waitInCurrentSession

with delay wait as a synonym for delay waitInCurrentSession?
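The synonym could be a one-line method; purely a hypothetical sketch, since neither #waitInCurrentSession nor the other selectors above exist on Pharo's Delay today:

```smalltalk
"Hypothetical: make the default #wait session-scoped, i.e. a delay
is considered expired as soon as a new image session starts.
#waitInCurrentSession is not an existing selector; sketch only."
Delay >> wait
    ^ self waitInCurrentSession
```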




--
Best regards,
Igor Stasenko.


Re: A matter with Delays

Igor Stasenko
In reply to this post by philippeback
On 21 February 2013 12:59, [hidden email] <[hidden email]> wrote:
> Nice to know about this Smalltalk session. Didn't knew about that.
>
> BTW, did a debug it and it froze my image in a BlockClosure>>newProcess
>
wait for 1 hour and come back :)




--
Best regards,
Igor Stasenko.


Re: A matter with Delays

Eliot Miranda-2
In reply to this post by Igor Stasenko
On Thu, Feb 21, 2013 at 3:28 AM, Igor Stasenko <[hidden email]> wrote:
> Hi.
>
> There is one thing which is IMO an over-engineering artifact:
> - when system goes down (image shutdown), all currently scheduled
> delays are "saved"
> and then when image starting up they are rescheduled again to keep
> waiting what time is left for delay..

Right now one says Delay forMilliseconds: n, etc. That's clearly a
duration. An API which said Delay until: aTime would be different, and
could be added to the current API easily.
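Side by side, the existing duration protocol and the suggested absolute-time variant might read as follows; Delay until: is the proposed addition here, not part of the API under discussion:

```smalltalk
"Existing, duration-based protocol (a relative amount of time):"
(Delay forMilliseconds: 500) wait.
(Delay forSeconds: 5) wait.

"Suggested absolute-time variant (hypothetical in this thread):
wait until a specific point in wall-clock time."
(Delay until: DateAndTime now + 5 minutes) wait.
```

The distinction matters for the save/restart question: a duration arguably belongs to a session, while an absolute deadline is naturally measured in real time and survives restarts by definition.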

One good reason for keeping the current behaviour is profiling image
shutdown and startup.  The current MessageTally is slightly broken in
this regard but I fixed it at Cadence to profile our start-up
slowness.

So please think carefully before throwing this behaviour away. I at
least find it quite useful.




--
best,
Eliot


Re: A matter with Delays

Igor Stasenko
On 21 February 2013 20:56, Eliot Miranda <[hidden email]> wrote:

> On Thu, Feb 21, 2013 at 3:28 AM, Igor Stasenko <[hidden email]> wrote:
>
> Right now one says Delay forMilliseconds: n etc.  That's clearly a
> duration.  An API which said Delay until: aTime is different, and
> could be added to the current API easily.
>

Yes. Still, this won't make the #forMilliseconds: protocol any less ambiguous.


> One good reason for keeping the current behaviour is profiling image
> shutdown and startup.  The current MessageTally is slightly broken in
> this regard but I fixed it at Cadence to profile our start-up
> slowness.
>
> So please consider carefully throwing this behaviour away.  I at least
> find it quite useful.

Can you share some details? How does the delay implementation help with that?

>>
>> But the problem is that it does not takes into account the total time
>> an image was frozen, and the requirement is quite ambiguous:
>>
>>  - if you put a process on a delay for 5 minutes, then immediately
>> save image, and then restart it 10 minutes (or 1 year) after,
>> should this delay keep waiting for 4 +x seconds which is left? Or
>> should we consider this delay as utterly expired?
>> (and as you can see, the answer is different, if we counting time
>> using real, physical time, or just image uptime).
>>
>> And why counting image uptime? Consider use cases, like connection
>> timeout.. it is all about
>> real time , right here , right now.. will it matter to get socket
>> connection timeout error when you restart some image 1 year after?
>> Please, give me a scenario, which will illustrate that we cannot live
>> without it and should count image uptime for delays, because i can't
>> find one.
>>
>> If not, then to my opinion, and to simplify all logic inside delay
>> code, i would go straight and declare following:
>>  - when new image session starts, all delays, no matter for how long
>> they are scheduled to wait are considered expired (and therefore all
>> waiting processes
>> is automatically resumed).
>>
>> Because as tried to demonstrate, the meaning of delay which spans over
>> multiple image sessions is really fuzzy and i would be really
>> surprised to find a code
>> which relies on such behavior.
>>
>> This change will also can be helpful with terminating all processes
>> which were put on wait for too long (6304550344559763 milliseconds) by
>> mistake or such.
>>
>>
>> --
>> Best regards,
>> Igor Stasenko.
>>
>
>
>
> --
> best,
> Eliot
>



--
Best regards,
Igor Stasenko.