Hi Blair,
I'm trying to put together a framework for "nagging a server" - by that, I mean noticing (with help from applications) periods of inactivity and reconnecting after some configurable delay. Having done this sort of thing a couple of times, I hope to encapsulate a reusable solution to the problem. Some fiddling led me to realize that there is value in having application threads blocked until a connection exists. Of course, that won't shield them from errors, but it will avoid a lot of ugly checks for nil, etc., and put the inevitable blocking code in one place rather than in every application.

At this point, I was reminded of a suggestion of yours:

   "What you really want to do is to suspend the process at certain
   well-defined points when it is safe to do so. You can do this by
   getting the process to suspend itself and resuming it from some
   other process, or by using a Semaphore with an initial signal
   count of 1. At certain points you can send a matched pair of wait
   and signal messages to the Semaphore from the background process.
   When you want to suspend the background process from another
   process, say the UI process, you can then do a single wait on the
   Semaphore, and the next time the background process tries its
   (wait, signal) pair, it will stop. Other simpler techniques with
   shared flags tested from the loop might also be workable in this
   case, though they are generally unsafe for process
   synchronisation."

Sounds good to me. I've used this trick elsewhere, and it looks like it will work as well as anything could for my current objective. My concern (perhaps flawed) is that this could be difficult to keep synchronized. Specifically, what happens if a thread manages to wait but gets terminated before putting back the signal it consumed? Even a structure such as

   [aSemaphore wait. ^protectedGizmo] ensure: [aSemaphore signal].

might present problems.
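To make the moving parts concrete, here is a minimal sketch of the trick - in Python rather than Smalltalk, since threading.Semaphore is a rough analogue of Semaphore and try/finally of #ensure:; all the names here are illustrative, not part of any real framework:

```python
# Illustrative sketch of the suspend-at-safe-points trick, using
# Python's threading.Semaphore as a stand-in for Smalltalk's Semaphore.
import threading
import time

gate = threading.Semaphore(1)   # initial signal count of 1
stop = threading.Event()
ticks = []                      # records each unit of background work

def background_loop():
    while not stop.is_set():
        # Safe point: a matched wait/signal pair. If another thread has
        # consumed the signal, we block here until it is put back.
        gate.acquire()          # analogue of Semaphore>>wait
        try:
            ticks.append(time.monotonic())   # the protected work
        finally:
            gate.release()      # the #ensure: [aSemaphore signal] part
        time.sleep(0.01)

worker = threading.Thread(target=background_loop)
worker.start()
time.sleep(0.05)                # let it do some work

gate.acquire()                  # single wait from the "UI" thread: suspend
paused_at = len(ticks)
time.sleep(0.05)
still_paused = len(ticks)       # no work can happen while we hold the signal

gate.release()                  # put the signal back: resume
time.sleep(0.05)
stop.set()
worker.join()
```

The worry described above maps onto the window between acquire and entering the try: a thread killed right there would leave the gate permanently consumed, and the background loop would hang at its next safe point.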
I'm assuming that it's possible to get far enough into #ensure: to evaluate the ensured block on exit, but without having actually done the wait - or am I worrying about things that can never happen? Even if it did happen, perhaps the worst that would result is that a broken connection would be used and cleaned up later - that's not really any worse than what would happen if somebody tripped over a wire after the ^. Failure to put back the consumed signal is _really_ bad because it could hang the nagging thread. Still, I can envision a scenario, however unlikely, in which excess signals would pile up to cause a sort of infinite loop.

Would it be better to use #set rather than #signal? Is there a more clever use of ensured execution that would signal iff needed? Or would you recommend using #wait: to poll the semaphore from the nagging thread, so that it can't stay blocked if the signal is never restored?

Either way, I have an observation/question about some method comments. Semaphore>>set states that the signal count is reduced to one if no processes are waiting; #primSetSignals: does not mention any condition. Would it be more accurate to say, in #set, that the signal count is dropped, but that any processes that have already grabbed signals will run the next time they are scheduled?

Have a good one,

Bill

--
Wilhelm K. Schwab, Ph.D.
[hidden email]
(352) 846-1287