Comment #21 on issue 4768 by [hidden email]: Code loading/compilation followed by starting a TCP server results in a hang
http://code.google.com/p/pharo/issues/detail?id=4768

We reduced the code to an even simpler case:

    | t1 t2 |
    [ Smalltalk garbageCollect ] valueUnpreemptively.
    t1 := Time millisecondClockValue.
    Semaphore new waitTimeoutMSecs: 5000.
    t2 := Time millisecondClockValue.
    Transcript cr; show: 'Delta: '; show: (t2 - t1) printString
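For reference, the BlockClosure>>valueUnpreemptively exercised here presumably still matched the classic Squeak implementation sketched below (a reconstruction, not the verbatim Pharo source of the day): it raises the active process to the highest priority and restores it in an ensure: block, with no yield anywhere. Comment #25 below argues that this interacts badly with the Delay timer event loop, which runs at that same priority.

    BlockClosure>>valueUnpreemptively
        "Evaluate the receiver (block), without the possibility of preemption by higher priority processes. Use this facility VERY sparingly!"

        | activeProcess oldPriority result |
        activeProcess := Processor activeProcess.
        oldPriority := activeProcess priority.
        activeProcess priority: Processor highestPriority.
        "Note: no Processor yield anywhere, so processes at the same (highest) priority never get a chance to run"
        result := self ensure: [ activeProcess priority: oldPriority ].
        ^result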
Updates:
    Status: FixToInclude

Comment #22 on issue 4768 by [hidden email]: Code loading/compilation followed by starting a TCP server results in a hang
http://code.google.com/p/pharo/issues/detail?id=4768

Fix attached.

Attachments:
    BlockClosure-valueUnpreemptively.st  986 bytes
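The attachment itself is not reproduced in the thread, but judging from the discussion in comment #25 below, it presumably added a Processor yield after the original priority has been restored, along these lines (a sketch of the attached fix, not its verbatim contents):

    BlockClosure>>valueUnpreemptively
        "Evaluate the receiver (block), without the possibility of preemption by higher priority processes. Use this facility VERY sparingly!"

        | activeProcess oldPriority result |
        activeProcess := Processor activeProcess.
        oldPriority := activeProcess priority.
        activeProcess priority: Processor highestPriority.
        result := self ensure: [ activeProcess priority: oldPriority ].
        Processor yield.  "give processes preempted while we ran at top priority a chance to run"
        ^result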
Comment #23 on issue 4768 by [hidden email]: Code loading/compilation followed by starting a TCP server results in a hang
http://code.google.com/p/pharo/issues/detail?id=4768

I have no time to test right now, but I believe you: this is super great news! You guys rock; I wish I had been there too.

Sven
Updates:
    Status: Integrated

Comment #24 on issue 4768 by [hidden email]: Code loading/compilation followed by starting a TCP server results in a hang
http://code.google.com/p/pharo/issues/detail?id=4768

Integrated in 1.4 and in 13314.
Comment #25 on issue 4768 by [hidden email]: Code loading/compilation followed by starting a TCP server results in a hang
http://code.google.com/p/pharo/issues/detail?id=4768

Hi,

I saw the change, and although it seems to work, it somewhat masks what the real issue was; i.e. the fix is to do manually what the Processor>>yield primitive does.

The issue looks to be one of process priorities. The Delay timer event loop is set to TimingPriority (80):

    Delay class>>startTimerEventLoop
        ...
        TimerEventLoop priority: Processor timingPriority.
        ...

In BlockClosure>>valueUnpreemptively, 'Processor highestPriority' answers 80 too. We only call Processor yield after changing the current process back to its original priority, but surely this could cause an issue for any code that was already running at the highest priority? So I'd recommend changing the code to be as below:

    BlockClosure>>valueUnpreemptively
        "Evaluate the receiver (block), without the possibility of preemption by higher priority processes. Use this facility VERY sparingly!"
        "Think about using Block>>valueUninterruptably first, and think about using Semaphore>>critical: before that, and think about redesigning your application even before that! After you've done all that thinking, go right ahead and use it..."

        | activeProcess oldPriority result |
        activeProcess := Processor activeProcess.
        oldPriority := activeProcess priority.
        activeProcess priority: Processor highestPriority.
        result := self ensure: [
            Processor yield.  "yield before restoring the priority, to give the preempted processes a chance to run"
            activeProcess priority: oldPriority ].
        ^result

i.e. the change is to this line:

    result := self ensure: [ Processor yield. activeProcess priority: oldPriority ].

That is, yield while still at the highest priority, before reverting the priority. It seems to work, but I'm not sure whether we should yield again after reverting the priority as well?
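On that closing question: a double-yield variant would look like the sketch below — one yield at top priority inside the ensure: block (as recommended above), plus a second one after the original priority is restored. Whether the second yield is actually needed is exactly the open question, so treat this as illustrative only:

    BlockClosure>>valueUnpreemptively
        | activeProcess oldPriority result |
        activeProcess := Processor activeProcess.
        oldPriority := activeProcess priority.
        activeProcess priority: Processor highestPriority.
        result := self ensure: [
            Processor yield.  "let other processes at the highest priority (e.g. the Delay timer loop) run"
            activeProcess priority: oldPriority ].
        Processor yield.  "second yield, now at the restored priority"
        ^result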