Hi all,
I discovered a bug when stepping over a call to Generator >> #nextPut:. !! Save your image before trying the following !!

    Generator
        on: [:stream | stream
            nextPut: #foo]

Debug this expression and step over the send of #nextPut:. Two effects follow: first, an endless chain of new debuggers opens; second, the image eventually becomes unusable.

The first is annoying for newcomers, but I see this behavior as reasonable; without the effect of the second, it wouldn't be a big problem. The second, however, looks like a bigger issue to me. Actually, I think the most serious aspect is that in Squeak 5.1, the same procedure does not crash your image; instead, you are forwarded to the emergency debugger and can terminate the process.
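For contrast, here is a usage sketch of my own (not part of the original report) showing what the expression does when simply evaluated rather than stepped through; #on:, #nextPut:, and #next are the standard Generator protocol:

    "Normal (non-debugged) Generator behavior: inside #nextPut:, the
    generator swaps senders to hand control back to the consumer that
    asked for the next element."
    | gen |
    gen := Generator on: [:stream | stream nextPut: #foo].
    gen next  "=> #foo"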
Also, this is not the first time recently that I have run into an infinite debugger chain. See "[BUG(s)] in Context control (#jump, #runUntilErrorOrReturnFrom:)" for a similar issue. And there were even more situations that I have not yet been able to reproduce exactly.
I'm afraid that the recent changes to the debugger might have weakened its ability to detect recursive errors. Can anyone else report on these problems?
I suspect we ignore a large number of recursive errors: in MorphicDebugger >> #openOn:context:label:contents:fullView:, the uiBlock is often scheduled as a separate UI message, which is executed only after the recursion flag has been cleared by the base class.
It would be great if someone could take a look at it or share more information :)
Best,
Christoph
|
By request, screenshots from a clean image ...
[Screenshot: after pressing Over]
[Screenshot: after pressing cmd-dot]
The screenshots from 5.1 were made in a clean 5.1 image.
Best,
Christoph
|
On Dec 14, 2019, at 5:43 AM, Thiede, Christoph <[hidden email]> wrote:
> By request, screenshots from a clean image ...

Hi Christoph,

I've tried this in two trunk 64-bit images, one with the V3PlusClosures bytecode set and one with the SistaV1 bytecode set, and no problem occurs in either case. If this only happens in a clean 5.1 image, then I suspect it has already been fixed.
|
On Sat, Dec 14, 2019 at 04:09:22PM -0800, Eliot Miranda wrote:
> I've tried this in two trunk 64-bit images [...] If this only happens in a clean 5.1 image, then I suspect it has already been fixed.

I can reproduce the problem in my trunk image. Christoph's example is to debug this:

    Generator on: [:stream | stream nextPut: #foo]

The failure happens when I step over the #nextPut:.

If I step into the #nextPut:, then all is well.

Dave

|
Hi David, Hi Christoph,

On Sun, Dec 15, 2019 at 8:52 AM David T. Lewis <[hidden email]> wrote:
> The failure happens when I step over the #nextPut:.
> If I step into the #nextPut:, then all is well.

Interesting. I indeed do step over (/not/ step into) and do /not/ see the bug. Dave, Christoph, what VMs are you running?

_,,,^..^,,,_
best, Eliot

|
Hi Eliot,
On Sun, Dec 15, 2019 at 01:55:13PM -0800, Eliot Miranda wrote:
> Interesting. I indeed do step over (/not/ step into) and do /not/ see the bug. Dave, Christoph, what VMs are you running?

The VM that I used is:

    /usr/local/lib/squeak/5.0-201911282316/squeak
    Open Smalltalk Cog[Spur] VM [CoInterpreterPrimitives VMMaker.oscog-eem.2597]
    Unix built on Nov 28 2019 23:23:45 Compiler: 4.2.1 Compatible Clang 7.0.0 (tags/RELEASE_700/final)
    platform sources revision VM: 201911282316 https://github.com/OpenSmalltalk/opensmalltalk-vm.git
    Date: Thu Nov 28 15:16:31 2019 CommitHash: 4710c5a
    Plugins: 201911282316 https://github.com/OpenSmalltalk/opensmalltalk-vm.git
    CoInterpreter VMMaker.oscog-eem.2597 uuid: 7a69be2e-f0d0-4d41-9854-65432d621fed Nov 28 2019
    StackToRegisterMappingCogit VMMaker.oscog-eem.2596 uuid: 8500baf3-a5ae-4594-9f3b-08cedfdf1fb3 Nov 28 2019

But I do not think that this is a VM issue. I get the same result when I run Christoph's snippet on a trunk-level V3 image with an interpreter VM. So the issue must be in the image, not in the VM.

Dave

|
On Sun, Dec 15, 2019 at 08:18:53PM -0500, David T. Lewis wrote:
> But I do not think that this is a VM issue. [...] So the issue must be in the image, not in the VM.

Indeed, I did a quick check on Squeak4.6-15102.image with an interpreter VM, and again I get the same symptoms.

We are probably seeing two different issues here:

1) The debugger gets confused when trying to step over Generator>>nextPut: (presumably something related to the context swap).

2) The <alt><period> interrupt handler gets confused when trying to figure out what to attach itself to after 1) happens.

Both of these are probably issues that have been with us for a long time, and are just now being noticed.

Dave

|
Hi all.

I am just investigating this issue. However, looking at the tests for Generator, I would :-) suggest :-) to re-phrase this example:

    Generator on: [:g | g yield: #foo].

-or-

    Generator on: [:generator | generator yield: #foo].

In any case, countless debuggers show up on "step over". "Step into" works fine.

Squeak 5.3beta #19273
Image 6521 (32 bit)
VM: 201911282316 (cog.spur)
Windows 10

Best,
Marcel
|
Squeak 5.3beta #19276
Image 68021 (64 bit)
VM 201911282316
Win10 (1903)

@Dave As mentioned, in Squeak 5.1, we get an emergency debugger instead of infinite debuggers. I think this point is clearly a regression. See Morphic-ct.1610 for an approach to prevent countless debuggers; I'm afraid it does not fix this problem, but it prevents others.

What code exactly are you referring to when you talk about the interrupt handler? Via debugging, I found out that the BlockCannotReturn errors are raised even before interrupting; the debugger chain is only blocking itself. You can test this by putting a simple #inform: before the debugger opening.

Maybe the bug is related to the other context simulation bugs I reported a few weeks ago? I could not explain why the error does not occur if you step into ... Via debugging the debugger, I found out that the bug is raised in the debugged process itself, not in the debugger process.
Best, Christoph
|
On Mon, Dec 16, 2019 at 11:53:23AM +0000, Thiede, Christoph wrote:
> Squeak 5.3beta #19276 > > Image 68021 (64 bit) > > VM 201911282316 > > Win10 (1903). > > > @Dave As mentioned, in Squeak 5.1, we get an emergency debugger instead of infinite debuggers. I think this point is clearly a regression. > I can also confirm that the emergency debugger part of the problem happened somewhere between Squeak4.5-13352 and Squeak4.6-15102. Testing with interpreter VM (to totally exclude Cog/Spur as a cause), I see the emergency debugger with Squeak4.5-13352, and infinite debuggers with Squeak4.6-15102. I think that this aligns with the Squeak 5.1 -> Squeak 5.2 period, so we are both seeing the same thing, and it is not a VM problem. > See Morphic-ct.1610 for an approach to prevent countless debuggers - I'm afraid it does not fix this problem, but it prevents others. > > > What code are you exactly referring to when you talk about the interrupt handler? Via debugging, I found out that the BlockCannotReturn errors are already raised before interrupting. The debugger chain is only blocking itself. You can test this by putting a simple #inform: before the Debugger opening. > I was referring to the emergency interrupt handler, which should result in just one debugger as you see with your Squeak 5.1 test. Dave > > Maybe the bug is related to the other context simulation bugs I reported a few weeks ago? I could not explain why the error does not occur if you step into ... Via debugging the debugger, I found out that the bug is raised in the debugged process itself, not in the debugger process. > > > Best, > > Christoph > > > ________________________________ > Von: Squeak-dev <[hidden email]> im Auftrag von Taeumel, Marcel > Gesendet: Montag, 16. Dezember 2019 11:49:55 > An: John Pfersich via Squeak-dev > Betreff: Re: [squeak-dev] BUG/REGRESSION while debugging Generator >> #nextPut: > > Hi all. > > I am just investigating this issue. However, looking at the tests for Generator, I would :-) suggest :-) to re-phrase this example: > > Generator on: [:g | g yield: #foo]. > > -or- > > Generator on: [:generator | generator yield: #foo]. > > In any case, countless debuggers show up on "step over". "Step into" works fine. > > [cid:f4de31a1-db76-485a-8eed-ce0b553021f1] > > Squeak 5.3beta #19273 > Image 6521 (32 bit) > VM: 201911282316 (cog.spur) > Windows 10 > > Best, > Marcel > > Am 16.12.2019 02:39:38 schrieb David T. Lewis <[hidden email]>: > > On Sun, Dec 15, 2019 at 08:18:53PM -0500, David T. Lewis wrote: > > Hi Eliot, > > > > On Sun, Dec 15, 2019 at 01:55:13PM -0800, Eliot Miranda wrote: > > > Hi David, Hi Christoph, > > > > > > On Sun, Dec 15, 2019 at 8:52 AM David T. Lewis wrote: > > > > > > > On Sat, Dec 14, 2019 at 04:09:22PM -0800, Eliot Miranda wrote: > > > > > > > > > > > > > > > > On Dec 14, 2019, at 5:43 AM, Thiede, Christoph > > > > [hidden email]> wrote: > > > > > > > > > > > > ??? > > > > > > By request, screenshots from a clean image ... > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ??? Press over > > > > > > > > > > > > Press cmd-dot ??? > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > The screenshots from 5.1 were made in a clean 5.1 image. > > > > > > > > > > > > > > > > Hi Christoph, > > > > > > > > > > I???ve tried this in two trunk 64-bit images, one with the > > > > V3PlusClosures bytecode set and one with the SistaV1 bytecode set and no > > > > problem occurs in either case. If this only happens in a clean 5.1 image > > > > then I suspect it has already been fixed. 
> > > > > > > > > > > > > > > > > I can reproduce the problem in my trunk image. Chrostoph's example > > > > is to debug this: > > > > > > > > Generator on: [:stream | stream nextPut: #foo] > > > > > > > > The failure happens when I step over the #nextPut: > > > > > > > > If I step into the #nextPut: then all is well. > > > > > > > > > > Interesting. I indeed do step over (/not/ step into) and do /not/ see the > > > bug. Dave, Christoph, what VMs are you running? > > > > > > > The VM that I used is: > > > > /usr/local/lib/squeak/5.0-201911282316/squeak > > Open Smalltalk Cog[Spur] VM [CoInterpreterPrimitives VMMaker.oscog-eem.2597] > > Unix built on Nov 28 2019 23:23:45 Compiler: 4.2.1 Compatible Clang 7.0.0 (tags/RELEASE_700/final) > > platform sources revision VM: 201911282316 https://github.com/OpenSmalltalk/opensmalltalk-vm.git Date: Thu Nov 28 15:16:31 2019 CommitHash: 4710c5a Plugins: 201911282316 https://github.com/OpenSmalltalk/opensmalltalk-vm.git > > CoInterpreter VMMaker.oscog-eem.2597 uuid: 7a69be2e-f0d0-4d41-9854-65432d621fed Nov 28 2019 > > StackToRegisterMappingCogit VMMaker.oscog-eem.2596 uuid: 8500baf3-a5ae-4594-9f3b-08cedfdf1fb3 Nov 28 2019 > > > > But I do not think that this is a VM issue. I get the same result > > when I run Christoph's snippet on a trunk-level V3 image with an > > interpreter VM. So the issue must be in the image, not in the VM. > > > > Indeed, I did a quick check on Squeak4.6-15102.image with an interpreter > VM, and again I get the same symptoms. > > We are probably seeing two different issues here: > > 1) The debugger gets confused when trying to step over Generator>>nextPut: > (presumably something related to the context swap). > > 2) The interrupt handler gets confused when trying to figure > out what to attach itself to after 1) happens. > > Both of these are probably issues that have been with us for a long time, > and are just now being noticed. > > Dave > > > |
On Mon, Dec 16, 2019 at 12:45:28PM -0500, David T. Lewis wrote:
> I can also confirm that the emergency debugger part of the problem happened somewhere between Squeak4.5-13352 and Squeak4.6-15102. [...]

I have stepped through the update maps in the trunk update stream to find where this problem was introduced.

The last good update map is update-nice.282.mcm. The first bad update map is update-dtl.283.mcm.

The changes between mcm 282 and 283 include:

    Compiler-eem.283 -> Compiler-eem.284
    Kernel-eem.854 -> Kernel-eem.857

Checking for the regression:

    Load Compiler-eem.284 -> GOOD
    Load Kernel-cmm.855 -> GOOD
    Load Kernel-dtl.856 -> GOOD
    Load Kernel-eem.857 -> BAD

I have not yet found the root cause, but the problem was apparently introduced in Kernel-eem.857. I am out of time to look at it this evening, but I think this narrows the problem down to about a dozen method changes :-)

Dave

|
Wow, great to hear that you could locate the probable cause! :-)

By the way, do we have any kind of automation for the search you did? Is squeak-history capable of doing so? If not, I think such a tool could be powerful!
Best,
Christoph
|
Hi all.

I think that in Kernel-eem.857 we introduced or extended the concept of Process >> #effectiveProcess. The effective process allows us to simulate execution through the debugger while preserving the original process context. This is important for dynamic (or process-local) scope, for example; see DynamicVariable.

Maybe the generator's logic to manipulate the stack causes some interference with this feature? See:

    Generator >> #initializeOn:
    Generator >> #nextPut: (or #yield:)
    Generator >> #next
    Context >> #swapSender:
    Context >> #cannotReturn:

Best,
Marcel
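To illustrate why the effective process matters, here is an example of my own (not from this thread); it assumes DynamicVariable's class-side #value:during:/#value protocol, and MyVar is a hypothetical subclass:

    "MyVar is a hypothetical DynamicVariable subclass."
    MyVar
        value: 42
        during:
            ["Answers 42 even while these bytecodes are simulated by the
            debugger's process, because the variable is looked up through
            the debugged process, i.e. via #effectiveProcess."
            MyVar value]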
|
On Wed, Dec 18, 2019 at 08:07:49AM +0000, Thiede, Christoph wrote:
> Wow, great to hear that you could locate the probable cause! :-)
>
> By the way, do we have any kind of automation for the search you did?

In this case I did it manually. I had no idea what the problem might be, so I did not know what to look for. Therefore I started with a Squeak4.5 release image and located the update map that corresponds to that image; I did that by browsing update maps until I found the one that matched. Then I manually merged each sequential update map, testing after each merge, until I located the problem.

Dave

|
Hi Christoph,

On Fri, Dec 13, 2019 at 10:48 AM Thiede, Christoph <[hidden email]> wrote:
Apologies. I can now reproduce the bug. I was stupidly stepping over the block evaluation, not the send of nextPut:.
So the issue seems to me to be the interaction between the block established in runUntilErrorOrReturnFrom: to catch unhandled errors (this one:

    Context
        contextOn: UnhandledError
        do: [:ex |
            error
                ifNil: [
                    error := ex exception.
                    topContext := thisContext.
                    ex resumeUnchecked: here jump]
                ifNotNil: [ex pass]]

) and the swapSender: in Generator>>nextPut:. So after the swapSender:, the Generator's continue variable refers to the BlockClosure>>on:do: / BlockClosure>>ensure: pair introduced by runUntilErrorOrReturnFrom:. What I don't understand is how the evaluate:onBehalfOf: changes interact with that. But what I can say is that the Debugger is leaving the Generator (or any code that uses swapSender:) in an invalid state, because once the swapSender: occurs, the protect blocks (the on:do:/ensure: pair) are no longer on the stack. What swapSender: should have been morphed into doing was preserving the protect blocks under the current context, or rather, that swapSender: should have been persuaded to swap the senders of the protect blocks. Now when the debugger continues stepping, it never gets to the protect blocks, because they have been swapped out of the way, and execution proceeds all the way down to the fork in Generator>>reset.

You can observe this by
- stepping through "Generator on: [:stream | stream nextPut: #foo]" until at the send of #nextPut:
- inspecting the Debugger (via the Morphic tool handle menu "inspect model")
- debugging "self doStep" to step the Debugger through the send
- when you get to runUntilErrorOrReturnFrom:, *ONLY STEP UP TO THE jump* *DO NOT STEP OVER THE jump*
- at the jump, step into, then do step into, and you'll find the debugger now in Generator>>nextPut:, with the stack still including the protect pair. But when you step over the swapSender:, then, of course, they've gone missing, and the stack points back to the fork in reset with nothing on the stack to stop execution (because there's no ensure: block, which is what runUntilErrorOrReturnFrom: inserted to try and catch any attempt to return).

So to get to the point of the swapSender: you should see this:

    Context>>jump
    Context>>runUntilErrorOrReturnFrom:
    [] in Process>>complete:
    BlockClosure>>ensure:
    Process>>evaluate:onBehalfOf:
    Process>>complete:
    Process>>completeStep:
    [] in MorphicDebugger(Debugger)>>doStep
    BlockClosure>>on:do:
    MorphicDebugger(Debugger)>>handleLabelUpdatesIn:whenExecuting:
    MorphicDebugger(Debugger)>>doStep

and the stack from the Context about to jump is

    Generator>>nextPut:
    BlockClosure>>on:do:
    BlockClosure>>ensure:
    [] in UndefinedObject>>DoIt
    Generator>>fork
    [] in Generator>>reset
    Generator>>reset
    Generator>>initializeOn:
    Generator class>>on:
    UndefinedObject>>DoIt
    ...

and the Generator's "continue" stack immediately before the swapSender: in Generator>>nextPut: is

    Generator>>reset
    Generator>>initializeOn:
    Generator class>>on:
    UndefinedObject>>DoIt

What we'd like to see after the swapSender: is the protect blocks moved to the continue stack:

    Generator>>nextPut:
    BlockClosure>>on:do:
    BlockClosure>>ensure:
    Generator>>reset
    Generator>>initializeOn:
    Generator class>>on:
    UndefinedObject>>DoIt

and the new "continue" stack looking like this:

    Generator>>fork
    [] in Generator>>reset
    Generator>>reset
    Generator>>initializeOn:
    Generator class>>on:
    UndefinedObject>>DoIt
    ...

What I don't know is how to fix this in the general case. This seems hard. The debugger is executing Smalltalk (i.e. *not* stepping bytecodes) in the context of runUntilErrorOrReturnFrom:, and a swapSender: could happen at an arbitrarily deep point in that execution, and we want the protect blocks in runUntilErrorOrReturnFrom: to protect execution after a swapSender:. I have no idea how to do that.

Now what's more likely is that I'm not seeing what's obviously wrong about the introduction of evaluate:onBehalfOf:. Why does this make any difference? It seems to me that the issue is with runUntilErrorOrReturnFrom: and swapSender:, not with evaluate:onBehalfOf:. What am I missing?

_,,,^..^,,,_
best, Eliot
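For reference, Context>>#swapSender: itself is tiny, which is why the guard contexts drop off the stack so silently; a sketch of the method as I understand it (verify against your image's actual source):

    swapSender: coroutineContext
        "Replace the receiver's sender with coroutineContext and answer
        the previous sender. Note that no unwind blocks between the
        receiver and the old sender are run or preserved: whatever sat
        on the old sender chain, including the protect pair installed
        by #runUntilErrorOrReturnFrom:, simply becomes unreachable from
        the active stack."
        | oldSender |
        oldSender := sender.
        sender := coroutineContext.
        ^ oldSender

|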
Hi Eliot,
you're a genius, thanks a lot for your answer! :-)
To start with, for convenience, here is a snippet that reproduces the bug even more simply:
First, what exact behavior do we expect when stepping over code that includes a sender swap, speaking in general? The current implementation wants to keep the process running until we return from the selected message, which is the desired behavior in most cases. Just in this special case, our problem is that the selected message will never return.

If I understand your goal definition correctly, the debugger would halt each time a sender swap occurs while the Over button is pressed? I fear this could be inconvenient if you step over a method from a higher abstraction level that uses a generator somewhere deep in its implementation. So I would rather refine the goal as: when pressing Over, the process should be resumed until the suspended context is in the stack of the original context (i.e. where the Over button was pressed). For illustration, neglecting the whole optimization, Process >> #complete: should basically equal the pseudocode "self runUntil: [:ctxt | aContext = ctxt or: [aContext hasSender: ctxt]]". Would you agree so far?
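To make that pseudocode concrete, here is a sketch of mine of the unoptimized semantics as a hypothetical Process method; Context>>#step and #hasSender: are existing simulation/introspection methods, but the loop itself is illustrative and not the real #complete: implementation:

    runUntilReturnTo: aContext
        "Hypothetical sketch: simulate bytecode by bytecode until the
        active context is aContext again, or an older context on
        aContext's stack (i.e. the stepped-over message has returned)."
        | ctxt |
        ctxt := suspendedContext.
        [aContext == ctxt or: [aContext hasSender: ctxt]] whileFalse:
            [ctxt := ctxt step].
        ^ suspendedContext := ctxt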
Second, my approach was to modify #swapSender: to manually identify and transport these "essential" sender contexts. The attached changeset is really WIP, so it only works for the simplest examples, but it might give you an impression of my approach. It does not yet work properly for the following code, if you step over #contents, for example:
If you like the idea, I will try to fix that later.
I know that this approach is in O(n), whereas the current implementation has constant complexity, but to me this appears to be inherent and inevitable complexity. However, we would need this modification only for debugging. This leads me to my next concern:
From an architectural point of view, I would like to put this modification into a decorator class (DebuggedContext) that is only installed from #runUntilErrorOrReturnFrom:. But it appears the VM does not like this; you cannot even do:

(computation has been terminated), unless you are debugging. I'd guess that primitive 160 always allocates new memory for the receiver, which would be invisible to the image but visible to the VM. But I don't have any clue about the actual implementation! Any chance to get around this limitation?
In general, what do you think about this approach? Do you have any better ideas? Looking forward to your opinion :-)
Best,
Christoph
[Attachment: bugfix swapsender (1).4.cs (2K)]
|
Hi all, hi Eliot, hi Marcel,
it's been a long time, but this issue still exists, and I have given it a lot of thought while investigating this and other issues. Marcel and I will soon release a fix for the infinite debugger chains in general, but this issue is a separate one.
<short recap>
Under certain circumstances, running a single step in the debugger (mostly via the Over button) abandons the current UI process and proceeds the debugged process in its place. This is caused by an insane and extremely powerful hack in Context>>#runUntilErrorOrReturnFrom:, which splices the context to be simulated into the currently running process. This can speed up debugging by around a factor of 1000. To ensure that the execution returns to the debugging process, two guard contexts are installed on top of the respective context.

Unfortunately, if the debugged process contains any piece of context metaprogramming, e.g. implementing coroutine logic with Context>>#swapSender: or performing #jumps, this can uninstall the guard contexts or make them unreachable, eventually causing the debugged process to hijack the debugging process and never return control back to it.
</short recap>
After all, I think there is no viable alternative to explicitly informing the debugger about such acts of context metaprogramming. For this reason, I have set up a working but unpolished prototype in my image which, in a nutshell, applies the following changes to Context:

1. In #jump and #swapSender:, insert a send {self informDebuggerAboutContextSwitchTo: coroutine} right before installing the new sender.

2. #informDebuggerAboutContextSwitchTo: searches the sender stack for an UnhandledError handler that was installed by #runUntilErrorOrReturnFrom:. If it finds one, it checks whether this context would still be reachable after the context switch. (If yes, nothing is done; otherwise, stepping over complex messages such as {self systemNavigation allCallsOn: #foo} would become really messy.) If not, an UnhandledError is signaled to abort the #runUntilErrorOrReturnFrom: execution prematurely.
You can find the details in the attached changeset, but IMO implementation details are less relevant for the current discussion than a general understanding of the problem and the solution approach.
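To make change 1 concrete, here is a minimal sketch of the hooked #swapSender: (the hook name comes from the description above; the exact body is in the changeset, so treat this shape as an assumption):

    swapSender: coroutineContext
        "Sketch: notify any active #runUntilErrorOrReturnFrom: guard
        before rewiring the sender chain; the notification send is the
        only addition to the original method."
        | oldSender |
        self informDebuggerAboutContextSwitchTo: coroutineContext.
        oldSender := sender.
        sender := coroutineContext.
        ^ oldSender

The same send goes into #jump, right before the new sender is installed.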
Here are some questions for you:
1. Would you agree that this is the right approach to the problem, given that there are no alternatives?

2. Now it's getting indeed a bit more technical: At the moment, #findNextRunUntilErrorOrReturnFromCalleeContextUpTo: in my implementation (horrible name, I know ...) manually traverses the chain of #nextHandlerContexts to search for an UnhandledError handler of interest. However, I don't really like this approach, because a) it creates a high coupling to the #runUntilErrorOrReturnFrom: implementation (well, on the other hand, both methods reside in the same class ...) and b) it might introduce a noticeable performance drop (premature benchmarks have suggested an overhead of 5%-25%, depending on the task).

I need to build better measures, but a question in general: Are we willing to accept this performance drop in order to regain correctness? (In my opinion, we should be; it is only used for debugging and "still fast enough for our neurons" :-))

An alternative to manually scanning all handler contexts could be to introduce a new exception for this (maybe ContextSwitchNotification) and let the VM do all the work. Or am I overrating "The Great VM" in this regard, and things won't be able to become faster than #handleSignal: at all? For such a low-level, performance-critical decision, I think design questions should be secondary.
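For concreteness, the scan from point 2 might look roughly like this; this is my reconstruction, not the changeset's code. #nextHandlerContext is taken from the description above, while the guard test selector is hypothetical:

    findRunUntilErrorOrReturnFromGuard
        "Hedged reconstruction: walk the handler chain looking for the
        UnhandledError handler installed by #runUntilErrorOrReturnFrom:.
        #isRunUntilErrorOrReturnFromGuard stands in for whatever marker
        test the changeset actually uses."
        | ctxt |
        ctxt := self nextHandlerContext.
        [ctxt isNil] whileFalse:
            [ctxt isRunUntilErrorOrReturnFromGuard ifTrue: [^ ctxt].
             ctxt := ctxt nextHandlerContext].
        ^ nil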
Do you have some thoughts and opinions about this?
I am looking forward to your feedback! Let's get this problem solved, too, definitively before the next release! :-)
Best,
Christoph
[Attachment: runUntilErrorOrReturnFrom.cs]
|
By the way, another occurrence of this issue is when you debug such an expression:

    [^6*7] ensure: [2+3]

Step through the first block, into Context>>aboutToReturn:through:, and then over #return:through:. I think the cause is the same.
|
Hi Christoph,

Christoph Thiede wrote:
> By the way, another occurrence of this issue is when you debug such an expression:
>
> [^6*7] ensure: [2+3]
>
> Step through the first block, into Context>>aboutToReturn:through:, and then over #return:through:. I think the cause is the same.

This particular issue is related to this one:
http://forum.world.st/stepping-over-non-local-return-in-a-protected-block-td5128777.html

^[^ Jaromir
|