Hi Thomas,
So I expect the code should read:

	activePC: aContext
		^ (self isLatestContext: aContext)
			ifTrue: [ aContext pc ]
			ifFalse: [ self previousPC: aContext ]
Because the active pc is used to derive display feedback in the debugger. In particular it is used to derive source ranges for contexts.
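To make the intended use concrete, here is a hedged sketch of how a debugger tool could use the active pc to decide what to highlight. The helper name sourceRangeFor: and the selector rangeForPC:in: are illustrative, not necessarily the actual Pharo API:

```smalltalk
"Sketch: pick the source range to highlight for the context selected
 in the debugger's stack pane. session is assumed to be a DebugSession."
sourceRangeFor: aContext
	| pc |
	"On the top context the current pc is the active one; below the top
	 the current pc is a return pc, so activePC: answers the previous pc
	 (the pc of the send itself)."
	pc := session activePC: aContext.
	^ aContext debuggerMap rangeForPC: pc in: aContext method
```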
In reply to this post by Thomas Dupriez-2
Hi Thomas,
forgive me, my first response was too terse. Having thought about it in the shower it becomes clear :-)

> On Jan 11, 2019, at 6:49 AM, Thomas Dupriez <[hidden email]> wrote:
>
> Hi,
>
> Yes, my question was just of the form: "Hey there's this method in DebugSession. What is it doing? What's the intention behind it? Does someone know?". There was no hidden agenda behind it.
>
> @Eliot
>
> After taking another look at this method, there's something I don't understand:
>
> activePC: aContext
> 	^ (self isLatestContext: aContext)
> 		ifTrue: [ interruptedContext pc ]
> 		ifFalse: [ self previousPC: aContext ]
>
> isLatestContext: checks whether its argument is the suspended context (the context at the top of the stack of the interrupted process). And if that's true, activePC: returns the pc of **interruptedContext**, not of the suspended context. These two contexts are different when the debugger opens on an exception, so this method is potentially returning a pc for another context than its argument...
>
> Another question I have to improve the comment for this method is: what's the high-level meaning of this concept of "activePC". You gave the formal definition, but what's the point of defining this so to speak? What makes this concept interesting enough to warrant defining it and giving it a name?

There are two “modes” where a pc is mapped to a source range. One is when stepping a context in the debugger (the context is on top and is actively executing bytecodes). Here the debugger stops immediately before a send, assignment or return, so that for sends we can step into or over, and for assignments or returns check the stack top to see what will be assigned or returned. In this mode we want the pc of the send, assignment or return to map to the source range for the send, or the expression being assigned or returned. Since this is the “common case”, and since this is the only choice that makes sense for assignments and returns, the bytecode compiler constructs its pc-to-source-range map in terms of the pc of the first byte of the send, assignment or return bytecode.

The second “mode” is when selecting a context below the top context. The pc for any context below the top context will be the return pc of a send, because the send has already happened. The compiler could choose to map this pc to the send, but it would not match what works for the common case. Another choice would appear to be to have two map entries, one for the send and one for the return pc, both mapping to the source range. But this wouldn’t work because the result of a send might be assigned or returned, and so there is a potential conflict. Instead, the reasonable solution is to select the previous pc for contexts below the top context, which will be the pc for the start of the send bytecode.

HTH

> Cheers,
> Thomas
>
>> On 11/01/2019 13:53, Tudor Girba wrote:
>> Hi,
>>
>> @Eliot: Thanks for the clarifying answer.
>>
>> I believe you might have jumped to conclusions about the intention of the question. Thomas asked a legitimate question. Without users of a method it is hard to understand its use. It does not necessarily imply that the intention is to remove it, but it does show that someone wants to understand.
>>
>> As far as I know, Thomas actually wants to write a test to cover that usage. I am sure that you appreciate and encourage that :).
>>
>> @Thomas: Thanks for this effort!
>>
>> Cheers,
>> Doru
>>
>>> On Jan 10, 2019, at 3:11 PM, Eliot Miranda <[hidden email]> wrote:
>>>
>>> Hi Thomas,
>>>
>>>> On Jan 10, 2019, at 2:24 AM, Thomas Dupriez via Pharo-dev <[hidden email]> wrote:
>>>>
>>>> <mime-attachment>
>>>
>>> in a stack of contexts the active pc is different for the top context. For other than the top context, a context’s pc will be pointing after the send that created the context above it, so to find the pc of the send one finds the previous pc. For the top context its pc is the active pc.
>>>
>>> Typically the debugger is invoked in two different modes, interruption or exception. When interrupted, a process is stopped at the next suspension point (method entry or backward branch) and the top context in the process is the context to be displayed in the debugger. When an exception occurs the exception search machinery will find the signaling context, the context that raised the exception, which will be below the search machinery and the debugger invocation above that. The active pc of the signaling context will be the pc for the send of #signal et al.
>>>
>>> So the distinction is important and the utility method is probably useful.
>>>
>>> Do you want to remove the method simply because there are no senders in the image?
>>>
>>> If so, this is indicative of a serious problem with the Pharo development process. In the summer I ported VMMaker.oscog to Pharo 6. Now as feenk try and build a VMMaker.oscog image on Pharo 7, the system is broken, in part because of deprecations and in part because useful methods (isOptimisedBlock (isOptimizedBlock?) in the Opal compiler) have been removed.
>>>
>>> Just because a method is not in the image does not imply it is not in use. It simply means that it is not in use in the base image. As the system gets modularised this issue will only increase. There are lots of collection methods that exist as a library that are not used in the base image, and removing them would clearly damage the library for users. This is the case for lots of so-called system code. There are users out there, like those of us in the vm team, who rely on such system code, and it is extremely unsettling and frustrating to have that system code change all the time. If Pharo is to be a useful platform to the vm team it has to be more stable.
>> --
>> www.feenk.com
>>
>> “The smaller and more pervasive the hardware becomes, the more physical the software gets."
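As an illustration of the two modes described above (the pcs, selector and bytecode layout here are made up for the example and will differ per compiler and bytecode set), consider a method `foo ^ self bar`:

```smalltalk
"Illustrative bytecode for:   foo  ^ self bar

   pc 17:  pushReceiver     'self'
   pc 18:  send #bar        <- the compiler's map keys the source range
                               of 'self bar' on this pc
   pc 19:  returnTop        <- the return pc: execution resumes here
                               after #bar answers

 Mode 1: foo is the top context, stopped before the send.
         Its pc is 18, which maps directly to the range of 'self bar'.
 Mode 2: foo sits below the top context (we are inside #bar).
         Its pc is the return pc, 19, which has no map entry of its own,
         so the debugger backs up to the previous pc, 18, and again
         gets the range of 'self bar'."
```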
In reply to this post by Eliot Miranda-2
> On 11 Jan 2019, at 19:32, Eliot Miranda <[hidden email]> wrote:
>
> Sven,
>
>> On Jan 11, 2019, at 10:03 AM, Sven Van Caekenberghe <[hidden email]> wrote:
>>
>> Eliot,
>>
>> I can assure you that multiple core Pharo people had the same reaction, don't turn this (again) in a play on one person's emotions (apart from the fact that those are present in all living creatures).
>
> First you assume a motive I don’t have. I am not trying to provoke anyone.

Clearly you are, given the reactions.

Like Doru said, you did not just answer the question, your last two paragraphs contained lots of provocation.

> Second, I think emotions are the results of mammalian brains, perhaps bird and fish brains, and certainly not present in amoeba.

First an IS reference, now this: yes, you are a man of ratio and reason only, devoid of human emotions like the rest of us. Good for you.

Hundreds of libraries and frameworks were moved between Pharo 6.x and 7, with minimal changes.

We are an active, living community where many, many people contribute, are allowed to make mistakes, to question old code and old rules, to learn, to make things better.

>> Sven
>>
>>> On 11 Jan 2019, at 18:57, Eliot Miranda <[hidden email]> wrote:
>>>
>>> Stef,
>>>
>>>> On Jan 10, 2019, at 11:24 PM, ducasse <[hidden email]> wrote:
>>>>
>>>> Ben
>>>>
>>>> Since you asked I reply.
>>>> For calypso we try and sometimes fail and retry. But we do not rant.
>>>>
>>>> Now the solution is also to have tests and this is what we are doing.
>>>> We want more tests and we are working on having more tests.
>>>>
>>>> The solution is also to have ***********positive********* communication.
>>>>
>>>> There is no point to piss on our process because
>>>> - we were the first to push package usage back in Squeak 3.9
>>>> - we increased enormously the number of tests
>>>> - we have CI to run the tests and use PRs
>>>> and it is working!
>>>>
>>>> So before bashing us I would expect a bit of the respect that is due to our track record.
>>>
>>> Again you fail to respond to an attempt to discuss real issues and instead take it as a personal attack and respond emotionally. Ben is /not/ bashing your process in an attempt to damage Pharo. As an academic researcher you should be able to respond objectively to criticism. This frequent emotionality doesn’t help you or the community.
>>>
>>>> Finally it takes 1 min to enter a bug entry, and now you cannot even complain that you have to log in, because it is on GitHub. (BTW nobody is asking about the amount of time it takes US to migrate and go over the bug entries - again I ask for respect for the people doing this tedious, boring but important job.)
>>>>
>>>> When VMMaker is available in Pharo we will be able to automate things, not before.
>>>> Please remember also that Igor, paid by us, spent a lot of time making sure that everybody, and in particular our jenkins server, could automatically build the vm.
>>>>
>>>> So we believe in agility, communication and automation.
>>>>
>>>> Stef
>>>>
>>>>> On 11 Jan 2019, at 05:54, Ben Coman <[hidden email]> wrote:
>>>>>
>>>>> On Thu, 10 Jan 2019 at 23:51, ducasse via Pharo-dev <[hidden email]> wrote:
>>>>>> Thomas can you integrate such comments in the debugger class comment
>>>>>>
>>>>>> @Eliot thanks for the explanation.
>>>>>> About the method removed, could you please react less negatively? It would be nice.
>>>>>> I cannot believe that you, the guy that knows the VM, would get stopped by opening a bug entry telling us that isOptimizedBlock has been removed and should not have been. How much time can opening a bug entry take? Under 1 min I guess.
>>>>>
>>>>> I'd guess it takes more than 1 minute overall - a few minutes to shift context to open an old Pharo image, and a few more to open the original method to copy it to Pharo, and repeat that for the next ten missing methods; and then, having fixed it for yourself, rather than just log a job for someone else, you now repair your pharo repo with Iceberg and submit a commit, and you're now off-task by half an hour. Not a great deal of time if that was what you scheduled to work on, but frustrating when dragged off task.
>>>>>
>>>>> The thing is, when someone is frustrated, without sharing there is no chance to resolve anything, so the frustration doubles and builds up, and unconsciously creeps into relationships and/or leads to a breakdown. Putting it out in the world relieves that pressure and provides the possibility that someone might find a middle path. As always, it is not what is said but how it is said, and personally that seemed okay to me.
>>>>>
>>>>>> Just because a method is not in the image does not imply it is not in use. It simply means that it is not in use in the base image. As the system gets modularised this issue will only increase.
>>>>>
>>>>> On the flip side, if the rule was "don't touch unused methods", that would block a lot of action around cleaning, minimisation and modularisation that is important. Even though those things aren't directly the shiny new tools that make Pharo great, it's their philosophy that underpins a lot of the visible Pharo improvements which have facilitated Pharo's growth. That "vision" is why I'm here.
>>>>>
>>>>> The pivot point here is the concept of "unused", and perhaps where we can do better. Currently developers have no information available to base their decision on. Requiring developers to query the mailing list about every cleaning, minimisation and modularisation action would have a freezing effect.
>>>>>
>>>>> For stuff that is in-image it's much easier for developers since:
>>>>> * it's "visible" right in front of them
>>>>> * they can make decisions and take action around it
>>>>> * tests can be run
>>>>>
>>>>> So the question is how we can get those things for important modules outside the image. For me, the VM is not like any third party app but is very much a *part* of Pharo, since it's something which helps Pharo itself advance. So let's treat it as such, similar to how we treat other modules like Calypso or Iceberg which happen to be distributed in-image. Can we consider that the last step of the CI (after packaging the CI product) could load a static version of VMMaker? Not that any issues would fail the commit, but just report, to bring "visibility" to the table?
>>>>>
>>>>> Or, once VMMaker is in git, perhaps a separate opensmalltalk-vm CI job could run against each latest Pharo image to report problems?
>>>>>
>>>>> Or, to broaden the perspective of "unused" further, we could put broader #senders info in front of developers so they can make informed decisions at design time rather than cleaning up after-the-fact. Projects could register on a central site their list of #senders (generated from some tool) so the Pharo IDE could reference that when asked for #senders - so the information is available to make informed decisions.
>>>>>
>>>>> Clicking on an external #sender could pop up the actual code hosted on github, so developers have even better knowledge for decisions and communication.
>>>>>
>>>>> Of course, creating such a system requires effort in our world of limited resources. But the external #senders register could be a premium service available just to Pharo Association/Consortium members, which sets up a bit of a win/win scenario. It helps developers and is an incentive to join up. This is not to guarantee changes won't affect your project's #senders, but just to improve communication around them.
>>>>>
>>>>>> So why if marcus removed it inadvertently would you want to make him feel bad? I don’t want people to feel bad.
>>>>>>
>>>>>> We can make mistakes.
>>>>>
>>>>> That is an important philosophy, but here is a key thing to be clear on...
>>>>> [ "If you are not open to hearing about mistakes, then you are not open to making mistakes." ] repeat.
>>>>>
>>>>> There is no reason for an individual to feel bad about removing an "unused" method if that is the process. But can the process be improved?
>>>>>
>>>>> cheers -ben
In reply to this post by Eliot Miranda-2
Yeah, it's a bit unfortunate you assumed I wanted to remove the method.
It brought up a not so pleasant discussion. Everyone makes mistakes. :-)

So if I understand, this method gives the pc that maps to the AST node the debugger should highlight when a context is selected in the stack widget? If I'm right, how come this method has no senders in the debugger code? That would mean this feature is also implemented somewhere else.

Thomas

Le 11/01/2019 à 20:28, Eliot Miranda a écrit :
> Hi Thomas,
>
> forgive me, my first response was too terse. Having thought about it in the shower it becomes clear :-)
> [snip — full quote of the earlier explanation]
In reply to this post by Sven Van Caekenberghe-2
Sven,
> On Jan 11, 2019, at 11:40 AM, Sven Van Caekenberghe <[hidden email]> wrote:
>
>> On 11 Jan 2019, at 19:32, Eliot Miranda <[hidden email]> wrote:
>>
>> Sven,
>>
>>> On Jan 11, 2019, at 10:03 AM, Sven Van Caekenberghe <[hidden email]> wrote:
>>>
>>> Eliot,
>>>
>>> I can assure you that multiple core Pharo people had the same reaction, don't turn this (again) in a play on one person's emotions (apart from the fact that those are present in all living creatures).
>>
>> First you assume a motive I don’t have. I am not trying to provoke anyone.
>
> Clearly you are, given the reactions.
>
> Like Doru said, you did not just answer the question, your last two paragraphs contained lots of provocation.

You’re entitled to your opinion. But since the intent to provoke (or not) would be in my head, yours is a projection, not fact.

>> Second, I think emotions are the results of mammalian brains, perhaps bird and fish brains, and certainly not present in amoeba.
>
> First an IS reference, now this: yes, you are a man of ratio and reason only, devoid of human emotions like the rest of us. Good for you.
>
> Hundreds of libraries and frameworks were moved between Pharo 6.x and 7, with minimal changes.
>
> We are an active, living community where many, many people contribute, are allowed to make mistakes, to question old code and old rules, to learn, to make things better.
> [snip — the rest of the quoted subthread]
In reply to this post by Pharo Smalltalk Developers mailing list
Hi guys.

Funny thing: DebugSession does not exist in Squeak. It was introduced in Pharo to represent the model for the debugger.

On Fri, 11 Jan 2019 at 22:34, Thomas Dupriez via Pharo-dev <[hidden email]> wrote:
> Yeah, it's a bit unfortunate you assumed I wanted to remove the method.
In reply to this post by Eliot Miranda-2
> On 11 Jan 2019, at 20:28, Eliot Miranda <[hidden email]> wrote: > > Hi Thomas, > > forgive me, my first response was too terse. Having thought about it in the shower it becomes clear :-) > >> On Jan 11, 2019, at 6:49 AM, Thomas Dupriez <[hidden email]> wrote: >> >> Hi, >> >> Yes, my question was just of the form: "Hey there's this method in DebugSession. What is it doing? What's the intention behind it? Does someone know?". There was no hidden agenda behind it. >> >> @Eliot >> >> After taking another look at this method, there's something I don't understand: >> >> activePC: aContext >> ^ (self isLatestContext: aContext) >> ifTrue: [ interruptedContext pc ] >> ifFalse: [ self previousPC: aContext ] >> >> isLatestContext: checks whether its argument is the suspended context (the context at the top of the stack of the interrupted process). And if that's true, activePC: returns the pc of **interruptedContext**, not of the suspended context. These two contexts are different when the debugger opens on an exception, so this method is potentially returning a pc for another context than its argument... >> >> Another question I have to improve the comment for this method is: what's the high-level meaning of this concept of "activePC". You gave the formal definition, but what's the point of defining this so to speak? What makes this concept interesting enough to warrant defining it and giving it a name? > > There are two “modes” where a pc us mapped to a source range. One is when stepping a context in the debugger (the context is on top and is actively executing bytecodes). Here the debugger stops immediately before a send or assignment or return, so that for sends we can do into or over, or for assignments or returns check stack top to see what will be assigned or returned. In this mode we want the pc of the send, assign or return to map to the source range for the send, or the expression being assigned or returned. 
Since this is the “common case”, and since this is the only choice that makes sense for assignments ta and returns, the bytecode compiler constructs it’s pc to source range map in terms of the pc of the first byte if the send, assign or return bytecode. > > The second “mode” is when selecting a context below the top context. The pc for any context below the top context will be the return pc for a send, because the send has already happened. The compiler could choose to map this pc to the send, but it would not match what works for the common case. Another choice would appear be to have two map entries, one for the send and one for the return pc, both mapping to the source range. But this wouldn’t work because the result of a send might be assigned or returned and so there is a potential conflict. I stead the reasonable solution is to select the previous pc for contexts below the top of context, which will be the pc for the start of the send bytecode. > I checked with Thomas -> for source mapping, we use the API of the method map. The map does the “get the mapping for the instruction before”, it just needs to be told that we ask the range for an active context: #rangeForPC:contextIsActiveContext: it is called ^aContext debuggerMap rangeForPC: aContext pc contextIsActiveContext: (self isLatestContext: aContext) ] So the logic was move from the debugger to the Map. (I think this is even your design?), and thus the logic inside the debugger is not needed anymore. -> For the question why the AST node of the Block has no simple method to check if it is an “isOptimized” block (e.g. in an ifTrue:): Yes, that might be nice. The reason why is is not there is that the compiler internally using OCOptimizedBlockScope and a check on the AST level was never needed. I would have added it as soon as it would be needed (quite easy to do). On the AST level there is already a check, though, if a *send* is optimized, which is used for code generation (#isInlined). 
The question why we did not add more compatibility layers to the old AST is that it is quite different... so even with some methods here and there, 99% of clients need changes, so I did not investigate it too much, with the idea that if we end up in a situation where we need just a method, we can add it as needed.

In general, I have no strong feelings about the details of the implementation of Opal. It’s just what was possible, with the resources and the understanding that I had at the point it was done. There are for sure even mistakes. I always make mistakes. In addition, it is based on a prior compiler which was done for a different closure model, whose design choices influenced it too much I think, sometimes good, sometimes not.

But I like the “high level”: using a shared AST between the compiler and the tools *and* having a mapping BC -> AST -> Text. Of course I understand that the choice to use the RB AST for the compiler is not a “traditional” one... but it turned out to work very well *and* it brings some amazing power, as we have now not only a mapping bc -> text offset, but a mapping bc -> AST, too. This (just a simple example) makes the magic of the runtime-transforming deprecations possible. See #transform on class Deprecation, and #sourceNodeExecuted:

transform
	| node rewriteRule aMethod |
	self shouldTransform ifFalse: [ ^ self ].
	self rewriterClass ifNil: [ ^ self signal ].
	aMethod := self contextOfSender method.
	aMethod isDoIt ifTrue: [ ^ self ]. "no need to transform doits"
	node := self contextOfSender sourceNodeExecuted.
	rewriteRule := self rewriterClass new replace: rule key with: rule value.
	(rewriteRule executeTree: node) ifFalse: [ ^ self ].
	node replaceWith: rewriteRule tree.
	Author
		useAuthor: 'AutoDeprecationRefactoring'
		during: [ aMethod origin compile: aMethod ast formattedCode classified: aMethod protocol ]

Marcus
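As a usage sketch: a deprecated method can carry its own rewrite rule, which the #transform machinery above applies to the source of the caller. This is a hypothetical example; the exact selector (#deprecated:transformWith: in recent Pharo) and rule syntax may differ between versions:

```smalltalk
"Hypothetical deprecated method that rewrites its senders in place.
 When a sender executes it, Deprecation>>transform replaces the matching
 source node in the calling method and recompiles it, so the old selector
 disappears from client code over time."
foo
	self
		deprecated: 'Use #bar instead.'
		transformWith: '`@receiver foo' -> '`@receiver bar'.
	^ self bar
```

The `` `@receiver `` pattern variable is the RB rewriter's wildcard for the receiver expression, so the rule works for any sender.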
I simply love the dynamic rewriting; this is just too cool. We should systematically use it. I will continue to use it in any deprecation.

Now I have a simple question (you can explain it to me over lunch one of these days).

I do not get why the RB AST would not be a good representation for the compiler? I would like to know what the difference is. You mean that before, going from BC to AST was difficult? How does Opal perform it? It does not use the source of the method to recreate the AST, but it can do it from the BC?

Stef
In reply to this post by Pharo Smalltalk Developers mailing list
> On 18 Jan 2019, at 14:26, ducasse <[hidden email]> wrote:
>
> I simply love the dynamic rewriting; this is just too cool. We should systematically use it.
> I will continue to use it in any deprecation.
>

On my TODO is to make it stand-alone and provide it as a “compatibility transform”, too. So we can add it to methods that we want to keep for compatibility, but they will nevertheless transform the code automatically. (This then might be disabled in production to not transform.)

> Now I have a simple question (you can explain it to me over lunch one of these days).
>
> I do not get why the RB AST would not be a good representation for the compiler?
> I would like to know what the difference is.
>

I think it is a good one. I have not yet seen a reason why not. But remember, Roel left Squeak because his visitor pattern for the compiler was rejected as a dumb idea… so there are definitely different views on core questions. E.g. the RB AST is annotated, and the whole thing for sure uses a bit more memory than a compiler designed for a machine from 1978.

> You mean that before, going from BC to AST was difficult?

You need to do the mapping somehow: the compiler needs to remember the BC offset in the code generation phase, and the AST (somehow) needs to store that information (either in every node or in some table).

> How does Opal perform it? It does not use the source of the method to recreate the AST, but it can do it from the BC?
>

It uses the IR (which I still am not 100% sure about; it came from the old “ClosureCompiler” design and it turned out to be quite useful, for example for the mapping): every IR node retains the offset of the BC it creates, and the IR nodes retain the AST node that created them.

-> So we just do a query: “IRMethod, give me the IRInstruction that created BC offset X.” Then: “IR, which AST node did create you?” Then the AST node: “What is your highlight interval in the source?”
The devil is in the detail, as one IR node can produce multiple bytecode offsets (and bytecodes) and one bytecode might be created by two IR nodes, but it does seem to work with some tricks. Which I want to remove by improving the mapping and even the IR more… there is even the question: do we need the IR? Could we not do it simpler?

The IR was quite nice back when we tried to do things with bytecode manipulation (Bytesurgeon); now it feels a bit of an overkill. But it simplifies e.g. the bc mapping.

Marcus
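The query chain described above might read roughly like this. This is only a sketch: #irMethod, #instructionForPC:, #sourceNode and #sourceInterval are illustrative names, and the real Opal selectors may differ.

```smalltalk
"Sketch of the bc -> IR -> AST -> source text chain Marcus describes."
sourceRangeForPC: anOffset inMethod: aCompiledMethod
	| ir instruction astNode |
	ir := aCompiledMethod irMethod.                "IR retained for the method"
	instruction := ir instructionForPC: anOffset.  "IR node that emitted this bytecode"
	astNode := instruction sourceNode.             "AST node that created the IR node"
	^ astNode sourceInterval                       "highlight interval in the source text"
```

Each arrow in the chain is one lookup, which is why retaining the bc offset on the IR node and the AST node on the IR node is enough to recover a source highlight from a raw pc.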
On Fri, 18 Jan 2019 at 14:42, Marcus Denker <[hidden email]> wrote:
Agree! IR is super when we want to manipulate the bytecodes (decompiling, instrumenting, etc.). But if it's just for generating the CompiledMethod bytecodes (compiling), it's a bit too much and is one contributor to compilation slowdown. Maybe we could have another polymorphic kind of IRBuilder that would be a DirectByteCodeGenerator.
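The proposal might be sketched like this. Everything here is hypothetical (class name, selectors, and the encoder protocol); the point is only that the generator answers the same building protocol as IRBuilder but emits bytecodes immediately instead of constructing IR nodes:

```smalltalk
"Hypothetical DirectByteCodeGenerator: polymorphic with the IR-building
 path, but each builder message appends bytecodes straight to an encoder,
 skipping the intermediate IR graph entirely."
DirectByteCodeGenerator >> pushReceiver
	encoder genPushReceiver

DirectByteCodeGenerator >> pushLiteral: anObject
	encoder genPushLiteral: (self literalIndexOf: anObject)

DirectByteCodeGenerator >> send: aSelector numArgs: anInteger
	encoder genSend: (self literalIndexOf: aSelector) numArgs: anInteger
```

Since the compiler's code-generation visitor only talks to the builder protocol, swapping the IR-building implementation for a direct one would be transparent to the rest of Opal, at the cost of losing the IR-level mapping and manipulation hooks.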
In reply to this post by Marcus Denker-4
Hi Marcus
>> I simply love the dynamic rewriting; this is just too cool. We should systematically use it.
>> I will continue to use it in any deprecation.
>>
>
> On my TODO is to make it stand-alone and provide it as a “compatibility transform”, too.

I have to dig, because I remember that I went over all the deprecations in Pharo 60 and started to look at the ones that I could “transformify”, so that we get a nice package that rewrites more :)

> So we can add it to methods that we want to keep for compatibility, but they will nevertheless transform the code automatically.
> (This then might be disabled in production to not transform.)

Yes, I like that. I will look for my code.

>
>> Now I have a simple question (you can explain it to me over lunch one of these days).
>>
>> I do not get why the RB AST would not be a good representation for the compiler?
>> I would like to know what the difference is.
>>
> I think it is a good one. I have not yet seen a reason why not. But remember, Roel left Squeak because his visitor pattern for the compiler was rejected as a dumb idea… so there are definitely different views on core questions.
>
> E.g. the RB AST is annotated, and the whole thing for sure uses a bit more memory than a compiler designed for a machine from 1978.
>
>> You mean that before, going from BC to AST was difficult?
>
> You need to do the mapping somehow: the compiler needs to remember the BC offset in the code generation phase, and the AST (somehow) needs to store that information (either in every node or in some table).
>
>> How does Opal perform it? It does not use the source of the method to recreate the AST, but it can do it from the BC?
>>
>
> It uses the IR (which I still am not 100% sure about; it came from the old “ClosureCompiler” design and it turned out to be quite useful, for example for the mapping): every IR node retains the offset of the BC it creates, and the IR nodes retain the AST node that created them.
>
> -> So we just do a query: “IRMethod, give me the IRInstruction that created BC offset X.” Then: “IR, which AST node did create you?” Then the AST node: “What is your highlight interval in the source?”
>
> The devil is in the detail, as one IR node can produce multiple bytecode offsets (and bytecodes) and one bytecode might be created by two IR nodes, but it does seem to work with some tricks.

Ok, I see. And all in all it is really nice.

> Which I want to remove by improving the mapping and even the IR more… there is even the question: do we need the IR? Could we not do it simpler?
>
> The IR was quite nice back when we tried to do things with bytecode manipulation (Bytesurgeon); now it feels a bit of an overkill. But it simplifies e.g. the bc mapping.

We should document this, because this is cool.

Stef
>>
>> On my TODO is to make it stand-alone and provide it as a “compatibility transform”, too.
>
> I have to dig, because I remember that I went over all the deprecations in Pharo 60 and started to look at the ones that I
> could “transformify”, so that we get a nice package that rewrites more :)
>
>
>> So we can add it to methods that we want to keep for compatibility, but they will nevertheless transform the code automatically.
>> (This then might be disabled in production to not transform.)
>
> Yes, I like that. I will look for my code.

Apparently I published it under:

	MCSmalltalkhubRepository
		owner: 'PharoExtras'
		project: 'Migrator'

Several packages: MigratorPharo60 contains what I did for Pharo60; probably Migrator contains some of the Pharo70 ones.

Now I would like to understand how to organise it. I have the impression that we should keep the ones that can be transformed. I do not think that we should go back to Pharo50, because it is too old.

Stef
In reply to this post by Pharo Smalltalk Developers mailing list
Hi Marcus,

On Fri, Jan 18, 2019 at 5:15 AM Marcus Denker via Pharo-dev <[hidden email]> wrote:
"Design" is giving my code a little too much respect. I was desperately trying to get something to work to be able to deploy Cog with the new closure model. I happily admit that DebuggerMethodMap in Squeak is ugly code. It had to be extended recently to handle full blocks. But it would be great to rewrite it.

I dream of a harmonisation of the Squeak/Pharo/Cuis execution classes, such that we have the same Context, CompiledCode, CompiledBlock, CompiledMethod, debuggerMap and BytecodeEncoder (which is effectively the back end of the compiler that generates bytecode, and the interface to the debugger when bytecode is analysed or executed in the debugger), which would make my life easier maintaining the VM and execution classes, especially as we introduce Sista.

I think Opal as a separate compiler is great; the work you've done with mustBeBooleanMagic is superb. At the same time the Squeak compiler is fine for its job and I still find it easier to understand than Opal (probably because I'm not spending enough time with it).

One major reason for the harmonization is bug fixes. Right now I use SistaV1 and 64 bits as my default work environment, in a 64-bit Squeak image that was compiled with V3PlusClosures, so that half the code base (the base image) is the old bytecode set, but all of VMMaker is the new bytecode set and full blocks. So far in the past few months I've fixed a few issues with pc mapping, debugging in the different bytecode sets, searching for literals with full blocks, fixing Slang for full blocks, etc. And I'd like these bug fixes to make it into Pharo too. If this speaks to you Marcus, maybe we can try and figure out a path that doesn't ruffle too many feathers.

> -> For the question why the AST node of the Block has no simple method to check if it is an “isOptimized” block (e.g. in an ifTrue:): Yes, that might be nice. The reason why it is not there is that the compiler internally uses OCOptimizedBlockScope and a check on the AST level was never needed.
> I would have added it as soon as it would be needed (quite easy to do).

OK. isOptimized is something I use a lot when analyzing parse trees, in looking at the relationship between bytecode and JIT code. For example it is key to work I'm doing right now on register allocation for Sista.

> On the AST level there is already a check, though, if a *send* is optimized, which is used for code generation (#isInlined).

The Squeak versions here are MessageNode>>isOptimized, isOptimizedLoop & isOptimizedWhileLoop.

> The question why we did not add more compatibility layers to the old AST is that it is quite different... so even with some methods here and there, 99% of clients need changes, so I did not investigate it too much, with the idea that if we end up in a situation where we need just a method, we can add it as needed.

Reasonable. I simply have to maintain compatibility for the two parse trees.

> In general, I have no strong feelings about the details of the implementation of Opal. It’s just what was possible, with the resources and the understanding that I had at the point it was done. There are for sure even mistakes. I always make mistakes.

I can join you in all of these for the Squeak compiler. It was not possible to have someone to pair with when modifying the Squeak compiler to conform to my closure design; I was in a tiny startup and had to do all execution technology work. And I always make mistakes too :-)

> In addition, it is based on a prior compiler which was done for a different closure model, whose design choices influenced it too much I think, sometimes good, sometimes not.

Yes, this is great.

> Of course I understand that the choice to use the RB AST for the compiler is not a “traditional” one... but it turned out to work very well *and* it brings some amazing power, as we have now not only a mapping bc -> text offset, but a mapping bc -> AST, too. This (just a simple example) makes the magic of the runtime-transforming deprecations possible.
> See #transform on class Deprecation, and #sourceNodeExecuted:

+10

> Marcus

_,,,^..^,,,_
best, Eliot
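For illustration, the inlined-send check mentioned above can be exercised directly on a parsed expression. This is a sketch assuming the RB parser and Opal's #isInlined extension on message nodes are loaded (in Squeak the equivalent is MessageNode>>isOptimized):

```smalltalk
"Sketch: an ifTrue: send with a literal block argument is compiled inline
 by the bytecode compiler, so no real message send is emitted for it."
| sendNode |
sendNode := RBParser parseExpression: 'count > 0 ifTrue: [ self run ]'.
sendNode isInlined
```

The same check drives code generation: an inlined send is compiled to conditional-jump bytecodes around the block's body, rather than a send of #ifTrue: with a BlockClosure argument.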
In reply to this post by Marcus Denker-4
Hi Marcus,

On Fri, Jan 18, 2019 at 5:42 AM Marcus Denker <[hidden email]> wrote:
I find Bytesurgeon functionality, specifically a bytecode dis/assembler, very useful, but wouldn't use it for the back end of the bytecode compiler. It adds overhead that has no benefit. But I use my version, MethodMassage, for lots of things:
- transporting compiled methods from one dialect to another, e.g. to do in-image JIT compilation of a method from Pharo in Squeak.
- generating JIT test cases.
- generating methods that can't be generated from Smalltalk source, e.g. an accessor for a JavaScript implementation above Smalltalk where inst var 0 is the prototype slot and nil is unbound, and so in a loop one wants to fetch the Nth inst var from a temporary initialized to self, and if the value is non-nil return it, otherwise setting the temporary to the prototype slot, hence walking up the prototype chain until an initialized inst var is found.

I based mine around the messages that InstructionStream sends to the client in the interpretFooInstructionFor: methods; a client that catches doesNotUnderstand: then forms the basis of the disassembler. Simple and light-weight.

> Marcus

_,,,^..^,,,_
best, Eliot
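A minimal sketch of that doesNotUnderstand:-based client. BytecodeRecorder is a hypothetical class; the real protocol is that InstructionStream's #interpretNextInstructionFor: decodes one instruction and sends the corresponding message (e.g. #pushReceiver, #send:super:numArgs:) to its argument:

```smalltalk
"Each decoded instruction arrives as a message send; catching them all via
 doesNotUnderstand: avoids implementing the whole decoding protocol."
BytecodeRecorder >> doesNotUnderstand: aMessage
	log add: aMessage selector -> aMessage arguments

BytecodeRecorder >> disassemble: aCompiledMethod
	| scanner |
	log := OrderedCollection new.
	scanner := InstructionStream on: aCompiledMethod.
	[ scanner pc <= aCompiledMethod endPC ] whileTrue:
		[ scanner interpretNextInstructionFor: self ].
	^ log
```

The resulting log of selector -> arguments pairs is already a symbolic disassembly, and the same message-driven design is what makes an assembler the mirror image: replay the messages against a bytecode encoder.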