Hi,
My question assumes that I use the default transaction management that comes with Seaside in GemStone (#seasideProcessRequestWithRetry:resultBlock: and friends). I am trying to do something bizarre, I know. My ultimate goal is to be able to run a Seaside callback twice: the first time to get the execution time (to calculate the optimal number of samples for profiling), and the second time to profile the callback with a sample count calculated from the first result. So the first callback execution must be "discarded" and should cause no bad side effects.

I have my own subclass of WACallbackProcessingActionContinuation that implements this (the code below is simplified for this email):

    (self session sessionCache at: #'shouldProfile' ifAbsent: [ false ])
        ifTrue: [
            [ super performAction ]
                on: WAResponseNotification
                do: [ :ex |
                    System abortTransaction.
                    System beginTransaction ].
            self profile: [ super performAction ] ]
        ifFalse: [ super performAction ]

As you can see, in the handler for WAResponseNotification I am trying to execute the callback but not respond back to the client. That "might" be all I need from Seaside (and I may be wrong about that). But aside from that, the callback may have triggered some code (imagine adding something to a queue) that, if executed twice, gives a different behavior: I would have added 2 things to the queue instead of 1. So I also need to abort the transaction and begin a new one, and I assume I would retain the session mutex... is this correct?

Even though this code is only used when profiling, I sometimes profile to a serious extent, where I cannot afford to alter data and leave incorrect results behind, so I would appreciate your thoughts, to check that I am not missing something obvious.

Thanks a lot in advance,
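To make the side-effect concern concrete: imagine a callback like the following (a hypothetical example; OrderQueue and #buildOrder are made up). If my continuation runs it twice without discarding the first run, two orders end up in the persistent queue instead of one.

    html anchor
        callback: [ OrderQueue default add: self buildOrder ];    "side effect on persistent state"
        with: 'Place order'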
On 05/29/2017 07:21 AM, Mariano Martinez Peck via Glass wrote:
The thing that concerns me is doing an abort in the "middle" of request handling ... I know that technically you are aborting at the end, but your abort will reset any modified persistent objects that were modified before you got to the callback ... it would be safer to use nested transactions:

    (self session sessionCache at: #'shouldProfile' ifAbsent: [ false ])
        ifTrue: [
            [ System beginNestedTransaction.
            super performAction ]
                on: WAResponseNotification
                do: [ :ex | System abortTransaction ].
            self profile: [ super performAction ] ]
        ifFalse: [ super performAction ]

I guess in your case it is necessary to isolate the profiling to the single action to get good data, but there is Seaside code floating around that allows you to turn on profiling for one or more Seaside gems. With this approach you get profiling results from the handling of multiple requests ... I can dig this code up if you are interested ...

Dale
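To spell out the isolation a nested transaction buys you, here is a minimal sketch (earlierObject and #markDirty are hypothetical, and this assumes the GemStone nested-transaction semantics as I understand them: an abort inside the nested transaction rolls back only to the #beginNestedTransaction point):

    earlierObject markDirty.                          "persistent change made earlier in the request"
    System beginNestedTransaction.
    [ super performAction ]                           "throw-away first run of the callback"
        on: WAResponseNotification
        do: [ :ex | System abortTransaction ].        "discards only changes made since beginNestedTransaction"
    "earlierObject still carries its modification here, waiting for the normal end-of-request commit"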
On Tue, May 30, 2017 at 2:07 PM, Dale Henrichs via Glass <[hidden email]> wrote:
That's a really good idea! I finally gave up on that approach. I have now shifted my procedure to "first simply measure the running time and check the number of samples, adjust the interval if needed, then profile again"... But I paste (at the end of the email) the code I used as my final version, in case someone wants to follow it.
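In numbers, the interval adjustment is roughly this (a sketch with made-up values; I am assuming the measured time is in milliseconds and that #computeInterval: targets about 100k samples):

    | cpuTimeMs targetSamples intervalNs |
    cpuTimeMs := 2500.                                           "measured by the first, throw-away run"
    targetSamples := 100000.                                     "assumed target resolution"
    intervalNs := (cpuTimeMs * 1e6 / targetSamples) rounded.     "25000 ns between samples"
    intervalNs * 4                                               "coarser sampling when the Gem runs low on temp memory"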
Hmmm, I am not sure I am following you, but I am interested. Can you explain a bit more, please?
Here is the code:

performAction
    "Profiling: my ultimate goal is to be able to run a Seaside callback twice. The first time to get the execution time (to calculate the optimal number of samples for profiling) and the second time to profile the callback with a sample count calculated from the first result. So the first callback execution must be 'discarded' and should cause no bad side effects."

    self session db ifNotNilDo: [ :db | db lastRequestDateAndTimeUpdate ].
    "Sometimes we want to profile a Seaside request. With this hook we are able to do that and write the profiling output directly into the Quuve user folder under the 'profiling' subdirectory. An admin user can enable or disable this profiling from the admin page."
    (self session sessionCache at: #shouldProfile ifAbsent: [ false ])
        ifTrue: [
            FaSmalltalkPlatform current isGemStone
                ifTrue: [
                    | response cpuTimeToRun |
                    "I should re-check... I am not sure I need these 2 value holders."
                    response := ValueHolder new.
                    cpuTimeToRun := ValueHolder new.
                    "In the first callback run we do NOT answer back to the client; that is why we capture WAResponseNotification and do not #pass it. Aside from that, the callback may have triggered some code (imagine adding something to a queue) that, if executed twice, gives a different behavior (2 things added to the queue instead of 1), or cached some data that would mislead the profiling. So I also need to abort the transaction and begin a new one. Finally, the deepCopy of the response is because the second callback run will write the same thing AGAIN into the same response, so the response would be written twice, which is not what we want. I tried resetting the response or the document after the first callback so that the second one is fresh, but that is not possible: if I reset it and start the response from THIS point in time (a callback), the response is incomplete, missing everything written before arriving at this point (headers, metadata, etc). So the easiest workaround is to keep the whole response from the first callback, execute the second callback, and just before responding (when the response would have double contents) plug in the kept good response (content written only once)."
                    cpuTimeToRun contents: (FACompatibilityUtils current cpuTimeToRun: [
                        [ super performAction ]
                            on: WAResponseNotification
                            do: [ :ex |
                                response contents: WACurrentRequestContext value response deepCopy.
                                System abortTransaction.
                                System beginTransaction ] ]).
                    Transcript show: 'Cpu time to run: ' , cpuTimeToRun contents asString; cr.
                    self
                        profile: [
                            [ super performAction ]
                                on: WAResponseNotification
                                do: [ :ex |
                                    WACurrentRequestContext value instVarNamed: 'response' put: response contents.
                                    ex pass ] ]
                        totalCpuTime: cpuTimeToRun contents ]
                ifFalse: [ self profile: [ super performAction ] ] ]
        ifFalse: [ super performAction ]

-----

profile: aBlock estimatedTotalCpuTime: totalCpuTime tallyThreshold: tallyThreshold writingReportOn: aStream
    "Profiles aBlock with a sampling interval in nanoseconds based on totalCpuTime. We use the helper method #computeInterval:, which calculates more or less the nanosecond interval needed for totalCpuTime to get approximately 100k samples. This is considered very good profiling resolution, but note that it needs quite some Gem temporary memory. If you get the error 'Error occurred (error 2517), 2 failed attempts to signal AlmostOutOfMemory' when profiling, you should either increase GEM_TEMPOBJ_CACHE_SIZE or decrease the sample resolution a bit by multiplying the result of #computeInterval: by some number. Finally, writes the resulting report (limited to tallyThreshold invocations of a given method) onto aStream."

    | profMon startTime endTime persistentKey persistentDict |
    "This whole persistency idea (UserGlobals and #commitOnAlmostOutOfMemoryDuring:threshold:) is the same as what we do for SIXX; see #materializeFromSixxFile:. Basically I try to make the profiling samples/data persistent so that I depend less on a high GEM_TEMPOBJ_CACHE_SIZE. So far I am not sure this is helping."
    persistentKey := ('PROFMONITOR' , Object new identityHash asString) asSymbol.
    persistentDict := UserGlobals at: #'ProfMonitorRoots' ifAbsentPut: [ RcKeyValueDictionary new ].
    [ FACompatibilityUtils current
        commitOnAlmostOutOfMemoryDuring: [
            startTime := System _timeGmtFloat.
            profMon := ProfMonitorTree new.
            persistentDict at: persistentKey put: profMon.
            "We multiply by 4 because otherwise there are too many samples: profiling takes too long, needs a really big GEM_TEMPOBJ_CACHE_SIZE, and we may still hit AlmostOutOfMemory."
            profMon intervalNs: (profMon class computeInterval: totalCpuTime) * 4.
            profMon startMonitoring.
            [ aBlock value ] ensure: [
                "This #ensure: is very important because the closure we are profiling may raise a signal (like Seaside request processing, which uses notifications); without the #ensure: we would not write the report anywhere."
                [ endTime := System _timeGmtFloat.
                profMon stopMonitoring.
                profMon gatherResults.
                aStream
                    nextPutAll: 'Total time: ' , ((endTime - startTime) * 1000) asInteger asString;
                    cr;
                    cr.
                aStream
                    nextPutAll: (profMon reportDownTo: tallyThreshold);
                    cr.
                "Let's try to remove the results (most of the profiling memory) ASAP so that we free Gem memory."
                profMon removeResults ]
                    on: Error
                    do: [ :ex |
                        FaSmalltalkPlatform current saveExceptionContinuation: ex.
                        aStream nextPutAll: 'There was an error trying to profile: ' , ex printString ] ] ]
        threshold: 80 ]
        ensure: [
            persistentDict removeKey: persistentKey ifAbsent: [ ].
            System commit ]
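For completeness, the admin-page switch that #performAction checks is just the #shouldProfile flag in the session cache. A sketch of the kind of toggle the admin component could run (#toggleProfiling is a hypothetical selector, and I am assuming the cache answers #at:put: like a Dictionary):

    toggleProfiling
        | cache |
        cache := self session sessionCache.
        cache
            at: #shouldProfile
            put: (cache at: #shouldProfile ifAbsent: [ false ]) not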