Hi,
While processing certain requests I need to initiate a connection with a payment service to receive a key. On a subsequent request I use the key to initiate payment. However, I don't want the initial response blocked by my call to the payment service. I'd planned to use something like a Future (http://onsmalltalk.com/smalltalk-concurrency-playing-with-futures) stored in the session. However, reading http://gemstonesoup.wordpress.com/2007/05/10/porting-application-specific-seaside-threads-to-gemstone/ I'm unsure how well this will map onto GemStone's transaction model.
Thinking aloud - as the Future will only change state when the request returns, perhaps I could initiate a nested transaction when the request returns, or perhaps I could add the state change to a transaction queue. Am I thinking along the right lines? Any thoughts or pointers gratefully received.
Thanks Nick
Nick,
You are correct that it isn't a good idea to spawn threads within the VMs that are servicing Seaside requests ... GemStone doesn't support nested transactions, so you end up either delaying the response or pushing the work off onto a gem whose job is to service "long running operations." The basic model is that concurrent, transactional work in GemStone needs to be done in separate VMs.

The basic idea is that you create a separate gem that services tasks that are put into an RCQueue (multiple producers and a single consumer). The gem polls for tasks in the queue, performs the task, then finishes the task, storing the results in the task. On the Seaside side you would use an HTML redirect (WADelayedAnswerDecoration) while waiting for the task to be completed.

There are a couple of details that need to be dealt with, and they are easier to explain with an example or two. I have been threatening for a long time to put together an example of what I have been thinking about when it comes to creating a gem to service "long running operations", so I think it's about time that I built one. I should be able to have a functional example in a day or two... I want to finish GLASS 1.0-beta.8.3 first :)

Dale
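P.S. In very rough terms the shape is something like this (just a sketch - the class and selector names are illustrative, not the actual example code):

  | queue task |
  "Seaside side (producer): wrap the slow call in a task, enqueue it, then
   answer the page and let WADelayedAnswerDecoration redirect until the
   task reports that it is finished"
  task := ServiceTask block: [PaymentService requestKey].
  queue add: task.

  "service gem side (single consumer): poll the queue, perform each task,
   store the result back in the task and commit"
  [true] whileTrue: [
      queue isEmpty
          ifTrue: [(Delay forSeconds: 1) wait]
          ifFalse: [queue removeFirst performTask]]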
Hi Dale,
Thanks for clarifying. A functional example would be really helpful. One area I'm unsure about is how the RCQueue polls for tasks. Would it be possible to use, say, an inter-gem Announcement (or other cross-gem notification mechanism) to inform the "long running task gem" that a task had been added to its queue - or perhaps this is a non-issue. Anyway I'll await your example...

Thanks

Nick
Nick,

This happens to be one of the details ... There are a range of possibilities for handling the polling, but the one we think would work pretty well is incrementing a persistent shared counter: reading the counter is cheap, and the shared counter is updated independently of transaction boundaries, so a gem can spin, checking the value of the counter and comparing it to a stored value. When the gem sees that the counter has been incremented, it aborts and looks for entries in the RCQueue. There is the possibility of a race condition, where the gem sees the incremented counter before the entry is committed to the queue, but this can be accounted for relatively easily.

I've got a basic example coded up and I plan to beat on it a little bit to make sure that it isn't too fragile :)

Dale
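P.S. The spin loop looks roughly like this (a sketch - the counter object and #processEntriesInQueue are placeholders, not the example code):

  | lastSeen current |
  lastSeen := taskCounter value.
  [true] whileTrue: [
      current := taskCounter value.     "cheap read, independent of transaction boundaries"
      current > lastSeen
          ifTrue: [
              lastSeen := current.
              System abortTransaction.  "refresh our view so the new queue entries are visible"
              self processEntriesInQueue]
          ifFalse: [System _sleepMs: 100]]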
Norm raised a good point offline: persistent counters aren't needed in this case. Persistent counters preserve their value across system shutdowns, and that isn't necessary here.

Dale
Nick,
Just an update ... I'm very close to having a pretty good example. I've been hitting the system with siege this afternoon looking for vulnerabilities, and at this point it looks pretty solid (BTW, I'm using a RESTful API for scheduling tasks in the long-running-task VM, which makes it real easy to use siege to hammer the system). I've got just a little bit more work to do to clean up a couple of loose ends and then I'll be able to publish a configuration and scripts. Soooo, tomorrow I should be able to release the example...

Dale
Okay,
I've finished the example code. To load up the example, execute the following in GLASS 1.0-beta.8.3:

  MCPlatformSupport commitOnAlmostOutOfMemoryDuring: [
    [ Gofer project load: 'GsServiceVMExample' version: '1.0.0'.
      Gofer project load: 'Seaside30' group: 'Seaside-Adaptors-FastCGI'.
      "OR"
      Gofer project load: 'Seaside30' group: 'Seaside-Adaptors-Swazoo' ]
        on: Warning
        do: [:ex | Transcript cr; show: ex description. ex resume ]].

Then execute the following code to install the object log and install a stats task in the maintenance VM. The rest of the Service VM example code should have been initialized during installation:

  WAAdmin register: WAObjectLog asApplicationAt: WAObjectLog entryPointName user: 'admin' password: 'tool'.
  WAGemStoneMaintenanceTask initialize.

You need to replace the runSeasideGems30 script with this version (http://code.google.com/p/glassdb/wiki/ServiceVMExampleRunSeasideGems30Script) and add a startServiceVM30 script (http://code.google.com/p/glassdb/wiki/ServiceVMExampleStartServiceVM30Script) that starts up the Service VM (it is started by the new runSeasideGems30).

I'm in the process of writing more documentation for the example on the glassdb wiki (http://code.google.com/p/glassdb/wiki/ServiceVMExample), where I'll go into more detail about my rationale.

This morning I hammered the application using siege (I used jcrawler to flush out concurrency issues yesterday). I sieged the RESTful URLs at 500 requests/second while the Service VM was handling tasks at a rate of about 6 requests/second (this was about the limit for doing Monticello repository queries against the bibliocello site, not necessarily the processing limit for the Service VM :), so there should be plenty of headroom for growth using this approach.

The Service VM logic for handling the RCQueue is in the class WAGemStoneServiceVMTask. WAGemStoneServiceTask instances are created and dropped into the queue, then popped from the queue and executed in the Service VM (threads are forked to process each task). There's an RCQueue for the queue itself, an RCQueue that hangs onto every instance of WAGemStoneServiceTask (assuming that every instance is important), and an RCIdentityBag for keeping track of the tasks that are in process (used to prime the queue when restarting a crashed Service VM).

Transactions are wrapped around the code that removes entries from the queue and adds them to the inProcess bag. The actual task work is done outside of a transaction (waiting for an HTTP request to the bibliocello site to complete); no persistent state is being modified while waiting, so it is transaction safe (other threads will be aborting/committing while we're waiting for a response). Once the response is received, we enter back into transaction, updating the state of the task and removing the task from the inProcess bag. The transaction processing is protected by a mutex, so only one thread is allowed to abort/commit at a time. At no time during the operation of the Service VM is shared state modified outside of a transaction, and the shared structures are all RC (and conflict free as they are being used in this example).

The tasks handle errors, and each task has a built-in log that timestamps each update. I started with a very simple model and then added things as I ran into issues during load testing; I loaded up WAGemStoneServiceTask with logging, etc. to help characterize some of the issues I was running into. I added the RESTful API so I could hammer the system with siege and stress the Service VM.
There's a component-based interface that allows you to interactively walk a task through its three steps.

At this point I've got a number of ideas for improvements/cleanup, but I don't want to take away all of your fun :) Let me know if you run into trouble or have questions ...

Dale
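P.S. The per-task processing in the Service VM follows this pattern, approximately (a sketch - the variable and selector names are mine, not necessarily the ones used in WAGemStoneServiceVMTask):

  "dequeue inside a transaction, protected by the mutex"
  transactionMutex critical: [
      System beginTransaction.
      task := queue removeFirst.
      inProcessBag add: task.
      System commitTransaction].

  "do the slow work (the external HTTP request) outside of any transaction;
   no persistent state is modified while we wait"
  result := task performRequest.

  "record the result inside a transaction, again under the mutex"
  transactionMutex critical: [
      System beginTransaction.
      task result: result.
      inProcessBag remove: task.
      System commitTransaction]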
Hi Dale,
Much appreciated. I've installed your code into a fresh extent and I'm currently studying it. I'll get back to you soon with any questions that may arise.

Thanks again

Nick
Hi Dale,

In the transcript window I can see "Starting service gem", which I presume means I should be good to go... However, when I use http://localhost:8384/examples/serviceInteractive and start "Blocking perform step 1", the task stays at step 1 with the counter incrementing indefinitely. I also see in the object log that the number of items steadily increases as I start new tasks, without any ever being 'inProcess'. I've tried WAGemStoneMaintenanceTask initialize and "repriming the pump" without luck. Any thoughts? In the meantime I'll try again with a fresh extent.
Once I get the tasks up and running, is the idea that I subclass WAGemStoneServiceTask and implement #processStep?

Thanks again

Nick
BTW I've checked that the code in the steps doesn't block; e.g.

  Gofer new
    disablePackageCache;
    allResolved

returns a result.
Nick,
It sounds like the Service gem isn't running ...

1) Check the service.log (/opt/gemstone/log) to make sure that the service gem is running.
2) Make sure that WAGemStoneServiceVMTask got initialized by inspecting:

   WAGemStoneServiceVMTask tasks.

Then we'll go from there...

Dale
Hi Dale,
Yes you were spot on. I checked: /opt/gemstone/log/service_start.log and saw:
/opt/gemstone/product/seaside/bin/runSeasideGems30: line 35: /opt/gemstone/product/seaside/bin/startServiceVM30: Permission denied
I hadn't set the execute bit on startServiceVM30. I can now track the tasks moving through their steps. Fantastic. So next question: is WAGemStoneServiceTask intended for subclassing?
Thanks again

Nick
Marvelous!

As for subclassing WAGemStoneServiceTask ... I basically threw the example together to get the functionality in place. It's probably better to abstract out the things that you find useful; then we can make that part of the GemStone Seaside3.0 subsystem (and recast the example to use the abstractions). If you want to use the name WAGemStoneServiceTask, I can rename the class in the example (it's too good a name to waste on an example :).

If you register up on GemSource, I'll add you to the GLASS dev group so you can commit your abstractions to the GemStone repository.

I regret that when I originally wrote WAGemStoneMaintenanceTask I didn't use a class instance variable for the Tasks ... It's a change that I plan to make in the near future, and it will result in a change to WAGemStoneServiceVMTask. So WAGemStoneServiceVMTask is probably usable as is with that change (and maybe a couple of other tweaks ... like setting the cycle time for the task within the task class itself).

Dale
Hi Dale,

It's taken me a while to get my head around various issues and I had some fun migrating classes, but I'm finally happy with the result. I've wrapped it all up in a future. In practice the #future interface is used like:

  remoteTime := [self getRemoteTime] future

----

  html text: 'the time is: ', remoteTime value   "will block until the future has a value"

----

  (remoteTime hasValue or: [remoteTime hasError]) ifTrue: ["we have the remote value"]

----

#future is an extension method of BlockClosure:

  BlockClosure>>future
    ^ WAGemStoneFuture value: self

----
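A rough sketch of the instance side, much simplified (the committed code differs in detail; #setTask:, #block:, #isFinished and #result below are illustrative selector names rather than the real ones):

  WAGemStoneFuture class>>value: aBlock
      "wrap aBlock in a service task and drop it into the Service VM queue"
      ^ self new setTask: (WAGemStoneServiceTask block: aBlock)

  hasValue
      "answer true once the Service VM has completed the task"
      ^ task isFinished

  value
      "block (politely) until the task completes, then answer its result"
      [self hasValue or: [self hasError]] whileFalse: [System _sleepMs: 10].
      ^ task result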
Hopefully, with the Service Task gem code wrapped in a future interface, my code can continue working on Pharo as well as GemStone - although I've yet to implement the future interface on Pharo.
I decided against making WAGemStoneFuture a transparent proxy for the resulting object as, although cute, I thought it could cause unforeseen problems down the line.

I've packaged the code in the 'Seaside-GemStone-ServiceTask' package. It comprises:

  WAGemStoneFuture
  WAGemStoneServiceTask
  WAGemStoneServiceVMTask

* WAGemStoneFuture - creates the WAGemStoneServiceTask and forwards all requests to the contained task.
* WAGemStoneServiceTask - changed from the code you shipped:
    - no longer stores a collection of task instances
    - removed methods which were closely tied to your examples
    - #processTask is where the work is done
* WAGemStoneServiceVMTask - largely as you shipped; changed to call #processTask, and removed the call to the instance collection.

I've also included two examples in the package 'Future-Seaside-Examples':
FutureSeasideExampleSingle - requests a page from worldtimeserver.com as a background task and when the task completes displays the UTC time from scraping the page.
FutureSeasideExampleMultiple - creates multiple FutureSeasideExampleSingle components and sets them off in parallel requesting remote pages. The examples are registered:
examples/backgroundRequest
examples/multipleBackgroundRequest

I backed off significant reorganisation of the code, although I initially struggled a little to understand the functionality of the task gem, which is split between WAGemStoneServiceVMTask class>>serviceVMTaskServiceExample and WAGemStoneServiceTask class-side methods. I wondered if it would make sense to have an object represent the task gem and to move this code into that object's class - say WAGemStoneServiceTaskGem?
> If you register up on GemSource, I'll add you to the GLASS dev group so you can commit your abstractions to the GemStone repository.

I've registered on GemSource and am happy to commit my package. Which repository should I use?
> I regret that when I originally wrote WAGemStoneMaintenanceTask I didn't use a class instance variable for the Tasks ... It's a change that I plan to make in the near future which will result in a change for WAGemStoneServiceVMTask.

I haven't grasped how you intend to change this. I don't know if these changes are vital.
It would be great if you have a chance to look over the code and feed back any thoughts...

Thanks

Nick
Nick,
Excellent! Sounds pretty cool ... I suppose I should wait and look at the code and see how it all functions, but the point of the original example was to do a non-blocking call to the service gem, thereby allowing the Seaside gem to continue to process HTTP requests (safely) while the external call was being processed. Unless I misunderstand things (entirely possible :), the #future call is a blocking call...

Nonetheless, I am interested in seeing your code and playing around a little bit, so I think we should add the code to the Seaside3.0 repository:

  http://seaside.gemstone.com/ss/Seaside30

I've added you to the GLASS DEVS group and added the GLASS DEVS group to Seaside3.0.

Regarding WAGemStoneServiceVMTask and WAGemStoneMaintenanceTask... the WAGemStoneMaintenanceTask class was intended to provide a framework for adding/controlling a list of tasks that are performed at regular intervals by the maintenance VM. I wanted users to be able to add/remove tasks ... the SqueakSource project needs several periodic tasks to be performed, and the frequency/density of tasks in the maintenance VM meant that those tasks could be handled by the maintenance VM. When it came time to add the Service example, I simply subclassed WAGemStoneMaintenanceTask and went my merry way ... Now that there is a second use case for WAGemStoneMaintenanceTask, it is more obvious which direction the refactoring could/should go.

Once you get your code up on the site, I'll be able to update my example, get a handle on what you are thinking, and then go from there ...

Thanks,

Dale
Dale,
> I suppose I should wait and look at the code and see how it all functions, but the point of the original example was to do a non-blocking call to the service gem, thereby allowing the Seaside gem to continue to process HTTP requests (safely) while the external call was being processed. Unless I misunderstand things (entirely possible :), the #future call is a blocking call...

Absolutely - you use #hasValue before the blocking #value. It should all be clear from the examples: the HTTP requests are started in the background and don't block the user's request.
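For instance, a component can render without blocking along these lines (a simplified sketch; the actual example components differ):

  initialize
      super initialize.
      remoteTime := [self getRemoteTime] future

  renderContentOn: html
      (remoteTime hasValue or: [remoteTime hasError])
          ifTrue: [html text: 'the time is: ', remoteTime value]
          ifFalse: [
              html text: 'still fetching the remote time... '.
              "re-render on demand; a timed refresh would work equally well"
              html anchor callback: []; with: 'check again']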
> Nonetheless, I am interested in seeing your code and playing around a little bit, so I think we should add the code to the Seaside3.0 repository.

I've copied the packages across to GemStone's Seaside30 repository:

  Seaside-GemStone-ServiceTask-NickAger.14
  Future-Seaside-Examples-NickAger.3

Preload:

  Gofer project load: 'Seaside30' group: #('Seaside-Adaptors-Swazoo' 'Base')
I'm afraid I haven't created a Metacello ConfigurationOfXXXX.

> Once you get your code up on the site, I'll be able to update my example, get a handle on what you are thinking, and then go from there ...

Great - I look forward to the feedback.
Nick
Nick,
Great... I'll handle the configuration issues ... I'm imagining that it will be included in the standard configuration for GemStone ... My example used Seaside-REST, which is an add-on and therefore needs a separate config ... It'll be a day or two before I get to this... I have a number of balls in the air right now :)

Dale
Dale,
I've updated the ServiceTask package in GemStone's Seaside30 repository and created an equivalent future interface for Pharo. In doing so I had to modify the examples slightly. The Pharo version and the updated examples are available at: http://www.squeaksource.com/Futures. The updated examples work in both Pharo and GemStone.
When testing the code a little more, I noticed that if I didn't check that #hasValue returned true before calling #value, the code would become stuck in an endless loop. I've fixed the problem with the following code in #value:

  totalWait := 0.
  [totalWait < 2000 and: [(self hasValue or: [self hasError]) not]] whileTrue: [
      System _sleepMs: 10.
      totalWait := totalWait + 10.
      GRPlatform current doAbortTransaction].

Could performing an abort within a request cause problems?

I've also noticed that if I execute the following within a request, I see the result I expect:
  (([(HTTPSocket httpGet: 'http://www.worldtimeserver.com/current_time_in_UTC.aspx') contents] future) value) size

However, if I execute the same code within a workspace, the background task never appears to return a result.
Nick
> Could performing an abort within a request cause problems?

We don't want to abort in the middle of handling a request - we would lose all of the persistent changes that were made up to that point. This is an area where we should perhaps use the shared counters, since they are atomically updated and are non-transactional...

> However if I execute the same code within a workspace the background task never appears to return a result.

Basically, in GemTools you are running two OS processes: a gem process (running on the stone's machine) and the client Smalltalk process. The way that GemTools is currently set up, when the client Smalltalk has control, the gem is blocked (so forked processes don't get a chance to run)... For an explanation of what might be happening, see http://gemstonesoup.wordpress.com/2009/04/15/glass-beta-update-working-with-soap-preview/#GsProcesses. The section "SOAP/HTTP Server Development Tips" has information about things you can do to work with forked processes and GemTools.

Dale
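P.S. In outline the shared-counter approach would look something like this (treat the counter selectors as assumptions to be checked against the System class - I'm quoting them from memory):

  "service gem, after committing the finished task:"
  System sharedCounter: taskCounterIndex incrementBy: 1.

  "Seaside gem, during the request - no abort required:"
  (System sharedCounter: taskCounterIndex) > lastSeenValue
      ifTrue: [
          "the task has committed its result; the result itself becomes
           visible to this session at the next commit/abort, e.g. at the
           request boundary"]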
Dale,
Makes sense. The problem is: if the service gem completes during a request, is there any way to receive the updated value without performing an abort - which, as you point out, we don't want to do as we'd lose all the persistent changes made up to that point? Would a shared counter help in this scenario?
If receiving updates from another gem during a request is not possible, then I can work around that limitation. I could modify the Pharo version to work in a similar manner.

Thanks

Nick