Committing transactions

Committing transactions

hernan.wilkinson
Hi Dale, James,
 I think we talked about this, but I do not exactly recall the answer... so, here goes the question again :-)
 Is there a problem with committing a transaction from code called by Seaside? I remember that GLASS does a beginTransaction (or abortTransaction) when it receives an HTTP request and then a commitTransaction when done, so I think it should not be a problem to do a commit and begin in code called by Seaside, but I wanted to confirm that...

 Thanks,
 Hernan.
Re: Committing transactions

Dale
Extra commits while processing seaside transactions are not normally a good idea... There are a couple of potential problems:

  - what should happen if the intermediate commit fails?
  - what should happen if the intermediate commit succeeds, but the final commit
    fails?
  - will the intermediate commit save partial session state that confuses other
    sessions?

There are probably other potential problems as well...

Having said that, doing a commit is infinitely safer than doing an abort part way through. My biggest concern is about saving partial session state and then failing the commit that completes the session state. If I were to do this, I would think about setting some kind of "poison pill" in the session before the intermediate commit so that if the final commit does fail, the session will not be usable anymore and partial session state would be less of a concern.
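
A rough sketch of that "poison pill" idea in GemStone Smalltalk (#poisoned: and #handleFailedCommit are hypothetical selectors you would add to your own session class, not Seaside or GLASS API):

```smalltalk
"Mark the session before the risky intermediate commit; only a fully
 successful final commit clears the mark. If the final commit fails,
 the session stays poisoned and should refuse to service requests."
session poisoned: true.
System commitTransaction
	ifFalse: [ ^ self handleFailedCommit ].
System beginTransaction.

"... finish computing the rest of the session state ..."

System commitTransaction
	ifTrue: [ session poisoned: false ]
	ifFalse: [ ^ self handleFailedCommit "session remains poisoned" ].
```

Note that clearing the mark after the final commit is itself an uncommitted change, so it would have to be committed as part of handling the next request.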

The other area of concern is that when you commit, all of the persistent data is refreshed to the current "view". So if you've done partial calculations based on the earlier "view" you may have some inconsistencies to deal with.

Finally, Seaside runs in manual transaction mode, so you'll need to do a beginTransaction immediately after your commit, which opens a window during which "interesting things can happen," especially under heavy load.
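
In code, that manual-transaction dance looks roughly like this (a sketch using GemStone's System protocol; real error handling elided):

```smalltalk
"commitTransaction ends the current transaction; in manual transaction
 mode nothing is in progress until beginTransaction opens the next one.
 The gap between the two sends is the 'window': other sessions' commits
 become visible, and your view of persistent state can shift."
System commitTransaction
	ifFalse: [ self error: 'intermediate commit failed' ].
"<-- the window: no transaction is open here"
System beginTransaction.
```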

I would advise you to think long and hard about alternative approaches before going with intermediate commits, not to mention doing lots of heavy load testing (many concurrent sessions for long periods of time). If your load testing shows no ill effects, then you can probably get away with it.


Dale
Re: Committing transactions

hernan.wilkinson
Hi Dale!
 thank you for the answer. How would you deal with a case where the system has to do a transaction in another system and then a commit on GemStone (we don't have 2-phase commit...)?
 For example, we do a debit on a credit card using a merchant processor and then add that debit to a collection of debits, so that collection is a point of conflict... if we have a conflict adding the debit to the collection we cannot redo the debit; the customer would see more than one debit in his credit card statement...
 The collection is an RcIdentityBag, so we should not get conflicts when adding objects of different identity, is that right?
 btw, when you say "which opens a window during which "interesting things can happen," especially under heavy load" you scared me, man!... hehe, could you be more specific about the "interesting things"?

 Thanks!
 Hernan.

Re: Committing transactions

Dale
In reply to this post by hernan.wilkinson

----- "Hernan Wilkinson" <[hidden email]> wrote:

| Hi Dale!
|  thank you for the answer. How would you deal with a case where the system
| has to do a transaction in another system and then a commit on GemStone
| (we don't have 2-phase commit...)?
|  For example, we do a debit on a credit card using a merchant processor and
| then add that debit to a collection of debits, so that collection is a point
| of conflict... if we have a conflict adding the debit to the collection we
| cannot redo the debit; the customer would see more than one debit in
| his credit card statement...
|  The collection is an RcIdentityBag, so we should not get conflicts when
| adding objects of different identity, is that right?

That is right ... the RcIdentityBag is the right collection to use.

The trick is that you have to isolate your update to the RcIdentityBag so that only the RcIdentityBag is subject to commit conflicts. While handling an HTTP request other objects are going to be dirtied by the time you get to the point where you do your credit card transaction.

The Seaside session information will be updated by the time you are running code in your component. The Seaside session information is protected by a write lock (on the session object), so there should be no commit conflicts from the Seaside session information itself. I've not seen Seaside session conflicts in my testing, but I would be hard-pressed to say that they would _never_ happen. I would be comfortable saying that they _shouldn't_ happen :).

Your application may have updated your business model objects while handling the request, and those modifications will be subject to commit conflicts, too.

To get the proper isolation you will need to do a commit right before modifying the RcIdentityBag (to isolate yourself from potential commit conflicts due to modified objects not directly involved in this transaction) and another commit once you've updated the bag (to seal the deal).
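
Sketched in code, that double-commit isolation might look like this (`debits` stands for the persistent RcIdentityBag and `debit` for the newly created object; #handleCommitFailure is a hypothetical placeholder):

```smalltalk
"First commit: flush all the unrelated dirty objects, so that the next
 transaction contains nothing but the conflict-free Rc add."
System commitTransaction ifFalse: [ ^ self handleCommitFailure ].
System beginTransaction.

debits add: debit.	"the only modification in this transaction"

"Second commit: 'seals the deal' -- only the Rc add is at stake."
System commitTransaction ifFalse: [ ^ self handleCommitFailure ].
System beginTransaction.
```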

If any business objects might be modified while doing the credit card transaction, you will want to obtain a write lock on a "sentinel object" to ensure that no other sessions will modify the business objects involved while you are processing the credit card transaction. By doing so you are basically guaranteeing that the second commit will succeed.
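
The sentinel-lock idea might be sketched like this (`Sentinel` stands for whatever well-known persistent object your sessions agree to lock on, and #performCreditCardDebit is a hypothetical application method; treat the lock selectors as a sketch of GemStone's System locking protocol):

```smalltalk
"Hold a write lock on the agreed-upon sentinel so no other session can
 commit changes to the involved business objects while the card is
 being charged; release the lock no matter what happens."
System writeLock: Sentinel.
[ self performCreditCardDebit.
  debits add: debit.
  System commitTransaction
	ifFalse: [ self error: 'commit failed despite sentinel lock' ] ]
	ensure: [ System removeLock: Sentinel ].
```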

I've not done much load testing under this scenario, so I'm not sure what gotchas you might run into ... I would worry more about the fact that the commit right before the HTTP response is returned might fail which would mean that the WASession object would be left in an odd state. Also, with the "auto retry" of the HTTP request you will have to distinguish between the initial request and a retry request that was partially completed (i.e., the credit card debit performed) ....

Ideally, the "order processing" transaction and the "http request" transaction would be isolated from each other. Instead of embedding the "order processing" transaction in the seaside gem, I'd be inclined to have the seaside gem submit an "order" to an RcQueue ... if the commit to the queue succeeds the user can be provided with an "order number" while the credit card transaction is being processed and you can arrange to poll until the "order number" has completed processing (either successful debit, or failed order).
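
On the Seaside side, the submission might look like this (`OrderQueue` is a hypothetical well-known RcQueue, `Order` an illustrative domain class, and the component methods are placeholders):

```smalltalk
"Enqueue the order; if the commit succeeds, hand the order number to
 the user and let the UI poll for completion."
| order |
order := Order for: cart.	"illustrative constructor"
OrderQueue add: order.		"RcQueue adds are conflict-free"
System commitTransaction
	ifTrue: [ self showStatusPageFor: order orderNumber ]
	ifFalse: [ self askUserToResubmit ].
```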

There would be an "order processing gem" (or even a set of gems) that takes orders off of the queue and does all of the necessary operations to fulfill the order. Each gem would process one order at a time (multiple gems for concurrent order handling) and be responsible for updating the objects involved in fulfilling the order. The advantage here is that you are in complete control of all of the order processing code, including all of the transaction logic, so you are better able to guarantee conflict-free operations.
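
A hypothetical main loop for such an order-processing gem (a sketch; the back-off and the Order selectors are illustrative, and RcQueue's remove is assumed to hand back the oldest entry):

```smalltalk
"Each pass starts from a fresh view, takes one order off the queue,
 fulfills it, and commits. An uncommitted removal is undone by the
 abort at the top of the next pass, putting the order back in play."
[ true ] whileTrue: [
	System abortTransaction.	"refresh to the latest view"
	OrderQueue isEmpty
		ifTrue: [ self waitBriefly ]	"hypothetical back-off"
		ifFalse: [
			| order |
			order := OrderQueue remove.	"oldest entry"
			order fulfill.	"debit the card, update the model"
			System commitTransaction
				ifFalse: [ self logFailureFor: order ] ] ].
```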

|  btw, when you say "which opens a window during which "interesting things
| can happen," especially under heavy load." you scared me man!... hehe, could
| you be more specific about the "interesting things"?

Haha, I intended for you to have a healthy respect for that window. In manual transaction mode, time passes between the commit (which ends the previous transaction) and the beginTransaction (which starts the next). Imagine hundreds of transactions occurring between those two statements. It's a given that the persistent objects might well change, but you also must be aware that temporary variables that were calculated based on the values of persistent objects will possibly be incorrect.

In Seaside, the session lock will be dropped upon the commit, which means that another browser could access the Seaside session and do possibly interesting things "while you were away." Perhaps the user got tired of waiting and resubmitted their order with another request or in another browser?...You can modify the framework so that the lock isn't dropped, but as soon as you start pulling on the "modify framework thread" you really don't know how far you'll have to go.

The key is that while you are in transaction you can be assured that you are dealing with a completely consistent view of the object graph ... as soon as you cross a transaction boundary you have to be careful about what you think you know.

In the end, I'd be inclined to recommend that you use the queue approach. From the Seaside/HTTP request perspective, either the order was submitted for processing or not. If not, then you can try to submit the order again. If the order was submitted, then you can poll for order status ... On the order handling side, you can arrange to do your initial abort, remove the item from the queue, do your order processing logic, update the order status and commit ... If retries are necessary, or other special business logic, you can do aborts, etc. without worrying about juggling all of these things while processing an HTTP request and how it might impact the Seaside session.

Finally, keep in mind that I'm super conservative when it comes to these things. I like to have deterministic, testable processes in place and the separate "order handling gem" satisfies that desire....

Dale
Re: Committing transactions

hernan.wilkinson
Hi Dale
 I really appreciate your answer; it has some really good ideas.
 The problem with the queue is that sometimes when authenticating credit cards we need to do something called 3-D Secure, so we need to redirect the user to another page, and when the user authenticates, the credit card server will send back a new HTTP request to our app... hmm, maybe we can queue those requests too...
 Well, a lot to think about... the truth is that only two "shared" collections are modified during the credit card authentication, because the other created objects (like payment or failedPayment or purchase, etc.) are local to the session... so I think I will do some heavy testing to make sure we do not have commit problems under heavy load with the code we have right now, and in the meantime think about the queue architecture you propose.

 Thanks!
 Hernan.

Re: Committing transactions

otto
Hi,

We found that the Rc classes become quite slow as they get bigger. We also built indexes on our collections, which we could not do on the Rc classes. So we built a new "wrapper" collection class that has an Rc object "in front" of an IdentitySet. When adding items to the new collection type, it adds them to the Rc object. When the Rc object reaches a certain size, we lock the set and flush all objects from the Rc object into the internal set. This removed conflicts entirely where we have a number of "producers" adding items to the collection. We keep the objects in the collection (indefinitely), only changing state on an object to indicate whether it's been "processed" or whatever.
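
The wrapper Otto describes might be sketched as two methods on a hypothetical FrontedIdentitySet class with instance variables front (an RcIdentityBag) and contents (a plain IdentitySet); the class, selectors, threshold, and locking details are all illustrative:

```smalltalk
add: anObject
	"Producers add through the Rc front, which is conflict-free."
	front add: anObject.
	front size > self flushThreshold ifTrue: [ self flush ]

flush
	"Lock the wrapper, then drain the Rc front into the plain set in
	 one transaction so readers and indexes see the internal set."
	System writeLock: self.
	[ front do: [ :each | contents add: each ].
	  front copy do: [ :each | front remove: each ].
	  System commitTransaction ]
		ensure: [ System removeLock: self ]
```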

Let me know if this may help.

Cheers
Otto

Re: Committing transactions

Dale
Otto,

I think that what you've done with the "fronting" collection is a good idea, but I do want to mention that in 2.x we did introduce Rc indexes, so you can put an index on an RcIdentityBag and have it updated in an Rc fashion...

Rc collections work their magic by recording every add and remove in a playback log. In the case of a conflict, the collection is selectively aborted and the adds and removes are replayed from the playback log ... so the additional overhead of keeping the playback log can slow things down ...

Dale
----- "Otto Behrens" <[hidden email]> wrote:

| Hi,
|
| We found that the Rc classes become quite slow when they become
| bigger. We built indexes on our collections as well, which we could
| not do on Rc classes. So we built a new "wrapper" collection class
| that has an Rc object "in front" with a IdentitySet. When adding
| items
| to the new collection type, it will add them to the Rc object. When
| the Rc object reaches a certain size, we lock the set we flush all
| objects from the Rc object into the internal set. This removed
| conflicts entirely where we have a number of "producers" that add
| items to the collection. We keep the objects in the collection
| (indefinitely), only changing state on the object to indicate if it's
| been "processed" or whatever.
|
| Let me know if this may help.
|
| Cheers
| Otto
|
| On Thu, Mar 4, 2010 at 2:58 AM, Hernan Wilkinson
| <[hidden email]> wrote:
| > Hi Dale
| >  I really appreciate your answer, it has really good ideas.
| >  The problem with the queue is that sometime when authenticating
| credit
| > cards we need to do something called 3d secure, so we need to
| redirect the
| > user to another page and when the user authenticates, the credit
| card server
| > will send back a new http request to our app...  hmm maybe we can
| queue
| > those request too...
| >  Well, a lot to think about this... the truth is that only two
| "shared"
| > collections are modified during the credit card authentication
| because the
| > other created objects (like payment or failedPayment or purchase,
| etc) are
| > local to the session... so I think I will do some heavy test on
| making sure
| > we do not have commit problems on heavy load under the code we have
| right
| > now and in the meantime thing about the queue architecture you
| propose.
| >
| >  Thanks!
| >  Hernan.
| >
| > On Wed, Mar 3, 2010 at 5:16 PM, Dale Henrichs
| <[hidden email]>
| > wrote:
| >>
| >> ----- "Hernan Wilkinson" <[hidden email]> wrote:
| >>
| >> | Hi Dale!
| >> |  thank you for the answer. How would you deal with a case were
| the
| >> system
| >> | has to do a transaction in other system and then a commit on
| gemstone
| >> | (we dont have 2 phase commit...)
| >> |  For example, we do a debit on a credit card using a merchant
| processor
| >> and
| >> | the add that debit to a collection of debits, so that collection
| is a
| >> point
| >> | of conflicts... if we have a conflict adding the debit to the
| collection
| >> we
| >> | can not redo the debit, the customer would see more that one
| debit in
| >> | hes credit card statement...
| >> |  The collection is a RcIdentityBag, so we should not get
| conflicts when
| >> | adding objects of different identity, is that right?
| >>
| >> That is right ... the RcIdentityBag is the right collection to use.
| >>
| >> The trick is that you have to isolate your update to the RcIdentityBag
| >> so that only the RcIdentityBag is subject to commit conflicts. While
| >> handling an HTTP request, other objects are going to be dirtied by the
| >> time you get to the point where you do your credit card transaction.
| >>
| >> The Seaside session information will be updated by the time you are
| >> running code in your component. The Seaside session information is
| >> protected by a write lock (on the session object), so there should be
| >> no commit conflicts from Seaside session information itself. I've not
| >> seen Seaside session conflicts in my testing, but I would be
| >> hard-pressed to say that they would _never_ happen. I would be
| >> comfortable saying that they _shouldn't_ happen :).
| >>
| >> Your application may have updated your business model objects while
| >> handling the request, and those modifications will be subject to
| >> commit conflicts, too.
| >>
| >> To get the proper isolation you will need to do a commit right before
| >> modifying the RcIdentityBag (to isolate yourself from potential commit
| >> conflicts due to modified objects not directly involved in this
| >> transaction) and another commit once you've updated the bag (to seal
| >> the deal).
| >>
| >> If any business objects might be modified while doing the credit card
| >> transaction, you will want to obtain a write lock on a "sentinel
| >> object" to ensure that no other sessions will modify the business
| >> objects involved while you are processing the credit card transaction.
| >> By doing so you are basically guaranteeing that the second commit will
| >> succeed.
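[A rough sketch of that recipe in GemStone Smalltalk. The `debits` bag, `sentinel`, and the failure handlers are invented names for illustration; `System commitTransaction` / `beginTransaction` / `writeLock:` / `removeLock:` are the usual GemStone calls, but treat this as an untested sketch, not a definitive implementation:]

```smalltalk
"Commit first so earlier dirtied objects are out of the way and only
 the RcIdentityBag update is at stake in the next transaction."
System commitTransaction
    ifFalse: [ ^self handleFirstCommitFailure ].
System beginTransaction.

"Lock a sentinel so no other session touches the involved objects
 while the credit card transaction is in flight."
System writeLock: sentinel.
[ self debitCreditCardViaMerchantProcessor.
  debits add: newDebit.    "debits is an RcIdentityBag"
  System commitTransaction
      ifFalse: [ ^self handleFinalCommitFailure ] ]
    ensure: [ System removeLock: sentinel ].
System beginTransaction.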
| >>
| >> I've not done much load testing under this scenario, so I'm not sure
| >> what gotchas you might run into ... I would worry more about the fact
| >> that the commit right before the HTTP response is returned might fail,
| >> which would mean that the WASession object would be left in an odd
| >> state. Also, with the "auto retry" of the HTTP request you will have
| >> to distinguish between the initial request and a retry request that
| >> was partially completed (i.e., the credit card debit performed) ....
| >>
| >> Ideally, the "order processing" transaction and the "http request"
| >> transaction would be isolated from each other. Instead of embedding
| >> the "order processing" transaction in the seaside gem, I'd be inclined
| >> to have the seaside gem submit an "order" to an RcQueue ... if the
| >> commit to the queue succeeds the user can be provided with an "order
| >> number" while the credit card transaction is being processed, and you
| >> can arrange to poll until the "order number" has completed processing
| >> (either successful debit, or failed order).
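[The submit-and-poll flow might look roughly like this on the Seaside side. `Order`, `OrderQueue`, and the selectors on them are invented names; `add:` on an RcQueue is the standard enqueue, but consider this a sketch:]

```smalltalk
"In the Seaside gem: enqueue the order; only the queue is committed."
order := Order for: cart.
OrderQueue default add: order.    "OrderQueue default is an RcQueue"
System commitTransaction
    ifTrue: [ self inform: 'Your order number is ', order number printString ]
    ifFalse: [ self inform: 'Submission failed, please try again' ].
System beginTransaction.

"On subsequent requests, poll until the order has been processed."
order isProcessed
    ifTrue: [ self showReceiptFor: order ]
    ifFalse: [ self showStillProcessingPageFor: order ]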
| >>
| >> There would be an "order processing gem" (or even a set of gems) that
| >> take orders off of the queue and do all of the necessary operations to
| >> fulfill the order. Each gem would process one order at a time
| >> (multiple gems for concurrent order handling) and be responsible for
| >> updating the objects involved in fulfilling the order. The advantage
| >> here is that you are under complete control of all of the order
| >> processing code, including all of the transaction logic, so you are
| >> better able to guarantee conflict-free operations.
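[And the consuming side: a hypothetical main loop for an order-processing gem, with all application names invented. The abort at the top of each iteration refreshes the gem's view of the queue; whether the queue is empty is checked before removing:]

```smalltalk
"Order-processing gem: take one order at a time off the queue."
[ true ] whileTrue: [
    System abortTransaction.          "refresh this gem's view of the queue"
    queue := OrderQueue default.      "an RcQueue; this gem is its only consumer"
    queue isEmpty
        ifTrue: [ System sleep: 1 ]
        ifFalse: [
            order := queue remove.
            self authorizeCreditCardFor: order.   "slow external call"
            order markProcessed.
            System commitTransaction
                ifFalse: [ self retryOrFail: order ] ] ]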
| >>
| >> |  btw, when you say "which opens a window during which "interesting
| >> | things can happen," especially under heavy load." you scared me
| >> | man!... hehe, could you be more specific about the "interesting
| >> | things"?
| >>
| >> Haha, I intended for you to have a healthy respect for that window. In
| >> manual transaction mode, time passes between the commit (which ends
| >> the previous transaction) and the beginTransaction (which starts the
| >> next). Imagine hundreds of transactions occurring between those two
| >> statements. It's a given that the persistent objects might well
| >> change, but you must also be aware that temporary variables that were
| >> calculated from the values of persistent objects will possibly be
| >> incorrect.
| >>
| >> In Seaside, the session lock will be dropped upon the commit, which
| >> means that another browser could access the Seaside session and do
| >> possibly interesting things "while you were away." Perhaps the user
| >> got tired of waiting and resubmitted their order with another request
| >> or in another browser?... You can modify the framework so that the
| >> lock isn't dropped, but as soon as you start pulling on the "modify
| >> framework thread" you really don't know how far you'll have to go.
| >>
| >> The key is that while you are in transaction you can be assured that
| >> you are dealing with a completely consistent view of the object graph
| >> ... as soon as you cross a transaction boundary you have to be careful
| >> about what you think you know.
| >>
| >> In the end, I'd be inclined to recommend that you use the queue
| >> approach. From the Seaside/HTTP request perspective either the order
| >> was submitted for processing or not. If not, then you can try to
| >> submit the order again. If the order was submitted, then you can poll
| >> for order status ... On the order handling side, you can arrange to do
| >> your initial abort, remove the item from the queue, do your order
| >> processing logic, update the order status and commit ... If retries
| >> are necessary, or other special business logic, you can do aborts,
| >> etc. without worrying about juggling all of these things while
| >> processing an HTTP request and how it might impact the seaside
| >> session.
| >>
| >> Finally, keep in mind that I'm super conservative when it comes to
| >> these things. I like to have deterministic, testable processes in
| >> place and the separate "order handling gem" satisfies that desire....
| >>
| >> Dale
| >
| >

Re: Commiting transactions

otto
> I think that what you've done with the "fronting" collection is a good idea, but I do want to mention that in 2.x we did introduce rc indexes, so you can put an index on an RcIdentityBag and have it updated in an RC fashion...
>

Grand! I did not know this, still coming from 32 bit. How about size?
Will you put a million objects in one?

> Rc collections work their magic by recording in a playback log every add and remove. In the case of a conflict the collection is selectively aborted and the adds and removes are replayed from the playback log ... So the additional overhead of keeping the playback log can slow things down ...

Very fancy. I suppose this will matter when the probability of conflicts
is very high? If we build our system with seaside, we'll have to make
sure that service calls are kept short, because the transaction starts
at the beginning of every call?

Don't you think that doing intermediate commits / aborts can
technically cause integrity faults? I always understood that
autonomous transactions were good because the operation (all changes in
a transaction) either succeeds or does not. A commit / abort (even if
partial) may cause half of a logical transaction to be committed.

Perhaps there is a requirement to distinguish between logical and
"technical" transactions?

>
> Dale
> ----- "Otto Behrens" <[hidden email]> wrote:
>
> | Hi,
> |
> | We found that the Rc classes become quite slow when they become
> | bigger. We built indexes on our collections as well, which we could
> | not do on Rc classes. So we built a new "wrapper" collection class
> | that has an Rc object "in front" of an IdentitySet. When adding items
> | to the new collection type, it will add them to the Rc object. When
> | the Rc object reaches a certain size, we lock the set and flush all
> | objects from the Rc object into the internal set. This removed
> | conflicts entirely where we have a number of "producers" that add
> | items to the collection. We keep the objects in the collection
> | (indefinitely), only changing state on an object to indicate whether
> | it's been "processed" or whatever.
> |
> | Let me know if this may help.
> |
> | Cheers
> | Otto
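[For readers who want the shape of it, Otto's wrapper might look something like this. All names here are invented, and the flush threshold and lock-failure handling would need real care; a sketch only:]

```smalltalk
add: anObject
    "Producers add to the Rc front; the RcIdentityBag absorbs
     concurrent adds without commit conflicts."
    front add: anObject.
    front size > self flushThreshold ifTrue: [ self flushFront ]

flushFront
    "Drain the Rc front into the plain (indexable) IdentitySet, under
     a write lock so only one session flushes at a time."
    System writeLock: set.
    [ front asArray do: [ :each |
          set add: each.
          front remove: each ] ]
        ensure: [ System removeLock: set ]
```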
> |
> | On Thu, Mar 4, 2010 at 2:58 AM, Hernan Wilkinson
> | <[hidden email]> wrote:
> | > Hi Dale
> | >  I really appreciate your answer, it has really good ideas.
> | >  The problem with the queue is that sometime when authenticating
> | credit
> | > cards we need to do something called 3d secure, so we need to
> | redirect the
> | > user to another page and when the user authenticates, the credit
> | card server
> | > will send back a new http request to our app...  hmm maybe we can
> | queue
> | > those request too...

--
www.FinWorks.biz
+27 82 809 2375

Re: Commiting transactions

Dale
In reply to this post by hernan.wilkinson

----- "Otto Behrens" <[hidden email]> wrote:

| > I think that what you've done with the "fronting" collection is a
| > good idea, but I do want to mention that in 2.x we did introduce rc
| > indexes, so you can put an index on an RcIdentityBag and have it
| > updated in an RC fashion...
| >
|
| Grand! I did not know this, still coming from 32 bit. How about size?
| Will you put a million objects in one?

I've run tests with over a million objects in the RC indexed collections ... the full test suite on million element collections runs for several days:)

In fact, now that you mention it, I think that with very large RcIdentityBags you are better off accessing the elements through the index API... some operations (like #do:) on RcIdentityBags send #_asIdentityBag, which creates a local copy of the bag... pretty nasty for very large bags ...

|
| > Rc collections work their magic by recording in a playback log every
| add and remove. In the case of a conflict the collection is
| selectively aborted and the adds and removes are replayed from the
| playback log ... So the additional overhead of keeping the playback
| log can slow things down ...
|
| Very fancy. I suppose this will matter when the probability of
| conflicts is very high? If we build our system with seaside, we'll
| have to make sure that service calls are kept short, because the
| transaction starts at the beginning of every call?

Yes, that's another reason why I think that passing things off via a queue to another gem for processing is a good idea. If there are delays in processing, the http gems won't be directly affected.

|
| Don't you think that doing intermediate commits / aborts can
| technically cause integrity faults? I always understood that
| autonomous transactions were good because the operation (all changes
| in a transaction) either succeeds or does not. A commit / abort (even
| if partial) may cause half of a logical transaction to be committed.

Aborts are absolutely a bad idea from within Seaside ... the universe will collapse pretty quickly. The thing that makes commits almost okay is if you make sure that your business logic is isolated in a single transaction. The commit in the middle of handling a Seaside request does result in partial data being committed for the Seaside session state, but sessions are protected by a write lock on the session object itself, so concurrent access is not an issue. However, if the Seaside operation cannot be committed for some reason, then I _am_ very concerned that the affected session will be effectively corrupted. I think I've mentioned that a session that has been partially committed needs to have a poison pill embedded in it that invalidates the session completely if something happens to prevent the final Seaside-state commit from succeeding.
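[A sketch of that poison-pill idea. The `poisoned:` flag and `invalidate` are invented selectors on the session, not Seaside API; the point is only the ordering around the two commits:]

```smalltalk
"Mark the session as provisional before the intermediate commit."
self session poisoned: true.
System commitTransaction.
System beginTransaction.

self performCreditCardTransaction.

"Only the final commit clears the pill; if it fails, the session
 stays poisoned and should refuse all further requests."
self session poisoned: false.
System commitTransaction
    ifFalse: [ self session invalidate ]
```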

|
| Perhaps there is a requirement to distinguish between logical and
| "technical" transactions?

GemStone itself cannot tell the difference between the logical and "technical" commits, so it is up to the application programmers to manage things ...

I feel that the cleanest approach is to push that work into a separate gem where the transactions and business-state can be tightly controlled without introducing HTTP session-state into the mix.

Dale

Re: Commiting transactions

hernan.wilkinson
Hi Otto, Dale,
 thank you for your comments. I agree that having a separate gem is a better architecture, but we still have potential conflicts... I mean, having just one gem doing credit card validation could be a bottleneck... merchant processors take their time to respond... so, if we want to have more than one gem doing credit card validation we have the same issue as before... I need to think more about this...
 Hmm, I did not know about the Rc collections being slow; we will check that too. Thanks for the comment and suggestion, Otto.


Re: Commiting transactions

Dale
In reply to this post by hernan.wilkinson
Hernan,

You _can_ have multiple gems doing credit card transactions ... Just like multiple gems can serve Seaside requests ... the important thing is that a single gem should have a consistent transaction model ...

To distribute work to multiple gems, you wouldn't want to use an RcQueue (it is designed to have a single consumer). An OrderedCollection and a writeLock can be used just as easily (if not easier, since the writeLock is non-transactional). You would add as many credit card transaction gems as you have concurrent credit card requests ...
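[A sketch of that multi-consumer hand-off. `pending` is a persistent OrderedCollection shared by the order gems; `nextOrder` and the surrounding class are invented names. The write lock is non-transactional, which is what lets several gems safely consume from one plain collection:]

```smalltalk
nextOrder
    "Take the first pending order under a non-transactional write lock,
     so several order gems can consume from the same collection."
    | order |
    System writeLock: pending.
    [ order := pending isEmpty
          ifTrue: [ nil ]
          ifFalse: [ pending removeFirst ].
      System commitTransaction ]
        ensure: [ System removeLock: pending ].
    System beginTransaction.
    ^order
```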

Getting the infrastructure right is a little bit of work, but doing commits in the middle of Seaside request handling has its own issues...

Remember that slow is a relative term ... you should measure performance for yourself.... I've done Seaside benchmarks with thousands of commits per second, and several of the Seaside data structures are Rc....

Dale

Re: Commiting transactions

hernan.wilkinson


On Fri, Mar 5, 2010 at 7:03 PM, Dale Henrichs <[hidden email]> wrote:
Hernan,

You _can_ have multiple gems doing credit card transactions ...

yes, yes, I know that. What I tried to say is that, because of our design, having many gems doing credit card transactions will have the same issues as we have right now, where many gems are handling seaside requests and doing credit card transactions; that is why I said we should change our design a little.

Just like multiple gems can serve Seaside requests ... the important thing is that a single gem should have a consistent transaction model ...

That's right, that is what I tried to say.

Bye,
Hernan.
 
