I am a bit out of ideas about the right way to handle optimistic locking failures.
I've been using a (patched version of) #commitUnitOfWorkAndContinue and #rollbackUnitOfWorkAndContinue for a few years now, because we have a complex model in which lots of objects need to be updated and traversed on many occasions. The performance trick is to re-register all registered objects from the currently committed or rolled-back transaction in a newly started one. The reason is that we cannot reload a few thousand objects each time a user clicks something.

This works pretty well under the assumption that there is always a maximum of one user who modifies data. Let's call her User A. However, we get into trouble as soon as a second user opens a session and also changes data that is currently registered in User A's session. Let's look at this simple scenario:

- User A loads a bunch of objects
- User B loads a bunch of objects, including objects that User A has loaded
- User A commitAndContinues a couple of objects
- User B also commits a couple of objects, one of which has just been updated by User A
The usual reaction would be to present User B with an error message that says: please refresh your data and repeat your changes. The problem, however, is that it is not easy to find out which objects need refreshing, because the GlorpWriteFailure doesn't tell you. So you'd have to do a refresh for all objects currently loaded into the session. This would take a lot of time, and maybe User B will not really use all of them.

The problem here is the ...AndContinue part, because it will potentially keep outdated versions of objects around for quite a while by rescuing them into many consecutive Transactions.

So what is the best thing I can do when User B gets a GlorpWriteFailure if I still want to avoid as much reloading of objects as possible? Wouldn't it be a good idea to implement variants of rollbackUOWAndContinue and commitUOWAndContinue that, instead of keeping the old versions of objects, proxify them and keep the proxies?

Why don't I use commitTransaction and rollbackTransaction? Because that way I'd have to start loading everything over and over again, and might still run into conflicts.

I guess there is no perfect solution, but I'd like to discuss ideas.

Joachim

You received this message because you are subscribed to the Google Groups "glorp-group" group. To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email]. To post to this group, send email to [hidden email]. Visit this group at https://groups.google.com/group/glorp-group. For more options, visit https://groups.google.com/d/optout.
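The ...AndContinue pattern described above, including its failure mode, might be sketched like this. This is a hedged sketch only: `session` is assumed to be a GlorpSession, and `notifyConflict` is a hypothetical application hook, not Glorp API.

```smalltalk
"Sketch of the commit-and-continue pattern with a conflict handler.
 commitUnitOfWorkAndContinue keeps the registered objects alive in a
 fresh unit of work; on an optimistic-lock clash Glorp signals a
 GlorpWriteFailure, at which point we only know that *something* failed."
[ session commitUnitOfWorkAndContinue ]
	on: GlorpWriteFailure
	do: [:ex |
		session rollbackUnitOfWork.	"discard the failed changes"
		self notifyConflict.	"hypothetical: ask the user to refresh and retry"
		ex return ]
```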
I never had to manage those conflicts at the "framework" level; I handled them at the "application domain" level instead. What I used for conflict-prone objects is a creation and a modification timestamp column, so I know when they were last modified. In this case, you could tweak your app to load only those modified after the last read (which should be kept somewhere). So:

- User A modifies objects already read by User B
- User B tries to commit-and-continue and gets a GlorpWriteFailure
- User B reloads everything modified since its last read

I can't foresee the caching implications of the object reads, but I think that if User B read N objects, on failure it will have to read again somewhere between 1 and N of them, but not always N.

Best regards!

Esteban A. Maringolo

On 2017-07-11 9:47 GMT-03:00, jtuchel wrote:
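Esteban's timestamp idea could look roughly like this as a Glorp query. A sketch under stated assumptions only: the `Order` class, its `modifiedAt` attribute, and the `lastReadTimestamp` variable are illustrative names, not anything from the thread.

```smalltalk
"Sketch: after a write failure, re-read only what changed since our
 last read, forcing a refresh of the cached instances.
 Assumes each conflict-prone class maps a modification timestamp column."
| query |
query := Query read: Order
	where: [:each | each modifiedAt > lastReadTimestamp].
query shouldRefresh: true.	"overwrite cached copies with DB state"
session execute: query.
lastReadTimestamp := DateAndTime now.
```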
> …
> The problem, however, is that it is not easy to find out which objects need refreshing, because the GlorpWriteFailure doesn't tell you.
> …

My initial take on this is that the exception offers the list of objects being updated (by sending "ex objects"), but that these may all be your registered objects, not merely the ones that have changed in your UOW. Since the changed objects are presumably a much smaller set, you only need to refresh those. If I'm wrong, and the changed objects are precisely the ones returned, then that should be very helpful to you right off. Otherwise, perhaps acquiring these won't be too difficult (the Glorp cache knows which ones have changed).

Dave
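If "ex objects" really does answer the written objects, Dave's suggestion might reduce to something like the following sketch. Whether that collection holds only the changed objects or all registered ones is exactly the open question in this thread; `writeFailure` stands for the caught GlorpWriteFailure.

```smalltalk
"Sketch: ask the failure which objects were being written and refresh
 those. Per the discussion, this set may be larger than the set of
 objects that actually conflicted."
| suspects |
suspects := writeFailure objects.
suspects do: [:each | session refresh: each]
```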
David,
unfortunately, we're not only speaking of objects that have been changed in the current transaction, but of objects that might have been changed in another one, maybe even from outside the current image. Even if the exception only contained the objects changed in this transaction, it wouldn't filter out the ones that could be updated without any problem (speaking of groupWrite: here). The exception is thrown as soon as the first object cannot be updated, so it might be the only one or just the first of a long list.

The more I think about it, the more I come to the conclusion that I should probably give up on the ...AndContinue idea. It saves a lot of time (for cases where no two users work on the same set of objects), but it seems it is not a good idea at all as soon as objects can be used by multiple users at the same time.

I'd like to ask a new question: What happens if you have a net of objects that has been updated and you just do a commitTransaction? In that case, I think the caches of a newly started transaction will be empty. I guess that means that objects that are still around in the image are detached, meaning a new transaction would think they are new and need to be inserted, right? So how can I make sure that no detached objects are around after a commitTransaction?

On Tuesday, 11 July 2017 at 21:57:40 UTC+2, Wallen, David wrote:
Let me give you an example of what I mean by that:
The problem here is that there is not only that user object, but a whole net of objects associated with it (her company, all the business objects we've loaded from the database so far, lots of stuff). The only (horribly low-tech) thing I could think of that would work for sure is to store only a user's id in the Session, not a user object. Every time somebody needs the user, we'd have to do a read: User where: [:us | us id = self currentUserId] and return the result of the query (which may or may not be the result of a cache lookup or a freshly materialized object).

But what if somewhere in the app we have a reference to some of the objects in that object net associated with that User object? How do we make sure nobody is holding/using a detached object from an old Transaction?

What I think of at this moment is Proxies. Maybe it would be a good idea to turn everything into its Proxy when we do a Commit or Rollback. This way we could make sure people always get freshly loaded objects, and at the same time save the time of refreshing a bunch of objects that may never be used again in the next Transaction.

Any ideas? How do people deal with this problem in Seaside applications?

Joachim
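The low-tech id-based lookup described above would be a one-liner with Glorp's single-object read. A sketch only: `currentUserId` is the poster's own hypothetical accessor, and the method wrapping is assumed application code.

```smalltalk
"Sketch: never cache the User object across transactions; re-read it
 by id on demand. The session answers a cached or freshly materialized
 instance, so the application itself never holds a detached copy."
currentUser
	^ session readOneOf: User where: [:us | us id = self currentUserId]
```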
Esteban,
unfortunately, I never read: objects; I just follow the object graph using getters at the Smalltalk level. The only time I read: an object is when a user logs in. The timestamp you suggest is quite similar to the LockKey, though it adds the time aspect. Thinking further, I'd have to go through a UOW's cache and refresh all objects that have been in the image longer than some point in time. I'm not sure this is much different from a timed CachingStrategy (which I haven't used so far). Has anybody used a timed cache in Glorp? I wonder if this will make sure there are no detached objects in the image...?

Joachim

On Tuesday, 11 July 2017 at 15:05:43 UTC+2, Esteban A. Maringolo wrote:
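For reference, Glorp does ship a TimedExpiryCachePolicy. Hooking it up might look roughly like this; a sketch under assumptions: the descriptor method name and the 600-second timeout are illustrative, and whether cache expiry alone prevents detached objects is exactly the open question above.

```smalltalk
"Sketch: let cached instances expire after ten minutes, so stale
 objects age out of the session cache instead of living forever."
descriptorForUser: aDescriptor
	"...normal table and mapping setup elided..."
	aDescriptor cachePolicy: (TimedExpiryCachePolicy new timeout: 600)
```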
Hi Joachim,

Sorry, I was away for a bit. I'll respond, and if you have the patience, you can correct my misunderstandings of your situation.

> unfortunately, we're not only speaking of objects that have been changed in the current transaction, but of objects that might have changed in another one, maybe even from outside the current image. Even if the exception only contained the objects changed in this transaction, it doesn't filter out the ones that could possibly be updated without any problem (speaking of groupWrite: here). The exception is thrown as soon as the first object cannot be updated. So it might be the only one or just the first of a long list.

That sounds right. If you read 1000 objects, update 30 by sending >>commitAndContinue, and the 20th one violates the lock, all 30 will be returned in the message, ex objects, since the group write failed. And then you're left with the task of reloading all 30 (see >>refresh) to find out which one(s) failed. In addition, your app probably wants to preserve whatever edits were locally performed on those 30. So the app would compare the 30 local copies with the 30 it just read from the DB, and somehow reconcile those few that have conflicts. Not an easy task. I think complex objects which span multiple tables will be written in separate pieces, so you may get errors for the "customer" table first, fix those, retry, and then discover errors in the "products" table. But eventually, in theory, you can fix these problems each in turn and finally get a successful commit.

So, I'm thinking that you're starting a UOW, loading all your data either at once or over time, performing various edits, and finally committing the whole set of changes. The main thing is that:

- the number of objects which actually change is small (your edits);
- only those objects will be updated in the DB on commit (since Glorp caches the old values for comparison);
- therefore, you only need to resolve a few conflicts within that already small set.

If this isn't happening, or I've completely misunderstood the situation (or Glorp!), then perhaps something else is going wrong. For example, if your app needs to update a great many items, or if there are hundreds of users out there whose updates cannot be caught up with, that may call for an entirely different strategy.

> What happens if you have a net of objects that has been updated and you just do a commitTransaction? In that case, I think the caches of a newly started transaction will be empty. I guess that means that objects that are still around in the image are detached, meaning a new transaction would think they are new and need to be inserted, right?

Right; if the objects have not changed in any way since your session loaded them inside a UOW, then a commit will actually write nothing to the DB. In Glorp, Transactions are lower-level creatures, and I think all they really do is tell the database to commit whatever has changed since a Transaction was begun. If you're just using a Transaction object, I think your options are limited. I believe that one of the benefits of a UOW is that it uses a cache to hold the old values for comparison, and keeps track of your objects.

> So how can I make sure that no detached objects are around after a commitTransaction?

Again, I'd stay away from Transactions and let the UOW do that stuff. Perhaps you could start a UOW, register: all your objects, then refresh: each of them, if necessary, to get synched with the DB.

Hth,
Dave
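Dave's closing suggestion (start a UOW, register everything, then refresh) might be sketched as follows, assuming `myObjects` stands for whatever collection of root objects the application still holds; everything else is Glorp's session API.

```smalltalk
"Sketch: re-attach surviving instances to a fresh unit of work and
 re-synch them with the database, so none of them stays detached."
session beginUnitOfWork.
myObjects do: [:each |
	session register: each.
	session refresh: each]
```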
David,
thanks for your time.

On Friday, 14 July 2017 at 04:18:33 UTC+2, Wallen, David wrote:
Yep, that's unfortunate, but at least a hint.
So what I'll try is to do a #rollbackUOWAndContinue and a refresh of those 30 afterwards. This way I can reuse all of them. In my first attempts it seemed like the lockKey is not refreshed, however. But I need to investigate further.
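The plan described here might be sketched like this; a hedged sketch, since whether refresh: also updates the lock key is precisely what still needs investigating.

```smalltalk
"Sketch: on failure, roll back but keep the instances registered in a
 fresh unit of work, then pull fresh DB state (including, hopefully,
 the lock key) for the objects the failure named."
[ session commitUnitOfWorkAndContinue ]
	on: GlorpWriteFailure
	do: [:ex |
		session rollbackUnitOfWorkAndContinue.
		ex objects do: [:each | session refresh: each].
		ex return ]
```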
That might be a step for the future ;-) For now I am happy if I can get the pieces together to inform the user of what's happened, keep them in the web/Glorp session, and give them a chance to re-enter their data.
You lost me here. How would I know which ones have conflicts? We just said there is only an exception telling me "at least one of these 30 had a locking problem"...
Oops, now that you mention it... So this might be solvable at the app level by using a crystal ball that tells us that "often, when an update of class X fails, we also need to refresh the associated Ys"... Not a nice solution either.
I load them over time, as the user walks the app.
Right
There may be a dozen or so objects that need refreshing. But I think things are quite straightforward in most cases.
Not sure I understand exactly. Most of the time, I am happy with the caching in the UOW.
I think I just threw the word Transaction into the ring, but was talking about a UOW. So if that is the reason for our misunderstanding: sorry! I'll have to tinker a little, especially in the context of Seaside, which adds to the puzzle in non-trivial ways ;-)

Joachim