lock conflicts

Johan Brichau-2
Hi,

When I'm running multiple FastCGI gems, the logs for those gems are getting swamped with entries like the one at the end of this email.

Is this something normal? If not, are there any clues how I can trace these problems to fix them?
We are doing our own transactions, but we never use any locks... so I'm a bit puzzled about what I'm reading here

Johan
=====

....
Write-Dependency Conflicts...
Write-ReadLock Conflicts...
Write-WriteLock Conflicts...
Rc-Write-Write Conflicts...
Synchronized-Commit Conflicts...
----------- Commit failure - retrying LOG ENTRY: aSymbolDictionary-----------
failure
Read-Write Conflicts...
Write-Write Conflicts...
    38461441
    389138945
    389718529
    389718785
    389719041
    389727745
    389731073
.....

Re: lock conflicts

otto
Hi,

> Is this something normal? If not, are there any clues how I can trace these problems to fix them?
> We are doing our own transactions, but we never use any locks... so I'm a bit puzzled about what I'm reading here

You've got to be careful with aborts if you do your own transaction
management in Seaside.

Make sure that your beginTransaction ... <do stuff> ... commitTransaction
is as tight as possible around your changes, in order to minimise the
potential for transaction conflicts. If your <do stuff> code takes a
long time to execute, then the probability of a transaction conflict is
higher.

The best code that I've seen implements a commit block; something like this:

System commit: [<do stuff>]

This method would make sure that transactions, as well as commit
failures, are handled correctly.
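A hedged sketch of what such a helper might look like (the selector, the retry limit, and the retry shape are hypothetical; System beginTransaction, commitTransaction and abortTransaction are the standard GemStone transaction calls):

```smalltalk
"Hypothetical commit helper: runs aBlock inside a transaction and
retries a few times on commit failure. Aborting after a failed commit
refreshes this session's view of other sessions' changes."
commit: aBlock
	| attempts |
	attempts := 0.
	[ attempts < 10 ] whileTrue: [
		System beginTransaction.
		aBlock value.
		System commitTransaction
			ifTrue: [ ^true ]
			ifFalse: [
				System abortTransaction.
				attempts := attempts + 1 ] ].
	^false
```

Keeping the block small keeps the commit window small, which is the same "tight transaction" point as above.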

Another option: if the objects listed under your write-write conflicts
can be replaced with the Rc classes (reduced conflict, e.g. RcQueue),
and you need longer transactions, then those could work for you.

HTH
Otto

Re: lock conflicts

Johan Brichau-2
Hi Otto,

On 31 Mar 2011, at 14:50, Otto Behrens wrote:

> Make sure that your beginTransaction ... <do stuff> ... commitTransaction
> is as tight as possible around your changes, in order to minimise the
> potential for transaction conflicts. If your <do stuff> code takes a
> long time to execute, then the probability of a transaction conflict is
> higher.

Yes, that's the strategy we follow.
Given that our application has multiple users working on the same data objects (by definition), are you saying that it's normal for me to be seeing these?

So the takeaway is that it's worthwhile for me to investigate ways to reduce the tx conflicts? Like using more Rc classes ;-)

> Another option is if the objects listed under your write-write
> conflicts can be replaced with the Rc classes (reduced conflict, e.g. RcQueue),
> and you need longer transactions, then they could work for you.

How can I know which objects are listed? Are the numbers in the log entry the objects' ids? How can I use those to get to the actual objects?

thanks!
Johan

Re: lock conflicts

otto
> Yes, that's the strategy we follow.
> Given that our application has multiple users working on the same data objects (by definition), are you saying that it's normal for me to be seeing these?

They will happen occasionally even if your transactions are short (in
time), but they should not happen too often. If two transactions
interleave and both write the same object, that is a write-write
conflict. If the conflict is not a logical one, there is a lot of
contention on a "shared" object (for example, a list of customers to
which you add a lot of new customers), and you have short transactions,
then use Rc classes.
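As a hedged illustration of that last point (variable names hypothetical; RcIdentityBag is one of GemStone's reduced-conflict collections):

```smalltalk
"A plain shared collection: every session that adds a customer writes
the same object, so interleaved commits hit write-write conflicts."
AllCustomers := IdentityBag new.

"A reduced-conflict collection: concurrent sessions can add elements
without their commits conflicting on the collection object itself."
AllCustomers := RcIdentityBag new.
AllCustomers add: aNewCustomer.
```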

It could be that your design requires too many "shared" objects. For
example, if you have a set of accounts and each account contains a set
of transactions, it makes sense not to keep a global list of
transactions because this would increase contention.
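A minimal sketch of that design point (class and variable names hypothetical):

```smalltalk
"Each account keeps its own transaction list, so two sessions posting
to different accounts never write the same collection..."
Account >> addTransaction: aTransaction
	transactions add: aTransaction

"...whereas a single global list would make every posting session
write one shared object:  AllTransactions add: aTransaction."
```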

In our system we have a class inst var called "instances" on a class
called DomainObject. Many of our objects inherit from this one. On one
installation, we have 3 concurrent Hypers running. I haven't seen too
many transaction conflicts.

> So the takeaway is that it's worthwhile for me to investigate ways to reduce the tx conflicts? Like using more Rc classes ;-)

Yes. I would look at shortening transactions first, then design and
then Rc classes.

> How can I know which objects are listed? Are the numbers in the log entry the objects' ids? How can I use those to get to the actual objects?

They are oops (object identifiers). Object _objectForOop: 1234 answers the object. We wrote this helper method:

Integer >> asObject
  ^Object _objectForOop: self
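With that in place, the oops from the log excerpt above can be resolved directly (hedged: whether a given oop still resolves depends on whether the object has since been garbage collected):

```smalltalk
389718529 asObject.         "the object that took the conflict"
389718529 asObject class.   "its class is often the most useful clue"
```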

Re: lock conflicts

Johan Brichau-2

On 31 Mar 2011, at 16:21, Otto Behrens wrote:

>> How can I know which objects are listed? Are the numbers in the log entry the objects' ids? How can I use those to get to the actual objects?
>
> They are oops. Object _objectForOop: 1234.

aha! didn't know about that one. thanks!

Johan

Re: lock conflicts

Dale Henrichs
Hey Johan,

The log entries that you are seeing are coming from the "normal" handling of requests. Take a look at GrGemStonePlatform>>seasideProcessRequestWithRetry:resultBlock:. The log entry is coming from the following chunk of code:

self doCommitTransaction
    ifFalse: [ | conflicts |
        conflicts := System transactionConflicts.
        self doAbortTransaction.
        self
            saveLogEntry: (WAObjectLogEntry
                warn: 'Commit failure - retrying'
                request: aNativeRequest
                object: conflicts)
            shouldCommit: true.
        ^nil "retry request" ]

What this means is that two http requests were being handled concurrently, the ensuing commit had conflicts, and the http request was retried. You will get a 'Too many retries:' internal error if the request fails multiple times ...
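A hedged sketch of inspecting such a conflict report by hand; the #'Write-Write' key and the oop representation are inferred from the log excerpt above and may differ between GemStone versions:

```smalltalk
"After System commitTransaction answers false, capture the report
before aborting (aborting clears it)."
| conflicts writeWrite |
conflicts := System transactionConflicts.        "a SymbolDictionary"
writeWrite := conflicts at: #'Write-Write' ifAbsent: [ #() ].
System abortTransaction.
"Resolve the reported oops back to the conflicting objects:"
writeWrite collect: [ :oop | Object _objectForOop: oop ]
```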

This is normal and expected behavior. If you are getting the 'Too many retries' error, then something would need to be done.

There are also entries dropped into the object log (note that the log entry is a warning), and perhaps the entry in the file should contain the word WARNING, to make it sound less critical.

You might also consider just returning nil to avoid the overhead and noise of these warnings ... I suppose that should be a preference ...

Dale


Re: lock conflicts

Johan Brichau-2
Hi Dale,

Thanks for the feedback. It helps us a lot to know that it comes from the normal request handling.
We indeed had the 'too many retries' internal error sometimes before. We traced and fixed that, but I was a little afraid that the log entries were pointing at some similar problem...
