DuplicatePrimaryKeyException


DuplicatePrimaryKeyException

Maarten Mostert

Hi,

The Task object I map to Glorp is a subclass of AssociationTreeWithParent:

 

 

Smalltalk defineClass: #Task

superclass: #{UI.AssociationTreeWithParent}

 

As Tasks are linked together by various relationships, both parent-child and different end-to-start relationships, they end up with an almost unlimited depth of proxied relationships.

 

Now, Glorp checks whether the cache contains a Task object with the method:

 

 

includesKey: key as: anObject
	"Return true if we include the object, and it matches the given object. If we include a different object with the same key, raise an exception. Don't listen to any expiry policy"

	| item value |
	item := self basicAt: key ifAbsent: [^false].
	value := policy contentsOf: item.
	value == anObject ifFalse: [
		(DuplicatePrimaryKeyException new: anObject existing: value) signal].
	^true.

 

The comparison value == anObject seems to return false because of lower-level proxy differences (and the hash values are sometimes different too?!).
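For illustration only (the session variable, the key 42, and the navigation path are my assumptions, not from the thread), the identity mismatch being described can be sketched along these lines:

```smalltalk
"Sketch: the same database row reached via two paths need not be the
 identical Smalltalk object once proxies and refreshed reads are involved."
| direct viaProxy |
direct := session readOneOf: Task where: [:each | each key = 42].
viaProxy := direct children first parent.	"typically an uninstantiated Glorp proxy"
"If the proxy resolves to a freshly built instance for the same task_id,
 then value == anObject in includesKey:as: answers false and
 DuplicatePrimaryKeyException is signalled, even though both represent one row."
direct == viaProxy getValue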

 

 

This has given me lots of problems, as most of the time I have to use refreshed reads in order not to get duplicate errors. That means creating a separate object with a refreshed read, updating it, and then refreshing the original one. In the end, instead of doing a single update, I refresh a whole number of related things, then I update, and then I reread all those things again, ending up very inefficient in database traffic and with way too much code.

 

Now, if I read all the comments in isNew: anObject, I get the impression that I might have hit something unfinished here?!

 

isNew: anObject
	"When registering, do we need to add this object to the collection of new objects? New objects are treated specially when computing what needs to be written, since we don't have their previous state"

	| key descriptor |
	(currentUnitOfWork notNil and: [currentUnitOfWork isRegistered: anObject]) ifTrue: [^false].
	descriptor := self descriptorFor: anObject.
	descriptor isNil ifTrue: [^false].
	"For embedded values we assume that they are not new. This appears to work. I can't really justify it."
	self needsWork: 'cross your fingers'.
	descriptor mapsPrimaryKeys ifFalse: [^false].
	key := descriptor primaryKeyFor: anObject.
	key isNil ifTrue: [super halt. ^true].
	"If the cache contains the object, but the existing entry is due to be deleted, then count this entry as a new one being added with the same primary key (ick) as the old one"
	^[(self cacheContainsObject: anObject key: key) not]
		on: DuplicatePrimaryKeyException
		do: [:ex |
			(currentUnitOfWork notNil and: [currentUnitOfWork willDelete: ex existingObject])
				ifTrue: [
					self cacheRemoveObject: ex existingObject.
					ex return: true]
				ifFalse: [ex pass]].


 

I am now experimenting with = comparisons instead of == comparisons, though I am not sure this is the way to do it.


Any hints ?

 

Regards,

 

@+Maarten,

 

 

--
You received this message because you are subscribed to the Google Groups "glorp-group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
To post to this group, send email to [hidden email].
Visit this group at http://groups.google.com/group/glorp-group?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.
 
 

Re: [vwnc] DuplicatePrimaryKeyException

Tom Robinson
On 4/27/13 7:30 AM, [hidden email] wrote:

> The comparison value == anObject seems to return false because of lower-level proxy differences (and the hash values are sometimes different too?!).

If you remove a child, do you want the Task with the child to be = to the Task without the child? If you change the parent, the predecessor, or the successor, do you want the Task before the change to be equal to the one after the change? Assuming that there is a bunch of Task-related info inside the Task object, I would suggest that you want it to be equal to itself even if the relationships change. That suggests you may want to look at implementing = and hash on your Task class such that both exclude the parent(s), children, predecessor, successor, etc. from their calculations. This may mean you need other methods that compare two Tasks that are = but have different relationships, but that depends on your application.

It seems to me that the relationships *can't* be included in = or hash because what the Cache is trying to figure out is "Does this object represent a row in the database that I have loaded and turned into an object?". The only values that can be included in the = or hash calculation are ones that, if changed, would require that you write a new row to the database and delete the old one, I think.
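As a rough sketch of this suggestion (assuming Task's row identity is carried by its #key attribute, which maps to the task_id primary key; that is my reading of the descriptor, not something Tom spelled out), equality and hash could be restricted to the key and exclude all relationship attributes:

```smalltalk
"Sketch only: = and hash based on the primary key attribute, deliberately
 ignoring parent, children, predecessors/successors and other relationships,
 so proxied or refreshed relations cannot change the answer."
= anObject
	^self class == anObject class and: [key = anObject key]

hash
	^key hash
```

Note that includesKey:as: above still tests identity with ==, so redefining = and hash only matters where Glorp compares with = (or if the cache comparison itself is changed).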

 

 


 

> Now, if I read all the comments in isNew: anObject, I get the impression that I might have hit something unfinished here?!

Probably the only person who could speak to that question definitively is Alan Knight...

 


Re: [vwnc] DuplicatePrimaryKeyException

Maarten Mostert

Well, the problem is what Tom indicated. I tried the hash thing, but that doesn't give any improvement. I modified the cache policy and that seems to help a bit. I also simplified some of my descriptor systems and moved some operations over to a separate (simpler) mapping using the same table.

I don't really have the problem of not knowing my primary keys (as Anthony indicated). I can imagine this in a kind of web application where you might do a lot of things before actually registering, but in my case I am on a remote desktop where other users can do annoying things at the same time.

All I want to do is to reduce traffic as much as possible while remaining as close as possible in sync with the db.

 

The thing is that I don't really know how this cache operates. Why is it that I can have 10 different versions of the same object in my cache? Why is it that isRegistered: is a useless method? What is the difference between realObject, registeredObject, expiredObject, etc.?

 

In the next snippet I add newAv to a collection within aTask. aTask has just been updated and committed, so it is totally up to date. However, if I don't refresh: aTask before adding newAv (or create another Task with the same key), it will provoke a duplicate error, but only sometimes??

 

newAv := Avancement
	registerdate: Date today
	percentage: aTask task_pct_comp
	delta: aTask task_pct_comp / 100 * aTask task_work
	projectKey: aTask proj_id
	wbs_chain: aTask task_wbs_chain.

self getGlorpSession inUnitOfWorkDo:
	[self getGlorpSession register: newAv.
	self getGlorpSession register: (self getGlorpSession refresh: aTask).
	aTask avancementcol add: newAv].

 

I suspect this happens because of the multiple copies of the same object that can reside in the cache. The lookup method will return whatever object in the cache has the same key. If the object returned is the same pointer, Glorp is happy; but if it is not the same pointer, I get a duplicate error. In order to avoid trouble I reread the object and all of its associated stuff, which costs me maybe 30 milliseconds. Knowing that all this information is readily available within my system annoys me, however, because I do this in hundreds of places.

 

The natural behavior for me would be that register: does an insert only when the primary key is nil. If the primary key is not nil, it provokes an update if the object is available in the cache, and otherwise checks the database whether the key (row) actually exists. If the object knows its primary key, is not in the cache, and does not exist in the database, it does an insert; in all other cases it's an update.
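This proposed rule could be sketched as a hypothetical helper (shouldInsert:in: and existsInDatabase:key: are invented names for illustration, not Glorp API):

```smalltalk
"Sketch of the proposed register: decision. Insert only when the object
 cannot already be a row in the database; treat every other case as an update."
shouldInsert: anObject in: aSession
	| key |
	key := (aSession system descriptorFor: anObject) primaryKeyFor: anObject.
	key isNil ifTrue: [^true].	"no primary key yet: insert"
	(aSession cacheContainsObject: anObject key: key) ifTrue: [^false].	"cached: update"
	"not cached: ask the database whether the row exists (invented message)"
	^(aSession existsInDatabase: anObject key: key) not
```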

 

What do you think?

 

Regards,

 

@+Maarten,

 

 


Re: [vwnc] DuplicatePrimaryKeyException

Maarten Mostert
In reply to this post by Maarten Mostert

Tom,

Here you go for my descriptor. Are you sure the cache should not contain multiple versions of the same object?



classModelForTaskGroup: aClassModel
	aClassModel newAttributeNamed: #key.
	aClassModel newAttributeNamed: #value.
	aClassModel newAttributeNamed: #proj_id.
	aClassModel newAttributeNamed: #task_order.
	aClassModel newAttributeNamed: #parent type: TaskGroup.
	aClassModel newAttributeNamed: #children collectionOf: TaskGroup.
	aClassModel newAttributeNamed: #outlinenumber.
	aClassModel newAttributeNamed: #msp_uid.
	aClassModel newAttributeNamed: #financial type: Boolean.
	aClassModel newAttributeNamed: #imputation.
	aClassModel newAttributeNamed: #task_early_finish type: Date.	"earliest finish"
	aClassModel newAttributeNamed: #task_late_start type: Date.	"latest start"
	aClassModel newAttributeNamed: #task_dur.	"duration"
	aClassModel newAttributeNamed: #task_start_date type: Date.	"scheduled"
	aClassModel newAttributeNamed: #task_finish_date type: Date.	"scheduled"
	aClassModel newAttributeNamed: #task_pct_comp.	"progress"
	aClassModel newAttributeNamed: #critical_a.
	aClassModel newAttributeNamed: #critical_b.
	aClassModel newAttributeNamed: #critical_c.
	aClassModel newAttributeNamed: #critical_d.
	aClassModel newAttributeNamed: #task_type.
	aClassModel newAttributeNamed: #task_creation_date type: Date.
	aClassModel newAttributeNamed: #task_early_start type: Date.	"earliest start"
	aClassModel newAttributeNamed: #task_late_finish type: Date.	"latest finish"
	aClassModel newAttributeNamed: #task_cal_uid.	"calendar UID"
	aClassModel newAttributeNamed: #task_work.	"scheduled time"
	aClassModel newAttributeNamed: #task_cost.
	aClassModel newAttributeNamed: #task_fixed_cost.
	aClassModel newAttributeNamed: #task_wbs_chain.
	aClassModel newAttributeNamed: #task_wbs_node type: HierarchyTreeNode.
	aClassModel newAttributeNamed: #task_org_key.
	aClassModel newAttributeNamed: #task_org_node type: OrganisationTreeNode.
	aClassModel newAttributeNamed: #task_resource type: Resource.
	aClassModel newAttributeNamed: #tasksToStart collectionOf: TaskAllRelation.
	aClassModel newAttributeNamed: #tasksFromEnd collectionOf: TaskAllRelation.
	aClassModel
		newAttributeNamed: #avancementcol
		collection: OrderedCollection
		of: Avancement.
	aClassModel
		newAttributeNamed: #attachementcol
		collection: OrderedCollection
		of: Attachement

descriptorForTaskGroup: aDescriptor
	| table |
	table := self tableNamed: 'MMT_TASK'.
	aDescriptor table: table.
	aDescriptor addMapping: (DirectMapping from: #key to: (table fieldNamed: 'task_id')).
	(aDescriptor newMapping: DirectMapping) from: #value to: (table fieldNamed: 'task_name').
	(aDescriptor newMapping: DirectMapping) from: #task_order to: (table fieldNamed: 'task_order').
	"Define parent child links"
	(aDescriptor newMapping: Glorp.OneToOneMapping)
		attributeName: #parent;
		useLinkTable;
		join: (Glorp.Join from: (table fieldNamed: 'task_id')
			to: ((self tableNamed: 'MMT_TASK_TREE_LINK') fieldNamed: 'CHILD')).
	(aDescriptor newMapping: Glorp.ToManyMapping)
		attributeName: #children;
		useLinkTable;
		orderBy: #task_order;
		join: (Glorp.Join from: (table fieldNamed: 'task_id')
			to: ((self tableNamed: 'MMT_TASK_TREE_LINK') fieldNamed: 'PARENT')).
	"aDescriptor newMapping: Glorp.ToManyMapping."
	"====//===="
	(aDescriptor newMapping: DirectMapping) from: #outlinenumber to: (table fieldNamed: 'outlinenumber').
	(aDescriptor newMapping: DirectMapping) from: #proj_id to: (table fieldNamed: 'proj_id').
	(aDescriptor newMapping: DirectMapping) from: #financial to: (table fieldNamed: 'task_financial').
	(aDescriptor newMapping: DirectMapping) from: #imputation to: (table fieldNamed: 'task_imputation').
	(aDescriptor newMapping: DirectMapping) from: #task_early_finish to: (table fieldNamed: 'task_early_finish').
	(aDescriptor newMapping: DirectMapping) from: #task_late_start to: (table fieldNamed: 'task_late_start').
	(aDescriptor newMapping: DirectMapping) from: #task_dur to: (table fieldNamed: 'task_dur').
	(aDescriptor newMapping: DirectMapping) from: #task_start_date to: (table fieldNamed: 'task_start_date').
	(aDescriptor newMapping: DirectMapping) from: #task_finish_date to: (table fieldNamed: 'task_finish_date').
	(aDescriptor newMapping: DirectMapping) from: #task_pct_comp to: (table fieldNamed: 'task_pct_comp').
	(aDescriptor newMapping: DirectMapping) from: #critical_a to: (table fieldNamed: 'critical_a').
	(aDescriptor newMapping: DirectMapping) from: #critical_b to: (table fieldNamed: 'critical_b').
	(aDescriptor newMapping: DirectMapping) from: #critical_c to: (table fieldNamed: 'critical_c').
	(aDescriptor newMapping: DirectMapping) from: #critical_d to: (table fieldNamed: 'critical_d').
	(aDescriptor newMapping: DirectMapping) from: #task_type to: (table fieldNamed: 'task_type').
	(aDescriptor newMapping: DirectMapping) from: #task_creation_date to: (table fieldNamed: 'task_creation_date').
	(aDescriptor newMapping: DirectMapping) from: #task_early_start to: (table fieldNamed: 'task_early_start').
	(aDescriptor newMapping: DirectMapping) from: #task_late_finish to: (table fieldNamed: 'task_late_finish').
	(aDescriptor newMapping: DirectMapping) from: #task_work to: (table fieldNamed: 'task_work').
	(aDescriptor newMapping: DirectMapping) from: #task_cost to: (table fieldNamed: 'task_cost').
	(aDescriptor newMapping: DirectMapping) from: #task_fixed_cost to: (table fieldNamed: 'task_fixed_cost').
	(aDescriptor newMapping: DirectMapping) from: #task_wbs_chain to: (table fieldNamed: 'task_wbs_chain').
	(aDescriptor newMapping: Glorp.OneToOneMapping) attributeName: #task_wbs_node.
	(aDescriptor newMapping: DirectMapping) from: #task_org_key to: (table fieldNamed: 'task_org_key').
	(aDescriptor newMapping: Glorp.OneToOneMapping) attributeName: #task_org_node.
	(aDescriptor newMapping: Glorp.OneToOneMapping) attributeName: #task_resource.
	(aDescriptor newMapping: Glorp.ToManyMapping)
		attributeName: #tasksFromEnd;
		join: (Glorp.Join from: (table fieldNamed: 'task_id')
			to: ((self tableNamed: 'MMT_TASK_ALL_RELATION') fieldNamed: 'START_TASK'));
		referenceClass: TaskAllRelation.
	(aDescriptor newMapping: Glorp.ToManyMapping)
		attributeName: #tasksToStart;
		join: (Glorp.Join from: (table fieldNamed: 'task_id')
			to: ((self tableNamed: 'MMT_TASK_ALL_RELATION') fieldNamed: 'END_TASK'));
		referenceClass: TaskAllRelation.
	(aDescriptor newMapping: Glorp.ToManyMapping)
		attributeName: #avancementcol;
		orderBy: #registerdate;
		join: (Glorp.Join from: (table fieldNamed: 'task_id')
			to: ((self tableNamed: 'MMT_AVANCEMENT') fieldNamed: 'avanc_plannedtask_id')).
	(aDescriptor newMapping: Glorp.ToManyMapping)
		attributeName: #attachementcol;
		orderBy: #id;
		join: (Glorp.Join from: (table fieldNamed: 'task_id')
			to: ((self tableNamed: 'MMT_ATTACHEMENT') fieldNamed: 'att_plannedtask_id')).
	(aDescriptor newMapping: DirectMapping) from: #msp_uid to: (table fieldNamed: 'msp_uid')

tableForMMT_TASK: aTable
	| resource orgnode wbsnode |
	(aTable createFieldNamed: 'task_id' type: platform sequence) bePrimaryKey.
	aTable createFieldNamed: 'task_name' type: (platform varchar: 255).
	aTable createFieldNamed: 'task_order' type: platform int4.
	(aTable createFieldNamed: 'proj_id' type: platform integer) beNullable: true.
	aTable createFieldNamed: 'outlinenumber' type: (platform varchar: 255).	"platform clob."
	aTable createFieldNamed: 'msp_uid' type: platform int4.
	aTable createFieldNamed: 'task_financial' type: (platform boolean).
	aTable createFieldNamed: 'task_imputation' type: (platform varchar: 255).
	aTable createFieldNamed: 'task_early_finish' type: platform date.
	aTable createFieldNamed: 'task_late_start' type: platform date.
	(aTable createFieldNamed: 'task_dur' type: platform integer) beNullable: true.
	(aTable createFieldNamed: 'task_start_date' type: platform date) beNullable: true.
	(aTable createFieldNamed: 'task_finish_date' type: platform date) beNullable: true.
	aTable createFieldNamed: 'task_pct_comp' type: platform float.
	aTable createFieldNamed: 'task_last_registerdate' type: platform date.
	aTable createFieldNamed: 'critical_a' type: platform float.
	aTable createFieldNamed: 'critical_b' type: platform float.
	aTable createFieldNamed: 'critical_c' type: platform float.
	aTable createFieldNamed: 'critical_d' type: platform float.
	aTable createFieldNamed: 'task_type' type: platform int2.
	aTable createFieldNamed: 'task_creation_date' type: platform timestamp.
	aTable createFieldNamed: 'task_early_start' type: platform date.
	aTable createFieldNamed: 'task_late_finish' type: platform date.
	(aTable createFieldNamed: 'task_cal_uid' type: platform int4) beNullable: true.
	aTable createFieldNamed: 'task_work' type: platform float.
	aTable createFieldNamed: 'task_cost' type: platform float.
	aTable createFieldNamed: 'task_fixed_cost' type: platform float.
	aTable createFieldNamed: 'task_wbs_chain' type: platform clob.
	wbsnode := aTable createFieldNamed: 'task_wbs_node' type: platform int4.
	aTable addForeignKeyFrom: wbsnode to: ((self tableNamed: 'MMT_HIERARCHY_TREE_NODE') fieldNamed: 'id').
	aTable createFieldNamed: 'task_org_key' type: platform int4.
	orgnode := (aTable createFieldNamed: 'task_org_node' type: platform int4) beNullable: true.
	aTable addForeignKeyFrom: orgnode to: ((self tableNamed: 'MMT_ORGANISATION_TREE_NODE') fieldNamed: 'id').
	resource := aTable createFieldNamed: 'task_resource' type: platform int4.
	aTable addForeignKeyFrom: resource to: ((self tableNamed: 'MMT_RESOURCES') fieldNamed: 'res_id').
	aTable createFieldNamed: 'task_regression_early' type: platform float.
	aTable createFieldNamed: 'task_regression_planned' type: platform float.
	aTable createFieldNamed: 'task_regression_late' type: platform float.

 

 


 

 

> "Tom Robinson" <[hidden email]> |

On 4/28/13 2:18 PM, [hidden email] wrote:

 

> Well, the problem is what Tom indicated. I tried the hash thing, but that doesn't give any improvement. I modified the cache policy and that seems to help a bit. I also simplified some of my descriptor systems and moved some operations over to a separate (simpler) mapping using the same table.

You need to reimplement #= as well as #hash. They need to be consistent. Can you post your Task class and your descriptor? There should not be duplicates of the same Object in the cache. Writing #= and #hash correctly will prevent this.

I don't really have the problem of not knowing my primary keys (as Anthony indicated), I can image this within a kind of web application where you might do a lot of things before actually registering but in my case I am on a distant desktop were other users can do annoying things at the same time.

All I want to do is to reduce traffic as much as possible while remaining as close as possible in sync with the db.

 

The thing is that I don't really know how this cache operates. Why is it that I can have 10 different versions of the same Object in my cache. ? Why is it that isRegistered: is a useless method ? What is the difference between realObject, registeredObject ExpiredObject etc.

 

In the next snippet I add newAv to a collection within aTask. aTask has just been updated and committed so totally up to date. However if I don't refresh: aTask before adding newAv (or crate another Task with the same key) it will provoke a duplicate error, but only sometimes ??.

 

newAv := Avancement

registerdate: Date today

percentage: aTask task_pct_comp

delta: aTask task_pct_comp / 100 * aTask task_work

projectKey: aTask proj_id

wbs_chain: aTask task_wbs_chain.

 

self getGlorpSession inUnitOfWorkDo:

[self getGlorpSession register: newAv.

self getGlorpSession register: (self getGlorpSession refresh: aTask).

aTask avancementcol add: newAv].

 

I suspect this to happen because of the multiple examples of the same Object that can reside within the cache. The lookup method will return whatever Object in the cache with the same key. If the object returned has the same pointer Glorp is happy. But if it is not the same pointer I will get an duplicate error … In order to avoid troubles I read object and all of its associated stuff which cost me maybe 30 milliseconds. Knowing that all this information is ready available within my system annoys me however, because I do this at hundreds of places.

 

The natural behavior for me would be that register: does an insert only when the primary key is nil. If the primary key is not nil it provokes an update if the object is available in the cache and otherwise checks the database of the key (row) actually exists.  If the object knows it primary key, is not in the cache and does not exists on the database it does an insert, all other cases its an update.

 

What do you think ??

 

Regards,

 

@+Maarten,

 

 

> "Tom Robinson" [hidden email] |

On 4/27/13 7:30 AM, [hidden email] wrote:

Hi,

The Task object I map to Glorp is a subclass off AssociationTreeWithParent

 

 

 

Smalltalk defineClass: #Task

superclass: #{UI.AssociationTreeWithParent}

 

As Tasks are linked together with various relations ships both parent child and different end to start relationships they and up with an almost unlimited level of proxied relationships.

 

Now when Glorp checks whether the Cache contains a Task Object with the method

 

 

 

includesKey: key as: anObject

"Return true if we include the object, and it matches the given object. If we include a different object with the same key, raise an exception. Don't listen to any expiry policy"

| item value |

item := self basicAt: key ifAbsent: [^false].

value := policy contentsOf: item.

value == anObject ifFalse: [

(DuplicatePrimaryKeyException new: anObject existing: value) signal].

^true.

 

The comparaison value == anObject seem to return false because of the lower level proxy differences. (and hash values are sometimes different to ?!)

If you remove a child, do you want the Task with the child to be = to the Task without the child? If you change the parent or the predecessor or successor, do you want the Task before the change to be equal to the one after the change? I would suggest, assuming that there is a bunch of Task related info inside the Task object, that you want it to be equal to itself even if the relationships change. That suggests that you may want to look at implementing = and hash on your Task class such that both exclude the parent(s), children, predecessor, successor, etc from their calculations.  This may mean that you need other methods that do comparisions between 2 tasks that are =, but have different relationships, but that depends on your application.

It seems to me that the relationships *can't* be included in = or hash because what the Cache is trying to figure out is "Does this object represent a row in the database that I have loaded and turned into an object?". The only values that can be included in the = or hash calculation are ones that, if changed, would require that you write a new row to the database and delete the old one, I think.

 

 

This has given me lots of problems as most of the time I have to use refeshed reads in order not to have duplicate errors. Meaning creating a seperate Object with a refreshed read, updating it and then refreshing the original one. In the end instead of doing a single update I refesh a whole number of related things, then I update and then I reread all these things again, ending up very inefficient in database trafic having way to much code.

 

No if I read all the comments in "isNew: anObject " I potentially have the impression that I might have hit something unfinished here ?!?

Probably the only person who could speak to that question definitively is Alan Knight...

 

isNew: anObject
	"When registering, do we need to add this object to the collection of new objects? New objects are treated specially when computing what needs to be written, since we don't have their previous state"

	| key descriptor |
	(currentUnitOfWork notNil and: [currentUnitOfWork isRegistered: anObject]) ifTrue: [^false].
	descriptor := self descriptorFor: anObject.
	descriptor isNil ifTrue: [^false].
	"For embedded values we assume that they are not new. This appears to work. I can't really justify it."
	self needsWork: 'cross your fingers'.
	descriptor mapsPrimaryKeys ifFalse: [^false].
	key := descriptor primaryKeyFor: anObject.
	key isNil ifTrue: [super halt. ^true].
	"If the cache contains the object, but the existing entry is due to be deleted, then count this entry as a new one being added with the same primary key (ick) as the old one"
	^[(self cacheContainsObject: anObject key: key) not]
		on: DuplicatePrimaryKeyException
		do: [:ex |
			(currentUnitOfWork notNil and: [currentUnitOfWork willDelete: ex existingObject])
				ifTrue: [
					self cacheRemoveObject: ex existingObject.
					ex return: true]
				ifFalse: [ex pass]]


 

I am now experimenting with = comparisons instead of == comparisons; I am not sure this is the way to do it, though.

Any hints?

 

Regards,

 

@+Maarten,

 

 

 

 

--
You received this message because you are subscribed to the Google Groups "glorp-group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
To post to this group, send email to [hidden email].
Visit this group at http://groups.google.com/group/glorp-group?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.
 
 

Re: [vwnc] DuplicatePrimaryKeyException

Maarten Mostert
In reply to this post by Maarten Mostert

Tom,

Below you can see what my cache looks like.

Do you have an explanation for all the duplicate copies in extraReferences?

 

Or for the existence of the following method? It adds the current version of an item to extraReferences. Okay, but extraReferences is an OrderedCollection with many occurrences of the same item. Once that is done, haven't you lost track of which entry the current item is? Yes/no?

 

markEntryAsCurrent: anItem
	"The policy has told us to mark an item as current. This is only really useful for weak policies, which tell us to keep an additional pointer to the object in a (presumably) fixed-size collection"

	extraReferences isNil ifFalse: [extraReferences add: anItem].
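For comparison, a variation that would avoid the duplicate entries shown in the dump below is a membership check before adding. This is a sketch only, and note that it would change the recency behaviour the fixed-size queue relies on: in the shipped version, repeated adds are precisely what keep recently touched items from falling off the end of the queue.

```smalltalk
markEntryAsCurrent: anItem
	"Sketch: add the item only if it is not already held. The shipped
	version simply appends and lets the fixed-size queue discard the
	oldest references, which is why duplicates accumulate."
	extraReferences isNil ifTrue: [^self].
	(extraReferences identityIncludes: anItem)
		ifFalse: [extraReferences add: anItem]
```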


==========================

 

 

items: EphemeralValueDictionary (
	1 -> 1 -> 'root'  from: 8 avril 2013  to: 3 mai 2013
	2 -> 2 -> 'Project 1'  from: 8 avril 2013  to: 3 mai 2013
	3 -> 3 -> '<new task>'  from: 29 avril 2013  to: 3 mai 2013
	4 -> 4 -> '<new task>'  from: 29 avril 2013  to: 3 mai 2013
	5 -> 5 -> '<new task>'  from: 29 avril 2013  to: 3 mai 2013
	6 -> 6 -> '<new task>'  from: 29 avril 2013  to: 3 mai 2013
	7 -> 7 -> '<new task>'  from: 29 avril 2013  to: 3 mai 2013
	8 -> 8 -> '<new task>'  from: 29 avril 2013  to: 3 mai 2013
	9 -> 9 -> 'azazzafdff'  from: 8 avril 2013  to: 15 avril 2013
	10 -> 10 -> 'Project 2'  from: 29 avril 2013  to: 3 mai 2013
	11 -> 11 -> 'ghghghjhhjhhjjhjh'  from: 29 avril 2013  to: 3 mai 2013
	13 -> 13 -> 'dsdsddsdsdsdssdsd'  from: 29 avril 2013  to: 3 mai 2013 )

 

 

extraReferences

 

OrderedCollection (10 -> 'Project 2'  from: 29 avril 2013   to: 3 mai 2013 10 -> 'Project 2'  from: 29 avril 2013   to: 3 mai 2013 11 -> 'ghghghjhhjhhjjhjh'  from: 29 avril 2013   to: 3 mai 2013 11 -> 'ghghghjhhjhhjjhjh'  from: 29 avril 2013   to: 3 mai 2013 13 -> 'dsdsddsdsdsdssdsd'  from: 29 avril 2013   to: 3 mai 2013 13 -> 'dsdsddsdsdsdssdsd'  from: 29 avril 2013   to: 3 mai 2013 1 -> 'root'  from: 8 avril 2013   to: 3 mai 2013 1 -> 'root'  from: 8 avril 2013   to: 3 mai 2013 11 -> 'ghghghjhhjhhjjhjh'  from: 29 avril 2013   to: 3 mai 2013 2 -> 'Project 1'  from: 8 avril 2013   to: 3 mai 2013 2 -> 'Project 1'  from: 8 avril 2013   to: 3 mai 2013 3 -> '<new task>'  from: 29 avril 2013   to: 3 mai 2013 3 -> '<new task>'  from: 29 avril 2013   to: 3 mai 2013 4 -> '<new task>'  from: 29 avril 2013   to: 3 mai 2013 4 -> '<new task>'  from: 29 avril 2013   to: 3 mai 2013 5 -> '<new task>'  from: 29 avril 2013   to: 3 mai 2013 5 -> '<new task>'  from: 29 avril 2013   to: 3 mai 2013 6 -> '<new task>'  from: 29 avril 2013   to: 3 mai 2013 6 -> '<new task>'  from: 29 avril 2013   to: 3 mai 2013 7 -> '<new task>'  from: 29 avril 2013   to: 3 mai 2013 7 -> '<new task>'  from: 29 avril 2013   to: 3 mai 2013 8 -> '<new task>'  from: 29 avril 2013   to: 3 mai 2013 8 -> '<new task>'  from: 29 avril 2013   to: 3 mai 2013 9 -> 'azazzafdff'  from: 8 avril 2013   to: 15 avril 2013 9 -> 'azazzafdff'  from: 8 avril 2013   to: 15 avril 2013 1 -> 'root'  from: 8 avril 2013   to: 3 mai 2013 10 -> 'Project 2'  from: 29 avril 2013   to: 3 mai 2013 2 -> 'Project 1'  from: 8 avril 2013   to: 3 mai 2013 10 -> 'Project 2'  from: 29 avril 2013   to: 3 mai 2013 2 -> 'Project 1'  from: 8 avril 2013   to: 3 mai 2013 10 -> 'Project 2'  from: 29 avril 2013   to: 3 mai 2013 2 -> 'Project 1'  from: 8 avril 2013   to: 3 mai 2013 10 -> 'Project 2'  from: 29 avril 2013   to: 3 mai 2013 2 -> 'Project 1'  from: 8 avril 2013   to: 3 mai 2013 10 -> 'Project 2'  from: 29 avril 2013   to: 3 mai 2013 10 -> 
'Project 2'  from: 29 avril 2013   to: 3 mai 2013 1 -> 'root'  from: 8 avril 2013   to: 3 mai 2013 2 -> 'Project 1'  from: 8 avril 2013   to: 3 mai 2013 10 -> 'Project 2'  from: 29 avril 2013   to: 3 mai 2013 10 -> 'Project 2'  from: 29 avril 2013   to: 3 mai 2013 1 -> 'root'  from: 8 avril 2013   to: 3 mai 2013 10 -> 'Project 2'  from: 29 avril 2013   to: 3 mai 2013 11 -> 'ghghghjhhjhhjjhjh'  from: 29 avril 2013   to: 3 mai 2013 13 -> 'dsdsddsdsdsdssdsd'  from: 29 avril 2013   to: 3 mai 2013 2 -> 'Project 1'  from: 8 avril 2013   to: 3 mai 2013 10 -> 'Project 2'  from: 29 avril 2013   to: 3 mai 2013 2 -> 'Project 1'  from: 8 avril 2013   to: 3 mai 2013 10 -> 'Project 2'  from: 29 avril 2013   to: 3 mai 2013 2 -> 'Project 1'  from: 8 avril 2013   to: 3 mai 2013 10 -> 'Project 2'  from: 29 avril 2013   to: 3 mai 2013)

 

 


 

 


On 4/28/13 2:18 PM, [hidden email] wrote:

 

Well, the problem is what Tom indicated. I tried the hash thing, but that doesn't give any improvement. I modified the cache policy, and that seems to help a bit. I also simplified some of my descriptor systems and moved some operations over to a separate (simpler) mapping using the same table.

You need to reimplement #= as well as #hash. They need to be consistent. Can you post your Task class and your descriptor? There should not be duplicates of the same Object in the cache. Writing #= and #hash correctly will prevent this.

I don't really have the problem of not knowing my primary keys (as Anthony indicated). I can imagine this in a kind of web application, where you might do a lot of things before actually registering, but in my case I am on a remote desktop where other users can do annoying things at the same time.

All I want to do is reduce traffic as much as possible while staying as closely in sync with the db as possible.

 

The thing is that I don't really know how this cache operates. Why is it that I can have 10 different versions of the same object in my cache? Why is isRegistered: a useless method? What is the difference between realObject, registeredObject, ExpiredObject, etc.?

 

In the next snippet I add newAv to a collection within aTask. aTask has just been updated and committed, so it is totally up to date. However, if I don't refresh: aTask before adding newAv (or create another Task with the same key), it will provoke a duplicate error, but only sometimes??

 

newAv := Avancement
	registerdate: Date today
	percentage: aTask task_pct_comp
	delta: aTask task_pct_comp / 100 * aTask task_work
	projectKey: aTask proj_id
	wbs_chain: aTask task_wbs_chain.

self getGlorpSession inUnitOfWorkDo:
	[self getGlorpSession register: newAv.
	self getGlorpSession register: (self getGlorpSession refresh: aTask).
	aTask avancementcol add: newAv].

 

I suspect this happens because of the multiple copies of the same object that can reside within the cache. The lookup method will return whatever object in the cache has the same key. If the returned object has the same pointer, Glorp is happy; if it is not the same pointer, I get a duplicate error… In order to avoid trouble I reread the object and all of its associated stuff, which costs me maybe 30 milliseconds. It annoys me, however, knowing that all this information is readily available within my system, because I do this in hundreds of places.

 

The natural behavior for me would be for register: to do an insert only when the primary key is nil. If the primary key is not nil, it does an update if the object is available in the cache, and otherwise checks the database whether the key (row) actually exists. If the object knows its primary key, is not in the cache, and does not exist in the database, it does an insert; in all other cases it is an update.
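That proposed policy could be sketched as a session-side method like the following. The selectors insertObject:, updateObject: and existsInDatabaseWithKey: are placeholders for illustration, not real Glorp API:

```smalltalk
registerForWrite: anObject
	"Hypothetical sketch of the register: behaviour proposed above:
	no key -> insert; key known and cached -> update; key known but
	uncached -> consult the database to decide."
	| key |
	key := (self descriptorFor: anObject) primaryKeyFor: anObject.
	key isNil ifTrue: [^self insertObject: anObject].
	(self cacheContainsObject: anObject key: key)
		ifTrue: [^self updateObject: anObject].
	(self existsInDatabaseWithKey: key)
		ifTrue: [self updateObject: anObject]
		ifFalse: [self insertObject: anObject]
```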

 

What do you think?

 

Regards,

 

@+Maarten,

 

 


On 4/27/13 7:30 AM, [hidden email] wrote:


Re: [vwnc] DuplicatePrimaryKeyException

Maarten Mostert

OK, I now tested with suppression of the VW-type weak cache.

 

collectionForExtraReferences
	"^FixedSizeQueue maximumSize: self numberOfReferencesToKeepAround."
	^nil
================


So in the next table, on the left side I use Glorp with VW's weak policy, which means that I need to refresh to avoid duplicates. The red parts are the extra queries Glorp does to refresh.

 

On the right side I set the weak cache to nil, register directly, and get a more effective query without, for the moment, the duplicate error.

 

The time gain is about 40 ms on a local PostgreSQL.

 

Let's cross fingers that this works over the long run…

 

 

Left (weak policy, with refresh):

self getGlorpSession inUnitOfWorkDo:
	[self getGlorpSession register: newAv.
	self getGlorpSession register: (self getGlorpSession refresh: aTask).
	aTask avancementcol add: newAv.
	aTask verifyAvancements].

Right (extraReferences suppressed, register directly):

self getGlorpSession inUnitOfWorkDo:
	[self getGlorpSession register: newAv.
	self getGlorpSession register: aTask.
	aTask avancementcol add: newAv.
	aTask verifyAvancements].

SELECT t1.id, t1.avanc_proj_id, t1.avanc_org_key, t1.avanc_wbs_chain, t1.avanc_wbs_key, t1.avanc_resource_key, t1.registerdate, t1.pourcentage, t1.delta

FROM MMT_AVANCEMENT t1

WHERE (t1.avanc_plannedtask_id = 14) ORDER BY t1.registerdate

(0.001 s)

SELECT t1.task_id, t1.task_name, t1.task_order, t1.proj_id, t1.outlinenumber, t1.msp_uid, t1.task_financial, t1.task_imputation, t1.task_early_finish, t1.task_late_start, t1.task_dur, t1.task_start_date, t1.task_finish_date, t1.task_pct_comp, t1.critical_a, t1.critical_b, t1.critical_c, t1.critical_d, t1.task_type, t1.task_creation_date, t1.task_early_start, t1.task_late_finish, t1.task_work, t1.task_cost, t1.task_fixed_cost, t1.task_wbs_chain, t1.task_wbs_node, t1.task_org_key, t1.task_org_node, t1.task_resource

FROM MMT_TASK t1, MMT_TASK_TREE_LINK t2

WHERE ((t2.child = t1.task_id) AND (t2.parent = 14)) ORDER BY t1.task_order

(0.002 s)

SELECT t1.task_id, t1.task_name, t1.task_order, t1.proj_id, t1.outlinenumber, t1.msp_uid, t1.task_financial, t1.task_imputation, t1.task_early_finish, t1.task_late_start, t1.task_dur, t1.task_start_date, t1.task_finish_date, t1.task_pct_comp, t1.critical_a, t1.critical_b, t1.critical_c, t1.critical_d, t1.task_type, t1.task_creation_date, t1.task_early_start, t1.task_late_finish, t1.task_work, t1.task_cost, t1.task_fixed_cost, t1.task_wbs_chain, t1.task_wbs_node, t1.task_org_key, t1.task_org_node, t1.task_resource

FROM MMT_TASK t1, MMT_TASK_TREE_LINK t2

WHERE ((t2.parent = t1.task_id) AND (t2.child = 14)) LIMIT 1

(0.002 s)

SELECT t1.task_id, t1.task_name, t1.task_order, t1.proj_id, t1.outlinenumber, t1.msp_uid, t1.task_financial, t1.task_imputation, t1.task_early_finish, t1.task_late_start, t1.task_dur, t1.task_start_date, t1.task_finish_date, t1.task_pct_comp, t1.critical_a, t1.critical_b, t1.critical_c, t1.critical_d, t1.task_type, t1.task_creation_date, t1.task_early_start, t1.task_late_finish, t1.task_work, t1.task_cost, t1.task_fixed_cost, t1.task_wbs_chain, t1.task_wbs_node, t1.task_org_key, t1.task_org_node, t1.task_resource

FROM MMT_TASK t1

WHERE (t1.task_id = 14) LIMIT 1

(0.001 s)

SELECT t1.id, t1.avanc_proj_id, t1.avanc_org_key, t1.avanc_wbs_chain, t1.avanc_wbs_key, t1.avanc_resource_key, t1.registerdate, t1.pourcentage, t1.delta

FROM MMT_AVANCEMENT t1

WHERE (t1.avanc_plannedtask_id = 14) ORDER BY t1.registerdate

(0.001 s)

Begin Transaction

select nextval('MMT_AVANCEMENT_id_seq') from pg_attribute limit 1

(0.027 s)

INSERT INTO MMT_AVANCEMENT (id,avanc_proj_id,avanc_org_key,avanc_wbs_chain,avanc_wbs_key,avanc_resource_key,registerdate,pourcentage,delta,avanc_plannedtask_id)  VALUES (12,1,1,'<2><1>',2,1,'2013-05-01',12.0,0.0,14)

(0.003 s)

Commit Transaction

SELECT t1.id, t1.avanc_proj_id, t1.avanc_org_key, t1.avanc_wbs_chain, t1.avanc_wbs_key, t1.avanc_resource_key, t1.registerdate, t1.pourcentage, t1.delta

FROM MMT_AVANCEMENT t1

WHERE (t1.avanc_plannedtask_id = 15) ORDER BY t1.registerdate

(0.001 s)

SELECT t1.task_id, t1.task_name, t1.task_order, t1.proj_id, t1.outlinenumber, t1.msp_uid, t1.task_financial, t1.task_imputation, t1.task_early_finish, t1.task_late_start, t1.task_dur, t1.task_start_date, t1.task_finish_date, t1.task_pct_comp, t1.critical_a, t1.critical_b, t1.critical_c, t1.critical_d, t1.task_type, t1.task_creation_date, t1.task_early_start, t1.task_late_finish, t1.task_work, t1.task_cost, t1.task_fixed_cost, t1.task_wbs_chain, t1.task_wbs_node, t1.task_org_key, t1.task_org_node, t1.task_resource

FROM MMT_TASK t1, MMT_TASK_TREE_LINK t2

WHERE ((t2.child = t1.task_id) AND (t2.parent = 15)) ORDER BY t1.task_order

(0.003 s)

SELECT t1.task_id, t1.task_name, t1.task_order, t1.proj_id, t1.outlinenumber, t1.msp_uid, t1.task_financial, t1.task_imputation, t1.task_early_finish, t1.task_late_start, t1.task_dur, t1.task_start_date, t1.task_finish_date, t1.task_pct_comp, t1.critical_a, t1.critical_b, t1.critical_c, t1.critical_d, t1.task_type, t1.task_creation_date, t1.task_early_start, t1.task_late_finish, t1.task_work, t1.task_cost, t1.task_fixed_cost, t1.task_wbs_chain, t1.task_wbs_node, t1.task_org_key, t1.task_org_node, t1.task_resource

FROM MMT_TASK t1, MMT_TASK_TREE_LINK t2

WHERE ((t2.parent = t1.task_id) AND (t2.child = 15)) LIMIT 1

(0.002 s)

Begin Transaction

select nextval('MMT_AVANCEMENT_id_seq') from pg_attribute limit 1

(0.001 s)

INSERT INTO MMT_AVANCEMENT (id,avanc_proj_id,avanc_org_key,avanc_wbs_chain,avanc_wbs_key,avanc_resource_key,registerdate,pourcentage,delta,avanc_plannedtask_id)  VALUES (13,1,1,'<2><1>',2,1,'2013-05-01',23.0,0.0,15)

(0.001 s)

Commit Transaction

Time to run Register Av 119.68 milliseconds

Time to run Register Av 83.393 milliseconds


Re: [vwnc] DuplicatePrimaryKeyException

Alan Knight-2
I just noticed this thread, and don't have time right now to go through all the details. However, the most basic point is that the cache should never have proxies, so the identity check is correct. There should only ever be one copy of an object in the cache, and it should be keyed by its primary key. If that's not happening, then something is going very badly wrong, and you need to track down why that's happening. Anything that puts an object into the cache should be unwrapping it if it's a proxy.
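The invariant Alan describes could be enforced at the single point where objects enter the cache. A sketch, assuming Glorp's proxy protocol (isGlorpProxy / getValue) and a hypothetical basicAt:put: insertion point on the cache:

```smalltalk
at: key insert: anObject
	"Sketch: unwrap a proxy before it reaches the cache, so the later
	identity check in includesKey:as: always compares real objects."
	| real |
	real := anObject isGlorpProxy
		ifTrue: [anObject getValue]
		ifFalse: [anObject].
	^self basicAt: key put: real
```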


On 1 May 2013 04:27, <[hidden email]> wrote:
