I am currently having deep trouble with DuplicatePrimaryKeyExceptions in read operations ... ???
The problem is that the Session's Cache contains two subcaches for separate classes that are the same object. Meaning:

    (sess cacheFor: ClassA) == (sess cacheFor: ClassB) evaluates to true.

The DuplicateKey exception happens in a check for isNew: during registration of a TransitiveClosure. It happens because in that Cache there are two instances of classes ClassA and ClassB that have the same primary key. That is why

    Cache>>#includesKey: key as: anObject
        "Return true if we include the object, and it matches the given object.
         If we include a different object with the same key, raise an exception.
         Don't listen to any expiry policy"
        | item value |
        item := self basicAt: key ifAbsent: [^false].
        value := policy contentsOf: item.
        value == anObject ifFalse: [(DuplicatePrimaryKeyException new: anObject existing: value) signal].
        ^true

fails with a DuplicateKeyException. instanceA is an instance of ClassA and has id=1541, and instanceB is an instance of ClassB and also has id=1541. Glorp assumes that a subcache only contains instances of one class, and that usually is true. Inspecting the Cache and its subcaches clearly shows that the subcache contains objects from both ClassA and ClassB.

What could I have done wrong? How could I have two subcaches in a Session's Cache that both are the identical object? I am not messing with Caches in my application code. I know I am not clever enough to change anything "down there"... ;-)

I tried a fresh image and loaded my code into it, and the problem still persists.

Any hints? Anybody ever seen this? Where to look?

Joachim
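(For reference, a minimal workspace sketch of that check, assuming `session` is an already logged-in GlorpSession and using the class names and id from this post; `includesKey:as:` is the Cache method quoted above:)

    "Print-it on each expression in a workspace."
    (session cacheFor: ClassA) == (session cacheFor: ClassB).
        "true here reproduces the shared-subcache situation"
    (session cacheFor: ClassA) includesKey: 1541 as: instanceA.
        "signals DuplicatePrimaryKeyException if a different object already sits under key 1541"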
Hi Joachim,
I had a somewhat related issue when using TypeResolvers for abstract classes. If you have AbstractClass as the parent of ClassA and ClassB, querying ClassA puts the results in one Cache instance, but querying AbstractClass gives you another cache instance.

Are you using Hierarchical Type Resolvers?

Regards,

Esteban A. Maringolo
Esteban,
Yes, these are both subclasses of an abstract class. They live in distinct tables. The TypeResolver, however, is a HorizontalTypeResolver. The mappings which are used to read the instances are OneToOneMappings or OneToManyMappings that reference the concrete subclass and table.

Loading of these objects stopped working when I changed the superclass of ClassA and ClassB from Object to this abstract superclass. So you are spot on! Do you have an idea how I could solve the issue?

Thanks,

Joachim
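(For context, a minimal sketch of the kind of DescriptorSystem setup described here: an abstract root with a HorizontalTypeResolver and concrete subclasses mapped to their own tables. The class, table and selector names are illustrative, not Joachim's actual code, and the typeResolverFor:/register: calls follow the usual Glorp conventions as I recall them, so check them against your Glorp version:)

    MyDescriptorSystem>>typeResolverForAbstractThing
        ^HorizontalTypeResolver forRootClass: AbstractThing

    MyDescriptorSystem>>descriptorForConcreteThingA: aDescriptor
        | table |
        table := self tableNamed: 'CONCRETE_A'.
        aDescriptor table: table.
        (aDescriptor newMapping: DirectMapping) from: #id to: (table fieldNamed: 'id').
        (self typeResolverFor: AbstractThing) register: aDescriptor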
Hi again, Esteban,
On 10.10.17 at 16:10, Esteban A. Maringolo wrote:
> If you have AbstractClass as the parent of ClassA and ClassB, querying
> ClassA puts the results in one Cache instance, but querying AbstractClass
> gives you another cache instance.

This is a bug, isn't it? The cache should be chosen based on the concrete class an object is an instance of, no?

My expectation would be that if I do

    session read: AbstractClass where: [:whatever | ]

the resulting objects always end up in the caches of the concrete subclasses. I would also expect that cache lookups do not happen in a Cache of the abstract superclass, but in the concrete subclasses' subcaches. Am I wrong?

What's funny in my case is that the Cache has subcaches for both (and other) concrete subclasses, but both subcaches are the very same (identical) cache object.

Joachim
SOLVED!

Life can be so damn hard. Especially when you find out that you've been looking at all kinds of esoteric options instead of the only obvious thing.

The problem was indeed that in a Cache that holds objects of different classes there were objects with identical primary keys. But nothing was wrong with Glorp - it was (again) my fault.

I had changed two classes to be subclasses of an already mapped class which is abstract and already had subclasses. To make a long story short, I completely forgot about the id sequences. The objects in the tables already had ids, and these had been generated by a sequence different from the one used by that abstract superclass and all its subclasses, so there were overlaps.

Glorp had told me so: DuplicatePrimaryKeyException. But I was so blinded by the fact that Glorp reuses a Cache for several classes (which is okay) that I didn't see the obvious.

Now that I did some SQL magic to assign new IDs to my objects, all is well again.

Nevertheless, I think without Esteban's comment, I would possibly have searched for another decade or so ;-) Thanks for your tip, dude!

Joachim
I'm glad I helped you to narrow the search, although the bug wasn't a Glorp issue per se.

Now that you mention it, I was bitten by a similar issue; after that I made a lot of helper methods in my DescriptorSystem class that create common fields such as IDs, names, and modification/creation timestamps. That way I can be sure, and easily spot, when I'm not using the same sequence/ids for different classes in the same resolver.

Regards!

Esteban A. Maringolo
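(For illustration, a minimal sketch of such a helper, assuming the usual Glorp table-definition idiom; the selector and field names are illustrative, not Esteban's actual code:)

    MyDescriptorSystem>>addIdFieldTo: aTable
        "Define the primary key the same way for every table, so all tables of one
         hierarchy end up on the same serial/sequence setup."
        ^(aTable createFieldNamed: 'id' type: platform serial) bePrimaryKey

    MyDescriptorSystem>>tableForPLAYER: aTable
        self addIdFieldTo: aTable.
        aTable createFieldNamed: 'name' type: (platform varChar: 100)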
One way to avoid this problem is to use a single sequence for all of your primary keys. It means that the numbers in the tables get bigger faster, but it also means that every database object has a unique id. You may have to futz with Glorp a little to do this, but it can save round trips to the database, since you only get sequence numbers once... It sounds like your application might be beyond the stage where this is doable, but there is always next time.
In another system using Smalltalk with a custom ORM (not Glorp), we went down the path of GUID/UUID based ID generation. We avoided some rare ID collisions in the cases where no sequences were used (or available), and of course saved one extra roundtrip. The problem was that the end result in terms of database size was significant, and doing manual queries was a real pain.

I've read that some massive systems create unique IDs by combining a shard ID + timestamp + some random bits. With all that you can have a more readable number, one that can even be encoded as hex or some other more compact representation.

Regards!

Esteban A. Maringolo
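(For illustration, a minimal Pharo sketch of that kind of composed id; the 10/32/12 bit layout and the selector name are illustrative, not from any particular system:)

    nextIdForShard: shardId
        "10 bits of shard id, 32 bits of unix seconds, 12 random low bits."
        | seconds randomBits |
        seconds := DateAndTime now asUnixTime.
        randomBits := (Random new next * 4096) truncated.
        ^((shardId bitAnd: 1023) bitShift: 44)
            + ((seconds bitAnd: 16rFFFFFFFF) bitShift: 12)
            + randomBits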
You see, that's the kind of stuff I was looking for in my other post about getting together: sometimes you just need to learn lessons, and sometimes there is a shortcut through that process if you have peers who've found things out the hard way or learned from another peer.
Tom,
You are of course right, and your tip/comment is highly appreciated. Some choices cannot be easily changed for an existing system, unfortunately. I am working on a customer project that uses this approach in TopLink, and there are no such problems there.

Joachim
I'm coming back to this, since I was affected by a similar situation where I had a FilteredTypeResolver and ended up with three caches: one for the abstract class (Participant) and two for the concrete subclasses (Player and Team). This was also causing some "identity" issues, where objects were being saved as new when they already existed (they had been read previously in the session, but were in a different subCache).

So what I wanted was to simplify the cache lookup and to avoid having duplicate caches.

I solved the cache lookup issue (see below), but not the creation; I always ended up having a duplicate cache, and I found that it was `makeCacheFor: aClass` that was explicitly creating one for the "describedClass" (the root of the resolver, if any). So I also changed this to query the descriptor as to whether it wants to use the same cache for abstract and concrete classes.

    CacheManager>>makeCacheFor: aClass
        | rootClass cache |
        rootClass := session isNil ifTrue: [aClass] ifFalse: [session lookupRootClassFor: aClass].
        cache := subCaches at: rootClass ifAbsentPut: [Cache newFor: rootClass in: self].
        subCaches at: aClass put: cache.
        ^cache

    GlorpSession>>lookupRootClassFor: aClass
        | descriptor |
        descriptor := self system descriptorFor: aClass.
        ^descriptor notNil
            ifTrue: [descriptor typeMappingRootDescriptor describedClass]
            ifFalse: [aClass]

What I did was to hook into the cache lookup, delegating to the type resolver as follows:

    CacheManager>>cacheForClass: aClass
        | classKey |
        classKey := self system cacheKeyClassFor: aClass.
        ^subCaches at: classKey ifAbsentPut: [self makeCacheFor: classKey]

    DescriptorSystem>>cacheKeyClassFor: aClass
        ^(self descriptorFor: aClass) typeResolver cacheKeyClassFor: aClass

    FilteredTypeResolver>>cacheKeyClassFor: aClass
        ^self useRootClassCache
            ifTrue: [self rootClass]
            ifFalse: [aClass]

    FilteredTypeResolver>>useRootClassCache
        ^useRootClassCache ifNil: [useRootClassCache := true]

    FilteredTypeResolver>>useRootClassCache: aBoolean
        useRootClassCache := aBoolean

    TypeResolver>>cacheKeyClassFor: aClass
        ^aClass

And of course I added the `useRootClassCache` instVar to the FilteredTypeResolver class.

I also modified makeCacheFor: to only use the root cache class (if specified) and not create any "sub" cache:

    CacheManager>>makeCacheFor: aClass
        | rootClass cache |
        rootClass := session isNil ifTrue: [aClass] ifFalse: [session system cacheKeyClassFor: aClass].
        cache := subCaches at: rootClass ifAbsentPut: [Cache newFor: rootClass in: self].
        "subCaches at: aClass put: cache."
        ^cache

This doesn't cause any new test to fail, unifies the cache creation and lookup criteria (I don't know why two caches are created but only one is used for lookup), and solves the above-mentioned issue with the identity lookup, where I would end up with aPlayer (id=1) in the Participant cache but another copy in the Player cache.

What am I missing? Is this a valid solution or just a workaround? If it's a valid solution, how do I contribute it upstream?

Regards,

Esteban Maringolo
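(For clarity, with the changes above the behaviour could be switched per resolver from the DescriptorSystem; a hedged sketch, assuming the usual forRootClass: constructor plus the useRootClassCache: setter introduced above, with an illustrative selector name:)

    MyDescriptorSystem>>typeResolverForParticipant
        ^(FilteredTypeResolver forRootClass: Participant)
            useRootClassCache: true;
            yourself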
Continuing with what I wrote today, I kept debugging the whole process
of instantiation and how it was affecting some parts of the logic of my code that relies on object identity. Debugging a OneToOne reference, I found that when it is materializing the Proxy it cannot determine its primary key, and then it doesn't have a key to look up in the cache, thus causing a miss and a new (unnecessary) instantiation.

    primaryKeyFrom: aDictionary
        "Construct a primary key from the given parameters."
        <..trimmed code..>
        ^self whereClause primaryKeyFromDictionary: aDictionary

And then it hits

    primaryKeyFromDictionary: aDictionary
        "Given a set of parameters, return a primary key suitable for retrieving our target.
         Do this only if the expression is for a primary key, and has no other conditions than
         the primary key one. If the table's primary key is composite, return the array that
         will be needed with the found values in the right position and nils elsewhere.
         (If unreferenced primaryKey fields are nillable, this could return a matching key.
         A query with other values elsewhere will throw away the primary key, returning nil.
         One without will "
        | left right field primaryKeyFields |
        relation = #AND ifTrue: [
            left := leftChild primaryKeyFromDictionary: aDictionary.
            left isNil ifTrue: [^nil].
            right := rightChild primaryKeyFromDictionary: aDictionary.
            right isNil ifTrue: [^nil].
            ^self assembleCompositePrimaryKeyFrom: left and: right].
        relation = #= ifFalse: [^nil].
        "... continues, but not relevant for my case"

which basically walks the relation looking only for AND or = operators. But since the lookup is performed using a filtered type resolver, the where clause is composed of an IN relation and an = relation, as follows:

    "Base(GwParticipant).GWPARTICIPANT.type IN #('team' 'player') AND Base(GwParticipant).GWPARTICIPANT.id = Parameter(GWSCORECARD.participant_id)"

My gut tells me to avoid hierarchical type resolvers at all costs, since in most cases they cause more trouble than they solve, but that goes against the reuse you get from a sound hierarchy of objects.

The alternative is to modify #primaryKeyFromDictionary: to consider the case of FilteredTypeResolvers (and probably all resolvers except identity ones), but since a RelationExpression doesn't know anything about that, it might end up dirty.

What do you suggest? Do you have issues like this in other dialects?

Regards,

Esteban A. Maringolo
Esteban,

I think I am seeing the same bug from time to time - but not on every read or write operation.

Just to make sure it is the same problem: I keep getting "ominous" DuplicatePrimaryKeyException errors in #isNew:. The classes I get these for are subclasses of an abstract superclass, all living in the same table, so I am using a FilteredTypeResolver as well. They share a Sequence, so this is not a duplicate key, but rather a cache lookup problem. I am very sure it is not a classical DuplicateKey exception, because it occurs before any SQL statement is sent to the database. This all happens somewhere in the registerTransitiveClosure cascade from a persistent object.

Here's the relevant part of the walkback:

    DuplicatePrimaryKeyException(GlorpError)>>#signal
        receiver = Signal on Exception: (ExDuplicatePrimaryKeyException) An exception has occurred
    Cache>>#includesKey:as:
        receiver = a Cache
        arg1 = 7109
        arg2 = an ElsterUStVA
        temp1 = an ElsterUStVA
        temp2 = an ElsterUStVA
    CacheManager>>#containsObject:key:
        receiver = a CacheManager
        arg1 = an ElsterUStVA
        arg2 = 7109
        temp1 = a Cache
    GlorpSession>>#cacheContainsObject:key:
        receiver = a GlorpSession
        arg1 = an ElsterUStVA
        arg2 = 7109
    [] in GlorpSession>>#isNew:
        receiver = a GlorpSession
        arg1 = an ElsterUStVA
        temp1 = 7109
        temp2 = a Descriptor(ElsterUStVA)

Does that look familiar to you, Esteban?

Joachim
I'm not getting duplicate PK errors, but maybe because I'm using
SQLite for this, and the sequences (PKs with the AUTOINCREMENT setting) work differently than with PostgreSQL or MSSQL. I'll move eventually to PostgreSQL, but SQLite is so convenient for creating and discarding whole databases :)

So I cannot tell you if this is the same thing, but one thing I can tell for sure is that the Cache won't hit any instance of a class with a Descriptor using a FilteredTypeResolver, because of the above mentioned issue with primaryKeyFromDictionary: (which I'll fix somehow).

My basic model is as follows:

    Tournament (1:n) Participant
    Tournament (1:n) Scorecard
    Scorecard (1:1) Participant

A Participant can be a Player or a Team, but all relations and class models are defined against Participant.

When I read the participants from a Tournament, they're properly added to the session's Cache for Participant (and only Participant, as per my latest change). But when I read the Scorecard, it tries to load the reference to a Participant, and because primaryKeyFromDictionary: is written in a way that doesn't consider FilteredTypeResolvers (which use IN() for the list of types), it always instantiates a new Player.

So when I take a player from aTournament and try to look for its scorecards (by doing something like `aTournament scorecards select: [:each | each player = aPlayer]`) it returns an empty collection, because although the player of the scorecard and the player of the argument (aPlayer) represent the same object in the database, they're not the same object in the session (or the image at all).

I'll try to find a way to fix it, but I need another Saturday to "waste" on these things :D

Regards!

Esteban A. Maringolo
Esteban,

did you find a way to reproduce this error? In our application, this happens in production from time to time, in an operation that is used (successfully) a dozen to hundreds of times a day, but we never found a way to reliably reproduce it. This makes bug hunting "nice".

I'd like to see if we have the same bug and if your fix works, but for this, as well as to fix it differently (if necessary), we need a reliable way of producing the error...

Joachim
I have a reproducible way of doing this, as part of the "Glorp" Tests
of my application.

I didn't find time to put myself to fixing this, but once I do (if I can), I'll add at least a test to the Glorp test suite.

But this is in Pharo, and I don't know how to contribute upstream, because as far as I understand, the canonical version of Glorp comes from VisualWorks.

Regards,

Esteban A. Maringolo
Can you zip up the test case and post it? Glorp code itself should be fairly portable...
Yes, that was my idea, but it is a very manual process nonetheless.

If I introduce some changes, I'll share the code with you (and the list) somehow.

Thank you.

Esteban A. Maringolo
Esteban,

did you make any progress on this front?

I would like to find out if your issue and mine are the same problem. At least it sounds like it. I would love to take a look at your test code and to isolate my problem and make it reproducible as well (I have failed so far; it only happens on a headless production machine).

I'd love to help make the process of working on Glorp issues (and new Glorp features) smoother, and I guess we started a few good discussion threads at ESUG last week...

Joachim
Hi Joachim,
I haven't pursued this any further; I've been busy with other stuff since then and have not touched that project, to the point that I forgot the state in which I left it.

I might get back to it this weekend/next week, and I'd happily share what I find with this list.

Regards,

Esteban A. Maringolo