Andy,
I had a possible use for #isKey and friends, but they were returning false when I expected true. This was with an Access 97 database, with no relationships defined - is that the problem? Still, it would be nice to detect the primary key fields, as those are defined in the tables themselves.

Have a good one,

Bill

-------
Wilhelm K. Schwab, Ph.D.
[hidden email]
Bill
You wrote in message news:9fiv0r$3vqri$[hidden email]...
> I had a possible use for #isKey and friends, but they were returning false
> when I expected true. This was with an Access 97 database, with no
> relationships defined - is that the problem? Still, it would be nice to
> detect the primary key fields, as those are defined in the tables
> themselves.

The 'Database Connection' package is actually the ODBC support layers from an object-relational mapping package that we wrote a while back called DOOM (Dolphin Object-Oriented Mapper). #isKey etc. were part of the schema analysis from that and are not operational without additional pieces which are not in the 'Database Connection' package, hence the answer will always be false, sorry.

Regards

Blair
Object-relational mapping package? Drool!
Jerry
Jerry,
> Object-relational mapping package? Drool!

I wouldn't get too enthusiastic. The original DOOM stuff was done back in 1995 and got to the state where we could build a mapping layer to *read* objects from an existing relational database with an arbitrary schema (i.e. you didn't need a specialised DB structure to support it). However, we never got as far as implementing record update and creation (and transactions). Having said that, I do believe it was pretty good as far as it went.

We eventually stopped work on it when we realized the project would take quite some time to complete and do well. Our idea at that point was instead to persuade the Object People to convert TopLink over to Dolphin at some point in the future. Sadly, that's unlikely to happen now. I haven't looked at the Camp Smalltalk GLORP stuff, but I would think that is probably the way to go in future to get object/relational mapping in Dolphin.

Best Regards,

Andy Bower
Dolphin Support
http://www.object-arts.com

---
Visit the Dolphin Smalltalk WikiWeb
http://www.object-arts.com/wiki/html/Dolphin/FrontPage.htm
---
>...
> I wouldn't get too enthusiastic. The original DOOM stuff was done back in
> 1995 and got to the state where we could build a mapping layer to *read*
> objects from an existing relational database with an arbitrary schema (i.e.
> you didn't need a specialised DB structure to support it). However, we never
> got as far as implementing record update and creation (and transactions).
> Having said that, I do believe it was pretty good as far as it went.

Actually Andy is doing DOOM a disservice; it did do transactional updating. As Andy says, it was able to query the database schema to build a set of classes and methods that represented the tables and foreign-key relationships, and the database could then be accessed completely transparently. For this to work well, though, it was necessary to have the foreign-key relationships defined - in fact, the more information in the schema the better.

To do updating one just modified the objects (or added new ones to collections) within a transaction block, and at the end of the block the necessary SQL INSERT/UPDATE statements would be issued to write the changes back to the DB.

If anyone can remember the Tensegrity OODB, we had used that to build part of a vertical market application as an experiment with Smalltalk. DOOM was developed to transparently replace the Tensegrity OODB, which it was able to do. However, it was a prototype, and as Andy says there are more sensible options than reviving it, such as GLORP, or the rumour that TopLink for Smalltalk might be open-sourced, and of course there is David Gorisek's Omnibase OODB.

It is interesting to note that DOOM relied heavily on random access to result sets using ODBC's "extended fetch". This means it wouldn't have worked that well on Oracle, which still (the last time I checked) supported only forward-only, read-only cursors. It would have worked after a fashion, since the ODBC layers can emulate the behaviour, but performance would have been poor. In the light of this sort of limitation I am constantly amazed at how successful Oracle has been, but, ah, the power of marketing to the right people...

Regards

Blair
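Blair's description of transparent updating can be sketched in Smalltalk. DOOM was never released, so every selector below (`transact:`, the accessors, `aDoomSession` itself) is purely an assumption drawn from the description in this thread, not a real API:

```smalltalk
"Hypothetical sketch of DOOM-style transparent updating.
All selectors here are assumptions; DOOM was never published."
aDoomSession transact: [
	aPerson surname: 'Jones'.               "plain assignment to a mapped object"
	aPerson relatives add: anotherPerson].  "adding to a mapped collection"
"On leaving the block, the mapper would compare the touched objects
 against their original state and issue the matching SQL
 INSERT/UPDATE statements to write the changes back to the DB."
```

The appeal of this style is that the code inside the block looks like ordinary Smalltalk; the persistence machinery stays out of sight until the block ends.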
Interesting to read about DOOM. I'm currently working on a Dolphin-Relational interface, but starting from the other end - Smalltalk classes automatically translated into a database schema. The rationale is to have the benefits of relational storage (open, visible data; powerful querying etc.) with the rapid development potential of Smalltalk.

The initial demand for ReStore (as the interface is now called) came from a project requiring data storage in Access, but as there seems to be some interest in a relational interface for Dolphin I'm now planning to offer it as a low-cost commercial product.

There's some initial information here:

www.solutionsoft.pwp.blueyonder.co.uk/what_is_restore.htm

Any feedback on this (positive or negative) would be useful in influencing further development.

Regards,

John Aspinall
Solutions Software
John Aspinall <[hidden email]> wrote in message news:WGIT6.8622$[hidden email]...
> The initial demand for ReStore (as the interface is now called) came from a
> project requiring data storage in Access, but as there seems to be some
> interest in a relational interface for Dolphin I'm now planning to offer it
> as a low-cost commercial product.

This looks very interesting. Do you have pricing and availability information yet? Any chance of a demo copy being available?

I have a number of questions:

1. In this example:

=====
addClassDefinitionTo: aClassDefinition

	aClassDefinition
		define: #surname as: String;
		define: #firstName as: String;
		define: #address as: Address;
		define: #relatives as: (OrderedCollection of: Person)
=====

How would (OrderedCollection of: Person) be handled if you had a collection of different subclasses of Person, for example: Mother, Father, Sister, Brother, all subclasses of Person? Could it handle a more complex but less common case where there could be a collection of classes that do not share a common parent but share a common protocol?

2. How would it handle cyclical references? Could references to a parent in a composite structure be maintained and restored as references rather than copies?

3. In regard to #synchronizeAllClasses, I assume it could be used in a distributed application to migrate an older DB if there were changes in the object structures? Could this feature skip intermediate DB structures? For example, where objectLayout1 -> objectLayout2 -> objectLayout3, could it also take a DB from objectLayout1 directly to objectLayout3 if objectLayout2 was never synchronized with the DB in some cases? Could it also handle instance variable name changes?

ReStore looks quite cool. Let me know when it is available.

Chris
[hidden email]
Christopher,
> How would (OrderedCollection of: Person) be handled if you had a collection
> of different subclasses of Person, for example: Mother, Father, Sister,
> Brother, all subclasses of Person?

In the case of a hierarchy of classes, you have the option of grouping instances of these in one database table (this is the default behaviour, in fact). A reference to Person in a collection definition will then allow Person or any of its subclasses to appear in that collection. The same goes for querying:

(aReStore instancesOf: Person) select: [ :each | ...]

...will query for Person or any of its subclasses.

> Could it handle a more complex but less
> common case where there could be a collection of classes that do not share
> a common parent but share a common protocol?

Not at present; to an extent ReStore is restricted by the statically-typed nature of relational databases. However, I plan to address this in the future by allowing completely generalised references, e.g.

	define: #thing as: Object;
	define: #things as: OrderedCollection.

> 2. How would it handle cyclical references? Could references to a parent
> in a composite structure be maintained and restored as references rather
> than copies?

Cyclical references are not a problem; ReStore transparently assigns a unique ID to all objects, enabling identity to be maintained. When fetching an object from the database the ID is checked against objects already in memory - if an existing object is found it is reused.

> 3. In regard to #synchronizeAllClasses I assume it could be used in a
> distributed application to migrate an older DB if there were changes in
> the object structures? Could this feature skip intermediate DB structures?

Yes; the synchronization mechanism compares the current object structure against the current database structure, and resolves the two by adding/removing columns and tables as necessary.

> Could it also handle instance variable name changes?

Not directly, although there is a workaround for this. Say you want to rename the Person instance variable #firstName to #foreName; you would proceed as follows:

1) Add the #foreName instance variable, update the class definition method and synchronize.

2) Evaluate the following:

(aReStore instancesOf: Person) modify: [ :each | each foreName: each firstName]

3) Remove the #firstName instance variable, update the class definition and synchronize again.

> ReStore looks quite cool. Let me know when it is available.

Thanks. I expect to have the first release (including a full example application) available in the next few weeks.

Best regards,

John Aspinall
Solutions Software
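For reference, John's three-step rename workaround can be gathered into a single workspace script. ReStore was not yet released when this was written, so treat the exact selectors (`synchronizeAllClasses`, `instancesOf:`, `modify:`) as provisional, taken from the usage shown in this thread:

```smalltalk
"Renaming Person>>firstName to Person>>foreName with ReStore.
Selectors follow the usage shown earlier in this thread."

"1) After adding the #foreName instance variable to Person and
    updating its #addClassDefinitionTo: method:"
aReStore synchronizeAllClasses.

"2) Copy the existing data across into the new column:"
(aReStore instancesOf: Person)
	modify: [ :each | each foreName: each firstName].

"3) After removing #firstName from the class and its definition
    method, reconcile the schema again:"
aReStore synchronizeAllClasses.
```

Note the ordering: the copy step must run while both instance variables (and hence both columns) still exist, so the two synchronizations bracket it.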