Hi,

I am working on an export/import feature to be able to take my data from one database to another. To do so I use the actual table mappings to create a dictionary of accessors, so I can write out objects without embedding others.

This means that I need to use double mappings to the same table row: one directs to the object, and the other to the actual id as stored in the table. If I have, for example, a tree_node object, then I make an additional tree_node_key accessor so that during exporting/importing I can use the '_key' accessor.

This approach has been working on some mappings, but not on all. The question is whether this approach is appropriate and whether there is an 'official' way to do this.
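Such a double mapping might be sketched in a Glorp DescriptorSystem like this. The class and field names are illustrative, and whether #readOnly: or the #canWrite: machinery is the right switch for suppressing the second write should be checked against your Glorp version:

```
descriptorForTreeNode: aDescriptor
    "Sketch only: two mappings onto the same column; names are illustrative."
    | table |
    table := self tableNamed: 'TREE_NODE'.
    aDescriptor table: table.
    (aDescriptor newMapping: DirectMapping)
        from: #id to: (table fieldNamed: 'id').
    "Normal relationship mapping: resolves parent_id to a TreeNode object."
    (aDescriptor newMapping: OneToOneMapping) attributeName: #parent.
    "Second mapping to the same column, exposing the raw key value.
     Made read-only so only one mapping writes the field (cf. #canWrite:)."
    ((aDescriptor newMapping: DirectMapping)
        from: #parentKey to: (table fieldNamed: 'parent_id'))
        readOnly: true
```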
_______________________________________________ vwnc mailing list [hidden email] http://lists.cs.uiuc.edu/mailman/listinfo/vwnc
I'm not understanding from your explanation what you're
doing, but it sounds odd.
What I've done in the past in taking data from one database to another is to read from one database, copy the data, then write it from a different session attached to the other database. That's what the Store replicator does, although there's some additional complexity in there in that the data needs to find the corresponding pieces that already exist in the other database, and will use different keys.

In general, it's fine to have multiple mappings to the same field in the database, but if they're going to write, they all need to write the same value. You can also make mappings that don't write, don't read, or do neither. And surprisingly, the last case is useful, because you can still use them in queries. See e.g. #canWrite:

At 03:11 AM 2010-01-02, Maarten MOSTERT wrote:
> Hi,

--
Alan Knight, Engineering Manager, Cincom Smalltalk
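The read-then-write-elsewhere approach Alan describes might be sketched roughly as follows. How the connected sessions are obtained (#connectedSessionFor: is a hypothetical helper), and the assumption that objects read in one session can simply be registered in another, are simplifications:

```
"Sketch: read from one database, write via a session on the other.
 #read:, #inUnitOfWorkDo: and #register: are GlorpSession protocol;
 #connectedSessionFor: is a hypothetical helper answering a logged-in session."
| sourceSession targetSession people |
sourceSession := self connectedSessionFor: sourceLogin.
targetSession := self connectedSessionFor: targetLogin.
people := sourceSession read: Person.
targetSession inUnitOfWorkDo: [
    people do: [:each | targetSession register: each]]
```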
My intention was to go from one database to another using a Sixx or Boss file format, taking the database across in an identical way, not by adding and modifying ids (or at least not yet). In Sixx this is pretty easy: just take any mapped object, write it out and upload it. The problem is that it will write out all the linked or embedded objects at the same time. This means that for each of your objects you are reading and writing a large part of your database. Although this works pretty well in both directions, it is incredibly slow and bulky.
My idea was therefore to unwind the mapped objects
and create a simple dictionary that I can use to write them to a file, producing
something like the following:
MyClass -> Dictionary (
    #res_proj_key -> 3
    #res_email -> nil
    #res_avail_from -> nil
    #res_id -> 1)

What I want to avoid is something like this:

MyClass -> Dictionary (
    #res_proj -> a Project
    #res_email -> nil
    #res_avail_from -> nil
    #res_id -> 1)
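One way such a flattened dictionary might be produced is by walking the descriptor's mappings for the object's class. The #id accessor, the unary attribute accessors, and the descriptor-lookup protocol used here are assumptions, not tested code:

```
flattenedDictionaryFor: anObject in: aSession
    "Sketch: answer a Dictionary of field-level values for anObject,
     replacing each related object by its id under an attribute_key name."
    | descriptor dict value |
    descriptor := aSession system descriptorFor: anObject class.
    dict := Dictionary new.
    descriptor mappings do: [:mapping |
        value := anObject perform: mapping attributeName.  "assumes unary accessors"
        mapping isRelationship
            ifTrue: [dict at: (mapping attributeName , '_key') asSymbol
                         put: (value ifNotNil: [value id])]  "assumes an #id accessor"
            ifFalse: [dict at: mapping attributeName put: value]].
    ^dict
```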
Reading from the database that way would be fairly easy, as I can just request the embedded object's id. Writing is way more complicated: I cannot simply write with the embedded object's id, as I need to give Glorp the complete object, which I haven't created yet (unless there is a bypass). This is why I was looking for a way to do it with a separate mapping and write anyway.

I can probably also achieve this by subclassing my descriptor; it all depends which way is more elegant.
Thanks anyway.
@+Maarten,
I don't know that that's a good way to approach it.
Glorp really isn't that keen on writing out objects one at a time, since maintaining the networks of objects is one of the major things it does. And generating the rows from a single object isn't necessarily going to work well, because the information in a row can come from several objects. For example, it's possible to have, say, a Person with a list of Addresses, and the address rows have a foreign key to the person row, but the Address objects don't know about the Person object. If you try to write the row for the Person object, you won't get the complete information.

If what you want is really just to move the rows across, why not just do that? For example, if your objects each map to one table, and your primary keys are all single and named "id",

    self allObjectsImInterestedIn do: [:each |

You could generalize that to composite or differently named keys, multiple rows, etc., and could leverage Glorp's SQL generation capabilities, turn the rows into dictionaries with field names, and so on. Or, alternatively, you could just read the data, write it to a SQLite database and send that as the file.

If you really want to go with the objects, and assuming that you are using Glorp on the other end to write out the data, then probably the easiest thing to do is to serialize proxies such that they leave out their resolved value and session, and just bring along the query parameters (and set it so that they believe they're uninstantiated). If you're writing out objects, I think Glorp should be clever enough to get the field values from the uninstantiated proxies if the objects aren't available.

At 04:42 PM 2010-01-02, Maarten MOSTERT wrote:
> My intention was to go from one database to another using a Sixx or Boss file format ...

--
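Alan's code fragment is truncated in the archive. Under his stated assumptions (each object maps to one table, with a single primary key named "id"), the elided loop might have looked something like this sketch; the session variables, the table-naming convention, and the INSERT construction are guesses, not his original code:

```
"Sketch: copy each object's underlying row across databases via raw SQL.
 #accessor / #executeSQLString: are Glorp's DatabaseAccessor protocol;
 everything else here is an illustrative assumption."
self allObjectsImInterestedIn do: [:each |
    | tableName rows |
    tableName := each class name asUppercase.  "assumed table-naming convention"
    rows := sourceSession accessor executeSQLString:
        'SELECT * FROM ', tableName, ' WHERE id = ', each id printString.
    "...then build an INSERT statement from the returned row values
     and run it with: targetSession accessor executeSQLString: ..."
    ]
```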
Alan Knight, Engineering Manager, Cincom Smalltalk