CommandBase question

Old forum URL: forums.lhotka.net/forums/t/1126.aspx


akabak posted on Tuesday, September 05, 2006

I think that a command object is the way to go for this task, but I'm open to suggestions--both for alternatives and for implementation help.

My situation is that I've built an application that will be used to QC some production data. Basically, this production data is queued up, copied to a parallel QC database, checked (including data modifications, inserts, and deletions), and then needs to be pushed back into production when it has been approved.

I've got a set of stored procedures that can be used to push the data back into production. Each stored procedure takes a primary key for that table (I keep the keys synchronized between the two databases) and then either deletes or updates based on a status field in the related table.

In other words, I think I've gotten the back end covered. But now I'm faced with trying to call these stored procedures from my business object layer, and I'm a little stumped. I've got one main object (a Task) with a child object (the data) that itself contains children and grandchildren. The object map contains all the primary keys I need, and I'm not worried about the performance hit of loading all the data (deep object map but not very wide). Once I load the data, I need to call something like Task.UpdateProduction. Should I include a command object in each business object and have that command take the primary key and run the stored procedure?

I think that way I would be calling:
    Task.AgencyData.UpdateProduction();
    foreach (Contact c in Task.AgencyData.ContactList)
          c.UpdateProduction();

    etc.

Is that the best way to handle this or should I use a command object that is totally separate from the rest of my business objects? Any ideas would be great.


amanda

RockfordLhotka replied on Tuesday, September 05, 2006

Is this a single process that is non-interactive? Or is the copy-to-QC a step, followed by manual data manipulation, followed by copy-to-production? I suspect the latter.

In that case you might use a command object to represent the copy-to-QC operation, and another to represent the copy-to-production operation, with a set of editable objects used for the manual step in the middle.

Whether you have just two command objects or several depends on the use case. Do users copy data in chunks, or all at once? Either way, your command objects need to reflect that. But on the whole, I think I'd create explicit command objects for the to-QC and to-production processes. Separation of concerns - it isn't the concern of the interactive editable objects to be worried about the data copying.

You might have a CopyToQC object with several methods (.CopyChunkA(), .CopyChunkB(), etc.). That would make sense if the user is allowed to copy isolated chunks of data over time. The same for CopyToProduction.
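A command object along these lines might look like the following sketch, assuming a CSLA .NET 2.x-style CommandBase. The class name, stored procedure name, and connection-string handling are all illustrative, not from the original post:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using Csla;

[Serializable]
public class CopyToProduction : CommandBase
{
  private int _jurisdictionId;
  private bool _result;

  public bool Result
  {
    get { return _result; }
  }

  // Factory method: the command runs on the server via the data portal.
  public static bool Execute(int jurisdictionId)
  {
    CopyToProduction cmd = new CopyToProduction(jurisdictionId);
    cmd = DataPortal.Execute<CopyToProduction>(cmd);
    return cmd.Result;
  }

  private CopyToProduction(int jurisdictionId)
  {
    _jurisdictionId = jurisdictionId;
  }

  protected override void DataPortal_Execute()
  {
    // Hypothetical stored procedure name; it would wrap the per-table
    // push-to-production procedures described above.
    using (SqlConnection cn = new SqlConnection("...connection string..."))
    {
      cn.Open();
      using (SqlCommand cm = cn.CreateCommand())
      {
        cm.CommandType = CommandType.StoredProcedure;
        cm.CommandText = "usp_CopyJurisdictionToProduction";
        cm.Parameters.AddWithValue("@jurisdictionId", _jurisdictionId);
        cm.ExecuteNonQuery();
        _result = true;
      }
    }
  }
}
```

The UI (or a workflow step) would then just call CopyToProduction.Execute(jurisdictionId) when a jurisdiction is marked as validated; the editable objects never need to know the copy exists.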

akabak replied on Tuesday, September 05, 2006

It is an interactive process.

The copy to QC I've got covered in a read-only business object that represents a state (as in Illinois or Massachusetts) and its collection of county-jurisdictions. It basically just calls a single stored procedure that takes a state as a parameter and pushes all data from production to QC for that state. Once the data is in the QC database, it goes through a vetting process and approval by several different departments. I've got a set of business objects to handle the process as well as updating the QC data based on the actions taken during the data vetting.

I thought a command object to push back to production is the way to go, but the only question I have about a separate command outside the business object is how it obtains the primary keys that each stored procedure requires to push the data around. Because this is a hierarchy of data, I'm not sure how to load the grandchildren keys based on the grandparent key if I don't load the whole object hierarchy.

akabak replied on Tuesday, September 05, 2006

One more thing: the data gets pushed back to production on a jurisdiction by jurisdiction basis according to the workflow of that jurisdiction (i.e. when it is marked as validated, it should get pushed back into production). The jurisdiction is defined by a primary key both in the table and in an object. When the jurisdiction gets pushed back, all related data must get pushed back as well.

RockfordLhotka replied on Tuesday, September 05, 2006

akabak:
One more thing: the data gets pushed back to production on a jurisdiction by jurisdiction basis according to the workflow of that jurisdiction (i.e. when it is marked as validated, it should get pushed back into production). The jurisdiction is defined by a primary key both in the table and in an object. When the jurisdiction gets pushed back, all related data must get pushed back as well.


It seems to me that your database contains all the keys needed except for the top-level key. So why wouldn't you just pass the top-level key and let the database do the work of finding the child/grandchild data? In other words, why instantiate an object graph just to pass that data back to the database, when the database already has the data?

akabak replied on Tuesday, September 05, 2006

Mostly because I'm lazy and don't want to code the cursors required in SQL to iterate through the children and call a stored procedure on each one. Seems easier to iterate through the object model and take the keys from there even though it is counter-intuitive.
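For what it's worth, a cursor often isn't needed here: because the child and grandchild rows all carry (or can join back to) the jurisdiction key, each per-table push can usually be written as one set-based statement. A hypothetical sketch, with all table, column, and status names invented for illustration:

```sql
-- Delete production rows whose QC counterpart is marked deleted,
-- for a single jurisdiction.
DELETE p
FROM Production.dbo.Contact AS p
INNER JOIN QC.dbo.Contact AS q ON q.ContactId = p.ContactId
WHERE q.JurisdictionId = @JurisdictionId
  AND q.Status = 'Deleted';

-- Push the approved QC changes back in one statement per table.
UPDATE p
SET p.Name = q.Name,
    p.Phone = q.Phone
FROM Production.dbo.Contact AS p
INNER JOIN QC.dbo.Contact AS q ON q.ContactId = p.ContactId
WHERE q.JurisdictionId = @JurisdictionId
  AND q.Status = 'Approved';
```

One statement pair per table replaces a row-by-row loop, and the whole push can be wrapped in a single transaction keyed on the jurisdiction.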

Copyright (c) Marimer LLC