Having a major issue with mapping collections to DTO and back

Old forum URL: forums.lhotka.net/forums/t/4785.aspx


rd_bigdog posted on Thursday, May 01, 2008

Hi,

I'm looking for a pattern that I can use to map parent objects with child collection objects, where child objects get deleted/updated, etc.

I have a client Windows application that utilizes the business object library, and a WCF layer that encapsulates the server with the same business objects behind it. Pretty standard, I think...

What happens is that as items are deleted from the collection on the client and the parent object is sent to the server, we lose all the state of the objects. This is because we use the DataMapper to map the data from business object fields into DTO fields, then back from the DTO fields on the server into the business object. When we reconstruct the business objects on the server, we lose all the state from the BaseBusiness object and BaseBusinessList, such as IsNew, IsDeleted, and the DeletedItems collection.
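
To illustrate, the round trip looks roughly like this (OrderDto, Order.NewOrder and the exact DataMapper overloads are simplified placeholders, not our real code):

// Client side: copy the business object's data into a DTO for the WCF call;
// the 'true' argument asks DataMapper to skip members the DTO doesn't have.
var dto = new OrderDto();
Csla.Data.DataMapper.Map(order, dto, true);

// Server side: rebuild a business object and copy the DTO data back in.
var serverOrder = Order.NewOrder();
Csla.Data.DataMapper.Map(dto, serverOrder, true);

// The rebuilt graph is brand new, so IsNew/IsDirty/IsDeleted and the deleted-items
// list describe a fresh object, not what actually happened on the client.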

Does anyone have any patterns that they have implemented in order to preserve the state of the collections and business objects in this type of scenario?

We could move to dealing with the child objects individually on the client and passing them to the server in a different way, but we would prefer to keep our existing design, as it is quite nice... except for this nuance...

Thanks All.


RockfordLhotka replied on Friday, May 02, 2008

At a higher level you need to make a decision about responsibility. Who is responsible for tracking/managing the state of the child data (or any object data)?

In an SOA world you have two separate applications.

Very clearly the server application (running the CSLA objects) must be designed to manage its own state, independent of what any given consumer might or might not do. However, the service application may choose to require that the consuming application provide the new/deleted/dirty state of the data sent into the service.

In other words, the message contract defined by the service application may require that the consumer provide new/delete/dirty information with every object. Exactly how much the service trusts this metadata is really up to the service application of course.
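
As a purely illustrative example, such a contract might look like the following (the LineItemDto name and members are invented for this sketch, not taken from the actual application):

using System.Runtime.Serialization;

[DataContract]
public class LineItemDto
{
  [DataMember] public int Id { get; set; }
  [DataMember] public string Description { get; set; }
  [DataMember] public int Quantity { get; set; }

  // State metadata the contract requires the consumer to supply for every item.
  [DataMember] public bool IsNew { get; set; }
  [DataMember] public bool IsDeleted { get; set; }
  [DataMember] public bool IsDirty { get; set; }
}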

If this is all true, then all consuming applications now assume responsibility for managing the new/deleted/dirty status of their own data - however they choose to do that. Ultimately the only real requirement is that the consuming application be able to compose the required message to send to the service. Hopefully the consumer uses similar semantics to the service, but of course you never really know what will happen - these are independent apps after all.

Assuming we all agree on the thought process so far, then the problem becomes one of technical detail - of implementation. The service, on receiving a message from a consumer, needs to take the message data and use it to create an object graph of business objects.

And here we come back to trust. All your business object properties will be validated automatically - which is good, because you should never trust the consumer.

But the new/deleted/dirty properties are read-only, and more importantly need to be validated too!! Again, do you trust the consumer?

The safest solution is to load the object graph from the database and then merge the message data into the graph. That way CSLA can automatically manage the new/deleted/dirty state for you - essentially doing the validation of these values (and eliminating the need for the consumer to manage them at all btw).
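
Here is a rough sketch of that load-then-merge idea, reusing the hypothetical LineItemDto above (Order.GetOrder, LineItem.NewLineItem, the Items collection and the request type are all assumptions):

// Server side: load the current graph, then merge the message data into it
// (the LINQ queries need a using System.Linq directive).
public void UpdateOrder(OrderUpdateRequest request)
{
  var order = Order.GetOrder(request.OrderId);   // existing state comes from the database

  foreach (var dto in request.Items)
  {
    var line = order.Items.FirstOrDefault(l => l.Id == dto.Id);
    if (line == null)
    {
      line = LineItem.NewLineItem();             // hypothetical child factory; IsNew is true
      order.Items.Add(line);
    }
    // Copy the data fields; the state flags are deliberately ignored because
    // CSLA manages them for us in this approach.
    Csla.Data.DataMapper.Map(dto, line, true, "IsNew", "IsDeleted", "IsDirty");
  }

  // Anything the consumer did not send back is treated as deleted.
  foreach (var line in order.Items.Where(l => !request.Items.Any(d => d.Id == l.Id)).ToList())
    order.Items.Remove(line);                    // CSLA moves it to the deleted-items list

  order = order.Save();
}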

But that solution is often considered to be too slow. So then you are left in a bit of a spot. You either need to trust the consumer or come up with a scheme by which the consumer's new/deleted/dirty values are considered 'suggestions', which are ultimately confirmed by the DataPortal_XYZ methods.
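
One hedged sketch of that 'suggestion' idea: the data access code takes the consumer's IsNew value as a hint but lets the database have the final word (the table, column and field names here are invented for illustration):

using System.Data.SqlClient;

// Called from DataPortal_Update for each child: try the UPDATE first, and if no row
// was affected the 'not new' suggestion was wrong, so fall back to an INSERT.
private void UpdateLineItem(SqlConnection cn)
{
  using (var cmd = cn.CreateCommand())
  {
    cmd.CommandText = "UPDATE LineItem SET Quantity = @quantity WHERE Id = @id";
    cmd.Parameters.AddWithValue("@quantity", _quantity);
    cmd.Parameters.AddWithValue("@id", _id);
    if (cmd.ExecuteNonQuery() == 0)
    {
      cmd.CommandText = "INSERT INTO LineItem (Id, Quantity) VALUES (@id, @quantity)";
      cmd.ExecuteNonQuery();
    }
  }
}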

Most people, at this point, decide to just trust the consumer data because that's simplest (though a bit scary perhaps - however, the database integrity rules typically do provide a level of safety so it isn't as bad as it might at first appear).

Assuming we trust the consumer data, the problem is back to one of implementation - how to set those read-only properties with the message data?

I'd suggest two things:

  1. Define an ISetStatus interface and implement that interface in your six custom base classes
  2. Create a helper method to map the three properties from the message data into any object that implements ISetStatus (see the sketches below)

public interface ISetStatus
{
  void SetNew(bool value);
  void SetDeleted(bool value);
  void SetDirty(bool value);
}
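
And here is a minimal sketch of the helper from item 2, reusing the LineItemDto shape assumed earlier (inside your custom base classes, the ISetStatus members would typically just delegate to the protected MarkNew/MarkOld/MarkDirty-style methods the CSLA base classes already provide):

public static class StatusMapper
{
  // Copies the three state flags from the inbound message data onto any
  // business object that implements ISetStatus.
  public static void MapStatus(LineItemDto source, ISetStatus target)
  {
    target.SetNew(source.IsNew);
    target.SetDeleted(source.IsDeleted);
    target.SetDirty(source.IsDirty);
  }
}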

You can then use DataMapper for everything else, and use this helper to set these fields.
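
Used together on the server, that looks something like this (again with the hypothetical names from above):

// For each inbound child DTO: DataMapper handles the data fields,
// the helper handles the state flags.
Csla.Data.DataMapper.Map(dto, line, true, "IsNew", "IsDeleted", "IsDirty");
StatusMapper.MapStatus(dto, (ISetStatus)line);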

All that said, it should be possible to use the new CSLA 3.5 DataMapper to do the whole thing, because it can map from properties to fields, so you should be able to create a mapping that maps your DTO properties (like IsDirty) to your object's fields (like _isDirty).

I haven't tried that specific scenario, nor do I think it is a great idea (no validation of inbound data!), but you can try it if you'd like.

Copyright (c) Marimer LLC