Hello
We are adding a data layer to CSLA because our application will access one of two databases (the old one we cannot change; the new one will have a very different structure, although it will contain much the same data). I have read the two articles Rocky has written on different approaches to this problem, and first I am looking at the DTO method: return a DTO to the DataPortal_Fetch method and map it into the Business Object. This works well, but with the overhead that every Business Object needs to map from and to the appropriate DTO when getting data from, or passing it to, the data layer.
My question is, what are the ramifications of using this DTO object directly within the Business Object? So, rather than having a set of private variables exposed as public properties, the BO would have one private variable which the DTO is assigned to and the BO's public properties would then map through to the DTO's own public properties.
I think one of the articles suggests there is a performance hit, I suppose because each call to a Business Object property has to be passed through to the underlying DTO object (rather than going straight to a value-type variable).
What other problems are there? Is this approach hindering any OO design goals that I should be aiming for?
One problem I can see is that when the Business Object contains child objects, just assigning one DTO to a single private variable will not do the job.
I am fairly new to OO and CSLA, so any thoughts or alternative approaches would be much appreciated.
Cheers
Matt
Typically, DTOs are used to get data into and out of the DP_ methods. Your BOs still work the same, but instead of accessing a DataReader directly, you access your DTOs and then write a bunch of mapping code that copies data from your DTO properties into your BO member variables. That's where the performance hit comes in: you've taken the speed of the DataReader access and negated it by adding another layer of objects you have to populate.
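That mapping step might look something like this inside a fetch method. This is only a sketch: SupplierDTO, SupplierDal and the field names are illustrative, not part of CSLA.

```vb
' A minimal sketch of the mapping Scott describes, assuming a
' hypothetical SupplierDal that returns a SupplierDTO
Private Overloads Sub DataPortal_Fetch(ByVal criteria As Criteria)
    Dim dto As SupplierDTO = SupplierDal.Fetch(criteria.Id)
    ' Copy each DTO property into the corresponding BO field -
    ' this extra layer is where the mapping overhead comes from
    m_name = dto.Name
    m_postcode = dto.Postcode
    MarkOld()
End Sub
```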
The design that Rocky subscribes to is that your business objects should be centered around use cases, not database tables, and CSLA is designed around that kind of programming style. If you subscribe to that philosophy (and doing so will make programming with CSLA easier), then it's not long before your BOs don't look much like your database structure. This is the classic "impedance mismatch" problem you'll hear about. You might have objects that mix pieces of data from multiple tables, or that have properties that have nothing to do with the database. And you've already touched on the child-object question. Trying to use the DTO as the basis for your property data ends up complicating that kind of design. Having the DataPortal (and the corresponding DP_ methods) limits the scope of that impedance mismatch to a small section of your objects, leaving you free to model the rest of the object however you need to fit your use case and business requirements.
You could certainly apply a similar kind of design to your DTOs. That effectively would push the impedance mismatch into your DAL, which some might argue is bad design. Setting that argument aside, choosing that route essentially duplicates your coding effort, since you would be building the same object graph twice: once as a set of DTOs, and once as a set of BOs. Plus, you've added another level of complexity to your DAL, on top of all the multiple-database hoops you have to jump through.
HTH
- Scott
Scott - Thanks for your reply. Nice to know I am on the right track (or at least one of the many tracks!).
I guess my question was really about whether that mapping process is necessary at all. That was my first approach (to write lots of mapping code), but it then struck me that the DTO is based very much on the Business Object (not the database structure), so why not just use the DTO within the Business Object without mapping it across? So within a Supplier class you would get:
Private m_data As SupplierDTO

Public Property Name() As String
    Get
        Return m_data.Name
    End Get
    Set(ByVal value As String)
        m_data.Name = value
    End Set
End Property

Public Property Postcode() As String
    Get
        Return m_data.Postcode
    End Get
    Set(ByVal value As String)
        m_data.Postcode = value
    End Set
End Property
Rather than the more traditional:
Private m_name As String
Private m_postcode As String

Public Property Name() As String
    Get
        Return m_name
    End Get
    Set(ByVal value As String)
        m_name = value
    End Set
End Property

Public Property Postcode() As String
    Get
        Return m_postcode
    End Get
    Set(ByVal value As String)
        m_postcode = value
    End Set
End Property
I don't have the experience or knowledge to decide whether this is a good way to get round the mapping issue, or just a lazy coder's abuse of object design!
Matt
MattJ:
I don't have the experience or knowledge to decide whether this is a good way to get round the mapping issue, or just a lazy coder's abuse of object design!
Well... how you "get round the mapping issue" has a lot of answers.
Programming is one of the laziest jobs to have, from a certain point of view. The old saying that a programmer will spend 15 minutes automating a job that takes them 10 is not too far off. People solve this problem through a number of methods, from code generation to more esoteric home-grown things. Many folks use a scheme similar to yours as well.
I wouldn't necessarily call it an abuse of object design. What I would ask is this: if you're going to model your DTOs to match your BOs, why are you using them at all? It really is doing the work twice. What your design is really doing is separating the data your BOs work on from the logic of the BO. Pre-.NET versions of CSLA did this, largely because it was the best solution available, but once .NET came around, it became possible (and practical) to abandon that design.
The thing to remember is that your DAL deals strictly with the DTOs, not your CSLA objects. So you have to include a lot of the metadata that CSLA classes provide - IsDirty, IsNew, IsDeleted, etc. - in your DTO. Otherwise, your DAL won't know whether to run an INSERT or an UPDATE. And since that data is managed internally by your BOs, you'll have to write mapping code for those properties too. You'll end up duplicating in your DTOs a lot of code that CSLA provides for you, and that's usually a red flag in a design. You can get around this a couple of different ways, but those involve even more coding.
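To make the duplication concrete, a DTO that the DAL could act on would have to carry the object-state flags itself. This is an illustrative sketch, not anything CSLA provides; the class and field names are made up.

```vb
' Hypothetical DTO carrying the state metadata the DAL would need.
' CSLA already tracks IsNew/IsDirty/IsDeleted inside the BO, so
' putting them here duplicates behaviour the framework gives you.
Public Class SupplierDTO
    Public Name As String
    Public Postcode As String
    Public IsNew As Boolean       ' run an INSERT rather than an UPDATE
    Public IsDirty As Boolean     ' is there any change to persist?
    Public IsDeleted As Boolean   ' pending DELETE
End Class
```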
(FWIW, you still have your child-object issue to work with. And, while I haven't spent a ton of time looking at it, the upcoming 3.5 version of CSLA may give you problems if you go down this route. Many of the cooler things Rocky is doing with the framework may be out of your reach if you build your objects this way.)
If it were me, I wouldn't go the DTO route. If your two databases are the same type (both SQL Server, for example), then use your DAL to determine which one to run against (and which SQL/procs to run) and return your data like you always would. If they're different types, you can return base types (like DbDataReaders) instead. You'll lose some provider-specific functionality, so you need to check whether you really need that. But if you don't, you can work with the base types and still get the job done.
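The base-type idea Scott mentions can be sketched like this, assuming the provider name and connection string live in configuration (the key names and table are made up for illustration). Because the method only uses base ADO.NET types, the same signature works for either database.

```vb
' Sketch of a provider-neutral DAL method returning a DbDataReader
Public Shared Function FetchSupplier(ByVal id As Integer) As DbDataReader
    Dim factory As DbProviderFactory = _
        DbProviderFactories.GetFactory( _
            ConfigurationManager.AppSettings("SupplierProvider"))
    Dim cn As DbConnection = factory.CreateConnection()
    cn.ConnectionString = _
        ConfigurationManager.ConnectionStrings("Supplier").ConnectionString
    cn.Open()
    Dim cmd As DbCommand = cn.CreateCommand()
    ' The SQL would differ between the old and new schemas
    cmd.CommandText = "SELECT Name, Postcode FROM Supplier WHERE Id = @Id"
    Dim p As DbParameter = cmd.CreateParameter()
    p.ParameterName = "@Id"
    p.Value = id
    cmd.Parameters.Add(p)
    ' CloseConnection ties the connection's lifetime to the reader
    Return cmd.ExecuteReader(CommandBehavior.CloseConnection)
End Function
```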
- Scott
Matt,
How are you getting on with your private DTO field idea?
Have you pursued this technique at all?
This approach has occurred to me, and I have tried it out on a fairly complex object graph (parent, child and grandchild) with CSLA 3.5. As you suggested, the main problem seems to be collecting all the parent/child/grandchild DTOs together for posting to the data layer, and ensuring the data layer knows whether each DTO represents a new, updated or deleted item. But the process is fairly systematic.
Have you done any more work on or formed an opinion about this yet?
Regards,
Patrick
Here is the link (to Rocky's links):
http://forums.lhotka.net/forums/permalink/12631/12681/ShowThread.aspx#12681
Copyright (c) Marimer LLC