Hey everybody. I know that TDD with CSLA has been the topic for discussion in several threads within this forum. I've been playing around with a solution that tries to make them play together. I have started to write a series of blog posts on this. If you are interested, please check it out.
The DataPortal's control over construction is definitely at the core of the issue. I understand that a service locator can be used to call upon dependencies as required. This is a valid solution. There is nothing wrong with that at all. (Have you checked out the Common Service Locator on CodePlex? http://www.codeplex.com/CommonServiceLocator)
My preference is to use constructor injection of dependencies and call upon the service locator in constructor overloads. In this scenario, the constructor forms a vital part of the class interface, and when combined with the properties and methods, fully specifies class dependencies. That just sits better with me.
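To illustrate the idea, here's a minimal sketch. It's in TypeScript rather than C#, and every name in it (`Customer`, `CustomerRepository`, `ServiceLocator`) is hypothetical, not a CSLA or Common Service Locator API. The point is just the shape: the dependency appears in the constructor signature, and callers that don't supply it fall back to a locator.

```typescript
interface CustomerRepository {
  load(id: number): { name: string };
}

// Hypothetical stand-in for a service locator such as the Common Service Locator.
class ServiceLocator {
  private static services = new Map<string, unknown>();
  static register(key: string, svc: unknown): void { this.services.set(key, svc); }
  static resolve<T>(key: string): T { return this.services.get(key) as T; }
}

class Customer {
  private repository: CustomerRepository;

  // The dependency is explicit in the constructor, so it is part of the
  // class interface. Omitting it falls back to the locator, which plays
  // the role of the "constructor overload" described above.
  constructor(repository?: CustomerRepository) {
    this.repository =
      repository ?? ServiceLocator.resolve<CustomerRepository>("CustomerRepository");
  }

  fetch(id: number): string {
    return this.repository.load(id).name;
  }
}
```

In a test you inject a fake repository directly; production code can use the parameterless path and let the locator supply the real one.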
For those interested, my latest post expands a little on the details of my design.
My point for discussion is this: if we go to the trouble of abstracting out the data access by introducing a DTO, why not send the DTO across the wire rather than the business object? The actual data access code still runs on a server, but we don't send the business object across the wire.
Rather than make the DTO the mobile object, I suggest that we piggyback the serializable DTO on a mobile command object, since the DataPortal has infrastructure in place for a command object.
Note: Since the business object has methods to reconstruct itself from a DTO, we could always create the business object again on the server in order to check validation rules and authorization rules if necessary. If any logic run on the server results in a state change, the state is captured in the DTO and provided back to the business object on the client after the operation is complete. A new instance of the business object does not need to be created on the client; instead, the existing instance updates its state from the returned DTO.
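To make the round-trip concrete, here's a rough sketch of the idea, in TypeScript with entirely hypothetical names (`CustomerDto`, `SaveCustomerCommand`, `loadFromDto`) standing in for the real CSLA command-object plumbing. The command carries the DTO across the wire; after the server-side work, the same client instance refreshes itself from the returned DTO.

```typescript
// Hypothetical DTO: a plain, serializable snapshot of the object's state.
interface CustomerDto {
  id: number;
  name: string;
  isNew: boolean;
  isDirty: boolean;
}

// The command object carries the DTO; in CSLA the DataPortal already has
// infrastructure to execute a command server-side.
class SaveCustomerCommand {
  constructor(public dto: CustomerDto) {}

  // Runs on the server: persist the DTO, then capture any resulting state
  // change (e.g. a database-assigned key) back into it.
  execute(): void {
    if (this.dto.isNew) {
      this.dto.id = 42; // pretend the database assigned this key
      this.dto.isNew = false;
    }
    this.dto.isDirty = false;
  }
}

class Customer {
  id = 0;
  name = "";
  isNew = true;
  isDirty = true;

  // The same method serves initial load and the post-save refresh, so the
  // client never has to swap to a new business object instance.
  loadFromDto(dto: CustomerDto): void {
    this.id = dto.id;
    this.name = dto.name;
    this.isNew = dto.isNew;
    this.isDirty = dto.isDirty;
  }

  toDto(): CustomerDto {
    return { id: this.id, name: this.name, isNew: this.isNew, isDirty: this.isDirty };
  }

  save(): void {
    const cmd = new SaveCustomerCommand(this.toDto());
    cmd.execute(); // in reality this round-trips through the DataPortal
    this.loadFromDto(cmd.dto); // same instance, refreshed state
  }
}
```

Note that `save()` returns nothing: references held by the client stay valid, which is the whole point of piggybacking the DTO on the command instead of serializing the business object itself.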
Check out the code on my blog for more details.
Hi, thanks for your reply. I'm definitely no expert, so please bear with me! I'm just trying to share my thoughts as I work my way through this exploratory process.
I've tried my best to convey what I am trying to accomplish in the second blog post. If I attempt to distill it down to what I really want ... I want all of my business object dependencies to be explicit and I want to pass them in through a constructor. That's just my preference. I am aware that there are other ways to inject dependencies and to achieve most of what I am aiming for. However, I only want to inject these dependencies once throughout the lifetime of the object (unless of course there is a reason for changing them, in which case I would expose a setter). In addition, I only ever wish to work with one instance of the business object once it is constructed, rather than changing my references after each save (the DTO enables this). I also want to keep using everything else that CSLA has to offer ... I'm not attempting a re-design, or trying to prove anything about serialization.

So, I think I can achieve the above by sending a DTO across the wire rather than the business object: (1) I'm not using the DataPortal Fetch or Create methods, so the power of construction is back in my hands, and (2) because the DTO effectively becomes the mobile object (piggybacking on a command object), there is no need to work with a new business object instance after the save, just the new DTO instance (which the same business object instance uses to update itself after a save).
I understand that the nature of the DataPortal is that it controls construction and that serialization brings me back a new copy of the business object. The purpose of sending the DTO across the wire instead is so I can continue to work with the same instance of the business object on the client even after a save. As a side effect, I don't need to keep updating references and re-injecting dependencies.
Yes and no. Most halfway-decent DI frameworks I've seen do a fair amount of searching to find the constructor that matches the objects they need to inject. So if you need to inject something new, you can create a constructor overload with your new object, and the DI framework should pick it up just fine. It's a measure of backwards compatibility - though you could probably argue that the old code that tries to work with your object without the new dependency is very likely suspect, and should change. But you have to make a code change no matter what, so it's probably a six-of-one situation.
Unless your intent was to debate the merits of the overall approach... from where I sit, it appears that the OP is taking a "middle-ground step" between the pre-.NET versions of CSLA and the .NET version. Restrictions in VB 6 forced a design very similar to what's been proposed. One of the main issues with that design - and a big reason why Rocky chose the architecture he did - was that it tended to break some encapsulation. While you could overcome these limitations in .NET with reflection, performance concerns become a real issue. So a new way of building objects was needed. Not exactly a new way - more like Rocky could finally do what he always wanted to do.
This kind of discussion has been going on around CSLA for quite some time now. I have not read the OP's blog entries, so I can't really speak to how "good" the solution is. If it works for his/her project, then we can very likely call it "good". That's ultimately the standard upon which the solution needs to be judged, IMO.
Having said that, even with the hooks that Rocky has provided in 3.5/3.6, there was still a fair amount of work to be done to get to the "TDD friendly" approach. And much of that work involves modifying the framework code itself. I won't say that Rocky's codebase is the "pure, one-and-only" version of CSLA. Many folks have created modifications to suit their needs. But the more you modify, the more you have to maintain as new versions come out. And I suspect that a fair amount of modification to the framework was required. Given that the current version of CSLA represents several man-months of work spread across multiple developers, providing support for four different UI technologies and multiple communication protocols...
Even if you got it working (which it appears the OP did), you could probably still call it CSLA, but that might be pretty confusing to anyone else who has any CSLA experience. A lot of what CSLA does for you has to be side-stepped/refactored/hacked/enter-your-choice-of-phrase-here in order to get this working. And it just seems that the T/M/DDD group of folks have a fundamentally different view of the world. That's great, since it works for them. I'm not debating the merits of either approach, mostly because I hate religious arguments. But I don't think any solution that tries to meld CSLA and TDD is going to work out very well. They simply approach the world from a different perspective, and I don't see a happy meeting of the minds between the two.
This ultimately means that the ideas presented here probably cannot be coherently and effectively merged into the existing CSLA codebase that Rocky and his Magenic team manage. So it's not very long before you end up having to decide whether you're going to break with CSLA and its update cycles... and then it's not CSLA anymore.
Again, I'm not mocking the OP's work. If it serves his purpose, then it's done the job. I just don't think he's going to get a lot of love from either side of the fence.
Yes, I agree completely with everything you say. TDD/DDD is definitely a different view of the world. I guess you just pick the view that helps you sleep at night ... I also hate religious arguments! Most of the people on this forum have already decided what side of the fence they sit on, however, I am still deciding.
And yes, the solution does require some code changes ... so like most solutions to anything, there are a lot of trade-offs. E.g., does the benefit of the refactorings, hacks, etc., outweigh the fact that I may need to make more modifications as newer versions of CSLA are released? Mmmm, I don't know. Time will tell. But it's fun playing around. If it all gets too hard, I'll just use CSLA as is - it does most of what I want out-of-the-box anyway. However, if my tinkerings lead to a solution that some people may also get some benefit from, then it's all good.
While the class is still being developed and I have control of the source code, the constructor will evolve as dependencies are fleshed out.
I guess I am largely influenced by Martin Fowler's discussion on dependency injection (see the "Deciding which option to use" section):
Fowler discusses Service Locator, Constructor Injection, and Setter Injection. Like Fowler, my aim is to start with pure constructor injection -
"My long running default with objects is as much as possible, to create valid objects at construction time ... Constructors with parameters give you a clear statement of what it means to create a valid object in an obvious place. If there's more than one way to do it, create multiple constructors that show the different combinations ... Despite the disadvantages my preference is to start with constructor injection, but be ready to switch to setter injection as soon as the problems I've outlined above start to become a problem" - Fowler.
You see, I write classes and modules that may be reusable in a number of applications. Other developers who use my libraries in other applications may not have access, or may not be allowed access, to the source code, so the dependencies need to be evident in the class interface -
"With dependency injector you can just look at the injection mechanism, such as the constructor, and see the dependencies. With the service locator you have to search the source code for calls to the locator" - Fowler.
Thanks for your reply. Most of the guts of the dirty management and tracking logic is still taken care of by the CSLA framework. I am using the DTO to take a snapshot of the dirty state so I can send it to my data access layer, so that the data access code can determine what it needs to insert, update, delete, etc. The primary reason for this is that it allows separation of my data access layer ... I think you'll find that something similar would need to be done if one chooses to use the ObjectFactory to separate out the data access, since Rocky only provides extension points for root-level fetch, insert, update, and delete. So, it's really a result of fleshing out the data access.
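As a rough sketch of what I mean by "snapshot of the dirty state" (TypeScript for brevity, and all names here are hypothetical, not CSLA APIs): the DTO carries the CSLA-style state flags, so the DAL can decide what to do without ever seeing the business object's protected or private fields.

```typescript
// Hypothetical child DTO carrying state flags alongside the data.
interface LineItemDto {
  id: number;
  isNew: boolean;
  isDirty: boolean;
  isDeleted: boolean;
}

// The DAL inspects the snapshot flags to choose insert/update/delete.
// Returns the operations it would perform, for illustration.
function persistLineItems(items: LineItemDto[]): string[] {
  const operations: string[] = [];
  for (const item of items) {
    if (item.isDeleted) {
      // A child that is new AND deleted was never persisted: nothing to do.
      if (!item.isNew) operations.push(`DELETE ${item.id}`);
    } else if (item.isNew) {
      operations.push(`INSERT ${item.id}`);
    } else if (item.isDirty) {
      operations.push(`UPDATE ${item.id}`);
    }
    // Clean, existing items are skipped entirely.
  }
  return operations;
}
```

This is also why the IsDeleted flag ends up in the DTO: with the DeletedList hidden behind a protected member, the flag is the only way the separated DAL can tell a deleted child from a clean one.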
It is not necessary to duplicate the dirty/new/deleted logic when using DTOs and a DAL.
Thanks Daniel, and thanks for your patience. I'd appreciate it if you - or anyone else - could provide an example of deleting an editable child through the ObjectFactory. If I am using DTOs to separate out my DAL - specifically using the ObjectFactory (i.e., the data access code is not privy to my protected or private fields) - how does the DAL know when to delete a child if I don't send the IsDeleted flag in the DTO (since the DeletedList is a protected member)?
My apologies. I know nothing about the ObjectFactory.
Copyright (c) Marimer LLC