I am trying to set up an elaborate unit test project in my solution. One problem that has me stumped is the access level of our methods, for example the Friend factory methods in all child classes.
I do not want to mess up the access levels of hundreds of my objects just to set up tests. I would also like to keep all of the tests in one or two specialized test projects, separate from the rest of the code. The only thought that comes to mind is to add a single Public class to every project, with Public access methods that work as jump-off points to the actual objects. Has anybody devised any other civilized way to do this?
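Something like this, where internal is the C# equivalent of Friend (the class and method names here are just placeholders):

// Lives inside the production assembly and exposes the internal
// (Friend) factories as public jump-off points for the test project.
public static class TestEntryPoints
{
    public static Customer NewCustomerForTest()
    {
        return Customer.NewCustomer(); // internal factory method
    }
}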
Jav
Jav,
Sorry for the slow reply, only just catching up with new forum emails.
I agree, you shouldn't have to "mess up the access levels" of your objects. You should write the objects exactly as they are meant to be used by your users (or your consuming code).
Therefore, your unit test projects should test the public interfaces that your objects are exposing. That is what you want to guarantee doesn't change from one version to the next.
So we target our unit tests (we use MbUnit, although NUnit seems to have much wider take-up) at our publicly exposed stuff that we don't want to break. But we don't test private methods within classes, because that's specific to the implementation within each class.
For each assembly (i.e. MyBusiness.Dll) we have a corresponding test assembly (i.e. MyBusiness.Tests.Dll) that tests the functionality exposed by that assembly. That seems to be a common way to structure the different projects.
We have the whole thing automated with CruiseControl.NET as well.
I disagree with your comment that unit tests need to be fast. They need to be 100% accurate and 100% repeatable. I would say that speed is the least important.
This is why we use an automated build process that gets the source from SourceSafe, builds the assemblies and runs all our tests on a daily basis. We don't care how long it takes, providing it can do it in a standard repeatable way. It then becomes a group responsibility to sort out a broken build.
As for the dependency on the database, yes you do have to assume there will be a database. But that's why you've got BOs that persist themselves to the DB, right?
So we have a standard abstract test base class that allows us to test the simple CRUD functions of each BO class against the DB. Obviously we have to write a concrete test class for each BO to override the unique things for that BO. But the mechanics of the test process are essentially the same for each BO.
Create a new BO and save it. Then read it back, edit it and save it. Read it back again and check it updated ok, before finally deleting it. Check it was deleted properly.
It's fairly easy to write such a unit test harness for your BO framework if you think that is what you are trying to test at a unit level.
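As a rough sketch, one of those tests might look something like this with NUnit (the Customer BO and its factory methods here are just placeholders):

[Test]
public void CrudCycle()
{
    // Create a new BO and save it.
    Customer customer = Customer.NewCustomer(); // placeholder factory
    customer.Name = "Test Customer";
    customer = customer.Save();

    // Read it back, edit it and save it.
    Customer fetched = Customer.GetCustomer(customer.Id);
    fetched.Name = "Updated Customer";
    fetched = fetched.Save();

    // Read it back again and check it updated ok.
    Customer updated = Customer.GetCustomer(customer.Id);
    Assert.AreEqual("Updated Customer", updated.Name);

    // Finally delete it and check it was deleted properly
    // (assuming the fetch throws when the row is missing).
    Customer.DeleteCustomer(customer.Id);
    try
    {
        Customer.GetCustomer(customer.Id);
        Assert.Fail("Object was not deleted");
    }
    catch (Csla.DataPortalException)
    {
        // expected - the object is gone
    }
}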
If you are trying to do System Testing (i.e. testing the functionality of the application) rather than just unit level testing then I would agree that you need a pre-populated DB to work with.
In that case you can either go with a standard SQL script that you run to populate your DB with data, or use your (fully unit tested) BOs to populate the DB for you. It all depends on the amount of investment you want to make in the test environment and how you plan to use it.
Ok, some of this depends on your definition of "unit" and also of "fast". But IMO I strongly disagree.
If you are saying that running 1 individual test must be responsive within your IDE when you want to test if you have broken 1 thing in your class - then I do agree with you. But I must reiterate that the point of unit testing is NOT to make sure that the code executes quickly - it's to make sure that your code does what it is supposed to do! That is all - nothing else. If it takes 5 seconds to prove you are 100% correct - then it takes 5 seconds - end of story. I don't want something to take 1 second to tell me that it's 85% correct. What about the people who need to use the other 15% of the functionality? What do I tell them? So I want the full 100% tested properly, no matter how long it takes.
That's why reliability and runnability are more important than speed. Hey, if you have unit tests that you know run slowly then say so in your documentation, or mark them in a special way - that doesn't mean that they shouldn't be there to guarantee 100% accuracy.
And I believe you have to communicate with the DB, if the "unit" you are testing is your BO. How are you going to prove that your BO can perform basic CRUD operations if it never actually persists data into your database?
C'mon, that's what your BO is supposed to do! It's supposed to create itself via a factory method, allow some properties to be set on itself and then persist itself into the database when you call the Save() method. How can you possibly test that it does its job properly if you don't communicate with a database? That's where the data has to end up!
I understand the points you make, but in the real world you have to do what makes sense for the "unit" you are testing. So if that means you go to the database or to AD, then that's what you should do.
Otherwise, your unit tests have no value as they don't test how your application will work when it's deployed!
I also disagree with the "fast" comment. In my opinion you should not even attempt to decouple your business objects from the database for purposes of unit testing. My number one rule for unit tests is that they ought to be testing what the method they are exercising actually does. If a method saves to the database, then by God the unit test for that method should be checking that it did write to the database; otherwise what's the point? If your unit tests have not verified the behavior you are expecting at runtime then they are worthless.
For those that insist that their unit tests should be super fast because they want their answers now, my only suggestion is that you just run the tests for the parts you changed. Certainly NUnit allows you to do this. Maybe I don't have time for the full 200 tests in my suite when I only made changes to one section of the code... so just run those tests. However: do not trust this "quick" test to verify your entire API and push the thing to production. You must run the entire suite before giving the API the stamp of approval.
And really, come on, you can't spare one minute out of your day to save hours and hours of painstaking debugging? I've never had a unit test suite that was so intolerably long that I couldn't suffer through it. Go get a drink or something and come back, it's not that bad. My unit tests typically consist of at the very least a "CRUD" test, and sometimes a "CRUD x 100" test which just repeats the first test 100 times. This is for really punishing the system or trying to get more accurate numbers for comparing timing. The "x 100" tests are marked as "Explicit" in NUnit, though. I don't want everything running 100 times during my normal test cycle. These are specific tests for specific times, and they stay neatly out of the way until told "Explicit"ly to run.
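For anyone who hasn't used it, the NUnit attribute is all it takes (the CRUD test method name below is just a placeholder):

[Test, Explicit]
public void CrudTimes100()
{
    // Stress/timing test - excluded from the normal test cycle and run
    // only when explicitly selected in the NUnit GUI or console runner.
    for (int i = 0; i < 100; i++)
    {
        CrudTest(); // placeholder for the basic CRUD test method
    }
}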
We recently began a CSLA-based project which has us unit testing in a serious way for the first time. Here's where we started, not necessarily saying this is good...
We did mock the database using NMock. I can share more details on how that works if that's of interest. We were trying to unit test the public factory methods, which call static methods on the DataPortal. The DataPortal is difficult to mock because of those static methods. So instead we mocked the ADO.NET interfaces used in the eventual call to our DataPortal_ABC methods. When the factory methods are done executing you can assert that data was fetched into the object as expected (for a _Fetch), etc.
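To give a flavour of it, the setup looked roughly like this in NMock 2 style (how the mocked connection gets injected into the data access code is glossed over here, and the details are from memory):

Mockery mocks = new Mockery();
IDbConnection connection = (IDbConnection) mocks.NewMock(typeof(IDbConnection));
IDbCommand command = (IDbCommand) mocks.NewMock(typeof(IDbCommand));
IDataReader reader = (IDataReader) mocks.NewMock(typeof(IDataReader));

Expect.Once.On(connection).Method("CreateCommand").Will(Return.Value(command));
Expect.Once.On(command).Method("ExecuteReader").Will(Return.Value(reader));
// ... further expectations on reader.Read()/reader.GetXxx() feed canned row data ...

// Run the factory method, assert the data landed in the object, then
// check that every expected ADO.NET call actually happened:
mocks.VerifyAllExpectationsHaveBeenMet();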
But here's the problem part...
1) The theory was that mocking would keep the tests fast and allow us to focus on what the DataPortal_ABC code did with the results of the data access. In the case of a Delete, there really is nothing to check on the business object afterwards, so why bother testing with mocks? Unless you just want to make sure an exception is not happening. Same problem with Insert and Update. The only thing that really gets changed there is the internal timestamp. So it seems that perhaps we went down the mockery road unnecessarily in this case. It certainly didn't test that the data was persisted to the actual database, which some folks on this thread have pointed out as valuable!
2) For this particular project we chose to implement stored procedures in the database instead of building SQL statements. That's great, but it spreads the data access logic across the DataPortal_ABC method and the SP. As we mocked the ADO objects we realized that while it allowed us to isolate the tests towards the code in the DP_ABC methods, it really didn't test the SP's. I could see someone advocating unit tests on the SP's specifically, but I'm thinking it's more manageable if you just consider them part of the data access logic in the DP_ABC methods instead.
That's a long-winded way of saying that I'm having a change of heart related to mocking those ADO objects. I'm now thinking it may be better to go ahead and test those DP_ABC methods against the real thing. As MelGrubb implies, you can organize your tests into categories: "all tests", the "fast tests" (hopefully the majority of your tests), and the "slower tests". That will help if you are following the conventional wisdom of regressing your test suite frequently - just make sure to run them all before you check in.
I'd be very interested in hearing about the experience of others unit testing their CSLA business objects...
Dennis
On the subject of "fast". Automated unit tests are described as "fast" because they are automated. Think back to your life before NUnit... how did you test things? Did you test things? Your tests probably consisted of test scripts or custom-built harnesses to exercise the different functions manually one at a time. Now we have NUnit, and you can test your whole suite, and people want to say it's not fast enough because it's still hitting the database and taking entire minutes out of their day. Wah! I say that a 10 minute long unit test suite is still considered "fast" compared to how we used to do this stuff. Having said that, if your test suite is running for 10 minutes, there'd better be a whole lotta objects you're testing. I've never had a suite that took more than 3 minutes, and that was for a fairly dense API.
I would also take exception to the broad categorization of anything that interacts with the database as "integration testing". In one way yes, we are testing the integration of our business objects with the database, okay you can argue that point. BUT, if the specific method I am testing is a "Get" function, then I'm going to have to call that a unit test. In this case the "unit" is the get function, and in order to test it I'm going to have to get something... get it?
When I want to test my web UI talking to the business objects talking to the database, then yes that is integration testing. I'm checking the whole package instead of a specific part.
If I have broken my data access layer out from my business objects then yes, I could mock the DAL and it would no longer need to touch the database. Rocky's patterns (as well as my own) dictate that the business object itself is its own DAL, so there is no integration here apart from the integration with the database itself, and that's a whole philosophical debate of its own.
On a different note, my unit tests can arguably be called integration tests because of the way they rely on each other. Due to referential integrity rules, I can't save an Order object if it doesn't have a Customer to belong to, so my Order tests create dummy Customers in the course of their own work. They do this by calling methods on the Customer test class, so I'm not duplicating logic. So strictly speaking, all my unit tests except for the bottom layer lookup-table objects are actually integration tests because they rely on other objects and methods. It works, and it helps me find the problems at the points where my objects interact. So call it an "NInt" test if you want. Whatever, sue me.
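In code the sharing is nothing fancy, just something like this (the names are illustrative):

[TestFixture]
public class OrderTests
{
    [Test]
    public void SaveOrder()
    {
        // Satisfy referential integrity by reusing the Customer test
        // class's helper instead of duplicating that logic here.
        Customer dummy = CustomerTests.CreateTestCustomer(); // illustrative helper

        Order order = Order.NewOrder(); // illustrative factory
        order.CustomerId = dummy.Id;
        order = order.Save();

        Assert.IsFalse(order.IsNew, "Order should not be new after saving");
    }
}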
Hey, this is an important question. Because you have to assume a level of trust somewhere.
So if we trust that the .NET framework does what it is supposed to do...
and we trust that ADO.NET does what it is supposed to do...
and we trust that the CSLA.NET framework does what it is supposed to do...
then shouldn't we just accept them (without question) and use them?
If so, then what is the point of creating mock objects to replace/replicate these items?
We all agree (I hope), that the unit test code we write should only test the business code we write, not all the dependencies it relies on.
We have to assume that all the dependencies below a certain point have been tested 100% accurately already by somebody else.
Who's to say that in writing a mock object you don't actually introduce a bug/side effect that isn't in the original underlying item you are mocking?
So should we write unit tests to test our mock objects as well? Do you see where I'm going with this?
Where does that trust boundary start?
Hi David,
Thanks for activating this priceless discussion, and I appreciate everyone's comments. Most of the discussion appears to be about NUnit, which I have used before. The Test System part of the Team Suite allows one to create unit tests on an entire project. When I tried that a few weeks ago, I got test "shells" for every instance variable and every property for every object in the project. At the time I was overwhelmed enough just getting my projects converted to CSLA 2.0, so I put the unit testing aside for the time. Now that things are more stable, I intend to get into it.
My questions are:
1. Is anyone using the VS Test system?
2. If so, is it worth doing the automatic creation of thousands of tests and then filling in each with the required info to make the test workable - or is it too much to wade through?
3. Any other words of wisdom?
TIA
Jav
(a) I have not used the VS Test System. However, I have followed news about it over the past two years.
My feeling is that it sounds like you are afraid of "wasting" it, since it can generate the code for you. My take is that we have used cut/paste all the time (especially via QuickCode/snippets) for so long that generating the code is nothing. Ignore it.
(b) To mock or not to mock: We really need to be careful here. We need to put things in perspective. TDD is THE agile and THE lightweight methodology - that is, before .NET got into the picture, i.e., when we were talking only in a C++/Java context. Now, .NET brings in the VB tradition. VB comes from a RAD context, which is much lighter.
Before you dismiss my point lightly, note that I have used Java and I love Java. Also, I have used TDD and I love TDD. Of course, I have also used classic VB, VB.NET and C#.
My point: if you really want the lightweight end of TDD, then do not mock. Mocking is for developing frameworks, not for developing everyday applications.
A healthy way to resist the temptation to use mocks in everyday application development is to participate in some open source framework development. That way, you learn how to mock, and the right place for it, without messing up your everyday application development.
Note that once you mock, your unit testing code is totally different from your real code; as a result, your unit testing code loses a key function: serving as documentation.
Here is my blog. http://survic.blogspot.com/2006/04/8-ottec-123-enterprise-computing.html
In answer to David's post with the timer info in it: YES, 3 seconds is a very long time for 4-8 tests to run in. If you want unit tests for most of your application, and your application is some 50k lines of code, you should have at least a couple of hundred unit tests. If each test takes half a second then I can't run all my unit tests in one go (as they would take 4 minutes or more), so I have to partition them and do some guesswork as to which suite is the most likely to have errors introduced in it. I would want ALL my tests to run within 5-10 seconds; that way my developers have no excuse for not testing the entire suite at least a couple of times per ten minutes.
The reasons why mocking DB access is a good idea:
1) It's faster. If I do it consistently I might just reach the goal stated above.
2) I trust the DB. I don't need my tests to execute ADO.NET code every time (which de facto would mean that I was testing the ADO framework in addition to my own code).
However, mocking something as low-level as ADO is generally a waste of time. What I typically would like to mock is a layer that has methods like InvoiceCollection GetInvoices(Customer c).
That way the mock could expect a call to that method and populate real Invoice objects, but with fake data in them (not coming from the DB, just made up on the spot).
This would make tests involving the InvoiceCollection a breeze to code, and all the objects would think that they were in fact getting "real" data from the DB, and it would be super fast.
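What I have in mind is something like this (all the names are made up):

public interface IInvoiceService
{
    InvoiceCollection GetInvoices(Customer c);
}

// The fake used in tests: the invoices are made up on the spot, the DB
// is never touched, and the consuming objects are none the wiser.
public class FakeInvoiceService : IInvoiceService
{
    public InvoiceCollection GetInvoices(Customer c)
    {
        InvoiceCollection invoices = InvoiceCollection.NewCollection(); // made-up factory
        invoices.Add(Invoice.NewInvoice(c, 100.0m)); // made-up factory
        invoices.Add(Invoice.NewInvoice(c, 250.0m));
        return invoices;
    }
}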
I still think I would need some tests that went against the DB, but they would be few, possibly even in a separate suite (if there were too many and they were slowing down the execution of my functional tests).
Now, the problem I see in CSLA is that it is always assumed that an object should create itself in tight coupling with the DB. You might just be able to put a mock in between that coupling, but that would mean mocking low-level stuff, and that's a bad thing (even if you only do parts of it, it's still a lot of work).
It doesn't seem right that if I want to test the method that aggregates up the sums and taxes from a collection of several invoices, I first need to put them into the DB before I can test it. I know what I want to test; if I were outside the CSLA environment I would just instantiate a lot of objects and be done...
The only solution as I see it is to have a separate Criteria object that contains all the data you need for your object; when the static factory method receives a call to get, it just news up an object and puts the data into it. That would work for simple tests, but for more complex tests, in which the objects you faked up get other objects from CSLA, you would be screwed again (metaphorically).
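Roughly what I mean (the names and fields are made up, and it assumes the test can reach the factory):

// A test-only criteria object carrying the full state of the object.
public class FakeInvoiceCriteria
{
    public decimal Amount;
    public decimal TaxRate;
}

// Inside the Invoice BO: a factory that skips the DataPortal and loads
// the object's state straight from the criteria.
internal static Invoice NewTestInvoice(FakeInvoiceCriteria criteria)
{
    Invoice invoice = new Invoice();
    invoice._amount = criteria.Amount;   // made-up fields
    invoice._taxRate = criteria.TaxRate;
    return invoice;
}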
Apparently there is no easy solution to this, anyone have any other ideas to get around this?
Yours, Andreas
Andreas,
I'm sorry but I have to disagree with you on this. As I said in one of my previous posts, it all depends on your definition of a "unit". I think your definition of a "unit" is unclear.
Using your definitions you would like each developer to run the entire test suite once every five minutes (you said "a couple of times per ten minutes"). I'm sorry, but not even the most productive developer on the planet is able to write/change enough source code in five minutes to require the entire test suite to be re-run. That's not UNIT testing, that's application/framework/system testing - call it what you like, but it is not UNIT testing.
At the lowest level a "unit" could be a single method (or possibly a class). Consequently, there will be a set of tests (let's call them unit tests) that completely test the functionality of that method (or class). So when a developer makes a change to the code in that method (or class) they should run the "unit tests" that ensure that they have not broken that method (or class).
It is a waste of time to run the entire test suite at this point - nothing else has changed other than the one method (or class) just modified.
And to some extent I agree with you. At this level, running the unit tests should be as fast as possible for the developer. Change the code, run the tests, check it back in. Repeat.
Running the entire test suite is a different task. This typically involves a "full build" of the application. Running all the unit tests at this point guarantees that everything still works when it's all put together. It does not matter that this takes more time than running an individual "unit test", it's the 100% guarantee of accuracy that you want here.
This is commonly achieved by using a build server and an automated build process. This completely automates the process of getting the code from the source code repository, compiling it and then running all the unit tests. It should be 100% repeatable and able to run unattended (i.e. perhaps as nightly build).
We use a tool called CruiseControl.NET to achieve this in our environment. We have a daily build process that gets code from SourceSafe, compiles it, runs the NUnit tests, runs the FxCop tool and builds our documentation using NDoc. It runs automatically and currently takes a couple of minutes.
-----
The reasons why mocking DB access is a BAD idea:
1) It means you have to write code to mock something you trust (you said "I trust the DB"). I also trust the DB. I do not trust developers to build something that replicates this 100% accurately (and I include myself in that statement).
2) You said "I trust the DB". I also trust the DB and the ADO.NET framework. However, I do not trust developers to write 100% correct SQL code in stored procedures (and I include myself in that statement). Therefore, I want to be sure that the code uses the correct stored procedures, so that the data is accessed using the exact mechanism that will be used in production.
-----
Just for information, our current project has 275 tests and takes approximately 25 seconds to run. How does this compare to yours?
Andreas,
Your points are valid and I can see from the size of the projects you're working with that you know what you're talking about. But here's the thing.
andreakn:
If I write 5 lines of code, compile, run ALL my tests (in less than 5 seconds) see that they're all green, then I KNOW that I haven't introduced a bug.
If I write 5 lines of code, compile, run the 10 tests that completely test the behaviour of my unit in less than 5 seconds, see they're all green, then I KNOW that I haven't introduced a bug.
The difference is that I've only run 10 tests, not the entire suite, but I still KNOW that the unit I'm working on is 100% accurate. Those 10 tests give me complete 100% coverage of the functionality of my unit. Any code that uses my unit will still work exactly as required. I don't need to test the rest of the code to prove that, I already KNOW - it's not an assumption.
My point is that you shouldn't need to run all the tests to KNOW that the unit works 100% correctly. You only need to run the exact number of tests required.
If you need to run any more than that, then the unit tests for that unit are not complete. And that is where the problem is.
So I agree with the sentiment of the article you quote and I would agree that on the whole the way we work is to write small "units" of software with the associated "unit" test cases.
And I'm sure you'd agree that TDD (as a concept), along with the associated testing tools like NUnit and VS 2003/2005, provides a much easier way to do this level of testing compared to the way we did it 5-10 years ago.
jokiz,
I think that one of the real benefits from this CSLA forum is that you can get some great discussions on topics that are not directly CSLA related - although obviously everyone shares a common interest in the CSLA framework. This thread falls into that category.
If you're making multiple changes across several source files, then running the entire test suite is probably the easiest way to make sure you've not broken something - agreed. That's because you've changed multiple units - so you need to run multiple unit tests.
With regard to the testing of CSLA objects, have you seen the latest blog entry about Mock Objects from Fredrik Normen? He presents an interesting way to create objects, passing in mock objects via a special constructor method. Have a look for some possible ideas.
Hey there,
I've been reading this whole thread. Obviously there are two camps on this discussion. The basic fact is that you can't test a single business object in CSLA without running code from another class. This is called object coupling. If you want to have a maintainable system, you need to go for object cohesion, not coupling. (for a good explanation of the difference, see http://www.toa.com/pub/oobasics/oobasics.htm#ococ).
There are of course alternatives to this. The thing I came up with is the following. I create a new interface called IDataPortal. This interface will declare all the public methods of the Csla.DataPortal (I'm not sure about the event handlers though, but we'll come to that later).
public interface IDataPortal
{
    object Create(object criteria);
    T Create<T>();
    T Create<T>(object criteria);
    ...
}

I then create a new class that implements this interface and maps it one-to-one onto the Csla.DataPortal:

public class MyDataPortal : IDataPortal
{
    public object Create(object criteria)
    {
        return DataPortal.Create(criteria);
    }
    ...
}
On my business object, the code changes slightly:
public class MyBusinessObject : BusinessBase<MyBusinessObject>
{
    private static IDataPortal _dataPortal;

    public static IDataPortal LocalDataPortal
    {
        get
        {
            if (null == _dataPortal)
                _dataPortal = new MyDataPortal(); // instantiate default data portal
            return _dataPortal;
        }
        set { _dataPortal = value; }
    }
    ...
When I create a new business object, I then call:

LocalDataPortal.Create<MyBusinessObject>();

In your test you would write:

MyBusinessObject.LocalDataPortal = MockDataPortal;
Expect.Once.On(MockDataPortal).Method("Create").Will(Return.Value(whateverYouWantItToReturn));
MyBusinessObject myNewObject = MyBusinessObject.NewObject();
I'm not really happy with the fact that the members of MyBusinessObject have to be static, as this means that every test that doesn't initialize the LocalDataPortal property will end up with the mock object of the previous test, or even the default that hits the database. But hey, this makes it testable. You can even have tests hit the database by setting the property to null. If some members need to call Csla.ApplicationContext.User, this is ok, because the Csla.ApplicationContext.User property's type is an IPrincipal, which can also be mocked.
I'm not sure how deep the implications are, as I don't know the Csla framework that well yet. But this gives me the reassurance that a solution can be found.
UmpSens:
Obviously there are two camps on this discussion. The basic fact is that you can't test a single business object in CSLA without running code from another class.
I'm not trying to prove my object interacts correctly with the Csla framework. I'm trying to bypass the framework. When I test a business object that uses the Csla framework, I assume two things:
- The Csla framework does what it's supposed to do without bugs (some might consider this bold, however if you want to test the Csla framework, this is not the right place to do it)
- My business object is buggy until all its tests pass
When I instantiate a business object with the default Csla implementation guidelines, I actually run at least three methods in my business object:
- the static factory method to create the object (calling the Csla.DataPortal.Create)
- the business object's constructor, which will probably initialize the object and child objects
- the static DataPortal_Create method in my business object
So I'm actually testing the implementation of three methods in one single call. If it runs well, all is ok. However, if one of them fails, I don't know which one did. What I want to do is test each method separately.
So to test a single method of my business object, I am writing my contract first, setting up the expected calls it will do to the framework or any external code. That way, I know the calls were made, and the method did what was expected. I can apply that to all the methods of my class individually.
I think a potential problem with the approach you are taking is that you are changing the behaviour of your BO (by re-implementing the Data Portal a different way) to make it testable. That doesn't seem quite right to me.
I understand what you're trying to do and why. And I understand the argument from the pro-Mock Objects camp as to why you might want to do this as well.
But altering your design just to make something testable, is that really the way to go? Is that really what TDD is promoting?
Surely, you should stick with the design and find a way to test your BO that does not involve changing the design. It might be harder to do it this way, but it's more correct.
What's to stop a developer using your LocalDataPortal property for something other than unit testing as part of your application?
I'd like to introduce a slightly different question if I may... the concept of unit testing a CSLA business layer has had me in a quandary for a couple of years.
Based on how David describes his tests, it seems to me there are two levels of unit testing the BL.
Level 1 (what David describes): You are basically testing just the CRUD operations of the business object, with the goal of catching bugs caused by changes to the database schema or the data access code of the business object.
Level 2: You are testing the business logic of each BO. It's not just that I can retrieve a previously saved BO; this level is also testing that if I change PropertyX back and forth between these 10 values, IsValid changes appropriately. That if I read invalid data from the database, the BO's IsValid is false after loading. And so on and so on.
The quandary for me is that it seems like to achieve Level 2 you'd have to spend a ridiculous amount of time writing and maintaining all the tests. I've pretty much dismissed it as too burdensome.
So the question becomes whether or not Level 1 testing is really worthwhile. How often do you make and catch the type of mistake that it would detect?
Dan, I think you make a good point and one that I would argue comes back to something I said in an earlier post some time ago - it depends on your definition of a "unit".
So, IMO, I think the two levels you describe are actually part of what I choose to call the "unit tests" for a BO. My 100% functional unit (i.e. my BO) has to behave exactly the way I expect and it has to persist itself. But that's my definition of a "unit" in this scenario. Other people may have a different opinion.
And I agree that the creation and maintenance of all the test code to perform all the required tests is a large task, but I'm sure we'd all like it as an ideal to aim for. However, we (you and me both!) choose not to write the level 2 tests, because we want to spend our time writing the application and not writing test cases. I'm not saying that we don't have any level 2 style tests at all (as we have some), but we don't have the ideal 100% coverage for each BO.
So that's why we developed an abstract test base class which allows us to test the CRUD part of the "unit test". This didn't take very long and gives us basic coverage for any BO.
Does it test the basic CRUD functionality of a BO? Yes
Did it take long to setup? No
Does it test the business rule functionality of a BO? No.
Would that take a long time to setup? Yes.
So, it's a compromise we're prepared to take. Yes, we may find that some bugs "get through the cracks" because we didn't write all the tests for all the BOs. But against that we're getting more of the application development completed.
It's a classic compromise trade-off between: test coverage vs. project delivery.
Andy and David,
Yes we all agree on the trade-off / compromise point of view. I guess now what I'm wondering is about the bang-for-the-buck of whatever you have defined as a "unit".
I like the idea of being able to easily and quickly check basic CRUD operations to verify the database schema, views, stored procedures, BO data access code, DAL etc. are all still in alignment.
My question is beyond that what's been particularly useful or not useful. Do you find that only 2% of the bugs are data access related? Is it still worth it since the CRUD tests are so easy to write? Do you find with perhaps a few additional business logic tests per BO to test to the complex validation you catch another 60% of bugs? And so on.
Also, David, forgive me if it was covered earlier in the thread as I didn't read every word going all the way back, but could you give a little detail on what exactly you mean about having an abstract test base (or whatever the exact wording you used)?
I guess the bang-for-the-buck is what it's all about at the end of the day. I think most of the "boilerplate" style code doesn't require test cases, because it can either be code generated or sensibly cut-and-pasted. That stuff is easy to code and easy for someone else to pick up and understand.
It's the unusual and non-standard behaviours that warrant the effort of writing unit tests. That's where you get the bang-for-the-buck payback. That's where you need some test cases to provide the "guidance" to the next developer that comes along who needs to change/extend the "complex" behaviour. You definitely want test cases in place then to make sure that the existing behaviour is not broken.
For what it's worth I think the "bug" we've fallen over most frequently is BOs not saving when you call the Save() method. And every time this comes back to the property setter in the BO not making the object dirty. A simple oversight in terms of the setter code, but it has the unexpected effect of not putting your data into the DB.
And there must be an easy way to create a test harness for this (using reflection?), so you don't have to code it up for every different BO. But we haven't gone down that route yet.
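If we did, I imagine it would be something along these lines (a sketch only - it handles just string properties and assumes the usual CSLA IsDirty behaviour):

using System.Reflection;
using NUnit.Framework;

public static class DirtyCheckHarness
{
    // For every writable public string property, set a new value and
    // check that the BO reports itself as dirty afterwards.
    public static void CheckSettersMarkDirty(Csla.Core.BusinessBase bo)
    {
        foreach (PropertyInfo prop in bo.GetType().GetProperties())
        {
            if (!prop.CanWrite || prop.PropertyType != typeof(string))
                continue; // sketch covers string properties only

            prop.SetValue(bo, "changed-" + prop.Name, null);
            Assert.IsTrue(bo.IsDirty,
                prop.Name + " setter did not mark the BO dirty");
        }
    }
}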
With regards to the abstract test base class, see the comments I made on post 1152 where I explained the principle of what was happening. In essence though, a test class/framework can be just like any other class/framework - you can use inheritance, generics, whatever, to get the job done in the most practical way possible.
So, the principle behind the abstract test base class was a class that could be inherited from that provided the basic framework for doing the CRUD testing (it's analogous in some ways to the BusinessBase base class). It has abstract methods that each derived class must implement to provide the BO-specific stuff - like the setting of properties.
So a new test class for a new BO means you inherit from the abstract base and implement the methods that require BO specific behaviour. And hey presto - you have simple CRUD testing.
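In outline, the principle looks something like this (a simplified skeleton, not our actual code):

public abstract class CrudTestBase<T> where T : Csla.BusinessBase<T>
{
    // The BO-specific stuff each derived test class must provide.
    protected abstract T CreateNewObject();     // new BO with properties set
    protected abstract T FetchObject(T saved);  // read it back from the DB
    protected abstract void ChangeObject(T bo); // modify some properties
    protected abstract void DeleteObject(T saved);
    protected abstract void AssertObjectsEqual(T expected, T actual);

    [Test]
    public void CrudCycle()
    {
        T bo = CreateNewObject();
        bo = bo.Save();

        T fetched = FetchObject(bo);
        AssertObjectsEqual(bo, fetched);

        ChangeObject(fetched);
        fetched = fetched.Save();
        AssertObjectsEqual(fetched, FetchObject(fetched));

        DeleteObject(fetched);
    }
}

A concrete [TestFixture] class per BO then inherits from this and fills in the abstract methods.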
UmpSens:
I'm not trying to prove my object interacts correctly with the Csla framework. I'm trying to bypass the framework. When I test a business object that uses the Csla framework, I assume two things:
- The Csla framework does what it's supposed to do without bugs (some might consider this bold, however if you want to test the Csla framework, this is not the right place to do it)
- My business object is buggy until all its tests pass
UmpSens:
When I instantiate a business object with the default Csla implementation guidelines, I actually run at least three methods in my business object:
- the static factory method to create the object (calling the Csla.DataPortal.Create)
- the business object's constructor, which will probably initialize the object and child objects
- The static DataPortal_Create method in my business object
So I'm actually testing the implementation of three methods in one single call. If it runs well, all is ok. However, if one of them fails, I don't know which one did. What I want to do is test each method separately.
So to test a single method of my business object, I am writing my contract first, setting up the expected calls it will do to the framework or any external code. That way, I know the calls were made, and the method did what was expected. I can apply that to all the methods of my class individually.
I am a PM, but I don't understand the reason why hitting the DB is so bad. How long do you think it takes to run these unit tests?
And this has nothing to do with CSLA, BTW; this is a general discussion that relates to any software development project. Unit testing is a generic concept. Don't use CSLA as a misdirection here. However you build your software, you will at some point in the development lifecycle have to test that data is actually going into the database!
I'll repeat one of my previous comments, it depends on your definition of a "unit". As others have commented in this thread, if you want to call testing a BO persisting itself to the database an "integration test" then call it that if you want.
Personally, I think of BOs as a "unit". Fine - that's just different naming conventions and we'll leave it there. What is important is that you do test that behaviour somehow.
So a "unit" test may be checking that when you set the Person.Name property in your Person BO with a new value it does/doesn't trigger the business rule validation to occur (maximum string length for example). Ok, that's fine. And you'll have a whole bunch of "unit" tests like that.
But to me the "unit" I'm interested in is my Person BO, since this "unit" can be surfaced via any one of the different GUIs that I may want to expose my BO in (WinForms, WebForms, WebService). Now I want to be 100% sure that the BO works properly regardless of the GUI. That's my definition of the "unit" in this context.
And yes I do want to make sure that whoever wrote that BO has written all the low-level "unit" tests as well (that test property setters for example) and that it persists itself nicely into a database (could be MS SQL Server, Oracle or MySQL for example).
And all these tests need to be 100% accurate and 100% repeatable. And I don't care how long they take to run, but they must be included in the automated build process. And when one of them goes wrong, it does need to be investigated and corrected ASAP and the build re-done.
So the whole development lifecycle becomes much more automated and repeatable at the click of a button.
And this is before we even get to testing the different GUIs that might go on top as well.
In answer to my own question "How long do you think it takes to run these unit tests?" here's my answer. I ran an example from our own test project within the IDE.
The test is written mainly within an abstract base class, so it can be used for any CSLA BO provided you override/implement the right methods.
My setup is:
OS: Windows XP and MS SQL Server 2000.
Hardware: Dell Latitude D800 laptop with Intel 1.6 GHz CPU and 512 MB RAM.
Software: VS2005, ReSharper, TestDriven.Net, NUnit.
The NUnit test I ran involves a CSLA BO with 3 properties and does the following 4 steps:
(1) CreateTest - Create a new BO, set all the properties, save it.
(2) ReadTest - Create a new BO, set all the properties (to the same values). Get the first BO back from the DB and compare the two objects for equality by checking that all the properties are the same.
(3) UpdateTest - Get the first BO back again, change it, save it (using the same object reference to retain the object). Get the same object from the DB into a second object reference. Again, compare the two objects for equality as above.
(4) DeleteTest - Delete the original object from the DB. Check that it is no longer in the DB.
I ran the whole test using both TestDriven.Net and the new NUnit support within ReSharper. The results were very similar with the whole test taking approximately 3 seconds. I know the code is not "optimal" and could be made to perform faster, but it works and gives 100% accuracy on the CRUD functionality (including the testing of any relevant CSLA properties like IsNew, IsDirty, IsSavable).
So is 3 seconds too long for that level of "unit" test?
Note: The first time you run the test it takes longer, but that's due to the setup overhead of the relevant testing harnesses within TestDriven.Net and ReSharper. After that the times are pretty consistent.