Unit Test Quandary

Old forum URL: forums.lhotka.net/forums/t/42.aspx


Jav posted on Tuesday, May 09, 2006

I am trying to set up an elaborate unit test project in my solution.  One problem that has me stumped is the access level of our methods, for example the Friend factory methods in all our child classes. 

I do not want to mess up the access levels of hundreds of my objects just to set up tests.  I would also like to keep all of the tests in one or two specialized test projects, separate from the rest of the code.  The only thought that comes to mind is to add a single Public class to every project, with Public methods that act as jumping-off points to the actual objects.  Has anybody devised any other civilized way to do this?

Jav

Jav replied on Tuesday, May 09, 2006

Okay, I see it.  Visual Studio's test module automatically creates accessors for the private methods in other projects.
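(For reference: on .NET 2.0 there is also the InternalsVisibleTo attribute, which exposes Friend/internal members to a named test assembly without changing access levels.  A minimal sketch; the assembly name is illustrative:)

// In MyBusiness's AssemblyInfo; strong-named assemblies must also
// include the test assembly's public key in the string.
// (C# shown; VB uses <Assembly: InternalsVisibleTo("MyBusiness.Tests")>.)
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyBusiness.Tests")]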

DavidDilworth replied on Monday, May 15, 2006

Jav,

Sorry for the slow reply, only just catching up with new forum emails.

I agree, you shouldn't have to "mess up the access levels" of your objects.  You should write the objects exactly as they are meant to be used by your users (or your consuming code).

Therefore, your unit test projects should test the public interfaces that your objects are exposing.  That is what you want to guarantee doesn't change from one version to the next.

So we target our unit tests (we use MbUnit, although NUnit seems to have much better uptake) at our publicly exposed stuff that we don't want to break.  But we don't test private methods within classes, because that's specific to the implementation within each class.

For each assembly (i.e. MyBusiness.Dll) we have a corresponding test assembly (i.e. MyBusiness.Tests.Dll) that tests the functionality exposed by that assembly.  That seems to be a common way to structure the different projects.
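A minimal sketch of one fixture in such a test assembly, exercising only the public interface (the Customer BO and its factory method are hypothetical; MbUnit and NUnit share this attribute style):

using NUnit.Framework;

[TestFixture]
public class CustomerTests
{
    [Test]
    public void NewCustomerStartsInvalid()
    {
        // Exercise only the public factory and public properties.
        Customer customer = Customer.NewCustomer();
        Assert.IsFalse(customer.IsValid); // required fields not yet supplied
    }
}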

We have the whole thing automated with CruiseControl.NET as well.

andreakn replied on Tuesday, May 30, 2006

Hello, new to the forum and new to csla (halfway through the book), but here's my followup-question anyway:

How do you detach yourself from the DB when doing unit tests? Considering how tightly coupled the objects are to the DB in the standard use of CSLA as described in Lhotka's book, it seems hard to test without having a DB loaded with test data.

A key point of unit tests is that they should be fast; going to the DB takes time, and I'd much rather instantiate "fake" objects loaded with just enough data to run my tests.

There should be a way to inject the dependency on the DB, I think...

any thoughts?

Yours
Andreas, Norway

DavidDilworth replied on Wednesday, May 31, 2006

I disagree with your comment that unit tests need to be fast.  They need to be 100% accurate and 100% repeatable.  I would say that speed is the least important.

This is why we use an automated build process that gets the source from SourceSafe, builds the assemblies and runs all our tests on a daily basis.  We don't care how long it takes, providing it can do it in a standard repeatable way.  It then becomes a group responsibility to sort out a broken build.

As for the dependency on the database, yes you do have to assume there will be a database.  But that's why you've got BOs that persist themselves to the DB, right?

So we have a standard abstract test base class that allows us to test the simple CRUD functions of each BO class against the DB.  Obviously we have to write a concrete test class for each BO to override the unique things for that BO.  But the mechanics of the test process are essentially the same for each BO.

Create a new BO and save it.  Then read it back, edit it and save it.  Read it back again and check it updated ok, before finally deleting it.  Check it was deleted properly.
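As a sketch, that whole sequence fits in one NUnit test.  The Project BO and its factory methods below loosely follow the book's ProjectTracker sample, and the exception thrown for a missing row depends on your data access code, so treat the details as illustrative:

using NUnit.Framework;

[TestFixture]
public class ProjectCrudTests
{
    [Test]
    public void CrudRoundTrip()
    {
        // Create a new BO and save it.
        Project project = Project.NewProject();
        project.Name = "Test project";
        project = project.Save();

        // Read it back, edit it and save it.
        project = Project.GetProject(project.Id);
        project.Name = "Renamed project";
        project = project.Save();

        // Read it back again and check it updated ok.
        project = Project.GetProject(project.Id);
        Assert.AreEqual("Renamed project", project.Name);

        // Delete it, then check it was deleted properly.
        Project.DeleteProject(project.Id);
        try
        {
            Project.GetProject(project.Id);
            Assert.Fail("Project was not deleted");
        }
        catch (Csla.DataPortalException)
        {
            // expected: the row is gone
        }
    }
}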

It's fairly easy to write such a unit test harness for your BO framework if you think that is what you are trying to test at a unit level.

If you are trying to do System Testing (i.e. testing the functionality of the application) rather than just unit level testing then I would agree that you need a pre-populated DB to work with.

In that case you can either go with a standard SQL script that you run to populate your DB with data, or use your (fully unit tested) BOs to populate the DB for you.  It all depends on the amount of investment you want to make in the test environment and how you plan to use it.

jokiz replied on Wednesday, May 31, 2006

i have been struggling with unit testing lately since i have been using CSLA 1.1.  i have read a number of articles on unit testing and they really ought to be fast.  after all, you want to know immediately if you've broken something after making your changes.

unit tests, as most of the articles have said, should not communicate with an external environment (DB, Active Directory, etc.).  a better setup is to have a separate project for these tests (persistence, AD authentication), and the unit tests for the business objects should not load themselves from the DB.  and since CSLA makes use of static factories, you should have an abstraction inside these factory methods.  i haven't implemented it yet, but i can see an opening...

DavidDilworth replied on Wednesday, May 31, 2006

Ok, some of this depends on your definition of "unit" and also of "fast".  But IMO I strongly disagree.

If you are saying that running 1 individual test must be responsive within your IDE when you want to test if you have broken 1 thing in your class - then I do agree with you.  But I must re-iterate again that the point of unit testing is NOT to make sure that the code executes quickly - it's to make sure that your code does what it is supposed to do!   That is all - nothing else.  If it takes 5 seconds to prove you are 100% correct - then it takes 5 seconds - end of story.  I don't want something to take 1 second to tell me that it's 85% correct.  What about the people who need to use the other 15% of the functionality?  What do I tell them?  So I want the full 100% tested properly no matter how long it takes.

That's why reliability and runnability are more important than speed.  Hey, if you have unit tests that you know run slowly then say so in your documentation, or mark them in a special way - that doesn't mean that they shouldn't be there to guarantee 100% accuracy.

And I believe you have to communicate with the DB, if the "unit" you are testing is your BO.  How are you going to prove that your BO can perform basic CRUD operations if it never actually persists data into your database?

C'mon, that's what your BO is supposed to do!  It's supposed to create itself via a factory method, allow some properties to be set on itself and then persist itself into the database when you call the Save() method.  How can you possibly test that it does its job properly if you don't communicate with a database?  That's where the data has to end up!
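(That lifecycle, sketched with a hypothetical Customer BO:)

// Factory method creates the object via the data portal,
// properties are set, and Save() persists it to the database.
Customer c = Customer.NewCustomer();
c.Name = "Test customer";
c = c.Save();   // CSLA 2.0's Save() returns the updated object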

I understand the points you make, but in the real world you have to do what makes sense for the "unit" you are testing.  So if that means you go to the database or to AD, then that's what you should do.

Otherwise, your unit tests have no value as they don't test how your application will work when it's deployed!

 

ajj3085 replied on Wednesday, May 31, 2006

I just want to say I agree with David.  You want the tests to run in a reasonable amount of time.  You don't want to find out tomorrow that something you coded today failed.  You want to find out at most in a few hours.

I also agree that part of the BO's behavior is persisting itself.  You can use mocks, but that doesn't prove 100% that your BO communicates with your data layer properly; it just proves it communicates with the mock properly.

Andy

MelGrubb replied on Wednesday, May 31, 2006

I also disagree with the "fast" comment.  In my opinion you should not even attempt to decouple your business objects from the database for purposes of unit testing.  My number one rule for unit tests is that they ought to be testing what the method they are exercising actually does.  If a method saves to the database, then by God the unit test for that method should be checking that it did write to the database; otherwise what's the point?  If your unit tests have not verified the behavior you are expecting at runtime then they are worthless.

For those that insist that their unit tests should be super fast because they want their answers now, my only suggestion is that you just run the tests for the parts you changed.  Certainly NUnit allows you to do this.  Maybe I don't have time for the full 200 tests in my suite when I only made changes to one section of the code... so just run those tests.  However: do not trust this "quick" test to verify your entire API and push the thing to production.  You must run the entire suite before giving the API the stamp of approval.

And really, come on, you can't spare one minute out of your day to save hours and hours of painstaking debugging?  I've never had a unit test suite that was so intolerably long that I couldn't suffer through it.  Go get a drink or something and come back, it's not that bad.  My unit tests typically consist of at the very least a "CRUD" test, and sometimes a "CRUD x 100" test which just repeats the first test 100 times.  This is for really punishing the system or trying to get more accurate numbers for comparing timing.  The "x 100" tests are marked as "Explicit" in NUnit, though.  I don't want everything running 100 times during my normal test cycle.  These are specific tests for specific times, and they stay neatly out of the way until told "Explicit"ly to run.
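(For reference, this is roughly how the ordinary and "x 100" tests stay apart in NUnit; the test names and category are invented:)

using NUnit.Framework;

[TestFixture]
public class OrderTests
{
    [Test]
    public void Crud()
    {
        // the basic create/read/update/delete round trip
    }

    [Test, Explicit]        // skipped unless run explicitly
    [Category("Stress")]    // illustrative category for filtering runs
    public void CrudTimes100()
    {
        for (int i = 0; i < 100; i++)
            Crud();
    }
}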

andreakn replied on Wednesday, May 31, 2006

Well, though I must say that I understand the whole argument that you need to test that each object can persist itself correctly, it is downright overkill to force EACH of your unit tests to round-trip the DB in order to test some part of your logic.

There's a fair amount of literature out there supporting that one of the very key features of unit tests is that they are independent and fast. If all your tests round-trip the same DB, then you cannot easily guarantee either, and certainly not both, as the only way to guarantee independent tests is to reset the DB between tests. I won't try to defend this standpoint here (call me lazy if you will) as I'm not trying to convert anyone to any specific point of view.

Let me ask this then: *given* that I have the need to have an indirection between the objects and the DB for testing purposes, what would be an appropriate way of going about that business?

Is it even possible to do this within CSLA without breaking everything apart? If anyone has any insight on this I would love to hear it.

Yours
Andreas

ajj3085 replied on Wednesday, May 31, 2006

I fail to see how a roundtrip to the db is overkill; it's necessary for the BO to carry out its behavior. 

You can easily guarantee independence; you reset the DB just as you suggest, and this can be as simple as calling a cleanup procedure in your teardown.  The 'fast' part is relative.  You can make tests faster with no programming; just upgrade the network connection, processor, memory, hard drive, etc. 
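A minimal sketch of that teardown idea; the connection string and the cleanup stored procedure are hypothetical:

using System.Data;
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class OrderTests
{
    private const string TestDb =
        "Data Source=.;Initial Catalog=TestDb;Integrated Security=SSPI";

    [TearDown]
    public void ResetDatabase()
    {
        // Put the test database back into a known state after every test.
        using (SqlConnection cn = new SqlConnection(TestDb))
        {
            cn.Open();
            using (SqlCommand cm = cn.CreateCommand())
            {
                cm.CommandType = CommandType.StoredProcedure;
                cm.CommandText = "dbo.ResetTestData"; // hypothetical cleanup proc
                cm.ExecuteNonQuery();
            }
        }
    }

    // ... tests that hit the DB go here ...
}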

I'm sure you could use mocks if you absolutely wanted to not hit a database. 
http://msdn.microsoft.com/msdnmag/issues/04/10/NMock/. 

Andy

andreakn replied on Wednesday, May 31, 2006

well, assume that my PM / architect / team lead / *insert power that be* / QA guy comes to me and says: "this CSLA you're talking about sure is great, now if it only had an easy modification that would allow us to do our unit testing without hitting the DB all the time"

what am I to respond? (also assume that not hitting the DB during the majority of unit tests is non-negotiable, because it is in my neck of the woods)

I know about NMock, I like NMock, I just don't see how I can easily modify either my usage of CSLA or CSLA itself to accommodate using mocks for testing.

I'm getting more and more into the CSLA way of thinking and I'd really like to "sell" the framework to our company, but unless we can decouple the DB for tests it's disqualified :(

jokiz replied on Wednesday, May 31, 2006

may i know how you persist your BOs, andreakn?  are you using stored procedures hardcoded in the BOs' dataportal methods?

DennisWelu replied on Wednesday, May 31, 2006

We recently began a CSLA-based project which has us unit testing in a serious way for the first time. Here's where we started, not necessarily saying this is good...

We did mock the database using NMock. I can share more details on how that works if that's of interest. We were trying to unit test the public factory methods, which call the static methods of the DataPortal. The DataPortal is difficult to mock because of those static methods. So instead we mocked the ADO.NET interfaces used in the eventual call to our DataPortal_ABC methods. When the factory methods are done executing you can assert that data was fetched into the object as expected (for a _Fetch), etc.

But here's the problem part...

1) The theory was that mocking would keep the tests fast and allow us to focus on what the DataPortal_ABC code did with the results of the data access. In the case of a Delete, there really is nothing to check on the business object afterwards, so why bother testing with mocks? Unless you just want to make sure an exception is not happening. Same problem with Insert and Update. The only thing that really gets changed there is the internal timestamp. So it seems that perhaps we went down the mockery road unnecessarily in this case. It certainly didn't test that the data was persisted to the actual database, which some folks on this thread have pointed out as valuable!

2) For this particular project we chose to implement stored procedures in the database instead of building SQL statements. That's great, but it spreads the data access logic across the DataPortal_ABC method and the SP. As we mocked the ADO objects we realized that while it allowed us to isolate the tests towards the code in the DP_ABC methods, it really didn't test the SP's. I could see someone advocating unit tests on the SP's specifically, but I'm thinking it's more manageable if you just consider them part of the data access logic in the DP_ABC methods instead.

That's a long-winded way of saying that I'm having a change of heart about mocking those ADO objects. I'm now thinking it may be better to go ahead and test those DP_ABC methods against the real thing. As MelGrubb implies, you can organize your tests into categories: "all tests", the "fast tests" (hopefully the majority of your tests), and the "slower tests". That will help if you are following the conventional wisdom of regressing your test suite frequently - just make sure to run them all before you check in.

I'd be very interested in hearing about the experience of others unit testing their CSLA business objects...

Dennis

hurcane replied on Wednesday, May 31, 2006

There are "unit" tests and there are "integration" tests. I think a lot of us are running integration tests, but calling the unit tests. From an academic standpoint, you want your unit tests to test a single unit of code, and not run any code in any other modules. That's what mock objects are for.

In theory, I want to write a test for the Get factory method. The code consists of:
Public Shared Function GetProject(ByVal id As Guid) As Project
    If Not CanGetObject() Then
        Throw New Exception ...
    End If
    Return DataPortal.Fetch(Of Project)(New Criteria(id))
End Function

Testing the access exception is easy as it would never go to the database. But suppose you want to write a unit test that confirms that the Shared method is doing what it is designed to do. You would mock the DataPortal object. The mock would be designed to expect a call to Fetch and the mock object would return a Project object (also a mock, but with no expectations). Part of the unit test asserts that Expectations of the mock were met. If somebody changed the code and accidentally commented out the DataPortal.Fetch line, this test would fail.

This test would be very fast. In my opinion, it's not a very useful test. However, suppose you want to unit test the DataPortal_Fetch method for the Project object (Pg. 426 in the VB book), which is what normally hits the database. There are multiple possible tests for the DataPortal_Fetch method. Let's design one that confirms all the appropriate fields are being pulled from the database.

Typical code in the DataPortal_Fetch method uses SQLConnection, SqlCommand, SafeDataReader, and mResources. All of these objects need to be mocked.

The connection mock has to include the Open method, but it doesn't need any expectations for this test.

The command mock needs the CommandType and CommandText properties, but it also needs the ExecuteReader function, which has to return the mock data reader. For this test, there are no expectations.

The data reader has to include Read, GetGuid, GetString, GetSmartDate, GetBytes, and NextResult. None of these methods have to actually return any values. For this test, there are lots of expectations to set up on the mock data reader. It is expected to get two GetString calls, one with a parameter of Name and the other with a parameter of Description. It is expected to get two GetSmartDate calls (Started and Ended). It is expected to get one GetGuid call (Id). And finally, it is expected to get a GetBytes call (LastChanged).

Like the first test, it passes if all the expectations on the mock data reader are met.

Here's another good test I just thought of: the Started date is assigned to the started field. This test would use the same mock objects as the previous test, but the data reader would only have the expectation that GetSmartDate is called once with the Started parameter. This expectation would return some value that you define in the expectation. The unit test then asserts that the Started date of the Project matches the value you put in the expectation.

When I started writing tests a couple months ago, I initially went this route. I found it very tedious to write all these expectations. The mock objects were somewhat tedious at first, but they are highly reusable. Designing the expectations is where I spent most of my time. As a result, I use integration tests. I have a suite of about 650 tests that I run several times a day. They take about 25 seconds with a local data portal and a remote database.

There are downsides to using integration tests. One bug can break multiple tests. That's why it is important to always make small changes and test between each change. If a previously running test breaks, then it has to be in the code you just changed. Another downside is that it is slower. When the test suite becomes too lengthy to run in 5 minutes or less, you have to partition the tests into functional areas that should have no crossover. You run the suite of tests appropriate to the code you are working on. The nightly/daily build should run the full suite of tests.

hurcane replied on Wednesday, May 31, 2006

Reading some other replies made me think of another point. Even if you use mocks, you still should be testing the actual database access at some point (the integration tests) before you release the product.

andreakn replied on Thursday, June 01, 2006

jokiz: I have not used CSLA for any project yet, but we normally put DataAccess code in a separate project that is accessed through interfaces that can be mocked.

hurcane: naturally you need to test against the actual database, but this is not unit testing, this is integration testing, which is a separate thing.

consider this: if I lose the network connection to the DB server for some reason and all my tests run against the DB, then all my tests will fail. furthermore, I would have to wait until my ISP fixes the problem before I can develop any code. it may be a far-fetched example, and this particular risk might be easily mitigated, but it's one of many examples of why your unit tests should be independent (both independent from each other and from the outside world)

I didn't think that mocking the ADO.NET framework would be a good idea. I think I have read that you should basically only mock your own code, and even then preferably only mock well defined interfaces.

jokiz replied on Thursday, June 01, 2006

i agree that the persistence of the BOs should also be tested, but it should be part of the integration tests and not part of the unit tests that a developer runs a number of times within the day.  of course it should be part of the tests run during daily builds.  so i really prefer to have these integration tests in a separate project, since i normally do a right-click, run-tests on the project with testdriven.net.  it really pays if the unit tests run as fast as possible.

i also read that article that mocking ado.net framework would not be a good idea.

MelGrubb replied on Thursday, June 01, 2006

On the subject of "fast".  Automated unit tests are described as "fast" because they are automated.  Think back to your life before NUnit... how did you test things?  Did you test things?  Your tests probably consisted of test scripts or custom-built harnesses to exercise the different functions manually one at a time.  Now we have NUnit, and you can test your whole suite, and people want to say it's not fast enough because it's still hitting the database and taking entire minutes out of their day.  Wah!  I say that a 10 minute long unit test suite is still considered "fast" compared to how we used to do this stuff.  Having said that, if your test suite is running for 10 minutes, there'd better be a whole lotta objects you're testing.  I've never had a suite that took more than 3 minutes, and that was for a fairly dense API.

I would also take exception to the broad categorization of anything that interacts with the database as "integration testing".  In one way yes, we are testing the integration of our business objects with the database, okay you can argue that point.  BUT, if the specific method I am testing is a "Get" function, then I'm going to have to call that a unit test.  In this case the "unit" is the get function, and in order to test it I'm going to have to get something... get it?

When I want to test my web UI talking to the business objects talking to the database, then yes that is integration testing.  I'm checking the whole package instead of a specific part.

If I have broken my data access layer out from my business objects then yes, I could mock the DAL and it would no longer need to touch the database.  Rocky's patterns (as well as my own) dictate that the business object itself is its own DAL, so there is no integration here apart from the integration with the database itself, and that's a whole philosophical debate of its own.

On a different note, my unit tests can arguably be called integration tests because of the way they rely on each other.  Due to referential integrity rules, I can't save an Order object if it doesn't have a Customer to belong to, so my Order tests create dummy Customers in the course of their own work.  They do this by calling methods on the Customer test class, so I'm not duplicating logic.  So strictly speaking, all my unit tests except for the bottom layer lookup-table objects are actually integration tests because they rely on other objects and methods.  It works, and it helps me find the problems at the points where my objects interact.  So call it an "NInt" test if you want.  Whatever, sue me.

hurcane replied on Thursday, June 01, 2006

A follow up comment on the statement cautioning against mocking the ADO.NET framework. Of course, it is a waste of time to mock the entire framework. However, the idea of using mock objects is that you want to eliminate any interdependencies between objects during a unit test. If the code being tested makes use of ADO.NET objects, you must provide a mock for these objects to eliminate any interdependencies. You want to test the DataPortal_Fetch code, and only the DataPortal_Fetch code.

Mocking a SQLConnection object doesn't require providing every possible property, method, function, and event. It only requires mocking the properties used by the code under test.

Mocks are pretend objects. They always provide consistently expected behavior, which has to be hand-coded to eliminate interdependencies. Proper use of mocks does really isolate the unit test and make it independent. In theory, they are great.

What happens, though, when a public interface changes? If you are using mocks, you have to update the mocks to behave using the new interface. Of course, you are already having to change any other clients that consumed that public interface, so it could be argued that the additional overhead of updating the mock objects is trivial. That's a debate that can only be answered in the context of your company.

Like all tools and techniques, unit testing has pros and cons. There is no perfect implementation. We must always balance the pros and cons and make a subjective choice of which technique is better for our environment.

ajj3085 replied on Thursday, June 01, 2006

Yes, you shouldn't be mocking the entire ADO.NET framework... sorry if I made it sound that way.

I am curious though, if you are trying to mock away dependencies: where do you stop?  Do you only mock objects which communicate with an outside service?  Only those dependencies that are 'slow'? 

I'm not trying to be silly or anything, I really am curious.  People often say you should mock access to a database, or Active Directory.  Your unit tests shouldn't test integration with dependencies.  But if you're using a string, aren't you relying on a dependency?  What about the crypto in System.Security.Cryptography?  What about the objects in System.Collections?

The argument is that you want to only pick up bugs in your library, not in the dependency.  But whenever I hear others claiming you need to mock things so that you're only testing 'your' code, I only ever hear about the database and Active Directory; I never hear anyone claiming you should mock BindingList or HashTable.

So what determines what you should mock and what you should just 'trust' that works?

Andy

DavidDilworth replied on Thursday, June 01, 2006

Hey, this is an important question.  Because you have to assume a level of trust somewhere.

So if we trust that the .NET framework does what is it is supposed to do...
and we trust that ADO.NET does what it is supposed to do...
and we trust that the CSLA.NET framework does what it is supposed to do...
then shouldn't we just accept them (without question) and use them?

If so, then what is the point of creating mock objects to replace/replicate these items?

We all agree (I hope), that the unit test code we write should only test the business code we write, not all the dependencies it relies on.

We have to assume that all the dependencies below a certain point have been tested 100% accurately already by somebody else.

Who's to say that in writing a mock object you don't actually introduce a bug/side effect that isn't in the original underlying item you are mocking?

So should we write unit tests to test our mock objects as well?  Do you see where I'm going with this? 

Where does that trust boundary start?

hurcane replied on Thursday, June 01, 2006

These are good points about trust.

In general, I trust the frameworks. I trust that the framework is going to invoke AddBusinessRules in the appropriate place for instance. Rocky does not have that level of trust, so he has unit tests for the framework.

For ADO.NET, you can trust ADO.NET, but I'm not sure I trust the database. If the database is on a secure box, I am the only one who has access, and I rarely change the database schema, then I might trust the database. In my development environment, I have to frequently synchronize my database with changes submitted by other developers. If I trust the database, I could end up on a wild goose chase through my own code, only to find that the problem is with the database change.

After all my unit tests pass and my work is complete, I can run the integration tests that use the database. If those tests fail now, I know the problem must be with the database, or my expectations of the database (the mocks) are flawed. In either case, I'm able to narrow my search for the bug because fewer variables have changed.

I'm not using mocks myself, but I'm starting to make myself rethink whether I should. Hmm.

Jav replied on Thursday, June 01, 2006

Hi David,

Thanks for activating this priceless discussion, and I appreciate everyone's comments.  Most of the discussion appears to be about NUnit, which I have used before.  The Test System part of the Team suite allows one to create unit tests on an entire project.  When I tried that a few weeks ago, I got test "shells" for every instance variable and every property for every object in the project.  At the time I was overwhelmed enough just getting my projects converted to CSLA 2.0, so I put the unit testing aside for the time being.  Now that things are more stable, I intend to get into it.

My questions are:
1. Is anyone using the VS Test system?
2. If so, is it worth doing the automatic creation of thousands of tests and then filling in each one with the required info to make the test workable - or is it too much to wade through?
3. Any other words of wisdom?
TIA

Jav

survic replied on Tuesday, June 06, 2006

(a) I have not used the VS Test System. However, I have followed its news for the past two years.

My feeling is that it sounds like you are afraid of "wasting" it, since it can generate the code for you... My take is that we have used cut/paste (especially via quick-code/snippets) for so long that generated code is nothing special. Ignore the volume.

(b) To mock or not to mock: we really need to be careful here and put things in perspective. TDD is THE agile and THE lightweight methodology - that is, before .NET got into the picture, i.e., when we were talking only in a C++/Java context. Now .NET brings in the VB tradition, and VB has been in a RAD context, which is much lighter.

Before you dismiss my point lightly, note that I have used Java and I love Java. I have also used TDD and I love TDD. Of course, I have also used classic VB, VB.NET and C#.

My point: if you really want the lightweight end of TDD, then do not mock. Mocking is for developing frameworks, not for developing everyday applications.

A healthy way to resist the temptation to use mocks in everyday application development is to participate in some open source framework development. That way you learn how to mock, and the right places for it, without messing up your everyday application development.

Note that once you mock, your unit testing code is totally different from your real code; as a result, your unit testing code loses a key function: documentation.

Here is my blog: http://survic.blogspot.com/2006/04/8-ottec-123-enterprise-computing.html

andreakn replied on Friday, June 09, 2006

in answer to David's post with the timer info in it: YES, 3 seconds is a very long time for 4-8 tests to run in. If you want unit tests for most of your application, and your application is some 50k lines of code, you should have at least a couple of hundred unit tests; if each test takes half a second then I can't run all my unit tests in one go (as they take 4 minutes or more to run), so I have to partition them and do some guesswork as to which suite is the most likely to have errors introduced in it. I would want ALL my tests to run within 5-10 seconds; that way my developers have no excuse for not testing the entire suite at least a couple of times per ten minutes.

The reasons why mocking DB access is a good idea:

1) it's faster; if I do it consistently I might just reach my goal stated above

2) I trust the DB; I don't need my tests to execute ADO.NET code every time (which de facto would mean that I was testing the ADO framework in addition to my own code)

However, mocking something as low level as ADO is generally a waste of time. What I typically would like to mock is a layer that has methods like InvoiceCollection GetInvoices(Customer c)

That way the mock could expect a call to that method and return real Invoice objects, but with fake data in them (not coming from the DB, just made up on the spot).

This would make tests involving the InvoiceCollection a breeze to code; all the objects would think that they were in fact getting "real" data from the DB, and it would be super fast.
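A sketch of what that seam and such a test might look like, using NMock2-style expectations to match the syntax that appears later in this thread.  The interface, the BO types, and the way the empty collection gets built are all hypothetical:

using NMock2;
using NUnit.Framework;

// Hypothetical data-access seam between the BOs and the DB.
public interface IInvoiceData
{
    InvoiceCollection GetInvoices(Customer c);
}

[TestFixture]
public class InvoiceAggregationTests
{
    [Test]
    public void InvoicesComeFromTheMockNotTheDb()
    {
        Mockery mocks = new Mockery();
        IInvoiceData data = (IInvoiceData)mocks.NewMock(typeof(IInvoiceData));

        // Fake invoices made up on the spot; assumes a test-visible way
        // to build the collection without a DB round trip.
        InvoiceCollection fake = InvoiceCollection.NewEmptyCollection();

        Expect.Once.On(data).Method("GetInvoices").With(Is.Anything)
            .Will(Return.Value(fake));

        // In a real test the aggregation code under test would consume
        // 'data'; calling it directly here just shows the expectation firing.
        Assert.AreSame(fake, data.GetInvoices(null));
        mocks.VerifyAllExpectationsHaveBeenMet();
    }
}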

I still think that I would need some tests that went against the DB, but they would be few, possibly even in a separate suite (if there were too many and they were slowing down the execution of my functional tests).

Now, the problem I see in CSLA is that it is always assumed that an object should create itself in tight coupling with the DB. You might just be able to put a mock in between that coupling, but that would mean mocking low-level stuff, and that's a bad thing (even if you only do parts of it, it's still a lot of work).

It doesn't seem right that if I want to test the method that aggregates up the sums and taxes from a collection of several invoices I first need to put them into the DB before I can test it. I know what I want to test, if I was outside the CSLA environment I would just instantiate a lot of objects and be done....

The only solution as I see it is to have a separate Criteria object that contains all the data you need for your object; when the static factory method receives a call to get, it should just new up an object and load the data into it. That would work for simple tests, but for more complex tests, in which the objects you faked up get other objects from CSLA, you would be screwed again (metaphorically).

Apparently there is no easy solution to this. Does anyone have any other ideas to get around it?

Yours, Andreas

DavidDilworth replied on Saturday, June 10, 2006

Andreas,

I'm sorry but I have to disagree with you on this.  As I said in one of my previous posts, it all depends on your definition of a "unit".  I think your definition of a "unit" is unclear.

Using your definitions you would like each developer to run the entire test suite once every five minutes (you said "a couple of times per ten minutes"). I'm sorry, but not even the most productive developer on the planet is able to write/change enough source code in five minutes to require the entire test suite to be re-run.  That's not UNIT testing, that's application/framework/system testing - call it what you like, but it is not UNIT testing.

At the lowest level a "unit" could be a single method (or possibly a class).  Consequently, there will be a set of tests (let's call them unit tests) that completely test the functionality of that method (or class).  So when a developer makes a change to the code in that method (or class) they should run the "unit tests" that ensure that they have not broken that method (or class).

It is a waste of time to run the entire test suite at this point - nothing else has changed other than the one method (or class) just modified.

And to some extent I agree with you.  At this level, running the unit tests should be as fast as possible for the developer.  Change the code, run the tests, check it back in.  Repeat.

Running the entire test suite is a different task.  This typically involves a "full build" of the application.  Running all the unit tests at this point guarantees that everything still works when it's all put together.  It does not matter that this takes more time than running an individual "unit test", it's the 100% guarantee of accuracy that you want here.

This is commonly achieved by using a build server and an automated build process.  This completely automates the process of getting the code from the source code repository, compiling it and then running all the unit tests.  It should be 100% repeatable and able to run unattended (e.g. as a nightly build).

We use a tool called CruiseControl.NET to achieve this in our environment.  We have a daily build process that gets code from SourceSafe, compiles it, runs the NUnit tests, runs the FxCop tool and builds our documentation using NDoc.  It runs automatically and currently takes a couple of minutes.

-----

The reasons why mocking DB access is a BAD idea:

1) It means you have to write code to mock something you trust (you said "I trust the DB").  I also trust the DB.  I do not trust developers to build something that replicates this 100% accurately (and I include myself in that statement).

2) You said "I trust the DB".  I also trust the DB and the ADO.NET framework.  However, I do not trust developers to write 100% correct SQL code in stored procedures (and I include myself in that statement).  Therefore, I want to be sure that the code uses the correct stored procedures, so that the data is accessed using the exact mechanism that will be used in production.

-----

Just for information, our current project has 275 tests and takes approximately 25 seconds to run.  How does this compare to yours?

andreakn replied on Sunday, June 11, 2006

Hello again David

I must say you do have valid points and it's obvious you know what you talk about, but I must say I still disagree.

If I write 5 lines of code, compile, run ALL my tests (in less than 5 seconds) see that they're all green, then I KNOW that I haven't introduced a bug.

If I do the same but only test those tests that I assume will be affected by my changes, and they all turn up green, then I ASSUME that I haven't introduced any new errors.

I typically would not check in a mere 5 lines of code (as I'm probably in the middle of something anyway) so if the whole suite is only run at checkin, then I won't know until lunchtime if my assumptions are correct.

say that I check in, go to lunch and come back and am greeted with an email from the automatic build/test process (yes we also use CC.net) saying that some tests failed.

I now most likely will have to spend a good 5 minutes tracking down which of my changes introduced the error. Furthermore I'm feeling stupid that code I know I've tested actually failed.

I'm sure you know of TDD, but let me just quote a snippet from a website that gives instructions on it. This is from: http://www.agiledata.org/essays/tdd.html
---
A significant advantage of TDD is that it enables you to take small steps when writing software. This is a practice that I have promoted for years because it is far more productive than attempting to code in large steps. For example, assume you add some new functional code, compile, and test it. Chances are pretty good that your tests will be broken by defects that exist in the new code. It is much easier to find, and then fix, those defects if you've written two new lines of code than two thousand. The implication is that the faster your compiler and regression test suite, the more attractive it is to proceed in smaller and smaller steps. I generally prefer to add a few new lines of functional code, typically less than ten, before I recompile and rerun my tests.
---

So if I am to test everything always (which in MY opinion is not a waste of time, but actually saves you time in the end), having unit tests run fast is important.

I agree with you: mocking the DB is not a good idea. but mocking the RESULTS that the DB gives is an excellent idea, as that makes the tests go that much faster. Also: most of my tests are NOT on basic CRUD things, as there are so many other interesting things to test in the business domain. Ideally I would like to not have to write CRUD at all and focus 100% on the domain problem at hand (to me CRUD is just something you need to do to get the problem solved... no customer I've met has ever said "we need a program that can load into memory and persist to DB invoice objects")

I hope this clarifies why speed is important to me, and I also hope that you or anyone else who is bound to have much more experience than me with CSLA can give some practical advice on HOW to achieve a superfast test-suite, and not just debate over WHY you would need it.

for the record, we have had projects with 1100 unit tests running in less than 10 seconds (no DB-interaction in the tests mind you, they were in a different suite)

also, just on the end here I'd like to say that in my opinion if you run 10000 unit tests that basically test each part of the system, you're still doing UNIT testing, as you test each thing in isolation. Integration / application testing assumes that each test goes through more than just a unit

DavidDilworth replied on Monday, June 12, 2006

Andreas,

Your points are valid and I can see from the size of the projects you're working with that you know what you're talking about.  But here's the thing.

andreakn:
If I write 5 lines of code, compile, run ALL my tests (in less than 5 seconds) see that they're all green, then I KNOW that I haven't introduced a bug.

If I write 5 lines of code, compile, run the 10 tests that completely test the behaviour of my unit in less than 5 seconds, see they're all green, then I KNOW that I haven't introduced a bug.

The difference is that I've only run 10 tests, not the entire suite, but I still KNOW that the unit I'm working on is 100% accurate.  Those 10 tests give me complete 100% coverage of the functionality of my unit.  Any code that uses my unit will still work exactly as required.  I don't need to test the rest of the code to prove that, I already KNOW - it's not an assumption.

My point is that you shouldn't need to run all the tests to KNOW that the unit works 100% correctly.  You only need to run the exact number of tests required.

If you need to run any more than that, then the unit tests for that unit are not complete.  And that is where the problem is.

So I agree with the sentiment of the article you quote and I would agree that on the whole the way we work is to write small "units" of software with the associated "unit" test cases.

And I'm sure you'd agree that TDD (as a concept), along with the associated testing tools like NUnit and VS 2003/2005, provides a much easier way to do this level of testing compared to the way we did it 5-10 years ago.

jokiz replied on Tuesday, June 13, 2006

hi david and andreas,

thanks for keeping this thread alive. 

just like andreas, i also prefer running the whole unit test project to be really sure that i don't break anything.  i don't have to browse for the corresponding test fixtures for the modified classes and run them.  i prefer running the whole unit test project since i usually have changes that span a number of source files and it would be tedious to select just the ones that are supposed to be affected by the changes.

i'm just new to tdd and unit testing and would love to know how to really work around this tight coupling of csla with the datasource in order to make it really testable with bearable speed.

currently, i'm working around it with another creation method on the collection classes that creates an empty one for the unit tests.  the unit tests then add items to it to mimic the Fetch creation method.  i thought i could include some kind of switch inside the dataportal_xx methods so one would not hit the DB, but i don't know how to do that either.
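A rough sketch of that kind of extra creation method, assuming CSLA 2.0 generics.  The names are illustrative, and the internal factory could be exposed to the test project via InternalsVisibleTo (or made Public):

using System;

[Serializable]
public class InvoiceList : Csla.BusinessListBase<InvoiceList, Invoice>
{
    // normal path: loads from the database via the data portal
    // (DataPortal_Fetch omitted for brevity)
    public static InvoiceList GetInvoiceList()
    {
        return Csla.DataPortal.Fetch<InvoiceList>(new Criteria());
    }

    // test path: no DB; unit tests add items to mimic what a Fetch would load
    internal static InvoiceList NewEmptyInvoiceList()
    {
        return new InvoiceList();
    }

    private InvoiceList() { }

    [Serializable]
    private class Criteria { }
}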

DavidDilworth replied on Tuesday, June 13, 2006

jokiz

I think that one of the real benefits from this CSLA forum is that you can get some great discussions on topics that are not directly CSLA related - although obviously everyone shares a common interest in the CSLA framework.  This thread falls into that category.

If you're making multiple changes across several source files, then running the entire test suite is probably the easiest way to make sure you've not broken something - agreed.  That's because you've changed multiple units - so you need to run multiple unit tests.

With regard to the testing of CSLA objects, have you seen the latest blog entry about Mock Objects from Fredrik Normen?  He presents an interesting way to create objects, passing in mock objects via a special constructor method.  Have a look for some possible ideas.

UmpSens replied on Monday, August 21, 2006

Hey there,

I've been reading this whole thread. Obviously there are two camps on this discussion. The basic fact is that you can't test a single business object in CSLA without running code from another class. This is called object coupling. If you want to have a maintainable system, you need to go for object cohesion, not coupling. (for a  good explanation of the difference, see http://www.toa.com/pub/oobasics/oobasics.htm#ococ).

There are of course alternatives to this. The thing I came up with is the following. I create a new interface called IDataPortal. This interface will declare all the public methods of the Csla.DataPortal (I'm not sure about the event handlers though, but we'll come to that later).

public interface IDataPortal
{
    object Create(object criteria);
    T Create<T>();
    T Create<T>(object criteria);
    ...
}

I then create a new class that implements this interface and maps the calls one to one onto the Csla.DataPortal:

public class MyDataPortal : IDataPortal
{
    public object Create(object criteria)
    {
        return DataPortal.Create(criteria);
    }
    ...
}

On my business object, the code changes slightly:

public class MyBusinessObject : BusinessBase<MyBusinessObject>
{
    private static IDataPortal _dataPortal;

    public static IDataPortal LocalDataPortal
    {
        get
        {
            // instantiate the default data portal on first use
            if (null == _dataPortal) _dataPortal = new MyDataPortal();
            return _dataPortal;
        }
        set { _dataPortal = value; }
    }
    ...
}

 

When I create a new business object, I then call:

LocalDataPortal.Create<Sheet>();

In your test you would write:

MyBusinessObject.LocalDataPortal = MockDataPortal;
Expect.Once.On(MockDataPortal).Method("Create").Will(Return.Value(whateverYouWantItToReturn));
MyBusinessObject myNewObject = MyBusinessObject.NewObject();

I'm not really happy with the fact that the members of MyBusinessObject have to be static, as this means that every test that doesn't initialize the LocalDataPortal property will end up with the mock object of the previous test, or even the default that hits the database.  But hey, this makes it testable.  You can even have tests hit the database by setting the property to null.  If some members need to call Csla.ApplicationContext.User, this is OK, because the Csla.ApplicationContext.User property's type is an IPrincipal, which can also be mocked.

I'm not sure how deep the implications are, as I don't know the Csla framework that well yet. But this gives me the reassurance that a solution can be found.

ajj3085 replied on Monday, August 21, 2006

UmpSens:
Obviously there are two camps on this discussion. The basic fact is that you can't test a single business object in CSLA without running code from another class.


Hmm, I don't think that properly describes the camps.  I think it's more of a "mocking costs more time than the benefits realized" camp.  After all, the Csla code has its own unit tests and, assuming they pass, why take the time to mock its behavior away?  What did you prove if your BO interacts correctly with the mock?  Well, nothing really... you gain no knowledge that the BO will operate correctly with the actual production code you want to release.

Then on the other side you have mock advocates.

UmpSens replied on Tuesday, August 22, 2006

I'm not trying to prove my object interacts correctly with the Csla framework. I'm trying to bypass the framework. When I test a business object that uses the Csla framework, I assume two things:

  • The Csla framework does what it's supposed to do without bugs (some might consider this bold, however if you want to test the Csla framework, this is not the right place to do it)
  • My business object is buggy until all its tests pass

When I instantiate a business object with the default Csla implementation guidelines, I actually run at least three methods in my business object:

  • the static factory method to create the object (calling the Csla.DataPortal.Create)
  • the business object's constructor, which will probably initialize the object and children objects
  • The static DataPortal_Create method in my business object

So I'm actually testing the implementation of three methods in one single call. If it runs well, all is ok. However, if one of them fails, I don't know which one did. What I want to do is test each method separately.

So to test a single method of my business object, I am writing my contract first, setting up the expected calls it will do to the framework or any external code. That way, I know the calls were made, and the method did what was expected. I can apply that to all the methods of my class individually.

DavidDilworth replied on Tuesday, August 22, 2006

I think a potential problem with the approach you are taking is that you are changing the behaviour of your BO (by re-implementing the Data Portal a different way) to make it testable.  That doesn't seem quite right to me.

I understand what you're trying to do and why.  And I understand the argument from the pro-Mock Objects camp as to why you might want to do this as well. 

But altering your design just to make something testable, is that really the way to go?  Is that really what TDD is promoting?

Surely, you should stick with the design and find a way to test your BO that does not involve changing the design.  It might be harder to do it this way, but it's more correct.

What's to stop a developer using your LocalDataPortal property for something other than unit testing as part of your application?

 

DansDreams replied on Tuesday, August 22, 2006

I'd like to introduce a slightly different question if I may... the concept of unit testing a CSLA business layer has had me in a quandary for a couple of years.

Based on how David describes his tests, it seems to me there are two levels of unit testing the BL.

Level 1 - (what David describes) You are basically testing just the CRUD operations of the business object, with the goal of catching bugs caused by changes to the database schema or the data access code of the business object.

Level 2 - You are testing the business logic of each BO.  It's not just that I can retrieve a previously saved BO; this level is also testing that if I change PropertyX back and forth between these 10 values, IsValid changes appropriately.  That if I read invalid data from the database, the BO's IsValid is false after loading.  And so on and so on.

The quandary for me is that it seems like to achieve Level 2 you'd have to spend a ridiculous amount of time writing and maintaining all the tests.  I've pretty much dismissed it as too burdensome.

So the question becomes whether or not Level 1 testing is really worthwhile.  How often do you make and catch the type of mistake that it would detect?

DavidDilworth replied on Tuesday, August 22, 2006

Dan, I think you make a good point and one that I would argue comes back to something I said in an earlier post some time ago - it depends on your definition of a "unit".

So, IMO, I think the two levels you describe are actually part of what I choose to call the "unit tests" for a BO.  My 100% functional unit (i.e. my BO) has to behave exactly the way I expect and it has to persist itself.  But that's my definition of a "unit" in this scenario.  Other people may have a different opinion.

And I agree that the creation and maintenance of all the test code to perform all the required tests is a large task, but I'm sure we'd all like it as an ideal to aim for.  However, we (you and me both!) choose not to write the level 2 tests, because we want to spend our time writing the application and not writing test cases.  I'm not saying that we don't have any level 2 style tests at all (as we have some), but we don't have the ideal 100% coverage for each BO.

So that's why we developed an abstract test base class which allows us to test the CRUD part of the "unit test".  This didn't take very long and gives us basic coverage for any BO. 

Does it test the basic CRUD functionality of a BO?  Yes
Did it take long to setup?  No
Does it test the business rule functionality of a BO?  No.
Would that take a long time to setup?  Yes.

So, it's a compromise we're prepared to take.  Yes, we may find that some bugs "get through the cracks" because we didn't write all the tests for all the BOs.  But against that we're getting more of the application development completed.

It's a classic compromise trade-off between: test coverage vs. project delivery.

ajj3085 replied on Tuesday, August 22, 2006

Dan,

I would say not to skip out on level two, but instead decide what to test in level two, just like in level one.

Should you test that FirstName makes the object invalid if it does not contain a value?  Probably not.  That's a simple rule, just one line of code and using the Csla framework.  But I think you should test more complex rules.

It's a trade-off, yes: take more time up front to ensure quality.  However, the cost of finding a bug down the road is much, much higher than the cost of testing and finding the bug up front.  This is always true, and it's something many people forget.  It will cost you more (a lot more) to find the bug when it's running on your customers' computers than it will on your local desktop. 

The cost of creating the initial tests can be high; maintaining them is hopefully easier, as long as you stick to TDD.  That is, finding a bug requires you to create a test to prove the bug exists, then fix it.  Changing functionality requires you to modify the test to the new expected behavior, then code to 'fix' the BO.

Also remember that people tend not to write tests after the fact, which means your test library stays small or non-existent.  If there's already a test setup and cleanup routine, it becomes easier to continue writing tests than if there were none to begin with.

Andy

DansDreams replied on Tuesday, August 22, 2006

Andy and David,

Yes we all agree on the trade-off / compromise point of view.  I guess now what I'm wondering is about the bang-for-the-buck of whatever you have defined as a "unit".

I like the idea of being able to easily and quickly check basic CRUD operations to verify the database schema, views, stored procedures, BO data access code, DAL etc. are all still in alignment.

My question is beyond that what's been particularly useful or not useful.  Do you find that only 2% of the bugs are data access related?  Is it still worth it since the CRUD tests are so easy to write?  Do you find with perhaps a few additional business logic tests per BO to test to the complex validation you catch another 60% of bugs?  And so on.

Also, David, forgive me if it was covered earlier in the thread (I didn't read every word going all the way back), but could you give a little detail on what exactly you mean by having an abstract test base (or whatever the exact wording you used)?

DavidDilworth replied on Wednesday, August 23, 2006

I guess the bang-for-the-buck is what it's all about at the end of the day.  I think most of the "boilerplate" style code doesn't require test cases, because it can either be code generated or sensibly cut-and-pasted.  That stuff is easy to code and easy for someone else to pick up and understand.

It's the unusual and non-standard behaviours that warrant the effort of writing unit tests.  That's where you get the bang-for-the-buck payback.  That's where you need some test cases to provide the "guidance" to the next developer that comes along who needs to change/extend the "complex" behaviour.  You definitely want test cases in place then to make sure that the existing behaviour is not broken.

For what it's worth I think the "bug" we've fallen over most frequently is BOs not saving when you call the Save() method.  And every time this comes back to the property setter in the BO not making the object dirty.  A simple oversight in terms of the setter code, but it has the unexpected effect of not putting your data into the DB.

And there must be an easy way to create a test harness for this (using reflection?), so you don't have to code it up for every different BO. But we haven't gone down that route yet.
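For what it's worth, a rough sketch of what such a reflection harness might look like.  It assumes the BO arrives freshly fetched and clean (a brand-new BO is already dirty, so the assert would pass vacuously), and it only covers string properties; the helper name is invented:

using System.Reflection;
using NUnit.Framework;

public static class DirtyChecker
{
    // Set every writable public string property and assert the BO became
    // dirty; catches setters that forgot to call PropertyHasChanged.
    public static void AssertSettersMarkDirty(Csla.Core.BusinessBase bo)
    {
        foreach (PropertyInfo prop in bo.GetType().GetProperties())
        {
            if (!prop.CanWrite || prop.PropertyType != typeof(string))
                continue;

            prop.SetValue(bo, "changed " + prop.Name, null);
            Assert.IsTrue(bo.IsDirty,
                "Setter for " + prop.Name + " did not mark the object dirty");
        }
    }
}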

With regards to the abstract test base class, see the comments I made on post 1152 where I explained the principle of what was happening.  In essence though, a test class/framework can be just like any other class/framework - you can use inheritance, generics, whatever, to get the job done in the most practical way possible.

So, the principle behind the abstract test base class was a class that could be inherited from that provided the basic framework for doing the CRUD testing (it's analogous in some ways to the BusinessBase base class).  It has abstract methods that each derived class must implement to provide the BO-specific stuff - like the setting of properties.

So a new test class for a new BO means you inherit from the abstract base and implement the methods that require BO specific behaviour.  And hey presto - you have simple CRUD testing.
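In outline, such a base class might look like this; the member names are invented for illustration, and each concrete [TestFixture] inherits from it and fills in the abstract methods:

using NUnit.Framework;

public abstract class CrudTestBase<T> where T : Csla.BusinessBase<T>
{
    protected abstract T CreateNewObject();         // create and set required properties
    protected abstract T FetchObject(T saved);      // re-fetch using the saved object's key
    protected abstract void EditObject(T fetched);  // change something we can verify later
    protected abstract void VerifyEdit(T refetched);
    protected abstract void DeleteObject(T saved);  // delete and confirm the row is gone

    [Test]
    public void CrudRoundTrip()
    {
        T bo = CreateNewObject().Save();   // insert
        T fetched = FetchObject(bo);       // read
        EditObject(fetched);
        fetched = fetched.Save();          // update
        VerifyEdit(FetchObject(fetched));  // read back and check
        DeleteObject(fetched);             // delete
    }
}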

ajj3085 replied on Tuesday, August 22, 2006

UmpSens:
I'm not trying to prove my object interacts correctly with the Csla framework. I'm trying to bypass the framework. When I test a business object that uses the Csla framework, I assume two things:
  • The Csla framework does what it's supposed to do without bugs (some might consider this bold; however, if you want to test the Csla framework, this is not the right place to do it)
  • My business object is buggy until all its tests pass


Exactly.  Part of the behavior of the object is how it interacts with the data portal.  If you don't know it interacts correctly (because you've mocked away that functionality), you don't know for sure that all of your object's behavior is correct.  How can you say you have confidence that your BO is correct if the getting/modification of data isn't being tested?

UmpSens:
When I instantiate a business object with the default Csla implementation guidelines, I actually run at least three methods in my business object:

  • the static factory method to create the object (calling Csla.DataPortal.Create)
  • the business object's constructor, which will probably initialize the object and child objects
  • the DataPortal_Create method in my business object

So I'm actually testing the implementation of three methods in one single call. If it runs well, all is ok. However, if one of them fails, I don't know which one did. What I want to do is test each method separately.

So to test a single method of my business object, I am writing my contract first, setting up the expected calls it will make to the framework or any external code. That way, I know the calls were made, and the method did what was expected. I can apply that to all the methods of my class individually.

It seems to me it's easier to step through the code than to create mocks for the Csla functionality.  It also seems less error-prone, since any code may have bugs, including the mocks themselves.  I also test the expected behavior, but by thoroughly inspecting the object once it's returned.  Usually it's easy to see why I didn't get the expected results, but if not, I can simply trace through the code, which is at some point going to be the production code.  That is what I think is key: testing your actual production code.  Using mocks, you're not doing that.
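
To make that concrete, my tests tend to look something like this (a sketch; Person.GetPerson and the test values are stand-ins for whatever your BO exposes):

    using NUnit.Framework;

    [TestFixture]
    public class PersonFetchTests
    {
        private const int TestPersonId = 1;   // row seeded by the test setup

        [Test]
        public void FetchLoadsExpectedData()
        {
            // Goes through the real factory, the real data portal and the
            // real DB - no mocks - then inspects the state that came back.
            Person p = Person.GetPerson(TestPersonId);

            Assert.IsFalse(p.IsNew, "a fetched BO should not be new");
            Assert.IsFalse(p.IsDirty, "a fetched BO should not be dirty");
            Assert.AreEqual("Fred Bloggs", p.Name);   // the known test-data value
        }
    }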

Just my sixteen cents (after inflation).
Andy

DavidDilworth replied on Thursday, June 01, 2006

I am a PM, but I don't understand the reason why hitting the DB is so bad.  How long do you think it takes to run these unit tests?

And this has nothing to do with CSLA, BTW; this is a general discussion that relates to any software development project.  Unit testing is a generic concept.  Don't use CSLA as a misdirection here.  However you build your software, you will at some point in the development lifecycle have to test that data is actually going into the database!

I'll repeat one of my previous comments: it depends on your definition of a "unit".  As others have commented in this thread, if you want to call testing a BO persisting itself to the database an "integration test", then fine, call it that.

Personally, I think of BOs as a "unit".  Fine - that's just a different naming convention and we'll leave it there.  What is important is that you do test that behaviour somehow.

So a "unit" test may be checking that when you set the Person.Name property in your Person BO with a new value it does/doesn't trigger the business rule validation (maximum string length, for example).  Ok, that's fine. And you'll have a whole bunch of "unit" tests like that.
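
In code, one of those might look like this (a sketch; Person.NewPerson() is illustrative, and it assumes the Name max-length rule is the only rule in play):

    using NUnit.Framework;

    [TestFixture]
    public class PersonRuleTests
    {
        [Test]
        public void OverlongNameBreaksValidation()
        {
            Person p = Person.NewPerson();

            p.Name = new string('x', 51);   // one past a 50-char max-length rule
            Assert.IsFalse(p.IsValid, "an over-long Name should break a rule");

            p.Name = "Fred";
            Assert.IsTrue(p.IsValid, "a legal Name should clear the broken rule");
        }
    }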

But to me the "unit" I'm interested in is my Person BO, since this "unit" can be surfaced via any one of the different GUIs that I may want to expose my BO in (WinForms, WebForms, WebService).  Now I want to be 100% sure that the BO works properly regardless of the GUI.  That's my definition of the "unit" in this context.

And yes I do want to make sure that whoever wrote that BO has written all the low-level "unit" tests as well (that test property setters for example) and that it persists itself nicely into a database (could be MS SQL Server, Oracle or MySQL for example).

And all these tests need to be 100% accurate and 100% repeatable.  And I don't care how long they take to run, but they must be included in the automated build process.  And when one of them goes wrong, it does need to be investigated and corrected ASAP and the build re-done.

So the whole development lifecycle becomes much more automated and repeatable at the click of a button.

And this is before we even get to testing the different GUIs that might go on top as well :D

DavidDilworth replied on Thursday, June 01, 2006

In answer to my own question "How long do you think it takes to run these unit tests?" here's my answer.  I ran an example from our own test project within the IDE.

The test is written mainly within an abstract base class, so it can be used for any CSLA BO provided you override/implement the right methods.

My setup is:
OS: Windows XP, with MS SQL Server 2000.
Hardware: Dell Latitude D800 laptop with Intel 1.6 GHz CPU and 512 MB RAM.
Software:  VS2005, ReSharper, TestDriven.Net, NUnit.

The NUnit test I ran involves a CSLA BO with 3 properties and does the following 4 steps:

(1) CreateTest - Create a new BO, set all the properties, save it.

(2) ReadTest - Create a new BO, set all the properties (to the same values).  Get the first BO back from the DB and compare the two objects for equality by checking that all the properties are the same.

(3) UpdateTest - Get the first BO back again, change it, save it (using the same object reference to retain the object).  Get the same object from the DB into a second object reference.  Again, compare the two objects for equality as above.

(4) DeleteTest - Delete the original object from the DB.  Check that it is no longer in the DB.
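
As an aside, the "compare the two objects for equality" step in (2) and (3) can be generalised so it doesn't have to be hand-coded per BO.  A rough reflection-based sketch:

    using System.Reflection;
    using NUnit.Framework;

    public static class BoAssert
    {
        // Asserts that every simple public property holds the same value on
        // both objects.  Only value types and strings are compared, which
        // conveniently skips CSLA members like BrokenRulesCollection, and
        // the crude "Is" filter skips state flags (IsNew, IsDirty, ...)
        // that legitimately differ between a saved and a fetched object.
        public static void AllPropertiesEqual(object expected, object actual)
        {
            foreach (PropertyInfo p in expected.GetType().GetProperties())
            {
                if (!p.CanRead || p.GetIndexParameters().Length > 0)
                    continue;
                if (!p.PropertyType.IsValueType && p.PropertyType != typeof(string))
                    continue;
                if (p.Name.StartsWith("Is"))
                    continue;

                Assert.AreEqual(p.GetValue(expected, null),
                                p.GetValue(actual, null),
                                "Property " + p.Name + " differs");
            }
        }
    }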

I ran the whole test using both TestDriven.Net and the new NUnit support within ReSharper.  The results were very similar with the whole test taking approximately 3 seconds.  I know the code is not "optimal" and could be made to perform faster, but it works and gives 100% accuracy on the CRUD functionality (including the testing of any relevant CSLA properties like IsNew, IsDirty, IsSavable).

So is 3 seconds too long for that level of "unit" test?

Note: The first time you run the test it takes longer, but that's due to the setup overhead of the relevant testing harnesses within TestDriven.Net and ReSharper.  After that the times are pretty consistent.

ajj3085 replied on Thursday, June 01, 2006

You'd probably have to mock the ADO.Net objects (I assume you are using them directly); a big PITA, but that's really the only way to go.

I have a custom data layer, and if I wanted to mock it, I probably could.  The problem would be tests that expect to load an existing object.  How do you get the mock to give the BO the data it needs?  Seems like it would be almost as much (or more) work than just stuffing data in the DB.

Some people here have claimed we are calling integration testing unit testing.  I ask, where does unit testing end and integration testing begin?

I mean, if you're going to the trouble of eliminating calls to the data access code, then why wouldn't you mock the use of the DataPortal in Csla?  Shouldn't you also mock the PropertyChanged code which fires business rules?  You need to make sure that if you set X = "abc" your BO declares itself invalid, but there you're relying an awful lot on code that lives in CSLA.  So technically you should mock that as well.

The fact is that without CSLA and without a data layer, your business layer is pretty useless.  There are plenty of bugs which come up because you're NOT correctly communicating with the other layers upon which your BO relies.  I know I certainly had bugs because I forgot to call the PropertyHasChanged method inside a property setter.  Certainly the bug was not with Csla but with my project - but should I mock all of Csla?

Anyway, enough of my rant.  I suspect that to get Csla to avoid hitting the DB, you'd need to mock ADO.Net, or a DAL if you're running something between your BOs and ADO.Net.
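
For what it's worth, if you did want to go that route, the usual trick is to put an interface between the BO and the data code so a fake can be slotted in during tests.  A rough sketch, with invented names:

    // The BO's DataPortal_XYZ methods would talk to this instead of ADO.Net
    // directly; a config switch or factory decides which implementation to use.
    public interface IPersonDal
    {
        PersonData Fetch(int id);
        void Update(PersonData data);
    }

    // Plain DTO carrying the row between the DAL and the BO.
    public class PersonData
    {
        public int Id;
        public string Name;
    }

    // Test double: hands back canned data without touching the DB.
    public class FakePersonDal : IPersonDal
    {
        public PersonData Fetch(int id)
        {
            PersonData d = new PersonData();
            d.Id = id;
            d.Name = "Test person";
            return d;
        }

        public void Update(PersonData data)
        {
            // record the call or do nothing - no DB involved
        }
    }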

Andy

Copyright (c) Marimer LLC