dependency injection of mock database

Old forum URL: forums.lhotka.net/forums/t/1912.aspx


steveb posted on Wednesday, December 06, 2006

i am writing my first extensive csla application so bear with me :)

i usually write unit tests for my business objects with rhino mock using dependency injection for an Enterprise Library Database object. The tests look something like this:

[Test]
public void TestWithRhinoMock()
{
    MockRepository mocks = new MockRepository();
    Database db = mocks.DynamicMock<Database>();
    DbCommand cmd = null;
    SetupResult.For(db.ExecuteReader(cmd)).IgnoreArguments().Return(GetDataSetDataReader("test_data.xml"));
    mocks.ReplayAll();

    Project underTest = Project.GetProject("an_id", db);
    Assert.AreEqual("project_name", underTest.Name, "the project name was not initialized from its db value");

    mocks.VerifyAll();
}

this allows us to test without actually hitting a database. typically the mock db is injected in the business object's constructor. i am looking for advice on the best way to approach this for csla objects. my real hangup is BO.GetBO( id, db ) would somehow have to get the db to the business object before DataPortal_Fetch is called.

i have thought about adding the database to the criteria object, but it isn't serializable, and things start to get pretty complicated and messy with that approach. i can't seem to think of any good way to do this without modifying the DataPortal infrastructure, something i don't want to do.

xAvailx replied on Wednesday, December 06, 2006

>> this allows us to test without actually hitting a database <<

Is there any specific reason you don't want to hit the database?

Thx

steveb replied on Wednesday, December 06, 2006

there are all kinds of benefits to writing unit tests that mock out dependencies.

here is one concrete example. i want to test that when an error is raised at the database level that my code handles it correctly, whatever correctly means for that use case.

[Test( Description = "RunScript will roll back the transaction and throw the exception if the database fails" )]
[ExpectedException( typeof( ApplicationException ) )]
public void RunScriptFails()
{
    string sql = ReadScriptFromFile( @"test1.sql" );
    Expect.Call( mocks.Database.ExecuteNonQuery( mocks.Transaction, CommandType.Text, sql ) ).Throw( new ApplicationException() );
    mocks.Transaction.Rollback(); // this line states "Rollback will be called on the transaction"
    mocks.ReplayAll();

    server.RunScript(mocks.Database, @"test1.sql");
}

to setup an environment to actually cause the database server to fail would require much more than just ".Throw( new ApplicationException( ) )" if i wasn't mocking out the database.

with that said, i dont really want to dig into the merits of mock objects here, just figure out a way to integrate the idea into csla :)

this is one of my favorite articles on the subject if you are interested.

http://www.martinfowler.com/articles/injection.html

 

ajj3085 replied on Wednesday, December 06, 2006

We've discussed this here before (search for mocks), but there are also drawbacks in my opinion.  At the end of the day, your test says that the BO correctly works with the mock objects, but you still don't know if it correctly works with the actual database.  You can't say for sure because you actually haven't tested it (unless you have another set of tests that run and actually hit the db).

I'm not sure your example shows an actual benefit.  How exactly will the business layer handle some kind of database failure?  I would think there is nothing it can do, so it should let the exception pass to the next level, which would be the UI layer.  The UI layer can handle this correctly.  Usually it does so by cloning the BO and saving the clone; then if there is an exception the clone is simply discarded, so that the BO isn't left in an invalid state, and the user is informed.

Andy

xAvailx replied on Wednesday, December 06, 2006

Sorry if I wasn't clear, I wasn't questioning the use of mock objects, I was asking why you didn't want to use the database in the get project test you used as an example. I can see why mock objects are beneficial for some scenarios, but not for every scenario in my opinion.

>> i dont really want to dig into the merits of mock objects here, just figure out a way to integrate the idea into csla :) <<

Fair enough :) - I am not able to help with your answer but I agree with ajj3085 that when using a mock object, you have to create another set of tests that "cover" the hitting-the-database scenario.

Once upon a time... In one project I "mandated" (I was team lead) creating separate unit tests for my library, every database stored procedure, and every table constraint. I ended up creating a lot of extra work for the entire team with no benefit to the project quality, because many tests covered the same scenarios. For example, my NewCustomer library test was testing my NewCustomer stored procedure, yet I had three unit tests (new customer lib, new customer stored proc, customer table constraints) that more or less covered the same unit of work. We did stop the madness after a few weeks :) So in the end, we concentrated on a high level of code coverage for the library and unit tested on an "as needed" basis for complex stored procs / db constraints.

I am not saying this applies to what you are trying to do, but a little story I wanted to share with others.

Thx.

steveb replied on Wednesday, December 06, 2006

thanks for the replies guys, i respect what you have to say.

 

I had already read quite a bit up here and had searched for mocks. One thing that i have noticed over my time in this industry is that anybody at a high enough technical level to understand things like csla comes with a passion for technology. The same passion that makes us so good at our jobs often tends to drive strong opinions on technical subjects as well.

 

The only questions posted for mocking inside csla i have found up here turned into conversations of strong opinions about whether or not to do it at all, and very little detail was discussed about how to. I see this as an unfortunate part of our passion: when posed with a question about how best to do something we don’t do ourselves, our natural response is to convince the asker that they shouldn’t do it either, instead of helping them figure out the best way how.

 

Today I am not interested in whether the cost of mocking an object outweighs the benefit. Before I decided to take on csla I could mock out any call to a database in a few lines of code. The solution was clean, easy, elegant, and helped us easily and quickly test our applications at a level they weren’t tested before. I believe that there are benefits to distinguishing between unit tests and integration tests. I have an entire suite of integration tests that don’t mock out the database as well. I want to be able to do the same with csla so that I can leverage its great benefits. I have just started with csla and was hoping to get some help from people with a lot of experience with it without starting another Unit Test Quandry type thread.

 

Today I am trying to find the cleanest way to pass an enterprise library Database object into my csla object. The only problem I have is DataPortal calls a default constructor. I am not interested in mocking out DataPortal right now. If I place the Database instance in my Criteria objects, which is then passed to DataPortal_Create or DataPortal_Fetch, the object can get it as needed. However, the database object is not serializable ( I could fix that as well if this is the easiest way to go ) and obviously breaks the DataPortal paradigm when not running in process.

 

The real kicker here is my application will almost surely never need to scale to the point where I need to run with an application server anyway, so in the spirit of YAGNI I would be better off not calling DataPortal at all and calling the DataPortal_XYZ methods directly after calling a constructor that takes the database. This way I still benefit from a common container for my data access and a common structure for all my objects, and am able to mock to my heart's content.
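As a rough sketch of that YAGNI option (the constructor and Criteria signature here are assumptions for illustration, not existing CSLA code):

```csharp
// Hypothetical factory method that skips DataPortal and invokes the
// DataPortal_XYZ method directly, so a mock or real db can be injected.
public static Project GetProject(string id, Database db)
{
    Project project = new Project(db);           // assumed constructor that stores db
    project.DataPortal_Fetch(new Criteria(id));  // direct call, no DataPortal
    return project;
}
```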

 

I just thought it would be better if I could find a simple way to keep the unmodified DataPortal in the picture. For now I am off to read more about how best to handle variable connection strings in csla, as my users connect to any database they like when the application opens and my connection strings don’t live in app.config. Perhaps I will learn something about GlobalContext that will shed some light here as well?

 

So a few approaches i have thought of are:

 

- Database passed on criteria. Criteria could implement ISerializable and pass the connection string and provider type to reconstitute the database in a server environment. (Gets complicated and lacks elegance)

 

- don't call DataPortal at all (just plain too stubborn for this )

 

- modify DataPortal - if i was going to do this i would most likely implement object creation with EntLib's ObjectBuilder in the dataportal. we are using CAB as well and are all accustomed to its usage, and the Database is already registered as a service and could easily be injected into our csla objects with [ServiceDependency]. This seems the most elegant. The only drawback i see to this approach is that i would have to manage a vendor branch to merge updates from rocky ;)

 

I am more than willing to post samples of what I am doing if it would help someone who is willing to help me find my way with csla.

 

Thanks :)

steve

 

xAvailx replied on Wednesday, December 06, 2006

>> I am more than willing to post samples of what I am doing if it would help someone who is willing to help me find my way with csla. <<

If you can post a small and complete code sample I will be glad to help with what I can.

Thx.

ajj3085 replied on Thursday, December 07, 2006

Steve,

Understood that you're not interested in the debate.  I did give an answer, but it's more theory because I don't mock anything.  :)

I would think you'd have to use something like NHibernate, and possibly add your own 'mock db' connection.  Other than that, I'd think you'd need to use reflection to change the contents of the DP_xxx methods at runtime to do whatever it is you need to do.

steveb:
The real kicker here is my application will almost surely never need to scale to the point where I need to run with an application server anyway, so in the spirit of YAGNI I would be better off just not calling DataPortal at all and directly calling DataPortal_XYZ methods directly after calling a constructor that takes the database. This way I still benefit from a common container for my data access and a common structure for all my objects and am able to mock to my hearts content.


Well, I don't think that's the only benefit the DP provides.  If you don't use it, you may start sprinkling data access code willy nilly through your class.  There are good reasons not to do so, but they are covered in the book, so I won't rehash them here. 

You also should think about how much you're buying by skipping the DP.  If the app is always going to be local, fine, but that also means that there's very little overhead when using the dataportal.  In other words, using it really isn't going to affect the performance of your application.  As Rocky has pointed out before, the database calls are the heavy performance hits, not a few milliseconds of using reflection.  I strongly advise against bypassing it.

I agree with the YAGNI principle, but your application will likely be around much longer than you think.  Just ask any Fortran programmer today.

steveb:
Today I am trying to find the cleanest way to pass an enterprise library Database object into my csla object. The only problem I have is DataPortal calls a default constructor. I am not interested in mocking out DataPortal right now. If I place the Database instance in my Criteria objects, which is then passed to DataPortal_Create or DataPortal_Fetch, the object can get it as needed. However, the database object is not serializable ( I could fix that as well if this is the easiest way to go ) and obviously breaks the DataPortal paradigm when not running in process.


Typically the client of the BO library doesn't know anything about the database, and that's how it should be.  So I'm not sure how you would be passing the database object to the criteria anyway.  Probably your best bet is to put it in the ClientContext, and then have your BO take the object out of the client context and use it.  I don't think that's something you'd do in a real client of the BO library, but for testing it may make the most sense.  You can use the DP, don't need to worry about serializing and don't need to worry about modifying the DP.
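A rough sketch of that ClientContext idea (assuming CSLA 2.x's Csla.ApplicationContext.ClientContext dictionary and Enterprise Library's DatabaseFactory; the "MockDatabase" key is made up for illustration):

```csharp
// In the test: stash the mock before calling the factory method.
Csla.ApplicationContext.ClientContext["MockDatabase"] = mockDb;

// In the BO: prefer the mock when one has been supplied.
private void DataPortal_Fetch(Criteria criteria)
{
    Database db = Csla.ApplicationContext.ClientContext["MockDatabase"] as Database;
    if (db == null)
        db = DatabaseFactory.CreateDatabase(); // normal EntLib resolution
    // ... data access using db ...
}
```

Because ClientContext flows to the server with each data portal call, this works unchanged whether the data portal runs in-process or remote.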

One last comment:
steveb:
I see this as an unfortunate part of our passion that when posed with a question about how best to do something we don’t do ourselves, our natural response is to convince the asker that they shouldn’t do it either instead of help them figure out the best way how.


I'm not sure that's a bad thing.  Sometimes (and I'm not talking about mocking here, I'm speaking in a general sense) you know that helping the person will cause them to hang themselves in the end.  That's especially true if the person trying to convince you otherwise has been down the same road and it ended in disaster.

HTH
Andy

twistedstream replied on Tuesday, December 12, 2006

I think the point that Steve is trying to get at in this post is that it appears the CSLA.NET framework, in its present form, doesn't provide true support for dependency injection.  And dependency injection really is more of a broader subject than mock objects.  It has many benefits besides making it easier to do unit tests with mock objects.

Having true dependency injection support requires access to the object instantiation process.  In CSLA.NET, object instantiation is completely encapsulated by the DataPortal with no way to override it.  At present, you have two options.  One, you modify the DataPortal source code itself; the drawback is that you then have to merge your changes whenever a new version of CSLA.NET is released.  Or two, you don't use the DataPortal; as Andy pointed out, this is bad because you lose all the benefits of the DataPortal (which, in my opinion, is one of the larger benefits of the CSLA.NET framework).

In the past Rocky has made changes to the framework specifically to accommodate popular design or coding practices.  The best example I can think of was when he added support for code generators.  In that same vein, I think he should add support for dependency injection.  In short, he should provide a way to override the actual object instantiation mechanism in the DataPortal, perhaps with provider-model style class that gets registered via the .config file.
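One possible shape for such a hook (entirely hypothetical; no such extension point existed in CSLA at the time):

```csharp
// Hypothetical provider hook: the data portal resolves its target through a
// replaceable activator instead of calling Activator.CreateInstance directly.
public interface IDataPortalActivator
{
    object CreateInstance(Type objectType);
}

public class DefaultActivator : IDataPortalActivator
{
    public object CreateInstance(Type objectType)
    {
        // current behavior: invoke the (possibly non-public) default constructor
        return Activator.CreateInstance(objectType, true);
    }
}

// The data portal would read the activator type from the server's .config,
// e.g. <add key="CslaActivator" value="MyApp.InjectingActivator, MyApp" />,
// letting an IoC container build the object and inject its dependencies.
```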

Maybe the only reason he hasn’t done this was for simplicity's sake.  He still has to fit all of this into his next book :-).

~pete

Brian Criswell replied on Thursday, December 07, 2006

Take a look at the Database object from the PTracker example.  In Rocky's example, Database provides connection strings, but there is no reason why it could not choose which database to use (or mock) based on the app.config file (test .dlls can have their own app.config file in .NET 2.0).

Or your test could set up the mock db and pass it to the Database object, which knows to prefer a test db when available:

[Test]
public void TestWithRhinoMock()
{
    MockRepository mocks = new MockRepository();
    Database db = mocks.DynamicMock<Database>();
    DatabaseHelper.TestDatabase = db;

    try
    {
       mocks.ReplayAll();
       Project underTest = Project.GetProject("an_id");
       Assert.AreEqual("project_name", underTest.Name, "the project name was not initialized from its db value");
       mocks.VerifyAll();
    }
    finally
    {
       DatabaseHelper.TestDatabase = null;
    }
}

public static class DatabaseHelper
{
    public static Database TestDatabase
    {
       // get and set
    }

    public static Database Database
    {
       // logic to choose the database to use
    }
}


If you actually used a remote app server, your DatabaseHelper would need to attach the TestDatabase to ClientContext and pick it up from there to use it.  This would pass the test database from the local test to the remote app server.
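For illustration, one way the DatabaseHelper stub above might be filled in (a sketch assuming an in-process test, with Enterprise Library's DatabaseFactory as the production path):

```csharp
public static class DatabaseHelper
{
    private static Database _testDatabase;

    // Set by tests; left null in production.
    public static Database TestDatabase
    {
        get { return _testDatabase; }
        set { _testDatabase = value; }
    }

    // Business objects call this and never know which database they got.
    public static Database Database
    {
        get
        {
            if (_testDatabase != null)
                return _testDatabase;
            return DatabaseFactory.CreateDatabase(); // EntLib default database
        }
    }
}
```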

steveb replied on Thursday, December 07, 2006

your idea has actually crossed my mind quite a bit; it makes things very easy. the reason that i was looking for other options was that when i coded up the idea in our application it was:

Globals.Database = mockdb;

and the idea of using that global left a bad taste in my mouth.

however, it is starting to taste better after looking at the other options and reading about ClientContext as a way to initialize the db on the server side.

thanks,

steve

Brian Criswell replied on Thursday, December 07, 2006

No problem.  The other thing I like about using this style of approach is that it keeps the business objects unaware of whether they are using a real or mock database.  They just get the database/connection/connection string from the Database object.

DavidDilworth replied on Wednesday, February 07, 2007

Looks like I missed this thread first time round.  I was one of the main contributors on the Unit Test Quandry thread referred to earlier and I feel I need to update my view slightly.

Whilst I still believe that you have to have some tests that actually hit the database to test your Business Objects basic CRUD functionality, I now also believe that Mock Objects have a place in your testing process as well.

We found Mock Objects useful in testing different "data configurations" in order to test different flows through our Business Process Workflow.  This is much easier to test in an automated fashion with a Mock Object based framework, than it is to try and maintain multiple sets of test data in a database that can only be used for specific test scenarios.

So we have used the NMock framework and also looked at Typemock. I've heard about Rhino Mocks, but didn't get a chance to investigate further.

I'd be interested to know how you actually got on with your planned mocked database.

=====
As an aside, the guy behind Rhino Mocks (Ayende) uses and has contributed towards NHibernate.

DavidDilworth replied on Monday, March 12, 2007

We've been doing a bit more thinking and research about Mock Objects and interactions with the database and came up with the idea shown below.  I think the idea in principle is the same as that suggested by Brian, but the implementation is slightly different.  Just putting it up for comment.

public static Person GetPerson(Guid personId)
{
   Person person;
   if (MockPortal.IsMockObjectAvailable<Person>(personId))
      person = MockPortal.Fetch<Person>(personId); // Returns a Mock Object (avoids the DP and the DB)
   else
      person = DataPortal.Fetch<Person>(new CriteriaGuid(personId));  // Returns a concrete object (hits the DB)
   return person;
}

It's designed to provide a Mock Object to be used in test scenarios where the variability of the data inside the BO being mocked has an effect on the BO/Business Process/etc. being tested.  There's no need for either the Data Portal or the database, as what is really important is the value of something in the Mock Object that affects the thing that is actually being tested.
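The MockPortal itself isn't shown in the post; one minimal form it might take (purely hypothetical) is a static registry keyed by business object type and id:

```csharp
// Hypothetical MockPortal: a static registry of pre-built mock objects,
// keyed by business object type and criteria id.
public static class MockPortal
{
    private static readonly Dictionary<Type, Dictionary<Guid, object>> _mocks =
        new Dictionary<Type, Dictionary<Guid, object>>();

    // Tests call this to register a mock for a given id.
    public static void Register<T>(Guid id, T mockObject)
    {
        if (!_mocks.ContainsKey(typeof(T)))
            _mocks[typeof(T)] = new Dictionary<Guid, object>();
        _mocks[typeof(T)][id] = mockObject;
    }

    public static bool IsMockObjectAvailable<T>(Guid id)
    {
        return _mocks.ContainsKey(typeof(T)) && _mocks[typeof(T)].ContainsKey(id);
    }

    public static T Fetch<T>(Guid id)
    {
        return (T)_mocks[typeof(T)][id];
    }
}
```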

Any thoughts/comments/suggestions welcomed.

ajj3085 replied on Monday, March 12, 2007

My first thought is that skipping the DP may be skipping some potential problems.  How do you handle the case where you want to test with remoting enabled?  While you don't need to specifically test the dataportal, you may want to test your code with remoting in place... I know I found quite a few problems when I turned on remoting.

My second thought is that perhaps you can encapsulate the mocking aspect with a new DataPortal client.  That would make things more transparent in your own code.  Did you go down that path and find it was unworkable?

DavidDilworth replied on Monday, March 12, 2007

ajj3085:
My first thought is that skipping the DP may be skipping some potential problems.  How do you handle the case where you want to test with remoting enabled?

I agree that if you want to "verify" that the DataPortal works then you have to turn it on and try it.  We have done that already and we also had some "teething troubles" in understanding and getting it working properly.

But that's a different set of tests to the ones I'm describing for the use of this technique.  What is important in the scenarios I'm describing is not whether the data comes from the database, or even via the DataPortal.  The thing that is important is some property of that Mock Object affects the behaviour of something else (another BO or Business Process) and we want to test all the possible variations in an automated way.

ajj3085:
My second thought is that perhaps you can encapsulate the mocking aspect with a new DataPortal client.  That would make things more transparent in your own code.  Did you go down that path and find it was unworkable?

You must have read my mind.  That was an idea we considered and thought had potential, but we did not follow up.  It would be a bigger job than just the "quick-and-dirty" approach we prototyped.

ajj3085 replied on Monday, March 12, 2007

DavidDilworth:
I agree that if you want to "verify" that the DataPortal works then you have to turn it on and try it.  We have done that already and we also had some "teething troubles" in understanding and getting it working properly.

But that's a different set of tests to the ones I'm describing for the use of this technique.  What is important in the scenarios I'm describing is not whether the data comes from the database, or even via the DataPortal.  The thing that is important is some property of that Mock Object affects the behaviour of something else (another BO or Business Process) and we want to test all the possible variations in an automated way.

Ahh, sounds like you've got some more extensive tests than I do.  :)

DavidDilworth:
You must have read my mind.  That was an idea we considered and thought had potential, but we did not follow up.  It would be a bigger job than just the "quick-and-dirty" approach we prototyped.

If this is a valid path to follow, it would help cut down the code in all your BOs, which could save you time down the road.  If you do reconsider this option, I'd love to hear about your findings.

RockfordLhotka replied on Monday, March 12, 2007

I'm attaching an old (maybe not currently working - you'll have to see) data portal channel I wrote for testing "remote" data portal functionality without actually being remote.

Creating an in-proc channel isn't very hard, but I wanted to be both in-proc AND test cross-thread and serialization issues, so this channel tries to do that. It could have some threading issues, but I've used it off and on for testing with success.

The point being, that you may be able to adapt this to do interesting things for your mock scenarios too.

 

Additionally, and on a different tack, I've been toying with some ideas around directly invoking a "factory" object rather than the business object in SimpleDataPortal. Mostly in prep for 3.5, to enable better support for ADO.NET EF and LINQ, but also useful for nHibernate and mock testing.

I don't have a working thing to share right now - I went down one road (passing a factory type through a new FactoryCriteriaBase) and didn't like the result. I'm now going down a different road (where you create an IObjectFactoryProvider and return an object that implements the DP_XYZ methods) and I like that better.

The idea is that CSLA .NET supplies a default ObjectFactoryProvider that simply returns instances of the business object - which is the same thing you have today. But if you provide a type/assembly in your server's config file, CSLA will invoke your object factory provider and then you can return any object that implements the DP_XYZ methods.

The trick, of course, is getting the results of Create or Fetch back out of the factory. Right now those methods are void/Sub, so they don't return anything. Insert/Update/DeleteSelf are easy, because they act on the object in-place, and Delete doesn't return anything at all, so it is easy too.

Right now, I'm implementing a solution where your factory implements IObjectFactory, which defines a BusinessObject property. So after a Create or Fetch call, the data portal gets the resulting object from this property. If the "factory" doesn't implement the interface, then the factory itself is returned as the result (which is what happens today).
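Based on that description, such a factory might look something like this (a sketch of the idea, not actual CSLA code):

```csharp
// Sketch of the described interface: the data portal reads the result here.
public interface IObjectFactory
{
    object BusinessObject { get; }
}

// A factory for Project: the DP_XYZ methods live on the factory, not the BO.
public class ProjectFactory : IObjectFactory
{
    private Project _result;

    public object BusinessObject
    {
        get { return _result; }
    }

    private void DataPortal_Fetch(Criteria criteria)
    {
        // build _result from any data source: ADO.NET, nHibernate, or a mock
    }
}
```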

twistedstream replied on Monday, December 22, 2008

For what it's worth, I've worked out what I think is a fairly simple yet effective way to use Dependency Injection with CSLA.NET.  It's not specific to mocking out the database; however, it could easily be used for that.  You can use it to inject any kind of dependency, whether it be a data access layer, a logging component, or objects to handle more complex business logic within your business object.

I've posted the solution on my blog here.  All the samples are using CSLA.NET 3.5 (which will also work with CSLA.NET 3.6).

Happy Holidays!
~pete

rlarno replied on Tuesday, March 13, 2007

Interesting...

We have actually moved around the whole problem by introducing a DAL layer that is abstracted by using a provider pattern (as used in ASP.NET RoleProvider, MembershipProvider).

This has enabled us to remove any SqlConnection or SqlCommand reference from our Business Objects. We only return DbDataReader objects. And by setting the provider instance from the unit test code, we avoid the need for a database to test the business validation logic.

I used some code from Subtext (a .NET weblog engine) and made it more generic, to end up with a lightweight provider pattern that does not depend on a config file and can be set from code.
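A lightweight version of that pattern might look like this (a sketch; the names are illustrative and not Subtext's actual code):

```csharp
// Abstract provider the business objects talk to; only DbDataReader
// crosses the boundary, so no SqlConnection/SqlCommand leaks into the BOs.
public abstract class DataProvider
{
    private static DataProvider _instance = new SqlDataProvider();

    // Settable from code: unit tests swap in a mock here, no config file needed.
    public static DataProvider Instance
    {
        get { return _instance; }
        set { _instance = value; }
    }

    public abstract DbDataReader GetCustomer(Guid id);
}

public class SqlDataProvider : DataProvider
{
    public override DbDataReader GetCustomer(Guid id)
    {
        // real ADO.NET code that returns a DbDataReader
        throw new NotImplementedException();
    }
}
```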


nermin replied on Tuesday, May 08, 2007

Hi Brian,

I know it has been months since the last reply was posted in this thread, but there is an important question I have to ask:

In your example you mock the Database object, whose only purpose is to provide a connection string.  That means that your test still has to call the database, just not the production one.  In other words, we are not mocking the SqlConnection or the SqlCommand that are called within DataPortal_Fetch(), right?

The point I would like to make is that the reason I mock external dependencies is to test only the code within our business object (the unit being tested) and not test whether the database connection works, the stored procedure is right, network conditions are optimal, Sql Server is not running out of storage space, etc.  That would make it a system test and not a unit test, right?  In addition, running tests against the database, where we might insert test records or change/update records, would require us to change the state back to the original in a [Teardown] or [Setup], which complicates tests further.

What I would like to propose is a solution that replaces the whole database and returns a data stub SafeDataReader() instead of going to the database.  Let me explain.  If, instead of Rhino Mocks, we use TypeMock.Net, we do not have to inject dependencies into our Csla object being tested (in this case, Project).  TypeMock looks for the types we mocked at runtime and assures that the actual code is not called, but the call is replaced with the call defined in the mock segment.

So let me explain how I mock the whole data access layer to get the "fake" SafeDataReader().  First I refactored the Database class to encapsulate not just the connection string but every request to instantiate any of the ADO.NET objects.  What that means is that the Database object holds an internal SqlConnection and has calls to OpenConnection(), CreateSPCommand(), ExecuteSafeDataReader(), and AddWithValue().  Then, if we modify the code within DataPortal_Fetch() to use this object to get the SqlConnection, SqlCommand, and DataReader, we can mock this new Database object and completely avoid connecting to the actual database.

Let me first show you the modified Database object:

public class Database : IDisposable
{
    private readonly SqlConnection _activeConnection;
    private readonly List<SqlCommand> _createdCmds;
    private bool disposed;

    public Database(string connection)
    {
        _activeConnection = new SqlConnection(connection);
        _createdCmds = new List<SqlCommand>();
    }

    ~Database()
    {
        Dispose(false);
    }

    #region Available Connection Strings

    public static string PTrackerConnection
    {
        get { return ConfigurationManager.ConnectionStrings["PTracker"].ConnectionString; }
    }

    public static string SecurityConnection
    {
        get { return ConfigurationManager.ConnectionStrings["Security"].ConnectionString; }
    }

    #endregion

    #region IDisposable Members

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!disposed) {
            if (disposing) {
                // Dispose managed resources.
                foreach (SqlCommand cmd in _createdCmds)
                    cmd.Dispose();
                _createdCmds.Clear();

                if (_activeConnection != null && _activeConnection.State != ConnectionState.Closed)
                    _activeConnection.Close();
            }

            // Dispose unmanaged resources.
        }
        disposed = true;
    }

    #endregion

    protected void OpenConnection()
    {
        if (_activeConnection.State != ConnectionState.Open)
            _activeConnection.Open();
    }

    public SqlCommand CreateSPCommand(string cmdName)
    {
        SqlCommand cm = _activeConnection.CreateCommand();
        cm.CommandType = CommandType.StoredProcedure;
        cm.CommandText = cmdName;

        _createdCmds.Add(cm);

        return cm;
    }

    public SafeDataReader ExecuteSafeDataReader(SqlCommand cm)
    {
        OpenConnection();
        return new SafeDataReader(cm.ExecuteReader());
    }

    public void AddWithValue(SqlCommand cm, string paramName, object value)
    {
        cm.Parameters.AddWithValue(paramName, value);
    }
}

Then if we use this Database object in our DataPortal_Fetch() (of the Project object), code would look like this:

private void DataPortal_Fetch(Criteria criteria)
{
    using (Database db = new Database(Database.PTrackerConnection)) {
        SqlCommand cm = db.CreateSPCommand("getProject");
        db.AddWithValue(cm, "@id", criteria.Id); // go through the wrapper so the call can be mocked

        using (SafeDataReader dr = db.ExecuteSafeDataReader(cm)) {
            dr.Read();
            _id = dr.GetGuid("Id");
            _name = dr.GetString("Name");
            _started = dr.GetSmartDate("Started", _started.EmptyIsMin);
            _ended = dr.GetSmartDate("Ended", _ended.EmptyIsMin);
            _description = dr.GetString("Description");
            dr.GetBytes("LastChanged", 0, _timestamp, 0, 8);

            // load child objects
            if (dr.NextResult())
                _resources = ProjectResources.GetProjectResources(dr);
        }
    }
}

If you take a look at this version of the DataPortal_Fetch() you will notice now that the only thing we need to mock is the Database object, and then set the expectations for the CreateSPCommand(), AddWithValue(), ExecuteSafeDataReader() calls, having the last one return our "fake" data stub (SafeDataReader).  So let’s take a look at the test for Project.GetProject():

[Test]

public void TestWithTypeMock()

{

    MockHelper.MockDatabaseFetchCall("PTrackerConnection", 1, new ProjectFetchOneDRStub());

 

    Project item = Project.GetProject(Guid.Empty);

    Assert.AreEqual("project name", item.Name);

}

Let me explain the need for MockDatabaseFetchCall(): it is generic code that should be able to mock most DataPortal_Fetch() implementations, not just the one used in Project.GetProject().  I have noticed that DataPortal_Fetch() calls generally differ only by the database connection string and the number of calls to AddWithValue() (adding parameters), which explains the first two parameters.  The third parameter is a reference to a simple helper object that provides a “fake” SafeDataReader.

Before I continue I would just like to add that TypeMock requires the following setup/teardown process in order for the test above to work:

[SetUp]

public void Start()

{

    ///<remark>Initialize TypeMock before each test</remark>

    MockManager.Init();

}

 

 

[TearDown]

public void Finish()

{

    ///<remark>We will verify that the mocks have been called correctly at the end of each test</remark>

    MockManager.Verify();

}

Let’s look inside MockDatabaseFetchCall():

public static void MockDatabaseFetchCall(string connectionName, int noOfAddInParamCalls, IDataReaderStubFactory drFactory)

{

    Mock mockDb = MockManager.Mock(typeof(Database));

 

    mockDb.ExpectGet(connectionName, string.Empty);

 

    mockDb.ExpectAndReturn("CreateSPCommand", null);

 

    mockDb.ExpectCall("AddWithValue", noOfAddInParamCalls);

 

    mockDb.ExpectAndReturn("ExecuteSafeDataReader", drFactory.GetDataReaderStub())

        .Args(null);

 

    mockDb.ExpectCall("Dispose");

} 

So first we mock the Database object, then we state our expectations: we expect a property get named connectionName (remember we passed “PTrackerConnection” as connectionName, so we expect a get call to a property with that name); then we expect a call to a method called CreateSPCommand() and return null (we do not care about the SqlCommand object, as we will not go to the database for the SafeDataReader).  After that we expect AddWithValue() to be called noOfAddInParamCalls times – we do not care about the parameter values for this test, only that they were initialized.  In the call to MockDatabaseFetchCall() we specified that we expect DataPortal_Fetch() to call AddWithValue() one time.  The next expectation is the interesting part:

We expect a call to the ExecuteSafeDataReader() method, but instead of invoking Database.ExecuteSafeDataReader() we want our mock to simply return the value from drFactory.GetDataReaderStub().

drFactory is our third parameter; its type is an interface that declares a single method, GetDataReaderStub().  You can see that our test passes a new instance of a class called ProjectFetchOneDRStub.  That is the object in charge of “faking” the db data and passing it to our mock:

internal class ProjectFetchOneDRStub : IDataReaderStubFactory {

 

    public SafeDataReader GetDataReaderStub()

    {

        DataTable stubTable = GetStubTable();

        stubTable.Rows.Add(new object[] { Guid.NewGuid(), "project name", DateTime.Now, DateTime.MaxValue, string.Empty, new byte[8] });

 

        return new SafeDataReader(stubTable.CreateDataReader());

    }

 

    protected static DataTable GetStubTable()

    {

        DataTable stubTable = new DataTable();

        stubTable.Columns.Add("Id", typeof(Guid));

        stubTable.Columns.Add("Name", typeof(string));

        stubTable.Columns.Add("Started", typeof(DateTime));

        stubTable.Columns.Add("Ended", typeof(DateTime));

        stubTable.Columns.Add("Description", typeof(string));

        stubTable.Columns.Add("LastChanged", typeof(byte[]));

        return stubTable;

    }

}
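For completeness: the IDataReaderStubFactory interface itself is never shown in this post, but from its usage it can only be a single-method contract, something like the following sketch (the exact signature is my assumption):

```csharp
using Csla.Data;

// Assumed shape of the factory interface consumed by MockDatabaseFetchCall();
// the post only shows an implementation and callers, not the declaration.
public interface IDataReaderStubFactory
{
    SafeDataReader GetDataReaderStub();
}
```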

The key to GetDataReaderStub() is the stubTable.CreateDataReader() call.
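To see what that call gives us, here is a minimal standalone sketch (plain ADO.NET only, no Csla or TypeMock types) showing that DataTable.CreateDataReader() behaves like a forward-only data reader over the in-memory rows:

```csharp
using System;
using System.Data;

class CreateDataReaderDemo
{
    static void Main()
    {
        // Build an in-memory table with one stub row.
        DataTable stubTable = new DataTable();
        stubTable.Columns.Add("Name", typeof(string));
        stubTable.Rows.Add("project name");

        // CreateDataReader() returns a DataTableReader, which implements
        // IDataReader just like a SqlDataReader over a real result set.
        using (DataTableReader dr = stubTable.CreateDataReader())
        {
            dr.Read();
            Console.WriteLine(dr.GetString(dr.GetOrdinal("Name"))); // prints "project name"
        }
    }
}
```

Because SafeDataReader just wraps an IDataReader, wrapping this reader lets the business object read the stubbed rows exactly as it would read a real database result set.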

A more detailed description of this can be found in my blog post at:

http://www.nermins.net/PermaLink,guid,d9a9fa9c-a700-4157-9c5e-59119bf0ea08.aspx

I know it looks like a lot to start with, but with the exception of the code that generates the DataReader stub, the code is re-usable.  In addition, the code that generates the DataReader stub (the IDataReaderStubFactory implementation) can be replaced with something simpler.  Basically, I have written a simple GUI tool that lets you run a fetch SQL statement (copied from your SP) and then serializes the result as an XML DataTable.  The technique is then to include that XML file as an embedded resource in your test assembly.  This DataTable is an image of the actual database table (or the portion of it that your Fetch() call needs).  The implementation of IDataReaderStubFactory.GetDataReaderStub() comes down to:

1.  Loading/deserializing DataTable from this embedded resource file

2.  Returning DataTable.CreateDataReader().
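Those two steps might be sketched as follows; the class name, the constructor parameter, and the assumption that the XML was saved with DataTable.WriteXml(..., XmlWriteMode.WriteSchema) (so that ReadXml can restore both the schema and the rows) are mine, not from the tool described above:

```csharp
using System.Data;
using System.IO;
using System.Reflection;
using Csla.Data;

// Hypothetical XML-backed implementation of IDataReaderStubFactory.
internal class XmlDataReaderStubFactory : IDataReaderStubFactory
{
    private readonly string _resourceName;

    public XmlDataReaderStubFactory(string resourceName)
    {
        _resourceName = resourceName;
    }

    public SafeDataReader GetDataReaderStub()
    {
        DataTable stubTable = new DataTable();

        // 1. Load/deserialize the DataTable from the embedded resource file.
        using (Stream s = Assembly.GetExecutingAssembly()
            .GetManifestResourceStream(_resourceName))
        {
            stubTable.ReadXml(s);
        }

        // 2. Return the in-memory reader, wrapped in a SafeDataReader.
        return new SafeDataReader(stubTable.CreateDataReader());
    }
}
```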

And that is it.  I will write a detailed post about it in a day or two on my blog (if there is interest). 

I apologize for such a long post, but I believe that those of us who need a unit test solution where all of the dependencies of the Csla test target are mocked might find this technique useful.

 

Nermin

Copyright (c) Marimer LLC