I am writing my first extensive CSLA application, so bear with me :)
I usually write unit tests for my business objects with Rhino Mocks, using dependency injection for an Enterprise Library Database object. The tests look something like this:
[Test]
public void TestWithRhinoMock()
{
    MockRepository mocks = new MockRepository();
    Database db = mocks.DynamicMock<Database>();
    DbCommand cmd = null;
    SetupResult.For(db.ExecuteReader(cmd)).IgnoreArguments().Return(GetDataSetDataReader("test_data.xml"));
    mocks.ReplayAll();

    Project underTest = Project.GetProject("an_id", db);
    Assert.AreEqual("project_name", underTest.Name, "the project name was not initialized from its db value");
    mocks.VerifyAll();
}
This allows us to test without actually hitting a database. Typically the mock db is injected through the business object's constructor. I am looking for advice on the best way to approach this for CSLA objects. My real hangup is that BO.GetBO(id, db) would somehow have to get the db to the business object before DataPortal_Fetch is called. I have thought about adding the database to the criteria object, but it isn't serializable, and things start to get pretty complicated and messy with that approach. I can't seem to think of any good way to do this without modifying the DataPortal infrastructure, something I don't want to do.
There are all kinds of benefits to writing unit tests that mock out dependencies.
Here is one concrete example: I want to test that when an error is raised at the database level, my code handles it correctly, whatever "correctly" means for that use case.
[Test(Description = "RunScript will roll back the transaction and throw the exception if the database fails")]
[ExpectedException(typeof(ApplicationException))]
public void RunScriptFails()
{
    string sql = ReadScriptFromFile(@"test1.sql");
    Expect.Call(mocks.Database.ExecuteNonQuery(mocks.Transaction, CommandType.Text, sql)).Throw(new ApplicationException());
    mocks.Transaction.Rollback(); // this line states "Rollback will be called on the transaction"
    mocks.ReplayAll();

    server.RunScript(mocks.Database, @"test1.sql");
}
To set up an environment that actually caused the database server to fail would require much more than just ".Throw(new ApplicationException())" if I weren't mocking out the database.
With that said, I don't really want to dig into the merits of mock objects here, just figure out a way to integrate the idea into CSLA :)
This is one of my favorite articles on the subject, if you are interested:
http://www.martinfowler.com/articles/injection.html
Thanks for the replies, guys; I respect what you have to say.
I had already read quite a bit up here and had searched for mocks. One thing that I have noticed over my time in this industry is that anybody at a high enough technical level to understand things like CSLA comes with a passion for technology. The same passion that makes us so good at our jobs often tends to drive strong opinions on technical subjects as well.
The only questions posted for mocking inside CSLA I have found up here turned into conversations of strong opinions about whether or not to do it at all, and very little detail was discussed about how to. I see this as an unfortunate part of our passion: when posed with a question about how best to do something we don't do ourselves, our natural response is to convince the asker that they shouldn't do it either, instead of helping them figure out the best way how.
Today I am not interested in whether the cost of mocking an object outweighs the benefit. Before I decided to take on csla I could mock out any call to a database in a few lines of code. The solution was clean, easy, elegant, and helped us easily and quickly test our applications at a level they weren’t tested before. I believe that there are benefits to distinguishing between unit tests and integration tests. I have an entire suite of integration tests that don’t mock out the database as well. I want to be able to do the same with csla so that I can leverage its great benefits. I have just started with csla and was hoping to get some help from people with a lot of experience with it without starting another Unit Test Quandry type thread.
Today I am trying to find the cleanest way to pass an Enterprise Library Database object into my CSLA object. The only problem I have is that DataPortal calls a default constructor. I am not interested in mocking out DataPortal right now. If I place the Database instance in my Criteria object, which is then passed to DataPortal_Create or DataPortal_Fetch, the object can get it as needed. However, the Database object is not serializable (I could fix that as well if this is the easiest way to go) and obviously breaks the DataPortal paradigm when not running in process.
The real kicker here is that my application will almost surely never need to scale to the point where I need to run with an application server anyway, so in the spirit of YAGNI I would be better off not calling DataPortal at all and calling the DataPortal_XYZ methods directly after calling a constructor that takes the database. This way I still benefit from a common container for my data access and a common structure for all my objects, and am able to mock to my heart's content.
I just thought it would be better if I could find a simple way to keep the unmodified DataPortal in the picture. For now I am off to read more about how best to handle variable connection strings in CSLA, as my users connect to any database they like when the application opens and my connection strings don't live in app.config. Perhaps I will learn something about GlobalContext that will shed some light here as well?
So a few approaches I have thought of are:
- Database passed on criteria. Criteria could implement ISerializable and pass the connection string and provider type to reconstitute the database in the server environment. (Gets complicated and lacks elegance.)
- Don't call DataPortal at all. (Just plain too stubborn for this.)
- Modify DataPortal. If I was going to do this I would most likely implement object creation with EntLib's ObjectBuilder in the DataPortal. We are using CAB as well and are all accustomed to its usage, and the Database is already registered as a service and could easily be injected into our CSLA objects with [ServiceDependency]. This seems the most elegant. The only drawback I see to this approach is that I would have to manage a vendor branch to merge updates from Rocky ;)
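For what it's worth, the first option could look roughly like this. This is only a sketch with illustrative names, assuming EntLib's GenericDatabase; note that the mock Database itself still can't cross the serialization boundary, only the information needed to rebuild a real one, so this helps the remote case but not the mocking case:

```csharp
using System;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

// Hypothetical criteria that carries only serializable connection info.
[Serializable]
public class ConnectedCriteria
{
    public string Id { get; private set; }
    public string ConnectionString { get; private set; }
    public string ProviderName { get; private set; }

    public ConnectedCriteria(string id, string connectionString, string providerName)
    {
        Id = id;
        ConnectionString = connectionString;
        ProviderName = providerName;
    }

    // Reconstitute a real Database on the server side from the
    // serializable pieces (GenericDatabase is EntLib's
    // provider-agnostic Database implementation).
    public Database CreateDatabase()
    {
        return new GenericDatabase(
            ConnectionString,
            DbProviderFactories.GetFactory(ProviderName));
    }
}
```

DataPortal_Fetch would then call criteria.CreateDatabase() instead of receiving an injected instance, which is exactly why this option "lacks elegance" for testing.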
I am more than willing to post samples of what I am doing if it would help someone who is willing to help me find my way with csla.
Thanks
steve
steveb:The real kicker here is my application will almost surely never need to scale to the point where I need to run with an application server anyway, so in the spirit of YAGNI I would be better off just not calling DataPortal at all and directly calling DataPortal_XYZ methods directly after calling a constructor that takes the database. This way I still benefit from a common container for my data access and a common structure for all my objects and am able to mock to my hearts content.
steveb:Today I am trying to find the cleanest way to pass an enterprise library Database object into my csla object. The only problem I have is DataPortal calls a default constructor. I am not interested in mocking out DataPortal right now. If I place the Database instance in my Criteria objects, which is then passed to DataPortal_Create or DataPortal_Fetch, the object can get it as needed. However, the database object is not serializable ( I could fix that as well if this is the easiest way to go ) and obviously breaks the DataPortal paradigm when not running in process.
steveb:I see this as an unfortunate part of our passion that when posed with a question about how best to do something we don’t do ourselves, our natural response is to convince the asker that they shouldn’t do it either instead of help them figure out the best way how.
I think the point that Steve is trying to get at in this post is that the CSLA.NET framework, in its present form, doesn't provide true support for dependency injection. And dependency injection really is a broader subject than mock objects. It has many benefits besides making it easier to do unit tests with mock objects.
Having true dependency injection support requires access to the object instantiation process. In CSLA.NET, object instantiation is completely encapsulated by the DataPortal with no way to override it. At present, you have two options. One, you modify the DataPortal source code itself; the drawback is that you then have to merge your changes whenever a new version of CSLA.NET is released. Or two, you don't use the DataPortal; as Andy pointed out, this is bad because you lose all the benefits of the DataPortal (which, in my opinion, is one of the larger benefits of the CSLA.NET framework).
In the past Rocky has made changes to the framework specifically to accommodate popular design or coding practices. The best example I can think of was when he added support for code generators. In that same vein, I think he should add support for dependency injection. In short, he should provide a way to override the actual object instantiation mechanism in the DataPortal, perhaps with a provider-model style class that gets registered via the .config file.
Maybe the only reason he hasn't done this is for simplicity's sake. He still has to fit all of this into his next book :-).
~pete
Your idea has actually crossed my mind quite a bit; it makes things very easy. The reason I was looking for other options was that when I coded up the idea in our application it was:
Globals.Database = mockdb;
and the idea of using that global left a bad taste in my mouth.
However, it is starting to taste better after looking at the other options and reading about ClientContext as a way to initialize the db on the server side.
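For the record, the "global" version is about this simple. This is a sketch only; all names are illustrative, and production startup code would set the property once while tests overwrite it with a mock:

```csharp
using System;

// Hypothetical ambient holder for the active Database instance.
// DataPortal_XYZ methods read Globals.Database instead of receiving
// an injected instance; tests assign a mock before running.
public static class Globals
{
    private static Database _database;

    public static Database Database
    {
        get
        {
            if (_database == null)
                throw new InvalidOperationException(
                    "Globals.Database has not been initialized.");
            return _database;
        }
        set { _database = value; }
    }
}

// In a test fixture:
//   Globals.Database = mocks.DynamicMock<Database>();
// In application startup:
//   Globals.Database = DatabaseFactory.CreateDatabase();
```

The obvious drawback is exactly the "bad taste" above: it is a hidden global dependency, and parallel tests or multiple connections need more care.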
thanks,
steve
Looks like I missed this thread first time round. I was one of the main contributors on the Unit Test Quandry thread referred to earlier and I feel I need to update my view slightly.
Whilst I still believe that you have to have some tests that actually hit the database to test your Business Objects' basic CRUD functionality, I now also believe that Mock Objects have a place in your testing process as well.
We found Mock Objects useful in testing different "data configurations" in order to test different flows through our Business Process Workflow. This is much easier to test in an automated fashion with a Mock Object based framework, than it is to try and maintain multiple sets of test data in a database that can only be used for specific test scenarios.
So we have used the NMock framework and we also looked at Typemock as well. I've heard about Rhino Mocks, but didn't get any chance to investigate further.
I'd be interested to know how you actually got on with your planned mocked database.
=====
As an aside, the guy behind Rhino Mocks (Ayende) uses and has contributed towards NHibernate.
We've been doing a bit more thinking and research about Mock Objects and interactions with the database and came up with the idea shown below. I think the idea in principle is the same as that suggested by Brian, but the implementation is slightly different. Just putting it up for comment.
public static Person GetPerson(Guid personId)
{
    Person person;
    if (MockPortal.IsMockObjectAvailable<Person>(personId))
        person = MockPortal.Fetch<Person>(personId); // Returns a Mock Object (avoids the DP and the DB)
    else
        person = DataPortal.Fetch<Person>(new CriteriaGuid(personId)); // Returns a concrete object (hits the DB)
    return person;
}
It's designed to provide a Mock Object to be used in test scenarios where the variability of the data inside the BO being mocked has an effect on the BO/Business Process/etc. being tested. There's no need for either the Data Portal or the database, as what is really important is the value of something in the Mock Object that affects the thing that is actually being tested.
Any thoughts/comments/suggestions welcomed.
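For reference, a MockPortal along these lines could be little more than a static registry that tests populate before exercising the code under test. Everything below is an illustrative sketch, not CSLA code:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical registry of pre-built mock objects, keyed by business
// object type and criteria value. GetPerson-style factory methods
// check here before falling through to DataPortal.Fetch.
public static class MockPortal
{
    private static readonly Dictionary<Type, Dictionary<object, object>> _mocks =
        new Dictionary<Type, Dictionary<object, object>>();

    // Called from test setup to make a mock available for a given criteria.
    // Keying relies on the criteria's Equals/GetHashCode (fine for Guid).
    public static void Register<T>(object criteria, T mockObject)
    {
        Dictionary<object, object> byCriteria;
        if (!_mocks.TryGetValue(typeof(T), out byCriteria))
        {
            byCriteria = new Dictionary<object, object>();
            _mocks[typeof(T)] = byCriteria;
        }
        byCriteria[criteria] = mockObject;
    }

    public static bool IsMockObjectAvailable<T>(object criteria)
    {
        Dictionary<object, object> byCriteria;
        return _mocks.TryGetValue(typeof(T), out byCriteria)
            && byCriteria.ContainsKey(criteria);
    }

    public static T Fetch<T>(object criteria)
    {
        return (T)_mocks[typeof(T)][criteria];
    }

    // Called from [TearDown] so mocks never leak between tests.
    public static void Clear()
    {
        _mocks.Clear();
    }
}
```

In production the registry is simply empty, so every call falls through to the real DataPortal.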
ajj3085:My first thought is that skipping the DP may be skipping some potential problems. How do you handle the case where you want to test with remoting enabled?
I agree that if you want to "verify" that the DataPortal works then you have to turn it on and try it. We have done that already and we also had some "teething troubles" in understanding and getting it working properly.
But that's a different set of tests to the ones I'm describing for the use of this technique. What is important in the scenarios I'm describing is not whether the data comes from the database, or even via the DataPortal. The thing that is important is some property of that Mock Object affects the behaviour of something else (another BO or Business Process) and we want to test all the possible variations in an automated way.
ajj3085:My second thought is that perhaps you can encapsulate the mocking aspect with a new DataPortal client. That would make things more transparent in your own code. Did you go down that path and find it was unworkable?
You must have read my mind. That was an idea we considered and thought had potential, but we did not follow up. It would be a bigger job than just the "quick-and-dirty" approach we prototyped.
DavidDilworth:I agree that if you want to "verify" that the DataPortal works then you have to turn it on and try it. We have done that already and we also had some "teething troubles" in understanding and getting it working properly.But that's a different set of tests to the ones I'm describing for the use of this technique. What is important in the scenarios I'm describing is not whether the data comes from the database, or even via the DataPortal. The thing that is important is some property of that Mock Object affects the behaviour of something else (another BO or Business Process) and we want to test all the possible variations in an automated way.
Ahh, sounds like you've got some more extensive tests than I do.
DavidDilworth:You must have read my mind. That was an idea we considered and thought had potential, but we did not follow up. It would be a bigger job than just the "quick-and-dirty" approach we prototyped.
If this is a valid path to follow, it would help cut down the code in all your BOs, which could save you time down the road. If you do reconsider this option, I'd love to hear about your findings.
I'm attaching an old (maybe not currently working - you'll have to see) data portal channel I wrote for testing "remote" data portal functionality without actually being remote.
Creating an in-proc channel isn't very hard, but I wanted to be both in-proc AND test cross-thread and serialization issues, so this channel tries to do that. It could have some threading issues, but I've used it off and on for testing with success.
The point being, that you may be able to adapt this to do interesting things for your mock scenarios too.
Additionally, and on a different tack, I've been toying with some ideas around directly invoking a "factory" object rather than the business object in SimpleDataPortal. Mostly in prep for 3.5, to enable better support for ADO.NET EF and LINQ, but also useful for nHibernate and mock testing.
I don't have a working thing to share right now - I went down one road (passing a factory type through a new FactoryCriteriaBase) and didn't like the result. I'm now going down a different road (where you create an IObjectFactoryProvider and return an object that implements the DP_XYZ methods) and I like that better.
The idea is that CSLA .NET supplies a default ObjectFactoryProvider that simply returns instances of the business object - which is the same thing you have today. But if you provide a type/assembly in your server's config file, CSLA will invoke your object factory provider and then you can return any object that implements the DP_XYZ methods.
The trick, of course, is getting the results of Create or Fetch back out of the factory. Right now those methods are void/Sub, so they don't return anything. Insert/Update/DeleteSelf are easy, because they act on the object in-place, and Delete doesn't return anything at all, so it is easy too.
Right now, I'm implementing a solution where your factory implements IObjectFactory, which defines a BusinessObject property. So after a Create or Fetch call, the data portal gets the resulting object from this property. If the "factory" doesn't implement the interface, then the factory itself is returned as the result (which is what happens today).
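Purely as a sketch of the shape described above (these interfaces were a work in progress at the time, so the real CSLA names and signatures may well differ):

```csharp
using System;

// Resolved from the server's config file; asked for the object that
// will receive the DP_XYZ calls for a given business object type.
public interface IObjectFactoryProvider
{
    object GetFactory(Type businessObjectType);
}

// Optional interface on the factory: after Create/Fetch, the data
// portal reads the result from BusinessObject.
public interface IObjectFactory
{
    object BusinessObject { get; }
}

// Default behavior, matching what happens today: the "factory" is the
// business object itself, so the factory instance is also the result.
public class DefaultObjectFactoryProvider : IObjectFactoryProvider
{
    public object GetFactory(Type businessObjectType)
    {
        // Non-public constructor allowed, as the data portal does today.
        return Activator.CreateInstance(businessObjectType, true);
    }
}
```

A mock-friendly provider would simply return a test double (or an NHibernate-backed factory) that implements the DP_XYZ methods and IObjectFactory.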
Hi Brian,
I know it has been months since last reply has been posted in this thread but there is an important question I have to ask:
In your example you mock the Database object, whose only purpose is to provide a connection string. That means that your test still has to call the database, just not the production one. In other words, we are not mocking the SqlConnection or the SqlCommand that are called within DataPortal_Fetch(), right?
The point I would like to make is that the reason I mock external dependencies is to test only the code within our business object (the unit being tested) and not test whether the database connection works, the stored procedure is right, network conditions are optimal, SQL Server is not running out of storage space, etc. That would make it a system test and not a unit test, right? In addition, running tests against the database, where we might insert test records or change/update records, would require us to change the state back to the original in a [TearDown] or [SetUp], which complicates tests further.
What I would like to propose is a solution that replaces the whole database and returns a data stub SafeDataReader instead of going to the database. Let me explain. If, instead of Rhino Mocks, we use TypeMock.Net, we do not have to inject dependencies into the CSLA object being tested (in this case, Project). TypeMock looks for the types we mocked at runtime and ensures that the actual code is not called; the call is replaced with the behavior defined in the mock setup.
So let me explain how I mock the whole data access layer to get the "fake" SafeDataReader. First I refactored the Database class to encapsulate not just the connection string but every instantiation of the ADO.NET objects. What that means is that the Database object holds an internal SqlConnection and has calls to OpenConnection(), CreateSPCommand(), ExecuteSafeDataReader(), and AddWithValue(). Then if we modify the code within DataPortal_Fetch() to use this object to get the SqlConnection, SqlCommand, and DataReader, we can mock this new Database object and completely avoid connecting to the actual database.
Let me first show you the modified Database object:
public class Database : IDisposable
{
private readonly SqlConnection _activeConnection;
private readonly List<SqlCommand> _createdCmds;
private bool disposed;
public Database(string connection)
{
_activeConnection = new SqlConnection(connection);
_createdCmds = new List<SqlCommand>();
}
~Database()
{
Dispose(false);
}
#region Available Connection Strings
public static string PTrackerConnection
{
get
{
return ConfigurationManager.ConnectionStrings
["PTracker"].ConnectionString;
}
}
public static string SecurityConnection
{
get { return ConfigurationManager.ConnectionStrings["Security"].ConnectionString; }
}
#endregion
#region IDisposable Members
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (!disposed) {
if (disposing) {
// Dispose managed resources.
foreach (SqlCommand cmd in _createdCmds)
cmd.Dispose();
_createdCmds.Clear();
if (_activeConnection != null && _activeConnection.State != ConnectionState.Closed)
_activeConnection.Close();
}
// Dispose unmanaged resources
}
disposed = true;
}
#endregion
protected void OpenConnection()
{
if (_activeConnection.State!=ConnectionState.Open)
_activeConnection.Open();
}
public SqlCommand CreateSPCommand(string cmdName)
{
SqlCommand cm = _activeConnection.CreateCommand();
cm.CommandType = CommandType.StoredProcedure;
cm.CommandText = cmdName;
_createdCmds.Add(cm);
return cm;
}
public SafeDataReader ExecuteSafeDataReader(SqlCommand cm)
{
OpenConnection();
return new SafeDataReader(cm.ExecuteReader());
}
public void AddWithValue(SqlCommand cm, string paramName, object value)
{
cm.Parameters.AddWithValue(paramName, value);
}
}
Then if we use this Database object in our DataPortal_Fetch() (of the Project object), code would look like this:
private void DataPortal_Fetch(Criteria criteria)
{
using (Database db = new Database(Database.PTrackerConnection)) {
SqlCommand cm = db.CreateSPCommand("getProject");
db.AddWithValue(cm, "@id", criteria.Id);
using (SafeDataReader dr = db.ExecuteSafeDataReader(cm)) {
dr.Read();
_id = dr.GetGuid("Id");
_name = dr.GetString("Name");
_started = dr.GetSmartDate("Started", _started.EmptyIsMin);
_ended = dr.GetSmartDate("Ended", _ended.EmptyIsMin);
_description = dr.GetString("Description");
dr.GetBytes("LastChanged", 0, _timestamp, 0, 8);
// load child objects
if(dr.NextResult())
_resources = ProjectResources.GetProjectResources(dr);
}
}
}
If you take a look at this version of the DataPortal_Fetch() you will notice now that the only thing we need to mock is the Database object, and then set the expectations for the CreateSPCommand(), AddWithValue(), ExecuteSafeDataReader() calls, having the last one return our "fake" data stub (SafeDataReader). So let’s take a look at the test for Project.GetProject():
[Test]
public void TestWithTypeMock()
{
MockHelper.MockDatabaseFetchCall("PTrackerConnection", 1, new ProjectFetchOneDRStub());
Project item = Project.GetProject(Guid.Empty);
Assert.AreEqual("project name", item.Name);
}
Let me explain the need for MockDatabaseFetchCall(): it is generic code that should be able to mock most DataPortal_Fetch() implementations, not just the one used in Project.GetProject(). I have noticed that DataPortal_Fetch() calls generally differ only by database connection string and the number of calls to AddWithValue() (adding parameters), which explains the first two parameters. The third parameter is a reference to a simple helper object that will provide a "fake" SafeDataReader.
Before I continue I would just like to add that TypeMock requires a following setup/teardown process in order for the test above to work:
[SetUp]
public void Start()
{
///<remark>Initialize TypeMock before each test</remark>
MockManager.Init();
}
[TearDown]
public void Finish()
{
///<remark>We will verify that the mocks have been called correctly at the end of each test</remark>
MockManager.Verify();
}
Let’s look inside MockDatabaseFetchCall():
public static void MockDatabaseFetchCall(string connectionName, int noOfAddInParamCalls, IDataReaderStubFactory drFactory)
{
Mock mockDb = MockManager.Mock(typeof(Database));
mockDb.ExpectGet(connectionName, string.Empty);
mockDb.ExpectAndReturn("CreateSPCommand", null);
mockDb.ExpectCall("AddWithValue", noOfAddInParamCalls);
mockDb.ExpectAndReturn("ExecuteSafeDataReader", drFactory.GetDataReaderStub())
.Args(null);
mockDb.ExpectCall("Dispose");
}
So first we mock the Database object, then we state our expectations. We expect a get call to the property named connectionName (remember we passed "PTrackerConnection" as connectionName, so we will expect a get call to a property with that name). Then we expect a call to a method called CreateSPCommand and return null (we do not care about the SqlCommand object, as we will not go to the database for the SafeDataReader). After that we expect AddWithValue to be called noOfAddInParamCalls times; we do not care about the parameter values for this test, only that they were initialized. In the call to MockDatabaseFetchCall() we specified that we expect DataPortal_Fetch() to call AddWithValue() one time. The next expectation is the interesting part:
We expect a call to the ExecuteSafeDataReader() method. Instead of invoking the Database.ExecuteSafeDataReader() we want our mock to instead just return a value from drFactory.GetDataReaderStub().
drFactory is our third parameter; its type is an interface that defines a single method, GetDataReaderStub(). You can see that our test passes a new instance of a class called ProjectFetchOneDRStub. That is the object in charge of "faking" the db data and passing it to our mock:
internal class ProjectFetchOneDRStub : IDataReaderStubFactory {
public SafeDataReader GetDataReaderStub()
{
DataTable stubTable = GetStubTable();
stubTable.Rows.Add(new object[] { Guid.NewGuid(), "project name", DateTime.Now, DateTime.MaxValue, string.Empty, new byte[8] });
return new SafeDataReader(stubTable.CreateDataReader());
}
protected static DataTable GetStubTable()
{
DataTable stubTable = new DataTable();
stubTable.Columns.Add("Id", typeof(Guid));
stubTable.Columns.Add("Name", typeof(string));
stubTable.Columns.Add("Started", typeof (DateTime));
stubTable.Columns.Add("Ended", typeof(DateTime));
stubTable.Columns.Add("Description", typeof(string));
stubTable.Columns.Add("LastChanged", typeof(byte[]));
return stubTable;
}
}
The key to the GetDataReaderStub is the stubTable.CreateDataReader().
You can find a more detailed description of this in my blog post:
http://www.nermins.net/PermaLink,guid,d9a9fa9c-a700-4157-9c5e-59119bf0ea08.aspx
I know it looks like a lot to start with, but with the exception of the code that generates the DataReader stub, the code is re-usable. In addition, the code that generates the DataReader stub (the IDataReaderStubFactory implementation) can be replaced with something simpler. Basically, I have created a simple GUI tool that allows you to run a fetch SQL statement (that you copy from your SP) and then serializes the result as an XML DataTable. The technique is then to use that XML file as an embedded resource in your test assembly. This DataTable is then an image of the actual database table (or the portion of it that you need in your Fetch() call) that you are testing. The implementation of IDataReaderStubFactory.GetDataReaderStub() comes down to:
1. Loading/deserializing DataTable from this embedded resource file
2. Returning DataTable.CreateReader().
And that is it. I will write a detailed post about it in a day or two on my blog (if there is interest).
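A sketch of those two steps, assuming the DataTable was saved with WriteXml using XmlWriteMode.WriteSchema (the resource name here is made up for illustration):

```csharp
using System.Data;
using System.IO;
using System.Reflection;

// Hypothetical stub factory that deserializes a DataTable embedded in
// the test assembly instead of hand-building the schema in code.
internal class ProjectFetchFromXmlStub : IDataReaderStubFactory
{
    public SafeDataReader GetDataReaderStub()
    {
        DataTable stubTable = new DataTable();
        using (Stream s = Assembly.GetExecutingAssembly()
            .GetManifestResourceStream("Tests.Data.ProjectFetch.xml"))
        {
            // ReadXml restores both schema and rows when the file was
            // written with XmlWriteMode.WriteSchema.
            stubTable.ReadXml(s);
        }
        return new SafeDataReader(stubTable.CreateDataReader());
    }
}
```

With this in place, adding a new stub is just a matter of capturing another XML file, with no per-object schema-building code.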
I apologize for such a long post, but I believe that some of us who need a unit test solution where all of the dependencies of the CSLA test target are mocked might find this technique useful.
Nermin
Copyright (c) Marimer LLC