Help me understand why ObjectFactory is a good idea?

Old forum URL: forums.lhotka.net/forums/t/7924.aspx


Mr.Underhill posted on Sunday, November 01, 2009

Let me give some context for this discussion before I begin...

I've been using CSLA .NET for at least 3 years now; however, I stopped following the latest and greatest versions/features because I was working on a large project.

So here I am learning the new stuff. I'm following the Silverlight training videos, and I'm currently on video #3, where Rocky is talking about the (new to me, at least) ObjectFactory and how it helps separate the data access layer into a different class, and he admits this breaks encapsulation...

I'm literally watching this video right now; I have it open and paused on my other screen and switched over here to post this message...

I might understand it better by the end of the video, but I just don't get why you would bother splitting your object and breaking encapsulation by introducing this new class... What is the advantage, and when is it a good idea to do this?

Thanks in advance for clarifying this fundamental concept

JonnyBee replied on Sunday, November 01, 2009

Hi,

Some like the encapsulation and others prefer to decouple the layers. I like ObjectFactory because, in my opinion, it gives me a cleaner separation of layers and "opens up" the code to make it easier to write unit tests.

BOs don't know anything about data access code - just the business logic (validation, authorization) and static factory methods (which call the DataPortal).

The DataPortal is configured at runtime with which FactoryLoader to use; it uses the factory loader to get an instance of the factory and routes the call to it. When writing/executing unit tests you can plug in a "mock" ObjectFactory that lets you test just the bits you need, with no change needed in the BOs.
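
For illustration, a minimal sketch of that wiring (the Person/PersonFactory names, the namespaces and the loaded value are all invented for the example):

    using System;

    // Business object: no data access code, just business logic and a
    // pointer to its factory via the ObjectFactory attribute.
    [Csla.Server.ObjectFactory("MyApp.DataAccess.PersonFactory, MyApp.DataAccess")]
    [Serializable]
    public class Person : Csla.BusinessBase<Person>
    {
      public static readonly Csla.PropertyInfo<string> NameProperty =
        RegisterProperty<string>(c => c.Name);
      public string Name
      {
        get { return GetProperty(NameProperty); }
        set { SetProperty(NameProperty, value); }
      }

      // Static factory method - just calls the DataPortal.
      public static Person GetPerson(int id)
      {
        return Csla.DataPortal.Fetch<Person>(new Csla.SingleCriteria<Person, int>(id));
      }
    }

    // Lives in the DAL assembly; the DataPortal's FactoryLoader routes the
    // Fetch call here (or to a mock factory in unit tests).
    public class PersonFactory : Csla.Server.ObjectFactory
    {
      public Person Fetch(Csla.SingleCriteria<Person, int> criteria)
      {
        var obj = (Person)Activator.CreateInstance(typeof(Person), true);
        // ... read the real values from the database here ...
        LoadProperty(obj, Person.NameProperty, "name from database");
        MarkOld(obj);  // the factory is responsible for metastate
        return obj;
      }
    }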

When writing code for Silverlight and using encapsulation, you will either have to put the DAL code in a separate class or use compiler directives to exclude the DAL code from compilation, in order to share the same classes between the Csla.NET and CslaLight projects.
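
A tiny illustration of the compiler-directive option (a fragment from a class file linked into both the .NET and Silverlight projects):

    #if !SILVERLIGHT
      // Server-side data access - compiled out of the Silverlight assembly.
      private void DataPortal_Fetch(Csla.SingleCriteria<Person, int> criteria)
      {
        // ADO.NET / EF code here
      }
    #endif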

RockfordLhotka replied on Sunday, November 01, 2009

ObjectFactory was originally created in preparation for future ADO.NET EF features that I hope will come to exist.

But it turns out to be pretty nice for some other DAL scenarios - especially if you write a little of your own framework code to extend the ObjectFactory base class for your particular data access model.

Honestly though, I still really like the DataPortal_XYZ model, especially when coupled with a repository pattern or some other abstraction scheme for the DAL code itself. The advantage of the DP_XYZ model is that the data portal does a lot more work for you automatically, which is something I really like.

rfcdejong replied on Monday, November 02, 2009

I chose CSLA because it has the ability to use an ObjectFactory; if that didn't exist I wouldn't be using CSLA, that's for sure. I also don't use FieldManager.UpdateChildren().

The main reason is that our projects need to keep data access separated from business logic; the "smart business objects" should not be so smart that they contain any data code.
Another reason is that I don't want client code to reference any data assembly.

Though I should say that I created my own abstraction layer over every single 3rd-party component/framework. Three frameworks are in use: OPF3 + CSLA + CAG.

(PS: When I say "I", I mean my company and the teams we have.)

RockfordLhotka replied on Monday, November 02, 2009

I think the ObjectFactory approach is a fine one, as long as you create some abstractions of your own. CSLA merely provides the basic toolkit (the ObjectFactory base class), but not (in my view) enough abstraction to provide for consistency or productivity when building a DAL.

The trick, of course, is that the way you'd create the abstraction is different for different types of data access technologies.

But ultimately, I wouldn’t want to lose the automation provided by the data portal in terms of correctly managing the metastate of my objects (IsNew, IsDirty, IsChild, etc).

I put a lot of work into creating the child data portal concept because one of the biggest pain points with using older versions of CSLA was that people didn’t manage the metastate correctly – they simply forgot to call MarkAsChild(), MarkNew() and so forth. The child data portal eliminates that source of pain.
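
For example, a fragment showing the difference (Order, LineItemList and the property names are invented):

    // In the root object's fetch: FetchChild routes to the child's
    // Child_Fetch and handles MarkAsChild/MarkOld automatically.
    private void DataPortal_Fetch(Csla.SingleCriteria<Order, int> criteria)
    {
      // ... load the order's own fields ...
      LoadProperty(LineItemsProperty,
        Csla.DataPortal.FetchChild<LineItemList>(criteria.Value));
    }

    // In LineItemList - no manual metastate calls needed:
    private void Child_Fetch(int orderId)
    {
      // ... load rows and add LineItem children ...
    }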

If you use code-gen, or if you create richer ObjectFactory subclasses for building your DAL, you can automatically and correctly manage the metastate too. But if all you do is build a DAL using the basic ObjectFactory base class, you are left managing all this yourself – in which case you are back in that scenario where it is easy to forget to do the right thing because it is all manual.

Additionally, you can easily get the separation between DAL and business layer, including not having the business layer reference the DAL, by using a DI or repository model.

The last couple large projects Magenic has done with CSLA have used the DataPortal_XYZ model where those methods invoke an abstract DAL using the repository pattern and a DI container. That enables the use of mock data, unit testing of the DAL and efficient testing of the business layer (against the mock data). All without anyone having to manually deal with the metastate stuff.
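
A rough sketch of that shape (IPersonRepository, PersonData and the Ioc helper are invented names; any DI container would do):

    // Abstraction the business assembly depends on - no reference to the DAL.
    public interface IPersonRepository
    {
      PersonData Fetch(int id);
    }

    // Simple DTO returned by the repository.
    public class PersonData
    {
      public int Id;
      public string Name;
    }

    // In the business object:
    private void DataPortal_Fetch(Csla.SingleCriteria<Person, int> criteria)
    {
      // The container hands back the real DAL in production, a mock in tests.
      IPersonRepository repository = Ioc.Resolve<IPersonRepository>();  // hypothetical DI helper
      PersonData data = repository.Fetch(criteria.Value);
      using (BypassPropertyChecks)
      {
        Name = data.Name;
      }
      // The data portal still manages IsNew/IsDirty etc. automatically.
    }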

But I fully understand and appreciate the “purity” of using ObjectFactory instead, and won’t argue with people who suggest it is their preferred approach. I honestly don’t think it is as “pure” in most cases, because most people use it in a way that breaks encapsulation (their DAL uses ReadProperty() and LoadProperty() to directly manipulate the internal state of the business object – which is easy, but inelegant).

In the real world though, we often sacrifice purity and elegance in the name of productivity, performance and pragmatism. And I’m totally fine with that.

All I’m saying is that both models are, in my view, equally good if done right – it just depends on how much framework creation you want to do on your own to get up and running.

mbblum replied on Monday, November 02, 2009

RockfordLhotka:

The last couple large projects Magenic has done with CSLA have used the DataPortal_XYZ model where those methods invoke an abstract DAL using the repository pattern and a DI container. That enables the use of mock data, unit testing of the DAL and efficient testing of the business layer (against the mock data). All without anyone having to manually deal with the metastate stuff.


Is there a good example of the Dependency Injection (DI) with DP_XYZ in the samples? Or the CSLA video series?

Seeing an example would help my understanding and, hopefully, demonstrate a "best practices" approach.

Thanks, mbb

RockfordLhotka replied on Monday, November 02, 2009

The upcoming CSLA .NET video series will have a data access segment that will illustrate a lightweight (not DI, just basic factory-repository) approach to accessing a DAL from the DataPortal_XYZ methods.

Mr.Underhill replied on Monday, November 02, 2009

Rocky, when are you planning to release the CSLA.NET training videos?

Thank you all for the comments, it's more clear now.

RockfordLhotka replied on Monday, November 02, 2009

As soon as I can get them done :)

fussion_am replied on Monday, November 02, 2009

Has anyone run performance tests to see whether using a DI container introduces a significant performance hit? 'When I have time' I intend to do so... but it doesn't look like I will have time for a while...

JonnyBee replied on Monday, November 02, 2009

Hi,

"But I fully understand and appreciate the “purity” of using ObjectFactory instead, and won’t argue with people who suggest it is their preferred approach. I honestly don’t think it is as “pure” in most cases, because most people use it in a way that breaks encapsulation (their DAL uses ReadProperty() and LoadProperty() to directly manipulate the internal state of the business object – which is easy, but inelegant)."

Yes, developers should use BypassPropertyChecks and set properties. However, that assumes all fields are exposed as public properties, which may not be true (example: the timestamp properties in ProjectTracker that are only for "internal" use between the BO and DAL); such fields will require LoadProperty and ReadProperty for access. The same is also true for SmartDate properties when they are exposed as a string or DateTime property through GetPropertyConvert/SetPropertyConvert in the BO.
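
For example, in a load method inside the BO (a fragment; dr is assumed to be a Csla.Data.SafeDataReader and TimeStampProperty an internal-only managed property, like ProjectTracker's timestamp):

    // Public properties can be set through BypassPropertyChecks...
    using (BypassPropertyChecks)
    {
      Name = dr.GetString("Name");
    }
    // ...but a field with no public property needs LoadProperty directly.
    LoadProperty(TimeStampProperty,
      (byte[])dr.GetValue(dr.GetOrdinal("LastChanged")));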

RockfordLhotka replied on Monday, November 02, 2009

Actually I think the most pure solution is for the ObjectFactory to populate a DTO and call a well-known method on the business object to provide it with the DTO. The object will then load itself with data from the DTO.

This keeps the business object decoupled from the object factory, and minimizes the coupling of the object factory to the business object. Most importantly it preserves encapsulation.

As I said earlier though, that level of purity requires more work and increases overhead, but it avoids nasty coupling. Most people accept that level of coupling to avoid the complexity and performance consequences.
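
A minimal sketch of that DTO idea (PersonDto, LoadFrom and GetDtoFromDatabase are invented names; the factory methods would live in the ObjectFactory subclass in the DAL assembly):

    using System;

    // Simple DTO the factory populates from the database.
    [Serializable]
    public class PersonDto
    {
      public int Id;
      public string Name;
    }

    // On the business object - the one well-known door the factory uses.
    // (Shown public for brevity; in practice you'd restrict its visibility.)
    public void LoadFrom(PersonDto dto)
    {
      using (BypassPropertyChecks)
      {
        Name = dto.Name;  // the object loads itself, preserving encapsulation
      }
    }

    // In the ObjectFactory subclass:
    public Person Fetch(Csla.SingleCriteria<Person, int> criteria)
    {
      PersonDto dto = GetDtoFromDatabase(criteria.Value);  // hypothetical DAL call
      var obj = (Person)Activator.CreateInstance(typeof(Person), true);
      obj.LoadFrom(dto);
      MarkOld(obj);
      return obj;
    }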

JonnyBee replied on Tuesday, November 03, 2009

That would then translate into:
1. a well-defined business object
2. well-defined DTO contract objects for fetch/save
3. an object factory that operates on the DTO objects and maps them to DAL code (which may be separate Entity/ActiveRecord/Repository objects that go to the DB).

Seems to me that this introduces a new DTO layer (2) that must know all the properties and the state of the object (dirty, new, savable, deleted) in order to save/delete data.

The business objects define a contract to be used by the UI. I'd like to see a way of using the same contract from the DAL and have the object factory do a "mapping" between BO and DAL objects. This will of course incur coupling, but in my experience that seems to happen anyway.

As an example: ProjectTracker uses a TimeStamp marker on Fetch/Save/Insert. In L2EF you will not get the updated value until after ObjectContext.SaveChanges() has been called (when all changes are submitted to the DB). To get the updated value from an Update, the DAL code must set up a callback on OnPropertyChanged on the EF object and then update the timestamp value in the BO. And on Insert you will also need to send the new Id (a DB-generated value) back to the BO. Should the BO define a separate DTO for this, and separate update functions for Save/Insert that are specifically designed for use with one DAL?

I'd love to see a "smarter" ObjectFactory that would help manage state - but it's hard to envision all the ways a DAL will need to set values in the BO as new DAL technologies are introduced.

Jav replied on Tuesday, November 03, 2009

Talking about the DTOs:
Some of my object hierarchies are more than 5 levels deep, making the use of a DataReader quite cumbersome.
5-6 years ago, when I started using Csla, I found that the easiest way to populate those hierarchies was to get the data from the database using a DataSet. With proper DataRelations set up, my code can walk through the DataSet populating every object. This takes place within each object, exactly like with data readers, so there is no breach of encapsulation. The reverse process takes place when it comes to saving the data. All the code is generated.

I have a separate DataAccess assembly which fetches the data from the database and returns it to the DataPortal_Fetch of the root object as a DataSet.
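
A condensed sketch of that pattern (the table, relation and DAL names are invented):

    private void DataPortal_Fetch(Csla.SingleCriteria<Project, int> criteria)
    {
      // Separate DataAccess assembly returns everything in one DataSet.
      System.Data.DataSet ds = ProjectData.FetchProject(criteria.Value);  // hypothetical DAL call
      System.Data.DataRow projectRow = ds.Tables["Project"].Rows[0];
      using (BypassPropertyChecks)
      {
        Name = (string)projectRow["Name"];
      }
      // Walk the DataRelation; each child object loads itself from its row.
      foreach (System.Data.DataRow taskRow in projectRow.GetChildRows("Project_Tasks"))
      {
        // Tasks.Add(ProjectTask.GetTask(taskRow));
      }
    }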

I know, I know - it is old fashioned and maybe kinda 'country', but why can't we just call the DataSet a DTO and make our lives a little easier?

Javed

RockfordLhotka replied on Tuesday, November 03, 2009

I think the DataSet is a great DTO. Well, more like an entity object, but still. Microsoft did a beautiful job building an in-memory tabular data store with the DataSet, and they should be commended for it.

For better or worse however, the DataSet is clearly a legacy technology, and the focus is now on ADO.NET EF and similar technologies.

Fortunately for your scenario, something like EF will almost certainly provide you with the same general architecture and benefits you are getting from the DataSet. You can generate an EF query that returns an entity object graph that is >5 levels deep, and map that data into your business objects much like you do today.

Then again, "legacy" is all relative. Microsoft has not said they are going to sunset the DataSet, and even if they do say that at some point, they'll continue to support it for years. So if your solution works for you there's probably no reason to move off it any time soon.

Mr.Underhill replied on Tuesday, November 03, 2009

Wouldn't you agree that the purest solution would be to use your earlier suggestion of a repository pattern for the DataPortal_XYZ methods to interact with?

This still removes the coupling to the actual DAL implementation, and it gives the BO control over when to interact with the DAL.

The repository would return generic DTOs that the BO acts on to obtain the data and populate itself, in the case of a Fetch, for example.

What I don't like about the ObjectFactory/DTO approach is that you still have to keep that well-known method you mentioned private to avoid misuse of your business object's API, and in addition it seems that the ObjectFactory is somehow in "control" of the object rather than the other way around.

Any thoughts on this?

RockfordLhotka replied on Tuesday, November 03, 2009

I think you've summed up why I have two models.

You can make very compelling arguments for the BO invoking the DAL via various decoupling design patterns such as repository.

You can make very compelling arguments for the DAL assuming responsibility for creating/using the BO using many patterns employed by ORM tools.

I've always chosen to make CSLA as data access agnostic or neutral as possible. I believe this (and the UI neutrality) is one of the biggest strengths of CSLA. Supporting two of the most dominant data access architectures just extends that data access neutrality to another level, which I think is a fine thing.

Jav replied on Thursday, November 05, 2009

I haven't read up on EF yet, but if it is somewhat like a DataSet, would it not be possible for the DataPortal_Fetch method to call the DataAccess layer and just have the EF entity sent back? Once the entity is in-house, so to speak, the object can load its own properties.

This way the DataAccess layer is still separate, it can be unit tested by itself, and there is no breach of encapsulation.

Javed
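
(For reference, Jav's suggestion might look roughly like this; the entity and DAL names are invented:)

    private void DataPortal_Fetch(Csla.SingleCriteria<Person, int> criteria)
    {
      // The DataAccess layer runs the EF query and hands back the entity.
      PersonEntity entity = PersonData.FetchPerson(criteria.Value);  // hypothetical DAL call
      // Once the entity is "in-house", the object loads its own properties.
      using (BypassPropertyChecks)
      {
        Name = entity.Name;
      }
    }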

RockfordLhotka replied on Thursday, November 05, 2009

Jav, this is exactly how I use ADO.NET EF.

If you look at the data access video from the Silverlight video series (shameless plug: http://store.lhotka.net) you can see how the code uses EF to persist the data, almost exactly in the way many people have used a DataSet in the past.

Jav replied on Thursday, November 05, 2009

Fantastic.
I will be sure to watch the Data Access video (today actually) right to the end instead of saying "Aah! EF, well that's not for me, click!"

I am glad I asked the question. Thanks

Javed

Copyright (c) Marimer LLC