Hi
We are in the early stages of a project that will use CSLA as the framework and NHibernate for persistence. There are a number of ways to implement the two together, and being newbies we are not sure which route to take. Any feedback would be greatly appreciated.
One way is to extend CSLA's BO base classes to implement NHibernate, overriding the DataPortal_Fetch methods etc. (i.e. the same implementation as ProjectTracker.NHibernate in CSLAContrib). This requires deploying NHibernate (~2 MB) on the client.
Another way is to map DTOs in NHibernate, retrieve them via NH, and then copy their data into CSLA BOs. My understanding is that the advantage of that approach is that we wouldn't need to deploy NHibernate on the client. Secondly, there's a cleaner separation between the framework and persistence layers, so the source is easier to understand. The downside, however, is the overhead of copying at the data retrieval and persistence stages.
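For concreteness, here is a minimal sketch of what that DTO approach could look like. PersonDto and PersonDal are hypothetical names (not from CSLAContrib); the DAL is assumed to wrap NHibernate and run only on the app server.

```csharp
using System;
using Csla;

// Hypothetical names throughout: PersonDto and PersonDal are illustrative.
// The DAL wraps NHibernate and lives only on the app server.
[Serializable]
public class Person : BusinessBase<Person>
{
  private int _id;
  private string _name = string.Empty;

  public string Name
  {
    get { CanReadProperty("Name", true); return _name; }
    set { CanWriteProperty("Name", true); _name = value; PropertyHasChanged("Name"); }
  }

  // Runs on the app server when a remote data portal is used, so the
  // client never needs the NHibernate assemblies.
  private void DataPortal_Fetch(SingleCriteria<Person, int> criteria)
  {
    PersonDto dto = new PersonDal().Fetch(criteria.Value); // NHibernate under the covers
    _id = dto.Id;                                          // copy DTO data into the BO
    _name = dto.Name;
    MarkOld();                                             // object now reflects stored state
  }
}
```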
I guess one of the questions I'm asking is: has anyone tried the DTO implementation, and are there any pitfalls with that approach?
TIA
Devman
And if I read that correctly, by not altering the base class structure of CSLA itself you'll avoid a ton of headaches when moving to version 3.6. To support CSLA .NET for Silverlight, there are numerous changes to the structure of the CSLA base classes - all invisible to a normal user of the framework, but very visible if you've customized the code!
Also, if it is any consolation, DataMapper now uses dynamic method invocation to minimize its use of reflection (thanks to Ricky Supit), which should ease some perf concerns when copying data to/from objects using that technique.
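As a hedged illustration of that copy step (Person, PersonDto, and the GetPerson factory method are hypothetical names), Csla.Data.DataMapper matches public properties by name:

```csharp
using Csla.Data;

class DataMapperExample
{
  // Person, PersonDto and GetPerson are hypothetical.
  static void CopyExample()
  {
    var person = Person.GetPerson(42);
    var dto = new PersonDto();
    // Copy matching public properties by name; suppressExceptions = true
    // skips CSLA-only properties (IsDirty etc.) that have no counterpart
    // or no setter on the other side.
    DataMapper.Map(person, dto, true); // BO -> DTO
    DataMapper.Map(dto, person, true); // DTO -> BO
  }
}
```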
And finally, the new ObjectFactory concept in 3.6 is designed to make the use of ORM tools easier. Whether it helps with NHibernate I don't know - time will tell - but it opens up a much more flexible technique for creating/updating business objects. See my blog for info about ObjectFactory.
RockfordLhotka:And if I read that correctly, by not altering the base class structure of CSLA itself you'll avoid a ton of headaches when moving to version 3.6. To support CSLA .NET for Silverlight, there are numerous changes to the structure of the CSLA base classes - all invisible to a normal user of the framework, but very visible if you've customized the code!
I think you can do everything without the DTO approach simply by inserting a common BusinessBase<T>-derived class between the objects fetched by NHibernate and CSLA (e.g. NHibernateBusinessBase).
NHibernateBusinessBase would include NHibernate-aware versions of DataPortal_Fetch, DataPortal_Update, etc., but there is no need to modify the CSLA framework itself.
If you want to move to another DAL later on, you just swap out the one class. (This is basically what we did, starting with the NHibernate sample and converting it to use the Wilson ORM instead.)
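A sketch of what that intermediate class could look like; SessionFactory is a hypothetical static helper wrapping NHibernate's ISessionFactory, and the real CSLAContrib sample differs in detail:

```csharp
using System;
using Csla;
using NHibernate;

// An NHibernate-aware layer between CSLA and the business classes.
// SessionFactory is a hypothetical helper holding the ISessionFactory.
[Serializable]
public abstract class NHibernateBusinessBase<T> : BusinessBase<T>
  where T : NHibernateBusinessBase<T>
{
  // The data portal finds this by name on the concrete business class.
  protected void DataPortal_Fetch(object criteria)
  {
    using (ISession session = SessionFactory.OpenSession())
    {
      Load(session, criteria); // subclass pulls its data from the session
      MarkOld();
    }
  }

  // Each business class implements only its own mapping/criteria logic.
  protected abstract void Load(ISession session, object criteria);
}
```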
Yes, I have to deploy the WilsonORMapper.DLL on the client, although the Wilson DLL is tiny compared to NHibernate, a mere 200K. Our CodeSmith-generated DAL for our database (slightly modified Paul Welter templates), by contrast, is 1.2 MB.
My understanding of CSLA is that if you want the flexibility of deploying using however many tiers you want (including a single desktop), you put all the binaries for your BOL and DAL on each tier except for thin clients like a browser.
Ideally you wouldn’t have to put the DAL or ORM components/code on the client. Though with the technique you describe, that’s obviously required.
But if you have a separate DAL, and use a remote data portal, the idea is that the DAL assembly(ies) would only be deployed to the app server. If you use a local data portal, it is absolutely true that everything has to deploy to the client.
With the remote data portal, keeping that clean separation of dependencies is potentially very important if you want to move to Silverlight. Most DAL/ORM tools won’t run on Silverlight (probably ever), and so it is important to keep your business objects “pristine” and self-sufficient so they can deploy to that other platform.
Rocky
If that’s true (rsbaker0’s point above about putting the BOL and DAL binaries on each tier), then the new ObjectFactory stuff should make this even easier – because you could just have an NHibernate-aware DAL and a PW ORM-aware DAL and switch between them – possibly without needing to change base classes.
Rocky
While we're on the subject of NHibernate, you need to think about how you want to deal with NHibernate "session" management as well as concurrency issues.
When working directly with NHibernate (as I recall), you would keep an ISession object around and let NHibernate keep track of which objects had changed and persist them for you.
However, with CSLA and a remote data portal, would you be keeping the session around for the lifetime of the object? If not, how will you detect collisions between updates of the same object by different users?
My solution for the Wilson mapper was, ..., cough, ..., er, interesting. :)
Kevin Fairclough: For session management:
When the DAL is finished doing its stuff, the session will be closed. I was thinking of storing the Session in the LocalContext for parent/child batches etc., as sketched below. I will let CSLA do its job of tracking dirty, new, deleted, etc.
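A sketch of that LocalContext idea (SessionFactory and UpdateSelf are hypothetical). The session lives only for the duration of the root update, so the root and its children share one session and transaction:

```csharp
using System;
using Csla;
using NHibernate;

[Serializable]
public class Invoice : BusinessBase<Invoice> // hypothetical root object
{
  protected override void DataPortal_Update()
  {
    using (ISession session = SessionFactory.OpenSession()) // hypothetical helper
    using (ITransaction tx = session.BeginTransaction())
    {
      // Children read the session back out of LocalContext in Child_XYZ.
      ApplicationContext.LocalContext["NHSession"] = session;
      try
      {
        UpdateSelf(session);            // persist the root's own row
        FieldManager.UpdateChildren();  // cascades to the child objects
        tx.Commit();
      }
      finally
      {
        ApplicationContext.LocalContext.Remove("NHSession");
      }
    }
  }

  private void UpdateSelf(ISession session) { /* map/update the root DTO */ }
}
```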
I guess this is what confounded me when I was initially coming up with an architecture for handling CSLA+ORM.
Say you have a WinForms client + remote data portal, and it loads some objects that you fetch from your Session.
Ten minutes later, the user saves the data. How does the same Session get used to apply the changes?
This is the problem with these ORM technologies that are designed for 2-tier client/server scenarios. They really don’t work well for the web, web services or 3-tier client/server. This includes LINQ to SQL and ADO.NET Entity Framework btw – they are both really lame in this area.
There’s no realistic way to keep the context alive between stateless calls – that’s kind of the point of having a stateless/scalable server. So on every client request, the app server must somehow recreate the context, or work around the fact that it isn’t there.
When using LINQ to SQL, I’ve been doing all my insert/update/delete operations with stored procedures – not trying to let L2S do the ORM thing at all. It is just too painful to recreate the context, and it requires too much code to “spoof” the operation with an empty context.
Basically you have three options:
1. Recreate the context by requerying the database (ugh)
2. Create an empty context and “spoof” it by attaching entity objects to that context as new/edited/deleted items (doable, but messy code – see the sketch after this list)
3. Call stored procedures (L2S wraps them in strongly-typed methods, so this is easy and clean)
I suppose there’s also option 4: come up with some clever scheme to keep the original context alive and valid for x minutes/hours between client requests. But this seems highly unrealistic.
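Option 2 in code: a minimal LINQ to SQL sketch (MyDataContext and CustomerEntity are hypothetical generated types; the Attach overload relies on a timestamp/version column for its concurrency check):

```csharp
using System.Data.Linq; // LINQ to SQL

class SpoofExample
{
  // Attach entities to a fresh, empty context so it treats them as
  // new or modified even though it never queried them itself.
  static void SaveCustomer(CustomerEntity entity, bool isNew)
  {
    using (var ctx = new MyDataContext())
    {
      if (isNew)
        ctx.Customers.InsertOnSubmit(entity); // INSERT on SubmitChanges
      else
        ctx.Customers.Attach(entity, true);   // attach as modified
      ctx.SubmitChanges();                    // context generates the SQL
    }
  }
}
```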
How this translates to NHibernate I don’t know, but these are the primary options for L2S or EF.
Rocky
RockfordLhotka:
2. Create an empty context and “spoof” it by attaching entity objects to that context as new/edited/deleted items (doable, but messy code)
My concurrency model provided a unique solution that worked well with Wilson (and might also work with NHibernate).
I keep a lazy-loaded cache in each object that tracks the original value of any changed property. The ultimate purpose of this is to let the UPDATE SQL include the original values in the WHERE clause along with the key.
What it also lets me do is quickly reconstruct the original object from the changed one, so to do an update using the ORM:
(1) Reconstruct original object from one being saved
(2) Tell ORM to start tracking the original in an unchanged state
(3) Now apply the changed properties to the object being tracked by the ORM.
(4) Tell the ORM to persist the object. It won't know the difference between this sequence of updates and one in which it was tracking the object all along.
I get some other neat benefits from this infrastructure. For instance, I can tell if a specific property is dirty. More importantly, if you put a property back to its original value, it's not dirty any more.
I can also do user-configurable field-level auditing of any object (e.g. write who made the change and the before/after values to an auditing log), etc.
All this is done by a single BusinessBase<T>-derived class that sits between CSLA and my DAL classes, and I have found this to be so flexible that it's the main reason I recommend having your own common BB class.
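A stripped-down sketch of that original-value cache (all names hypothetical; the real class does more, such as feeding the audit log):

```csharp
using System;
using System.Collections.Generic;

// Intermediate base class that remembers each property's pre-edit value.
[Serializable]
public abstract class TrackedBusinessBase<T> : Csla.BusinessBase<T>
  where T : TrackedBusinessBase<T>
{
  private Dictionary<string, object> _originals; // lazy-created on first change

  protected void SetTrackedValue<P>(ref P field, P value, string propertyName)
  {
    if (Equals(field, value)) return;
    if (_originals == null)
      _originals = new Dictionary<string, object>();
    if (!_originals.ContainsKey(propertyName))
      _originals[propertyName] = field;        // remember the original once
    else if (Equals(_originals[propertyName], value))
      _originals.Remove(propertyName);         // back to original: not dirty
    field = value;
    PropertyHasChanged(propertyName);
  }

  // Per-property dirty check; also feeds the UPDATE ... WHERE original-value clause.
  public bool IsPropertyDirty(string propertyName)
  {
    return _originals != null && _originals.ContainsKey(propertyName);
  }

  protected object GetOriginalValue(string propertyName, object currentValue)
  {
    object value;
    return (_originals != null && _originals.TryGetValue(propertyName, out value))
      ? value : currentValue;
  }
}
```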
Kevin Fairclough: In this case I guess:
- Object initially lives in NH, retrieved into a DTO; dies when the Session is closed. (App server)
- Passed/copied into a CSLA BO. (App server)
- Modified by the user. (Client)
- Copied back into a DTO. (App server)
- Re-attached to a new Session and saved. (App server)
Optimistic concurrency issues are automatically dealt with: ...WHERE PK = pkVal AND VERSION = versionVal.
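A sketch of that last step, assuming the DTO is mapped with NHibernate's <version> element and a hypothetical SessionFactory helper; NHibernate throws StaleObjectStateException when the version check fails:

```csharp
using NHibernate;

class DtoSaver
{
  // PersonDto and SessionFactory are hypothetical.
  static void SaveDto(PersonDto dto, bool isNew)
  {
    using (ISession session = SessionFactory.OpenSession())
    using (ITransaction tx = session.BeginTransaction())
    {
      try
      {
        if (isNew)
          session.Save(dto);   // INSERT; assigns id and initial version
        else
          session.Update(dto); // re-attach; UPDATE includes the version check
        tx.Commit();
      }
      catch (StaleObjectStateException)
      {
        tx.Rollback();
        throw; // another user changed the row: surface the collision
      }
    }
  }
}
```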
This will work fine - it looks like you have a good handle on things. I wasn't sure how NHibernate allowed external objects to be attached to new Sessions, but as long as you can tell it whether the object is "New" or just being updated, this is a good approach.
You're exactly right about lazy loading also -- you have to do this yourself but it works fine with separate data portal calls.
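For reference, a sketch of do-it-yourself lazy loading with CSLA 3.5-style managed properties (Order, LineItemList and its GetByOrderId factory are hypothetical):

```csharp
using System;
using Csla;

[Serializable]
public class Order : BusinessBase<Order>  // hypothetical parent class
{
  private int _id;

  private static PropertyInfo<LineItemList> LineItemsProperty =
    RegisterProperty(typeof(Order), new PropertyInfo<LineItemList>("LineItems"));

  public LineItemList LineItems
  {
    get
    {
      // First touch triggers a separate data portal fetch; later reads are free.
      if (!FieldManager.FieldExists(LineItemsProperty))
        LoadProperty(LineItemsProperty, LineItemList.GetByOrderId(_id));
      return GetProperty(LineItemsProperty);
    }
  }
}
```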
I know it is small consolation, but Expert 2008 Business Objects is coming along – should be out in December – and it provides at least some coverage of these things (though mostly from the perspective of how they are implemented).
How you create your DAL though, that’s really up to you. Fine-grained or coarse-grained are both options, depending on your technology and approach. If you have a fine-grained DAL, you may call it many times. If you have a coarse-grained DAL (probably an ORM that creates/consumes your objects) you might make just one call.
In the new book I’m using LINQ to SQL, which is comparable to using the Entity Framework. The result is a fairly fine-grained DAL.
I’m hopeful that some future version of EF will allow for a more coarse-grained DAL – it would be really nice if EF could directly load data into the business objects, and perhaps that’ll happen someday. The hope that this will happen was my primary driver for adding the new ObjectFactory concept. Sadly that is not the case today…
So in 3.6 there are three primary models:
1. The 3.0-and-older model – the data portal calls DataPortal_XYZ and you do everything else
2. The 3.5+ model – the data portal calls DataPortal_XYZ and you use the child data portal to save code/effort and increase consistency
3. The ObjectFactory model – the data portal calls methods on a separate factory object that you write, and your factory object assumes all responsibility for creating and interacting with any business objects (see the sketch after this list)
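A hedged sketch of model 3; the factory method name follows the default Fetch convention, and PersonDal/PersonDto are hypothetical (this is not code from the book):

```csharp
using System;
using Csla;
using Csla.Data;

// The business class points at its factory; no DataPortal_XYZ methods needed.
[Serializable]
[Csla.Server.ObjectFactory("MyApp.PersonFactory, MyApp")]
public class Person : BusinessBase<Person> { /* properties as usual */ }

// The data portal instantiates this on the app server and calls Fetch by name.
public class PersonFactory : Csla.Server.ObjectFactory
{
  public object Fetch(SingleCriteria<Person, int> criteria)
  {
    var obj = (Person)Activator.CreateInstance(typeof(Person), true);
    PersonDto dto = new PersonDal().Fetch(criteria.Value); // hypothetical DAL
    DataMapper.Map(dto, obj);  // or direct field access if your ORM can do it
    MarkOld(obj);              // tell CSLA the object reflects persisted state
    return obj;
  }
}
```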
Option 2 is the simplest approach for most scenarios, and is the one I’m using in the book.
Option 1 is really like option 2, but requires more work. Or to put it another way, the reason for adding the child data portal was to help automate what everyone was already doing. So option 2 shouldn’t increase complexity – it should reduce it by letting you focus on the DAL interaction, not the CSLA plumbing.
Option 3 serves two goals. First, it hopefully enables some future scenarios with ORM tools – assuming the ORM can get/set fields in business objects directly. Second, it opens the architecture to be more flexible (though often requiring more work on your part), thus possibly making the TDD people happier because they can use DI and other patterns to do whatever they want. Again, it requires more code – but arguably the code is more testable/flexible/etc.
Rocky
Kevin Fairclough:
With regard to the NHibernate DLLs (about 2 MB including dependencies) copied to the client: we are/were originally basing our development on CSLAContrib's ProjectTracker.NHibernate, which I'm assuming is similar in architecture to rsbaker0's approach. This is great and it works, but it means we will have to deploy all the dependencies.
The ObjectFactory may remove that need, but we don't know how to use it yet. We are also not sure if we will need some sort of dependency injection as well as ObjectFactory.
Our DAL and DTOs do not exist at the moment; we are in design and looking into alternatives.
We are confused about calls to the DAL - would the DAL calls be fine-grained? Say the root object has a collection of five child objects and each child object is modified: the DAL would be called six times, passing individual rows to insert/update/delete. The Child_XYZ data portal methods confuse the calls further; there seems to be a lot of new *stuff* since the 3.0 eBook and we are having trouble finding the right way.
Any pointers or places to look would be appreciated.
That depends on how your DAL works.
When using something like LINQ to SQL, I would only create DTOs for the changed (insert/update/delete) objects. Those would be attached to the L2S data context object so the context knows which are new/update/delete. Then you tell the context to apply changes and it does the work.
But if your DAL requires access to all the data (even unchanged) then you’d have to create DTOs for everything.
Either way, remember that insert/update operations affect the original list. The operations often generate new ID values, timestamp values, etc. So you need to take the DTOs resulting from the update and merge their data back into your actual business object graph.
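A sketch of that merge step (ChildDto/Child and the ClientKey correlation value are hypothetical; freshly inserted rows have no database Id before the save, so you need some client-assigned key to match on):

```csharp
using System.Collections.Generic;
using System.Linq;

class MergeExample
{
  // After the update, the DTOs carry server-generated Id/Timestamp values;
  // copy them back into the corresponding business objects.
  static void MergeResults(IEnumerable<ChildDto> savedDtos, IList<Child> children)
  {
    foreach (var dto in savedDtos)
    {
      var child = children.First(c => c.ClientKey == dto.ClientKey);
      child.SetIdAndTimestamp(dto.Id, dto.Timestamp); // internal mutator on the BO
    }
  }
}
```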
Rocky
Copyright (c) Marimer LLC