Why not use a State/PropertyBag for BO's

Old forum URL: forums.lhotka.net/forums/t/1326.aspx


SonOfPirate posted on Monday, September 25, 2006

We are in the process of developing a handful of ASP.NET controls and a question was raised while working with the ViewState object:  Why not use a similar technique to store BO properties in lieu of hard-coded private variables?

By itself, the answer is pretty clear: simplicity.  But when you add N-Level Undo capability, etc., an argument can be made that having all of the object's property values contained in a state/property bag may not be a bad idea and may make some of these "back-end" processes more efficient by reducing (some of) the dependence on reflection.  The only difference between what is being done with a server control and a BO is that the current state of the former is persisted to the HTTP Response stream while the BO's state goes into memory.  But essentially, the concepts are the same, are they not?

And, given that, wouldn't it be better to implement the same type of architecture for saving and loading state that is used with web controls (LoadViewState/SaveViewState), whereby each object can be made responsible for managing how the state bag is used?  This would follow along with Rocky's rationale for not using a separate persistence object between the BO and DB - encapsulation.

So, I'm looking for opinions on these subjects so I can help run the idea through my mind (and make a better argument for or against the concepts).

Thanks in advance.

DansDreams replied on Monday, September 25, 2006

Not only that, but we could forget all that notion of rebinding on updates since the object the UI was directly bound to would never really be changing.

This topic has come up in a few flavors.  I don't want to speak too much for Rocky in case I'm not remembering his comments accurately, but I think he has considered it himself.

ajj3085 replied on Monday, September 25, 2006

Haven't thought about this before...

RockfordLhotka replied on Monday, September 25, 2006

I have considered it at various times, yes. It is a compelling idea, no doubt about it.

There are two primary reasons I haven't gone down this route: performance and complexity.

The performance issue comes into play because a property bag is typeless. Basically in memory it is a dictionary or hashtable, serialized to a blob. But the actual data values are all stored as type Object - which means boxing for value types and a cast on every read, a real performance cost for applications.
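For illustration only, here is a minimal sketch (in Java rather than the C#/VB of the thread, with entirely hypothetical names) of the kind of typeless bag being described - note that the value is boxed on the way in and must be cast on the way out:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a typeless property bag: every value is stored as
// Object, so value types are boxed on write and cast on every read.
public class PropertyBagSketch {
    private final Map<String, Object> state = new HashMap<>();

    public void set(String name, Object value) {
        state.put(name, value);   // an int is boxed to Integer here
    }

    public Object get(String name) {
        return state.get(name);   // caller must cast; no compile-time check
    }

    public static void main(String[] args) {
        PropertyBagSketch bag = new PropertyBagSketch();
        bag.set("Quantity", 42);
        // The cast is verified only at runtime - a wrong cast throws
        // ClassCastException instead of failing to compile.
        int quantity = (Integer) bag.get("Quantity");
        System.out.println(quantity);   // prints 42
    }
}
```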

The complexity issue is two-fold.

Within the framework, there's complexity in "deserialization" (not insurmountable, but not trivial either). After an update, the data-object-graph comes back, with some objects gone, others altered. Somehow the corresponding business objects need to be recoupled with their data objects from the propertybag. There are solutions to this, but it isn't a drop-dead trivial problem to solve in the general case. Especially when you consider database-driven ID values, so you can't count on an object's ID value being consistent through the update process...

The other complexity is in your code. Again, there are solutions. The issue is that the propertybag is typeless, yet it really should be type-safe. There's no way to get compile-time type checking, but you can get runtime type checking by doing a cast on any values as they are put into the propertybag. Either the business developer must do this, or the propertybag class has to store data type information along with the name of each field. All of which is doable, but increases the amount of metadata stored in memory - and in any case you lose compile-time type checking, which is a high cost to pay imo.
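A hedged sketch (again Java, hypothetical names) of the second option Rocky describes - storing the declared type alongside each field name and verifying it at runtime:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a runtime-type-checked bag: the extra Map of Class objects is
// exactly the additional per-field metadata mentioned above, and the check
// still happens only at runtime, never at compile time.
public class TypedPropertyBag {
    private final Map<String, Class<?>> types = new HashMap<>();
    private final Map<String, Object> values = new HashMap<>();

    // the business developer declares each field's type once...
    public void define(String name, Class<?> type) {
        types.put(name, type);
    }

    // ...and every write is verified against that declaration
    public void set(String name, Object value) {
        Class<?> declared = types.get(name);
        if (declared == null || !declared.isInstance(value)) {
            throw new IllegalArgumentException(
                "Field '" + name + "' does not accept " + value);
        }
        values.put(name, value);
    }

    public <T> T get(String name, Class<T> type) {
        return type.cast(values.get(name));
    }
}
```

This moves the failure from the point of use to the point of assignment, but a typo or wrong type is still a runtime error rather than a compile error - which is the cost described above.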

SonOfPirate replied on Monday, September 25, 2006

Yeah, I'd thought about the typeless issue(s), but do you feel that the performance hit from using reflection to copy and restore the object's state is smaller than the hit from using a typeless property bag?  Plus, using virtual LoadCacheState/SaveCacheState methods would allow individual classes to manage their own state rather than having a one-size-fits-all approach in the base class.  Your thoughts?

RockfordLhotka replied on Monday, September 25, 2006

Except that I don't use reflection to copy/restore the object's state (well, except for n-level undo, but that's running on a client workstation and so has access to effectively unlimited CPU). In any shared context (on a web/application server), reflection is not used to copy fields.

I really don't want to go back to GetState/SetState (and corresponding GetSuperState/SetSuperState) methods like I used in VB6, no. One of the primary benefits of .NET over VB6 is its built-in serialization. Throw that out the window and one of the major benefits of .NET disappears (imo anyway).

Now if I invented a new programming language, I might have the compiler emit strongly-typed serialization code along that line. And obviously you could do it with a code generation tool today. But one of my principles is to make sure that CSLA supports hand-coded classes. And manual construction of GetState/SetState code is just asking for trouble. I know, because I had to support the VB6 framework for years - and that was one key point of pain for everyone. It is far too easy to forget to add those two lines of code for serialization/deserialization when you add a field/property to your object... Then you have this hard-to-debug bug, and you send an email to Rocky, who suggests that you check those routines, where you find two missing lines of code and feel like a fool. Just not a good cycle :D
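To make that failure mode concrete, here is a sketch (Java, purely illustrative names) of the hand-written GetState/SetState pattern; the comments mark exactly where the forgotten lines hide:

```java
import java.util.HashMap;
import java.util.Map;

// Hand-maintained state copying of the kind described above. Adding a
// field means remembering to touch BOTH methods; forget one line and the
// field silently loses its value across a save/restore cycle.
public class CustomerSketch {
    private String name;
    private int creditLimit;
    private String region;   // field added later...

    public void setName(String n) { name = n; }
    public String getName() { return name; }
    public void setRegion(String r) { region = r; }
    public String getRegion() { return region; }

    public Map<String, Object> getState() {
        Map<String, Object> state = new HashMap<>();
        state.put("name", name);
        state.put("creditLimit", creditLimit);
        // ...but the matching line for region was forgotten here
        return state;
    }

    public void setState(Map<String, Object> state) {
        name = (String) state.get("name");
        creditLimit = (Integer) state.get("creditLimit");
        // ...and here, so region quietly comes back as null
    }
}
```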


So I'll put it this way: if I either chose to create a new programming language (which I do keep toying with), or decide to only support code-generated classes, then I'd fully consider the idea of explicit serialization methods - because no human would ever write or maintain them. But for the next many months it appears all I'll be doing is trying to keep up with Microsoft's ridiculous rate of change within the .NET framework...

ajj3085 replied on Tuesday, September 26, 2006

RockfordLhotka:
Now if I invented a new programming language, I might have the compiler emit strongly-typed serialization code along that line.


Would the language be a .NET language? :)

RockfordLhotka:
decide to only support code-generated classes, then I'd fully consider the idea of explicit serialization methods - because no human would ever write or maintain them.


Yikes.  I don't code gen my classes.  I create the stubs through the class designer and then type away. 

I've also maintained explicit serialization methods in VB6... and it was NOT fun.

SonOfPirate replied on Tuesday, September 26, 2006

I appreciate the in-depth responses and explanations.  It certainly looks like the consensus is not to use a property bag, and I can understand why from all of the previous posts.

I do have one situation where it looks like we will still go this route because the class is designed to be fully extensible (long story) but this is only one use case - and there are always exceptions to the rule, right?

Thanks for the feedback.

DansDreams replied on Tuesday, September 26, 2006

I guess I was thinking of this a little differently.  If we just considered a design whereby, by definition, any BO was really an aggregation of its own behaviors and an object containing its supporting data, then there really isn't a concept of serializing the data in and out of the data object. And the DataObject and its fields would be as strongly typed as is currently the case with a single BO.

I would think it becomes mostly just a matter of where code like
_myField = dr.GetString("somefield")
goes to get it to the point of working.  The public BO's properties change, obviously, so the get for CustomerLastName is
return _myDataObject.CustomerLastName;

And so on.  But all that seems fairly trivial and that on top of some seemingly relatively minor changes in the data access sections and you're set up to start using this new paradigm.  What major issue am I missing?

Now, we might entertain the notion that we're breaking some of the BO's encapsulation, but really this wouldn't be any different than what we end up with using something like NHibernate, is it?
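A rough sketch (Java, all names hypothetical) of the split being proposed - a plain, strongly typed DO holding the state, with the BO's properties delegating to it:

```java
// Sketch: the BO's properties delegate to a plain, strongly typed DO.
public class CustomerBO {
    // The "DO": nothing but strongly typed state, no behavior.
    public static class CustomerData {
        public String customerLastName;
    }

    private CustomerData data = new CustomerData();

    public String getCustomerLastName() {
        return data.customerLastName;      // property delegates to the DO
    }

    public void setCustomerLastName(String value) {
        // validation/business rules would run here first
        data.customerLastName = value;
    }

    // state externalization: hand the DO to a serializer, or swap in one
    // freshly loaded by the data access code
    public CustomerData getData() { return data; }
    public void setData(CustomerData d) { data = d; }
}
```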

RockfordLhotka replied on Tuesday, September 26, 2006

On the surface this idea always seems attractive to me. But I struggle with some issues.

DansDreams replied on Wednesday, September 27, 2006

There ya go worrying about the details of how it would actually work.  The vision I seen in my mind was so doggone perty until ya went and done that.

Seriously, I would think the easy solution would be to say yes there's a 1:1 even on the child objects, and each BO was responsible for its own persistence such that the BO object graph wasn't sent to the app server but each would be responsible for sending its DO off in some kind of massive recursive loop.  But then that would make for one heck of a chatty application, even if you got past the redesign based on the current paradigm of assuming the DataPortal calls go off to the "server", wouldn't it?

ajj3085 replied on Wednesday, September 27, 2006

Whoa there.

Not all of my BOs have a one-to-one with the tables they use.  For example, a Person belongs to a department, but I don't encapsulate that relationship in a PersonDepartment class.  Doing so would be designing according to data, not behavior. 

You also force people to give up any chance of optimization.  For example, I once optimized SQL calls by building a long list of calls into a single string and then sending that all at once.  That is faster than sending single commands one at a time.  (This was pre-Csla for me.)
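That batching optimization can be sketched roughly like this (Java, illustrative only; real code should use parameterized commands to avoid SQL injection):

```java
import java.util.List;

// Sketch of batching several SQL statements into one string so they go to
// the server in a single round trip instead of one call per statement.
public class SqlBatchSketch {
    public static String buildBatch(List<String> statements) {
        StringBuilder batch = new StringBuilder();
        for (String statement : statements) {
            batch.append(statement).append(";\n");
        }
        return batch.toString();   // send once, instead of N round trips
    }
}
```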

So no, a property bag in my opinion isn't worth these restrictions.

Andy

RockfordLhotka replied on Wednesday, September 27, 2006

ajj3085:
Whoa there.

Not all of my BOs have a one-to-one with the tables they use.  For example, a Person belongs to a department, but I don't encapsulate that relationship in a PersonDepartment class.  Doing so would be designing according to data, not behavior. 



I don't think this is actually an issue. Externalizing your BO data through a DO doesn't imply any coupling of the data structure to a table. That DO still has to be mapped to and from the database, just like your object's data is today. So the separation between data structure and object model remains good.

ajj3085 replied on Wednesday, September 27, 2006

True, but with the DAL I have, I'd have to add support for such mapping.  Currently my DAL is pretty 'dumb.'  Each class represents a table or view, and only table objects can be used for data modification. 

So a change to Csla that forced a one-to-one would require me to either move to something like NHibernate or add support for mapping to my DAL.  One of the things I like about Csla is that it allows a lot of flexibility in how you do things 'behind the scenes.'  I know that if you use Csla you need to follow some restrictions, but it's also nice that there are so few.

RockfordLhotka replied on Wednesday, September 27, 2006

DansDreams:

There ya go worrying about the details of how it would actually work.  The vision I seen in my mind was so doggone perty until ya went and done that.

Seriously, I would think the easy solution would be to say yes there's a 1:1 even on the child objects, and each BO was responsible for its own persistence such that the BO object graph wasn't sent to the app server but each would be responsible for sending its DO off in some kind of massive recursive loop.  But then that would make for one heck of a chatty application, even if you got past the redesign based on the current paradigm of assuming the DataPortal calls go off to the "server", wouldn't it?



Yes, I agree that there'd need to be a 1:1 between BO and DO.

It is also my view that the DOs themselves would not maintain an object graph. In other words, a DO wouldn't reference another DO. Cross-object references and relationships are the job of the BOs.

To make serialization practical, each BO would need to have a mechanism to accept and return its DO (externalization of state). But more importantly, it would need to have a mechanism to return references to its child BOs in a way that some serializer/deserializer could scan the object graph - ideally without the use of reflection (though I should point out that all of Microsoft's serializers do use reflection...). This would allow the serializer to create a dynamic object graph of all the DOs, probably storing them in a nested set of ArrayList or Hashtable objects.
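One possible shape for that mechanism (a sketch only, in Java, with hypothetical names): each BO exposes its DO and its child BOs through a small interface, and the serializer walks the graph collecting DOs into a list without any reflection:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: BOs implement a small interface so a serializer can walk the
// object graph and gather every DO without using reflection.
public class GraphWalkSketch {
    public interface StateSource {
        Object getDataObject();           // the BO's externalized state (DO)
        List<StateSource> getChildren();  // child BOs, for graph traversal
    }

    // Recursively collect every DO in the graph, depth-first.
    public static List<Object> collect(StateSource root) {
        List<Object> dataObjects = new ArrayList<>();
        dataObjects.add(root.getDataObject());
        for (StateSource child : root.getChildren()) {
            dataObjects.addAll(collect(child));
        }
        return dataObjects;
    }
}
```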

The true complexity comes in with deserialization. This process would become one of matching the DOs from the ArrayList/Hashtable back into the actual BO object graph. Presumably the root node is easy - root DO goes into root BO. But after that things get very complex. During any update, some DOs might go away (delete operations), get new id values (insert operations) or remain relatively consistent (update operations). For the delete/insert scenarios, it is not entirely clear how to use the data in a DO to find the correct BO.

This is a solvable problem btw. In the case of a delete, a DO still needs to come back to the client, representing the deleted object and containing the data needed to re-match it to its BO. This information is sufficient to delete the matching BO. In the case of an insert, an artificial, unique id must be generated before serialization, and that key must be maintained by both the BO and DO throughout the process so it can be used to re-link them during deserialization.
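The artificial-key idea can be sketched like this (Java, hypothetical names): a generated key is stamped on each BO/DO pair before serialization and survives the round trip, so even an inserted row - whose database id changes on the server - can be matched back to its BO:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch: match DOs back to BOs by a client-generated sync key rather
// than the database id, which may change during an insert.
public class SyncKeySketch {
    public static class DataObject {
        public final String syncKey;   // stable across the whole update
        public int databaseId;         // may be (re)assigned by the server
        public DataObject(String key) { syncKey = key; }
    }

    public static void main(String[] args) {
        // client side: stamp the DO and remember which BO owns it
        Map<String, String> boBySyncKey = new HashMap<>();
        DataObject dobj = new DataObject(UUID.randomUUID().toString());
        boBySyncKey.put(dobj.syncKey, "orderLineBO");

        // server side: the insert assigns a brand-new database id
        dobj.databaseId = 9876;

        // client side, during deserialization: match on syncKey, not id
        String owningBo = boBySyncKey.get(dobj.syncKey);
        System.out.println(owningBo + " -> database id " + dobj.databaseId);
    }
}
```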

Alternately, you could (perhaps) trust that the DO arrays remain in the same positional order as the BO graph traversal. In general terms this would probably work, but it does put some level of uncertainty into the process (while improving both performance and simplicity rather a lot).

Copyright (c) Marimer LLC