CSLA objects validation via external rules engine

Old forum URL: forums.lhotka.net/forums/t/7528.aspx


gwilliams posted on Friday, August 28, 2009

 I would like to get some feedback from all you CSLA people regarding the best approach to calling an external rules engine to validate properties within a CSLA business object.

 

Process Background

We are loading a User object with basic demographic information. This user is allowed specific thresholds on child attributes, e.g. the number of widgets based on specific contracts, eligibility based on specified contracts, and equipment based on needs.

 

The external rule component also has an interaction process. The rule can return a result that will require additional input and rerunning of the rule (question/answer).

 

In order to run these rules, specific properties from the User object must be passed to the rules web service. Most of the properties that need to be passed to the rules engine are within the User object; however, there are 3 additional properties held within a Contract object (the User object is not a child of the Contract object). All interaction with the rules web service goes through a custom component that implements the Strategy pattern (for what it is worth).

 

Questions:

There is a split within our development team as to where these rule calls should be made.

One team believes that the UI should interact with the rules via the custom component and then back populate the object after the results are returned.

I believe the CSLA object should drive the calls to the rules engine. In the case where the response from the rules engine needs to ask the user for more info, implement a child “RuleResponse” object which gets populated; the UI then checks whether anything is defined there and handles it if needed.
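A rough sketch of what such a “RuleResponse” child might look like. Everything here is hypothetical except `ReadOnlyBase<T>`, which is the actual CSLA base class for read-only objects; the member names are invented for illustration:

```csharp
using System;
using Csla;

// Hypothetical sketch only: the class and its members are illustrative names,
// not part of CSLA or the poster's actual code.
[Serializable]
public class RuleResponse : ReadOnlyBase<RuleResponse>
{
    private string _question = string.Empty;

    // The follow-up question returned by the rules engine, if any.
    public string Question
    {
        get { return _question; }
    }

    // True when the engine needs more input before the rule can complete.
    public bool RequiresUserInput
    {
        get { return !string.IsNullOrEmpty(_question); }
    }

    internal static RuleResponse FromEngineResult(string question)
    {
        var response = new RuleResponse();
        response._question = question ?? string.Empty;
        return response;
    }
}
```

The parent User object would expose a child property of this type; after each rule call the UI checks `RequiresUserInput`, gathers the answer if needed, and reruns the rule.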

 

One of the main concerns is with the possibility of the rule returning additional questions and how to handle this.

 

This is the first big application we are building with CSLA and we are still trying to figure out where everything should go.

I am curious how others have implemented CSLA and external processes.

dlambert replied on Friday, August 28, 2009

That's a fun one.

I used to work for a company that built a business rules engine with an interactive mode something like you describe. Integration was always a pretty huge chore for many of the reasons you indicate here. I'll throw out some thoughts, but I'd recommend you bat them around with your team to see if any of them make sense in your specific situation.

First, CSLA is all about business objects, so to even consider short-circuiting CSLA in your architecture doesn't make too much sense to me. The potential for your CSLA objects to become out-of-sync with "real" values seems huge, and you'd be relegating CSLA to nothing more than a persistence mechanism. If that's all you want, there are easier ways to do it (Subsonic, NHibernate, name your poison...).

I think the real question (assuming the BRE is a given) is whether or not to use CSLA... period.

If I were in this position, I'd lobby to build out a small prototype using CSLA talking to your BRE, and another where the UI is wired to the BRE and then some super-simple persistence layer (maybe even something like Astoria).

While building and evaluating these prototypes, watch for the stuff CSLA gives you beyond just a place to put your business rules... easy data binding, validation using standard interfaces, etc. There's a whole host of "it just works" stuff that's easy to forget about when approaching a new project, but if you're not careful, you'll end up building all this stuff in your UI layer. Again, I'd pass on the option where the UI is connected to the BRE and to CSLA, because I don't think you end up with most of the other benefits of CSLA in this case (the CSLA objects become less "real" than the BRE objects).

Final point: this is not a subject where you'll find a lot of cut-and-dried answers, so expect some vigorous debate!

Good luck.

gwilliams replied on Friday, August 28, 2009

Thanks for the quick response.

 

I have noticed some other discussions on this subject with no clear ruling either way.

 

With the exception of the interactive part, most of the rules will need to set a property of the BO based on the results of the call, and the results will vary based on the contract. At the onset of the project I wanted to keep as much of the logic as possible contained within the BO, due to having multiple interfaces (system to system).

There has been a lot of duplication within the code in other systems, which of course has caused a maintenance nightmare: you would make a change in one area but not in another that uses the same functionality.

I am new here and there has not been a lot of object reuse... lots of copy and pasting of code (rant for another day :))

 

I am a little confused as to the best approach for setting properties from the call to the rules engine. I think in CSLA 2.0 you could/would make the call within the setter of the property; I'm not sure how this is done in 3.6. Within the Fetch method, would I simply have a call to an internal method that would run the rule? Or would I use CommandBase and work it like the Exist() example?

rfcdejong replied on Saturday, August 29, 2009

It's possible, as I implemented it at my work.

We have "simple" dynamic validation rules where one property should have a value, or the value should be valid based on its own input. It's harder to validate a property based on other property values, but we'll need that soon enough, dynamic of course.

By dynamic I mean "stored in a database", as a "rule engine".

We also have the authorization rules dynamically provided to the business objects.

Almost everything is implemented in an abstraction layer on top of the CSLA objects, and since we are using object factories and our own mapping, it's easy to provide the business rules from within the ObjectFactory when creating a business object.

gwilliams replied on Monday, August 31, 2009

This is an interesting dilemma.

 

We are rewriting this application because the prototype had business logic across all boundaries (UI, business classes, and stored procs).

At the outset of the project we all agreed that the business logic should be within the CSLA business objects, regardless of whether we “used class files” or a Business Rules Engine (BRE).

 

Now with the implementation of the rules engine I am seeing one team passing values from the CSLA business object(s) to the external rules engine and having the UI take action on the results. i.e. show a button or form to get user input.

 

I believe we can do the same thing through the CSLA objects and have the UI check the results of the broken rules or, in the case of a question being returned, a child BRE_Results object. If a question was returned, then show a popup, get the response, and take action there.

 

As was stated previously, I believe the CSLA objects' validity is going to get out of sync very fast.

 

dlambert replied on Monday, August 31, 2009

Right. You either want to be in a position where the CSLA object sits between the UI and the BRE so it can really manage the state of the object, or you want to design so that the CSLA object doesn't have to be right all the time (and in this case, you have to ask what CSLA is really doing for you any more).

It sounds like you're drifting toward a model where the CSLA object reads out of the DB, then hands the object over to the UI (or BRE) for display. Changes made in the UI are validated through the BRE, and when the UI and the BRE are done making changes, the object is routed back through CSLA to be saved in the DB.

First question: is this close, or am I missing something?

Second question: at this point, what benefits are you really getting from CSLA? (given that you can't count on the CSLA object to really be "right" any more)

RockfordLhotka replied on Monday, August 31, 2009

CSLA is very much designed around the OO concepts of encapsulation and abstraction, and the core idea that objects are designed around behavior first, and data second.

Rules engines follow a very different philosophy, where objects are dumb, passive data containers, and some rule-engine/function-library/whatever examines and modifies data in those data containers on request.

The upside of the OO approach is that the rules and related data are encapsulated and the UI developer works against a set of "black box" objects that abstract the business processing.

The upside of the rules engine/function library approach is that there's separation of data and behavior. This makes it easier to shuffle the data around arbitrarily, and apply different sets of behaviors against that data.

I spent the first third of my career in the function library world, and it works fine, especially in a terminal-based (or now web-based) model that requires little interactivity.

The problem with the function library model in today's non-web world (Silverlight, WPF, Windows Forms) is that data binding is so powerful you don't want to live without it. But data binding is built assuming that the object implements a set of interfaces so the object can interact in a rich and immediate way with data binding.

Since rules engines are function libraries that are invoked to do batch processing against a set of dumb objects, they don't map well to the interactive UI world. You don't usually want to engage the rule pipeline just because the user changed one field, or pressed one keystroke...

And even if you can afford to engage the entire pipeline per field edit, your dumb object is what's bound to the UI, and it won't have the necessary smarts to interact with data binding, so you lose many of the cool data binding behaviors.

This is an end-to-end scenario issue really.

Microsoft has created the UI technologies assuming interactive/smart objects. Rules engines assume dumb objects. ORM technologies like EF work best with dumb objects, but are evolving to support smart objects.

CSLA .NET is designed to support the smart object model, and therefore the interactive UI first and foremost. It works with the block mode web world, but it really sings with the smart client world.

But you can probably see where the philosophical difference between the smart OO model and the dumb data object model comes into play. And this is the core of the tension between CSLA .NET and external rules engines.

My general recommendation is to decide if you value smart client interactivity or not, then decide whether you want smart objects (CSLA style) or dumb objects (rule engine style) based on that. The two models can interact, but not easily...

gwilliams replied on Monday, August 31, 2009

You have all touched on the main point: if the business objects do not call the BRE, what benefit do we really get out of using them?

 

The idea from the start was to have the UI talk to the CSLA BO. The common validation rules would use CSLA validation, and we would write custom methods to interact with the BRE. These methods would be similar to the Exist() example, just calling the BRE instead of hitting a database.

 

At the outset of the project we knew a few things.

1. We needed a consistent way to manage views of the UI based on the logged-in user's roles.

2. Due to having multiple interfaces (web, windows, system), we need a standard object that all UIs can interact with. We do not want to re-implement logic for each UI.

3. Each object should be able to data bind.

4. We did not want the data to drive the object creation. (We were going to use the prototype database and refactor it as we went; we wanted to be able to run the application without the database if needed.)

5. Due to contract changes, some business rules would be driven through the BRE. This will allow the business sponsors to change logic without the need of a programmer.

6. We need a common structure to implement role-based security.

 

Now that we are getting into the project, we know a little more:

1. Even though an object can be in an invalid state, it still must be able to be saved.

a. Either set validation rules as a warning, not an error, or override Save() to take a bool param. If the UI tries to save (and the programmer didn't check broken rules), then prompt the user to save an invalid record; if accepted, call Save() with a true param, overriding all validations.

2. CSLA business objects are not being directly bound to the UI.

a. Developers are going old school and setting page control values from CSLA object properties.

3. There is more interaction with the BRE than what was previously defined; more question/answer type interaction.

a. This should be able to be handled with a standard child object that the UI can check after calling a CommandBase-type rule method. If the rule requires user interaction, the UI can handle that until the rule can return a result, which can then populate the CSLA object's properties.
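The forced-save idea in item 1a could be sketched roughly like this against CSLA 3.6-style APIs. This is a sketch under assumptions, not a prescribed implementation: the User class, Name property, and Save(bool) overload are illustrative, while BusinessBase<T>, RuleArgs, RuleSeverity, and BrokenRulesCollection are real CSLA types:

```csharp
using System;
using Csla;
using Csla.Validation;

[Serializable]
public class User : BusinessBase<User>
{
    private static PropertyInfo<string> NameProperty =
        RegisterProperty<string>(typeof(User), new PropertyInfo<string>("Name"));
    public string Name
    {
        get { return GetProperty(NameProperty); }
        set { SetProperty(NameProperty, value); }
    }

    protected override void AddBusinessRules()
    {
        ValidationRules.AddRule(NameRequired, NameProperty);
    }

    // Reports a warning instead of an error, so IsValid stays true and
    // base Save() will not reject the object.
    private static bool NameRequired(object target, RuleArgs e)
    {
        var user = (User)target;
        if (string.IsNullOrEmpty(user.ReadProperty(NameProperty)))
        {
            e.Severity = RuleSeverity.Warning;
            e.Description = "Name should be supplied.";
            return false;
        }
        return true;
    }

    // Callers pass true only after the user has confirmed saving a record
    // that still has warnings.
    public User Save(bool ignoreWarnings)
    {
        if (!ignoreWarnings && BrokenRulesCollection.WarningCount > 0)
            throw new InvalidOperationException(
                "Record has warnings; confirm with the user before saving.");
        return Save();
    }
}
```

With warning-severity rules the object remains saveable, and the explicit overload keeps the "are you sure?" decision in one place instead of scattered through the UI.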

 

Based on the feedback and the initial requirements, I believe we can handle having the CSLA objects call out to the BRE and handle any question/answer type interaction.

The UI developers will need to do some additional checks against CSLA properties after the rule call.

ajj3085 replied on Wednesday, September 02, 2009

You'd do it mostly the same way. Build a validation rule which, instead of validating, executes maybe a CommandBase instance to determine the value, then update your BO accordingly.

There is a drawback to this: while you may want to run actual validation rules on Create or Fetch, if you don't do anything, you'll run these update rules as well when you might not want to.
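A sketch of this suggestion in CSLA 3.6 style. CheckThresholdCommand, the ContractId/WidgetLimit properties, and the hard-coded engine result are all hypothetical stand-ins for the real BRE web service, and the rule assumes the business object defines matching managed properties:

```csharp
using System;
using Csla;
using Csla.Validation;

// Hypothetical command that wraps the rules-engine call so it runs on the
// application server via the data portal.
[Serializable]
public class CheckThresholdCommand : CommandBase
{
    public int ContractId { get; private set; }
    public int WidgetLimit { get; private set; }

    public static CheckThresholdCommand Execute(int contractId)
    {
        var cmd = new CheckThresholdCommand();
        cmd.ContractId = contractId;
        return DataPortal.Execute<CheckThresholdCommand>(cmd);
    }

    protected override void DataPortal_Execute()
    {
        // Real code would call the rules web service here; the constant is
        // a placeholder for the engine's answer.
        WidgetLimit = 10;
    }
}

// Inside the business object: a "validation" rule that delegates to the
// command and copies the result back. ContractId and WidgetLimitProperty
// are assumed to be defined on User.
public partial class User
{
    private static bool WidgetLimitRule(object target, RuleArgs e)
    {
        var user = (User)target;
        // Guard against the drawback mentioned above: skip the expensive
        // engine call when the rule fires during Create/Fetch, when the
        // object is not yet dirty.
        if (!user.IsDirty)
            return true;

        var result = CheckThresholdCommand.Execute(user.ContractId);
        user.LoadProperty(WidgetLimitProperty, result.WidgetLimit);
        return true;
    }
}
```

The IsDirty guard is one possible way to keep such rules from firing during Fetch; another option is a separate, explicitly invoked rule method rather than one registered with ValidationRules.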

drfunn replied on Wednesday, June 02, 2010

Although I'm sure you have made your decision by now (based upon the original posting date), I figured I would add a few other points to consider for those people still struggling with a similar decision.

First, a quick analogy: back in the early 80's, GM designed the new C4 Corvette from the ground up and, starting in 1984, produced one of the most problematic generations of the sports car in history. Although well-received by the public, they soon earned a reputation for being rattle traps and maintenance nightmares. When producing the next-generation Corvette, GM studied the problems of the past and realized that there were too many parts in the earlier cars. More parts meant more manufacturing to produce the parts themselves, more assembly time, and most of all, more things to go wrong.

In my experience, software is exactly the same. Every new "integration part" added to your application has a greater potential to add complexity and maintenance issues than it does to add true benefit. Now, before you poke fun at my generalization, hear me out. In a simple production fat-client app coded poorly to run on a desktop, you have UI screens with code in them. Despite the spaghetti code, when something goes wrong, you know you can just debug the exe source code, find the problem, and voila! Fixed. As the app grows, we want to organize the code a bit, so we separate the UI code from the business logic, add a new data layer, and BAM! Something still goes wrong. Although we now have 3 potential areas to debug (UI code, business logic code, data access code), the benefit we receive from the organization of the code more than outweighs the addition of a few "parts" to our solution. This is when adding "parts" is worth it.

Now the app grows and we put it on the web. We also separate the application into web and app tiers, and don't forget the DB server. Now we have added more "parts", and certainly the complexity and maintenance increase. But the ability to serve a greater number of users more reliably also increases. In this instance I like to say that "the juice is worth the squeeze".

Now let's say we have 3 options: CSLA the code, just add a rules engine, or CSLA the code with a rules engine.

1. We use CSLA, combine our business logic and data access code in the same objects, and inherit all of the additional benefits and features of CSLA that I know we didn't include when we undid the spaghetti code. Sure, we add a new framework of code we didn't build, but a whole community is available to support it, we know it's been proven, and we still combine a lot of separate layers into single smart objects. When rules change or bugs need to be fixed, developers know right where to go. (And yes, DEVELOPERS make the rules changes, despite the advertisements of many rules engines that sell the fact that non-developer types can change rules with ease.) In this scenario the CSLA "part" makes sense: "the juice is worth the squeeze".

2. Just add the rules engine. An easy one for me, really. Now we add a new technology, a new piece of software, a new way to code, a new "part" that our entire application must rely upon, and one that is (most likely) not native to our Microsoft architecture. Sure, we can get it to work, but what unknowns lie in getting the application layers to communicate, possibly (and probably) across strict firewalls? What other "parts" does the rules engine rely upon? In the perfect "rules engine" world, where business types are opening up spreadsheets and changing business rules, what happens if unexpected values get into those rules without the developers being aware of it? And in the end, what does the user see? Nothing. A user does not see the value in the rules engine because rarely does it add value to them. Added load-balanced web and app servers may not be obvious to the user either, but they add unseen value when the site stays available despite heavy load. Yes, rules engines with their Rete processing can speed up rule evaluation, but I'd argue that performance improvements will rarely be a valid argument for a rules engine. And the biggie: when bugs occur, we must debug the code, the code implementing the rules in the rules engine, and potentially the rules engine processing itself. Not overly difficult, but more pieces nonetheless. Basically, ask yourself: "Is the juice worth the squeeze?"

3. CSLA with a rules engine. Do you even really need this answer now? CSLA is simply a different approach to architecting a solution, and not one that should really be implemented alongside a rules engine.

Rules engines have their place, please don't get me wrong. But I find the percentage of applications that would truly benefit from them to be very low. I've heard the arguments, and they are similar to the J2EE vs. .NET religious debates. Rules engines specifically take great pride in advertising their ability to consolidate business logic for the enterprise and to make it available in such a manner that "non-developer" types can make changes without a need for code changes. I'd argue that that is what we have developers for.

Ultimately, integration efforts wind up being some of the most difficult hurdles in getting production apps working. Heterogeneous applications are here to stay and they are part of our industry, but we should try our best not to add complexity because it's "sexy". Just like GM realized after cutting the total part count in the C5 Corvette and producing the most reliable and well-respected Corvettes in history...

RockfordLhotka replied on Wednesday, June 02, 2010

It should be noted that CSLA 4 has a new rules system, which should make it easier to integrate with external rules engines. This still won't necessarily be trivial, but the new rules system should open up many new options.

Copyright (c) Marimer LLC