Advice needed: business rules & incremental improvement

Old forum URL: forums.lhotka.net/forums/t/3445.aspx


david.wendelken posted on Tuesday, August 28, 2007

I just got caught in a catch-22 with the CSLA rules framework.

The CSLA rules architecture appears to violate a business rules principle I have, that of "incremental improvement".

Let's say that I have some pre-existing data and I have to add several new validation rules to the object.    And, of course, some of that pre-existing data violates the new rules.  And, of course, it may take some time to fix the data.  I can't just write and run a simple script to make this problem go away.

So, when I hydrate a BusinessListBase containing said business objects, some of them are in an invalid state from the start.

Let's say that Mary and Joe have invalid data in the Person objects that represent them.

I'm using a UI interface that lets me edit a single person's record at a time.

So, I correct Mary's data and press save.

Can't do it!  Why?  Because if I ask BusinessListBase if it is valid, it will return false - because Joe's data is still invalid.

I cannot save Mary's data because Joe's is invalid.  Nor can I fix Joe's data and save it, because Mary's data is invalid.

The only way to fix the problem via a screen is to get all the invalid objects on the same screen in a UI that allows all of them to be changed before any of them are saved.

I cannot change the rule to return a warning, because other objects use that rule method and need it to return an error upon violation.  The rule decides whether an error or a warning is issued, not the object using the rule.

Actually, a similar problem could occur in a single object.  Let's say that we now require valid state codes and zipcodes in our Person object, and that Mary has neither one of those fields valid.

When I hydrate the object instance representing Mary, it will have two broken rules.  If we know Mary's state but don't know her zipcode yet, we won't be able to incrementally improve the data by fixing the state code now.  We will have to wait until we know the answer to all broken rules before we can fix any of them.  That's a clear case of the perfect being an enemy of the better.

I think what is needed is the ability to tell ValidationRules.CheckRules whether it is doing an initial or a secondary determination of broken rules.  On an initial determination, the minimum identifying info for each broken rule (property and rule name) with a severity of Error would be added to an InitialBrokenRulesCollection.

The business object and its collection would need an additional property to supplement IsValid.  For lack of a better name, the new property would be IsNotMoreInvalid.  It would return false if there were any broken rules that were not in the InitialBrokenRulesCollection.

This seems to be the simplest way to adapt the CSLA rules architecture to support the principle of incremental improvement.

Any ideas on a simpler way to achieve this purpose?

 

david.wendelken replied on Tuesday, August 28, 2007

david.wendelken:

Any ideas on a simpler way to achieve this purpose?

Me!  Me!  I have one! :)

I don't have to modify the ValidationRules.CheckRules method at all.

Instead, when the object is being instantiated, after the initial call to CheckRules, all I have to do is loop through the list of broken rules and copy them into the InitialBrokenRulesCollection.

Duh!

Any ideas on how to make this even simpler?

Bowman74 replied on Wednesday, August 29, 2007

Your problem is in the use of BusinessListBase, which I imagine you are using in a near implementation of what is in the book.  When you create a collection like this, what the framework implicitly understands is that all objects in the list are part of the same overall conceptual object, as they are marked as children of the list.  That is, you can't take pieces off the list piecemeal and save them individually like you are trying to do.  They all succeed or fail as a unit.

What you need to design are individually savable BusinessBase objects.  However, for performance reasons I suspect you also don't want to fetch them individually and add them to some collection to bind to the grid.  There are ways around this so that you can fetch them as part of a BusinessListBase but have them not marked as children, so you can create transactions and save them individually.  You might want to look into doing that (or some other way of fetching a collection of stand-alone Person objects).
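A rough sketch of what I mean (a plain BindingList here rather than a BusinessListBase; PersonData, PersonDal, and Person.GetPerson just stand in for whatever your own fetch plumbing looks like):

    // Sketch only: fetch everyone in one query, but keep each Person a root
    // BusinessBase that is never marked as a child, so each one saves on its own.
    public class PersonList : System.ComponentModel.BindingList<Person>
    {
        public static PersonList GetPersonList()
        {
            PersonList list = new PersonList();
            foreach (PersonData data in PersonDal.FetchAll())   // placeholder DAL call
                list.Add(Person.GetPerson(data));               // root object, not a child
            return list;
        }
    }

    // Later, in the UI, only Mary goes to the database:
    //   Person mary = list[selectedIndex];
    //   mary.State = "GA";
    //   if (mary.IsValid)
    //       mary = mary.Save();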

As for saving Mary individually with only some of the broken rules corrected, I'd argue that it is by design and a good thing.  If, however, you want to allow it to be saved with "error" data, then you want to use a warning, not an error.  A way around it is to make them errors when creating a new person object but only warnings when instantiating an existing one for any broken rules that are pre-existing in the data.  But you would need to use instance rules for this. 

But like I said, as a philosophical principle I don't think error data should be allowed to be saved, even if it was pre-existing in the database.  Of course I wouldn't have allowed that in the first place.  Correcting data that is in violation of new business rules in a software upgrade should be part of the upgrade process in the first place.  I don't want the possibility of bad data being used by some other business process that only uses (but doesn't save) the person because the person information was not corrected.  But that's my opinion and you know what they say about those.

Thanks,

Kevin

david.wendelken replied on Wednesday, August 29, 2007

Bowman74:

As for saving Mary individually with only some of the broken rules corrected, I'd argue that it is by design and a good thing.  If, however, you want to allow it to be saved with "error" data, then you want to use a warning, not an error. 

Setting the new rule to a warning instead of an error only transfers the problem somewhere else!

That is because a warning would allow newly created objects to be saved with data that breaks the rule.  That's a definite business no-no.

Bowman74:

A way around it is to make them errors when creating a new person object but only warnings when instantiating an existing one for any broken rules that are pre-existing in the data.  But you would need to use instance rules for this. 


Hmm.  I could create extra private variables that hold the original values instantiated from the database and write my rule to use those data items also.  Of course, once the data is upgraded to the new rules, those extra variables become extra, unwanted baggage...
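If I went that route, the rule would look roughly like this (the fields and the rule are mine, and the settable e.Severity assumes a CSLA version where the rule can set severity at runtime):

    // Sketch only: remember the value as it came from the database, and treat a
    // violation as "pre-existing" when the value hasn't changed since the fetch.
    private string _state = string.Empty;
    private string _originalState = string.Empty;   // captured in DataPortal_Fetch

    private bool StateRequired(object target, Csla.Validation.RuleArgs e)
    {
        if (_state.Length > 0)
            return true;

        e.Description = "State code is required";
        e.Severity = (_state == _originalState)
            ? Csla.Validation.RuleSeverity.Warning   // was already blank in the database
            : Csla.Validation.RuleSeverity.Error;    // the user just blanked it out
        return false;
    }

    // Registered as an instance rule, per Kevin's point, e.g.:
    //   ValidationRules.AddInstanceRule(StateRequired, "State");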

Bowman74:

But like I said, as a philosophical principle I don't think error data should be allowed to be saved, even if it was pre-existing in the database.  Of course I wouldn't have allowed that in the first place.  Correcting data that is in violation of new business rules in a software upgrade should be part of the upgrade process in the first place. 

I agree with you that it should be part of the upgrade process when possible.  It simply is not always possible.  If the requirements for an Employee object change to require some new data fields, it is not always possible to programmatically deduce the correct data.  Someone is going to have to enter the newly required data after the upgrade.  And, if they are ready to fix some of the data right away, but it will take some time to assemble the rest, it is an inefficient use of their time, not to mention downright rude, to make them wait until everything is known before they can start fixing the data. 

Bowman74:

I don't want the possibility of bad data being used by some other business process that only uses (but doesn't save) the person because the person information was not corrected. 

Agreed.  But the business process (being newly modified to deal with the newly required/created data elements) can easily check to see whether those rules are valid or not for the object before using it. 

Bowman74 replied on Wednesday, August 29, 2007

I'm going to just reiterate that I'm viewing this discussion as philosophical so I don't think that there necessarily is a "right answer."  I tend to not get stuck on philosophical rules as some situations require special handling and yours might be one of those where your solution may just be the right one for the situation and the needs of the users.

david.wendelken:

Hmm.  I could create extra private variables that hold the original values instantiated from the database and write my rule to use those data items also.  Of course, once the data is upgraded to the new rules, those extra variables become extra, unwanted baggage...

Sure, but you are going to bite the "extra processing and code" bullet by dealing with the possibility of invalid data no matter how you cut it.  Your solution below of always checking broken rules on any object fetched from the database before using the data in a business process creates deadwood once the data is all updated as well.  The minute all the data in the database conforms to the rules, all the code to handle when it does not is no longer needed (well, until you create another situation where there is invalid data in the database). 

david.wendelken:

I agree with you that it should be part of the upgrade process when possible.  It simply is not always possible.  If the requirements for an Employee object change to require some new data fields, it is not always possible to programmatically deduce the correct data.  Someone is going to have to enter the newly required data after the upgrade.  And, if they are ready to fix some of the data right away, but it will take some time to assemble the rest, it is an inefficient use of their time, not to mention downright rude, to make them wait until everything is known before they can start fixing the data. 

Once again I'd still argue that this should be part of the upgrade process, and users (who know the answers to those questions) are a necessary part of that process, just like they are necessary for user acceptance testing.  Of course, you could be talking about one person having to fix 1,000,000 records, in which case having them fix it by hand would be impractical.  Normally this isn't the case, but if it (or something like it) is, then I understand your dilemma. 

It is never "rude" to consider having users be a part of the software upgrade process (and fixing data through the system UI isn't your only option).  In some cases it is more cost efficient for the company to fix the data as part of the upgrade process.  After all there is a monetary cost in your efforts to handle the possibility that this data is incorrect, and possible further costs based on testing that special handling code, costs of using the bad data if some problem with the handling code is missed and goes live, etc. 

On the other hand, in some cases, such as the one I talked about above, the opportunity cost to the company of having the user fix all 1,000,000 records might be much higher than you taking time to write/test the code to handle it; so maybe you don't fix it as part of the upgrade.  But in general the possibility of it happening isn't rude, it is a business consideration, and people should be thinking in those terms.  After all, you're writing business systems to save the company money.  Convenience for the users is a by-product of that goal but not the goal in and of itself.

Of course there may also be a myriad of political considerations that make any logical cost benefit analysis of the situation moot.  Systems you write to sell commercially also have different considerations and factors.

david.wendelken:

Agreed.  But the business process (being newly modified to deal with the newly required/created data elements) can easily check to see whether those rules are valid or not for the object before using it. 

If you have written your system from the ground up to do this, then you are likely golden.  If not, you will most likely miss a spot (or two).  Also, it implies that read-only objects will never be used for source information in any business processes, as the process code will not be able to check for broken rules to make sure the data is valid.  So just keep that in mind.

Thanks,

Kevin

david.wendelken replied on Wednesday, August 29, 2007

Bowman74:
I'm going to just reiterate that I'm viewing this discussion as philosophical so I don't think that there necessarily is a "right answer."  I tend to not get stuck on philosophical rules as some situations require special handling and yours might be one of those where your solution may just be the right one for the situation and the needs of the users.

True enough!

Bowman74:
david.wendelken:

Hmm.  I could create extra private variables that hold the original values instantiated from the database and write my rule to use those data items also.  Of course, once the data is upgraded to the new rules, those extra variables become extra, unwanted baggage...

Sure, but you are going to bite the "extra processing and code" bullet by dealing with the possibility of invalid data no matter how you cut it.  Your solution below of always checking broken rules on any object fetched from the database before using the data in a business process creates deadwood once the data is all updated as well.  The minute all the data in the database conforms to the rules, all the code to handle when it does not is no longer needed (well, until you create another situation where there is invalid data in the database). 

Hmm...  Very true!

Bowman74:

Once again I'd still argue that this should be part of the upgrade process, and users (who know the answers to those questions) are a necessary part of that process, just like they are necessary for user acceptance testing.  Of course, you could be talking about one person having to fix 1,000,000 records, in which case having them fix it by hand would be impractical.  Normally this isn't the case, but if it (or something like it) is, then I understand your dilemma. 

There is also the business value of having partially improved functionality sooner rather than perfect functionality later.
 

Bowman74:

It is never "rude" to consider having users be a part of the software upgrade process (and fixing data through the system UI isn't your only option). 

You misunderstood what I considered rude on the part of developers.  Otherwise I agree with you. :)

It's not the fact that the user has to be involved in the upgrade process (pre or post installation, as the case may be) that I disagree with.  It's the idea that the user must fix 100% of all problems in a given record in order to fix one problem in it.  That would be like saying that I have to update all the records in the table with one horribly complex SQL statement instead of five simple ones in order to upgrade the data programmatically!

 

Bowman74:
david.wendelken:

Agreed.  But the business process (being newly modified to deal with the newly required/created data elements) can easily check to see whether those rules are valid or not for the object before using it. 

If you have written your system from the ground up to do this, then you are likely golden.  If not, you will most likely miss a spot (or two).  Also, it implies that read-only objects will never be used for source information in any business processes, as the process code will not be able to check for broken rules to make sure the data is valid.  So just keep that in mind.


Business rules can be defined to constrain behaviour (validation rules) or to take the measure of a situation.  I typically write a method (a measurement-oriented business rule) that answers the question, "Is this object ready for use by this process?"  It either returns a "yes" result or a list of reasons (more "broken" measurement-oriented business rules) that signify why the object is not ready for use by the process.  (This is great for debugging, by the way!)

Any process that cares just needs to call that method before processing the object.  And yes, there is always the possibility of forgetting to do so in any given process. 
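For example, a sketch of one of those methods (the name and the specific checks are just illustrative):

    // Sketch: a measurement-oriented "readiness" check.  An empty list means
    // "yes, ready for this process"; otherwise the list says why not.
    public System.Collections.Generic.List<string> GetReasonsNotReadyForPayroll()
    {
        System.Collections.Generic.List<string> reasons =
            new System.Collections.Generic.List<string>();

        if (string.IsNullOrEmpty(this.State))
            reasons.Add("State code is missing");
        if (string.IsNullOrEmpty(this.ZipCode))
            reasons.Add("Zip code is missing");

        return reasons;
    }

    // Any process that cares calls it first:
    //   List<string> reasons = person.GetReasonsNotReadyForPayroll();
    //   if (reasons.Count == 0) ProcessPayrollFor(person);
    //   else LogAndSkip(person, reasons);   // handy for debugging, too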

david.wendelken replied on Wednesday, August 29, 2007

Oh, and Rocky advises to use BusinessBase with getters (only) on the properties for those read-only business objects that need access to the broken rules.
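Something like this, in other words (just a sketch; GetIdValue is there because my CSLA version still requires it):

    // Sketch: an "editable" base class but with no public setters, so callers can
    // read the data and still inspect BrokenRulesCollection.
    [Serializable]
    public class PersonInfo : Csla.BusinessBase<PersonInfo>
    {
        private int _id;
        private string _state = string.Empty;

        public int Id
        {
            get { return _id; }
        }

        public string State
        {
            get { return _state; }   // getter only
        }

        protected override object GetIdValue()
        {
            return _id;
        }

        // DataPortal_Fetch loads the fields, then ValidationRules.CheckRules()
        // populates BrokenRulesCollection for the business process to inspect.
    }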

 

Bowman74 replied on Wednesday, August 29, 2007

david.wendelken:

Oh, and Rocky advises to use BusinessBase with getters (only) on the properties for those read-only business objects that need access to the broken rules.

Yea, Rocky says lots of things, some of them even useful (hey, someone has to give Rocky a hard time! ;) ).  Seriously, there is a performance overhead in doing that, which is why the read-only objects exist in the first place.  Not a great difference, but just something to be cognizant of.  Other than that, and having to ignore a bunch of unneeded methods like Save, it works fine.

Thanks,

Kevin

Bowman74 replied on Wednesday, August 29, 2007

david.wendelken:

There is also the business value of having partially improved functionality sooner rather than perfect functionality later.

Sure could be.

david.wendelken:

You misunderstood what I considered rude on the part of developers.  Otherwise I agree with you. :)

It's not the fact that the user has to be involved in the upgrade process (pre or post installation, as the case may be) that I disagree with.  It's the idea that the user must fix 100% of all problems in a given record in order to fix one problem in it.  That would be like saying that I have to update all the records in the table with one horribly complex SQL statement instead of five simple ones in order to upgrade the data programmatically!

Fair enough, though I wouldn't characterize it so much as rude but inconvenient. ;)

david.wendelken:

Business rules can be defined to constrain behaviour (validation rules) or to take the measure of a situation.  I typically write a method (a measurement-oriented business rule) that answers the question, "Is this object ready for use by this process?"  It either returns a "yes" result or a list of reasons (more "broken" measurement-oriented business rules) that signify why the object is not ready for use by the process.  (This is great for debugging, by the way!)

Any process that cares just needs to call that method before processing the object.  And yes, there is always the possibility of forgetting to do so in any given process. 

Only really required due to the fact that you have the possibility of invalid data in the first place, but it would work.

Like I said, you may have a good reason why you can't include users in the upgrade process, where it isn't cost-effective and/or practical.  But I would personally consider that a system design outlier, not my standard practice.  Which I suspect is also why Rocky's default implementation works that way.

Thanks,

Kevin

RockfordLhotka replied on Wednesday, August 29, 2007

Remember (and I know we disagree here) that the Severity can be set in the rule at runtime.

So based on your rules I'm hearing that a new object has some rules it must meet, and others that may be warnings.

An existing object has different rules it must meet, and others that may be warnings.

Or perhaps more accurately, it sounds like your data is "versioned", and some rules only apply as Error to current version data, but appear as Warning for earlier version data. Again, this "version" metadata must be part of your object's state, and then it becomes possible to make your rules version-aware.

Ultimately the key is to make your rule methods aware of the broader state of the object.
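For example (SchemaVersion here is whatever version metadata you add to your object's state; it is not something CSLA provides for you):

    // Sketch of a version-aware rule: current-version data must have a zip code,
    // while earlier-version data only gets a warning until it is brought up to date.
    private static bool ZipCodeRequired(object target, Csla.Validation.RuleArgs e)
    {
        Person person = (Person)target;
        if (!string.IsNullOrEmpty(person.ZipCode))
            return true;

        e.Description = "Zip code is required";
        e.Severity = (person.SchemaVersion >= 2)
            ? Csla.Validation.RuleSeverity.Error
            : Csla.Validation.RuleSeverity.Warning;
        return false;
    }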

david.wendelken replied on Wednesday, August 29, 2007

RockfordLhotka:

Remember (and I know we disagree here) that the Severity can be set in the rule at runtime.

So based on your rules I'm hearing that a new object has some rules it must meet, and others that may be warnings.

An existing object has different rules it must meet, and others that may be warnings.

Or perhaps more accurately, it sounds like your data is "versioned", and some rules only apply as Error to current version data, but appear as Warning for earlier version data. Again, this "version" metadata must be part of your object's state, and then it becomes possible to make your rules version-aware.

Ultimately the key is to make your rule methods aware of the broader state of the object.

You are quite right: one good way to solve the problem is to add the version metadata to the object.  Then the rule can be written to act appropriately.

The downside of that approach is that the object must be re-coded and re-deployed in order to add the necessary internal variables to hold the old object state. 

I'm in the process of moving my calls to ValidationRules.AddRule from hard-coded to database-driven.  So, as long as the rule method has already been coded (in CommonRules, for instance), I (or key business users) can add (or drop) a rule on an object without having to reinstall the application.  (In some environments, particularly very secure ones, this is a BIG deal, as the paperwork burden to prepare the installation documents, and the time lag while the installation documents remain unread (and therefore unapproved) in someone's inbox, can be substantial.)  :(

Software developers who build code for use by many customers also appreciate the ability to change rules without having to change the source code and reinstall it.
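In rough outline it looks something like this (RuleRepository and its rows are my own plumbing, and the exact CommonRules signatures may vary by CSLA version):

    // Sketch of data-driven rule registration: the rule *methods* still live in
    // compiled code, but which rules attach to which properties comes from a table.
    protected override void AddBusinessRules()
    {
        foreach (RuleRow row in RuleRepository.GetRulesFor(this.GetType()))   // my plumbing
        {
            switch (row.RuleName)
            {
                case "StringRequired":
                    ValidationRules.AddRule(
                        Csla.Validation.CommonRules.StringRequired, row.PropertyName);
                    break;
                case "StringMaxLength":
                    ValidationRules.AddRule(
                        Csla.Validation.CommonRules.StringMaxLength,
                        new Csla.Validation.CommonRules.MaxLengthRuleArgs(
                            row.PropertyName, row.MaxLength));
                    break;
                // ...one case per rule method the business users can choose from
            }
        }
    }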

What I've ended up doing is the following:

I've created a pair of classes, InitialBrokenRules and InitialBrokenRulesCollection.  They are very similar to BrokenRule and BrokenRulesCollection.  No text description though, just the property, rule name, and severity. 

ValidationRules contains a BrokenRulesCollection and I added an InitialBrokenRules collection. I added a simple method to populate the initial broken rules from the broken rules collection and another to clear it. 

I also changed how IsValid is determined.  Instead of "BrokenRulesList.ErrorCount == 0", I only count broken rules that aren't in the InitialBrokenRulesCollection as errors.

(ROCKY - FYI - your comment text on the IsValid methods throughout CSLA refers to the existence of BrokenRules, not the existence of BrokenRules with a severity of Error.)

I also added an IsFreeFromErrors property, which returns what the original IsValid returned.

(I was going to add a property called IsNotMoreInvalidThanBefore instead, but that meant updating lots of places in the framework where IsValid was being checked over to IsNotMoreInvalidThanBefore, so I didn't. :) )

Then, in order to make the entire thing transparent to the code I've already written, I made some changes to Core.BusinessBase.  In MarkNew() I cleared the list of InitialBrokenRules with a call to ValidationRules.ClearInitialBrokenRules().  In MarkOld() I called ValidationRules.CheckRules() followed by ValidationRules.CheckInitialBrokenRules().

For the code I've tested so far, this seems to be making my editable business objects automagically able to incrementally improve their existing data when rules change.
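To give a feel for the pieces, here is roughly what they look like if you sketch them at the business-class level (my real changes live inside ValidationRules and Core.BusinessBase, the names are mine rather than CSLA's, and this assumes MarkNew/MarkOld are overridable in your CSLA version):

    // Sketch only - approximates the framework changes described above.
    private System.Collections.Generic.List<string> _initialBrokenRules =
        new System.Collections.Generic.List<string>();

    protected override void MarkOld()
    {
        // The equivalent of CheckRules() followed by CheckInitialBrokenRules():
        ValidationRules.CheckRules();
        _initialBrokenRules.Clear();
        foreach (Csla.Validation.BrokenRule rule in this.BrokenRulesCollection)
            if (rule.Severity == Csla.Validation.RuleSeverity.Error)
                _initialBrokenRules.Add(rule.Property + "|" + rule.RuleName);
        base.MarkOld();
    }

    protected override void MarkNew()
    {
        _initialBrokenRules.Clear();   // new objects get no grandfathered breaks
        base.MarkNew();
    }

    // Only rules broken *now* that were not broken at fetch time count against
    // saving.  (In my framework version, this is what IsValid itself now returns.)
    public bool IsNotMoreInvalid
    {
        get
        {
            foreach (Csla.Validation.BrokenRule rule in this.BrokenRulesCollection)
            {
                if (rule.Severity == Csla.Validation.RuleSeverity.Error &&
                    !_initialBrokenRules.Contains(rule.Property + "|" + rule.RuleName))
                    return false;
            }
            return true;
        }
    }

    // The original IsValid behavior survives as IsFreeFromErrors:
    public bool IsFreeFromErrors
    {
        get { return this.BrokenRulesCollection.ErrorCount == 0; }
    }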

david.wendelken replied on Wednesday, August 29, 2007

Oh, and thanks for the feedback!  I did a better job on this than I would have without it. :)

david.wendelken replied on Wednesday, August 29, 2007

Forgot to add one more change:

If a class or interface had an IsValid property, I added another property called IsFreeFromErrors which did what IsValid used to do.

 


And the UI control that I'm working on - that automagically adds the right validators to the page based upon the business rules on the underlying property - can be made to make use of this information.  It can downgrade the validator from an Error to a Warning.  :)

Copyright (c) Marimer LLC