If you are using CSLA 3.6, you might look at the ChildChanged event. This is raised by the list (parent) object when a child changes, and it gives you a reference to the changed child. Actually, the ListChanged event would give you the index of the child as well.
In other words, you might be better off listening at the list level rather than at the individual object level, because then you'll know which particular object changed and may be able to optimize your recalc.
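In code, that list-level subscription looks something like this – a rough sketch assuming the 3.6 ChildChanged event args (ChildObject plus the nested PropertyChangedArgs), with DataValueList/DataValueBO as placeholders for your list and child types:

    // One subscription on the list instead of one per child.
    using Csla.Core;

    public class CalcManager
    {
        public CalcManager(DataValueList values)
        {
            values.ChildChanged += OnChildChanged;
        }

        private void OnChildChanged(object sender, ChildChangedEventArgs e)
        {
            var child = e.ChildObject as DataValueBO;
            if (child == null || e.PropertyChangedArgs == null)
                return;
            // We know exactly which child and which property changed,
            // so the recalc can be targeted instead of global.
            Recalculate(child, e.PropertyChangedArgs.PropertyName);
        }

        private void Recalculate(DataValueBO child, string propertyName)
        {
            // ...
        }
    }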
Rockford,
Thanks for your reply – I had started with ChildChanged. The only issue with listening to ChildChanged is that it will fire for all changes to any property of any child, no? I am only interested in one property on a select number of children (those that have an IsInACalc flag set to True from the database). That is my first filter to lessen the number of times the event fires.
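That first filter would just be a cheap early exit at the top of the handler – roughly:

    private void OnChildChanged(object sender, Csla.Core.ChildChangedEventArgs e)
    {
        var child = e.ChildObject as DataValueBO;
        if (child == null || !child.IsInACalc)
            return;   // not part of any calculation - ignore
        if (e.PropertyChangedArgs == null ||
            e.PropertyChangedArgs.PropertyName != "Data")
            return;   // some other property changed - ignore
        // only now do the (comparatively expensive) recalc lookup
    }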
I had contemplated using LINQ to CSLA to create a set of subLists that I could listen to instead.
I believe I can optimize the process I outlined in the first post by using the CalcObjBO as the middleman between DataObjBO and CalcVarObjBO, which means fewer events need to fire. The CalcObjBO has the keys to join to both objects.
My biggest question back to you is whether subscribing to an event and checking if a dependent object cares is a better plan than pre-linking and connecting all the objects. It feels cleaner as an interface, but I'm pondering the performance implications.
Thanks
jack
Well, obviously events are an easy way to communicate between objects, but remember that they are references. So you will still end up with the spiderweb you described. Implement IDisposable and clean up after yourself to prevent memory issues.
I'm wondering, though, if you are forcing BOs into your model just for the sake of doing so and possibly complicating things. Remember that the purpose of a business object is to carry out some behavior. What behavior do your data values have? Do you perform validation or authorization on the data values? Just because you are implementing CSLA doesn't mean that everything has to be a BO.
I ran into a similar problem a while back putting together a simulation application that had some pretty complex equations with a lot of dependencies. I had to deal with default values as well as min and max limits for each, so I thought it was a good fit. But it quickly became a nightmare and I went a different route.
When reassessing my use cases, it became clear that the default, min and max values were there to support the UI and provide validation for the data entry screen. In addition, I realized that making my variables all properties would allow me to cascade the calls, and all of the dependencies would resolve.
In our case, we used a set of specifications to drive a slew of equations. The first step was to wrap each of the specs into objects. These were business objects because we applied validation, based on the same min and max values, and wanted to raise PropertyChanged events whenever a value was changed. These spec objects were then bound to the UI forms used for data entry.
In addition to the editable properties, our spec objects exposed calculated values based on their editable values. For example, say we had a RectangleSpec object. We would have Height and Width properties that would be bound to the UI, and a read-only Area property that returned the result of the calculation based on the entered values for the other two properties. The PropertyChanged event could be used to notify of a change to any of the values. We also sometimes applied validation to the calculated properties so, for instance, if the Area was constrained, we could indicate to the user that one or both of the Height and Width properties had to be changed because the resulting Area was invalid.
The entire process was managed by a Calculator class which held references to each of the spec objects (via Dependency Injection). The Calculator class exposed the final results as properties. Some of these were just delegates to the same property/calculated value on one of the spec classes or, more often, were higher-level calculations that used the raw and calculated values from the spec classes.
When the user navigated to the results screen, it was bound to the properties of the Calculator class, which caused each calculation to be evaluated. Any dependencies were handled because the property would be requesting values from another property, either in the Calculator class or one of the spec classes.
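Stripped of the CSLA plumbing, the shape was roughly what follows. This is only a sketch using plain INotifyPropertyChanged (the real spec objects were CSLA business objects with validation rules), and PaintRequired is a made-up result property just to show the pull-through:

    using System.ComponentModel;

    public class RectangleSpec : INotifyPropertyChanged
    {
        private double _height;
        private double _width;

        public event PropertyChangedEventHandler PropertyChanged;

        public double Height
        {
            get { return _height; }
            set { _height = value; OnPropertyChanged("Height"); OnPropertyChanged("Area"); }
        }

        public double Width
        {
            get { return _width; }
            set { _width = value; OnPropertyChanged("Width"); OnPropertyChanged("Area"); }
        }

        // Read-only calculated property; dependants just ask for it,
        // so the dependency resolves itself at call time.
        public double Area
        {
            get { return _height * _width; }
        }

        private void OnPropertyChanged(string name)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(name));
        }
    }

    // The Calculator receives its specs via constructor injection and
    // exposes higher-level results as properties that pull from the specs.
    public class Calculator
    {
        private readonly RectangleSpec _floor;

        public Calculator(RectangleSpec floor)
        {
            _floor = floor;
        }

        // Hypothetical result: evaluated only when the results screen binds to it.
        public double PaintRequired
        {
            get { return _floor.Area * 0.25; }
        }
    }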
That's kind of a simple view of it, but maybe it will help you think about the model you are using. The key is to model behavior, not data. And not everything has to be a BO. Make sure there is a reason for making a type a BO and don't just do it because you can.
Hope that helps.
SOP,
Thanks for the nice lengthy response – my DataValueBO is really my core BO. This portion of my application is essentially a generic data capture module for test or interview results. Each DataValueBO represents a single question/answer and would match your spec objects.
It manages all the metadata about the value to be captured (some of which you mentioned). It contains all the validation logic, all the input criteria, and all the logic around managing missing data and tracking audits.
I support a minimum of 5 types of Calcs for each BO (IsEnabled, IsVisible, IsRequired, AllowNA, DataValue). This supports things like: if A = 5, then hide questions B and C, disable question D, allow question E to be N/A, and calculate the data value for question H to be ((A + F) * 3) - G.
This means I have an IsVisibleCalc for B and C, an IsEnabledCalc for D, an IsNACalc for E, and a DoMath calc for H, all of which listen to the value of DataValueA.Data via DataVariableA.Data.
Every time a DataValue changes I need to refresh the related variables, redo the math, and push the values back out. My prior implementation was all push, with a single watch on the final update. I'm trying to optimize it, make it cleaner, and eliminate as much of the pre-processing to link things together as possible. My original implementation also had a CalcManager class. I used hashtables and dictionaries to link everything together and it worked fairly well. I'm hoping that a far more generic pull-type implementation will make it cleaner, so long as I can limit the number of unnecessary checks.
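The old dictionary linkage was roughly like this, with DataCalcVarBO cut down to the two members that matter here:

    using System.Collections.Generic;

    // Simplified stand-in for my calc-variable BO.
    public class DataCalcVarBO
    {
        public int FieldId;
        public object DataValue;
    }

    public class CalcIndex
    {
        // All calc variables that depend on a given field, keyed by field id.
        private readonly Dictionary<int, List<DataCalcVarBO>> _varsByFieldId =
            new Dictionary<int, List<DataCalcVarBO>>();

        public void Register(DataCalcVarBO calcVar)
        {
            List<DataCalcVarBO> vars;
            if (!_varsByFieldId.TryGetValue(calcVar.FieldId, out vars))
            {
                vars = new List<DataCalcVarBO>();
                _varsByFieldId.Add(calcVar.FieldId, vars);
            }
            vars.Add(calcVar);
        }

        public void Push(int fieldId, object newValue)
        {
            List<DataCalcVarBO> vars;
            if (!_varsByFieldId.TryGetValue(fieldId, out vars))
                return;   // nothing depends on this field - no checks at all
            foreach (var v in vars)
                v.DataValue = newValue;   // downstream calcs re-run from here
        }
    }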
My biggest constraint is that everything is metadata-driven via the database, so my test questions and calculations are all managed by the users.
I'm not sure why I would still have the spiderweb with the event-driven model. In a simple case, my DataCalc object watches for a DataChanged event: the event fires, I get the reference to the DataValue object, I check the CalcID, it matches, I copy the value, and I'm done. I can see that I would have a spiderweb if I pre-linked each DataCalcBO, DataCalcVarBO, and DataValueBO to listen specifically to the events of their related BOs. I was thinking more along the lines of listening to a single event on the parent lists. Once I had that in place I would try to optimize where I had the same DataValue used in multiple places.
Thank you kindly again.
Jack
I think what SOP is trying to say is that you'll have the spiderweb even if you don't see it yourself.
An event is simply an object reference that is set up by the
compiler instead of by you.
When you declare an event, you create the potential for an
object reference.
When you hook an event (set up a listener/handler) you create a reference from the event source to the event handler. You probably don't think of this as an object reference; you think of it as "handling an event". But behind the scenes, the compiler just spit out some code that set up an object reference, so the event handler ends up in a collection of handlers managed by the object where the event was declared.
In other words, events are just syntactic sugar on top of a relatively simple design pattern based around object references.
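To make that concrete: a field-like declaration like public event EventHandler DataChanged; expands to roughly the following (the real compiler-generated accessors are thread-safe, but the idea is the same):

    using System;

    public class EventSource
    {
        // The hidden delegate field - this IS the object reference
        // (or chain of references) to every subscriber.
        private EventHandler _dataChanged;

        public event EventHandler DataChanged
        {
            add { _dataChanged = (EventHandler)Delegate.Combine(_dataChanged, value); }
            remove { _dataChanged = (EventHandler)Delegate.Remove(_dataChanged, value); }
        }

        protected void OnDataChanged()
        {
            var handler = _dataChanged;
            if (handler != null)
                handler(this, EventArgs.Empty);   // follows the stored references
        }
    }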
You’d asked earlier whether this event approach was better
or not. I think SOP reveals the answer, in that behind the scenes it is the
same.
Now you might find that coding using events is simpler to
write/maintain – in which case using events is a win. Or you may find
that manually managing the references is simpler to write/maintain – in which
case events would be a loss.
One thing that complicates all this is that you must unhook events too. So if a child object is removed from the list, you must make sure to unhook its events. So while events may seem simple on the surface, they can become quite complex if you allow objects to be added/removed from your lists – because then you need to do a bunch of plumbing to properly hook/unhook the events.
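In sketch form the plumbing is symmetrical – every hook on add needs a matching unhook on remove, because the subscription is a reference that keeps the handler's object reachable:

    using System.ComponentModel;

    public class ChildWatcher
    {
        // Called when a child is added to the list.
        public void Attach(INotifyPropertyChanged child)
        {
            child.PropertyChanged += OnChildPropertyChanged;
        }

        // Called when a child is removed - forgetting this is the classic
        // event-driven "memory leak", because the subscription is a reference.
        public void Detach(INotifyPropertyChanged child)
        {
            child.PropertyChanged -= OnChildPropertyChanged;
        }

        private void OnChildPropertyChanged(object sender, PropertyChangedEventArgs e)
        {
            // targeted recalc here
        }
    }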
Of course this is pretty comparable to the plumbing you’d
have to do to reference/dereference the objects if you weren’t using
events. And you probably see why they are similar, given that events are just
wrappers over references.
Rocky
Great response – thank you for the clarification.
I rarely do any adding/deleting of children, so I think perhaps the event-driven model is cleaner, especially if I hook into the parent list events. I hope to be able to support some more advanced calculations based on loading additional data in the background on the fly, so that makes it easier.
So much to learn :-)
Thanks
jack
Sounds like you have a workflow element that manages the flow of the questions and a calculation piece that generates the final results. If you pulled the responsibility for managing the workflow rules out of the DataValue objects, they could be made more generic, and perhaps your calculation "manager" could be more generic as a result (just a sum of all questions, for instance).
I've worked on several projects that boil down to the same type of flowchart/workflow process where step 2 is predicated on the results of step 1 and so on. And I think each one was implemented a different way!
I like the flexibility WF (Windows Workflow Foundation) gives, but I don't know if that would help in your case. A lot of it depends on where the workflow logic is defined. What's nice about WF is that you could wrap each question as an Activity and then manage the flow like a regular workflow or flowchart with if-elses, etc.
How do your users indicate that questions B & C are to be skipped when A = 5? That's probably the key to figuring out how you may be able to simplify your model further.
My data entry screen is dynamic and generic. It is built at runtime as a transposed version of a regular table. The data isn't stored in columns; it's in rows of numbers, strings, or dates. So I read in all the dataFields and present them in a giant filtered list.
There isn't so much a workflow as there are simple UI property tweaks. The UI is fairly straightforward – it's just one row per DataValue object. There is no jumping around the screen the way you would with a normal data input screen based on a table. If a datavalue is to be disabled I just disable the row. Or hide the row.
There isn’t really a codeable workflow that is known in
advance.
Every DataValue object (really a DataFieldBO) has a corresponding FieldBO which contains all the metadata (name, type of value, constraints, etc.).
These are basically the columns in a standard table. The users are able to modify the metadata, which includes enabled, visible, required, N/A, and CalculatedValue. The first four are Yes/No & Calculated options – and this is how they show up in the dynamic data entry window. If they select the Calculated option then they need to enter a formula that matches on the FieldShortName. So the formula for Enabled for B is: [A] <> 5. In the database this is stored in field_calc (calcID, calcTypeEnabled, formula) and in the field_calc_var table (calcID, FieldID=FieldAID).
It gets more complicated when I get to the actual data entry, as the field metadata can be derived from a templateField and the same fieldDefinition can appear more than once, so I have to make sure I create instances of the variables that correspond to the data/time/subject being entered.
I also support the ability to use variables from different time periods (previous visit). I use the database to recode the variable names and formulas to match the data being entered, so the front end becomes simpler. The formula '[A] <> 5' becomes '[A_324242] <> 5', where 324242 is the key to the DataField/DataValue object. That makes it nice and neat to do the math based on variable-name lookups (and is why I used a hashtable to link stuff before).
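In-memory, the recode and the lookup amount to something like this (in my case the database does the recoding, and the actual formula evaluator is elided):

    using System.Collections.Generic;

    public class FormulaBinder
    {
        private readonly Dictionary<string, object> _valuesByName =
            new Dictionary<string, object>();

        // Recode("[A] <> 5", "A", 324242) returns "[A_324242] <> 5"
        public string Recode(string formula, string shortName, int dataValueKey)
        {
            return formula.Replace(
                "[" + shortName + "]",
                "[" + shortName + "_" + dataValueKey + "]");
        }

        // e.g. SetValue("A_324242", 5) after data entry...
        public void SetValue(string recodedName, object value)
        {
            _valuesByName[recodedName] = value;
        }

        // ...and the formula evaluator resolves variables by name from here.
        public object GetValue(string recodedName)
        {
            object value;
            return _valuesByName.TryGetValue(recodedName, out value) ? value : null;
        }
    }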
In terms of the GUI, I have the enabled/visible properties of the dataValue tied to (Field.Enabled && FieldCalc.Enabled). One checks the default enabled metadata flag and the other checks the calculated value. If no calculation exists then the calculated value defaults to true.
It works quite well in the Windows form, especially since the UI component is terrible: everything fires when I leave/tab out of the field.
Does that make any sense? The key feature is that once the application is deployed, IT is not required to intervene to make changes to what data is to be input. If the users want 10 new questions with new data values, they just add 10 new fields, set the metadata, link the fields to the questionnaire, and the next time they do data entry the questions are there.
Thanks for your time,
jack
I think I'm going to ignore the references/events question and just go with a push model.
DataValueChanges -> use LINQ to update all DataCalcVar.DataValue in DataCalcVarList where the keys match.
DataVarChanges -> use LINQ to update all DataCalc in DataCalcList where the calcID key matches.
DataCalcResultChanges -> use LINQ on the indexed DataValue key in DataValueList where the keys match.
Makes it simple and generic, and so long as the right objects exist in the lists I can add/remove data on the fly.
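Roughly like this – where TargetFieldId, Recalculate and Result are placeholders for whatever the real BOs expose:

    using System.Linq;

    public void OnDataValueChanged(DataValueBO changed)
    {
        // 1. DataValue changed -> refresh the matching calc variables
        foreach (var cv in DataCalcVarList.Where(x => x.FieldId == changed.FieldId))
            cv.DataValue = changed.Data;

        // 2. Variables changed -> re-run the calcs that use them
        var calcIds = DataCalcVarList
            .Where(x => x.FieldId == changed.FieldId)
            .Select(x => x.CalcId)
            .Distinct()
            .ToList();
        foreach (var calc in DataCalcList.Where(x => calcIds.Contains(x.CalcId)))
            calc.Recalculate();

        // 3. Calc results changed -> push back into the (indexed) value list
        foreach (var calc in DataCalcList.Where(x => calcIds.Contains(x.CalcId)))
        {
            var target = DataValueList.FirstOrDefault(x => x.FieldId == calc.TargetFieldId);
            if (target != null)
                target.Data = calc.Result;
        }
    }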