Version 2.1 comments

Old forum URL: forums.lhotka.net/forums/t/302.aspx


RockfordLhotka posted on Wednesday, June 07, 2006

OK, I know 2.0 just came out, but during the months it took to write the book other ideas/concepts were coming to me. Scope management dictated that they be put on hold so I could actually get the books and version 2.0 out the door. Now that the book is out, I can start applying some of those other ideas into version 2.1.

Not that version 2.1 is just around the corner. Mostly it is still some ideas I've had, along with a couple of new ideas from readers. You can see what I'm actively working on by looking at the change log:

http://www.lhotka.net/Articles.aspx?id=f12cc951-0452-42d1-96a6-cfa7656863b1

If you have comments or thoughts, please feel free to share them.

But as I did with CSLA .NET 1.x, I'll avoid breaking changes as much as humanly possible. Any change might break someone of course - that's always a risk. But if at all possible I won't do anything that causes wholesale breaks or radical changes to existing code. (though to use a new feature you might need radical changes, but that would be optional)

Massong replied on Wednesday, June 07, 2006

Hi Rocky,

First: thanks for the great CSLA framework.

We are a transport company in Germany and have been using CSLA in our own transport planning software since the end of 2003. I started to upgrade our CSLA 1.x classes to version 2.0 a few weeks ago.

I tried to use CSLA 1.x and CSLA 2.0 in the same project simultaneously so that I can upgrade my classes step by step - and this seems to work. First I renamed the assembly names of the old framework to CSLA1.*.dll and the namespaces too. The first classes I upgraded were BusinessPrincipal and BusinessIdentity. Now it is even possible to upgrade a child collection first and its root object anytime later.

What do you suggest is the best practice to upgrade an existing project with lots of classes?

Greetings,
Christian

RockfordLhotka replied on Wednesday, June 07, 2006

Hi Christian,
 
Running two versions of CSLA in one app is an interesting idea, and if it works for you then it seems like a nice way to do an incremental upgrade.
 
I'd be worried about the remote data portal (the old and new don't work quite the same) and about ApplicationContext (because it is in thread-local storage and there could be a conflict).
But if you aren't using a remote data portal then these two issues shouldn't be a problem for you.
 
Rocky



ajj3085 replied on Wednesday, June 07, 2006

Those sound like excellent ideas. 

Csla has proven to be a great framework to build business objects on, and the improvements from v1 to v2 made things even better.


guyroch replied on Wednesday, June 07, 2006

Rocky,

 

I’m not sure I understand the value proposition of having a “Csla – Data portal” that would give power to the developer to write a replacement for SimpleDataPortal and have that be invoked instead.  I think this would complicate things.

 

It is relatively simple as it is to let CSLA call the data portal and then code the appropriate DataPortal_XYZ method to fulfill its duties.  In fact, the legacy approach is even more compelling, as _all_ of the object code - business rules, loading it, and persisting it - is local, in the same object.

 

I’m I missing something?  Perhaps one of 2 use cases / persona would help.  I don't mean to be negative here, in fact I'm a huge admirer of all to good things you do for all of us out there, but I'm just failing to understand the rational behind this approach.

 

Thanks

RockfordLhotka replied on Wednesday, June 07, 2006

I'm not sure I understand the value proposition of having a "Csla - Data portal" that would give power to the developer to write a replacement for SimpleDataPortal and have that be invoked instead.  I think this would complicate things.

 
I should be clear - my goal isn't for business developers to write a SimpleDataPortal replacement. I'm decoupling that component so _I_ can write alternatives. A side-effect is that other framework developers could do so as well - just like they can write other data portal channel components, etc.
 
Of course, if this does turn out to be a bad idea as I work with it, I reserve the right to drop the whole thing :)
 
 
While it is most certainly possible for you to call an external data access layer (DAL) from within an existing DataPortal_XYZ method, it does somewhat complicate things.
 
Either your business object assembly has a reference to the DAL, or the DAL has a reference to the business object assembly. But in that latter case, the DataPortal_XYZ method must use reflection to create the DAL object - which is the type of code no business developer should write (imo). Yet if the business assembly references the DAL, then the DAL has no easy way to load the business object, because it can't have a strongly typed reference to the business object.
 
So I've been giving a lot of thought to various solutions - in the general sense - to this problem. A variety of specific solutions have been discussed on threads here and in the old forum, and in a couple articles on my web site. But whether you load the object using reflection, an interface, a DTO or whatever - the fact remains that there's this broader reference issue between the business object and the DAL - and so the business developer ends up manually using reflection either in the DataPortal_XYZ or in the DAL methods.
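The reference problem described above can be sketched in a few lines. This is an illustrative example only, not CSLA code; the names (ICustomerDal, CustomerDal, DalFactory) are invented for the sketch, and the hard-coded type name stands in for a config-file entry.

```csharp
using System;

// Hypothetical DAL contract - none of these names are part of CSLA.
public interface ICustomerDal
{
  string FetchName(int id);
}

public class CustomerDal : ICustomerDal
{
  public string FetchName(int id) { return "Customer " + id; }
}

public static class DalFactory
{
  // In a real app this type name would come from the config file
  // (e.g. an appSettings entry) rather than a hard-coded string.
  public static string DalTypeName = "CustomerDal";

  // This is the reflection "plumbing" a business developer would
  // otherwise end up writing inside a DataPortal_XYZ method.
  public static ICustomerDal GetCustomerDal()
  {
    Type t = Type.GetType(DalTypeName);
    return (ICustomerDal)Activator.CreateInstance(t);
  }
}
```

The business code then only depends on the interface, and the concrete DAL is late-bound - which is exactly the kind of code the post argues business developers shouldn't have to write themselves.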
 
Note that I'm not trying to avoid reflection as such - I'm trying to avoid a business developer needing to know about or directly use reflection. Reflection is plumbing, and business developers shouldn't have to deal directly with plumbing. (at least that's one of my guiding principles)
 
 
One answer is for me to create a formal DAL concept in CSLA .NET. In that case I'd be truly entering the ORM space, and I'm not convinced that is wise. Though ObjectSpaces is dead and gone, I doubt Microsoft is entirely sitting still here (DLinq, for example, is a limited move in this area).
 
Another answer is for me to create a way by which you can have the data portal directly invoke the DAL, so the DAL can have a strong reference to the business object. This means that there's no reason at all for the business object to reference the DAL.
 
An interesting side-effect of this approach is that I (almost automatically) simplify testing, by allowing you to create a mock DAL - without having to do arcane things like mocking part or all of ADO.NET or anything. You just have to mock the 6 DataPortal_XYZ methods and use your mock assembly instead of the real one. That, of course, doesn't test the actual data access, but it makes it easy to test the loading and usage of your objects.
 
 
I am also specifically looking forward to DLinq. Whether it is the best answer or not, it is pretty clear that Anders isn't going to allow anything to get in the way of DLinq shipping. At the moment it is clear that DLinq doesn't support distributed computing well at all, nor does it have capabilities for loading complex collection types (like BusinessListBase-derived collections). So the question I'm wrestling with is how to implement an editable root collection with DLinq. I do think this can be addressed with CSLA as-is, but I want to explore the possibility that it might be simpler with an external, function-based DAL.
 
I honestly remain skeptical that this is a good idea - so I share the sentiment in your email :)  At the same time, I've been having a hard time deciding whether it is smart or not in a totally theoretical sense and so I've been playing with implementation.
 
If you look in CVS you won't find any changes along this line. That's because I don't like any of what I've done thus far. I may yet return to some formalized DAL scheme where the DAL is invoked from within the existing DataPortal_XYZ methods...
 
Rocky

Mountain replied on Wednesday, June 07, 2006

I would like to propose an idea. I do recognize that it may be too disruptive to incorporate into the main line CSLA code, but it may be something that we implement in a modified CSLA.

The idea is something I'll call "double-walled business objects" because each business object consists of two objects, each of which honors encapsulation. It's a variation on a DTO, but without the drawbacks (afaik). There is no copying of data from a DTO to a BO, no violation of encapsulation, etc. On top of that, it solves the problem with old vs. new references to BO upon an update (as described on page 81 of the C# book).

To implement this, each Xyz business class has a corresponding Xyz EntityData class (or struct) that represents all the data fields that would have gone into the Xyz business class. The EntityData object encapsulates the data fields, but not the business logic. The logic still goes into the Xyz business class, as before.

To comment briefly on how this works with updates, the inner Xyz EntityData object is replaced upon an update, but the wrapper around it, which is the original Xyz business object remains valid and doesn't have to be updated.

In this design, an appropriate EntityData object is an integral part of each business object. Each property of the business object "delegates" the data (only the data) to the EntityData object. (Therefore, no copying of data after it is fetched from the db.) The business object has a field of type Xyz EntityData called _entityData. Then each property on the business object looks like this (in C#):

public IReturnType SomeProperty
{
  get { return _entityData.SomeProperty; }
  set { _entityData.SomeProperty = value; }
}

In the code, it's simply a matter of changing "this" (or the equivalent VB keyword) to _entityData. This design has a nice effect of keeping all data fields grouped together, and it also works really well with code generation. We've been using something like this (without CSLA) for more than a year and it has worked out well.
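The whole pattern above can be put together in a self-contained sketch (class names invented for the example; this is not CSLA code):

```csharp
using System;

// The "inner wall": data fields only, no business logic.
public class ProjectData
{
  public string Name;
  public DateTime Started;
}

// The "outer wall": business logic, delegating all data storage
// to the contained ProjectData instance.
public class Project
{
  private ProjectData _entityData = new ProjectData();

  public string Name
  {
    get { return _entityData.Name; }
    set { _entityData.Name = value; }
  }

  public DateTime Started
  {
    get { return _entityData.Started; }
    set { _entityData.Started = value; }
  }

  // On an update only the inner data object is swapped; any code
  // holding a reference to this Project keeps a valid reference.
  public void ReplaceData(ProjectData fresh)
  {
    _entityData = fresh;
  }
}
```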

RockfordLhotka replied on Wednesday, June 07, 2006

Mountain,
 
This is very comparable to the approach I used in the VB5 and VB6 versions of CSLA. And it did work well - except that it prevents the direct/easy use of .NET's support for serialization and thus mobile objects.
 
Combining the various state objects from a complex object graph and then reassembling it requires a fair bit of work - most of which can be done in a framework of course. I had given this serious thought back in 1999 when I started working on the .NET version of CSLA - and it was compelling because it would have more closely followed the old VB/COM approach.
 
In the end however, I felt it was better to exploit the mobile object support provided by .NET and so I went with the approach used in the framework today. I don't regret that decision, but there's no doubt that a somewhat similar framework could have been built on that other concept.
 
Rocky


guyroch replied on Friday, June 09, 2006

RockfordLhotka:

While it is most certainly possible for you to call an external data access layer (DAL) from within an existing DataPortal_XYZ method, it does somewhat complicate things.
 
Either your business object assembly has a reference to the DAL, or the DAL has a reference to the business object assembly. But in that latter case, the DataPortal_XYZ method must use reflection to create the DAL object - which is the type of code no business developer should write (imo). Yet if the business assembly references the DAL, then the DAL has no easy way to load the business object, because it can't have a strongly typed reference to the business object.


Now you got me thinking... let me explain...

I've always been from the school of thought that in a true multi-tier application the rule of thumb is to have your UI consume your Business Logic layer and have your Business Logic layer consume your Data Access layer.  So it is quite natural to have your UI reference your BL assemblies, and in turn have your BL reference your DAL assemblies.

Now, while this is only a rule of thumb, there are _some_ cases where you need to go at it the other way around for specific operations.  Then you have a fighting chance of achieving this with reflection.

While I understand that this proposed change in CSLA 2.1 will not be a breaking change as it will be optional, I'm more concerned about how developers might abuse this flexibility without fully understanding the ramifications.

By that I mean that I don't believe it is the DAL's responsibility to load the Business Object, but rather the Business Object's responsibility to consume the DAL (any DAL) and load itself, wherever the data may be coming from.  Why am I from this school of thought?  A DAL should be application agnostic so that it can be reused for other applications.

Any thoughts?


ajj3085 replied on Friday, June 09, 2006

GuyRoch,

This has been my opinion for quite some time.  The Business layer is a consumer of the DAL.  The only wrinkle is that the DAL is usually just System.Data.SqlClient, which I believe to be a mistake.

A more formal DAL is needed, to give you the flexibility of changing your database (not that it happens very often, but if it does and you're using SqlClient, you'll likely have to hit all your BOs to make the change).

Andy

RockfordLhotka replied on Friday, June 09, 2006

guy,

The more time I've been spending on that particular idea, the less I'm liking it. Though I can do it and retain backward compatibility, it complicates the data portal rather a lot (well, "complicates" isn't right, so much as causes me to rewrite very large sections of it). But more importantly, it doesn't offer significant code savings.

The only benefit I've found is in exactly one scenario: where you want a polymorphic factory for your objects. In other words, suppose you have a set of three editable root objects which are polymorphic: A1, A2 and A3, all inheriting from A&lt;T&gt; (which in turn inherits from BusinessBase&lt;T&gt;) and implementing the interface IA. Then you want a common factory:

x = AFactory.GetA(blah)

Now suppose that you can't know which type of A to return until you are already talking to the database. It is data in the database that defines which A to instantiate.

Today the easiest answer is to use a command object, so AFactory.GetA() looks like this:

public static IA GetA(int id)
{
  return AGetter.GetA(id);
}

Where AGetter is a CommandBase-derived object that runs over to the server to do the data access to create and retrieve the appropriate type of A.
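Stripped of the data portal plumbing, the idea can be sketched like this. All names here are invented for the example; the type-code lookup stands in for the database query that AGetter would really run server-side inside a CommandBase-derived object.

```csharp
using System;

// The common interface all polymorphic roots share.
public interface IA { string Kind { get; } }
public class A1 : IA { public string Kind { get { return "A1"; } } }
public class A2 : IA { public string Kind { get { return "A2"; } } }

public static class AGetter
{
  // Stand-in for the database query that returns a type discriminator;
  // in the real pattern this runs on the server via the data portal.
  private static int LookupTypeCode(int id) { return id % 2 == 0 ? 1 : 2; }

  public static IA GetA(int id)
  {
    // Only after "talking to the database" do we know which A to build.
    switch (LookupTypeCode(id))
    {
      case 1: return new A1();
      default: return new A2();
    }
  }
}
```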

Of course the concept I proposed in the 2.1 change log doesn't really help to solve this :)  In fact solving this issue probably means putting the factory attribute on the Criteria object, or making you supply it as part of the DataPortal.Fetch() call or something. If on the criteria then the factory becomes:

public static IA GetA(int id)
{
  return DataPortal.Fetch<IA>(new Criteria(id));
}

The thing is, all I've done here is have you write a Criteria object instead of a Command object - and the amount of overall code is almost identical either way. Sure, this second way saves about 4 lines, but I'm not sure 4 lines is worth introducing a whole new model for working with the data portal. The resulting increase in the surface area of CSLA .NET means training/learning is harder, and that merely reduces productivity.

In short, I am leaning more and more AWAY from the idea of having the data portal directly invoke a DAL. Instead I may return to my very original thoughts around having the business object call a more formalized DAL concept.

colinjack replied on Monday, July 10, 2006

guyroch:

By that I mean that I don't believe that it is the DAL's responsibility to load the Business Object but rather the Business Object's responsibility to consume the DAL (any DAL) and load itself, wherever the data may be coming from.  A DAL should be application agnostic so that it can be reused for other applications.

Any thoughts?

Most OO books I've read teach you not to include any data access code in your business objects as the data access code will vary for different reasons from the business code (plus it doesn't help coupling/cohesion).

Therefore your business objects shouldn't directly interact with the DAL and, if anything,  they would call to other objects (data mappers or whatever) to interact with the data layer.

However I wonder if a better approach is to keep the two things separate, so your business objects don't directly use the data layer and don't even access the data mapper classes (they just concentrate on being business objects). I'd hope to have one data mapper class for each type of business object; it would know about the business object, but the business object wouldn't know about it.

It seems like this is possible with the way CSLA does things (through the Data Portal and using reflection), but I've read that the calls (e.g. _Fetch) need to go through the business objects, so I'm not sure it's possible.

ajj3085 replied on Monday, July 10, 2006

There's no reason you couldn't use Csla and an O/R mapper for the data access.  Yes, the business object ends up still running after the DataPortal call, so it would likely need some knowledge of how to persist itself, but that shouldn't be a huge deal.  After all, if you're swapping out the data access code, you'll need to make some changes no matter what.

The only alternative I can think of is that the data layer knows about the BOs - a less desirable situation, I would think.

colinjack replied on Tuesday, July 11, 2006

I haven't gotten into the data portal in detail, but I'd have imagined it would be possible to configure which class handles the persistence-related calls for a particular type of business object.

So in a configuration file I'd say mapperA handles persistence calls for classA; mapperA would then have to know about classA, but classA wouldn't need to know about persistence at all. I realise it isn't that bad for the business entity to call out to the data mapper, and that could be the default, but I'd also like the option of not having the business entity involved in its CRUD stuff at all.

RockfordLhotka replied on Tuesday, July 11, 2006

On the surface this would seem to be true. And for 2.1 I'd considered making a change along this line.
 
However, it is actually more complex. The data portal looks at the method to be called a couple of times - once on the client to support the RunLocal attribute (which would be entirely impossible with the new scheme) and a couple of times on the server (to determine whether the method wants transactions, and then to actually invoke it).
 
While making such a change would certainly be possible, I ultimately decided it just didn't have enough value to be worth all the effort.
 
In the end, you need to (or at least I need to) have a very clearly defined scenario where directly calling a DAL (rather than having a DataPortal_XYZ method call a DAL) provides substantial code savings or clarity. I have yet to find a scenario where there's any substantial code savings - and usually it is the reverse, where more code is required on your part (because with a separate DAL the data portal can't create the instance of the business object, and with a private ctor you are left writing your own reflection code to determine the business object type and create it).
 
The one remaining possibility is a viable/useful scenario where a separate DAL would allow for some functionality that you can't get by having the data call route through the root object. The only cases along this line I've come up with involve various levels of loose typing or polymorphic behavior. And these are interesting, but I've not convinced myself they are worth such deep, fundamental changes to the data portal.
 
One such example is this: you want a root object, but you don't know what kind. So you call the data portal and get back something that matches a common interface. This is possible because the DAL makes the determination of what type of object to create based on the criteria. Pretty cool!
 
Of course I could enable this in a simpler way, without the need for radical data portal changes... If I make CriteriaBase.ObjectType virtual/Overridable, then you'd just have your criteria inherit from CriteriaBase, and when ObjectType is invoked you'd do whatever you need to do to determine the type of object that should actually be created. I do think that's an odd solution, but it would work :)
 
In the end though, the question is whether it is worth the effort involved to change/test/support (it is the support one that's hard for me to swallow) the changes to the data portal to call an external DAL:
 
1) in client-side data portal, skip all RunLocal processing in this case
 
2) in Server\DataPortal, detect this case for transactional attribute checking (instead of looking at DataPortal_XYZ)
 
3) in Server\SimpleDataPortal, detect this case and create/invoke the DAL object instead of the business object
 
4) do I provide helper support for your DAL code? Like to determine the type of the business object and create it like the data portal does today? How about loading the private fields - do I provide reflection wrappers for you? I think so, because (especially in C#) reflection is a PITA, and even in VB if I wrap the calls I could eventually make them more efficient in a transparent way (Reflection.Emit, etc.)
 
 
I'd originally considered this change to make some percentage of people happy. To me there is great humor here, because in my VB5-6 framework I had a separate DAL, with no option for total encapsulation. This made some people happy, and others unhappy - most were neutral. In the .NET framework I have total encapsulation with (I think) a reasonable option for a separate DAL. This made some people happy, and others unhappy - most are neutral.
 
To misquote Abe Lincoln: you can make some of the people happy, some of the time... :)
 
 
So my CURRENT thinking is this: if you want a "separate" DAL, subclass BusinessBase<T> and provide implementations of the DataPortal_XYZ methods that invoke the DAL (using dynamic assembly loading based on a config file, etc). Then in your actual business classes, you never need to implement the DataPortal_XYZ methods. The base class implementations are always invoked, calling the DAL on your behalf.
 
The end result is that your business object code always perceives the use of a separate DAL, and never includes the data code. So the desired separation is achieved, within the context of the existing framework.
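That suggestion might look roughly like this self-contained sketch. A trivial stub stands in for CSLA's BusinessBase&lt;T&gt; so the code runs on its own, and all type names (DalDrivenBase, IDal, CustomerDal, Customer) are invented; in a real app the DAL type name would come from a config file.

```csharp
using System;

// Hypothetical DAL contract the intermediate base class calls into.
public interface IDal
{
  void Fetch(object businessObject, object criteria);
}

// Stand-in for CSLA's BusinessBase<T>, just enough to run the sketch.
public abstract class BusinessBaseStub
{
  protected virtual void DataPortal_Fetch(object criteria) { }
  public void SimulateFetch(object criteria) { DataPortal_Fetch(criteria); }
}

// The base class business classes would inherit instead of
// BusinessBase<T>. It implements DataPortal_Fetch once, loading the
// DAL dynamically (the type name would come from config).
public abstract class DalDrivenBase : BusinessBaseStub
{
  public static string DalTypeName = "CustomerDal";

  protected override void DataPortal_Fetch(object criteria)
  {
    IDal dal = (IDal)Activator.CreateInstance(Type.GetType(DalTypeName));
    dal.Fetch(this, criteria);
  }
}

// A separate DAL that has a strong reference to the business type.
public class CustomerDal : IDal
{
  public void Fetch(object businessObject, object criteria)
  {
    ((Customer)businessObject).Name = "Loaded " + criteria;
  }
}

// The business class never implements DataPortal_XYZ itself.
public class Customer : DalDrivenBase
{
  public string Name;
}
```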
 
Rocky



Fabio replied on Tuesday, July 11, 2006

RockfordLhotka:
So my CURRENT thinking is this: if you want a "separate" DAL, subclass BusinessBase<T> and provide implementations of the DataPortal_XYZ methods that invoke the DAL (using dynamic assembly loading based on a config file, etc). Then in your actual business classes, you never need to implement the DataPortal_XYZ methods. The base class implementations are always invoked, calling the DAL on your behalf.
 
The end result is that your business object code always perceives the use of a separate DAL, and never includes the data code. So the desired separation is achieved, within the context of the existing framework.


YES MAN!
With the actual implementation we can work with or without a DAL.
I'm using CSLA with NHibernate in one application.
The problem with many frameworks, though not CSLA, is understanding that a collection IS a BO and is NOT only a set of BOs... so using another persistence framework we need some overhead... :( such is life....



Jav replied on Wednesday, June 07, 2006

Those are fantastic enhancements. Can't wait.  Thank you.

Jav

Mark replied on Wednesday, June 07, 2006

Will you be able to mix-and-match with the validation rules?  For example, store 10 rules statically and 2 rules per-instance?  I like the idea of storing <some> rules statically.  I just don't want to have to store all my rules that way (mainly the validation routines that aren't part of 'CommonRules').

Take the following rule from ProjectTracker...

private bool StartDateGTEndDate(object target, Csla.Validation.RuleArgs e)
{
  if (_started > _ended)
  {
    e.Description = "Start date can't be after end date";
    return false;
  }
  return true;
}

Obviously, this rule couldn't be stored statically as-is since it references instance variables.  But the other rules (StringRequired, StringMaxLength) could be stored statically, since they already use reflection. 

Anyway - the proposed changes for 2.1 do sound great...

RockfordLhotka replied on Wednesday, June 07, 2006

 

Mark:

Will you be able to mix-and-match with the validation rules?  For example, store 10 rules statically and 2 rules per-instance?  I like the idea of storing <some> rules statically.  I just don't want to have to store all my rules that way (mainly the validation routines that aren't part of 'CommonRules').

 

This is under consideration. My primary concern is whether it makes the usage too complex. I can just see myself spending a lot of time over the next few years explaining to people why their validation rules all run twice - and it is because they did the association at both the shared and instance levels.

To do the dual-scheme approach, ValidationRules would simply do what it already does at the instance level, plus run rules from an associated shared list at the type level. Easy enough for me to do, but it is the resulting complexity/confusion that I fear.

Thoughts?

Rocky

Mark replied on Wednesday, June 07, 2006

Is it possible to keep the developer ignorant of how/where the rules are stored?  Can you use reflection to check the method signature for the RuleHandler that's passed during AddRule?  If it's a static method, you would add it to the static list of ValidationRules.  If it's an instance method, you'd add it to the instance list of ValidationRules.

If this is possible, we'd have the best of both worlds without having to worry about the behind-the-scenes details...
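Checking whether a delegate points at a static method is straightforward with reflection. A minimal sketch (RuleInspector is an invented name; in the proposal this check would live inside AddRule):

```csharp
using System;

// Invented helper name - in CSLA this check would sit inside AddRule.
public static class RuleInspector
{
  // Delegate.Method exposes the MethodInfo of the target method;
  // IsStatic says whether the rule could go in the per-type list.
  public static bool IsStaticRule(Delegate ruleHandler)
  {
    return ruleHandler.Method.IsStatic;
  }
}
```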

ajj3085 replied on Wednesday, June 07, 2006

Perhaps a cleaner break between static and instance rules.

If you want to add an instance rule, you do what you'd normally do.  For static, you would have to add them to SharedValidationRules.

Perhaps the shared version could even check to make sure the RuleMethod is a static method somehow.

RockfordLhotka replied on Wednesday, June 07, 2006

Yes ajj, I think you are on to something.
 
So Validation Rules would look like this:
 
static MyObject()
{
  Csla.Validation.SharedValidationRules.AddRule(myRule, typeof(MyObject), "Name");
}
 
protected override void AddBusinessRules()
{
  this.ValidationRules.AddRule(otherRule, "Name");
}
 
Much cleaner, and it allows the dual-mode approach. I am still nervous about confusion, but this is clearer.
 
Rocky

RockfordLhotka replied on Wednesday, June 07, 2006

The problem is that a "rule method" is defined not just by the method, but also by the RuleArgs. You can associate the same rule method with a property multiple times, with different argument parameters, and that qualifies as a set of different "rule methods". Due to this, it would be relatively difficult (expensive in terms of performance) to do a set of comparisons against existing rules every time AddRule() is called.
 
Not impossible of course, but I'm not sure it is wise. To do the comparison I have to instantiate the RuleMethod object so it can generate its unique RuleName, and then see if that rule is already in the list of rules for the property. Doing this would, I think, defeat the whole reason for enabling type-level rules in the first place, which is to allow the rules to be associated once per type rather than once per instance for performance reasons.
 
 
The trigger isn't so much whether the _method_ is static or not, but whether you want a custom rule for a specific instance that is different from the rules for the type as a whole.
 
It is true that only static methods are allowed at the per-type level. But at the per-instance level both instance and static methods are fine - and there could be valid reasons for both, in the case that you are customizing the rules on a per-instance basis.
 
 
Rocky


From: Mark [mailto:cslanet@lhotka.net]
Sent: Wednesday, June 07, 2006 2:05 PM
To: rocky@lhotka.net
Subject: Re: [CSLA .NET] RE: Version 2.1 comments

Is it possible to keep the developer ignorant of how/where the rules are stored?  Can you use reflection to check the method signature for the RuleHandler that's passed during AddRule?  If it's a static method, you would add it to the static list of ValidationRules.  If it's an instance method, you'd add it to the instance list of ValidationRules.

If this is possible, we'd have the best of both worlds without having to worry about the behind-the-scenes details...
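Mechanically, the routing Mark describes is cheap, because a delegate already carries its MethodInfo. A hypothetical sketch (not CSLA code):

```csharp
// Hypothetical sketch of Mark's idea, not CSLA code: route a rule to the
// per-type or per-instance list based on whether the delegate wraps a
// static method.
using System.Collections.Generic;

public delegate bool RuleHandler(object target, object e);

public class RuleRouter
{
  private List<RuleHandler> _typeRules = new List<RuleHandler>();
  private List<RuleHandler> _instanceRules = new List<RuleHandler>();

  public void AddRule(RuleHandler handler)
  {
    // A delegate over a static method has Method.IsStatic == true
    // (equivalently, handler.Target == null).
    if (handler.Method.IsStatic)
      _typeRules.Add(handler);      // shared, per-type list
    else
      _instanceRules.Add(handler);  // per-instance list
  }
}
```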

Mark replied on Wednesday, June 07, 2006

Whatever the solution is - it does need to be kept simple and straightforward and (fairly) idiot-proof.  Based on your sample in the 2.1 change log, what would happen if a developer were to do this...

private static ValidationRulesManager _validationRulesMgr = new ValidationRulesManager();

static MyBusinessObject()
{
  _validationRulesMgr.AddRule(CommonRules.StringRequired, "Name");
  _validationRulesMgr.AddRule(CommonRules.StringRequired, "City");
}

protected override void AddBusinessRules()
{
   ValidationRules.UseRulesManager(_validationRulesMgr);
   ValidationRules.AddRule(StartDateGTEndDate, "Started");
   ValidationRules.AddRule(StartDateGTEndDate, "Ended");
}

private bool StartDateGTEndDate(object target, Csla.Validation.RuleArgs e)
{
   if (_started > _ended)
   {
      e.Description = "Start date can't be after end date";
      return false;
   }
   else
     return true;
}

This is definitely a mix-and-match situation and at first glance would appear to work just fine.  :-)  In fact, this is what I'd like to see work.  Just not sure if it's realistic.

ajj3085 replied on Wednesday, June 07, 2006

Perhaps this minor tweak, so that BO developers needn't worry about managing the creation or storage of validationRulesManager. 

validationRules = Csla.Validation.ValidationRules.GetRulesManager<myBO>();

Behind the scenes, there would be a static Dictionary in ValidationRules to keep a singleton reference to the ValidationRulesManager.

Thoughts?
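A minimal sketch of what that could look like behind the scenes (hypothetical; ValidationRulesManager is just a placeholder here, not the real CSLA class):

```csharp
// Hypothetical sketch of the singleton-per-type idea.
using System;
using System.Collections.Generic;

public class ValidationRulesManager { /* holds the per-type rules */ }

public static class SharedRules
{
  private static Dictionary<Type, ValidationRulesManager> _managers =
    new Dictionary<Type, ValidationRulesManager>();
  private static object _sync = new object();

  // One manager per business object type, created on first request.
  public static ValidationRulesManager GetRulesManager<T>()
  {
    lock (_sync)
    {
      ValidationRulesManager mgr;
      if (!_managers.TryGetValue(typeof(T), out mgr))
      {
        mgr = new ValidationRulesManager();
        _managers.Add(typeof(T), mgr);
      }
      return mgr;
    }
  }
}
```

The lock guards against two threads creating managers for the same type concurrently.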

Also Rocky, I have to give you a lot of credit for how often you're on the forum.  You're here quite a bit and offer some great advice.  Personally, if I were in your position I'd probably be saying "That's answered in Chapter 3, if you buy the book" and leave it at that. ;)

Andy

RockfordLhotka replied on Wednesday, June 07, 2006

ajj, Check out the updated 2.1 change log
 
http://www.lhotka.net/Articles.aspx?id=f12cc951-0452-42d1-96a6-cfa7656863b1
 
I think I've got it pretty well abstracted now - to the point that management of all plumbing (like ValidationRulesManager objects) is entirely encapsulated. In fact I've changed ValidationRulesManager from public to internal, as there's no longer any need for a business developer to see it.
 
Rocky
 
p.s. I really do hope people buy the book - but I want people to be successful too, as I expect more success leads to more goodwill, which maybe leads to more word-of-mouth sales of the books :)

Mark replied on Wednesday, June 07, 2006

Excellent! We get the best of both worlds, the most flexibility, without breaking existing code.

<bowing to the ground> We're not worthy... We're not worthy... :D

PatrickVD replied on Monday, July 10, 2006

Hi,

As the use of the 'shared/static' validation rules seems to 'stabilize', I'd like to throw another concept into the discussion.

I was recently reading an article in 'Visual Studio Magazine' about 'Validate Business Objects Declaratively' (for the record, this was in the June 2006 edition).
As I was reading this, I kinda immediately started thinking on how this could also be implemented in CSLA...
Hasn't anybody considered using declarative (attribute-based) validation rules in CSLA yet?

I know this would require the use of reflection, but the cost of using reflection to 'load' the shared validation rules (via the attributes) would be paid only once per type, so it should not be nearly as expensive as it would be for 'instance-based' validation rules...

I'd like to hear what the 'Csla-fans' think about this...

Regards,

Patrick.


david.wendelken replied on Tuesday, July 11, 2006


The thing that got me excited was the examples. Here are some (with apologies for syntax boo-boos - I haven't had time to learn the syntax yet):

[Required=true]
[MaxLength=128]
[Label="Project Name"]
[Hint="This is the official name for the project."]
[RegEx=""]
[DropDownList="ProjectNameValue"]

or

[FieldProperties Required=true MaxLength=128 Label="Project Name" Hint="This is the official name for the project." RegEx="" DropDownList="ProjectNameValue"]

The latter might be more efficient to pull out of the object.

Others might be:

[case="upper"]   // or mixed or lower

for numbers:

[scale=5]
[precision=7]

And yes, some of the above could be done with regular expressions.   And I could get my tooth pulled by my neighbor using a pair of pliers with a shot of bourbon for an anesthetic, too. :) 

To me, the idea behind this is to make simple things very simple and very quick.
Most of the suggested tags could be deduced straight from the data model 95% of the time, which means code generation support would be easy to implement.  (The rest could be supplemented with some additional meta-data that the code generator could access.  I'm experimenting with extended properties in SqlServer for this purpose.)
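For illustration, a minimal sketch of the attribute idea (the attribute, the Project class, and the loader are hypothetical, not part of CSLA; Console.WriteLine stands in for the real AddRule calls):

```csharp
// Hypothetical sketch of attribute-driven rule loading.
using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public class FieldPropertiesAttribute : Attribute
{
  public bool Required;
  public int MaxLength;
  public string Label;
}

public class Project
{
  private string _name;

  [FieldProperties(Required = true, MaxLength = 128, Label = "Project Name")]
  public string Name
  {
    get { return _name; }
    set { _name = value; }
  }
}

public static class AttributeRuleLoader
{
  // Reflect over the type once and translate each attribute into the
  // equivalent per-type rule registrations (printed here as stand-ins).
  public static void LoadRules(Type businessType)
  {
    foreach (PropertyInfo prop in businessType.GetProperties())
    {
      object[] attrs = prop.GetCustomAttributes(
        typeof(FieldPropertiesAttribute), true);
      foreach (FieldPropertiesAttribute fp in attrs)
      {
        if (fp.Required)
          Console.WriteLine("AddRule(StringRequired, \"" + prop.Name + "\")");
        if (fp.MaxLength > 0)
          Console.WriteLine("AddRule(StringMaxLength, \"" + prop.Name +
            "\", " + fp.MaxLength + ")");
      }
    }
  }
}
```

Because the reflection runs once per type (e.g. from the shared constructor), its cost is amortized across all instances - exactly the property Patrick notes.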

xal replied on Wednesday, June 07, 2006

Rocky, there is an issue with your current implementation:
If you have an intermediate class between BusinessBase and your BO that also defines rules, those rules will be added to the intermediate type's own type rules rather than to your BO's type rules, and they will never run for your BO.


Now, in order to get this to work there are some things to consider: (I have been working on this all day long...)

-Shared/static constructors aren't called in any particular order.
-Given the above, the base class' rules might be adding themselves after the derived class' rules, so we can't rely on the cache in mManagers to get the rules for the base class.


Here are the necessary changes for everything to work as expected:
In SharedValidationRules:


Friend Function GetManager(ByVal objectType As Type) As ValidationRulesManager

  If mManagers.ContainsKey(objectType) Then
    Return mManagers.Item(objectType)
  End If
  EnsureTypeInitialization(objectType.BaseType)

  If mManagers.ContainsKey(objectType.BaseType) Then
    Dim rules As ValidationRulesManager
    rules = New ValidationRulesManager(mManagers.Item(objectType.BaseType))
    mManagers.Add(objectType, rules)
  Else
    mManagers.Add(objectType, New ValidationRulesManager)
  End If
  Return mManagers.Item(objectType)
End Function

Private Sub EnsureTypeInitialization(ByVal t As Type)

  If t.Equals(GetType(Object)) Then Return

  'Make sure the base type is initialized.
  EnsureTypeInitialization(t.BaseType)

  'Check for static fields.
  Dim fields() As Reflection.FieldInfo = _
    t.GetFields(Reflection.BindingFlags.Static Or Reflection.BindingFlags.NonPublic)
  If fields.Length = 0 Then
    'If the type has no static fields, we can safely run its shared constructor.
    System.Runtime.CompilerServices.RuntimeHelpers.RunClassConstructor(t.TypeHandle)
  Else
    'If the type has static fields, running their initializers again could
    'reset field values. To avoid that, reading a field's value is enough
    'to force type initialization exactly once.
    fields(0).GetValue(Nothing)
  End If
End Sub



The ValidationRulesManager should have a copy constructor that lets it copy the rules from another manager, in addition to the standard public one:


Friend Sub New(ByVal Rules As ValidationRulesManager)
  Dim enumerator As Generic.Dictionary(Of String, List(Of RuleMethod)).Enumerator
  enumerator = Rules.mRulesList.GetEnumerator()
  While enumerator.MoveNext()
    Me.RulesList.Add(enumerator.Current.Key, enumerator.Current.Value)
  End While
End Sub

Public Sub New()

End Sub



I have tested this and it works as expected, but it could use some more testing...
I'll be creating a new thread discussing other things about how the rules could be managed.


Andrés

xal replied on Wednesday, June 07, 2006

Rocky,
This is a great addition, thank you for adding this.

As for the instance rules, my opinion is that now that we have the possibility to use type rules, it's best to stay away from instance rules:
-They make the code slower. This is especially true for large objects with many children.
-Rules really apply to the type. You should not selectively add rules based on an object's data; you should always add all the rules for your object (or now, type) and let each rule take care of itself. I think this because it's easier to maintain and less error prone.

Again, this is just my opinion. There is room for discussion.

Andrés

Mark replied on Wednesday, June 07, 2006

I tend to agree - validation rules should primarily be listed by type rather than by instance.  The rules themselves should be aware of whether they're needed (or not).

On the security side of the house (AuthorizationRules) - would there ever be a need to have AuthorizationRules listed by instance rather than by type (other than to keep backwards compatibility)?  You're typically not checking instance data here (I never have anyway) - you're just associating roles and access rights with property names, which are obviously part of the type definition.  Still, I guess it's nice to have the flexibility to go per-instance if needed.

ajj3085 replied on Thursday, June 08, 2006

Well, I don't think there's a problem keeping the instance validation rules; if you don't use them there's no overhead - that's how they're designed - and I don't think adding static (shared) rules changes this.

I would hate to have the option of instance rules removed only to come across a legitimate case where I need them.

I like where Rocky is going with the shared rules; I think it's pretty clear: SharedValidationRules and ValidationRules.  XML comments could clearly state the difference between the two.  Some people may still have some confusion, but we could politely send them to a FAQ (or better yet the book) to explain it.

Andy

MBonner replied on Thursday, June 08, 2006

We have business models where I think we'd need AuthorizationRules by instance.  We oversee grants to various agencies within our state, and each agency has a service area.  Generally one agency services one county.  For some of the small counties, one agency may service multiple counties.  For some of the larger counties, multiple agencies may service a single county, delegated by zip-code regions.  In the latter case all agencies within a single county can view all the county data but can only update information in their own delegated regions.

RockfordLhotka replied on Friday, June 09, 2006

I view validation and authorization rules as requiring parity. Though I haven't gotten to auth rules yet, I expect to implement them in the same way I did validation rules - so both per-type and per-instance are intermixed.
 
The primary difference is that you can override the way rules are handled - and when I make this change I'll break anyone who has overridden the CSLA behaviors - because there'll now be (potentially) multiple lists to check for each of allow/deny read/write. But that can't be helped, and I doubt too many people will have gone there... (or so I hope)
 
Rocky



xal replied on Saturday, June 10, 2006

Rocky, I've been rethinking a bit about the way the rules are loaded.
I remember that at some point we discussed the possibility of having a cache in businessbase to solve the inheritance of rules issue. But then at some point we started talking about loading them in the cctor and that complicated matters quite a bit.

We're now loading rules inside AddBusinessRules and in cctor. And when we start validating rules, we are walking up the inheritance chain to discover inherited rules.
I remember you said "I really don't want to be adding rules in Initialize()" - how is adding them in the constructor any different? It does complicate things, and it's even more complex if you're using code generation.

So, consider this:
How about having some method in BusinessBase like "AddSharedBusinessRules()" that would only be called if the shared rules were not yet loaded (which is how this whole thing started)? BusinessBase could call SharedRules.Exists(GetType(Me)) to find out whether the rules were loaded, and if not, call AddSharedBusinessRules.

The derived classes would do:

Protected Sub AddSharedBusinessRules()
    MyBase.AddSharedBusinessRules()
   
End Sub

Also, since the base classes now know the actual type, we can create some methods to add shared rules that don't require passing the type, which makes things even simpler and exactly the same as in AddBusinessRules.

What do you think?

Andrés

xal replied on Monday, June 12, 2006

Hi Rocky!
There's a bug in ValidationRules.vb line 57:
SharedValidationRules.GetManager(currentType.GetType, False)

should be:
SharedValidationRules.GetManager(currentType, False)


Have you given any thought to my previous post? There is an added value in that: overriding rules. Since the validation methods are not overridable (because they are shared), you can effectively override the rules defined in the base class by not calling MyBase.AddSharedBusinessRules(). I'm not sure why anyone would want that, but who knows?


Andrés

RockfordLhotka replied on Monday, June 12, 2006

I did fix the bug in cvs, thanks.
 
I haven't had time to work on this further, and won't for a couple weeks due to travel and other work. And I want to get 2.0.2 out soon, as it includes a couple important bug fixes.
 
Rocky





RockfordLhotka replied on Wednesday, June 07, 2006

OK, I think I follow, though I wonder if there isn't another answer.
 
What you want, if I understand correctly, is to allow a base class to define a set of per-type rules, and then a subclass to define a set of per-type rules and have both sets of rules checked by CheckRules()?
 
Why not just have CheckRules() walk up the inheritance hierarchy to run each set of rules? In other words, why go through the work of touching each type during initialization, and copying collection items around, when we can just store the collection items by type and execute them as needed?
 
This could be optimized a bit by having ValidationRules assemble a List(Of ValidationRulesManager) in the TypeRules property, so it would just keep direct references to all the ValidationRulesManager objects in its inheritance hierarchy. This would avoid any need to forcibly trigger the static constructors on other types - because by the time your object has been instantiated you are guaranteed that those ctors will have run, and TypeRules could find them.
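The approach Rocky describes might be sketched like this (simplified, hypothetical types, not the actual CSLA implementation):

```csharp
// Simplified, hypothetical sketch of per-type rule lookup via the
// inheritance hierarchy.
using System;
using System.Collections.Generic;

public class ValidationRulesManager { /* per-type rule storage */ }

public static class SharedValidationRules
{
  private static Dictionary<Type, ValidationRulesManager> _managers =
    new Dictionary<Type, ValidationRulesManager>();

  public static bool RulesExistFor(Type t)
  { return _managers.ContainsKey(t); }

  public static ValidationRulesManager GetManager(Type t)
  { return _managers[t]; }
}

public class ValidationRules
{
  private Type _objectType;
  private List<ValidationRulesManager> _typeRules;

  public ValidationRules(Type objectType) { _objectType = objectType; }

  // On first access, walk from the concrete type up toward object and
  // collect every level that registered per-type rules; afterwards the
  // list is cached and no walking occurs on later CheckRules() calls.
  public List<ValidationRulesManager> TypeRules
  {
    get
    {
      if (_typeRules == null)
      {
        _typeRules = new List<ValidationRulesManager>();
        for (Type t = _objectType; t != typeof(object); t = t.BaseType)
          if (SharedValidationRules.RulesExistFor(t))
            _typeRules.Add(SharedValidationRules.GetManager(t));
      }
      return _typeRules;
    }
  }
}
```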
 
Rocky




xal replied on Wednesday, June 07, 2006

Sure! That's just the way I thought of it... you always seem to come up with a better solution! :D

Now, we wouldn't be walking up the hierarchy every time CheckRules is called, right?
Maybe adding a boolean "IHaveAlreadyDoneTheWalkingUpTheHierarchyThing" field could help.


Andrés

RockfordLhotka replied on Wednesday, June 07, 2006

Nope, no walking up the hierarchy the way I did it. When the per-type rules are invoked the first time on a given type the walking is done to assemble a list of all the ValidationRulesManager objects for that hierarchy (which will usually be just one - or perhaps two I should think). After that, the lists are already known and are readily available.
 
Rocky


ajj3085 replied on Thursday, June 08, 2006

RockfordLhotka:
ajj, Check out the updated 2.1 change log
 
http://www.lhotka.net/Articles.aspx?id=f12cc951-0452-42d1-96a6-cfa7656863b1
 
I think I've got it pretty well abstracted now - to the point that management of all plumbing (like ValidationRulesManager objects) is entirely encapsulated. In fact I've changed ValidationRulesManager from public to internal, as there's no longer any need for a business developer to see it.


Looks good to me.

RockfordLhotka:
 
p.s. I really do hope people buy the book - but I want people to be successful too, as I expect more success leads to more goodwill, which maybe leads to more word-of-mouth sales of the books :)


Well, your strategy seems to be working; I'm always encouraging people to check out the book and the framework.  It definitely helps me 'think object oriented.'  Can't wait to see what might be in version 3.

BARRY4679 replied on Wednesday, June 07, 2006

Hi Rocky,

sounds good. On the subject of authorisation, it would be good to have a static way of getting at CanReadProperty and CanWriteProperty information, as there is with CanGetObject etc.

The reason is that in a child|parent situation, where the parent may or may not have children, we can't set up the UI completely until we hit a parent that actually does have children. This is because databinding doesn't create the child object until that time, I presume.

On the subject of the optional data portal - would that mean the framework would become more easily applicable where there is no requirement to persist the objects to a database? I am working on something at the moment which has a number of classes which get created and used, but are never persisted. I didn't use any of CSLA because I couldn't get my head around removing the data portal. It would be nice to have all the business rules, serialisation (for backtracking mementos), clone support, data binding support, etc.

I suppose it would have been doable, but it seemed too hard at the time. It would be good to have a "no portal" option out of the box, IMO.

RockfordLhotka replied on Wednesday, June 07, 2006

Barry, your need is easily met today without any change to the data portal. You just don't use it - that's all.

[Serializable]
public class MySimpleObject : BusinessBase<MySimpleObject>
{
  // properties as normal
  // validation rules as normal
  // authorization rules as normal

  public static MySimpleObject NewObject()
  {
    return new MySimpleObject();
  }

  private MySimpleObject()
  {
    // initialize object here
    ValidationRules.CheckRules();
  }
}

That's it. You can opt to skip the factory method and make the ctor public if you want - that's up to you and how you wrestle with the demons of consistency vs reduction of code.

Since the object doesn't persist you don't need a "Get" factory. The only things you might consider doing are overriding Save() and Delete() (in 2.0.2) to immediately throw exceptions - or perhaps shadowing them to hide them (yuck!).

Mountain replied on Wednesday, June 07, 2006

On page 175 of the C# book, you discuss the different ways the framework can invoke the 10 DataPortal_XYZ methods [Table 4-10] on the business objects: 1) interface or abstract base class, 2) virtual methods in base class, 3) reflection.

I would really like to explore the possibility of using interfaces for this purpose. Therefore, I would like to make some comments about the 3 interface-related points you list in the book and see if you think they make sense.

The three paraphrased statements from the book are in bold, and my comments are in standard font below.

Using an interface isn't ideal because not all classes will implement all 10 DataPortal_XYZ methods:

First, I think it is equally less than ideal to use reflection in this situation, because if some class has not implemented a DataPortal_XYZ method when it should have, this will not be discovered until run time, when it could have been discovered at compile time.

Now, more directly to the point, it seems to make a lot of sense to me to break the 10 DataPortal_XYZ methods into groups.

I believe the 3 event-related methods (_OnDataPortalInvoke, _OnDataPortalInvokeComplete, _OnDataPortalException) would be ideal as virtual base class methods. Incurring the overhead of reflection for these methods doesn’t make sense to me in any case. Am I overlooking something here?

There is a group of DataPortal_XYZ methods that satisfies read only business objects. This group consists of the single DataPortal_Fetch method. An interface can be clearly defined for read only business objects.

There is a second group of DataPortal_XYZ methods that satisfies the needs of editable root objects. This group consists of the DataPortal_Fetch and DataPortal_Update methods. This second interface would inherit from the first one and define one more method.

I believe there is a third group of DataPortal_XYZ methods that satisfies the needs of all other business objects. This group consists of:

·        DataPortal_Create

·        DataPortal_Fetch

·        DataPortal_Insert

·        DataPortal_Update

·        DataPortal_DeleteSelf

·        DataPortal_Delete

The interface would inherit from the prior interface (for my second group) and add 4 methods.

Finally, there is an interface for Command Objects that include DataPortal_Execute.

If I have overlooked something, then maybe the interfaces can be made more granular. I would even be OK if each interface contained only 1 or 2 methods, and there were some convenience interfaces that 'assembled' the interfaces that are commonly used together.

When a business developer implements a business object, he or she will definitely know which DataPortal_XYZ methods are required by that business object. Therefore, it will be trivial to select the correct interface. In fact, in Visual Studio, using the interfaces makes implementing the methods even easier (because the IDE writes the stubs).

By using appropriately granular interfaces, we eliminate the objection stated in the book and gain the benefit of more robust, clear code.
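Mountain's grouping could be expressed as an interface hierarchy along these lines (illustrative names only; these interfaces are not part of CSLA):

```csharp
// Illustrative interface grouping for the DataPortal_XYZ methods;
// these interfaces are not part of CSLA.
public interface IReadOnlyPersist
{
  // Group 1: read-only objects only need Fetch.
  void DataPortal_Fetch(object criteria);
}

public interface IEditableRootPersist : IReadOnlyPersist
{
  // Group 2: editable root objects add Update.
  void DataPortal_Update();
}

public interface IEditablePersist : IEditableRootPersist
{
  // Group 3: fully editable objects add the remaining four methods.
  void DataPortal_Create(object criteria);
  void DataPortal_Insert();
  void DataPortal_DeleteSelf();
  void DataPortal_Delete(object criteria);
}

public interface ICommandPersist
{
  // Command objects.
  void DataPortal_Execute();
}
```

Note that this is exactly where the loose-typing objection bites: the object criteria parameters are the generic base types the book warns about.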

Defining the DataPortal_XYZ methods at such an abstract level prevents the use of strong typing. Since the data types of the parameters being passed to the server by the client are defined by the business application, there’s no way the framework can anticipate all the types – meaning that the parameters must be passed as type object or other very generic base type:

Is there a reason that generics wouldn’t solve this issue? I would think that generics would be an easy, elegant solution to this. (If generics can't completely solve it, then maybe a bit of lightweight reflection might be appropriate in places, in combination with the use of interfaces. Light (and fast) reflection includes most of the type operations. Heavy-weight reflection is the stuff required to find methods, etc., and the use of interfaces will eliminate the need for that heavy-weight reflection while also improving the clarity and robustness of the business code.)

The DataPortal_XYZ methods end up being publicly available, so a UI developer could call them:

Is there a reason that internal interfaces and Friend assemblies wouldn’t solve this? I think that would be really easy and really elegant. (If that isn't a complete solution, blending in the use of some abstract base classes might be the next step, in combination with some of the interfaces.)

RockfordLhotka replied on Wednesday, June 07, 2006

Mountain,

I honestly don't view the use of reflection as an issue. I fully recognize that some people do, but given how it is used within the context of the data portal (especially in any remote setting) it is the least of the overhead worries we face.

More importantly at this point, is that changing to use interfaces would be a major breaking change that would impact every root object in every application out there. Everyone would have to touch every existing DataPortal_XYZ method to conform to the new model - and I am just not willing to break compatibility to such a degree.

Also, I don't think generics offer any answer to the loose typing issue. The data portal never has access to the actual types until runtime, and to use generics the data portal would need to call the interface like this:

IPersistData obj = (IPersistData)theBusinessObject;
obj.DataPortal_Fetch<theBusinessObject.GetType()>(criteria);

And that won't work. Generics are primarily a compile-time (well, JIT-time) artifact. By the time your code is running it is too late. So to use generics the data portal would have to become aware of your business types - and that's not workable.

Another alternative is that the data portal could call some business intermediary - that you'd write - which would implement loosely typed methods conforming to the interface, and would then be able to convert those into strongly typed calls. But at this point I think we've entered the realm of being silly :)

So the fact is, using interfaces means being loosely typed and forcing you to cast the types manually within the DataPortal_XYZ methods. It also means returning to a single DataPortal_Fetch() for all types of criteria in an object and implementing your own branching. The current ability to have overloaded DataPortal_Fetch() methods is (I think) a major improvement over version 1.x.

rangda replied on Monday, June 12, 2006

Actually there are ways to manipulate generics at runtime.  The following method creates a List<T> based on a type provided at runtime:

// Requires: using System.Collections; using System.Collections.Generic;
//           using System.Reflection;
private IList CreateGenericList(Type memberType)
{
    // We need a sample type to create a specific generic List<> object.
    // We just use List<object>.
    Type sampleType = typeof(List<object>);

    // Generate a Type for List<memberType>. To do that we get the
    // generic type definition from our sample type; from that we can
    // make a new generic List<> type with memberType as the type of
    // the contained items.
    Type listType = sampleType.GetGenericTypeDefinition().MakeGenericType(
        new Type[] { memberType });

    // List<> implements the default constructor, so we can use that
    // to create the List<memberType> instance.
    ConstructorInfo constructor = listType.GetConstructor(Type.EmptyTypes);
    IList instance = (IList)constructor.Invoke(null);
    return instance;
}

You could do something similar to use generics in the data portal, if desired (which is entirely independent of whether you want to or should - I just wanted to point out that you can :))

Also, I don't think generics offer any answer to the loose typing issue. The data portal never has access to the actual types until runtime, and to use generics the data portal would need to call the interface like this:

IPersistData obj = (IPersistData)theBusinessObject;
obj.DataPortal_Fetch<theBusinessObject.GetType()>(criteria);

And that won't work. Generics are primarily a compile-time (well, JIT-time) artifact. By the time your code is running it is too late. So to use generics the data portal would have to become aware of your business types - and that's not workable.


RockfordLhotka replied on Monday, June 12, 2006

The idea was to AVOID reflection :)



Mountain replied on Monday, June 12, 2006

RockfordLhotka:
The idea was to AVOID reflection :)



Yes, but I wouldn't characterize my suggestion in _exactly_ that way. What I was after was stronger typing, removing certain issues such as having to update all object references after the update method is called, and some better solutions for persistence. (I am also seeking a nice way to integrate CSLA with an MVC framework we're using, but that's a separate issue.)

As part of this, I was hoping that reflection could be avoided as much as possible for finding and invoking members. Disregarding any performance issues related to those heavyweight reflection operations, there are strong-typing problems when using reflection like this.

Having strong runtime typing is only part of what strong typing is about. The full benefits of strong typing come when the compiler can catch problems that otherwise would not surface until runtime (as exceptions). That full strong typing is missing from many parts of CSLA 2.0 at this time, and I would like to work on changing that.

However, as you pointed out Rocky, some of the changes I had in mind would probably be breaking changes, and that might not be acceptable to some.

I guess where that leaves my team is using a modified version of CSLA. However, that's better than no CSLA at all!

Fabio replied on Thursday, June 08, 2006

Great changes!!
The rules (validation and authorization) are loaded just once when we work with lists of child objects.
The data portal changes? Simply great for working with a different DAL.
Best of all? All the changes maintain backward compatibility.
Thanks Rocky.

Fabio replied on Thursday, June 08, 2006

Now that (in 2.1) we have per-type rules, maybe in the future we could have an XML mapping of rules for a business object, something like:
<?xml version="1.0" encoding="utf-8" ?>
<csla-mapping xmlns="urn:csla-mapping-2.1">
    <class name="MyNameSpace.MyClass">
        <validation-rules>
            <rule method="Csla.Validation.CommonRules.StringRequired" property="MyPropertyName" description="{0} required"/>
        </validation-rules>
        <authorization-rules>
            <property name="MyPropertyName" allowwrite="true" allowread="true">
                <role name="ProjectManager"/>
                <role name="AnotherRole"/>
            </property>
        </authorization-rules>
    </class>
</csla-mapping>
The mapping file could have a standard name: className.cslam.xml

We could also have a variant syntax that separates the mappings for validation and authorization:
<csla-mapping xmlns="urn:csla-mapping-2.1">
    <class name="MyNameSpace.MyClass">
        <validation-rules>
            .....
        </validation-rules>
    </class>
</csla-mapping>

<csla-mapping xmlns="urn:csla-mapping-2.1">
    <class name="MyNameSpace.MyClass">
        <authorization-rules>
                      ....
        </authorization-rules>
    </class>
</csla-mapping>

Csla.Validation.SharedValidationRules might need some more methods:
AddAssembly(Assembly assembly)
AddAssembly(string assemblyname)
AddFile(string xmlfilepath)
... and so on

Csla.Security.SharedAuthorizationRules would need a similar change.

The *.cslam.xml files could then be distributed in the same assembly as the business object as an embedded resource, in a separate assembly, and so on.

As now (2.0), we could have a separate assembly, for one or more BOs, with a static class and static methods holding the business rules of a specific BO, but, unlike now, the rule wiring would not be hardcoded.

Bye.
Fabio.

xal replied on Thursday, June 08, 2006

Fabio,
If you really wanted that you could create a static method that handles the complexity.
In your bo you'd do something like:
Public Class SomeClass
    Shared Sub New()
        RuleLoader.LoadRules(GetType(SomeClass))
    End Sub
End Class

Anyway, are you sure you want this? It kind of beats the idea of strong typing...
Also, you're exposing yourself to the "naughty user", the kind of user that likes to get into config files to see what he can do. Not that there are many, but you never know....

Andrés

Fabio replied on Thursday, June 08, 2006

Hi Andrés.

XML can be compiled into a DLL; I know some "naughty users" ;)
I could make this change just in my own BO (a BusinessBase inherited from Csla.BusinessBase), but I think it would be better for everyone, not only for me.
The proposal maintains backward compatibility, so, if you want, you can keep working as in 2.0, use 2.1, or use 2.1 with XML mappings....

Think about this:
If you have investigated a business problem, you probably have the right properties and logic for all of your customers, but each customer can differ in some validation rules or in the authorization architecture. Using an XML mapping strategy for validation and authorization, you can divide one BO into 4 DLLs (or 2 DLLs and 2 XML files, or 2 DLLs and 1 XML file):
DLL1 with the pure BO
DLL2 with XML for validation
DLL3 with XML for authorization
DLL4 with methods for the BO's validation rules

If a customer says "I can't have an invoice with an amount less than $2", you don't need to change DLL1 at all, only DLL2, and you can send him a specialized DLL3 (named MyNameSpace.CustomerName.BoName.Validate.DLL). So you have a standard DLL1, a customized DLL2, and a special DLL3.
Rocky's solution gives us a great future: now we can have customizable rules for each of our customers.
GREAT ROCKY!!

Fabio replied on Thursday, June 08, 2006

Fabio:

If a customer say you "I can't have invoice with amount less then 2$" you don't need to make any change on DLL1


One more thing...
Rules in XML and hardcoded rules can coexist, so you don't need to write all rules in the XML, only the configurable ones.
For example, you can have a hardcoded rule for InvoiceNeedCustomer (CUSTOMERID in the invoice is not nullable) and then some configurable rules in the XML.

GREAT GREAT ROCKY!!! ;)) :))

Sorry, but I'm so happy....

xal replied on Thursday, June 08, 2006

Fabio,
Why not just create static methods that add these rules in another class and spare yourself the XML maintenance nightmare?


Public Class MyClass
Shared Sub New()
    SomeNamespaceInSomeOtherAssembly.ValidationRules.AddRulesForMyClass()
End Sub
End Class

Public Module ValidationRules
    Public Sub AddRulesForMyClass()
        Csla.Validation.SharedValidationRules.AddRule( _
            AddressOf RuleMethodName, _
            GetType(MyClass), _
            "FieldName")
        ...
        ...
        ...
    End Sub
End Module


Andrés

Fabio replied on Friday, June 09, 2006

xal:

Why not just create static methods that add these rules in another class and spare yourself the xml mainteinance nightmare?


You mean I should escape the XML maintenance nightmare just to enter a C#/VB code maintenance nightmare?
But that isn't the point.

If Rocky gives us an IClassValidationLoader and an IClassAuthorizationLoader, each of us can work as we prefer.
If you need to, you can also load validation and authorization rules from the DB, or you can have a mixed behavior.
If each class can have its own loader (via config) you have even more power, and that doesn't mean you must use it, because you can keep working as now (2.0).
Finally... with rules per class instead of rules per instance, Rocky introduces not only a performance increase when working with BusinessListBase, but also the possibility of adding more powerful features to CSLA.

Fabio.
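A minimal sketch of what such loader interfaces might look like (the names IClassValidationLoader and IClassAuthorizationLoader come from the post above; everything else here is an assumption, not actual CSLA code):

```csharp
using System;

// Hypothetical interfaces; CSLA does not define these.
public interface IClassValidationLoader
{
    // Load and register validation rules for the given business
    // object type, e.g. from XML, a database, or hardcoded calls.
    void LoadValidationRules(Type businessObjectType);
}

public interface IClassAuthorizationLoader
{
    // Same idea for per-type authorization rules.
    void LoadAuthorizationRules(Type businessObjectType);
}
```

A framework accepting any implementation of these would let one team load rules from XML and another from a database without the framework itself knowing about either source.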

RockfordLhotka replied on Friday, June 09, 2006

Fabio's idea is quite good. Magenic has at least one client that loads their rules from a database, which is a very similar concept. I don't know that I'll put such a thing into the framework itself - because I can already see that Fabio wants it from XML and others want it from the database, and who knows where yet others might want to load from...
 
Rocky


From: xal [mailto:cslanet@lhotka.net] 
 
Anyway, are you sure you want this? It kind of beats the idea of strong typing...
Also, you're exposing yourself to the "naughty user", the kind of user that likes to get into config files to see what he can do. Not that there are many, but you never know....

Fabio replied on Friday, June 09, 2006

RockfordLhotka:
Magenic has at least one client that loads their rules from a database, which is a very similar concept. I don't know that I'll put such a thing into the framework itself - because I can already see that Fabio wants it from XML and others want it from the database, and who knows where yet others might want to load from...


Thanks Rocky.
You know that I would like to load rules from the DB too. I implemented that in a "Delphi version" of the old CSLA... you know... my own interpretation of the old CSLA....
You don't need to implement a DB loader or an XML loader of rules yourself.
You can limit yourself to lighting the way for us by giving us two interfaces for loading validation and authorization rules; then we can send you some implementations.
I have some more ideas about per-class rules, like lazy loading of rules the first time an instance of a BO is used, but that is another story....
I think it is better to open a new thread to talk about rules, possible rule loaders and so on, so all interested CSLA developers can give their opinions.
Fabio.

Massong replied on Thursday, June 08, 2006

Hi Rocky,

could you please add a TcpRemotingProxy to the new version of the CSLA framework for the people using a Windows service for remoting instead of IIS?

I added a copy of the RemotingProxy to my copy of CSLA and changed the constructor to use the TCP channel instead of HTTP:

''' <summary>
''' Configure .NET Remoting to use a binary
''' serialization technology when using
''' the TCP channel. Also ensures that the
''' user's Windows credentials are passed to
''' the server appropriately.
''' </summary>
Shared Sub New()

    ' create and register a custom TCP channel
    ' that uses the binary formatter
    Dim properties As New Hashtable
    properties("name") = "TcpBinary"

    If ApplicationContext.AuthenticationType = "Windows" Then
        ' make sure we pass the user's Windows credentials
        ' to the server
        properties("useDefaultCredentials") = True
    End If

    Dim formatter As New BinaryClientFormatterSinkProvider

    Dim channel As New TcpChannel(properties, formatter, Nothing)

    ChannelServices.RegisterChannel(channel, EncryptChannel)

End Sub

Thanks,

Christian

RockfordLhotka replied on Friday, June 09, 2006

That shouldn't be necessary. There's no reason to explicitly configure a TCP channel, as Remoting does that for you automatically.
 
In other words, the existing remoting channel will work with TCP - you just need to put a tcp:// url in your client config file.
 
Rocky
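For illustration, a client-side configuration along these lines might look as follows (the server name, port, and even the exact appSettings key names are assumptions based on the CSLA 2.0 data portal setup, so treat this as a sketch, not a verified sample):

```
<!-- client app.config sketch; myserver:8765 is a placeholder -->
<appSettings>
  <add key="CslaDataPortalProxy"
       value="Csla.DataPortalClient.RemotingProxy, Csla" />
  <add key="CslaDataPortalUrl"
       value="tcp://myserver:8765/RemotingPortal.rem" />
</appSettings>
```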


matt tag replied on Thursday, June 08, 2006

I would be interested in a SafeRowReader (for datarows) to go along with a SafeDataReader.  You could even create a new ISafeReader interface that includes GetSmartDate and have SafeRowReader and SafeDataReader implement this interface.

Petar used to have a SafeRowReader in 1.x ActiveObjects - it looks like he removed it.  I personally use it for loading "parent/child/grandchild" object graphs via a dataset.

matt tag

ryancammer replied on Tuesday, June 13, 2006

this isn't that big of a deal, but one of the first things i did with the source was take resharper to it in order to clean up the unnecessary using directives and other housekeeping tasks. for the next version (2.2?), there's no reason not to just give the code a quick run-through with resharper.

my 2 cents.

RockfordLhotka replied on Saturday, June 17, 2006

ryan, is resharper free? And remember, I have to do everything in both the VB and C# code bases and keep them in sync across the board. By the name, I'm guessing resharper isn't open minded enough to provide support for VB, in which case there's the lovely idea of doing manual diffing of the C# source and reapplying it to the VB code. Not my idea of a fun time.

Which isn't to say I won't do it, but this is the sort of thing that is work with no payoff in terms of satisfaction, so it is the kind of thing I tend to do when there's nothing in the queue that provides actual intellectual stimulation :)

Michael Hildner replied on Thursday, June 22, 2006

Resharper isn't free, but it's a good tool. I used it in VS2003, but didn't bother buying it for 2005 because of all the IDE improvements in 2005. And yes, it only supports C#.

Resharper does a lot of things; one is to notice unnecessary "using" statements and flag them as warnings. You like to have all green lights when using Resharper. Of course, the using statements are just an IDE thing and don't affect the MSIL.

Mike

tetranz replied on Thursday, June 29, 2006

Here's something I've just implemented which I think people might find useful in the framework. It's specifically for comboboxes etc. to bind to, with the typical "Select Item" or "Show All" pseudo item at the top. I'm happy to share it, although it's nothing special in terms of code. It was easier than I thought it would be.

I've called it a ViewWithHeaders (I'm thinking now it should be called HeadedBindingList). It's along the same lines as SortedBindingList and, I assume, FilteredBindingList. In a sense, it's the opposite of a filter.

Its constructor takes an IList<T> and an array of T. From the outside it looks like the original list but with the array items at the top. It has an overload which takes a single object rather than an array.

I recall Rocky you mentioning people building custom controls for this. I think you'd need to do something like this inside the control if you want it to efficiently respond to ListChanged events. The way I'm doing it now my business layer provides a "pure" singleton with only real data. Then I have another singleton in my UI with the key part being something like:

private static ViewWithHeaders<MyInfo> _dropdownList = null;
// [singleton logic (if null etc)]
_dropdownList = new ViewWithHeaders<MyInfo>(
    MyList.SingleList, new MyInfo(0, "Select Item"));

My combobox datasource gets set to the new view.

If that was inside a custom control, it wouldn't easily be able to be a singleton throughout the application. Not that it really matters, I guess, because the whole idea is that it takes very few resources; i.e. it doesn't duplicate the collection or the items. Internally it mostly just does some fiddling with the index to cope with the extra items at the top.

Anyway... it solves an issue that I've always found a bit awkward. I know I've done some kludgy stuff with this in the past with pre-.NET VB and classic ADO etc. I remember putting a "UNION ALL" in a lot of my dynamic SQL to add a fixed item :$
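The index fiddling Ross describes could be sketched like this (hypothetical names and a deliberately minimal surface, not his actual code; a real version would implement IList&lt;T&gt; and forward ListChanged events):

```csharp
using System;
using System.Collections.Generic;

// Read-only view that prepends fixed header items to an inner list.
// All index math simply offsets by the number of header items, so the
// underlying collection and its items are never duplicated.
public class HeadedListSketch<T>
{
    private readonly IList<T> _headers;
    private readonly IList<T> _items;

    public HeadedListSketch(IList<T> items, params T[] headers)
    {
        _items = items;
        _headers = headers;
    }

    public int Count
    {
        get { return _headers.Count + _items.Count; }
    }

    public T this[int index]
    {
        get
        {
            // Indexes below the header count map to the fixed items;
            // everything else maps into the underlying list.
            if (index < _headers.Count)
                return _headers[index];
            return _items[index - _headers.Count];
        }
    }
}
```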

Cheers
Ross

RockfordLhotka replied on Thursday, June 29, 2006

Ross, now that is a clever idea! As long as this view is logically considered as part of the UI layer (like you describe), this seems like a really nice solution.
 
Rocky

DansDreams replied on Friday, June 30, 2006

I apologize if this has been addressed - I just don't have time to read this entire thread.

With Csla 1.x I built a UI framework around my custom sortable AND filterable list.  It makes for a very rich and powerful selection form.

With the two distinct classes you're making, would you just nest them to accomplish this? Are you testing that?

Also, have you considered just making the "normal" list classes sortable/filterable rather than requiring the use of the intermediate object by the UI developer.  I realize it would take some significant rework, but wouldn't it be easier to use at the end of the day if you just made your lists in the standard way and sorted/filtered them when desired.  Since the default (with no sorting applied) is just to return the raw list, it doesn't seem like this would cause any performance or memory concerns until it was necessary.

RockfordLhotka replied on Friday, June 30, 2006

Dan,
 
The SortedBindingList and FilteredBindingList are composable, yes. So you can create a list, apply a filter and then apply a sort.
 
At this time I don't plan to merge the functionality into BusinessListBase and ReadOnlyListBase, no. While I understand what you are saying about simpler use, I think there's a very good argument to be made that sorting and filtering are typically a UI behavior, not a business behavior. By putting these concepts into separate objects, I allow you to use them purely at the UI layer where they typically belong, rather than cluttering up the business layer with a lot of extra code.
 
Rocky

DansDreams replied on Friday, July 07, 2006

RockfordLhotka:
Dan,
 
The SortedBindingList and FilteredBindingList are composable, yes. So you can create a list, apply a filter and then apply a sort.
 
At this time I don't plan to merge the functionality into BusinessListBase and ReadOnlyListBase, no. While I understand what you are saying about simpler use, I think there's a very good argument to be made that sorting and filtering are typically a UI behavior, not a business behavior. By putting these concepts into separate objects, I allow you to use them purely at the UI layer where they typically belong, rather than cluttering up the business layer with a lot of extra code.
 
Rocky

Good point, Rocky. In support of that, I was a little surprised to find the DevExpress grid able to sort the collection on a column-heading click even when using a standard CSLA list object, so it is apparently doing its own version of what the SortedBindingList does.

RockfordLhotka replied on Friday, July 07, 2006

I think the idea of having UI components do their own views is excellent! That is the point of the UI after all :)
 
Rocky



Brian Criswell replied on Saturday, June 17, 2006

Could we have the assembly marked with [assembly: CLSCompliant(true)]?

RockfordLhotka replied on Sunday, June 18, 2006

Brian, I'll mark 2.0.3 and higher as CLS compliant. It amazes me that C# defaults to a false setting in this case; it seems like it should be a conscious choice to do the bad thing and mark your assembly as not compliant... VB does default to being CLS compliant, which seems like the right approach to me. This change is good because it brings the two versions into sync in this regard.

sune42 replied on Tuesday, July 11, 2006

Some "small" suggestions for the 2.1 version.

1. CommonRules.MinStringLength??? I couldn't find that... perhaps it would be useful?

2. Could CommonRules.RegExMatch (or whatever it is called) allow us to specify a custom error message that is returned to the client?

Of course I can create my own rules, but it would be nice to see them here.
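A MinStringLength rule along those lines could be sketched as follows, assuming the CSLA 2.0 rule signature `bool Rule(object target, RuleArgs e)`; the MinLengthRuleArgs subclass and the reflection-based property read are assumptions for illustration, not actual CommonRules code:

```csharp
using System;

// Hypothetical RuleArgs subclass carrying the minimum length.
public class MinLengthRuleArgs : Csla.Validation.RuleArgs
{
    public int MinLength;

    public MinLengthRuleArgs(string propertyName, int minLength)
        : base(propertyName)
    {
        MinLength = minLength;
    }
}

public static class CustomRules
{
    // Sketch of a possible CommonRules.MinStringLength.
    public static bool MinStringLength(
        object target, Csla.Validation.RuleArgs e)
    {
        MinLengthRuleArgs args = (MinLengthRuleArgs)e;

        // Read the property value by name, as the built-in rules do.
        string value = (string)target.GetType()
            .GetProperty(e.PropertyName).GetValue(target, null);

        if (value == null || value.Length < args.MinLength)
        {
            e.Description = string.Format(
                "{0} must be at least {1} characters",
                e.PropertyName, args.MinLength);
            return false;
        }
        return true;
    }
}
```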

 

 

Copyright (c) Marimer LLC