CSLA.NET 2.0 Optimized for Web

Old forum URL: forums.lhotka.net/forums/t/815.aspx


jules_ce posted on Friday, August 04, 2006

Hi guys, quick question:

Has anyone looked into optimizing the CSLA.NET 2.0 framework specifically for web applications?

If so, which were areas of the framework that you targeted, and which targeted areas gave the most "return on investment"?

We are busy reviewing various frameworks and performance is a critical factor.

Thanks very much,

James

RockfordLhotka replied on Friday, August 04, 2006

You shouldn't need to alter the framework to use it in a web setting. It is more a matter of choosing the right way to use (or not use) certain features based on your transaction volumes.

If you have thousands of concurrent users you obviously have a very different problem than if you have hundreds or dozens of concurrent users. At some point (which varies on many factors), you need to use purely stateless object models. That's expensive in terms of complexity and development cost, but it can be necessary.

The CSLA .NET framework supports this model, primarily through the overload of Save() that allows you to force a "new" object to actually perform an update operation.
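
A minimal sketch of that pattern on a postback (the LineItem class, its NewLineItem factory, and its properties are hypothetical, not part of CSLA itself):

   ' Stateless page: rebuild the object from posted form values, then force an update.
   ' LineItem, NewLineItem() and the property names are invented for illustration.
   Dim item As LineItem = LineItem.NewLineItem()
   item.Id = New Guid(Request.Form("id"))
   item.Quantity = CInt(Request.Form("qty"))
   ' Passing True (forceUpdate) makes the data portal treat this "new" object
   ' as an existing one, so DataPortal_Update runs instead of an insert.
   item = item.Save(True)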

The big thing about being stateless is that all your objects become root objects. There's almost never a case where child objects exist in a stateless model, because every page operates on an object, and that object typically must directly persist itself - even if it would normally have been a child object (like an order LineItem). This is why the complexity goes up: a LineItem object must now enforce not only its own rules, but also any cross-child rules that normally would have been handled by a parent Order object...
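
To make that concrete, here is a rough, hypothetical sketch of the kind of check a stand-alone LineItem ends up owning (the OrderData helper, the field names, and the credit-limit rule are all made up for illustration):

   ' Hypothetical: a root-style LineItem enforcing an order-level rule itself,
   ' because there is no parent Order object in memory to do it.
   Protected Overrides Sub DataPortal_Update()
      ' OrderData is an invented data-access helper, not part of CSLA.
      Dim currentTotal As Decimal = OrderData.GetOrderTotal(mOrderId)
      Dim creditLimit As Decimal = OrderData.GetCreditLimit(mOrderId)
      If currentTotal + (mQuantity * mPrice) > creditLimit Then
         Throw New System.Exception("Saving this line item would exceed the customer's credit limit.")
      End If
      OrderData.UpdateLineItem(mId, mOrderId, mQuantity, mPrice)
   End Sub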

Even if you can use some state, and thus reduce your development cost, there are still some simple things you want to do to optimize how you use CSLA in the web. Specifically, avoid the use of a remote data portal (it is not necessary for scalability and has a perf cost), and avoid the use of n-level undo, because web sites rarely have cancel buttons.

And, of course, doing a good object model where your objects are driven by your use cases and are based around responsibility and behavior rather than data will have a tremendous positive impact on performance. This is a truism for CSLA in any environment and is, in my view, the key factor for success when using CSLA.

pelinville replied on Friday, August 04, 2006

We have just completed a fairly large web application with at least 70,000 users.
 
I will give a few suggestions.
 
Actually, they simply extend Rocky's comments.
 
First, having everything able to be a root does make the "Web" part easier. The middle tier gets more complicated. I built the middle tier, and here are some of the things I did to make it more manageable.
 
1.  Give everything a GUID as the id. Yes, I know there are a lot of DB reasons not to use them, but it makes developing in the "everything is a root" world much, much easier.
2.  Just live with the fact that you cannot do the OOP thing as purely as you would like. Doing a large-scale app on the web with many users is going to require some compromise.
3.  Don't compromise too much. Work hard to implement OOP principles.
4.  While you should be "able" to treat every object as a root, that does not mean you have to make this public. Only do it if there is an actual reason. One way to do this is to initially make all the factory methods that create root objects Friend, and make them Public when you must.
5.  With regard to 4 above, use what I call "entry objects" (see the sketch after this list). They are kind of like units of work, but not exactly. What they do is create the objects that a particular page or section of the application needs. They do it in a way that is fast and direct, but they also allow the GUI writer to make a single Save or IsDirty or IsValid call to check all the objects that might have been used during the request. It keeps things organized. Another good thing about this is that it allows you to keep many of the factory methods that create the root objects as Friend.
6.  Have a switch that allows you to bypass the data portal. I don't recommend removing it completely, but it does save on CPU load.
7.  This one is just my experience, but... avoid putting any objects in session state. We did this initially in a couple of places, thinking it would not hurt too much. The problem arose when trying to implement a load balancer. This requires using the state server or SQL Server, and it seems that serializing objects to those two was a pretty big hit to performance.
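
A minimal sketch of what such an "entry object" might look like (all class and member names here - OrderEntry, Customer, Order, and their factories - are invented for illustration, assuming CSLA 2.0-style root objects with Friend factory methods):

   ' Hypothetical "entry object" for an order-editing page. The page works only
   ' with this class; the root objects' Friend factories stay hidden from the GUI.
   Public Class OrderEntry
      Private mCustomer As Customer
      Private mOrder As Order

      Public Shared Function GetOrderEntry(ByVal orderId As Guid) As OrderEntry
         Dim entry As New OrderEntry
         ' These factories can remain Friend because this class lives in the same assembly.
         entry.mOrder = Order.GetOrder(orderId)
         entry.mCustomer = Customer.GetCustomer(entry.mOrder.CustomerId)
         Return entry
      End Function

      Public ReadOnly Property Customer() As Customer
         Get
            Return mCustomer
         End Get
      End Property

      Public ReadOnly Property Order() As Order
         Get
            Return mOrder
         End Get
      End Property

      ' Single calls the page can make to check/save everything touched in this request.
      Public ReadOnly Property IsDirty() As Boolean
         Get
            Return mCustomer.IsDirty OrElse mOrder.IsDirty
         End Get
      End Property

      Public ReadOnly Property IsValid() As Boolean
         Get
            Return mCustomer.IsValid AndAlso mOrder.IsValid
         End Get
      End Property

      Public Sub Save()
         If mCustomer.IsDirty Then mCustomer = mCustomer.Save()
         If mOrder.IsDirty Then mOrder = mOrder.Save()
      End Sub
   End Class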

ajj3085 replied on Monday, August 07, 2006

Regarding #7, doesn't using session serialize the object to the page's ViewState? 

I know in .NET 1.1 this was the case, as the larger the object in session, the larger the ViewState. I know it slows things down a bit (the page posts are larger and take more processing on the server), but I'd think it'd be workable. What specifically was preventing this in a load-balanced setup (as I thought ViewState was server independent)?

Andy

xal replied on Monday, August 07, 2006

Andy,
Take a look at chapter 10 of the book. In there, Rocky discusses the different options for maintaining state and their pros and cons.

Andrés

ajj3085 replied on Monday, August 07, 2006

Andrés,

I just did, and storing state in the ViewState does have the problems I mentioned, but using it in a web farm wasn't one of the drawbacks. :)

Andy

RockfordLhotka replied on Monday, August 07, 2006

ajj3085:
Regarding #7, doesn't using session serialize the object to the page's ViewState? 

I know in .NET 1.1 this was the case, as the larger the object in session, the larger the ViewState. I know it slows things down a bit (the page posts are larger and take more processing on the server), but I'd think it'd be workable. What specifically was preventing this in a load-balanced setup (as I thought ViewState was server independent)?

Andy


No, not at all. Session is stored in the web server's memory (or in a centralized state server, or in SQL Server, based on how you configure it). ViewState is used by ASP.NET controls to store state to handle postbacks on that same page. They are totally separate technologies.
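
A tiny, hypothetical illustration of the difference from page code (the keys and variables are made up):

   ' In a Page or code-behind. Session lives on the server (in-proc, state
   ' server, or SQL Server); ViewState is serialized into the rendered page
   ' and round-trips with each postback.
   Session("CurrentOrder") = myOrder        ' stored server-side, keyed by session id
   ViewState("SelectedId") = selectedId     ' stored in a hidden field in the page itself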

Session does require that the client track a session id. That is typically tracked through a cookie, but ASP.NET 2.0 does allow it to be tracked through URL mangling, and it can optionally be configured to use cookies when possible, and fall back to the URL scheme.

ajj3085 replied on Monday, August 07, 2006

Hmm, I must be remembering wrong then... it's been a few years since I did an ASP.NET page.

jules_ce replied on Wednesday, August 09, 2006

Thanks guys,

Pelinville: You mention having a switch to bypass the dataportal. Is there something built in for this, or is this something that I would have to modify in the framework myself?

Thanks very much!

James

ajj3085 replied on Wednesday, August 09, 2006

You'd have to build this yourself; I'm not sure it's a good idea to do this, though. If you ever need to use those business objects with an application server via Remoting (or Enterprise Services, or WCF), your code won't work properly.

pelinville replied on Wednesday, August 09, 2006

You do have to do it yourself but it is pretty simple.  (This is 1.x code)
 
In your factory methods, do something like this (pseudo-code):
 
Public Shared Function GetBO() As BO
   Dim newObj As New BO
   Dim critObj As New Criteria
   ' If no remote portal server is configured (the CSLA 1.x "PortalServer"
   ' appSetting), skip the data portal and fetch directly.
   If System.Configuration.ConfigurationSettings.AppSettings("PortalServer") Is Nothing Then
      newObj.DataPortal_Fetch(critObj)
      Return newObj
   Else
      Return CType(DataPortal.Fetch(critObj), BO)
   End If
End Function
 
 
Now create a new class that inherits BusinessBase.
 
Public MustInherit Class NewBusinessBase
   Inherits BusinessBase

   ' Business classes inherit from this instead of BusinessBase directly.
   Public Overrides Function Save() As BusinessBase
      ' ... (the usual IsValid / IsDeleted checks go here) ...
      If IsDirty Then
         ' Same switch: no "PortalServer" setting means update directly.
         If System.Configuration.ConfigurationSettings.AppSettings("PortalServer") Is Nothing Then
            Me.DataPortal_Update()
            Return Me
         Else
            Return CType(DataPortal.Update(Me), BusinessBase)
         End If
      Else
         Return Me
      End If
   End Function
End Class
 
Another one would be needed for the BusinessCollectionBase, of course.
 
I have to say that the only reason I did this is because I went with the approach that every object is a root and is loaded only when needed. Doing this causes the data portal to be hit much more often than if roots got all the data for their children in one call and hit the data portal only once.
 
Using the performance tools in VS2005, I found that with all these small root objects being loaded, almost 20% of the time was spent in the DataPortal. This is somewhat misleading, since that is where the data calls are made, and it is the network traffic of passing the data back and forth that is taking up most of the time.
 
But after doing this, we did see an overall improvement of about four percent. This would not be enough to justify any major changes in a normal application, but we are trying to get every bit of performance we can, since leasing servers is not cheap AND we have a very limited support staff. This change was minor and handled exclusively by the code generation and the creation of two simple subclasses. Also, it does not alter the actual functionality in any way.
 
 

Copyright (c) Marimer LLC