CSLA performance

Old forum URL: forums.lhotka.net/forums/t/4979.aspx


mosesmalone posted on Monday, June 16, 2008

I am trying to decide whether or not to use CSLA for a smart client application with a hosted application layer and datacenter.  The application will have between 10,000 and 15,000 concurrent users during peak times.  Has anybody had experience building a CSLA application for this type of usage?  How does CSLA perform?  What kind of performance issues should I be aware of?  What tricks are there for designing a high-performance application with CSLA?  Is CSLA even the right choice for me?

 

 

SomeGuy replied on Monday, June 16, 2008

Well the S stands for Scalable. ;-)

 

mosesmalone replied on Monday, June 16, 2008

It is a done deal then... lol.  I need something concrete.

RockfordLhotka replied on Monday, June 16, 2008

That is a large app, and I doubt you'll find many examples that push CSLA .NET (or any other framework) that far. Apps at that scale are just plain rare.

I know CSLA has been used for some very large web apps - thousands of concurrent users - though I don't have concrete usage numbers.

The good news is that smart client architectures scale farther than web architectures do - by definition - the processing is more distributed because smart client apps leverage the client's processing power, and client/server protocols consume less bandwidth than the typical HTML-based app.

However, there are clearly many things you can do to reduce scalability - but if you are working in this space, I'm sure you are well aware of the many pitfalls. So let's talk about CSLA specifically.

CSLA uses a mobile object pattern - which you can use or not use as you choose.

Mobile objects means that your business objects are cloned across the network from client to app server and back. This is a powerful technique, because it means that you can write a single set of classes, and that code runs on client and/or server as appropriate. And only the field data for the objects actually move across the network, so it is reasonably efficient.
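To make the mobile object idea concrete, here is a minimal sketch in Python (not CSLA itself, and the class and field names are hypothetical): the same class definition is deployed to both client and app server, so only the instance's field data needs to cross the wire.

```python
# Sketch of the mobile object pattern: one shared business class,
# with only the object's state serialized between client and server.
import pickle

class CustomerEdit:
    """Shared business class; the same code is deployed on client and server."""
    def __init__(self, customer_id: int, name: str):
        self.customer_id = customer_id
        self.name = name
        self.is_dirty = False  # meta-state travels along with the field data

    def rename(self, name: str) -> None:
        self.name = name
        self.is_dirty = True

# "Client" side: edit the object, then serialize just its state.
obj = CustomerEdit(42, "Acme")
obj.rename("Acme Corp")
wire_bytes = pickle.dumps(obj)  # field data plus type info - no code moves

# "Server" side: the class is already deployed there, so the object is
# reconstituted with identical state and behavior.
server_copy = pickle.loads(wire_bytes)
print(server_copy.name, server_copy.is_dirty)  # Acme Corp True
```

In CSLA the serializer and wire format differ, but the principle is the same: one set of classes, with the object's state (including its meta-state) cloned across the network.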

The value for scalability is quite high, because the technique helps you create a lot of business logic that can run on the client workstation. Most validation rules, most authorization rules and a lot of business logic can run purely on the client - which is as scalable as you can get because no shared resources are consumed.

Obviously some server interaction is required. If you design your objects around use cases, following responsibility-driven design, your objects will tend to have the minimum data required to fulfill their responsibility within the use case - meaning that as little data as possible will flow to/from the server. This isn't really a CSLA thing, it is an OO design thing, but it is obviously important.

Similarly, careful identification of resource, reference, response/request and activity data will help you design your objects to minimize server access through the proper use of caching and batching. CSLA makes caching pretty easy on a smart client, and effectively does batching by default.
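As one illustration of the caching point above, read-mostly "reference" data (a list of states, product categories, and the like) can be held on the smart client so repeated lookups never touch the app server. This is a hypothetical sketch, not a CSLA API:

```python
# Client-side cache for read-mostly reference data, with a time-to-live,
# so only the first lookup (or an expired one) hits the app server.
import time

class CachedList:
    def __init__(self, fetch, ttl_seconds: float = 300.0):
        self._fetch = fetch        # function that performs the server call
        self._ttl = ttl_seconds
        self._data = None
        self._loaded_at = 0.0

    def get(self):
        now = time.monotonic()
        if self._data is None or now - self._loaded_at > self._ttl:
            self._data = self._fetch()  # server hit only on miss or expiry
            self._loaded_at = now
        return self._data

server_calls = 0
def fetch_states():
    global server_calls
    server_calls += 1          # stand-in for a real data portal call
    return ["MN", "WI", "IA"]

states = CachedList(fetch_states, ttl_seconds=300)
states.get()
states.get()
print(server_calls)  # 1 - the second lookup came from the cache
```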

But I want to be clear here - CSLA enables the creation of a rich OO business layer. The performance and scalability of that layer is more determined by the OO design than it is by CSLA itself.

Back to serialization of objects over the wire. CSLA helps you out by also moving a certain amount of context data. Things like the client's principal (when using custom authentication), the client's culture (a short string), the object's meta-state (isnew/isdirty/isdeleted/etc) and a couple other items. Usually this is desirable, because it means that (to a very large extent) your objects have the same context on the client and server, so your code is very consistent between the two environments.

The potential downside to this is that it inflates the serialized byte stream. And if you don't care about contextual parity, then this is just overhead. But don't run to the DataSet for help, because CSLA tends to use a lot less bandwidth than the DataSet - to improve over CSLA you'd need to use optimized data contracts.

What I'm getting at here, is that one of the obvious constraints when dealing with the scale you are talking about is bandwidth at the server end, and while CSLA is quite efficient at moving objects across the network, you could do better if you architect specifically to minimize bytes on the wire (as long as you recognize that you'd be unable to also do what CSLA does in terms of features).

So I'll return to my earlier statement - you can use or not use mobile objects as you choose. If mobile objects is attractive (and it usually is) then use it. If it is not, then you can still use CSLA to create your client app, and then your objects can make (presumably optimized) calls to back-end services for any server interaction. Some people have used this approach to create optimized datagrams that flow between CSLA client and the app server to minimize bandwidth. There is no consistent context between client and server of course, but they dealt with that in order to minimize bandwidth.

In any case, I assume you are familiar with this formula: ctps = (u / t) * d
http://www.lhotka.net/Article.aspx?id=1049435b-6b05-412a-8bad-62869b1f1074

If you can get solid estimates for t and d you can get a pretty good idea what kind of load you'll be placing on your servers. And if you can derive an estimate for the number of bytes transferred on each server request you can come up with an idea of the likely bandwidth that your app will use.
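A quick sketch of plugging estimates into that formula (the variable meanings below are my reading of the linked article: u = concurrent users, t = average think time between requests in seconds, d = average duration of a server request in seconds; the peak-load numbers are hypothetical):

```python
# Rough server-load estimate using the ctps formula from the thread:
#   ctps = (u / t) * d
def ctps(u: float, t: float, d: float) -> float:
    """Estimated concurrent transactions per second on the server."""
    return (u / t) * d

# Hypothetical peak scenario: 15,000 users, 30 s average think time,
# 0.5 s average request duration.
load = ctps(u=15_000, t=30, d=0.5)
print(load)  # 250.0

# Multiplying requests/sec by estimated bytes per request gives a
# ballpark bandwidth figure, e.g. 8 KB per request:
bandwidth_bytes_per_sec = (15_000 / 30) * 8_192
print(bandwidth_bytes_per_sec)  # 4096000.0 (~4 MB/s)
```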

 

RockfordLhotka replied on Monday, June 16, 2008

One other word of caution. While you certainly need to give serious thought to your architecture and design when scaling this high, it is important to remember that premature optimization is an anti-pattern.

With an app of this size, it is a good bet that Microsoft will let you use one of their test centers. My recommendation would be to prototype some key parts of the app - the parts where peak usage will likely hit most - and then see if you can get access to a testing lab to simulate very high loads.

That way you can make adjustments to your architectural choices and/or design decisions early on, based on objective testing data.

Otherwise it is terribly easy to start architecting and designing around hypothetical scenarios, and you can often end up wasting a lot of time and money solving problems that don't really exist.

mosesmalone replied on Wednesday, June 18, 2008

Rocky,

Thanks for the quick and helpful response.  You make some excellent points.  

You mentioned in your first post this approach:

"Some people have used this approach to create optimized datagrams that flow between CSLA client and the app server to minimize bandwidth. There is no consistent context between client and server of course, but they dealt with that in order to minimize bandwidth."

Did these people start out with a pure csla solution?  Do you recommend this "optimized datagram" approach starting out? (given the scale)

Does not having a consistent context between client and server defeat the purpose of CSLA? 

Thanks again for all the help.

MosesMalone

RockfordLhotka replied on Wednesday, June 18, 2008

They didn’t start out with the optimized datagram, no. They used it for select cases where they were manipulating just a few items in large collections. This is not an all-or-nothing option – you can use the normal data portal for most things, and optimize for specific use cases.

 

Regarding the consistent context and its value, yes it is valuable. But most people use CSLA because of these benefits:

 

1. Data binding support

2. Validation and business rule support

3. Authorization support

4. Consistent data access pattern

5. Consistent coding model for all objects

 

These benefits are constant, regardless of whether you are building 2- or 3-tier applications.

 

If you are building a 3- or n-tier application, then mobile objects and the data portal become important. The data portal provides the consistent context between client and server – and that is a powerful benefit.

 

But architecture is all about choosing trade-offs, that is an inescapable truth. Whenever possible, CSLA allows you to make choices. In the case of the data portal, you can use mobile objects and shared context, or you can run the “server-side” code on the client and make your own service calls to the server (like the datagram example), or you can use a mix of both.

 

My recommendation would be to use mobile objects to start, and only fall back to some of the more painful (though presumably optimized) solutions on a per-case basis when absolutely necessary.

 

Rocky

 

Copyright (c) Marimer LLC