SonOfPirate posted on Wednesday, July 22, 2009

After reading several previous threads discussing the use of CSLA business objects with ASP.NET MVC, I noticed that these topics were all at least 10 months old.  I'm wondering if there are lessons learned during that time that can make entry into this platform a bit easier for those of us just starting out.

In particular, has anyone found the stateless solution to be problematic?  My biggest apprehension about ASP.NET MVC is that we incur the overhead of instantiating our business objects each time the page is posted back to the server.  At a minimum this means when the page is first loaded and then again when the form is posted back.  In the (standard) ASP.NET sample Project Tracker app, and others, we store the business object in Session between calls to eliminate this overhead.  Now that the community has had more exposure to the ASP.NET MVC approach, I'm wondering whether the overhead is worth it.

Any insight is appreciated.


RockfordLhotka replied on Thursday, July 23, 2009

I think you can use Session in ASP.NET MVC. I would expect the controller would manage what objects come and go from Session, but I think it is available as part of the plumbing.

There's no doubt that you need to alter (I often use the phrase 'bastardize') your object model to efficiently fit into a stateless server model. But if you need the scalability provided by a stateless server, then this is the price you pay.

In a stateless model, all you have is root objects. No child objects except in read-only scenarios where you are populating a page with a list of data. But each postback that does a put (insert/update) requires a single root object that handles that postback.

So your SalesOrder object is a root. And your LineItemEdit is a root. Because there's no natural relationship between these objects (they are independent), you must manually code more business rules into your classes - again, this is the price you pay for scalability.
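
To make that concrete, here is a minimal sketch (all names are hypothetical, not from the CSLA samples) of a rule that a parent SalesOrder would normally enforce through the object graph, re-coded by hand into the stand-alone LineItemEdit root:

```csharp
// Hypothetical sketch: with no in-memory SalesOrder parent, the LineItemEdit
// root must re-validate order state itself before every save.
using System;

public interface IOrderStatusReader
{
    // Hypothetical data-access abstraction for reloading parent state.
    bool IsShipped(int salesOrderId);
}

public class LineItemEdit
{
    public int SalesOrderId { get; set; }
    public int Quantity { get; set; }

    public void Save(IOrderStatusReader orderStatus)
    {
        // A rule the parent SalesOrder would have enforced via the
        // object relationship now has to be coded here manually.
        if (orderStatus.IsShipped(SalesOrderId))
            throw new InvalidOperationException(
                "Line items on a shipped order cannot be changed.");
        // ... persist the line item ...
    }
}
```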

Some of the fastest code in the world is the code used for the perf benchmarks for databases. It is also code you'd never want to maintain. They sacrifice everything for performance - readability, maintainability, reuse, etc.

The same is true for super-scaling. You compromise. You do manual optimizations that reduce maintainability. But it is a price you pay for super-scaling.

Fortunately the vast, vast majority of apps don't need super-scaling. Most apps can use some Session (though often via a centralized state server for fault tolerance) and work just fine. That's a much cheaper, easier approach overall, because you can do things like let the object model naturally enforce some rules through the relationships of the objects themselves.
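
For reference, the centralized state server mentioned here is just a web.config switch in classic ASP.NET (the host name below is a placeholder; 42424 is the state service's default port):

```xml
<!-- Out-of-proc Session via the ASP.NET State Service. -->
<system.web>
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=myStateServer:42424"
                timeout="20" />
</system.web>
```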

This isn't an MVC thing - it is a broader web thing.

dlambert replied on Thursday, July 23, 2009

Very interesting discussion. I've been playing with Azure a bit (nominally the current killer platform for scalability), and quite a few of the demos I've seen are also using MVC.

Personally, one of the problems I've got with these demos is that the "Model" is typically reduced to something like empty entity shells (or maybe an EF model in a traditional MVC app). Clearly, this runs counter to the concept of intelligent objects (which is one of the big reasons we're all here, right?).

So, I'm taking a shot at stuffing some CSLA objects into the Model of an Azure-MVC app. I suspect I'm going to run into some of these same issues, but I think the alternative is pretty chaotic with respect to business logic - I'd rather not just leave the model to be filled in as an afterthought.

I hope to understand some of the gotchas in this model a little better once I've got an app working.

RockfordLhotka replied on Thursday, July 23, 2009

Be careful with Azure and Session. Azure's pricing model includes charges
for "transactions" for internal services like talking to a database or
talking to a worker role. Any out-of-proc Session (which you'd have to use,
because Azure will require a minimum of 2 instances) will cause 1-2
transactions per page hit (one to get the Session, one to save Session).

Also, use of Membership (auth, roles, profile) will cause transactions.

You can imagine that a web site with moderate load could trigger hundreds of
thousands of "transactions" very rapidly, which might increase your daily
cost for using Azure by quite a bit.

But from a pure Model perspective, I think you need to weigh the pros and
cons of an anemic model carefully.

A rich model (like a CSLA model) is really nice for interactivity and
encapsulation, but you do lose some of the power when all your objects must
stand alone as root objects.

An anemic model (like service proxy objects, EF entities, etc.) is often
created by tools, and so is simple to create. The model presupposes some
external business logic location, like a function library, rules engine or
something like that. So there's separation of data and logic - which breaks
encapsulation, but provides a different type of separation that some people
really like (though to me it feels a lot like FORTRAN). Business rules are
usually invoked as a batch in this world.

In a smart client or any interactive user scenario the anemic model approach
is not a good match, because it is hard to get interactivity with a batch
mode business logic approach. The rich domain model approach is far better,
because logic is triggered at a very granular level.

But in a web world things are already done in what's called block mode (a
term from the mainframe), where a block of data is posted to the server at a
time (a page postback). So triggering business logic as a batch is fine,
because that matches the model imposed by the broader technology.

All that said, the web is rapidly switching toward a model where chatty
communication between a "rich client" running Javascript and the server is
commonplace. So the web is undergoing its own revolution - one that rather
turns much accepted wisdom and best practices on its ear.

What IS the best practice for a stateless web server that needs to validate
the user's entries field-by-field? Is that even cost-effective if your data
center isn't built on a power dam? If you are paying per "transaction" like
in Azure, can you actually afford to even allow this type of model?
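
As a rough illustration of what field-by-field validation implies, here is a hypothetical ASP.NET MVC action (names invented for the sketch) that an AJAX call might hit on every field change - each call a full server round trip, and potentially a billable "transaction" if it touches out-of-proc state:

```csharp
// Hypothetical sketch of a "chatty" per-field validation endpoint.
using System.Web.Mvc;

public class ValidationController : Controller
{
    [AcceptVerbs(HttpVerbs.Post)]
    public JsonResult ValidateField(string field, string value)
    {
        // Trivial stand-in rule; a real app would dispatch to its
        // business objects' validation logic.
        string error = null;
        if (field == "Email" && (value == null || !value.Contains("@")))
            error = "Please enter a valid e-mail address.";
        return Json(new { valid = error == null, message = error });
    }
}
```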

I think there are a lot of unknowns as the web switches from a batch mode to
chatty interactive mode - like whether that is even realistic, and if so,
how much will it cost, and if affordable, how nasty must your code become to
make it work?


dlambert replied on Thursday, July 23, 2009

Thanks for the point on transaction costs -- that's certainly a consideration. I don't know that we can really consider Azure's introductory pricing to be definitive for "cloud computing" as a class yet, though. Other cloud services, if I'm not mistaken, don't include a transaction component in their pricing at all, and Microsoft is also preparing to release Azure-derived bits so you can run a private cloud in your own data center, which throws another question mark into pricing.

The "worker role" concept in Azure maps to your batch-mode comment. In tech introductions I've seen, "workers" are supposed to handle longer-running processes, so that the "web role" processes can operate quickly. The part of this that bugs me is that you end up with a really fragmented object model -- the blind men and the elephant look like absolute savants in comparison.

RE: block-mode vs. interactive, you're right -- we've seen massive swings in the interactivity of our interfaces since the green-screen mainframe days. Windows giveth, the web taketh away, and so on. Clearly, our web interfaces started getting more interactive with the introduction of AJAX. Maybe the better stack to prototype here would be Azure / CSLA / Silverlight.

RockfordLhotka replied on Thursday, July 23, 2009

Personally I'm sold on Silverlight. I guess we'll see if the industry agrees.

Silverlight provides web deployment characteristics (transparent to the
user) with a real smart client technology, so all the cool architectural
capabilities of Windows Forms or comparable technologies are available to
you as a "web" developer.

To me it is the sweet spot, where you get all the smart client goodness, and
all the stateless web server goodness. Best of both worlds.


SonOfPirate replied on Thursday, July 23, 2009

Amazing how these discussions transition!  I am actually looking at diving into my first Silverlight UI as a possible alternative to the ASP.NET MVC approach.  Where are you, Rocky, with documentation for CslaLight?

What has driven me towards the ASP.NET MVC framework is the fact that I've worked a lot with the UI Automation approach that Microsoft uses for a lot of its applications.  I was recently a part of a large-scale web solution that used an automation model to drive the UI.  Unfortunately we had the same debate over performance and state because there is a lot of overhead reconstructing the UI model each time a page posts back.  On the other hand, we lose scalability when we save the "Application" object in session.  And, truth be told, with the web apps, we still have overhead each postback because we have to rewire and unwire event handlers from the pages each time.  Ultimately, we decided to save the UI model in memory in the hopes that this would be more performant.

So, I was looking at the MVC and MVP patterns for some insight into how we might improve upon this design.  I think both patterns work nicely with Csla for our business objects, but I get stuck right back at the question of state.  The vast majority of examples apply these patterns to Windows Forms applications, where maintaining references to objects isn't an issue.  But, looking at ASP.NET MVC, it appears that the framework is designed to create a new instance of the controller each time the page is requested or posted back.  When we start working with data-driven objects, this can be a lot of overhead.

Take for example a simple form used to edit an existing Customer record.  When the page is first requested, we create an instance of the Controller/Presenter, which creates our Csla business object, which goes through the layers and tiers to a database to retrieve the object's property values.  We then initialize the UI and return the page to the browser.  When the user makes changes to the form and posts the page back, we have to repeat all of this before we can update the object and persist it back to the database.  This means our Controller/Presenter, BO and supporting classes are instantiated twice, and we make THREE trips to the database (unless we implement caching).  Wow!  This is what makes me apprehensive.
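
A sketch of that flow in ASP.NET MVC might look like this (class and factory names are hypothetical, in the style of the CSLA samples):

```csharp
// Stateless edit flow: two fetches plus one update - the three database
// trips described above.
using System.Web.Mvc;

public class CustomerController : Controller
{
    public ActionResult Edit(int id)
    {
        var customer = Customer.GetCustomer(id);  // trip 1: fetch for the GET
        return View(customer);
    }

    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Edit(int id, FormCollection form)
    {
        var customer = Customer.GetCustomer(id);  // trip 2: re-fetch on POST
        UpdateModel(customer);                    // copy posted values onto the BO
        customer = customer.Save();               // trip 3: persist
        return RedirectToAction("Details", new { id });
    }
}
```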

Then, if we do save our business object in session state, are we really following the patterns when we have to pass the object back to the Controller/Presenter for persistence?  I know the MVP pattern is supposed to decouple the View from the Model so that only the Presenter knows about the Model, so it would seem that we have to save the Presenter in session state or the whole pattern breaks down.
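
If the business object does go into Session, the POST action can rehydrate it instead of re-fetching - a sketch (hypothetical names again) that trades scalability for one less database trip:

```csharp
// Session-based edit flow: the root is stashed on the GET and rehydrated
// on the POST, skipping the second fetch.
using System.Web.Mvc;

public class CustomerController : Controller
{
    public ActionResult Edit(int id)
    {
        var customer = Customer.GetCustomer(id);
        Session["EditCustomer"] = customer;       // keep state between requests
        return View(customer);
    }

    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Edit(int id, FormCollection form)
    {
        var customer = (Customer)Session["EditCustomer"]; // no re-fetch
        UpdateModel(customer);
        Session["EditCustomer"] = customer.Save();
        return RedirectToAction("Details", new { id });
    }
}
```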

I know that Silverlight has a different mechanism for working with underlying code but don't know enough about it to know if it addresses any of this or is simply another pretty face on the same ol' problem.

Any pearls of wisdom?


pondosinat replied on Thursday, July 23, 2009

In regard to the Silverlight solution - I think Rocky said it all. Silverlight does address the stateless persistence problems, because it's all stored in memory, on the client. It really does blow ASP.NET out of the water if you're talking about LOB apps (or any other types of non-trivial apps for that matter). There is a big learning curve, but well worth it IMO.

RockfordLhotka replied on Thursday, July 23, 2009

I’m not actively working on an ebook for CSLA .NET for Silverlight. I released the video series, and am now working on a video series for CSLA .NET for Windows (and using much of my vacation while the summer weather is here in Minnesota – that’s important!!).


Regarding all the MVC/MVP/MVVM/M-O-U-S-E patterns out there, it is absolutely critical – CRITICAL – to remember that patterns have consequences. Good and bad. Every pattern has different consequences.


Patterns are identified as a repeating technique or approach that solves a specific problem – so one consequence is always that the pattern solves a problem. All patterns have negative consequences too, which people often forget to consider as they get excited about the fact that there’s a solution.


It’s no accident that MVC and MVP exist as separate patterns (each with at least 2-3 sub-patterns – for a total of around 6 actual patterns). MVC might work well for me, and MVP might work better for you. Why? Because the consequences of each are different, and you and I have different apps, with different requirements.


I know this is very theoretical – but it is such an important thing to internalize! Using a pattern because it is trendy, or because it is the first you found (among many others) that solves your problem, is very dangerous.


You must do what you are doing – which is to evaluate the consequences of each pattern in the context of your particular requirements, and decide if the pattern really does make your life better. Picking the wrong pattern, or variation of a pattern, can make your life a living hell.


I’ve been focusing much more on MVVM lately, because that’s the really trendy pattern at the moment. It could become the POTY (pattern of the year), displacing DI/IoC.


POTY is (in my view) an anti-pattern, where people get so excited about the hype around a pattern that they use it blindly, not really knowing if it is good, bad or otherwise. Every year seems to have a POTY, and it really makes me sad to watch that much waste accumulate each time.


MVP was the POTY a few years ago.


What is really sad, is that some of these patterns emerge from all the crap and are permanently stained. People are disappointed, and so the pattern gets a bad rap. People assume it stinks. But every pattern has value when used in the right context, and for the right reason.




Copyright (c) Marimer LLC