What stereotype to use for rapidly changing collections

Old forum URL: forums.lhotka.net/forums/t/9346.aspx


SeanO posted on Monday, August 09, 2010

This will be my first exposure to CSLA, and I'm not quite finished with the book yet, but I had a question I can't seem to figure out just by looking through the index in the book.

I am developing an application that can assess various measures of risk associated with commodities portfolios.  This involves taking input from the user about the forward prices of various commodities, as well as information about various contracts they might have.  For example, the forward price of gas for the next 36 months might be entered by the user and then stored.  The forward price of oil might be stored in another array for the same time period.

For some of the procedures, we must perform Monte Carlo simulations to evaluate the portfolio based on thousands of different simulated prices.  This means changing the entire array of forward prices for each commodity, then evaluating all contracts based on those new prices and storing the results, many times.  Note that we don't care about storing the simulated prices, but we do care about maintaining the integrity of the original prices in the data.

Any suggestions would be greatly appreciated.


RockfordLhotka replied on Monday, August 09, 2010

The CSLA stereotypes are generally concerned with supporting use cases (user scenarios, stories - whatever you want to call them) that deal with some sort of interface (web, windows, service, etc).

I wouldn't use BusinessBase to create batch processing objects for example. That seems to make little sense, since the features you gain from that base class are probably useless in such a scenario.

Running a simulation is a batch process - or at least a non-interactive, compute-intensive process. So you have to ask whether you need n-level undo, data binding, interactive business rules or authorization to implement such an algorithm. My guess: no.

Here's an important secret (not really a secret, but something that people don't often realize): the data portal doesn't care about the CSLA stereotypes or base classes. Any serializable object can flow through the data portal. So it is quite realistic to use the data portal to retrieve a set of near-POCO objects that contain the data necessary to seed your simulation (as an example), and to save the results.
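To make that concrete, here is a minimal sketch of a near-POCO flowing through the data portal. The class and member names are invented for illustration, and the exact requirements (such as a default constructor and a `DataPortal_Fetch` method found via reflection) are assumptions based on classic CSLA conventions, not a definitive recipe:

```csharp
using System;
using Csla;

// Hypothetical near-POCO that seeds the simulation. It subclasses no
// CSLA base class; it only needs to be serializable to move through
// the data portal.
[Serializable]
public class SimulationSeed
{
  public int PortfolioId { get; private set; }
  public decimal[] ForwardPrices { get; private set; }

  // Invoked by the data portal on the server side (located by
  // convention in classic CSLA).
  private void DataPortal_Fetch(int portfolioId)
  {
    // Load the stored forward curve from the database here.
    PortfolioId = portfolioId;
    ForwardPrices = new decimal[36]; // placeholder for the real load
  }
}

// Client-side usage:
// var seed = DataPortal.Fetch<SimulationSeed>(42);
```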

So based on what little I know from your post, for the data entry user scenario I'd use editable objects (BusinessBase/BusinessListBase) because that's what they are designed to do. And for the simulation algorithm I'd use much simpler objects.
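For the data-entry side, a sketch of an editable object might look like the following. It uses the CSLA 4-era managed property syntax; the class and property names are invented for this example:

```csharp
using System;
using Csla;

// A minimal editable root for the data-entry scenario - this is where
// BusinessBase earns its keep (data binding, business rules, n-level
// undo). All names here are illustrative.
[Serializable]
public class ForwardPrice : BusinessBase<ForwardPrice>
{
  public static readonly PropertyInfo<DateTime> DeliveryMonthProperty =
    RegisterProperty<DateTime>(c => c.DeliveryMonth);
  public DateTime DeliveryMonth
  {
    get { return GetProperty(DeliveryMonthProperty); }
    set { SetProperty(DeliveryMonthProperty, value); }
  }

  public static readonly PropertyInfo<decimal> PriceProperty =
    RegisterProperty<decimal>(c => c.Price);
  public decimal Price
  {
    get { return GetProperty(PriceProperty); }
    set { SetProperty(PriceProperty, value); }
  }
}
```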

There are many possibilities on the simulation.

  1. You might make the simulation objects serializable and directly fetch/save them.
  2. You might have a set of simple data-loader objects, a set of results-saver objects, and other objects that implement the algorithm.
  3. Etc.
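The simulation itself needs none of the CSLA base classes. Here is one possible sketch using plain objects, with the original forward curve kept intact by working on a copy each trial; the perturbation model and all names are invented for illustration:

```csharp
using System;

// Plain-object sketch of the simulation loop. The stored prices are
// never mutated - each trial perturbs a copy. Independent of CSLA.
public static class PortfolioSimulator
{
  public static double[] Run(double[] originalPrices, int trials,
                             Func<double[], double> evaluatePortfolio)
  {
    var results = new double[trials];
    var rng = new Random(12345); // fixed seed for repeatable runs

    for (int t = 0; t < trials; t++)
    {
      // Copy first, then perturb the copy; originalPrices is untouched.
      var simulated = (double[])originalPrices.Clone();
      for (int i = 0; i < simulated.Length; i++)
        simulated[i] *= 1.0 + (rng.NextDouble() - 0.5) * 0.1;

      results[t] = evaluatePortfolio(simulated);
    }
    return results;
  }
}
```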

I do think the key is to remember that the data portal is largely independent of the rest of CSLA. It has its own requirements, but those do not include the CSLA base classes. In short, the data portal only requires that your objects be serializable.

Technically that's it.

Then again, it is often simpler to implement data retrieval objects by subclassing ReadOnlyBase, and data update objects using CommandBase - just to get the managed property capabilities if nothing else (though that's far more important for Silverlight than .NET).
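As one possible sketch of that last option, a results-saving command might look like this. The managed-property usage on CommandBase mirrors CSLA 4; all names are invented, and the persistence logic is left as a placeholder:

```csharp
using System;
using Csla;

// Hypothetical command object that carries simulation results to the
// server for persistence, using managed properties for serialization.
[Serializable]
public class SaveResultsCommand : CommandBase<SaveResultsCommand>
{
  public static readonly PropertyInfo<double[]> ResultsProperty =
    RegisterProperty<double[]>(c => c.Results);
  public double[] Results
  {
    get { return ReadProperty(ResultsProperty); }
    private set { LoadProperty(ResultsProperty, value); }
  }

  // Runs on the server via the data portal.
  protected override void DataPortal_Execute()
  {
    // Persist Results to the database here.
  }
}
```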


Copyright (c) Marimer LLC