I know there are already some threads regarding serialization and performance. But I want to discuss this topic with regard to "standard" .NET applications (e.g. WPF) using the WCF-Portal (not SL or WP7!).
First of all I'm wondering whether the new serialization capabilities introduced in CSLA 4.3 for SL and WPF (ICslaReader / ICslaWriter) are available for the WCF-Portal too. When I configure a new behavior within the WCF-Portal configuration in order to use a custom serializer, it is removed by CSLA and replaced by a NetDataContractOperationBehavior (which in fact uses a NetDataContractSerializer).
So it seems it is not possible to introduce a custom serializer when using the WCF-Portal?
Second question: how should a multi-server application be structured so that it is not hit by the performance penalty of serializing and deserializing the entire object graph?
Let's work with a little example: we have a Data-Server hosting the SQL database, an App-Server running the server part of the business tier, and some WPF clients containing the UI, the ViewModels and the client side of the business tier. The performance problem occurs when the client fetches data from the App-Server (e.g. a list of 5000 customers) via the WCF-Portal. The data is retrieved from the Data-Server (< 100 ms) and the business object list is built (approx. 100 ms); then the object list is serialized, transferred over the wire to the client, and deserialized there (serialization, transfer and deserialization together take 2000 to 3000 ms, and the stream size is 40 MB!!!). But hold on: isn't the data transferred over the wire when we access the Data-Server too? The same data travels from the SQL-Server to the App-Server 10 to 20 times FASTER!
Let's rethink the architecture: what is our App-Server good for? The App-Server should speed up data access (e.g. by introducing a cache) and it should run some time-consuming business logic (asynchronously, on a central server). But instead of speeding things up it makes the situation worse! Data access through the WCF-Portal (with serialization / deserialization) is 10 to 20 times slower than the database access itself! It would be much faster to configure direct database access for the WPF clients! However, we do not want to give up the App-Server; it has some other nice architectural features / properties...
How about compression?! Unfortunately compression does not solve the problem... The transfer over the wire accounts for only about 10% of the overall transfer time from App-Server to client. Reducing that small part is not worth the pain, and even if you got it for free the result would still not be acceptable!
Our current approach is to wrap the data access in our own web service with custom (fast) serialization. Now the data transfer from App-Server to client takes just as long as the transfer from SQL-Server to App-Server (OK, this is the penalty we have to pay for a second hop - that is fair).
But isn't the WCF-Portal useless then? Please don't get me wrong - 5000 objects are not that many for a business application. I think I understand your design goals and your reasons for choosing the NetDataContractSerializer, but I have no idea how the WCF-Portal is intended to work within a data-centric application... Am I missing a point?
With best regards
One of my goals for version 4.5 is to allow the use of the MobileFormatter in .NET application scenarios - basically to allow the Silverlight/WP7/WinRT serialization model to be used for pure .NET apps. But you are right, that's not possible today.
In your analysis, did you make sure to separate out the time to load the object graph from the time to serialize it?
In other words, your DataPortal_Fetch method loads the object graph, and that takes some amount of time. THEN the result is serialized, which also takes time.
You can't lump those together. It is relatively common to have forgotten (for example) to turn off RaiseListChangedEvents when loading a collection, which can make the object graph loading process extremely slow. That would have nothing to do with serialization - it is an issue with the code in DataPortal_Fetch.
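For illustration, here is a minimal sketch of that bulk-load pattern, assuming a list class derived from CSLA's BusinessListBase (the CustomerDal and CustomerInfo names are made up for the example):

```csharp
// Hypothetical fetch: suspend ListChanged notifications while
// bulk-loading, so each Add does not raise an event (and trigger
// data binding work) for every one of the 5000 items.
private void DataPortal_Fetch()
{
  RaiseListChangedEvents = false; // inherited from BindingList<T>
  try
  {
    foreach (var dto in CustomerDal.GetCustomers()) // hypothetical DAL call
      Add(CustomerInfo.Get(dto));                   // hypothetical factory
  }
  finally
  {
    RaiseListChangedEvents = true; // restore normal notifications
  }
}
```

Timing this method separately from the overall data portal call is a simple way to see whether graph loading or serialization is the part that actually dominates.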
Is it mandatory to use the MobileFormatter in the future? Isn't the BinaryFormatter available in WinRT (or Windows Phone 8)? The reason I'm asking is that I'm trying to minimize the code I write in a business object. I use private fields (except for relations, of course) for performance reasons. That means I have to override OnGetState and OnSetState if I use the MobileFormatter.
There's no binary formatter in WinRT or Windows Phone 8 (just as there is no binary formatter for Silverlight).
So the recommended approach is to use managed properties in CSLA so you do not have to override and add code in OnGetState/OnSetState.
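To make the trade-off concrete, here is a sketch of the two styles (the property names are made up, and the snippet assumes a class derived from CSLA's BusinessBase): a managed property, which the MobileFormatter serializes automatically, versus a private backing field, which forces the OnGetState/OnSetState overrides mentioned above:

```csharp
// Style 1: managed property - no extra serialization code needed.
public static readonly PropertyInfo<string> NameProperty =
  RegisterProperty<string>(c => c.Name);
public string Name
{
  get { return GetProperty(NameProperty); }
  set { SetProperty(NameProperty, value); }
}

// Style 2: private backing field - you must copy the field's value
// in and out of the MobileFormatter's serialization state yourself.
private string _city;

protected override void OnGetState(SerializationInfo info, StateMode mode)
{
  base.OnGetState(info, mode);
  info.AddValue("_city", _city);
}

protected override void OnSetState(SerializationInfo info, StateMode mode)
{
  base.OnSetState(info, mode);
  _city = info.GetValue<string>("_city");
}
```

With style 2 every new field means another pair of AddValue/GetValue lines, which is exactly the per-object boilerplate the managed-property approach avoids.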
Jonny is correct: there is no BinaryFormatter in WinRT, Silverlight, WP7 or WP8. All the new modern platforms run in a tighter security model, where the private reflection Microsoft would need to implement something like BinaryFormatter is disallowed.
As a result, the MobileFormatter is the only real option.
I appreciate using private backing fields for performance. I would point out, however, that this only really matters if you are doing intensive algorithmic work against the properties of your object. If you are doing the more typical data binding of values, with validation and relatively lightweight business rules, there's no meaningful performance difference.
Now if you are writing an app that does some massive Monte Carlo simulation across hundreds of thousands of objects, each of which is involved in heavy mathematical calculations, then you could see a difference by using private fields.
Thanks for the quick response. Right now, while I'm trying to convince my team to use CSLA, it's more important to keep it simple than it is to optimize performance, so I guess managed properties are the way to go.
Copyright (c) Marimer LLC