CSLA 4 version 4.3.0 alpha available

Old forum URL: forums.lhotka.net/forums/t/11097.aspx

RockfordLhotka posted on Thursday, January 26, 2012

I have posted an alpha version of CSLA 4 version 4.3.0 for download from the CSLA download page.

Although Jonny has been extremely busy with a number of bug fixes and some feature changes, I think the biggest change in this alpha release is a major optimization of the MobileFormatter.

MobileFormatter is used to serialize object graphs on Silverlight and Windows Phone. It is used by the data portal and by n-level undo (if you use that feature).

Until now, I have recommended that you use compression on the byte stream that flows over the data portal, because the XML created by the MobileFormatter is often quite large. It compresses efficiently, and we’re quite efficient about what we put into the byte stream, but it is just plain big.

Sergey did some really nice work for version 4.3, allowing the use of alternate reader/writer objects so the data can be serialized into something other than XML. Specifically, he created binary reader and writer objects that reduce the byte stream size by around 70%. That's about as much as you could expect to get from compression!

The result is that you can probably avoid the CPU intensive overhead of compression and still get a small byte stream to transfer over the network.

The CSLA 4 version 4.3.0 change log includes a discussion of the configuration settings you need to change to use the new reader/writer objects.

This is a non-breaking change, because the default is the same behavior as in 4.2. But this is a big change and we really appreciate your help in testing the new reader/writer objects to ensure they work across a wide range of applications.

Killian35 replied on Thursday, January 26, 2012

Hello Rocky!

This is great news. We all appreciate the hard work you and everyone has put into this framework.

After tinkering with the source code, I feel the CslaBinaryReader and CslaBinaryWriter should be extended to support a CslaKnownTypes.Custom value, with overridable read and write methods. The reason is to allow custom value types without having to implement IMobileObject on them. I create my own value types that better align with the business domain I work with, and today I have to implement IMobileObject on them to integrate them, to avoid overriding all of the Get/Set state methods in every business class that uses my datatype.

sergeyb replied on Thursday, January 26, 2012

I thought about that as I worked on it. The issue is that the enumeration tells the system how to serialize a specific type, so having Custom would not accomplish much: you would only be able to handle one extra type. Instead, you can now swap out the entire reader/writer pair if you need to, and you can probably use the existing pair's code as your starting point. It's just a different way to accomplish what you are trying to do. Am I missing something about the Custom enumeration value?


Killian35 replied on Thursday, January 26, 2012

The enumeration value would only signal your base class to hand off reading/writing the next object to an override method. When your base class comes across an unsupported type, it writes (CslaKnownTypes.Custom) to the writer, then passes the target and writer to an overridable method (whose default implementation throws the not-supported exception). It then becomes the responsibility of the derived class to write a marker identifying my custom type, followed by the type's data.

So the byte stream would be (CslaKnownTypes.Custom)(MyCustomTypeIndicator)(my type bytes)...

When reading the stream back, if the base class comes across a (CslaKnownTypes.Custom), it calls a virtual method to read the object back. The derived class would read (MyCustomTypeIndicator)(my type bytes) and return an instance of my custom type.

I hope that makes some sense.
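Sketched in code, the proposal might look roughly like this. To be clear, everything here is hypothetical: CslaKnownTypes.Custom, a WriteUnsupported hand-off method, and the MyTypeIndicators/MyMoney names do not exist in CSLA and are invented purely to illustrate the idea above.

```csharp
// Hypothetical sketch of the proposed extension point. None of this
// exists in CSLA today: CslaKnownTypes.Custom, WriteUnsupported and
// MyTypeIndicators are made up for illustration only.
public class MyBinaryWriter : CslaBinaryWriter
{
  protected override void WriteUnsupported(object target, BinaryWriter writer)
  {
    var money = target as MyMoney;
    if (money != null)
    {
      writer.Write((byte)CslaKnownTypes.Custom);  // marker written by base class
      writer.Write(MyTypeIndicators.Money);       // app-defined type indicator
      writer.Write(money.Amount);
      writer.Write(money.CurrencyCode);
    }
    else
    {
      // default implementation would throw the "type not supported" exception
      base.WriteUnsupported(target, writer);
    }
  }
}
```

The reader side would mirror this: on seeing the Custom marker, a virtual read method consumes the type indicator and payload and returns the reconstructed value.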


sergeyb replied on Friday, January 27, 2012

It does, but you would have to leave your own markers in the stream to let you know how to handle that specific "primitive" type, and you would have to do that on both ends: read and write. I think we could open up the existing classes for that, provided Rocky is cool with it and I have some time to do so, along with unit tests to support it.

RockfordLhotka replied on Monday, January 30, 2012


It does, but you would have to leave your own markers in the stream to let you know how to handle that specific "primitive" type.  You would have to do that on both ends: read and write.  I think we could open up existing class for that, provided Rocky is cool with that and I have some time to do so along with unit tests to support.

I don't think we'll pursue this. We have a serialization model, and we've now opened the model so it is possible for people to swap in their own reader/writer pairs - so if you want something different you can write it.

In this particular scenario, the value type has to implement two simple methods: OnSetState/OnGetState. Even if we went to all the work to enable a custom serialization type, all that would do is move the code from those two methods into some other location. It wouldn't save any code, it would just rearrange it.
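As a rough illustration of the pattern Rocky describes (the shape follows CSLA's MobileObject conventions, but treat the exact signatures as a sketch, and the Money type is invented for the example):

```csharp
// Sketch: a custom value-like type made serializable by MobileFormatter
// by deriving from MobileObject and persisting its state explicitly.
[Serializable]
public class Money : Csla.Core.MobileObject
{
  public decimal Amount { get; private set; }
  public string Currency { get; private set; }

  protected override void OnGetState(
    Csla.Serialization.Mobile.SerializationInfo info, Csla.Core.StateMode mode)
  {
    base.OnGetState(info, mode);
    info.AddValue("Amount", Amount);
    info.AddValue("Currency", Currency);
  }

  protected override void OnSetState(
    Csla.Serialization.Mobile.SerializationInfo info, Csla.Core.StateMode mode)
  {
    base.OnSetState(info, mode);
    Amount = info.GetValue<decimal>("Amount");
    Currency = info.GetValue<string>("Currency");
  }
}
```

A custom serialization hook would only relocate these two small methods, which is Rocky's point: it rearranges the code rather than eliminating it.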

To be honest, I'd much rather spend that dev time optimizing the byte stream and making sure we fully and reliably cover all existing serialization scenarios. Obviously MobileFormatter needs to work in a very reliable manner, so having as many people (you!) test it as possible is important, and having Sergey able to do optimizations and fixes is critical.

tiago replied on Monday, January 30, 2012

Looking back at my compression adventures (http://forums.lhotka.net/forums/p/8067/38627.aspx#38627), I found that the BinaryFormatter byte stream was compressed by about 80% when using ICSharpCode.SharpZipLib.dll, and I'd guess the XML byte stream could easily be compressed by 90%. But that's not the point. The point is the compression/decompression overhead.

Usually we need to compress ALL of the data before starting the (compressed) transmission. Likewise, we need to receive all of the (compressed) data before starting to decompress, and the data is delivered to the application only after it's all decompressed. This is a nuisance to say the least, and it only pays off on very low bandwidth links (like 2 Mbps wireless modems); on regular wired networks, compression results in the data taking longer to arrive. (I didn't test on a local wireless network.)

I also tried other open source compression algorithms - some GPL and some "use as you wish". SharpZipLib proved to be the one that compresses the most, but also the slowest.

At the time I was told to use a sink and an algorithm that can compress the data as it passes by, without waiting for the whole packet, but I didn't try that solution.
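The "compress as it passes by" idea is essentially stream wrapping: a compressing stream emits output incrementally instead of buffering the whole packet first. A minimal .NET sketch using the built-in GZipStream (SharpZipLib exposes a similar stream-based API):

```csharp
using System.IO;
using System.IO.Compression;

static class CompressionDemo
{
  static byte[] Compress(byte[] data)
  {
    using (var output = new MemoryStream())
    {
      using (var gzip = new GZipStream(output, CompressionMode.Compress))
      {
        // Data written here is compressed incrementally. If "output" were a
        // network stream instead of a MemoryStream, compressed bytes would go
        // on the wire as they are produced, not after the whole packet is ready.
        gzip.Write(data, 0, data.Length);
      }
      return output.ToArray();
    }
  }
}
```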

Back to the point

1) Can the MobileFormatter start transmission of data before... well, never mind. The MobileFormatter doesn't handle data transmission; it just prepares the packet of data for transmission.

2) Does the MobileFormatter deliver data faster than a compression engine? Yes, of course it does, since it just does a simple streaming job and doesn't use complex mathematical algorithms.

3) Does the reduced physical transmission time overcome the overhead of the MobileFormatter? I didn't test, but I believe so.

4) Does the reduced physical transmission time overcome the overhead of a compression engine? Referring to my previous tests with Remoting, I don't believe it does.

sergeyb replied on Friday, February 10, 2012

On #4, I think the important thing is the choice that you have. You can decide what is more important to you and your application: CPU/memory, or the size of the data over the wire. The choice is entirely yours. My recommendation would be to test all configurations, monitoring memory and CPU usage, and then decide. As I mentioned before, you can use compression on top of the new scheme, and will likely still end up with much better results than before.

skagen00 replied on Thursday, January 26, 2012

Interested in understanding if my general perception is wrong here...

I've always felt that the compression was actually pretty quick - and a bit contrary to the post above I found that most objects were 90-95% compressed, not 70%.

If it's indeed 70%, then I would be looking at 3X as much data going between my users and the server - and I've tended to think that this is the aspect with the most impact on user experience. Isn't there a good likelihood that whatever performance is gained by not compressing is outweighed by the extra data going back and forth?

Clearly this wouldn't be moving forward if it wasn't visibly seen as beneficial to Rocky & his team.

Just wondering!

RockfordLhotka replied on Thursday, January 26, 2012

skagen00, Please feel free to do independent testing using your real apps/data. That would be incredibly helpful!!

For me the priority of this was raised dramatically when WinRT showed up with the same basic restrictions as Silverlight. To me this indicates that the future of .NET app development doesn't really support BinaryFormatter or NDCS, and so eventually most CSLA apps will use MobileFormatter.

Either we accept a dependency on some compression library (what a mess that would be!), or we figure out some other way for most apps to work effectively without compression.

The MobileFormatter is really quite efficient in terms of minimizing the amount of actual data it tries to serialize. The real problem is with the encoding of that data into a byte stream.

The XML encoding used by the DataContractSerializer and XmlWriter creates a very large byte stream. I'm sure different compression techniques shrink this by different amounts, but if you are really seeing 95% compression on average I'm surprised. My observations were more like 70% compression.

A couple years ago I did a bunch of research into possible ways to shrink the byte stream by eliminating obvious duplication of things like .NET type names and other things that get serialized by the DCS. By itself that resulted in a 50% reduction on the whole, but that's pretty far off the typical compression result, so I stopped pursuing the effort.

Again, my interest in pursuing this now comes back to WinRT, and the likelihood that MobileFormatter will become the default solution, with BF/NDCS being used only in those rare cases where your app has FullTrust on client and server, and you aren't using SL/WP7/WinRT.

I know, that's been the norm thus far. But I'm skeptical that the wild west FullTrust world is the future...

sergeyb replied on Friday, January 27, 2012

Even with the new scheme, compression will likely shrink the stream some. The key point, though, is that compression is not free: you are eating up CPU cycles and memory on both the client and the server.

tbaldridge replied on Friday, February 10, 2012

With our app we send a large amount of data across the wire via CSLA. It was our initial performance issues with the serializer that got me looking into how to improve the serialization.

I think what should be mentioned, though, is that there is a "critical mass" of sorts where this new serializer begins to be a big improvement. In our app, we realized that we were trying to stream 30-70MB of data to the client, and although this compressed down to about 3MB after it hit the compressor (SharpZipLib), the pressure on the serializer was still intense.

With our custom serializer (which is very, very close to the new serializer in 4.3.0) we can send massive lists of objects without issue. We can send 70k records to the server in about a tenth of the time of the old serializer.

But back on track... the issue is: the larger your dataset, the more time will be spent in compression, but also the more benefit you will see from compression. The smaller the dataset, the less benefit you'll see from compression, but also the faster the whole process will be. So for small datasets, why turn compression off? Compression will only take a fraction of the time it takes to send the data down the wire. For larger datasets (and mobile apps) you pretty much have to have compression, or the client will be waiting forever for the data.

The short version is: we're using the new serializer format with compression turned on, and we have no complaints about either.

Jack replied on Thursday, February 16, 2012

Would you (or someone else) be able to post some basic configuration information on how to set that up in the 4.3.x model?

I'm having horrible issues going from 4.2.x using a serviceReferences.clientconfig to the new model.



RockfordLhotka replied on Friday, February 17, 2012

I would recommend going from 4.2 to 4.3 first - which should be entirely seamless.

Then work on configuring the client and server to use the new binary reader/writer.

A Silverlight client must be configured in code as the app starts up. The app server is configured in web.config.

The change log document has Sergey's writeup of the required configuration.

Jack replied on Tuesday, February 21, 2012

That is one of my issues - going from 4.2 to 4.3 is a breaking change for me. I end up with "Remote server not found" errors. As I mentioned earlier, I tried the all-or-nothing approach; that didn't work, so I rolled back and finally got things working again. Then I ran the 4.3 installer planning to start another version, and it replaced my 4.2 install. I noticed because my next rebuild picked up the 4.3 DLLs and I got the "Remote server not found" errors again.


I then had to roll back and uninstall all CSLA installations (the MSIs kept complaining about a newer version being installed). I put back 4.2 and got things running again.


I then started a new SVN checkout against 4.3 and after a fruitless afternoon then posted in the forums.


My setup is a simple Web project to host the Silverlight client; the Silverlight client talks directly to the WCF web application using the basic compression example from the forums. It is configured with a ServiceReferences.ClientConfig file, not the application-startup method.


I haven’t changed that since 3.8 (my production deployment) and had no problems installing all dev builds through 4.2.x



RockfordLhotka replied on Tuesday, February 21, 2012

I don't know what is happening. I upgraded all the sample apps without issue...

Maybe it is something to do with compression? Though a couple of the samples use compression too...

sergeyb replied on Wednesday, February 22, 2012


This process should work just fine, regardless of whether you use a config file or not. It is not intended to be a breaking change. Can you post a sample project that we can use for testing? Did you try what I suggested and capture the WCF debug information?

Please let me know more details so that I can look some more into the issue.


Jack replied on Monday, February 06, 2012

To use this with a SL5 application is the recommended approach to leave the libraries as SL4 or to recompile as SL5?



RockfordLhotka replied on Monday, February 06, 2012

In the alpha release you can use SL 4 or 5.

In the next release the CSLA assemblies will be bound to SL 5, so you'll need to use SL 5 to use that version of 4.3.

Jack replied on Thursday, February 09, 2012

I am currently using the SL/WCF data portal configuration with a ServiceReferences.ClientConfig and the standard compression implementation. I tried reverting from the compression host to the basic default host, as well as implementing the steps outlined in the release notes, but all I ended up with for my efforts is a "Remote server not found" message.

I made a few attempts to model my configuration after the ProjectTracker implementation, but to no avail.

I then rolled back all my code changes and simply used the 4.3 release and again I had the same issues.

Rolling back to 4.2 and rebuilding finally restored my application.

Given the information provided in the release notes, should that work with my current setup? Are there 'baby steps' I can take to ensure it works? My setup is a Web project hosting the SL app, and the SL client talks directly to the WCF host web app.

Also, I recompiled the SL4 project as SL5 before referencing it in my application.



RockfordLhotka replied on Thursday, February 09, 2012

You probably can't use the clientconfig file when using the new serialization model.

The client-side data portal configures itself either via code or from the config file, but not both. The new serialization model requires code-based configuration.

This is because there is no System.Configuration in Silverlight, so we can't really leverage the clientconfig file ourselves - so we need to choose one or the other.
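For context, code-based configuration of the Silverlight client has typically meant setting the data portal endpoint during application startup, along these lines. This is only a sketch of the pre-4.3 pattern (the WcfProxy.DefaultUrl property and a placeholder URL); the additional reader/writer settings for 4.3 are documented in the change log and are not shown here.

```csharp
// In App.xaml.cs (application startup), before the first data portal call.
// Sketch: configure the WCF proxy endpoint in code, since Silverlight has
// no System.Configuration for CSLA to read a clientconfig file on its own.
// The URL below is a placeholder for your own service address.
Csla.DataPortalClient.WcfProxy.DefaultUrl =
  "http://localhost:1234/WcfPortal.svc";
```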

sergeyb replied on Friday, February 10, 2012

404 is the standard message you get from the SL WCF stack as a result of almost any possible error on the server. I think you should put WCF debugging into your web.config to find the real issue if you have trouble debugging otherwise.

Just add this section to web.config and run your app to hopefully see the real error:


<system.diagnostics>
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing">
      <listeners>
        <add name="traceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\Traces.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>

Copyright (c) Marimer LLC