I'm trying to use the Enterprise Library for Silverlight Caching Block. Using the IsolatedStorage container, I'm able to successfully serialize a ReadOnlyList to my IsolatedStorage. However, on deserialization, I get the following exception:
The type 'Csla.Core.MobileObject' cannot be deserialized because it does not have a public parameterless constructor. Alternatively, you can make it internal, and use the InternalsVisibleToAttribute attribute on your assembly in order to enable serialization of internal members - see documentation for more details. Be aware that doing so has certain security implications.
I have verified that my ReadOnlyList and each ReadOnlyInfo child contain public parameterless constructors, but I still receive this exception.
I'm not familiar with this application block. But it is possible that they are using the DataContractSerializer (DCS) to directly serialize the object graph - and that won't work.
The problem is that Silverlight has no BinaryFormatter or NetDataContractSerializer (like .NET), and so the more common solution to cloning an object graph isn't available on Silverlight. For CSLA we implemented our own serializer (MobileFormatter) to overcome this limitation - but of course this Silverlight caching block wouldn't know about such a thing, and is probably restricted to the more limited/primitive DCS...
They are actually using the DataContractJsonSerializer, belonging to the System.Runtime.Serialization.Json namespace of the System.ServiceModel.Web assembly.
In any event, I have come up with a workaround. When using the Ent Lib Caching Block, you interact with a CacheItem, which exposes a Value property. The Value property is the object you want cached.
So before setting the Value of the CacheItem, I perform the following (note that the CompressionUtility is part of the SharpZipLib project):
var data = Csla.Serialization.Mobile.MobileFormatter.Serialize(result.Object);
data = CompressionUtility.Compress(data);
cacheContainer.Add(cacheKey, data, DateTimeOffset.MaxValue);
First, result.Object represents a ReadOnlyList of objects. Notice that I'm first serializing the object graph with the Csla MobileFormatter, then further compressing the results. This is important. To put things into perspective, here are some size differentials with different configurations:
Given a ReadOnlyList that contains five ReadOnly instances, each one containing an ID and a Name property (not a lot of data here):
1.) Using Ent Lib Cache serialization default: 2K (remember that objects cannot be deserialized by Ent Lib after they've been serialized in iso storage)
2.) Serialize using MobileFormatter: 31K (remember, this is 31K for FIVE objects containing only ID and Name property values)
3.) Serialize using MobileFormatter, then compress using CompressionUtility: 4K
As you can see, option 3 is twice the size of option 1. But this allows you to continue to use your ReadOnlyInfoLists in conjunction with the Ent Lib Caching Block (Isolated Storage, not a problem with In Memory). Option 2 speaks for itself.
So after successful serialization to Iso Storage, here is how I get it back:
var data = CompressionUtility.Decompress(item.Value as byte[]);
object values = Csla.Serialization.Mobile.MobileFormatter.Deserialize(data);
return values as whateverTheTypeIsThatYouSerialized;
The reason number 2 is large (and why compression works so well) is that the .NET type data is included in the serialization data. To a large degree, the same type names are repeated numerous times in the stream - so compression works great, but the initial size is large.
I evaluated techniques to optimize the stream by creating a dictionary of type names. And that would work fine, but is really duplicate processing effort over compression. Or to put it another way, compression was still valuable, and if you are going to compress anyway it is cheaper to just let it do all the work than to have the MobileFormatter consolidate the type names.
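The trade-off between a type-name dictionary and plain compression can be sketched like this (a Python illustration, not CSLA code — the stream layout and the type name are made up to mimic the repetition MobileFormatter produces):

```python
import zlib

# Fake serialized stream: five objects, each tagged with a full
# assembly-qualified type name, as a MobileFormatter-style stream would be.
type_name = ("MyApp.Library.ReadOnlyInfo, MyApp.Library, "
             "Version=1.0.0.0, Culture=neutral")
raw = "".join(
    f'<o t="{type_name}"><f n="Id">{i}</f><f n="Name">Item{i}</f></o>'
    for i in range(5)
).encode()

# Option A: a dictionary of type names -- replace each repeated name
# with a short index (what the consolidation pass would do).
dictionaried = raw.replace(type_name.encode(), b"#0")

# Option B: skip the dictionary pass and just compress the raw stream.
compressed = zlib.compress(raw)

print(len(raw), len(dictionaried), len(compressed))
```

Compression alone recovers the redundancy of the repeated names (and more), which is why the dictionary pass amounts to duplicate effort once you've decided to compress anyway.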
For some future version of CSLA I'm considering completely abandoning Microsoft's serialization schemes to create one unique to CSLA. Given the intrinsic knowledge CSLA has about its own types, I could serialize the object graphs in a much more efficient manner than Microsoft does with reflection, and could generate a data stream more like JSON than the highly verbose NDCS XML results (which is similar to what MobileFormatter generates).
In any case, I'm rambling - I'm glad you have a solution!
That would be fantastic. Would you use this serialization scheme when marshalling objects between servers as well? Anything to reduce that bloat would be gold.
I'm noticing that just sending ReadOnly objects to my SL clients (even with compression) is pretty huge. 4X more than serializing POCO objects containing the same properties and runtime values.
I realize the weight is because you're serializing the entire objects, and not just the data, but I'm wondering if there's a way around that, especially when dealing with ReadOnly objects.
Although I suspect you'd run into trouble with this approach, since you can't reflect against private members in SL (assuming your ReadOnly property setters are private).
This is purely a guess, but I would think that Rocky's custom solution would utilize either the managed backing fields (which are very strongly encouraged in SL) or the custom serialization hook methods (which are required if you aren't using managed backing fields). That's the easiest way to get around the reflection issues. I would also guess that any custom solution built would be used everywhere. After all, the DataPortal is basically the same regardless of the client tech...
What I find somewhat amusing is that if this does get implemented, the likely solution is one that some have been clamoring for, which is to only send a "raw data stream" over the wire. Where this gets interesting is that doing so potentially obviates the need for cloning the business object, thus giving some folks the "in-place update" they seem to think is so incredibly important. That obviously introduces a whole new set of rules, and perhaps that is still a bridge too far to implement in a way that doesn't break existing functionality. I do realize the "returns a new object" concept has its issues, and maybe I'm just so used to the CSLA architecture that it doesn't bother me. But for some people, that seems to be a concept akin to eating your own young...
Scott is correct, if I do this it would rely on the same infrastructure I put in place for MobileFormatter, and so would have the same limitations in terms of what could be serialized.
The MobileFormatter (MF) uses the IMobileObject interface to politely ask each object in the object graph for its field data, and then a list of its child objects. The MF then goes through the list of child objects, politely asking each child for its field data, and a list of its child objects. And so on.
As each object's data is collected, the data is placed into a DTO that can be serialized by the DataContractSerializer (DCS).
The end result is a DTO object graph that mirrors the business object graph. Except this DTO object graph only contains DCS serializable properties. Each DTO contains all the information necessary to create a clone of the original business object - including the object's type, and the type of each data field. The type names are assembly qualified type names, so they aren't compact. As you can imagine, for a simple object, the size of the type information may be larger than the actual object data.
The DCS is then used to serialize the DTO graph into (by default) BinaryXML, and that's what's sent across the network.
OK, given that background, you can probably imagine how a "CslaFormatter" might exist that would do much the same thing, but would rely on intrinsic knowledge of the business object graph shape to do the deserialization. More like JSON, less like SOAP/BinaryFormatter/NDCS.
The key here is that the field manager has the required intrinsic knowledge. Even the field names can probably be dropped from the data stream, because each property has an int index value that is consistently known by the field manager on both sides of the wire.
If I went that far (not sending field names) then this new formatter would only work for properties that have an IPropertyInfo descriptor. Personally I think that's an acceptable requirement, but it is certainly a topic for discussion.
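The field-index idea can be sketched like this (Python, with invented names; in CSLA the shared descriptors would be the registered IPropertyInfo instances on both sides of the wire):

```python
# Both ends of the wire share the same ordered property descriptors,
# so the stream can carry values positionally -- no field names,
# and no per-field type names.
PROPERTIES = ["Id", "Name"]  # index 0 -> Id, index 1 -> Name

def serialize(obj):
    # Emit just the values, in descriptor order.
    return [obj[name] for name in PROPERTIES]

def deserialize(values):
    # Rebuild fields by position using the shared descriptor list.
    return {name: value for name, value in zip(PROPERTIES, values)}

original = {"Id": 42, "Name": "Rocky"}
wire = serialize(original)   # only the values travel
restored = deserialize(wire)
print(wire, restored)
```

The savings come precisely from the shared intrinsic knowledge: anything both sides already agree on never has to appear in the stream.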
I also suspect that this new formatter would still serialize into a DTO graph, and would still use the DCS to do the final wire serialization. But even that's not a given. I think I have some old Modula-2 code from my Amiga days, where I implemented the Kermit protocol, and that is more compact than JSON or XML :)
I should also say that the in-place serialization vs new-copy serialization used today is (in my view) out of scope for the serialization discussion.
That could also happen someday, but the pressing need (in my view) is to solve the more immediate issues around network bandwidth and related performance.
Copyright (c) Marimer LLC