Silverlight's Isolated Storage with ReadOnlyListBase

Old forum URL: forums.lhotka.net/forums/t/6456.aspx


skagen00 posted on Wednesday, February 18, 2009

I wanted to use IsolatedStorage to manage cached "coding" type values. I store a version number as well as the readonlylist - I ask the server what the current version # of the coding is, and if it's valid, I use what's in IsolatedStorage. If not, I initiate the fetch to refresh isolated storage with the new version of this coding.
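The flow described above might be sketched like this. This is a minimal sketch against the Silverlight `IsolatedStorageSettings` API; the key names, the `CodingCache` class, and the idea of storing the list as a serialized byte array are placeholders for illustration, not actual CSLA or application members:

```csharp
// Sketch of the cache-with-version-number flow (Silverlight-era APIs).
// Key names and the byte[] payload are hypothetical placeholders.
using System.IO.IsolatedStorage;

public static class CodingCache
{
    private const string VersionKey = "CategoriesVersionNumber";
    private const string ListKey = "CategoriesList";

    // Returns the cached serialized list if its version matches the
    // server's current version; otherwise returns null so the caller
    // can initiate a fetch and then call Store with the fresh data.
    public static byte[] TryGetCached(int serverVersion)
    {
        var settings = IsolatedStorageSettings.ApplicationSettings;
        int cachedVersion;
        if (settings.TryGetValue(VersionKey, out cachedVersion)
            && cachedVersion == serverVersion
            && settings.Contains(ListKey))
        {
            return (byte[])settings[ListKey];
        }
        return null;
    }

    public static void Store(int version, byte[] serializedList)
    {
        var settings = IsolatedStorageSettings.ApplicationSettings;
        settings[VersionKey] = version;
        settings[ListKey] = serializedList;
        settings.Save(); // persist immediately rather than waiting for app exit
    }
}
```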

This coding is a list of 24 items, each of which has on the order of 15 categories and potentially subcategories, though there are very few subcategories (perhaps 15 in total). The items are nothing special - just a code & description.

ReadOnlyClient objects have a collection of ReadOnlyActivity. ReadOnlyActivity have a collection of ReadOnlySubActivity.

In code, it does not complain when I set the isolated storage values. In fact, I can look in the debugger and my collection is fully populated within the IsolatedStorage object.

However, when I close my browser and try to hop back in, my two isolated storage setting values (the version number and the collection) are not to be found - there is nothing in isolated storage.

If I look at the actual IsolatedStorageSettings file, my version number is in the file:

<KeyValueOfstringanyType><Key>CategoriesVersionNumberTEST</Key><Value xmlns:d3p1="http://www.w3.org/2001/XMLSchema" i:type="d3p1:int">3</Value></KeyValueOfstringanyType>

However, my collection is represented by a bunch of empty nodes.

<d3p1:ReadOnlyClient>
  <d3p1:Activities>
    <d3p1:ReadOnlyActivity><d3p1:SubActivities /></d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity>
      <d3p1:SubActivities>
        <d3p1:ReadOnlySubActivity />
      </d3p1:SubActivities>
    </d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity><d3p1:SubActivities /></d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity><d3p1:SubActivities /></d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity><d3p1:SubActivities /></d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity><d3p1:SubActivities /></d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity>
      <d3p1:SubActivities>
        <d3p1:ReadOnlySubActivity />
        <d3p1:ReadOnlySubActivity />
        <d3p1:ReadOnlySubActivity />
        <d3p1:ReadOnlySubActivity />
        <d3p1:ReadOnlySubActivity />
        <d3p1:ReadOnlySubActivity />
        <d3p1:ReadOnlySubActivity />
        <d3p1:ReadOnlySubActivity />
        <d3p1:ReadOnlySubActivity />
        <d3p1:ReadOnlySubActivity />
        <d3p1:ReadOnlySubActivity />
      </d3p1:SubActivities>
    </d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity>
      <d3p1:SubActivities>
        <d3p1:ReadOnlySubActivity />
      </d3p1:SubActivities>
    </d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity><d3p1:SubActivities /></d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity><d3p1:SubActivities /></d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity>
      <d3p1:SubActivities>
        <d3p1:ReadOnlySubActivity />
      </d3p1:SubActivities>
    </d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity><d3p1:SubActivities /></d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity><d3p1:SubActivities /></d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity>
      <d3p1:SubActivities>
        <d3p1:ReadOnlySubActivity />
        <d3p1:ReadOnlySubActivity />
      </d3p1:SubActivities>
    </d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity><d3p1:SubActivities /></d3p1:ReadOnlyActivity>
    <d3p1:ReadOnlyActivity><d3p1:SubActivities /></d3p1:ReadOnlyActivity>
  </d3p1:Activities>
</d3p1:ReadOnlyClient>

What simple thing am I missing?

 

skagen00 replied on Wednesday, February 18, 2009

It looks like MobileFormatter is what I need to use. I'll replace this with a solution once I get it working, just so there's an answer to my question.

RockfordLhotka replied on Wednesday, February 18, 2009

Just be careful about size - the MobileFormatter uses the DCS (DataContractSerializer), and that can create pretty large XML blobs - which is why the Silverlight data portal has hooks where you can plug in compression.

You might need/want to use compression for your cached data too.

skagen00 replied on Wednesday, February 18, 2009

You are not kidding - I was all prepared to happily post how easy it was...

To store it,

settings[myAppSettingsKey] = MobileFormatter.Serialize(myCslaObject);

To retrieve it,

myCslaObject = MobileFormatter.Deserialize((byte[])settings[myAppSettingsKey]) as MyBusinessObjectType;

And then I saw my byte array was 2 megabytes(!!!!). Obviously, I had to shrink down my result set to verify that I could use the MobileFormatter and get back my value.

When I shrunk my result set down to one client with 15 activities (2 total subactivities), the total number of characters in that object was less than 500. But the byte array was 71,000. Ouch.

Is my 400 item (code + description) hierarchical list really causing 2MB to come down through regular DataPortal activity as well? All the more reason to get this caching to work, because that's brutal.

Using MobileFormatter is just not a good option, I don't think, for this situation. But is it the case that my fetch is bringing down 2MB?

sergeyb replied on Wednesday, February 18, 2009

It is not really a MobileFormatter issue - it's just the .NET serializer.  You really should use compression for any SL app, IMHO.  Once you have that, you can use the same compression for isolated storage as well.
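Sergey's suggestion - reusing the data portal's compression for the isolated storage cache - could look roughly like this. It's a sketch, not a tested implementation: it assumes the SharpZipLib streams used by the CSLA compression sample discussed later in this thread, and the key parameter is a placeholder:

```csharp
// Sketch: compress the MobileFormatter output before caching it in
// isolated storage, and decompress on the way back out. Assumes the
// SharpZipLib library from the CSLA compressed-proxy sample.
using System.IO;
using System.IO.IsolatedStorage;
using ICSharpCode.SharpZipLib.Zip.Compression.Streams;
using Csla.Serialization.Mobile;

public static class CompressedCache
{
    public static void Store(string key, object graph)
    {
        byte[] raw = MobileFormatter.Serialize(graph);
        using (var ms = new MemoryStream())
        {
            using (var deflate = new DeflaterOutputStream(ms))
            {
                deflate.Write(raw, 0, raw.Length); // closing the stream flushes it
            }
            IsolatedStorageSettings.ApplicationSettings[key] = ms.ToArray();
            IsolatedStorageSettings.ApplicationSettings.Save();
        }
    }

    public static object Retrieve(string key)
    {
        var packed = (byte[])IsolatedStorageSettings.ApplicationSettings[key];
        using (var inflate = new InflaterInputStream(new MemoryStream(packed)))
        using (var unpacked = new MemoryStream())
        {
            var buffer = new byte[4096];
            int read;
            while ((read = inflate.Read(buffer, 0, buffer.Length)) > 0)
                unpacked.Write(buffer, 0, read);
            return MobileFormatter.Deserialize(unpacked.ToArray());
        }
    }
}
```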

 

Sergey Barskiy

Principal Consultant

office: 678.405.0687 | mobile: 404.388.1899

Microsoft Worldwide Partner of the Year | Custom Development Solutions, Technical Innovation

RockfordLhotka replied on Wednesday, February 18, 2009

Yes, your fetch is bringing back 2 MB.

 

Writing a serializer is non-trivial – just to get it working. At some point I’d like to do some further optimization – but I must say that I’m more hoping that Microsoft just creates one for Silverlight…

 

In the meantime, compression is the answer.

 

Really though, if you are using WCF in .NET (ignoring Silverlight) you should explore compression as well. The NetDataContractSerializer is more efficient than the MobileFormatter, but still creates pretty darn big XML blobs…

 

Rocky

skagen00 replied on Thursday, February 19, 2009

Thanks both for your answers.

I assume RemotePortalWithCompressedProxy is the example I want to look at, just judging from the project name. I'll have to take a look at that when I get home from work.

How effective is the compression? What scale of reduction tends to occur? Even 1MB or even 0.5MB seems expensive to me when you consider the amount of data that is truly being exchanged in my case...

Reduction in this situation, and on a grander scale, is very important if one wants to cache lots of things that don't change regularly into IsolatedStorage. In my situation, which is really a proof of concept for Silverlight w/ CSLA for others here, my one list compressed at 50% would take up the entire default allocation of IsolatedStorage. (This list is lucky to change once a month - so simply pinging the server for the active version number and comparing it to the one in IsolatedStorage should be very effective.)

Perhaps for IsolatedStorage and read-only objects in certain circumstances, a JSON-based representation of the object with a similar factory method might be the route to go. When you consider the minimal state of read-only objects in the CSLA framework, and that you're dealing entirely with managed properties, it seems completely reasonable that what I store in IsolatedStorage could be much closer to the size of the meaningful data. I was thinking it would not be particularly hard to write such a routine.
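One way the idea above might look in code - persist only the meaningful fields and rebuild the objects through a factory, the same way a fetch would. This is a hedged sketch: the `CodingDto` type and its members are hypothetical, not part of CSLA, and it uses the `DataContractJsonSerializer` available in Silverlight rather than a hand-rolled format:

```csharp
// Sketch: store only the code + description data as JSON; a factory
// method would later walk these DTOs and load the read-only CSLA
// objects exactly as a server fetch would. All type names hypothetical.
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;

[DataContract]
public class CodingDto
{
    [DataMember] public string Code { get; set; }
    [DataMember] public string Description { get; set; }
}

public static class CodingJson
{
    public static byte[] Save(List<CodingDto> items)
    {
        var serializer = new DataContractJsonSerializer(typeof(List<CodingDto>));
        using (var ms = new MemoryStream())
        {
            serializer.WriteObject(ms, items);
            return ms.ToArray(); // size tracks the real data, not graph metadata
        }
    }

    public static List<CodingDto> Load(byte[] json)
    {
        using (var ms = new MemoryStream(json))
        {
            var serializer = new DataContractJsonSerializer(typeof(List<CodingDto>));
            return (List<CodingDto>)serializer.ReadObject(ms);
        }
    }
}
```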

Any thoughts on that?

tmg4340 replied on Thursday, February 19, 2009

The complexity of a serializer is not in developing the serialized representation - it's managing an object graph of arbitrary complexity and depth.  Your situation may be relatively simple, but others might want to serialize a more complicated graph (e.g. with circular references).  Heck, your graph probably has circular references, since CSLA objects provide a property to access a child's parent.  While I'm pretty sure it's marked as non-serializable, a serializer can't bank on that.

Having said that, building your own custom serializer to manage your particular needs would probably not be particularly hard, depending on the level of "genericness" you want to build in.

HTH

- Scott

skagen00 replied on Thursday, February 19, 2009

I definitely contemplated circular references as one case where it gets more complex. But I will say that the vast majority of objects I deal with tend not to have circular references. Could the algorithm be built to handle such scenarios? That would obviously take more effort, but it wouldn't be an immediate focus.

The nature of a child in CSLA having a parent is not a big deal - when I talk about a factory method to take the JSON representation of the object from IsolatedStorage, I'm talking about considering it as data one would get from a fetch - just in a different format. So you'd simply build up the CSLA business object in the exact same manner that you would with the fetch.

I'm also only planning on doing this with ReadOnly objects. Other than authorization rules, I can't really think of any meaningful state off the top of my head when it comes to read only lists. So I really don't think - if one considers a scenario where one is using all managed properties, that this would actually be that difficult. Maybe I'm overlooking something, but I don't think so.

As far as DataPortal performance goes, I'm going to try the compression and see how it works. I have not looked into what it takes to do one's own serialization and am hoping that compression has a meaningful effect.

I'm just surprised at the size that was generated, because I remember doing a CSLA WinForms app in my earliest days with CSLA (I've been mostly in Web since), and someone from CT (I'm in MN) was pumping down thousands of rows like this with better performance using WinForms. So when I fired up my Silverlight application from a remote location and it hung while trying to populate and cache a 300-item coding list (code + description), I was really disappointed.

 

RockfordLhotka replied on Thursday, February 19, 2009

Building a serializer isn’t terribly hard, even with circular references.

 

Building a deserializer is really quite difficult even without circular references, and it is non-trivial when circular references are involved. Which is why the MobileFormatter doesn’t support circular references.

 

If you are willing to not preserve the graph shape it is a lot simpler – note that XmlSerializer and DataContractSerializer don’t preserve the graph shape.

 

I’ve been down this road a couple times, once unsuccessfully and once (MobileFormatter) successfully. It isn’t for the faint of heart, trust me.

 

Background on MobileFormatter:

http://www.lhotka.net/weblog/CSLALightObjectSerialization.aspx

http://www.lhotka.net/weblog/CSLALightIMobileObjectDiagram.aspx

http://www.lhotka.net/weblog/CSLALightSerializationImplementation.aspx

 

There are three reasons MF generates such large data.

 

One is that it serializes the object graph – including type names, broken rules and other metadata – just like the BinaryFormatter or NDCS.

 

Another is that it doesn’t de-dup the data, which is the primary optimization you can make with an XML-based serializer. If you look at the stream, the same type names, data values and so forth may be repeated numerous times. If we added another level of indirection within the serialized data it would be possible to de-dup these values so they each appear once. That would complicate the process and incur more processing overhead, but would shrink the blob size. It would also result in a blob of XML that is virtually indecipherable by a human because it would be pointers to pointers to values.
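The de-dup idea described above - an extra level of indirection so each repeated value appears only once - can be illustrated with a simple string table. This is a standalone sketch of the general technique, not code from MobileFormatter:

```csharp
// Illustration of de-duplication via a string table: each repeated
// string is replaced by an index into a shared table, so every
// distinct value is stored exactly once in the serialized stream.
using System.Collections.Generic;

public class StringTable
{
    private readonly List<string> _values = new List<string>();
    private readonly Dictionary<string, int> _index = new Dictionary<string, int>();

    // Returns the index for a value, adding it to the table only the first time.
    public int Intern(string value)
    {
        int i;
        if (!_index.TryGetValue(value, out i))
        {
            i = _values.Count;
            _values.Add(value);
            _index.Add(value, i);
        }
        return i;
    }

    // Looks a value back up by its index during deserialization.
    public string Resolve(int i)
    {
        return _values[i];
    }
}
```

Serializing indexes plus the table once, instead of repeating type names and data values inline, shrinks the blob - but, as noted above, the result is pointers to values and is no longer human-readable.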

 

Finally, it is XML. Ultimately we use the DataContractSerializer to do the actual serialization of an object graph to/from XML. The DCS isn’t bad, but XML is terrible. It may be good for interop, but XML is really a very bad technology for efficient data transfer or storage. Unfortunately the only other option right now is Json, and the Json serializer is slower than the XML serializer because Json is harder to parse. And it isn’t clear that Json would generate a radically smaller blob than XML does.

 

So another optimization would be to create a binary equivalent to the DCS – or even a text equivalent that used something more compact than XML. And as long as you kept the same constraints on the object graph as the DCS, writing such a serializer/deserializer wouldn’t be terribly hard to do. But I still have no real desire to write my own serializer at this level if I can avoid it.

 

The upside to all this, is that XML is very compressible. It is just text after all – and it typically has a low level of “randomness”, which means that compression algorithms can really do a great job shrinking the blob.

 

Rocky

sergeyb replied on Thursday, February 19, 2009

I have measured message sizes in Fiddler, and got over 90% compression in that project you mentioned.  One specific case I recall had a message shrinking from 8.5 MB to 400 K.  On a side note, Isolated Storage is not something I use, and probably never will.  The issue is that it is stored per user on a PC, so if you switch PCs or users, you lose that data.  It has a size limit as well; although the newest runtime release (yesterday’s) supposedly fixes the issue with increasing the quotas, I still think the liability of using IS outweighs the benefits.  Just my 2 cents though…

 

Sergey Barskiy

Principal Consultant

office: 678.405.0687 | mobile: 404.388.1899

Microsoft Worldwide Partner of the Year | Custom Development Solutions, Technical Innovation

skagen00 replied on Thursday, February 19, 2009

sergeyb:

I have measured message sizes in Fiddler, and got over 90% compression in that project you mentioned.  One specific case I recall had message shrinking from 8.5 MB to 400 K.  On a side note, Isolated Storage is not something I use, and probably will never use actually.  The issue that it is stored on a PC per user, so if you switch a PC or a user, you will lose that data.  It has a limit as well, although the newest run time release (yesterday’s) supposedly fixes the issue with increasing the quotas, but still I think liability of using IS outweighs the benefits.  Just my 2 cents though…


I definitely understand IsolatedStorage has limited use, but I think it has its place in certain scenarios. For instance, the application I'm doing as a proof of concept involves time entry. The clients, activities, and subactivities change only every month or two, but the application is used daily for a brief period by each staff person. When the wait to fetch the possible selections is currently 10 seconds or so for someone off-site - for their one use each day - that discourages using the application versus the legacy one.

Now, compression may make a huge difference in whether I want to pursue IsolatedStorage -- right now I'm trying to avoid the 10+ second delay in many cases. If that suddenly becomes 1-2 seconds with compression, then I probably have an entirely different perspective on this.

OK... at this point I have to try the compression, I'll come back w/ my results.

skagen00 replied on Thursday, February 19, 2009

Thanks everybody.

I'm not far from having it work, I think. The compressed WCF service is getting hit but I'm getting an error -- I realize not too many people have played with this area yet, but here it is. If anything obvious sticks out, I'm interested to hear it. I think this thread will likely be useful for others if I can get the compression working.

So within CompressedHost, I get to this line when handling "ConvertRequest".

returnValue.ClientContext = CompressionUtility.Decompress(request.ClientContext);

My client context is roughly 572 bytes.

Upon this call in CompressionUtility.Decompress on the server side, it sets
up the stream:

          Stream s2 =
             new ICSharpCode.SharpZipLib.Zip.Compression.Streams.InflaterInputStream(ms);

This s2 appears to not be constructed properly - length is 0 in the debugger. So when this is passed to ReadFullStream in the Compression utility, and this line hits:


 int read = stream.Read(buffer, 0, buffer.Length);

I get an exception: Header checksum illegal.

Does anybody have a suggestion as to what might be causing this? 

 

RockfordLhotka replied on Thursday, February 19, 2009

Just guessing here, but you might need to set the Position of your memory stream to 0 first, otherwise it may be trying to read from the end of the stream and so would get 0 bytes.
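Applied to the decompress routine quoted above, Rocky's guess would look something like this - a sketch only, with the stream rewound before the inflater reads it:

```csharp
// Sketch: rewind the MemoryStream before wrapping it in the inflater,
// so reads start at the beginning of the compressed data rather than
// at whatever position the stream was left at after being written.
using System.IO;
using ICSharpCode.SharpZipLib.Zip.Compression.Streams;

public static byte[] Decompress(byte[] data)
{
    using (var ms = new MemoryStream(data))
    {
        ms.Position = 0; // essential if the stream was just written to

        using (Stream s2 = new InflaterInputStream(ms))
        using (var result = new MemoryStream())
        {
            var buffer = new byte[4096];
            int read;
            while ((read = s2.Read(buffer, 0, buffer.Length)) > 0)
                result.Write(buffer, 0, read);
            return result.ToArray();
        }
    }
}
```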

 

Rocky

 

skagen00 replied on Thursday, February 19, 2009

I discovered that my breakpoints in the CompressedProxy (as copied from the sample) were not getting hit, and noticed I had forgotten to change this in App.xaml's code-behind:

Csla.DataPortal.ProxyTypeName = typeof(MyCompany.MyApplication.Business.Compression.CompressedProxy<>).AssemblyQualifiedName;

//Csla.DataPortal.ProxyTypeName = typeof(Csla.DataPortalClient.SynchronizedWcfProxy<>).AssemblyQualifiedName;

After that, it worked. And how!

At this stage, I will not be using IsolatedStorage for what I was doing, and I cannot overemphasize how important it is for people to use the compression. I have only worked with one application so far, and maybe there is some scenario where you wouldn't want it (??), but I can't think of one. Usability is much improved now.

I would have been satisfied, for the moment, with 90% reduction. I still think that's unacceptable (200K for < 20K in meaningful data), but it would have been workable.

Clearly, the type of the object matters. I suspect for record maintenance where you have very fat records, reduction won't be as good as I achieved. My reduction was 98.33%!! My 2MB business object ended up being under 34K.

So I just want to really emphasize the compression channel. It was very easy to set up - I can't imagine doing a project without it.

Thanks!!

Copyright (c) Marimer LLC