Howdy Friends,
Got a question. Say I have a type Car that is pretty large, and I want to cache the Cars as they are requested. Is there anything wrong with the implementation shown in the snippet below? I'm particularly concerned about network efficiency in an application server setup:
[Serializable()]
public class Car : Csla.ReadOnlyBase<Car>
{
    // Use a dictionary to cache Cars already asked for, so next time
    // there's no need to go to the dataportal.
    [NonSerialized]
    private static Dictionary<int, Car> _list = new Dictionary<int, Car>();

    .....
    .....

    public static Car GetCar(int carID)
    {
        Car ret = null;
        // Check whether it's already cached before going to the dataportal.
        _list.TryGetValue(carID, out ret);
        if (ret == null)
        {
            ret = DataPortal.Fetch<Car>(new Criteria(carID));
            _list.Add(carID, ret);
        }
        return ret;
    }

    // Called by editable objects. Not too worried about clearing the whole
    // list if only some Cars changed; updates aren't happening too often.
    public static void InvalidateCache()
    {
        _list.Clear();
    }
}

Many many thanks!
BTW, I'm involved in a major project converting our company's bread and butter application to CSLA and CSLA rocks! Thanks Rocky!
Be sure to use the Codesmith templates available, they save mega time.
If you're using remoting, I'd argue that you should maintain your list on the application server rather than on the client, which is what your code appears to do. The application server, shared by all users, would know whether certain objects should be invalidated, and you'd get one centralized cache.
You'd do this by implementing the check on the list in the DataPortal_Fetch (if not using remoting, each client would use their own cache just as is happening with your code above). If you needed to be able to invalidate the cache from the client (which I'm not entirely sure you'd need to) you would have to utilize a CSLA Command so that logic could run on the server to clear the cache. And, when invalidating the cache, you may want to provide an overload so that only certain objects are invalidated (not necessarily the whole cache!).
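To make that concrete, here's a minimal sketch of a server-side cache with per-object invalidation. All the names (CarCache, LoadFromDatabase, Invalidate) are hypothetical, not CSLA APIs - in a real CSLA app the cache check would sit inside DataPortal_Fetch, and LoadFromDatabase stands in for the actual database access. Since the app server handles many clients at once and Dictionary isn't thread-safe, the sketch guards the dictionary with a lock:

```csharp
using System;
using System.Collections.Generic;

// Illustrative read-only object; stands in for the real Csla.ReadOnlyBase<Car>.
public class Car
{
    public int Id { get; private set; }
    public Car(int id) { Id = id; }
}

public static class CarCache
{
    private static readonly Dictionary<int, Car> _cache = new Dictionary<int, Car>();
    private static readonly object _sync = new object(); // static state is shared across requests

    // Stand-in for the expensive load normally done in DataPortal_Fetch.
    public static Func<int, Car> LoadFromDatabase = id => new Car(id);

    public static Car GetCar(int carId)
    {
        lock (_sync)
        {
            Car car;
            if (!_cache.TryGetValue(carId, out car))
            {
                car = LoadFromDatabase(carId);
                _cache[carId] = car;
            }
            return car;
        }
    }

    // The overload mentioned above: invalidate a single entry
    // rather than throwing away the whole cache.
    public static void Invalidate(int carId)
    {
        lock (_sync) { _cache.Remove(carId); }
    }

    public static void InvalidateAll()
    {
        lock (_sync) { _cache.Clear(); }
    }
}
```

If invalidation needs to be triggered from the client, the call to Invalidate would be wrapped in a CSLA Command so the logic executes on the server.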
You do incur a little overhead in terms of still going to the dataportal to get your object, but I'd argue that if it's loaded and ready on the application server, that overhead will be minimal compared to loading the actual object from the database - and you'd let multiple users share a more versatile, single cache.
(I did a test with singletons on the application server and know that between remoting calls, the singleton is preserved, so your cache list should be too).
With caching, also make sure you don't have any other applications modifying the data you could be caching (which is kind of a no-brainer).
Good luck!
Certainly - I think I acknowledged the extra latency - I guess it depends on what's being passed around, but if one is caching on the client you have to go to the DataPortal anyway to see if the object is still valid, don't you? (Unless you want to set up some sort of listening mechanism - yuck.)
If there's one cache on the application server, there really won't ideally be a lot of invalidation - if multiple users are changing a single object over the course of their sessions, they'll simply be updating the shared cache with their object. Whereas if you have it client side, there'll be a lot of invalidation and a lot of reloading of the object as a result. (Yes, you could cache in two places like you say to avoid this)
I guess given a cache-in-one-place scenario, I'd personally look towards the app server in most situations.
To each their own!
skagen00: Certainly - I think I acknowledged the extra latency - I guess it depends on what's being passed around, but if one is caching on the client you have to go to the DataPortal anyway to see if the object is still valid, don't you? (Unless you want to set up some sort of listening mechanism - yuck.)
I think in some respects you'd want to go through the dataportal to verify the object is valid. Otherwise they'd either end up looking at stale data or trying to change information on the object that will fail to update if concurrency checking is implemented.
I agree you wouldn't gain a whole lot if data is changing a lot and you're caching it on the client - but if you're caching it on the application server, I think that problem isn't there in the same manner. If you have a hashtable of cached objects, and someone is updating an object in the cache, you simply notice the object is in the hashtable of cached objects and you replace the cached reference rather than inserting a new one.
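A minimal sketch of that replace-on-update idea (hypothetical names, not CSLA APIs): the dictionary indexer overwrites an existing cached reference, so after a successful save the updater simply publishes its fresh object and readers see it without an invalidate-and-reload cycle.

```csharp
using System;
using System.Collections.Generic;

// Illustrative cached object.
public class CarInfo
{
    public int Id;
    public string Model;
}

public static class SharedCarCache
{
    private static readonly Dictionary<int, CarInfo> _cache = new Dictionary<int, CarInfo>();
    private static readonly object _sync = new object();

    public static CarInfo Get(int id)
    {
        lock (_sync)
        {
            CarInfo car;
            return _cache.TryGetValue(id, out car) ? car : null;
        }
    }

    // Called after a successful update on the application server: the
    // indexer replaces the cached reference if one exists, or inserts
    // the object if it doesn't.
    public static void Publish(CarInfo car)
    {
        lock (_sync) { _cache[car.Id] = car; }
    }
}
```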
Definitely a lot of different ways caching can be approached and I'm sure each one has its own advantages and drawbacks to some extent.
Thanks for the responses, which helped me realize that I am picking up the cat by the tail here: my code above caches on the client side but invalidates on the app server.
The objects I want to cache are big and don't change frequently. I am thinking of caching on the app server... at least I'll save the trip to the DB.
Although they don't change frequently, they do change, and it's important that a client gets the changes on its next request. I need to consider the possibility of more than one client, but there's only one application server, so the app server would always know when data is stale, since all updates go through it.
I am starting to think towards some timestamp implementation, where the client always at least confirms that its data is not stale. That saves shipping the data back to the client when it's still valid.
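One way that timestamp idea could look - purely a sketch with hypothetical names, not CSLA APIs. GetServerTimestamp stands in for a lightweight round trip (in CSLA, a Command) that returns only the row's last-modified stamp, and FetchFromServer stands in for the full DataPortal.Fetch; when the stamp matches, only the timestamp crosses the wire, not the big object:

```csharp
using System;
using System.Collections.Generic;

// Illustrative client-side cache entry: the object plus the stamp
// it carried when it was fetched.
public class CachedCar
{
    public int Id;
    public DateTime Stamp;
}

public static class ClientCarCache
{
    private static readonly Dictionary<int, CachedCar> _cache = new Dictionary<int, CachedCar>();

    public static Func<int, DateTime> GetServerTimestamp; // cheap: stamp only
    public static Func<int, CachedCar> FetchFromServer;   // expensive: full object

    public static CachedCar GetCar(int id)
    {
        CachedCar local;
        if (_cache.TryGetValue(id, out local) &&
            GetServerTimestamp(id) == local.Stamp)
        {
            // Still valid: the big object never travels over the network.
            return local;
        }
        // Stale or missing: do the full fetch and refresh the cache.
        local = FetchFromServer(id);
        _cache[id] = local;
        return local;
    }
}
```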
Am I on track here, or missing something fundamental?
Copyright (c) Marimer LLC