You could be correct. This code hasn't caused a problem in several years, so I'm not eager to change it, but technically it could be an issue.
The problem, of course, is that the solution would be to broaden the scope of the lock, so a lock is always taken before we look in the cache. That too would be quite expensive, especially before .NET 4.0, because all the lock mechanisms are costly. In .NET 4.0 I hope to change all the locking code in the framework to use the new, less expensive user-mode locks, and to use the various new synchronized data structures that encapsulate the threading code almost completely.
Yes, GetMethod() is expensive. But it should only be called if the item isn't in the cache. So I can't see how it would be invoked outside the lock. Surely we wouldn't invoke it before the lock, because at that time we don't know if we need to invoke it at all. And we can't invoke it after the lock, because that's too late - we already needed the value. And if I split the process into two lock sections we run the risk of deadlocks and race conditions, which would be quite hard to resolve.
If you have a suggestion on how to make this better without adding the risk of deadlock, I'd love to hear it.
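The structure described above (unlocked check, then lock, then re-check before the expensive call) can be sketched roughly as follows. The type and member names here (MethodCache, Get, the expensiveLookup delegate) are hypothetical stand-ins for illustration, not the actual framework code:

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of the double-checked locking cache pattern discussed above.
// Names are illustrative; the real code caches reflection results.
public static class MethodCache
{
    private static readonly Dictionary<string, string> _cache =
        new Dictionary<string, string>();
    private static readonly object _sync = new object();

    public static string Get(string key, Func<string, string> expensiveLookup)
    {
        string value;
        // Unlocked first check: fast path when the item is already cached.
        if (!_cache.TryGetValue(key, out value))
        {
            lock (_sync)
            {
                // Second check inside the lock: another thread may have
                // added the item while we were waiting for the lock.
                if (!_cache.TryGetValue(key, out value))
                {
                    // The expensive call happens only here, inside the lock.
                    value = expensiveLookup(key);
                    _cache.Add(key, value);
                }
            }
        }
        return value;
    }
}
```

This shows why the expensive call can't move outside the lock: before the lock we don't yet know whether it's needed, and after the lock it's too late.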
rasupit: _cache.Add is wrapped in the lock statement, so I'm not sure why you said there is no lock. This code is the pretty standard double-checked locking pattern that was in wide use, at least before .NET 4 came out. Any suggestion to improve the code is welcome.
This is the reason the result of this expensive operation is being cached.
It is somewhat safe, but not completely.
For example, suppose that the Add() method adds the key to the keys collection, and then adds the value to the values collection (all inside the dictionary).
Then suppose that the key gets added, and before the value is added, some other thread's TryGetValue() finds the key and returns the (not yet set) value of null. Or it throws an exception because the dictionary is in an indeterminate state. We actually don't know for sure what might happen.
So that other thread could get an invalid value or exception or something.
Again, this code is in widespread use and has been for several years now. I rather suspect one of two scenarios explains why it hasn't caused trouble in practice.
So while there's some theoretical risk to this code, there doesn't appear to be any practical risk.
_cache.Add() is protected by the lock, but I think the OP is asking about the initial "TryGetValue". While this is a basic double-check pattern, s/he is technically correct - there is a chance that it could return an invalid result and say it was successful, though I think the chance is pretty small. There might be a way to mitigate that with the "volatile" keyword, but I'm not 100% sure on that. Other implementations I know of that are considered more robust would probably end up calling "MethodCaller.GetMethod" every time, thus killing the whole idea.
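One way to mitigate the unsafe unlocked read without locking on every call is a copy-on-write scheme: readers only ever see a fully built dictionary, because writers build a copy under the lock and publish it with a single (atomic) reference assignment. This is a sketch of that alternative, not the framework's actual code, and the names are illustrative:

```csharp
using System;
using System.Collections.Generic;

// Copy-on-write variant: the dictionary a reader sees is never mutated.
// Writers copy it, add the new entry, and swap the reference atomically.
public static class CopyOnWriteCache
{
    // volatile ensures readers see the latest published reference.
    private static volatile Dictionary<string, string> _cache =
        new Dictionary<string, string>();
    private static readonly object _sync = new object();

    public static string Get(string key, Func<string, string> expensiveLookup)
    {
        string value;
        // Safe unlocked read: _cache always points at a complete dictionary.
        if (!_cache.TryGetValue(key, out value))
        {
            lock (_sync)
            {
                if (!_cache.TryGetValue(key, out value))
                {
                    value = expensiveLookup(key);
                    // Build a copy, then publish it with one reference write.
                    var copy = new Dictionary<string, string>(_cache);
                    copy.Add(key, value);
                    _cache = copy;
                }
            }
        }
        return value;
    }
}
```

The trade-off is an allocation and copy on every cache miss, which is usually acceptable when misses are rare and reads dominate.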
You are not the first person to point this out.
The problem, again, is that the solution is so incredibly expensive (always locking) that I've chosen to live with the minimal risk so far.
And .NET 4.0 offers better solutions. Even if we do always lock, at least they can be user-mode locks, but hopefully the new synchronized data types (like dictionary) will allow me to remove all this code and just let the data structure do the work (which is the way it should be imo).
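With .NET 4.0's ConcurrentDictionary, the entire hand-written double-check collapses into a single GetOrAdd call. A minimal sketch (the class and delegate names are hypothetical):

```csharp
using System;
using System.Collections.Concurrent;

// .NET 4.0 version: ConcurrentDictionary encapsulates the threading,
// so no explicit lock or double-check is needed in our code.
public static class Net4Cache
{
    private static readonly ConcurrentDictionary<string, string> _cache =
        new ConcurrentDictionary<string, string>();

    public static string Get(string key, Func<string, string> expensiveLookup)
    {
        // GetOrAdd returns the cached value, or runs the factory and caches
        // its result. Note: under contention the factory may run more than
        // once, though only one result is ever stored.
        return _cache.GetOrAdd(key, expensiveLookup);
    }
}
```

This is exactly the "let the data structure do the work" outcome described above.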
if (!_dict.TryGetValue(key, out value))
    _dict[key] = value;
Copyright (c) Marimer LLC