I have an object model where I not only have a BLB with many BB children, but also a few BB and RB objects, containing additional information, that hang directly off the BLB. Some of these have references back to my parent...
We are running the app in a 3-tier configuration via WCF, and I am seeing the memory footprint continue to grow with each DataPortal_Fetch. I am pretty sure that I need to explicitly remove these references on the server after sending the data back to the client, but I do not know where to do this.
It seems like I need something like an OnSerialized() method where I can clean up... is there a method that I can intercept after the object is sent back to the client?
Sounds like your object model is incorrect. You shouldn't have BB or RB objects directly off your BLB; it's not designed to support that.
I would redesign so that a Command-based object returns the BLB and separate BBs and RBs, or I'd create a new root BB which contains all the necessary information. It depends on whether these other BB objects need to be saved along with the list each and every time, or whether they are independent.
While you may be correct, I am not really up to debating the merits of the object model.
Regardless of its "correctness" or not, it is really not a feasible option to rework 18 months of code at this point.
Soooo, is there any way to accomplish what I asked originally: to hook into the processing after the collection has been delivered to the client?
If you're talking about disposing of the objects on the server side, AFAIK there is no method you can hook into after the server-side DP is done processing. I'm not even sure how you would build one, since that would essentially expose the innards of the DataPortal...
But even if there were, I'm not sure it would be terribly helpful for you. Under .NET, you don't have a way to "explicitly remove" the references in such a way that the objects are immediately destroyed. There is no variable reference you can set to null to force it. And even if all your business objects implemented IDisposable, calling Dispose() only releases unmanaged resources - the managed memory is still reclaimed by the GC, and there's no guarantee of when that will happen. And forcing a garbage collection after every DP call will kill the performance of your server-side DataPortal.
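For reference, the "force a collection" approach being warned against looks like this - shown only to illustrate why it is costly, not as a recommendation:

```csharp
using System;

public static class AfterFetchCleanup
{
    public static void ForceCollect()
    {
        // Forces a full, blocking collection of all generations, then waits
        // for finalizers to run and collects again to reclaim finalized objects.
        // Doing this after every DataPortal call suspends managed threads in
        // the process each time, which is why it kills server throughput.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
    }
}
```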
Is the growing memory footprint actually an issue? Because I would assume that, at some point, .NET will invoke the GC and clear out your objects. The DataPortal itself doesn't hang onto a reference to the objects it creates, so there shouldn't be a dangling reference there. Circular references should not be an issue either - the GC is designed to handle them. And while WCF may be implementing some sort of caching of the objects it passes back and forth (and I don't think it does), eventually that cache would expire.
Good info, Scott.
I am not trying to force "when" the objects are destroyed; I am merely trying to make sure that they will be destroyed at some point.
The growing memory has been an issue on our test server in dev and test... but we have a VERY beefy 64-bit production server, and it has not been a problem there... yet. My fear is that over time, if the objects are not being destroyed, we will eventually have the same problem.
Your comment about the DataPortal not keeping a reference reminded me of a discussion I had about that. It was my opinion that if the root object was sent via the DataPortal, and the result was assigned to a local variable that goes out of scope as soon as the method completes, then it should not MATTER what the object does internally. If there is no root reference to it, then the GC should be able to collect it.
So if I have a BLB that has custom events attached between it and its children, and even if I have a bad object model with references to other objects from my BLB, and maybe even references from them back to my BLB... IT SHOULDN'T MATTER! Once that result object goes out of scope, the root reference to ALL OF THAT goes away. That should make it available for eventual garbage collection. Am I missing something?
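To illustrate the point, here is a minimal sketch of that kind of graph - Parent and Child are hypothetical stand-ins for a BLB and a child, not CSLA types:

```csharp
using System;

// Hypothetical stand-ins for a root object and a child that holds a
// back-reference, with an event wired up inside the graph.
public class Parent
{
    public Child Child { get; set; }
    public event EventHandler ChildChanged;
    public void Raise() => ChildChanged?.Invoke(this, EventArgs.Empty);
}

public class Child
{
    public Parent Parent { get; set; }  // circular reference back to the root
}

public static class Demo
{
    public static void FetchAndForget()
    {
        var root = new Parent();
        root.Child = new Child { Parent = root };          // cycle: root <-> child
        root.ChildChanged += (s, e) => { /* handler */ };  // event within the graph

        // When this method returns, 'root' goes out of scope. The GC traces
        // reachability from live roots rather than counting references, so
        // the whole cycle (root, child, delegate) becomes collectable at the
        // next collection, regardless of the internal back-references.
    }
}
```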
But I am running ANTS Memory Profiler, and it is telling me that there are objects still dangling after the call completes. I guess I just don't understand how that would be...
One note... our WCF service has a service behavior attribute of

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]

so a new service instance is created for every call - there is no reuse of service objects across calls.
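For context, that attribute sits on the service class roughly like this - the contract and class names here are made up for illustration; only the attribute itself is from the post:

```csharp
using System.ServiceModel;

// Hypothetical service contract for a data portal endpoint.
[ServiceContract]
public interface IDataPortalService
{
    [OperationContract]
    byte[] Fetch(byte[] criteria);
}

// PerCall: WCF constructs a new service instance for every operation
// call and releases that instance when the call completes, so no service
// state survives from one call to the next.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class DataPortalService : IDataPortalService
{
    public byte[] Fetch(byte[] criteria)
    {
        // ... invoke the server-side DataPortal and return the serialized result
        return criteria;
    }
}
```

Note that PerCall only governs the service instance itself; any references the instance hands off to static fields or long-lived caches would still outlive the call.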
From a GC perspective, no - having a "bad" object model won't matter. So long as all the references go out of scope, the GC will pick them all up. Andy's comment relates to trying to use BB- or RB-based objects that hang directly off your BLB. As he mentioned, CSLA is not designed for that scenario, so much of the UI integration, design-time behaviors, and other goodies you get for free on your CSLA objects may not work with those. So while there's technically nothing wrong with the design from a memory standpoint, it does force you to do more work, depending on what you're expecting from those objects.
If you haven't, you might want to spend some time reading up on how .NET manages objects. If you came from the COM (e.g. VB6) world, it's fairly different. There isn't any reference counting, and reclamation of unused objects in .NET's GC system is non-deterministic - meaning that when you're done with an object, you can be assured that it will be destroyed, but not when it will be destroyed. Since there is no reference counting, circular references are easier to deal with: the GC traces reachability from live roots, so you don't have to "start in the right place" in order to make sure your object graph goes away.
You do need to be careful with events, though. Event handlers are just object references, but they can hang around longer than you might think. For example, closing a form doesn't necessarily release every reference to the form object - any handlers it registered on longer-lived objects are still live, and thus so is the form and anything it references. This particular case doesn't apply in a server-side scenario, and if your events are entirely contained within your object graph, then you're still OK. But it's certainly something to pay attention to.
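A common client-side shape of that event pitfall, sketched with illustrative names (Publisher standing in for any long-lived object, SubscriberForm for a form):

```csharp
using System;

public class Publisher
{
    public event EventHandler DataChanged;
    public void Raise() => DataChanged?.Invoke(this, EventArgs.Empty);
}

public class SubscriberForm
{
    public void Attach(Publisher pub)
    {
        // The delegate stored in pub.DataChanged holds a reference to this
        // form, so the form stays reachable for as long as 'pub' is alive -
        // even after the form is "closed".
        pub.DataChanged += OnDataChanged;
    }

    public void Detach(Publisher pub)
    {
        // Unsubscribing removes that reference and lets the form be collected.
        pub.DataChanged -= OnDataChanged;
    }

    private void OnDataChanged(object sender, EventArgs e) { /* update UI */ }
}
```

When publisher and subscriber live and die inside the same object graph - as with a BLB and its children - this is harmless, which is the "entirely contained" case above.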
It wouldn't necessarily surprise me that a memory profiler would still show the objects as being around. I don't know Red Gate's ANTS product well, but if it's a strict memory profiler, it may have no concept of de-referenced .NET objects - it's just monitoring the heap. Since the memory for an object isn't reclaimed until a GC cycle runs, those objects could still show as allocated for quite some time.
If you're running into memory issues on your test server, there are things you can do to tweak the configuration to relieve some of the memory pressures. What are you hosting your WCF service in?
Keep in mind that the server and workstation (client) GCs work differently. It has been a few years since I dug into those behaviors, but they are optimized for different things.
The workstation GC is optimized for responsiveness, so it does incremental work, trying to avoid ever doing so much at once that the user sees a visible pause.
The server GC is optimized for throughput, and so (iirc) it defers work until it is forced to handle it, and then does everything at once - potentially blocking all other work for a period of time.
So it is quite possible to see server memory climb quite high before the GC triggers and does its job - again, assuming I'm remembering correctly, and assuming these behaviors haven't changed in the meantime.
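For what it's worth, which GC mode a self-hosted process uses is controlled in its config file (IIS-hosted services on multi-processor machines use the server GC by default); this is a standard .NET Framework runtime setting, shown here as a fragment:

```xml
<!-- app.config / web.config of the host process -->
<configuration>
  <runtime>
    <!-- true = server (throughput-oriented) GC; false = workstation GC -->
    <gcServer enabled="true" />
  </runtime>
</configuration>
```

Switching a memory-pressured test box between modes can make the "memory climbs until a big collection" behavior described above much more or less visible.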
Copyright (c) Marimer LLC