There is nothing wrong with passing an object to a class. However, you should keep in mind the OOP principle that says to program to an interface, not an implementation. In other words, program to interfaces or abstract classes rather than concrete classes when it is prudent to do so. Keep in mind this is a suggestion, not a rule. You can pass control objects to other classes, but you should not pass a control object to a business object. The business object should know nothing about the presentation layer.
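As a minimal sketch of that last point (all names here are hypothetical, not from the OP's code), the business object can depend on an abstraction it defines itself, rather than on a Windows Forms control:

```csharp
using System;

// Hypothetical abstraction: the business layer declares what it needs,
// without referencing any presentation-layer type.
public interface ICustomerView
{
    string CustomerName { get; }
}

public class Customer
{
    public string Name { get; private set; }

    // The business object accepts the interface; a Form, a unit-test stub,
    // or a batch importer can all supply the data equally well.
    public void Update(ICustomerView view)
    {
        Name = view.CustomerName;
    }
}
```

The form (or a test double) implements ICustomerView, so the business assembly never needs a reference to the UI assembly.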
I think we need to keep a pragmatic view when considering applying this principle as a general rule just because you should. I'm building a specific UI solely to interact with a specific business layer built on a specific framework. The reason for decoupling with interfaces would be so you could "swap" out the underlying layer. There is precisely zero chance of that IMO, so it's just a waste of effort.
I still use several interfaces, mind you, but it's for when I can't know the specifics in a particular area of functionality.
DansDreams:The reason for decoupling with interfaces would be so you could "swap" out the underlying layer. There is precisely zero chance of that IMO, so it's just a waste of effort.
ajj3085:
I wouldn't go that far; perhaps in your circumstances, but not for everyone. Especially if the OP has chosen to build separate assemblies which still must work together. I have this situation right now, and having interfaces allows me to test each assembly regardless of whether the other assembly is working properly.
I do have one case where this isn't true in the testing layer; it directly uses another business assembly, and it's been causing me nothing but problems. When I get more time, I plan on breaking that dependency. Changing one assembly shouldn't force me to change the testing assembly for a different assembly.
Andy
I was referring to the rather dogmatic adherence to this principle that the OP suggested with words like "always" and "never". I just think it's silly to add a bunch of interfaces to the mix in a case where a form is built specifically to edit Customer objects which inherit from BusinessBase. There's zero chance I'll use that form for anything else, so I've gained nothing from the interface.
I'm interested in a little further explanation of your scenario though. We are all using "separate assemblies which still must work together" by definition because we're building n-tiered applications. I'm presuming you're meaning multiple assemblies at the same tier, as in two assemblies of business objects. But what is the test scenario where an interface helps you? Are you saying you put "dummy" classes in one assembly to represent the classes that would really be found in the other in production so you can stay "local" for testing?
DansDreams:I was referring to the rather dogmatic adherence to this principle that the OP suggested with words like "always" and "never". I just think it's silly in a case where a form is built specifically to edit Customer objects which inherit from BusinessBase to add a bunch of interfaces to the mix. There's zero chance I'll use that form for anything else, so I've gained nothing from the interface.
I want to clarify that in actual practice I often build forms or controls that have common functionality for dealing with an unknown implementing type. But I didn't feel that was specifically what the OP was addressing.
DansDreams:I was referring to the rather dogmatic adherence to this principle that the OP suggested with words like "always" and "never". I just think it's silly in a case where a form is built specifically to edit Customer objects which inherit from BusinessBase to add a bunch of interfaces to the mix. There's zero chance I'll use that form for anything else, so I've gained nothing from the interface.
DansDreams:I'm interested in a little further explanation of your scenario though. We are all using "separate assemblies which still must work together" by definition because we're building n-tiered applications. I'm presuming you're meaning multiple assemblies at the same tier, as in two assemblies of business objects. But what is the test scenario where an interface helps you? Are you saying you put "dummy" classes in one assembly to represent the classes that would really be found in the other in production so you can stay "local" for testing?
Yes, I have multiple business assemblies. There are some cases where I need to know the properties of, say, a Contact, but managing contacts is handled by an external assembly. Rather than recode the object, I put an interface in a lower-level assembly; my Contact manager assembly has some objects which implement that interface, and my quoting assembly consumes that data through it.
I understand the argument that there are many times and applications where adhering to the OOP principles, such as coding to an interface not an implementation, is "overkill" and may seem excessive. However, it is my seasoned opinion that you are either disciplined enough to follow solid OOP practices or you are not. I don't care if you are programming a simple calculator that does only addition; there is no justifiable reason why you should ever do LESS than a standard job developing an app.
Among the many reasons for this opinion is the fact that everything changes, and while you can all say today that your apps are never going to need code reuse or to be extended or grow, s*** happens. Everything changes. In addition, if you don't develop a standard set of practices for yourself, how can you ever expect to have a framework that supports you? Furthermore, going back to look at an app that you completed six months ago using one approach, after spending the gap working on a new app with a completely different tack, will only lead to increased time reassessing what you did and why. Add in the case where many of us are battling with contractors (both on-shore and off-shore), and these kinds of variations and deviations are killers.
Fact is, the best way to make yourself a good OOP programmer is to simply make it a habit to do things the OOP way. Now, I am not saying that OOP is the be-all and end-all. I'm just saying that picking and choosing when to follow good, solid, standard practices because "it is easier" is just lazy - IMO.
The original post indicates a real confusion over how this principle is implemented, and that still seems to be a concern. Telling the author that he can follow the rule sometimes will certainly not help matters, as now you've introduced another variable into the decision-making process. These rules, like the decisions Rocky made with CSLA, are there for a reason - to share with the developer community the lessons learned by more experienced programmers. Fact is, this would not be a rule if those who walked this path before didn't feel it was important and significant. So rather than add the additional variable, just make it a habit of thinking this way and developing this way and, voila! Before you know it you will be doing it without consciously thinking about it.
The result will be better code. Whether you are making a database app for your CD collection or an e-commerce solution for a Fortune 500 company, IMO it doesn't matter - there is never an excuse for less than your best. But then, I have taught myself to be a highly disciplined programmer over my 25 years in the trenches. I understand where these rules, principles and guidelines come from because I've been bitten the same ways as those who came up with the tenets. I'm the guy that comments everything (!) because I've been the guy brought in on an app that had been under development for a year when the developer up and died! Try picking up the pieces when that happens!
Again, IMO, there is never an excuse to pick and choose when to apply good standards and practices. Anyone working in manufacturing or other quality-controlled industries knows that is just not allowed. This "rule" should be applied to programming as well. If you are going to claim to be an OOP programmer, then be one all the time - not just when you have the time.
As for the original post, the rule is intended to facilitate change, modularization and extensibility, and even if you don't think these things are necessary for your application, it is simply good practice to follow it anyway. The key to understanding this one is understanding what an interface is. The fact that an actual interface and an abstract class can be thought of in the same way makes it a bit more confusing, but remember, you can't instantiate either directly - that is the key. In both cases, some other class has to exist that either inherits from the base class or implements the interface. Either way, your goal is to make your dependent class use the interface rather than the derived class.
Perhaps the most prevalent example of this rule is the IEnumerable interface. This interface indicates that the concrete class implementing it allows enumeration via the GetEnumerator() method. Coupled with the IEnumerator interface that GetEnumerator() returns, this provides a very meaty and flexible way to interact with an incredibly wide spectrum of objects.
Let's say you want to create a method to generate a simple listing of objects. Your method would accept an IEnumerable as an argument and simply iterate over the elements, calling ToString() to dump the text for each item. You don't care if the argument is a List, Array, ControlCollection or some custom collection, because the actual type doesn't matter to your functionality. All that matters to you is that the object has the properties and methods that you need to complete the task. That is what an interface does for you.
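The listing method described above might be sketched like this (Dump is a hypothetical name, not an established API):

```csharp
using System;
using System.Collections;

public static class Listing
{
    // Works against the IEnumerable interface; the concrete
    // collection type is irrelevant to this method.
    public static void Dump(IEnumerable items)
    {
        foreach (object item in items)
            Console.WriteLine(item.ToString());
    }
}
```

Listing.Dump(new[] { 1, 2, 3 }) and Listing.Dump(new ArrayList { "a", "b" }) both work, because arrays and ArrayList each implement IEnumerable.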
Yes, it may seem redundant because you've already defined the properties and methods in your actual class, but the interface allows you to "decouple" - as they say - from that concrete class. And even in a simple case, this can be valuable, if for no other reason than to make a habit of programming this way. Why? Because it is good practice to do so.
One final note. Generics have introduced a new complexity that is best solved by interfaces. Try determining the type of a generic object in code (for comparative purposes) and you'll quickly find out how valuable it is that there are interfaces to check for. This is because each type that the generic class handles actually becomes its own type at run-time, so there is no checking for type == typeof(List&lt;&gt;), for instance. You either know the specific closed type that you want, like List&lt;int&gt;, or you need to be able to check against an interface (IList).
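A quick sketch of that run-time behavior:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        object items = new List<int>();
        Type t = items.GetType();

        Console.WriteLine(t == typeof(List<>));    // False: List<int> is a distinct closed type
        Console.WriteLine(t == typeof(List<int>)); // True only if you know the exact type argument
        Console.WriteLine(items is IList);         // True for any List<T>, via the interface
    }
}
```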
Make it a habit to follow good practices and it will simply become a matter of fact rather than another variable you must decide when developing your apps.
Of course, that's just my opinion.
The OOP principles are guidelines, not rules. In many cases it is not possible or practical to follow every principle in every situation. There are some principles you should follow every time, but "program to an interface, not an implementation" is not one of them. Not following an OOP principle does not mean you are doing less than your best, and it does not make your application any less OO. It takes experience to know when to follow the principles. The only advice I can really give the OP is to know all of the principles and use them; using them will help you understand when to use them.
SonOfPirate, thank you for that lengthy thoughtful response.
However, and this is certainly my OPINION, I have to say what you've described is exactly the level of dogmatic adherence I think is ridiculous.
Having an interface for every single business object results in "better code"? I should say not!
I'd like to humbly suggest you do a little googling on agile development principles and then spend some time pondering them. While I don't consider myself a full-blown agile or test-driven developer, I have found that keeping those principles in mind has made me a more disciplined developer.
And the interesting point is that one of the core agile principles is exactly 180 degrees opposite of what you describe - namely that you never do something "just because", but only when there's a specific need. I stop myself several times a week from going down the path of some complex solution when a simple one will work just fine. Yes, sometimes I have to refactor or redesign things, but the time saved still far outweighs what's spent doing that.
And that's the bottom line, not what some alleged OO guru says in a popular book. At the end of the day it's all about having the most functional and maintainable code.
So, I guess I'll consider myself "lazy", as I have to pass on your advice.
The following links contain the OOP design principles that you should know by heart. If you need more information, you can google it.
http://sis36.berkeley.edu/projects/streek/agile/oo-design-principles.html
http://architechie.blogspot.com/2005/10/oo-design-principles-quick-rundown.html
http://www.codeproject.com/gen/design/nfOORules.asp
The following link contains a very detailed explanation of the program to an interface, not an implementation principle. This is a very good article and I recommend reading it.
If only one class is ever going to implement the interface or abstract class, then in most cases it is not necessary to create it. However, if many classes are going to implement the interface or abstract class, you might want to consider creating it. Keep in mind this is an oversimplification of the issue and it does not work for every scenario. For example, the separated interface pattern uses interfaces to separate a package from its implementation. Many of these interfaces will be implemented by only one class, depending on the usage. Despite this, it can be a very useful pattern for decoupling assemblies.
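A rough sketch of the separated interface pattern in that spirit (the assembly names and members are hypothetical, chosen to echo the Contact example earlier in the thread):

```csharp
// Shared.dll - a low-level assembly that both sides reference.
public interface IContact
{
    string Name { get; }
    string Email { get; }
}

// Contacts.dll - the implementation assembly.
public class Contact : IContact
{
    public string Name { get; set; }
    public string Email { get; set; }
}

// Quoting.dll - the consumer references only Shared.dll.
public class QuoteBuilder
{
    public string AddressLine(IContact contact)
    {
        return contact.Name + " <" + contact.Email + ">";
    }
}
```

Even though only Contact implements IContact, the quoting assembly compiles against the interface alone, so changes inside the contacts assembly don't ripple into it.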
Copyright (c) Marimer LLC