Security and CSLA

Old forum URL: forums.lhotka.net/forums/t/6248.aspx


kyle.l.watson@gmail.com posted on Friday, January 23, 2009

I'd like confirmation that my understanding of CSLA and security isn't entirely off base.

I'd like to better understand the security aspect using Client---AppServer---DB.

In this scenario, I'd like to say we don't trust the client. The client will call the app server, and the app server will make the actual persistence calls to the db.

So, even if the client application is hacked, when the client tries to call the data portal to update this information it still shouldn't go through because there are checks on the appserver. Is this correct?

For example, if the client bypasses an event on the client (let's say that event checks authorization to view or change data), the app server should still enforce the correct rules even though this person abused their client application?

I apologize if this is completely elementary. Of course there are a couple of other things you must do, like use SSL between the client and application server and use strong-named assemblies. But this is not the responsibility of CSLA. CSLA uses its own authentication/authorization and validation, and that's what I want to make sure can be trusted.

ajj3085 replied on Sunday, January 25, 2009

If you can't trust the client, then your Csla objects shouldn't be going back and forth between the client and application server. You basically have two different applications: one which provides services and the other which consumes those services. I don't think Csla is meant to be used on both sides of your trust boundary.

For example, with an asp.net application, if you don't trust a user (client browser) to validate their inputs, you check on the server. Say you have javascript that checks the format of an SSN. Your code on the server should not blindly trust that the input is always in the correct format, because the client browser is outside your trust boundary. The server must also check.
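To make that concrete, here is a minimal sketch of the server-side check in C#. The SsnValidator helper and its regex are just illustrations (nothing CSLA-specific); the point is only that the server repeats the check the javascript already did:

    // Server-side re-check of an SSN the browser already validated.
    // The client-side check exists for user experience; this check
    // is the one you actually rely on.
    using System.Text.RegularExpressions;

    public static class SsnValidator   // hypothetical helper class
    {
        private static readonly Regex SsnFormat =
            new Regex(@"^\d{3}-\d{2}-\d{4}$", RegexOptions.Compiled);

        public static bool IsValid(string ssn)
        {
            // Never assume the client sent well-formed input.
            return ssn != null && SsnFormat.IsMatch(ssn);
        }
    }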

Does that make sense? Basically, Csla BOs should never cross your trust boundary... at least I believe that's what most here would tell you.

Oh, and finally, I don't know how much of this applies to Silverlight clients.  Csla for Silverlight seems to have been designed so that the BOs flow back and forth... I have no idea if this general concept applies there.

kyle.l.watson@gmail.com replied on Sunday, January 25, 2009

My understanding is still slightly clouded.

Using a Windows smart client, the user has access to all the dlls on their workstation and can do whatever they like on their end, bypassing validation there. I assumed this was fine, since when the business objects move to the app server, you could simply check for correct validation on the server itself.

I'm having a hard time understanding when to ever trust a client. It is easy to cage a couple of application servers and say 'Hey these are trusted. They are physically secured and the only way in or out is through the network.' But the clients, anyone can use them or subvert them - they are out in the wild so to speak.

In your example with the asp.net application, the server does need to check all values, the object graph, authorization, etc. before it persists information. But it needs to do the same thing for a Windows smart client or Silverlight app, or at least there needs to be a way to tell the mobile objects to recheck validation on the app server.

I could certainly be missing a concept - which is the reason for the inquiry. I'm just trying to wrap my mind around whether CSLA is the right framework for us.

RockfordLhotka replied on Sunday, January 25, 2009

Security is all about threat assessment. And budget.

The more threat vectors you consider important, the higher your cost to protect against them. At some point you hit the barrier where the cost of security outweighs the benefit of the system, so you either scrap the project or back off on the security requirements.

While that may sound trite, or self-evident, the reality is that people don't usually think through the threat assessment clearly (probability of vector being exploited, cost if it is exploited, cost of blocking vector, etc).

The question of whether a client is "trusted" is not black and white. Take Microsoft Money: they trust the client explicitly, since the valuable data is on the client. Or take your bank, who (one hopes) never trusts a client, not even a browser, even a little tiny bit. And yet they accept user input via the browser, and that input is used to make dramatic changes to where the user's money sits. So clearly they DO trust the client quite a lot after all, don't they :)

Architecturally you have two basic models to consider: client/server and service-oriented. CSLA .NET supports both models, though in different ways.

I've written quite a number of blog posts on this general topic - just click the service oriented link in the tag cloud at www.lhotka.net/weblog to filter to them.

The basic deal is this: if you "trust" the client, then you can use client/server (n-tier) models, including the CSLA .NET data portal.

If you don't "trust" the client, then you should really use SOA, which means you are now writing (at least) TWO applications. One that runs on the client, and one that runs on the server. In a pure sense, they must share nothing except the contract for data messages, and the messages themselves. SOA is expensive, but it is also the only well-known way to deal with scenarios where the client is untrusted, and yet isn't a dumb terminal (like the pre-AJAX web).
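To illustrate "share nothing except the contract", here is a rough WCF-style sketch. Every name below is hypothetical; the point is that the client and server are separate applications with only these message and service contracts in common:

    // The only artifacts shared across the trust boundary: a data
    // message and a service contract. Client and server each build
    // their own logic against these.
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class CustomerData
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        CustomerData GetCustomer(int id);

        [OperationContract]
        void UpdateCustomer(CustomerData customer);
    }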

You can also imagine a hybrid, depending on your definition of "trust". This model is enabled by CSLA .NET for Silverlight. In this model, you do share code on client and server, and use the data portal with its mobile object model. However, all objects are subject to a bit of pre-processing when they hit the server - and you'd typically re-run your business/validation rules at that point.

Why rerun them? They run on the client to give honest users (99.999%, one supposes) a great, interactive experience. But they are rerun on the server in case the other 0.001% manage to hack the Silverlight runtime, disassemble your code, modify the disassembled code, re-insert that code back into Silverlight, and fool Silverlight into ignoring the obvious crypto signature violation. Even if all that happened, rerunning the rules/processing on the server will almost certainly block anything bad.

I say it like this, because I come back to threat assessment. Sure, an untrusted client is a problem. And there are people in the world capable of hacking Silverlight, or cracking the .NET runtime to bypass how the assembly loader validates signed assemblies. But these people are pretty rare, and are mostly concerned with figuring out how to crack Left 4 Dead so they can post it on a warez site before their hacker buddies do.

In other words, you need to determine how likely it is for someone to actually care about your app enough to crack it, how likely such an attack is to succeed, and the cost of a scenario where it does happen. Then you need to decide whether that scenario is likely enough, and costly enough, to warrant the price tag of SOA.

If so, then you should absolutely be doing SOA. If it is fuzzy, then perhaps the hybrid scenario is better. If not, then go for the cheaper n-tier client/server model.

In the end, CSLA .NET supports all three of these scenarios. The data portal supports n-tier, and the Silverlight data portal supports the hybrid model (and the .NET data portal is extensible enough you could do the same in .NET). And Chapter 21 of the Expert 2008 Business Objects book is all about creating services for SOA, using CSLA .NET objects on the server. Also, CSLA .NET objects work great to create an edge application (the SOA term for an app that interacts with a user) that consumes services.

kyle.l.watson@gmail.com replied on Monday, January 26, 2009

RockfordLhotka:
You can also imagine a hybrid, depending on your definition of "trust". This model is enabled by CSLA .NET for Silverlight. In this model, you do share code on client and server, and use the data portal with its mobile object model. However, all objects are subject to a bit of pre-processing when they hit the server - and you'd typically re-run your business/validation rules at that point.

I think this is what I'm looking for, for internal applications. Our tentative plan is to use winforms for the next 1-2 years and hopefully migrate to Silverlight at some point after that.

RockfordLhotka:
The data portal supports n-tier, and the Silverlight data portal supports the hybrid model (and the .NET data portal is extensible enough you could do the same in .NET).

Do you mean we can get this hybrid setup going for regular winform apps as well? If so, is there any material available to help us understand the approach and concepts?

One more question: How would you define a client you trust? What are the requirements a client must meet to be considered inside the boundary of trust? For example, can you use certificate-based security to establish trust between the application server and the client in an n-tier architecture?


ajj3085 replied on Monday, January 26, 2009

kyle.l.watson@gmail.com:
One more question: How would you define a client you trust? What are the requirements a client must meet to be considered inside the boundary of trust?


Well, part of this answer depends on you and your evaluation.  Can you trust your internal employees?  What are the chances of one being 1) malevolent and 2) capable of "doing evil" by looking at your assemblies and defeating the code signing? 

A digital certificate doesn't mean you can trust the client; it's a technological method to make it harder to compromise the client. As Rocky said, you'll have to do your own risk analysis (and cost analysis) to determine if you're going to trust your employees to run your client without compromising it.

If it helps, I am developing an internal application as well. I've chosen to trust the client; it's only run on our internal network. The probability that an employee here is both capable of attacking the client AND wants to do so is low. Also, depending on how soon we detect something, our transaction log backups would allow us to go back to before the compromise. Since we do log backups every hour and full backups every night, the risk of data loss is also low (I believe).

HTH

kyle.l.watson@gmail.com replied on Monday, January 26, 2009

At this point in the game it sounds to me like we must trust the client application; otherwise we're stuck using SOA, which defeats much of the ease of use of n-tier client/server. The selling point of CSLA to us is not having to duplicate the business logic on the client.

I'm still interested in this hybrid approach.

I can definitely see your point ajj3085. I think in some cases, yes, you can choose to trust the client. But it is easier if you simply don't allow the opportunity for abuse where possible. If that is too costly, of course you can't implement it, but I still have to do my homework before I can make a reasonable conclusion.

On a complete side note, it seems to me that, if possible, we should standardize on or find a way to expose business rules to a client in an SOA architecture. I'll just dream about it.

RockfordLhotka replied on Monday, January 26, 2009

kyle.l.watson@gmail.com:

Do you mean we can get this hybrid setup going for regular winform apps as well? If so, is there any material available to help us understand the approach and concepts?



This concept was a new introduction for the CSLA .NET for Silverlight data portal, because I expect a fair number of Silverlight apps to "not trust the client" and yet want to keep the cost profile of n-tier and the interactivity of having business logic on the client.

The concept has not found its way into the .NET data portal yet, because the demand really hasn't been there for this hybrid model. Most .NET apps are just n-tier, plain and simple. And the reality is that the existing .NET data portal offers a pretty decent solution even without the fancier plumbing required by the SL model.

As there's no book for CSLA .NET for Silverlight as yet, I really don't have anywhere to point you for a lot of detail. My blog (www.lhotka.net/weblog - use the tag cloud to filter for Silverlight) has one or two posts about the design of the SL data portal.

The gist of it is fairly simple though. When the object graph deserializes on the server (coming from the client), you can have the data portal instantiate an "observer" object that gets to examine and interact with the object before it continues through the normal data portal processing. This observer object can force a CheckRules() call to rerun all business/validation logic, do extra authz processing, etc. Then it can block all processing, or allow the processing to continue, or even do other things.
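To give a feel for the idea, here is a concept sketch only - the IServerObserver interface and everything else below is invented for illustration and is not the actual CSLA .NET for Silverlight API:

    // Hypothetical shape of the server-side "observer". The real
    // Silverlight data portal hook differs; treat this as pseudocode.
    public interface IServerObserver
    {
        // Called after the object graph deserializes on the server,
        // before normal data portal processing continues.
        // Returning false blocks the operation.
        bool Examine(object businessObject);
    }

    public class RevalidatingObserver : IServerObserver
    {
        public bool Examine(object businessObject)
        {
            // A real implementation would force a CheckRules() call
            // here, run extra authorization checks, etc., and return
            // false if anything fails.
            return RunRulesAndAuthz(businessObject);
        }

        private bool RunRulesAndAuthz(object businessObject)
        {
            // ... rule/authz logic elided ...
            return true;
        }
    }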

In the standard .NET data portal, though, really all you'd need to do is put a call to ValidationRules.CheckRules() at the top of every DataPortal_Insert/Update/DeleteSelf method, and you'd achieve pretty much the same thing - without anything fancy at all. This is what I mean by saying that the existing data portal already provides a pretty decent solution - and people do exactly this when they aren't sure if the client's code caught everything.
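In code, that looks something like this (CSLA 3.x style - ValidationRules.CheckRules(), IsValid, ValidationException and the DataPortal_Update() override are the real CSLA members being described, while the Project class and its persistence details are placeholders):

    using System;
    using Csla;
    using Csla.Validation;

    [Serializable]
    public class Project : BusinessBase<Project>
    {
        // ... properties, business rules and factory methods elided ...

        protected override void DataPortal_Update()
        {
            // Don't trust that the client ran the rules - rerun them
            // on the server before touching the database.
            ValidationRules.CheckRules();
            if (!IsValid)
                throw new ValidationException(
                    "Object failed server-side validation; update rejected.");

            // normal update/persistence code follows
        }
    }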

kyle.l.watson@gmail.com:

One more question: How would you define a client you trust? What are the requirements a client must meet to be considered inside the boundary of trust? For example, can you use certificate-based security to establish trust between the application server and the client in an n-tier architecture?



The definition of "trust" is nebulous. I can't give you a firm answer, because it depends on the application, the types of data involved, the type of organization involved, etc. You need to do a threat assessment for the app - determine your threats, how likely they are, the cost of a breach, etc. You can then decide whether your threat analysis would have you trust or not trust the client.

Andy's post is good, as he has a concrete example of their decision process. And I think that most internal apps go down that exact route, and end up trusting the client. In that scenario, a malicious user cracking the system is unlikely, and odds are that their actions would be criminal - so even if it occurred and there was some damage/cost - you'd get to send the employee to prison :)  Or at least fire them, with one of those cool police escorts to the door.

External apps are a different story. In those cases I think the focus shifts more to determination of whether you are a target (does anyone care), and the cost of a breach. Assuming those are both high, then you talk mitigation strategy - which might be to use log files, reports, expert systems or give the user a dumb terminal.

Credit card processing is a good example here. They have HIGHLY untrusted clients - all those little scanners in every store in the world. Their mitigation strategy is multi-pronged, but a big part of it is the use of expert systems and reporting to detect fraud/misuse and block it. In other words, they trust the client more than they "should", because the alternative is a largely useless client. So they make the client trusted and simple, and catch/fix issues on the backend.

Video games are also a good example. They are a high value target for hackers, and have developed very sophisticated schemes for protection - including the use of root kits. Some manufacturers basically spread their own viruses to block piracy. Nasty, but true. Others require additional services to be installed (PunkBuster, etc.) on the client to "watch over" the app. None are perfect, all are expensive, and most tend to piss off the customer something fierce. But games need a rich client in a VERY untrusted scenario...

kyle.l.watson@gmail.com replied on Monday, January 26, 2009

In addition to many other threads on the same topic on this forum, I think this thread adds another layer of clarification for me. And you are certainly right - in many scenarios, with the right logging in place, you can at least verify who is being malicious, etc.

Calling ValidationRules.CheckRules on the app server may be that extra check that lets someone sleep better.

I understand more clearly now the assessment that needs to take place, and we will definitely drill into that more before we start coding a solution. It seems likely that CSLA will work quite nicely for us - I do not get the final say, though my opinion carries some weight.

I have enjoyed working with the ProjectTracker solution, so from that standpoint CSLA looks very promising.

Thanks for the responses.

Copyright (c) Marimer LLC