Multiple "views" of the same data


Old forum URL: forums.lhotka.net/forums/t/1901.aspx


SonOfPirate posted on Tuesday, December 05, 2006

I have a situation where I have a set of business objects, we'll go with Projects for consistency.  In code, we have created a Project BO and a read-only ProjectInfo BO - just as in the book.  Then we have a ProjectCollection (read/write using Project - allows us to add, modify & remove via DataGrids, etc.) and a ProjectList (read-only using ProjectInfo for listing).  We've already begun receiving requests to add additional "views" of this data.  For instance...

We need to be able to generate a list/collection of Open Projects, Closed Projects, Deleted Projects and Archived Projects - just for starters.  Implementing these is relatively simple as they are just filters based on the current state and/or age of the project.

In addition, we are looking to allow users the ability to define their own "custom" filters to apply to the list/collection displayed.  These filters could then be saved as part of their personalization of the interface.

My question is what approach to use that will satisfy our need for multiple "views" and allowing the user to create their own views that are dynamically recreated at run-time.

Our first thought was to create a separate OpenProjectList class, etc. but that hardly seems practical as the number of "views" can increase dramatically.  AND, we wouldn't be able to support dynamic filtering this way.

So, it occurred to me that perhaps we are dealing with coding the basic read-only "view" - ProjectList - as these filters would only apply to read-only lists anyway and treating all of these variations as "Views" much like the System.Data.DataTable does.  Then, we would define a mechanism for defining the filtering applied by the user (there have been several other posts along these lines which were VERY helpful in developing this part).  So, in the end, we would instantiate our ProjectList class, create a new View by applying our filtering logic and bind our UI to the view rather than the ProjectList class.

Is this wise though?  It would mean that our ProjectList class would do no filtering of its own and would have to be populated with every record in the table whether deleted, closed, open or otherwise so that the filtering could be applied in memory, dynamically.  As the application moves forward, this could mean hundreds of thousands of records when all the user may want is a view of projects in an Open state that have been assigned to him/her - and may only be 10-20 items out of several hundred thousand.

Thoughts/suggestions?

 

ajj3085 replied on Tuesday, December 05, 2006

I would have the ProjectList accept different criteria.  You may even want to expose some kind of criteria bo that ProjectList would understand.

This has popped up on the forum before.  For starters, check out this thread.

HTH
Andy

david.wendelken replied on Tuesday, December 05, 2006

SonOfPirate:

I have a situation where I have a set of business objects, we'll go with Projects for consistency.  In code, we have created a Project BO and a read-only ProjectInfo BO - just as in the book.  Then we have a ProjectCollection (read/write using Project - allows us to add, modify & remove via DataGrids, etc.) and a ProjectList (read-only using ProjectInfo for listing).  We've already begun receiving requests to add additional "views" of this data.  For instance...

Don't want to hijack this thread, but I use Project and ProjectList, ProjectInfo and ProjectInfoList as naming conventions.  That way, it's dirt obvious which object goes in which collection and whether it's readonly or not.  hth.

 

SonOfPirate replied on Tuesday, December 05, 2006

Andy, yea, I took part in that thread.  Wasn't connecting the two since my mind was on the whole "view" concept.  But, I think you pointed me in the right direction that the so-called "filter" is nothing but a set of criteria passed to the same BO (collection).  This will also allow us to implement static methods for the canned "views" - which was one of the things on my wish-list.  So, for instance, we would have:

ProjectInfoList.GetProjects(); // returns only non-deleted items.
ProjectInfoList.GetAllProjects(); // returns all projects, regardless of their deleted flag.
ProjectInfoList.GetOpenProjects(); // returns all projects in an 'open' state.
ProjectInfoList.GetClosedProjects(); // returns all projects in a 'closed' state.
etc.

And these can simply make use of a pre-defined criteria object and use the same data access methods to retrieve the data from the database.  And we only pull the data we actually want out of the database - which I like.
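
Something like this is what I have in mind - just a rough sketch assuming the CSLA 2.x ReadOnlyListBase / nested Criteria pattern from the book (the IncludeDeleted/State properties are made up, not our actual code):

using System;
using Csla;

[Serializable]
public class ProjectInfoList : ReadOnlyListBase<ProjectInfoList, ProjectInfo>
{
    [Serializable]
    public class Criteria
    {
        // null means "don't filter on this field"
        public bool? IncludeDeleted;
        public string State;

        public Criteria(bool? includeDeleted, string state)
        {
            IncludeDeleted = includeDeleted;
            State = state;
        }
    }

    // canned "views" are just factory methods that pre-fill the criteria
    public static ProjectInfoList GetProjects()
    { return DataPortal.Fetch<ProjectInfoList>(new Criteria(false, null)); }

    public static ProjectInfoList GetOpenProjects()
    { return DataPortal.Fetch<ProjectInfoList>(new Criteria(false, "Open")); }

    // user-defined "views" pass in whatever criteria the UI built
    public static ProjectInfoList GetProjects(Criteria criteria)
    { return DataPortal.Fetch<ProjectInfoList>(criteria); }

    private void DataPortal_Fetch(Criteria criteria)
    {
        // one data access routine for every view; only the criteria differ
    }
}

Every factory ends up in the same DataPortal_Fetch, so the "views" differ only in the criteria they hand it.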

I'll have to go back and revisit that thread - and the one referenced from it - and retrace our steps to see how we implemented this before so we can make use of it here.  I believe we made use of a special SearchCommand object and custom CommandBuilder that used the criteria object and created the SQL for the SearchCommand to execute in a manner similar to what you described in the other topic.

I guess there would be one downside to this approach. Even the "canned views" would be treated as dynamic queries when they really don't have to be.  What I mean is, let's say we have a stored procedure, GetProjects, that is used by default to retrieve the records.  This is great and we get all the benefits of using a sproc for data access.  But, in order to implement the dynamic search functionality, we have to go with dynamic-SQL instead of a sproc for our query.  To create the Open Projects "view" using the GetOpenProjects() factory method, we would use the same dynamic facility that is used for user-defined searches rather than simply calling a GetOpenProjects stored procedure.  See my point?

Would be nice if there was some happy middle-ground that would allow us to do both.  Any ideas?

 

(BTW - I use the same conventions you described, David.  I just got sloppy in typing my original post.  I agree that it becomes too confusing otherwise.)

 

Curelom replied on Tuesday, December 05, 2006

I create a new sproc for each of the "views" so

ProjectInfoList.GetProjects();

would use the GetProjects sproc and

ProjectInfoList.GetAllProjects(); 

would use the GetAllProjects sproc.

While you would have separate queries, you wouldn't have to mess with dynamic sql.

Or am I missing your point? [:$]

 

SonOfPirate replied on Tuesday, December 05, 2006

Trouble here is integrating this into the data portal mechanism.  At some point there needs to be a way to distinguish which sproc to call based on which method is used to instantiate the collection.

 

Norty replied on Tuesday, December 05, 2006

Your table has a set number of fields.  The filter can be any combination of these fields.

I create a search Proc that has 1 parameter for each possible search field (allowing nulls).  Your DP_Fetch only passes in the parameters that it cares about.  The query sorts out the nulls from the values passed in.  If no parameters are passed in, then ALL data is returned, else data is filtered by all passed in parameters.

Hope this helps

Des

 

create procedure dbo.sprProjectSearch
(
   @ProjectID uniqueidentifier = null
 , @ProjectName varchar (50) = null
 , @ResourceID int = null
) as


 select [ProjectID], [ProjectName], [ResourceID] -- , etc, etc
 from tblProject

 where
      CASE
          WHEN @ProjectID is null THEN 1
          WHEN [ProjectID] = @ProjectID THEN 1
          ELSE 0
      END = 1

     and CASE
          WHEN @ProjectName is null THEN 1
          WHEN [ProjectName] = @ProjectName THEN 1
          -- WHEN [ProjectName] like '%'+@ProjectName+'%' THEN 1 -- For partial matches
          ELSE 0
      END = 1

     and CASE
          WHEN @ResourceID is null THEN 1
          WHEN [ResourceID] = @ResourceID THEN 1
          ELSE 0
      END = 1

GO
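
On the .NET side, DP_Fetch then only adds the parameters the caller actually filled in and lets the proc's null defaults handle the rest.  A rough sketch (cm is the SqlCommand already built in DataPortal_Fetch; the criteria fields are whatever your criteria object exposes):

cm.CommandType = CommandType.StoredProcedure;
cm.CommandText = "sprProjectSearch";

// only pass what we care about; omitted parameters stay at their null default
if (criteria.ProjectName != null)
    cm.Parameters.AddWithValue("@ProjectName", criteria.ProjectName);
if (criteria.ResourceId.HasValue)
    cm.Parameters.AddWithValue("@ResourceID", criteria.ResourceId.Value);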

JoeFallon1 replied on Wednesday, December 06, 2006

I use a special BO to trap all the fields in the Search form that the user selects.

If certain things are not selected then they are omitted from the SQL.

This special BO outputs a filter (the WHERE clause without the WHERE) and then this filter is used in a dynamic SQL statement. This is much easier than coding dozens of SPs. This is one of the places that dynamic SQL shines. Naturally you have to validate the input prior to generating the filter to avoid SQL Injection.
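
A variation on the same idea that side-steps most of the injection risk is to have the filter BO emit parameter placeholders instead of literal values, so the user's text is never concatenated into the SQL string.  Rough sketch (class, property and column names are invented):

using System.Collections.Generic;
using System.Data.SqlClient;

public class ProjectFilter
{
    public string NameContains;
    public string State;

    // returns e.g. "ProjectName LIKE @p0 AND ProjectStatus = @p1"
    // and fills the matching SqlParameter list
    public string ToWhereClause(IList<SqlParameter> parms)
    {
        List<string> terms = new List<string>();
        if (!string.IsNullOrEmpty(NameContains))
        {
            int i = parms.Count;
            parms.Add(new SqlParameter("@p" + i, "%" + NameContains + "%"));
            terms.Add("ProjectName LIKE @p" + i);
        }
        if (!string.IsNullOrEmpty(State))
        {
            int i = parms.Count;
            parms.Add(new SqlParameter("@p" + i, State));
            terms.Add("ProjectStatus = @p" + i);
        }
        return terms.Count == 0 ? "1 = 1" : string.Join(" AND ", terms.ToArray());
    }
}

The column names are fixed in code and the values travel as parameters, so there is nothing for the user to inject.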

Joe

 

SonOfPirate replied on Thursday, December 07, 2006

I am not opposed to coding multiple sprocs if necessary in order to take advantage of their benefits.  I agree that this is pointless (and wasteful) - not to mention impossible - for user-defined queries.  But, there are a handful of queries that I know going in are needed.  We will always need to have the capability to list "Open" Projects, for instance.

A good example of what I mean can be found in Outlook 2003+ where you have your "Favorite Folders" view.  By default, this list includes not only your Inbox, Sent Items, etc. but the special folders/views "Unread Mail" and "For Follow Up".  These are simply filtered views of the same data but they are pre-defined views - as opposed to user-defined.

So, I am looking for the best way to implement a combination of pre-defined and user-defined views on the same data.  Here are the possible approaches as I see them:

  1. Create a separate BO for each pre-defined view (e.g. ProjectList, OpenProjectList, ClosedProjectList, etc.) and a special BO to support dynamic searching/filtering by the user.
  2. Create a single BO that always accepts the dynamically-defined search criteria object.  When null, the object returns the full list; otherwise, the specific filter is applied.  This means one BO, one sproc and a ton of dynamic code.  (This can be done by either generating the SQL in code or passing the applicable parameters to a sproc as described in the previous posts.)
  3. Create a single BO that exposes View child objects which can be used to sort, filter, etc. the data to match the specific criteria.  Unless the View is instantiated directly, this would mean that the BO retrieves all of the data from the data source so that the filter has a source to work on.  This would be consistent with the way the DataTable and DataSource classes are designed (by MS).

To recap, what we are trying to achieve is:

  1. A list of all projects in the system (that have not been flagged as deleted).
  2. A list of all projects in an "open" state.
  3. A list of all projects in a "closed" state.
  4. A list of all projects in an "open" state assigned to "Bob Smith".

The first three represent pre-defined "views" while the last is an example of something that a user may define at run-time.

I'm still not convinced which approach makes better sense, offers the scalability we need and remains consistent with CSLA and expected behavior of such objects.

 

 

 

Brian Criswell replied on Thursday, December 07, 2006

Norty:

Your table has a set number of fields.  The filter can be any combination of these fields.

I create a search Proc that has 1 parameter for each possible search field (allowing nulls).  Your DP_Fetch only passes in the parameters that it cares about.  The query sorts out the nulls from the values passed in.  If no parameters are passed in, then ALL data is returned, else data is filtered by all passed in parameters.

Hope this helps

Des

 

create procedure dbo.sprProjectSearch
(
   @ProjectID uniqueidentifier = null
 , @ProjectName varchar (50) = null
 , @ResourceID int = null
) as


 select [ProjectID], [ProjectName], [ResourceID] -- , etc, etc
 from tblProject

 where
      CASE
          WHEN @ProjectID is null THEN 1
          WHEN [ProjectID] = @ProjectID THEN 1
          ELSE 0
      END = 1

     and CASE
          WHEN @ProjectName is null THEN 1
          WHEN [ProjectName] = @ProjectName THEN 1
          -- WHEN [ProjectName] like '%'+@ProjectName+'%' THEN 1 -- For partial matches
          ELSE 0
      END = 1

     and CASE
          WHEN @ResourceID is null THEN 1
          WHEN [ResourceID] = @ResourceID THEN 1
          ELSE 0
      END = 1

GO


This is the tactic I use, but the following where clause might be more efficient by avoiding the searched case statements (depends on what the query optimizer does to it).

create procedure dbo.sprProjectSearch
(
   @ProjectID uniqueidentifier = null
 , @ProjectName varchar (50) = null
 , @ResourceID int = null
) as

select [ProjectID], [ProjectName], [ResourceID] -- , etc, etc
from tblProject
where

    isnull (@ProjectID, ProjectID) = ProjectID
    and ProjectName like '%' + isnull (@ProjectName, ProjectName) + '%'
    and isnull (@ResourceID, ResourceID) = ResourceID
    -- note: rows where one of these columns is itself null will not match

GO

SonOfPirate replied on Thursday, December 07, 2006

Not to step on anyone's toes, but I don't need help on implementing dynamic searches.  And for anyone that does, there have been at least two other threads that have covered the topic in much greater detail.  So, in the hopes of getting the thread back on topic...

My concern/question has to do with the best approach to use when implementing a combination of pre-defined and user-defined views on the same list of data.  Obviously we need one form or another of the search mechanism described for the user-defined view(s).  However, the question still remains whether this is the best approach for views that we know we will need up-front - when we can hard-code them into our app and potentially realize better performance, security, etc.

Again, let's use the following 4 views as examples:

  1. Pre-defined: A list of all projects in the system (that have not been flagged as deleted).
  2. Pre-defined: A list of all projects in an "open" state.
  3. Pre-defined: A list of all projects in a "closed" state.
  4. User-defined: A list of all projects in an "open" state assigned to "Bob Smith".

Do we:

  1. Create a separate BO for each pre-defined view (e.g. ProjectList, OpenProjectList, ClosedProjectList, etc.) and a special BO to support dynamic searching/filtering by the user.
  2. Create a single BO that always accepts the dynamically-defined search criteria object.  When null, the object returns the full list; otherwise, the specific filter is applied.  This means one BO, one sproc and a ton of dynamic code.  (This can be done by either generating the SQL in code or passing the applicable parameters to a sproc as described in the previous posts.)
  3. Create a single BO that exposes View child objects which can be used to sort, filter, etc. the data to match the specific criteria.  Unless the View is instantiated directly, this would mean that the BO retrieves all of the data from the data source so that the filter has a source to work on.  This would be consistent with the way the DataTable and DataSource classes are designed (by MS).

or is there another option not mentioned yet?  What are the pros and cons of each of these approaches and why use one over the other?

I'm not so much concerned about the implementation as I am with understanding why.

Thx.

 

 

Brian Criswell replied on Thursday, December 07, 2006

It sounds like one list to me.  Same list, same data, different options of how much of it to pull down.  So I would personally just make several factory methods on a single list.

To talk about it in the manner that is all the rage:
Responsibility: Display a list of projects
Behaviour: Display different contents based on what the user wants

Whether the user wants open, closed or polka-dotted projects, the fact that you are displaying a list of projects does not change.  Unless you do different things with open vs. closed projects?

SonOfPirate replied on Thursday, December 07, 2006

I agree completely.  That is why I posted the original message - cuz I didn't think creating separate objects for each "view" was the way to go.  But, given that we will have a single ProjectList BO, are we treating each "view" as an actual view pulled off the original BO, implementing "dynamic search" for each static (factory) method or somehow filtering the data in the data portal method(s)?

I hesitate to go with a total dynamic search approach because of performance concerns.  If we are hard-coding factory methods, then we can hard-code what is necessary to implement data access for those methods.  I believe that we should be able to use the standard Criteria object for this purpose.  It would expose a State property, for instance, that when set would use the GetProjectsByState sproc accepting the state value as an argument.
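
In rough terms, the server side would just map the criteria onto a sproc - something like this (cm is the SqlCommand in DataPortal_Fetch; the sproc and property names are only assumptions):

// pick a pre-built sproc when the criteria matches a canned view
cm.CommandType = CommandType.StoredProcedure;
if (criteria.State != null)
{
    // canned view: a real, hard-coded sproc with one parameter
    cm.CommandText = "GetProjectsByState";
    cm.Parameters.AddWithValue("@State", criteria.State);
}
else
{
    // default list; a user-defined search would branch off to the dynamic
    // search facility here instead
    cm.CommandText = "GetProjects";
}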

That still leaves the question over using true Views to accomplish this.  I just spent a bit of time going through the CslaDataSource set of classes and am intrigued by the model used.  Under this approach, our ProjectList class would serve as a pass-through placeholder for the various views on the actual data.  Just as with the CslaDataSource, we could create multiple ProjectListView classes and add them to the internal list.  Each ProjectListView would be our actual BO with the logic necessary to access the back-end data store.  One of our views would support a dynamic definition for user-defined views.  This seems appealing because its extensible, allows us to use BOs to represent each view, etc. - and we don't have to use dynamic queries to implement our pre-defined views.

Thoughts?

 

SonOfPirate replied on Friday, December 08, 2006

Obviously the object (and data) model presented by the ObjectDataSource and emulated in the CslaDataSource classes is new but it certainly provides for an interesting approach to this type of scenario.

I can see having our "core" business collection, ProjectInfoList, expose a static DefaultView property that will return a bindable, data-driven BO containing all of our ProjectInfo objects.  We could add other static properties or methods, such as GetOpenProjects() or OpenProjectView (whatever name works) that will create and return other, pre-defined views implemented in a similar way to how the ObjectDataSource/CslaDataSource classes are defined.

This seems like a pretty powerful model but I am concerned that this may be more complicated to implement than it is worth.

Thoughts?

xAvailx replied on Monday, December 11, 2006

Here is some pseudo code that may be of help:



ProjectFilter myFilter = new ProjectFilter();
ProjectSort mySortOptions = new ProjectSort();

myFilter.ProjectStatus = "Open";
myFilter.ProjectAssigned = "Bob Smith";
mySortOptions.SortByName = SortOrder.Desc;

ProjectList myList = ProjectList.GetProjectList(myFilter, mySortOptions);



>> Create a separate BO for each pre-defined view (e.g. ProjectList, OpenProjectList, ClosedProjectList, etc.) and a special BO to support dynamic searching/filtering by the user. <<

I don't think this is necessary. Your pre-defined views can be just previously saved filters.

>> Create a single BO that always accepts the dynamically-defined search criteria object.  When null, the object returns the full list; otherwise, the specific filter is applied.  This means one BO, one sproc and a ton of dynamic code.  (This can be done by either generating the SQL in code or passing the applicable parameters to a sproc as described in the previous posts.) <<

You can encapsulate specific filter/sort objects like the example above. The dynamic sql stuff can be handled by a different object. What we do is pass the filter/sort objects to a Sql Translator that generates the appropriate sql. The translated filter/sorting sql is then sent to the stored proc.
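
The sort half of the translator can be a straight whitelist, so a saved user "view" can never smuggle raw SQL in through a column name.  A rough sketch (the ProjectSort members here are invented and don't exactly match the pseudo code above):

using System.Collections.Generic;

// translate ProjectSort into an ORDER BY fragment; the column names come from
// a fixed map, never from user input
public static string ToOrderBy(ProjectSort sort)
{
    Dictionary<string, string> columns = new Dictionary<string, string>();
    columns["Name"] = "ProjectName";
    columns["Status"] = "ProjectStatus";

    string column;
    if (sort == null || !columns.TryGetValue(sort.SortColumn, out column))
        column = "ProjectName";                               // safe default

    string direction = (sort != null && sort.Direction == SortOrder.Desc) ? "DESC" : "ASC";
    return "ORDER BY " + column + " " + direction;
}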

I prefer filtering on the database rather than bringing back 10k records that are then filtered.

HTH


david.wendelken replied on Monday, December 11, 2006

SonOfPirate:

ProjectInfoList.GetProjects(); // returns only non-deleted items.
ProjectInfoList.GetAllProjects(); // returns all projects, regardless of their deleted flag.
ProjectInfoList.GetOpenProjects(); // returns all projects in an 'open' state.
ProjectInfoList.GetClosedProjects(); // returns all projects in a 'closed' state.
etc.

And these can simply make use of a pre-defined criteria object and use the same data access methods to retrieve the data from the database.  And we only pull the data we actually want out of the database - which I like.

I'll have to go back and revisit that thread - and the one referenced from it - and retrace our steps to see how we implemented this before so we can make use of it here.  I believe we made use of a special SearchCommand object and custom CommandBuilder that used the criteria object and created the SQL for the SearchCommand to execute in a manner similar to what you described in the other topic.

I guess there would be one downside to this approach. Even the "canned views" would be treated as dynamic queries when they really don't have to be.  What I mean is, let's say we have a stored procedure, GetProjects, that is used by default to retrieve the records.  This is great and we get all the benefits of using a sproc for data access.  But, in order to implement the dynamic search functionality, we have to go with dynamic-SQL instead of a sproc for our query.  To create the Open Projects "view" using the GetOpenProjects() factory method, we would use the same dynamic facility that is used for user-defined searches rather than simply calling a GetOpenProjects stored procedure.  See my point?

Would be nice if there was some happy middle-ground that would allow us to do both.  Any ideas?

Had an idea that might be some middle ground.  

Pluses:

  • More secure than pure dynamic sql
  • Less network traffic than filtering it all in the business objects

Minuses:

  • More load on the database than a pure dynamic sql approach.

First of all, in sql server 2005 and in oracle, you can define a procedure to run as a given user or role.

Example (sql server 2005 syntax):

CREATE PROCEDURE BLAHBLAH WITH EXECUTE AS 'ReallySafeUserRole' AS ...

So, ReallySafeUserRole could be defined to only have the authority to select privileges on the tables that are necessary to service the allowable dynamic queries.  If someone slipped in some nasty dynamic sql, it simply wouldn't be able to do very much.

Secondly, if you needed to add some more protection (at the cost of more load on the database), you could make a change to the ReallySafeUserRole privileges.  Instead of having select access on the necessary tables, it would only have execute authority to run a specific stored procedure (which I explain next).

The stored procedure it would run would return a list of the four main categories you mentioned ( all, non-deleted, open, closed).  (If you needed 4 procedures to do that instead of one, then this procedure would have the authority to run all four.)  You execute them like this:

create table #project_list
(project_id int null
, etc...
)

insert into #project_list exec GetProjectData @project_sub_list_flag

You then build a dynamic query against #project_list.

I *think* this would be pretty secure.  Comments?

 

david.wendelken replied on Monday, December 11, 2006

darn.  still have to worry about cross-site scripting.  Have to use html encoding to stop that from the business object side.

david.wendelken replied on Tuesday, December 12, 2006

david.wendelken:

darn.  still have to worry about cross-site scripting.  Have to use html encoding to stop that from the business object side.

I was wrong on that, which means I was actually right. :)

Using two procs as I described earlier should be quite secure and avoids overloading the network by moving data that will be filtered out, while still preventing sql injection attacks.  Can't wait to build one this way!

david.wendelken replied on Wednesday, December 13, 2006

david.wendelken:
SonOfPirate:

Would be nice if there was some happy middle-ground that would allow us to do both.  Any ideas?

Had an idea that might be some middle ground.  

Pluses:

  • More secure than pure dynamic sql
  • Less network traffic than filtering it all in the business objects

Minuses:

  • More load on the database than a pure dynamic sql approach.

First of all, in sql server 2005 and in oracle, you can define a procedure to run as a given user or role.

Example (sql server 2005 syntax):

CREATE PROCEDURE BLAHBLAH WITH EXECUTE AS 'ReallySafeUserRole' AS ...

So, ReallySafeUserRole could be defined to only have the authority to select privileges on the tables that are necessary to service the allowable dynamic queries.  If someone slipped in some nasty dynamic sql, it simply wouldn't be able to do very much.

Secondly, if you needed to add some more protection (at the cost of more load on the database), you could make a change to the ReallySafeUserRole privileges.  Instead of having select access on the necessary tables, it would only have execute authority to run a specific stored procedure (which I explain next).

The stored procedure it would run would return a list of the four main categories you mentioned ( all, non-deleted, open, closed).  (If you needed 4 procedures to do that instead of one, then this procedure would have the authority to run all four.)  You execute them like this:

create table #project_list
(project_id int null
, etc...
)

insert into #project_list exec GetProjectData @project_sub_list_flag

You then build a dynamic query against #project_list.

I *think* this would be pretty secure.  Comments?

I think this is a really cool approach.  It's simple, fast to code, should run fast, and is also sql-injection safe.

Why no interest in it? 

Is it just because all you OOO (Object Oriented Only) types couldn't possibly find something done in the database cool?  :)   Or did I not provide sufficient detail to explain it properly?

ajj3085 replied on Wednesday, December 13, 2006

Well personally I think that's more work than just using dynamic sql... which you end up using anyway, just in the database instead of the .Net code, from my understanding of your post.  So wouldn't it just be easier to get a policy changed that allows selects to be built dynamically?

david.wendelken replied on Tuesday, December 19, 2006

ajj3085:
Well personally I think that's more work than just using dynamic sql... which you end up using anyway, just in the database instead of the .Net code, from my understanding of your post.  So wouldn't it just be easier to get a policy changed that allows selects to be built dynamically?

Sorry for the delay in answering, but life and work got in the way. 

It IS more work than just using dynamic sql - it is also more secure.  :)

I've thought of several flavors for this technique so that the added complexity better matches the scenario.

Let's say someone has a security role that allows them to select, insert, update and delete data from lots of tables in the database.  However, their ability to issue a malicious sql command is limited because their only interface to the database is via application screens.  Letting a user type in whatever they want, then constructing a sql statement from that and dynamically executing it totally destroys the application layer of security.  They can query or destroy any data that their role gives them access to.

This is, to put it mildly, a bad thing.

It's a whole lot of coding work to try to outsmart the hackers by checking what they type in for malicious intent.  It's also not a secure approach when the user can make use of free-text fields.

My technique lets you completely avoid all that work and still be secure.

+++++++++++++++++++++++++++
Simplest Version
+++++++++++++++++++++++++++

Let's take the simplest version.  You have two tables you want to issue a dynamic query against:  project and project_status_code.   Any user with access to these tables would be able to see any record in them.  

In this case, just create a database role called "Query_Projects" and ONLY grant it select access to project and project_status_code.   Create your sql statement in your object code, or pass in the parameter values to a stored procedure and have the procedure construct the sql statement.  I don't care which.   When the stored procedure is created, use a "with execute as Query_Projects" clause.

A hacker can slip in whatever they want, but any statement that does anything other than select data from project or project_status_code will fail due to a security error.   That's as simple as it gets!

+++++++++++++++++++++++++++
More complex data security needs
+++++++++++++++++++++++++++

Let's say that a user should NOT be allowed to see every project in the table.  (Your company is managing projects for competitors, and you have to safely compartmentalize the data from teams working for those competitors. )

One way to do that would be to create a my_projects view that returned only those projects that each user was allowed to see.  In that case, don't grant select access on project to the Query_Projects role; grant select on the my_projects view instead.  Done.  Still simple.

++++++++++++++++++++++++++
Really complex data security needs
++++++++++++++++++++++++++

In this case, not only must we limit what rows a user can see in the project table, but we also need to limit it to other tables we will need to dynamically query on.  Plus, the rules are so convoluted we want to make sure that our skilled database programmers do that coding (both to get it right and to get it running efficiently).

In this case, we need to split the work (and thus partition the business risk) into two stored procedures.  The first procedure (the public one directly called from the application) still executes as if the Query_Projects role was running it.  However, instead of being given direct access to the underlying data tables and views, we only give the Query_Projects role the authority to execute a second procedure.  The second procedure's job is to return a superset of data to the first, public procedure.  We then issue a dynamic sql statement against those results. 

By doing this, a hacker never has the opportunity to gain access to the data tables or database objects, and therefore can't cause that kind of mischief. 

++++++++++++++++++++++++++
Lots of data
++++++++++++++++++++++++++
Now, let's suppose that there are 100,000 records that could potentially be returned.  The second procedure hands 100,000 records to the first one, which then discards the ones it does not want.  That's a bit inefficient, but way more efficient than moving 100,000 records across the network and then discarding them in the object layer!  How could we make this more efficient?

Well, if we pass in parameters instead of a completed statement to the first procedure, it can pass some of them (those that are both safe and cheap to code) to the second procedure in order to restrict the number of records returned to the first procedure.  Even a couple parameters out of dozens of possible ones might lower the number of records chosen to a few hundred or so, which would greatly improve the overall efficiency.

So, that's four different, secure ways to simply and cheaply issue dynamic sql queries.  HTH

ajj3085 replied on Thursday, December 21, 2006

I still don't see how that's better than just properly setting permissions on the database objects themselves.  As your scenarios get more and more complex, it sounds like more and more business rules are creeping into the database.  Authorization rules are business rules as well, and it's been my experience that changing the business layer is a lot easier than the db layer.. especially when you start linking procs together which issue dynamically built sql to each other..

david.wendelken replied on Thursday, December 21, 2006

ajj3085:
I still don't see how that's better than just properly setting permissions on the database objects themselves.  As your scenarios get more and more complex, it sounds like more and more business rules are creeping into the database.  Authorization rules are business rules as well, and it's been my experience that changing the business layer is a lot easier than the db layer.. especially when you start linking procs together which issue dynamically built sql to each other..

I'll try again. :)

A given user is allowed to update and delete some projects in a table, but not others.   You have decided to put the business logic controlling which projects can be updated/deleted in your csla business objects - because that's the way OO people seem to like it.

Now, if you want to "keep things simple" on the database, that means that you have granted me access to update and delete on the Projects table.

That means that, as far as the database is concerned, I can update or delete ANY project in the Projects table.   However, your Csla business objects limit my behavior to the correct projects, so it doesn't matter.

Now someone writes a new program that uses dynamic sql.  It's got gobs and gobs of fields to validate, many of which are free-text, and that allows me to inject a sql select, update or delete statement of my own devising into the database.  

Using injected sql select statements and the database dictionary, I learn about your table structure.  Then I update my competitor's projects to make them go badly, and update my own to get a price break or even have you write me a refund check.  Or, I slip in a "delete from projects;" command and your database just got trashed.   

Now, you can argue - quite rightly! - that you could set up views, or very fine-grained access control on the row or column level, etc., in order to try to prevent this.  But then it's not a matter of which one of us is "putting business logic in the database", it's a question of which method we are using to do so! :)

I propose that the method that I put forth is simpler and safer than trying to lock things down in the database using access controls.  Why?  Because it's simpler.  New tables get added?  My way automatically protects them.  The access control method on the tables only protects them if they get set up with that protective layer.  Not everything will.  It might be another application in the database (that you didn't write!) that gets hacked.  The method I proposed would protect against sloppy developers better. :)

 

ajj3085 replied on Thursday, December 21, 2006

david.wendelken:
Now someone writes a new program that uses dynamic sql.  It's got gobs and gobs of fields to validate, many of which are free-text, and that allows me to inject a sql select, update or delete statement of my own devising into the database. 


Ahh, well now you have problems where you're allowing unknown and / or untrusted applications to communicate with your database.  I think a better option would be to address that problem, but depending on the environment maybe you can't.

david.wendelken:
Now, you can argue - quite rightly! - that you could set up views, or very fine-grained access control on the row or column level, etc., in order to try to prevent this.  But then it's not a matter of which one of us is "putting business logic in the database", it's a question of which method we are using to do so! :)

Perhaps the BO should be putting the appropriate security on the row (does sql 2005 now allow permissions for specific rows of data?).   If row level security is not available though, you still have the same problem with your solution.  A crafty sql statement embedded in the stored procedure call could still update data that shouldn't be updated.

david.wendelken:
I propose that the method that I put forth is simpler and safer than trying to lock things down in the database using access controls.  Why?  Because it's simpler.  New tables get added?  My way automatically protects them.  The access control method on the tables only protects them if they get set up with that protective layer.  Not everything will.  It might be another application in the database (that you didn't write!) that gets hacked.  The method I proposed would protect against sloppy developers better. :)


Well I'm not sure that there should be some magic proc that figures out security; we should always be thinking of it, especially when adding new tables and such.  Plus, what if you get the proc wrong?  How do you know?  As far as sloppy developers go, we need to stop letting sloppy developers do work on applications which must be highly secure (or any work at all, for that matter).

At any rate, the idea that you and your competitor are storing data on the same database seems far fetched to me... mostly because of the fear of hacking you describe.  A more maintainable solution would just be to use another database, possibly on another server altogether.. no magic stored procedures and associated maintenance costs.  Yes, you have two product dbs to keep updated now, but you did buy Sql Compare right???? [;)]

david.wendelken replied on Thursday, December 21, 2006

ajj3085:
david.wendelken:
Now someone writes a new program that uses dynamic sql.  It's got gobs and gobs of fields to validate, many of which are free-text, and that allows me to inject a sql select, update or delete statement of my own devising into the database. 


Ahh, well now you have problems where you're allowing unknown and / or untrusted applications to communicate with your database.  I think a better option would be to address that problem, but depending on the environment maybe you can't.

Gosh, we sure are not communicating well!  :(

No, absolutely not.  I am not talking about allowing unknown/untrusted applications to communicate with the database.  I am talking about someone writing a web page for their own application that allows a user of that page to inject unwanted sql commands into it - all because they dynamically constructed the sql statement by concatenating user-entered strings together.  Are we mis-communicating because you don't see how this is a problem?

ajj3085:
david.wendelken:
Now, you can argue - quite rightly! - that you could set up views, or very fine-grained access control on the row or column level, etc., in order to try to prevent this.  But then it's not a matter of which one of us is "putting business logic in the database", it's a question of which method we are using to do so! :)

Perhaps the BO should be putting the appropriate security on the row (does sql 2005 now allow permissions for specific rows of data?).   If row level security is not available though, you still have the same problem with your solution.  A crafty sql statement embedded in the stored procedure call could still update data that shouldn't be updated.

Again, we are not communicating.  If your business object accepts user-entered free-form text and constructs the sql statement to be issued by concatenating the strings together, a user of that web page has the capability to totally and completely subvert the security the business object tries to add.  

ajj3085:
david.wendelken:
I propose that the method that I put forth is simpler and safer than trying to lock things down in the database using access controls.  Why?  Because it's simpler.  New tables get added?  My way automatically protects them.  The access control method on the tables only protects them if they get set up with that protective layer.  Not everything will.  It might be another application in the database (that you didn't write!) that gets hacked.  The method I proposed would protect against sloppy developers better. :)


Well I'm not sure that there should be some magic proc that figures out security; we should always be thinking of it, especially when adding new tables and such.  Plus, what if you get the proc wrong?  How do you know?  As far as sloppy developers go, we need to stop letting sloppy developers do work on applications which must be highly secure (or any work at all, for that matter).

I agree that proper usage of database roles is good practice - and I will continue to use it.  It's a good, basic level of security that's very cost effective.  This technique is a quick and inexpensive way to safely use dynamically created sql on a query page.  That's all.

ajj3085:

At any rate, the idea that you and your competitor are storing data on the same database seems far fetched to me... mostly because of the fear of hacking you describe.  A more maintainable solution would just be to use another database, possibly on another server altogether.. no magic stored procedures and associated maintenance costs.  Yes, you have two product dbs to keep updated now, but you did buy Sql Compare right???? [;)]

It's not far-fetched.  Happens in big business all the time.  Depending upon software licensing arrangements, using a different database server and database instance might cost huge pots of money in licensing fees.  Plus extra hardware.  And extra rent, heating, cooling, hardware maintenance and repair fees, etc.  Plus the extra labor cost - even with code utilities to help out - of keeping multiple installations synchronized.

DansDreams replied on Friday, December 22, 2006

david.wendelken:

ajj3085:
Ahh, well now you have problems where you're allowing unknown and / or untrusted applications to communicate with your database.  I think a better option would be to address that problem, but depending on the environment maybe you can't.

Gosh, we sure are not communicating well!  :(

No, absolutely not.  I am not talking about allowing unknown/untrusted applications to communicate with the database.  I am talking about someone writing a web page for their own application that allows a user of that page to inject unwanted sql commands into it - all because they dynamically constructed the sql statement by concatenating user-entered strings together.  Are we mis-communicating because you don't see how this is a problem?

David, I think it's because you're missing the point.  A page that allows users to inject SQL is the problem, not dynamic sql, since that can be easily shielded from those attacks just like stored procedures via parameterization. 
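
For example (plain ADO.NET, System.Data.SqlClient; the connection string, table and variable names are just placeholders):

// dynamic SQL text, but the user's value travels as a parameter, so the
// database treats it strictly as data - there is nothing to inject into
string sql = "SELECT ProjectID, ProjectName FROM tblProject " +
             "WHERE ProjectName LIKE @name";

using (SqlConnection cn = new SqlConnection(connectionString))
using (SqlCommand cm = new SqlCommand(sql, cn))
{
    cm.Parameters.AddWithValue("@name", "%" + userEnteredText + "%");
    cn.Open();
    using (SqlDataReader dr = cm.ExecuteReader())
    {
        // read the results
    }
}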

Mind you, as I've stated, I'm speaking strictly of reading data and agree sprocs are a good idea for updates.

[Ok, maybe this is just going to continue in this thread.  :) ]

pelinville replied on Friday, December 08, 2006

SonOfPirate:

I have a situation where I have a set of business objects, we'll go with Projects for consistency.  In code, we have created a Project BO and a read-only ProjectInfo BO - just as in the book.  Then we have a ProjectCollection (read/write using Project - allows us to add, modify & remove via DataGrids, etc.) and a ProjectList (read-only using ProjectInfo for listing).  We've already begun receiving requests to add additional "views" of this data.  For instance...

We need to be able to generate a list/collection of Open Projects, Closed Projects, Deleted Projects and Archived Projects - just for starters.  Implementing these is relatively simple as they are just filters based on the current state and/or age of the project.

In addition, we are looking to allow users the ability to define their own "custom" filters to apply to the list/collection displayed.  These filters could then be saved as part of their personalization of the interface.

My question is what approach to use that will satisfy our need for multiple "views" and allowing the user to create their own views that are dynamically recreated at run-time.

Our first thought was to create a separate OpenProjectList class, etc. but that hardly seems practical as the number of "views" can increase dramatically.  AND, we wouldn't be able to support dynamic filtering this way.

So, it occurred to me that perhaps we are dealing with coding the basic read-only "view" - ProjectList - as these filters would only apply to read-only lists anyway and treating all of these variations as "Views" much like the System.Data.DataTable does.  Then, we would define a mechanism for defining the filtering applied by the user (there have been several other posts along these lines which were VERY helpful in developing this part).  So, in the end, we would instantiate our ProjectList class, create a new View by applying our filtering logic and bind our UI to the view rather than the ProjectList class.

Is this wise though?  It would mean that our ProjectList class would do no filtering of its own and would have to be populated with every record in the table whether deleted, closed, open or otherwise so that the filtering could be applied in memory, dynamically.  As the application moves forward, this could mean hundreds of thousands of records when all the user may want is a view of projects in an Open state that have been assigned to him/her - and may only be 10-20 items out of several hundred thousand.

Thoughts/suggestions?

This may seem cheesy and very anti oop but it works for what we do in some cases.

First I define a superset of "views". These are the factory methods of the collection class.  As you mentioned, "Open", "Closed" etc.  These base views are determined by the potential amount of rows/objects returned AND security concerns.  So if "Archived" could potentially return tens of thousands of records the factory method must supply a date range, for example, to limit the amount of data returned.

Now here is where it gets kinda cheesy.  Developer Express has this great grid that can persist its state (and I am sure other vendors offer this as well). This includes sort order, filter(s), column order and just about everything else with a single call.  I let the user set this up and save the current state of the grid so that they can recall the "view" with a right click of the grid.

This allows me to only worry about limiting the amount of data retrieved (for performance reasons) in the factory method of the collection.  Something else is concerned, I feel correctly, about how that data is displayed to the user. I have a hard time concerning myself in the middle tier about what data is returned outside of security concerns.

On a slightly less cheesy note. There are cases where the grid stuff is not useful. For example when it is not a UI deal.  In this case I create another class that takes the BO collection in the ctor.  The class also has a number of properties and methods that will produce another collection based on the filters.
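
Something like this, in other words - a sketch assuming the items expose a State property (all the names are made up):

using System.Collections.Generic;

// wraps an already-fetched ProjectInfoList and hands back in-memory "views"
// without another trip to the database
public class ProjectListFilter
{
    private ProjectInfoList _source;

    public ProjectListFilter(ProjectInfoList source)
    {
        _source = source;
    }

    public List<ProjectInfo> ByState(string state)
    {
        List<ProjectInfo> result = new List<ProjectInfo>();
        foreach (ProjectInfo item in _source)
            if (item.State == state)
                result.Add(item);
        return result;
    }
}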

And don't discount the dynamic criteria creation.  I personally don't believe in stored procedures unless they provide something that a parameterized query can't accomplish.  That does not happen very often.

SonOfPirate replied on Saturday, December 09, 2006

I have to disagree with you on the stored procedure  issue but won't engage in a debate as the merits of stored procedures over coded SQL are well established including security and performance - just to mention a couple.  But, I do agree that there is a difference between UI view settings and what I am referring to as a "view" which reflects the make-up of the data that is retrieved from the database.  My concern is to limit the bandwidth to only handle what is required for the request.  In other words, if we only want to see a list of projects in an Open state (OpenProjectsList), then we certainly don't want to retrieve the whole list and apply a filter in memory as would be the case if we used the traditional, MS-designed collection approach.

And, I am right there with you on the idea of using static, factory methods to create our pre-defined views/collections.  Am I correct that when doing so, you are simply setting up the Criteria object for your collection per the method that was called and passing this along to your DataPortal_Fetch method for handling so that the appropriate results are returned?  So, in the end, regardless of the criteria, you are returning the same type every time?

The wrinkles I am trying to iron out include the need for dynamic searching/filtering on top of the list of pre-defined views.  Is it better to treat all of these the same and have the underlying data access code treat a pre-defined view the same as a user-defined view in that the data access code doesn't know the difference between a call that was initiated through one of the factory methods or based on some criteria established by the user in the UI -or- would it be better to have these predefined views be BOs of their own?

I think the consensus is NOT to have each view be a BO of its own; however, after looking at the 'x'DataSource/CslaDataSource model, I find it an interesting idea to implement this type of scenario in a similar way to where we define a set of 'views' for a particular set of data and have them managed by our core object which instantiates the requested object via our factory methods.  This would allow us to maintain our development standard of not allowing code direct access to database elements - code MUST use sprocs - for each view and provide for extensibility by simply creating additional views that can be added to the core object.  This would allow us to persist the user-defined views and have them recreated in code for access in a similar manner as the pre-defined views.

I'll tell you one of the reasons that this is appealing is that we are looking to add a "Folder List" type component to our UI.  This would list all of the different views that a user has available to view data.  It would come out-of-the-box with "All Projects", "Open Projects", "Closed Projects", etc. - our pre-defined views - and would expand to also include the names given to the custom views created by the user.  So, if our user wanted to always see a list of all Projects for a particular region or projects worth more than 'x' amount in revenue, etc., they would define their filter/criteria, such as (Project.EstimatedRevenue >= $1 Mil), save it as "High Yield Projects" and it would automatically appear in the Folder List for that user alongside the others.

By having this custom view, "High Yield Projects", created as another View within our ProjectList object (again, following the DataSource model) we can simply return a list of available views using the GetViews method or enumerate the Views property and our Folder List UI component can properly create the list and interact with our ProjectList BO to retrieve the requested list when the user selects the item in the UI component - via the GetView(string name) factory method.
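
Roughly what I am picturing (sketch only - nothing here is written yet, and the criteria class is the one from my earlier sketch):

using System.Collections.Generic;

// a named "view" is just a label plus the criteria it applies
public class ProjectListView
{
    public string Name;                          // "Open Projects", "High Yield Projects", ...
    public ProjectInfoList.Criteria Criteria;    // what this view filters on

    public ProjectInfoList GetData()
    {
        return ProjectInfoList.GetProjects(Criteria);
    }
}

// the core object keeps the pre-defined views plus the user's saved ones
public class ProjectViews
{
    private List<ProjectListView> _views = new List<ProjectListView>();

    public ProjectViews(string userName)
    {
        _views.Add(MakeView("All Projects", null));
        _views.Add(MakeView("Open Projects", "Open"));
        _views.Add(MakeView("Closed Projects", "Closed"));
        // user-defined views would be loaded from personalization storage here
    }

    public IList<ProjectListView> GetViews() { return _views; }

    public ProjectListView GetView(string name)
    {
        return _views.Find(delegate(ProjectListView v) { return v.Name == name; });
    }

    private static ProjectListView MakeView(string name, string state)
    {
        ProjectListView v = new ProjectListView();
        v.Name = name;
        v.Criteria = new ProjectInfoList.Criteria(false, state);
        return v;
    }
}

The Folder List UI would just bind to GetViews() for its items and call GetView(name).GetData() when the user clicks one.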

What do you all think?

I guess at this point, I'm looking for a reason to not try this approach and see how it goes since the more we've thought about it, the more this model seems appealing and conducive to some more features that we'd like to implement than a straight collection would provide.

 

 

ajj3085 replied on Monday, December 11, 2006

SonOfPirate:
I have to disagree with you on the stored procedure  issue but won't engage in a debate as the merits of stored procedures over coded SQL are well established including security and performance - just to mention a couple. 


Well, not getting into the debate too much, don't just assume those assumptions are true.  There are times where dynamic sql actually performs better than the equivalent stored procedure.  And you are just as secure creating a view (you can specify column level security) and limiting what can be done with that.  Stored procedures are no magic bullet.

SonOfPirate:
My concern is to limit the bandwidth to only handle what is required for the request.  In other words, if we only want to see a list of projects in an Open state (OpenProjectsList), then we certainly don't want to retrieve the whole list and apply a filter in memory as would be the case if we used the traditional, MS-designed collection approach.


What was wrong with exposing a criteria object, which your collection BO interprets and builds the correct sql or calls the correct procedure?  I do this all the time.  It doesn't make sense to bring back rows you know you are just going to filter out.

SonOfPirate:
And, I am right there with you on the idea of using static, factory methods to create our pre-defined views/collections.  Am I correct that when doing so, you are simply setting up the Criteria object for your collection per the method that was called and passing this along to your DataPortal_Fetch method for handling so that the appropriate results are returned?  So, in the end, regardless of the criteria, you are returning the same type every time?

That's one way to go.  Another would be to have a single internal criteria object and have your static methods translate the publicly exposed criteria object to the internal one.  The benefit there is that you have one DP_F routine that handles all cases.  It's more complex, but you save on having lots of simpler code to maintain as well (since you still need the complex scenario).
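
Something like this is what I mean (sketch; InternalCriteria and the filter properties are invented):

// public entry points all translate into one internal criteria, so there is a
// single DataPortal_Fetch to maintain
public static ProjectInfoList GetOpenProjects()
{
    InternalCriteria crit = new InternalCriteria();
    crit.State = "Open";
    crit.IncludeDeleted = false;
    return DataPortal.Fetch<ProjectInfoList>(crit);
}

public static ProjectInfoList GetProjects(ProjectFilter userFilter)
{
    // a user-defined search goes through the exact same path
    InternalCriteria crit = new InternalCriteria();
    crit.State = userFilter.ProjectStatus;
    crit.AssignedTo = userFilter.ProjectAssigned;
    return DataPortal.Fetch<ProjectInfoList>(crit);
}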

SonOfPirate:
The wrinkles I am trying to iron out include the need for dynamic searching/filtering on top of the list of pre-defined views.  Is it better to treat all of these the same and have the underlying data access code treat a pre-defined view the same as a user-defined view in that the data access code doesn't know the difference between a call that was initiated through one of the factory methods or based on some criteria established by the user in the UI -or- would it be better to have these predefined views be BOs of their own?

It might be easier to have the different methods end up calling a more complex DP_F routine.  As I said above, it's a more complex routine, but it handles everything you need to.

SonOfPirate:
I think the consensus is NOT to have each view be a BO of its own; however, after looking at the 'x'DataSource/CslaDataSource model, I find it an interesting idea to implement this type of scenario in a similar way to where we define a set of 'views' for a particular set of data and have them managed by our core object which instantiates the requested object via our factory methods.  This would allow us to maintain our development standard of not allowing code direct access to database elements - code MUST use sprocs - for each view and provide for extensibility by simply creating additional views that can be added to the core object.  This would allow us to persist the user-defined views and have them recreated in code for access in a similar manner as the pre-defined views.

The method I'm recommending to you should be able to handle it just fine.  Its actually what I'm doing, and I'll also need to store user defined searches.

SonOfPirate:
I'll tell you one of the reasons that this is appealing is that we are looking to add a "Folder List" type component to our UI.  This would list all of the different views that a user has available to view data.  It would come out-of-the-box with "All Projects", "Open Projects", "Closed Projects", etc. - our pre-defined views - and would expand to also include the names given to the custom views created by the user.  So, if our user wanted to always see a list of all Projects for a particular region or projects worth more than 'x' amount in revenue, etc., they would define their filter/criteria, such as (Project.EstimatedRevenue >= $1 Mil), save it as "High Yield Projects" and it would automatically appear in the Folder List for that user alongside the others.

Sounds like what I am doing, more or less copying the Outlook 2003 style application. For example, there's an Invoicing group (where in Outlook you'd have Mail, Calendar, etc.) and the items are Open, Closed, Pending, etc., and I'll be adding custom search folders as well.

Good luck with whatever solution you come up with!

Andy

xAvailx replied on Monday, December 11, 2006

>>
Well, not getting into the debate too much, don't just assume those assumptions are true.  There are times where dynamic sql actually performs better than the equivalent stored procedure.  And you are just as secure creating a view (you can specify column level security) and limiting what can be done with that.  Stored procedures are no magic bullet.
<<

If you start a new thread, I would like to discuss your views on the above comments.

Thx.

ajj3085 replied on Monday, December 11, 2006

I don't think we need another such thread, especially here.  Just google around though and you'll find plenty of posts where dynamic sql ends up performing better than a proc.  As far as security goes, views are just as securable and can hide the actual schema (although I don't count that as 'security') just as well as a proc can.

Just to qualify my statement though, this applies mostly to selecting data.  I think procs are probably the best way to modify data, mainly because it becomes very easy to add full row level auditing later if you're going through procs first.

xAvailx replied on Monday, December 11, 2006

That is fine if you don't want to start a new thread. As far as googling, pretty much all of my previous research and experience says otherwise (performance, security, maintainability, etc...) about using dynamic sql from a client application; that is why I was wondering if you had a specific link so I can look it up and research further.

Thx.

DansDreams replied on Tuesday, December 19, 2006

xAvailx:
That's fine if you don't want to start a new thread.  As far as googling goes, pretty much all of my previous research and experience says otherwise (performance, security, maintainability, etc.) about using dynamic SQL from a client application; that is why I was wondering if you had a specific link, so I can look it up and research further.

Thx.

This was quite a pet topic of mine a year or two ago  - back when I had more time for debating for the fun of it.  LOL. 

There are a gazillion articles discussing this, including ones taking the ad-hoc point of view.  I can't believe you didn't find any of them.

After spending WAY too many hours researching, listening and debating, the simple conclusion for me is that there's no hard and fast answer.  Many of the arguments for stored procedures are based on pure myth, or on ancient testing against SQL Server 7.0 or earlier.  Things changed in SQL 2000, rendering many of those old assumptions invalid - but lots of SQL gurus still cling to them.

The second of what I see as the two biggest dangers of sproc fanaticism is blindly assuming they're best for all scenarios.  It's not difficult to write a sproc that is a horrid mess to maintain and doesn't even act like a sproc from SQL Server's point of view (in terms of performance).

Well, we could go on for hours here.  The bottom line is that my advice is to understand that stored procedures offer both advantages and disadvantages.  One should not blindly accept the disadvantages when you either don't understand the advantages or haven't explored other ways of achieving them.  (This is my general advice and not meant specifically for you or to accuse you of anything.)

I personally use mostly views for reading with the occasional sproc thrown in when necessary, and sprocs exclusively for writing because I want the control they offer.

xAvailx replied on Thursday, December 21, 2006

I am not looking for a never-ending debate.  As I previously posted, I was looking for specific, verifiable and reliable resources so I can research further.

Again, pretty much all of my previous research and experience says otherwise (performance, security, maintainability, etc.) about using dynamic SQL from a client application.  I am not looking for a debate, just some reliable resources I can research further to see in which "specific" scenarios client-side dynamic SQL is "better" than a stored proc.

If you have any specific links, I would appreciate if you would share.

Thx.


ajj3085 replied on Thursday, December 21, 2006

While you're reading my post, please keep in mind that for selecting data I prefer dynamic SQL against views.  For updates, I think the best bet is SIMPLE procedures.  Remember, most of your logic in the Csla world should be in C# business assemblies anyway, so your procs really shouldn't be doing more than a simple update.
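For what it's worth, here's roughly the shape of it (table, view and proc names here are made up, not a real schema): parameterized dynamic SQL against a view for the read, and a plain proc for the write.

    using System.Data;
    using System.Data.SqlClient;

    public static class InvoiceData
    {
        // Read: parameterized dynamic SQL against a view. The parameter keeps it
        // injection-safe and gives SQL Server a cacheable plan for this statement.
        public static DataTable GetOpenInvoices(string connectionString, int customerId)
        {
            using (SqlConnection cn = new SqlConnection(connectionString))
            using (SqlCommand cm = new SqlCommand(
                "SELECT InvoiceId, CustomerId, Total FROM vInvoiceList " +
                "WHERE CustomerId = @customerId AND Status = 'Open'", cn))
            {
                cn.Open();
                cm.Parameters.AddWithValue("@customerId", customerId);
                DataTable table = new DataTable();
                new SqlDataAdapter(cm).Fill(table);
                return table;
            }
        }

        // Write: a simple stored procedure. Business logic stays in the C# layer,
        // so the proc is just an UPDATE (and a convenient spot for auditing later).
        public static void UpdateInvoiceTotal(string connectionString, int invoiceId, decimal total)
        {
            using (SqlConnection cn = new SqlConnection(connectionString))
            using (SqlCommand cm = new SqlCommand("updInvoiceTotal", cn))
            {
                cn.Open();
                cm.CommandType = CommandType.StoredProcedure;
                cm.Parameters.AddWithValue("@invoiceId", invoiceId);
                cm.Parameters.AddWithValue("@total", total);
                cm.ExecuteNonQuery();
            }
        }
    }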

I've been looking for links regarding performance, but it's hard to find things out there because of all the noise.  I do recall reading that starting with SQL Server 2000, dynamic queries were more or less on par with stored procedures.  I've also found some threads indicating that on some queries, dynamic SQL actually performed better than the same query encapsulated in a stored procedure.  But you'll have to search on your own.

As far as maintainability goes, it depends on what you're doing.  If you end up building SQL within a stored procedure, or you have lots of if/else blocks to alter which query is run, I'm of the belief that maintainability will be hurt... but it depends on who's doing the maintenance.  I'm pretty good with T-SQL and C#, but it seems to me that C# is easier to maintain than pure T-SQL, probably because of IntelliSense, the fact that I have tests, and the compiler generally being better at helping me figure out what's wrong.

As far as security goes... well, I never understood how anyone could honestly say that procs are more secure than dynamic SQL.  At the end of the day, even all the way back to SQL Server 6.5, you could set permissions on tables, views, stored procedures... pretty much every database object can be secured.  So how this whole 'procs are more secure' thing got started is beyond me... unless you're putting validation logic in the proc, but with Csla you shouldn't be doing that.  Take validation logic out of the mix, and you're basically left with the ACLs you can define on db objects.


Andy

DansDreams replied on Thursday, December 21, 2006

xAvailx:
I am not looking for a never-ending debate.  As I previously posted, I was looking for specific, verifiable and reliable resources so I can research further.

Again, pretty much all of my previous research and experience says otherwise (performance, security, maintainability, etc.) about using dynamic SQL from a client application.  I am not looking for a debate, just some reliable resources I can research further to see in which "specific" scenarios client-side dynamic SQL is "better" than a stored proc.

If you have any specific links, I would appreciate if you would share.

Thx.


Well, you can start by looking in the SQL 2005 Books Online under Stored Procedures Basics.  You'll note in the "benefits" section there isn't a mention of performance anywhere in sight.  In contrast, in SQL 2000 Books Online there is a specific section implying (erroneously) that the compiling of stored procedures brings a performance benefit.

The following is a rather infamous blog wherein the author later corrected himself and admitted that the pre-compile argument was complete myth.

http://weblogs.asp.net/rhoward/archive/2003/11/17/38095.aspx

The following is a fairly well-known blog on the myths of stored procedure advantages.  Frans is perhaps a little extreme himself in his disdain for sprocs, but it gives good arguments against the myths which have fueled the "always use stored procedures" position.

http://weblogs.asp.net/fbouma/archive/2003/11/18/38178.aspx

I do not doubt that you've found lots of resources spewing the mantra.  The question is whether they were largely based on fallacy and utter untruths like Howard's.

I don't have time to repeat all my research, but as for a specific case where dynamic SQL is more performant, see what you can find by googling, including the term "coalesce".  That was one specific example I remember - the issue being that in a complex sproc with lots of conditional testing going on, you end up with two problems (sketched after the list):

1 - a lot more code to run compared to the business layer that likely already knows exactly what it needs to do

2 - the final query that ends up being performed is so variable that you really have no cached execution plans and therefore no performance benefit over the specific ad-hoc dynamic calls
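To make point 2 concrete: the "do-everything" sproc typically ends up with a clause like WHERE (@status IS NULL OR Status = @status) AND (@region IS NULL OR Region = @region), so one plan has to serve every combination of inputs.  The client-side alternative emits only the predicates it actually needs, still fully parameterized, so each distinct statement shape gets its own cached plan.  A sketch with made-up table and column names:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class ProjectSearch
    {
        // Build only the predicates the caller supplied; values stay parameterized
        // and each distinct statement text gets its own cached execution plan.
        public static SqlCommand BuildSearch(SqlConnection cn, string status, string region)
        {
            List<string> where = new List<string>();
            SqlCommand cm = new SqlCommand();
            cm.Connection = cn;

            if (status != null)
            {
                where.Add("Status = @status");
                cm.Parameters.AddWithValue("@status", status);
            }
            if (region != null)
            {
                where.Add("Region = @region");
                cm.Parameters.AddWithValue("@region", region);
            }

            cm.CommandText = "SELECT ProjectId, Name, Status, Region FROM vProjectList"
                + (where.Count > 0 ? " WHERE " + string.Join(" AND ", where.ToArray()) : "");
            return cm;
        }
    }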

DansDreams replied on Thursday, December 21, 2006

I put my previous post in a new thread that's more appropriate for this off-topic discussion.  Please respond to me there so other interested parties will see the correct subject.

d0tnetdude replied on Monday, January 08, 2007

We also ran into the same problem; in fact, our business object handled task information and was called TaskInfoList.  The customer wanted a variety of different views: by status, by customer, by resource, and by start and end date.

When we first implemented CSLA we had a separate stored procedure for each; needless to say, that became a maintenance nightmare whenever we added a column to the database.

Finally the customer said they not only wanted the specified views, they wanted to be able to search the data any way they wanted!

After much deliberation we made all our lists operate using dynamic SQL, but we still called a stored procedure that accepted a WHERE clause as a parameter.

As long as the customer provides some search criteria, the performance is pretty good, and as far as SQL injection attacks go, we've tightened up the security on the account that accesses the stored procedures to minimize any major issues.  It was a trade-off, but one that I think was worth it.  We often create two stored procedures: one being the base list, GetTaskList, and then a more advanced one, GetTaskList2, that also joins in all the child tables, not just the foreign-key tables, so the customer can really search with some power.
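In case it's useful, the calling side is nothing fancy - something like this sketch (the @WhereClause parameter name and the escaping shown are illustrative, not our actual code; column names come from a fixed list we control, and values have their quotes doubled because the proc splices the clause into its dynamic SQL):

    using System.Data;
    using System.Data.SqlClient;

    public static class TaskListData
    {
        // Sketch of the approach described above: the client builds a WHERE clause
        // and hands it to the GetTaskList proc as a single parameter.
        public static DataTable GetTasksByCustomer(string connectionString, string customerName)
        {
            // Column names come from a whitelist; embedded quotes in values are doubled
            // because the proc concatenates the clause into dynamic SQL.
            string whereClause = "CustomerName LIKE '%" + customerName.Replace("'", "''") + "%'";

            using (SqlConnection cn = new SqlConnection(connectionString))
            using (SqlCommand cm = new SqlCommand("GetTaskList", cn))
            {
                cn.Open();
                cm.CommandType = CommandType.StoredProcedure;
                cm.Parameters.AddWithValue("@WhereClause", whereClause);
                DataTable table = new DataTable();
                new SqlDataAdapter(cm).Fill(table);
                return table;
            }
        }
    }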

All I can say is it works.  It may not be the best solution, but it made the customer happy :-) and it makes it easy for us to implement changes.

Hope this helps!

Copyright (c) Marimer LLC