Complexity :(

Old forum URL: forums.lhotka.net/forums/t/7673.aspx


david.wendelken posted on Wednesday, September 23, 2009

I've been going over the 2008 version of the ProjectTracker application to (re)learn how CSLA works and absorb the changes to the framework.

One of the things I liked about the 2005 framework was how easy it was to ramp up a junior developer. It was elegant and pretty simple. I could point them to a class, talk them through it in half an hour to an hour, and say, "Go make some more classes like this one. Here's the list."

The new ProjectTracker app appears to have split the data layer out of the business objects into a separate LINQ project. Not only that, but the DalLinq layer has all the data access methods jumbled into one file (although they are still separate classes). That's an approach that doesn't scale well, because multiple developers get in each other's way with a common file like that.

The ProjectInfo class has two files, one of which doesn't appear to do anything at all.  Ditto with the ProjectList class.  I don't see how that is an improvement! 

Personally, I'm not convinced that Linq is an improvement either, but I could be biased on that, as I'm very comfortable with writing good SQL code.

It would have been nice to have one of the ProjectTracker classes use the traditional calls via SQL commands. It would certainly make the transition easier for people who don't know LINQ and don't know CSLA, or for those wanting to upgrade from CSLA 2005 to CSLA 2008. :(

If anyone could post a sample class (BusinessListBase and BusinessBase) that isn't split into a DalLinq layer and uses the standard SqlCommand calls, it would be a kindness.

wjcomeaux replied on Wednesday, September 23, 2009

Actually, you can write your business objects the same way as before. The new versions add a lot of features, but most (if not all) don't impose any required changes to your BO structure. You are still free to use inline SQL, stored procs, LINQ, etc. as your data access method. Likewise, you can still put your data access code in exactly the same places as you did before. The fact that the DAL is separated from the business logic in the example you saw was simply a choice made by the designer of that project.

If you look through all of the example projects available with CSLA you will see that almost none of them use the same business object structure. That's one beauty of CSLA: it imposes very little on how you write your code. Rocky is very careful about avoiding changes to CSLA that would break existing code, and any breaking changes are well documented in the changelog for each version of the framework.

Your best bet, if you want to continue with your usual business object structure, is simply to compile and test one or two classes against the new version of CSLA you are using. If they compile and your tests succeed, then you won't have to change the way you currently write code.

However, there are benefits to a few of the improvements (like the new way to declare properties instead of the old private backing fields).
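
For anyone who hasn't seen it, here's a rough before/after sketch (the exact RegisterProperty overload varies by CSLA version - the lambda form below assumes roughly 3.6+ - and the Name property is just a hypothetical example):

// Old style (CSLA 2.x): private backing field plus manual plumbing
private string _name = string.Empty;
public string Name
{
    get
    {
        CanReadProperty("Name", true);
        return _name;
    }
    set
    {
        CanWriteProperty("Name", true);
        if (_name != value)
        {
            _name = value;
            PropertyHasChanged("Name");
        }
    }
}

// New style (CSLA 3.5+): a registered PropertyInfo<T> lets the
// framework manage the backing field and change notification
private static readonly PropertyInfo<string> NameProperty =
    RegisterProperty<string>(c => c.Name);
public string Name
{
    get { return GetProperty(NameProperty); }
    set { SetProperty(NameProperty, value); }
}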

HTH,
Will

griff replied on Wednesday, September 23, 2009

Hi,

I tend to agree with David - I don't think LINQ adds that much (over ADO.NET etc.), and given the enormous number of new technologies still surfacing, I'm sticking with ADO.NET. There is simply too much to learn! If you use CslaGen to generate your BOs, it still uses the traditional methods.

HTH
Richard

Michael replied on Thursday, September 24, 2009

I bit the bullet and laboriously updated all of my data access to use LINQ. I don't regret it. Like most other CSLA devs, I am still using stored procedures for inserts, updates and deletes, but all retrievals are LINQ. This has dramatically reduced the total number of SP's that I have to maintain (especially for reports), as I don't have to have GetByThis, GetByThat, GetByThatOtherThing etc.

I also appreciate the strongly-typed methods and compile-time checking provided by LINQ-to-SQL for the stored procedures. As a side note, I've found putting each parameter on a new line makes it a bit easier to check parameters with intellisense (Ctrl+Shift+Space, Down, Down, Down...):

ctx.DataContext.SomeInsert(m_one,
                           m_two,
                           ...
                           ref newId,
                           ref lastChanged);

I also think the syntax looks a lot neater than the ADO.NET calls.

Regards
Michael

ajj3085 replied on Friday, September 25, 2009

Michael:
Like most other CSLA devs, I am still using stored procedures for inserts, updates and deletes, but all retrievals are LINQ.


Well, LINQ itself doesn't provide any way to update the database anyway... Linq2Sql has object change tracking though, and you can use that together with stored procedures. I do this, and tend to like it. It looks more natural to me for some reason, and it batches everything until you call SubmitChanges on the DataContext. Calling the SPs right from the DataContext executes them immediately, correct?
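
In other words, something like this (a rough sketch; the Customers table and ArchiveCustomer proc are hypothetical names on a dbml-generated DataContext):

// Tracked-entity changes are queued until SubmitChanges is called
using (var ctx = new MyDataContext(connectionString))
{
    var cust = ctx.Customers.Single(c => c.Id == id); // query runs here
    cust.Name = "New name";        // change tracked in memory only
    ctx.Customers.InsertOnSubmit(new Customer { Name = "Another" });

    ctx.SubmitChanges();           // all pending changes sent as one batch

    // whereas a proc mapped on the DataContext runs immediately:
    ctx.ArchiveCustomer(id);
}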

ajj3085 replied on Friday, September 25, 2009

Well, I strongly suggest you learn LINQ, and the technology which enables it (extension methods, lambda expressions, etc.). Once you figure it out, the productivity gains are huge.

The structure of the data layer is all done via a designer which creates a dbml file. You would only do that if you actually use the designer, probably for smaller apps. For larger ones, SqlMetal would be the appropriate tool to generate your Linq2Sql classes (Linq2Sql being a subset of LINQ!). You can be as good as you want at building SQL statements... it won't help you when you change the database and have to hunt down all your string literals to change. Linq2Sql (or Linq2EF, or Linq2whateverdb) allows you to find breaking database changes just by recompiling.
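
For example (a sketch; the Project entity and projectId are hypothetical):

// Inline SQL: a renamed column only fails at runtime
var cmd = new SqlCommand(
    "SELECT Id, Name FROM Project WHERE Id = @id", connection);

// Linq2Sql: the same query is checked by the compiler. Rename the
// column, regenerate the classes (designer or SqlMetal), and this
// becomes a compile error instead of a runtime surprise:
var name = ctx.Projects.Single(p => p.Id == projectId).Name;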

I'm not sure what you mean by separate ProjectInfo classes... I haven't seen the sample in a while, I guess.

Of course, you don't need to learn any of that... you can still do your SQL calls exactly the same way. The split classes might be there to demonstrate how to build ProjectTracker for both the server side (.NET) and the Silverlight client side (which doesn't use the .NET runtime, it uses the Silverlight runtime).

To really learn Csla though... the book is your best bet.

Things have gotten a bit more complex in CSLA... but going from 2 to 3.5 can eliminate up to 40% of your boilerplate CSLA code... so I tend to think it's rather worth it.

JonM replied on Friday, September 25, 2009

We don't use all of that new Linq2Sql either. It is more of a pain than it's worth (at least to us). I want all data access routed through stored procedures. The main reason is so I can control how other devs access data. It is difficult to optimize for performance if I don't know what the queries are. Now, I will say that Linq2Objects rocks. It is a great way to search in-memory collections. Normally I stick data I want to access in a dictionary. However, sometimes I want to search dynamically on multiple columns. This is where LINQ comes in.
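
For example, a dynamic multi-column search over an in-memory collection can look like this (a sketch; the Person list and its fields are hypothetical, and no database is involved):

// Linq2Objects: filter and sort an in-memory list on several fields
var matches = people
    .Where(p => p.LastName.StartsWith(prefix)
             && p.City == city
             && p.HireDate >= earliestHire)
    .OrderBy(p => p.LastName)
    .ToList();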

Michael replied on Friday, September 25, 2009

Andy: Yes, SP's are executed immediately (unlike LINQ fetch queries, which run when they are enumerated).
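
A quick sketch of that deferred behavior (hypothetical Projects table):

// Building the query only builds an expression tree...
var recent = ctx.Projects.Where(p => p.Started >= cutoff);

// ...the SQL is generated and run when the results are enumerated:
foreach (var p in recent)
    Console.WriteLine(p.Name);

// Enumerating again re-executes the query unless you snapshot it:
var snapshot = recent.ToList();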

Jon: You probably still have a huge number of SP's which are no more complicated than
SELECT * FROM X WHERE Id = i ORDER BY j
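
In Linq2Sql each of those collapses to a one-liner against the generated entity (a sketch, assuming a generated X class with Id and J properties):

var rows = ctx.Xs.Where(x => x.Id == i).OrderBy(x => x.J).ToList();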

For terribly complex fetches you could still use SP's with a dbml, so you can have the best of both. And it's not all or nothing - you could use a dbml and LINQ just for the new stuff, and optionally migrate the rest of the project over time.

When I was moving over to Linq2Sql there were some fetch procedures that I had a lot of trouble converting to LINQ. This usually indicated that my use case/object design/table design was inappropriate to begin with (nullable columns on joins, etc.).

JonM replied on Monday, October 05, 2009

You are right. Most of the select queries are fairly simple. The main reason I keep them in SPROCs is because I control all of the SPROC code, whereas other developers write data access code in their applications. By forcing them to use my SPROCs I can force them to use queries that I can control and tune for good performance. It also helps me stop bad design early. I have run across issues where a dev wants to use data in the database in a way not intended. By controlling all the access points I can catch/correct/redesign for these issues early on. I guess it just depends on your working environment.

xAvailx replied on Wednesday, September 30, 2009

As others have pointed out, you can still stick with ADO.Net or whatever data access technology you prefer.

I agree that ProjectTracker has gotten very confusing over the years as new features get added and different styles are used in the examples. I would suggest looking at the templates and starting from there instead. The reduction in lines of code when I compare CSLA 1.5 vs 3.6 is quite significant.

Regarding LINQ, we transitioned to Linq2Sql and don't regret the decision. We do all of our CRUD operations through Linq2Sql; I am not sure why some only do the fetches through LINQ and not add/update/delete (maybe because that is how PT is written?). By moving to Linq2Sql or another ORM you will substantially reduce the number of artifacts that need to be created and maintained at the database level. Yikes, even remembering my old projects, where there would be 4-5 stored procs per table to maintain, gives me chills :)

Michael replied on Wednesday, September 30, 2009

xAvailx has got me thinking now. I suppose the reason I don't use LINQ for CUD is because of recommendations in other posts on this forum and in the book. With SQL Server's execution plan caching (and I presume other databases do a similar thing), the performance argument is not really relevant any more. And as I said earlier, I want my CUD to be very simple. If I have to worry about performance, I'll rethink my design.

I'll give it a go.

mbblum replied on Wednesday, September 30, 2009

The requirement to restrict and monitor access and changes to the database is a major reason many use stored procedures. It gives a focused control point into the data. Anyone working with data that must meet requirements like HIPAA, SOX, DoD, etc. usually cannot allow direct access to the data tables. As an example, if every query on the data has to be logged, you cannot depend on a program to do that.

Which is why many will have to continue using sprocs, making Linq2Sql direct against the tables a non-option.

For those who are using databases with an open access policy, linq2sql can be a powerful option.

ajj3085 replied on Thursday, October 01, 2009

So you're saying the sp does some kind of auditing / logging even for selects? That makes sense.

Ignoring that though, there's no need to allow direct access to tables even using LINQ. Your Linq2Sql objects can be built off of views instead of tables, and IUD can be done automatically through stored procedures by changing the behavior of the entity.

In other words, I don't allow direct access to tables at all, and all IUD is still through stored procedures using Linq2Sql.

It's fairly natural too... create an object, set its properties, then call Attach or InsertOnSubmit. Relationships between tables are set by assigning one object reference to the associated property on another... and then Linq2Sql figures out the correct order to call the procedures for you, and sends the changes as a single batch to the database.
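
Something like this (a sketch with hypothetical Order/OrderLine entities whose insert/update/delete behaviors are mapped to stored procedures in the dbml designer):

var order = new Order { CustomerId = customerId, OrderDate = DateTime.Now };
var line = new OrderLine { ProductId = productId, Quantity = 2 };

// Setting the association is all the wiring needed; Linq2Sql
// works out that the parent proc must run before the child's:
order.OrderLines.Add(line);

ctx.Orders.InsertOnSubmit(order);
ctx.SubmitChanges(); // both mapped procs run, in order, as one batch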

xAvailx replied on Thursday, October 01, 2009

Good points re: auditing. I've had to do auditing for CUD but not for reading data. In those CUD scenarios I've used triggers.

SQL Server 2008 now has that auditing functionality built in.

http://msdn.microsoft.com/en-us/library/dd392015.aspx

Using Data Services / Astoria may solve some of those auditing issues as well (assuming there are hooks when data is requested).

rsbaker0 replied on Thursday, October 01, 2009

I'm admittedly in the minority on this, but I'm not a fan of using SP's because, among other things, you end up having to duplicate major portions of code for every back-end you want to support.

I understand why people recommend them, but it is difficult to achieve any degree of database independence if you do your basic IUD operations through SP's.

I'd much rather use an ORM like NHibernate and let my BO's do persistence through a layer that provides some independence from the database (not to mention the vagaries of Microsoft, witness Linq2SQL basically already joining the long list of orphaned or semi-orphaned Microsoft technologies).

JonM replied on Monday, October 05, 2009

Why is database independence important at all? Most projects I've run across that support multiple databases are basically open-source apps that don't scale well. I think it is cool that some of these programs can work with MSSQL/MySQL/PostgreSQL. The problem I see is that it keeps you from performance tuning, because different databases have special commands/tools to optimize them. I guess my question is: what does it gain you? Do customers require it?

rsbaker0 replied on Monday, October 05, 2009

JonM:
Why is database independence important at all? ... The problem I see is that it keeps you from performance tuning, because different databases have special commands/tools to optimize them. I guess my question is: what does it gain you? Do customers require it?


The customer requirement is the main thing.

Oracle has a huge install base out there and some of our largest installs require it.

I've found that experienced DBA's for both MSSQL and Oracle are able to tune fine. It's the ones without experienced DBA's where you get trouble. I'll leave it as an exercise for the reader as to which of these two databases is easier to get up and running without knowing what you are doing. :)

Another driver, less important now than when we first released our product 14 years ago, is that at the time there was no "free" version of SQL Server like there is now, so we have a large subset of customers on our legacy product that use Access as the back-end (although the OLE DB driver we have to use for it in .NET doesn't work anywhere near as well as the native DAO support did).

Another very cool thing that happened during our migration is that we ended up deploying an interesting "dual data portal" that connects both to a remote data portal and to a local SQL Server CE database. SQL Server CE speaks a slightly different dialect than SQL Server, but it was trivial to modify the ORM to support it.

JoeFallon1 replied on Monday, October 05, 2009

JonM:
Why is database independence important at all?  Do customers require it?

Bingo!

I had to support our app on SQL Server and Oracle for many years. We finally convinced our Oracle clients to switch to SQL Server and abandoned that database. (Thank goodness!) We can finally begin optimizing for SQL Server. We can also begin using SQL Server-specific functionality if we want to (new data types, user-defined functions, etc.).

Joe

rsbaker0 replied on Monday, October 05, 2009

JoeFallon1:
JonM:
Why is database independence important at all? Do customers require it?

Bingo!

I had to support our app on SQL Server and Oracle for many years.

Joe

Hmmm.

Last year MS had 18% of the database market versus 48% for Oracle.

http://www.reuters.com/article/rbssTechMediaTelecomNews/idUSN2634118720080826

So, you can increase the potential market for your application by roughly 250% (from 18% of the market to 66%) just by being database independent. That's why...

(I'm fairly neutral on the merits of one versus the other, just acknowledging the reality of the marketplace.)

Michael replied on Monday, October 05, 2009

rsbaker0:
Hmmm. Last year MS had 18% of the database market versus 48% for Oracle. http://www.reuters.com/article/rbssTechMediaTelecomNews/idUSN2634118720080826 So, you can increase the potential market for your application by roughly 250% just by being database independent. That's why... (I'm fairly neutral on the merits of one versus the other, just acknowledging the reality of the marketplace.)


That's a very broad statement. Are any of those Oracle customers going to be interested in my AutoCAD-specific software? Nope. Most customers don't care at all how or where their data is stored; they just want to be able to do stuff faster, better and cheaper than they could before. And good business object software is use-case-centric, not data-centric.

Michael replied on Monday, November 30, 2009

Hi xAvailx

I'm trying to do an insert purely with Linq2Sql, but I haven't been able to find an example that updates a child collection referring to a parent with an auto-generated Id. In some non-CSLA examples you can add rows directly to the child table, but with CSLA do you need to call SubmitChanges() on the parent first and then call the child update as usual?

if (FieldManager.FieldExists(ChildrenProperty))
    DataPortal.UpdateChild(ReadProperty(ChildrenProperty), this);

Kind regards
Michael

xAvailx replied on Monday, November 30, 2009

Hi,

Yes, I think you are on the right track. It shouldn't be any different using Linq2Sql. Here is what I do... (sorry about the VB...)

In parent DP_Update/Insert

Insert:

ctx.DataContext.RequestItems.InsertOnSubmit(data) ' data is the Linq entity
ctx.DataContext.SubmitChanges()

' ...set PKs (identity...)
mId = data.Id

FieldManager.UpdateChildren(Me)

Update:
Dim data = New DalLinq.RequestItem() ' same entity type as the insert
' ...init
ctx.DataContext.RequestItems.Attach(data, True) ' True = treat as modified
' ...set data props
ctx.DataContext.SubmitChanges()

FieldManager.UpdateChildren(Me)

In the child's Child_Insert/Child_Update:

Private Sub Child_Insert(ByVal parent As MyParent)

Private Sub Child_Update(ByVal parent As MyParent)

HTH, let me know if that makes sense, I can post more complete code :)
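
In C# the parent insert comes out roughly like this (a sketch only - the RequestItem entity, the IdProperty, and the ContextManager database name are assumptions carried over from the pattern above):

protected override void DataPortal_Insert()
{
    using (var ctx = Csla.Data.ContextManager<DalLinq.MyDataContext>
        .GetManager("MyDatabase"))
    {
        var data = new DalLinq.RequestItem();
        // ...copy business object state onto data...
        ctx.DataContext.RequestItems.InsertOnSubmit(data);
        ctx.DataContext.SubmitChanges();   // identity values populated here

        LoadProperty(IdProperty, data.Id); // capture the new PK
        FieldManager.UpdateChildren(this); // children now save with the new Id
    }
}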

VolkerS replied on Monday, November 30, 2009

Hi all,

I'm a newbie to CSLA and I must agree with the original poster.
I purchased both books (2005 and 2008), but from time to time, topic by topic, I like to dive into the code and debug/step through it. While installing the 2.1.4 samples was just a matter of building the framework itself, setting the reference paths and fixing some DB connection strings - et voilà, the sample app runs - I struggled with 3.8.1 and gave up after two hours.

Could it be that ONE single sample app is a little overloaded? Imagine an almost exclusively desktop developer who isn't used to WF/WCF/web services and all that stuff. Or is CSLA not that well suited for that scenario - is it like shooting at hummingbirds with artillery? Sometimes I'm in doubt...

Hmm, so many questions. Perhaps the Core Video series may help.

V

RockfordLhotka replied on Monday, November 30, 2009

Some of the sample apps that are part of the core video series are pretty simple - not n-tier, and with very basic data access.

I suppose you can argue that CSLA is too complex for simple apps, but I think it is a matter of ignoring parts of the framework. For a simple app, just create one project in your solution that contains the UI and business objects and data access - nothing wrong with that.

The same is true with .NET itself. People building simple apps simply choose to ignore 95% of the framework and are happy. You can surely do that with CSLA as well - just use the object structure and business rules subsystem and have fun :)

VolkerS replied on Wednesday, December 02, 2009

Hi Rocky, well, ignoring parts of the framework simply doesn't help to get the PTWin sample working. Exactly that was my point. After changing the connection string a dozen times in several projects I finally got it working (I believe WCFHost made the difference, or was it DalLinq?). But since I'm used to having a connection string (or several) stored in ONE place to get my app working, and not used to that WCF stuff, I think ProjectTracker as ONE sample app may be overly complex and overloaded, especially for those just beginning with CSLA who have no experience with that other stuff (which indeed may be fine, but is not a condition sine qua non for using CSLA), like me.

I viewed the first 30 minutes of your first Core series video and must say it's great so far, both technically and in terms of content. Please proceed. Live long and prosper ;-)

ajj3085 replied on Wednesday, December 02, 2009

The PT sample is designed to show many possible configurations: local data portal, remote data portal, different UIs, etc.

The easiest way to get it working would be to try it out in local data portal mode first. Also, it's designed to go along with the book, so you should be reading the book and referring to ProjectTracker, not trying to learn CSLA through the sample code alone. And that's only if you want to learn how the framework works and is designed.

If you just want to USE CSLA, then the video series would be most appropriate for you, and I don't think any of those even use ProjectTracker as a sample.

RockfordLhotka replied on Wednesday, December 02, 2009

Andy is correct - ProjectTracker is not a simple example. It is intended to illustrate a great many of the features of the framework.

That was kind of the point I was trying to make. Something like PTracker doesn't ignore parts of the framework. But some of the simpler apps in the Samples download are more focused and are much simpler.

The sample apps I'm using in the video series are all new, designed for the video series, and they are much simpler and more focused. Those samples aren't trying to show everything in one big app, they are trying to show specific things in several smaller apps - hopefully easier to digest.

Copyright (c) Marimer LLC