CSLA 4.x data virtualization to support modern UI components?

Old forum URL: forums.lhotka.net/forums/t/10837.aspx


JCardina posted on Tuesday, November 01, 2011

I'm trying to match up the Telerik WPF GridView component to my CSLA business object lists.

They suggest binding to a LINQ source using their VirtualQueryableCollectionView object, which essentially takes the user's filtering, sorting, and paging choices in the grid and builds a LINQ query which, if the back end were LINQ-based, would (I gather) be issued to the SQL server automatically to cut down on the data transferred.
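To make concrete what "takes the user's choices and creates a LINQ query" means, here is a minimal self-contained sketch using plain LINQ to Objects standing in for the grid's query composition (no Telerik types; the ApplyGridState helper and its parameters are hypothetical, purely for illustration). If the underlying source were an EF or LINQ to SQL IQueryable, the same composed expression could be translated to SQL by the provider:

```csharp
using System;
using System.Linq;

class GridQueryDemo
{
    // Hypothetical helper: compose the user's filter/sort/page choices
    // onto any IQueryable source, roughly the way a virtualizing grid would.
    static IQueryable<string> ApplyGridState(
        IQueryable<string> source, string startsWith, int pageIndex, int pageSize)
    {
        return source
            .Where(name => name.StartsWith(startsWith)) // filter
            .OrderBy(name => name)                      // sort
            .Skip(pageIndex * pageSize)                 // page
            .Take(pageSize);
    }

    static void Main()
    {
        var names = new[] { "Adams", "Baker", "Avery", "Allen", "Brown", "Abbot" }
            .AsQueryable();

        // Second page (index 1) of names starting with "A", two per page.
        // Sorted "A" names are Abbot, Adams, Allen, Avery; skipping the
        // first page of two leaves Allen and Avery.
        var page = ApplyGridState(names, "A", 1, 2).ToList();
        Console.WriteLine(string.Join(",", page)); // Allen,Avery
    }
}
```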

I think they assume that the back end is a LINQ to SQL or Entity Framework or something, they refer to it as binding the grid to a "LINQ context". 

I'm confused about how to support this in CSLA.  I'm using the grid with read-only business object collections, and normally I don't use LINQ to SQL or EF or anything like that.  I use and require an encapsulated DAL for performance.

I'm lost on this at the moment.  Any theories, ideas, or suggestions to get me looking in the right direction would be highly appreciated.

Is the world moving this way, so that I need to suck it up, use LINQ to SQL or something to take advantage of new UI components, and just accept a loss of performance?  Or is there some way to take advantage of it in a small way, just for this exact scenario?

Curelom replied on Tuesday, November 01, 2011

There are samples (downloadable with CSLA itself) and ebooks at the store covering the use of LINQ to SQL and/or Entity Framework within CSLA.

This does bring up a question about the direction of the next version of ASP.NET, where Microsoft looks like it is moving toward using IQueryable, as detailed in Scott Guthrie's blog: http://weblogs.asp.net/scottgu/archive/2011/09/05/web-forms-model-binding-part-1-selecting-data-asp-net-vnext-series.aspx

Will CSLA be able to take advantage of this or would we be accessing CSLA the same way we do today?

JCardina replied on Tuesday, November 01, 2011

There are a *lot* of people complaining about Entity Framework's performance and the amount of code you still have to write for it.

It scares me off it completely for my application.

I think I still need to roll my own solution; there is just nothing as fast as direct queries from the business objects themselves in an encapsulated scenario.

Maybe I should look into implementing the IQueryable interface on my business collection objects so that queries work through to the db.

Curelom replied on Tuesday, November 01, 2011

Thanks Johnny

 

JCardina,

I don't think I've experienced having to write more code with EF than with ADO.NET.  As far as performance is concerned, I'm pretty sure I could write more efficient queries than EF does, but the performance hasn't been bad.  I believe you could also mix the two, using direct queries where speed is crucial.

Back to the Telerik grid: I've been using Telerik for a couple of years now, and I don't see how the way you load data into your CSLA business objects matters to the grid at all.  I've bound business lists to the grid in all sorts of ways, including directly to the ItemsSource, through a CollectionViewSource bound to the ItemsSource, or as IEnumerable results from business lists, without any issues.

JCardina replied on Tuesday, November 01, 2011

Curelom, EF seems adequate for trivial applications, but it seems like a huge mistake for any application of significant size and complexity with a huge database behind it.

A common complaint I came across about EF is that if you want any kind of performance anywhere, you have to hand-tweak.  I'm not willing to accept sub-par performance in my application because my users aren't.  I'm also not comfortable giving up control over the queries, and I'm absolutely not comfortable putting an unnecessary layer between the data and my business objects.  To me it just seems silly on the face of it; no need for any other reason, though there are apparently many.

Yes, the Telerik grid can be bound directly to the business object until, again, you have a non-trivial application where the users are potentially faced with gigabytes of data at the back end, and they need to be able to sort, filter in complex ways (with preset filter values like "This month" or "A-H"), and page through the data.  That's when it becomes problematic.

I have a highly performant, grid-agnostic way of doing this in the original web and WinForms UIs of the app I'm replacing.

I was trying to replicate that with the Telerik WPF grid, and it was going great until I discovered there's no event for when a user removes a filter.  That's what triggered my question to Telerik, and their answer was basically "don't worry about it, just bind to a LINQ source and it's all automatic".

I'm not doing it that way, and I'm not some code cubicle jockey.  I hand-craft software to perform as fast as possible and to work the way the user thinks, in a task-oriented manner, and that precludes almost any automatic hand-holding junk from Microsoft or anyone else, which seems created for the sole purpose of letting companies hire the least knowledgeable developers possible.

I don't want every click in my app to be a *request* for something to happen.  I want it to be an instantaneous action that happens now, not when all the intervening "happy helper" layers decide it's allowed to happen.

Sorry for the rant, but I'm frustrated with the whole plug-and-play approach to programming that is churning out so much crap these days.  As professionals I think we should all strive to do the best possible by our users; that's what the job is really about.

Curelom replied on Tuesday, November 01, 2011

I have several applications that deal with millions of records using EF and the Telerik grid, and I work with databases with tens of thousands of tables.  I've been programming for nearly 30 years.  I am not some partially employed developer who learned how to program from a C# for Dummies book and is completely clueless.  Partially clueless, maybe ;)

I do not believe they were talking about binding it to a LINQ to SQL source, but rather to the LINQ result of an in-memory collection bound to controls on the window.  It is hard to know from the few details you have given whether you want some or all of the filtering to occur on the server side or the client side, so I don't know how to give concrete examples.  Having middleman objects in the middle doesn't necessarily mean it will be slower, either.  LINQ uses the yield statement extensively under the covers and only pulls the data you ask for, when you ask for it.  If you read through the series on the ScottGu blog mentioned earlier in this thread, you'll see how using IQueryable provides an easy way to do the filtering and paging on the database side with less work from the developer.  Performance should be good, and this gives you more time to work on the more important "interesting windows".
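The "only pulls what data you ask for, when you ask for it" point can be demonstrated with plain LINQ to Objects (a self-contained sketch, not CSLA- or Telerik-specific; the side-effect counter is just there to make the laziness visible):

```csharp
using System;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        int touched = 0;

        // A million-item source that counts how many items are ever examined.
        var source = Enumerable.Range(1, 1_000_000)
            .Select(n => { touched++; return n; });

        // Composing the query runs nothing: no items have been touched yet.
        var query = source.Where(n => n % 2 == 0).Take(5);
        Console.WriteLine(touched); // 0

        // Enumeration pulls only enough items to satisfy Take(5):
        // it walks 1..10 to find the first five even numbers, then stops.
        var firstFive = query.ToList();
        Console.WriteLine(string.Join(",", firstFive)); // 2,4,6,8,10
        Console.WriteLine(touched); // 10, not 1,000,000
    }
}
```

With an IQueryable provider such as EF, the same deferral goes one step further: the composed expression tree is translated to SQL, so the filtering and paging happen on the database server.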

If you're uncomfortable putting any unnecessary layers in, get rid of CSLA.  You can always argue your way back down to assembly.  These additional layers can improve the readability, maintainability, and flexibility of an application, which are also important concerns for users.  Especially when the users tell you the one thing they would never change, they now want changed.

JCardina replied on Tuesday, November 01, 2011

Sorry if I gave the impression I was directing my rant at you specifically; I wasn't, it's just a general rant.  And I'm not asking how to do it; I already have a working solution (in commercial use for many years now, worldwide, in over 50 countries at last check. :)  Not that I'm not also partially clueless, because I'm certain I'm clueless about a lot of things, or I'd be a crappy developer. :) )

Despite your experience to the contrary, it seems a pretty common theme in my online research that EF has major problems at every level and is something to avoid if possible.  This is typical of the complaints: http://efvote.wufoo.com/forms/z7x3p9/

I'm just trying to determine if there are any other options out there that would be better than what I'm doing now.

(I've never seen a database with tens of thousands of tables; I didn't even imagine such a thing existed before now.  I can certainly see why you'd want to use something like EF there.  My problem is sort of the other way around: a properly normalized business domain database of only about 50 tables, but often a huge amount of data, and a major requirement to be able to slice and dice the data in interesting ways at the UI level.)

Curelom replied on Tuesday, November 01, 2011

The application I work with that has all those tables is Oracle Apps.  That is Oracle's ERP solution and is my bread and butter.  I agree, EF has LOTS of problems and has a way to go before it can be considered a mature technology.  I also don't use EF for all those tables, that would be a nightmare.  Oracle already has forms for dealing with most of them.  I build extensions that users can use that aren't addressed by Oracle.

The tables I use containing millions of records are in a custom data warehouse we built.  I addressed most of the performance issues on the database side, using properly partitioned and indexed tables.  I also use materialized views to get speedy aggregate information.  I don't know what materialized views are called in SQL Server, but I believe it has them.  Parallel partitioning is absolutely vital.  I then give the users a large subset of the data and let them filter on the client side to a smaller subset.

JonnyBee replied on Tuesday, November 01, 2011

Quote from ScottGu's blog post:

We could have returned the categories from our GetCategories() method using an IEnumerable<Category> – or a type that implements that interface like List<Category>.  Instead, though, we returned the categories using an IQueryable<Category> interface:

public IQueryable<Category> GetCategories() {
    var northwind = new Northwind();
    return northwind.Categories.Include(c => c.Products);
}

The benefit of returning IQueryable<T> is that it enables deferred execution on the query, and allows a data-bound control to further modify the query before executing it. 


CSLA 3.8.x and some previous versions supported the IQueryable interface and in-memory indexing. The benefit of using IQueryable with EF/L2S is that you get closer to the data access code, and that can change the resulting SQL. It's not so easy in a tiered application to let expression trees flow down to the data access. My assumption is that CSLA will access data the same way it does today, and that when a web app needs a list like that example, you will either send the "paging" parameters to the data access layer or use EF/L2S directly in your app.
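The "send the paging parameters to the data access layer" option can be sketched in a few lines. This is a self-contained illustration, not CSLA API: PageCriteria and CustomerDal are hypothetical names, and an in-memory list stands in for the encapsulated DAL (in a real app, FetchPage would run a stored procedure or parameterized SQL with OFFSET/FETCH on the server):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical criteria object the UI sends down instead of an expression tree.
record PageCriteria(int StartIndex, int PageSize, string SortBy, bool Descending);

static class CustomerDal
{
    // Stand-in data; a real encapsulated DAL would query the database here.
    static readonly List<(int Id, string Name)> Table =
        Enumerable.Range(1, 100).Select(i => (i, $"Customer {i:D3}")).ToList();

    public static List<(int Id, string Name)> FetchPage(PageCriteria c)
    {
        var ordered = c.SortBy == "Name"
            ? (c.Descending ? Table.OrderByDescending(r => r.Name)
                            : Table.OrderBy(r => r.Name))
            : (c.Descending ? Table.OrderByDescending(r => r.Id)
                            : Table.OrderBy(r => r.Id));
        // Only one page of rows ever leaves the DAL.
        return ordered.Skip(c.StartIndex).Take(c.PageSize).ToList();
    }
}

class Program
{
    static void Main()
    {
        // The grid asks for rows 20..24 sorted by Id ascending.
        var page = CustomerDal.FetchPage(new PageCriteria(20, 5, "Id", false));
        Console.WriteLine(string.Join(",", page.Select(r => r.Id))); // 21,22,23,24,25
    }
}
```

The criteria object crosses the tiers instead of an expression tree, which is why this pattern works with any back end, encapsulated DAL included.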

 

Copyright (c) Marimer LLC