I've done some research in this area. Of course LINQ continues to evolve, so it is impossible to say exactly what will be possible.
The primary issue (difficulty) is that LINQ generates IEnumerable<T> as a result - so it doesn't have any automatic way to put objects into an intelligent collection like one derived from BusinessListBase. This tends to limit what one can do. Still, I have tried a few things:
What about having LINQ objects as private data members (or lists thereof) inside of CSLA business objects? The public properties of the business objects could access the LINQ object properties directly when there happened to be a clean mapping.
I'm just not sure if the LINQ objects would be able to ride along as mobile children back and forth between client and server. Could they be serialized? Also, I'm not sure this would work because each LINQ object is tied to a DataContext object, which I think would need to stay alive on the server because it holds on to a db connection. Anybody have any thoughts?
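To make the idea concrete, here is a minimal sketch of that kind of wrapping, assuming a hypothetical OrderEntity class generated by the LINQ to SQL designer (whether such an entity can really ride along through serialization is exactly the open question):

using Csla;

[Serializable]
public class Order : BusinessBase<Order>
{
  // The LINQ to SQL entity held as private state inside the CSLA object.
  private OrderEntity _entity = new OrderEntity();

  // Public property delegates straight to the LINQ object where the mapping is clean.
  public string CustomerName
  {
    get
    {
      CanReadProperty("CustomerName", true);
      return _entity.CustomerName;
    }
    set
    {
      CanWriteProperty("CustomerName", true);
      if (_entity.CustomerName != value)
      {
        _entity.CustomerName = value;
        PropertyHasChanged("CustomerName");
      }
    }
  }
}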
FWIW - I haven't looked at LINQ as a way to retrieve my BO's from the database (yet) but have rather considered this as a significant step forward in the ability to provide dynamic search capability. The issue I see here is that by returning an IEnumerable<T>, it is possible to return objects that aren't pre-defined in our object graph. HOWEVER, we control the interface, don't we? So, it should be easy enough, or realistic enough, to limit access to what we expose from our BO's.
What I mean by this, and what I have toyed with in my investigation of LINQ thus far, is extending the base classes and data portal mechanism to include "Search"-like methods that provide the interface to LINQ. For example, we could now have a Where() method on our Person BO (following the LINQ syntax) that would return a PersonCollection containing the objects that matched our criteria. Under the hood, the Where() method would pass the LINQ criteria through the data portal to a new DataPortal_Search method in our BO which would enact the LINQ command, retrieve the selected records and return them as BO's contained in the collection class.
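A rough sketch of that shape (SearchCriteria is a hypothetical criteria wrapper, and this glosses over the fact that expression trees are not serializable, which comes up later in the thread):

using System;
using System.Linq.Expressions;
using Csla;

[Serializable]
public class Person : BusinessBase<Person>
{
  // LINQ-style entry point on the BO, as described above.
  public static PersonCollection Where(Expression<Func<Person, bool>> criteria)
  {
    return DataPortal.Fetch<PersonCollection>(new SearchCriteria(criteria));
  }
}

[Serializable]
public class PersonCollection : BusinessListBase<PersonCollection, Person>
{
  // Server-side end of the call, standing in for the proposed DataPortal_Search.
  private void DataPortal_Fetch(SearchCriteria criteria)
  {
    // run the LINQ query here and Add() a Person child for each matching record
  }
}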
This sounds like a lot of work, but with generics and reflection, I'm sure much of it can be coded at a base-class level to eliminate much of the work for developers.
In addition, this approach will support the "traditional" data portal methods to instantiate and update our BO's while leaving the door open for LINQ to be implemented within the DataPortal_XYZ methods as Rocky described. Plus, because all of the actual LINQ-based actions take place on the "other side" of the data portal, the n-tier limitations Rocky explains are not an issue.
In the end, we have LINQ under the covers as a technology for data access and retrieval that is transparent to the BO and UI developers. As a development tool and class library designer who passes most of his work on to others to use in their development efforts, I find this ideal. I don't see LINQ as changing the way we code our objects but rather as a way to extend what we can do now, hopefully simplifying some of the extensive work that we've done in the past to implement the same features, and giving us an alternative way of bridging the BO-to-database gap.
HTH
I am also new to LINQ and will be reading more about it in detail (in between my Biztalk exam preparation). The possibilities it presents are interesting.
I think you are correct above because if the BusinessBase classes were modified to support LINQ-style queries, it would theoretically be possible to do:
IEnumerable<Customer> businessCustomers = Customer.Where(c => c.Type == "Business");
The concern is that this would mean tying CSLA to LINQ. Maybe it is possible to create BusinessBaseLinq-style classes separately?
Not only will this make the Customer class more dynamic, it will also remove the dependence on the Criteria class. The DataPortal_XYZ methods like DataPortal_Execute could still occur on the server side as per normal but use LINQ instead of ADO.NET.
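For example (only a sketch: Criteria is the usual nested criteria class, and CustomerDataContext/CustomerData are assumed to come from the LINQ to SQL designer), a collection's DataPortal_Fetch could swap the usual SqlDataReader loop for a LINQ query:

private void DataPortal_Fetch(Criteria criteria)
{
  RaiseListChangedEvents = false;
  using (var ctx = new CustomerDataContext())
  {
    // LINQ to SQL issues the SELECT; each returned row is loaded into a child BO
    // (Customer.GetCustomer is an assumed child factory method).
    foreach (CustomerData row in ctx.CustomerData.Where(c => c.Type == criteria.CustomerType))
      Add(Customer.GetCustomer(row));
  }
  RaiseListChangedEvents = true;
}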
I do not think I have enough knowledge of LINQ and CSLA.NET to make that call yet.
Maybe Rocky has some thoughts on that?
--
With Regards
Shailen Sukul
Software Architect/Developer/Consultant
(BSc | Mcts (Biztalk (IP)) | Mcpd | Mcts (Web, Win, Dist Apps) | Mcsd.NET | Mcsd | Mcad)
Ashlen Consulting Services Pty Ltd
(http://www.ashlen.com.au)
MSN | Skype | GTalk Id: shailensukul
Ph: +61 0421 277 812
Fax: +61 3 9011 9732
Linked In: http://www.linkedin.com/in/shailensukul
This thread is pretty old
If you look at the CSLA .NET 3.5 change log, there's already some good activity around LINQ support, and the results are, and will be, very cool.
http://www.lhotka.net/Article.aspx?area=4&id=71994329-d218-4c02-a4d4-ce5ea4ac8d4b
Remember that LINQ is not just one thing. There's LINQ to SQL, LINQ to XML, LINQ to Objects, LINQ to DataSets and LINQ to Entities. There'll be more as time goes on. Each is part of the family, but they are not the same thing.
I'm only focusing on the two primaries: LINQ to Objects and LINQ to SQL.
The two major areas of LINQ to Objects integration are:
One: The merging in of Aaron Erickson's indexed query concepts, so you can mark properties of a child class in a BusinessListBase list to be indexed, and if one of those properties is used in a Where clause then the index is used. This results in a major performance improvement, though there's a cost to building the index. But if you are doing multiple queries over a list and/or the list is very large then this feature can have a big benefit.
Two: Ensuring that a non-projection query against a CSLA list will result in a live view, much like FilteredBindingList, of the original list. This is important because otherwise you get a disconnected IEnumerable<T> which can be confusing. By default, adding/removing items from that list has no impact on the real list, and thus has no impact on the data when saved. That's a problem. To solve this, queries (non-projection) against CSLA lists will result in a richer return type that is a live view against the original list, so adding/removing items from that view adds/removes the items from the real list (almost exactly like FilteredBindingList). This results in a much more natural and (I think) expected set of behaviors.
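As a concrete illustration (projectList and the EndDate property are assumed names, not shipping API):

// A non-projection query: with LINQ to CSLA this returns a live view over
// projectList (much like FilteredBindingList) instead of a disconnected
// IEnumerable<T>, so adding/removing items through the result also
// adds/removes them from projectList - the list that actually gets saved.
var lateProjects = from p in projectList
                   where p.EndDate < DateTime.Today
                   select p;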
We're not going as far as the SyncLinq project, which should even keep projections in sync. However, it appears that SyncLinq will be compatible with CSLA and so I don't feel the need to duplicate all that effort. At the same time, the non-projection query issue is so common that I want to ensure it is a built-in feature of CSLA.
All that is just LINQ to Objects.
I'm concurrently working on LINQ to SQL support. L2S is primarily a replacement for ADO.NET in the CSLA .NET architecture. I don't believe it will replace criteria objects though, because LINQ queries are not serializable and even if they were they'd require too much knowledge of the data schema on the client to really be appropriate. Ultimately I think that L2S must be viewed as just another data access technology like DAO, RDO, ADO, ADO 2.0, ADO.NET and ADO.NET 2.0. That is to say, the use of L2S should be strongly restricted within an application, because the odds of you needing to replace it with "The Next Big Thing" in 2.6 years are very, very high.
Well, actually The Next Big Thing is ADO.NET Entity Framework, which is merely months away, so we're not going to get anywhere near the 2.6 year average lifetime for a data access technology this time around...
There are other issues to deal with around L2S (and ADO.NET EF) because of the way those technologies use "context". They both have non-serializable context objects that listen for events on the DTO/entity objects and monitor which ones are new/deleted/changed. That works fine in a stateful 2-tier world, but is really problematic in either a stateless or n-tier scenario.
CSLA .NET supports n-tier scenarios - that is a core part of its identity. So right there we have issues with this context concept.
And CSLA .NET supports stateless scenarios like stateless web apps, services, workflow activities, etc. That's a double blow to the context issue.
Fortunately CSLA objects have always maintained this contextual state themselves. So it isn't like the state is ever lost, but the L2S or EF context object does get lost (by definition - it isn't serializable and can't be maintained in a stateless scenario like the data portal).
None of this is a problem with retrieving data. But it is a problem when doing insert/update/delete operations, because things get complex. Your options (in general) are:
1. Re-query for the current data inside the data portal so the context has something to track changes against before applying your updates.
2. Write code to manually tell the context which objects are new/changed/deleted so it can generate the right SQL.
3. Bypass the context's change tracking and call the strongly-typed insert/update/delete methods (typically mapped to stored procedures) directly.
There's no clear winner there. Certainly option 1 seems very poor. Option 2 is my area of focus, to see if I can help simplify the otherwise unpleasant code - but I have yet to hit on a good solution. Option 3 is totally workable, and is actually quite nice if you are using the command concepts in L2S, where L2S abstracts the CUD operations into a set of nice, strongly-typed methods. It is particularly nice if you use atomic sprocs, because you can just drag the sprocs onto the designer and that gives you all the nice strongly-typed methods to call - very smooth.
As you can see, I personally really like option 3, but it offers no clear benefit over using the DAAB or other code-genned DAL - because basically it uses L2S just like a code-genned DAL (which it is, so that makes sense).
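To make option 3 concrete, here's a sketch assuming a ProjectDataContext whose UpdateProject method was generated by dragging an update sproc onto the designer (the fields are hypothetical):

protected override void DataPortal_Update()
{
  using (var ctx = new ProjectDataContext())
  {
    // Skip the context's change tracking entirely and just call the
    // strongly-typed method generated from the stored procedure.
    ctx.UpdateProject(_id, _name, _started, _ended, _timestamp);
  }
}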
Hi, RockfordLhotka:
> The two major areas of LINQ to Objects integration are:
> One: The merging in of Aaron Erickson's indexed query concepts, so you can mark properties of a child class in a BusinessListBase list to be indexed, and if one of those properties is used in a Where clause then the index is used.
> Two: Ensuring that a non-projection query against a CSLA list will result in a live view, much like FilteredBindingList, of the original list.
There are no samples outside of the unit test code, sorry.
But it is pretty easy - just put the Indexable attribute on your property and it will get indexed.
Right now the indexing only works for equality-based clauses like where id == 10 or something like that. The index is a hash table, and so can't do < or > operators (though Aaron is working on that for a future point release).
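For instance (a minimal sketch; resourceList and the Id property are assumed names):

private int _id;

// Mark the child property so queries against the containing list can use the index.
[Indexable]
public int Id
{
  get { return _id; }
}

// An equality test against the indexed property lets the query hit the hash index:
var match = from r in resourceList
            where r.Id == 10
            select r;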
Rocky
The reality is that all objects that you can have more than one of at the same time are joined at the hip to LINQ, mostly because all collections - anything that can be enumerated in a foreach loop - now have the ability to have LINQ queries run against them.
It's OK - I spend a lot of time in my talks helping people understand that LINQ is a technology for thinking about groups of things and the set theory that underlies them, and that LINQ to SQL (like LINQ to Entities, LINQ to CSLA, and dozens of other IQueryable implementations) is simply an optimization of the original LINQ to Objects.
Which is really cool if you think about it. Every case where you had a foreach with a condition statement inside it is now a LINQ candidate. What took multiple statements can now be done more expressively in a single statement. Less code, IMHO, is a good thing most of the time - especially when you lose no fidelity (i.e. var mySubset = from g in myCollection where g.SomeProp == 69 select g is much easier to understand than the alternatives, which sometimes span 5 lines of code).
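For example (Thing, myCollection and SomeProp are just stand-ins), the multi-statement loop and the single query do the same work:

// The old foreach-with-a-condition pattern...
var subset = new List<Thing>();
foreach (var g in myCollection)
{
  if (g.SomeProp == 69)
    subset.Add(g);
}

// ...collapses into a single, more expressive query:
var mySubset = from g in myCollection
               where g.SomeProp == 69
               select g;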
Just wondering if anything more has happened with integrating Bindable Linq (aka SyncLinq) into CSLA for full syncing?
> Just wondering if anything more has happened with integrating Bindable Linq (aka SyncLinq) into CSLA for full syncing?
No, there's never been any attempt to integrate SyncLinq into CSLA.
CSLA .NET 3.5 includes "LINQ to CSLA", which addresses two major areas of concern when using LINQ to Objects against CSLA business objects:
1. When doing a non-projection query against an editable collection (BLB subclass), LINQ to CSLA ensures that you get a "live view" as a result of your query. This means that when you add/edit/remove items in the query result, you are actually affecting the original list.
2. The functionality of i4o (indexed queries against objects) is included directly in CSLA .NET. This means you can apply the Indexable attribute to a property of a child object, and when you do a query against the list that contains that child and the where clause uses an equality check against the indexed property, the query will use the index to find the result.
Of the two, the one that impacts nearly everyone is the first feature. Without that feature a UI developer might have been tempted to do a query and bind the result to the UI - which would appear to work up to the point that the user tried to save the data. At that point the developer would discover that the original list (which is what gets saved) wouldn't contain all the changes made by the user. But with LINQ to CSLA, the developer can bind the query result to the UI and they'll get the expected results when saving.
The second feature, indexing, is useful in limited scenarios. Specifically where you are doing many queries against the same source list, and where the queries involve an equality check against a specific property of the child objects. This is particularly useful against large read-only lists that are cached for the lifetime of the app, and where other parts of the app often need to look up values against that large read-only list.
Hopefully CSLA .NET 3.6 (which is in active development) will include an enhanced implementation of the indexed queries so more operators are supported, including <, >, <=, >= and =. While this feature will always be targeted at scenarios where you are doing many queries against the same list, supporting more operators will broaden its usefulness quite a lot.
Rocky
Just wondering, is it possible to have something like below:
public static Project GetProject(Func<Project, bool> criteria)
{
  ...
  // so we can do something like this
  var data = (from p in ctx.DataContext.Projects.Where(criteria) select p).Single();
  ...
}
Yes, in concept that is possible.
It is tricky, because you need to convert the Func<> into something that is serializable, then convert it back on the server. I think that both LLBLgen and IdeaBlade do that sort of thing.
Personally I think it is a bad idea, because it pushes knowledge of the database/entity schema up into the UI. It is UI code that calls GetProject(), and basically you'd be coupling the UI directly to the data shape.
In my mind, one of the goals of having an object-oriented business layer composed of responsibility-based domain objects is so the UI does NOT know about the shape of the database or entities. The UI should interact only with domain objects that are designed to solve the problems of the use case.
Rocky
Copyright (c) Marimer LLC