Windows Workflow Foundation (WF) is big, and how CSLA can support it

Old forum URL: forums.lhotka.net/forums/t/2315.aspx


survic posted on Saturday, February 10, 2007

I posted the following on Rocky's Blog:

http://www.lhotka.net/weblog/DoesWorkflowCompeteWithCSLANET.aspx

 

Hope it does not count as double posting.

 

 

A. Big picture: I agree WF cannot and should not replace OO. However, I believe WF is indeed big. 

 

(1) WF is “just” an automation scripting tool, just like VBA.

 

(2) However, just as we can use VB for application development, we can push WF deeper.

 

(3) Actually, we have the whole of COM, essentially for VB (OK, I twisted it a little to make it VB-centric, but you know what I mean). Of course, just as we cannot create a COM component for every object, there must be a limit to how far WF can go.

 

(4) The scripting is XML-based (XAML); however, in theory, it can be written in C#, etc. Note that I do not mean the “code-behind”; I mean writing the workflow in C# and having the IDE translate it into XML.

 

(5) The analogy goes further: a new generation of RAD is coming back; now it is enterprise-level, so let's call it ERAD. The workflow and its visual designer are the beginning of ERAD. Put this together with the whole of Office scripting, and that is the power of M$! Note that because of WF, it is also MDA: a WF is a more structured (hence more limited, but also better supported) "activity diagram". Now the flyweight joins forces with the super-heavyweight.

 

(6) Also, it is a structured (hence more limited but better supported) AOP load-time engine (i.e., an emit-style engine).

 

(7) It is a structured threading engine as well. Who needs low-level threading anymore?

 

(8) Also, it is a super-lightweight application server. It replaces EJB session beans! For example, it provides transaction services, and it requires you to do things in a certain way. It is all very similar to EJB. Note that I guess nobody cares about entity beans anymore; they have been replaced by OR mapping, which will be realized by the Entity Framework in the future.

 

B. How CSLA can support WF

 

(1) A WF activity is basically the workflow object (a “command” object). I believe CSLA should help make every such object a workflow activity, out of the box.

 

(2) A deeper WF “customization” of CSLA is to move the CRUD methods into separate classes. Entities are just entities. The four CRUD methods should be split into four activities. In short, together with (1), all CSLA database-related programming lives in “activities” (see the sketch below).
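
To make (2) concrete, here is a minimal sketch of the idea, assuming a CSLA 2.x-style CommandBase (with an overridable DataPortal_Execute) and entirely hypothetical names (CustomerEntity, FetchCustomerCommand); the ADO.NET body is elided:

using System;
using Csla;

// Hypothetical sketch: the entity is reduced to data plus validation, and
// each CRUD verb gets its own command class.
[Serializable]
public class CustomerEntity
{
  public int Id;
  public string Name = string.Empty;
  // Validation rules for the entity would live here (or in a rule engine).
}

[Serializable]
public class FetchCustomerCommand : CommandBase
{
  private int _customerId;
  private CustomerEntity _result;

  public CustomerEntity Result
  {
    get { return _result; }
  }

  public FetchCustomerCommand(int customerId)
  {
    _customerId = customerId;
  }

  // Runs on the server side of the data portal.
  protected override void DataPortal_Execute()
  {
    // ADO.NET code would load the row for _customerId and populate _result.
    _result = new CustomerEntity();
    _result.Id = _customerId;
  }
}

// InsertCustomerCommand, UpdateCustomerCommand and DeleteCustomerCommand
// would follow the same shape.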

 

As I pointed out some time ago, the basis of CSLA is the Command design pattern. WF’s activity is also based on the Command design pattern. In Java’s history, the Command pattern was very popular before J2EE but has been declining ever since. However, because of the workflow engine, it seems that in .NET the Command pattern will be the dominant design. Note that J2EE also has workflow engines, but historically they came after the J2EE server, and therefore they are only a feature of the application server.

 

(3) Note that those pure entity classes can use WF’s rule engine instead of CSLA’s rule engine (a sketch follows below). MS’s rule engine is very simple-minded right now; it is not really a “reference rule engine”, but it is still more powerful than CSLA’s. Also, it is everywhere, and it is free. It is going to become a de facto standard.
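
As a rough sketch of (3), assuming the .NET 3.0 rules API in System.Workflow.Activities.Rules and a RuleSet authored elsewhere (for example in the WF rule set editor); the names here are hypothetical and loading the rule set is omitted:

using System.Workflow.Activities.Rules;

public static class EntityRules
{
  // Runs a WF rule set against a plain entity; deserializing the RuleSet
  // from its .rules XAML is not shown here.
  public static void Check(CustomerEntity entity, RuleSet ruleSet)
  {
    // Validate the rule set against the entity type, then execute it with
    // the entity acting as the "this" object of the rules.
    RuleValidation validation = new RuleValidation(typeof(CustomerEntity), null);
    RuleExecution execution = new RuleExecution(validation, entity);
    ruleSet.Execute(execution);
  }
}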

 

(4) For performance, the workflow engine must be a singleton.

 

(5) The one part of CSLA that is not touched at all here is the data-binding-related support for those pure entities.

 

 

survic replied on Saturday, February 10, 2007

Perhaps I should make a second effort to sell it.

 

WF resurrects visual programming (the glory of VB/RAD). Note that you never know what will end up in a WF and what will not; as a result, the best approach is to make it really fine-grained, so let's build it up from the CRUD level. All CRUD methods must be activities, not to mention those “commands” (“unit of work”, “workflow”, “façade”, “coordinator”, “manager”, or whatever you like to call them).

 

I know that when I say “let entities be entities”, I am going against “objects are determined by their behaviors”. I never really believed in the “behavior” thing, because I have been using OR mapping for so long. OR mapping, SOA, and now WF all work against the OO-purist (“behavior”) view. The reality is that it is not about pure OO; it is about a good compromise for distributed computing.

 

At least it is a possible way of using CSLA. Note that the data-binding core of CSLA is not touched at all, so it is still CSLA. The data portal part is not affected either. It actually makes things simpler: all business objects are split and reduced into “commands”.

 

You ask, what is the purpose again? Visual programming! To support the idea that an end user can develop an application – I know, that is impossible and undesirable – or rather, to let a developer talk to the user to spec the application, and have the spec automatically translated into an application three minutes later; i.e., the “analyst” is the “developer”, in real time – isn’t that the dream of all of us?!

 

To realize the dream (I know, we still need OR mapping to get there), we need to make the activities as fine-grained as possible. This is the key counter to the argument that WF is only for inter-application work, because you never know what is inter-application and what is intra-application. Imagine you are developing a product for a client. You only deliver the binaries to the client; however, the client requires the ability to customize the product, and therefore needs a fine-grained API. You have to make all CRUD available.

survic replied on Monday, February 12, 2007

 

I am disappointed.

 

(1) Is anybody there using WF?

 

(2) Regardless of whether it is related to WF, is anybody here customizing CSLA in such a way that the CRUD methods are split into four command classes?

 

(3) Stepping back one step: is anybody splitting the CRUD methods away from the entity/data-binding features? Even Rocky points out that it is at least an alternative option.

 

(4) Stepping back two steps: is anybody on this list actually doing any “customization” of CSLA, or does everybody here “just use” the framework? I know that once you customize the framework, you are on your own. However, before this post, I (naively) believed that at least a third or a fourth of the people here could see the other side of the story. It seems I was totally wrong -- have people on this list developed a cozy “just use it” mentality?

DansDreams replied on Tuesday, February 13, 2007

I've made several posts regarding WF, partly trying to separate the hype from reality.

For example, let's take this paragraph:

"You ask, what is the purpose again? Visual programming! To support the idea that an end user can develop an application – I know, that is impossible and undesirable – then, it is to support a developer can talk to user to spec the application, and then spec is automatically translated into an application 3 minutes later, i.e., the “analyst’ is the “developer”, real time – isn’t the dream of all of us?!"

This all sounds good, and sure looks good when Microsoft presents it in their dog and pony show.  But in reality there isn't much "automatic" anything.  The presenters pull a cool sleight-of-hand trick: on the first slide they relate WF to the audience's need to address problems of physical process flow, and then on the rest of the slides they show WF to be a development tool that allegedly makes writing code easier.  WF has little to do with end-users writing business applications and solving their own business workflow problems, other than being a developer tool that allegedly helps you write those applications.

WF is essentially either a wrapper around your business code or an alternative UI for driving it.  You still have to type the equivalent code to represent the actual algorithms, business logic and data access.  You still have to write the code that would launch any of these workflows based on data analysis.  You still have to write the complex workflow management code that deletes the active workflow if those data conditions change, or reroutes the active workflow if a person is sick for the day.

I'm going to stop.  I've written about this fairly extensively in other posts.

DansDreams replied on Tuesday, February 13, 2007

It's pretty clear from a quick browse of the front page of topics that many people have "customized" CSLA, including data access issues as you mention.

Insulting the innovative ways CSLA has been challenged and pushed to the edges by users here won't make you a lot of friends.


survic replied on Wednesday, February 14, 2007

Using some pieces of a framework and dramatically changing everything else can be superior or inferior to using the framework “as is” -- it can mean that you have understood it inside out, or it can mean that you are still learning it piece by piece. So, no insult was intended.

 

I said it is disappointing because it is healthy for a list to have the “right” proportions of both, and for each of us to have the “right” proportions inside ourselves … It is disappointing that this list is “too mature” now – that is just my observation, of course. People are afraid of breaking backward compatibility, so to speak. Again, note that “maintaining backward compatibility” requires creativity as well.

     

As for the issue itself: “fine-grained WF/SOA” (i.e., “WF or SOA from the CRUD level”) is like using a COM component for each class. Note that I know its possible downsides -- that is why I use the analogy. I also like the “anemic domain model” name; it keeps reminding us of the downsides, but then, you have to do what needs to be done.

 

It is surprising that the voice of fine-grained WF/SOA is so quiet on this list. I guess the people who do it are not on this list – too bad for the list, and too bad for them as well, because the data binding in CSLA is really good.

 

It is good that Hibernate is getting more popular on this list. That will change the landscape. If not, the ADO.NET Entity Framework will. I guess I just need to be patient.

RockfordLhotka replied on Wednesday, February 14, 2007

I think most people hang out on these forums because the signal to noise ratio is almost perfect. There's almost never anyone here who's just posting to argue, or being disrespectful. It is a peaceful, productive corner of the Internet.

This thread skates dangerously close to noise, and runs the risk of losing any semblance of signal at all.

survic, if you are looking for an ideological debate, please go elsewhere - even my blog is fine, but not here.

But if you want to propose tangible, pragmatic ideas where you or others have or should alter CSLA .NET to solve real problems - that's awesome.

Or if you want to discuss specific areas where WF and/or SOA is useful when combined with CSLA .NET, or vice versa, that's awesome.

You'll note, however, that I keep all my ranting and editorial comments to my blog. And even there I avoid making deprecating comments about other people or the forums where they choose to participate. To do otherwise is simply rude and counterproductive.

Here, on these forums, I focus on pragmatic solutions to real problems. I keep the controversial content elsewhere, because it is too much like noise to too many people. And I'd appreciate the same courtesy from others.

Thank you.

DansDreams replied on Thursday, February 15, 2007

Survic, in the other threads on this topic the basic issue has come down to exactly what Rocky brings up - what is the pragmatic, real-world way that any of this matters?  Believe me when I say I have spent many many hours and significant $ investigating WF, and even though I'm not a dumb person I can't make it "click" as the de facto way to write business applications from now on as Microsoft would have us believe.

Some of the other threads have gone through some of those practical concerns individually.  If you're really investigating this from that point of view, I suggest going and reading those in the archives.

I am the last person that wants to get "left behind" by not adopting WF.  If someone could show me how it really delivers what is promised I'd start moving that direction tomorrow.  I just don't see it.

So, let's take the Project Tracker application and look at how it would be different if WF were added to the mix.  What specific design changes do you propose?

survic replied on Thursday, February 15, 2007

Rocky: There are two ways of using CSLA: one is to take the framework’s structure “as is”; the other is to make big structural changes and use only some pieces or just some ideas. Ideas for the second approach are not just noise.

 

Dan: I have followed the forum for years, and before posting about WF, as usual, in case I had missed anything, I searched the forum for WF and read your posts about it. I appreciate that you are always exploring and thinking – not just about WF; I have seen you doing that on other things as well.

 

As for details: I am proposing to move those CRUD methods into 4 separate activity classes, each activity class for one method.

RockfordLhotka replied on Thursday, February 15, 2007

survic:
As for details: I am proposing to move those CRUD methods into 4 separate activity classes, each activity class for one method.

So from an architectural perspective you are suggesting using WF to create a data access layer? That is an interesting idea.

I've been focusing on the reverse: using WF as another form of interface, so WinForms, WebForms, WPF, WCF, asmx and WF are all consumers of objects. And that all makes a great deal of sense to me.

I've also focused on the use of WF behind a Command object, where the object model (based on the business requirements) needs some complex, non-interactive, back-end processing. This, btw, is also where SOA tends to fit into the model quite nicely - and the two work well together (WF and SOA), because such long-running tasks really shouldn't be synchronous. Services and queues offer a way to do an async start for the workflow - the whole thing being kicked off by a CSLA .NET Command object, so the business layer includes the task as a natural part of the object model.
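
A minimal sketch of that shape, assuming the .NET 3.0 WF hosting API and hypothetical type names (StartFulfillmentCommand, FulfillmentWorkflow, WorkflowHost); in a real host the WorkflowRuntime would be a shared, long-lived instance rather than something created per call:

using System;
using System.Collections.Generic;
using System.Workflow.Runtime;
using Csla;

[Serializable]
public class StartFulfillmentCommand : CommandBase
{
  private Guid _orderId;

  public StartFulfillmentCommand(Guid orderId)
  {
    _orderId = orderId;
  }

  // Runs on the app server via the data portal; the workflow itself keeps
  // running asynchronously after this method returns.
  protected override void DataPortal_Execute()
  {
    WorkflowRuntime runtime = WorkflowHost.Runtime;   // hypothetical shared host

    Dictionary<string, object> args = new Dictionary<string, object>();
    args.Add("OrderId", _orderId);

    WorkflowInstance instance = runtime.CreateWorkflow(typeof(FulfillmentWorkflow), args);
    instance.Start();   // returns immediately; the long-running work continues on its own
  }
}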

But I hadn't given serious thought to using a WF to retrieve, insert or update data.

When talking to Microsoft people about WF and its intended purpose, the big thing they bring up over and over again is that it is not a lightweight engine, and shouldn't be used for high performance situations, at least not if you have to instantiate a workflow quickly or repeatedly. The big benefit to WF appears to be its ability to suspend and later resume a long-running task.

All that makes sense to me, and there's certainly value in the suspend/resume concept.

But I don't see how any of that makes WF a good fit in the DAL area. Rather the reverse, because in the DAL performance is paramount, and if the DAL was constructed as a set of workflows you'd end up creating a LOT of workflows - exactly what Microsoft recommends against.

Additionally, the idea of suspend/resume has no value at all in a DAL. The user is waiting after all, so it makes little sense to suspend their request and make them wait longer.

survic replied on Thursday, February 15, 2007

>>> “you are suggesting using WF to create a data access layer?”

 

No and Yes:

 

No: The bodies of those CRUD methods are NOT workflows. I guess that is doable -- by wrapping some ADO.NET methods into activities; however, that would not be meaningful, for the reasons you pointed out.

 

Yes: each of those CRUD methods is contained by an activity class. They are then consumed directly by button1_click, or by higher-level "command" classes -- CSLA’s “Command objects”. Note that this basically means we treat those CRUD methods as “Command objects” (see the usage sketch below).
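
As a usage sketch, assuming the kind of FetchCustomerCommand shown earlier and CSLA's DataPortal.Execute (customerBindingSource stands in for a designer-created BindingSource):

using System;
using System.Windows.Forms;
using Csla;

public partial class CustomerForm : Form
{
  // The UI consumes the fetch command directly and binds the returned entity.
  private void button1_Click(object sender, EventArgs e)
  {
    FetchCustomerCommand cmd = new FetchCustomerCommand(42);
    // The data portal returns the copy that actually ran on the server.
    cmd = DataPortal.Execute<FetchCustomerCommand>(cmd);
    customerBindingSource.DataSource = cmd.Result;
  }
}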

 

----------------

 

There is a caveat: within the bodies of the high-level “Command objects”, to take full advantage of WF, we had better do extensive XAML programming!

 

However, we can make a compromise: we can just call the CRUD directly, just as we are doing it now.

 

Still, there is an important difference: we now know for sure that every corner of the system is systematically WF-enabled.

 

In summary, WF is really just the last straw that tips the scale (this is typical when you are making pragmatic decisions – they are about trade-offs): because of WF, there is one more reason to split the CRUD from the data entity, and to split those four CRUD methods into four separate classes.

 

DansDreams replied on Friday, February 16, 2007

I'm left a bit confused by that last post, but let me see if I'm making any sense at all.

In the Microsoft WF/SOA/SharePoint/InfoPath/MOSS/EverythingMicrosoft worldview for designing applications, an enterprise application is just a big pile of activities that end-users can arrange and rearrange to fit their specific needs.  So there would often be CRUD-like activities.

Let's say we're creating a workflow for an order approval.  Somewhere along the line we would make a form that shows any existing orders for the customer.  This would have to use some activity to read the existing orders.  So, yes, in this sense you'd end up with a bunch of CRUD activities.

Let's just assume for now that is indeed a good way to build applications.  The big point not to miss is that somewhere down in that activity is still some ADO.NET code that actually does the reading.  And somehow the end-user making the form has to have a simple way of consuming that into some kind of list or grid control.

Now, the question is: can the code inside that activity really be just the few lines of code necessary to have a CSLA list load itself?  That seems an easy "yes" to me.  If so, then the question is whether this is a better design than writing the data access code directly in the activity.  Again, I say it's an easy "yes".

I've heard two people claim that one advantage of WF/SOA over OO CSLA is that you get code reuse only with the WF/SOA approach.  This boggles my mind.  Unless you're going to repeat the same data access code in every activity that offers the slightest twist from other similar ones, you're going to end up with all similar ones trying to use some sort of common code... leading you right back to some sort of business object - perhaps not in the CSLA sense, but at least as it relates to encapsulating the CRUD code.

So, I have to agree with Rocky that it seems to make sense the other way around from what you're suggesting (if I'm understanding you right).

DansDreams replied on Friday, February 16, 2007

Let me just also say that some time ago I tossed around the idea of making the BO-CRUD linkup more service oriented.  I'm not sure if it was on this forum.

My idea was that if you separated the data access code into a separate DataBO, and made the tie between them XML-based, you'd be really set for making the business objects very service-friendly from the other side too.  By that I mean it would be easier to create a web service that accepted XML for adding a customer if the business object was already designed to expect XML as its hydrating input.  All the XML handling stuff (what to do with missing or extra fields, etc.) would be written once, and leveraged on both sides.  This could also give some interesting flexibility on the data access side to make the business layer less fragile on database schema changes.
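
The "write the XML handling once" part might look roughly like this (hypothetical names, using XmlDocument; missing elements fall back to defaults and extra elements are simply ignored):

using System.Xml;

// Hypothetical data-side object that hydrates itself from XML, whether the
// XML came from the data access layer or from an inbound web service message.
public class CustomerData
{
  public int Id;
  public string Name = string.Empty;

  public void LoadFrom(XmlNode customerNode)
  {
    XmlNode idNode = customerNode.SelectSingleNode("Id");
    if (idNode != null)
      Id = int.Parse(idNode.InnerText);

    XmlNode nameNode = customerNode.SelectSingleNode("Name");
    if (nameNode != null)
      Name = nameNode.InnerText;

    // Elements not listed here are ignored, so schema additions on either
    // side do not break hydration.
  }
}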

Is this the same thing you're trying to describe?

I wish I could remember all the details, but I know when it was discussed before there was some real devil in the details and I quickly concluded it wasn't such a great idea after all.  Performance may have been one of the big issues - XML is a bloated pig and data access is an area you usually are trying to avoid that.

RockfordLhotka replied on Friday, February 16, 2007

I think it is clear, from what Microsoft has said about performance characteristics, that starting or loading a workflow for each CRUD operation would be a bad idea. CRUD operations are simply too fine-grained in this context. So the idea of using an activity behind every combobox and every form to get all that data seems unlikely to be practical.

However, a hypothetical button that starts something more than simple CRUD, but rather a larger and more complex process - now that makes a lot of sense in terms of having the button execute a Command object, which in turn travels to the app server and queues up a workflow.

And in a Windows Forms or WPF context this workflow/SO CRUD idea is counterproductive, because there needs to be some intelligent entity between the data and the UI to provide business behaviors. Well, need is a strong word. I guess you could resign yourself to building Windows/WPF apps with no more interactivity than pre-AJAX web sites. Ugh!

That said, I don't necessarily think there's anything wrong with having your intelligent entities (CSLA objects or something similar) call services to get/store their data. Conceptually that's all stored procedures are after all. However, as Dan points out, services are far less efficient than stored procedures thanks to all the overhead of XML and the service processing stacks.

Still, I think the idea of a layered architecture where the business layer interacts with entity objects (DTOs) which, in turn, are persisted, is a very workable idea. This doesn't require the use of XML in any way, but it would enable it where appropriate.

There's a minimum cost to this: the data is loaded from the data source into the DTO/entity objects and then must be loaded into the fields of any business objects. Some might argue that the entity objects can be contained within the business objects, and with some entity objects that may be possible. However, in the general case this leaves you open to serious fragility issues. In a service-oriented scenario, those DTOs are defined by the service, not by your business objects. As the service changes over time, it may change those DTOs - that's the nature of SOA. And when that happens, your objects could be adversely affected.

This is why I firmly believe that there needs to be an Object-Message Mapping (OMM) layer in there, for the same reasons that an ORM layer is needed between objects and the database.

Remember that message-based entity objects are not designed around business behaviors. They are designed around hierarchical data modeling, which is almost, but not entirely, unlike relational modeling (to paraphrase Monty Python). Neither hierarchical data modeling (which I did a lot of in the early 90's), nor relational modeling gets you anywhere close to what you'd want in a good, responsibility-driven object model.

DansDreams replied on Monday, February 19, 2007

RockfordLhotka:

I think it is clear, from what Microsoft has said about performance characteristics, that starting or loading a workflow for each CRUD operation would be a bad idea. CRUD operations are simply too fine-grained in this context. So the idea of using an activity behind every combobox and every form to get all that data seems unlikely to be practical.

I don't entirely understand this statement.

I presented a specific workflow process here to a WF/MOSS expert.  The process involves performing a service for a customer where there are several checks of existing data first.  So logically it's a series of screens where first the customer's account status is checked, then a check is done to see if the issue already exists for that customer, then to see if there are any knowledgebase notes for that issue, etc.  The WF/MOSS solution was described as a series of activities, each involving reading specific data and displaying it on an InfoPath form (for example).

So, there wouldn't be a separate activity behind every combobox per se, but each of the half dozen activities in a workflow may very well involve CRUD operations (particularly reads).

The alleged glory of WF is that the "Check Existing Issues" activity/form could be used over and over in all kinds of different workflows, all under the design of end-users who best understand them.

My problem is that the "glory" doesn't really amount to all that much.  If I'm doing WinForms, I can easily enough create a user control to show existing issues etc., and with a few lines of code string them together in a "chain".  But then, I have a perhaps archaic and elitist view that end-users really don't understand things deeply enough to be writing their own enterprise applications.

RockfordLhotka replied on Monday, February 19, 2007

In late 2006 I was in Redmond for a connected systems event (WF and WCF are now under the same roof in a group called Connected Systems), where I had a conversation with several people from the WF team (devs and some marketing).

We got to talking about different usage scenarios - like whether you could somehow use the WF rules engine behind a business object. Think of the CSLA CheckRules() method invoking a workflow each time to get at the rules engine.

The problem, they said, is that instantiating a workflow is not cheap. You don't casually instantiate workflows, and trying to instantiate a workflow to check the rules for an object in an interactive setting wouldn't be effective.

Based on that, I extrapolate that if it is too expensive to create a workflow to check the rules for an object, it is most likely too expensive to create a workflow to handle data access as well. But I could be wrong on that count.

I don't think so though. 99% of the time a CRUD operation is a single activity - a single procedure. I seriously wonder how to justify incurring the overhead of WF 99% of the time for nothing, just to gain some value that 1% of the time where your CRUD operation does incorporate a bunch of extra, user-configurable, processing.


survic replied on Monday, February 19, 2007

I guess we have to stop using the word “BO” (just for this discussion), because it is split into two things: an entity (DTO) with validations, and a “command object”. It is very confusing if we continue to use the word “BO”, because we do not know which part we are referring to.

 

It seems that you tend to use “BO” to refer to the “command object”, while I tend to use “BO” to refer to the “entity”.

 

Database ------ command objects ------ UI
                (uses/passes entities) (uses entities)

 

 

survic replied on Monday, February 19, 2007

Note that “entities” are not a “layer”, they are mobile objects.

 

As for the cost of WF: just as with avoiding XAML programming, it is still possible and desirable to call the CRUD commands the same way as any other command objects.

 

The key is not to use WF all the time, but to guarantee that you can use WF on everything when you need it, even though, admittedly, a lot of the time you will not want to.

 

Also, I agree with Rocky that the validation logic cannot be done using WF. In the diagram, the validation logic is within the DTO/entity, with no WF at all. However, perhaps we can find a way to use the “rule engine” (which is different from using an “activity”).

survic replied on Monday, February 19, 2007

Also, note that the command objects themselves are directly “activities” (they implement the interface, etc.); there is no wrapping anymore.

RockfordLhotka replied on Tuesday, February 20, 2007

In this other thread, by the way, the person asking the question is proposing the same architecture:

http://forums.lhotka.net/forums/thread/12256.aspx

As you can tell, I am not a big fan of that architecture. I spent the first part of my career doing terminal-based programming, and I have no desire to go back. That's why I can't get too excited about the html-based web either - it is just glorified mainframe programming. (the new stuff, both AJAX and WPF though, now that starts to get interesting!)

Except survic is not proposing the same architecture. Why? Because those objects that move around have business logic - so they aren't DTOs. If you put business logic in a DTO then you no longer have a DTO. Putting logic in there directly violates the pattern, and so by definition it becomes something else.

The term "entity object" is also pre-defined, and is little more than a DTO.

While we could make up our own terms for this thread, that seems like a bad idea - especially when terms pre-exist for these common ideas. That, to me, is the primary benefit of the whole "pattern movement": a common language for discussing these concepts.

So what's been proposed is that we extract the data from the database, put it into a DTO-that-is-not-a-DTO (let's call it a business object for fun), and transport it across the network within a CSLA-style Command object so the UI can use it.

I see nothing wrong with this - though there's some inherent inefficiency in actually using a CommandBase-derived object for the Create/Fetch/Delete operations. A ReadOnlyBase-derived object is more efficient because it doesn't move across the network both ways. You only really need a CommandBase-derived object for Insert/Update operations.

And you can do all this today, without even altering CSLA itself. Just alter the way you use CSLA.

Consider Customer.

If you also create CustomerFactory, then you separate the persistence further from Customer itself. CustomerFactory then has the static methods, not Customer. CustomerFactory either is, or privately contains, the mobile objects that go through the data portal. CustomerFactory then has the DataPortal_XYZ methods, not Customer.

You can still technically derive Customer from BusinessBase, just ignore the data portal parts. The only thing you have to do is override Save() and throw a not supported exception, because it is no longer valid. Instead, CustomerFactory has the method (static), which accepts a Customer as a parameter.
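
A rough sketch of that split, with hypothetical members and the data access bodies reduced to comments (it assumes the CSLA 2.x data portal can route DataPortal_XYZ calls to a plain serializable factory class):

using System;
using Csla;

[Serializable]
public class Customer : BusinessBase<Customer>
{
  private int _id;
  // ... properties and validation rules as usual ...

  protected override object GetIdValue()
  {
    return _id;
  }

  // Persistence now lives in CustomerFactory, so the inherited Save is disabled.
  public override Customer Save()
  {
    throw new NotSupportedException("Use CustomerFactory.Save instead.");
  }
}

[Serializable]
public class CustomerFactory
{
  private Customer _customer;

  [Serializable]
  private class Criteria
  {
    public int Id;
    public Criteria(int id) { Id = id; }
  }

  // The static methods that would normally sit on Customer live here instead.
  public static Customer Fetch(int id)
  {
    CustomerFactory factory = DataPortal.Fetch<CustomerFactory>(new Criteria(id));
    return factory._customer;
  }

  public static Customer Save(Customer customer)
  {
    CustomerFactory factory = new CustomerFactory();
    factory._customer = customer;
    factory = DataPortal.Update<CustomerFactory>(factory);
    return factory._customer;
  }

  // The DataPortal_XYZ methods also move here, not onto Customer.
  private void DataPortal_Fetch(Criteria criteria)
  {
    // ADO.NET read goes here; build and populate a Customer, assign to _customer.
  }

  private void DataPortal_Update()
  {
    // ADO.NET insert/update of _customer's data goes here.
  }
}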

It is the same concept as I show in the book - it just has a couple more moving parts, and is therefore more complex. But it is also arguably more flexible, and I've used that technique (sort of) in the past when I needed that flexibility.

But I don't recommend it as a general rule, because complexity is bad, and I think you should only incur it in times of pressing need.

And really that's the thing with SOA or WF. They add complexity, at least today. Perhaps that'll change in 10-15 years as the tools mature, but today they add complexity. So using them should only happen when the benefits clearly outweigh the cost of complexity. And that can happen, but not in mainstream scenarios.

In short, the mainstream scenario should be as streamlined as possible, and still offer a way to tap into complexity in the rare case you actually need it.

I think the techniques I show in the book do exactly that. The static factory methods on Customer are very streamlined as they are. The DataPortal_Create/Fetch methods can, if needed, call a workflow or service to get the data. The same with the DataPortal_Insert/Update methods.

Wrapping these objects inside yet another object (a Command) merely increases the size of the byte stream that must move across the wire. The data portal already abstracts the network and the mobile object concept.

See the thing is this: as soon as you put business logic into your "DTO", you have a business object. The amount of data moving across the wire is totally dictated by the instance fields declared in your class. A DTO and a business object will have the same fields, because they need the same data. So the size is the same. But a business object encapsulates logic, where (by definition) a DTO or entity object does not.

So we're dancing around, I think, a miscommunication here.

Either we're talking about actually using DTOs and binding the UI to them like in that other thread, or we're talking about objects with logic (which then are not DTOs) - and that's what CSLA already helps you do.

Regardless, I fail to see where WF enters the picture in any big way.

Yes, a Command object can be used to kick off a workflow. Or a DataPortal_XYZ method can call or kick off a workflow. If those things make sense, then use the technology! The same with services. They fit in exactly the same spots.

Though services can be even more fun. Another way to use services is to mark your DataPortal_XYZ methods with RunLocal so they never go through the data portal. Then have your implementations just call services (instead of stored procedures) to get the data. Now your service technology is used to get data across the wire, and the objects never leave the client machine. Nothing wrong with that either, though personally I think if you are doing such a thing you should approach it using true SOA - meaning that you do NOT have tiers. You have two separate applications that are exchanging messages - and I've blogged about this numerous times.
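
A rough sketch of that RunLocal-plus-service shape, where CustomerService and CustomerDto stand in for a generated web service proxy (all names hypothetical):

using System;
using Csla;

[Serializable]
public class ServiceBackedCustomer : BusinessBase<ServiceBackedCustomer>
{
  private int _id;
  private string _name = string.Empty;

  protected override object GetIdValue()
  {
    return _id;
  }

  public static ServiceBackedCustomer GetCustomer(int id)
  {
    return DataPortal.Fetch<ServiceBackedCustomer>(new Criteria(id));
  }

  [Serializable]
  private class Criteria
  {
    public int Id;
    public Criteria(int id) { Id = id; }
  }

  // RunLocal keeps this on the client: the service hop replaces the data portal hop.
  [RunLocal()]
  private void DataPortal_Fetch(Criteria criteria)
  {
    using (CustomerService svc = new CustomerService())   // generated asmx proxy (hypothetical)
    {
      CustomerDto dto = svc.GetCustomer(criteria.Id);     // XML crosses the wire here
      _id = dto.Id;
      _name = dto.Name;
    }
    MarkOld();
  }
}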

DansDreams replied on Tuesday, February 20, 2007

I think there is indeed a bit of miscommunication going on here.

As Rocky and I have said, whatever you want to call it, when you start putting business logic into an object then by definition you have a (CSLA) business object.

Survic, I think what you are describing has in fact been discussed numerous times (at least on the old forum by my memory).  And thinking about it now, the paradigm is maybe not all that different from WF.

The idea is basically that a process may involve several BOs/entities/whatevers, and so you have this layer of functionality (thinking use cases, not code or physical layers) wrapping these BOs in various combinations.  The term "Unit of Work" has been used to describe this.  Staying within the CSLA point of view, this could indeed be a command.

This is not a new concept and not uncommon in complex enterprise applications.  And there's no reason why WF could not take advantage of these objects as well as the "regular" BOs or entity objects or whatever you want to call them.

But the key in my thinking is combining the two technologies in the appropriate way, as Rocky describes.  WF should be thought of as a layer or technology that takes advantage of the encapsulated business layer, not that it is the business layer.  (This latter thought is what I see as a pollution of a good design, and what in particular I can't make "click".)  The proper design IMO is that a WF activity just becomes a rather thin shell that exposes the business layer in an alternative way. 

I would still say that it's the developers who are going to need to retain tight control over how those activities can be strung together.  Is an end-user that's provided with an activity "ship an order" going to always remember that the customer must first pass a credit check?  Of course not.  So, what you're really going to have to deliver to the end-user would be a very complete and robust activity that takes advantage of a lot of code down in the business layer.  Writing such an activity would be a fairly trivial matter, precisely because it continues to leverage the advantages of the n-tier design of a good CSLA application.

RockfordLhotka replied on Tuesday, February 20, 2007

I have, by the way, given some thought to a new stereotype (base class) for CSLA .NET 3.0 that would directly support being a workflow Activity. However, thus far I haven't figured out what it would add over a workflow creator just creating a code or custom activity.

But if I figure out some value-add, then it would make sense to add such a thing to CSLA.

Making CommandBase inherit from Activity is out of the question - that'd be a breaking change, and would instantly require the use of WF where it may not be appropriate. So if this did happen it would be a new base class.

survic replied on Tuesday, February 20, 2007

>>>>(from Rocky’s last post) a new stereotype (base class) for CSLA .NET 3.0 that would directly support being a workflow Activity. ….Making CommandBase inherit from Activity is out of the question - that'd be a breaking change

 

That is exactly what I was talking about in my earlier, admittedly somewhat offensive post (again, I did not really mean it that way) when I mentioned “backward compatibility” -- it requires a lot of thinking. On the other hand, if I never think about using it “as is”, then I will do the most straightforward thing: CommandBase and Activity should be one class.

 

The added value? By doing it this way, you systematically guarantee that your system is automatically WF-enabled, in every fine-grained corner.

 

Note that I am not saying that using it “as is” has no advantages, especially when Rocky is actively leading and the forum is also active. I am just saying that there is another side to the story. I even admit that the “other side” “should” be the minority side. However, it is a necessary side, to keep us honest with reality, so to speak.

 

>>>>(from Dan’s last post) what you are describing has in fact been discussed numerous times …"Unit of Work" has been used to describe this.

 

Yes, I am aware of it. Actually, I was the last person who asked Rocky about it and prompted him to write down his first solution for it. It was the “workflow object”, which has now turned into the “command object”.

 

The new question is, as you pointed out: “the key in my thinking is combining the two technologies in the appropriate way”.

 

My answer is: “Making CommandBase inherit from Activity”, and use it everywhere!

 

The key difference between your approach and mine is that you are thinking in terms of “appropriate”, “should”, and “proper design”, while I am thinking about enabling it wherever you want. Do not get me wrong: I actually agree with you completely about all the rules of right design; however, because of RAD (see below), I believe we need to guarantee that every fine-grained corner is automatically WF-enabled.

 

>>>>(from Dan’s last post) it's the developers who are going to need to retain tight control over how those activities can be strung together

 

I totally agree. In one of my previous posts, I pointed out that we do not really believe the hype of M$ that users can do it. Actually, to be fair to M$, they also say that WF is a development tool, not an end user tool.

 

However, WF does re-open the door to RAD, and in RAD, anything and everything goes. This returns to my earlier point: we need to guarantee that every fine-grained corner is automatically WF-enabled.

 

 

>>>>(from Rocky’s first post) Except survic is not proposing the same architecture.

Thank you, Rocky, you really know me!  

 

 

>>>>(from Rocky’s first post) The term "entity object" is also pre-defined, and is little more than a DTO

 

Here, I do not agree.

 

In M$'s “application architecture” documents, an “entity object” (“Business Entity”) has validation logic. Another name is “domain object”, or “anemic domain object” – I am not trying to be picky about names, though. “Business object” is fun (but we have to keep adding “without CRUD”). “Anemic Domain Object” is a good name; it is nice that it abbreviates to ADO -- talk about creating confusion ;-)

 

Note that using entity objects, plus some trade-off helper objects (the downside of “helper entity objects” is that they duplicate some validation logic), is the “classic” way to cope with the flexible-DTO headache. It has been around for a long, long time and on many platforms: it is in both classic VB (classic CSLA is an example) and Java (lots of such stuff).

 

As a result, although I do not really care about names (I like “ADO” ;-), I do feel that without a fixed name, you get the impression that it has no history and is not in the mainstream.

 

>>>>(from Rocky’s first post)  It is the same concept as I show in the book - it just has a couple more moving parts, and is therefore more complex. But it is also arguably more flexible,

 

I agree that it is “the same concept as I show in the book”, but I do not believe it is “more complex”. On the contrary, it is much simpler: you use a “command” to wrap anemic domain objects. You need the command here because of the data portal AND the Activity. You see, the “data portal” goes together with the “Activity”. One stone, two birds -- what an elegant and simple solution!

 

>>>>(from Rocky’s first post) Wrapping these objects inside yet another object (a Command) merely increases the size of the byte stream that must move across the wire.

 

Wow, you are reading my thoughts, or I was reading yours ;-) However, I do not agree here. The wrapping is inevitable. Using your words, “it is the same concept as I show in the book” -- the data portal needs that. In your suggestion, it is the “CustomerFactory”.

survic replied on Saturday, February 17, 2007

>>>>(from Dan) separate DataBO ……Is this the same thing you're trying to describe?

 

Yes! I suggest that this “separate DataBO” be a CSLA “command object” (so it only handles one of the CRUD operations; we need a lot of them, at least four for each current CSLA business object), and I also suggest that it be a WF activity.

 

>>>>(from Rocky) the idea of a layered architecture where the business layer interacts with entity objects (DTOs) which, in turn, are persisted, is a very workable idea

 

Yes!!

 

>>>>(from Rocky) This doesn't require the use of XML in any way, but it would enable it where appropriate.

 

I agree. I can also see that XML is even better because, as Dan suggested, if we write all that validation logic in XAML, and if the client side can turn it into CLR code at load time, that would be wonderful. However, until that happens, using DTOs-with-validation-logic is a very workable idea.

 

>>>>(from Rocky) There's a minimum cost to this: the data is loaded from the data source into the DTO/entity objects and then must be loaded into the fields of any business objects.

 

No, that is not necessary. Those DTOs come “with validation logic”, so they are the “business objects”!

 

>>>>(from Rocky)  Some might argue that the entity objects can be contained within the business objects, and with some entity objects that may be possible. However, in the general case this leaves you open to serious fragility issues. In a service-oriented scenario, those DTOs are defined by the service, not by your business objects. As the service changes over time, it may change those DTOs - that's the nature of SOA. And when that happens, your objects could be adversely affected.

 

I know the "serious fragility issues" -- I used DTOs for a long time. I noticed that "real" DTOs suck, because as you pointed out: “service changes over time”! As a result, there are two ways to cope with it: one is to use “generic DTO” – a big name for the dictionary; another one is to use "domain entities" (i.e., DTO-with-validation-logic, or, anemic domain model).

 

A lot of the time, when “real” (“canonical”) entities are too wasteful, we can add a few helper entities – however, the point is that we try to control the change.

 

Note that using "domain entities" as DTOs is actually the solution of the "serious fragility issues", not the cause.

 

 

---------------------

As for the question of what all this is for (basically Dan's deeper question): I totally agree with you that we do not really “need” it; however, I know WF is not just an M$ thing -- J2EE server-side workflow engines have been around for years (though M$ creatively enables it on the client side as well; amazing!). So for now I am not really trying to figure out how to use it; for now, I am simply trying to make the architecture SOA-friendly and WF-enabled.

 

DansDreams replied on Monday, February 19, 2007

I'd really like to see a practical application of what you're suggesting.  It seems to me that once you start blurring the lines by putting validation logic into the DTOs you're heading quickly towards the model I'm suggesting.

In terms of logical layers, what I hear you describing is this:

Database -> DTO -> BO -> WinForms UI
             |      |
             |       -> Web UI
             |
              -> WF Activity -> WF UI

And I've been describing:

Database -> DTO -> BO -> WinForms UI
                    |
                     -> Web UI
                    |
                     -> WF Activity -> WF UI

Of course, in the second diagram the DTO is entirely optional.  The perhaps subtle difference becomes critical though when you're talking about where to write business/validation logic.  One of our goals is to eliminate repeating the same logic, especially when we're talking about critical business logic that must have the utmost integrity.  It just seems to me that in the first diagram you're going to have more and more critical logic creeping down a layer into the DTO, and what you end up with logically is just the second diagram anyway by definition once that push reaches critical mass.

 

RockfordLhotka replied on Tuesday, February 20, 2007

I think, then, that there is perhaps only confusion over the purpose of CommandBase.

CommandBase is a mobile object that allows the following sequence to occur:

  1. The object is created and initialized
  2. Logic runs on the client (or caller)
  3. Logic runs on the server (in the data portal)
  4. Logic runs on the client (or caller)
  5. Results (if any) are retrieved by the caller

This is not a simple or direct match for a WF activity. A WF activity is a subset of this behavior, and I don't think you can (or should) have a single stereotype representing both concepts. A WF activity:

  1. The activity is created
  2. Binding populates properties
  3. The activity is executed (in the caller's context)
  4. Binding pulls property values from the object

I think you can nest a CommandBase object inside an activity. But you can't interchange them. An Activity can call a Command, but a Command is much richer (potentially) than an Activity.

Remember that an Activity can only have one method: Execute. It can have many properties, but they must be read-write, and really should only expose simple data types (in case the workflow is suspended).

A CommandBase-derived object can have numerous methods. Yes, only DataPortal_Execute() runs on the data portal side, but many other methods can exist to allow interaction with the object in the caller's context.

Worse, remember that a COPY of the object is what comes back from the data portal. There's no way (that I've seen) to convince WF to swap out the current instance of an activity for a replacement instance. In other words, WF thinks it is executing instance X' and back comes X'' - but we have no way of getting WF to now point to X''...

Again I come back to the possibility of creating an ActivityBase object. However, it remains unclear whether this would help in any meaningful way. If I could save some code that'd be great, and I'd do it. But remember that the Activity must define the properties for binding within a workflow. And it must implement Execute(). That's it.

If you pair an Activity with a CommandBase though, you get something good.

The Activity has Execute(), which is just a little bit of code - often very little (depending on how you construct your command object itself):

void Execute()
{
  // Create the CSLA-style command and copy in the activity's bound input value.
  MyCommand cmd = new MyCommand();
  cmd.ImportantValue = this.ImportantValue;
  // The data portal returns a new copy of the command; keep the returned instance.
  cmd = cmd.Execute();
  // Copy the result back onto the activity so workflow binding can read it.
  this.ResultingValue = cmd.ResultingValue;
}

This way the command object can move to/from a server without confusing WF, because the use of a mobile object is encapsulated within the Execute() method of the activity.
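
Filling that out into a complete custom activity (assuming WF 3.0's Activity base class and DependencyProperty binding; MyCommand is the hypothetical command from the snippet above) might look like this:

using System;
using System.Workflow.ComponentModel;

// A sketch of the "thin shell" activity around a CSLA-style command.
public class MyCommandActivity : Activity
{
  public static readonly DependencyProperty ImportantValueProperty =
    DependencyProperty.Register("ImportantValue", typeof(string), typeof(MyCommandActivity));

  public static readonly DependencyProperty ResultingValueProperty =
    DependencyProperty.Register("ResultingValue", typeof(string), typeof(MyCommandActivity));

  // Read-write, simple-typed properties so the workflow designer can bind them.
  public string ImportantValue
  {
    get { return (string)GetValue(ImportantValueProperty); }
    set { SetValue(ImportantValueProperty, value); }
  }

  public string ResultingValue
  {
    get { return (string)GetValue(ResultingValueProperty); }
    set { SetValue(ResultingValueProperty, value); }
  }

  protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
  {
    // The mobile-object round trip stays inside the activity, so WF never
    // sees the replacement instance coming back from the data portal.
    MyCommand cmd = new MyCommand();
    cmd.ImportantValue = this.ImportantValue;
    cmd = cmd.Execute();
    this.ResultingValue = cmd.ResultingValue;
    return ActivityExecutionStatus.Closed;
  }
}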

survic replied on Tuesday, February 20, 2007

  1. >>>> Activity properties “really should only expose simple data types (in case the workflow is suspended)”.

      --- Can any serializable type be OK?

  2. Remote copy semantics vs. local reference semantics: perhaps another way is to forbid local semantics altogether? Anyway, we can find a way to forbid local semantics -- ideally via compile-time checking.

  3. Although it is not as elegant as I would like (I will keep thinking about it), if there is nothing better, I guess I will have to take your idea and “wrap” the command with an activity:

void Execute()
{
  MyCommand cmd = new MyCommand();
  cmd.ImportantValue = this.ImportantValue;
  cmd = cmd.Execute();
  this.ResultingValue = cmd.ResultingValue;
}

 4. It is really great to hear your thoughts on this, and Dan's ideas as well.

RockfordLhotka replied on Wednesday, February 21, 2007

When a workflow is suspended, WF serializes the workflow. I was told they are using the BinaryFormatter, but I'm skeptical because I would have expected one of the WCF serializers...

The thing is though, there are potential versioning issues. Suppose you expose a CSLA style (or any other custom) object as a property and it gets serialized. Then suppose you update that DLL before the workflow is resumed? The workflow would be unable to deserialize, and I assume that would be a bad thing.

survic replied on Wednesday, February 21, 2007

You are right, versioning opens that whole can of worms. It is tough.

 

However, even if you only use built-in CLR types, if the new version adds a field that is crucial to the new version, or changes a field name, etc., we will also be in trouble, correct?
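
For the "adds a field" case specifically, one mitigation (if the state really is serialized with the BinaryFormatter) is .NET 2.0's version-tolerant serialization; a renamed field is still a breaking change. A hypothetical sketch:

using System;
using System.Runtime.Serialization;

[Serializable]
public class OrderState
{
  private int _orderId;

  // Added in version 2: OptionalField lets the BinaryFormatter deserialize
  // version-1 payloads that do not contain this field.
  [OptionalField(VersionAdded = 2)]
  private string _trackingNumber;

  [OnDeserializing]
  private void SetDefaults(StreamingContext context)
  {
    // Supply a default when the payload predates the new field.
    _trackingNumber = string.Empty;
  }
}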

 

Also, on the remote copy / local reference thing: do you know of any usage patterns that can effectively forbid local reference usage? The pattern(s) must be clear enough to be easily enforceable, both by inspection and by tools like FxCop -- basically, I am thinking of sacrificing the flexibility and performance of the architecture (I know, it is a shame) in order to keep its simplicity (combining activity and command).

 

Copyright (c) Marimer LLC