Which DataPortal to use?

Old forum URL: forums.lhotka.net/forums/t/1249.aspx


DeHaynes posted on Monday, September 18, 2006

   OK, I have gone through the framework and modified it quite a bit.  I have gone through the process of building data tables, stored procedures, business objects, etc.  I have a fully working system, but I am going to take the advice of a couple of people and start from scratch, with the intent of subclassing instead of modifying the existing framework.  This will be my production framework.

   I need to make a final decision on which DataPortal type to use: Remoting, Web Services, or Enterprise Services.  So I am interested in some opinions on how you have set it up, along with some numbers.

   My situation is I am building the company's central database.  Yes, that is right, they don't have one yet.  I plan on using CSLA-type BOs for every application we have that requires access to the SQL Server.  For hardware, we have a brand new IBM xSeries server with a 3.06 GHz Xeon and an open slot for another CPU.  It has 1.5 GB of RAM and is running SQL Server 2005.  I had planned on running the CSLA application server on the same box as the SQL Server.  My company only has 50 employees, so I expect our connection load to be around 75-150 when considering Internet access.  I read about the different connection options in the forums, and I don't want to put words in Rocky's mouth, but he seemed to say he did ES only because he was forced to.  He didn't see an issue with using Remoting or Web Services.

   So have any of you put the BOs into production and what is your opinion?

RockfordLhotka replied on Monday, September 18, 2006

To be very clear: my absolute preference is the LocalProxy - avoiding all the overhead, complexity and cost of a remote app server. If you can find any way to use the local data portal configuration, you should do it!

I created the Web Service channel as a proof of concept - illustrating that even a technology not designed for client/server can still be used for the purpose. And illustrating the basic code you'd need to support WSE to get the WS-* benefits. But I personally would lean away from using the Web Service channel if possible.

I created the Enterprise Services channel because there are cases where Enterprise Services (DCOM) is faster on the network. It is also the case that COM+ provides more management capabilities than IIS. Finally, this channel doesn't require IIS on the server, and some companies view this as a key benefit. However, this channel does use DCOM, and thus is problematic for no-touch or ClickOnce deployment, and so I would lean away from this channel as well.

That leaves the Remoting channel. The only drawback to this channel is the oft-repeated Microsoft mantra to "avoid Remoting". They give that advice primarily to avoid headaches when upgrading to WCF in the future. But within the context of CSLA that isn't such a big issue, since there'll be a WCF channel, and you should be able to switch to it with little or no impact on your application.

But again, physical n-tier is costly - in terms of hardware, hardware support, points of failure, software complexity, software support and performance. If you can avoid physical n-tier, you should.

If you need n-tier to get scalability or security, then make sure those benefits outweigh the costs.

If you still decide to do n-tier, then I'd lean toward the Remoting data portal channel as my first option.
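
For reference, here is a minimal sketch of what that choice looks like in App.config, following the CSLA .NET 2.x conventions. The CslaDataPortalProxy and CslaDataPortalUrl appSettings keys are the framework's own; the server URL is a hypothetical placeholder:

    <configuration>
      <appSettings>
        <!-- Default: local data portal - everything runs in-process -->
        <add key="CslaDataPortalProxy" value="Local" />

        <!-- To switch to the Remoting channel instead, use something like:
        <add key="CslaDataPortalProxy"
             value="Csla.DataPortalClient.RemotingProxy, Csla" />
        <add key="CslaDataPortalUrl"
             value="http://appserver/DataPortalHost/RemotingPortal.rem" />
        -->
      </appSettings>
    </configuration>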

DeHaynes replied on Monday, September 18, 2006

   I am a little confused.  In order to use the local DataPortal, I would have to have the factory set up on every client, right?

   Maybe I need to do some more reading.

ajj3085 replied on Monday, September 18, 2006

No, you shouldn't need anything special on the client, except of course the .NET Framework itself.  The local data portal is the default; everything runs on the client, and the client connects directly to the database.

RockfordLhotka replied on Monday, September 18, 2006

One of the primary benefits of the data portal is that you can switch from local (2-tier) to remote (3-tier) with no changes to your application's code.

Now you can defeat that benefit by making certain architectural or coding decisions, but the core and intended behavior is to provide this flexibility.

There are coding errors you can make in a 2-tier model that prevent moving to 3-tier. But in my view those are bugs, and should be avoided. This is why I recommend continual n-tier testing, even if you plan to deploy in a 2-tier mode. N-tier testing will rapidly uncover such bugs so you can fix them early.

There are architectural decisions you can make - most notably having a separate DAL assembly - that might make 2-tier not work. But the reality is that a properly designed DAL could be installed on the client as well as on a server, and so the 2-/3-tier flexibility should be retained.
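
To make that concrete, here is a minimal sketch of the CSLA 2.x client-side pattern (the Customer class, its field, and its Criteria are hypothetical). Nothing in it names a channel, so the same code runs 2-tier or 3-tier depending purely on configuration:

    using System;
    using Csla;

    [Serializable]
    public class Customer : BusinessBase<Customer>
    {
        private int _id;

        protected override object GetIdValue()
        {
            return _id;
        }

        // Criteria object that carries the key to DataPortal_Fetch.
        [Serializable]
        private class Criteria
        {
            public int Id;
            public Criteria(int id) { Id = id; }
        }

        // Factory method - the only way the UI obtains the object.
        // Whether the fetch runs locally or on an app server is decided
        // by App.config, not by this code.
        public static Customer GetCustomer(int id)
        {
            return DataPortal.Fetch<Customer>(new Criteria(id));
        }

        private void DataPortal_Fetch(Criteria criteria)
        {
            // Data access goes here; it executes wherever the data
            // portal is configured to run (client or app server).
            _id = criteria.Id;
        }
    }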

DeHaynes replied on Monday, September 18, 2006

   If I understand right, when you compile your application, you will get a DLL that is the framework.  When you set the application to run in Local Data Portal mode, you put the settings into App.config to tell it to run the code that makes the data calls itself.

   If later you want to centralize the data access, you just change the settings in App.config, and that will instruct it to run the code that communicates with a network computer via Remoting, Web Services, or ES.  Either way, the DLL will already be on the client.

   Is that right?

RockfordLhotka replied on Monday, September 18, 2006

Yes, that is essentially correct.

DansDreams replied on Tuesday, September 19, 2006

RockfordLhotka:
To be very clear: my absolute preference is the LocalProxy - avoiding all the overhead, complexity and cost of a remote app server. If you can find any way to use the local data portal configuration, you should do it!

<snip>


That leaves the Remoting channel. The only drawback to this channel is the oft-repeated Microsoft mantra to "avoid Remoting". They give that advice primarily to avoid headaches when upgrading to WCF in the future. But within the context of CSLA that isn't such a big issue, since there'll be a WCF channel, and you should be able to switch to it with little or no impact on your application.

I've also always just assumed I'd use the LocalProxy (direct DB connection) for performance reasons unless I needed to do something else.  But if you're using the remoting server and the application is set up to use SQL security, there would be a performance bonus from database connection pooling, right?  Given the appealing security bonus remoting offers, and what may be just a trade-off of more or less equivalent performance results, I'm becoming intrigued by remoting as my "standard" implementation.

Secondly, I just went to the Microsoft WCF (or WinCF, as the slides called it) presentation.  Yeah, they gave the "avoid remoting" mantra, but as you say, it was in the context of what you'll need to do to upgrade to WCF.

Which leads me to my question, Rocky - have you written about what your WCF channel will look like?  Given that WCF itself is similar to your design, in that the "channel" is simply a configured option that is transparent to the application, it would seem that implementing a WCF channel within your design would end up with a configurable channel that has, as one of its options, another configurable channel.  Or is the end result going to be more along the lines of choosing between the current paradigm and WCF, and configuring the latter via the standard WCF configuration?

RockfordLhotka replied on Tuesday, September 19, 2006

DansDreams:

I've also always just assumed I'd use the LocalProxy (direct DB connection) for performance reasons unless I needed to do something else.  But if you're using the remoting server and the application is set up to use SQL security, there would be a performance bonus from database connection pooling, right?  Given the appealing security bonus remoting offers, and what may be just a trade-off of more or less equivalent performance results, I'm becoming intrigued by remoting as my "standard" implementation.



I am reasonably convinced that database connection pooling will not offset the performance hit incurred by communicating with an application server - at least not until you have a large number of users.

You get connection "pooling" on your client too you know. Your own connection(s) are pooled for your subsequent use. As long as the user is hitting the database on a relatively frequent basis (there's a timeout somewhere), their connection actually stays open and is reused over and over again.

The value of the app server and pooling is that the database itself will have fewer open connections to deal with. So the scalability benefit of having an app server comes when the number of open connections to the database starts to impact the database server's performance.
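
As a quick illustration of that client-side pooling with plain ADO.NET (a sketch; the connection string is a hypothetical placeholder - pooling is enabled by default and keyed by the exact connection string text):

    using System.Data.SqlClient;

    string connString =
        "Server=dbserver;Database=CompanyDb;Integrated Security=SSPI;" +
        "Min Pool Size=1;Max Pool Size=100"; // pooling is on by default

    // The first Open() creates a physical connection and adds it to the pool.
    using (SqlConnection cn = new SqlConnection(connString))
    {
        cn.Open();
    } // Close/Dispose returns the connection to the pool; it stays open.

    // Same connection string, so the pooled physical connection is reused
    // and this Open() is nearly free.
    using (SqlConnection cn = new SqlConnection(connString))
    {
        cn.Open();
    }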

Another possible perf benefit to an app server is where the client is using a (comparatively) slow link to the servers. Since talking to the database can be chatty, direct communication over a slow, or high latency, connection can be very problematic. In that case, the data portal is helpful, because it sends an entire object graph in one call - allowing the app server to handle the chatty dialog with the database (presumably over a much higher speed connection).

DansDreams:

Which leads me to my question, Rocky - have you written about what your WCF channel will look like?  Given that WCF itself is similar to your design, in that the "channel" is simply a configured option that is transparent to the application, it would seem that implementing a WCF channel within your design would end up with a configurable channel that has, as one of its options, another configurable channel.  Or is the end result going to be more along the lines of choosing between the current paradigm and WCF, and configuring the latter via the standard WCF configuration?



I do have a prototype WCF channel available that you can look at.

However, that's really only half the story, because it doesn't allow you to use DataContract. To make DataContract work requires deeper changes in the framework, because it requires avoiding all use of the BinaryFormatter in favor of the new NetDataContractSerializer.

So that WCF channel works today, against .NET 2.0 apps that use Serializable objects. But there'll be an updated version of CSLA for .NET 3.0 - see the roadmap.
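
For the curious, the serializer swap described above looks roughly like this - a sketch assuming a [Serializable] object graph. NetDataContractSerializer (in System.Runtime.Serialization, .NET 3.0) preserves full CLR type fidelity the way BinaryFormatter does, but produces a WCF-friendly XML infoset:

    using System.IO;
    using System.Runtime.Serialization;
    using System.Runtime.Serialization.Formatters.Binary;

    public static class GraphSerializer
    {
        // CSLA 2.x approach: BinaryFormatter produces a binary payload.
        public static byte[] WithBinaryFormatter(object graph)
        {
            using (MemoryStream buffer = new MemoryStream())
            {
                new BinaryFormatter().Serialize(buffer, graph);
                return buffer.ToArray();
            }
        }

        // .NET 3.0 approach: NetDataContractSerializer emits XML but,
        // unlike DataContractSerializer, round-trips exact .NET types.
        public static byte[] WithNetDataContractSerializer(object graph)
        {
            using (MemoryStream buffer = new MemoryStream())
            {
                new NetDataContractSerializer().WriteObject(buffer, graph);
                return buffer.ToArray();
            }
        }
    }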


DansDreams replied on Tuesday, September 19, 2006

RockfordLhotka:


You get connection "pooling" on your client too you know. Your own connection(s) are pooled for your subsequent use. As long as the user is hitting the database on a relatively frequent basis (there's a timeout somewhere), their connection actually stays open and is reused over and over again.

The value of the app server and pooling is that the database itself will have fewer open connections to deal with. So the scalability benefit of having an app server comes when the number of open connections to the database starts to impact the database server's performance.

Right.  I guess the important point here (an enlightening "duh" moment for me) is that since it's fairly trivial to switch to the remoting server once you have reason to believe the total database connections have become a bottleneck, starting local is the proper approach from a performance perspective - by definition, you can't realize any performance advantage from reducing connections until they are a problem.


RockfordLhotka:

I do have a prototype WCF channel available that you can look at.

However, that's really only half the story, because it doesn't allow you to use DataContract. To make DataContract work requires deeper changes in the framework, because it requires avoiding all use of the BinaryFormatter in favor of the new NetDataContractSerializer.

So that WCF channel works today, against .NET 2.0 apps that use Serializable objects. But there'll be an updated version of CSLA for .NET 3.0 - see the roadmap.

I think that answers my real question.  I was struggling to align the current CSLA design with the contract paradigm, since they're essentially alternative solutions to the same problem.  (Not the data contract, the channel one, whatever the WCF term is.)

On that note - I did ask the presenter yesterday about remoting under WCF.  Since WCF seems to be largely a wrapper for the underlying communication, serialization, and security protocols, you would still ultimately be making the determination of what communication protocol is best for your situation.  And in that case, wouldn't you end up with remoting as often as you do today?  The presenter indicated that you could set up remoting under WCF, but his response was sort of "why would any sane person do that?", which seemed to come mostly from the Microsoft mantra that remoting is dead.

Or is the paradigm that you could end up with the same end result as remoting by configuring the WCF contract and channel, an HTTP channel talking to a server hosted under IIS, without it being true remoting as we know it today?

DansDreams replied on Tuesday, September 19, 2006

Actually, remembering the presentation I can partly answer my own questions.

Part of the position of remoting being dead was that the concept of your objects moving around the network is so "yesterday" compared to SOA.

Guess to that degree the concept of "smart mobile objects" is going to be even harder to justify against the juggernaut caused by Microsoft's endorsement of SOA, eh?

RockfordLhotka replied on Tuesday, September 19, 2006

DansDreams:

Guess to that degree the concept of "smart mobile objects" is going to be even harder to justify against the juggernaut caused by Microsoft's endorsement of SOA, eh?



Well, that's true. But our industry is circular in nature. SOA is just a fad - a rebirth of EAI from the 90's and EDI from the late 80's. Just like SaaS is a rebirth of ASP, which was a rebirth of "outsourcing" from the early 90's (the word meant something different then).

Marketing people get these ideas (most of which have little basis in reality) and they push them until they fail. Then they rename the idea and push again. Each time they make a lot of money off the newest crop of CxO's, who weren't around the last time they got ripped off by the idea.

I speak at so many events around the world (15-20 per year), and attendees, by and large, don't buy into the "SOA replaces n-tier client/server OO" crap. Service-orientation is a powerful concept and a good tool, but it doesn't replace OO any more than OO replaced procedural design. And it doesn't replace client/server. The goals of client/server and the goals of SO are fundamentally different, and if you use client/server to solve an SO problem you are in trouble - and vice versa.

Microsoft is very good at getting their people lined up along the party line. All working together to push a common agenda. But due to their size, they have a hard time anticipating and adjusting to market realities.

The reality is that SOA is at or beyond the peak of hype (Gartner Group has this excellent "hype cycle" chart, from which I pull the concept). The slope is all downhill from here, as people actually try SOA and find out that it is very useful, but only in very limited scenarios. And that if you "SOA" a client/server scenario you get terrible performance and software that is very costly to build and maintain.

Note that I am not saying SOA/SO is bad. On the contrary, when used to solve the problem for which it is designed (inter-application communication), it is excellent. By far the best solution evolved to date. But when misused (as a replacement for client/server for example), it is a terrible solution.

Use the right tool for the right job...

RockfordLhotka replied on Tuesday, September 19, 2006

DansDreams:

Or is the paradigm that you could end up with the same end result as remoting by configuring the WCF contract and channel, an HTTP channel talking to a server hosted under IIS, without it being true remoting as we know it today?



That's right. WCF does not do .NET Remoting as we know it today. At least in terms of the wire protocol.

However, look at the architecture of WCF: channels, sinks, formatters, etc. It is very much inspired by the architecture of Remoting. Much more so than the architecture of Web Services for instance.

But how many of us care what's on the wire? Really?

We care about the functionality provided by the technology. And it is at this level where WCF provides comparable functionality to Remoting. And to Web Services (asmx and WSE), Enterprise Services and MSMQ.

In the case of ES and MSMQ, WCF actually uses them behind the scenes - it is a wrapper.

But for Remoting, asmx and WSE, WCF is a complete replacement, and it does not use any of that pre-existing technology behind the scenes. What it does do is provide the moral equivalents to the functionality of those technologies.

In the case of Web Services, because SOAP is a standard, you can configure WCF to be wire-protocol compatible. In other words, WCF essentially has a "compatibility mode" so it can talk to older-style Web Services.
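
As a sketch of that compatibility mode (the service and contract names here are hypothetical), exposing a WCF endpoint over basicHttpBinding makes it speak the same SOAP 1.1 that asmx-style clients expect:

    <system.serviceModel>
      <services>
        <service name="MyApp.CustomerService">
          <!-- basicHttpBinding = SOAP 1.1, wire-compatible with asmx -->
          <endpoint address=""
                    binding="basicHttpBinding"
                    contract="MyApp.ICustomerService" />
        </service>
      </services>
    </system.serviceModel>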

No such compatibility mode exists for Remoting. So the drawback to Remoting, when migrating to WCF, is that you must upgrade both the client and server to WCF at once. And that can be a serious drawback in some cases.

In the case of CSLA and the data portal, due to the use of mobile objects you need to upgrade client and server in lockstep regardless. So the Remoting->WCF issue is, in my view, secondary and relatively unimportant. By the time you've chosen to use mobile objects (via CSLA or not), you have acknowledged that your client and server are part of a single client/server application anyway.

DansDreams replied on Wednesday, September 20, 2006

RockfordLhotka:

In the case of CSLA and the data portal, due to the use of mobile objects you need to upgrade client and server in lockstep regardless. So the Remoting->WCF issue is, in my view, secondary and relatively unimportant. By the time you've chosen to use mobile objects (via CSLA or not), you have acknowledged that your client and server are part of a single client/server application anyway.

Which is why I would expect Microsoft and other SOA pushers to suggest that mobile objects are such a bad idea.

And this I guess gets back to something you've written about several times - the seemingly obvious flaw in considering service type communication as the basis for all communication, even intranet inter-tier.

RockfordLhotka replied on Wednesday, September 20, 2006

Yes, in the SOA world there is a lot of focus on the _syntactic_ concerns around communication, to the complete and utter exclusion of _semantic_ concerns.
 
In other words, they are very focused on ensuring that the service endpoint and its acceptable communication parameters be clearly defined and unambiguous. And they are very focused on ensuring that the procedure API and parameter serialization conform to a strict contract. But the _meaning_ of the data is entirely outside the scope of anything within SOA.
 
So as long as you conform to the syntax of the contract, you can send anything - even things that have no meaning to the recipient.
 
I still remember a project from 1996, where I worked with a client that had been part of a working group for FIVE years: trying to define the word "product". They had, long before, defined the syntactic issues around how to transfer data (using COBOL-style data maps, a moral predecessor to today's XSD). But the _semantic_ meaning of the data was so different between the various parties that they'd spent five years unable to implement anything!
 
Not that mobile objects would have solved the issue outright. But that story illustrates the importance of semantic information, and mobile objects, by their very nature, bring semantic information (behavior) along with the data itself.
 
So while SOA is incredibly valuable when used for the right reasons, it is a choice that can cripple applications when used in other contexts - specifically because it goes out of its way to prevent the sharing of any sort of semantic meaning about the data that is ferried across the wire in those clearly defined contracts.
 
Rocky

DansDreams replied on Wednesday, September 20, 2006

RockfordLhotka:
COBOL-style data maps
 
Oh lordy, the memories.  We show our age.

shawndewet replied on Thursday, November 26, 2009

Rocky said:
"I am reasonably convinced that database connection pooling will not offset the performance hit incurred by communicating with an application server - at least not until you have a large number of users.
...
The value of the app server and pooling is that the database itself will have fewer open connections to deal with. So the scalability benefit of having an app server comes when the number of open connections to the database starts to impact the database server's performance."

Have there been any studies/experiences/recommendations regarding the number of database connections at which one should start considering use of a remote dataportal instead of the client dataportal?

I KNOW someone's first answer will contain "this is dependent on the hardware", so I'll lay out a few scenarios (and leave it to you to imagine the "typical" hardware that one could expect to find in their server rooms):
1) a small business with 5 to 20 users connected concurrently;
2) a medium business with 15 to 50 users connected concurrently;
3) a larger business with 30 to 100 users connected concurrently;
4) a corporation with 70+ users connected concurrently;
5) anything bigger is typically out of my market.

So I'm just trying to find out if there are recommendations/experiences about when to use the remote dataportal (and which one (WCF, etc.) to use).

Any advice?

RockfordLhotka replied on Thursday, November 26, 2009

As you guessed, it does depend on the hardware, and on what other apps are using the database.

Using "modern hardware" you can probably assume a database server is minimum quad core, probably eight core. Minimum 8 GB RAM, probably 16+. Minimum 7200 RPM drives in a RAID, probably 10k RPM drives in a RAID or a SAN.

Such database servers can easily handle around 500 concurrent users for an app, unless that app is incredibly badly written.

So my rule of thumb (in terms of scalability) is to go 2-tier unless I have hundreds of concurrent users.

OR the users are on a WAN (because WAN networks have higher latency).

OR there are security concerns where the connection strings need to be moved off the client and onto an app server.

Copyright (c) Marimer LLC