By Tim Landgrave
In any new enterprise system design using SQL Server, the architect has to make some key decisions regarding the structure of the data and the location of the application logic that will manage it. SQL Server has its own programming language (Transact-SQL or TSQL) that developers can use to manage data access, code business logic, and control transactions.
Using TSQL, developers can create stored procedures that encapsulate data access in reusable blocks of code that are precompiled and have their own permission sets. A special kind of stored procedure, called a trigger, can be attached to each table in a database. Triggers are “fired” when specific database events—such as Insert, Delete, or Update—occur on the underlying table. Using triggers, developers can write event-based business logic, so that changes to other SQL Server objects are driven by the Insert, Delete, and Update events for a given table.
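As a minimal sketch of these two constructs, the TSQL below creates a stored procedure that encapsulates an insert and a trigger that fires on that table's Insert event. The table, column, and object names (Orders, OrderAudit, AddOrder) are hypothetical:

```sql
-- Hypothetical stored procedure encapsulating data access for inserts
CREATE PROCEDURE dbo.AddOrder
    @CustomerId INT,
    @Amount MONEY
AS
BEGIN
    INSERT INTO dbo.Orders (CustomerId, Amount)
    VALUES (@CustomerId, @Amount);
END
GO

-- Trigger fired by the Insert event on dbo.Orders; writes an audit row
-- for each inserted record using the special "inserted" pseudo-table
CREATE TRIGGER dbo.trg_Orders_Insert
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    INSERT INTO dbo.OrderAudit (OrderId, AuditDate)
    SELECT OrderId, GETDATE()
    FROM inserted;
END
GO
```

Because the trigger runs inside the same transaction as the triggering statement, any logic it contains is enforced no matter which application performs the insert.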
Given this flexibility, why wouldn’t you write as much of the business logic as possible in TSQL and put it in the database?
Using TSQL to develop the application logic repository
There are certainly some logical reasons for using TSQL as the application logic repository, not only for a single application but also for a group of applications that need access to the same data. By centralizing data access and the business rules that manage that data in SQL Server, you can configure security so that no application can touch the underlying data without going through the business rules first.
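One way to enforce this is through the permission sets mentioned earlier: deny applications direct access to the tables and grant execute rights only on the stored procedures that implement the rules. A sketch, with a hypothetical role (AppUsers), table, and procedure:

```sql
-- Block direct table access for the application's database role
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Orders TO AppUsers;

-- Allow access only through the stored procedure that enforces the rules
GRANT EXECUTE ON dbo.AddOrder TO AppUsers;
```

With this configuration, even an ad hoc query tool connecting under the application's credentials cannot bypass the business logic.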
This is a common database paradigm for most two-tier client-server applications. This architecture places all of the business logic and data access in the back-end server and the rich presentation logic in the client. The client manages views of the business process and data, but does no local processing other than display rendering. Although this architecture has the potential to reduce maintenance costs by placing all of the business logic in a central repository, it does so at the price of reduced scalability.
I worked with a customer recently who had spent hundreds of man-months and thousands of dollars creating a very intricate application that used TSQL to manage all of its application logic. Although the architecture was quite elegant and performed very well with 10 to 15 users, it was too slow to be usable with 20 users. By adding processors to the SQL server, the customer was able to get about 60 simultaneous users on the system. But this was well short of their desired 100-user goal, and it made the company’s plan to open the application on the Internet impossible to execute. Because stored procedures and triggers only operate on local data, the customer couldn’t break the application into multiple SQL servers to support higher scalability. Instead the customer had to overhaul the application dramatically.
Using .NET classes for application logic
What this customer discovered the hard way, most application designers now recognize at the architecture phase: an n-tier architecture, with application logic contained in a central set of .NET classes, increases the flexibility and scalability of an application. Because TSQL is primarily a data manipulation language, writing complex business logic in it is possible but unwieldy.
Developers using the .NET Framework have their choice of languages to use when developing the application’s core business processes. This flexibility allows you to match the application requirements to the development language or development resources that make the most sense for your organization. And if developed properly, the objects that encapsulate these business processes can run on multiple machines and share the same underlying database server. When unburdened from processing TSQL business logic, the SQL server can handle huge numbers of simultaneous requests.
Row operations vs. set operations
One of the key guidelines to follow at the architecture stage is to use TSQL for set operations and .NET for row operations. Bringing large sets of data across a network connection impacts the overall performance of your application, so it makes sense to process them at the server whenever possible. But SQL Server cursors are very expensive objects in terms of memory and processing power, so creating a cursor to iterate through a set of records and operate on them individually rarely makes sense.
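The contrast is easy to see in TSQL itself. Assuming a hypothetical Orders table, the same discount rule can be applied as a single set operation or as a cursor that touches one row at a time; the set version stays on the server and lets the engine optimize the whole statement, while the cursor pays per-row overhead:

```sql
-- Set operation: one statement, optimized and processed entirely at the server
UPDATE dbo.Orders
SET Discount = 0.05
WHERE Amount > 1000;

-- The same work as a row-by-row cursor: expensive in memory and CPU
DECLARE @OrderId INT, @Amount MONEY;
DECLARE order_cursor CURSOR FOR
    SELECT OrderId, Amount FROM dbo.Orders;
OPEN order_cursor;
FETCH NEXT FROM order_cursor INTO @OrderId, @Amount;
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @Amount > 1000
        UPDATE dbo.Orders SET Discount = 0.05
        WHERE OrderId = @OrderId;
    FETCH NEXT FROM order_cursor INTO @OrderId, @Amount;
END
CLOSE order_cursor;
DEALLOCATE order_cursor;
```

When the per-row work is this simple, the set-based form wins outright; it's only when the per-row logic becomes genuinely procedural or CPU intensive that moving the rows to the middle tier pays off.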
When you need to perform row-based processing that includes either complex procedural logic or operations that are CPU intensive, you should retrieve the rows from the server and process them in middle-tier objects.
If you want to see an example of how to encapsulate your data access into middle-tier objects, download the Data Access Application Block from MSDN. This is a reusable data access subsystem, with source code, upon which you can base your own database- or application-specific data access objects.
By creating reusable .NET application frameworks to handle most application logic and using TSQL-based stored procedures as a security boundary and a mechanism for server-side set-based operations, you can create applications that take advantage of the best features of both TSQL and the .NET Framework.