For some time now, Microsoft has been pushing an architecture it calls Windows DNA, which involves dividing an application’s code into three distinct layers: data access, business logic, and presentation. It isn’t a new idea and it isn’t an approach specific to Microsoft, but the division of application code into distinct layers, each with distinct functional roles, is generally seen as a good approach. Recently, I worked on a project that gave me the opportunity to see how beneficial this approach is.
I was chosen to lead a project in the controlling department of a financial institution in New York City. Every month, the department heads in the bank were responsible for reconciling their individual departments’ transaction ledgers to the corporate general ledger. Any discrepancies were identified, investigated, and logged.
They had no standard for how to do this, and every department had developed its own methods and systems to accomplish the task. Because of these multiple methods and a lack of management oversight, many discrepancies between the systems had cropped up and hadn’t been investigated. In addition, there was no accountability and no central system for the controlling department to monitor these monthly activities.
The head of the controlling department wanted a system built to standardize the monthly reconciliation process. The system would provide a simple, Web-based interface that would allow each department head to view the corporate general ledger information, input the ledger entries from the department’s transaction ledger, identify the differences, and investigate the missing transactions. The completed reconciliation would be recorded and the controlling department could then monitor compliance.
Gathering the requirements
I assembled a team and held the first requirements gathering session with the users. The problems I was going to face quickly became very apparent: None of the key department heads showed up. Apparently, the only sponsorship for this project was from controlling. The individual department heads were content going on the way they had been. I ran the meeting as best I could but the users were giving me a lot of “I see your lips moving but all I hear is blah, blah, blah.”
The head of controlling saw this too. His solution was to appoint a single person to be in charge of defining the functionality. Unfortunately, even though the person assigned knew a lot about accounting, he wasn’t a department head and had never done a monthly reconciliation before. He also had very definite ideas about how the application should function and he wasn’t taking suggestions from anyone else. The manager thought that if we wrote the perfect system, we could force the users to adopt it.
I had seen too many projects fail because applications were developed without input from the key users and support from the stakeholders. This project was starting down that path. I did a quick risk assessment. The system’s goal was a good one. It added value and it would make the reconciliation process more efficient. But without the users’ input into defining the functionality, we were going to fall far short on the details necessary to make this system work. Whatever my team built, it was going to have to be very flexible. Once the users did get involved, there were going to be a lot of changes. We had to develop the application in such a way that we could react to the changes quickly and minimize the amount of additional development each change would entail.
Defining the architecture
Being able to adapt to changing requirements meant architecting the system with distinctly defined components and adhering to some strictly defined programming standards. Once we finished the system use cases defining the requirements, we started developing the object models and class diagrams using Visual Modeler. Because we were using Microsoft technologies (Active Server Pages and Visual Basic), we used the Windows DNA architecture as a framework. Our software layers would be defined this way (see Figure A):
Data access layer
At the data access layer, our approach was essentially one data access object per functional entity. For example, the use cases defined the concept of a general ledger account. The account had properties such as account number, associated department, account owner, and current monthly balance.
We created a database table for all account description data. There was a foreign key relationship to the user table for account owner. There was also another foreign key relationship to the monthly balances table. A simplified version of these tables is diagrammed in Figure B.
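In SQL DDL terms, the simplified tables might be sketched like this. All table, column, and key names here are assumptions based on the relationships described, not the project's actual schema:

```sql
-- Hypothetical sketch of the simplified schema described above.
CREATE TABLE Users (
    UserID    INT          PRIMARY KEY,
    UserName  VARCHAR(50)  NOT NULL
);

CREATE TABLE Accounts (
    AccountNumber  VARCHAR(20)  PRIMARY KEY,
    Department     VARCHAR(50),
    OwnerUserID    INT NOT NULL REFERENCES Users (UserID)  -- account owner
);

CREATE TABLE MonthlyBalances (
    AccountNumber  VARCHAR(20)   NOT NULL REFERENCES Accounts (AccountNumber),
    BalanceMonth   SMALLDATETIME NOT NULL,
    Balance        MONEY         NOT NULL,
    PRIMARY KEY (AccountNumber, BalanceMonth)
);
```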
The data layer Account object (aptly named AccountDL) was responsible for database SELECT, INSERT, UPDATE, and DELETE actions. A simplified Account object looked like this:
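As a sketch of what such a data layer object might have looked like (VB6 with ADO; the property names, column names, SQL, and connection constant are all assumptions, not the original code):

```vb
' AccountDL.cls -- hypothetical reconstruction of the data layer object.
' Requires a project reference to Microsoft ActiveX Data Objects (ADO).
Option Explicit

Private m_AccountNumber As String
Private m_Department As String
Private m_OwnerUserID As Long

' SELECT: populate the object from the Accounts table.
Public Sub Load(ByVal sAccountNumber As String)
    Dim cn As ADODB.Connection
    Dim rs As ADODB.Recordset
    Set cn = New ADODB.Connection
    cn.Open CONNECT_STRING   ' assumed module-level constant
    Set rs = cn.Execute("SELECT Department, OwnerUserID FROM Accounts " & _
                        "WHERE AccountNumber = '" & sAccountNumber & "'")
    If Not rs.EOF Then
        m_AccountNumber = sAccountNumber
        m_Department = rs("Department")
        m_OwnerUserID = rs("OwnerUserID")
    End If
    rs.Close
    cn.Close
End Sub

' UPDATE: write the current property values back to the table.
Public Sub Save()
    Dim cn As ADODB.Connection
    Set cn = New ADODB.Connection
    cn.Open CONNECT_STRING
    cn.Execute "UPDATE Accounts SET Department = '" & m_Department & _
               "', OwnerUserID = " & m_OwnerUserID & _
               " WHERE AccountNumber = '" & m_AccountNumber & "'"
    cn.Close
End Sub

Public Property Get Department() As String
    Department = m_Department
End Property

Public Property Let Department(ByVal sValue As String)
    m_Department = sValue
End Property
```

Because every SELECT, INSERT, UPDATE, and DELETE lived behind methods like these, a schema change meant editing only this class.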
There was a very close association between the information stored in the database and the information available in the AccountDL object. But the implementation of the database structure was hidden from developers by this object. This meant that any database changes that might be required in future application revisions would only require code changes to the methods of the AccountDL object. As long as the interface stayed the same, the objects of the business logic and presentation layers would be unaffected.
Business logic layer
Only objects in the business logic layer could access objects in the data layer. This layer also acted as the link between the data layer and the user interface. For example, after inputting all the necessary data, the user would click a button indicating that the reconciliation was ready to be completed. The button click would be handled in the presentation layer, which would pass the reconciliation off to the Reconciliation object in the business logic layer (named ReconcileBL).
The object would validate the information, verify that the corporate general ledger balance equaled the department ledger balance plus the balances from identified missing transactions, and record the information in the application. Objects in this layer would call functions in the data layer objects to retrieve and store information but they did not interact with the database directly.
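A sketch of that completion check, in the same VB6 style. The class, method, and parameter names below are illustrative assumptions, as is the ReconcileDL counterpart it calls:

```vb
' ReconcileBL.cls -- illustrative sketch of the business rule described
' above; names and signatures are assumptions, not the original code.
Option Explicit

Public Function Complete(ByVal GLBalance As Currency, _
                         ByVal DeptBalance As Currency, _
                         ByVal MissingTotal As Currency) As Boolean
    ' The reconciliation balances only when the corporate general ledger
    ' equals the department ledger plus the identified missing items.
    If GLBalance <> DeptBalance + MissingTotal Then
        Complete = False
        Exit Function
    End If

    ' Persist through the data layer -- business logic never
    ' touches the database directly.
    Dim oRec As ReconcileDL          ' hypothetical data layer counterpart
    Set oRec = New ReconcileDL
    oRec.SaveCompleted GLBalance, DeptBalance, MissingTotal
    Complete = True
End Function
```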
As with the data layer, as business rules changed (which I was certain they would), the code necessary to implement those changes would be confined to this layer. A data validation rule could be updated or (as became the case later) rules on whether a reconciliation was complete could be added without affecting the presentation or data access layers.
Presentation layer
The presentation layer posed a particular challenge with Active Server Pages because the server-side scripting is entwined with the HTML output. To minimize confusion, we standardized on a layout for all ASP pages.
Each page was divided into three sections. All scripting code was located in the first section, event handlers (client or server side) were in the middle section, and the last section contained the output HTML (with a minor exception that I’ll explain shortly).
The scripting code section was further subdivided. The first block of code would perform state checking and validation. This was necessary because of the inherent statelessness of HTTP.
The second block would perform the page's main logic, which usually involved calling business logic methods to return data for display or passing form information to a business logic method so some function could be performed.
The third block defined output variables for any information that would be displayed in the HTML section. This minimized the script code required between HTML tags.
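Put together, a page following this layout might be skeletonized as follows. The ProgID, variable names, and form fields are assumptions, not the project's actual pages:

```asp
<%@ Language=VBScript %>
<%
' ---- First section, block 1: state checking and validation ----
If Session("UserID") = "" Then Response.Redirect "login.asp"

' ---- First section, block 2: main logic via the business layer ----
Dim oAccount
Set oAccount = Server.CreateObject("Recon.AccountBL")   ' assumed ProgID
oAccount.Load Request.QueryString("AccountNumber")

' ---- First section, block 3: output variables for the HTML below ----
Dim sDepartment, sOwner
sDepartment = oAccount.Department
sOwner = oAccount.OwnerName
%>
<!-- ---- Middle section: event handlers ---- -->
<script language="VBScript">
Sub btnComplete_onClick()
    ' Client-side handlers delegate to the business logic includes.
End Sub
</script>
<!-- ---- Last section: output HTML ---- -->
<html>
<body>
  <p>Department: <%= sDepartment %></p>
  <p>Owner: <%= sOwner %></p>
</body>
</html>
```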
The middle section of code defined all the event handlers. Differentiation between layers could get vague here. For example, if an input field value needed to contain a minimum number of characters, was that presentation layer or business logic code? We decided to place all such code in the business logic layer. But because it would reside on the client side (as opposed to the server-side code discussed in the previous section), it would be implemented as ASP include files and related to the server-side objects by name. In other words, the server-side object for account handling was called AccountBL, and the client-side business logic functions related to account handling were defined in the file AccountBL_CS.asp. This modified the high-level model (see Figure C).
Note: There is no interaction between the client-side business logic functions and the data access layer. We kept all calls to the data access components in the server-side business logic components.
We went a bit further with the presentation layer, and these steps also helped minimize additional code and maintenance. On the HTML forms, we standardized the field names. The drop-down box listing all account numbers and titles appeared on a dozen pages and was called cmbAccountNumber on every one, and its on-click event handler was always called cmbAccountNumber_onClick().
Selecting an account number from this drop-down would display information about the account on that page. This meant making a call to the business logic Account object (AccountBL) to load the selected account, which would in turn make a call to the account data access layer object (AccountDL). Rather than putting these functions directly in the cmbAccountNumber_onClick() event handler, this code went into the business logic include files, and the handler simply called the corresponding function there. That way, a change to this process immediately became available to the Account Number drop-down box on all pages without code changes to each page individually.
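Concretely, the delegation pattern looked something like this. The LoadSelectedAccount helper and the form markup are illustrative assumptions; only the cmbAccountNumber and AccountBL_CS.asp names come from the project's conventions:

```asp
<!-- #include file="AccountBL_CS.asp" -->
<form name="frmMain">
  <select name="cmbAccountNumber" language="VBScript"
          onclick="cmbAccountNumber_onClick">
    <option>1000 - Cash</option>
  </select>
</form>
<script language="VBScript">
' The page-level handler only delegates; the real logic lives once in
' AccountBL_CS.asp, so every page picks up changes automatically.
Sub cmbAccountNumber_onClick()
    LoadSelectedAccount document.frmMain.cmbAccountNumber.value  ' assumed helper
End Sub
</script>
```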
Reacting to change
As expected, when the first version was released to the user community, interest began to grow. As interest increased, so did the number of functional changes required to make the application valuable to all departments. The largest change involved the page used to perform a reconciliation. We had designed a page that we thought contained all the information any department would need.
What we discovered was that even though the information was similar, each department used it in different ways. This went as far as one department reversing the meaning of the terms Credit and Debit because their initial department ledger had been incorrectly designed. The solution was to create a set of templates, 15 in all, each used to reconcile a particular type of account. We made considerable changes to the user interface. But because the logic behind a reconciliation wasn't changing and the underlying data structure was staying the same, only the code at the presentation layer needed to change. Imagine the hours of testing and retesting of the data access and business logic code that we saved because we didn't have to change that code.
After the project was complete, I did a rough calculation of how many extra hours were used up-front to implement this architecture vs. how many hours would have been required to implement the changes had we not layered the application code as we did. After all work was completed, I estimated that we actually reduced our overall development time by almost 35 percent by implementing the architecture the way we did. And although I haven’t had the chance to verify it, I believe the final application was much more manageable and extensible than it would have been had we not been so diligent.
How else have you implemented multilayer architectures?
This is one implementation for one particular project. How else have you used these architectures and how successful have your implementations been? Send your comments to us or post your suggestions below.