After my last series of development articles, many CIOs who’ve begun converting their legacy and client-server applications to n-tier, Web-based applications responded by asking for a laundry list of pitfalls to watch out for along the way.

Although many of the specific problems developers encounter are environment-specific, I’ve put together a list of things that your development shop should look out for when designing, developing, and implementing n-tier applications.
The suggestions made here apply equally to any tier of a multi-tier application. However, most of the deadly mistakes occur in the middle, or business logic, tier rather than in the presentation or data tiers. Next week, we’ll look at specific middle-tier issues that affect your ability to develop effective n-tier applications.
Differentiating between performance and scalability
One of the first mistakes that system designers and architects make is to assume that performance and scalability are the same goal. Basically, an application’s performance is its ability to execute a process within a defined amount of time. Scalability, on the other hand, is its ability to serve multiple concurrent users.

Although performance and scalability are not the same thing, it’s difficult to separate them entirely. Your development team can develop a high-performance application that is also highly scalable. But it’s also possible (and more likely) that your team will create an application that performs extremely well, but will not be highly scalable.

If the team’s skill sets and developer intuition were built in the client-server world, they’re likely to be predisposed to creating applications that perform well but scale poorly. Before we bash client-server developers too much, though, you should recognize that it’s also possible to have a highly scalable application that is not a stellar performer.

For most applications, there is a “sweet spot” of acceptable performance that a user should expect while the application scales to a certain number of users per machine or per processor.
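One way to reason about that sweet spot is Little’s Law, which ties concurrency, latency, and throughput together. Here’s a minimal sketch in Python; the worker counts and latencies are hypothetical numbers chosen only to illustrate the performance-versus-scalability trade-off.

```python
# Little's Law: concurrency = throughput x latency, so a tier's
# sustainable throughput is its concurrency divided by its latency.

def max_throughput(concurrent_users: int, latency_seconds: float) -> float:
    """Requests per second a tier can sustain at a given latency."""
    return concurrent_users / latency_seconds

# A "fast" app: 0.1 s per request, but only 10 worker threads.
fast_but_limited = max_throughput(10, 0.1)

# A "slower" but more scalable app: 0.5 s per request, 200 workers.
slower_but_scalable = max_throughput(200, 0.5)

print(fast_but_limited, slower_but_scalable)
```

Even though the second application is five times slower per request, its greater concurrency lets it sustain four times the throughput, which is exactly the distinction the metrics above should capture.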

Early on in the development of an application, system architects must decide what the user needs in order to achieve acceptable performance and balance that against the cost incurred to reach scalability.

Unfortunately, most system architects wait until the development process is well under way before they determine and publish these two key metrics.

Improperly tuning the database
New releases of database servers from major vendors like Oracle and Microsoft continue to break performance records. But poorly tuned indices and queries can bring even the most robust database system to its knees.

It is still common to see developers code stored procedures or queries, and even run entire projects, without ever consulting a database administrator (DBA). Great developers aren’t typically great database designers.

But for most applications, the table design, the data types of keys, the degree of normalization (or denormalization), and the index structure all play critical roles in the performance of the system. Failing to involve the appropriate database resources early in a project’s design virtually guarantees performance problems in the data tier once the application is developed and ready to deploy.
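The index problem is easy to demonstrate on any database that can explain its query plans. The sketch below uses Python’s built-in SQLite driver; the table and column names are hypothetical, and the same before-and-after check works against Oracle or SQL Server with their own EXPLAIN tools.

```python
import sqlite3

# Build a small in-memory table with a hypothetical schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, i * 1.5) for i in range(1000)])

query = "SELECT total FROM orders WHERE customer_id = 42"

# Without an index, the plan reports a full scan of the table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before)

# After indexing the lookup column, the plan uses the index instead.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_after)
```

Making this kind of plan inspection part of code review, with a DBA in the loop, catches table-scan queries long before they meet production-sized data.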

Picking the first coding method instead of the best
Developers understand that their schedules are far more likely to be compressed than expanded. With that in mind, they tend to reach for the first algorithm or design pattern they believe will solve the problem instead of the best one. The ramifications of this choice usually surface when the project undergoes performance testing.

Developers will often fail to take into account the size of the production data set versus the size of the data set they used when testing their routines. Routines designed for efficient processing of small data sets don’t perform well on large ones, and those designed for processing of large data sets tend to bog down quickly when asked to perform on small ones.
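A classic example of a “first” solution that passes functional tests on small data is de-duplication by linear scan. The sketch below contrasts it with an equivalent routine built for large inputs; both return identical results, which is precisely why the problem hides until performance testing.

```python
def dedup_quadratic(items):
    """First idea: scan the output list for each item.
    Fine on a 100-row test file; O(n^2) comparisons overall,
    so it collapses on a production-sized data set."""
    out = []
    for x in items:
        if x not in out:        # linear scan of everything kept so far
            out.append(x)
    return out

def dedup_linear(items):
    """Same result using a set for O(1) membership checks."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```

Because the two functions are functionally identical, only a test against a realistically sized data set reveals that the first one was the wrong choice.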

The best way to avoid these poor selections is to involve people with strong architectural and testing skills early in the design of a project and, most importantly, to make sure the developers working on the project are well aware of its scope and metrics.

Not spending enough time on performance testing
As children, we all dreaded those days in school when we had to take tests. This fear of testing has become entrenched in the development environment and process as well.

If someone tests our code, then we may find out that we failed to meet the functionality or performance marks set by the application designer. Most developers work very hard to achieve functionality goals because failure to do so is obvious.

But performance testing is usually left to the “operations guys” at the end of the process, under the assumption that we can always “throw hardware at it” if we need to. Unfortunately, that isn’t always the case.

To be effective, you should create a formal test plan that addresses performance and scalability. The test plan should include regular stress testing beginning as early in the development cycle as possible.
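A stress test doesn’t have to wait for a dedicated lab to be useful. The sketch below is a minimal harness in Python: `handle_request` is a hypothetical stand-in for whatever middle-tier call you are testing, and the point is to run the same load at several concurrency levels and record percentile latency rather than a single average.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import quantiles

def handle_request(n: int) -> int:
    """Hypothetical middle-tier call; sleeps to simulate work."""
    time.sleep(0.001)
    return n * 2

def stress(workers: int, requests: int) -> float:
    """Drive the call under load and return 95th-percentile latency."""
    latencies = []
    def timed_call(i):
        start = time.perf_counter()
        handle_request(i)
        latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(timed_call, range(requests)))
    return quantiles(latencies, n=20)[-1]   # last cut point ~ p95

# Watch how latency changes as concurrency rises, instead of
# extrapolating from a single-worker run.
for w in (1, 4, 16):
    print(w, round(stress(w, 100), 4))
```

Running this regularly from the start of the development cycle turns the formal test plan into an early-warning system instead of a deployment-day surprise.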

To build a highly scalable application, you should perform a large percentage of the testing on the types of hardware you expect to be in production. It’s very common for testers to perform testing on single processor machines and assume linear scalability.

Without specifically testing sections of code on multiprocessor machines, you will not be able to identify scalability and performance issues specific to multiprocessor machines.

Another important part of the design process should include building interfaces into your systems and components that expose performance information through the underlying operating system.

Building these interfaces allows testers to collect and analyze performance information. More important, it allows the operations team to monitor, maintain, and fix problems in the application by analyzing system or component statistics that would not have otherwise been exposed. Building applications to be “testable” and “monitorable” should be a basic tenet of all application designs.

Tim Landgrave is the founder, president, and CEO of Vobix Corporation, an application service provider based in Louisville, KY.

What other steps do you take to ensure that your Web-based applications will perform well? Share your tips in an e-mail or post a comment below.