By Mary Deaton
You have design ideas for your Web site: you've researched your competitors' strategies, and you've read books and Web style guides to learn about the conventions for usable Web sites. But you're still not sure if your visitors will find the site easy to use. What now?
You could get a college degree in human factors engineering, or you could hire a human factors engineer. Great ideas, except you don't have four years to spare, and your boss won't pay for a consultant. That leaves two options: doing nothing, or doing it yourself.
Doing nothing will not solve your problem. But you can learn to do your own usability testing.
The most important design elements to test are navigation; labeling of inputs, categories, links, and buttons; searching; and multistep processes such as shopping or registration.
Anytime you do something unconventional, test it. Innovation is critical to advancing usable Web design, but don't turn your visitors into unpaid test subjects. Test in the safety of your own garage with willing subjects.
Before we look at when and how to test, let's list some guiding principles:
- Test early and test often.
- Test the right people—only those who fit your user profile. If you do not have a user profile, create one.
- Avoid biasing the results in any way. The purpose of the test is not to win an argument but to find out how people actually use a Web site.
- Test only the things that are not strong conventions or standards. Don't waste your time or the company's money on problems that other people have solved.
The books and Web sites listed to the right are valuable resources to help you learn more about when, how, and whom to test. If you buy only two books, I suggest the Web Site Usability Handbook by Mark Pearrow and the Handbook of Usability Testing by Jeffrey Rubin. Although focused on GUI software, Rubin's techniques are valid for the Web, and he walks you through the process of creating and implementing tests with great checklists and templates.
Testing in phases
At several points during the design process, you'll need to gather information about how people will use your site:
- When planning a site design, before coding begins
- When validating a tentative design
- When completing a functional prototype
- When beginning the beta test period
You'll test to gather different kinds of information at each stage. In the first two stages, you want user input to work through ideas and compare alternatives; those tests provide the information you need to build a prototype for determining how the design works for real people. The prototype test readies you for actual production work on the site. During the beta period, you'll search for any last-minute problems that you can fix before launching.
Design the test
When most people think of usability testing, they picture a person locked in a room, surrounded by one-way glass, and trying to use a computer application by following written instructions. Believe me, that is not how most usability testing occurs.
A professional usability engineer uses sophisticated methods and software tools to gather data about keystrokes, eye tracking, and other details. (You will find a list of these methods at The Usability Methods Toolbox, compiled by James Hom.)
For our purposes (quickly and cheaply gathering good user input), simpler methods will do; no one-way glass required. I like to think of it as guerrilla usability. Your options include:
- Hire a consultant to review your site and make recommendations for improvements.
- Ask a panel of experts to conduct a heuristic review of your site. You get more than one point of view with this method.
- Compare your site to usability standards, conventions, and guidelines.
- Survey current users about their experiences; ask what they do and do not like.
- Ask users to keep a log of comments and reactions while using your site.
- Watch people use the Web in their own environment.
- With a partner, talk through using your site or site prototype as a user might, and look for problems.
- Conduct user testing using low-fidelity prototypes.
Both the Pearrow and Rubin books explain these methods with enough detail so that you can actually do the testing. We are going to explore using low-fidelity prototypes.
Let's state the obvious: to test how people use a Web site, you need a Web site. But you shouldn't expect to start by testing a polished or complete Web site; in fact, your site doesn't even have to be on a computer.
Test subjects give more honest responses if they see a site still in development. You can also test sooner in the development cycle with low-fidelity prototypes. As a design evolves, prototypes can take a variety of inexpensive, simple forms:
- On paper: sketches, images, and screenshots
- A series of graphical mock-ups, without interactivity
- Notes, outlines, and drawings on a whiteboard
- Simple click-through mock-ups, non-HTML
- Complete mock-up in HTML, introducing graphics and text
I've done prototype tests using sketches on paper, printouts of layouts done in Word, and drawings on a whiteboard. For paper tests, I was the "computer," handing the subject new pages when they "clicked" a button or a link. In the whiteboard test, links were sticky notes that I moved as needed to simulate a response.
Carolyn Snyder explains paper prototyping in her article "Using Paper Prototypes to Manage Risk." Pearrow and Rubin also discuss low-fidelity testing.
When you want to watch someone use a particular design, build a functional prototype. You can use any means that lets the user actually click and get a response: HTML, Microsoft Visio, or even PowerPoint can accomplish this. Use whatever tool you're comfortable with that accomplishes what the test requires.
The prototype consists only of those pages that allow you to conduct the test. For example, if you are testing a navigation scheme (including the labels on buttons and menu design), you create a home page and a few other pages that allow the subject to move around a site.
Most importantly, a functional prototype is thrown away when you are done; it never grows up to become an actual Web site.
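A functional prototype can be as humble as a handful of static pages linked together. As a sketch of the idea (the file names, link labels, and widget products here are invented for illustration), two bare-bones HTML pages are enough to test whether a subject can find the catalog from the home page:

```html
<!-- index.html: prototype home page; only the link under test goes anywhere -->
<html>
<head><title>Prototype Home</title></head>
<body>
  <h1>Acme Widgets</h1>
  <!-- The navigation labels are what we are testing -->
  <a href="catalog.html">Product Catalog</a>
  <a href="#">About Us</a>   <!-- dead link: not part of this test -->
  <a href="#">Contact</a>    <!-- dead link: not part of this test -->
</body>
</html>

<!-- catalog.html: just enough content for the subject to complete the task -->
<html>
<head><title>Catalog</title></head>
<body>
  <h1>Product Catalog</h1>
  <a href="#">Red Widget</a>
  <a href="#">Blue Widget</a>
</body>
</html>
```

The dead links still earn their keep: if a subject clicks "About Us" while hunting for the catalog, write that down. It tells you as much about your labels as a successful click does.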
Setting up a usability test
When testing a prototype, you have a simple role: watch folks use the site and write down what you see. That's all.
You won't end up with scientifically valid test results. You won't have hard data that proves anything. You will, however, get clear signals about what people have trouble using and what they don't. Those signals quickly identify the core usability concerns of your site.
The test room
To conduct a usability test, you need several things in addition to the prototype:
- A room with a computer
- A table and a chair for the test subject
- A chair for you (the observer)
- A test subject
- A script
You should also have a video camera to record each test session. The tape will back up your conclusions if someone reading your report can't believe a user did what he or she did. The Pearrow and Rubin books have detailed information on the logistics of setting up a test room, and they also explain how to find test subjects.
Creating test scripts
You conduct a test to find out if a page design, or a specific element of a page design, works. You need to devise tasks for the test subject to perform so that they interact with the elements you're testing. These tasks are written up into scripts.
Test tasks can be a series of steps that you want the subject to perform exactly as written, or they can be general tasks for which the subject decides how to reach the goal. You'll choose the method, depending on the specific information that you'd like to gather.
If you are testing navigation, you might say, "Find and order a widget using the test site." You want to find out if people can accomplish your task by simply looking at the pages in front of them. If your design works, they will be successful. A general task assumes that your test subjects (and your target users) have basic Web skills (recognizing links, clicking links, and so on).
When testing button labels or link wording, you might give the subject a set of steps for browsing a product catalog to find and order a widget but not give them the button or link wording. If a label makes sense to them, they will click it. If labels do not make sense, the test subject may agonize over what to do or simply do nothing.
Below you'll find an example of a simple script in which you ask the person to perform certain tasks in a certain order but do not tell them exactly what to click. You want to discover whether they can find the correct links based on a link's label or its placement on the page.
- Click the link for the page on which you think widgets are located.
- Find a red widget.
- Add the red widget to a list of items to buy.
- Find a blue widget and add it to the list.
- Check the items in your list of things to buy.
The Pearrow and Rubin books have details about writing scripts. Once you've written the scripts, you can implement the testing.
Conducting a usability test
Before you conduct a usability test with target users, test the test.
I conduct a dry run of a usability test as if it were an actual test: same room, same physical setup. I need to know whether the logistics get in the way of the test, how long it takes to complete, and whether other people can understand the script. I also need to rehearse my scripts so that they sound natural when I am speaking to a test subject.
For a rehearsal, I use coworkers or friends as my test subjects. I choose people who are not developers or designers so that the test does not become a code review. After the dry run, I always make changes to some aspect of the test.
And now for the real thing
Test day dawns. You get dressed in something other than shorts, a T-shirt, and Tevas, and you go to work. Real users from the real world will soon descend on you.
Greet the subjects when they arrive, offer them coffee or a beverage, take their coat, and have them sign a nondisclosure agreement. If you are compensating them, give them the compensation before you conduct the test. It is not a reward but payment for their time and trouble.
During the test, remain nonjudgmental. As the observer, you should only explain the test to the person and write down what you see as they do the test. Don't comment on anything they do, don't correct them, and don't answer questions when the answer might influence how they perform. Remember, you want an unbiased result.
When they complete the test, thank them profusely, get their coat, and see them all the way to the front door. Repeat this process for every test subject.
Understanding the results
Only when all of the tests are completed should you look at your notes and expect to have results. Look for patterns, not precise measurements of anything.
Did everyone have trouble with the same thing? Did some people have trouble with one thing? If so, what was common about these people? Is the problem one for new users only? Did people have trouble with things that you did not expect them to have trouble with? Did they perform tasks differently than you anticipated they would?
The patterns you find tell you what you can change to improve your site design. For example, if most people use the Back button to return to the Home page and choose a new category or menu item, other means of changing categories need to be more visible. Is the low visibility the fault of labeling, the placement on the page, or the color? Run another set of tests with those variables changed to determine the source of confusion. You might test this problem during the beta period, or as soon as possible, if the problem is significant.
I've never worked on a project and had the luxury of testing and retesting every design element with lots of test subjects. More often than not, I work against the clock, testing the parts of the design that I feel have the most impact on overall site usability.
Using design elements common to many other Web sites, following the advice of respected design authors, and adhering to the standards and guidelines developed by well-regarded experts and research groups greatly reduces the amount of testing you have to do.
You can usually identify the trouble spots during your early test stages. In the prototype test, you confirm where the problems lie, then change the design to address them. During beta testing, you find out if the change worked. If it did not, you might have time to change it again. At the very least, you know what to fix during the next redesign of the site. Usability evaluation is an ongoing process. Never stop questioning and improving your site's usability.
Next month, we'll talk about the pros and cons of letting Web-logging software gather data about user experience on a Web site. Can you trust the data? Does a tracking program mean that you don't have to test real users?