
Focus on Usability

Bring together user interface design and usability testing for a marriage made in heaven.

Were you the first kid on your block to hide a cassette recorder under your teacher's desk? Do you take your own notes at staff meetings because the official minutes leave out too many details? Do you daydream about being a latter-day Jane Goodall, sitting in the forest in your khaki walking shorts and spying on your study subjects as they go about their daily tasks? Do you wonder how people found meaning in their lives before there were camcorders?

Have I got a job for you.

Wired for Speed

Usability testing is the newest mantra for software development teams. Smile -- your app's on Candid Camera!

In the beginning, there were efficiency experts. Although their work is easy to satirize, the time and motion studies they conducted taught us a great deal about how people work and how to improve workers' productivity. Their modern counterparts are called human factors engineers, and they play an increasingly important role in software development. The 1990s have ushered in the era of user-centered design, which focuses on creating software with a high (and measurable) degree of overall usability.

In his book Handbook of Usability Testing (John Wiley & Sons, 1994, ISBN 0-471-59403-2), Jeffrey Rubin says that there are four key factors in usability: usefulness, effectiveness, learnability, and attitude. Usefulness refers to the degree that a program helps users achieve their goals. An application can be easy to learn, easy to use, and fun as all get out, but if it doesn't do what users need it to do, what good is it?

Effectiveness, or ease-of-use, tends to be quantifiable. It might be stated this way: "Eighty percent of the tasks supported by this application can be successfully completed in no more than 10 minutes by 90 percent of the users." To define learnability, you might ask how easy it is to learn the basics of an application, to move from novice to skilled user. Similarly, how easy is it to relearn an infrequently used application? Attitude, or likability, refers to how enjoyable and satisfying the software is. Is it a pleasure to use?
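Because that kind of criterion is stated in measurable terms, you can check it mechanically once you've timed your testers. Here's a rough sketch in Python; the task names, timings, and thresholds are invented for illustration, not drawn from any real test.

```python
# Hypothetical check of an effectiveness criterion:
# "80 percent of the tasks can be completed in no more than
#  10 minutes by 90 percent of the users."

# Invented timing data: minutes each tester took per task
# (None means the tester never finished the task).
times = {
    "check stock":  [4, 6, 5, 12, 3],
    "place order":  [8, 9, 7, 6, 11],
    "add vendor":   [15, None, 14, 16, 13],
    "print report": [2, 3, 2, 4, 2],
}

TIME_LIMIT = 10      # minutes allowed per task
USER_SHARE = 0.90    # 90% of users must succeed at a task
TASK_SHARE = 0.80    # 80% of tasks must meet the user threshold

def task_passes(minutes):
    # A task passes if enough testers finished within the limit.
    ok = sum(1 for t in minutes if t is not None and t <= TIME_LIMIT)
    return ok / len(minutes) >= USER_SHARE

passing = [name for name, m in times.items() if task_passes(m)]
meets_goal = len(passing) / len(times) >= TASK_SHARE
print(passing, meets_goal)  # -> ['print report'] False
```

With this made-up data only one of four tasks passes, so the application falls short of the stated goal; the per-task numbers also point you at exactly which tasks need redesign.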

Usability is the product of goal-directed design, a key concept in Alan Cooper's book About Face: The Essentials of User Interface Design (IDG Books Worldwide, 1995, ISBN 1-56884-322-4). Cooper stresses the importance of striving to understand the user's goals, the most important of which is to not look stupid.

Once we've agreed on what usability looks like--software that's useful, easy to use, easy to learn, and fun--the question becomes how do you measure it? To answer that question, you need feedback from users, and that's where a number of techniques come into play, including usability testing.

Image: Software -- Survey of Use

Are We There Yet?

Surveying users can be a cheap but effective means of finding problems in a product.

A variety of techniques are available to get feedback regarding the usability of any product. Popular methods include surveys, focus groups, joint application design sessions, and formal and informal usability testing. You can use these feedback mechanisms at any stage in the development life cycle, although some techniques work better if they're used early or late in development.

Surveys are particularly useful when you're generating requirements, because they increase your understanding of the users' goals. Be sure to survey every type of user--those who operate the application as well as those it operates on--and categorize their goals. Surveys make it easy to get feedback from a large number of potential users of your software. One pitfall is the difficulty of constructing unambiguous questions. It's critical to test and retest the survey on a small audience, fine-tuning it until each question is absolutely clear, before you send it to the larger group.

The type of audience and the timing of focus groups are much the same as those of surveys. You won't reach a wide audience here, but you'll be able to ask more in-depth questions about the goals of each type of user. If you have the luxury of using both techniques on one project, use the survey results to target the focus group research more precisely. Based on the survey results, you can create rough-cut designs and explore their usefulness in assisting the users with their goals.

Joint application design is a popular—and sometimes misused—technique for getting the users involved in software design. Even more than focus groups, JAD is a great way to gain better understanding of the users' knowledge, skills, and goals. The misuse comes into play when you expect users to be designers. They don't have the training, knowledge, or skills for that. As Cooper says, "What users say is generally goofy. They can only give faint indications of places where problems may exist. They do not have the training necessary to actually solve the problems."

Formal usability testing follows a scientific approach that includes developing a hypothesis, conducting experiments, and proving or disproving the hypothesis. A less formal approach involves iteratively testing various aspects of the software throughout the development cycle. Some developers use it to direct design decisions, essentially molding the software based on these tests. I believe this is a serious misuse of the technique. If you derive the design from testing, you will see small improvements, but they'll come at the expense of innovation and creativity.

You'll notice that each of these techniques involves users in the creation of the design. In usability testing, you ask the users to interact with and provide feedback on software that's new and perhaps different from anything that has gone before. With this kind of user involvement, there is always a risk that your pet innovations might be rejected merely because they are unfamiliar. To achieve the gains that come from the increased usability of an innovative design, this risk is worth taking.

Image: Usability Test Scenario

Uh-Oh, Scenario

To get the most out of a test, you should plan your test scenarios.

Planning is the key to successful usability testing. You need to carefully define the goals of the test and identify an appropriate set of testers. Prepare test scenarios of tasks to be accomplished using the model or prototype. Be sure to take the time to try out and refine the test scenarios before you use them in the usability test. If possible, plan for every member of the design team to observe the usability tests and to work with other team members to consolidate their observations.

The participants in a usability test are the testers, the observers, and the facilitator. It's natural for a tester to feel that he or she—and not the software—is the one being tested. The facilitator's most important job is to make the tester feel comfortable. The facilitator briefs the tester, explaining the objectives of the test and clarifying the tester's role. The facilitator explains that the purpose of the test is to help uncover usability problems with the software.

In a typical test of a software prototype, the tester is given one or more test scenarios and is asked to use the software to achieve what's requested by each scenario. The scenario describes a particular task and asks the tester to accomplish the task using the software. The tester is asked to think aloud while working through the scenario.

One or more observers watch the tester, taking notes on how the tester accomplishes the task and on the emotions expressed by the tester. If the tester thinks out loud, it's easier for the observers to understand and document the assumptions the tester may make because of the presentation of the user interface. The observers try to note the exact user-interface features used to perform the task and the time needed to finish. They also note the problems encountered and the number and type of hints given to help resolve the problems.
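One informal way to keep those notes comparable across observers is to agree on a fixed record for each task attempt. The sketch below is a hypothetical illustration in Python; the field names and sample data are invented, and a real lab would likely use dedicated logging tools instead.

```python
# A minimal, hypothetical observation record for one task attempt.
from dataclasses import dataclass, field

@dataclass
class Observation:
    tester: str
    task: str
    minutes: float                # time needed to finish the task
    ui_path: list                 # exact UI features used, in order
    problems: list = field(default_factory=list)  # problems encountered
    hints: int = 0                # hints given by the facilitator

# One observer's notes from a single scenario (invented data).
obs = Observation(
    tester="Tester 3",
    task="check widget stock",
    minutes=7.5,
    ui_path=["Inventory menu", "Search box", "Stock report"],
    problems=["expected search under the Orders menu"],
    hints=1,
)

print(obs.task, obs.minutes, obs.hints)
```

Consolidating findings during the debriefing then becomes a matter of grouping records by task and comparing times, problems, and hint counts across testers.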

It's a good idea to videotape the test. A standard video camera will suffice, but specialized usability testing equipment is also available. Typically, it lets you videotape the software in action and superimpose the tester's face—and the positive and negative facial expressions—in a corner of the display. If the usability testing equipment is installed in a permanent testing facility, the observers may be able to watch from behind one-way mirrors so that the tester isn't distracted by the feeling of being watched.

The facilitator ensures that the test moves along according to schedule. The scenarios and the schedule should provide sufficient challenge and time to uncover any significant struggles with the user interface. At the end of the test, the facilitator debriefs the tester, relaying questions from observers and asking for general feedback. After the tester leaves, the facilitator debriefs the observers, consolidating their observations of the tester with those from other tests. An immediate debriefing lets this information be consolidated with other findings while the test is still fresh.

After all testers have been observed, the facilitator may lead an additional work session with the observers to analyze and report the usability test findings. If a videotape was made, the observers might want to review portions of it.

This Is Not Your Father's Enchilada

Look beyond the software and its direct user to understand the whole of what is to be accomplished.

Now let's take a closer look at the core of user-centered design: the user interface. Everyone knows what a user interface is, right? Wrong. As I have talked with other software developers, I've found a basic disagreement about what constitutes the user interface. Is it what you see, click, drag, and drop? In the view of many programmers, the user interface is just that: the buttons, listboxes, tabs, and other screen elements that users interact with as they use the program.

Others argue that the user interface is everything that the user interacts with, including the screen, the keyboard, online help, paper manuals, training courses, tutorials, the help desk, and so on. A more comprehensive view looks beyond the application and the person who interacts directly with the software, expanding the definition to include the whole of what is to be accomplished and how everyone involved is affected. It's concerned with the gestalt of the application. Let's call this definition of the user interface—the view I hold—The Whole Enchilada. A good user interface design addresses The Whole Enchilada. It's concerned with ensuring that The Whole Enchilada is useful for assisting the user with whatever it is The Whole Enchilada is intended to assist with. In this approach, usability testing is a technique to help ensure that the designers got it right.

What is meant by "everyone involved"? I remember an interesting discussion in the Software Development forum on CompuServe. Someone mentioned a situation in which the operator of a radiation therapy device accidentally and seriously burned a patient. There were problems with the design of the machine's controls, leading to operator error. The main problem was that the design did not prevent the operator from accidentally ordering a serious radiation overdose. Everyone in the discussion agreed that this was a user-interface design bug. Even worse, the design did not stop the order in its tracks and prevent the burn to the patient. Many people felt that this was a program error and not a user-interface design bug.

Who Is The User? It shouldn't be a tough question to answer, but differences of opinion abound. Don't let this be the reason your software fails.

The thread led me to realize that folks couldn't agree on who the user was. Was it the operator or the patient? Some people held that the user was clearly and solely the person who operated the device. After all, that's the person who controlled the machine. Others were adamant that the patient was at least as much a user of the machine as was the operator. You're probably not surprised to learn that I argued on the side of patient as user.

But no matter how you define the user, it's clear from this story that we need to design for the success of the gestalt. We need to account for all those who are affected by an application: the machine operators and their patients; the bill payers, data-entry clerks, and customer service reps; the sales force and their customers; even noise-addicted video-game players and their nerve-wracked housemates.

Image: Pencil and Paper Design

Was It Good For You, Too?

Sometimes it doesn't matter how snazzy and useful that new feature is. If users can't understand it--or even find it--you've created shelfware. That's why you need to get users involved in testing the interface early in the game.

How does usability testing fit into the process of user interface design? Even before the first prototype is committed to code, you can conduct usability tests of your pencil and paper designs. Focus on how well the design fits your user's mental model of the application. Does it meet users' goals conceptually? You can test to verify that you're meeting the high-level needs of each type of user with respect not only to the interactive software, but also to the reports, the online help, the printed documentation, the training materials, and so on.

You could, for example, provide a paper design of the application's menus. You could write some test scenarios based on user goals, such as checking inventory levels and ordering out-of-stock items. Among other questions, you might ask the tester, "How would you expect to use these menus to check whether widgets are currently in stock?" If the tester stumbles, dig to understand why, and take copious notes. But resist the urge to discuss an alternative, better menu structure. You're the user-interface design expert; the user is not.

Remember that even at this early stage, you should be sketching out the contents and organization of your online help and printed documentation. Plan to test them, too. You might say to the tester, "You've just placed an order with a new vendor. You need to add the vendor to the database, but you don't remember how. Using this preliminary table of contents for the printed documentation, show me where you would expect to find instructions on this process."

As you progress from paper models to prototypes, continue to test the usability of the software. Revise and write new test scenarios to reflect your growing understanding of users' goals. Now you can begin to test the interaction of the training, the software, the online help, the printed documentation, and the help desk. Do they work in concert to support the goals of the user? Do the screens and reports provide all the necessary information? Is the structure easy to understand? Does the information include the appropriate level of detail for each task?

In all these tests, carefully choose a representative set of testers. You usually have only a handful to work with, often no more than six to 12 volunteers. Think carefully about how to ensure that they represent the entire user base. First and foremost, they need to be qualified to test each of the major user-centered goals of the application.

In an accounts receivable application, you might play the bill payer with a data-entry clerk who accepts your payment and records it in the system. In another test, you might play the role of an irate bill payer whose account was not credited and who is now speaking to a customer service representative. You'd have an observer or two watching closely and taking notes about how well the software supported each of these users.

No less important, pick testers who represent the range of abilities of the people who will use the application. It is usually important to make it easy for beginners to pick up and start using the program. You should also consider designing and testing ways to help your users make the transition from beginner to intermediate, the state most of them might stay in forever. And don't forget to help energetic, hotshot users to make expert use of the software.

Film At 11

Usability testing is the key to great user interfaces, even on a shoestring budget.

A user-centered design approach goes hand-in-hand with usability testing techniques that focus on user feedback. If these techniques are so useful, why don't more developers use them? When you actually ask them, developers conjure up the old bugaboos: not enough time and not enough money.

It's true that there's never enough time to test the usability of every aspect of a program. That's the reality of a software development project. But if you test the potentially troublesome areas--the parts of the design you're least confident about--you should expect to save time overall. Note these potential trouble spots as you create the application design. Don't kid yourself: these are the areas you know will end up costing you more in the long run in redesign, recoding, and retesting. By testing them carefully early in the design, you'll save big-time by avoiding the nasty costs of redo late in development or after the software ships.

Many people assume that usability testing costs a great deal of money. Does it? It depends. Building a permanent usability testing lab can cost tens or even hundreds of thousands of dollars; even a portable lab can cost many thousands of dollars. These facilities help test observers by providing videotapes to review long after the testers have gone home. A lab also aids testers by providing a setting that helps hide the observers, reducing the distractions so that testers can work undisturbed. But these facilities are not required. You can achieve good usability test results without spending much money. You do need to ensure that the environment is as comfortable as possible for the testers, because observers will be watching them close up.

Take-Home Tests

I've just scratched the surface of user-centered design and usability testing. If you want to learn more about how to implement these practices in your own shop, I recommend the Cooper and Rubin books. They're both down-to-earth and packed with useful and practical information. The UPA site has a number of resources for usability testers.


Copyright © 1996 - 2017 SoftMedia Artisans, Inc. All Rights Reserved.