by Whitney Quesenbery
The explosion of the web exposed the general public to usability in a dramatic way. Tasks we thought of as simple, like paying for something we wanted to buy, were suddenly revealed to be more complex, difficult, and error-prone than we were willing to accept. In this environment, the finale to the 2000 presidential election simply galvanized the country's attention on the concept we call usability. Beyond the sheer political spectacle, this is an interesting usability case study.
First, voting is the ultimate usability problem. There is a huge and diverse user population who must be able to vote accurately with an infrequently used system. Worse, the voting interface is never exactly the same. There are different candidates, different offices, and even the relative position of the political parties changes. To top this off, the context of use can be stressful: voters have only one chance to get it right.
The second reason the usability of voting systems is so compelling is just as basic: the results matter. Elections are a key element of a democracy, and their outcomes should be an expression of the will of all the people.
For those of us who have been advocates for usability, it was fascinating to see our specialty catapulted into the headlines. This seemed like the perfect opportunity to explain to our friends and relatives what we do and why it's important. The lessons we can learn from the usability problems in the election can be applied directly to our own work.
Good design is difficult
The format for the ballot is determined by the voting system, so in some ways it is a routine task. Candidate names and parties are filled in following a formula, the parties and other officials check the ballot, and it is distributed to all registered voters in an informational brochure before the election. But there is always some leeway in the design, and few standards to guide the officials who do the work.
In that leeway, a small change was made to the ballot in one county in Florida. The text was made larger, so it would be easier to read, forcing the ballot from one column to two: the so-called "butterfly" design. On quick inspection, the ballot does not look very difficult to use, but on Election Day and immediately afterwards, there was a flood of people complaining that they had difficulty voting.
What happens when voting systems are examined closely? We discover that users are not dummies; that the technology itself is confusing and sometimes contradictory; and that it is possible to make myriad mistakes that can spoil a ballot. (1)
There are voluntary voting system standards, but as the Federal Election Commission FAQ points out, "...the standards address only what a voting system should do, not how the system should do it." That echoes exactly what we at Cognetics, and other user-centered designers, have identified as the central problem in software design. The problems users have with voting systems may be directly traceable to a lack of user-centered design and usability. With the (necessary) focus on technology and security issues, the actual voting experience has been disregarded.
Nationwide, we routinely accept voting error rates of 3-4%. Why? Simply because those numbers would not normally affect the outcome of a race. The 2000 election was exceptional because it was one of those rare occurrences when, given those error rates, the result was close to a statistical dead heat. This made the reason for errors matter, and brought usability issues to the forefront. It is in the unusual moments, when business does not go as usual, that we can really learn how things work.
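To make that arithmetic concrete, here is a minimal sketch. The vote counts and the `outcome_in_doubt` helper are illustrative assumptions, not actual 2000 election figures:

```python
# Sketch: when does a ballot error rate threaten an election outcome?
# All numbers below are illustrative, not real election data.

def outcome_in_doubt(votes_a, votes_b, error_rate):
    """Return True if the margin between two candidates is smaller than
    the number of ballots we would expect to be spoiled or miscast."""
    total = votes_a + votes_b
    margin = abs(votes_a - votes_b)
    expected_errors = total * error_rate
    return margin < expected_errors

# A 3% error rate is invisible in a landslide...
print(outcome_in_doubt(600_000, 400_000, 0.03))  # margin 200,000 vs ~30,000 errors
# ...but it dwarfs the margin in a statistical dead heat.
print(outcome_in_doubt(500_500, 499_500, 0.03))  # margin 1,000 vs ~30,000 errors
```

The same error rate is harmless in one race and decisive in the other, which is why close elections suddenly make the causes of errors matter.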
The software industry has routinely accepted the premise that some percentage of users will not be able to use any given program. But the real question is why we accept errors at all, especially in tasks that should be as simple as voting.
Quality and usability are different
There is certainly a relationship between quality and usability, but they are different and focus on different types of problems. A quality inspection looks for things like correctly spelled names, consistent use of typography, and whether all names on the ballot are valid and in their correct position. A usability inspection looks for the kinds of problems a voter might experience, including any ambiguity in the voting process, readability issues, and any other design problems that might contribute to errors.
Both inspections might consider how the ballot is used, but only looking at the ballot in context (in the machine, in a typical voting booth, and so on) allows it to be examined the way it will actually appear to voters.
Instructions are part of the interface
It's no news to technical communicators that instructions are part of the interface. The booklet mailed to voters in Palm Beach County contained not only pictures of the ballot, but instructions on using the voting machines. (See figure 1.) But did they help? One problem with the instructions themselves is that they are not precisely correct. But another truth that technical communicators know all too well is that the best instructions cannot make up for poor design.
Little things count
After the basic design work is done, it's the little things that add up to a usable system. In all of the post-election analyses by statisticians and by human factors and psychology experts (2), no single explanation for what happened in Palm Beach County amounted to a real smoking gun. Beyond avoiding the double-column butterfly design itself, there was nothing anyone could point to and say unequivocally, "that was the big mistake."
One of the ironies is that an untested design change can easily make things worse rather than better. One reason the Palm Beach ballot differed from those in other counties was that the designer was trying to make the ballot more readable. She knew she had many elderly voters, and increased the size of the type to help them read more easily. Unfortunately, this also increased the difficulty of the ballot.
Finding the right balance between competing needs can be difficult. An example of this problem can be seen in some e-commerce checkout interfaces. One way to make the transaction easier is to break it into a series of small, simple steps. But, too many steps and people drop out, failing to complete their purchase. Design is always a balance-and the way you know you have achieved a proper balance is to evaluate the design with users.
Usability testing is critical
Usability experts have been widely quoted as saying that you only need to test with five users to find 80% of the usability problems with an interface design. But these assertions are based on an assumption that the design process is iterative and that each successive revision to the design will be re-tested. There is also a difference between testing a perfectly awful interface with many, many problems and finding subtle flaws in a simple interface that works correctly for 99% of the users. The 80-20 rule is simply not good enough for voting. And it may not be good enough for your e-commerce site or application, either.
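The five-user claim rests on a simple discovery model, often attributed to Nielsen and Landauer: if each test user independently exposes any given problem with probability p, then n users uncover about 1 - (1 - p)^n of the problems. A quick sketch, assuming p = 0.31 (a commonly cited average; real values vary widely by interface and task):

```python
# Sketch of the usability-problem discovery curve: found(n) = 1 - (1 - p)^n.
# p = 0.31 is a commonly cited average probability that a single test user
# exposes any given problem; it is an assumption, not a constant.

def problems_found(n_users, p=0.31):
    """Expected fraction of usability problems uncovered by n test users."""
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 15):
    print(f"{n:2d} users -> {problems_found(n):.1%} of problems")
```

Under these assumptions five users do find roughly 80% of problems, but the curve flattens well short of 100%. That long tail of undiscovered problems is exactly why one round of testing is not good enough when every voter must succeed.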
Perhaps testing seems too expensive? The simple truth is that every interface is usability tested. The only question is whether you control the test environment or not. How many millions of dollars did the county, state, and political parties spend on the recounts and court battles? Even a $50,000 test budget seems minimal in this context.
When I first started looking into usability and voting, one of the most surprising things I found was how little research has been done on voting. One of the few papers based on any direct user observations is a 1998 paper entitled "Disenfranchised by Design" by Susan King Roth (3). One of her findings is that the arrangement of the information on the ballot influenced users. She also identified human factors such as the voter's height and visual acuity as critical to the usability of the voting experience. Her pictures, for example, show some items on the ballot well over the head of the voter. A tall reviewer or someone looking at the ballot as a printed sheet spread out on a table would have no chance to notice this problem.
Users don't complain
One of the things that puzzled me was why voters didn't complain at the poll, or at least ask for help and a new ballot. Caroline Jarrett, an expert in forms and official documents, reports that users are often very hesitant to complain about official forms, even when they are clearly having problems with them (4). Complaining takes effort, and people usually want to get the unpleasant episode over as quickly as possible. They will only make the effort when they are very upset or when they think that their complaints will produce results. Unfortunately, they often do not have anyone appropriate to complain to.
What happens if they do complain or use a feedback form to let you know about a problem they encountered? Do they get an immediate answer that they can reply to? Or is the e-mail ignored or shuttled to an auto-response system? Perhaps the simplest thing you can do to improve the usability of your own work is to make sure there are open channels for communication, and listen to what comes across them. Whether it comes through your sales channel, technical support logs, or e-mail, any time someone takes the time to talk to you, you should listen.
King Roth says that the people in her studies were "willing and able to provide constructive and valuable feedback"-if someone was just willing to ask.
Nothing replaces a good process
Susan King Roth concludes that there is a "sequence of interconnected factors: the failure to apply effective design principles at the system development stage, the lack of comprehensive federal guidelines related to system usability, and unfamiliarity with information design and usability issues at the local (level)."
At the heart of this quote is the best advice I can possibly give: nothing replaces a good user-centered design process based on good design principles and incorporating usability evaluation with an appropriate number of real users. One usability test thrown into the schedule just before the product releases will only tell you whether you have a disaster on your hands, not help you make the design changes that will prevent it.
This article was originally published in STC-PMC News & Views, November 2001
The URL for this article is: http://www.wqusability.com/articles/voting-nv.html
Whitney Quesenbery works on user experience and usability with a passion for clear communication. She is the co-author of Storytelling for User Experience from Rosenfeld Media. Before she was seduced by a little beige computer, Whitney was a theatrical lighting designer. The lessons from the theatre stay with her in creating user experiences. She can be reached at www.WQusability.com