Robert Lemos


When the Maryland State Board of Elections ordered more than 5,000 voting machines from Diebold Election Systems in 2002, the touch-screen computers came with assurances that they met federal voting standards.

Election officials quickly discovered that those assurances meant little. The machines had indeed been certified by a third-party testing lab, but the federal voting system standards to which the touch-screen computers were held dated from 1990 and did not address many of the problems in the latest voting technology.

The issues in the certification system were further highlighted when Diebold’s source code for a similar system leaked out, allowing security researchers to analyze the software. The verdict: The code had serious flaws.

For Maryland, the revelations resulted in an arduous six-month process that involved implementing 23 recommendations by security researchers. The issues also confirmed serious problems with federal standards that are intended to ensure the reliability of voting machines.

“I think everyone recognizes that there is room for improvement,” said Linda Lamone, administrator of elections for the state of Maryland.

As U.S. voters prepare to head to the polls Tuesday, weak and outdated federal voting standards have emerged as a major cause of e-voting security concerns. Over the years, state election officials have approved purchases of thousands of e-voting machines, relying on their compliance with federal guidelines that fail to address critical problems.

Security and election experts say the latest revisions to the standards, adopted by the Federal Election Commission in April 2002, are an improvement. But they acknowledge that standards for touch-screen election technology, also known as direct recording electronic (DRE) systems, have a ways to go.

“I think it is going to be 10 years before these issues work their way out,” said Paul DeGregorio, one of four commissioners at the newly formed Election Assistance Commission (EAC), which oversees the administration of federal election standards and recommendations. “Version 1.0 was not good enough; we’ll have to wait for version 4.5 to get it right.”

Some 30 percent of voters are registered in counties that plan to use DREs, according to Election Data Services, and computer security experts have raised doubts over how well the equipment will perform come Election Day.

“There are going to be tens of thousands of machines on Election Day that will be stressed as they never are in a lab,” said Aviel Rubin, a professor of computer science at Johns Hopkins University and a well-known critic of current electronic voting systems.

The lack of standards for DREs and other electronic voting systems has left the nearly 40 states that rely on federal voting guidelines to their own devices. Some states, such as Maryland and Georgia, have done significant additional testing and remediation of issues. Others, such as Nevada, have added safeguards such as a paper receipt, known as a voter-verified paper audit trail (VVPAT), as an independent check on the electronic tally.

Maryland’s Lamone stressed that the state has applied all recommended measures and continues to run tests to make sure that elections go off without a hitch. In the end, almost the entire state will vote on the systems this election.

Nevertheless, e-voting experts said millions of voters could potentially be exposed to inadequately tested equipment, particularly where election officials have taken federal compliance as the only litmus test for adopting systems.

Many groups equated certification with security. In February 2004, the League of Women Voters dismissed calls for more security in the form of a VVPAT, choosing to put its faith in “the certification and standards process.” Five months later, the group reversed itself, backing “voting systems and procedures that are secure, accurate, recountable and accessible.”

The faith in federal compliance testing is most dangerous in states–such as California, Nevada, New Mexico, North Carolina and Virginia–that have adopted newer systems based on computer technology. Many of those states have rushed through additional testing in the past year.

Rubin, who co-authored the initial analysis of Diebold’s source code, believes that voting-machine makers used federal certification essentially as a sales tool and, in many cases, state election officials relied on certification testing as a crutch for their own quality control.

“There was no kind of pressure to do this right,” Rubin said. “People won’t do something hard deliberately when that way costs more and there is no pressure.”

Global responsibilities, local budgets

That viewpoint is voiced by some election officials as well. Much of the difficulty that the United States faces in elections stems from the fact that a process of nationwide significance is largely regulated by the states.

County election officials typically only worry about holding well-run elections that are inexpensive and reflect the will of the people, said the EAC’s DeGregorio, who served as an election administrator for Missouri’s St. Louis County in the late 1980s and early 1990s. More global problems–such as machine design, audit trails and other technical issues–are generally left for another day or another person.

“I remember times when I was an election official trying to get counties to spend money to improve elections, and you had to beg them,” DeGregorio said. “People are beginning to see how complicated it is to do an election.”

Until the passage of the Help America Vote Act (HAVA) by the U.S. Congress in 2002, state election officials had never received federal funds to help them run elections, DeGregorio said. After Congress passed the election law, states had an incentive to spend the money–more than $2.3 billion–and spend they did.

“We have seen more changes in the past three years than in the past 100 years,” DeGregorio said.

More than a quarter of the nation will likely vote on new equipment bought between the 2000 election and the coming 2004 presidential election. And the pace is accelerating: by 2006, a major deadline for many of HAVA’s federal requirements–such as the ability of voters to review their ballots before casting them–will force changes to existing machines or entirely new systems for another 30 percent of U.S. voters.

Despite newly revised standards for voting systems–the Voting System Standards for 2002–the guidelines still lag behind the technology in the new systems.

Stumbling to standards

A key portion of the U.S. Constitution guarantees states the right to hold elections as they see fit, and while the federal government explicitly has the power to alter the election process, that power has historically gone unused.

Machines have been used to count votes at the polls for more than 100 years. But the reliance on state governments to run their own elections allowed voting-machine makers to design their own systems with few requirements, except those set by the states.

Concerns about the systems are nothing new. Two reports written by the National Bureau of Standards, now known as the National Institute of Standards and Technology (NIST), in 1975 and 1982 highlighted many of the problems with computer systems used to tally votes.

It was not until 1990 that the first set of standards, based on the NIST reports, was issued by the Federal Election Commission (FEC). But those recommendations were merely guidelines and proved largely toothless for years, until states began requiring voting-machine makers to adhere to them.

“The certification and testing of voting software has been historically weak because it has gone through a voluntary scheme created by a voluntary organization,” said Roy Saltman, an election technology consultant and the former NIST computer scientist who penned the 1975 and 1982 reports.

With the passage of HAVA in 2002, the federal government took a role in certifying so-called Independent Testing Authorities (ITAs), which confirm that systems meet federal voting guidelines–a role previously filled by the National Association of State Election Directors (NASED). In addition, the law created the EAC to advise state election officials and set standards for voting equipment.

Perception problems

One of the worst problems with the certification process, critics say, is the disclosure rules. The three major testing labs–Wyle Laboratories, SysTest Labs and Ciber–currently do not release any information about the voting machines they have tested.

“Much like a lawyer, we have to keep our client information confidential,” said Dan Reeder, a spokesman for Wyle Laboratories. “The companies that produce the machines are free to talk about the issues.”

Moreover, voting-machine makers also beg off giving information about their systems, citing intellectual-property concerns. While that is a legitimate business concern, such secrecy over technology of such public importance has drawn withering criticism from voting-technology experts.

Michael Shamos, a professor of computer science at Carnegie Mellon University and a voting technology examiner for more than two decades, called the process of granting ITA status “dysfunctional” and attacked the labs for not revealing test procedures and results.

“I find it grotesque that an organization charged with such a heavy responsibility feels no obligation to explain to anyone what it is doing,” he said during a Congressional hearing in June on voting-machine certification and testing issues.

Shamos said the danger lies less in some group taking control of the election and more in machine failures and long lines at the polling stations. He warned the Congressional committee members that “a repeat of the Florida 2000 experience will have a paralytic effect on U.S. elections.”

Election officials believe that HAVA will help make the ITAs more responsive to requests from the public and government for information regarding the certification of specific machines.

This week, four major makers of e-voting machines, including Diebold, agreed to reveal substantial portions of their source code to the EAC. Although individual states have made this a requirement, it’s the first time the companies have agreed to cooperate with federal regulators.

Evolve or die

For the standards creators and voting-machine makers alike, the last two years of travails have led to better procedures for voting machines.

While Diebold was criticized for having no formal software development system in place, it has revamped its entire process, company spokesman David Bear said.

“We recognize that the technology changes day by day, and we also realize there is an evolution of the regulatory environment,” he said.

Other machine makers have also said that the current public debate over voting systems has resulted in more rigorous development standards.

“I would agree that security is in its infancy,” said Neil McClure, vice president of voting-machine maker Hart InterCivic. “Where that evolves will be partly determined by technology advances and partly by what decisions are made regarding the security of electronic voting data.”

While critics have lambasted voting-machine makers for their products’ software flaws, election experts counter that the current crop of voting machines has many growing pains to endure.

“People make the assumption that the election equipment industry is a mature industry,” said Merle King, associate professor of information systems for Kennesaw State University. “But most of these companies aren’t used to dealing with issues at the national level.”

While improved federal standards will be a positive result of the current debate, the heated criticism has left casualties in its wake, King said.

“We have, through our collective actions of questioning DREs, mandated that people in those battleground states will use technologies that aren’t as reliable as DREs,” King said. “The law of unintended consequence seems to be the only one that governs elections.”