WORLD MAY 1, 2000
The decision to send American troops was easy. Sure, the Saudi demonstrators and rioters wanted democracy, but we couldn’t sell out our old ally King Fahd. And what if the insurgents were Iranian pawns? There was too much oil at stake to take a chance.
But something funny happened on the way to the counterrevolution. An unknown fundamentalist group, the Campaign for Islamic Democracy (CID), warned that if we didn’t butt out, it would wield the “Secret Sword of Allah.” We laughed. Then the CID unsheathed its “cybersword.” The attack began with rolling power outages throughout the United States. Next, an automated pipeline control near Valdez, Alaska, was manipulated to lower the temperature of the flowing oil, causing congealment, a burst pipe, and an environmental disaster.
The fundamentalists said they would stop messing with our computers if we stayed out of Saudi affairs. Furious, we continued deployment and stepped up our efforts to find them. The CID took it up a notch, too; the group hit our air-traffic-control system. Result: one midair collision over Los Angeles International Airport and several near misses. It was time to cut a deal, even if it meant abandoning the king.
Of course, none of this happened. But a group of us at RAND in the early ’90s started worrying that it might. So we dreamed up a scenario quite like this one and convinced the deputy secretary of defense to pull together some high-level political and military officials to treat it as a war game. They played it out over a long Saturday—and found themselves at a complete loss.
That was five years ago. Today, we’re as open to attack as we were then. American prosperity and security increasingly depend on an extremely vulnerable information infrastructure. And, while the military has all sorts of systems to keep its communications up even during a nuclear war, civilian systems—power supplies, transportation, resource flows, and financial markets—are wide open to “cybotage” of the kind we imagined. It’s not that the U.S. government hasn’t done anything to try to protect us over the last half-decade. It just hasn’t done the right things.
Cyberwar means disrupting the flow of information—principally through computer viruses that eat data or freeze up systems and logic bombs that force machines to try to do something they can’t (like resolve the value of pi, a trick Mr. Spock once used to disable a computer on “Star Trek”).
Cyberwarfare can be used by one military against another. In the Gulf war, for example, the United States implanted viruses and made other computer intrusions into Iraqi air defenses. Against Serbia, we went further—instead of simply slowing or stopping data flows, we strove to distort the information Serb gunners saw on their screens, helping keep our planes safe during their bombing runs. Since the U.S. military already does a good job of protecting its systems against such attacks—and since the only other power seriously pursuing battlefield cyberwar capacity, China, is way behind us—there’s no reason for alarm on this front. At least not right now.
America’s civilian computers are another story—they’re much more vulnerable to attack. And it wouldn’t take an army to launch one, just a small organization, or even a knowledgeable individual. Sound far-fetched? Tell that to the system administrators at Amazon.com, eBay, and Yahoo!, who saw their sites downed by simple, though well-tooled, attacks.
It’s true that the talent needed to wage cyberwar is relatively hard to develop. Terrorists don’t yet have it. And, while some hackers do, they have not yet shown an interest in cyberterrorism—perhaps because there are ample opportunities to apply those skills for legitimate profit. (Why destroy the U.S. information infrastructure when you can make $100 million as part of it?) But this is bound to change.
For one thing, as knowledge about the Internet increases, the line between terrorist and hacker may blur. Terrorists’ initial forays have been defensive—for example, acquiring encryption technology in order to keep their communications secret. But they won’t stay defensive forever. There is even anecdotal evidence that Bulgarian hackers (I kid you not), cut off by the Russians and feared and rejected by the West because they spawned the “Michelangelo” virus some years back, may be trying to hire themselves out as cybermercenaries.
But trying to prevent terrorists from becoming cyberterrorists doesn’t much interest the U.S. government. Instead, U.S. computer-protection efforts focus on “infrastructure.” This is what the respected Presidential Commission on Critical Infrastructure Protection recommended two and a half years ago; so did the equally prestigious National Research Council. Now there’s even a National Infrastructure Protection Center in Washington, D.C.—which involves considerable collaboration between the Departments of Justice and Defense—in charge of the effort.
The trouble with infrastructure protection is that it misunderstands the cyberwar threat. The guiding notion behind nearly all current efforts is to prevent an “electronic Pearl Harbor”—a massive cyberattack that would cripple our ability to deploy our forces. The allusion is sexy but misleading, because nowhere is American power as concentrated today as it was at Battleship Row in 1941. A better analogy is the “harbor lights” phenomenon that bedeviled American efforts to protect merchant ships in the early months of 1942. Big-city mayors on the Eastern seaboard fought to keep their cities from being blacked out, because the cost to business would be high. So, for some months, U-boat skippers had their targets illuminated by well-lit skylines. Today, similarly, harbor lights are on all over cyberspace.
In response, the government has constructed a kind of Maginot Line. It has tried to build leakproof firewalls and safe areas—that is, domains protected by computer protocols and codes—that presumably will prevent hackers from accessing sensitive information and systems. Both the presidential commission and the research-council report strongly recommended this strategy, and the government is following their advice to the letter. What’s more, the government is pushing the private sector to take a similar approach, and many businesses are complying.
But, as the French discovered in 1940, even the best fortifications can be outflanked and penetrated. Cyberterrorists can always find trapdoors and glitches in software that allow them to get around obstacles; or, if that fails, they can try launching direct attacks by using very sophisticated password-cracking programs. This vulnerability was highlighted for me at one Department of Energy laboratory, where I investigated a hack that had shut down the facility for a few weeks in 1998. I learned that the lab’s own security people were running a password-cracking program to help assess and limit their risk of intrusion. But, even a year after the initial break-in, their program was still able to guess one in ten new passwords every week.
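To make the lab anecdote concrete, here is a minimal sketch of the kind of dictionary check a security team might run against its own users' password hashes. Everything below is hypothetical (the wordlist, the passwords, the unsalted hashing), but it shows why one in ten passwords can keep falling: people pick dictionary words with predictable decorations.

```python
import hashlib

# Hypothetical stored password hashes (real systems salt these; simplified here).
def h(pw: str) -> str:
    return hashlib.sha256(pw.encode()).hexdigest()

stored_hashes = {h("dragon99"), h("Summer2000"), h("x7#qT!vR2z")}

# A cracker tries dictionary words plus a few simple mutations.
wordlist = ["dragon", "summer", "password"]

def mutations(word: str):
    yield word
    yield word.capitalize()
    for suffix in ("1", "99", "2000", "!"):
        yield word + suffix
        yield word.capitalize() + suffix

cracked = {cand for w in wordlist for cand in mutations(w)
           if h(cand) in stored_hashes}
print(cracked)  # the word-based passwords fall; the random one survives
```

The weak choices (`dragon99`, `Summer2000`) are guessed almost instantly, while the random string resists this attack entirely, which is exactly the gap a self-audit like the lab's is meant to expose.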
So what, then, would an effective cyberdefense look like? First, it would abandon the notion that we can simply wall off safe areas, moving instead to a “depth defense” of countermeasures designed to foil intruders once they’ve broken in. Such countermeasures would include electronic camouflage for files (e.g., making sensitive data look like office-supply orders), the strongest encryption (to keep an intruder from being able to read what he sees), and diffusing that encryption broadly throughout information systems. While some such deceptions are now employed, very strong encryption remains terribly underutilized. This is a pity, since today’s best encryption, unlike the mediocre code legally available to the public, simply cannot be broken (for reasons having to do with the length of a code’s “key,” which can now be measured in thousands of bits).
But something is keeping the best encryption technology from the public: the U.S. government. The Department of Justice argues that strong encryption would let criminals foil its cybertaps. The National Security Agency says it will be crippled if it can’t decode communications. And the Department of Commerce fears that diffusing encryption would spawn a “virtual currency,” reenergizing and expanding the subterranean economy while undermining America’s tax base. But all these objections ignore a critical reality: The best encryption is already in the hands of criminals, terrorists, and the Russians (who will probably sell them to the former). And, by not using our best technology against cyberwar, we only encourage cyberterrorists to develop their emerging capabilities even further.
In addition to freeing up controls on encryption technology, the United States could do one other thing to protect the nation from cyberwar: recruit some hackers to its side. In the same way the United States courted and cultivated German rocket scientists after World War II, the government should bring some of the best cyberminds onto its team, in recognition that, for all the techno-glitz that surrounds cyberwar, human factors still reign supreme. Only the best hackers can tell you for certain whether you’ve designed a system that is impossible (or at least very difficult) for them to disable. We could buy all the hackers there are for the price of one F-15. And they’d do us a lot more good than an F-15 if we ever came face-to-face with the “Secret Sword of Allah.”
This article originally ran in the May 1, 2000, issue of the magazine.