The Real Story of Stuxnet
How Kaspersky Lab tracked down the malware that stymied Iran’s nuclear-fuel enrichment program
Computer cables snake across the floor. Cryptic flowcharts are scrawled across various whiteboards adorning the walls. A life-size Batman doll stands in the hall. This office might seem no different from any other geeky workplace, but in fact it’s the front line of a war—a cyberwar, where most battles play out not in remote jungles or deserts but in suburban office parks like this one. As a senior researcher for Kaspersky Lab, a leading computer security firm based in Moscow, Roel Schouwenberg spends his days (and many nights) here at the lab’s U.S. headquarters in Woburn, Mass., battling the most insidious digital weapons ever devised, capable of crippling water supplies, power plants, banks, and the very infrastructure that once seemed invulnerable to attack.
Recognition of such threats exploded in June 2010 with the discovery of Stuxnet, a 500-kilobyte computer worm that infected the software of at least 14 industrial sites in Iran, including a uranium-enrichment plant. Although a computer virus relies on an unwitting victim to install it, a worm spreads on its own, often over a computer network.
This worm was an unprecedentedly masterful and malicious piece of code that attacked in three phases. First, it targeted Microsoft Windows machines and networks, repeatedly replicating itself. Then it sought out Siemens Step7 software, which is also Windows-based and used to program industrial control systems that operate equipment, such as centrifuges. Finally, it compromised the programmable logic controllers. The worm’s authors could thus spy on the industrial systems and even cause the fast-spinning centrifuges to tear themselves apart, unbeknownst to the human operators at the plant. (Iran has not confirmed reports that Stuxnet destroyed some of its centrifuges.)
Stuxnet could spread stealthily between computers running Windows—even those not connected to the Internet. If a worker stuck a USB thumb drive into an infected machine, Stuxnet could, well, worm its way onto it, then spread onto the next machine that read that USB drive. Because someone could unsuspectingly infect a machine this way, letting the worm proliferate over local area networks, experts feared that the malware had perhaps gone wild across the world.
In October 2012, U.S. defense secretary Leon Panetta warned that the United States was vulnerable to a “cyber Pearl Harbor” that could derail trains, poison water supplies, and cripple power grids. The next month, Chevron confirmed the speculation by becoming the first U.S. corporation to admit that Stuxnet had spread across its machines.
Although the authors of Stuxnet haven’t been officially identified, the size and sophistication of the worm have led experts to believe that it could have been created only with the sponsorship of a nation-state, and although no one’s owned up to it, leaks to the press from officials in the United States and Israel strongly suggest that those two countries did the deed. Since the discovery of Stuxnet, Schouwenberg and other computer-security engineers have been fighting off other weaponized viruses, such as Duqu, Flame, and Gauss, an onslaught that shows no signs of abating.
This marks a turning point in geopolitical conflicts, when the apocalyptic scenarios once only imagined in movies like Live Free or Die Hard have finally become plausible. “Fiction suddenly became reality,” Schouwenberg says. But the hero fighting against this isn’t Bruce Willis; he’s a scruffy 27-year-old with a ponytail. Schouwenberg tells me, “We are here to save the world.” The question is: Does Kaspersky Lab have what it takes?
Viruses weren’t always this malicious. In the 1990s, when Schouwenberg was just a geeky teen in the Netherlands, malware was typically the work of pranksters and hackers, people looking to crash your machine or scrawl graffiti on your AOL home page.
After discovering a computer virus on his own, the 14-year-old Schouwenberg contacted Kaspersky Lab, one of the leading antivirus companies. Such companies are judged in part on how many viruses they are first to detect, and Kaspersky was considered among the best. But with its success came controversy. Some accused Kaspersky of having ties with the Russian government—accusations the company has denied.
A few years after that first overture, Schouwenberg e-mailed founder Eugene Kaspersky, asking him whether he should study math in college if he wanted to be a security specialist. Kaspersky replied by offering the 17-year-old a job, which he took. After spending four years working for the company in the Netherlands, he went to the Boston area. There, Schouwenberg learned that an engineer needs specific skills to fight malware. Because most viruses are written for Windows, reverse engineering them requires knowledge of x86 assembly language.
Over the next decade, Schouwenberg was witness to the most significant change ever in the industry. The manual detection of viruses gave way to automated methods designed to find as many as 250,000 new malware files each day. At first, banks faced the most significant threats, and the specter of state-against-state cyberwars still seemed distant. “It wasn’t in the conversation,” says Liam O’Murchu, an analyst for Symantec Corp., a computer-security company in Mountain View, Calif.
All that changed in June 2010, when a Belarusian malware-detection firm got a request from a client to determine why its machines were rebooting over and over again. The malware was signed by a digital certificate to make it appear that it had come from a reliable company. This feat caught the attention of the antivirus community, whose automated-detection programs couldn’t handle such a threat. This was the first sighting of Stuxnet in the wild.
The danger posed by forged signatures was so frightening that computer-security specialists began quietly sharing their findings over e-mail and on private online forums. That’s not unusual. “Information sharing [in the] computer-security industry can only be categorized as extraordinary,” adds Mikko H. Hypponen, chief research officer for F-Secure, a security firm in Helsinki, Finland. “I can’t think of any other IT sector where there is such extensive cooperation between competitors.” Still, companies do compete—for example, to be the first to identify a key feature of a cyberweapon and then cash in on the public-relations boon that results.
Before they knew what targets Stuxnet had been designed to go after, the researchers at Kaspersky and other security firms began reverse engineering the code, picking up clues along the way: the number of infections, the fraction of infections in Iran, and the references to Siemens industrial programs, which are used at power plants.
Schouwenberg was most impressed by Stuxnet’s having performed not just one but four zero-day exploits, hacks that take advantage of vulnerabilities previously unknown to the white-hat community. “It’s not just a groundbreaking number; they all complement each other beautifully,” he says. “The LNK [a file shortcut in Microsoft Windows] vulnerability is used to spread via USB sticks. The shared print-spooler vulnerability is used to spread in networks with shared printers, which is extremely common in Internet Connection Sharing networks. The other two vulnerabilities have to do with privilege escalation, designed to gain system-level privileges even when computers have been thoroughly locked down. It’s just brilliantly executed.”
Schouwenberg and his colleagues at Kaspersky soon concluded that the code was too sophisticated to be the brainchild of a ragtag group of black-hat hackers. Schouwenberg believes that a team of 10 people would have needed at least two or three years to create it. The question was, who was responsible?
It soon became clear, in the code itself as well as from field reports, that Stuxnet had been specifically designed to subvert Siemens systems running centrifuges in Iran’s nuclear-enrichment program. The Kaspersky analysts then realized that financial gain had not been the objective. It was a politically motivated attack. “At that point there was no doubt that this was nation-state sponsored,” Schouwenberg says. This phenomenon caught most computer-security specialists by surprise. “We’re all engineers here; we look at code,” says Symantec’s O’Murchu. “This was the first real threat we’ve seen where it had real-world political ramifications. That was something we had to come to terms with.”
In May 2012, Kaspersky Lab received a request from the International Telecommunication Union, the United Nations agency that manages information and communication technologies, to study a piece of malware that had supposedly destroyed files from oil-company computers in Iran. By now, Schouwenberg and his peers were already on the lookout for variants of the Stuxnet virus. They knew that in September 2011, Hungarian researchers had uncovered Duqu, which had been designed to steal information about industrial control systems.
While pursuing the U.N.’s request, Kaspersky’s automated system identified another Stuxnet variant. At first, Schouwenberg and his team concluded that the system had made a mistake, because the newly discovered malware showed no obvious similarities to Stuxnet. But after diving into the code more deeply, they found traces of another file, called Flame, in the early iterations of Stuxnet. Flame and Stuxnet had been considered totally independent, but now the researchers realized that Flame was actually a precursor to Stuxnet that had somehow gone undetected.
Flame was 20 megabytes in total, or some 40 times as big as Stuxnet. Security specialists realized, as Schouwenberg puts it, that “this could be nation-state again.”
To analyze Flame, Kaspersky used a technique it calls the “sinkhole.” This entailed taking control of Flame’s command-and-control server domain so that when Flame tried to communicate with the server in its home base, it actually sent information to Kaspersky’s server instead. It was difficult to determine who owned Flame’s servers. “With all the available stolen credit cards and Internet proxies,” Schouwenberg says, “it’s really quite easy for attackers to become invisible.”
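The mechanics of a sinkhole are simple enough to sketch. In the toy Python below (every name is hypothetical, and real sinkholing involves registrars, DNS servers, and live traffic rather than a dictionary), the key move is that once the hardcoded command-and-control domain resolves to the researchers’ machine, the malware’s own beacons deliver the evidence:

```python
# Before the takeover, the malware's hardcoded C&C domain resolves to the
# attackers' server. (Domain and server names here are invented.)
dns_table = {"cc-example.net": "attacker-server"}

collected = []  # what the sinkhole operator ends up seeing

def beacon(domain, stolen_data):
    """The malware resolves its hardcoded domain and reports to whoever answers."""
    destination = dns_table[domain]
    if destination == "sinkhole-server":
        collected.append(stolen_data)  # researchers now receive the traffic
    return destination

# Researchers take control of the domain and point it at their own server.
dns_table["cc-example.net"] = "sinkhole-server"

dest = beacon("cc-example.net", b"infected-host-report")
```

The malware’s code is untouched; only the answer to its DNS lookup changes, which is why controlling the domain was enough for Kaspersky to intercept Flame’s reports.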
While Stuxnet was meant to destroy things, Flame’s purpose was merely to spy on people. Spread over USB sticks, it could infect printers shared over the same network. Once Flame had compromised a machine, it could stealthily search for keywords on top-secret PDF files, then make and transmit a summary of the document—all without being detected.
Indeed, Flame’s designers went “to great lengths to avoid detection by security software,” says Schouwenberg. He offers an example: Flame didn’t simply transmit the information it harvested all at once to its command-and-control server, because network managers might notice that sudden outflow. “Data’s sent off in smaller chunks to avoid hogging available bandwidth for too long,” he says.
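The chunking trick Schouwenberg describes is a standard one, and a minimal sketch makes the idea concrete (this is an illustration of the general technique, not Flame’s actual code):

```python
def chunked(data, chunk_size):
    """Yield data in fixed-size pieces so no single transfer dominates the link."""
    for i in range(0, len(data), chunk_size):
        yield data[i:i + chunk_size]

# A 1,000-byte harvest leaves the network as four small, unremarkable transfers
# instead of one conspicuous burst.
payload = b"x" * 1000
chunks = list(chunked(payload, 256))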
Most impressively, Flame could exchange data with any Bluetooth-enabled device. In fact, the attackers could steal information or install other malware not only within Bluetooth’s standard 30-meter range but also farther out. A “Bluetooth rifle”—a directional antenna linked to a Bluetooth-enabled computer, plans for which are readily available online—could do the job from nearly 2 kilometers away.
But the most worrisome thing about Flame was how it got onto machines in the first place: via an update to the Windows 7 operating system. A user would think she was simply downloading a legitimate patch from Microsoft, only to install Flame instead. “Flame spreading through Windows updates is more significant than Flame itself,” says Schouwenberg, who estimates that there are perhaps only 10 programmers in the world capable of engineering such behavior. “It’s a technical feat that’s nothing short of amazing, because it broke world-class encryption,” says F-Secure’s Hypponen. “You need a supercomputer and loads of scientists to do this.”
If the U.S. government was indeed behind the worm, this circumvention of Microsoft’s encryption could create some tension between the company and its largest customer, the Feds. “I’m guessing Microsoft had a phone call between Bill Gates, Steve Ballmer, and Barack Obama,” says Hypponen. “I would have liked to listen to that call.”
While reverse engineering Flame, Schouwenberg and his team fine-tuned their “similarity algorithms”—essentially, their detection code—to search for variants built on the same platform. In July, they found Gauss. Its purpose, too, was cybersurveillance.
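Similarity detection of this general kind often reduces to comparing the overlapping byte sequences two samples share. The toy version below (a common textbook approach, not Kaspersky’s proprietary method; the byte strings are invented stand-ins for disassembled code) scores two samples by the Jaccard similarity of their 4-byte n-gram sets:

```python
def ngrams(data, n=4):
    """Set of all overlapping n-byte substrings of a sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a, b, n=4):
    """Jaccard similarity of the samples' n-gram sets, from 0.0 to 1.0."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb)

# Two variants built on one platform share long stretches of identical bytes;
# unrelated code shares almost none.
base      = b"push ebp; mov ebp, esp; call decrypt_strings"
variant   = b"push ebp; mov ebp, esp; call decode_strings"
unrelated = b"hello world, nothing shared here at all!"

s_variant   = similarity(base, variant)
s_unrelated = similarity(base, unrelated)
```

A scanner tuned this way flags anything whose score against a known sample crosses a threshold — which is how a Stuxnet-derived detector could surface Gauss.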
Carried from one computer to another on a USB stick, Gauss would steal files and gather passwords, targeting Lebanese bank credentials for unknown reasons. (Experts speculate that this was either to monitor transactions or siphon money from certain accounts.) “The USB module grabs information from the system—next to the encrypted payload—and stores this information on the USB stick itself,” Schouwenberg explains. “When this USB stick is then inserted into a Gauss-infected machine, Gauss grabs the gathered data from the USB stick and sends it to the command-and-control server.”
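The relay Schouwenberg describes amounts to using the USB stick as a dead drop between machines that can’t reach the Internet and one that can. A toy model (all names hypothetical, with Python lists standing in for hidden files on the stick):

```python
class UsbStick:
    def __init__(self):
        self.hidden_store = []  # data stashed on the stick, next to the payload

def visit_offline_machine(stick, harvested):
    """On an infected but offline target, the USB module harvests data onto the stick."""
    stick.hidden_store.append(harvested)

def visit_connected_machine(stick, c2_outbox):
    """Back on a Gauss-infected, Internet-connected machine, the cache is
    forwarded to the command-and-control server and wiped from the stick."""
    c2_outbox.extend(stick.hidden_store)
    stick.hidden_store.clear()

stick = UsbStick()
visit_offline_machine(stick, "offline-host: gathered credentials")
outbox = []  # stands in for traffic to the C&C server
visit_connected_machine(stick, outbox)
```

The stick does the traveling, so even data from machines with no network path to the attackers eventually reaches them.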
Just as Kaspersky’s engineers were tricking Gauss into communicating with their own servers, those very servers suddenly went down, leading the engineers to think that the malware’s authors were quickly covering their tracks. Kaspersky had already gathered enough information to protect its clients against Gauss, but the moment was chilling. “We’re not sure if we did something and the hackers were onto us,” Schouwenberg says.
The implications of Flame and Stuxnet go beyond state-sponsored cyberattacks. “Regular cybercriminals look at something that Stuxnet is doing and say, that’s a great idea, let’s copy that,” Schouwenberg says.
“The takeaway is that nation-states are spending millions of dollars of development for these types of cybertools, and this is a trend that will simply increase in the future,” says Jeffrey Carr, the founder and CEO of Taia Global, a security firm in McLean, Va. Although Stuxnet may have temporarily slowed the enrichment program in Iran, it did not achieve its end goal. “Whoever spent millions of dollars on Stuxnet, Flame, Duqu, and so on—all that money is sort of wasted. That malware is now out in the public spaces and can be reverse engineered,” says Carr.
Hackers can simply reuse specific components and technology available online for their own attacks. Criminals might use cyberespionage to, say, steal customer data from a bank or simply wreak havoc as part of an elaborate prank. “There’s a lot of talk about nations trying to attack us, but we are in a situation where we are vulnerable to an army of 14-year-olds who have two weeks’ training,” says Schouwenberg.
The vulnerability is great, particularly that of industrial machines. All it takes is the right Google search terms to find a way into the systems of U.S. water utilities, for instance. “What we see is that a lot of industrial control systems are hooked up to the Internet,” says Schouwenberg, “and they don’t change the default password, so if you know the right keywords you can find these control panels.”
Companies have been slow to invest the resources required to update industrial controls. Kaspersky has found critical-infrastructure companies running 30-year-old operating systems. In Washington, politicians have been calling for laws to require such companies to maintain better security practices. One cybersecurity bill, however, was stymied in August on the grounds that it would be too costly for businesses. “To fully provide the necessary protection in our democracy, cybersecurity must be passed by the Congress,” Panetta recently said. “Without it, we are and we will be vulnerable.”
In the meantime, virus hunters at Kaspersky and elsewhere will keep up the fight. “The stakes are just getting higher and higher and higher,” Schouwenberg says. “I’m very curious to see what will happen 10, 20 years down the line. How will history look at the decisions we’ve made?”
About the Author
David Kushner, a Spectrum contributing editor, has always been fascinated with tricksters and their opponents, but his article on how Kaspersky Lab detected the Stuxnet worm is the first piece he’s written about state-on-state cyberwar.
Researchers have uncovered a never-before-seen version of Stuxnet. The discovery sheds new light on the evolution of the powerful cyberweapon that made history when it successfully sabotaged an Iranian uranium-enrichment facility in 2009.
Stuxnet 0.5 is the oldest known version of the computer worm and was in development no later than November of 2005, almost two years earlier than previously known, according to researchers from security firm Symantec. The earlier iteration, which was in the wild no later than November 2007, wielded an alternate attack strategy that disrupted Iran’s nuclear program by surreptitiously closing valves in that country’s Natanz uranium enrichment facility. Later versions scrapped that attack in favor of one that caused centrifuges to spin erratically. The timing and additional attack method are a testament to the technical sophistication and dedication of its developers, who reportedly developed Stuxnet under a covert operation sponsored by the US and Israeli governments. It was reportedly personally authorized by Presidents Bush and Obama.
Also significant, version 0.5 shows that its creators were some of the same developers who built Flame, the highly advanced espionage malware also known as Flamer that targeted sensitive Iranian computers. Although researchers from competing antivirus provider Kaspersky Lab previously discovered a small chunk of the Flame code in a later version of Stuxnet, the release unearthed by Symantec shows that the code sharing was once so broad that the two covert projects were inextricably linked.
“What we can conclude from this is that Stuxnet coders had access to Flamer source code, and they were originally using the Flamer source code for the Stuxnet project,” said Liam O’Murchu, manager of operations for Symantec Security Response. “With version 0.5 of Stuxnet, we can say that the developers had access to the exact same code. They were not just using shared components. They were using the exact same code to build the projects. And then, at some point, the development [of Stuxnet and Flame] went in two different directions.”
Symantec officials announced the discovery on Tuesday at the RSA security conference in San Francisco, where they also released a paper outlining the researchers’ findings.
The 600K worth of code found in Stuxnet 0.5 is highly modular, just as it was in the 500K Stuxnet 1.0. The encryption algorithms, string objects, and logging functions in the earlier version are almost identical to those of Flame. In contrast, the later Stuxnet version largely eschewed the development conventions of Flame, as Stuxnet developers adhered more to the so-called tilded platform shared with Duqu, another piece of sophisticated espionage malware that targeted Middle Eastern computer systems.
Most significantly, the earlier Stuxnet version contained an alternate method of sabotaging Iran’s nuclear-enrichment process, the details of which had never been fully understood. It injected malicious code into the instructions sent to 417 series programmable logic controllers (PLCs) made by the German conglomerate Siemens. Natanz engineers used the PLCs to open and shut valves that fed uranium hexafluoride (UF6) gas into centrifuge groupings. Stuxnet 0.5 closed specific valves prematurely, causing pressure to grow as much as five times higher than normal. Under those conditions, the gas would likely turn into a solid and destroy the centrifuges, possibly even the sensitive equipment used to develop them.
One of the domain names hardcoded into version 0.5 was registered in November 2005, while data on malware-scanning service VirusTotal shows that the version was in the wild no later than November 2007. This means that the Stuxnet attackers’ detailed familiarity with Iran’s nuclear facilities dates back much earlier than previously known. It suggests that espionage malware such as Flame, Duqu, or a still-unknown strain had burrowed into Iranian systems months or years before development work on Stuxnet began.
“The attacker had to have extremely good knowledge of how Natanz operated in order to build this code,” O’Murchu said of version 0.5. “They also needed to know the exact layout of the cascade and centrifuges, and they needed to know that they were using 417 PLCs.”
Stuxnet 0.5 was programmed to wait 30 to 35 days between the time it took control of a computer and the time it launched the valve attack, which took two to three hours to complete. That month-long wait gave the program time to gather normal equipment readings that would be replayed while the attack was in progress, to prevent operators from knowing anything was amiss in the enrichment process. The malware also contained code that prevented engineers from manipulating the valves during the attack. Like later versions, Stuxnet 0.5 was programmed to attack only equipment containing labels found in Iran’s Natanz facility, presumably to prevent malfunctions in other plants. The record-and-replay capability likewise carried over into later Stuxnet versions.
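The record-and-replay trick can be sketched in a few lines. In the hypothetical shim below (a simplified illustration, not reconstructed Stuxnet logic), sensor values pass through untouched during the quiet period while being logged; once the attack begins, operators are shown the old recordings instead of the real, abnormal readings:

```python
class ReplayShim:
    def __init__(self):
        self.recorded = []   # normal readings logged during the waiting period
        self.attacking = False
        self._i = 0

    def read_sensor(self, true_value):
        """Pass values through while recording; replay old ones during the attack."""
        if not self.attacking:
            self.recorded.append(true_value)
            return true_value
        shown = self.recorded[self._i % len(self.recorded)]
        self._i += 1
        return shown  # the operator never sees the true, abnormal value

shim = ReplayShim()
normal = [shim.read_sensor(v) for v in (100, 101, 99)]  # quiet period: pass-through
shim.attacking = True
displayed = shim.read_sensor(500)  # the real reading is wildly abnormal
```

During the simulated attack, the display shows 100 — a value recorded weeks earlier — rather than the true reading of 500.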
Up to now, however, no one has seen the attack targeting the valves. Instead, as reported by Wired reporter Kim Zetter in 2011, Stuxnet 1.x versions used an entirely different attack strategy that tampered with the computerized frequency converters controlling the speed at which centrifuges spun during the enrichment process. By injecting code into the PLCs that controlled the centrifuge speeds, 1.x versions caused them to spin too fast and then spin too slow, resulting in fatal damage to key parts of the enrichment process.
Unlike later versions of the worm, 0.5 used a single exploit to spread from computer to computer. Specifically, it exploited a vulnerability in the Siemens Simatic Step7 software that developers use to program PLCs. Once a computer was infected, any removable drive connected to it that contained Step7 files would be infected. When the infected USB drive was later plugged into another computer, it would become infected as soon as the user opened the malicious Step7 files. The exploit was dubbed a “DLL preloading attack” because it allowed Stuxnet to execute malicious dynamic link library (DLL) files on targeted computers running Microsoft Windows.
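The essence of a DLL-preloading attack is a library search order that checks the opened document’s own directory before trusted system locations, so a malicious DLL planted beside a project file “shadows” the legitimate one. A toy model (the DLL name and dictionaries are invented; real resolution is done by the Windows loader, not application code):

```python
# Trusted system location holding the real library. (Name is hypothetical.)
SYSTEM_DIR = {"s7utils.dll": "legitimate code"}

def load_dll(name, document_dir):
    """Resolve a DLL the way a vulnerable search order would:
    the opened document's own folder is consulted first."""
    for location in (document_dir, SYSTEM_DIR):
        if name in location:
            return location[name]
    raise FileNotFoundError(name)

# An attacker plants a same-named DLL next to the Step7 project file on a USB stick.
usb_project_dir = {"s7utils.dll": "malicious code"}
loaded = load_dll("s7utils.dll", usb_project_dir)
```

Opening the booby-trapped project from the stick loads the planted library; opening a clean project falls through to the legitimate one, which is why the infection only triggered when the malicious Step7 files were opened.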
O’Murchu said there’s no way of knowing if Stuxnet 0.5 ever carried out the highly advanced attack on the Siemens 417 controllers inside Natanz. It’s also impossible to know how many systems were infected by it. But given changes that were introduced in subsequent versions, it’s reasonable to speculate that Stuxnet developers were unhappy with the infection rate of the earlier version and sought new ways to make their malware more aggressive.
Specifically, later versions of Stuxnet relied on at least five previously unknown vulnerabilities to self-replicate, including two zero-day vulnerabilities in Windows that caused Stuxnet to infect computers as soon as a compromised USB drive was connected. As a result, 1.x versions ended up leaving a wide swath of collateral damage when they infected an estimated 100,000 computers, the vast majority of which had nothing to do with Iran’s uranium-enrichment program. While the PLC attacks were only activated on computers located in the Natanz facility, the mass infection still proved costly to network operators all over the world.
The targeting of valves in Version 0.5 instead of centrifuge speeds in 1.x also demonstrates the contrasting strategies for sabotaging the uranium-enrichment process once Stuxnet took hold. Although signs of the attack targeting the 417 PLCs can be found in later versions, crucial values needed to carry out the attack were stripped out, making it impossible for researchers to know exactly what the exploit did. It remains unclear why later versions of Stuxnet pursued the alternate strategy targeting the Siemens 315 PLCs that controlled the frequency converters of the centrifuges.
“Perhaps the attackers decided the 417 code wasn’t working, so they went with the simpler 315 attack strategy instead,” O’Murchu said. “Or perhaps they thought the [Natanz] operators had figured out the problems with the valves and then worked around it and they decided to go with a different attack strategy in 315 just to mix things up.”
Also making it hard to track the success of Stuxnet 0.5 is the absence of any data indicating an early disruption in operations at the Natanz facility. By contrast, in January 2010, inspectors with the International Atomic Energy Agency detected Natanz technicians jettisoning between 1,000 and 2,000 damaged centrifuges, numbers consistent with the specific attack contained in 1.x versions. There is no analogous data pointing to the success of the earlier malware.
“Deliver what the mind can dream”
The newly discovered version 0.5 also displays similarities to Flame in the way the attackers went about camouflaging the command-and-control servers used to send updates to infected machines. The earlier Stuxnet was programmed to connect to servers with four different domain names, each disguised as hosting a website for a nonexistent advertising agency called Media Suffix. The sites included smartclick.org, best-advertising.net, internetadvertising4u.com, and ad-marketing.net. Ominously, their tagline, according to archived pages of the now defunct sites, read: “Deliver what the mind can dream.”
Similarly, the Flame espionage malware relied on servers that were disguised as publishing platforms running a fictitious content management application called Newsforyou. The disguises reduced the chances that the true purpose of sites would be discovered by people working at the data centers where they were hosted or by people who happened upon the sites while browsing the Internet.
Interestingly, the command servers had considerably less granular control over machines infected by Stuxnet 0.5 than they did over machines compromised by later versions. These earlier servers could only issue software updates, compared with command servers that received data about installed software and internal and external IP addresses on machines infected by later versions. Version 0.5 was also programmed to stop connecting to command servers on January 11, 2009, and to stop infecting new machines on July 4 of the same year.
A peer-to-peer update mechanism ensured that even non-Internet-connected machines inside a network would receive revised software based on the Stuxnet 0.5 release. It relied on a Windows component known as Mailslots. A Stuxnet-infected computer would use it to advertise its current version to other compromised machines. Stuxnet code would then use Mailslots to ensure any out-of-date machines received the latest patch. Versions 1.x of Stuxnet also featured peer-to-peer updating, although it relied on a technology known as remote procedure call.
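The update logic behaves like simple version gossip: each infected machine advertises its build number, and out-of-date peers pull the newer build from whoever has it. A minimal sketch (machine names are hypothetical, and a plain integer stands in for the advertised version; the real mechanism exchanged these advertisements over Windows mailslots):

```python
class Peer:
    """One infected machine on the local network."""
    def __init__(self, name, version):
        self.name, self.version = name, version

def gossip(peers):
    """One round of advertisements: every peer converges on the newest version."""
    newest = max(p.version for p in peers)
    for p in peers:
        if p.version < newest:
            p.version = newest  # pull the newer build from an up-to-date peer
    return newest

# Only one machine has seen the latest build; after a round, all of them have it.
network = [Peer("engineering-pc", 5), Peer("field-laptop", 7), Peer("plc-station", 5)]
gossip(network)
```

Because any single machine that ever touches the outside world can seed the whole group, the scheme works even on a network that is otherwise cut off from the Internet — exactly the property O’Murchu highlights below.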
“It’s interesting, because the targeted network for the Stuxnet attacks is known to be a non-Internet connected, or limited Internet connectivity, network, so being able to spread updates via a peer-to-peer mechanism is very important,” O’Murchu said. Stuxnet developers “couldn’t be dependent on Stuxnet waiting for a command from the C&C [server] before it dropped its malicious payload because when it got to the network that it really wanted to infect—you know, the uranium enrichment facility—it wouldn’t have Internet connectivity. So it all fits in with what we expect from Stuxnet and what we’ve seen with Stuxnet 1.0.”
In addition to not knowing how many systems were infected by Stuxnet 0.5 and whether its payload targeting centrifuge valves was ever executed, Symantec researchers—who besides O’Murchu also included Geoff McDonald, Stephen Doherty, and Eric Chien—still don’t know exactly how the early version made its initial foray into the wild. They suspect attackers intended someone who works on Step7 projects with or in the Natanz facility to unwittingly unleash the worm, but there’s no way right now to confirm that.
What is clear is that the work on the sophisticated malware used to launch a surgical strike on Iran’s nuclear program was undertaken no later than 2005, some five years before there’s any confirmation that it succeeded in its goal of disrupting that country’s uranium-enrichment process. Not only does that mean the operation began sooner than previously known, but the work also dates back to an era in which malware was considerably cruder than it is today.
“That’s a five- or six-year period there where the [Stuxnet] attackers were coming up with these incredibly advanced strategies for the time to attack these facilities,” O’Murchu said. “Back in 2005, we were dealing with hackers who were still doing it for notoriety—you know, people who were working out of their basement. I think that’s really quite amazing when you look at what else was going on in 2005. It stands out even more than it did before.”