By Larry Katzen
It remains one of the greatest travesties in the history of American business: In 2001, the 85,000 employees of one of the world’s largest accounting firms began losing their jobs in droves. Their employer had become tainted by its loose association with Enron Corp., a financial house of cards that was imploding and taking with it billions of dollars in employee pensions and shareholder investments.
In 2002, the accounting firm Arthur Andersen was convicted of obstruction of justice for shredding documents related to the Enron engagement. The charge had nothing to do with the quality of its auditing – or with any of Enron’s illicit practices. The conviction was appealed, and in 2005 the U.S. Supreme Court struck it down in a unanimous vote. But the damage had already been done.
To date, despite millions of records being subpoenaed, there is no evidence Arthur Andersen ever did anything wrong. Still, perceptions are everything: Most people are not aware that the accounting firm, which led the industry in establishing strict, high standards, became a government scapegoat.
When I speak to groups across the country, I ask the following questions. Below are the typical responses I receive – and the actual facts.
1. What do you remember about Arthur Andersen?
Typical Response: They were the ones that helped facilitate the Enron fraud. They deserved what they got.
Fact: Arthur Andersen was the largest and most prestigious firm in the country. It was considered the gold standard of the accounting profession by the business community.
2. For what was Arthur Andersen indicted?
Typical Response: They messed up the audit of Enron and signed off on false financial statements.
Fact: They were indicted for shredding documents. These documents were drafts and other items that did not support the final work product. All accounting firms establish policies for routinely shredding such documents.
3. How long was it between the Enron blowup and when Arthur Andersen went out of business?
Typical Response: One to three years.
Fact: The largest accounting firm in the world was gone in 90 days.
4. Was the indictment upheld?
Typical Response: Yes, that is why they went out of business.
Fact: No. The Supreme Court overruled the lower court in a 9-0 decision, and it reached that decision within weeks, one of its quickest ever.
5. How many people lost their jobs as a result of the false accusations?
Typical Response: Have no idea, but the partners got what they deserved.
Fact: Eighty-five thousand people lost their jobs, and only a few thousand of them were partners. Most were staff and clerical employees earning modest salaries.
6. Who benefited from Arthur Andersen going out of business?
Typical Response: Everyone – we finally got rid of those crooks and made a statement to the rest of business to operate ethically.
Facts: It was not the Arthur Andersen people; they lost their jobs. It was not the clients; they had to go through the stress and expense of finding a new auditing firm. It was not the business world in general: it now has fewer firms from which to choose, and rates increased. It was their competitors who benefited – they got Andersen’s best people and clients and were able to increase their rates and profitability.
7. What accounting firms now have former Arthur Andersen partners playing leadership roles in their firms?
Typical Response: None
Facts: The “big four,” all the large middle-tier firms, and many small firms have former Arthur Andersen partners in leadership positions. Finally, the new Public Company Accounting Oversight Board (PCAOB), which oversees these firms, now has former Arthur Andersen people involved in reviewing their quality.
About Larry Katzen
Larry Katzen, author of “And You Thought Accountants were Boring – My Life Inside Arthur Andersen,” (www.LarryRKatzen.com), worked at Arthur Andersen from 1967 to 2002, quickly rising through the ranks to become a partner at age 30. His new memoir details the government’s unjust persecution of a company known for maintaining the highest standards.
The cryptography expert Bruce Schneier, who has been writing about computer security for more than fifteen years, is not given to panic or hyperbole. So when he writes, of the “catastrophic bug” known as Heartbleed, “On the scale of 1 to 10, this is an 11,” it’s safe to conclude that the Internet has a serious problem. The bug, which was announced on Tuesday—complete with an explanatory Web site and a bleeding-heart logo—is a vulnerability in a widely used piece of encryption software called OpenSSL.
Heartbleed is as bad as it is possible for a security flaw to be. It can be easily exploited by anyone on the Internet without leaving a trace, and it can be used to obtain login names, passwords, credit-card information, and even the keys that keep our encrypted communications safe from eavesdroppers. The bug first appeared in OpenSSL code that was released in March, 2012—so the vulnerability has been open to exploitation for more than two years. The Internet-security firm Netcraft reported that up to five hundred thousand sites thought to be secure were, in fact, vulnerable—including Twitter, Yahoo, Tumblr, and Dropbox.
When you log on to a secure Web site—your bank’s, for example—you see a green-padlock icon at the top of your browser window, which confirms that your connection is secure. In order for browsers to communicate securely with servers, there is a standard set of steps that both sides must perform to create, and to maintain, that secure connection. This protocol is called Transport Layer Security, or T.L.S., and everything that it requires from both sides of a secure connection is laid out in a document called RFC 5246, which describes something like the Platonic ideal of a secure Internet connection. Of course, RFC 5246 cannot, by itself, be used to keep your bank account safe. To do that, someone has to write software that will make your Web browser and your bank’s Web server actually follow the steps that RFC 5246 delineates.
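To make this concrete, here is a minimal sketch in Python of the client side of that arrangement. The standard-library `ssl` module wraps OpenSSL itself, and its default settings enforce the checks the protocol requires — certificate verification and hostname matching — which is the machinery behind the browser’s green padlock. (This is an illustrative sketch, not anyone’s production configuration.)

```python
import ssl

# A client-side TLS context. ssl.create_default_context() configures
# Python's OpenSSL bindings with the checks the protocol demands:
# the server must present a certificate that a trusted authority
# vouches for, and the certificate must match the hostname.
context = ssl.create_default_context()

# Certificate verification is mandatory by default: an unverifiable
# server ends the handshake with an error instead of a connection.
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

A context like this would then be used to wrap an ordinary socket before any sensitive data is exchanged; if verification fails, no connection is made at all.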
Among programmers, cryptography is notorious for its difficulty—even a tiny mistake can render your seemingly secure code worthless—and the conventional wisdom is that, whenever possible, the implementation of cryptography should be left to the experts. Since 1998, one way that programmers have been able to avoid implementing encryption protocols themselves has been to use an open-source library called OpenSSL. A code “library” is just a set of common functions that programmers can use within their own code, rather than having to write them from scratch. If many people are all using the same library, and the code is open-source—so that anyone can check it for bugs—it should be more reliable and more secure than code that any one person or firm could create alone.
Heartbleed is a bug in OpenSSL’s implementation of a small part of the T.L.S. protocol, called the heartbeat extension. A “heartbeat,” in this context, is like the “beep… beep…” of a hospital heart monitor: a quick way to check that the other end of a secure connection is still there. One side sends the other side a small piece of data, up to sixty-four kilobytes long, along with a number indicating the size of the data that has been sent. The other side is supposed to send back the exact same piece of data to confirm that the connection is still active. Unfortunately, in OpenSSL the replying side looks at the stated size of the data rather than at the actual size, and it always sends back the amount of data that the request asked for, no matter how much was sent. This means that if the stated amount of data is more than the amount actually provided, the response contains the data that was sent plus however much additional data, drawn from the contents of the computer’s system memory, is required to match the amount requested.
Here is why this is so bad: the heartbeat response can contain up to sixty-four kilobytes of whatever data happens to be in the server’s random access memory at the moment the request arrives. There is no way to predict what that memory will contain, but system memory routinely contains login names, passwords, secure certificates, and access tokens of all kinds. System memory is temporary: it is erased when a computer is shut down, and the data it holds is written and overwritten all the time. It is generally regarded as safe to load things like cryptographic keys or unencrypted passwords into system memory—indeed, there is little a computer can usefully do without temporarily storing pieces of sensitive data in its system memory. The Heartbleed bug allows an attacker to “bleed” out random drops of this memory simply by asking for it. Heartbeat requests aren’t usually logged or monitored in any way, so an attack leaves no trace. It’s not even possible to distinguish malicious heartbeat requests from authentic requests without close analysis. So an attacker can request new pieces of system memory over and over again; it’s almost impossible for the victim to know they’ve been targeted, let alone to know what data might have been stolen.
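The flawed logic can be illustrated with a toy simulation in Python. This is not OpenSSL’s actual code — the real bug lives in C — but the shape of the mistake is the same: the responder trusts the length *stated* in the request rather than the actual size of the payload, so a dishonest request pulls in whatever happens to sit next to the payload in memory. The buffer contents below are invented for illustration.

```python
# Pretend this bytearray is a stretch of the server's system memory.
# The heartbeat payload ("PING") sits at the front; sensitive data
# happens to sit immediately after it.
memory = bytearray(b"PING" + b"password=hunter2;session=abc123")

def heartbeat_response(stated_len: int) -> bytes:
    # Buggy behavior: echo back stated_len bytes starting at the
    # payload, with no check that stated_len matches the payload's
    # actual size. This mirrors Heartbleed's missing bounds check.
    return bytes(memory[:stated_len])

# An honest request: the payload is four bytes, and the stated
# length is four, so only the payload comes back.
print(heartbeat_response(4))    # b'PING'

# A malicious request: the payload is still four bytes, but the
# attacker claims it was thirty-five -- and receives the payload
# plus the adjacent secrets.
print(heartbeat_response(35))
```

The fix, correspondingly, is a single comparison: if the stated length exceeds the actual payload length, discard the request instead of answering it.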
Among the items that can be found in a server’s system memory are the keys to cryptographically secured connections and the certificates that allow servers to prove they are what they claim to be. An attacker who steals cryptographic keys could use them to decode and read encrypted data that had previously been intercepted; an attacker who steals certificates could use them to mimic a secure site and to intercept communications. In other words, your browser could be tricked into thinking that it’s connected securely to your bank and instead be connected to an intermediary that can read all the data flowing back and forth.
In the worst-case scenario, criminal enterprises, intelligence agencies, and state-sponsored hackers have known about Heartbleed for more than two years, and have used it to systematically access almost everyone’s encrypted data. If this is true, then anyone who does anything on the Internet has likely been affected by the bug.
But, before you panic, it is worth remembering that, at this point, we don’t know how close we are to the worst-case scenario. It is possible, though improbable, that the security researchers who exposed this flaw were, in fact, the first people to find it, which would mean that it has only been known about, and exploited, for a few days. (It was found, independently, by a team of security researchers at Codenomicon and Neel Mehta, of Google Security.) At the same time the bug was announced, a new, secure version of OpenSSL was released, and updating most of the affected servers is a straightforward task. Major services like Google and Yahoo have already patched the vulnerability. Engineers did not need to stay up all night in a mad scramble to make repairs, but, as one system administrator told me, the nature of the bug made this something more than a routine update. “It’s an update, a configuration change, and a notification to your users that there’s no way to know if their data was stolen or not,” he said. To be safe, identity certificates for servers and users must be revoked and then reissued. The fix, in other words, is both urgent and tedious, which is the worst kind of job for a programmer or system administrator.
As a user, what can you do to protect yourself? Not very much, unfortunately. The standard advice is to change your passwords, but if a service is still vulnerable then changing your password just makes it more likely that it will be the one sitting in a leaked chunk of system memory. It is also not easy to determine whether a particular service you use is still vulnerable. If a provider suggests that you change your password, do so immediately; otherwise, it may be better to wait a few days. If you have the option to enable two-factor security, which requires more than just a password, you should do so on every service where it’s available.
How did such a catastrophic bug remain undetected for two years? OpenSSL, which is used to secure as many as two-thirds of all encrypted Internet connections, is a volunteer project. It is overseen by four people: one works for the open-source software company Red Hat, one works for Google, and two are consultants. There is nobody whose full-time job it is to work on OpenSSL.
The project’s code is more than fifteen years old, and it has a reputation for being dense, as well as difficult to maintain and to improve. Since the bug was revealed, other programmers have harshly criticized what they regard as a mistake that could easily have been avoided. Theo de Raadt, the project leader for an open-source operating system called OpenBSD, put it bluntly in a message to a mailing list: “OpenSSL is not developed by a responsible team.” The portion of the code where the bug was found is written in a programming language called C, which was first developed, at Bell Labs, between 1969 and 1973. C is a finicky and old-fashioned language that puts great demands on programmers to manage the use of system memory. No modern language would let this sort of memory leakage take place, because newer languages automatically manage memory use.
Unlike a rusting highway bridge, digital infrastructure does not betray the effects of age. And, unlike roads and bridges, large portions of the software infrastructure of the Internet are built and maintained by volunteers, who get little reward when their code works well but are blamed, and sometimes savagely derided, when it fails. To some degree, this is beginning to change: venture-capital firms have made substantial investments in code-infrastructure projects, like GitHub and the Node Package Manager. But money and support still tend to flow to the newest and sexiest projects, while boring but essential elements like OpenSSL limp along as volunteer efforts. It’s easy to take open-source software for granted, and to forget that the Internet we use every day depends in part on the freely donated work of thousands of programmers. If open-source software is at the heart of the Internet, then we might need to examine it from time to time to make sure it’s not bleeding.