Running head: Pressure (not) to Publish
Pressure (not) to Publish:
Discussing the Publication of Cyber Security Research
Karen Farthing
CSC540, Spring 2013
Murray State University
Abstract
Cyber security researchers are increasingly facing a daunting dilemma: to publish or not to publish? The ethical argument can be approached from two different perspectives. The first school of thought posits that any exploits discovered should be published, so that systems administrators are aware of the ever-evolving threat. The second school of thought, espoused largely by business and government, posits that new exploits should not be published, because publication leaves systems vulnerable to attack. It is a David and Goliath struggle, leaving researchers in the unenviable position of having to choose the hard right over the easy wrong. Legislation has been unable to keep pace with a rapidly changing technological landscape, leaving the line between legal and criminal behavior open to debate. So where does that leave the researcher? No man’s land.
Pressure (not) to Publish: Discussing the Publication of Cyber Security Research
Introduction
Cyber security researchers face an increasingly difficult battle when attempting to publish or present their work. Publishing security vulnerabilities is risky. Researchers must take care not to publish too much; for example, if a researcher publishes too much functional code, the vulnerability discussed could be exploited before patches can be applied. There are also no whistleblower protections in place for researchers. They face legal threats from businesses and governments, and fall victim to smear campaigns when companies don’t have a legal leg to stand on (Attrition.org, 2013). In the following pages, this paper will discuss legal and other barriers to publication; case histories that describe white hats, grey hats, black hats, and innovators; identification of factors that contribute to the issue; and identification of steps that might alleviate the problem.
Barriers to Publication
There are many legal vehicles that contribute to the limitations placed upon researchers who want to publish vulnerability reporting. Likewise, businesses and governments sometimes resort to less than legal means aimed at discouraging researchers from publishing information about security vulnerabilities.
Legal Barriers
Copyright Law is intended to protect a creator from unauthorized reproduction of his work. This applies to software, as well as music, video, and a number of other works. Security researchers must often make copies of software in order to find bugs or exploits, and this can violate copyright law (Electronic Frontier Foundation, 2013).
Trade Secret Law is intended to protect the proprietary works of businesses engaged in maintaining an edge over their competition. According to the Coder’s Rights Project FAQ from the Electronic Frontier Foundation, “…misappropriation of trade secrets can be both a civil and criminal offense. Generally, a trade secret is information that (1) derives independent economic value, actual or potential, from not being generally known to the public or to other persons who can obtain economic value from its disclosure or use; and (2) is the subject of efforts that are reasonable under the circumstances to maintain its secrecy. Misappropriation means a wrongful acquisition, use, or disclosure of a trade secret” (Electronic Frontier Foundation, 2013). Reverse engineering of software or hardware can constitute a violation of trade secret law. Companies often try to claim that security vulnerabilities are trade secrets, because if knowledge about a vulnerability were made public, it could damage their competitive advantage or adversely affect the value of their holdings.
Patent Law grants the creator of a work or invention sole use of it for a limited period of time. It is intended to prevent other parties from infringing upon the creator’s intellectual property during the period when that property has the most earning potential. Researchers can run afoul of patent law if they create a hardware hack that behaves or operates too similarly to another product currently under patent – regardless of how the researcher created the hack.
The Digital Millennium Copyright Act (DMCA) is the juggernaut that all security researchers must face. Any security researcher venturing into the arenas of Digital Rights Management (DRM) or technological protection measures must tread very, very carefully. Even when caution is exercised, researchers will most likely violate the DMCA at some point. The terms of the DMCA are broad and open to interpretation at every turn. Congress did, however, provide three limited circumstances under which security researchers can conduct reverse engineering, encryption research, and security research. Distribution of code or tools that circumvent the provisions of the DMCA can only occur in limited circumstances, and must be under the supervision of, and with permission from, the entity that stands to be injured as a result of said research. The DMCA has had an impact on the worldwide cryptography research community, since an argument can be made that any cryptanalytic research violates, or might violate, the DMCA. Additionally, critics argue that the DMCA stifles free expression (see the case histories of Felten and Sklyarov), jeopardizes fair use for owners of various media, impedes competition, and interferes with computer intrusion laws. Since this paper is not intended as a discussion of the DMCA, readers seeking further detail should refer to section 1201 of the Act.
Contract Law surrounds the concept of a legally enforceable “promise” between two parties. Non-disclosure Agreements (NDAs) fall into this category, as do EULAs and Terms of Service/Terms of Use. Contract law most benefits the company that employs a researcher, rather than the researcher himself. Since this area of the law is “murky”, researchers who publish their work against the wishes of their employers stand a very good chance of at least getting fired, if not sued, for breach of contract.
Criminal Law is designed to punish law breakers (of course). Researchers can be charged under various criminal codes if it can be proved that they published their work with the intent to help others commit a crime (aiding and abetting), or if the research is so detailed that it would be simple for others to commit crimes (facilitation).
International Law varies from country to country (of course), and is much too broad to cover in a limited but meaningful way. Researchers should be mindful of a host country’s laws when working overseas, and should also consider any laws they might break by using telecommunications technologies that span national borders.
Other Methods
“Media smear” campaigns have been instigated against researchers when there was no clear legal method for stopping the publication of their work. A particularly vicious instance involved researchers David Maynor and Jon Ellch, who cracked a MacBook at Black Hat in 2006 using third party drivers and third party wireless hardware. Apple PR director Lynn Fox orchestrated a smear campaign accusing Maynor and Ellch of fabricating aspects of the hack, all in an attempt to make it appear that Apple was a victim of unscrupulous hackers (Ou, How Apple orchestrated web attack on researchers, 2007).
Overt and covert threats have been used to intimidate researchers into cancelling, delaying, or retracting publication. One popular method is for a company to issue a DMCA takedown notice to a researcher, only to rescind the notice later. In one instance, banking equipment manufacturer Thales sent a DMCA takedown notice to John Young, who runs the well-known Cryptome site, demanding that he remove a manual for one of their HSM products (Moody, 2013). HSM stands for “hardware security module”, and in the banking industry HSMs are instrumental in managing the cryptographic keys and PINs used to authenticate bank card transactions. The manual in question had been used for years by security researchers investigating cryptographic weaknesses in the devices, and those vulnerabilities were causing Thales some notable embarrassment. Another instance involves Patrick Webster, a security consultant in Australia, who quietly warned First State Superannuation Fund about a web vulnerability that would allow a hacker to access users’ accounts (Pauli, 2011). The Fund thanked him for the tip, fixed the flaw within 24 hours, then sent the police to his house the next day to “investigate”. The Fund demanded that Webster turn over his personal laptop to their in-house IT staff, and also informed him that he could be held liable for any expenses related to fixing the flaw he reported. In short, Webster potentially saved the company millions of dollars, alerted them privately so they could avoid embarrassment, and was rewarded with threats of legal action and a repair bill.
Firings due to pressure from others are another tactic used by businesses to curtail or punish unflattering publication. Dan Geer, former CTO of @stake Inc., was let go just a day after the publication of a paper he co-authored that was sharply critical of Microsoft Corp. — one of @stake’s customers. The paper covered the effects that Microsoft’s monopolistic position has on the security of the Internet, and argued that the dominance of Windows in the marketplace has created a monoculture in which all systems are more vulnerable to widespread attacks and viruses (Fisher, 2003). Both @stake and Microsoft claimed that Geer was let go for other reasons, but Geer professed serious doubts.
Case Histories
Security researchers typically fall into one of four categories: white hats, grey hats, black hats, and innovators. They all hack or crack systems, but have varying motivations. While many researchers profess to be white hats, the truth is that most of them are actually grey. The following section details the attributes of each, and provides a few “case histories” for members of each category.
White hats profess to work to secure systems without breaking into them. “Hackers for good”, they work with software companies and governments to resolve vulnerabilities, and won’t announce a vulnerability until the company is ready or has proven unresponsive. They will show the system owner – but no one else – how to exploit a vulnerability, and will only attack systems when authorized (Hafele, 2004).
Grey hats have a tendency to either skirt the law or run afoul of it in the course of their research. They might break into systems to heighten awareness of security flaws, and tend to announce vulnerabilities publicly without informing the company (or on the same day the company is notified). They may release exploit code or tools that are not easily repurposed for malicious hacking, and will explore holes before notifying the owner of the vulnerabilities (Hafele, 2004).
Black hats are the bad guys. A black hat cares more about controlling and accessing systems than about security. He will keep his exploits to himself or trade them with others on closed lists. He won’t publish, and hacks for his own gain or for malicious reasons (Hafele, 2004).
White Hats
Ed Felten is currently the Director of Princeton’s Center for Information Technology Policy. Felten was a witness for the government in US v. Microsoft, where Microsoft was accused of a variety of anti-trust violations surrounding the exclusive use of Internet Explorer with the Windows operating system. Microsoft asserted that IE could not be removed from the distribution without causing damage to the OS. Felten and a team of his students were able to prove otherwise, severely damaging Microsoft’s case.
He is probably best known for his involvement with the Secure Digital Music Initiative (SDMI), wherein the Recording Industry Association of America and Verance Corporation sued him and his team for winning a competition they sponsored. The competition asked participants to attempt to break the watermarking schema in use for protecting copyrighted music from unauthorized use. In just three weeks, Felten’s team was able to remove the watermarks, rendering the SDMI schema useless. When he attempted to publish his work, the RIAA and Verance threatened to sue him under section 1201 of the DMCA. The suit failed, and Felten presented his work at Usenix in 2001.
Felten was instrumental in uncovering security and accuracy problems in Diebold and Sequoia voting machines. He and his students also discovered the cold boot attack, which allows someone with physical access to a machine to extract the contents of memory, bypassing security measures such as disk encryption (Wikipedia, 2013).
Michael Lynn was instrumental in highlighting security flaws in Cisco’s IOS. Dubbed “Ciscogate”, the flaw centered around IPv6 packets, and whether or not a Cisco device could be exploited remotely. Cisco fixed the flaw in early 2005, and Lynn was scheduled to present a paper at Black Hat the same year detailing the results of his research. Lynn was careful to remove as much detail as possible, but Cisco objected – strenuously. Representatives from the company arrived at the conference a few hours before he was scheduled to present, confiscated his paper and notes, and pressured Black Hat into cancelling his presentation. Lynn’s employer, ISS, also gave him a “cease and desist” order regarding the presentation, and told him he would be fired if he presented his work. Lynn resigned from his position at ISS an hour prior to presenting, and asked attendees for a job just before giving his speech. He was hired by Juniper Networks a few months later, and is still employed there (Masnick, 2005).
HD Moore is an innovator and white hat who developed Metasploit, one of the most widely used penetration testing and vulnerability assessment frameworks (Stop The Hacker, 2012). He also developed the Metasploit Decloaking Tool, which purports to be able to identify a user’s IP address regardless of the use of proxies or VPNs. Current research projects include the Month of Browser Bugs, which aims to combine fast-paced discovery with full disclosure.
Grey Hats
Robert Morris was the first person convicted under the Computer Fraud and Abuse Act for spawning the Morris Worm – considered by many to be the first internet worm. Morris developed the worm, designed as a means for measuring networks, while he was a graduate student at Cornell. The story of how the worm “escaped” changes from time to time, but most accounts agree that Morris developed the worm to test and map the limits of the local area network in a laboratory environment. However, containment of the worm failed, and in an effort to disguise where it originated, Morris released it from MIT – whence it spread worldwide. Morris is currently a tenured professor of Computer Science at – you guessed it – MIT (Anthony, 2011).
Dmitry Sklyarov is a Russian programmer who gained notoriety for cracking Adobe’s ebook DRM scheme while employed at Russian software company ElcomSoft. In 2001, after giving a presentation at DEF CON titled “eBook's Security - Theory and Practice”, Sklyarov was arrested by the FBI and jailed for violating the DMCA after complaints from Adobe. However, the DMCA does not apply in Russia, and the courts decided that a Russian citizen working for a Russian company could not be held accountable under the DMCA. The charges against Sklyarov were eventually dropped, and ElcomSoft was found not guilty at trial (Wikipedia, 2013).
Jon Lech Johansen (DVD Jon) is a Norwegian programmer with a thing for DRM – he hates it. Since 2001, Johansen has developed 16 different methodologies for defeating DRM on a multitude of platforms. Ironically, the Sony Rootkit actually used code stolen from Johansen, and some have argued that he might have a case to sue Sony under the DMCA. His most notorious exploit was the release of DeCSS, a method for defeating the Content Scrambling System in use on DVDs (Anthony, 2011).
Black Hats
Kevin Mitnick’s first exploit occurred at the age of 12, when he figured out how to ride the transit system in LA for free by bypassing the punch card system in use. He became a social engineer, garnering usernames, passwords, and modem phone numbers. He hacked DEC at age 16 and was tried and sentenced to 12 months in jail with three years’ supervised release. Near the end of his three-year probation, he hacked PacBell’s voice mail system, then went on the run for over two years. By the time the FBI finally caught him, he had hacked numerous networks, cloned cell phones, and stolen proprietary software from cell companies (Anthony, 2011).
Kevin Poulsen is currently an editor at Wired, but he began his career as a phone phreak. His most notorious exploit was hacking the phone lines of a local radio station in order to ensure that he was the 102nd caller – to win a Porsche. The FBI began pursuing him for myriad crimes, and he turned fugitive. When America’s Most Wanted aired a special profiling Poulsen – you guessed it – the phone system at AMW crashed. After his release from prison, he managed to reinvent himself as a white hat and investigative journalist. Poulsen used exploits on MySpace to identify over 700 sex offenders engaged in soliciting sex from children, and was the man who broke the Bradley Manning-WikiLeaks story (Anthony, 2011).