
All Ears and No Mouth: Surveillance versus Privacy

In the post-9/11 world, governments have increased surveillance of their citizens in the name of national security. Often, this increased surveillance conflicts with individuals’ privacy. Thomas Dietz, Elinor Ostrom, and Paul C. Stern touch on this subject briefly in “The Struggle to Govern the Commons” when they discuss methods that governments should use to curtail improper use of the world’s common resources: “Governance should employ mixtures of institutional types…that employ a variety of decision rules to change incentives, increase information, monitor use, and induce compliance” (57). This implies that governments should, among other strategies, increase monitoring of the world’s common resources in order to detect improper use. It is reasonable to assume that governments could then apply monitoring technology to other areas, such as protecting their citizens from foreign attack, and especially from terrorism, which requires extensive monitoring of communications technology, both domestic and foreign. This inevitably leads to the question: to what extent should privacy be sacrificed to ensure national security?

First, the terms ‘privacy’ and ‘national security’ must be defined in the context of this debate. Daniel Solove, a professor of law at George Washington University, explains that privacy is the ability to conceal any information that a person wishes to keep to himself or herself (“Why Privacy Matters”). National security is defined by Steven Bradbury, a former Assistant Attorney General at the U.S. Department of Justice, as the protection of “the United States and Americans from foreign attack” (5).

The terror attacks of September 11, 2001, on New York City and Washington, D.C., became a pivotal event in this debate. Prior to 9/11, defending America from foreign attack meant defense from foreign armed forces, nuclear missiles, or international spying, which could be combated with little sacrifice of individual privacy. Terrorist attacks have become more frequent since 9/11, and the U.S. combats them with mass information collection, aiming to detect threats before attacks occur. In response to the 9/11 attacks, Congress passed the USA PATRIOT Act in 2001 and the FISA Amendments Act in 2008, both of which increased the domestic surveillance power of the government, and particularly that of the National Security Agency. The NSA can collect and store massive amounts of telephone metadata, which is “transactional data about communications, not any information about the substance of the communications themselves”; this can include “listing which phone numbers have called other numbers, as well as the date and time and duration of the calls” (Bradbury 8). The NSA can also collect internet-based data, in the form of both content of communications and metadata. Mel Hogan, assistant professor of humanities at the Illinois Institute of Technology, and Tamara Shepherd, an LSE fellow at the London School of Economics and Political Science, describe how the NSA uses two programs, PRISM and XKeyscore, to collect this data. PRISM exploits the intentional and unintentional security loopholes of major internet companies such as Facebook, Google, Apple, and Microsoft to collect information on targets. XKeyscore collects information in real time from across the entire internet, and the NSA then uses algorithms to filter the information and stores it in its databases (10).
Ideally, the NSA can only access these databases when it has reasonable suspicion that a suspect is linked to terrorist activity, at which point it can investigate the suspect based on information in the database (Bradbury 9). However, as digital forensics experts Sharon Nelson and John Simek explain, the NSA has ignored its legal restraints and abused its power through secret real-time surveillance using the XKeyscore program and through warrantless telephone wiretapping. This changed in 2013, when Edward Snowden, a former NSA contractor, leaked thousands of documents revealing the extent of the NSA’s surveillance programs (“Snowden NSA Revelations”). The NSA’s policies have since come under public scrutiny, and many are pushing for increased privacy protections and restraints on the NSA’s power.

One argument in favor of increased surveillance is that those who are doing nothing wrong have nothing to hide. Solove argues that this assumption is invalid, because privacy is not about hiding wrongdoing, but rather about freedom from being watched. He also states that privacy is threatened more by the gradual accumulation of small bits of information from various activities than by large chunks of information revealed all at once, because the former is harder to detect. He further argues that piecing together information may reveal too much and can be misleading, because the whole picture is not available; as a result, it may lead to false accusations and unfair punishment (“Why Privacy Matters”). Governments have taken different approaches to this problem. Federico Fabbrini, a professor of law at the University of Copenhagen, reports that in 2014 the European Court of Justice ruled that the EU’s Data Retention Directive, which increased the collection and retention of personal data following the 7/7 London bombings in 2005, contradicted the EU’s Charter of Fundamental Rights and was therefore illegal (67). Since then, the EU has moved toward a system in which individuals can control how their personal information is used by governments and companies, favoring the individual as the protector of privacy. In contrast, the U.S. is moving toward a system in which individuals can control which institutions have their personal data, but not how it is used, which favors the corporation. The fundamental difference is that in the U.S., as Bradbury states, “the Supreme Court has held that a customer of a telephone company does not have a reasonable expectation of privacy in…metadata collection” because metadata is used in other aspects of normal company activities, such as billing (10).
However, in the EU, the ECJ ruled in the Data Retention Case that this data “may allow very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained” (Fabbrini 85). Jennifer Holt and Steven Malčić, professors at the University of California, assert that this difference in cultural values and government policy is problematic, because the internet is not meant to be divided by region. This in turn makes it difficult for companies to safeguard digital privacy, “due to gaps and fissures in international data jurisdiction and the attendant difficulties [in] regulating the private sector” (156). Brian Silver and Darren Davis, professors of political science at Michigan State University, created a mathematical model of the effects of increased terrorist threats on support for civil liberties protections and for national security measures. They found that Americans generally support civil liberties protections when the terror threat is low, regardless of race, political ideology, or trust in government, but that when the terror threat is high, the majority of Americans support increasing national security measures (43). This demonstrates that societal values regarding privacy and security change over time, even within countries. While these values fluctuate within the public, governmental anti-terrorist agencies consistently prefer lower privacy protections. Tiberiu Dragu, a professor at the University of Illinois, created a mathematical model showing that in all scenarios, anti-terrorist agencies support decreasing privacy protections, even if this reduction increases the likelihood of a terror attack (70-72). Dragu’s model also shows that in the wake of a terror attack, legislation lowering privacy protections passes easily, and that this governmental power is then difficult to repeal (75). Lastly, the model shows that in a country with a high level of privacy protections, such as the U.S., decreasing those protections actually increases the likelihood of a terror attack (71-72).

The Foreign Intelligence Surveillance Court (FISC), created by the Foreign Intelligence Surveillance Act of 1978, has repeatedly overreached its legal powers since the passage of the USA PATRIOT Act and the FISA Amendments Act. The FISC was originally designed as an ex parte court, which differs from a normal adversarial court in that its only function is to approve government surveillance requests based on interpretation of laws passed by Congress (Kerr 1516-1517). The idea was to force surveillance agencies to obtain court approval for their surveillance programs, which in theory would subject those programs to proper constitutional scrutiny. The parameters of the FISC broadened and changed because rapid advances in technology created ambiguities in the surveillance laws, and the FISC could rule in favor of the government when interpreting these ambiguities because its actions were kept secret from both the public and Congress. Because of this secrecy, the FISC works directly with the executive branch, which bypasses the system of checks and balances and removes the court from the scrutiny and regulation of the public and Congress (Kerr 1525-1527). The end result was the approval of domestic surveillance programs far removed from the language of the laws on which they are based. One such example is the ‘Section 215 Program,’ based on a section of the USA PATRIOT Act which grants the power to “obtain a court order from third parties for any tangible things under the traditional standards for subpoenas,” which are orders to “third parties to hand over documents as long as…the materials sought are relevant to a criminal investigation,” but is “expressly limited to that which any federal prosecutor would have in a criminal case” (Kerr 1527).
However, because of the nature of the information the NSA was collecting and the technology used to obtain and store it, the FISC interpreted Section 215 of the USA PATRIOT Act as authorization for telephone companies to copy certain types of telephone metadata for all customers, indefinitely, into the NSA’s databases (Kerr 1528). Thus an enormous and arguably illegal surveillance program was initiated by the NSA, while Congress and the public remained unaware that their metadata was being stored, until the Snowden leaks. This and other examples of the FISC’s secret blending of ex parte and adversarial court powers were the root cause of the unnerving personal surveillance programs of the NSA.

Ultimately, as Dragu argues, “reducing privacy does not necessarily increase security from terrorism” (75). Additionally, as the ECJ held in the Data Retention Case, “the feeling of surveillance generated by vast metadata collection is inimical to privacy and democracy, and individuals ought to be protected” from such surveillance (Fabbrini 91). Furthermore, as explained by Hina Shamsi and Alex Abdo, director and staff attorney of the ACLU’s National Security Project, transparency from the NSA and the FISC, and accountability on the part of the executive, are necessary to keep them within the limits of U.S. law (9). Moreover, the only digital privacy law protecting Americans, the Electronic Communications Privacy Act, was written in 1986 and is severely outdated. The Supreme Court should also recognize that individuals retain their privacy claim when they share information with electronic service providers, and that the question of personal data ownership is irrelevant because “the retention in itself…constitutes an infringement on the right to privacy” (Fabbrini 92-93). These changes would bring the privacy policies of the U.S. closer to those of the EU, and would help solve the internet privacy problems that arise when companies adopt different privacy protections for different countries. Finally, executives and surveillance agencies should not be responsible for making such legislation, because they always “have incentives to push for decreased privacy protections” and are therefore biased (Dragu 75).

The best way to implement such changes would be to introduce what Orin Kerr, the Fred C. Stevenson Research Professor at George Washington University Law School, calls a ‘rule of lenity.’ This rule is adapted from criminal law and states that “[w]hen courts interpret [surveillance] laws, ambiguity should be construed in favor of the citizen and against the state” (1532). The result of this simple rule is that when the FISC encounters an ambiguity in the law, which happens regularly because technology outpaces law, it must ask Congress to update and amend the law before surveillance can proceed. Congress can then review the nature of the request and choose either to update the law or to keep it as it is and force the FISC to drop the case. This prevents the FISC from making overly broad interpretations of outdated laws and ensures that those laws are updated and reformed by Congress. It also brings the power to control the government’s surveillance programs closer to “where it belongs: the people, acting through their elected representatives,” and adds accountability to this sector of lawmaking, because the lawmaking power returns to elected officials rather than life-appointed judges, giving the public more sway over how these laws are written (Kerr 1532-1533). Lastly, it helps revert the FISC to a true ex parte court, closer to what it was created to be.

Since the 9/11 attacks, the NSA and other surveillance agencies have enjoyed free rein over communications surveillance, under the guise of ‘necessary terrorism prevention.’ Although this is a just cause, the strategies and powers granted to these agencies amid the waves of fear produced by terror attacks are unethical and invasive. Privacy is a fundamental human right, and to protect this right individuals must actively seek change in the current surveillance system. Any action will help this cause, because after all, no action will go unnoticed.