Sydney Enderlin

ENGL-400

Proposal

Due: 06/16/16

The Hidden Truth Behind Social Media

Everyone has those nights when they have a huge fight with their best friend, go through a nasty breakup, or find out some juicy family drama. Their first reaction may be to call a close friend, or it may be to take their anger and hurt to social media. If they choose the latter, the comments, posts, or even pictures they share will most likely be inappropriate and bring unwanted attention to them. They might wake up the next morning to find that their employer discovered the posts and they are now out of a job; they might find a string of messages and comments from friends asking why they posted that and urging them to take it down; they could even create new problems with family members. Any of these circumstances would send those people straight to their accounts to contemplate taking the material down, but what if it were already gone? Every day, people carelessly post on their social media sites and unwittingly wait for a follower or Facebook friend to report their material. It’s time social media sites started reviewing the content of a post or picture before it makes its way to the online world.

Right now, many social media sites use what is called “reactive moderation.” This type of moderation allows users to post whatever they want at any given moment, and then their followers or friends decide whether or not to report the material based on certain criteria (Grimes-Viort, 2010). For example, when a person chooses to report a photo or written post on Facebook, he or she is prompted to explain the report by choosing from three options: “It’s annoying or not interesting,” “I think it shouldn’t be on Facebook,” and “It’s spam.” After choosing one of the three options, the person is given further choices. Say they choose option number two, “I think it shouldn’t be on Facebook.” They now have to decide whether the post or photo is “rude, vulgar or uses bad language,” “sexually explicit,” “harassment or hate speech,” “threatening, violent or suicidal,” or “something else.” After choosing one of those options, the person moves on to still more choices. When he or she successfully completes the reporting process, then what happens? Who gets to decide if the picture or post stays or goes? The answer may shock you.

Adrian Chen, a writer and researcher for Wired, wrote an article titled “The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed,” in which he discusses a labor force that most of us know nothing about, yet one that is responsible for keeping our personal sites free of every kind of explicit material imaginable. Chen explains that much of this work isn’t even happening in the United States; instead, it is common in places like the Philippines, where workers are hired at much lower wages. For example, one of the laborers Chen spoke with informed him that Facebook offered to pay him just $312 per month to moderate its site. Although many sites were closed off about their moderation policies, Chen was able to learn more about Whisper, an app somewhat like Yik Yak where people can post pictures and comments anonymously (Chen, 2014).

Whisper’s moderators work in a former elementary school in the Philippines. Whisper differs from Facebook in that it uses what is called “active moderation” to review the material its users post. This means that instead of waiting for someone to report a specific user’s material, the laborers, making little to nothing a day, must decide in seconds whether a picture or post is appropriate while more and more posts pour in awaiting their approval. But how do they know what the app deems appropriate and inappropriate? “A list of categories, scrawled on a whiteboard, reminds the workers of what they’re hunting for: pornography, gore, minors, sexual solicitation, sexual body parts/images, racism” (Chen, 2014). At this point, I know you have to be wondering: what kind of effect does this have on these individuals?

Chen was also able to interview a few moderators based in the United States. Although these workers make substantially more money, the psychological effects are no different. One man took a job with Google monitoring videos for YouTube as a last resort after failing to find work with his degree in history. At first, he didn’t mind the job and saw it as a great opportunity to climb the career ladder. Eventually, though, as he had to view “brutal street fights, animal torture, suicide bombings, decapitations, and horrific traffic accidents” (Chen, 2014), he began to become a different person. He turned to alcohol to deal with the stress of the job, and he knew it was time to quit (Chen, 2014).

One of the psychologists who meets with the laborers in the Philippines says, “It’s like PTSD. . . . There is a memory trace in their mind.” She asks Chen, “How would you feel watching pornography for eight hours a day, every day? . . . How long can you take that?” (Chen, 2014). This massive labor force is hidden from the online world as people type away on their phones and computers, posting things they know may be controversial, then sitting back to wait and wonder whether they will be reported. It’s time we replaced “reactive” and “active” moderation with a more successful, reliable “automated moderation.”

Large social media sites such as Facebook will claim that this may be too difficult. Damon Beres, a tech editor for The Huffington Post, wrote an article titled “Facebook Says It Needs Your Help To Root Out Offensive Posts,” in which he discusses Facebook’s moderation policies. Facebook does have guidelines that tell users what the site considers appropriate. These guidelines were updated to spell out those explanations in more detail and to make clear how difficult it is for Facebook to monitor the more than one billion people who use its social network, but “some experts say Facebook needs to do more to prevent objectionable content from slipping through the monitoring system already in place” (Beres, 2015).

Monika Bickert and Chris Sonderby, Facebook’s Head of Global Policy Management and Deputy General Counsel, wrote the actual blog post from Facebook, titled “Explaining Our Community Standards and Approach to Government Requests.” They used the post to outline Facebook’s purpose and explain that they want users “to share and connect freely and openly” (Bickert & Sonderby, 2015). For this reason, Facebook would likely oppose an automated moderation system: that type of moderation might make users feel they can’t be true to themselves, a freedom Facebook says it stands behind.

The blog post was meant to clarify Facebook’s standards for people who feel the need to report certain material. Say a user comes across a post that looks like it might be hate speech but isn’t sure what Facebook categorizes as hate speech; that user can now read more about the standards on hate speech before deciding to report the post. The blog post reads, “We know that our policies won’t perfectly address every piece of content, especially where we have limited context, but we evaluate reported content seriously and do our best to get it right” (Bickert & Sonderby, 2015). But based on the information above, we know who is really doing the evaluating. Facebook tries to be upfront with its users and makes clear that it is difficult to please everyone, but it wants users to keep reporting material they feel is inappropriate (Beres, 2015).

It’s true that implementing an automated moderation system on larger social media sites may prove more difficult because of the overwhelming number of users around the world, but I believe it is worth it if we can eliminate the need for laborers to do all of the gut-wrenching work that none of us would ever volunteer for. So many users have no idea where a picture or post goes after they report it; they simply know they have that power, and when something looks inappropriate to them, they are going to exercise it. If Facebook were to tell the world who is behind the “report” button, I believe many more users would take that button much more seriously. I also believe many users would begin to wonder how we could implement an automated moderation system.

To make the change happen, we would have to gain support from the users of social media sites and from policymakers. It would also be a lengthy process, because a number of people would have to agree on the efficiency and value of the policy before it could even be implemented and then evaluated (“Policy Making: Political Interactions”). Before a successful automated moderation system can be created, certain steps have to be taken. First, we have to identify the problem: material should be moderated before it can be viewed by all of a user’s friends, followers, and so on. The next step is finding patterns in the material users post, which would help us understand what is most frequently reported. After discovering those patterns, the automated moderation system would be programmed to detect specific material involving nudity, hate speech, criminal activity, pornography, and so on. The final step would be to change and adapt the rules as needed (Aragon, 2015).
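To make the last two steps concrete, here is a minimal sketch, in Python, of the kind of pre-publication check such a system might run. The category names, keyword lists, and function name here are illustrative assumptions of mine, not any platform’s actual policy or code; a real system would rely on trained classifiers and, at minimum, an appeal process rather than simple keyword matching.

    # Hypothetical sketch: check a post against moderation categories before it is published.
    # Category names and keyword lists below are placeholders, not any site's real rules.
    FLAGGED_CATEGORIES = {
        "nudity": {"nude", "explicit"},
        "violence": {"gore", "torture"},
        "hate speech": {"slur_placeholder_1", "slur_placeholder_2"},
    }

    def review_post(text):
        """Return whether a post is approved and which categories it appears to match."""
        words = set(text.lower().split())
        matches = {
            category: sorted(words & keywords)
            for category, keywords in FLAGGED_CATEGORIES.items()
            if words & keywords
        }
        return {"approved": not matches, "matched_categories": matches}

    if __name__ == "__main__":
        print(review_post("check out this nude photo"))
        # {'approved': False, 'matched_categories': {'nudity': ['nude']}}

In this sketch, the final step the paragraph above describes, changing and adapting the rules, would simply mean updating the FLAGGED_CATEGORIES table (or retraining a classifier) as new patterns of reported material emerge.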

Many benefits would come from social media sites switching to an automated moderation system. For one, the sites discussed above, including Facebook, Whisper, and YouTube, would become less and less dependent on their human moderation workforce, and those people would no longer have to work under such depressing conditions. Another benefit is that people would be more careful and responsible about what they choose to post online: if users know that a picture or post is going to be automatically moderated, they will probably pay more attention to what they are posting. It would also eliminate the personal and professional issues that can arise when friends or family report a person’s online material.

Social media is an increasingly important part of society, connecting people with their friends and families, businesses with their consumers, and politicians with their constituents. Statista reports that “[i]n 2016, 78 percent of U.S. Americans had a social network profile, representing a five percent growth compared to the previous year” (“Percentage of U.S. Population With a Social Network Profile From 2008 to 2016”). Over three-fourths of the population has some type of social media account, and that number is only going to increase as time goes by. If Facebook can hardly keep up with the material being reported on its site now, how will it keep up as it continues to gain more users? Will it hire more moderators? An automated moderation system would help not only those behind social media’s closed doors but also the billions of users. The time for change is now.

Works Cited

Aragon, Jerome. "4 Steps To Developing The Perfect Automation Rule." Besedo. N.p., 30 Oct. 2015. Web. 12 June 2016.

Beres, Damon. "Facebook Says It Needs Your Help To Root Out Offensive Posts." The Huffington Post 16 Mar. 2015. Web. 11 June 2016.

Bickert, Monika, and Chris Sonderby. "Explaining Our Community Standards and Approach to Government Requests." Facebook Newsroom. WordPress, 15 Mar. 2015. Web. 11 June 2016.

Chen, Adrian. "The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed." Wired. N.p., 23 Oct. 2014. Web. 11 June 2016.

Grimes-Viort, Blaise. "6 Types of Content Moderation You Need To Know About." SocialMediaToday. N.p., 7 Dec. 2010. Web. 11 June 2016.

"Percentage of U.S. Population With a Social Network Profile From 2008 to 2016." Statista: The Statistics Portal. N.p., 2016. Web. 12 June 2016.

"Policy Making: Political Interactions." American Government. Independence Hall Association, n.d. Web. 12 June 2016.