Exploring Cyberbullying Prevention by Moral Persuasion
A. Sprague (1), R. Diaz-Sprague (1), T. Solorio (2), and S. Maharjan (2)
(1) Univ. of Alabama at Birmingham, (2) Univ. of Houston
Communication technology can be useful in myriad ways. Apps can reinforce users' good intentions to exercise, stay on a diet, and remember important tasks and deadlines. Technology could also serve as a vehicle to affirm values consistent with moral behavior - values such as respect for self as well as others, civility, integrity, and courtesy. Thus, technology offers possibilities for furthering the flourishing of human society.
Unfortunately, communication technology can also be a forum for invective. Cyberbullying is entirely too common among teens and pre-teens. It is similar to traditional bullying but more ominous because it follows the victim home. Cyberbullying is defined as the sending or disseminating of postings with harmful intent to humiliate, embarrass, or threaten a victim.
Certainly this type of behavior is pernicious to bully and victim alike and is inimical to human flourishing. Even though cyberbullying leaves no visible bruises or scars, we wonder about the psychological and emotional scars and the life-long consequences that might result from it. The mother of 13-year-old Nicole Lovell, who was killed on January 27, 2016, stated at a press conference that Nicole had low self-esteem as a result of bullying by her peers on social media: they made fun of her for being overweight and for having a tracheotomy scar on her neck. In the English-speaking world, numerous well-publicized suicides by teenagers have been attributed to cyberbullying; nine suicides were attributed to cyberbullying on Ask.fm in 2012 alone.
Our goal is to construct a system that, when a user on a social network types a message the system perceives as abusive toward someone else, displays a popup that admonishes the would-be sender and encourages him or her to think more deeply about the harm the message could cause, both to the other person and to the sender. The aim is to discourage such messages before they are sent.
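To make the intended flow concrete, here is a minimal Python sketch. The classifier interface (is_strongly_negative) and the popup function are placeholders of our own invention; a real deployment would hook into the social network's user interface rather than the console.

```python
# Minimal sketch of the intervention flow; not a real platform integration.

ADMONITION = ("This message could seriously hurt the person who receives it, "
              "and sending it may reflect badly on you. Send it anyway?")

def show_popup(message: str) -> bool:
    """Console stand-in for a platform popup; returns True if the user insists."""
    return input(f"{message} [y/N] ").strip().lower() == "y"

def outgoing_message_hook(text: str, classifier) -> bool:
    """Return True if the message should be sent, False if withheld."""
    if classifier.is_strongly_negative(text):   # hypothetical classifier interface
        return show_popup(ADMONITION)
    return True
```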
Cyberbullying is not a well-studied phenomenon, and its incidence is relatively small as a percentage (well under 1%) of the total volume of social network messages and text messages sent every day. Trying to detect cyberbullying is akin to looking for a needle in a haystack.
In the remaining paragraphs we describe our approach to this problem.
We lead off with a description of our data source, namely Ask.fm.
Ask.fm is a social network widely used by teens, and an appreciable amount of cyberbullying occurs on it. Each Ask.fm user has a profile, which he/she sets up initially and might change later. In addition, each user receives questions and answers them. Questions can come from any other account, as private messages to the user. If the user chooses to answer a question, the question and answer both become public: all users on Ask.fm (and outsiders as well) can view both. Most questions are asked anonymously - perhaps 80% are anonymous. Only a minority of questions are actually questions; most are declarative statements.
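For concreteness, one public Ask.fm posting can be represented by a small record; the field names below are our own illustration, not Ask.fm's data model.

```python
from dataclasses import dataclass

@dataclass
class QAPair:
    user: str         # account that answered (and thereby published) the pair
    question: str     # question text; often actually a declarative statement
    answer: str       # the user's public answer
    anonymous: bool   # True when the asker did not reveal an account
```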
Our method of gathering data is the following. When you access the site Ask.fm, you are presented with links to a description of Ask.fm, the Safety Center, and a blog, along with 18 user accounts whose questions and answers you may read. To gather a data file, we periodically access Ask.fm, and for each of the 18 user accounts it shows us, we download the most recent 25 QA pairs. After we have accumulated a hundred thousand QA pairs, we run tests on the data.
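A sketch of one round of this crawl is below. The CSS selectors are placeholders: Ask.fm's actual page markup would have to be inspected (and its terms of service respected) before running anything like this.

```python
import time
import requests
from bs4 import BeautifulSoup   # pip install beautifulsoup4

LANDING = "https://ask.fm/"

def crawl_once(max_pairs_per_user: int = 25) -> list[tuple[str, str, str]]:
    """Visit the landing page, follow the 18 featured accounts, and
    collect up to 25 recent (user, question, answer) triples from each."""
    pairs = []
    front = BeautifulSoup(requests.get(LANDING).text, "html.parser")
    user_links = [a["href"] for a in front.select("a.user-link")][:18]  # placeholder selector
    for link in user_links:
        page = BeautifulSoup(requests.get(LANDING + link.lstrip("/")).text, "html.parser")
        for item in page.select("div.qa-item")[:max_pairs_per_user]:    # placeholder selector
            q = item.select_one(".question").get_text(strip=True)
            a = item.select_one(".answer").get_text(strip=True)
            pairs.append((link, q, a))
        time.sleep(1.0)   # be polite to the server
    return pairs

# The crawl is repeated periodically until ~100,000 QA pairs are collected.
```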
Only a very small portion of questions and answers contain any cyberbullying. To obtain a data file with a larger proportion of cyberbullying, we restricted our data to those questions and answers that contain profanity.
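The pre-filter itself is simple, as the sketch below shows; the placeholder PROFANITY_LIST stands in for our actual word list, which is not reproduced here.

```python
import re

PROFANITY_LIST = {"wordA", "wordB"}   # placeholder entries

def contains_profanity(text: str) -> bool:
    """Token-level lookup against the profanity word list."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return any(tok in PROFANITY_LIST for tok in tokens)

def prefilter(pairs):
    """Keep only QA pairs whose question or answer contains profanity."""
    return [(q, a) for (q, a) in pairs if contains_profanity(q) or contains_profanity(a)]
```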
Once we have a preselected QA pair containing profanity, we endeavor to determine whether or not it constitutes invective. In-lab human annotators manually label the profanity-containing data, judging whether each posting as a whole is Positive/neutral, Negative, or Strongly negative.
To develop a program that detects strongly negative posts, we use methods from the branch of Computer Science called Machine Learning. In Machine Learning parlance, the detection program is a "machine" that must be trained, and training requires a large amount of labeled data: posts that have been labeled as "strongly negative" or "not strongly negative". Obtaining this labeled data is a multi-step process. First, each of our five team members annotates several hundred QA pairs in-house; each post is annotated twice, with a third annotation for posts where the first two annotators disagreed. The resulting data is called "gold data", since it is presumed to be correctly labeled. To build up a much larger labeled data file, we turn to external annotators: crowdsourcing.
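The adjudication rule can be stated precisely. The sketch below is our reading of the procedure; resolving the rare case where all three annotations differ by deferring to the third annotator is an assumption on our part.

```python
from typing import Optional

def adjudicate(label1: str, label2: str, label3: Optional[str] = None) -> str:
    """Gold label for one post: two annotations, a third only on disagreement."""
    if label1 == label2:
        return label1
    if label3 is None:
        raise ValueError("first two annotators disagreed; a third label is needed")
    if label3 in (label1, label2):
        return label3   # majority of the three annotations
    return label3       # no majority: defer to the adjudicator (our assumption)
```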
There are several crowdsourcing sites on the Internet; Amazon Mechanical Turk and CrowdFlower are two examples. We use CrowdFlower to build up a large labeled data file. CrowdFlower annotators are English-speaking people living outside the U.S. who are willing to do this type of task for very small pay.
We use the gold data to judge the quality of the CrowdFlower workers' annotations. Workers whose score on the gold data is low are not allowed to continue. The gold data is hidden amid the rest of the data, and workers learn which posts are gold data (and what the correct response is) only after labeling a gold post.
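CrowdFlower enforces this check itself through hidden test questions, but the logic amounts to the following sketch. The 70% cutoff is illustrative, not CrowdFlower's documented default.

```python
MIN_GOLD_ACCURACY = 0.7   # assumed cutoff for keeping a worker

def worker_passes(worker_labels: dict, gold: dict) -> bool:
    """Both arguments map post IDs to labels; only gold posts are scored."""
    scored = [pid for pid in worker_labels if pid in gold]
    if not scored:
        return True   # worker has not yet seen any gold posts
    correct = sum(worker_labels[pid] == gold[pid] for pid in scored)
    return correct / len(scored) >= MIN_GOLD_ACCURACY
```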
The steps we are using to develop this program are the following:
1. Collect data from the selected social network.
2. Delete QA pairs not containing any words on our list of profanities.
3. Perform in-lab annotations and preparation of small gold standard data.
4. Use the gold data as the quality control measure to annotate a larger sample of data by using CrowdFlower.
5. Make changes to the annotation guidelines based on the contributors' comments.
6. Iterate over Steps 4 and 5 as needed, until we have a large sample of annotated data.
7. Use Machine Learning approaches to find patterns of abusive messages (see the baseline sketch after this list).
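For Step 7, one plausible baseline is a bag-of-n-grams classifier, sketched below with scikit-learn. This is our illustration of the general approach, not the project's committed model; the toy texts and labels stand in for the annotated data file.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins: in practice, texts are the annotated Q+A postings and
# labels come from the annotation (1 = strongly negative, 0 = otherwise).
texts  = ["you are wonderful", "you are worthless and stupid"]
labels = [0, 1]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # unigram and bigram features
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
print(model.predict(["you are worthless"]))   # likely [1] for this toy fit
```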
The goal of this machine learning process is a system whose labeling of messages is reliable enough that it can be deployed on a social network, generating popups that discourage cyberbullying.