
Investigating and Preventing Cyberbullying 5: Insight into Social Reporting

This part is dedicated to the social reporting mechanism Facebook has adopted to mitigate cyberbullying and abuse.

David Kirkpatrick: So, I want to switch gears a little bit and show you something as a way of leading into my next questions for Joe. Could we have the slides, please? Some of the work Facebook’s been doing that Joe was referring to in the beginning, we can actually show you, and it has not been shown much in public. So, Joe, maybe you could quickly talk us through it, as quickly as we can: what are we looking at here?

Reporting a photo on Facebook

Joe Sullivan: So, say I was on the site and I saw the photo that’s on the left there, and I thought that it was targeting me. And I chose to report it (see right-hand image).

David Kirkpatrick: It’s not a very good example of the photo.

Joe Sullivan: It could be a picture of somebody wearing something. Maybe it’s a picture of me, and there’s a comment about the length of my pants, or something like that. You can see the choices; there’s no choice there that says “It’s a violation of the terms.” “It’s my intellectual property” is something we put at the bottom, because we get intellectual property reports as well, but that’s kind of a separate flow. And what we see most of the time with the 13- and 14-year-olds is that they don’t like the photo because it’s embarrassing; they choose that almost every time.

David Kirkpatrick: But they can choose any of these.

Message box

Joe Sullivan: Right, and so they can choose one of these, and you can see the terminology; this is the type of terminology that tested well with 13- and 14-year-olds. You can see there is a message box (see right-hand image). We tested putting nothing in the box, and then we tested lots of different language, and what we found was: if we prepopulate the box, it does much better in terms of someone’s comfort in sending. If they’re faced with that empty box, it’s really hard to write. And if you do write, you might say something harsh, which might not be well received. So we’re measuring outcomes on both the sender’s side and the recipient’s.

Message received by photo poster

And then you see we put in the word “please”. The exact same text without the word “please” in it does not do nearly as well. So we add the word “please” and it performs substantially better, both in the sender’s willingness to send it and in the recipient’s willingness to then remove the photo.
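To make the sending side concrete, here is a minimal sketch in Python of a report flow with prepopulated request text. All names and the reason wording are hypothetical illustrations, not Facebook’s actual options or API; the only point being modeled is that the sender never starts from an empty box, and that the default message includes the word “please”.

from dataclasses import dataclass

# Hypothetical reporting reasons of the kind Joe describes; Facebook's real
# option list and wording are not reproduced here.
REPORT_REASONS = [
    "I don't like this photo of me",
    "It's embarrassing",
    "It's harassing me or a friend",
]

# Prepopulated, polite request text: per the discussion above, a prefilled
# message containing "please" was sent and acted on far more often than an
# empty box or the same text without "please".
DEFAULT_REQUEST = (
    "Please take down the photo - I don't want people to see it, "
    "would you take it down?"
)

@dataclass
class PhotoReport:
    reporter_id: str
    photo_id: str
    reason: str
    message: str = DEFAULT_REQUEST  # editable, but never blank to start with

def start_report(reporter_id: str, photo_id: str, reason: str) -> PhotoReport:
    """Begin a social report with the polite default message already filled in."""
    if reason not in REPORT_REASONS:
        raise ValueError(f"unknown reason: {reason!r}")
    return PhotoReport(reporter_id, photo_id, reason)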

Response options

So that was the sending side. Now, this person is the one who posted the photo, and they actually see the message that says: “Please take down the photo – I don’t want people to see it, would you take it down?” (see image above). No judgment there. Then they get the options: what do you want to do? “I want to remove it, change the privacy, or keep it as it is.” (see right-hand image)

David Kirkpatrick: So you always force them to see this dialog box once the reporter has gone through those other steps?

Joe Sullivan: Right, so you’ve got to make a decision now. I mean, you could click away and exit the page, but if you stay on Facebook, you’re going to get to this point. So leaving it up is a decision you have to actively make.
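A similarly hedged sketch of the receiving side, again with invented names: the poster is shown the request and has to pick one of the three responses Joe lists, and leaving the photo up is recorded as an explicit choice rather than silence.

from enum import Enum

class Response(Enum):
    REMOVE_PHOTO = "remove"            # take the photo down
    CHANGE_PRIVACY = "change_privacy"  # restrict who can see it
    KEEP_AS_IS = "keep"                # leave it up, which is still a decision

def handle_response(photo_id: str, choice: Response) -> str:
    """Resolve the request about a reported photo (illustrative only)."""
    if choice is Response.REMOVE_PHOTO:
        return f"photo {photo_id} removed; prompt the poster to send a thank-you back"
    if choice is Response.CHANGE_PRIVACY:
        return f"photo {photo_id} restricted to a smaller audience"
    # Keeping the photo is an explicit decision; the report can still go to review.
    return f"photo {photo_id} left up; report queued for human review"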

David Kirkpatrick: Now, you’ve had quite extraordinary success with this phase of it, right?

Joe Sullivan: Right. I can’t remember the exact number, but a dramatic percentage of the time, more than 50% of the time, the person who receives the message removes the photo.

David Kirkpatrick: More than half of the time – that is huge progress. And then, what do we have here?

Gratitude for photo removal

Joe Sullivan: And then, after they’ve removed the photo, we actually encourage them to send a message back to the person who reported it, saying “Thanks” (see right-hand image). And the other amazing statistic is that the person who removed the photo actually comes out of this feeling positive about the experience.

David Kirkpatrick: You’ve studied that?

Joe Sullivan: Right, no one likes it when Facebook removes their photo; they come out of that experience feeling angry at Facebook 100% of the time. No one likes it when someone else reports them, and no one likes being told they’re violating the rules. So the question was: could we design a flow that made people feel comfortable removing the photo a large percentage of the time, and actually feel good about the experience?

David Kirkpatrick: That is pretty amazing stuff. I’ve got to really say I’m very impressed with that.

Jaana Juvonen: So, I’m curious: when you designed this, did you have data showing that this was already happening on its own between some kids, for example between friends?

Joe Sullivan: We don’t have that data. The people who are typically in these situations are people who are Facebook friends, so there is an assumption that they are communicating on Facebook already. I wish I knew who, in or outside the company, first came up with the idea that rather than us dealing with it, we should try and help people communicate.

Jaana Juvonen: Because what you are now doing goes back to the empathy question, so this might be one of the roles that you play. You are modeling something that should be a given in any good friendship, right? If a friend asks you to take a picture down, first of all it’s a nice request, and then you reply to it; you listen to them and you send something back. So you are basically modeling something that should take place in good relationships.

So, again, I think it’s a nice thing to have. The question is, when you say this is effective in about 50% of the cases, and don’t get me wrong, I think that’s fantastic, what happens in the cases where the photo doesn’t come down, or when things escalate?


Joe Sullivan: I think what this shows us is that we have a long way to go, but that we’re headed in the right direction. We have other things that we try and do in situations where the photo doesn’t get removed. Chances are that photo is going to be reviewed by us, so we still have a lot of people reviewing a lot of photos.

So we’re going to be reviewing those photos, and what we used to do was, if a person was repeatedly using our product for some type of abuse like that, we would take away their account. Now we’re more likely to put them in a penalty box, if you will, so they basically can’t use Facebook to send messages or post photos for X period of time. It’s been mentioned in a number of stories: ask teenagers, “Would you rather be grounded for a week or have your Facebook privileges taken away for a week?” So we do have that lever.
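As a rough, purely illustrative sketch of the “penalty box” idea (hypothetical names; Facebook’s actual enforcement systems are certainly more involved), repeat abusers keep their accounts but temporarily lose the ability to message or post:

from datetime import datetime, timedelta, timezone

# Hypothetical in-memory penalty box: user_id -> when the restriction expires.
_penalty_box: dict = {}

def penalize(user_id: str, days: int = 7) -> None:
    """Restrict messaging and photo posting for a fixed period instead of
    deleting the account outright."""
    _penalty_box[user_id] = datetime.now(timezone.utc) + timedelta(days=days)

def can_post_or_message(user_id: str) -> bool:
    """Check whether the user is currently allowed to post photos or send messages."""
    expires = _penalty_box.get(user_id)
    return expires is None or datetime.now(timezone.utc) >= expires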

But then there’s also this social reporting context. There are times when parents do need to get involved in helping their teenagers work out their issues, times when teachers need to get involved, and times when some other adult needs to get involved. That’s why we’ve created this mechanism: if you see a situation where you’ve been targeted, you don’t have to figure out “How do I take a screenshot in my browser, save it, and then attach it to an email?” You can actually report it out to a third party right in the flow.
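And a hedged sketch of that social reporting idea (names invented): the report, with the target’s own note attached, can be handed straight to a parent, teacher, or other trusted adult from within the flow, with no screenshots or manual emailing required.

from dataclasses import dataclass

@dataclass
class SocialReport:
    target_id: str    # the person who feels targeted
    content_id: str   # the photo, post, or message in question
    note: str         # the target's own description of what is going on

def report_to_trusted_contact(report: SocialReport, contact_email: str) -> dict:
    """Package the reported content and the reporter's note for a trusted adult,
    directly from the reporting flow."""
    return {
        "to": contact_email,
        "subject": "Someone you know asked for help with something on Facebook",
        "content_id": report.content_id,
        "note": report.note,
    }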

Sameer Hinduja: I think it’s also important to mention that this is dealing with pictures, and it’s a leap forward, for sure; I love to see that. But think about SMS messages or tweets, and you wonder what can be done at, let’s say, the site level, but also maybe at the device level, the OS level, the mobile OS level, or the third-party app level to, perhaps, create empathy. When individuals are about to say something, you wonder if a check could be built on some sort of corpus or dictionary of hate-based keywords, tied into some sort of predictive algorithm with contextual clues from the surrounding text, that might signal a warning: “OK, this is nasty or harmful. Do you really want to send this?”

Again, I don’t have the answers to this, but I really want us as an industry to be thinking about those sorts of questions, because we care about not just profit making, but also clients and customers and stakeholders.
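As a purely illustrative sketch of the keyword-dictionary idea Sameer raises (the word list is invented and far too small, and a real system would need the predictive, contextual signals he mentions rather than raw keyword matching), a pre-send warning might look like this:

# Tiny stand-in for a "corpus or dictionary of hate-based keywords".
HARMFUL_TERMS = {"loser", "ugly", "stupid", "kill yourself"}

def should_warn_before_sending(message: str) -> bool:
    """Return True if the outgoing message looks hurtful enough to ask the
    sender: 'Do you really want to send this?'"""
    lowered = message.lower()
    return any(term in lowered for term in HARMFUL_TERMS)

# Example: the client would show a confirmation prompt instead of sending.
if should_warn_before_sending("you're such a loser"):
    print("This looks hurtful. Do you really want to send it?")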
 

Read previous: Investigating and Preventing Cyberbullying 4: The Role of Empathy

Read next: Investigating and Preventing Cyberbullying 6: Trust and Parental Monitoring
