How Facebook Uses Technology To Block Terrorist-Related Content

Jun 22, 2017
Originally published on June 22, 2017 10:07 am

Social media companies are under pressure to block terrorist activity on their sites, and Facebook recently detailed new measures, including using artificial intelligence, to tackle the problem.

The measures are designed to identify terrorist content, such as recruitment material and propaganda, as early as possible in an effort to keep people safe, says Monika Bickert, the company's director of global policy management.

"We want to make sure that's not on the site because we think that that could lead to real-world harm," she tells NPR's Steve Inskeep.

Bickert says Facebook is using technology to identify people who were removed for sharing terrorism propaganda in violation of its community standards and then try to return under fake accounts. And she says the company is using image-matching software to detect when someone tries to upload a known propaganda video and to block it before it reaches the site.

"So let's say that somebody uploads an ISIS formal propaganda video: Somebody reports that or somebody tells us about that, we look at that video, then we can use this software to create ... a digital fingerprint of that video, so that if somebody else tries to upload that video in the future we would recognize it even before the video hits the site," she says.

If it's content that would violate Facebook's policies no matter what, like a beheading video, then it would get removed. But for a lot of content, context matters, and Facebook is hiring more people worldwide to review posts after the software has flagged them.

"If it's terrorism propaganda, we're going to remove it. If somebody is sharing it for news value or to condemn violence, we may leave it up," Bickert says.

The measures come in the wake of criticism of how Facebook handles content. Last year, for example, Facebook took down a post of the Pulitzer Prize-winning photo of a naked girl fleeing a napalm attack during the Vietnam War. The move upset users, and the post was eventually restored. Facebook has also been criticized for leaving a graphic video of a murder on the site for two hours.

Morning Edition editor Jessica Smith and producer Maddalena Richards contributed to this report.

Copyright 2017 NPR. To see more, visit http://www.npr.org/.

STEVE INSKEEP, HOST:

Facebook says it wants to keep from being used by extremists. The company, used by billions of people, is under pressure not to be a platform for violence. More than once, people have committed murder on Facebook. Terrorists have proclaimed allegiance to ISIS on Facebook or won new recruits there. Facebook's Monika Bickert oversees an effort to stop that.

MONIKA BICKERT: We're focused on real-world harm, so harm in the physical world. That means that - you know, for things like terrorism recruitment or terrorism propaganda, we want to make sure that it's not on the site because we think that that could lead to real-world harm.

INSKEEP: It's a vast challenge and delicate. Facebook wants to block dangerous content without blocking free speech. Its techniques include hiring more human monitors of doubtful content. Bickert also hopes to block some offensive images before they're published using image-matching software.

BICKERT: To tell if somebody is, for instance, trying to upload a known propaganda video.

INSKEEP: Blocking harmful or violent videos, I guess, before they reach anyone. How would that work?

BICKERT: There's software that we have that allows us to recognize if a video that someone's trying to upload to Facebook is a video we've seen before. So let's say that somebody uploads an ISIS formal propaganda video. Somebody reports that or somebody tells us about that. We look at that video. Then we can use this software to create what's called a hash, or a digital fingerprint of that video, so that if somebody else tries to upload that video in the future, we would recognize it even before the video hits the site.

And, you know, I want to point out that it doesn't necessarily mean that we would take automated action. There are some types of videos that would violate our policies no matter what, like a beheading video. But there are other times where we need people to actually review the content that this software is flagging for us.

INSKEEP: What if somebody creates a - god forbid - a fresh beheading? Do you have an algorithm or software that can recognize that as it's happening?

BICKERT: Photo-matching software is not going to recognize that. But if we can find out from the community, you know, what the new image contains - and we also talked to others in industry about this, so that whoever finds it first can share it with the others - then we can go ahead and create a digital hash of that and stop anybody from uploading it to Facebook.
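Her point that "whoever finds it first can share it with the others" amounts, in effect, to a shared registry of fingerprints that each participating company can consult before publishing an upload. A minimal sketch of that idea follows; the registry class and its methods are invented for the example and do not describe any company's actual system.

```python
# Illustrative sketch of the cross-company sharing Bickert mentions:
# whichever company identifies a new piece of propaganda first contributes
# its fingerprint so the others can block re-uploads. The class and method
# names are hypothetical.


class SharedHashRegistry:
    """A hypothetical registry that participating companies read and write."""

    def __init__(self) -> None:
        self._hashes = set()

    def contribute(self, content_hash: str, source_company: str) -> None:
        """Record a fingerprint from whichever company found the content first."""
        print(f"{source_company} contributed fingerprint {content_hash[:12]}")
        self._hashes.add(content_hash)

    def is_known(self, content_hash: str) -> bool:
        """Each participant checks new uploads against the shared set."""
        return content_hash in self._hashes


# Example: one company contributes a fingerprint, another checks an upload.
registry = SharedHashRegistry()
registry.contribute("example-fingerprint-1", "CompanyA")
print(registry.is_known("example-fingerprint-1"))  # True: the re-upload would be blocked
```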

INSKEEP: Since you have a background in law enforcement, I know you're familiar with the phrase prior restraint, which is something that the U.S. government is never supposed to do when it comes to free speech. There may be speech that is - can be punished in some way, but there should not be prior restraint of publication. Here you are, a private company, and you're contemplating prior restraint. What are the specific instances when you think it is OK for you to do that?

BICKERT: Well, I mean, first I want to point out that, as a social media company, we set the terms and let our people know what the terms are for when they come to Facebook. So we've made it really clear for a long time that we don't allow terror propaganda.

And we are going to do everything we can to stop it from hitting the site as early as possible. We're really looking at the context of how something was shared. If it's terrorism propaganda, we're going to remove it. If somebody is sharing it for news value or to condemn violence, we may leave it up.

INSKEEP: What if there's talk of a beheading?

BICKERT: Our policy is that we'll remove anything that is promoting or glorifying these terror groups or their actions. So you can imagine people discussing a terror attack. If they are saying, this is really wonderful, this is funny, I love what happened to these people, and those posts are reported to us, we would remove those for celebrating terrorism.

INSKEEP: What if it is someone who is denouncing the United States, or denouncing the West, in ways that someone might find to be unfair or building up hatred against the United States?

BICKERT: We do allow political speech. So you know, people - some people are going to express dissatisfaction or even hatred for countries, or for their foreign policies, and that's something that you can do on Facebook. Where we draw the line is when it comes to promoting or celebrating violence.

INSKEEP: Let me ask about information bubbles. People in the United States have been more and more forcefully reminded that many of us risk being in an information bubble, where the more content of a certain slant that we click on, the more content like that we get. And there are left-wing information bubbles and right-wing information bubbles. Are there terrorist information bubbles, do you think?

BICKERT: Well, there's maybe two different things we're talking about here. And one is whether it's hate speech or terror propaganda that's being discussed among a certain group of people. And another is whether or not there is speech that pushes back on that. We do find that there is a lot of speech that pushes back on ideas like promoting terror groups or hate speech. And, in fact, we're involved in trying to enable and promote some of those campaigns.

INSKEEP: Oh, but that's kind of my question because, as everybody knows, you can have lots of speech that pushes back on Democrats that Democrats never see and lots of speech that pushes back on Republicans that Republicans never see. Are there information bubbles for people who are extremists or leaning toward extremism, where they might spend all day on Facebook and never see anything that challenges their views?

BICKERT: There's certainly - you know, there's certainly always an opportunity for people to engage in groups they want to engage in. That's one of the reasons that we're focused on technology to help us find terror propaganda no matter where it is.

INSKEEP: But do your algorithms, which move automatically, allow people - even if they're not seeing a beheading video - even if you block that, you might - you might have people who see nothing all day but borderline-extremist content, the kind of thing that might encourage them to become more violent?

BICKERT: That's one of the reasons that we're focused on not only using our technology but also building our partnerships, so that we can help people push back on that kind of messaging.

INSKEEP: Does your strategy include invading or popping some of those bubbles, dropping in some anti-extremist content to people who seem to be tending in that direction?

BICKERT: We're not content creators. What we try to do is help the people who are creating good content against extremism, against hatred make their content succeed on Facebook. So, you know, we've done...

INSKEEP: So does that mean you tweak your algorithms so they would get to the right people?

BICKERT: No, no, we don't. But what we do is we study - and we've commissioned research a couple different times and published that research. We study the best ways for speech against terrorism and extremism to flourish. And then we try to help these civil society groups and others use those techniques to make their speech succeed and reach a lot of people on Facebook.

INSKEEP: Monika Bickert, thanks very much.

BICKERT: Thank you very much.

INSKEEP: She's Facebook's director of global policy management. Transcript provided by NPR, Copyright NPR.