THE FUTURE IS HERE

AI won’t relieve the misery of Facebook’s human moderators

No matter what companies say, AI is not going to solve the problem of content moderation online. It’s a promise we’ve heard many times before, particularly from Facebook CEO Mark Zuckerberg, but experts say the technology is just not there — and, in fact, may never be.

Most social networks keep unwanted content off their platforms using a combination of automated filtering and human moderators. As The Verge revealed in a recent investigation, human moderators often work in highly stressful conditions. Employees have to click through hundreds of items of flagged content every day — everything from murder to sexual abuse — and then decide whether or not it violates a platform’s rules, often working on tightly controlled schedules and without adequate training or support.

When presented with the misery their platforms are creating (as well as other moderation-adjacent problems, like perceived bias), companies often say more technology is the solution. During his hearings in front of Congress last year, for example, Zuckerberg cited artificial intelligence more than 30 times as the answer to this and other issues.

“AI is Zuckerberg’s MacGuffin,” James Grimmelmann, a law professor at Cornell Tech, told The Washington Post at the time. “It won’t solve Facebook’s problems, but it will solve Zuckerberg’s: getting someone else to take responsibility.”

So what is AI doing for Facebook and other platforms right now, and why can’t it do more?

The problem of automating human culture

Right now, automated systems using AI and machine learning are certainly doing quite a bit to help with moderation. They act as triage systems, for example, pushing suspect content to human moderators, and are able to weed out some unwanted stuff on their own.
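
To make that triage role concrete, here is a minimal sketch in Python. The thresholds, queue names, and the stand-in score_content function are hypothetical placeholders for illustration, not how Facebook or any other platform actually routes content.

```python
# Minimal sketch of automated triage (illustrative only).
# score_content stands in for a trained classifier; the thresholds and
# queue names below are invented, not any platform's real values.

AUTO_REMOVE_THRESHOLD = 0.95   # near-certain violations are removed automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # ambiguous content goes to a human moderator

def score_content(item: str) -> float:
    """Placeholder for a model returning an estimated probability of violation."""
    return 0.72  # dummy value so the sketch runs

def triage(item: str) -> str:
    score = score_content(item)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"         # the system acts on its own
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review_queue"  # suspect content is pushed to a person
    return "allow"                   # low-risk content stays up

print(triage("some flagged post"))  # -> "human_review_queue"
```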

But the way these systems work is relatively simple: either they use visual recognition to identify a broad category of content (like “human nudity” or “guns”), which is prone to mistakes, or they match content against an index of banned items, which requires humans to compile that index in the first place.

The latter approach is used to get rid of the most obvious infringing material: things like propaganda videos from terrorist organizations, child abuse material, and copyrighted content. In each case, content is identified by humans and “hashed,” meaning it’s turned into a unique string of numbers that’s quicker to process. The technology is broadly reliable, but it can still lead to problems. YouTube’s Content ID system, for example, has flagged uploads like white noise and birdsong as copyright infringement in the past.
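
Hash matching itself is easy to sketch. The example below uses a plain SHA-256 digest, which only catches exact byte-for-byte copies; real systems rely on perceptual hashes (Microsoft’s PhotoDNA is the best-known example) so that re-encoded or lightly edited copies still match. The “banned image” bytes here are placeholders so the example runs.

```python
# Minimal sketch of matching uploads against an index of known banned content.
# Plain SHA-256 is used for simplicity; production systems use perceptual
# hashes so that re-encoded or cropped copies still match.

import hashlib

def content_hash(data: bytes) -> str:
    """Turn content into a fixed-length string of numbers (a hash)."""
    return hashlib.sha256(data).hexdigest()

# Index built from items human reviewers have already judged to violate policy.
# Only the hashes are stored and shared, not the content itself.
banned_hashes = {content_hash(b"<bytes of a known banned image>")}

def is_known_banned(upload: bytes) -> bool:
    return content_hash(upload) in banned_hashes

print(is_known_banned(b"<bytes of a known banned image>"))  # True: exact copy
print(is_known_banned(b"<same image, re-encoded>"))         # False: exact hashing misses near-duplicates
```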

[Image: Facebook. AI systems are being trained to parse new sorts of images, like memes.]

Things become much trickier when the content itself can’t be easily classified even by humans. This can include content that algorithms can certainly recognize but that has many shades of meaning (like nudity: does breastfeeding count?), or that is heavily context-dependent, like harassment, fake news, and misinformation. None of these categories has a simple definition, and each has edge cases with no objective answer: instances where someone’s background, personal ethos, or simply their mood on a given day might make the difference between one classification and another.

The problem with trying to get machines to understand this sort of content, says Robyn Caplan, an affiliate researcher at the nonprofit Data & Society, is that it is essentially asking them to understand human culture — a phenomenon too fluid and subtle to be described in simple, machine-readable rules.

“[This content] tends to involve context that is specific to the speaker,” Caplan tells The Verge. “That means things like power dynamics, race relations, political dynamics, economic dynamics.” Since these platforms operate globally, varying cultural norms need to be taken into account too, she says, as well as different legal regimes.

One way to know whether content will be difficult to classify, says Eric Goldman, a professor of law at Santa Clara University, is to ask whether or not understanding it requires “extrinsic information” — that is, information outside the image, video, audio, or text.

“For example, filters are not good at figuring out hate speech, parody, or news reporting of controversial events because so much of the determination depends on cultural context and other extrinsic information,” Goldman tells The Verge. “Similarly, filters aren’t good at determining when a content republication is fair use under US copyright law because the determination depends on extrinsic information such as market dynamics, the original source material, and the uploader’s other activities.”

How far can we push AI systems?

But AI as a field is moving swiftly. So will algorithms eventually be able to reliably classify this sort of content? Goldman and Caplan are skeptical.

AI will get better at understanding context, says Goldman, but it’s not evident that AI will soon be able to do so better than a human. “AI will not replace […] human reviewers for the foreseeable future,” he says.

Caplan agrees, pointing out that as long as humans argue over how to classify this sort of material, machines have little chance of doing better. “There is just no easy solution,” she says. “We’re going to keep seeing problems.”

It’s worth noting, though, that AI isn’t completely hopeless. Recent advances in deep learning have greatly increased the speed and competence with which computers classify information in images, video, and text. Arun Gandhi, who works for NanoNets, a company that sells AI moderation tools to online businesses, says this progress shouldn’t be discounted.

“A lot of the focus is on how traumatic or disturbing the job of content moderator is, which is absolutely fair,” Gandhi tells The Verge. “But it also takes away [from] the fact that we are making progress with some of these problems.”

Machine learning systems need a large number of examples to learn what offending content looks like, explains Gandhi, which means they will keep improving in the years to come as training datasets get bigger. He notes that some of the systems currently in place would have looked impossibly fast and accurate even a few years ago. “I’m confident, given the improvements we’ve made in the last five, six years, that at some point we’ll be able to completely automate moderation,” says Gandhi.
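
As a toy illustration of that learning-from-examples process, the sketch below trains a tiny text classifier with scikit-learn. The four “posts” and their labels are invented; real moderation models are deep networks trained on millions of human-labeled items, but the principle is the same: the model knows only what its labeled examples teach it, so bigger and better datasets generally mean better classification.

```python
# Toy sketch: a moderation classifier learns entirely from human-labeled examples.
# The dataset below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "buy cheap watches now, click here",         # labeled as violating (spam)
    "limited offer, send us your card details",  # labeled as violating (spam)
    "here are the notes from today's meeting",   # labeled as fine
    "see you at lunch tomorrow",                 # labeled as fine
]
labels = [1, 1, 0, 0]  # 1 = violates policy, 0 = allowed

# Bag-of-words features plus a linear model: far simpler than production systems,
# but it shows that the classifier's "knowledge" comes entirely from labeled data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["click here for a cheap limited offer"]))  # likely [1]
```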

Others disagree, noting that AI systems have yet to master not only political and cultural context (which shifts month to month and country to country), but also basic human concepts like sarcasm and irony. Throw in the various ways AI systems can be fooled by simple hacks, and a complete AI solution looks unlikely.

Sandra Wachter, a lawyer and research fellow at the Oxford Internet Institute, says there are also legal reasons why humans will need to be kept in the loop for content moderation.

“In Europe we have a data protection framework [GDPR] that allows people to contest certain decisions made by algorithms. It also says transparency in decision making is important [and] that you have a right to know what’s happening to your data,” Wachter tells The Verge. But algorithms can’t explain why they make certain decisions, she says, which makes these systems opaque and could lead to tech companies getting sued.

Wachter says that complaints relating to GDPR have already been lodged, and that more cases are likely to follow. “When there are higher rights at stake, like the right to privacy and to freedom of speech, […] it’s important that we have some sort of recourse,” she says. “When you have to make a judgement call that impacts other people’s freedom you have to have a human in the loop that can scrutinize the algorithm and explain these things.”

As Caplan notes, what tech companies can do — with their huge profit margins and duty of care to those they employ — is improve working conditions for human moderators. “At the very bare minimum we need to have better labor standards,” she says. As Casey Newton noted in his report, while companies like Facebook do make some effort to properly reward human moderators, giving them health benefits and above-average wages, it’s often outweighed by a relentless drive for better accuracy and more decisions.

Caplan says that pressure on tech companies to solve the problem of content moderation could also be contributing to this state of affairs. “That’s when you get issues where workers are held to impossible standards of accuracy,” she says. The need to come up with a fix as soon as possible plays into Silicon Valley’s often-maligned “move fast and break things” attitude. And while this can be a great way to think when launching an app, it’s a terrible mindset for a company managing the subtleties of global speech.

“And we’re saying now maybe we should use machines to deal with this problem,” says Caplan, “but that will lead to a whole new set of issues.”

It’s also worth remembering that this is a new and unique problem. Never before have platforms as huge and information-dense as Facebook and YouTube existed. These are places where anyone, anywhere in the world, any time, can upload and share whatever content they like. Managing this vast and ever-changing semi-public realm is “a challenge no other media system has ever had to face,” says Caplan.

What we do know is that the status quo is not working. The humans tasked with cleaning up the internet’s mess are miserable, and the humans creating that mess aren’t much better off. Artificial intelligence doesn’t have enough smarts to deal with the problem, and human intelligence is stretched thin coming up with solutions. Something’s gotta give.