Facebook is launching a new AI tool today that it says can proactively detect and flag intimate images and videos of someone posted without their consent. The system will be active on Facebook and Instagram, and, unlike current filters, it can detect “near-nude” content. This content is then flagged and sent to a human moderator for review.
Currently, users on Facebook and Instagram have to report such revenge porn themselves. Facebook says it hopes that the new system will better support victims by flagging images and videos for them.
“Often victims are afraid of retribution so they are reluctant to report the content themselves or are unaware the content has been shared,” explained Facebook’s global head of safety, Antigone Davis, in a blog post. The new AI system, says Davis, means such content can be found and sent to human moderators “before anyone reports it.”
The efficacy of this system is difficult to judge. Facebook has long presented artificial intelligence as a solution to the problem of moderating its sprawling platforms, despite known problems with the technology. And while automated systems can remove unwanted content that has been flagged in advance (such as terrorist propaganda), they struggle with imagery and videos that require context to understand.
It’s not clear from Facebook’s announcement exactly how its new AI tool detects what the company calls “non-consensual intimate images.” A report from The Associated Press says the system doesn’t just look at the image itself; it also looks at the caption. If that caption contains “derogatory or shaming text,” says the AP, it probably means “someone uploaded the photo to embarrass or seek revenge on someone else.”
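As a rough illustration of the two-signal approach the AP describes (and not Facebook’s actual pipeline, which the company hasn’t detailed), a system like this might combine an image classifier’s nudity score with a check of the caption text. Every function name, word list, and threshold in the sketch below is a hypothetical stand-in:

```python
# Hypothetical sketch only: combine an image nudity score with a caption
# check, as in the approach described by the AP. All names, terms, and
# thresholds here are illustrative assumptions, not Facebook's system.

from dataclasses import dataclass

# Toy word list standing in for a learned text classifier.
SHAMING_TERMS = {"exposed", "humiliate", "revenge", "slut"}


@dataclass
class Post:
    image_nudity_score: float  # 0.0-1.0, assumed output of an image classifier
    caption: str


def caption_is_derogatory(caption: str) -> bool:
    """Crude stand-in for a text model: look for shaming terms in the caption."""
    words = {w.strip(".,!?").lower() for w in caption.split()}
    return bool(words & SHAMING_TERMS)


def should_flag_for_review(post: Post, nudity_threshold: float = 0.8) -> bool:
    """Flag near-nude imagery paired with a derogatory caption for human review."""
    return (post.image_nudity_score >= nudity_threshold
            and caption_is_derogatory(post.caption))


if __name__ == "__main__":
    post = Post(image_nudity_score=0.92, caption="Look how exposed she is now")
    print(should_flag_for_review(post))  # True -> route to a human moderator
```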
However, this would certainly not be enough to catch all instances of revenge porn. We’ve contacted Facebook for more detail.
Along with the announcement of the new AI filter, Facebook said it would launch a new hub for the topic called Not Without My Consent, improve its tools for reporting revenge porn, and respond to such reports more quickly in the future.
The company also said that it hopes to extend to more countries its somewhat controversial program that lets people preemptively report revenge porn. The program lets users send Facebook images and videos they suspect will be shared online; the company then blocks that content from ever being uploaded to its sites.
As with the removal of known terrorist content, this process is much more accurate than an automated filter, but users are understandably uneasy about sharing such personal content with Facebook. Over the past few years, the company has been at the center of a number of scandals for failing to properly safeguard users’ data. Facebook says it’s aware of this criticism and wants to “better explain and clarify the process and safeguards in place.”