Facebook is working with an Australian government agency to pilot the technology. Photograph: Anadolu Agency/Getty Images

Facebook asks users for nude photos in project to combat 'revenge porn'

This article is more than 6 years old

In an Australian pilot effort, the company will ‘hash’ images, converting them into digital fingerprints that block any further attempts to upload the same pictures

Facebook is asking users to send the company their nude photos as part of an effort to tackle revenge porn and give some control back to victims of this type of abuse.

Individuals who have shared intimate, nude or sexual images with partners and are worried that the partner (or ex-partner) might distribute them without their consent can use Messenger to send the images to be “hashed”. This means that the company converts the image into a unique digital fingerprint that can be used to identify and block any attempts to re-upload that same image.
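The pipeline described here — reduce an image to a fingerprint, store only the fingerprint, and reject any upload that matches it — can be sketched in a few lines. Facebook has not published its implementation, so the function names below are hypothetical and a cryptographic hash stands in for the company's actual (perceptual) matching technology:

```python
import hashlib

# An in-memory stand-in for the platform's hash blocklist.
blocklist = set()

def fingerprint(image_bytes: bytes) -> str:
    """Reduce an image to a fixed-size digest (a 'hash')."""
    return hashlib.sha256(image_bytes).hexdigest()

def report_image(image_bytes: bytes) -> None:
    """Called when a victim submits an image: only the hash is kept."""
    blocklist.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Reject any upload whose fingerprint matches a reported image."""
    return fingerprint(image_bytes) not in blocklist

report_image(b"intimate-photo-bytes")
print(allow_upload(b"intimate-photo-bytes"))  # False: re-upload blocked
print(allow_upload(b"unrelated-photo-bytes"))  # True: other images unaffected
```

One design point the sketch makes concrete: after hashing, the platform never needs to retain the image itself — only the digest — which is what allows the original to be deleted.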

Facebook is piloting the technology in Australia in partnership with a government agency headed up by the e-safety commissioner, Julie Inman Grant, who told ABC it would allow victims of “image-based abuse” to take action before pictures were posted to Facebook, Instagram or Messenger.

“We see many scenarios where maybe photos or videos were taken consensually at one point, but there was not any sort of consent to send the images or videos more broadly,” she told the Australian broadcaster.

Carrie Goldberg, a New York-based lawyer who specializes in sexual privacy, said: “We are delighted that Facebook is helping solve this problem – one faced not only by victims of actual revenge porn but also individuals with worries of imminently becoming victims.

“With its billions of users, Facebook is one place where many offenders aggress because they can maximize the harm by broadcasting the nonconsensual porn to those most close to the victim. So this is impactful.”

In the Australian pilot, users must first complete an online form on the e-safety commissioner’s website outlining their concerns. They will then be asked to send the pictures they are concerned about to themselves on Messenger while the e-safety commissioner’s office notifies Facebook of their submission. Once Facebook gets that notification, a community operations analyst will access the image and hash it to prevent future instances from being uploaded or shared.

Facebook will store these images for a short period of time before deleting them to ensure it is enforcing the policy correctly, the company said.

Roughly 4% of US internet users have been victims of revenge porn, according to a 2016 report from the Data & Society Research Institute. The proportion rises to 10% when dealing with women under the age of 30.

This builds on existing tools Facebook has to deal with revenge porn. In April the social networking site released reporting tools to allow users to flag intimate photos posted without their consent to “specially trained representatives” from the site’s community operations team who “review the image and remove it if it violates [Facebook’s] community standards”. Once a picture has been removed, photo-matching technology is used to ensure the image isn’t uploaded again.

Facebook and other technology companies use the same kind of photo-matching technology, in which images are “hashed”, to tackle other types of content, including child sexual abuse and extremist imagery.

The technology was first developed in 2009 by Microsoft, working closely with Dartmouth and the National Center for Missing and Exploited Children to clamp down on the same images of sexually abused children being circulated over and over again on the internet. There was technology that could find exact matches of images, but abusers could get around this by slightly altering the files – either by changing their size or adding a small mark.

PhotoDNA’s “hash” matching technology made it possible to identify known illegal images even if someone had altered them. Facebook, Twitter and Google all use the same hash database to identify and remove illegal images.
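PhotoDNA itself is proprietary, but the idea it illustrates — a fingerprint that survives small alterations, unlike exact byte matching — can be shown with a toy “average hash”. Everything below (the 4×4 “image”, the functions) is an illustrative assumption, not PhotoDNA's actual algorithm:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if that pixel is
    brighter than the image's mean. Real systems such as PhotoDNA are
    far more sophisticated, but the principle is similar."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count of differing bits; a small distance means a likely match."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny 4x4 grayscale "image" and a slightly altered copy
# (e.g. a small mark added to defeat exact byte matching).
original = [[10, 20, 200, 210],
            [15, 25, 205, 215],
            [12, 22, 202, 212],
            [18, 28, 208, 218]]
altered = [row[:] for row in original]
altered[0][0] = 40  # small edit: the file's bytes no longer match exactly

print(hamming(average_hash(original), average_hash(altered)))  # 0
```

The altered file would slip past an exact-match filter, yet its perceptual hash is unchanged — which is why this style of matching defeated the evasion tactic described above.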

Hany Farid, a professor of computer science at Dartmouth who helped develop PhotoDNA, described Facebook’s pilot as a “terrific idea”.

“The deployment of this technology would not prevent someone from sharing images outside of the Facebook ecosystem, so we should encourage all online platforms to participate in this program, as we do with PhotoDNA,” he said.

A Facebook spokeswoman said the company was exploring additional partners and countries.

Contact the author: olivia.solon@theguardian.com

More on this story

  • Love Island star has had ‘serious conversations’ about becoming MP

  • Georgia Harrison: I didn’t realise how difficult revenge porn case would be

  • Laura Bates: ‘For teenage girls, escaping harassment, revenge porn and deepfake porn is impossible’

  • ‘I have moments of shame I can’t control’: the lives ruined by explicit ‘collector culture’

  • UK's revenge porn helpline registers busiest year on record

  • New Zealand ‘revenge porn’ laws in spotlight amid accusations against former National candidate

  • More than 500 child victims of 'revenge porn' in England and Wales last year

  • Domestic abuse surged in lockdown, Panorama investigation finds
