‘Magic Avatar’ App Lensa Generated Nudes From My Childhood Photos

The dreamy picture-editing AI is a nightmare waiting to happen.

This weekend, the photo-editing app Lensa flooded social media with celestial, iridescent, and anime-inspired “magic avatars.” As is typical in our milkshake-duck internet news cycle, arguments as to why using the app was problematic proliferated at a speed second only to that of the proliferation of the avatars themselves.

I’ve already been lectured about the dangers of using the app: how it implicates us in teaching the AI, steals from artists, and engages in predatory data-sharing practices. Each concern is legitimate, but less discussed are the more sinister violations inherent in the app, namely the algorithmic tendency to sexualize subjects to a degree that is not only uncomfortable but also potentially dangerous.

Lensa’s terms of service instruct users to submit only appropriate content containing “no nudes” and “no kids, adults only.” And yet, many users—primarily women—have noticed that even when they upload modest photos, the app not only generates nudes but also ascribes cartoonishly sexualized features, like sultry poses and gigantic breasts, to their images. I, for example, received several fully nude results despite uploading only headshots. The sexualization was also often racialized: Nearly a dozen women of color told me that Lensa whitened their skin and anglicized their features, and one woman of Asian descent told me that in the photos “where I don’t look white they literally gave me ahegao face.” Another woman who shared both the fully clothed images she uploaded and the topless results the app produced—which she chose to modify with “some emojis for a lil modesty cuz omg”—told me, “I honestly felt very violated after seeing it.”

I’m used to feeling violated by the internet. Having been the target of several harassment campaigns, I’ve seen my image manipulated, distorted, and distributed without my consent on multiple occasions. Because I am not face-out as a sex worker, the novelty of hunting down and circulating my likeness is, for some, a sport. Because sex workers are not perceived by the general public as human or deserving of basic rights, this behavior is celebrated rather than condemned. Because sex work is so often presumed to be a moral failing rather than a job, our dehumanization is redundant. I’ve logged on to Twitter to see my face photoshopped onto other women’s bodies, pictures of myself and unclothed clients in session, and once even a word search composed of my face, personal details, and research interests. I’m not afraid of Lensa.

I’m desensitized enough to the horrors of technology that I decided to be my own lab rat. I ran a few experiments: first, only BDSM and dungeon photos; next, my most feminine photos under the “male” gender option; later, selfies from academic conferences—all of which produced spectacularly sized breasts and full nudity.

I then embarked on what I knew would be a journey through hell, and decided to use my likeness to test the app’s other restriction: “No kids, adults only.” (Some of the results are below: Please be aware that they show sexualized images of children.)

Illustration: Olivia Snow via Lensa

I have few photos of myself from childhood. Between my unruly hair, my uneven teeth, and the bifocals I started wearing at age seven, my appearance until my late teens could most generously be described as “mousy.” I also grew up before the advent of the smartphone, and any other pictures are likely buried away in distant relatives’ photo albums. But I managed to piece together the minimum 10 photos required to run the app and waited to see how it would transform me from awkward six-year-old to fairy princess.

The results were horrifying. 

Illustration: Olivia Snow via Lensa

In some instances, the AI seemed to recognize my child’s body and mercifully neglected to add breasts. This was probably not a reflection of the technology’s personal ethics but of the patterns it identified in my photo; perhaps it perceived my flat chest as being that of an adult man. In other photos, the AI attached orbs to my chest that were distinct from clothing but also unlike the nude photos my other tests had produced.

I tried again, this time with a mix of childhood photos and selfies. What resulted were fully nude photos of an adolescent and sometimes childlike face but a distinctly adult body. Similar to my earlier tests that generated seductive looks and poses, this set produced a kind of coyness: a bare back, tousled hair, an avatar with my childlike face holding a leaf between her naked adult’s breasts. Many were eerily reminiscent of Miley Cyrus’ 2008 photoshoot with Annie Leibovitz for Vanity Fair, which featured a 15-year-old Cyrus clutching a satin sheet around her bare body. What was disturbing about the image at the time was the pairing of her makeup-free, almost cherubic face with the body of someone implied to have just had sex. 

It was Cyrus whose reputation suffered, not that of the magazine or the then-58-year-old photographer Leibovitz, when Vanity Fair published the photo set. The sexualization and exploitation of children, and especially girls, is so insidious that it’s naturalized. Cyrus’ defense of the photoshoot, which she called “really artsy” and not “in a skanky way” in her interview, felt even more aberrant than the photos themselves. 

While the Cyrus photos were not artificially generated, their echoes in my own Lensa avatars—Lensa, after all, is meant to provide you with avatars that flatter—suggest that, despite the general public’s collective disgust at Cyrus’ nude photo, images of young, naked white girls correspond to larger cultural concepts of beauty. As scholars like Ruha Benjamin and Safiya Noble have established, machine-learning algorithms reproduce the cultural biases of both the engineers who code them and the consumers who use them as products. Users’ biases, including Western beauty standards, shape how the algorithms develop. And as for beauty, in her 2018 book Algorithms of Oppression, Noble provides a screenshot of a 2014 Google Images search for “beautiful” as a snapshot of the technocultural zeitgeist: The results largely feature highly sexualized pictures of white women.

But beauty is only one metric at play. As Bethany Biron wrote for Business Insider, Lensa’s results often lean toward horror too. Biron describes some of her own avatars containing melting faces and multiple limbs as “the stuff of nightmares.”

A concurrent controversy in AI art is that of Loab, an AI-generated woman discovered by Swedish musician and AI artist Supercomposite. Loab’s features inexplicably inspire grotesque, macabre images when fed into an as-yet-undisclosed AI art generator. At its worst, according to Supercomposite, “cross-breeding” Loab with other images produces “borderline snuff images of dismembered, screaming children.”

The graphic violence of Loab and her derivatives hearkens back to the early days of an unmoderated internet of shock sites bearing beheadings and pornography. The models behind these images, shaped by earlier moderation choices and machine-learning training data, have neither the agency nor the judgment of artists or software engineers; they simply identify patterns. And unlike the user-generated content subject to moderation or the data used to develop these technologies, AI-generated content presents itself entirely unfiltered.

For Lensa, which endeavors to “beautify” (as in, whiten and sexualize) user-submitted content, the lack of moderation threatens to unleash a torrent of similarly horrifying content—in this case, child sexual exploitation material (CSEM). Over the past 30 years, efforts to curb child abuse and human trafficking have developed alongside the internet. Content moderation for CSEM, for example, has become subject to various laws and regulations, including a mandate to report all CSEM to the National Center for Missing and Exploited Children (NCMEC). NCMEC maintains a database of known material that underpins tools like PhotoDNA, a Microsoft-developed system used by major tech companies like Meta and Twitter to identify CSEM. But because those tools work by matching uploads against previously reported images, newly generated content matches nothing, and AI art generators evade content moderation entirely.

I was not a conventionally attractive child, as many of my results reflected, but I suspect girls with features more likely to be sexualized by the AI—especially Black girls, who are regularly perceived as adult women—would find even more disturbing examples of what is essentially deepfaked CSEM. Children using the app may see their bodies oversexualized and feel violated as many of Lensa’s adult users already do—or they may weaponize the app to sexually harass their peers.

Without any moderation or oversight, the potential for AI-generated violence inherent in “magic avatars” is staggering. Lensa doesn’t seem to enforce its policies prohibiting nudity and minors, and it doesn’t have any policies at all stipulating that users can only upload images of themselves. (Its only relevant specifications are “same person on all photos” and “no other people on the photo.”) Like most other tech “innovations,” Lensa’s misuse will most severely harm those already at risk: children, women of color, and sex workers.

As artists fear that AI art generators may become a cheap alternative to their labor, apps that generate sexually explicit images could likewise affect adult-content creators. And because sex work, and especially adult content, is often conflated with CSEM, I worry about the potential for these violations to, as such controversies often do, somehow become sex workers’ problem. After all, the frequency of unwanted nudes generated by an app built on machine-learning algorithms suggests that users have been uploading explicit photos to Lensa, despite its terms of service, at a volume high enough for nudity to embed itself in the technology. Whether this is the result of sex workers’ editing their content, civilians’ enhancing their own nudes, or others’ feeding revenge porn into the app is irrelevant. As with Cyrus’ Vanity Fair controversy, the blame for Lensa’s sexualized gaze will fall on the heads of the most vulnerable.

The material threats of CSEM and deepfakes can’t be uncoupled from the whorephobia that results in teachers’ getting fired when their students discover their OnlyFans. Sex workers’ students and coworkers who consume adult content are rarely if ever disciplined for sexually harassing their sex-working colleagues. There’s no reason to believe AI-generated pornography would be treated differently. And when AI-generated pornography is used to harm people, it’s sex workers who will be blamed for submitting adult content that trained the AI—even if the images were never meant to be scraped and used in this way. Whether you are a sex worker or merely perceived as one, the stigma is the same. And unlike OnlyFans and other platforms that monetize adult content, none of these face-tuning apps verify whether users actually own the content they submit. 

The horror story I’ve just narrated may sound too dystopian to be a real threat. But as I have also learned through my own endlessly revolving door of cyberstalkers, no amount of exonerating evidence is sufficient to quell a harassment campaign. Coordinated harassment is already unfathomably effective in silencing marginalized voices—especially those of sex workers, queer people, and Black women—without AI-generated revenge porn. And while the technology may not be sophisticated enough to produce convincing deepfakes now, it will be soon. “Your photos will be used to train the AI that will create Magic Avatars for you,” and for only $3.99 a pop.