The Rise of Deepfakes on ‘Skinnytok’

Trigger warning: discussion of eating disorders

Our understanding of eating disorders stretches back to antiquity. The Romans binged and purged. In the Middle Ages, extreme starvation known as “holy anorexia” was practised for spiritual perfection; a way of gaining control over one’s life and faith, popularised by the likes of Saint Catherine of Siena. Medicalisation began between the 17th and 19th centuries, with early physicians such as Richard Morton providing the first medical description of “nervous consumption” and Sir William Gull later coining the term “Anorexia Nervosa”, recognising its mental characteristics. In the early 20th century, eating disorders were thought to stem from hormonal imbalances, before a shift to the psychological. The medical professionals of our past could never have foreseen the insidious impact that social media would have.

Fast forward to Tumblr’s 2010s golden age: highly curated, pastel-toned and full of gifs, fandoms and film stills. Sepia-filtered polaroids of Vladimir Nabokov’s ‘Lolita’ lying in patches of daisies encapsulate the tone, preoccupied with the aesthetic. Crucially, these online spaces were permeated with a certain melancholy. Sadness held a ‘trendiness’ that persists today, but was particularly pertinent on Tumblr. Coupled with the lack of moderation, and prior to whispers of the body positivity movement, Tumblr was the perfect breeding ground for the competition and encouragement of eating disorders. We saw the emergence of ‘pro-ana’ (pro-anorexia) content. Profiles would post ‘thinspo’, forging communities based on holding one another ‘accountable’ and even blocking members who weren’t seen as dedicated enough to their own starvation.

I spoke with Emily, 23 (all names have been changed to preserve anonymity), who was active in ‘pro-ana’ communities during this epoch. She recalls feeling “addicted to the validation” that she would receive from being part of these groups. Her time online made her feel like her dangerous thoughts were normal and even cool, which meant she didn’t seek out help. She looks back and wishes there had been tighter restrictions online, or that she hadn’t been given a mobile phone so young. In another conversation, Lauren, 22, revealed how growing up using social media impacted her body image. She recalls comparing images of girls in skinny jeans online to herself, telling me that, “even to this day changing rooms still make me feel a bit sick”. 

Fast forward again to the present day. TikTok’s algorithmic monolith has reconceptualised eating disorder content online, generating personalised feeds. Users don’t need to seek out ‘pro-ana’ communities in the same way that they once did: linger a moment too long on the wrong video and suddenly a whole ‘for you’ page is tailored. A report from the Center for Countering Digital Hate, a US NGO, found that TikTok’s algorithm targets vulnerable teens and recommends “harmful” content to them every 27 to 39 seconds.

The algorithm amplifies the internet’s harm and makes content even more addictive. Now, a newer phenomenon intensifies the problem further. The rise of AI and the use of deepfakes (videos or images that have been digitally altered to change appearance or audio, typically used maliciously) mean that users can transcend the traditional use of celebrity or trending images for thinspo; entirely new images can be generated. Today’s technology also allows people to create deepfakes of themselves: the ability to alter their own bodies digitally. Eating disorders already involve distorted body image, and now sufferers are comparing themselves to bodies that don’t exist. The standard of perfection has surpassed the limitations of the physical body. This phenomenon is known as “digitalised dysmorphia”.

The platforms themselves are struggling to respond. TikTok has banned certain hashtags and claims to remove pro-eating disorder content, but the material simply gets coded differently. Users replace obvious tags like #thinspo with innocuous-seeming terms or even random strings of emojis that only community members can recognise. Deepfake content is particularly difficult to moderate because it often doesn’t violate existing policies against showing harmful real-world content. After all, it’s technically not real. Yet the harm is. Eating disorders have the highest mortality rate of any mental illness, and social media exposure to pro-eating disorder content has been linked to increased symptoms and relapse. The addition of personalised AI-generated images takes an already dangerous situation and makes it exponentially worse. 

It is difficult to know what the answer is. Tougher legislation is needed, but users have shown that their content can slip through the cracks. Australia made a bold move on the global stage, banning social media for under-16s, with Meta blocking 550,000 accounts in the first few days. The ban is the first of its kind; other countries are watching with curiosity to see how it plays out. Perhaps this could be the way forward. If young people have less access to online spaces before their brains have fully developed, maybe they can resist some of the harmful impacts of “skinnytok”.
