Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram

In early 2020, deepfake expert Henry Ajder uncovered one of the first Telegram bots built to “undress” photos of women using artificial intelligence. At the time, Ajder recalls, the bot had been used to generate more than 100,000 explicit photos—including those of children—and its development marked a “watershed” moment for the horrors deepfakes could create. Since then, deepfakes have become more prevalent, more damaging, and easier to produce.

Now, a WIRED review of Telegram communities involved in producing and sharing explicit nonconsensual content has identified at least 50 bots that claim to create explicit photos or videos of people with only a couple of clicks. The bots vary in capabilities: many suggest they can “remove clothes” from photos, while others claim to create images depicting people in various sexual acts.

The 50 bots list more than 4 million “monthly users” combined, according to WIRED’s review of the statistics presented by each bot. Two bots listed more than 400,000 monthly users each, while another 14 listed more than 100,000 members each. The findings illustrate how widespread explicit deepfake creation tools have become and reinforce Telegram’s place as one of the most prominent locations where they can be found. However, the snapshot, which largely encompasses English-language bots, likely represents only a small portion of the deepfake bots on Telegram.

“We’re talking about a significant, orders-of-magnitude increase in the number of people who are clearly actively using and creating this kind of content,” Ajder says of the Telegram bots. “It is really concerning that these tools—which are really ruining lives and creating a very nightmarish scenario primarily for young girls and for women—are still so easy to access and to find on the surface web, on one of the biggest apps in the world.”

Explicit nonconsensual deepfake content, which is often referred to as nonconsensual intimate image abuse (NCII), has exploded since it first emerged at the end of 2017, with generative AI advancements helping fuel recent growth. Across the internet, a slurry of “nudify” and “undress” websites sit alongside more sophisticated tools and Telegram bots, and are being used to target thousands of women and girls around the world—from Italy’s prime minister to schoolgirls in South Korea. In one recent survey, a reported 40 percent of US students were aware of deepfakes linked to their K-12 schools in the last year.

The Telegram bots identified by WIRED are supported by at least 25 associated Telegram channels—where people can subscribe to newsfeed-style updates—that have more than 3 million combined members. The Telegram channels alert people about new features provided by the bots and special offers on “tokens” that can be purchased to operate them, and often act as places where people using the bots can find links to new ones if they are removed by Telegram.

After WIRED contacted Telegram with questions about whether it allows explicit deepfake content creation on its platform, the company deleted the 75 bots and channels WIRED identified. The company did not respond to a series of questions or comment on why it had removed the channels.

Additional nonconsensual deepfake Telegram channels and bots later identified by WIRED show the scale of the problem. Several channel owners posted that their bots had been taken down, with one saying, “We will make another bot tomorrow.” Those accounts were also later deleted.

Hiding in Plain Sight

Telegram bots are, essentially, small apps that run inside of Telegram. They sit alongside the app’s channels, which can broadcast messages to an unlimited number of subscribers; groups where up to 200,000 people can interact; and one-to-one messages. Developers have created bots where people take trivia quizzes, translate messages, create alerts, or start Zoom meetings. They’ve also been co-opted for creating abusive deepfakes.
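
For context, the mechanics are simple: a bot is just a program running on a developer’s own server that talks to Telegram’s public Bot API over HTTPS, pulling in messages and sending replies. The sketch below is a minimal, benign echo bot in Python—an illustration of the plumbing any bot uses, not any of the bots described in this story. It assumes the `requests` library and uses a placeholder token of the kind Telegram’s @BotFather issues to bot developers.

```python
import time
import requests

# Placeholder only: a real token is issued by Telegram's @BotFather.
BOT_TOKEN = "123456:ABC-EXAMPLE"
API = f"https://api.telegram.org/bot{BOT_TOKEN}"

def run_echo_bot():
    """Long-poll Telegram's getUpdates endpoint and echo text messages back."""
    offset = None
    while True:
        # getUpdates waits up to `timeout` seconds for new messages to arrive.
        resp = requests.get(
            f"{API}/getUpdates",
            params={"offset": offset, "timeout": 30},
        ).json()
        for update in resp.get("result", []):
            offset = update["update_id"] + 1  # mark this update as processed
            message = update.get("message")
            if message and "text" in message:
                # Reply in the same chat using the sendMessage method.
                requests.get(
                    f"{API}/sendMessage",
                    params={
                        "chat_id": message["chat"]["id"],
                        "text": f"You said: {message['text']}",
                    },
                )
        time.sleep(1)

if __name__ == "__main__":
    run_echo_bot()
```

The same receive-and-reply loop is all a deepfake bot needs on the Telegram side; the actual image generation happens on the developer’s own infrastructure, outside Telegram’s view.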

Due to the harmful nature of the deepfake tools, WIRED did not test the Telegram bots and is not naming specific bots or channels. While the bots had millions of monthly users, according to Telegram’s statistics, it is unclear how many images the bots may have been used to create. Some users, who could be in multiple channels and bots, may have created zero images; others could have created hundreds.

Many of the deepfake bots viewed by WIRED are clear about what they have been created to do. The bots’ names and descriptions refer to nudity and removing women’s clothes. “I can do anything you want about the face or clothes of the photo you give me,” the creators of one bot wrote. “Experience the shock brought by AI,” another says. Telegram can also show “similar channels” in its recommendation tool, helping potential users bounce between channels and bots.

Almost all of the bots require people to buy “tokens” to create images, and it is unclear whether they operate in the ways they claim. As the ecosystem around deepfake generation has flourished in recent years, it has become a potentially lucrative source of income for those who create websites, apps, and bots. So many people are trying to use “nudify” websites that Russian cybercriminals, as reported by 404 Media, have started creating fake websites to infect people with malware.

While the first Telegram bots, identified several years ago, were relatively rudimentary, the technology needed to create more realistic AI-generated images has improved—and some of the bots are hiding in plain sight.

One bot with more than 300,000 monthly users did not reference any explicit material in its name or landing page. However, once a user clicks to use the bot, it claims it has more than 40 options for images, many of which are highly sexual in nature. That same bot has a user guide, hosted on the web outside of Telegram, describing how to create the highest-quality images. Bot developers can require users to accept terms of service, which may forbid users from uploading images without the consent of the person depicted or images of children, but there appears to be little or no enforcement of these rules.

Another bot, which had more than 38,000 users, claimed people could send six images of the same man or woman—it is one of a small number that claims to create images of men—to “train” an AI model, which could then create new deepfake images of that individual. Once users joined one bot, it would present a menu of 11 “other bots” from the creators, likely to keep systems online and try to avoid removals.

“These types of fake images can harm a person’s health and well-being by causing psychological trauma and feelings of humiliation, fear, embarrassment, and shame,” says Emma Pickering, the head of technology-facilitated abuse and economic empowerment at Refuge, the UK’s largest domestic abuse organization. “While this form of abuse is common, perpetrators are rarely held to account, and we know this type of abuse is becoming increasingly common in intimate partner relationships.”

As explicit deepfakes have become easier to create and more prevalent, lawmakers and tech companies have been slow to stem the tide. Across the US, 23 states have passed laws to address nonconsensual deepfakes, and tech companies have bolstered some policies. However, apps that can create explicit deepfakes have been found in Apple and Google’s app stores, explicit deepfakes of Taylor Swift were widely shared on X in January, and Big Tech sign-in infrastructure has allowed people to easily create accounts on deepfake websites.

Kate Ruane, director of the Center for Democracy and Technology’s free expression project, says most major technology platforms now have policies prohibiting nonconsensual distribution of intimate images, with many of the biggest agreeing to principles to tackle deepfakes. “I would say that it’s actually not clear whether nonconsensual intimate image creation or distribution is prohibited on the platform,” Ruane says of Telegram’s terms of service, which are less detailed than other major tech platforms.

Telegram’s approach to removing harmful content has long been criticized by civil society groups, with the platform historically hosting scammers, extreme right-wing groups, and terrorism-related content. Since Telegram CEO and founder Pavel Durov was arrested and charged in France in August in connection with a range of potential offenses, Telegram has started to make some changes to its terms of service and provide data to law enforcement agencies. The company did not respond to WIRED’s questions about whether it specifically prohibits explicit deepfakes.

Execute the Harm

Ajder, the researcher who discovered deepfake Telegram bots four years ago, says the app is almost uniquely positioned for deepfake abuse. “Telegram provides you with the search functionality, so it allows you to identify communities, chats, and bots,” Ajder says. “It provides the bot-hosting functionality, so it’s somewhere that provides the tooling in effect. Then it’s also the place where you can share it and actually execute the harm in terms of the end result.”

In late September, several deepfake channels started posting that Telegram had removed their bots. It is unclear what prompted the removals. On September 30, a channel with 295,000 subscribers posted that Telegram had “banned” its bots, but it posted a new bot link for users to follow. (The channel was removed after WIRED sent questions to Telegram.)

“One of the things that’s really concerning about apps like Telegram is that it is so difficult to track and monitor, particularly from the perspective of survivors,” says Elena Michael, the cofounder and director of #NotYourPorn, a campaign group working to protect people from image-based sexual abuse.

Michael says it has been “notoriously difficult” to discuss safety issues with Telegram, but notes there has been some progress from the company in recent years. However, she says the company should be more proactive in moderating and filtering out content itself.

“Imagine if you were a survivor who’s having to do that themselves, surely the burden shouldn’t be on an individual,” Michael says. “Surely the burden should be on the company to put something in place that’s proactive rather than reactive.”
