Discovery: ChatGPT-Powered Crypto Bot Network Revealed on X

The researchers who came across the botnet have chosen not to notify X, citing the company’s lack of responsiveness.

It has recently come to light that X, formerly known as Twitter and now owned by Elon Musk, is grappling with a significant fake-account problem. Musk himself has acknowledged the prevalence of bots on the platform, even citing it as his initial reason for trying to back out of acquiring the company.

A study by Professor Filippo Menczer and student Kai-Cheng Yang of the Observatory on Social Media at Indiana University Bloomington sheds light on one such bot network, known as Fox8, that infiltrated X. The research, first reported by Wired, uncovered a network of at least 1,140 fake accounts that consistently shared tweets directing users to obscure, low-quality online “news” websites, which essentially repurposed content from reputable sources.

The majority of posts from these bot accounts were centered around cryptocurrency, frequently featuring hashtags like #bitcoin, #crypto, and #web3. These accounts also frequently engaged with popular figures in the crypto community on Twitter, such as @WatcherGuru, @crypto, and @ForbesCrypto.

The intriguing aspect is how this extensive network sustained its activity: it harnessed AI, specifically ChatGPT, to automate tweet generation. The primary objective of these AI-generated posts appeared to be spamming X with crypto-related links, in the hope that legitimate users would click on the URLs.

Following the publication of this research in July, X eventually suspended these bot accounts. Professor Menczer mentioned that his research group used to inform Twitter about such bot networks but ceased doing so after Musk’s acquisition due to the company’s perceived lack of responsiveness.

While AI tools like ChatGPT played a crucial role in generating content for the botnet’s more than a thousand accounts, they also inadvertently contributed to its exposure. The study found a telltale pattern: the accounts would frequently preface their tweets with the phrase “as an AI language model.” This is boilerplate ChatGPT commonly emits when it declines a request or disclaims its limitations, and its appearance in tweets revealed that the accounts were posting raw, unedited ChatGPT output.
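The detection idea described above can be sketched in a few lines. This is purely illustrative, not the researchers’ actual methodology; the input format (a list of author/text pairs) and the function name are assumptions for the example.

```python
# Illustrative sketch: flag accounts whose tweets contain ChatGPT's
# telltale self-disclosure phrase. Hypothetical input format: (author, text) pairs.
TELLTALE = "as an ai language model"

def flag_bot_candidates(tweets):
    """Return the set of authors who posted the telltale phrase (case-insensitive)."""
    return {author for author, text in tweets if TELLTALE in text.lower()}

sample = [
    ("@user_a", "As an AI language model, I cannot browse the web, but #bitcoin looks great!"),
    ("@user_b", "Just grabbed a coffee."),
]
print(flag_bot_candidates(sample))  # flags only @user_a
```

A real study would of course combine such string matching with network and behavioral signals, but a fixed giveaway phrase is exactly the kind of “sloppy” fingerprint the researchers describe.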

The researchers noted that if not for this “sloppy” mistake, the botnet might have continued to operate undetected.
