The widely used AI language model ChatGPT has been implicated in a new controversy. Researchers at Indiana University Bloomington have uncovered a botnet powered by ChatGPT operating on X, the social network formerly known as Twitter. The finding sheds light on how advanced AI technology can be misused for deceptive purposes.
Dubbed “Fox8” by the researchers because of its links to cryptocurrency-related websites, the botnet consisted of 1,140 accounts. These accounts appeared to use ChatGPT to craft social media posts and interact with one another. The primary goal of the auto-generated posts was to lure unsuspecting users into clicking links to crypto-hyping websites.
Micah Musser, a researcher specializing in AI-driven disinformation, expressed concern about the Fox8 botnet, indicating that it might only be the tip of the iceberg in terms of AI-powered malicious campaigns. He stated, “This is the low-hanging fruit. It is very, very likely that for every one campaign you find, there are many others doing more sophisticated things.”
Although the Fox8 botnet was extensive, its use of ChatGPT was notably unsophisticated. The researchers discovered the botnet by searching for a telltale phrase, “As an AI language model…,” which ChatGPT sometimes emits when responding to prompts it deems sensitive. They then manually reviewed the flagged accounts to determine which were operated by bots.
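The first-pass detection step described above — scanning posts for the self-disclosure phrase — can be sketched as a simple text filter. This is an illustrative reconstruction, not code from the study; the function name and sample posts are hypothetical, and the researchers still vetted flagged accounts by hand:

```python
import re

# Telltale phrase ChatGPT sometimes emits when refusing a prompt.
# Case-insensitive matching, since bot posts may vary capitalization.
TELLTALE = re.compile(r"as an ai language model", re.IGNORECASE)

def flag_suspect_posts(posts):
    """Return the subset of posts containing the telltale phrase.

    This is only a heuristic first pass: matches warrant manual
    review, and bots that strip the phrase will slip through.
    """
    return [p for p in posts if TELLTALE.search(p)]

# Hypothetical sample data for illustration.
sample = [
    "As an AI language model, I cannot endorse any cryptocurrency.",
    "Big gains today! Check out this site for the latest coins.",
]
print(flag_suspect_posts(sample))
```

As the article notes, this approach only catches sloppy operators: any campaign that filters the phrase out of its output before posting would evade it entirely.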
According to Filippo Menczer, a professor at Indiana University Bloomington who led the research along with student Kai-Cheng Yang, the botnet’s lack of subtlety was the reason it was discovered. “The only reason we noticed this particular botnet is that they were sloppy,” Menczer remarked. Despite the botnet’s shortcomings, it managed to post convincingly worded messages that promoted cryptocurrency sites.
The discovery has raised concerns about the misuse of advanced AI in orchestrating scams and disinformation campaigns. The ease with which the botnet leveraged OpenAI’s technology suggests that more sophisticated campaigns may be operating undetected. As Menczer put it, “Any pretty-good bad guys would not make that mistake.”
As of now, OpenAI has not responded to inquiries about the botnet’s discovery. OpenAI’s usage policies explicitly prohibit the use of its models for scams or disinformation. The incident serves as a reminder of the evolving challenges of managing the ethical use of AI across online platforms.
Note: This article is based on the findings of Indiana University Bloomington researchers and statements from involved parties up to the time of posting.