Is ChatGPT making the dystopia of social bots a reality?


Summary

Content generated by ChatGPT ends up on Twitter. Are we about to see the wave of manipulative, democracy-threatening bots that have been predicted for years?

2016 was the year of the Brexit referendum and the election of Donald Trump as President of the United States. These events sparked a heated debate about the impact of social media bots on the democratic process. Were there people using this technology to manipulate the masses?

There is no doubt that social media played, and continues to play, a central role in these and other events. But some researchers suspected that states and other actors were using bots to automate opinion manipulation, and that this was effective.

According to such reports, the bots numbered in the tens of thousands, hundreds of thousands, or even millions, extending their reach across Twitter and other platforms. The debate heated up again most recently when Elon Musk spoke of a massive bot problem on the platform before his acquisition of Twitter.


The role of social bots in democratic processes is not scientifically proven

While the term “bot” is mainly used in the United States, the term “social bot” is more common in German-speaking countries, whether out of a desire for precise terminology or out of “German Angst”.

Researchers such as the Berlin-based data journalist Michael Kreil and Prof. Dr.-Ing. Florian Gallwitz of TH Nuremberg have been researching social media bots since 2016.

Kreil’s conclusion on the Trump case: “Social bots influenced the election. Sound plausible? Yes. Is it scientifically proven? Not at all.” The researcher speaks of an “army that never existed”.

The two researchers also criticize tools designed to detect bots: these are imprecise, they say, and often misclassify humans as bots. This has sometimes led to a shift in terminology: a bot no longer has to be an automated system, possibly powered by AI. It is enough for a human with a few followers to take part in a coordinated campaign, that is, to display “bot-like” behavior, to be classified as a bot.

“The countless media reports in recent years about human-like political social bots said to be able to generate text automatically are pure fairy tales. They are based on methodologically flawed research that violates the most basic scientific standards,” says Gallwitz. According to the researcher, “ordinary human Twitter users have repeatedly been falsely labeled as ‘social bots’ en masse.”


Social bots, in other words, were a dystopia rather than a reality, and an attempt to explain supposedly inexplicable events like Brexit.

Does ChatGPT turn dystopia into reality?

Six years later, with OpenAI’s ChatGPT, a technology has become publicly available that raises new questions about what social bots can do.

Indeed, the quality of the chatbot’s output far exceeds that of previously available systems. Texts generated by ChatGPT are hardly, if at all, recognizable as artificial. And there are already examples of ChatGPT-generated messages ending up on Twitter.

So is the situation about to change?

“With ChatGPT, and to a limited extent with its underlying GPT-3 model, a technology is available for the first time that could automatically generate freely worded tweets on current topics, or even largely meaningful replies to tweets from other accounts,” Gallwitz says of OpenAI’s chatbot.

By combining the OpenAI and Twitter APIs appropriately, a bot could be built that automatically joins conversations or tweets news items along with a brief summary, says the media informatics professor.
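A minimal sketch of the kind of bot Gallwitz describes might look like the following, pairing OpenAI’s completions API (via the GPT-3 model available at the time) with Twitter’s posting API through the Tweepy library. The prompt wording, the example inputs, and all credentials are placeholder assumptions, not details from any real bot.

```python
import os

import openai
import tweepy

openai.api_key = os.environ["OPENAI_API_KEY"]

# Tweepy's v2 client handles authentication and posting.
twitter = tweepy.Client(
    consumer_key=os.environ["TWITTER_API_KEY"],
    consumer_secret=os.environ["TWITTER_API_SECRET"],
    access_token=os.environ["TWITTER_ACCESS_TOKEN"],
    access_token_secret=os.environ["TWITTER_ACCESS_SECRET"],
)

def summarize_and_tweet(headline: str, article_text: str) -> None:
    """Generate a short summary of a news item and post it as a tweet."""
    completion = openai.Completion.create(
        model="text-davinci-003",  # GPT-3 model available when this was written
        prompt=(
            "Summarize this news item in one tweet (max 250 characters):\n"
            f"{headline}\n\n{article_text}"
        ),
        max_tokens=100,
        temperature=0.7,
    )
    summary = completion.choices[0].text.strip()
    twitter.create_tweet(text=summary[:280])  # enforce Twitter's length limit

summarize_and_tweet("Example headline", "Example article text ...")
```

The point of the sketch is how little glue code is needed: the language model supplies the text, and the platform API supplies the distribution.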

Is ChatGPT too boring for Twitter?

However, Gallwitz is skeptical of a ChatGPT bot’s ability to garner attention on Twitter.

“Reach on Twitter is driven primarily by wit and originality, and often by provocation, emotionality, and novelty. ChatGPT, on the other hand, was trained to produce predictable word sequences that rarely contain surprises or even new thoughts, and are often wordy and drawn-out,” says Gallwitz. “It’s the complete opposite of what promises success on a medium like Twitter.”

Also, he says, there are tools that can recognize ChatGPT text quite reliably. “In this regard, I do not expect ChatGPT or similar tools to form the basis for attempts at political manipulation on Twitter in the foreseeable future.”
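Many such detection tools build on the very property Gallwitz criticizes: machine-generated text tends to be statistically predictable to a language model. The following is a minimal sketch of that general idea, not of any specific tool, scoring a text’s perplexity under GPT-2 with the Hugging Face transformers library; the threshold is an illustrative assumption, not a calibrated value.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for `text`; lower means more predictable."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels equal to the inputs, the model returns the average
        # cross-entropy loss over the sequence.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

score = perplexity("The quick brown fox jumps over the lazy dog.")
print(f"perplexity: {score:.1f}")
if score < 20:  # assumed threshold, for demonstration only
    print("suspiciously predictable; possibly machine-generated")
```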

If Gallwitz is right, the dystopia will remain just that: a threat whose realization is still in question.

In fact, AI systems like GPT-3 could play a very different role in information warfare: in one study, GPT-3 produced responses that closely matched human response patterns across fine-grained demographic groups. Scholars see large language models as powerful new tools for social and political science.

But they also warn of potential misuse. With some methodological advances, the models could be used to test specific subpopulations for their susceptibility to misinformation, manipulation, or fraud.
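The study described above worked by conditioning GPT-3 on demographic profiles and comparing its outputs with human survey responses. A minimal sketch of that prompting pattern follows; the persona fields, survey question, and prompt format are illustrative assumptions rather than the study’s actual setup.

```python
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def simulated_response(persona: dict, question: str) -> str:
    """Ask GPT-3 to answer a survey question in the voice of a persona."""
    profile = ", ".join(f"{k}: {v}" for k, v in persona.items())
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=(
            f"Respondent profile: {profile}.\n"
            f"Survey question: {question}\n"
            "Answer:"
        ),
        max_tokens=60,
        temperature=1.0,  # sample, rather than take the single likeliest answer
    )
    return completion.choices[0].text.strip()

answer = simulated_response(
    {"age": 45, "region": "Midwest", "occupation": "teacher"},
    "How concerned are you about misinformation on social media?",
)
print(answer)
```

Run across many sampled personas, this kind of loop is what makes both the research use and the misuse the authors warn about conceivable at scale.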
