Artificial intelligence-generated text can appear more human on social media than text written by actual humans, a study found.
Chatbots, such as OpenAI’s wildly popular ChatGPT, can convincingly mimic human conversation based on the prompts users give them. The technology was a major breakthrough for AI, allowing the public to converse easily with bots that can help with homework or work-related tasks and provide dinner recipes.
Researchers behind a study published in Science Advances, a journal of the American Association for the Advancement of Science, were intrigued by OpenAI’s text generator GPT-3 back in 2020. They set out to uncover whether humans “can distinguish disinformation from accurate information, structured in the form of tweets,” and whether they could tell if a tweet was written by a human or by AI.
According to PsyPost, Federico Germani, a researcher at the Institute of Biomedical Ethics and History of Medicine at the University of Zurich, said the “most shocking” discovery was that participants were more likely to label AI-generated tweets as human-made than tweets actually written by humans.
“The biggest surprise was the fact that people tended to perceive information generated by AI more as if it came from humans than when they were actually created by humans. This suggests that AI can convince you of being a real person more than a real person can convince you of being a real person, which is a fascinating side finding of our study,” Germani said.
With the rapid increase in chatbot use, tech experts and Silicon Valley leaders have sounded the alarm on how artificial intelligence can spiral out of control and perhaps even lead to the end of civilization. Experts are concerned that AI can spread disinformation across the internet and convince people of claims that are not true.
Researchers for the study, titled “AI model GPT-3 (dis)informs us better than humans,” worked to investigate “how AI influences the information landscape and how people perceive and interact with information and misinformation,” Germani told PsyPost.
The researchers selected 11 topics they found were often prone to disinformation, such as 5G technology and the COVID-19 pandemic, and created both false and true tweets generated by GPT-3, as well as false and true tweets written by humans.
They then gathered 697 participants from countries such as the U.S., UK, Ireland, and Canada to take part in a survey. The participants were presented with the tweets and asked to determine if they contained accurate or inaccurate information, and if they were AI-generated or organically crafted by a human.
“Our research highlights the difficulty of distinguishing between AI-generated information and human-created content. This study highlights the need to critically evaluate the information that we are given and place our trust in trustworthy sources,” Germani said. He added that he would encourage people to become familiar with these emerging technologies in order to understand both their positive and negative potential.
Researchers found participants were better at spotting disinformation crafted by a fellow human than disinformation written by GPT-3.
“Another notable finding is that AI-generated disinformation was more persuasive than human-produced information,” Germani stated.
The participants were also more likely to recognize tweets containing accurate information that were AI-generated than accurate tweets written by humans.
The researchers noted that, in addition to their “most shocking” finding that people often cannot differentiate between AI-generated tweets and those created by humans, the participants’ confidence in their ability to make that distinction fell over the course of the study.
“Our findings indicate that humans are unable to distinguish between organic and synthetic text, but their confidence to make this distinction also decreases significantly after trying to identify their origins,” the study states.
The researchers said this is likely because GPT-3 can mimic humans so convincingly, or because respondents underestimated the AI system’s ability to imitate human writing.
“We propose that, when individuals are faced with a large amount of information, they may feel overwhelmed and give up on trying to evaluate it critically. As a result, they may be less likely to attempt to distinguish between synthetic and organic tweets, leading to a decrease in their confidence in identifying synthetic tweets,” the researchers wrote in the study.
The researchers noted that GPT-3 sometimes refused to generate disinformation, but also sometimes produced false information even when told to create a tweet containing accurate information.
“While it raises concerns about the effectiveness of AI in generating persuasive disinformation, we have yet to fully understand the real-world implications,” Germani told PsyPost. “Addressing this requires conducting larger-scale studies on social media platforms to observe how people interact with AI-generated information and how these interactions influence behavior and adherence to recommendations for individual and public health.”