May 19, 2023

AI writing tools can be biased and sway opinions, study reveals

Just as social media can facilitate the spread of misinformation, Artificial Intelligence (AI)-powered writing assistants that autocomplete sentences or offer “smart replies” can be biased, shift users’ opinions and therefore be misused, warn researchers calling for more regulation.

Researchers from Cornell University in the US said the biases baked into AI writing tools — whether intentional or unintentional — could have concerning repercussions for culture and politics.

To probe this, Maurice Jakesch, a doctoral student in information science at Cornell, asked more than 1,500 participants to write a paragraph answering the question, “Is social media good for society?”

People who used an AI writing assistant that was biased for or against social media were twice as likely to write a paragraph agreeing with the assistant, and significantly more likely to say they held the same opinion, compared with people who wrote without AI’s help.

“The more powerful these technologies become and the more deeply we embed them in the social fabric of our societies,” Jakesch said, “the more careful we might want to be about how we’re governing the values, priorities and opinions built into them.”

These technologies deserve more public discussion regarding how they could be misused and how they should be monitored and regulated, the researchers said. Jakesch presented the study at the 2023 CHI Conference on Human Factors in Computing Systems in April.

Further, a follow-up survey revealed that a majority of the participants did not even notice the AI was biased and did not realise they were being influenced.

When repeating the experiment with a different topic, the research team again saw that participants were swayed by the assistants.

“We’re rushing to implement these AI models in all walks of life, but we need to better understand the implications,” said Mor Naaman, Professor at the Jacobs Technion-Cornell Institute at Cornell Tech.

“Apart from increasing efficiency and creativity, there could be other consequences for individuals and also for our society — shifts in language and opinions,” Naaman added.

