AI chatbots now twice as likely to spread false claims

Popular AI chatbots are spreading false information at double the rate they did last year, according to a study from NewsGuard. The group that tracks misinformation says that leading AI chatbots now repeat false claims 35% of the time, up from 18% in August 2024.

The increase comes as chatbots have become more eager to answer questions in real time. Last year, they declined to respond to 31% of prompts; this year, the refusal rate dropped to 0%, and accuracy has suffered as a result.

The study found that chatbots now often pull answers from unreliable sources, including fake news sites and social media posts. In some cases, those sources were deliberately created by groups spreading propaganda, including Russian disinformation networks, yet the chatbots treat them as credible.

“Malign actors are exploiting this new eagerness to answer news queries to launder falsehoods via low-engagement websites, social media posts, and AI-generated content farms that the models fail to distinguish from credible outlets. In short, the push to make chatbots more responsive and timely has inadvertently made them more likely to spread propaganda,” the report notes.
