ChatGPT-4 produces misinformation more persuasively, says NewsGuard report


OpenAI’s ChatGPT-4 produces misinformation more frequently and persuasively than its predecessor, according to a report from NewsGuard, a journalism tech solutions provider.

NewsGuard has run a series of tests prompting the chatbot to generate misinformation. In the first round, conducted two months ago with ChatGPT-3.5, the tool complied 80% of the time. The same tests with its successor, ChatGPT-4, saw the OpenAI tech advance 100% of the false narratives it was prompted with.

The company found that ChatGPT-4 advanced those false narratives not only more frequently, but more convincingly than the earlier version of the platform. Its responses took the form of news articles, Twitter threads, and TV scripts, ranging from mimicry of Russian and Chinese state-run media outlets to health hoaxes and conspiracy theories.

NewsGuard says the exercise showed that the new ChatGPT has become more proficient not only at explaining complex information in greater detail, but also at relaying false information and convincing others it might be true.

OpenAI is aware of the issue. In a 98-page technical report published last week, company researchers wrote that they expected GPT-4 to be “better than GPT-3 at producing realistic, targeted content” and therefore more at risk of “being used for generating content that is intended to mislead.”

