A strong majority of Canadians (92%) believe that news organizations should have clear and transparent policies on how they use artificial intelligence (AI) to produce content, according to a new survey conducted for the Canadian Journalism Foundation (CJF).
The survey, conducted April 3-4 by Maru Public Opinion among a random selection of 1,516 Canadian adults who are Maru Voice Canada panelists, was weighted to census data by education, age, gender, and region (and, in Quebec, language).
The survey found that a vast majority of Canadians are concerned that AI in journalism will produce or spread misinformation (85%) and introduce inaccuracy (86%). A significant number (85%) also believe a governing body should have strict oversight of AI practices in journalism.
Sixty-eight per cent of those surveyed said journalism will not be made better by the use of machine-learning AI. Gen Z respondents (those aged 18-26) were the demographic most likely to believe the future of journalism will be improved by AI (54%).
AI being used to populate ‘content farms’
NewsGuard, a provider of software that helps advertisers identify credible news sources for ad placements, says that since April it has identified 49 websites spanning seven languages — Chinese, Czech, English, French, Portuguese, Tagalog, and Thai — that appear to be entirely or mostly generated by AI language models and designed to look like typical news sites.
These so-called “content farms” churn out, in some cases, hundreds of clickbait articles a day to optimize advertising revenue, NewsGuard found, under benign branding like Biz Breaking News, News Live 79, Daily Business Post, and Market News Reports.
Some of the content advances false narratives or fabricated information, and nearly all of it features the bland language and repetitive phrases that are hallmarks of artificial intelligence. NewsGuard says this is strong evidence that the sites likely operate with little to no human oversight.
“With the rapid cascade of AI creating the potential for a deep-fake world, Canadians want journalists and their outlets to be transparent and accountable for their use of this ascendant tool,” said John Wright, Executive Vice-President of Maru Public Opinion, in a release. “At the end of the day, they want human beings with oversight to govern AI and its output and not to have it the other way around.”
“Canadians need and want trustworthy, reliable, fact-based journalism – the antidote to misinformation,” added Kathy English, chair of the CJF board of directors. “In these early days of discovery of how fast-moving generative AI technology such as ChatGPT can – or should – be used in journalism, it’s crucial that newsrooms establish robust, transparent policies and parameters to guide journalists and bolster public trust in our vital work. Journalism at its best is a human endeavour. These poll results tell us that Canadians want the media industry to take steps to ensure transparency with audiences in how and when it uses AI.”
The CJF recently hosted a panel of global experts, including Aimee Rinehart, Program Manager for The Associated Press’ Local News AI initiative; Gina Chua, Executive Editor, Semafor; and Patrick White, Director of the journalism program at the University of Quebec in Montreal (UQAM), for a one-hour online discussion of the editorial and ethical issues surrounding the use of AI in journalism.