2:33 pm, 23 December 2023
AI and ethics in journalism
By: Bhanu Bhakta Acharya / The Kathmandu Post / ANN

Technology has always shaped the landscape of journalism, from the telegraph in the 1840s to the recent surge of generative Artificial Intelligence (AI) tools such as ChatGPT, DALL-E, Bard and Zapier.

The integration of generative AI tools has been transforming various industries, and journalism is no exception.

In recent years, media organisations have been experimenting with AI to automate news production, personalise content delivery and enhance data-driven insights.

In other words, AI tools signal a new era of journalism marked by efficiency, speed and content diversity.

AI in journalism

The scope of AI in journalism is vast and diverse, ranging from automating news production to exploring archived content.

According to a report by the Reuters Institute for the Study of Journalism, AI enters the news cycle at four stages: content creation, content curation, distribution and audience analysis.

Each stage presents unique opportunities and challenges for media organisations to leverage AI for news coverage.

AI can be used in content creation to automate news production, especially for breaking news and routine stories.

Using natural language generation (NLG) algorithms, AI can generate news articles based on structured data, such as sports scores, financial reports and weather forecasts.

This can save time and resources for journalists and enable media organisations to produce more content with limited resources.
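
To make the mechanics concrete, here is a minimal, hypothetical Python sketch of template-based news generation from structured earthquake data, in the spirit of the Quakebot example discussed below. The data fields, magnitude thresholds and wording are illustrative assumptions, not any newsroom's actual pipeline.

from dataclasses import dataclass

@dataclass
class QuakeReport:
    magnitude: float
    place: str
    depth_km: float
    local_time: str

def describe_strength(magnitude: float) -> str:
    # Crude severity wording keyed to magnitude bands (illustrative only).
    if magnitude >= 6.0:
        return "A strong"
    if magnitude >= 4.5:
        return "A moderate"
    return "A light"

def generate_story(report: QuakeReport) -> str:
    # Fill a fixed editorial template with the structured data fields.
    return (
        f"{describe_strength(report.magnitude)} magnitude "
        f"{report.magnitude:.1f} earthquake struck {report.place} "
        f"at {report.local_time}, at a depth of about "
        f"{report.depth_km:.0f} kilometres, according to preliminary data."
    )

draft = generate_story(QuakeReport(4.7, "Southern California", 9.0, "6:25 am"))
print(draft)  # A human editor would still review the draft before publication.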

However, using AI in news production raises ethical concerns, such as the risk of plagiarism and editorial biases. 

The Los Angeles Times, for example, used an AI system called Quakebot in 2014 to generate a news article within three minutes of an earthquake that occurred in Southern California.

While using Quakebot enabled the newspaper to provide timely and accurate news coverage, it also raised concerns about the lack of human supervision in news production.

AI can also be used to curate news content based on audience preferences and needs.

Using machine learning algorithms, AI can personalise news delivery for individual users, such as recommending articles based on their reading history, location and interests.

Although this enhances user engagement and loyalty, it also raises concerns about the risk of echo chambers, where users are exposed only to content that confirms their existing beliefs and biases.
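
As a rough illustration of how such personalisation can narrow what readers see, consider the following hypothetical Python sketch of a content-based recommender. The articles, topic tags and scoring are invented for illustration; production systems rely on learned models and far richer signals.

from collections import Counter

# Hypothetical catalogue mapping article slugs to topic tags.
articles = {
    "budget-2024": {"economy", "politics"},
    "monsoon-outlook": {"weather", "climate"},
    "league-final": {"sports"},
    "rate-decision": {"economy", "finance"},
}

def recommend(reading_history, top_n=2):
    # Build the user's topic profile from what they have already read.
    profile = Counter()
    for slug in reading_history:
        profile.update(articles.get(slug, set()))
    # Score unread articles by overlap with that profile.
    scores = {
        slug: sum(profile[tag] for tag in tags)
        for slug, tags in articles.items()
        if slug not in reading_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# After reading one economy story, economy-tagged stories rank first,
# a small-scale version of the feedback loop behind echo chambers.
print(recommend(["budget-2024"]))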

Facebook, for example, is often criticised for its role in spreading fake news during the 2016 US presidential election.

Also, Facebook’s news feed algorithm has been accused of promoting sensationalism, clickbait and fake news, which can undermine the credibility of news content.

AI can also distribute news content across multiple platforms, such as social media, mobile apps and websites.

This can enhance the reach and accessibility of news content but raises concerns about the risk of sensationalism, clickbait and loss of credibility.

In 2017, Google was criticised for promoting fake news during the French presidential election.

According to a report by the Oxford Internet Institute, Google’s algorithmic search results favoured fake news websites over legitimate news sources, misleading and deceiving users.

While Google has since taken steps to address this issue, the incident highlights the need for media organisations to ensure that their news content is transparent, accurate and trustworthy.

Finally, AI can analyse user behaviour and feedback to improve news coverage.

By using sentiment analysis and semantic analysis algorithms, AI can identify patterns and trends in user engagement and feedback, such as the most popular topics, most effective headlines and most shared articles.

This can enhance data-driven insights and enable media organisations to optimise their news coverage for user preferences and needs. 
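
As an illustration, the following hypothetical Python sketch scores reader comments against a simple word lexicon and aggregates the results per article. The word lists and comments are invented; production systems use trained sentiment models rather than fixed lexicons.

# Tiny illustrative sentiment lexicons (real systems learn these).
POSITIVE = {"insightful", "great", "helpful", "balanced"}
NEGATIVE = {"misleading", "boring", "biased", "clickbait"}

def sentiment(comment):
    # +1 per positive word, -1 per negative word: a deliberately crude proxy.
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

feedback = {
    "election-explainer": ["great balanced reporting", "very insightful"],
    "celebrity-gossip": ["pure clickbait", "boring and misleading"],
}

for article, comments in feedback.items():
    avg = sum(sentiment(c) for c in comments) / len(comments)
    print(f"{article}: average sentiment {avg:+.1f}")

# Editors might use such aggregates to see which topics and headline
# styles readers find credible and which they dismiss as sensational.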

The Washington Post used an AI system called Heliograf in 2018 to cover the midterm elections in the US. Heliograf generated news articles based on structured data, such as election results and voter demographics.

However, the use of Heliograf also raised concerns about the loss of human supervision in the news production process.

Ethical concerns

AI in journalism raises ethical and editorial concerns, such as the risk of misinformation, editorial biases and loss of human touch.

To address these concerns, media organisations must develop clear policies and guidelines for using AI and ensure that these systems are transparent, accountable and ethical.

AI in news production and curation can increase the risk of misinformation, such as fake news, propaganda and hoaxes.

These systems can be programmed to generate and distribute news content that is sensational, misleading, or outright false.

This can undermine the credibility and trustworthiness of news content and erode public confidence in journalism.

For example, in 2019, a group of researchers from the University of Washington developed an AI system called Grover to generate fake news articles that are difficult to distinguish from real ones.

The researchers argued that their system could be used to detect and combat fake news, but it also raised concerns about the potential misuse of AI for propaganda and disinformation.

AI in news production and curation can also perpetuate editorial biases, such as political, racial and gender biases.

AI systems can be trained on biased data sets or programmed with biased algorithms, resulting in biased news content.

This can reinforce existing stereotypes and prejudices and marginalise under-represented voices and perspectives.

For example, in 2018, Amazon was criticised for its AI-powered recruiting tool as it was found to be biased against women.

The tool was trained on resumes submitted to Amazon over a 10-year period, which were mostly from men.

As a result, the tool learned to penalise resumes that contained words or phrases associated with women, such as “women’s” or “female”.

While Amazon has since discontinued the tool, the incident highlights the need for media organisations to ensure their AI systems are free from biases and discrimination.
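
The underlying mechanism is easy to reproduce in miniature. The following hypothetical Python sketch, which is not Amazon’s actual system, shows how a model that merely learns word frequencies from a skewed set of past hires ends up scoring resumes that mention “women’s” lower, with no explicit rule against them.

from collections import Counter

# Hypothetical past hires; the skew towards one group is the problem.
hired_resumes = [
    "chess club captain software engineer",
    "football team lead developer",
    "software engineer chess club",
    "women's coding society software engineer",
]

# "Train" by counting word frequencies among past hires.
weights = Counter(w for resume in hired_resumes for w in resume.split())

def score(resume):
    # Resumes resembling past hires score higher; rare words score low.
    return sum(weights[w] for w in resume.split())

print(score("software engineer chess club"))     # 10: matches the majority pattern
print(score("women's coding society engineer"))  # 6: "women's" is rare among past hires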

AI in news production and curation can also erode journalism’s human touch: empathy, creativity and critical thinking.

AI systems can be programmed to generate and distribute news content that is formulaic, repetitive and lacking in nuance.

This can reduce the quality and diversity of news content and limit the role of journalists as storytellers and watchdogs.

For example, in 2019, The New York Times used an AI system called Editor to recommend photos for news articles.

Editor analysed the content of news articles and suggested photos that were relevant and engaging.

While Editor enhanced the visual appeal of the newspaper’s coverage, it also raised concerns about the loss of human judgement and creativity in the photo selection process.

To conclude, AI has the potential to transform journalism by automating news production, personalising content delivery and enhancing data-driven insights.

We cannot completely keep AI tools out of newsrooms.

However, the use of AI in journalism must address ethical and editorial concerns, such as avoiding misinformation, controlling editorial biases and ensuring human supervision of all AI-assisted content.

Media organisations must also back this up with clear policies and guidelines that keep their AI systems transparent, accountable and ethical.

By doing so, media organisations can leverage AI to enhance their news coverage and engage with their audiences in new and innovative ways.
