Eight years ago, we were first introduced to artificial intelligence (AI) as a tool for journalism. The concept was met with skepticism, and for good reason.
Could an algorithm truly replicate the nuanced skills of a seasoned journalist, honed over years of professional experience? The technology was young and unproven, and we were prideful.
Back then, the answer was a resounding no.
Then came November 2022, and with it, the unveiling of OpenAI’s ChatGPT. The media landscape trembled. Here was an AI tool that exhibited an unprecedented level of natural language understanding and generation.
But while it garnered significant interest across newsrooms worldwide, media outlets approached the tool with measured caution, fully aware of its implications for content creation and audience communication.
Even we, the initial skeptics, began exploring it with thoughtful diligence.
Inquirer.net assembled a working group composed of editors, social media specialists, site traffic and technology experts, marketing executives, and human resource officials. Our task: test the waters.
Our experience with ChatGPT was nothing short of revelatory.
The AI’s efficiency in writing formulaic stories, such as weather reports or earthquake alerts, was startling.
Tasks that would traditionally take journalists at least 15 minutes, such as producing breaking news items, were accomplished in a fraction of that time.
And the capabilities didn’t stop there: ChatGPT also proved adept at simplifying technical papers, providing summaries, generating story ideas, formulating interview questions, suggesting SEO-friendly headline options, and even translating major Filipino languages into English with impressive accuracy.
However, the rosy picture I paint comes with distinct caveats.
Despite its notable capabilities, ChatGPT is not omniscient.
Its data cutoff in September 2021 means it occasionally dispenses stale information.
It also has a tendency to generate fictional direct quotes or background details—an alarming trait that needs cautious handling.
In April, I asked ChatGPT to write a story about an earthquake in Davao City. It quickly produced an article with a quote from “Davao City Mayor” Sara Duterte, who was at the time (and remains) the Vice President of the Philippines, not the city’s mayor.
Did she really issue a statement? She didn’t.
This instance underscores ChatGPT’s occasional shortcomings in providing up-to-date and accurate information, which, in turn, undermines its overall credibility as a research aide.
Legal issues, too, loom large. As ChatGPT draws from its training data, the possibility of inadvertently including copyrighted material presents a concerning legal grey area.
At the same time, from a journalistic perspective, AI cannot discern the freshest or most compelling angles in a story, nor can it perform essential tasks like interviewing sources and attending press conferences.
Our jobs are safe.
But the moment AI starts dialing the Senate President for a reaction, it’s time for us journalists to put our pens down, close our laptops, and consider joining a circus as tightrope walkers.
The unique skills a journalist brings—emotional intelligence, intuition, empathy, and authentic human conversation—are irreplaceable. AI, for all its advancements, cannot mimic these.
Most importantly, the ethical implications surrounding AI use in journalism must be addressed. How do we handle situations where AI inadvertently creates defamatory content or disseminates misinformation? What guidelines can ensure responsible AI use without curtailing its potential?
In this exciting AI era, we must tread carefully. While the power of tools like ChatGPT is undeniable, they are not infallible.
We must continually refine our understanding of AI’s capabilities and limitations, and remember that even in this technologically advanced world, the essence of journalism remains resolutely human.