A few days ago, I was chatting with a colleague who also does machine learning research when someone asked out of curiosity, “With the rapid advancement of artificial intelligence, especially in natural language processing, will our teaching methods change?” My friend replied with a laugh, “Of course! I often tell my students that it’s quite rude to submit assignments without first running them by ChatGPT.”
Since the emergence of ChatGPT, many have been concerned about its potential negative impacts. Debates have sprung up, and tools have been developed to detect if assignments were crafted with the help of ChatGPT.
This concern is understandable. Take the notoriously difficult statistical machine learning course I taught last semester: when I tested ChatGPT with my final exam questions, it outperformed nearly 90% of my students.
However, I don’t think we should prohibit students from using such tools.
Is AI-generated content plagiarism?
That is open to debate. But for a start, enforcing such a ban is almost impossible because of the nature of machine learning.
Machine learning is fundamentally different from a typical internet search. It efficiently extracts the relationships inherent in data.
When AI generates content, it randomly samples based on these relationships rather than retrieving the closest matching content. The output therefore can’t strictly be called “plagiarism”, because it doesn’t copy from any particular dataset, just as an impromptu speech may revolve around the same ideas but will never come out word for word the same twice.
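To make that distinction concrete, here is a small, purely illustrative Python sketch; the toy word list and probabilities are my own invention, not taken from any real model. A search engine always returns the single closest match, whereas a language model draws a random sample from a probability distribution, so repeated runs produce different outputs.

```python
import random

# Toy example: probabilities a model might assign to the next word
# after a prompt such as "The exam was ...". Numbers are made up.
next_word_probs = {"difficult": 0.5, "fair": 0.3, "long": 0.15, "fun": 0.05}

def retrieve(probs):
    # Search-like behaviour: always return the single best match.
    return max(probs, key=probs.get)

def sample(probs):
    # Generation-like behaviour: draw randomly according to the
    # probabilities, so repeated calls can give different words.
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(retrieve(next_word_probs))                     # always "difficult"
print([sample(next_word_probs) for _ in range(5)])   # varies run to run
```

Run it a few times: the “retrieved” answer never changes, while the “sampled” answers vary, which is precisely why generated text doesn’t reproduce any single source.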
So, in my classes, I not only don’t prevent students from using ChatGPT, but I also encourage them to embrace it. As my colleague put it, it’s quite impolite not to.
Redefining the educational model
Should we be worried that students will stop learning?
While these tools will certainly revolutionize our teaching methods, I see that as a positive change. Just as becoming reliant on Google Maps freed me to invest my time and energy in honing other skills, these tools can do the same for students.
Undoubtedly, traditional rote-learning methods are most at risk, especially when faced with advanced language models.
While the future of educational approaches remains uncertain, I’m convinced that the new methods will prioritize problem-solving skills over rote memorization.
Typical assignments, like multiple-choice questions, are too easy for these large language models. I prefer assigning research-based projects, which reflect students’ genuine capabilities.
While current large language models do have reasoning abilities, they still fall short in lateral thinking. If students can guide the models based on what they have learned, the quality of the solutions they achieve with AI assistance will improve significantly.
In this era, learning how to harness AI tools effectively is essential for everyone.
Leading universities such as Tsinghua and Harvard now offer foundational courses for non-computer-science majors. These courses teach students how to use tools like ChatGPT, explain the principles behind them, and show how to adapt the models to specific needs.
Even in my recent astrophysics class, where most of the students weren’t computer science majors, I dedicated a session to introducing them to the nuances of large language models.
The rise of the generalists
With machine learning all the rage and computer science enrollments skyrocketing, I am often asked for advice on choosing a major. Sitting at the intersection of astrophysics and computer science, I have some thoughts to share.
For those genuinely passionate about computing, a computer science degree is a fine choice. But understand that the curriculum encompasses much more than machine learning, including traditional areas such as systems design and compilers. And as the use of large language models becomes commonplace, interdisciplinary knowledge is crucial: without a grounding in astrophysics, for instance, it is hard to use these models to extract valuable insights in that domain.
Interestingly, many leaders in the machine learning community don’t come from a computer science background. Some researchers at Anthropic, a major competitor of OpenAI, come from physics, for instance. Experts from varied academic fields bring unique perspectives and often introduce innovative viewpoints.
Breaking the alienation of the individual
We are in an era where generalists are emerging as leaders. Only by understanding and integrating specific application scenarios can the true value of these tools be fully realized. And I believe this is the greatest gift machine learning offers.
Looking back at the Industrial Revolution, individuals were often seen as mere cogs in the vast machinery of society, repetitively performing the same tasks. This “cog-in-the-machine” approach meant anyone could easily be replaced, keeping society running, but it often came at the cost of dehumanization.
Now, as the capabilities of machine learning show, repetitive tasks are likely to be the first automated, whether they are low-level jobs or highly specialized ones.
Although jobs will undergo major transformations and the concerns are genuine, those who excel in this tech-driven era will be generalists who thrive across multiple domains rather than isolated specialists.
In this era of rapid change, the accessibility of machine learning (as I have mentioned in earlier articles) has made global competition unprecedentedly open.
Unlike the nuclear arms race, this is no longer just a contest between superpowers. Only nations that nurture a large pool of interdisciplinary experts will stand out.
So, is the Malaysian educational system ready?
(Yuan-Sen Ting graduated from Chong Hwa Independent High School in Kuala Lumpur before earning his degree from Harvard University in 2017. Subsequently, he was awarded a NASA Hubble Fellowship in 2019, allowing him to pursue postdoctoral research at the Institute for Advanced Study in Princeton. He currently serves as an associate professor at the Australian National University, splitting his time between the School of Computing and the Research School of Astronomy and Astrophysics. His primary focus is on applying advanced machine learning techniques to statistical inference on astronomical big data.)