Last month, I attended a conference on machine learning, commonly known as AI, in Hawaii.
One day, I wrapped up early and went to grab a bite at a local Chinese restaurant. I was the only customer at the time, so I chatted with the lady who owned the place.
She asked me if I was attending the computer conference, and when I nodded, she half-jokingly said, “The AI you young people are developing is going to be the end of humanity.”
The warp-speed advance of AI technology
Ever since ChatGPT made its dazzling debut, it feels like the world’s attention to AI and machine learning has rocketed overnight.
As someone knee-deep in the field of machine learning, I couldn’t be happier about this newfound fascination. Particularly in the realm of natural language processing, the advancements are nothing short of jaw-dropping.
Yet, lurking behind this are many unanswered research questions, calling for the passion and commitment of the younger generation.
This attention is a good thing.
However, the lightning-fast pace of these changes is causing many to feel left behind, stirring up a cocktail of anxieties and fears.
The first fear is job displacement due to automation. The second is the haunting spectre of a nuclear-style crisis in the current geopolitical climate, where superpowers are throwing sacks of money at AI like there’s no tomorrow. The third, echoing the sentiments of the restaurant owner I met, is the cinematic nightmare of a “Terminator” future—wherein 2023 becomes Year Zero for Skynet, Arnold Schwarzenegger descends upon us, and humanity meets its laser-ridden doom.
As someone in the trenches, I dare not dismiss these concerns out of hand. That’s mainly because the landscape of machine learning is shifting at breakneck speed.
The industry has become increasingly competitive, a veritable pressure cooker of innovation. What’s considered impossible today might become reality in just a few years, slapping me and my presumptions right in the face.
But for what it’s worth, based on the current state of affairs, these fears may very well be overblown.
Over the next couple of months, I aim to lay down some truth bombs through a series of columns, shedding light on the real state of machine learning. Maybe, just maybe, knowing the facts will put your mind at ease.
Today, let’s briefly touch upon the concerns surrounding AI and consciousness.
The varied realms of machine learning
Firstly, let’s address an open secret: most professionals in the field actually aren’t fans of the term “AI”, or Artificial Intelligence.
The term implies some sort of “virtual” and “intelligent” entity, while the technology we have is neither truly “virtual” nor “intelligent”.
Academia prefers to call it machine learning, which is essentially about teaching machines to recognise patterns through heaps of data.
Machine learning is a sprawling landscape, but three main avenues dominate.
First off, there’s Natural Language Processing (NLP), the domain where the ubiquitous ChatGPT holds court. Then we have Computer Vision, which most people would recognise from facial recognition tech and self-driving cars.
Last but not least, there’s Reinforcement Learning—best exemplified by AlphaGo’s Go exploits a few years back.
But don’t pigeonhole Reinforcement Learning; its most significant applications are actually in robotics and mechanical arms.
Regardless of the specific avenue, the crux of machine learning remains the same: to allow machines to learn patterns from data.
And since most of this training data, like online images and text documents, is a microcosm of human history, machines are essentially learning human patterns.
For instance, in NLP, the primary task is prediction: given a prefix, what should be the next word or even the next sentence?
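As a toy illustration of this prediction task, the sketch below uses a simple bigram counter, which is emphatically not how modern language models work (they use neural networks trained on vast corpora), but it captures the same idea of learning “what tends to come next” from data. The corpus and function name are invented for illustration:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the web-scale text a real model trains on.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows each word: the simplest possible version of
# "given a prefix, predict the next word".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # -> on  ("on" follows "sat" twice in the corpus)
```

A real model differs in scale and machinery, but the training objective is recognisably the same: extract statistical patterns of human language from data.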
Now and then, someone might jump out and claim that a certain AI model seemingly exudes “human characteristics”.
Let’s clear the air: such views aren’t exactly held in high regard academically.
The good news is that the barrier to entry in machine learning is not as formidable as one might think (more on this in future discussions).
Sure, we may not all have access to hulking computational resources to conjure our machine learning “elixirs”, but grasping the basic principles isn’t rocket science. Any undergrad who buckles down can understand the fundamentals and even build their own models—making this a far cry from Oppenheimer’s atomic bomb conundrum.
Machine thought: reality or illusion?
Indeed, anyone who has interacted with ChatGPT is likely to be deeply impressed, and it’s understandable why some people wonder if these language models possess consciousness.
But here lies a possible blind spot—let’s not forget that a language model is just that, a model.
The paradox here may lie in the intrinsic value we place on language, often considered one of humanity’s most precious abilities.
Our communication, social structures, and even our distinguishing edge over other species, are largely based on complex linguistic capabilities. So, when a machine can articulate thoughts in an eerily human-like manner, the impact often far exceeds that of, say, witnessing a robotic feat.
In reality, the regions of our brain that process language, largely in the frontal lobe, took shape over the last few million years—quite the newbie on the evolutionary timeline.
In contrast, our other mammalian functions, like physical and visual capabilities, were established much earlier; the earliest known mammals date back some two hundred million years.
In other words, if mammalian evolution were a 24-hour clock, language would have emerged merely in the last fifteen minutes.
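The fifteen-minute figure is easy to sanity-check with back-of-envelope arithmetic. The two round numbers below (mammals at roughly 200 million years, language at roughly 2 million) are assumptions consistent with the rough figures in the text, not precise paleontological dates:

```python
# Scale mammalian evolution onto a 24-hour clock and see where
# language lands, using order-of-magnitude estimates.
mammal_years = 200e6    # assumed age of the earliest mammals
language_years = 2e6    # assumed age of human language

minutes_in_day = 24 * 60
language_minutes = language_years / mammal_years * minutes_in_day
print(round(language_minutes))  # -> 14, i.e. roughly the last fifteen minutes
```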
Because language is a relatively recent development in our evolutionary timeline, machines learning its nuances is actually far simpler than mastering other human capabilities.
This helps explain why, in the realm of machine learning, Natural Language Processing (NLP) is far ahead in terms of generalisation capabilities compared to Computer Vision or Reinforcement Learning, both of which face substantial challenges.
Take Computer Vision as an example: generalisation remains a massive hurdle. That’s why, despite the buzz around self-driving technology, its widespread implementation remains elusive.
Even if a machine can learn to understand a scene through extensive data, minor changes in the environment could render it ineffective.
Compared to human abilities, this is a considerable shortfall.
So, how far is AI from achieving consciousness? Firstly, this question is a bit nebulous, primarily because the nature of human consciousness itself is the subject of extensive debate.
Philosophers and neuroscientists have argued for centuries, and to date, there is no conclusive answer. Whether a machine can have consciousness might largely depend on how we define “consciousness”. But let’s indulge the “Duck Test”—if it looks like a duck, walks like a duck, and quacks like a duck, then it probably is a duck. Even with this loose yardstick, machine learning still has a long journey ahead.
We often overvalue the capabilities of language. Though language and logic are closely connected (which is partly why ChatGPT can tackle mathematical problems), human consciousness is not limited to these faculties.
Current machine learning at best has mastered those last few minutes in our evolutionary “24-hour clock”.
In a way, the emergence of ChatGPT serves as a wake-up call. Perhaps many abilities we take pride in—logic, language, and so forth—are not as intricate as even the simple acts of lifting a finger or blinking an eye.
I remember a conversation with a colleague in Hawaii who’s a rising star in mathematics.
He remarked, “I’m really not sure to what extent my work could be replaced by language models given the rate of advancement in machine learning. What I am sure of, however, is that my job will be replaced long before that of a janitor.”
So, let’s dismiss visions of Schwarzenegger’s robot apocalypse for a moment.
It’s worth noting that even simple robots like my mother’s Roomba have a long way to go in mastering the basics.
- AI, ChatGPT, and my mom’s Roomba
- On AI and the soul-stirring char siu rice
- Redefining education in the AI era: the rise of the generalists
(Yuan-Sen Ting is a native of Malaysia. He obtained his Ph.D. in astrophysics from Harvard University in 2017. He then held the first four-way joint postdoctoral fellowship spanning the Institute for Advanced Study in Princeton, Princeton University, and the Carnegie Institution for Science, together with a NASA Hubble Fellowship. He is currently an Associate Professor at the Australian National University, where he contributes to both the astronomy and computer science departments.)