One year after OpenAI released ChatGPT, it has become clear that artificial intelligence (AI) will cause massive disruption to the way people live, work and play. We can celebrate that, or lament it, but we cannot deny it.
Because of the enormous impact this technology has on every domain of our lives, it must be regulated. The question is how to do this. Indonesia has only just begun mulling over that question.
The European Union, which tends to be instinctively guided by fear of digital technology rather than enthusiasm, has been pressing ahead.
The European Parliament’s draft AI Act, which has yet to be adopted, categorises AI systems by risk and then defines rules for the provision and use of AI.
“Unacceptable risk” category AI systems will be banned, while others will be subject to assessment before coming to the market and throughout their life cycle.
Generative AI, such as ChatGPT and other applications built on large language models (LLMs), falls into a separate category ranked between the “high risk” and “limited risk” tiers.
Indonesia should carefully study the EU’s approach, as well as those of other countries, to see what might be worth adopting.
A basic question in designing a regulatory framework is whether to categorise AI based on what it does (function and use cases) or how it works (technology and development).
That is but one aspect to consider in the highly complex task of ensuring that AI works for the benefit of the people.
A standalone AI regulation makes little sense, because the risks AI poses to society are intertwined with issues like privacy, big data, surveillance and manipulation.
AI vastly amplifies threats posed by the misuse of digital technology.
Last week, the government shared the draft of a letter addressed to businesspeople as an initial step toward regulating AI technology.
Transparency is pivotal for the regulation of AI, so kudos to the Communications and Information Ministry for making the draft public.
The four-page document, which comes three years after the government issued the National AI Strategy 2020-2045, proposes ethical guidelines to shape companies’ internal policies on AI programming, analysis and consulting.
That is a good start, but legally binding regulations will be necessary once we know what exactly we want from AI. Determining the latter will require a broad public discourse involving civil society organisations, businesses and academia.
There is no time to lose. Most of us were either surprised or shocked a year ago when we discovered just how far tech firms had pushed AI, with little public insight, let alone oversight.
We must ensure that society at large, not just the tech industry, sets the course for AI development because this technology will shape what we see and how we think. It can be used, or misused, to manipulate individuals and societies.
For the sake of transparency, open-source code should be the preferred option for AI applications.
Like security, climate change and public health, AI requires international cooperation and global rules. The use of AI, as well as its risks, transcends national borders.
Even though Indonesia is hardly a heavyweight in AI development, Jakarta need not shy away from the global debate on AI regulation.
It could begin by shaping discussion within ASEAN or the Regional Comprehensive Economic Partnership (RCEP) area.
For many people, the most menacing question about AI is what it will do to employment.
A nationwide survey conducted in September by the empirical research firm Populix found that 55 percent of respondents were concerned about AI replacing their jobs, while only 12 percent were unconcerned or somewhat unconcerned.
We must discuss how to fairly share the massive efficiency gains AI enables so that everyone remains employed.
Tech literacy is crucial in this regard, which is why the basics of AI and information science should be taught in high school.
To improve AI, we must also improve human intelligence.