8 Apr 2024
Weighing the Consequences of AI in the Workplace and Society
Safeguards against AI risks and the challenges of regulating the unknown provided much food for thought during a panel discussion on “The Intersections of Code and Conscience – AI Ethics and Governance”.
Artificial Intelligence (AI) is everywhere. Even if it is not apparent, AI has either been deployed in products and systems we already use, or it soon will be. Given this ubiquity, a growing number of people, including scientists, have raised concerns about bias in AI stemming from the biases of its creators.
Continuing from a previous discussion on AI organised by the National University of Singapore’s (NUS) Office of Alumni Relations as part of its Intellectual Insights series, the most recent session, held on 13 March at the Shaw Foundation Alumni House, explored “The Intersections of Code and Conscience – AI Ethics and Governance”.
It helps that Singapore already has a governance process for AI under the Model AI Governance Framework, launched in 2019, as well as the world’s first AI governance testing framework, AI Verify, launched in 2022. NUS, for its part, recently set up the NUS AI Institute to accelerate frontier AI research and boost its real-world impact for the public good.
Session moderator Mr Raju Chellam, Editor-in-Chief of the AI Ethics & Governance Body of Knowledge, an initiative by the Singapore Computer Society, lauded these efforts and noted that IDC (International Data Corporation) projects that ASEAN alone will spend US$46.6 billion on AI solutions by 2026, up from US$20.6 billion in 2022. “Much of this is without ethical oversight and is going to impact all of us in how we work, play and live,” said Mr Raju, who also chairs the Cloud & Data Standards committee under Singapore’s IT Standards Committee.
There's a long history in the AI space of overestimating its impact in the short term and underestimating it in the long term.
Professor Simon Chesterman
AI: Tool or Threat?
Responding to a question about whether he was afraid that AI might one day ‘do him in’, Professor Simon Chesterman, David Marshall Professor of Law and Vice Provost (Educational Innovation) at NUS, was unperturbed. “There's a long history in the AI space of overestimating its impact in the short term and underestimating it in the long term,” explained Prof Chesterman, who is also the Dean of NUS College and Senior Director (AI Governance), AI Singapore.
“I don't lose any sleep worrying that the machines are going to rise up and kill us. If there is an AI kind of apocalypse, it's not going to look like the Terminator movies.” Instead, he likened it to the Writers Guild of America strike last year: “It won't be that you wake up and the machines have taken over. It's that progressively you realise your jobs are being sort of taken away.”
Ms Brindha Jeyaraman, Principal Architect, AI APAC, at Google (Systems Science ’12), likened AI to a knife: useful in the right hands, but potentially dangerous otherwise. “Does the risk really outweigh the benefit? It really depends on how responsible you (the user) are,” she reasoned. “As big companies or product developers, startups, or anyone, we must adhere to government compliance checklists… (but) almost every industry has adopted AI at a very rapid pace and even the (government) regulatory compliance teams do not have the time to come up with compliances to catch up with the developments.”
AI Hallucination and Black Box Issues
This pace of development is certainly an issue, especially given challenges like AI hallucination, where a system ‘dreams up’ facts seemingly without rhyme or reason, and the black box problem, where we simply do not know how an AI uses everything it has learned to arrive at its results.
To this, Ms Lee Wan Sie, Director (Data-Driven Tech), Infocomm Media Development Authority (IMDA) (Science ’95), reiterated the value of using AI for the public good while putting safeguards in place to protect the public. “It's not just about machines taking over the world, but us being just careless about letting machines run critical systems (unchecked). Just because we don’t have (specific) regulations here doesn’t mean we don’t have (relevant laws),” she said, citing the Online Privacy Act, Protection from Online Falsehoods and Manipulation Act and various laws governing healthcare and financial services.
It's not just about machines taking over the world, but us being just careless about letting machines run critical systems (unchecked).
Ms Lee Wan Sie
Regulating the Unknown
“The reason we do not regulate (AI specifically) is that we think it is too early,” Ms Lee continued. “The technology is moving at such a fast pace…and in places such as the US (where most of the AI development is happening) there are no governing regulations.”
This lack of regulation benefits innovation, the panellists broadly agreed. Technology firms famously do not seek approval before building new products and systems; OpenAI did not ask for the nod before releasing ChatGPT into the wild. Prof Chesterman suggested that such firms do not seek legal advice because the advice would invariably be “don’t do it”.
“The short answer to whether the benefits outweigh the risks is that ‘we don’t know,’” said Prof Chesterman. “We aren’t just beta testers in this new (AI) regime but guinea pigs in a big experiment…there’s a realisation though that we have seen this before with the rise of social media 20 years ago; governments did very little and today we can see the negative consequences.” The difference with the AI situation today, he added, is that governments and companies are at least interested in discussing the risks and potential problems now.
The thought-provoking session led to a healthy Q&A with the capacity audience that ran for more than an hour, which Ms Jasmine Liew (Arts and Social Sciences ’00) found very engaging. “There were a lot of really robust questions from the audience,” she shared. Her husband, Mr Boon Seng Meng (Engineering ’02), echoed her sentiments, adding that he appreciated the diverse backgrounds of the panellists.
Text by Ashok Soman. Photographs by Mark Lee.