Man vs Machine
How the pitfalls of tech can be tackled by tech, according to Mr Wang Weimin (Science ’17).
WHO HE IS
The term “AI-driven problem-solver” sums up Mr Wang Weimin perfectly. He burnished these credentials at the Trusted Media Challenge in 2022, where he took home the top prize for developing an AI model that can identify disinformation. He is committed to furthering this cause through his work at ByteDance, the Chinese tech giant behind TikTok.
There was a time when seeing was indeed believing. But not in today’s age of deepfakes, when technology can be used to create convincing videos of someone doing or saying things they never did or said. Politicians and celebrities have been the victims of deepfakes, including former US President Barack Obama and Tesla’s Elon Musk. There are mounting concerns about how deepfakes could affect future elections and national security.
To encourage research into combating deepfakes, AI Singapore, a national programme supported by the National Research Foundation and hosted by NUS, organised the Trusted Media Challenge. Some 470 teams gathered to turn tech on tech by developing AI models that weed out deepfakes. Entrants were presented with thousands of videos, some authentic and some fake, and built AI models that learned to spot tell-tale patterns and determine the authenticity of any given video. Minister of State for Communications and Information Mr Tan Kiat How praised this approach at the prize presentation ceremony. “Technology is not just part of the problem; it can also be part of the solution,” he said.
That is precisely what one-man team Mr Wang Weimin demonstrated with his winning AI model. Mr Wang, who graduated with a Master of Science in Mathematical Statistics and Probability from the University in 2017, developed a model that was 98.53 per cent accurate at distinguishing genuine clips from deepfakes, besting his nearest competitor by 0.2 per cent.
To inch ahead of the field, Mr Wang studied the challenge dataset in even greater detail. “I found that the fake videos in the dataset had two additional traits: They could feature a newscaster with a faked voice to mislead people with what appears to be a media report,” he says. “Another type of fake video was selfie recordings of people with their voices also faked.” Armed with these insights, he spent several months developing his winning model.
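For readers curious about what such a system looks like in code, below is a minimal, illustrative sketch of a binary “real versus fake” classifier that fuses visual and audio cues, the two kinds of giveaways described above. It is not Mr Wang’s actual model: the class name, feature dimensions and dummy inputs are assumptions for illustration, and in practice the features would be extracted from the video frames and the soundtrack by upstream networks.

```python
# Illustrative sketch only; not the Trusted Media Challenge winning model.
# A tiny classifier that fuses pre-extracted visual and audio feature vectors
# and outputs the probability that a clip is a deepfake.
import torch
import torch.nn as nn


class AVDeepfakeClassifier(nn.Module):
    def __init__(self, visual_dim: int = 512, audio_dim: int = 128, hidden: int = 256):
        super().__init__()
        # Separate branches for the visual and audio cues, then a fused head.
        self.visual_branch = nn.Sequential(nn.Linear(visual_dim, hidden), nn.ReLU())
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # single logit: "how likely is this clip fake?"
        )

    def forward(self, visual_feats: torch.Tensor, audio_feats: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.visual_branch(visual_feats), self.audio_branch(audio_feats)], dim=-1
        )
        return self.head(fused).squeeze(-1)


if __name__ == "__main__":
    model = AVDeepfakeClassifier()
    # Dummy batch of 4 clips; a real pipeline would compute these features
    # from face crops in the video frames and from the audio track.
    visual = torch.randn(4, 512)
    audio = torch.randn(4, 128)
    fake_probability = torch.sigmoid(model(visual, audio))
    print(fake_probability)
```

A classifier of this general shape would typically be trained with a binary cross-entropy loss on clips labelled real or fake, then scored by its accuracy on a held-out set, which is broadly how entries to challenges of this kind are evaluated.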
That model earned Mr Wang first place in the competition and a handsome cash prize of $100,000. The 33-year-old also received an offer of a $300,000 start-up grant to commercialise his invention, but he chose instead to apply the technology at his current organisation, ByteDance, where he works on its in-house AI research team based in Singapore.
AI has developed so fast that people can’t ignore it anymore.
A TREK THROUGH TECH
Mr Wang is increasingly aware of how AI-generated content could threaten society. Currently, victims of deepfakes can release a statement distancing themselves from a fake image or video. But the prevalence of deepfakes will only grow over time, making it nearly impossible to rebut each one, he says. “So, it’s important that we develop tools that can detect deepfakes.”
He believes such tools could help even those who are confident they can spot deepfakes themselves. “But deepfakes are increasingly sophisticated,” he admits. “When I was viewing the videos provided for the competition, even I couldn’t always tell which ones were fake or real.”
He adds that battling disinformation should not stifle innovation or technological advancements either, citing AI’s benefits in many areas of society, including healthcare and education. “AI has developed so fast that people can’t ignore it anymore. Just look at the success of ChatGPT,” he says, referring to the popular AI chatbot that boasts 100 million users and counting since its launch in November 2022. As a technologist, he advises people to use these platforms cautiously. “It’s fine if you’re using it for harmless fun. But it gets a bit tricky if you are using AI-generated content for news creation because the information still needs to be verified and authenticated by a human.”
US$78 billion
That’s how much fake news costs the global economy each year, about S$105 billion, according to a 2019 study.
Source: University of Baltimore
SOLVING PROBLEMS WITH AI
Mr Wang’s penchant for AI developed after he earned his Bachelor of Engineering in Electrical and Electronic Engineering from Nanyang Technological University. “I attended a lecture by Dr Andrew Ng, the co-founder of Google Brain, and he shared the possibilities of AI,” recalls Mr Wang. He adds that the world of AI called out to him because of its blend of mathematics and coding — two subjects he grew to love during his undergraduate days. “But in the engineering programme, we did very little coding. So, I taught myself how to do it, spending hours studying models and getting savvy with the foundations.”
To shore up his knowledge, he enrolled in the MSc (Statistics) programme at NUS. “I had this interest in machine learning but found a gap in my education: statistics, which is key to understanding machine learning models,” he says. He juggled school with a full-time job at pharmaceutical company MSD and, after graduating, joined Gojek. As a data science lead at the ride-hailing company, he put his newfound skills to the test, developing systems to better connect riders and drivers. “So that’s why I applied for the Master’s programme, just to develop machine learning systems that helped people.” If his recent success is anything to go by, it was a worthy investment.