SINGAPORE: The winner of a competition designed to find artificial intelligence (AI) solutions to combat fake media aims to make deepfake detection available on Chinese technology company ByteDance’s platform, said AI Singapore (AISG) in a media release on Friday (Apr 29).
The competition, the Trusted Media Challenge, which began on Jul 15 last year, was a five-month-long initiative driven by AISG, a national AI programme launched by the National Research Foundation.
“The competitors took part in the challenge in order to raise public awareness about how media can be manipulated, and how to detect media that has been manipulated,” said AISG.
“This will allow people to identify and stop the spread of unethical and malicious AI applications and instead view media that has been authenticated,” it added.
The winning project was led by Wang Weimin, a Singaporean working as a research scientist at ByteDance, the parent company of video app TikTok.
Mr Wang, a National University of Singapore graduate, said that he was motivated to participate in the competition as the “prevailing challenge in the media landscape” matched his own research interests.
“Good or evil, deepfake is an emerging tech you simply can’t ignore,” he said.
AISG said that Mr Wang was working towards incorporating his AI model into ByteDance’s BytePlus platform to make deepfake detection available to users.
The second prize went to a team comprising Swiss software engineer Peter Grönquist and Chinese PhD student Ren Yufan. The duo is looking to collaborate with Singaporean companies to develop authentication and certification interfaces, said AISG.
In third place was a team led by PhD student Li Tianlin from Nanyang Technological University’s (NTU) Cyber Security Lab, together with four others from NTU, Singapore Management University and Japan’s Kyushu University. The team is looking to develop its AI model further through its start-up, VAISION.
The top three winners will receive prize money and start-up grants amounting to S$700,000.