On 29 April 2022, AI Singapore (AISG) announced the three winners of its Trusted Media Challenge, which sought the best AI solutions for combating fake media. The five-month-long Challenge invited AI researchers and enthusiasts worldwide to build AI models that can reliably detect audio-visual media that has been manipulated.
The Challenge aims to help strengthen Singapore’s position as a global AI hub by incentivising international contributors to get involved in AI research in Singapore, and by sourcing innovative ideas from all over the globe.
One unique feature of the winners’ AI models is that they were trained and tested on a dataset that included real video clips from Mediacorp’s CNA and Singapore Press Holdings’ The Straits Times, in addition to custom videos collected from consenting actors. This focus on pan-Asian media sets the Challenge’s solutions apart from overseas-based solutions, which are usually tailored to North American, European, or Chinese applications.
The Challenge drew the interest of 470 teams from across the Asia Pacific region, North America, Europe, Africa and Oceania. Teams were required to design and test their solutions against short video clips in which the video and/or audio may have been modified.
At the end of Phase 1, 20 teams qualified for Phase 2, which ended in December 2021. The qualifying teams competed against each other, with their submissions evaluated against the full hidden test set and scored on a live leaderboard. The three highest-scoring models were then reviewed by technical experts in the field.
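The article does not describe the winning models' internals, but video deepfake detectors commonly work by scoring individual frames with a trained classifier and aggregating those scores into a clip-level verdict. The sketch below illustrates that aggregation step only; all names are hypothetical, and the per-frame scorer is a stub standing in for a real neural network.

```python
from statistics import mean

def frame_fake_probability(frame):
    """Stub for a trained per-frame classifier (hypothetical).

    A real detector would run a neural network on the frame pixels;
    here a "frame" is simply a float in [0, 1] for illustration.
    """
    return frame

def clip_score(frames, threshold=0.5):
    """Aggregate per-frame fake probabilities into a clip-level verdict.

    Returns the aggregate score and whether the clip is flagged as fake.
    """
    probs = [frame_fake_probability(f) for f in frames]
    score = mean(probs)  # simple mean; max or top-k pooling are common alternatives
    return score, score >= threshold

# A clip whose frames mostly look manipulated is flagged as fake.
score, is_fake = clip_score([0.9, 0.8, 0.95, 0.7])
```

The aggregation choice matters in practice: mean pooling is robust to a few noisy frames, while max pooling catches manipulations confined to a short segment of the clip.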
The three winners are:
- 1st Prize: Team WILL, led by Singaporean Wang Weimin, who is a Research Scientist at ByteDance and a graduate of the National University of Singapore.
- 2nd Prize: Team IVRL, led by Swiss software engineer Peter Grönquist and his Chinese teammate, PhD student Ren Yufan. They are both from the Image and Visual Representation Lab (IVRL) at the Swiss Federal Institute of Technology, Lausanne (also widely known as EPFL).
- 3rd Prize: Team HideOnFakeBush, led by PhD student Li Tianlin from the Cyber Security Lab at Nanyang Technological University (NTU), with four other teammates from Kyushu University (Japan), NTU and Singapore Management University.
The top three winners are eligible for prize money and start-up grants totalling $700,000 (approximately US$500,000).
Beyond the competition itself, the Challenge sought to raise public awareness of how media can be manipulated and how such manipulation can be detected. This helps people identify manipulated content, curb the spread of unethical and malicious AI applications, and place greater trust in authenticated media.
Wang Weimin of Team WILL said he was motivated to take part because the pressing challenge of manipulated media aligned with his own research interests.
“Good or evil, deepfake is an emerging tech you simply can’t ignore,” he said.
Building on his winning submission, he is exploring incorporating his AI model into ByteDance’s BytePlus platform, where deepfake detection could become one of the machine learning services made available to users.
Team IVRL is also continuing to develop its AI solution and is looking to collaborate with Singaporean companies on authentication and certification interfaces.
Team HideOnFakeBush will take up the grant offer to further develop its AI model through its start-up, VAISION.
AI Singapore is optimistic that the winning solutions will spur the translation and commercialisation of media-focused AI innovations to industry and make Singapore the home of pioneering AI products and services.
More information on the Trusted Media Challenge can be found here.