Institute’s AI Researchers aim to tackle misinformation in the news
Kate Saenko, Co-Director of the Hariri Institute’s Artificial Intelligence Research Initiative (AIR), and Bryan Plummer, a Hariri Institute Faculty Affiliate and AIR Core Faculty Affiliate, recently received $2,000,000 in funding from the Defense Advanced Research Projects Agency (DARPA) for their project “TONIC: Trusted Online Content,” which aims to combat automated misinformation in the news.
Collaborating with UC Berkeley, the University of Washington, and UC Davis, they will develop a range of stand-alone and integrated text-, audio-, image-, and video-based authentication techniques to determine whether content is real or falsified, with a particular focus on AI-synthesized multimedia content. The project summary follows.
Rapid advances in AI and machine learning have made it possible to synthesize images of people who don’t exist, videos of people doing things they never did, recordings of them saying things they never said, and entirely fabricated news stories about events that never happened. While many earlier forensic techniques have proven effective against more traditional falsified content, new approaches are required to detect this AI-synthesized content.