The San Francisco-based DeepMedia said on Nov. 28 that it has won two U.S. Air Force Phase I Small Business Innovation Research (SBIR) awards worth $150,000 for the detection of deep fakes: artificial intelligence (AI)-enabled fake video and audio content that appears real.

“These federal contracts will lay the foundation to ensure the U.S. and its allies continue to strategically operate at the forefront of AI-assisted technologies, such as synthetically generated media, while helping to weed out misinformation and serve national security needs through the United States Department of Defense,” DeepMedia said. “With the amount of deepfake videos available online doubling year over year, the first contract will directly address the need for a scaled solution that can accurately triage and flag deepfake threats for the DoD at a pace of roughly ten thousand videos an hour. The second contract will focus on enabling synthetically driven universal translation, a pivotal aspect of allied relationships, and includes translating and dubbing DoD content in over 50 languages.”

Last year, Forbes Magazine named 28-year-old DeepMedia CEO Rijul Gupta, a Yale graduate who launched DeepMedia in 2017, to the magazine’s “Next 1000”: a list of “upstart entrepreneurs redefining the American dream.”

Gupta said in a DeepMedia statement on Nov. 28 that the company would help DoD combat adversary deep fakes, as “the quality, sophistication, and speed of deepfake creation today is making the content easier to access and harder to identify and keep up with.”

Earlier this year, the Air Force Research Laboratory (AFRL) said that it was working with DeepMedia.

“AFRL’s Information Directorate in Rome, New York, which leads research in text and imagery exploitation technologies, became aware of DeepMedia’s technology via Tech Connect, the website that links developers to the appropriate Department of the Air Force Science and Technology [DAF S&T] subject matter experts and opportunities,” per AFRL. “The website provides access to the DAF S&T ecosystem through its idea submission pipeline, ultimately flagging technologies and capabilities relevant to the U.S. military.”

DeepMedia said on Nov. 28 that AFRL has picked the company for a Phase II SBIR award to aid AFRL “in creating advanced deepfake detection tools.”

The company said that it released DeepMedia DeepFake (DMDF) Faces V1 this month, “the first publicly-available dataset built to detect the next generation of advanced deepfakes—trained from the highest quality and most comprehensive range of images available.”

“This dataset solves the major issues that plagued previous datasets, such as low-quality training data and a lack of diverse training sets,” per DeepMedia. “To support the release of the dataset as a proof point of its efficiency, DeepMedia launched a Twitter bot that can analyze videos on social media in real-time and provide a perspective on if the media is real or a deepfake.”

In October last year, Marine Corps Lt. Gen. Michael Groen, the director of the Joint Artificial Intelligence Center, noted the deep fake, spoofing, and data poisoning challenges posed by DoD’s increasing use of AI.

“Despite the growing need of banking institutions, large corporations, and wealthy individuals against the threat of deepfakes that fool biometric analysis tools, the extreme recency of deepfake technology means there are no other commercial solutions ‘of a type’ that achieve meaningful deepfake detection accuracy,” DeepMedia said of its technology on Nov. 29.