(Bloomberg) --
Just ask the researchers at Deakin University's School of Information Technology, near Melbourne. Their algorithm performed best at identifying altered images of celebrities in a set of so-called deepfakes last year, according to Stanford University's Artificial Intelligence Index Report 2023.
"It's a fairly good performance," said Chang-Tsun Li, a professor at Deakin's Centre for Cyber Resilience and Trust who developed the algorithm, which proved correct 78% of the time. "But the technology is really still under development." Li said the method needs to be further enhanced before it's ready for commercial use.
Deepfakes have been around, and prompting concern, for years. Former House Speaker Nancy Pelosi appeared to be slurring her words in a doctored video that circulated widely in 2019; the footage had simply been slowed down.
While the image of the
Big tech companies as well as a
Experts agree that too much attention is going to AI generation and not enough to detection, said Claire Leibowicz, head of the AI and Media Integrity Program at the nonprofit Partnership on AI.
While the buzz around the technology, dominated by applications like OpenAI's ChatGPT, has reached a fever pitch, executives from Tesla Inc. CEO Elon Musk to Alphabet Inc. CEO Sundar Pichai have
It will be a while before detection tools are ready to be used to fight back against the wave of realistic-looking altered images from generative AI programs like
"I talk to security leaders every day," said Jeff Pollard, an analyst at Forrester Research. "They are concerned about generative AI. But when it comes to something like deepfake detection, that's not something they spend budget on. They've got so many other problems."
Still, a handful of startups such as Netherlands-based
"The motivation of doing deepfake detection now is not money; it is helping to decrease online disinformation," said Ilke Demir, senior staff research scientist at Intel.
So far, deepfake detection startups mainly serve governments and businesses that want to reduce fraud and aren't aimed at consumers.
Platforms like Facebook and Twitter aren't required by law to detect and flag deepfake content, leaving consumers in the dark, said Ben Colman, CEO of Reality Defender. "The only organizations that do anything are the ones like banks that have a direct connection to financial fraud."
Current methods of detecting fake images and videos include training computers to recognize telltale visual characteristics by learning from examples, and embedding watermarks and camera fingerprints in original works. But the rapid proliferation of deepfakes requires more powerful algorithms and computing resources, said Xuequan Lu, another Deakin University professor who worked on the algorithm.
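The watermarking idea mentioned above can be illustrated in a few lines of code. The sketch below is not any researcher's or vendor's actual method; it is a minimal, assumed example that hides watermark bits in the least-significant bits of pseudo-randomly chosen pixels, with the random seed acting as a shared key. The function names are invented for illustration.

```python
import numpy as np

def embed_watermark(image, bits, seed=0):
    """Return a copy of `image` with `bits` hidden in the least-significant
    bits of pixels chosen pseudo-randomly from the seed (the shared key)."""
    rng = np.random.default_rng(seed)
    flat = image.flatten()  # flatten() copies, so the original is untouched
    idx = rng.choice(flat.size, size=len(bits), replace=False)
    # Clear each chosen pixel's lowest bit, then set it to the watermark bit.
    flat[idx] = (flat[idx] & 0xFE) | np.array(bits, dtype=flat.dtype)
    return flat.reshape(image.shape)

def extract_watermark(image, n_bits, seed=0):
    """Recover `n_bits` watermark bits using the same seed (key)."""
    rng = np.random.default_rng(seed)
    flat = image.flatten()
    idx = rng.choice(flat.size, size=n_bits, replace=False)
    return (flat[idx] & 1).tolist()
```

Real systems use far more robust schemes (frequency-domain embedding, perceptual hashing) because LSB marks are destroyed by compression or resizing, which is part of why the professors cited above say stronger algorithms are needed.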
And without a commercially available, widely adopted tool to distinguish fake online content from real, there's plenty of opportunity for bad actors.
"What I see is pretty similar to what I saw in the early days of the anti-virus industry," said Ted Schlein, chairman and general partner at Ballistic Ventures, who invests in deepfake detection and was an early investor in anti-virus software. As hacks became more sophisticated and damaging, anti-virus software matured and eventually became cheap enough for consumers to download on their PCs. "We're at the very beginning stages of deepfakes," which so far are mostly made for entertainment, Schlein said. "Now you're just starting to see a few of the malicious cases."
But even if it's cheap enough, consumers might not be willing to pay for such technology, said
"Consumers don't want to do any additional work themselves," he said. "They want to automatically be protected as much as possible."
To contact the author of this story:
Diana Li in New York at