With one new AI capability after another entering the mainstream, it’s tempting to give each one the same cursory consideration. But some merit more attention than others.
Consider AI deepfakes. Scammers can now use generative-AI tools to create voices, or even live video fakes, that sound or look like specific people—and request money transfers. As such, there’s a “significant” risk of such capabilities “breaking the trust and identity systems upon which our entire economy relies,” said Emily Chiu, CEO of Miami-based fintech startup Novo, at Fortune’s Most Powerful Women summit in Riyadh, Saudi Arabia, last week.
AI-powered fraud
She cited a case in Hong Kong last year in which a finance employee was duped into transferring more than $25 million to fraudsters. The employee was initially skeptical after receiving an email request for the funds, but was lured into a Zoom call in which none of the other participants was real, even though they looked and sounded like the company’s U.K.-based CFO and other executives.
A police official investigating the case told local media that while previous scams had involved one-on-one video calls, “this time, in a multi-person video conference, it turns out that everyone you see is fake.”
Yet as sophisticated as the AI technology behind such scams is, it’s relatively easy to access and use.
“The public accessibility of these services has lowered the barrier of entry for cyber criminals—they no longer need to have special technological skill sets,” David Fairman, chief security officer at cybersecurity company Netskope, told CNBC.
Arup, a U.K. engineering firm, later confirmed that it had been the victim of the attack.
“Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes,” said Arup CIO Rob Greig in a statement. “This is an industry, business, and social issue, and I hope our experience can help raise awareness of the increasing sophistication and evolving techniques of bad actors.”
Ongoing threat
Deloitte’s Center for Financial Services recently weighed in on the issue, stating, “Generative AI is expected to significantly raise the threat of fraud, which could cost banks and their customers as much as US$40 billion by 2027.”
Chiu said the Hong Kong incident shows that “we’re going to run into a world where our ability to really trust and validate what’s real—the system of trust upon which commerce relies, upon which fintech relies—is going to be a real challenge.”
Of course, that presents opportunities for companies that can come up with effective solutions to this problem, “but it’s not a solved situation yet,” Chiu said. “So, it’s something I would be on the lookout for…even if you’re outside of fintech.”
This story was originally featured on Fortune.com