It’s already getting tough to discern real text from fake, and genuine video from deepfake. Now, it appears that the use of fake voice tech is on the rise too.
That’s according to the Wall Street Journal, which reported the first ever case of AI-based voice fraud — aka vishing (short for “voice phishing”) — that cost a company $243,000.

In a sign that audio deepfakes are becoming eerily accurate, criminals sought the help of commercially available voice-generating AI software to impersonate the boss of a German parent company that owns a UK-based energy firm.
They then tricked the latter’s chief executive into urgently wiring the funds to a Hungarian supplier within an hour, on the assurance that the transfer would be reimbursed immediately.

The company’s CEO, recognizing the familiar slight German accent and speech patterns of his boss, reportedly suspected nothing.
Not only was the money never reimbursed, but the fraudsters also posed as the German CEO to ask for another urgent transfer. This time, however, the British CEO refused to make the payment.

As it turns out, the funds the CEO transferred to Hungary were eventually moved on to Mexico and other locations. Authorities have yet to identify the culprits behind the cyber-crime operation.

The firm was insured by Euler Hermes Group, which covered the entire cost of the payment. The incident reportedly happened in March; the names of the company and the parties involved were withheld, citing the ongoing investigation.

AI-based impersonation attacks are just the beginning of what could become major headaches for businesses and organizations.
In this case, the voice-generation software was able to imitate the German CEO’s voice convincingly. But it is unlikely to remain an isolated case of crime perpetrated using AI.

On the contrary, such attacks are bound to increase in frequency as long as social engineering of this kind keeps proving successful.

As the tools for mimicking voices become more realistic, so does the likelihood of criminals using them to their advantage. By feigning an identity over the phone, a threat actor can easily gain access to information that would otherwise stay private and exploit it for ulterior motives.

Back in July, the Israel National Cyber Directorate issued a warning about a “new type of cyber attack” that leverages AI to impersonate senior enterprise executives, including instructing employees to perform transactions such as money transfers and other malicious activity on the network.
The fact that an AI-assisted crime of this precise nature has already claimed its first victim in the wild should be a cause for concern, as it complicates matters for businesses that are ill-equipped to detect such attacks.

Last year, Pindrop — a cyber-security firm that designs anti-fraud voice software — reported a 350 percent jump in voice fraud from 2013 through 2017, with 1 in 638 calls reported to be synthetically created.

To safeguard companies from the economic and reputational fallout, it’s crucial that voice instructions be verified via a follow-up email or other out-of-band means.
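
As a rough illustration of what such a check might look like in practice, here is a minimal sketch of an out-of-band verification step for payment requests received by phone. All names, fields, and thresholds are hypothetical and purely illustrative; they are not taken from the report or from any specific product.

```python
# Hypothetical sketch: hold any voice-initiated (or unusually large) payment
# request until it has been confirmed over a separate, pre-agreed channel.
import secrets
from dataclasses import dataclass


@dataclass
class PaymentRequest:
    requester: str      # who asked for the transfer, e.g. "CEO of parent company"
    beneficiary: str    # destination account
    amount: float       # requested amount
    channel: str        # how the request arrived: "phone", "email", ...


def needs_out_of_band_check(req: PaymentRequest, threshold: float = 10_000) -> bool:
    """Flag voice-initiated or high-value requests for independent confirmation."""
    return req.channel == "phone" or req.amount >= threshold


def confirmation_code() -> str:
    """One-time code sent over a different channel (e.g. a follow-up email to
    the requester's known address) and read back before the transfer is released."""
    return secrets.token_hex(4)


# A request like the one described in the article would be held here.
request = PaymentRequest("parent-company boss", "Hungarian supplier", 243_000, "phone")
if needs_out_of_band_check(request):
    code = confirmation_code()
    print(f"Transfer on hold; confirm code {code} via follow-up email before releasing funds.")
```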

The rise of AI-based tools has its upsides and downsides. On one hand, it gives room for exploration and creativity. On the other, it also enables crime, deception, and, unfortunately, damn competent fraud.

Source: TNW

If the government doesn’t use it against you, scammers will. Personally, I’m still waiting for the good use cases of artificial intelligence.


