Security Through Education

A free learning resource from Social-Engineer, Inc


by Social-Engineer • January 3, 2017

Social-Engineer Newsletter Vol 07 – Issue 88

Please Don’t Put Words in My Mouth

‘Well… now you’re just putting words in my mouth.’

Adobe recently announced Project VoCo at the November Adobe Max conference. It’s purported to be able to take recordings of someone’s voice and then create audio that sounds like that person. In a nutshell, it’s Photoshop for audio. People in the television industry, book narrators, and podcast creators may be rejoicing, as this would mean less time redoing mistakes made in the studio. On the other hand, this could be a malicious social engineer’s dream tool. According to Adobe, the software needs about twenty minutes of someone’s voice, after which it can recreate that voice exactly. The software doesn’t just find words and patch them together; the demo shows it can actually create speech the person never said. Couple that with the fact that spear phishing of C-suite employees is becoming a bigger problem, and you’ve got a volatile mixture. It’s not hard at all to find twenty minutes of audio on most CEOs and other high-level employees, considering many of them participate in press conferences, speeches, podcasts, and interviews.

How Could This Attack Work in the Real World?

Once the audio is acquired (through OSINT) and loaded in the program, it could just be a matter of typing in what you want the program to produce. A malicious social engineer’s attack vector may go like this:

  1. Perform OSINT and find that a CEO will be spending a week overseas for a conference.

  2. Create a fake voicemail from the CEO to the head of finance stating, “Hi Sue. I’m in London this week, so I need you to talk to our new vendor Phil Jones tomorrow to transfer some funds for a critical purchase. He’ll call you tomorrow around 9 a.m.”

  3. When Sue gets to work and hears this voicemail from her CEO, the pretext has been primed and she’s now expecting a phone call.

  4. The following morning, the malicious visher calls posing as Phil Jones and gives Sue the instructions to initiate a wire transfer for a large sum of money.

Isn’t That Scenario Too Implausible?

It may sound far-fetched, but phishing and vishing fraud like this is occurring daily without the help of VoCo. The FBI recently reported a 270% increase in “CEO fraud” since 2015. An estimated $2.3 billion was lost to these attacks over the last three years, and adding VoCo to the mix could significantly increase that amount.

How Else Might This Possibly Be Exploited?

With the increasing use of voice-activated assistants like the Amazon Echo and Google Home, VoCo could also be used to create attacks against these devices. Many integrate with IoT devices, like a garage door or home security system. Imagine if a criminal could pick the lock to a residence and then play an audio file of the homeowner’s voice saying, “Alexa, disable home alarm.” Some systems also require a vocal PIN to disarm, but that could possibly be gathered by phishing the target.

Finally, VoCo could be used as a smear-campaign tactic to sway elections or cause severe controversy. Imagine if multiple “leaked recordings” emerged of a CEO leaving sexually harassing voicemails. The recordings could go viral, causing the company’s stock to plummet. Unlike a doctored Photoshop image, recordings made in VoCo would be much harder to disprove, giving the attacker enough time to make several lucrative stock trades in their favor. How could a CEO really prove that they didn’t leave the incriminating voicemails they were accused of?

What Can I Currently Do to Mitigate the Threat?

VoCo is currently still in the development phase and appears to have some limitations, but this will change as the technology is tweaked and improved. The best defense is to continually train employees to be vigilant against vishing and spear-phishing attacks. This will help prepare them to fend off these attacks as they grow more sophisticated with tools like VoCo. You may also want to consider what you allow a voice-activated assistant to control in your home and, if available, set up a disarm PIN on your voice-controlled security system.

Written By: Laurie V.

Sources:
https://www.youtube.com/watch?v=I3l4XLZ59iw&feature=youtu.be&list=PLD8AMy73ZVxVLnQh5m-qK0efH3rKIYGx2
https://krebsonsecurity.com/2016/04/fbi-2-3-billion-lost-to-ceo-email-scams/
https://www.cnet.com/news/scouting-out-a-security-system-that-talks-to-amazons-alexa/

