August 19, 2024

In a recent blog, I urged internal auditors not to ignore the growing threat of new technologies, particularly those involving deepfake schemes. By leveraging artificial intelligence (AI) and machine learning, nefarious players are creating audio, video and other images that are nearly indistinguishable from the real thing.
While deepfake technology has some positive applications in entertainment and education, its potential for misuse poses significant challenges to individuals, organizations and society as a whole. The risks are serious, ranging from reputational damage to financial loss. As I related in that earlier blog, a finance worker in Hong Kong was tricked into paying out $25 million to fraudsters who used deepfake technology to pose as the company’s chief financial officer – in a video conference call, no less. Similar stories are starting to emerge.
To mitigate the growing risks from deepfakes, companies must be proactive on several levels, including implementing robust security measures, fostering awareness and developing response strategies.
The same goes for risk managers and internal auditors, who should zero in on precisely what risks these frauds present for their businesses. Deepfake technology can be weaponized in several ways, including:
- Reputational Damage. Deepfakes can be used to fabricate videos or audio recordings of company executives or employees supposedly engaging in unethical or illegal activities. For example, a deepfake video of a CEO making controversial statements could quickly go viral, leading to a public relations crisis. Even if the deepfake is eventually proven false, the initial impact can be devastating. The adage, “a lie can travel halfway around the world while the truth is still putting on its shoes,” rings particularly true in the digital age.
- Financial Fraud. The Hong Kong incident is a prime example. In another notable case, cybercriminals used AI-generated audio to mimic the voice of a CEO, instructing an employee to transfer $243,000 to a fraudulent account. Such attacks underscore the potential for deepfakes to facilitate sophisticated phishing and business email compromise (BEC) schemes. As these incursions become more advanced, traditional verification methods, such as phone calls, may no longer suffice.
- Manipulation of Equity Markets. A deepfake video or audio clip that appears to show a company executive making a negative financial forecast or announcing a major setback could trigger panic selling of a company’s stock, resulting in a sharp decline in its value. Even after such a deepfake is debunked, the company may struggle to recover.
- Legal and Regulatory Challenges. The use of deepfake technology can also lead to legal and regulatory challenges. Companies may face lawsuits if they are accused of using deepfakes in misleading advertising or if their employees are implicated in deepfake-related crimes. Additionally, regulators may impose fines or other penalties on companies that fail to adequately protect against deepfake threats.
Staying a step ahead of deepfakers
Given the growing sophistication of deepfake technology, internal auditors will increasingly be called upon not only for assurance that related risks are identified, but also that mitigation strategies are in place. Here are a few strategies that can help businesses safeguard their assets, reputation and bottom line:
- Undertake a Threat Analysis. As a recent Forbes article advised, companies should evaluate where and how deepfakes could be used against them. “How could a threat actor with a dark web face fraud factory at their disposal put it to use to break into your organization? For example, a bad actor could call an internal IT helpdesk, use a voice deepfake to impersonate a privileged user and socially engineer a helpdesk agent into resetting the victim’s account credentials.”
- Implement Advanced Detection Technologies. As deepfake technology continues to evolve, so too must the tools used to detect it. Companies should invest in advanced deepfake detection software that leverages AI and machine learning to identify subtle anomalies in audio, video and other images that may indicate manipulation. For example, these tools can analyze facial movements, eye blinking and voice patterns to differentiate between real and fake content. Additionally, companies can collaborate with cybersecurity firms and research institutions to stay abreast of the latest advancements in deepfake detection.
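As one illustration of the kind of anomaly such tools examine: early deepfake generators often produced unnaturally low eye-blink rates. The toy sketch below flags footage whose blink rate falls outside a human-typical range. It is purely illustrative; the thresholds are assumptions, the per-frame eye-state input would come from an upstream classifier not shown here, and real detectors rely on trained models rather than hard-coded rules.

```python
def blink_anomaly(eye_open_frames, fps=30.0, normal_range=(8.0, 30.0)):
    """Flag footage whose blink rate falls outside a human-typical range.

    `eye_open_frames` is a per-frame True/False series from an assumed
    upstream eye-state classifier. Typical adult blink rates are roughly
    8-30 blinks per minute; the thresholds here are illustrative only.
    """
    # Count closed->open transitions as completed blinks.
    blinks = sum(1 for prev, cur in zip(eye_open_frames, eye_open_frames[1:])
                 if not prev and cur)
    minutes = len(eye_open_frames) / fps / 60.0
    if minutes == 0:
        return None
    rate = blinks / minutes
    lo, hi = normal_range
    return {"blinks_per_minute": rate, "suspicious": not (lo <= rate <= hi)}

# Example: 60 seconds of video containing only one blink -- suspiciously low.
frames = [True] * 1800
frames[900:905] = [False] * 5  # one brief eye closure
print(blink_anomaly(frames))   # flags a rate of 1 blink/minute as suspicious
```

In practice this single signal would be one feature among many (facial micro-movements, lip-sync consistency, voice spectral artifacts) feeding a trained model, which is why purpose-built detection software is worth the investment.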
- Educate Employees and Stakeholders. Employee awareness is critical in defending against these threats. Companies should provide regular training on the risks associated with deepfakes and how to recognize them. This includes teaching employees to verify the authenticity of communications, especially those that involve financial transactions or sensitive information. Executives and public-facing employees also should be trained on how to handle deepfake incidents and respond to potential inquiries, including from the news media.
- Enhance Security Protocols. Companies should have a comprehensive strategy for implementing multi-factor authentication (MFA) for all sensitive transactions and communications and establishing strict verification procedures for financial transfers. Companies also should consider using biometric verification methods, such as voice recognition, to confirm the identity of executives or employees involved in high-stakes communications.
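The core of such a verification procedure is that a high-value transfer can never be confirmed on the same channel it was requested on, so a deepfaked video call cannot authorize itself. A minimal sketch of that policy, with hypothetical threshold and channel names chosen for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    request_id: str
    amount: float
    requested_via: str                 # channel the request arrived on
    confirmations: set = field(default_factory=set)

# Hypothetical policy values for illustration only.
CALLBACK_THRESHOLD = 10_000           # amounts above this need out-of-band checks
REQUIRED_CHANNELS = {"callback_to_known_number", "mfa_app"}

def may_release(req: TransferRequest) -> bool:
    """Release a transfer only when every required verification was completed
    on a channel *different* from the one the request arrived on."""
    if req.amount < CALLBACK_THRESHOLD:
        return True
    independent = req.confirmations - {req.requested_via}
    return REQUIRED_CHANNELS <= independent

req = TransferRequest("TX-1", 25_000_000, requested_via="video_call")
print(may_release(req))  # False: no independent confirmations yet
req.confirmations |= {"callback_to_known_number", "mfa_app"}
print(may_release(req))  # True: confirmed on two independent channels
```

Had a control like this been in place in the Hong Kong case, the video call alone could not have released the $25 million.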
- Monitor and Respond to Online Content. Proactive monitoring of online content is essential for identifying and responding to deepfake threats in a timely manner. Companies should set up alerts and use social media monitoring tools to track mentions of their brand, executives and employees. If a deepfake is detected, the company must act quickly to debunk the content, issue public statements, and take legal action, if necessary. Having a comprehensive crisis communication plan with buy-in at all levels of an organization can help ensure a swift and coordinated response.
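At its simplest, such monitoring is watchlist matching over an incoming feed of posts, with hits routed to a human for triage. A minimal sketch, assuming a hypothetical watchlist and a feed supplied as plain strings (in practice the feed would come from a social-media monitoring API):

```python
import re

# Hypothetical watchlist: brand, executive names, ticker references.
WATCHLIST = ["Acme Corp", "Jane Doe", "ACME stock"]
PATTERN = re.compile("|".join(re.escape(term) for term in WATCHLIST),
                     re.IGNORECASE)

def scan_posts(posts):
    """Return posts mentioning any watchlist term, for human triage."""
    return [p for p in posts if PATTERN.search(p)]

feed = [
    "Leaked video: acme corp CEO announces massive losses!",
    "Unrelated chatter about the weather",
]
print(scan_posts(feed))  # only the first post is flagged
```

Commercial monitoring tools add sentiment scoring, spike detection and media-forensics checks on attached video, but the escalation path is the same: a flagged mention triggers the crisis communication plan.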
- Collaborate with Industry Partners. Given the widespread nature of deepfake threats, companies can benefit from collaborating with industry partners, government agencies and cybersecurity firms. By sharing information and best practices, companies can better stay ahead of emerging threats and develop more effective defenses. Additionally, industry-wide initiatives to combat deepfakes, such as development of deepfake detection standards, can help reduce the overall risk.
Help is likely coming by way of stronger legislation and regulation. Even the tech giants are pushing for stronger legislation to curb the threats from deepfakes. According to a recent article from The Verge:
“Microsoft is calling on members of Congress to regulate the use of AI-generated deepfakes to protect against fraud, abuse, and manipulation. Microsoft vice chair and president Brad Smith is calling for urgent action from policymakers to protect elections and guard seniors from fraud and children from abuse…
“Microsoft wants a ‘deepfake fraud statute’ that will give law enforcement officials a legal framework to prosecute AI-generated scams and fraud. Smith is also calling on lawmakers to ‘ensure that our federal and state laws on child sexual exploitation and abuse and non-consensual intimate imagery are updated to include AI-generated content.’ ”
For now, companies can begin mitigating the risks associated with deepfakes by undertaking threat analyses, implementing advanced detection technologies, educating employees, enhancing security protocols, monitoring online content, and collaborating with industry partners. And here is where risk managers and internal auditors play an important role by proactively helping their organizations to address these rapidly emerging risks.
I welcome your comments via LinkedIn or Twitter (@rfchambers).