The Next Job Opportunity: Use AI to Enhance Your Story – Not Create It
June 4, 2024

I was recently gathered with a group of colleagues at a networking event in London when one of them gently chastised me for unabashedly urging the profession to jump into the artificial intelligence (AI) arena. In her opinion, my words carry a lot of weight, and internal auditors who hear me say “start using AI” may do so without considering all the risks or consequences. I pushed back a little, but her point was well taken. The opportunities that generative AI solutions present abound for our profession, but we must make sure we know the risks and play by our organizations’ “rules” – especially when accessing AI using the organization’s technology.
In the past year, the integration of AI technologies has proliferated across various sectors, revolutionizing how businesses operate. The advancement that has generated the most excitement is generative AI. Tools like ChatGPT are capable of creating content, from text to images, with remarkable human-like fluency. While generative AI holds immense potential for innovation and efficiency, its adoption in the workplace also brings forth a myriad of risks that demand careful consideration and adherence to established policies.
Generative AI, powered by deep learning algorithms, has garnered attention for its ability to generate content autonomously. Whether it’s crafting product descriptions, generating marketing copy, or even creating entire articles like this one, generative AI can mimic human writing styles and produce content at scale with astonishing speed.
Internal auditors are using generative AI to help with risk assessment, audit planning, data analysis, and even report writing. I have been encouraging its use, and even chastising the profession for being slow to adopt it. After all, the allure of generative AI lies in its potential to streamline workflows, boost productivity, and reduce labor-intensive tasks. By automating internal audit processes, we can save time and resources while conforming with the Global Internal Audit Standards and generating timely and impactful internal audit reports.
Risks Associated with Generative AI in the Workplace
However, the widespread adoption of generative AI in the workplace also presents inherent risks that internal auditors should be keenly aware of and that should guide us in our own use. Among the primary concerns are:
- Accuracy and Quality: While generative AI can produce content quickly, ensuring its accuracy, relevance, and adherence to quality standards remains a challenge. Without proper oversight, there’s a risk of generating inaccurate internal audit results, leading to inappropriate corrective measures.
- Ethical Considerations: As I have cautioned before, generative AI raises ethical concerns regarding the authenticity and integrity of content. There’s a risk of perpetuating misinformation or generating biased content that violates internal audit standards and could render our audit work inaccurate and damage our reputation.
- Data Privacy and Security: Generative AI relies on vast datasets to learn and generate content. If internal auditors access and process sensitive data without adequate safeguards, privacy can be compromised and confidential information exposed to unauthorized parties. As internal auditors, we must recognize that the risk of data breaches or malicious attacks targeting AI systems underscores the importance of robust cybersecurity measures.
- Legal Compliance: As AI-generated content becomes more prevalent, we must help our organizations ensure compliance with relevant laws and regulations governing intellectual property rights, consumer protection, and false claims. The only thing worse than someone else in our company violating these compliance requirements would be if we do so ourselves in a rush to leverage AI.
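One practical safeguard implied by the data privacy point above is to strip obvious identifiers from audit material before it ever reaches an external generative AI tool. The sketch below is a hypothetical illustration only – the patterns, labels, and `redact` helper are my assumptions, not anything prescribed by audit standards or any particular AI vendor – but it shows the general idea of masking sensitive values with labeled placeholders.

```python
import re

# Hypothetical example: these patterns are assumptions for illustration,
# not a complete or vendor-endorsed PII ruleset. Real policies may require
# far broader coverage (names, addresses, client identifiers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{10,16}\b"),  # long digit runs, e.g. account numbers
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder, e.g. [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Contact j.smith@example.com about account 123456789012 (SSN 123-45-6789)."
print(redact(note))
# Contact [EMAIL] about account [ACCOUNT] (SSN [SSN]).
```

A regex pass like this is only a first line of defense; it complements, rather than replaces, the organizational policies and governance controls discussed below.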
The Importance of Policy Compliance
In light of these risks, many organizations have established policies and guidelines governing the use of generative AI in the workplace. If we are to leverage AI using company resources, we had better comply with the very policies for whose compliance we would hold others accountable. As KPMG, ISACA, HBR, and others have noted, here’s why policies and compliance are crucial:
- Risk Mitigation: Well-defined policies help identify and mitigate potential risks associated with generative AI usage. As internal auditors, we should certainly recognize and embrace policies designed to mitigate risks. By following acceptable practices, standards, and procedures, we can minimize the likelihood of unintended consequences and adverse outcomes from leveraging AI for internal audit purposes.
- Ethical Framework: Some organizations have adopted policies that provide a framework for ethical decision-making and responsible AI usage. We must respect the boundaries of acceptable behavior, ensuring that AI applications we use align with ethical principles, respect individual rights, and uphold societal values.
- Legal Compliance: As internal auditors, compliance with relevant laws and regulations is non-negotiable. We must embrace policies and legal requirements pertaining to data privacy, intellectual property, transparency, and accountability, ensuring that AI deployments adhere to legal standards and avoid legal pitfalls.
- Accountability and Transparency: As internal auditors, we should understand our obligations when using generative AI tools and be accountable for the content they generate.
- Training and Awareness: We should be full participants in employee training and awareness programs that educate staff about the risks, benefits, and best practices associated with generative AI. We must embrace a culture of AI literacy and responsible usage and encourage our organizations to empower employees to make informed decisions and uphold policy compliance.
Generative AI offers unprecedented opportunities for innovation and efficiency for internal auditors, but its adoption comes with inherent risks that must be carefully managed. By prioritizing policy compliance and ethical considerations, we can harness the power of generative AI while safeguarding against potential harms. With clear policies, robust governance frameworks, and a commitment to responsible AI usage, internal auditors can navigate the complexities of AI technology and unlock its full potential.
Depending on your organization’s policies, a safe way for internal audit functions to leverage AI capabilities is through AI-enabled audit and risk management solutions. Some of these solutions, such as those developed by AuditBoard (my organization), are either developed and hosted locally or use third-party integrations with equally well-established data governance controls. These solutions treat security and privacy as primary concerns and often present much less risk than using public generative AI tools directly.
At the conclusion of the discussion in London, I jokingly told my colleagues that in the future my blog posts would include a disclaimer for internal auditors to follow the popular cliché and “don’t try this at home.” However, as I sat down to write this blog, I was reminded of another phrase whose origins I won’t get into – don’t engage in AI usage that is “not safe for work.”
I welcome your feedback on my perspectives via LinkedIn, X or by email to blogs@richardchambers.com.