March 3, 2024

As I’ve been traveling recently and speaking with internal auditors around the U.S., the most frequently raised topic has been artificial intelligence (AI). Like almost everyone, internal auditors are struggling to comprehend the opportunities and risks that AI presents for the organizations they serve. While time may be on our side to better understand and leverage the opportunities, the clock is ticking on identifying and addressing the risks.
Artificial intelligence has fundamentally transformed the landscape of business and technology, introducing new efficiencies, capabilities, and opportunities across industries. However, with these advancements come significant risks that organizations must navigate to leverage AI effectively and responsibly in 2024. As I have been sharing with internal auditors who ask for my perspective, I believe there are at least six critical risks that should be on our radars (and in our risk assessments) for the months ahead:
Accuracy and Accountability. A recent Forbes article on the risks AI presents for companies listed accuracy and accountability first. Where is AI acquiring its information? How is that information being verified? As the Forbes article notes: “Most AI-driven applications are non-transparent and non-verifiable. Using a term adopted by the intelligence community – “trust but verify” – most AI sourcing can’t be verified. It’s a black box.”
The article went on to share several examples of errors AI solutions have made:
- Google’s Bard AI got off to a rocky start by making some well-known mistakes and “hallucinations,” a phenomenon in which AI models make up facts. Some of the hallucinations it made in the field of astronomy led a leading astrophysicist, Grant Tremblay, to state that while Google’s Bard is “impressive,” AI chatbots like ChatGPT and Bard “have a tendency to confidently state incorrect information.”
- Microsoft’s Bing chatbot failed to differentiate key financial data in a basic comparison of vacuum cleaners and clothing. Humans have to clean this up.
- OpenAI’s ChatGPT hallucinations are common – such as the documented case in which it made up fictional court cases when a lawyer used it for legal research. Hallucinations strike again.
- AI largely failed to help diagnose COVID-19 or aid clinical assessment during the pandemic. In an examination of 415 such AI tools, none was fit for clinical use.
- Online real-estate data company Zillow incurred a $300 million write-down because its Offers program couldn’t accurately price homes with an AI-driven algorithm.
Ethical Considerations. As AI systems become more integrated into decision-making processes, ethical considerations have moved to the forefront. The development and deployment of AI systems without a robust ethical framework can lead to biased outcomes, discrimination, and a lack of accountability. For instance, AI algorithms trained on historically biased data can perpetuate or even exacerbate these biases, leading to unfair treatment of certain groups in hiring, lending, and law enforcement applications.
Organizations must prioritize the development of AI in an ethical manner, ensuring that these systems are transparent, explainable, and free from bias. This involves investing in diverse datasets, implementing rigorous testing protocols, and developing AI systems that can be easily understood and interrogated by humans.
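To make “rigorous testing protocols” a bit more concrete, here is a minimal sketch of one common first-pass fairness check. This is my illustration, not something prescribed in this post: it compares selection rates across demographic groups against the informal “four-fifths rule,” and the group labels, sample outcomes, and 0.8 threshold are all assumptions for illustration only.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def four_fifths_check(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the informal 'four-fifths rule' screen)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (rate / best) >= threshold for g, rate in rates.items()}

# Hypothetical hiring-model outcomes, (group, selected); data is made up.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(selection_rates(sample))    # roughly {'A': 0.67, 'B': 0.33}
print(four_fifths_check(sample))  # {'A': True, 'B': False}; group B is flagged
```

A screen like this is only a starting point; it surfaces disparities for human review rather than proving or disproving bias.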
Data Privacy Issues. AI systems require vast amounts of data to train and operate effectively. This dependency raises significant data privacy issues, especially as regulations like the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the United States impose strict rules on data collection, processing, and storage.
Organizations must navigate these regulations carefully to avoid substantial fines and reputational damage. This involves implementing robust data governance frameworks that ensure data is collected, used, and stored in compliance with all relevant laws and regulations. Additionally, there’s a growing need for technologies like federated learning, which allows AI models to be trained across multiple decentralized devices or servers without exchanging data samples, thus enhancing privacy.
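To make the federated learning idea more tangible, below is a minimal sketch, again my own illustration rather than a production approach: several “sites” each train on data that never leaves them, and only the resulting model weights are averaged centrally. The toy linear model, learning rate, and number of rounds are illustrative assumptions.

```python
import random

def local_update(w, local_data, lr=0.1):
    """One pass of gradient descent on a toy model y = w * x, using only
    data held locally; the raw records never leave the site."""
    for x, y in local_data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_average(local_weights):
    """The central server averages locally trained weights; it never sees the data."""
    return sum(local_weights) / len(local_weights)

# Three hypothetical sites, each holding private (x, y) pairs where y is roughly 2x.
sites = [[(x, 2 * x + random.uniform(-0.1, 0.1)) for x in (1.0, 2.0, 3.0)]
         for _ in range(3)]

global_w = 0.0
for _ in range(20):  # a few federated training rounds
    updates = [local_update(global_w, data) for data in sites]
    global_w = federated_average(updates)

print(f"learned weight: {global_w:.2f}")  # should end up close to 2.0
```

The privacy benefit comes from what is shared: model parameters rather than the underlying personal data, which is why techniques like this are attracting attention under GDPR- and CCPA-style constraints.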
Talent Disruption. The automation capabilities of AI have long been touted as a double-edged sword, offering significant efficiencies while also posing a risk to jobs. As AI technologies advance, the potential for job displacement increases, with both low-skilled and high-skilled roles at risk.
Organizations must approach the deployment of AI technologies with a human-centric perspective, considering the impact on their workforce. This includes investing in retraining and reskilling programs to help employees transition to new roles that AI cannot easily replicate, such as those requiring emotional intelligence, creativity, and complex problem-solving skills.
Intellectual Property and Legal Risks. Forbes also cited IP and legal threats when enumerating the emerging risks AI presents for companies that are adopting it. As Forbes observed:
- There are a host of legal issues in the application of AI. Will the AI be treated like humans when it makes mistakes? Identifying the origins of an AI error or source of data is particularly difficult to trace (i.e. in the case of hallucinations). And then there are the huge intellectual property (IP) questions. If AI is using data models borrowing from IP such as software, art, and music – who owns the IP? AI disintermediates the owners of IP. If Google is used to search for something, typically it can return a link to the source or the originator of the IP — such is not the case with AI.
Forbes notes that organizations in which AI tools are being widely adopted on a decentralized basis face a “legal and liability nightmare that has led dozens of companies to ban the adoption of AI tools such as ChatGPT — including big names such as Apple, JPMorgan Chase, Citigroup, Deutsche Bank, Wells Fargo, and Verizon.”
AI Governance and Regulation. The rapid advancement of AI technology has outpaced the development of governance frameworks and regulatory standards, creating a landscape of uncertainty for organizations. This uncertainty can hinder innovation, as organizations may be reluctant to invest in AI technologies without clear guidelines on their acceptable use.
To mitigate this risk, organizations must engage with policymakers, industry groups, and other stakeholders to shape the development of sensible AI governance and regulation. This involves advocating for regulations that balance innovation with ethical considerations, data privacy, and security.
As we move further into 2024, the risks presented by AI are as significant as its potential benefits. Organizations must navigate these risks with a balanced approach, prioritizing ethical considerations, data privacy, security, and the well-being of their workforce. By doing so, they can harness the power of AI to drive innovation and growth while ensuring that these technologies are developed and deployed in a manner that is beneficial to society as a whole.
The journey toward responsible AI adoption is complex and ongoing. Internal auditors should recognize that AI adoption requires a commitment to continuous learning, adaptation, and engagement with the broader ecosystem of regulators, technologists, and civil society. Only through a concerted effort can organizations and their internal auditors not only ensure that AI-associated risks are mitigated, but also contribute to the development of AI technologies that are secure, ethical, and beneficial for all.
As always, I welcome your thoughts on this topic. Feel free to email me at blogs@richardchambers.com or message me on LinkedIn or via X.