Introduction
With the proliferation and widespread adoption of artificial intelligence (AI) across industries, Australia’s regulatory bodies are emphasising the need for robust frameworks to govern AI systems.
Recently, the Office of the Australian Information Commissioner (OAIC) released guidance on the use of commercially available AI products, as well as on the development and training of generative AI models.
The Australian Securities and Investments Commission (ASIC) released its report “Beware the gap: Governance arrangements in the face of AI innovation”, which examines how Australian financial services and credit licensees are implementing AI, particularly where it affects consumers.
Both stress the need for companies to adapt their governance models to address the challenges and risks AI presents.
As AI systems evolve, so do their associated ethical, legal, and operational concerns, including data privacy, fairness, and environmental and societal impacts. The OAIC and ASIC highlight that comprehensive governance, clear policies, and independent model validation are essential to mitigating these risks.
ASIC cautions: “Simply put, some licensees are adopting AI more rapidly than their risk and governance arrangements are being updated to reflect the risks and challenges of AI. There is a real risk that such gaps widen as AI use accelerates and this magnifies the potential for consumer harm”.
As organisations integrate advanced AI technologies such as generative AI models, it is crucial to examine their implications, identify potential risks, and establish effective governance and regulatory mechanisms.
Defining AI: A Broad and Inclusive Approach
The definition of AI has evolved in recent years and now often encompasses models and algorithms ranging from simple statistical models to generative AI.
ASIC and the OAIC both advocate for AI to be defined in terms of the data inputs, the methods employed, and the outputs derived, crucially covering a broad set of results: predictions and recommendations as well as content such as images, video or text.
This inclusivity acknowledges that risks associated with AI are not confined to novel or sophisticated techniques, but are inherent in any model that influences decision-making or consumer outcomes.
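To make this concrete, the sketch below shows one way an organisation might record a model in an internal inventory along the dimensions the regulators emphasise: data inputs, method, and outputs. The record structure and field names are our own illustration, not something prescribed by ASIC or the OAIC.

```python
from dataclasses import dataclass

# Illustrative only: a hypothetical inventory record capturing the
# dimensions the regulators emphasise: inputs, method, and outputs.
@dataclass
class ModelInventoryEntry:
    name: str                           # e.g. "origination scorecard v3"
    data_inputs: list[str]              # data sources feeding the model
    method: str                         # e.g. "logistic regression", "LLM"
    outputs: list[str]                  # predictions, recommendations, content
    influences_consumer_outcomes: bool  # drives the level of scrutiny required

# Under the broad ASIC/OAIC framing, a traditional scorecard sits in the
# same inventory as a generative chatbot.
scorecard = ModelInventoryEntry(
    name="origination scorecard",
    data_inputs=["credit bureau data", "application data"],
    method="logistic regression",
    outputs=["probability of default", "approve/decline recommendation"],
    influences_consumer_outcomes=True,
)
```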
ASIC’s AI Definition
ASIC defines AI to include:
Advanced Data Analytics: “Algorithms and techniques that enable autonomous or semi-autonomous data examination to discover insights, predict future outcomes, or make recommendations. These methods go beyond standard business intelligence (BI) approaches by integrating sophisticated statistical and machine learning (ML) techniques.”
Given the references to models developed using supervised, unsupervised or deep learning techniques, in our view this definition covers the breadth of models historically developed in financial services.
Generative AI: “A subset of AI dedicated to creating novel content—text, images, music, and video—based on the patterns learned from large training datasets. Unlike rule-based systems, generative AI leverages statistical structures to generate outputs that resemble but are not direct copies of training data.”
In our view, while ChatGPT and other large language models (LLMs) are certainly captured by this definition, credit scoring systems, including origination scorecards, debt collection and fraud detection systems, could also fall within scope.
OAIC’s AI Definition
Leveraging work from the OECD’s AI Policy Observatory, the OAIC provides a similarly inclusive definition, describing AI as any “machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The OAIC definition also distinguishes between an ‘AI model’ (the “raw, mathematical essence that is often the ‘engine’ of AI applications”, such as GPT-4) and an ‘AI system’ (“the ensemble of several components, including one or more AI models, that is designed to be particularly useful to humans in some way”, such as the ChatGPT app).
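As a rough sketch of that distinction, the snippet below models an ‘AI system’ as the composition of an ‘AI model’ with the surrounding components that make it useful; the class names and placeholder logic are purely illustrative.

```python
# Purely illustrative: the model is the 'engine', the system is the ensemble.
class AIModel:
    """The raw engine: parameters plus an inference routine (e.g. GPT-4)."""
    def predict(self, prompt: str) -> str:
        return f"model output for: {prompt}"  # placeholder inference

class AISystem:
    """The ensemble around the model: pre/post-processing, guardrails, UI
    (e.g. the ChatGPT app wrapping the GPT-4 model)."""
    def __init__(self, model: AIModel):
        self.model = model

    def respond(self, user_input: str) -> str:
        # The processing around the model is part of the system, and part
        # of what governance needs to cover.
        return self.model.predict(user_input.strip())
```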
This broad, inclusive definition of AI is helpful because it recognises that AI, models, and decision-making systems are evolving and may cause harm, contravene an individual’s rights, or risk the integrity of the financial system if they are not developed, validated, monitored and governed appropriately. In the wake of recent failures, it is vital that organisations are aware of these potential consequences.
Governance Across the Entire Spectrum of AI
The broader definition of AI increases the need to implement robust model risk management practices across the entire spectrum of AI and decision-making systems to identify, size and address their associated risks.
Models previously adhering to differing industry regulations may now be considered under the same umbrella. For example, models that currently leverage exemptions to the Privacy Act will need to consider data lineage and the potential leakage of personal information if exposed to external AI products such as ChatGPT. In fact, the OAIC recommends that “organisations do not enter personal information, and particularly sensitive information, into publicly available generative AI tools, due to the significant and complex privacy risks involved.”
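As a minimal illustration of the kind of control this recommendation implies, the sketch below screens text for a few obvious personal identifiers before it is allowed to reach an external generative AI tool. The patterns are naive and far from exhaustive; a real control would rest on proper data classification, not regex matching alone.

```python
import re

# Naive, illustrative patterns for a few obvious identifiers; a real
# control would rely on data classification, not regex matching alone.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "au_mobile": re.compile(r"\b04\d{8}\b"),                    # e.g. 0412345678
    "nine_digit_id": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),  # TFN-shaped
}

def screen_for_pii(text: str) -> list[str]:
    """Return the identifier types detected in the text, if any."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Summarise the complaint from jane.doe@example.com about her loan."
if hits := screen_for_pii(prompt):
    raise ValueError(f"Prompt blocked before reaching an external AI tool: {hits}")
```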
Credit scoring systems include calculators, rules, algorithms and models that have historically been the backbone of the lending industry. While some of these components are validated and governed appropriately, many are evaluated primarily on their business benefit. The evolving landscape of AI, however, necessitates that these tools, alongside all other statistically derived models, be subjected to comprehensive AI governance and model risk management frameworks.
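To illustrate, a model risk framework might tier these components by materiality; the criteria and tier descriptions below are our own sketch, not drawn from ASIC, OAIC, or PRA guidance.

```python
# Illustrative tiering only: criteria and tier names are our own sketch.
def assign_risk_tier(uses_personal_info: bool,
                     affects_consumer_outcomes: bool,
                     is_generative: bool) -> str:
    """Rough materiality tiering for a model, calculator, or rule set."""
    if affects_consumer_outcomes and (uses_personal_info or is_generative):
        return "Tier 1: independent validation, ongoing monitoring, board visibility"
    if affects_consumer_outcomes or uses_personal_info:
        return "Tier 2: periodic validation and documented ownership"
    return "Tier 3: lightweight review proportionate to use"

# A debt collection scorecard lands in the top tier under this scheme.
print(assign_risk_tier(uses_personal_info=True,
                       affects_consumer_outcomes=True,
                       is_generative=False))
```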
This need for increased scrutiny aligns with ASIC’s concern that the rapid adoption of AI in response to competitive pressures and business demands may outpace the development of adequate risk management and governance structures. ASIC emphasises the importance of ensuring that safeguards keep pace with the increasing sophistication and deployment of AI technologies to protect consumers and maintain ethical standards.
Australia’s Voluntary AI Safety Standard, the PRA and Model Risk Management
The Australian Government has published the Voluntary AI Safety Standard to help organisations develop and deploy AI systems in Australia safely and reliably.
The standard consists of 10 voluntary guardrails that apply to all organisations throughout the AI supply chain. They include transparency and accountability requirements across the supply chain and explain the responsibilities of developers and deployers of AI systems.
The UK’s Prudential Regulation Authority (PRA) outlines a set of principles and sub-principles to help banks take a strategic approach to model risk management (MRM). The principles are intended to support firms in strengthening their policies, procedures, and practices to identify, manage, and control the risks associated with the use of models.
Consistent with the OAIC and ASIC’s broad definition of AI, the PRA defines a model as “a quantitative method that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into output.”
“The output of models are estimates, forecasts, predictions, or projections, which themselves could be the input data or parameters of other quantitative methods or models.”
“Firms’ use of models covers a wide range of areas relevant to its business decision making, risk management, and reporting. Business decisions should be understood here as all decisions made in relation to the general business and operational banking activities, strategic decisions, financial, risk, capital, and liquidity measurement and reporting, and any other decisions relevant to the safety and soundness of firms.”
Because this definition aligns well with both ASIC’s and the OAIC’s definitions of AI (and is perhaps even more extensive), the PRA’s guidance may help define a practical governance framework for AI in Australia.
In our view, combining the guidance of Australia’s Voluntary AI Safety Standard with the PRA’s model risk management principles provides a robust framework for AI governance in Australia; we will explore and define this framework in our next set of articles.
For an introduction to the PRA’s model risk principles, see our earlier article.
Sources:
OAIC – Guidance on privacy and developing and training generative AI models
OAIC – Guidance on privacy and the use of commercially available AI products
Bank of England, SS1/23 – Model risk management principles for banks
OECD.AI – What is AI? Can you make a clear distinction between AI and non-AI systems?
https://oecd.ai/en/wonk/definition
ASIC REP 798 – Beware the gap: Governance arrangements in the face of AI innovation
Voluntary AI Safety Standard:
https://www.industry.gov.au/publications/voluntary-ai-safety-standard/10-guardrails
Kadre is a specialist credit risk and data science consultancy, solving meaningful problems.
Contributors from Kadre include Thushare Dissanayake, Steve Johnson, Mike Cutter, Michael Hartman, Ewa Baranowska, Christian Klettner and Dennis Rappoport.