Cyber Security

Governance of AI

Explore the Governance of AI and how Boards should be thinking about the use of AI in the businesses they oversee.


With the recent release of the Australian Government’s Voluntary AI Safety Standard, now is the perfect time to explore the Governance of AI and how Boards should be thinking about the use of AI in the businesses they oversee. 

First, it's important to get a common understanding of what is meant by the term Artificial Intelligence, or AI. Digital NSW describes AI as “the ability of a computer system to perform tasks that would normally require human intelligence, such as learning, reasoning, and making decisions.” So it's important to remember that AI is more than just ChatGPT – and in many cases we are barely scratching the surface when it comes to AI use and adoption. There are also many technical definitions of AI that should be considered, and many law firms that could offer a legal definition.

One way to become better informed on the various interpretations of AI is to understand the definitions of the variations that exist. They typically vary by how each computer system works, which generally comes down to the inputs it uses and the outputs it provides.

The four main AI domains which are discussed in contemporary business circles are:


  • Generative AI – This form of AI creates new content, whether text, image, voice, video or code, based on patterns the system has learnt from the existing data it has access to.

  • Machine Learning – A branch of AI that focuses on the use and development of computer systems that can learn and adapt without following explicit instructions, instead using algorithms and statistical models to analyse and draw inferences from patterns in data (see the sketch after this list).

  • Natural Language Processing – NLP is the application of computational techniques to the analysis and synthesis of natural language and speech. NLP algorithms are used to analyse and comprehend text, converse with users, and perform tasks like language translation, sentiment analysis and question answering.

  • Computer Vision – CV is a field of AI that enables computers to derive information from images, videos and other visual inputs. This form of technology can be used for tasks like object detection, face recognition and self-driving vehicles.
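
To make the Machine Learning definition concrete, here is a minimal, illustrative Python sketch. The customer tenure and spend figures are invented for illustration; the point is that no rule is written by the programmer – the program infers one from example data and applies it to a case it has never seen.

    import numpy as np

    # Hypothetical training data: customer tenure (years) vs. annual spend ($).
    tenure = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    spend = np.array([120.0, 150.0, 175.0, 210.0, 240.0])

    # Fit a straight line to the observed pattern (least-squares regression).
    # The "rule" (slope and intercept) is learnt from the data, not hand-coded.
    slope, intercept = np.polyfit(tenure, spend, 1)

    # Apply the learnt rule to a case the system has never seen.
    print(f"Learnt rule: spend = {slope:.1f} x tenure + {intercept:.1f}")
    print(f"Predicted spend for a 6-year customer: ${slope * 6 + intercept:.0f}")

The same learn-from-examples principle scales up to the fraud detection, demand forecasting and recommendation systems a Board is more likely to encounter in practice.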

Now that we have a broad understanding of the AI options, the first step for businesses and boards is to consider which functions within the business could use AI to improve efficiency or to innovate (keeping in mind that the risk versus reward equation needs to be in positive balance if an initiative is to be pursued). This is where the Voluntary AI Safety Standard and its guardrails come in: they should be consulted and adopted when deciding on implementation. But where is the intersection between the Board and management? In many cases this is determined by the size and sophistication of the organisation. In large businesses, the topics of Data Strategy and Data Governance may well sit with the Executive Team and the Chief Data Officer. In smaller businesses, however, the best approach to data traceability and security could be more of a Board Committee or Board of Management discussion.

Below are some sample questions that the Board/Executive could be asking of Management: 

  • Which functions and processes within your business are considering the use of AI to help with innovation and efficiency?
  • What forms of AI are being considered for use?  
  • What are the decision-making frameworks for determining AI adoption? 
  • Who will be the owner of the decision as to whether specific tools are to be used/adopted?  
  • What AI-related policies exist to provide the appropriate settings within the organisation? 
  • What verification is provided that the Voluntary AI Safety Standard's guardrails are being adopted?

Policy Fundamentals 
In most cases, a policy containing the following elements would be a very good start:

  • Statement
  • Purpose
  • Scope
  • Acceptable Use
  • Confidentiality/Privacy
  • Regulatory Compliance
  • Governance & Quality
  • Training and Awareness
  • Common Technologies/Tools (or even an explicit list of tools)
  • Breaches

For organisations looking for best practice in AI, we would recommend that Executives consider adopting Standards Australia's adoption of the international standard for AI management systems, AS ISO/IEC 42001:2023.

In summary, AS ISO/IEC 42001:2023 provides organisations with a roadmap for responsible and effective AI system development and management. By adhering to this standard, companies can enhance their AI applications, reduce development costs, and ensure regulatory compliance. 

For a deeper discussion on how your Board and Executive can get a reading on their assurance level for IT, Cyber and Data Governance, reach out to discuss our review mechanisms at leon@governwith.com.

For further reading on the subject, we recommend the 'AI Ethics and Governance - White Paper' by the Governance Institute of Australia. 



For illustrative purposes we have asked ChatGPT about this topic:

Question to ChatGPT: Where should I start with implementing an AI policy for my business? 
 
ChatGPT's Response: 
Implementing an AI policy for your business involves several key steps: 

  1. Assess Current AI Usage: Evaluate how AI is currently being used in your business and identify potential areas for improvement. 
  2. Define Objectives: Clarify what you want to achieve with your AI policy. This could include ethical considerations, compliance, risk management, and innovation.
  3. Engage Stakeholders: Involve key stakeholders, including legal, IT, and relevant department heads, to gather insights and ensure broad support.
  4. Research Best Practices: Look into industry standards and guidelines for AI use, such as the IEEE or OECD principles, to inform your policy framework.
  5. Draft the Policy: Create a comprehensive policy that addresses:
     - Ethical guidelines (fairness, transparency, accountability)
     - Data privacy and security
     - Compliance with relevant regulations (e.g., GDPR)
     - Usage guidelines for employees
  6. Review and Revise: Share the draft with stakeholders for feedback and make necessary revisions.
  7. Implement Training Programs: Educate employees about the policy, emphasizing responsible AI use and compliance.
  8. Monitor and Evaluate: Establish metrics to assess the policy's effectiveness and adapt as technology and regulations evolve. 
  9. Stay Updated: Regularly review and update the policy to align with technological advancements and changes in the legal landscape.

By following these steps, you can create a robust AI policy that aligns with your business goals while ensuring ethical and responsible AI usage. 

As shown above, AI is very helpful in surfacing existing information that offers some guidance; however, the response looks eerily similar to a page we've seen from a large Australian advisory firm. It's also worth noting that ChatGPT is not so good at basic thinking and literacy tasks, as the exchange below shows: 
 
Question to ChatGPT: 
How many r's are in raspberry?  

ChatGPT said: 
There are two "r's" in the word "raspberry". 
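
For the record, counting characters is exact string processing – the kind of task a conventional two-line program handles reliably where a language model may stumble:

    # Exact character counting: the correct answer is three, not two.
    print("raspberry".count("r"))  # prints 3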


Leon Cox 
Chief Accounts & Partnerships Officer 
GovernWith 
Email: leon@governwith.com 
