AI Governance: No better time than now
As AI technology becomes more accessible and pervasive, the need for robust AI governance has never been more critical. Businesses that allow their workforce to use generative artificial intelligence (GenAI), which produces outputs that mimic human-created material, should establish clear parameters for its use. Without a clear framework governing the use of AI, businesses and organisations risk regulatory breaches, ethical dilemmas, reputational harm, and potentially costly litigation.
GenAI tools have been integrated into workplaces at an unprecedented speed and are also becoming increasingly embedded into existing tools, such as Microsoft 365 Copilot. Many organisations have adopted these technologies to enhance productivity and gain competitive advantages, but the existing management policies of those organisations often fail to adequately address the unique capabilities and risks involved.
Emerging risks and compliance
GenAI introduces new risks relating to the security and privacy of personal data, and to the potential misuse of company and third-party intellectual property (such as copyright, confidential information and trade secrets) through the unauthorised input of that material into public GenAI systems such as ChatGPT.
Companies may be held liable for breaches of federal workplace, discrimination or privacy laws. Liability may arise through a failure to consult on the implementation of AI in the workplace, through the uncontained or unauthorised use of GenAI by workers in relation to other workers, or through the illegitimate use of GenAI for surveillance of staff, which could breach specific state workplace laws.
The EU AI Act, which entered into force in August 2024, takes a risk-based approach to AI, requiring proactive compliance measures from businesses. Against this backdrop of evolving global standards, Australia has released the Voluntary AI Safety Standard (September 2024), with plans for future mandatory rules for high-risk AI. Early adoption of the voluntary framework is encouraged, and it will be important for organisations across Australia to begin aligning their governance processes and policies with this rapidly changing technology.
Given these risks, businesses must now establish clear parameters and a framework for governing the use of AI in the workplace.
What is good AI governance?
- Clear Accountability
  - Define roles and responsibilities for AI oversight.
  - Assign decision-making authority and escalation paths for issues.
  - Ensure senior leadership involvement in AI strategy and risk management.
- Transparency and Explainability
  - Document how AI models work, what data they use, and how decisions are made.
  - Be transparent across the AI supply chain.
  - Provide explanations for automated decisions, especially those impacting rights or interests.
  - Disclose automated decision-making in privacy policies.
  - Maintain audit trails for AI processes.
- Ethical and Fair Practices
  - Implement bias detection and mitigation strategies.
  - Ensure AI decisions do not discriminate against individuals or groups.
  - Align AI use with organisational values and societal norms.
- Compliance with Laws and Standards
  - Stay updated on amendments to legislation such as Australia’s Privacy Act 1988 (Cth) and global frameworks (e.g. the EU AI Act).
  - Integrate compliance checks into AI development and deployment.
  - Regularly review privacy policies and disclosures.
- Risk Management
  - Conduct AI risk assessments before deployment.
  - Monitor AI systems for performance, security and unintended consequences.
  - Establish incident response plans for AI-related failures.
- Continuous Monitoring and Improvement
  - Regularly audit AI systems for accuracy, fairness and compliance.
  - Update governance frameworks as technology and regulations evolve.
  - Engage with stakeholders, encourage feedback loops and carry out impact assessments.
- Human Oversight
  - Ensure critical decisions are reviewable by humans.
  - Avoid full automation for high-risk decisions without safeguards.
  - Build in human override points.
Next steps
The challenges and risks posed by AI are considerable, requiring organisations and businesses to think carefully about their preparedness and exposure. Page Seager Lawyers can assist with AI governance, legal compliance and related advice.

