Certified AI Governance Professional (AIGP)

Overview

The Artificial Intelligence Governance Professional (AIGP) training teaches professionals how to develop, integrate and deploy trustworthy AI systems in line with emerging laws and policies around the world. The course and certification provide an overview of AI technology, a survey of current law, and strategies for risk management, among many other relevant topics.

Why should I take the Certified Artificial Intelligence Governance Professional training?

Businesses and institutions need professionals who can evaluate AI, curate standards that apply to their enterprises, and implement strategies for complying with applicable laws and regulations.

With the expansion of AI technology, there is a need for professionals in all industries to understand and execute responsible AI governance. The AIGP credential demonstrates that an individual can ensure safety and trust in the development and deployment of ethical AI and ongoing management of AI systems.

What's included:
  • Official learning materials
  • Exam voucher
  • 12 months' membership of the IAPP (if you are already an IAPP member, your membership will be extended by another year)
About the exam
  • Taken after the course, via Pearson VUE
  • 3 hours, including an optional 15-minute break
  • 100 multiple-choice questions, 30% relating to scenarios
  • Results available via the IAPP portal

Prerequisites

There are no prerequisites for this course.

Who Should Train?

We must continue to build and refine the governance processes through which trustworthy AI will emerge and we must invest in the people who will build ethical and responsible AI. Those who work in compliance, privacy, security, risk management, legal, HR and governance together with data scientists, AI project managers, business analysts, AI product owners, model ops teams and others must be prepared to tackle the expanded equities at issue in AI governance.

This includes any professional tasked with developing AI governance and risk management in their operations, and anyone pursuing the IAPP Artificial Intelligence Governance Professional (AIGP) certification.


Delegates will learn how to

The AIGP training teaches delegates how to develop, integrate, and deploy trustworthy AI systems in line with emerging laws and policies. The curriculum provides an overview of AI technology, a survey of current law, strategies for risk management, security and safety considerations, privacy protection, and other topics.
  • Establish foundational knowledge of AI systems and their use cases, the impacts of AI, and comprehension of responsible AI principles.
  • Demonstrate an understanding of how current and emerging laws apply to AI systems, and how major frameworks enable AI systems to be responsibly governed.
  • Show comprehension of the AI life cycle, the context in which AI risks are managed, and the implementation of responsible AI governance.
  • Demonstrate awareness of unforeseen concerns with AI and knowledge of debated issues surrounding AI governance.
This training teaches critical AI governance concepts that are also integral to the AIGP certification exam. While not purely a "test prep" course, this training is appropriate for professionals who plan to certify, as well as for those who want to deepen their AI governance knowledge. Both the training and the exam are based on the same body of knowledge.


Outline

Module 1: Foundations of artificial intelligence

Defines AI and machine learning, presents an overview of the different types of AI systems and their use cases, and positions AI models in the broader socio-cultural context. At the end of this module you will be able to:
  • Describe and explain the differences among types of AI systems.
  • Describe and explain the AI technology stack.
  • Describe and explain AI and the evolution of data science.
Module 2: AI impacts on people and responsible AI principles

Outlines the core risks and harms posed by AI systems, the characteristics of trustworthy AI systems, and the principles essential to responsible and ethical AI. At the end of this module you will be able to:
  • Describe and explain the core risks and harms posed by AI systems.
  • Describe and explain the characteristics of trustworthy AI systems.
Module 3: AI development life cycle

Describes the AI development life cycle and the broad context in which AI risks are managed. At the end of this module you will be able to:
  • Describe and explain the similarities and differences among existing and emerging ethical guidance on AI.
  • Describe and explain the existing laws that interact with AI use.
  • Describe and explain key GDPR intersections.
  • Describe and explain liability reform.
Module 4: Implementing responsible AI governance and risk management

Explains how major AI stakeholders collaborate in a layered approach to manage AI risks while acknowledging AI systems' potential societal benefits. At the end of this module you will be able to:
  • Describe and explain the requirements of the EU AI Act.
  • Describe and explain other emerging global laws.
  • Describe and explain the similarities and differences among the major risk management frameworks and standards.
Module 5: Implementing AI projects and systems

Outlines mapping, planning and scoping AI projects, testing and validating AI systems during development, and managing and monitoring AI systems after deployment. At the end of this module you will be able to:
  • Describe and explain the key steps in the AI system planning phase.
  • Describe and explain the key steps in the AI system design phase.
  • Describe and explain the key steps in the AI system development phase.
  • Describe and explain the key steps in the AI system implementation phase.
Module 6: Current laws that apply to AI systems

Surveys the existing laws that govern the use of AI, outlines key GDPR intersections, and provides awareness of liability reform. At the end of this module you will be able to:
  • Ensure interoperability of AI risk management with other operational risk strategies.
  • Integrate AI governance principles into the company.
  • Establish an AI governance infrastructure.
  • Map, plan and scope the AI project.
  • Test and validate the AI system during development.
  • Manage and monitor AI systems after deployment.
Module 7: Existing and emerging AI laws and standards

Describes global AI-specific laws and the major frameworks and standards that exemplify how AI systems can be responsibly governed. At the end of this module you will be able to:
  • Gain an awareness of legal issues.
  • Gain an awareness of user concerns.
  • Gain an awareness of AI auditing and accountability issues.
Module 8: Ongoing AI issues and concerns

Presents current discussions and ideas about AI governance, including awareness of legal issues, user concerns, and AI auditing and accountability issues.


Enquire

Start date: 27 May 2025
Location / delivery: QA On-Line Virtual Centre, Virtual
Telephone: 0113 220 7150
