Certified AI Governance Professional (AIGP)
Provided by QA
Overview
The Artificial Intelligence Governance Professional (AIGP) training teaches professionals how to develop, integrate and deploy trustworthy AI systems in line with emerging laws and policies around the world. The course and certification provide an overview of AI technology, a survey of current law and strategies for risk management, among many other relevant topics.
Why should I take the Certified Artificial Intelligence Governance Professional training?
Businesses and institutions need professionals who can evaluate AI, curate standards that apply to their enterprises, and implement strategies for complying with applicable laws and regulations.
With the expansion of AI technology, there is a need for professionals in all industries to understand and execute responsible AI governance. The AIGP credential demonstrates that an individual can ensure safety and trust in the development and deployment of ethical AI and in the ongoing management of AI systems.
What's included:
- Official learning materials
- Exam voucher
- 12 months' membership of the IAPP (if you are already an IAPP member, your membership will be extended by another year)
The exam:
- Taken post-course, via Pearson Vue
- 3 hours, including an optional 15-minute break
- 100 multiple-choice questions, 30% relating to scenarios
- Results available via the IAPP portal
Prerequisites
There are no prerequisites for this course.
Who Should Train?
We must continue to build and refine the governance processes through which trustworthy AI will emerge, and we must invest in the people who will build ethical and responsible AI. Those who work in compliance, privacy, security, risk management, legal, HR and governance, together with data scientists, AI project managers, business analysts, AI product owners, model ops teams and others, must be prepared to tackle the expanded equities at issue in AI governance.
This includes any professionals tasked with developing AI governance and risk management in their operations, and anyone pursuing the IAPP Artificial Intelligence Governance Professional (AIGP) certification.
Delegates will learn how to
AIGP training teaches how to develop, integrate and deploy trustworthy AI systems in line with emerging laws and policies. The curriculum provides an overview of AI technology, a survey of current law, and strategies for risk management, security and safety considerations, privacy protection and other topics.
- Establish foundational knowledge of AI systems and their use cases, the impacts of AI, and comprehension of responsible AI principles.
- Demonstrate an understanding of how current and emerging laws apply to AI systems, and how the major frameworks support their responsible governance.
- Show comprehension of the AI life cycle, the context in which AI risks are managed, and the implementation of responsible AI governance.
- Demonstrate awareness of unforeseen concerns with AI and knowledge of debated issues surrounding AI governance.
Outline
Module 1: Foundations of artificial intelligence
Defines AI and machine learning, presents an overview of the different types of AI systems and their use cases, and positions AI models in the broader socio-cultural context. At the end of this module you will be able to:
- Describe and explain the differences among types of AI systems.
- Describe and explain the AI technology stack.
- Describe and explain AI and the evolution of data science.
Module 2: AI impacts and responsible AI principles
Outlines the core risks and harms posed by AI systems, the characteristics of trustworthy AI systems, and the principles essential to responsible and ethical AI. At the end of this module you will be able to:
- Describe and explain the core risks and harms posed by AI systems.
- Describe and explain the characteristics of trustworthy AI systems.
- Describe and explain the similarities and differences among existing and emerging ethical guidance on AI.
Module 3: The AI development life cycle
Describes the AI development life cycle and the broad context in which AI risks are managed. At the end of this module you will be able to:
- Describe and explain the key steps in the AI system planning phase.
- Describe and explain the key steps in the AI system design phase.
- Describe and explain the key steps in the AI system development phase.
- Describe and explain the key steps in the AI system implementation phase.
Module 4: Implementing responsible AI governance and risk management
Explains how major AI stakeholders collaborate in a layered approach to manage AI risks while acknowledging AI systems' potential societal benefits. At the end of this module you will be able to:
- Ensure interoperability of AI risk management with other operational risk strategies.
- Integrate AI governance principles into the company.
- Establish an AI governance infrastructure.
Module 5: Implementing AI projects and systems
Outlines mapping, planning and scoping AI projects, testing and validating AI systems during development, and managing and monitoring AI systems after deployment. At the end of this module you will be able to:
- Map, plan and scope the AI project.
- Test and validate the AI system during development.
- Manage and monitor AI systems after deployment.
Module 6: Current laws that apply to AI systems
Surveys the existing laws that govern the use of AI, outlines key GDPR intersections, and provides awareness of liability reform. At the end of this module you will be able to:
- Describe and explain the existing laws that interact with AI use.
- Describe and explain key GDPR intersections.
- Describe and explain liability reform.
Module 7: Existing and emerging AI laws and standards
Describes global AI-specific laws and the major frameworks and standards that exemplify how AI systems can be responsibly governed. At the end of this module you will be able to:
- Describe and explain the requirements of the EU AI Act.
- Describe and explain other emerging global laws.
- Describe and explain the similarities and differences among the major risk management frameworks and standards.
Module 8: Ongoing AI issues and concerns
Presents current discussions and ideas about AI governance, including awareness of legal issues, user concerns, and AI auditing and accountability issues. At the end of this module you will be able to:
- Gain an awareness of legal issues.
- Gain an awareness of user concerns.
- Gain an awareness of AI auditing and accountability issues.
Enquire
| Start date | Location / delivery | |
| --- | --- | --- |
| 27 May 2025 | QA On-Line Virtual Centre, Virtual | Book now |
01132207150