CloudBrain Training Logo

Introduction to Responsible AI in Practice

(1 day)

The development of AI has created new opportunities to improve the lives of people around the world, from business to healthcare to education. It has also raised new questions about the best way to build fairness, interpretability, privacy, and safety into these systems.

In this course, you will do a high-level exploration of Google’s recommended best practices for responsible AI usage across different areas of focus: Fairness, Interpretability, Privacy, and Safety. Along the way, you will learn how you can leverage different open-source tools and tools on Vertex AI to explore these concepts and spend time considering the different challenges that arise with generative AI.

Course Objectives

  • Overview of responsible AI principles and practices
  • Implement processes to check for unfair biases within machine learning models
  • Explore techniques to interpret the behavior of machine learning models in a human-understandable manner
  • Create processes that enforce the privacy of sensitive data in machine learning applications
  • Understand techniques to ensure safety for GenAI-powered applications

Audience

Machine learning practitioners and AI application developers wanting to leverage generative AI in a responsible manner.

Prerequisites

To get the most out of this course, participants should have:

  • Familiarity with basic concepts of machine learning
  • Familiarity with basic concepts of generative AI on Google Cloud in Vertex AI

Course Outline

Module 1: AI Principles and Responsible AI

  • Google’s AI principles
  • Responsible AI practices
  • General best practices

Module 2: Fairness in AI

  • Overview of fairness in AI
  • Examples of tools to study fairness of datasets and models
  • Lab: Using TensorFlow Data Validation and TensorFlow Model Analysis to Ensure Fairness

Module 3: Interpretability of AI

  • Overview of interpretability in AI
  • Metric selection
  • Taxonomy of explainability in ML models
  • Examples of tools to study interpretability
  • Lab: Learning Interpretability Tool for Text Summarization

Module 4: Privacy in ML

  • Overview of privacy in ML
  • Data security
  • Model security
  • Security for generative AI on Google Cloud

Module 5: AI Safety

  • Overview of AI safety
  • Adversarial testing
  • Safety in Vertex AI Studio
  • Lab: Responsible AI with Vertex AI Studio