Red teaming in AI governance refers to the process of intentionally testing an AI system’s safeguards by simulating adversarial attacks or challenging scenarios. The goal is to identify vulnerabilities, assess risks, and ensure the model’s robustness, safety, and compliance.