Back to Glossary

Back to Glossary

Adversarial Testing

Adversarial testing, also referred to as “red teaming”, refers to risk identification exercises through adversarial attacks on AI models, infrastructure, development and deployment environments, and deployed AI products/systems. It is a mode of evaluation targeted at finding vulnerabilities in AI systems that can be exploited to get an AI system to output harmful content.

Unlock the Future with AI Governance.

Get a demo

Get a demo