Back to Glossary

Adversarial Testing

Adversarial testing, also referred to as “red teaming”, refers to risk identification exercises through adversarial attacks on AI models, infrastructure, development and deployment environments, and deployed AI products/systems. It is a mode of evaluation targeted at finding vulnerabilities in AI systems that can be exploited to get an AI system to output harmful content.

Last Updated:

Discover how we can help your company

Schedule a call with one of our experts

Schedule a call