How Microsoft Blends Safety and Security in AI Red Teaming
Presented by Tori
Tori started her career as a national security strategy consultant, supporting agencies like DHS and the FBI. At Microsoft, she has led initiatives in People Analytics, AR/VR monetization, and now heads AI Safety Red Teaming, ensuring Microsoft’s high-risk GenAI technologies are safe and secure before launch.
Abstract
The Microsoft AI Red Team (AIRT)’s principles and methods combine security red teaming practices and adversarial ML techniques with safety frameworks and perspectives. This talk will cover how AIRT integrates these different approaches while red teaming all of Microsoft’s high-risk GenAI technology, resulting in a cross-functional team approach that adapts to a diverse range of offerings, from models to copilots.