
Principal Analyst, Content Adversarial Red Team
- Dublin
- Permanent
- Full-time

Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 10 years of experience in Trust and Safety, risk mitigation, cybersecurity, or related fields.
- 6 years of experience in adversarial testing, red teaming, jailbreaking for trust and safety, or a related field, with a focus on AI safety.

Preferred qualifications:
- Master's degree or PhD in a relevant field (e.g., computer science, information security, artificial intelligence).
- Experience in an individual contributor role within a technology company, focused on product safety or risk management.
- Experience working closely with both technical and non-technical teams on complex, dynamic solutions or automations to improve user safety.
- Understanding of AI systems and architecture, including their specific vulnerabilities, machine learning, and AI responsibility.
- Ability to effectively articulate complex concepts to both technical and non-technical stakeholders.
- Exceptional written and verbal communication skills.

Responsibilities:
- Design, develop, and oversee the execution of innovative and highly complex red teaming strategies to uncover content abuse risks. Create and refine new red teaming methodologies, strategies, and tactics.
- Influence across Product, Engineering, Research, and Policy to drive the implementation of strategic safety initiatives. Be a key advisor to executive leadership on complex content safety issues, providing actionable insights and recommendations.
- Mentor and guide junior and senior analysts, fostering excellence and continuous learning within the team. Act as a subject matter expert, sharing deep knowledge of adversarial and red teaming techniques, and strategic risk mitigation.
- Represent Google's AI safety efforts in external forums and conferences. Contribute to the development of industry-wide best practices for responsible AI development.