About the job
Join Our Elite AI Red Team
At c-serv, we are assembling a top-tier AI Red Team dedicated to rigorously stress-testing and fortifying enterprise-scale AI solutions deployed for some of the world's most prominent organizations.
This is not a theoretical research position; it sits at the intersection of adversarial machine learning, security architecture, and governance. You will design and lead comprehensive red team engagements across diverse AI systems, translating technical risk into actionable, enterprise-level assurance.
Are you tired of seeing AI risk findings trapped in slide decks with no operational impact? This role exists to change that.
Your Responsibilities
- Design and lead adversarial testing for large language models (LLMs) and AI-driven systems.
- Conduct thorough threat modeling across model, infrastructure, and data layers.
- Oversee and execute testing for:
  - Prompt injection
  - Jailbreaking
  - Model exploitation
  - Data leakage and extraction
  - RAG system manipulation
- Convert findings into structured, audit-ready documentation.
- Align vulnerabilities and remediation pathways with:
  - ISO 27001 controls
  - SOC 2 Trust Services Criteria
  - ISO 27701 privacy controls
  - ISO 27017 cloud security controls
- Collaborate closely with engineering, security, and compliance teams.
- Present findings clearly to executive leadership.
In this position, you will ensure that AI security insights are integrated directly into enterprise governance frameworks.

