About the job
c-serv is forming an AI Red Team to improve the security of enterprise AI products. As an Adversarial Machine Learning Engineer, you will identify vulnerabilities in large language model (LLM) systems and test their defenses before those systems reach enterprise clients. This is a hands-on position that addresses practical security challenges in real-world AI deployments.
Main responsibilities
- Perform adversarial assessments targeting LLMs and other AI systems
- Simulate real-world attack scenarios, including:
  - Prompt injection
  - Jailbreaking and bypassing model guardrails
  - Data exfiltration
  - Model inversion and evasion
  - Manipulation of retrieval-augmented generation (RAG) pipelines
- Develop scripts and tools to automate attack scenarios
- Evaluate model behavior and performance under adversarial conditions
- Identify weaknesses in technical components, such as:
  - APIs
  - Embedding pipelines
  - Vector databases
  - Fine-tuned model deployments
- Collaborate with engineering teams to verify fixes and improvements
- Document findings clearly and thoroughly
Location
This position is based in Calgary, Alberta, Canada.
Impact
The work in this role directly supports the reliability and security of AI systems before they are deployed at scale for enterprise use.