About the Role
c-serv is building a dedicated AI Red Team to strengthen the security of its enterprise AI products. The Adversarial Machine Learning Engineer will play a central part in uncovering vulnerabilities in LLM-based systems and testing their defenses before those systems reach enterprise clients. This is a hands-on position focused on practical security challenges in real-world AI deployments.
Main Responsibilities
- Carry out adversarial assessments targeting LLMs and other AI systems
- Simulate real-world attacks, such as:
  - Prompt injection
  - Jailbreaking and bypassing model guardrails
  - Data exfiltration
  - Model inversion and evasion
  - Manipulation of retrieval-augmented generation (RAG) pipelines
- Develop scripts and tools to automate attack scenarios
- Evaluate model behavior and performance under adversarial conditions
- Pinpoint weaknesses in technical components, including:
  - APIs
  - Embedding pipelines
  - Vector databases
  - Fine-tuned model deployments
- Work closely with engineering teams to confirm fixes and improvements
- Document findings clearly and thoroughly
Location
This role is based in Calgary, Alberta, Canada.
Impact
The work done in this role directly supports the reliability and security of AI systems before they are deployed at scale for enterprise use.

