About the job
Lila Sciences is seeking a Principal Engineer with expertise in AI Security for its Cambridge, MA office. This senior individual contributor will help define and drive the technical strategy to secure AI applications across the organization. The role involves close collaboration with IT and business teams to ensure AI tools and platforms are implemented safely and in compliance with internal and external requirements.
Securing both third-party and in-house AI tools is central to this position. The work will focus on protecting sensitive data, intellectual property, and scientific processes as AI becomes increasingly embedded in Lila's operations.
What You Will Do
- Enterprise AI Security Strategy: Develop and roll out security controls and guidelines for using AI tools throughout the company, covering LLM APIs and SaaS AI platforms.
- AI Gateway & Agentic Gateway Security: Design and enforce controls for AI gateways, monitoring and managing access to AI systems. Ensure agentic workflows are secured with strong identity and authorization measures.
- AI Red Teaming & Adversarial Testing: Lead red teaming and adversarial testing efforts to uncover vulnerabilities in AI usage, such as prompt injection and data exfiltration risks.
- Data Protection for AI Usage: Establish and maintain safeguards to prevent sensitive data exposure through AI systems, with a focus on input/output filtering and secure data management.
- Multi-Layer AI Security: Integrate AI security practices with the organization's broader security framework, maintaining visibility and control over AI service access and data flows.
- AI Threat Modeling: Build and maintain threat models tailored to enterprise AI applications, addressing risks like data leaks and unauthorized agent actions.
- Vendor & Platform Security: Assess and advise on secure integration of third-party AI vendors, with close attention to their data handling practices and model behavior.

