Research Manager in AI Safety jobs in Cambridge – Browse 527 openings on RoboApply Jobs

Research Manager in AI Safety jobs in Cambridge

Open roles matching “Research Manager in AI Safety” in or near Cambridge. 527 active listings on RoboApply Jobs.


1 - 20 of 527 Jobs
Research Manager in AI Safety

Cambridge Boston Alignment Initiative

Full-time|From $100K/yr|On-site|Cambridge, Massachusetts

We are open to hiring for this role at various levels of expertise. For the right candidate, this position can be structured as a Senior Research Manager, with compensation tailored to experience and the anticipated scope of work, potentially exceeding the listed pay rate.

About the Cambridge Boston Alignment Initiative
The Cambridge Boston Alignment Initiative (CBAI) is a nonprofit research organization dedicated to promoting research and education aimed at facilitating a safe and beneficial transition to advanced AI systems. Our efforts include generating original research and accelerating AI safety initiatives through our fellowship programs.

Our first summer fellowship cohort has already published significant papers at the Mechanistic Interpretability Workshop at NeurIPS and had papers accepted at ICLR. Some fellows have also transitioned to roles at Goodfire and Redwood Research. Following a successful launch in 2025, we are poised for rapid expansion in 2026, with plans to host multiple fellowship cycles (Fall, Spring, and Summer), double our fellowship cohort, and quadruple our team size.

Refer candidates to us and earn $5,000 if they are hired.

The Role
In this role, you will collaborate closely with research fellows and their mentors, renowned researchers from Cambridge and beyond, to support pioneering work on interpretability, AI control, formal verification for provably safe AI, evaluations, and various aspects of AI governance and policy. We are looking for research managers with experience in both technical research and governance and policy research.

Research Management Responsibilities (0.7 FTE)
- Conduct regular one-on-one meetings with fellows to provide constructive feedback on research progress, help them work through challenges such as debugging research methodologies and preparing literature scaffolds, and support data collection, analysis, and methodology development for experiments and hypothesis testing.
- Offer feedback on fellows' research and help cultivate an environment that encourages rigorous approaches.
- Connect fellows with relevant resources, literature, and opportunities during and after the fellowship program.
- Facilitate communication between fellows and their mentors to ensure a supportive research ecosystem.

Mar 30, 2026
Research Program Associate in AI Safety

Cambridge Boston Alignment Initiative

Full-time|$100K/yr - $125K/yr|On-site|Cambridge, Massachusetts

Join the Cambridge Boston Alignment Initiative
The Cambridge Boston Alignment Initiative (CBAI) is a nonprofit organization dedicated to pioneering research and educational initiatives aimed at ensuring a safe and beneficial transition to advanced AI systems. Our mission focuses on producing original research and accelerating AI safety through comprehensive fellowship programs.

Since our initial summer fellowship cohort, we have achieved significant milestones, including published papers at prominent conferences such as NeurIPS and ICLR. As we enter 2026, we are poised for rapid growth, planning multiple fellowship cycles and expanding our team significantly.

Refer candidates to us, and if they are hired, you will receive a $5,000 referral bonus!

Your Role
As a Research Program Associate, you will collaborate closely with Research Managers, mentors, and program leadership to design and refine the frameworks that empower fellows to excel in their research. This is a pivotal program-building position where you will create systems for mentor matching, research goal tracking, progress assessment, and problem-solving support for fellows.

Program Design & Development (0.6 FTE)
- Enhance CBAI's fellow selection process and program deliverables.
- Identify effective outreach channels and manage outreach campaigns for future iterations.
- Develop evaluation frameworks to assess fellow progress and program effectiveness.
- Implement structural improvements based on feedback from fellows, mentors, and research managers.
- Assist in the planning and execution of fellowship events, such as speaker series and poster days.

Fellow & Mentor Experience (0.4 FTE)
- Design and oversee the onboarding process for mentors, ensuring a positive experience.

Mar 31, 2026
Lila Sciences
Full-time|$176K/yr - $304K/yr|On-site|Cambridge, MA USA; London, UK; San Francisco, CA USA

Your Impact at Lila
Join our dynamic and innovative AI safety team at Lila Sciences, where we prioritize talent and agency to mitigate risks associated with scientific superintelligence. Our mission is to craft and execute a tailored safety strategy that aligns with our unique objectives and deployment methods. This role involves creating technical safety strategies, engaging with the broader scientific community, and producing critical technical documentation, including evaluations focused on risk and capability assessments.

What You’ll Be Creating
- Design and implement capability evaluations to assess scientific risks, particularly from cutting-edge scientific models integrated with automated physical laboratories across the biological and physical sciences.
- Lead and coordinate threat modeling sessions with both internal and external scientific experts, keeping abreast of emerging technologies and use cases.
- Develop and manage high-quality training and testing datasets for evaluations and safety systems.
- Analyze risks associated with Lila’s capabilities and their interactions with the broader ecosystem of general-purpose frontier models and specialized scientific tools.
- Contribute to high-quality research initiatives focused on scientific capability evaluation and restriction as needed.
- Assist with external communications regarding Lila’s safety initiatives.

What You’ll Need to Succeed
- A PhD in the biological sciences (e.g., molecular biology, virology, computational biology) or physical sciences (e.g., materials science, physics, chemistry, or chemical engineering), or equivalent experience.
- Proficiency in scientific computing related to the biological or physical sciences.
- Familiarity with dual-use research and dissemination issues within relevant safety, regulatory, and governance frameworks (e.g., export controls, biological and chemical conventions).
- Exceptional communication skills to convey complex technical concepts effectively to non-expert audiences.
- Proven ability to lead internal and external teams in developing Lila's perspective on biological and physical risks.
- Demonstrated capacity to collaborate with cross-functional stakeholders (science, AI, product, policy) in a complex environment.

Mar 4, 2026
Lila Sciences
Full-time|$268K/yr - $384K/yr|On-site|Cambridge, MA USA; London, UK; San Francisco, CA USA

Your Contribution at Lila
At Lila, we are assembling a highly skilled and proactive AI safety team that will collaborate with all core departments, including science, model training, and lab integration, to effectively address risks associated with scientific superintelligence. The primary mission of this team is to develop and execute a tailored safety strategy that aligns with Lila's unique objectives and deployment methodologies. This will encompass formulating technical safety strategies, engaging with the broader ecosystem, and producing technical documentation such as risk and capability assessments and safety measures.

Your Responsibilities
- Establish the research and development strategy for Lila’s safety framework concerning biological and physical risks.
- Design and implement capability evaluations to identify scientific risks (both recognized and novel) arising from state-of-the-art scientific models integrated with automated physical laboratories across the biological and physical sciences.
- Lead and coordinate threat modeling sessions with both internal and external scientific experts, including monitoring advancements in technologies and their applications.
- Create and curate high-quality training and testing datasets for evaluations and safety systems.
- Assess risks linked to Lila’s capabilities, considering interactions with the broader ecosystem of capabilities (including general-purpose frontier models and specialized scientific tools).
- Contribute to extensive, high-quality research initiatives when needed for scientific capability evaluation and restriction.
- Engage in external communications regarding Lila’s safety initiatives.

Qualifications for Success
- A PhD in a biological sciences field (e.g., molecular biology, virology, computational biology) or a physical sciences field (e.g., materials science, physics, chemistry, chemical or nuclear engineering), or equivalent experience.
- A proven track record in setting research directions for open issues surrounding dual-use risks in the biological and physical sciences.
- Experience in scientific computing within the biological or physical sciences.
- Understanding of dual-use research and dissemination issues in relation to relevant safety, regulatory, and governance frameworks (e.g., export controls, biological and chemical conventions).
- Excellent communication skills, capable of articulating complex technical concepts to non-specialist audiences.
- Demonstrated leadership in guiding teams of internal and external collaborators in developing Lila's perspective on biological and physical risks.

Mar 4, 2026
Lila Sciences
Full-time|$228K/yr - $358K/yr|On-site|Cambridge, MA USA; London, UK; San Francisco, CA USA

Your Contribution at Lila
At Lila, we are assembling a dynamic and empowered AI safety team dedicated to proactively addressing the potential risks associated with scientific superintelligence. This team will collaborate closely with all core departments, including science, model training, and lab integration, to craft a customized safety strategy that aligns with our unique objectives and deployment methods. Key responsibilities will encompass the development of technical safety strategies, engagement with the broader ecosystem, and the creation of essential technical documentation, including risk assessments and capability evaluations.

Your Key Responsibilities
- Design and execute evaluations to identify scientific risks, covering both established and emerging threats, from state-of-the-art scientific models integrated with automated physical laboratories.
- Develop initial proof-of-concept safety measures, such as machine learning models designed to detect and mitigate unsafe behaviors from scientific AI models and physical laboratory outputs.
- Gain a comprehensive understanding of various model capabilities, primarily within scientific contexts but also extending to non-scientific domains (e.g., persuasion, deception), to shape Lila's overarching safety strategy.
- Engage in high-quality research initiatives as needed to evaluate and restrict scientific capabilities effectively.

Qualifications for Success
- A Bachelor's degree in a relevant technical field (e.g., computer science, engineering, machine learning, mathematics, physics, statistics) or equivalent experience.
- Proficient programming skills in Python and hands-on experience with machine learning evaluation frameworks (such as Inspect) for large-scale evaluations and structured testing.
- Demonstrated experience in constructing evaluations or conducting red-teaming exercises pertaining to CBRN/cyber risks or frontier model capabilities, encompassing both unsafe and benign attributes.
- Background in designing and/or implementing AI safety frameworks at cutting-edge AI enterprises.
- Exceptional ability to communicate intricate technical concepts and issues to non-technical audiences.

Desirable Qualifications
- A Master’s or PhD in a field pertinent to safety evaluations of AI models within scientific areas or another technical discipline.
- Publications in AI safety, evaluations, or model behavior at leading ML/AI conferences (such as NeurIPS, ICML, ICLR, ACL) or in model release documentation.
- Experience exploring risks arising from novel scientific advancements (e.g., biosecurity, computational biology) or utilizing specialized scientific tools (e.g., large-scale foundation models in science).

Mar 4, 2026
Lila Sciences
Full-time|$192K/yr - $272K/yr|On-site|Cambridge, MA USA; San Francisco, CA USA

Lila Sciences is forming a dedicated AI safety team to address the unique risks and challenges posed by scientific superintelligence. The company seeks a Senior or Principal Technical Program Manager to guide the operational side of AI safety research, helping to shape how the team approaches complex and evolving problems.

Role overview
This Technical Program Manager position connects research, engineering, model development, policy, and executive leadership. The work involves translating fast-moving research into structured, accountable plans. While this is not a research role, curiosity about the technical aspects of AI safety is important. The team values clear communication and the ability to bring clarity and structure as the organization expands.

What you will do
- Act as the primary communication link between the AI safety team and technical, research, and scientific groups; share complex results, coordinate resource needs, and establish information flows to keep teams connected.
- Promote accountability within cross-functional, distributed teams, building consensus and trust through open communication and sound judgment.
- Support rapid experimentation and iteration by refining and applying effective program management practices.
- Create clear documentation and reports to communicate vision, track progress, and ensure alignment with company objectives.
- Accurately represent program status and risks, even in uncertain or shifting situations.

Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, Life Sciences, or a related discipline.
- Minimum of 6 years of program or project management experience in technology or life sciences.
- Demonstrated success in program management, leading cross-functional teams, and delivering projects.
- Strong analytical and problem-solving abilities, with skill in turning technical requirements into actionable plans.
- Excellent written and verbal communication skills, including experience preparing executive-level documents, roadmaps, and updates.

Location
This position is based in Cambridge, MA or San Francisco, CA, USA.

Apr 24, 2026
Part-Time Research Manager for AIxBio at CBAI

Cambridge Boston Alignment Initiative

Part-time|$60K/yr - $125K/yr|On-site|Cambridge, Massachusetts

This is a part-time position tailored for researchers, postdoctoral fellows, or professionals eager to contribute to the AIxBio fellowship while balancing other commitments.

About the Cambridge Boston Alignment Initiative (CBAI)
The Cambridge Boston Alignment Initiative is a nonprofit research organization dedicated to advancing research and education aimed at ensuring that society safely and beneficially transitions to advanced AI systems. Our mission includes producing original research and accelerating AI safety initiatives through innovative fellowship programs.

Following the successful launch of our AI Safety Research Fellowship in 2025, which saw its inaugural cohort publish significant findings at the Mechanistic Interpretability Workshop at NeurIPS and contribute to prestigious conferences like ICLR, we are poised for rapid expansion in 2026. We are introducing the AIxBio Research Fellowship, applying our high-touch, mentorship-driven research approach to the critical intersection of AI and biosecurity.

The Role Overview
As a Research Manager, you will collaborate closely with fellows, mentors, biosecurity researchers, AI safety experts, and academic and industry leaders to facilitate pioneering research at the confluence of AI and biosecurity. Your responsibilities will involve evaluating capabilities in biological systems, developing regulatory frameworks to mitigate dual-use research risks, and addressing other research priorities related to AI-enabled biosecurity threats. We welcome candidates with either technical research expertise or experience in policy and governance research. For the ideal candidate, we are flexible on the role's scope and compensation.

Research Management Responsibilities (0.7 FTE)
- Conduct regular one-on-one meetings with fellows, providing essential feedback on their research progress and helping them overcome challenges, including refining research designs, creating literature frameworks, and supporting data collection and analysis.
- Offer in-depth feedback on fellows' research and contribute to fostering an environment that encourages rigor and clarity.
- Connect fellows with relevant resources, literature, and opportunities in the field of AI and biosecurity during and after their fellowship.
- Collaborate with fellows' mentors to establish clear research objectives and facilitate the progression of fellows' research projects.
- Assist in the selection of fellows by reviewing applications and conducting interviews.

Mar 31, 2026
Graphcore
Full-time|On-site|Cambridge, UK

About Graphcore
At Graphcore, we are pioneering the future of artificial intelligence computing. Our team comprises semiconductor, software, and AI specialists with extensive expertise in developing the complete AI compute stack, from silicon and software to large-scale infrastructure. As a proud member of the SoftBank Group, we benefit from substantial long-term investment, enabling us to contribute essential technology to the rapidly evolving SoftBank AI ecosystem. To capture the immense potential of AI, Graphcore is expanding globally, uniting the brightest minds to tackle the most challenging problems, where every individual is empowered to make a significant impact on our company, our products, and the future of AI.

Job Summary
As a Research Scientist at Graphcore, you will play a vital role in advancing AI research by exploring innovative ideas that address significant AI/ML challenges. The evolution of AI has been primarily driven by specialized hardware over the past decade, and we believe that developing hardware-aware AI algorithms and AI-optimized hardware will remain crucial for progress in this exciting domain. We seek candidates who are not only curious scientists but also proficient engineers, equipped with both the theoretical knowledge and practical skills essential for impactful AI research. We welcome applicants with experience in low-power, edge, and embodied AI applications, including robotics, autonomous vehicles, and augmented/virtual reality. Your expertise will contribute to the training and deployment of multimodal AI models in these contexts, focusing on areas such as world models, real-time computer vision, and reasoning over audio and video streams.

The Team
The Graphcore Research team engages in both fundamental and applied research to define the computational needs of machine intelligence and showcase how hardware advancements can lead to the next generation of innovative AI models. We actively publish in leading AI/ML conferences (NeurIPS, ICML, ICLR) and participate in specialized workshops while collaborating with research teams and organizations globally. We take pride in fostering a supportive and collaborative environment, organizing ourselves around individual research interests to collectively solve challenges in domains such as efficient computation, model scaling, and distributed training and inference of AI models across multiple modalities and applications, including sequence- and graph-based data. Our teams are spread across London, Cambridge, and Bristol, with projects and discussions that involve all locations.

Mar 13, 2026
rai
Full-time|On-site|Cambridge, MA

Our Mission
At rai, we are dedicated to addressing the most pressing and foundational challenges in Artificial Intelligence and Robotics. Our goal is to pave the way for future generations of intelligent machines that enhance our daily lives.

Position Overview
We are seeking passionate and innovative Research Scientists with substantial hands-on research experience in one or more of the following areas: Cognitive AI, Athletic AI, Organic Hardware Design, or Robot Ethics. If you're enthusiastic about advancing robotic technology and its applications to improve functionality and effectiveness, we invite you to join our team!

Oct 5, 2022
Lila Sciences
Full-time|On-site|Cambridge, MA USA; San Francisco, CA USA

Join our innovative team at Lila Sciences as an AI Lab Research Engineer. In this role, you will contribute to cutting-edge AI research and development, focusing on enhancing the capabilities of our laboratory systems. Your expertise will help drive forward our mission to revolutionize scientific research through artificial intelligence.

Apr 7, 2026
Cambridge Boston Alignment Initiative
Full-time|$100K/yr - $115K/yr|On-site|Cambridge, Massachusetts

About the Cambridge Boston Alignment Initiative
The Cambridge Boston Alignment Initiative (CBAI) is a nonprofit organization dedicated to pioneering research and education that facilitates a safe and beneficial transition to advanced artificial intelligence systems. Our mission is to produce original research and expedite AI safety initiatives through fellowship programs.

Following the successful launch of our AI Safety Research Fellowship in 2025, where our inaugural cohort made significant contributions including a spotlight paper at the Mechanistic Interpretability Workshop at NeurIPS and placements at prestigious conferences such as ICLR, we are poised for rapid growth in 2026. We are excited to introduce the inaugural AIxBio Research Fellowship, which will apply our mentor-driven research model to the critical intersection of AI and biosecurity.

We invite you to refer candidates and earn a $5,000 reward for any successful hires.

Mar 31, 2026
Harvard University
Full-time|On-site|Cambridge

The Harvard Kennedy School (HKS) is embarking on an ambitious project to establish a premier research enablement environment for public policy. In the coming two years, the School will pilot innovative strategies and solutions to address the growing complexities of research and compliance. This role is essential to supporting these initiatives.

As a vital member of the Research Computing and Data Services (RCDS) team within Library and Research Services (LRS) at the Harvard Kennedy School, and reporting directly to the Director of Research Computing and Data Services, the Data Safety Reviewer will oversee the Data Safety process and queue. You will guide Principal Investigators and researchers through the Data Safety research compliance process, advising on best practices for research data management in alignment with the Harvard Research Data Security Policy and Harvard IT Enterprise Policy. You will also collaborate with IT, legal, and other experts as needed, particularly when ethical, security, policy, or legal considerations arise during data collection, processing, and dissemination. Close collaboration with the Research Data Steward, the Director of Research Computing and Data Services, and the HKS Chief Research Data and Privacy Officer will be essential to effecting meaningful improvements in the Data Safety process, the management of regulated and contractual data, and other crucial areas impacting HKS researchers handling sensitive data.

We are looking for candidates who possess:
- The capability to navigate a distributed organization, often requiring referrals to services related to research data management.
- An aptitude for partnering with research, business, legal, and technical subject matter experts to support researchers while appropriately managing risks.
- A proactive attitude and proven ability to take initiative in resolving cross-functional issues across organizational boundaries.
- Demonstrated proficiency in collaborating effectively within a team environment.
- A strong commitment to fostering belonging, community, and connection.

At LRS, we embrace a culture of continuous learning. You will be encouraged to pursue new learning opportunities relevant to this role and to share your knowledge with others in LRS. HKS, including LRS, is dedicated to cultivating a welcoming and supportive community among staff, faculty, and students. This role will contribute to building such a community and nurturing an environment that prioritizes belonging, community, and connection.

Mar 3, 2026
Senior Research Manager

Harvard University

Full-time|On-site|Cambridge

Job Summary
The Center for Education Policy Research (CEPR) at Harvard University is seeking a highly skilled Senior Research Manager. This pivotal role involves overseeing and enhancing the analytical efforts of the Center, which collaborates with school districts, charter school networks, and state education agencies to apply rigorous research methodologies and data analysis to inform strategic management and policy decisions. CEPR is dedicated to transforming education through research and evidence, and believes that all students can succeed when education leaders rely on data-driven decision-making rather than intuition or untested assumptions. The Senior Research Manager will play a crucial role in this mission by leading innovative quantitative research on key issues and developing agile research models that better equip education leaders.

The selected candidate will spearhead the development, management, and execution of high-quality analyses on significant educational topics. This role will likely involve managing a research partnership with an education nonprofit organization in the Philippines and its governmental partners, focusing on school improvement and research agenda development. Additionally, the Senior Research Manager will support various projects within CEPR and supervise Research Analysts across multiple initiatives.

Duties & Responsibilities
As a Senior Research Manager, your primary responsibilities will include:
- Leading analytics across diverse projects throughout the research life cycle, which entails formulating research questions, designing studies, writing grant proposals, collecting data, conducting analyses, and authoring reports and academic papers.
- Managing research analysts while providing professional development and training workshops to enhance their research competencies and professional skills through mentorship and collaborative learning.
- Delivering technical training to analysts and staff from local education agencies on research methodologies and appropriate statistical methods.
- Facilitating joint meetings for the research team, project directors, and principal investigators.
- Collaborating closely with executive leadership, principal investigators, and representatives from partner organizations to refine ideas and strategies for new projects, identify funding opportunities, and design research frameworks.
- Ensuring the effective execution of research projects by developing comprehensive work plans, adhering to project timelines, and coordinating tasks for ongoing and future CEPR projects.
- Building professional relationships of trust with internal and external stakeholders, including data administration personnel, program staff, and principal investigators.
- Ensuring the integrity and quality of all research outputs.

Feb 27, 2026
Lila Sciences
Full-time|$176K/yr - $304K/yr|On-site|Cambridge, MA USA

Your Role at Lila Sciences
Join our innovative Physical Sciences division, where you will spearhead the development of intelligent agent-driven systems designed to enhance AI-accelerated and AI-orchestrated process engineering across diverse industrial applications. In this pivotal role, your mission will be to create methodologies empowering AI agents to reason, design, simulate, optimize, and operate complex physical and chemical processes using both traditional and machine-learning-driven process engineering tools. You will focus on building agentic infrastructure that enables AI systems to plan and execute multi-step process engineering workflows, including process synthesis, flowsheet generation, steady-state and dynamic simulation, control strategy design, and techno-economic evaluation. Your contributions will significantly influence how Lila's scientific superintelligence addresses real-world challenges through closed-loop autonomous process engineering.

Your Contributions
- Design and implement agentic frameworks supporting comprehensive process engineering workflows, encompassing process setup, simulation, optimization, and analysis.
- Create AI agents capable of autonomously planning, executing, and refining process engineering tasks using existing tools such as steady-state and dynamic simulators, optimizers, and data systems.
- Investigate agentic strategies for advanced objectives, including process intensification, control co-design, real-time optimization, and closed-loop learning informed by operational data.
- Enhance the robustness, interpretability, and reproducibility of agent-driven process engineering workflows; develop internal tools for debugging, observability, validation, and auditability.
- Collaborate with interdisciplinary teams to apply agentic process engineering across a wide range of industrial applications.

Mar 4, 2026
Clinical Safety Data Manager

Integrated Resources, Inc.

Full-time|On-site|Cambridge

Join our team as a Clinical Safety Data Manager, where you will play a pivotal role in ensuring the safety and efficacy of our clinical trials. You will manage safety data, conduct thorough analyses, and collaborate closely with cross-functional teams to maintain the highest standards of patient safety.

Oct 29, 2015
Lila Sciences
Full-time|$224K/yr - $336K/yr|On-site|Cambridge, MA USA; San Francisco, CA USA

Your Impact at Lila
As a Senior or Principal Research Engineer specializing in Synthetic Data, you will play a pivotal role in shaping the vision, roadmap, and execution of our synthetic data initiatives. Your responsibilities will span from asset generation and simulation to integrating machine learning training and achieving measurable enhancements in model performance. Collaborating closely with our Research Engineering team, you will design, generate, and implement artificial datasets aimed at training, testing, and refining Lila’s platform to achieve our strategic objectives.

What You Will Build
- Define and refine the synthetic data strategy along with a comprehensive multi-quarter roadmap.
- Create evaluation frameworks that effectively connect synthetic interventions with genuine model performance.
- Establish high standards for asset quality, diversity, thorough documentation, and reproducibility while fostering a robust review culture.

What You Will Need to Succeed
- Over 6 years of experience in applied ML/ML systems, with at least 3 years leading industry initiatives, showcasing a strong track record in advanced algorithms and frameworks designed for large-scale synthetic data generation.
- More than 8 years of experience working with contemporary ML workflows, including Python, PyTorch, dataset tools, training loops, and evaluation frameworks; adept at profiling and optimizing GPU-intensive pipelines.

Bonus Points For
- A proven history of constructing synthetic datasets from source data to significantly enhance model performance in specific domains.
- Experience with instruction fine-tuning and hill-climbing techniques.
- Ability to translate product requirements and feedback into a scalable synthetic data generation pipeline.
- Knowledge of quantization, distillation, routing, mixture-of-experts, and cost optimization at scale.
- Experience in compliance-heavy settings (HIPAA, PCI, FedRAMP) and with on-premises/VPC deployments.

About Lila
Lila Sciences stands at the forefront of innovation as the world’s first scientific superintelligence platform and autonomous lab, dedicated to life sciences, chemistry, and materials science. We are ushering in an era of limitless discovery by harnessing AI to enhance every facet of the scientific method. Our mission is to empower scientists to tackle humanity's most pressing challenges in health, climate, and sustainability at an unprecedented pace and scale. Discover more about our mission at www.lila.ai. If this sounds like an environment in which you would thrive, we encourage you to apply even if your experience doesn't perfectly align with every requirement listed.

Mar 25, 2026
Clinical Safety Data Manager

Integrated Resources, Inc.

Full-time|On-site|Cambridge

Join our dynamic team at Integrated Resources, Inc. as a Clinical Safety Data Manager. In this pivotal role, you will oversee the management and analysis of clinical safety data, ensuring compliance with regulatory standards and enhancing patient safety. Your expertise will be vital in developing safety databases and implementing data management processes that support clinical trials.

Dec 21, 2015
Research Scientist

Toyota Research Institute

Full-time|On-site|Los Altos, CA; Cambridge, MA

Join us at the Toyota Research Institute (TRI) as we strive to enhance the quality of human life through innovative technologies. Our mission is to create groundbreaking tools that enrich human experiences. To spearhead this transformative evolution in mobility, we've assembled a stellar team that is pushing the boundaries in artificial intelligence, robotics, driving, and material sciences.

The Team
Within TRI's Energy and Materials division, the Future Factory team is dedicated to pioneering advanced tools and methodologies that drive flexibility and efficiency in Toyota's product design and manufacturing processes. Our goal is to expedite the journey towards an emissions-free future. We are developing comprehensive AI systems capable of reasoning through the creation of physical objects, from initial design concepts to the assembly of actual components, and building the necessary infrastructure to train and evaluate these systems on a large scale.

The Opportunity
We are seeking a Research Scientist to help us build intelligent systems for physical assembly. This role is an excellent fit for recent PhD graduates with a proven track record in implementation and a deep curiosity about the manufacturing process. As a member of our research team, you will design and implement learning pipelines from the ground up, conduct experiments to assess various architectural, data, and algorithmic alternatives, and influence the application of modern machine learning to the challenges of robotic assembly. Your work will intersect policy learning, reinforcement learning, and physical reasoning, allowing you to explore the integration of large language models and agentic infrastructure in solving real-world manufacturing challenges.

Apr 2, 2026
Associate Director of Research

Harvard University

Full-time|On-site|Cambridge

Harvard University is seeking an innovative and strategic Associate Director of Research to lead groundbreaking research initiatives. The ideal candidate will possess a deep understanding of research methodologies, strong leadership skills, and a passion for academic excellence. This role involves collaborating with faculty and students to enhance research output and drive impactful scholarship.

Feb 13, 2026
Clinical Trials Safety Data Manager

Integrated Resources, Inc.

Full-time|On-site|Cambridge

Join Integrated Resources, Inc. as a Clinical Trials Safety Data Manager, where you will play a crucial role in ensuring the safety and efficacy of clinical trials. In this dynamic position, you will oversee data management processes, ensuring the integrity and accuracy of safety data collected during trials. We are seeking a detail-oriented professional with a strong background in clinical data management and an understanding of regulatory requirements related to clinical trials. You will collaborate with cross-functional teams to support our mission of advancing medical research and improving patient outcomes.

Dec 8, 2015
