Data Engineer For Safeguards At Anthropic London jobs in London – Browse 10,302 openings on RoboApply Jobs


Open roles matching “Data Engineer For Safeguards At Anthropic London” with location signals for London. 10,302 active listings on RoboApply Jobs.


1 - 20 of 10,302 Jobs
Anthropic
On-site|London, UK

Join Anthropic as a Data Engineer focusing on safeguards and play a pivotal role in driving the development of innovative AI systems. You will collaborate with cross-functional teams to ensure the reliability and safety of our data processes. This position offers a unique opportunity to shape the future of AI responsibly.

Mar 19, 2026
Multiverse
Full-time|On-site|London

We are seeking a dedicated and passionate Safeguarding Specialist to join our team at Multiverse. In this pivotal role, you will be responsible for ensuring the safety and well-being of our learners, implementing effective safeguarding policies, and promoting a culture of safety within our organization. Your expertise will help us create a safe environment for all learners, enabling them to thrive and succeed. You will work closely with various stakeholders to identify risks and develop strategies to mitigate them.

Apr 1, 2026
Anthropic
Full-time|On-site|London, UK

About Anthropic
At Anthropic, our mission is to develop AI systems that are reliable, interpretable, and steerable. We believe that AI should be safe and advantageous for our users and society as a whole. Our rapidly expanding team consists of dedicated researchers, engineers, policy experts, and business leaders who collaborate to create beneficial AI solutions.

About the Role
We are seeking a Community Lead to spearhead the strategy and execution of our developer and builder community initiatives across the EMEA region. This is an exciting and impactful opportunity where you will take charge of customizing and implementing centrally developed playbooks to suit local markets, executing community activations directly, and discovering new opportunities that foster a B2C2B growth strategy throughout Europe.

You will work closely with our central Community team in the US, which oversees program operations, playbook development, and strategic direction. Your primary focus will be on market-specific execution, adaptation, and growth: partnering with EMEA sales, marketing, and product teams to formulate market entry and expansion strategies for community programs while determining the optimal deployment of initiatives based on regional priorities. In addition to adapting existing programs, you will be tasked with identifying and promoting new opportunities, whether that involves innovative formats, partnerships, or activations that address specific needs or gaps in local markets not currently covered by centralized resources.

The essence of this role lies in scaling a community flywheel: recruiting and empowering local ambassadors, organizing regular events, and transforming community engagement into tangible business results. Your goal will be to take what has been established centrally and effectively adapt it for key EMEA markets, achieving deeper engagement and faster execution than a remote-first approach typically permits.

Feb 12, 2026
Anthropic
Full-time|Remote|London, UK; Ontario, CAN; Remote-Friendly, United States; San Francisco, CA

The Anthropic Fellows Program in Economics connects researchers and practitioners interested in the economic dimensions of artificial intelligence. Fellows join a community focused on understanding how economic systems interact with new technologies.

Program structure
Fellows collaborate with established experts and peers, taking part in research that examines the relationship between economic principles and AI development. The program values innovative approaches and practical solutions to real-world economic questions.

Research focus
Participants investigate how advances in AI influence economic systems, and how economic thinking can inform the development and deployment of AI technologies. The environment supports original research and encourages practical problem-solving.

Locations
- London, UK
- Ontario, Canada
- Remote-friendly, United States
- San Francisco, CA

Apr 24, 2026
Anthropic
On-site|London, UK

Join Anthropic as a Software Engineer within our Safeguards Infrastructure team, where you'll play a vital role in developing systems that ensure the safety and ethical use of AI technologies. In this position, you'll be responsible for creating and maintaining robust mechanisms that monitor AI models, prevent misuse, and prioritize user well-being. Your work will include designing infrastructure for data management and evaluation, as well as advancing multi-layered defenses that enhance our safety protocols at scale.

Jan 29, 2026
Wise
Full-time|On-site|London

Wise is looking for a Senior Analytics Manager focused on Safeguarding and Ledger to join the London team. This position leads a group of analysts, working to shape data-driven strategies and improve performance metrics tied to safeguarding requirements.

Role overview
The Senior Analytics Manager will oversee analytics projects that support compliance and operational excellence. This role involves guiding a team, setting priorities, and ensuring that safeguarding measures are reflected in data processes and reporting.

Collaboration and impact
Collaboration with cross-functional teams is central to this role. The manager will deliver insights that influence business decisions and support Wise's commitment to compliance and effective ledger management.

Requirements
- Experience leading analytics teams
- Strong background in data-driven strategy and performance measurement
- Knowledge of safeguarding and ledger processes
- Ability to work closely with multiple teams to deliver actionable insights

Apr 29, 2026
Checkout.com
Full-time|On-site|London

Role overview
Checkout.com seeks a CASS Specialist - Associate in Safeguarding for its London office. This position supports the Safeguarding team in maintaining compliance with Client Assets Sourcebook (CASS) regulations. The main goal is to help protect client assets and ensure the company meets regulatory standards.

What you will do
- Work with teams across the business to establish and maintain safeguarding controls for client assets
- Monitor compliance with CASS regulations and contribute to ongoing oversight
- Assist in improving internal processes related to safeguarding client funds

Professional development
This role offers the opportunity to build expertise in financial regulations and safeguarding practices. Working at Checkout.com provides exposure to a specialized area of compliance within the fintech sector, supporting long-term career growth.

Apr 22, 2026
Airwallex
Full-time|On-site|UK - London

About Airwallex
Airwallex is an innovative global financial platform that revolutionizes payments and financial management for businesses worldwide. With our proprietary technology and infrastructure, we serve over 200,000 businesses, including notable names like Brex, Rippling, Navan, Qantas, and SHEIN. Our integrated solutions encompass everything from business accounts and payments to spend management and embedded finance.

Founded in Melbourne, we take pride in our diverse team of over 2,000 talented professionals across 26 global offices. As a company valued at US$8 billion and supported by esteemed investors such as T. Rowe Price, Visa, Mastercard, and Sequoia, we are at the forefront of shaping the future of global finance. If you're eager to tackle ambitious challenges and make a significant impact, we invite you to join our team.

Attributes We Value
We seek dynamic individuals with a founder-like spirit who are driven by the desire for substantial impact, rapid learning, and genuine ownership. You possess a depth of role-related expertise, are motivated by our mission and operating principles, and are able to act swiftly with sound judgment. Your curiosity drives you to explore deeply, and you effectively balance speed with thoroughness.

Collaboration and humility are key to your approach; you transform innovative ideas into tangible products and ensure that tasks are executed effectively from start to finish. You leverage AI to enhance efficiency and problem-solving capabilities. In this role, you'll confront intricate, high-visibility challenges alongside exceptional colleagues, furthering your career as we redefine global banking. If you resonate with this vision, let's create the future together.

The Opportunity
We are excited to welcome a Treasury Safeguarding Manager to our team. This pivotal role will be instrumental in establishing robust payment reconciliation frameworks and safeguarding measures for our global payment processes, ensuring the protection of customer funds and seamless financial operations. You will provide senior management with valuable insights regarding cash positions, compliance with safeguarding protocols, and reconciliation performance, enabling informed risk management and supporting scalable growth, particularly within the UK market.

The Manager will collaborate closely with Product & Engineering, Operations, and Legal & Compliance teams to enhance reconciliation tools, implement safeguarding controls, and drive continuous improvement in our financial processes.

Mar 17, 2026
Anthropic
Full-time|Remote|London, UK; Ontario, CAN; Remote-Friendly, United States; San Francisco, CA

Join the prestigious Anthropic Fellows Program, focused on advancing AI Safety. As a Fellow, you will collaborate with distinguished researchers and contribute to groundbreaking projects that prioritize the ethical development and deployment of artificial intelligence.

Apr 10, 2026
Anthropic
Full-time|On-site|London, UK

Anthropic seeks a Research Engineer specializing in Machine Learning, with a focus on Reinforcement Learning (RL Velocity), for its London office. This role supports ongoing AI research and contributes to building advanced machine learning systems.

Key responsibilities
- Work alongside researchers and engineers to solve complex reinforcement learning problems
- Participate in designing and developing new machine learning models and systems
- Shape solutions that directly influence Anthropic's research objectives

Collaboration and team environment
Join a team of skilled colleagues dedicated to AI advancement. Team members regularly exchange ideas, review each other's work, and support one another to create effective solutions.

Apr 23, 2026
Catch22
Full-time|On-site|London

Role overview
Catch22 is hiring a Safeguarding and Special Educational Needs and Disabilities (SEND) Lead in London. This position shapes the educational experience for young people with SEND, focusing on their safety and support throughout their learning journey.

What you will do
- Work directly with schools and local communities to identify and address the needs of children with SEND.
- Champion safeguarding practices, making sure every child receives appropriate protection and care.
- Collaborate with educators and families to build effective support plans for students.

Location
This role is based in London.

Apr 15, 2026
Anthropic
Full-time|On-site|London, UK

About Anthropic
Anthropic builds AI systems with a focus on reliability, interpretability, and steerability. The company's mission centers on making AI safe and beneficial for both individuals and society. The team includes researchers, engineers, policy experts, and business leaders working together to advance responsible AI development.

Role Overview: Software Engineer, Safeguards Foundations – Internal Tooling
The Safeguards team at Anthropic is responsible for detecting, reviewing, and addressing potential misuse of the company's AI models. Within this team, the Foundations group develops the infrastructure, platforms, and internal tools that support these safeguards across the organization.

This role focuses on improving internal tooling for human review. The work covers case management, labeling workflows, investigative processes, and enforcement interfaces used daily by analysts and policy specialists. Although these tools operate behind the scenes, their reliability and clarity directly affect how quickly Anthropic can spot harmful behaviors, make enforcement decisions, and provide feedback for model training.

The position involves close collaboration with Trust & Safety operations, policy, and detection-engineering teams. The goal: turn complex operational needs into robust, maintainable software that supports Anthropic's safety mission.

What You Will Do
- Enhance and maintain internal tools for human review, including case management and enforcement interfaces
- Work across the stack to deliver reliable, user-friendly products for internal stakeholders
- Partner with operations, policy, and engineering teams to understand workflows and translate them into effective software solutions
- Support the organization's ability to detect and respond to AI misuse efficiently

Location
London, UK

Apr 20, 2026
Anthropic
Full-time|On-site|London, UK

Join Anthropic as a Senior Software Engineer specializing in Inference, where you will develop cutting-edge machine learning systems and inference algorithms. You will play a crucial role in enhancing our AI products and ensuring they are reliable and efficient.

Mar 12, 2026
Anthropic
Full-time|Hybrid|London, UK; Ontario, CAN; Remote-Friendly, United States; San Francisco, CA

Join the prestigious Anthropic Fellows Program, where you'll have the opportunity to delve into cutting-edge research in Reinforcement Learning. This program is designed for individuals passionate about advancing AI safety and developing innovative solutions. As a fellow, you will collaborate with a team of experts, engage in impactful projects, and contribute to a progressive research environment.

Apr 10, 2026
Anthropic
Full-time|On-site|London, UK

About Anthropic
At Anthropic, we are dedicated to pioneering safe, interpretable, and controllable AI systems. Our goal is to ensure that AI technologies are beneficial for users and society at large. We have assembled a rapidly expanding team of passionate researchers, engineers, policy specialists, and business leaders working collaboratively to create advanced AI systems that serve humanity well.

As a leader in AI research, Anthropic is committed to developing ethical, powerful artificial intelligence. We aim to align transformative AI systems with human values. We invite you to join our Pretraining team as a Research Engineer, where you will be instrumental in creating the next generation of large language models. This role allows you to operate at the crossroads of cutting-edge research and practical engineering, playing a key part in building safe, steerable, and trustworthy AI systems.

Key Responsibilities:
- Conduct innovative research and develop solutions in areas such as model architecture, algorithms, data processing, and optimization techniques.
- Independently lead small-scale research projects while partnering with colleagues on larger initiatives.
- Design, execute, and analyze scientific experiments to deepen our understanding of large language models.
- Enhance and scale our training infrastructure to boost efficiency and reliability.
- Develop and refine development tools to improve team productivity.
- Contribute across the entire stack, from low-level optimizations to high-level model design.

Feb 17, 2026
Anthropic
Full-time|On-site|London, UK

Join Anthropic as a Research Engineer specializing in the Science of Scaling, where you will play a pivotal role in advancing cutting-edge AI technologies. Collaborate with a dynamic team to explore innovative solutions that enhance our understanding of scalability in artificial intelligence systems.

Mar 12, 2026
Anthropic
Full-time|On-site|London, UK

Join Anthropic as a Senior Engineer specializing in Datacenter Server Lifecycle. In this pivotal role, you will be responsible for overseeing the complete lifecycle of our datacenter servers, ensuring optimal performance and reliability. You will collaborate with cross-functional teams to design, implement, and maintain state-of-the-art server infrastructure.

Mar 12, 2026
Anthropic
On-site|London, UK

About Anthropic
At Anthropic, we are dedicated to the advancement of safe, interpretable, and steerable AI systems. Our mission is to ensure that AI technologies remain beneficial to users and society as a whole. Our rapidly expanding team consists of passionate researchers, engineers, policy experts, and business leaders collaborating to create cutting-edge AI solutions.

About the Role:
As a Research Engineer focused on Alignment Science, you will design and execute sophisticated machine learning experiments aimed at understanding and guiding the behavior of powerful AI systems. You are driven by a desire to make AI systems helpful, honest, and harmless, and you recognize the complexities associated with human-level capabilities. This role requires a blend of scientific inquiry and engineering expertise. You will engage in exploratory research concerning AI safety, addressing potential risks associated with advanced future systems (classified as ASL-3 or ASL-4 according to our Responsible Scaling Policy), frequently collaborating with teams focused on Interpretability, Fine-Tuning, and the Frontier Red Team.

For insights into our ongoing research, visit our blog. We are currently looking to expand our London team in the following research areas:
- AI Control: Developing methodologies to ensure that advanced AI systems remain safe and non-threatening in unpredictable or adversarial environments.
- Alignment Stress-testing: Implementing innovative alignment stress-testing frameworks to evaluate AI system resilience.

Jan 29, 2026
Anthropic
Full-time|Remote|London, UK; Ontario, CAN; Remote-Friendly, United States; San Francisco, CA

Join the Anthropic Fellows Program where you will delve into the exciting world of Machine Learning Systems & Performance. This unique opportunity allows you to work alongside some of the brightest minds in AI research and development, tackling complex challenges and contributing to groundbreaking projects that aim to enhance the capabilities of machine learning systems.

Apr 10, 2026
Anthropic
Remote|London, UK; Ontario, CAN; Remote-Friendly, United States; San Francisco, CA

Join Anthropic as an AI Safety Fellow, where our mission is to develop reliable and interpretable AI systems that prioritize safety and societal benefits. This unique opportunity allows you to engage in cutting-edge AI safety research for four months, with guidance from leading experts in the field. You will utilize external infrastructures to undertake an empirical project aligned with our research goals, culminating in a public output such as a research paper. We foster a collaborative environment, connecting you with the broader AI safety research community, and providing generous stipends and resources to support your research endeavors.

Jan 29, 2026
