AI Research Engineer jobs in London – Browse 3,216 openings on RoboApply Jobs

AI Research Engineer jobs in London

Open roles matching “AI Research Engineer” in London. 3,216 active listings on RoboApply Jobs.


Showing 1–20 of 3,216 jobs
AI Research Engineer

Xaira Therapeutics

Full-time|Hybrid|London, England, United Kingdom / Remote

About Xaira Therapeutics
Xaira Therapeutics is a pioneering biotech startup dedicated to harnessing the power of artificial intelligence to revolutionize drug discovery and development. We are at the forefront of creating generative AI models aimed at designing protein and antibody therapeutics, facilitating the development of treatments for historically challenging molecular targets. Our innovative approach also includes the development of foundational models for biological processes and diseases, enhancing target identification and patient stratification. Through these groundbreaking technologies, we strive to unlock novel therapies and improve drug development success rates. With headquarters in the San Francisco Bay Area, Seattle, and London, we are positioned to make a significant impact in the biotech field.

Position Overview
We are on the lookout for passionate and driven individuals to join our team as AI Research Engineers. We embrace candidates from diverse backgrounds and experiences, believing that varied perspectives strengthen our ability to solve complex challenges. Our London office, located near Old Street, fosters a highly collaborative environment where teamwork is essential to our success. We promote a hybrid work culture built on trust, with team members typically working in the office three days a week.

Key Responsibilities
- Demonstrate industry experience as a research engineer within an AI-focused organization.
- Exhibit enthusiasm for collaboration, learning, and teaching while tackling complex problems as part of a team.

Desirable Qualifications
We value diverse experiences and recognize that each individual’s journey is unique. While the following qualifications are ideal, they are not mandatory:
- Master’s degree or PhD in an AI-related discipline.
- Contributions to public codebases or GitHub repositories.
- Experience in building and training neural networks.
- Familiarity with distributed training and inference.
- Expertise in profiling and optimizing large-scale AI models.
- Knowledge of BioAI or related fields.

If you possess a strong drive to utilize AI in advancing drug discovery and enhancing human health, we invite you to apply and join us in our mission to create a positive impact in the world.

Mar 4, 2026
hyperexponential
Full-time|On-site|London

Join our cutting-edge AI team at hyperexponential as a Research Engineer. In this role, you will be at the forefront of developing innovative AI solutions that drive efficiency and transformation across various sectors.

As a Research Engineer, you will collaborate closely with data scientists, software engineers, and domain experts to design, implement, and optimize AI algorithms. Your insights will directly contribute to advancing our product offerings and enhancing client experiences.

If you are passionate about harnessing AI technology to solve real-world problems, we want to hear from you!

Apr 8, 2026
Apollo Research
Full-time|On-site|London

Application Deadline: We are actively interviewing candidates and aim to fill this position promptly with the right individual.

ABOUT THE ROLE
At Apollo Research, we conduct evaluations that meticulously assess the risks associated with advanced AI systems. Collaborating with leading laboratories such as OpenAI, Anthropic, and Google DeepMind, you will have the unique opportunity to engage with cutting-edge AI models ahead of their public release. The ideal candidate possesses a passion for rigorously testing state-of-the-art AI technologies and excels in creating and automating efficient evaluation pipelines.

YOUR RESPONSIBILITIES WILL INCLUDE
- Conducting pre-deployment evaluation campaigns on the world's most advanced AI systems. Our partnerships with various labs provide access to a wide range of models that no single lab can offer, allowing you to be among the first to engage with new models.
- Exploring AI cognition by analyzing extensive model transcripts to identify novel behavioral patterns that have yet to be documented. These insights can be both surprising and enlightening, including phenomena like non-standard language and reward-seeking reasoning as discussed in our anti-scheming paper.
- Developing new evaluations for frontier risks, creating innovative test environments, and scaling these across multiple scenarios.
- Collaborating with leading AI developers to share your insights, receive feedback, and ensure your evaluations influence the deployment strategies of the most advanced AI systems.
- Optimizing and automating the evaluation pipeline. We utilize automation in building, executing, and analyzing evaluations. As agent capabilities evolve rapidly, you will have the autonomy to reshape the pipeline to keep pace with these advancements.

KEY REQUIREMENTS

Feb 13, 2026
AISI
Full-time|On-site|London, UK

AISI seeks a Research Engineer or Research Scientist to focus on Model Transparency in London, UK. The position involves research aimed at making AI models more understandable and accountable. The goal is to clarify complex behaviors within these systems and help build trust in their outcomes.

Key responsibilities
- Research new ways to improve the interpretability of AI models
- Create methods and tools that help explain how models make decisions
- Tackle challenges in understanding and communicating model behavior
- Contribute to initiatives that support greater accountability in AI

Location
This role is based in London, UK.

Apr 24, 2026
Lightning AI
Full-time|$180K/yr - $250K/yr|Hybrid|London, England, United Kingdom

Lightning AI builds tools that help researchers and organizations bring AI projects from idea to production. Known for PyTorch Lightning, the team focuses on making it easier to develop, train, and deploy AI systems. Since merging with Voltage Park, Lightning AI combines developer-focused software with scalable compute resources. The platform supports experimentation, training, and production inference, while emphasizing security and control. Offices are located in London, New York City, San Francisco, and Seattle. Investors include Coatue, Index Ventures, Bain Capital Ventures, and Firstminute.

Values
- Move Fast: Break big challenges into achievable tasks and act quickly.
- Focus: Work as one team to deliver well-crafted features, concentrating on a single goal at a time.
- Balance: Support healthy work habits and value rest for long-term results.
- Craftsmanship: Aim for excellence and innovation, paying close attention to detail.
- Minimal: Keep solutions simple and disciplined, avoiding unnecessary complexity.

Role overview
This hybrid Research Engineer position is based in London. The role centers on optimizing training and inference workloads on compute accelerators and clusters, with a focus on the Lightning Thunder compiler and the PyTorch Lightning ecosystem. Work will blend deep learning research, compiler development, and large-scale system optimization. The Research Engineer will help build foundational software that improves model performance and efficiency, shaping the future of machine learning infrastructure. This position reports directly to the Tech Lead and is part of the Engineering Team. Flexible work options are available.

Apr 24, 2026
Research Engineer

Lightning AI

Full-time|$180K/yr - $250K/yr|Hybrid|London, England, United Kingdom; New York, New York, United States; San Francisco, California, United States

About Lightning AI
Founded in 2019, Lightning AI is the driving force behind PyTorch Lightning. We create a comprehensive platform for the development, training, and deployment of AI systems, streamlining the journey from innovative research to impactful production. Our merger with Voltage Park, a neocloud and AI factory, enhances our offerings by integrating developer-centric software with efficient, large-scale computing resources. We provide teams with essential tools for experimentation, training, and production inference, ensuring security, observability, and control are integral to our solutions. We cater to individual researchers, startups, and established enterprises, with a global presence including offices in New York City, San Francisco, Seattle, and London. Lightning AI is supported by esteemed investors such as Coatue, Index Ventures, Bain Capital Ventures, and Firstminute.

Our Values
- Move Fast: We prioritize speed and precision, deconstructing complex challenges into manageable tasks.
- Focus: We tackle one objective at a time with care, working collaboratively to deliver features accurately.
- Balance: We believe in maintaining a healthy work-life balance, promoting sustained performance through rest and recovery.
- Craftsmanship: We strive for excellence and take pride in the details that contribute to our innovation.
- Minimal: We embrace simplicity in our innovations, focusing on what truly matters.

Role Overview
We are looking for an accomplished Research Engineer to enhance the optimization of training and inference workloads using compute accelerators and clusters, particularly through the Lightning Thunder compiler and the broader PyTorch Lightning ecosystem. This role exists at the intersection of deep learning research, compiler development, and large-scale system optimization. You will be instrumental in advancing technology that enhances model performance and efficiency, establishing essential software that will influence the entire machine learning landscape. As part of the Engineering Team, you will report to our Tech Lead. This hybrid role is based in our offices located in New York City, San Francisco, or London.

Mar 13, 2026
Anthropic
Full-time|On-site|London, UK

About Anthropic
At Anthropic, we are dedicated to pioneering safe, interpretable, and controllable AI systems. Our goal is to ensure that AI technologies are beneficial for users and society at large. We have assembled a rapidly expanding team of passionate researchers, engineers, policy specialists, and business leaders working collaboratively to create advanced AI systems that serve humanity well.

As a leader in AI research, Anthropic is committed to developing ethical, powerful artificial intelligence. We aim to align transformative AI systems with human values. We invite you to join our Pretraining team as a Research Engineer, where you will be instrumental in creating the next generation of large language models. This role allows you to operate at the crossroads of cutting-edge research and practical engineering, playing a key part in building safe, steerable, and trustworthy AI systems.

Key Responsibilities:
- Conduct innovative research and develop solutions in areas such as model architecture, algorithms, data processing, and optimization techniques.
- Independently lead small-scale research projects while partnering with colleagues on larger initiatives.
- Design, execute, and analyze scientific experiments to deepen our understanding of large language models.
- Enhance and scale our training infrastructure to boost efficiency and reliability.
- Develop and refine development tools to improve team productivity.
- Contribute across the entire stack, from low-level optimizations to high-level model design.

Feb 17, 2026
AI Researcher

Hudson River Trading (HRT)

Full-time|$200K/yr - $300K/yr|On-site|London, United Kingdom; New York, NY, United States; Seattle, Washington, United States

Join Hudson River Trading (HRT) as an AI Research Scientist in our cutting-edge HAIL (HRT AI Labs) team. This team is dedicated to creating and refining advanced models that empower our trading operations, significantly influencing our market strategies. We are on a mission to develop 'foundation models for markets' that analyze extensive market data to forecast future trends. As an integral member of our small, agile team, you will have the freedom to explore innovative research paths and contribute to impactful solutions. Our state-of-the-art research infrastructure, featuring high GPU-to-researcher ratios, and our robust support teams will enable you to bring your vision to life. Your contributions will directly affect our business outcomes, tackling complex challenges without straightforward solutions. You will engage in enhancing every aspect of our models, from data featurization to architectural design and training methodologies, ultimately influencing trading decision-making.

Feb 9, 2026
Moneybox
Full-time|On-site|London

About Moneybox
At Moneybox, we strive to empower individuals to enrich their lives. We believe that wealth is not merely about money but rather about having the resources for more: more freedom, opportunities, and peace of mind. As an award-winning wealth management platform, we assist over 1.5 million users in building their financial futures through saving, investing, purchasing homes, and planning for retirement.

Job Overview
We are developing Aurora, an innovative AI system aimed at guiding customers to achieve optimal financial outcomes. This role presents a significant technical challenge: how to effectively provide reliable guidance to customers who may have incomplete or uncertain information regarding their financial situations and goals, all while navigating a regulated environment where decisions must be auditable and traceable.

Key challenges include efficiently addressing uncertainties in customer data through active information gathering, formulating questions at the right times without overwhelming users, and translating natural language policies and regulations into formal optimization logic that is both accurate and inspectable. Additionally, you will need to integrate learned and symbolic components to ensure that the overall system operates reliably, degrades gracefully, and remains comprehensible to human users, all without incurring excessive engineering costs on the specialized elements of the system.

We hold strong hypotheses and established architectural plans for these challenges, yet we remain open to revising our approaches when presented with compelling arguments or new evidence. If you have insights that could improve our strategies, we welcome your input.

Our models primarily operate internally. Our development process utilizes Databricks on Azure, with deployments conducted via Databricks or directly on Azure Kubernetes Service (AKS).

This role represents the pinnacle of research within our ML team. You will report directly to the Director of AI and Decision Intelligence and collaborate with a principal data scientist, senior ML engineer, senior data scientist, and two ML engineers.

Feb 27, 2026
Apollo Research
Full-time|On-site|London

Application Deadline: We are actively conducting interviews and aim to onboard the right candidate promptly.

ABOUT THE ROLE
At Apollo Research, we are embarking on an ambitious journey to establish a groundbreaking field known as the “Science of Scheming.” We are seeking passionate Research Scientists and Engineers eager to contribute to the creation of a new hard science from the ground up.

YOUR RESPONSIBILITIES
- Collaborate with leading AI innovators. Our partnerships with various labs provide access to an extensive range of models that no single AI lab can offer. Your contributions will significantly influence the development and deployment of advanced AI systems.
- Conduct in-depth studies of reinforcement learning dynamics that drive the emergence of reward-seeking behaviors, evaluation awareness, and misaligned preferences. Design and train model organisms, scaling insights to cutting-edge systems.
- Explore the “Scaling Laws of Scheming.” Establish empirical foundations to forecast how scheming risks evolve as models increase in capability.
- Innovate evaluation techniques that can potentially scale to highly evaluation-aware models.
- Investigate AI cognition. Uncover unique patterns in the reasoning processes of advanced AI systems that have yet to be documented.

Note: This is not an interpretability position.

KEY REQUIREMENTS
We seek a diverse set of skills to advance our research agenda. We do not expect any single candidate to meet all the criteria listed below. However, a successful candidate will likely excel in one or several of the following areas:
- Fast-paced empirical research: Ability to design and execute experiments with a focus on accelerating iteration cycles.

Feb 13, 2026
Apollo Research
Full-time|On-site|London

Application Deadline: We are actively interviewing candidates and seek to fill this position promptly with a suitable applicant.

THE OPPORTUNITY
Become a vital member of our groundbreaking AGI safety product team and play a key role in transforming intricate AI research into actionable tools aimed at minimizing AI-related risks. In your role as an applied researcher, you will collaborate closely with our CEO (who also acts as Head of Product), product engineers, and the Evals team’s software engineers to develop solutions that enhance AI agent safety for our clients. Currently, we are concentrating on the oversight of AI coding agents to identify failures in safety and security. You will be part of a compact team, which allows you to significantly influence both team dynamics and technological approaches while quickly assuming greater responsibilities. This position is perfect for you if you have a fervent desire to employ empirical research methodologies to enhance the safety of AI systems in practical applications. If you relish the challenge of converting theoretical AI risks into tangible detection mechanisms, thrive in fast-paced environments, and are eager to see your research make a meaningful impact on real-world AI safety, then we would love to hear from you.

KEY RESPONSIBILITIES

Research & Development
- Collect and catalog coding agent failure modes systematically from real-world instances, public examples, research literature, and theoretical predictions.
- Design and execute experiments to evaluate monitor effectiveness across various failure modes and agent behaviors.
- Develop and maintain evaluation frameworks to track advancements in monitoring capabilities.
- Refine monitoring strategies based on empirical findings, optimizing detection accuracy alongside computational efficiency.
- Stay updated with the latest research in AI safety, agent failures, and detection methodologies.
- Keep abreast of advancements in coding security and safety vulnerabilities.

Monitor Design & Optimization
- Create a comprehensive library of monitoring prompts tailored to specific failure modes (e.g., security vulnerabilities, goal misalignment, deceptive behaviors).
- Experiment with various reasoning strategies and output formats to enhance monitor reliability.
- Design and evaluate hierarchical monitoring architectures and ensemble approaches.

Dec 17, 2025
Faculty
Full-time|On-site|London

Why join Faculty?
Founded in 2014, Faculty is on a mission to harness the transformative power of AI, believing it to be the most pivotal technology of our era. With a diverse portfolio of over 350 global clients, we have consistently driven performance improvements through human-centric AI solutions. We focus on responsible AI development rather than chasing trends. Our team excels in innovating, building, and deploying AI solutions that truly matter. We offer unmatched expertise in technology, product development, and service delivery across various sectors including government, finance, retail, energy, life sciences, and defense. As our reputation grows, so does our commitment to finding individuals who share our passion for intellectual curiosity and the ambition to create a positive legacy through technology. AI defines this epoch; join us in exploring its most impactful applications and turning them into reality.

About Our Team
Our dedicated team at Faculty engages in vital red teaming and creates evaluations for misuse capabilities in critical domains such as CBRN, cybersecurity, and international security. Our contributions have been recognized, notably in OpenAI's system card for o1. We are committed to conducting foundational research on mitigation strategies, sharing our findings through peer-reviewed conferences and with national security institutions. Furthermore, we design evaluations for model developers focusing on safety and societal impacts of advanced AI models, showcasing our extensive expertise in the safety domain.

About The Role
As the Principal Research Scientist for AI Safety, you will lead Faculty's dynamic research team, influencing the future of safe AI systems. Your role will encompass overseeing the scientific research agenda centered on large language models and other significant systems. You will guide fellow researchers, spearhead external publications, and align your efforts with Faculty’s mission to develop trustworthy AI, allowing you to make a substantial impact in this fast-evolving field.

Dec 11, 2025
Anthropic
Full-time|On-site|London, UK

About Anthropic
At Anthropic, we are dedicated to developing reliable, interpretable, and controllable AI systems. Our goal is to ensure that AI technology is safe and beneficial for both users and society. Our rapidly expanding team consists of passionate researchers, engineers, policy experts, and business leaders collaborating to create advantageous AI systems.

About the Teams
The Reinforcement Learning teams at Anthropic spearhead our research and development in reinforcement learning, playing an essential role in enhancing our AI systems. We have made significant contributions to all Claude models, particularly impacting the autonomy and coding capabilities of Claude Sonnet 4.5 and Opus 4.5. Our work encompasses several critical areas:
- Creating systems that empower models to utilize computers effectively.
- Enhancing code generation through reinforcement learning techniques.
- Conducting pioneering RL research for large language models.
- Establishing scalable RL infrastructure and training methodologies.
- Improving model reasoning capabilities.

We work closely with Anthropic's alignment and frontier red teams to ensure our systems are both capable and secure. Additionally, we collaborate with the applied production training team to seamlessly integrate research advancements into deployed models, demonstrating our commitment to implementing research at scale. Our Reinforcement Learning teams operate at the intersection of cutting-edge research and engineering excellence, dedicated to building high-quality, scalable systems that expand the possibilities of AI.

About the Role
As a Research Engineer in the Reinforcement Learning domain, you will partner with a diverse group of researchers and engineers to enhance the capabilities and safety of large language models. This position merges research and engineering responsibilities, requiring you to implement innovative approaches while contributing to the research strategy. You will engage in fundamental research in reinforcement learning, developing 'agentic' models capable of tool use for open-ended tasks such as computer usage and autonomous software generation, improving reasoning skills in disciplines like mathematics, and creating prototypes for internal applications, productivity, and evaluation.

Representative Projects:
- Design and optimize core reinforcement learning infrastructure, from clean training abstractions to distributed experiment management across GPU clusters, scaling our systems to manage increasingly complex research workflows.
- Invent, implement, and evaluate novel training environments, evaluations, and methodologies for reinforcement learning.

Feb 12, 2026
Recraft
Full-time|On-site|London, UK

About Us
Established in the United States in 2022 and now operating from London, UK, Recraft is an innovative AI platform tailored for designers, illustrators, and marketers, setting a new benchmark in image generation excellence. Our cutting-edge tool empowers creators to swiftly generate and refine original images, vector art, illustrations, icons, and 3D graphics using advanced AI technology. With over 3 million users in 200 countries, our community has produced hundreds of millions of images with Recraft, and we are just beginning our journey.

Become part of a world filled with professional opportunities, contribute to large-scale projects, and be a pioneer in the creative industry’s future. We are dedicated to making Recraft a vital tool for every designer, striving to set the industry standard. Our mission focuses on ensuring that creators maintain complete control over their creative processes with AI, offering them innovative tools to transform their ideas into reality.

If you have a passion for pushing the limits of AI technology, we would love to have you join our team!

Sep 2, 2025
Anthropic
Full-time|On-site|London, UK

Join Anthropic as a Research Engineer specializing in the Science of Scaling, where you will play a pivotal role in advancing cutting-edge AI technologies. Collaborate with a dynamic team to explore innovative solutions that enhance our understanding of scalability in artificial intelligence systems.

Mar 12, 2026
Faculty
Full-time|On-site|London

Why Join Faculty?
Founded in 2014, Faculty is at the forefront of artificial intelligence, believing it to be the most transformative technology of our era. Over the years, we have partnered with more than 350 clients globally, enhancing their performance through human-centered AI solutions. We prioritize genuine innovation over fleeting trends. Our commitment lies in developing and implementing responsible AI that significantly influences outcomes. Our diverse clientele includes sectors such as government, finance, retail, energy, life sciences, and defense, all of whom benefit from our profound expertise in technology, product development, and delivery. Our rapidly expanding business and reputation drive us to seek individuals who share our passion for intellectual exploration and aspire to create a positive technological legacy. AI is a groundbreaking technology; at Faculty, you will have the freedom to conceive its most impactful applications and bring them to fruition.

About Our Team
The Research team at Faculty is dedicated to critical red teaming and developing evaluations for misuse capabilities in sensitive domains, including CBRN, cybersecurity, and international security. We collaborate with leading frontier model developers and national safety institutes, and our contributions have been recognized in OpenAI's system card for o1. We also engage in fundamental technical research focused on mitigation strategies, with our findings presented at peer-reviewed conferences and shared with national security organizations. Additionally, we create evaluations for model developers across various safety-related areas, highlighting our comprehensive expertise in the safety domain.

Role Overview
We are in search of a Senior Research Scientist to join our high-impact R&D team. You will spearhead innovative research that enhances scientific understanding and drives our goal of developing safe AI systems. This is a vital role within a small, empowered team that conducts essential red teaming and evaluations for frontier models in sensitive fields such as cybersecurity and national security, allowing you to influence the future of safe AI deployment in real-world scenarios.

Nov 12, 2025
Woozle Research
Full-time|On-site|London, England, United Kingdom

About the Role
Join Woozle Research as an Associate Director in our dynamic London or Glasgow office. This pivotal leadership role is designed for seasoned professionals eager to influence high-caliber primary research utilized by leading hedge funds, private equity firms, and consultancies worldwide.

Your key responsibilities include leading a team of Analysts and Associates, ensuring our deliverables consistently surpass client expectations, and nurturing lasting partnerships with investors and decision-makers.

If you are passionate about guiding teams, enhancing client relationships, and delivering impactful market insights, we want to hear from you!

Feb 11, 2026
Anthropic
Full-time|On-site|London, UK

Anthropic seeks a Research Engineer specializing in Machine Learning, with a focus on Reinforcement Learning (RL Velocity), for its London office. This role supports ongoing AI research and contributes to building advanced machine learning systems.

Key responsibilities
- Work alongside researchers and engineers to solve complex reinforcement learning problems
- Participate in designing and developing new machine learning models and systems
- Shape solutions that directly influence Anthropic’s research objectives

Collaboration and team environment
Join a team of skilled colleagues dedicated to AI advancement. Team members regularly exchange ideas, review each other's work, and support one another to create effective solutions.

Apr 23, 2026
Xapien
Full-time|On-site|London

About Us
Xapien is revolutionizing the way businesses understand their connections, whether it’s with suppliers, investors, partners, or third parties. We harness the power of advanced AI to provide organizations with unmatched speed, scale, and accuracy in their due diligence processes. Since our inception in 2018, we've transformed from a deep-tech startup into a global leader in AI-driven risk intelligence and due diligence solutions. In 2024, we celebrated a milestone achievement by securing $10M in Series A funding, receiving accolades in the Chartis RiskTech100® and Everest Group’s Leading 50™, and broadening our product offerings, markets, and customer base.

2025 has brought even greater opportunities: with increasing compliance demands, emerging regulations, and heightened reputation risks, organizations worldwide are seeking innovative and efficient solutions, placing their trust in Xapien. From renowned law firms and financial institutions to educational and non-profit organizations, our clients rely on us to transform cumbersome manual research into actionable insights delivered within moments.

Our growth trajectory
As we expand our reach beyond the UK, we are witnessing a surge in demand across continents, particularly in the Middle East, Asia, and Oceania, with significant growth in sectors such as wealth management, financial services, and supply chain onboarding.

Why Join Us Now?
This role presents a unique chance to influence the future of the due diligence industry with our leading-edge product, trusted by organizations globally. Regardless of your background, whether in marketing, sales, finance, or communications, you’ll be pivotal in driving growth, establishing credibility, and connecting our product to a dynamic market landscape. We are rapidly scaling, facing new regulatory challenges, and responding to heightened customer expectations. At Xapien, due diligence now embodies real-time insights delivered efficiently and effectively. You will not just execute tasks; you will influence strategies, optimize procedures, and contribute to a brand that is redefining standards for trust and transparency. If you are passionate about learning, innovating, and making a meaningful impact, now is the perfect time to join Xapien.

The Role
We are on the lookout for a detail-oriented Applied Research Engineer to assist in the development of the next generation of algorithms at the core of our systems. With our product experiencing rapid growth, your contributions will be crucial in shaping our solutions.

Apr 2, 2026
Moonvalley AI
Full-time|On-site|London

At Moonvalley AI, our cutting-edge laboratory operates at the forefront of world models, video generation, and robotics. We are dedicated to developing sophisticated systems that accurately represent intricate environments, intelligently reason about objects and their dynamics, and seamlessly translate high-level AI objectives into smooth, efficient, safe, and responsive actions across a diverse range of robotic platforms.

What You Will Be Responsible For:
- Spearheading the design and implementation of our robotics research agenda within the AI domain.
- Recruiting, mentoring, and overseeing a small team of talented research scientists and engineers in our London laboratory.
- Collaborating closely with world model and simulation teams to create state-of-the-art training platforms for robotics.
- Directing the development of persistent 3D/4D scene representations along with advanced embodied AI methodologies.
- Leading research initiatives in scene understanding, sim-to-real transfer, and advanced planning techniques.
- Building and sustaining partnerships with leading machine learning researchers, hardware experts, and external collaborators.
- Contributing to the establishment of the lab's technical culture and enhancing its external reputation.

Areas of Expertise We Seek:
We are particularly interested in candidates who are passionate about working on:
- World models tailored for embodied systems.
- 3D/4D generative models.
- Scene reconstruction and comprehension.
- Embodied AI technologies.
- Object-level and semantic SLAM methodologies.
- Multimodal AI applications.

Desired Qualifications:
- Solid foundation in robotics, computer vision, or closely related fields.
- Experience in leading or managing small technical or research teams.
- Practical expertise in 3D perception, scene representations, world models, or simulations.
- Exceptional programming skills in Python, C++, or similar languages.
- An entrepreneurial mindset and enthusiasm for developing projects in a startup or early-stage lab setting.
- Proven ability to collaborate effectively across the fields of machine learning, perception, and robotics.
- Outstanding communication and team-building abilities.

What We Offer:
- Highly competitive salary and equity options.
- Comprehensive private health insurance.

Mar 30, 2026
