Experience Level
Mid to Senior
Qualifications
Key Responsibilities
Develop robust autonomous robot policies for real-world applications.
Engage in the complete stack of robot learning, encompassing hardware, data gathering, training, assessment, and deployment.
Innovate new data collection techniques and pipelines to produce high-quality datasets for advanced robot models.
Enhance vision-language-action models and learning algorithms for versatile manipulation and control tasks.
Curate and organize extensive datasets, task distributions, and training strategies for robot pretraining and adaptation.
Conduct swift and thorough experiments to identify performance bottlenecks and improve policy effectiveness.
Collaborate with interdisciplinary teams of researchers and engineers across robotics, infrastructure, and ML systems.
Contribute to defining the technical roadmap for general-purpose physical intelligence.
About the job
About Physical Intelligence
Physical Intelligence is building general-purpose AI for the physical world. The team brings together engineers, scientists, roboticists, and entrepreneurs focused on foundational models and learning algorithms for robots and interactive devices.
Role Overview
The Robotics Research Engineer works at the intersection of hardware, software, and large-scale model training. The goal: develop efficient autonomous robot policies that move the field forward.
What You Will Do
Design robotic systems and data collection pipelines to generate high-quality training data
Develop learning algorithms that turn collected data into reliable, effective robot policies
Contribute to vision-language-action models, from concept through implementation
Help shape datasets, research infrastructure, and the direction of robotics research at Physical Intelligence
Location
San Francisco
WHO WE ARE
At Applied Compute, we specialize in creating Specific Intelligence for enterprises—agents that continually learn from a company's processes, data, expertise, and goals. Our mission is to develop a continual learning layer and platform that captures context, memory, and decision traces across organizations, fostering an environment where specialized agents perform real work effectively.
Why Join Us: We operate at a unique intersection of product development and advanced research. Our product team is building the platform for a new generation of digital coworkers, while our research team is pioneering advancements in post-training and reinforcement learning to enrich product experiences. Our applied research engineers collaborate closely with customers, deploying agents into production seamlessly. This blend of robust product focus, in-depth research, and real-world application is our approach to integrating AI into enterprises. We pride ourselves on being product-led, research-enabled, and forward-deployed.
Our Team: We are a diverse group of engineers, researchers, and operators, many of whom are former founders with experience in RL infrastructure at OpenAI, data foundations at Scale AI, and various systems across renowned firms like Two Sigma and Watershed. We collaborate with Fortune 50 clients and are proudly backed by reputable investors including Kleiner Perkins, Benchmark, Sequoia, Lux, and Greenoaks.
Who Thrives Here: We seek individuals passionate about applying innovative research and complex systems to solve real-world challenges. You should be adept at navigating new environments swiftly, whether it's a fresh codebase, a customer's data architecture, or an unfamiliar problem domain. Our team values collaboration with customers, emphasizing active listening and understanding their workflows. We find that former founders, individuals with extensive side projects, and those who demonstrate end-to-end ownership excel in our culture.
THE ROLE
In the role of Research Systems Engineer, you will train frontier-scale models and devise methodologies to implement continual learning in enterprise settings. Your responsibilities will include designing and executing large-scale experiments, investigating cutting-edge reinforcement learning techniques, and developing tools to gain insights into training processes. This position lies at the crossroads of research and systems engineering, where you will innovate algorithms alongside researchers and collaborate with infrastructure engineers to implement them on GPUs.
About Us
At Applied Compute, we are pioneering Specific Intelligence for enterprises through advanced AI agents that learn continuously from organizational processes, data, and objectives. We recognize the significant gap between what AI models can achieve in isolation and their performance within actual business contexts, often failing to adapt to feedback. Our mission is to build a continual learning layer that captures context, memory, and decision traces across enterprises, creating environments where specialized agents excel at real tasks.
Why Join Us? We operate at a unique intersection of product development and research. Our product team is developing the platform that empowers a new generation of digital coworkers, while our research team is advancing post-training and reinforcement learning to enhance product experiences. As applied research engineers, we work closely with customers to deploy models into production effectively. This blend of robust product focus, deep research, and customer engagement is our strategy for successfully integrating AI into enterprise operations. We are product-led, research-enabled, and strategically deployed.
Meet Our Team: Our team consists of engineers, researchers, and operators, many of whom are former founders. We have established RL infrastructure at OpenAI, developed data foundations at Scale AI, and built systems at Together, Two Sigma, and Watershed. We collaborate with Fortune 50 clients, including DoorDash, Mercor, and Cognition, and are backed by esteemed investors such as Benchmark, Sequoia, and Lux.
Who Excels Here: We seek individuals passionate about applying innovative research and complex systems to overcome real-world challenges. Candidates should thrive in unfamiliar environments, whether it involves navigating new codebases, understanding new customer data architectures, or tackling unfamiliar problem domains. A genuine enjoyment of customer interactions—listening, empathizing, and comprehending how work is accomplished within organizations—is essential. Those with prior entrepreneurial experience, extensive side projects, or a proven ability to manage initiatives from start to finish will thrive in our culture.
Your Role
As a Research Systems Engineer, you will be responsible for training cutting-edge models and developing methodologies that facilitate continual learning within enterprise settings. You will design and execute large-scale experiments, delve into advanced reinforcement learning techniques, and create tools that enhance our understanding of the training process. This role uniquely positions you at the crossroads of research and systems engineering, where you will innovate new algorithms in collaboration with researchers and work alongside infrastructure engineers to deploy them on GPUs.
Overview
Pluralis Research is at the forefront of innovation in Protocol Learning, specializing in the collaborative training of foundational models. Our approach ensures that no single participant ever has or can obtain a complete version of the model. This initiative aims to create community-driven, collectively owned frontier models that operate on self-sustaining economic principles.
We are seeking experienced Senior or Staff Machine Learning Engineers with over 5 years of expertise in distributed systems and large-scale machine learning training. In this role, you will design and implement a groundbreaking substrate for training distributed ML models that function effectively over consumer-grade internet connections.
Full-time|$218.4K/yr - $273K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
Join Scale AI's ML platform team (RLXF) as a Machine Learning Research Engineer, where you will play a pivotal role in developing our advanced distributed framework for training and inference of large language models. This platform is vital for enabling machine learning engineers, researchers, data scientists, and operators to conduct rapid and automated training, as well as evaluation of LLMs and data quality.
At Scale, we occupy a unique position in the AI landscape, serving as an essential provider of training and evaluation data along with comprehensive solutions for the entire ML lifecycle. You will collaborate closely with Scale's ML teams and researchers to enhance the foundational platform that underpins our ML research and development initiatives. Your contributions will be crucial in optimizing the platform to support the next generation of LLM training, inference, and data curation.
If you are passionate about driving the future of AI through groundbreaking innovations, we want to hear from you!
OpenAI's research infrastructure group creates and maintains the backbone systems for advanced machine learning model training. This team often goes beyond conventional training methods, developing new infrastructure to support novel research at scale. Their work closely connects systems engineering with research progress, making it possible to run experiments that would otherwise be too slow or complex.
Role overview
The Research Infrastructure Engineer for Training Systems designs and improves the platforms that power large-scale ML training. This role bridges research concepts and the practical systems that make large model training possible. The work has a direct impact on model release timelines and requires building systems that perform reliably in demanding, real-world scenarios.
What you will do
Build and maintain infrastructure for large-scale model training and experimentation
Design APIs and interfaces to simplify complex training workflows and prevent misuse
Enhance reliability, debuggability, and performance across training and data pipelines
Troubleshoot issues involving Python, PyTorch, distributed systems, GPUs, networking, and storage
Create tests, benchmarks, and diagnostic tools to catch regressions early
Requirements
Interest in building systems that support new training methods, not just optimizing existing ones
Strong instincts in systems engineering, especially regarding performance, reliability, and clean abstractions
Experience designing APIs and interfaces for researchers and engineers
Ability to work across ML research code and production infrastructure
Enjoys evidence-based debugging using profiles, traces, logs, tests, and reproducible cases
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We're dedicated to crafting a future where everyone can harness the power of AI to meet their unique needs and aspirations.
Our team comprises scientists, engineers, and innovators who have developed some of the most widely utilized AI products, including ChatGPT and Character.ai, as well as open-weight models like Mistral, in addition to renowned open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
About the Role
We are seeking a talented Infrastructure Research Engineer to architect and develop the foundational systems that facilitate the scalable and efficient training of large models using reinforcement learning.
This position exists at the crossroads of research and large-scale systems engineering, requiring a professional who not only comprehends the algorithms behind reinforcement learning but also appreciates the practicalities of distributed training and inference at scale. You will have a diverse set of responsibilities, from optimizing rollout and reward pipelines to enhancing the reliability, observability, and orchestration of systems. Collaboration with researchers and infrastructure teams will be essential to ensure reinforcement learning is stable, rapid, and production-ready.
Note: This is an evergreen role that we maintain on an ongoing basis to express interest. Due to the high volume of applications we receive, there may not always be an immediate position that aligns perfectly with your skills and experience. We encourage you to apply, as we continuously review applications and reach out to candidates when new opportunities arise. You may reapply after gaining more experience, but please refrain from applying more than once every six months. Additionally, you may notice postings for specific roles that cater to unique project or team needs; in those circumstances, you are welcome to apply directly alongside this evergreen role.
What You’ll Do
Design, implement, and optimize the infrastructure that supports large-scale reinforcement learning and post-training workloads.
Enhance the reliability and scalability of the RL training pipeline, including distributed RL workloads and training throughput.
Create shared monitoring and observability tools to ensure high uptime, debuggability, and reproducibility of RL systems.
Work closely with researchers to translate algorithmic concepts into production-quality training pipelines.
Develop evaluation and benchmarking infrastructure to assess model performance based on helpfulness, safety, and factual accuracy.
Publish and disseminate insights through internal documentation, open-source libraries, or technical reports that contribute to the advancement of scalable AI infrastructure.
On-site|San Francisco, CA; New York City, NY; Seattle, WA
Join Anthropic as a Machine Learning Systems Engineer within our Encodings and Tokenization team, where you'll play a pivotal role in refining and optimizing our tokenization systems across Pretraining and Finetuning workflows. By bridging the gap between our Pretraining and Finetuning teams, you will help shape the essential infrastructure that enhances how our AI models learn from diverse data. Your contributions will be crucial in ensuring our AI systems remain reliable, interpretable, and steerable, driving forward our mission of developing beneficial AI technologies.
Role Overview
At Intrinsic Safety, we are at the forefront of teaching machines to navigate complex judgment calls at scale. Our mission is to create AI agents that tackle the nuanced challenges of preventing fraud, scams, and abuse. This is not merely another sales tool or customer service application; we are addressing critical issues in investigations and fraud prevention to ensure the safety of the innocent.
Based in San Francisco, our small, highly skilled team is dedicated to pushing the boundaries of what AI can achieve. We aim to make sound decisions in unpredictable, adversarial, real-world situations.
We are seeking a Research Engineer to join us in advancing this frontier. In this role, you will design evaluations, analyze failures, create innovative research cycles, and transform research concepts into operational capabilities.
This position blends research and engineering: you will be a model architect, an experimental investigator, and a systems engineer all at once.
About Distyl AI
Distyl AI specializes in creating high-performance AI systems that enhance the fundamental operational processes of Fortune 500 companies. Through a strategic alliance with OpenAI, proprietary software accelerators, and extensive expertise in enterprise AI, we deliver effective AI solutions with swift time-to-value, often within a quarter.
Our innovations have empowered Fortune 500 clients in various sectors, including insurance, consumer packaged goods, and non-profit organizations. Joining our team means you will assist organizations in recognizing, developing, and extracting value from their Generative AI investments, frequently for the first time. We prioritize customer needs, working backward from the client's challenges and ensuring we generate financial benefits while enhancing the experiences of end-users.
Distyl is guided by seasoned leaders from top-tier companies like Palantir and Apple and enjoys backing from prominent investors including Lightspeed, Khosla, Coatue, Dell Technologies Capital, Nat Friedman (Former CEO of GitHub), Brad Gerstner (Founder and CEO of Altimeter), along with board members from numerous Fortune 500 firms.
What We Are Looking For
At Distyl, we are at the forefront of leveraging AI within enterprises. We seek imaginative researchers who aspire to go beyond incremental enhancements on benchmarks and are eager to redefine the application of software in innovative ways.
Our researchers hail from diverse academic disciplines but possess a robust research background, operate in an AI-centric manner, and would find conventional research environments unfulfilling.
Key Responsibilities
The AI Systems team is dedicated to architecting complex, comprehensive solutions that integrate perception, reasoning, planning, and execution. Researchers amalgamate various components (LLMs, retrievers, evaluators, memory systems, and execution agents) into resilient, scalable systems that deliver consistent performance across dynamic enterprise workflows.
Researchers in AI Systems examine the principles governing intricate system interactions. They analyze coordination, information flow, and emergent behavior across multiple agents and models. Their research reveals the foundational mechanics of robustness, composability, and alignment, ultimately establishing the design paradigm for constructing intelligent systems.
Join Anthropic as a Research Engineer focusing on Economic Research. In this role, you will leverage your analytical skills to conduct in-depth economic analysis and contribute to innovative projects aimed at enhancing our understanding of economic models and their implications.
About the Team
Join the innovative Post-Training team at OpenAI, where we focus on refining and elevating pre-trained models for deployment in ChatGPT, our API, and future products. Collaborating closely with various research and product teams, we conduct crucial research that prepares our models for real-world deployment to millions of users, ensuring they are safe, efficient, and reliable.
About the Role
As a Research Engineer / Scientist, you will spearhead the research and development of enhancements to our models. Our work intersects reinforcement learning and product development, aiming to create cutting-edge solutions.
We seek passionate individuals with robust machine learning engineering skills and research experience, particularly with innovative and powerful models. The ideal candidate will be driven by a commitment to product-oriented research.
This position is located in San Francisco, CA, and follows a hybrid work model requiring three days in the office each week. Relocation assistance is available for new employees.
In this role, you will:
Lead and execute a research agenda aimed at enhancing model capabilities and performance.
Work collaboratively with research and product teams to empower customers to optimize their models.
Develop robust evaluation frameworks to monitor and assess modeling advancements.
Design, implement, test, and debug code across our research stack.
You may excel in this role if you:
Possess a deep understanding of machine learning and its applications.
Have experience with relevant models and methodologies for evaluating model improvements.
Are adept at navigating large ML codebases for debugging purposes.
Thrive in a fast-paced and technically intricate environment.
About OpenAI
OpenAI is a pioneering AI research and deployment organization dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We are committed to pushing the boundaries of AI capabilities while prioritizing safety and human-centric values in our products. Our mission is to embrace diverse perspectives, voices, and experiences that represent the full spectrum of humanity, as we strive for a future where AI is a powerful ally for everyone.
About Our Team
Join the forefront of AI innovation with the RL and Reasoning team at OpenAI. Our team is dedicated to advancing reinforcement learning research and has pioneered transformative projects, including o1 and o3. We are committed to pushing the limits of generative models while ensuring their scalable deployment.
About the Role
As a Research Engineer/Research Scientist at OpenAI, you will play a pivotal role in enhancing AI alignment and capabilities through state-of-the-art reinforcement learning techniques. Your contributions will be essential in training intelligent, aligned, and versatile agents that power various AI models.
We seek individuals with a solid foundation in reinforcement learning research, agile coding skills, and a passion for rapid iteration.
This position is located in San Francisco, CA, and follows a hybrid work model of three days in the office per week. We also provide relocation assistance for new hires.
You may excel in this role if:
You are enthusiastic about being at the cutting edge of RL and language model research.
You take initiative, owning ideas and driving them to fruition.
You value principled methodologies, conducting simple experiments in controlled environments to draw trustworthy conclusions.
You thrive in a fast-paced, complex technical environment where rapid iteration is essential.
You are adept at navigating extensive ML codebases to troubleshoot and enhance them.
You possess a profound understanding of machine learning and its applications.
About OpenAI
OpenAI is a pioneering AI research and deployment organization committed to ensuring that general-purpose artificial intelligence serves the greater good for humanity. We strive to push the boundaries of AI system capabilities while prioritizing safe deployment through our innovative products. We recognize AI as a powerful tool that must be developed with safety and human-centric principles, embracing diverse perspectives to reflect the full spectrum of humanity.
We are proud to be an equal opportunity employer, welcoming applicants from all backgrounds without discrimination based on race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other legally protected characteristic.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We envision a future where everyone has access to the knowledge and tools necessary to harness AI for their unique needs and goals.
Our team comprises scientists, engineers, and builders who have developed some of the most widely utilized AI products, such as ChatGPT and Character.ai, alongside open-weight models like Mistral, and popular open-source initiatives like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
About the Position
We are seeking an Infrastructure Research Engineer to design and construct the foundational systems that facilitate the scalable and efficient training of large models for both deployment and research purposes. Your primary objective will be to streamline experimentation and training at Thinking Machines, enabling our research teams to concentrate on scientific advancements rather than system limitations.
This role is a perfect match for an individual who possesses a strong blend of deep systems expertise and a keen interest in machine learning at scale. You will take full ownership of the training stack, ensuring that every GPU cycle contributes to scientific progress.
Note: This is an evergreen role that we keep open continuously to express interest. We receive numerous applications, and there may not always be an immediate role that aligns perfectly with your experience and skills. However, we encourage you to apply. We regularly review applications and reach out to candidates as new opportunities arise. Feel free to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post specific roles for individual projects or team needs, in which case you are welcome to apply directly alongside this evergreen role.
Key Responsibilities
Design, implement, and optimize distributed training systems that scale across thousands of GPUs and nodes for extensive training workloads.
Develop high-performance optimizations to maximize throughput and efficiency.
Create reusable frameworks and libraries that enhance training reproducibility, reliability, and scalability for new model architectures.
Establish standards for reliability, maintainability, and security, ensuring systems remain robust under rapid iterations.
Collaborate with researchers and engineers to construct scalable infrastructure.
Publish and disseminate findings through internal documentation, open-source libraries, or technical reports that contribute to the advancement of scalable AI infrastructure.
On-site|New York City, NY; San Francisco, CA; Seattle, WA
About Anthropic
At Anthropic, we are dedicated to developing AI systems that are reliable, interpretable, and steerable. We aim to ensure that AI is safe, beneficial, and aligned with the needs of both our users and society. Our expanding team consists of passionate researchers, engineers, policy experts, and business leaders collaborating to create groundbreaking AI solutions.
About the Role
We are seeking a talented Research Engineer with a solid foundation in computer vision, who shares our belief that visual and spatial reasoning are essential for unleashing the full potential of large language models (LLMs). In this collaborative role, you will engage in research, development, and evaluation of cutting-edge Claude models, with a specific emphasis on enhancing visual and spatial capabilities. You will contribute across multiple facets of our research initiatives, employing a full-stack approach that encompasses pretraining, reinforcement learning (RL), and runtime techniques such as agentic harnesses. Additionally, you will work closely with our product team to ensure that your vision enhancements positively influence Claude's performance in real-world applications.
About Our Team
The Frontier Systems team at OpenAI is at the forefront of technology, responsible for creating, deploying, and maintaining some of the world's largest supercomputers. These supercomputers are pivotal for training our most advanced AI models, pushing the boundaries of innovation.
We transform sophisticated data center designs into operational systems and develop the software infrastructure necessary for extensive frontier model training. Our goal is to ensure these hyperscale supercomputers operate reliably and efficiently, supporting groundbreaking AI research.
About the Role
As a key member of the Frontier Systems team, you will be instrumental in designing the critical infrastructure that ensures our supercomputers function seamlessly for pioneering AI research. In this role, you'll address system-level challenges and implement automation solutions that minimize disruptions during large-scale training processes.
Your responsibilities will encompass end-to-end ownership of your projects, allowing you to make significant contributions to our mission. This position is ideal for individuals who excel in diagnosing complex system issues and crafting automation strategies to proactively resolve problems across a vast network of machines.
Your Responsibilities Include:
Enhancing system health checks to maintain the stability of our hyperscale supercomputers during model training.
Conducting in-depth investigations into hardware failures and system-level bugs to uncover root causes.
Developing automation tools that monitor and resolve issues across thousands of systems, enabling uninterrupted research progress.
You May Be a Great Fit If You Possess:
7+ years of hands-on experience in software engineering.
Strong proficiency in Python and shell scripting.
Expertise in analyzing complex data sets using SQL, PromQL, Pandas, or other relevant tools.
Experience in creating reproducible analyses.
A solid balance of skills in both building and operationalizing systems.
Prior experience with hardware is not a prerequisite for this role.
Preferred Qualifications:
Familiarity with the intricacies of hardware components, protocols, and Linux tools (e.g., PCIe, Infiniband, networking, power management, kernel performance tuning).
Experience with system optimization and performance tuning.
About Us
Sieve is a pioneering AI research lab dedicated solely to video data. We harness exabyte-scale video infrastructure and innovative video understanding techniques, along with a multitude of data sources, to create datasets that advance the field of video modeling. Given that video constitutes 80% of internet traffic, it serves as a vital medium that fuels creativity, communication, gaming, AR/VR, and robotics. Our mission is to tackle the most significant challenge in the development of these applications: acquiring high-quality training data.
With a small yet highly skilled team of just 15 members, we have formed strategic partnerships with leading AI labs and achieved $XXM in revenue last quarter alone. Our Series A funding round last year was backed by prestigious firms, including Matrix Partners, Swift Ventures, Y Combinator, and AI Grant.
About the Role
As a Distributed Systems Engineer at Sieve, you will be responsible for designing and implementing systems that efficiently manage the compute, scheduling, and orchestration of complex machine learning and ETL pipelines. Your work will ensure these systems operate quickly, reliably, and cost-effectively while processing large volumes of video data.
You will thrive in this role if you are passionate about optimizing system uptime, have experience with cloud technologies, and enjoy working with high-performance distributed systems involving thousands of GPUs. Additionally, you will play a key role in developing excellent internal tools and CI/CD pipelines to facilitate rapid iteration.
Achira is seeking a Machine Learning Research Engineer to help improve workflows and systems for artificial intelligence projects. This position is based in the San Francisco office.
Role overview
This role centers on developing and refining machine learning pipelines. The focus is on efficient deployment and scaling of AI models in production environments. Collaboration with colleagues from different disciplines is a key part of the work, aiming to bring forward new ideas and solid practices in machine learning systems.
What you will do
Design and optimize machine learning workflows for better performance and scalability
Work closely with cross-functional teams to implement improvements in AI systems
Support the deployment process, helping ensure models run efficiently in real-world settings
Location
This position is based at Achira's San Francisco office.
At World Labs, we are pioneering the development of Large World Models—advanced AI systems designed to comprehend, reason, and engage with the physical environment. Our initiatives are at the cutting edge of spatial intelligence, robotics, and multimodal AI, with an objective to empower machines to perceive and operate effectively in intricate real-world settings.
We are curating a global team of researchers, engineers, and innovators dedicated to transcending the existing boundaries of artificial intelligence. If you are eager to work on transformative technology that will redefine machine perception and enhance human-AI interaction, this role is tailored for you.
About World Labs:
World Labs is an AI research and development company committed to creating spatially intelligent systems capable of modeling, reasoning, and acting in the real world. We envision a future where AI transcends text or pixels to thrive in three-dimensional, dynamic environments, and we are constructing the foundational models that will make this a reality.
Our team unites expertise in machine learning, robotics, computer vision, simulation, and systems engineering. We operate with the agility of a startup combined with the vision of a research lab, tackling long-term challenges that require creativity, rigor, and resilience.
Our mission is to develop the most advanced world models and leverage them to empower individuals, industries, and society.
Role Overview:
We are seeking a Tech Lead for 3D Modeling & Reconstruction to establish technical direction and drive execution for our essential 3D modeling initiatives. This position is suited for someone with a strong background in research science (RS) or research engineering (RE) who has made significant contributions to the field of 3D reconstruction and/or modeling—evidenced by academic publications, widely adopted open-source projects, or large-scale production systems.
This is a hands-on leadership role where you will merge profound technical expertise with the ability to guide a high-impact team, influencing both the research roadmap and the production systems that integrate contemporary 3D modeling techniques into tangible products. You will collaborate closely with research, engineering, and product teams to convert innovative concepts into dependable, scalable solutions.
Join Gridware as a Mechanical Research Engineer, where your innovative spirit and engineering expertise will contribute to groundbreaking projects in the energy sector. You will be responsible for conducting research, developing prototypes, and collaborating with a team of skilled engineers to advance our technology solutions.
Apr 9, 2026