About the job
About Our Team
Join the Robotics team at OpenAI, where we are pioneering general-purpose robotics and advancing toward AGI-level intelligence in dynamic, real-world environments. We work across the entire model stack, integrating state-of-the-art hardware and software to explore a diverse array of robotic form factors. Our goal is to reconcile high-level AI capabilities with the constraints of physical systems to improve quality of life for people worldwide.
About the Role
As a Research Engineer specializing in SLAM and Multi-View Geometry, you will develop systems that enable robots to perceive, track, and reconstruct their environment in 3D from multi-camera and multimodal sensor data. Your work will focus on building real-time and offline SLAM pipelines for teleoperation and data collection, as well as scalable systems for 3D structure reconstruction from large datasets.
We seek candidates with a strong foundation in computer vision and hands-on experience building robust perception systems. The ideal candidate is adept in both classical geometry-based techniques and contemporary machine learning methods, and thrives in close collaboration with AI researchers and engineers.
This position is based in San Francisco, CA, with a hybrid work model of 4 days in the office each week. We provide relocation assistance for new hires.
Key Responsibilities:
- Develop and implement online SLAM systems for robotic data collection using multi-camera sensor arrays and teleoperation platforms.
- Build systems for large-scale 3D reconstruction and point tracking across extensive datasets, enabling new approaches to world modeling and perception.
- Collaborate with research and engineering teams to enhance multi-view geometry pipelines for large datasets.
- Improve the accuracy, robustness, and scalability of perception systems used in robotics data collection and training pipelines.
- Collaborate across disciplines with robotics, perception, and ML teams to integrate geometry-based methods with learned models.