Senior Software Engineer Model Training jobs in San Francisco – Browse 7,148 openings on RoboApply Jobs

Senior Software Engineer Model Training jobs in San Francisco

Open roles matching “Senior Software Engineer Model Training” in San Francisco. 7,148 active listings on RoboApply Jobs.


1 - 20 of 7,148 Jobs
Baseten
Full-time|On-site|San Francisco

ABOUT BASETEN

At Baseten, we are at the forefront of enabling transformative AI solutions for some of the world's leading companies, including Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. Our platform combines cutting-edge AI research, adaptable infrastructure, and developer-friendly tools to bring advanced models into production. We recently celebrated our rapid growth with a $300M Series E funding round from investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join our dynamic team and contribute to the evolution of AI product deployment.

THE ROLE

As a Senior Software Engineer specializing in Model Training at Baseten, you will play a pivotal role in building the infrastructure essential for large-scale training and fine-tuning of foundation AI models. You will design and implement distributed training systems, optimize GPU utilization, and build scalable pipelines that let Baseten and our customers adapt models efficiently and reliably. This is a hands-on role demanding deep technical expertise: you will own critical components of our training stack, collaborate with product and infrastructure teams to identify customer needs, and drive advancements in scalable training infrastructure.

EXAMPLE WORK:

- Training open-source models that surpass GPT-5 capabilities for a leading digital insurer
- Exploring specialized, continuously learning models as the future of AI
- Overview of our training documentation
- Research initiatives we've undertaken

RESPONSIBILITIES

- Design, build, and maintain distributed training infrastructure for large foundation models
- Develop scalable pipelines for fine-tuning and training across diverse GPU/accelerator clusters
- Improve training performance through algorithm and infrastructure optimization
- Collaborate closely with cross-functional teams to align technical solutions with business objectives
- Stay abreast of advances in machine learning and AI to continually improve our training processes

Aug 29, 2025
Hover
Full-time|$165K/yr - $203K/yr|On-site|San Francisco

At Hover, we empower individuals to design, enhance, and safeguard their cherished properties. Utilizing proprietary AI technology built on over a decade of real property data, we provide answers to pressing questions such as “What will it look like?” and “What will it cost?” Homeowners, contractors, and insurance professionals depend on Hover for fully measured, accurate, and interactive 3D models of any property, achieved through a smartphone scan in mere minutes.

We are driven by curiosity, purpose, and a collective commitment to our customers, communities, and each other. At Hover, we believe the most innovative ideas stem from diverse perspectives, and we take pride in fostering an inclusive, high-performance culture that encourages growth, accountability, and excellence. Supported by leading investors like Google Ventures and Menlo Ventures, and trusted by industry leaders including Travelers, State Farm, and Nationwide, we are transforming how people perceive and interact with their environments.

Why Join Hover?

At Hover, 3D models are not just a feature; they are the essence of our product. Each scan and data point we process empowers homeowners, insurers, and contractors to make informed, data-driven decisions. We are seeking a Software Engineer with a passion for geometry, automation, and making a tangible impact in the real world. In this role, you will design and implement systems that convert customer-captured imagery into meticulously accurate 3D models, enhancing the scalability and precision of Hover’s modeling pipeline. You will collaborate with designers and engineers across frontend, backend, computer vision, and DevOps to bring innovative capabilities to fruition, blending technical expertise with strong communication and cross-functional collaboration.

The 3D Modeling Pipeline team develops the tools essential for our in-house operations to transform customer-captured scans into highly detailed, accurate 3D models of buildings. The team is also responsible for the pipeline and systems that process 3D data through both automated and manual steps, as well as exporting data into customer-facing formats.

Your Contributions Will Include:

- Owning and evolving backend systems that convert raw scan data into exact 3D models, ensuring timely delivery to key ecosystem partners like Xactimate and Cotality.
- Building and refining internal modeling tools that enable teams to efficiently generate, validate, and optimize high-quality 3D data.
- Collaborating with machine learning and computer vision engineers to bring new algorithms into production, bridging research with practical applications.
- Enhancing customer and partner experiences by improving how Hover’s 3D outputs integrate with downstream workflows and external platforms.
- Promoting innovation and ongoing enhancement across our modeling pipeline.

Mar 19, 2026
Databricks
Full-time|$166K/yr - $225K/yr|On-site|San Francisco, California

At Databricks, we are dedicated to empowering data teams to tackle some of the most challenging issues of our time, from realizing the future of transportation to speeding up medical innovations. We achieve this by developing and maintaining the premier data and AI infrastructure platform, allowing our clients to leverage profound data insights to enhance their operations.

Our Model Serving product equips organizations with a cohesive, scalable, and governed platform for deploying and overseeing AI/ML models, spanning traditional ML to specialized large language models. It provides real-time, low-latency inference, governance, monitoring, and lineage capabilities. With the rapid rise of AI adoption, Model Serving stands as a fundamental component of the Databricks platform, enabling clients to operationalize models efficiently and cost-effectively at scale.

As a Senior Engineer, you will be pivotal in transforming both the product experience and the underlying infrastructure of Model Serving. You will design and create systems enabling high-throughput, low-latency inference across CPU and GPU workloads, influence architectural strategy, and work closely with platform, product, infrastructure, and research teams to deliver an exceptional serving platform.

Jan 30, 2026
Crusoe
Full-time|Remote|San Francisco, CA - US

As a Senior Staff Software Engineer specializing in Model Lifecycle at Crusoe, you will play a vital role in shaping the future of software solutions that optimize and enhance our operations. You will lead complex projects, mentor junior engineers, and collaborate with cross-functional teams to deliver high-impact results.

Mar 10, 2026
Baseten
Full-time|On-site|San Francisco

ABOUT BASETEN

At Baseten, we are at the forefront of AI innovation, providing critical inference solutions for leading AI companies like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. Our platform combines advanced AI research, adaptable infrastructure, and intuitive developer tools, empowering organizations to deploy state-of-the-art models effectively. With rapid growth and a recent $300M Series E funding round backed by top-tier investors including BOND, IVP, Spark Capital, Greylock, and Conviction, we invite you to join our mission of building the platform of choice for engineers delivering AI products.

THE ROLE:

As a member of Baseten’s Model Performance (MP) team, you will play a pivotal role in ensuring our platform’s model APIs are not only fast and reliable but also cost-effective. Your primary focus will be developing and optimizing the infrastructure that supports our hosted API endpoints for cutting-edge open-source models. The role involves distributed systems, model serving, and enhancing the developer experience. You will collaborate with a small, dynamic team at the intersection of product development, model performance, and infrastructure, defining how developers interact with AI models at scale.

RESPONSIBILITIES:

- Design, develop, and maintain the Model APIs surface, focusing on advanced inference features such as structured outputs (JSON mode, grammar-constrained generation), tool/function calling, and multi-modal serving.
- Profile and optimize TensorRT-LLM kernels, analyze CUDA kernel performance, create custom CUDA operators, and improve memory allocation patterns for maximum efficiency across multi-GPU setups.
- Implement performance improvements across runtimes based on a deep understanding of their internals, including speculative decoding, guided generation for structured outputs, and custom scheduling algorithms for high-performance serving.
- Develop robust benchmarking frameworks to evaluate real-world performance across diverse model architectures, batch sizes, sequence lengths, and hardware configurations.
- Enhance performance across runtimes (e.g., TensorRT, TensorRT-LLM) through techniques such as speculative decoding, quantization, batching, and KV-cache reuse.
- Integrate deep observability (metrics, traces, logs) and establish repeatable benchmarks to assess speed, reliability, and quality.

Oct 11, 2025
OpenAI
Full-time|On-site|San Francisco

About Our Team

Join the Inference team at OpenAI, where we leverage cutting-edge research and technology to deliver exceptional AI products to consumers, enterprises, and developers. Our mission is to empower users to harness the full potential of our advanced AI models. We prioritize efficient, high-performance model inference while accelerating research advancements.

About the Role

We are seeking a passionate Software Engineer to optimize some of the world's largest and most sophisticated AI models for deployment in high-volume, low-latency, and highly available production and research environments.

Key Responsibilities

- Collaborate with machine learning researchers, engineers, and product managers to transition our latest technologies into production.
- Work closely with researchers to enable advanced research initiatives through innovative engineering solutions.
- Implement new techniques, tools, and architectures that improve the performance, latency, throughput, and effectiveness of our model inference stack.
- Develop tools to identify bottlenecks and sources of instability, then design and implement solutions for priority issues.
- Optimize our code and Azure VM fleet to maximize every FLOP and GB of GPU RAM available.

You Will Excel in This Role If You:

- Possess a solid understanding of modern machine learning architectures and an intuitive grasp of performance optimization strategies, especially for inference.
- Take ownership of problems end-to-end and are willing to acquire whatever knowledge is needed to achieve results.
- Bring at least 5 years of professional software engineering experience.
- Have, or can quickly develop, expertise in PyTorch, NVIDIA GPUs, and relevant optimization stacks (such as NCCL, CUDA), along with HPC technologies like InfiniBand, MPI, and NVLink.
- Have experience architecting, building, monitoring, and debugging production distributed systems, with bonus points for performance-critical systems.
- Have rebuilt or significantly refactored production systems multiple times to accommodate rapid scaling.
- Are self-driven and enjoy identifying and addressing the most critical problems.

Feb 6, 2025
Zyphra
Full-time|On-site|San Francisco

Zyphra is an innovative leader in artificial intelligence, located in the heart of San Francisco, California.

Role Overview:

As a Research Engineer specializing in Language Model Pre-Training, you will play a pivotal role in defining our language model strategy through comprehensive pretraining development. Close collaboration with our pretraining team will ensure your insights contribute to the advancement of our next-generation models.

Key Responsibilities:

- Conduct large-scale training runs and implement model parallelization techniques.
- Optimize the performance of our pretraining stack.
- Oversee dataset collection, processing, and evaluation.
- Research architectures and methodologies, including optimizer ablations.

Qualifications:

- Demonstrated engineering prowess in developing reliable and robust systems.
- A quick learner with a passion for implementing innovative ideas.
- Exceptional communication and collaboration skills, capable of working effectively on both research and engineering implementations at scale.

Preferred Skills:

- Deep expertise in addressing machine learning challenges and training models.
- Experience training on large-scale (multi-node) GPU clusters.
- In-depth understanding of model training pipelines, including model/data parallelism and distributed optimizers.
- Strong methodology for conducting rigorous ablations and hypothesis testing.
- Familiarity with large-scale, high-performance data processing pipelines.
- High proficiency in PyTorch and Python.
- Ability to navigate and understand extensive pre-existing codebases swiftly.
- Published machine learning research in reputable venues is an advantage.
- Postgraduate degree in a relevant scientific field (Computer Science, Electrical Engineering, Mathematics, Physics).

Why Join Zyphra?

We value a research methodology that emphasizes thoughtful, methodical progress toward ambitious objectives. Deep research and engineering excellence are given equal weight. Join us in an environment that fosters innovation, collaboration, and professional growth.

Aug 28, 2025
Crusoe
Full-time|$172.4K/yr - $209K/yr|On-site|San Francisco, CA - US

At Crusoe, we are on a mission to accelerate the convergence of energy and intelligence. We are building a powerful engine that enables people to innovate boldly with AI while upholding principles of scalability, speed, and sustainability. Join us in spearheading the AI revolution through sustainable technology, making a significant impact alongside a team dedicated to shaping the future of responsible, transformative cloud infrastructure.

About the Role:

As a Senior Software Engineer on the Model Lifecycle team, you will play a pivotal role in developing a managed platform that supports the entire application development lifecycle, with an emphasis on harnessing Machine Learning models, particularly Large Language Models (LLMs).

Your Responsibilities:

- Design and maintain systems for fine-tuning large foundation models (SFT, PEFT, LoRA, adapters), ensuring multi-node orchestration, checkpointing, failure recovery, and cost-effective scaling.
- Create and manage end-to-end training pipelines for Large Language Models.
- Implement components for distillation and reinforcement learning pipelines, focusing on preference optimization, policy optimization, and reward modeling.
- Develop and sustain the core agent execution infrastructure.
- Implement features for dataset, model, and experiment management, emphasizing versioning, lineage, evaluation, and reproducible fine-tuning.

Collaboration and Impact:

- Collaborate closely with Senior Engineers, Principal Engineers, and product and platform teams to implement system abstractions and APIs.
- Engage in technical discussions on training runtimes, scheduling, storage, and overall model lifecycle management.
- Bring 4-5+ years of industry experience, with a strong track record of leading a diverse portfolio of initiatives.
- Participate in and contribute to the open-source LLM ecosystem.

This position involves taking significant ownership of core system components.

Your Qualifications:

- Bachelor's degree in Computer Science, Engineering, or a related discipline.
- Proven software engineering experience with a focus on AI models and machine learning.

Feb 9, 2026
OpenAI
Full-time|On-site|San Francisco

OpenAI is seeking a Software Engineer in San Francisco to focus on improving productivity by optimizing model performance. This position centers on developing solutions that make machine learning models more efficient and effective.

Role overview

This role involves working closely with teams across different functions to identify and address areas where model performance can be improved. The aim is to deliver changes that have a measurable impact on both systems and workflows.

What you will do

- Collaborate with engineers and other specialists to enhance model efficiency.
- Develop and implement solutions that improve the effectiveness of machine learning systems.
- Contribute to projects that streamline processes and drive productivity gains.

Impact

Your work will help shape improvements in how models operate and how teams at OpenAI achieve their goals. The changes you help deliver will support more effective use of resources and better outcomes for the organization.

Apr 29, 2026
Waymo LLC
Full-time|$250K/yr - $334.5K/yr|Hybrid|Mountain View, CA USA; San Francisco, CA USA

Waymo is a pioneering company in autonomous driving technology, dedicated to becoming the world’s most trusted driver. Originating from the Google Self-Driving Car Project in 2009, Waymo has developed the Waymo Driver (The World’s Most Experienced Driver™) with a mission to expand access to mobility and save lives lost in traffic accidents. The Waymo Driver powers our fully autonomous ride-hailing service and can be integrated across various vehicle platforms and applications. Having completed over ten million rider-only trips, our technology has driven more than 100 million miles on public roads and tens of billions of miles in simulation, across over 15 U.S. states.

The Perception team develops systems that learn the spatial-temporal representations and semantic meaning of the environment surrounding our autonomous vehicles (AVs). We collaborate closely with downstream teams to optimize and integrate our work into the Waymo Driver, conduct research to solve real-world challenges, and work alongside research teams at Alphabet. With access to millions of miles of diverse driving data from various sensors, we empower engineers like you to (1) create methods for efficient continuous learning from extensive real-world data, (2) develop scalable models and training methodologies, (3) analyze real-world behaviors to create systems that can navigate complexity, and (4) optimize models for both onboard and offboard hardware.

In this hybrid role, you will report to a Technical Lead Manager.

Apr 13, 2026
Databricks
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California

At Databricks, we are dedicated to empowering data teams to tackle the most challenging problems in the world, from realizing the future of transportation to fast-tracking medical innovations. We accomplish this by developing and operating the premier data and AI infrastructure platform, enabling our customers to harness profound data insights for business enhancement.

Our Model Serving product equips organizations with a cohesive, scalable, and governed solution for deploying and managing AI/ML models, ranging from traditional machine learning to proprietary large language models. It ensures real-time, low-latency inference, governance, monitoring, and lineage. As the adoption of AI surges, Model Serving stands as a fundamental component of the Databricks platform, allowing customers to operationalize models at scale with robust SLAs and cost efficiency.

In the role of Staff Engineer, you will significantly influence both the product experience and the core infrastructure of Model Serving. You will design and build systems that facilitate high-throughput, low-latency inference across CPU and GPU workloads, steer architectural strategy, and collaborate extensively with platform, product, infrastructure, and research teams to create an exceptional serving platform.

Jan 30, 2026
Preference Model
Full-time|On-site|San Francisco

Preference Model develops reinforcement learning environments that mirror the complexity of real-world tasks. The company focuses on building diverse RL tasks and detailed reward structures, aiming to push the boundaries of artificial intelligence. The founding team brings experience from developing data infrastructure and datasets for Claude at Anthropic, and Preference Model works closely with top AI research labs.

Role overview

The Senior Software Engineer - Reinforcement Learning Environments position centers on designing and delivering RL environments that challenge and improve current AI models. This role involves leading complex projects, including multi-step workflows and realistic stakeholder interactions, within a large codebase. Engineers work directly with the founders and a small, collaborative team, delivering environments used for training advanced models at partner labs. The position provides significant autonomy, regular feedback, and support for professional development.

What you will do

- Design, build, and iterate on reinforcement learning tasks, taking them from concept through evaluation.
- Lead the development of sophisticated environments, focusing on complex workflows and coding standards.
- Interact with coding agents, review their outputs, and identify subtle failures.
- Analyze whether issues stem from model limitations or environment design, then redesign tasks to reveal deeper failure modes.
- Contribute to building and maintaining the core infrastructure and tools for the environments team.
- Mentor junior engineers as the team expands.

Location

This role is based in San Francisco.

Apr 24, 2026
Databricks
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California

At Databricks, we are driven by our commitment to empower data teams in tackling the world's most challenging problems, from transforming transportation to accelerating medical advancements. Our mission revolves around building and maintaining the world's premier data and AI infrastructure platform, enabling our clients to harness deep data insights for better business outcomes.

Foundation Model Serving is the API product for hosting and serving advanced AI model inference, covering both open-source models like Llama, Qwen, and GPT OSS and proprietary models such as Claude and OpenAI GPT. We welcome engineers with experience managing high-scale operational systems, including customer-facing APIs, Edge Gateways, or ML Inference services, even without a background in ML or AI. A passion for developing LLM APIs and runtimes at scale is essential.

As a Staff Engineer, you will play a pivotal role in defining both the product experience and the underlying infrastructure. You will design and build systems that facilitate high-throughput, low-latency inference on GPU workloads with cutting-edge models, influence architectural direction, and work closely with platform, product, infrastructure, and research teams to deliver an exceptional foundation model API product.

The impact you will have:

- Design and implement core systems and APIs that drive Databricks Foundation Model Serving, ensuring scalability, reliability, and operational excellence.
- Collaborate with product and engineering leaders to define the technical roadmap and long-term architecture for workload serving.
- Make architectural decisions that improve performance, throughput, autoscaling, and operational efficiency for GPU serving workloads.
- Contribute directly to critical components within the serving infrastructure, from systems like vLLM and SGLang to token-based rate limiters and optimizers, ensuring efficient operation at scale.
- Work cross-functionally with product, platform, and research teams to translate customer requirements into dependable, high-performing systems.
- Establish best practices for code quality, testing, and operational readiness while mentoring fellow engineers through design reviews and technical support.
- Represent the team in cross-departmental technical discussions, influencing Databricks’ wider AI platform strategy.

Jan 30, 2026
Baseten
Full-time|On-site|San Francisco

ABOUT BASETEN

At Baseten, we empower leading AI companies such as Cursor, Notion, and Abridge by delivering mission-critical inference capabilities. Our platform integrates applied AI research, versatile infrastructure, and intuitive developer tools, enabling organizations at the forefront of AI to deploy cutting-edge models seamlessly. With our recent $300M Series E funding from top investors like BOND and Greylock, we are rapidly expanding. Join us to create the ultimate platform for engineers to launch AI products.

THE ROLE

We are seeking a passionate, customer-focused software engineer to join our team. You will take ownership of features, including multi-node training and serverless reinforcement learning (RL), guiding them from concept to MVP and beyond. Your responsibilities will span the entire technology stack, from API and UI design to infrastructure architecture. By diving deep into model fine-tuning, you will gain valuable insight into user workflows. Collaborating closely with research engineers, you will apply advanced training techniques to solve real user challenges. If you are eager to explore the intricacies of AI training, we want to hear from you!

THE PRODUCT

Discover what we have accomplished so far:

- Comprehensive Product Overview
- Training Documentation Overview
- The Journey of Our Training Product
- Our Research Endeavors

EXAMPLE INITIATIVES

Checkpointing Pipeline: Our automated checkpointing feature ensures that model versions created during training are securely backed up to the cloud, allowing users to deploy checkpoints with minimal friction.

Jan 22, 2026
Perplexity
Full-time|On-site|San Francisco

Join Perplexity as a Research Engineering Manager, where you will lead a team of exceptional AI researchers and engineers dedicated to crafting the advanced models that power our products. Our team has pioneered some of the most sophisticated models in agentic research, query understanding, and other critical domains that demand precision and depth. As we broaden our user base and expand our product offerings, our proprietary models are increasingly essential for delivering a premium experience to the world's most discerning users.

You will explore our extensive datasets of conversational and agentic queries, applying state-of-the-art training methodologies to improve AI model performance. Through proactive technical and organizational leadership, you will empower your team to create cutting-edge models for the applications most significant to our business and our users.

Feb 4, 2026
Hover
Full-time|$139K/yr - $172K/yr|On-site|San Francisco

At Hover, we empower individuals to design, enhance, and safeguard the properties they cherish. Utilizing our proprietary AI, developed over a decade of extensive real property data, we address crucial questions such as “What will it look like?” and “What will it cost?” Homeowners, contractors, and insurance professionals depend on Hover for fully measured, precise, and interactive 3D models of any property, all achievable through a smartphone scan in mere minutes.

Driven by curiosity, purpose, and a shared dedication to our customers, communities, and one another, we believe that the most innovative ideas stem from diverse viewpoints. We are proud to foster an inclusive, high-performance culture that encourages growth, accountability, and excellence. Supported by prominent investors like Google Ventures and Menlo Ventures, and trusted by industry leaders such as Travelers, State Farm, and Nationwide, we are revolutionizing how people perceive and engage with their spaces.

Why Hover is Seeking You

On our team, 3D models are not just a feature; they are fundamental to our offering. Each scan and data point we process empowers homeowners, insurers, and contractors to make informed, data-driven decisions. We are looking for a Software Engineer who is enthusiastic about geometry, automation, and making a tangible impact in the real world. In this role, you will design and implement systems that convert customer-captured imagery into highly accurate 3D models, enhancing the scalability and accuracy of Hover’s modeling pipeline. You will work closely with designers and engineers across frontend, backend, computer vision, and DevOps to introduce new capabilities, blending technical expertise with effective communication and cross-functional collaboration.

The 3D Modeling Pipeline team creates the essential tools our internal operations rely on to convert customer-captured scans into precise, detailed 3D models of buildings. The team also develops the pipeline and systems that process 3D data through both automated and manual steps and export data into customer-facing formats.

Feb 24, 2026
Benchling
Full-time|On-site|San Francisco, CA

Benchling creates software tools for scientists and biotech companies, supporting research and development across the globe. The platform serves more than 200,000 scientists, including teams at organizations like Sanofi and Moderna, as well as academic research labs. By connecting experiments, structured data, and AI-powered insights, Benchling works to reduce the time it takes for discoveries to reach real-world applications.

Role overview

This Software Engineer position focuses on integrating advanced scientific AI models into the Benchling platform. The main responsibility is building a scalable system for hosting and managing scientific models, along with frameworks that allow model creators to bring their solutions into the Benchling environment.

What you will do

- Develop and maintain a platform that supports scientific AI models at scale.
- Create frameworks that make it easier for model developers to contribute to Benchling.
- Experiment with new technologies to improve model integration and performance.
- Work closely with internal teams and external partners to deliver solutions.
- Help shape how scientists design molecules and apply AI in their research workflows.

Location

This role is based in San Francisco, CA.

Apr 22, 2026
Databricks
Full-time|$217K/yr - $312.2K/yr|On-site|San Francisco, California

At Databricks, we are dedicated to empowering data teams to tackle the most challenging global issues, whether transforming transportation or speeding up medical advancements. We achieve this by building and managing the world's leading data and AI infrastructure platform, enabling our clients to leverage deep data insights for business enhancement.

The Model Serving product at Databricks offers enterprises a cohesive, scalable, and governed platform for deploying and managing AI/ML models, from conventional ML to sophisticated, proprietary large language models. It facilitates real-time, low-latency inference while providing governance, monitoring, and lineage capabilities. As AI adoption surges, Model Serving becomes a central component of the Databricks platform, allowing customers to operationalize models efficiently and cost-effectively.

As a Senior Engineering Manager, you will lead a team responsible for both the product experience and the underlying infrastructure of Model Serving. The role involves shaping user-facing features while architecting for scalability, extensibility, and performance across CPU and GPU inference. You will collaborate closely with teams across the platform, product, infrastructure, and research domains.

Feb 1, 2026
OpenAI
Full-time|On-site|San Francisco

OpenAI is hiring a Software Engineer for Post-Training Research in San Francisco. This position centers on improving the performance and capabilities of advanced machine learning models after their initial training phase.

Role overview

Work closely with a skilled team to explore new ways of strengthening AI systems. The focus is on researching and developing methods that push the boundaries of what these models can achieve once training is complete.

Collaboration

Expect to contribute to ongoing research efforts and share insights with colleagues who are passionate about advancing AI. Teamwork and knowledge exchange are key parts of this role.

Location

This position is based in San Francisco.

Apr 29, 2026
OpenAI
Full-time|Hybrid|San Francisco

Join the Sora Team

At Sora, we are at the forefront of integrating video capabilities into OpenAI’s foundational models. Our hybrid research and product team is dedicated to expanding the boundaries of video model capabilities while ensuring their reliability and safety. We achieve this through rigorous research, experimentation, and real-world deployment, aiming to bring our advancements to a broader audience.

Your Role as a Distributed Systems/ML Engineer

In this pivotal role, you will improve the training throughput of our internal framework, empowering researchers to experiment with cutting-edge ideas. Your responsibilities include designing, implementing, and optimizing state-of-the-art AI models, ensuring your machine learning code is bug-free, and applying your expertise in supercomputer performance. We seek individuals who are passionate about performance optimization, possess a deep understanding of distributed systems, and have zero tolerance for bugs in code.

This position is based in San Francisco, CA, following a hybrid work model with three days in the office each week. We also provide relocation assistance for new team members.

Key Responsibilities:

- Collaborate closely with researchers to develop systems-efficient video models and architectures.
- Implement the latest techniques within our training framework to achieve exceptional hardware efficiency during training runs.
- Profile and optimize our training framework to ensure peak performance.

You Will Excel in This Role If You:

- Possess experience with multi-modal machine learning pipelines.
- Enjoy delving into system implementations and grasping their fundamentals to enhance performance and maintainability.
- Demonstrate strong software engineering expertise and proficiency in Python.
- Have experience understanding and optimizing training kernels.
- Are eager to explore stable training dynamics.

About OpenAI

OpenAI is a pioneering AI research and deployment organization committed to ensuring that general-purpose artificial intelligence benefits all of humanity. We continually push the boundaries of what is possible with AI, striving to create positive impact across many fields.

Mar 15, 2024
