Software Engineer Machine Learning Infrastructure Distributed Systems Staff Principal jobs in San Francisco – Browse 6,458 openings on RoboApply Jobs


Open roles matching “Software Engineer Machine Learning Infrastructure Distributed Systems Staff Principal” in San Francisco. 6,458 active listings on RoboApply Jobs.

Showing 1-20 of 6,458 jobs
Tubi, Inc.
Full-time|$227.2K/yr - $417K/yr|Hybrid|San Francisco, CA; Los Angeles, CA; New York, NY (Hybrid); USA - Remote

About the Role: Join our ML Infrastructure team as a Software Engineer, where you'll collaborate closely with the Machine Learning and Product teams to build top-tier machine learning inference platforms. These platforms drive vital services such as personalized recommendations, search, and content understanding at Tubi.

Your primary focus will be the development and maintenance of low-latency ML model serving systems for Deep Learning, LLM, and Search models. This includes building self-service infrastructure and critical components such as the inference engine, feature store, vector store, and experimentation engine.

In this role, you'll improve our service deployment and operational processes, with opportunities to contribute to open-source projects. You'll have architectural freedom to explore new frameworks, lead significant cross-functional projects, and expand the capabilities of our ML and Product teams.

We are currently hiring for two positions:

- Staff Software Engineer
- Principal Software Engineer

Additional Details: As a Principal Engineer, you will serve as a technical leader and visionary, guiding the advancement of our machine learning platform. You'll tackle complex technical challenges, shape architectural decisions, and mentor senior engineers, fostering a culture of excellence and continuous improvement. Your contributions will impact millions of users.

Mar 23, 2026
Decagon
Full-time|Remote|San Francisco

Join Decagon as a Staff Software Engineer specializing in Machine Learning Infrastructure. In this role, you will play a crucial part in enhancing and optimizing our machine learning systems. You will collaborate with a talented team of engineers to build scalable and efficient infrastructure that supports our AI-driven initiatives.

As a key contributor, you will leverage your expertise in software engineering and machine learning to solve complex challenges and drive innovation. Your work will span a variety of projects and help shape the future of our technology.

Feb 24, 2026
Databricks
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California

At Databricks, we are passionate about empowering data teams to tackle some of the world's most challenging problems, from security threat detection to cancer drug development. Our mission is to build and operate the leading data and AI infrastructure platform, enabling our customers to concentrate on the high-value challenges that are integral to their own objectives.

Founded in 2013 by the original creators of Apache Spark™, Databricks has rapidly evolved from a small office in Berkeley, California, to a global company with over 1,000 employees. Trusted by thousands of organizations, from startups to Fortune 100 companies, we are recognized as one of the fastest-growing SaaS companies worldwide.

Our engineering teams build highly sophisticated products that address significant needs in the industry. We continuously push the limits of data and AI technology while maintaining the resilience, security, and scalability essential for our customers' success on our platform. We operate one of the largest-scale software platforms, consisting of millions of virtual machines that generate terabytes of logs and process exabytes of data daily. At this scale, we frequently encounter cloud hardware, network, and operating system faults, and our software must effectively shield our customers from these failures.

Modern data analysis leverages advanced techniques, such as machine learning, that far exceed the capabilities of traditional SQL query engines. As a Software Engineer on the Runtime team at Databricks, you will help develop the next generation of distributed data storage and processing systems, which outperform specialized SQL query engines in relational query performance while providing the flexibility and programming abstractions to support a variety of workloads, from ETL to data science.

Examples of projects you may work on include:

- Apache Spark™: Contributing to the de facto open-source framework for big data.
- Data Plane Storage: Developing reliable, high-performance services and client libraries for storing and accessing vast amounts of data on cloud storage backends such as AWS S3 and Azure Blob Store.
- Delta Lake: A storage management system that merges the scalability and cost-effectiveness of data lakes with the performance and reliability of data warehouses, featuring low-latency streaming. Its higher-level abstractions and guarantees, including ACID transactions and time travel, significantly reduce the complexity of real-world data engineering architectures.
- Delta Pipelines: Simplifying the management of data engineering pipelines.

Jan 30, 2026
Achira
Full-time|On-site|San Francisco Office

Why Join Achira?

- Become part of an exceptional team of scientists, ML researchers, and engineers dedicated to transforming the landscape of drug discovery.
- Engage with cutting-edge machine learning infrastructure at unprecedented scale, leveraging extensive computing resources, vast datasets, and ambitious goals.
- Take ownership of significant projects from conception through architecture and deployment on large-scale infrastructure.
- Thrive in a culture that values thoroughness, speed, and a proactive, builder-oriented mindset.

About the Role

At Achira, we are developing state-of-the-art foundation models that address the most complex challenges in simulation for drug discovery and beyond. Our atomistic foundation simulation models (FSMs) serve as comprehensive representations of the physical microcosm, encompassing machine learning interatomic potentials (MLIPs), neural network potentials (NNPs), and various generative model classes.

We are looking for a Software Engineer who is enthusiastic about distributed computing and its applications in machine learning. You will play a pivotal role in designing and building the infrastructure for our ML data generation pipelines, model training, and fine-tuning workflows across large-scale distributed systems.

Your expertise will be crucial in keeping our compute clusters efficient, observable, cost-effective, and dependable, enabling us to advance the frontiers of ML development. If you are passionate about distributed systems, performance optimization, and cloud cost efficiency, we encourage you to apply.

You will be empowered to conceptualize and manage complex workloads across multiple vendors worldwide. Achira's mission revolves around computation, and providing seamless access to our uniquely tailored workloads at the lowest possible cost is critical to our success.

Oct 7, 2025
OpenAI
Full-time|Hybrid|San Francisco

About Our Team

Join the Sora team at OpenAI, where we are at the forefront of developing multimodal capabilities for our foundation models. Our hybrid research and product team is dedicated to seamlessly integrating multimodal functionality into our AI solutions, ensuring it is dependable, user-centric, and aligned with our vision of benefiting society at large.

Role Overview

As a Machine Learning Engineer specializing in Distributed Data Systems, you will design and scale the infrastructure that enables large-scale multimodal training and evaluation at OpenAI. Your role will involve managing complex distributed data pipelines, collaborating closely with researchers to convert their requirements into robust, production-ready systems, and enhancing pipelines that are essential for Sora's rapid iteration cycles.

We are seeking detail-oriented engineers with extensive experience in distributed systems who thrive in high-stakes environments and excel at building resilient infrastructure.

This position is located in San Francisco, CA, and follows a hybrid work model, requiring three days in the office each week. We also provide relocation assistance for new team members.

Key Responsibilities:

- Design, implement, and maintain data infrastructure systems, including distributed computing, data orchestration, distributed storage, streaming infrastructure, and machine learning systems, with a focus on scalability, reliability, and security.
- Ensure our data platform can scale rapidly while maintaining high reliability and efficiency.
- Collaborate with researchers to gain a deep understanding of their requirements, translating them into production-ready systems.
- Strengthen, optimize, and manage critical data infrastructure systems that support multimodal training and evaluation.

You Will Excel in This Role If You:

- Possess strong experience with distributed systems and large-scale infrastructure, coupled with a keen interest in data.
- Exhibit meticulous attention to detail and a commitment to building and maintaining reliable systems.
- Demonstrate solid software engineering fundamentals and effective organizational skills.
- Thrive in environments characterized by ambiguity and rapid change.

About OpenAI

OpenAI is an AI research and deployment company committed to ensuring that general-purpose artificial intelligence serves humanity. We continuously push the boundaries of AI capabilities and strive to create technology that benefits everyone.

Feb 6, 2026
Cloudflare, Inc.
Full-time|Hybrid

Join Cloudflare as a Software Engineer specializing in Distributed Systems and Infrastructure. In this role, you will be responsible for designing, implementing, and optimizing scalable systems that enhance the performance and reliability of our services. You will collaborate closely with cross-functional teams to develop innovative solutions that support our mission to help build a better Internet.

Mar 4, 2026
Whatnot
Full-time|On-site|San Francisco, CA

Role overview

Whatnot seeks a Software Engineer specializing in Machine Learning Infrastructure to develop and maintain the systems powering its machine learning applications. This position is based in San Francisco, CA, and centers on building the technical backbone that supports machine learning efforts across the company.

What you will do

- Develop and improve frameworks that enable machine learning throughout Whatnot's platforms.
- Collaborate with teams from multiple disciplines to design infrastructure that can scale as needs grow.
- Support seamless integration of machine learning models into existing products.

Apr 23, 2026
Cohere
Full-time|On-site|San Francisco

Who are we?

At Cohere, our mission is to elevate intelligence to benefit humanity. We specialize in training and deploying cutting-edge models for developers and enterprises building AI systems that deliver extraordinary experiences such as content generation, semantic search, retrieval-augmented generation, and intelligent agents. We view our work as pivotal to the broad adoption of AI technologies.

We are passionate about what we build. Every team member plays a vital role in enhancing our models' capabilities and the value they provide to our customers. We thrive on hard work and speed, always prioritizing our clients' needs.

Cohere is a diverse team of researchers, engineers, designers, and more, all dedicated to their craft. Each individual is a leading expert in their field, and we recognize that a variety of perspectives is essential to developing exceptional products. Join us in our mission and help shape the future of AI!

Why this role?

Are you excited about architecting high-performance, scalable, and reliable machine learning systems? Do you aspire to shape and build the next generation of AI platforms that power advanced NLP applications? We are seeking talented Members of Technical Staff to join our Model Serving team at Cohere. This team is responsible for the development, deployment, and operation of our AI platform, which delivers Cohere's large language models via user-friendly API endpoints.

In this role, you will collaborate with multiple teams to deploy optimized NLP models in production settings characterized by low latency, high throughput, and robust availability. Additionally, you will have the opportunity to work directly with customers to create tailored deployments that fulfill their unique requirements.

Jan 12, 2026
Voxel
Full-time|On-site|San Francisco, CA

Role Overview

Voxel is hiring a Senior or Staff Software Engineer focused on Machine Learning Infrastructure in San Francisco, CA. This position centers on building and maintaining scalable infrastructure that supports the company's machine learning products and services.

What You Will Do

- Design, develop, and maintain machine learning infrastructure for production systems
- Work with teams across engineering, product, and data to streamline ML workflows
- Optimize systems for performance, reliability, and operational efficiency

Collaboration

This role involves frequent collaboration with colleagues from multiple disciplines to ensure machine learning solutions are robust and scalable.

Apr 14, 2026
Applied Compute
Full-time|On-site|San Francisco

About Us

At Applied Compute, we build Specific Intelligence solutions for enterprises: agents that learn continuously from an organization's processes, data, expertise, and objectives. We see a significant gap between the capabilities of AI models in isolation and their practical applications in real-world business contexts. Such systems often fall short because they lack adaptability to feedback. To address this, we are building continual learning infrastructure that captures context, memory, and decision-making processes throughout the enterprise, enabling specialized agents to effectively execute real tasks.

What Excites Us: We operate at a unique intersection. Our product team builds the platform that powers a new generation of digital coworkers; our research team pushes the boundaries of post-training and reinforcement learning, creating innovative product experiences; and our applied research engineers work closely with clients to deploy models into production. This blend of strong product focus, deep research, and hands-on customer engagement is crucial for integrating AI into the enterprise. We are product-driven, research-informed, and actively engaged with our clients.

Our Team: Our team consists of engineers, researchers, and operators, many of whom are former founders. We have built RL infrastructure at leading organizations like OpenAI and Scale AI, and developed systems at Together, Two Sigma, and Watershed. We serve Fortune 50 clients alongside companies like DoorDash, Mercor, and Cognition. Our work is backed by investors including Benchmark, Sequoia, and Lux.

Who Thrives in Our Environment: We seek individuals eager to apply cutting-edge research and complex systems to real-world challenges. You should adapt quickly to new environments, whether a fresh codebase, a client's data architecture, or an unfamiliar problem domain. A genuine enjoyment of customer interactions (listening, empathizing, and understanding how tasks are accomplished within their organizations) is essential. Those with entrepreneurial backgrounds, extensive side projects, or demonstrated end-to-end ownership typically excel here.

Oct 29, 2025
Pluralis Research
Full-time|On-site|San Francisco

Overview

Pluralis Research is at the forefront of Protocol Learning, specializing in the collaborative training of foundation models. Our approach ensures that no single participant ever holds or can obtain a complete copy of the model. This initiative aims to create community-driven, collectively owned frontier models that operate on self-sustaining economic principles.

We are seeking experienced Senior or Staff Machine Learning Engineers with over 5 years of expertise in distributed systems and large-scale machine learning training. In this role, you will design and implement a groundbreaking substrate for training distributed ML models that function effectively over consumer-grade internet connections.

Apr 1, 2026
Specter
Full-time|On-site|San Francisco

Company Overview

At Specter, we are pioneering a software-defined "control plane" designed to enhance real-world perception of physical assets. Our mission begins with safeguarding American businesses by providing them with comprehensive insight into their physical environments.

To achieve this, we are developing a robust hardware-software ecosystem leveraging multi-modal wireless mesh sensing technology. This innovation allows us to reduce the cost and time of sensor deployment by a factor of ten. Ultimately, our platform aims to serve as the perception engine for businesses, enabling real-time visibility and autonomous management of their operational perimeters.

Our co-founders, Xerxes and Philip, are deeply committed to empowering our partners in the rapidly evolving landscape of physical AI and robotics. We are a dynamic, rapidly expanding team with talent from Anduril, Tesla, Uber, and the U.S. Special Forces.

Position Overview

Specter is seeking a Machine Learning Infrastructure Engineer to build and optimize the ML systems that drive real-time perception and inference across our edge-cloud platform. This position involves overseeing the training, deployment, and improvement of computer vision and sensor fusion models that enable autonomous monitoring and decision-making for our clients' physical assets.

Key Responsibilities:

- Design and implement scalable ML training pipelines for computer vision applications, including object detection, tracking, classification, and segmentation.
- Develop efficient model serving infrastructure for real-time inference on edge devices with limited compute and power budgets.
- Optimize models for deployment on embedded hardware, using techniques such as quantization, pruning, TensorRT, ONNX, and CoreML.
- Create continuous training and evaluation systems that improve model performance through feedback loops on production data.
- Build data pipelines for the ingestion, labeling, versioning, and management of large multi-modal sensor datasets, including video, radar, lidar, and thermal data.
- Implement model monitoring frameworks, A/B testing methodologies, and performance analytics for deployed perception systems.
- Collaborate with perception researchers to move models from research to production at scale across thousands of edge nodes.
- Build tools and infrastructure for distributed training, hyperparameter optimization, and experiment tracking.

Oct 3, 2025
Gimlet Labs
Full-time|On-site|San Francisco

At Gimlet Labs, we are building the first heterogeneous neocloud designed specifically for AI workloads. As demand for AI systems surges, traditional homogeneous infrastructure faces critical limits in power, capacity, and cost. Our platform decouples AI workloads from their underlying hardware, intelligently partitioning tasks and orchestrating them onto the most suitable hardware for optimal performance and efficiency. This strategy enables heterogeneous systems that span multiple vendors and generations, including cutting-edge accelerators, delivering significant gains in performance and cost-effectiveness at scale.

Building on this foundation, Gimlet is establishing a robust neocloud for agentic workloads. Our clients deploy and manage their workloads via stable, production-ready APIs, without needing to navigate hardware selection or performance tuning.

We collaborate with foundation labs, hyperscalers, and AI-native companies to run real production workloads capable of scaling to gigawatt-class AI datacenters.

We are currently seeking a Member of Technical Staff specializing in ML systems and inference. In this role, you will design and build inference systems that run complete models in real production environments. You will operate at the intersection of model architecture and system performance to ensure that inference is fast, predictable, and scalable.

This position is ideal for engineers with a deep understanding of modern model execution and a passion for optimizing latency, throughput, and memory utilization across the entire inference lifecycle.

Mar 10, 2026
Decagon
Full-time|Remote|San Francisco

Join Decagon as a Senior Software Engineer specializing in Machine Learning Infrastructure. In this pivotal role, you will design and optimize the systems that support our machine learning models and applications. Your expertise will help drive innovation and efficiency in our ML pipelines, ensuring that our algorithms are fast, scalable, and reliable.

You'll collaborate with cross-functional teams to implement cutting-edge solutions that enhance our product offerings. If you are passionate about advancing machine learning technologies and thrive in a dynamic environment, we want to hear from you!

Mar 26, 2026
DoorDash Inc.
Full-time|$137.1K/yr - $246.8K/yr|Hybrid|San Francisco, CA; Sunnyvale, CA

Join us in creating the most dependable on-demand logistics engine for last-mile retail delivery! We are looking for a seasoned machine learning engineer to help develop cutting-edge growth and personalization models that will elevate DoorDash's expanding retail and grocery services.

About the Role

We are seeking an Applied Machine Learning expert to join our team. As a Staff Machine Learning Engineer, you will conceptualize, design, implement, and validate algorithmic improvements that enrich the growth and personalization experiences central to our rapidly evolving grocery and retail delivery business. Leveraging our advanced data and machine learning infrastructure, you will build novel ML solutions to make the consumer search experience more relevant, seamless, and enjoyable across grocery, convenience, and other retail categories. A strong command of production-level machine learning and proven experience addressing end-user challenges while collaborating effectively with multidisciplinary teams is essential.

This position reports to the engineering manager on our Personalization team and is hybrid, combining in-office and remote work.

Mar 11, 2026
Rockstar
Full-time|On-site|San Francisco, California, United States

Rockstar is recruiting a talented Backend Software Engineer for a rapidly expanding startup that is pioneering AI infrastructure for the next generation of smart products. This company provides AI startups with the tools to design, fine-tune, evaluate, deploy, and maintain their specialized models across domains such as text, vision, and embeddings. Think of them as the 'AWS for AI models': a comprehensive backend for fine-tuning, reinforcement learning, inference, and ongoing model maintenance. Their clientele includes Series A to C AI companies building enterprise-grade products, with a simple promise: to enhance your AI systems.

As a Backend Software Engineer focusing on ML Infrastructure, you will play a pivotal role in designing, building, and scaling the essential systems that enable large-scale model training and deployment. Your responsibilities will include developing distributed training pipelines, establishing cloud-native infrastructure, and creating internal developer platforms that support fine-tuning, reinforcement learning, and inference at scale. This position combines backend engineering with machine learning systems, allowing you to work closely with ML engineers while owning production-grade infrastructure.

This role is a great opportunity for an early-career engineer eager to work on real distributed systems, GPU workloads, and cutting-edge ML infrastructure, far removed from simple dashboards or CRUD applications.

Dec 22, 2025
Lila Sciences
Full-time|On-site|San Francisco, CA USA

Lila Sciences is seeking a Principal Machine Learning Engineer based in San Francisco, CA. This position centers on designing and building advanced machine learning algorithms to support scientific progress in healthcare and agriculture.

Role overview

The Principal Machine Learning Engineer will develop and implement new algorithms that strengthen Lila Sciences' product capabilities and support data-driven decisions. The work will directly affect both internal business outcomes and broader scientific initiatives.

Collaboration and impact

This role involves close collaboration with engineers and data scientists on projects that address a range of technical and scientific challenges. The solutions created will directly influence the future of healthcare and agriculture by applying advanced analytics and machine learning techniques.

Apr 28, 2026
Whatnot
Full-time|Remote|San Francisco, CA

Whatnot is a livestream shopping marketplace connecting buyers and sellers across categories such as trading cards, fashion, electronics, and live plants. The platform supports sellers in building real businesses and is shaping live commerce at scale in North America and Europe. The team works as a distributed group with members based in the US, UK, Ireland, Poland, Germany, and Australia. Agility, user focus, and meaningful work are central to the company culture. Whatnot has been recognized among the fastest-growing marketplaces and was named the #1 Best Startup Employer in America by Forbes.

Learn more about Whatnot through these resources: Core Values, NYT: Fastest Growing Marketplaces, Forbes: Best Startup Employer, Whatnot News, and the Engineering Blog.

Role overview

The Software Engineer - Machine Learning Infrastructure role centers on building and improving the systems that support machine learning and self-hosted large language model applications at Whatnot. This position involves close collaboration with machine learning scientists to bring advanced models into production, directly enabling new product features and experiences.

What you will do

- Design and develop infrastructure for reliable and efficient machine learning at scale
- Work on low-latency serving of large models
- Support distributed training and high-throughput GPU inference
- Help advance Whatnot's mission to unlock new capabilities through AI and machine learning

Location: San Francisco, CA

Apr 22, 2026
Sieve
Full-time|On-site|San Francisco

About Us

Sieve is an AI research lab dedicated solely to video data. We harness exabyte-scale video infrastructure and innovative video understanding techniques, along with a multitude of data sources, to create datasets that advance the field of video modeling. Video constitutes roughly 80% of internet traffic and is a vital medium fueling creativity, communication, gaming, AR/VR, and robotics. Our mission is to tackle the most significant challenge in developing these applications: acquiring high-quality training data.

With a small, highly skilled team of 15, we have formed strategic partnerships with leading AI labs and achieved $XXM in revenue last quarter alone. Our Series A funding round last year was backed by Matrix Partners, Swift Ventures, Y Combinator, and AI Grant.

About the Role

As a Distributed Systems Engineer at Sieve, you will design and implement systems that efficiently manage the compute, scheduling, and orchestration of complex machine learning and ETL pipelines. Your work will ensure these systems run quickly, reliably, and cost-effectively while processing large volumes of video data.

You will thrive in this role if you are passionate about system uptime, have experience with cloud technologies, and enjoy working on high-performance distributed systems involving thousands of GPUs. You will also play a key role in building excellent internal tools and CI/CD pipelines that enable rapid iteration.

Apr 26, 2025
Unity Technologies
Full-time|On-site|San Francisco, CA, USA

Role overview

Unity Technologies is looking for a Staff Machine Learning Engineer focused on offline infrastructure. Based in San Francisco, this position centers on building and refining the systems that underpin the performance and scalability of machine learning workflows.

What you will do

- Design and develop offline infrastructure to support machine learning projects
- Work closely with a team to improve system scalability and reliability
- Lead efforts to advance machine learning capabilities within Unity

The team

This group combines technical skill with creative problem-solving to expand what machine learning can accomplish at Unity.

Apr 24, 2026
