MLOps Engineer jobs in San Francisco – Browse 5,193 openings on RoboApply Jobs

MLOps Engineer jobs in San Francisco

Open roles matching “MLOps Engineer” with location signals for San Francisco. 5,193 active listings on RoboApply Jobs.


1 - 20 of 5,193 Jobs
Apply
Hayden AI
Full-time|Hybrid|San Francisco HQ Office

Hayden AI builds mobile perception systems that help transit agencies and city governments address real-world challenges. The team focuses on computer vision to improve bus lane and bus stop enforcement, modernize transportation technology, and support safer, more sustainable streets. The MLOps Engineer will join the Perception Deep Learning team in San Francisco, working in a hybrid model (at least three days per week in the office). This role centers on building and advancing Hayden AI’s machine learning platform, collaborating with perception, deep learning, and platform engineers to create infrastructure for training and deploying ML models. The position involves shaping the architecture for data ingestion, training pipelines, deployment, monitoring, and governance.
What you will do
Design, implement, and maintain cloud-based workflows for deploying and managing AI models.
Work with cross-functional teams to identify workflow bottlenecks and deliver solutions that improve efficiency.
Deploy new features and updates quickly while maintaining quality and reliability, and apply cost-saving strategies to optimize infrastructure spending.
Stay up to date with new MLOps tools and technologies, integrating them to improve ML workflows.
Participate in the team’s software development process, including design and code reviews, brainstorming, and maintaining accurate documentation.
Requirements
Bachelor’s degree and 3-4 years of experience in a related field.

Apr 29, 2026
Apply
TRM Labs
Full-time|$200K/yr - $220K/yr|On-site|San Francisco, CA

Contribute to a Safer World
At TRM Labs, we leverage blockchain analytics and artificial intelligence to empower law enforcement, national security agencies, financial institutions, and cryptocurrency enterprises in the fight against crypto-related fraud and financial crime. Our advanced blockchain intelligence and AI platforms are designed to trace transactions, identify illicit activities, build investigative cases, and establish a comprehensive view of potential threats. Trusted by leading organizations worldwide, TRM is committed to fostering a safer, more secure environment for everyone.
The AI Engineering Team is dedicated to driving the development of next-generation AI applications, specifically focusing on Large Language Models (LLMs) and agentic systems. Our mission is to create resilient pipelines, high-performance infrastructure, and operational tools that facilitate the swift, safe, and scalable deployment of AI systems. We manage extensive petabyte-scale data pipelines, deliver model outputs with millisecond-level latency, and ensure observability and governance to make AI production-ready. Our team actively evaluates and integrates state-of-the-art tools in the LLM and agent domain, such as open-source stacks, vector databases, evaluation frameworks, and orchestration tools, which enhance TRM's ability to innovate more rapidly than the competition.
In the role of Senior MLOps Engineer specializing in LLMOps, you will play a pivotal role in constructing and scaling the technical infrastructure required for AI and ML systems.
Responsibilities include:
Develop reusable CI/CD workflows for model training, evaluation, and deployment, incorporating tools like Langfuse, GitHub Actions, and experiment tracking.
Automate model versioning, approval processes, and compliance checks across various environments.
Construct a modular and scalable AI infrastructure stack, integrating vector databases, feature stores, model registries, and observability tools.
Collaborate with engineering and data science teams to integrate AI models and agents into real-time applications and workflows.
Regularly assess and incorporate cutting-edge AI tools (e.g., LangChain, LlamaIndex, vLLM, MLflow, BentoML).
Enhance AI reliability and governance, promoting experimentation while ensuring compliance, security, and system uptime.
Optimize AI/ML model performance by ensuring data accuracy, consistency, and reliability to improve training and inference processes.
Deploy infrastructure that supports both offline and online LLM evaluations.

Nov 24, 2025
Apply
Bright Machines
Full-time|On-site|San Francisco, California

Join Bright Machines as a Senior Platform/MLOps Engineer where you will play a crucial role in enhancing our platform capabilities. You will collaborate with cross-functional teams to design and implement robust MLOps solutions that streamline operations and improve efficiency. Your expertise will help us optimize our machine learning workflows and elevate our platform's performance.

Mar 28, 2026
Apply
TRM Labs
Full-time|$220K/yr - $240K/yr|On-site|San Francisco, CA

Join Us in Building a Safer World
At TRM Labs, we specialize in blockchain analytics and AI solutions designed to empower law enforcement, national security agencies, financial institutions, and cryptocurrency businesses in combating fraud and financial crime. Our innovative platforms harness blockchain intelligence and AI to trace financial flows, identify suspicious activities, and provide comprehensive threat assessments. Trusted by leading organizations globally, TRM is dedicated to creating a safer and more secure environment for all.
Our AI Engineering Team focuses on pioneering next-generation AI applications, particularly in the realm of Large Language Models (LLMs) and agentic systems. Our goal is to develop resilient pipelines and high-performance infrastructures that facilitate the swift, safe, and scalable deployment of AI systems. We manage vast data pipelines, ensure rapid model serving, and maintain the observability and governance essential for making AI production-ready. Our team is actively engaged in evaluating and integrating state-of-the-art tools within the LLM and agent ecosystem, including open-source frameworks, vector databases, and orchestration tools that enhance TRM’s innovative capabilities.
As a Staff MLOps Engineer concentrating on LLMOps, you will play a pivotal role in constructing and scaling the technical infrastructure for our AI/ML systems.
Your responsibilities will include:
Developing reusable CI/CD workflows for model training, evaluation, and deployment, incorporating tools like Langfuse, GitHub Actions, and experiment tracking.
Automating model versioning, approval workflows, and compliance checks across various environments.
Building modular and scalable AI infrastructure stacks, including vector databases, feature stores, model registries, and observability tools.
Collaborating with engineering and data science teams to integrate AI models and agents into real-time applications and workflows.
Continuously assessing and adopting cutting-edge AI tools (e.g., LangChain, LlamaIndex, vLLM, MLflow, BentoML).
Enhancing AI reliability and governance to enable experimentation while ensuring compliance, security, and operational uptime.
Improving AI/ML model performance, ensuring data accuracy, consistency, and reliability for superior model training and inference.
Deploying infrastructure to support both offline and online evaluations of LLMs.

Jan 6, 2026
Apply
Grindr
Full-time|Hybrid|San Francisco

This is a hybrid position based in our Bay Area (SF or Palo Alto) or Chicago offices, requiring in-office attendance on Tuesdays and Thursdays.
Why Join Us?
At Grindr, we believe AI has the potential to transform the dating landscape. As a Staff MLOps Engineer, you will be pivotal in developing and managing the infrastructure, tools, and scalable systems that empower impactful AI initiatives. You will architect and maintain the platforms facilitating data ingestion, feature computation, model training, automated evaluation, deployment, and continuous monitoring for our machine learning teams that craft recommendations, LLM-powered experiences, advertisements, visual search, and growth, trust, and safety mechanisms. Your role will involve designing foundational systems that enable our ML engineers to innovate swiftly, deploy models reliably, and manage them confidently in production.
What You Will Do
We are seeking an exceptional MLOps engineer passionate about enabling ML at scale, with experience handling over 6 million daily active users and hundreds of millions of daily interactions.
You will thrive in building robust, automated pipelines, developing reliable production training and inference systems, and establishing the infrastructure and processes that enhance ML product development across our organization. In this role, you will define and implement the strategy for Grindr’s ML platform and oversee the end-to-end model lifecycle.
Key Responsibilities:
Construct and oversee end-to-end ML pipelines for data ingestion, feature computation, model training, validation, deployment, and inference, all while managing substantial data scales.
Establish and oversee a feature store, ensuring feature consistency, lineage, and reuse across teams.
Utilize top-tier tools for managing deployment, scheduling, and environments within the specialized realm of ML infrastructure.
Develop automated model deployment workflows that incorporate CI/CD, secure rollout strategies, and reproducibility guarantees.
Implement monitoring and observability solutions for ML systems, covering data quality checks, drift detection, performance metrics, and alerting mechanisms.
Support and build training environments featuring experiment tracking, distributed training, hyperparameter tuning, and artifact and environment management.
Collaborate with ML engineers and data engineers to streamline workflows, enhance model iteration speed, and enforce MLOps best practices.

Dec 22, 2025
Apply
Merge Labs
Full-time|On-site|San Francisco Bay Area

Merge Labs is a pioneering research laboratory dedicated to uniting biological intelligence with artificial intelligence, aiming to enhance human potential, autonomy, and overall experience. We are innovating groundbreaking methods for brain-computer interfaces that facilitate high-bandwidth interactions with the brain, seamlessly integrate advanced AI, and ensure safety and accessibility for all users.
About the Team:
At Merge Labs, we are developing the future of brain-computer interfaces through the integration of cutting-edge advancements in synthetic biology, neuroscience, AI, and non-invasive imaging. Our cross-functional data and software engineering team collaborates closely with wet-lab scientists, automation engineers, and data scientists to construct a digital infrastructure that expedites molecular discoveries and optimizes device performance.
About the Role:
We are seeking a Senior / Principal ML Engineer to lead the development and ownership of the digital infrastructure supporting Merge's extensive computational operations. In this role, you will design distributed training and inference systems, experiment tracking, and deployment frameworks, empowering data scientists to swiftly iterate on models encompassing de novo molecular design, biophysical modeling, signal processing, and computer vision.
Your architectural contributions will transform research prototypes into production-ready systems, enhancing the speed, rigor, and fluidity of every computational scientist's workflow.
Key Responsibilities:
Develop the scientific and engineering framework for active learning and closed-loop optimization, including data ETL, machine learning modeling, and library architecture.
Work alongside computational scientists to establish achievable optimization goals and encode domain-specific knowledge and constraints.
Create model registries, evaluation frameworks, and automated reporting systems for benchmarking and experimental comparisons.
Implement CI/CD pipelines and resource orchestration using tools like Kubernetes, Ray, or Slurm.
Define and manage the ML engineering roadmap, providing mentorship to other computational scientists while establishing best practices for code quality, testing, and reproducibility.

Dec 11, 2025
Apply
Crusoe
Full-time|$175K/yr - $250K/yr|On-site|San Francisco, CA - US

At Crusoe, we are on a mission to drive the proliferation of energy and intelligence in the digital age. We are developing an innovative platform that enables individuals to harness the power of AI for ambitious projects, all while ensuring unparalleled scale, speed, and sustainability. Join us at the forefront of the AI revolution, where sustainable technology meets transformative cloud infrastructure. At Crusoe, you will be part of a team that is committed to meaningful innovation and making a significant impact.
About the Role:
We are looking for a Senior to Senior Staff level Solutions Engineer to collaborate closely with our key enterprise clients as they deploy AI and machine learning workloads on Crusoe's cutting-edge GPU infrastructure. This role is hands-on and customer-centric, requiring extensive technical knowledge in Kubernetes, MLOps, and cloud infrastructure. You will lead clients through the entire deployment journey, overseeing the proof of concept (PoC) process, optimizing workloads after the sale, and serving as an essential technical liaison between our clients and engineering teams.
Successful candidates will possess a strong passion for AI infrastructure, be proficient in containerized environments, and have the ability to effectively translate workloads across various cloud platforms.
What You'll Be Working On:
Customer Enablement: Spearhead the technical onboarding and deployment of sophisticated AI/ML workloads with strategic enterprise customers, taking ownership of the PoC through to post-sales optimization.
Kubernetes + MLOps Focus: Design and implement ML workloads utilizing Kubernetes-based technologies (e.g., Ray, Kubeflow) while ensuring optimal performance, scalability, and efficiency.
Infrastructure-Centric Thinking: Engage directly with Crusoe infrastructure to deploy and fine-tune AI/ML workloads, guaranteeing performance at both the container and hardware levels.
Cross-Cloud Translation: Assist clients in migrating and adapting workloads across AWS, Azure, and GCP, while clearly articulating the trade-offs between cloud-native and Crusoe-native strategies.
Technical Storytelling: Facilitate workshops, live demonstrations, and solution reviews. Contribute to case studies, solution briefs, and blog articles that showcase real-world customer success stories.
Voice of the Customer: Provide feedback to internal engineering and product teams to continuously enhance Crusoe's platform based on practical implementation experiences.
What You'll Bring to the Team:
Deep Kubernetes Expertise: 7+ years of experience in building and deploying containerized applications.

Nov 13, 2025
Apply
CreatorIQ
Full-time|On-site|San Francisco

At CreatorIQ, we are the leading platform for creator-driven growth, trusted by over 1,300 global brands and agencies. Our mission is to humanize business interactions and amplify the impact of individuals. We uphold our core values by being intentional, striving for excellence daily, embracing collaborative journeys, and maintaining integrity in all our endeavors. Our accolades include recognition as one of the best companies to work for in programs such as BuiltIn LA and NY, alongside being honored as a Fastest-Growing Company in North America by the Deloitte Technology Fast 500™ for four consecutive years. We have also received recognition from IDC MarketScape and The Forrester New Wave™. Our flexible work model promotes collaboration and innovation while accommodating individual working preferences. Join us in our quest to revolutionize the industry with your innovative spirit and passion!
Senior MLOps Engineer (Applied AI Focus)
As a Senior MLOps Engineer within our Product Innovations team, you'll take on the role of technical lead for Applied MLOps, effectively connecting experimental data science with production-level efficiency. Your primary focus will be on ground-truth generation, model evaluation, and pre/post-processing decisions in a scaled vector-embeddings ecosystem.

Feb 7, 2026
Apply
Prime Intellect
Full-time|On-site|San Francisco

Pioneering the Future of Open Superintelligence
At Prime Intellect, we are on a mission to construct the open superintelligence ecosystem, encompassing cutting-edge agentic models alongside the infrastructure that empowers individuals to create, train, and deploy them seamlessly. We unify global computational resources into an intuitive control plane, complemented by a comprehensive reinforcement learning post-training suite, including dynamic environments, secure sandboxes, verifiable evaluations, and our innovative asynchronous RL trainer. Our platform empowers researchers, startups, and enterprises to execute end-to-end reinforcement learning at unprecedented scales, allowing for the adaptation of models to diverse tools, workflows, and deployment scenarios.
As a Research Engineer within our Reasoning team, you will be instrumental in driving our technological vision, particularly in the area of test-time compute scaling research. If you thrive on harnessing synthetic data to enhance LLM reasoning capabilities, we want to hear from you! Discover more about our exciting project by visiting our insight on decentralized training in the inference-compute paradigm.

Aug 19, 2024
Apply
Databricks
Full-time|$153K/yr - $210.4K/yr|On-site|San Francisco, California

Location: San Francisco, Bellevue, Amsterdam
Role Overview: Are you a well-respected technical authority in Generative AI and MLOps, eager to shape the future of production AI Agentic Systems? This Senior Developer Advocate position offers a unique opportunity to take strategic ownership over developer engagement and technical discussions surrounding Agent Bricks on the Databricks Data Intelligence Platform. As a vital link between our engineering teams and the global developer community, you will enhance the careers of data scientists and AI engineers by integrating advanced research, customer insights, and best practices into scalable, production-ready reference implementations, presentations, and demos. You will play a key role in fostering a global community for AI Agentic workflows and LLMOps, with a particular focus on MLflow and Agentic System governance.
More About The DevRel Team
At Databricks, we are dedicated to empowering data and AI teams to tackle the world's toughest challenges. Our mission in Developer Relations (DevRel) is to support data practitioners, data scientists, and the wider developer ecosystem by building vibrant communities, creating exceptional content, and nurturing a truly reciprocal relationship with our users.
Our primary goal is to drive awareness and adoption of the Databricks Data Intelligence Platform.
The Impact You Will Have
You will utilize your technical expertise, community-building capabilities, and market insights to drive awareness and adoption, establishing Databricks as the premier technical leader in enterprise AI governance and Agentic Systems.
Strategic Roadmap Ownership: Develop and implement the global technical advocacy strategy and roadmap for a critical aspect of Databricks AI Agentic Systems (e.g., RAG Architectures, AI Agent Evaluation, or LLM Governance), ensuring alignment with product objectives and measurable metrics.
Evangelism: Collaborate with field AI engineers to design and deploy production-grade reference implementations and create impactful live demonstrations that address real-world enterprise GenAI challenges, highlighting best practices in performance, evaluation, and security. You will advocate for Agent Bricks as the definitive solution to 'Take your AI to your Data.'
Technical Content Scaling: Produce high-quality, actionable educational resources, including comprehensive course materials and documentation.

Feb 1, 2026
Apply
Lavendo
Full-time|$225K/yr - $315K/yr|On-site|San Francisco

About Us
At Lavendo, we are at the forefront of AI cloud infrastructure, rapidly expanding with a significant global presence that includes R&D centers in North America, Europe, and Israel. Our exceptional team of engineers and AI researchers is dedicated to creating innovative solutions that provide the essential infrastructure for the next wave of AI-driven enterprises. We empower organizations, from Fortune 500 companies to pioneering AI startups and research institutions, allowing them to address complex AI challenges without incurring heavy infrastructure costs or the need to develop extensive in-house AI/ML teams.
Our Mission
We aim to democratize access to top-tier AI infrastructure, enabling organizations of all sizes to transform ambitious AI goals into tangible outcomes. Our culture fosters creativity, embraces challenges, and thrives on teamwork.
Your Role
As a Cloud Solutions Architect (Pre-Sales), you will serve as a vital technical partner to some of the most forward-thinking AI teams globally. You will engage directly with cutting-edge GPU infrastructure, including the latest NVIDIA technology, to assist clients in designing, deploying, and optimizing AI workloads at scale.
This high-profile position lies at the intersection of deep technical expertise and strategic customer interaction, significantly shaping customer experiences and platform adoption.
Key Responsibilities
Act as a trusted technical advisor to customers throughout the entire pre-sales and onboarding process.
Lead proof-of-concept initiatives, architectural workshops, presentations, and training on GPU cloud technologies and industry best practices.
Work closely with customers to understand their business needs and translate them into scalable solution architectures.
Develop and document Infrastructure as Code solutions, reference architectures, and technical guides in collaboration with support engineers and technical writers.
Assist clients in optimizing machine learning pipeline performance and resource efficiency.
Serve as a cross-functional technical expert, connecting product, technical support, and marketing teams with customer requirements.
Represent Lavendo at external events, including hackathons, conferences, and industry showcases.

Feb 25, 2026
Apply
wordware.ai
Staff Engineer

Full-time|Remote|San Francisco

Join wordware.ai as a Staff Engineer where you will play a pivotal role in shaping the future of our innovative software solutions. We are looking for a talented engineer who thrives in a collaborative environment, is passionate about technology, and is eager to tackle complex challenges. Your expertise will help drive our product development and enhance the user experience for our clients.

Mar 20, 2026
Apply
Nudge
Full-time|On-site|San Francisco

About Nudge
At Nudge, we are dedicated to pioneering advanced technologies that interface with the brain, ultimately enhancing the quality of life for individuals. Our initial focus is on developing a non-invasive, ultrasound-based device capable of high-resolution brain stimulation and imaging. This initiative encompasses the creation of state-of-the-art hardware, software, and research capabilities designed to yield products that can positively impact millions, and potentially billions, of lives. To achieve our goals, we are committed to building exceptional teams across all disciplines. We seek individuals who are not only masters of their craft but also believe in tackling challenging projects and executing with unwavering commitment and integrity.
About the Role
As an Ultrasound Engineer at Nudge, you will:
Enhance acoustic models for transducer arrays and adapt to specific acoustic properties based on feedback from laboratory and clinical evaluations.
Assess the devices to ensure safety and performance standards are met.
Contribute to the design of next-generation devices through modeling and prototype evaluations.
Create software and algorithms for signal processing, beamforming, holography, and imaging.
About You
We welcome engineers of various experience levels; however, all candidates should possess:
A solid foundation in engineering and physics principles.
Experience in ultrasound metrology.
Proficiency with acoustic modeling software.
A strong grasp of biomedical applications of ultrasound technology.
A PhD in a relevant engineering field.
A proven track record of significant technical contributions.
A commitment to high integrity in all endeavors.

Dec 11, 2024
Apply
Mainstay
Full-time|Remote|West Coast required; San Francisco Bay Area highly preferred

Role Overview
Mainstay is seeking a Head of Engineering based on the West Coast, with strong preference for candidates in the San Francisco Bay Area. This leader will oversee engineering operations, set technical direction, and ensure teams deliver reliable, high-quality products.
What You Will Do
Guide engineering strategy and execution across the organization
Oversee daily engineering operations and project delivery
Foster a collaborative, high-performing team environment
Support and mentor engineers to help them grow and succeed
Location Requirement
This role requires residence on the West Coast. Candidates in the San Francisco Bay Area are highly preferred.

Apr 14, 2026
Apply
Metriport
Full-time|On-site|San Francisco

Metriport is hiring a Design Engineer in San Francisco. This position centers on turning ideas into practical designs that serve client needs. The work involves close collaboration with colleagues from different specialties to move concepts from initial sketches to finished products.
Role overview
The Design Engineer will contribute engineering expertise to develop new solutions and refine existing ones. Each project requires attention to both quality and efficiency, with a focus on meeting client expectations at every stage.
Collaboration
Expect to work alongside cross-functional teams throughout the design process. Communication and teamwork are essential as concepts evolve and move toward implementation.

Apr 28, 2026
Apply
Coframe
Full-time|On-site|SF Bay Area

Role overview
The Platform Engineer at Coframe will help shape the core infrastructure that powers the company’s engineering efforts. Based in the SF Bay Area, this role involves designing and refining the systems that underpin how teams build, deploy, and manage software. Day-to-day work includes using AI tools to improve productivity and streamline deployment processes. The engineer will also focus on strengthening monitoring, enhancing security, and managing costs across the platform.
Impact
This position carries significant responsibility. The solutions developed will directly support all teams at Coframe, influencing how software is created and maintained across the company. The work done in this role will help set the direction for future engineering practices.

Apr 20, 2026
Apply
Scribd, Inc.
Full-time|$127K/yr - $228K/yr|On-site|San Francisco

Scribd, Inc. creates products such as Scribd®, Slideshare®, Everand™, and Fable, all with the goal of turning information into insight for users worldwide. The company is dedicated to broadening access to knowledge and expertise on a global level.
Culture and Work Style
At Scribd, authenticity and bold thinking influence daily decisions and team interactions. Employees are encouraged to share ideas, take initiative, and engage in open dialogue to solve new challenges and meet customer needs. The company’s Scribd Flex policy allows staff to choose their work location and schedule, balanced by planned in-person meetings that help maintain strong team connections. Regardless of where employees live, everyone participates in occasional onsite gatherings to foster collaboration. The company values 'GRIT', a blend of passion and perseverance for long-term goals, which shapes how teams set objectives, innovate, deliver results, and support each other.
Quality Engineering Team
The Quality Engineering group develops and maintains the testing infrastructure and quality tools that back Scribd’s engineering teams. Their efforts ensure code reliability and support confident releases. Collaboration extends across both the established monolithic codebase and newer microfrontend architectures, with close partnership alongside frontend and platform engineers.
Role Overview
The Senior Frontend Engineer in Quality Engineering will lead work to expand the quality infrastructure for Scribd’s web engineering teams. As the company moves from a monolithic application toward modular, independently tested components, including microfrontends, this role will help shape the systems that enable that transition. Currently, much of the web platform is still monolithic, with the first microfrontends just entering production. This engineer will design and enhance quality platform capabilities, aiming to increase test ownership, reliability, and developer productivity across all engineering teams.

Apr 28, 2026
Apply
Astro-Mechanica
Full-time|On-site|San Francisco

Astro-Mechanica is hiring a Senior Rotordynamics Engineer in San Francisco. This position centers on detailed analysis of rotordynamics systems and developing solutions to support ongoing engineering projects.
What you will do
Perform in-depth analyses of rotordynamics systems
Develop technical solutions to address engineering challenges
Work closely with cross-functional teams on project advancement
Requirements
Strong technical background in rotordynamics or a related engineering field
Demonstrated commitment to engineering quality and precision
Ability to collaborate effectively with colleagues from various disciplines

Apr 29, 2026
Apply
SerVal
Full-time|On-site|San Francisco

Join SerVal as a Design Engineer where you will play a crucial role in the development and improvement of innovative design solutions. You will collaborate with cross-functional teams to create high-quality products that meet customer needs and industry standards. Your responsibilities will include conceptualizing designs, conducting analyses, and refining prototypes to ensure optimal performance.

Mar 5, 2026
Apply
Fastly
Full-time|$287.6K/yr - $345.1K/yr|On-site|Denver, CO; New York City, NY; San Francisco, CA

At Fastly, we empower individuals to maintain meaningful connections with the things they cherish. Our cutting-edge edge cloud platform allows customers to deliver exceptional digital experiences with speed, security, and reliability by processing and securing applications as close to end-users as possible, right at the edge of the Internet. Our platform is engineered to harness the potential of the modern internet, enabling programmability and supporting agile software development. We proudly serve some of the world's most influential companies, including GitHub, Yelp, Paramount, and JetBlue. Join us in building a more trustworthy Internet.
Posting Open Date: Reposted March 30, 2026
Anticipated Posting Close Date*: April 20, 2026
*This job posting may close earlier due to high applicant volume.
Senior Principal Engineer, Platform Engineering
As a member of Fastly’s Platform Engineering team, you will be instrumental in establishing the essential frameworks that enable engineers to deliver quickly, safely, and at scale. In the role of Senior Principal Engineer, you will guide the technical direction and spearhead cross-organizational initiatives aimed at enhancing our platform, developer experience, and operational excellence. Collaborating closely with fellow engineers, you will work to eliminate friction, standardize best practices, and create paved roads to expedite product delivery. This is a highly collaborative and hands-on position, representing the Platform Engineering organization and reporting directly to the Senior Director of Engineering.

Mar 30, 2026
