Experience Level
Mid to Senior
Qualifications
Your Role Responsibilities
Develop and enhance our log ingestion and processing pipeline.
Architect systems for high-throughput and low-latency data processing.
Take ownership of storage, indexing, and querying systems for logs.
Guarantee the reliability, scalability, and cost-effectiveness of our backend systems.
Collaborate on distributed systems that drive real-time observability.
Diagnose and optimize performance across all layers of the tech stack.

Ideal Candidate Profile
Proven experience with distributed systems and data infrastructure.
Knowledge of streaming systems, databases, and large-scale data pipelines.
In-depth understanding of performance trade-offs concerning latency, throughput, and cost.
Capability to design systems from inception rather than merely maintaining them.
Comfortable working close to the hardware when necessary.
Bonus: Experience with observability platforms, logging systems, or time-series data.
About the job
Innovating the Future of Software
As we approach 2026, the software industry is facing an unprecedented challenge: the 'infinite software crisis.' At Sazabi, we are dedicated to redefining how engineering teams support, maintain, and operate the rapid growth in application development.
Introducing Sazabi: The AI-Native Observability Platform for Agile Engineering Teams.
Our platform empowers teams by providing a centralized solution to inquire about their production systems in natural language, visualize system activities automatically, and diagnose issues ten times faster.
Say goodbye to tedious instrumentation, dashboard setups, and alert tuning—just straightforward answers.
We are proud to be backed by pioneers from leading AI organizations, including Vercel, Graphite, Daytona, Browserbase, LangChain, Mastra, Replit, and others.
About Sazabi
At Sazabi, we are revolutionizing the observability space with our innovative AI-native platform designed specifically for engineering teams. Our mission is to simplify the complexities of application development, enabling teams to focus on what they do best—building outstanding software.
Join us in constructing the data infrastructure that empowers robots to operate seamlessly in the real world. As robotics transitions from research environments into practical applications within factories, warehouses, vehicles, and other deployments, the need for reliable data becomes critical. Engineers must analyze data when robots malfunction, behave unpredictably, or require enhancements.

At Foxglove, we provide the observability, visualization, and data infrastructure necessary for this analysis. Our innovative tools support robotics and autonomous systems teams in processing, storing, querying, replaying, and analyzing vast amounts of multimodal sensor data from both live systems and production fleets.

About the Role
We are seeking a Data Infrastructure Solutions Engineer to facilitate the integration of Foxglove into our clients' large-scale data ecosystems. Collaborating with engineering teams across diverse stacks and frameworks, you will design efficient data ingestion and storage workflows, troubleshoot complex performance challenges, and ensure dependable data interoperability on a large scale. This position is perfect for individuals passionate about connecting intricate technical systems with the practical needs of customers. Your insights and expertise will play a pivotal role in shaping the evolution of Foxglove to better support data-driven growth within the robotics and autonomy landscape.

Key Responsibilities
Technical Solution Design: Work with customer engineering teams to assess their data architecture and craft scalable ingestion, storage, and visualization workflows that meet their specific systems and requirements.
Demonstrations and Evaluations: Create and present technical demonstrations and proof-of-concept projects that replicate real-world data pipelines, illustrating how Foxglove integrates with current infrastructure and tools.
Onboarding and Integration: Assist customers from initial setup through to production deployment, ensuring smooth data flow between their systems and Foxglove for optimal performance.
Problem-Solving and Troubleshooting: Identify and resolve complex data-related issues, performance bottlenecks, and interoperability challenges during evaluations and initial deployments, collaborating closely with both client and internal engineering teams.
Product Feedback: Share structured insights and feature suggestions with Foxglove’s product team based on direct technical interactions and practical data infrastructure applications.
Collaboration with Sales: Act as the technical liaison in sales engagements, providing expertise to help illustrate Foxglove's value proposition.
Join our innovative team at alljoined as a Data Infrastructure Engineer, where you will play a pivotal role in shaping our data architecture and ensuring the reliability and efficiency of our data systems. You will collaborate with cross-functional teams to design and implement scalable data solutions that empower data-driven decision-making.
Full-time|$200K/yr - $275K/yr|On-site|San Francisco, CA
At Peregrine Technologies, a company backed by top-tier Silicon Valley investors, we empower public safety organizations, state and local governments, federal agencies, and private-sector entities to tackle societal challenges with unparalleled speed and precision. Our cutting-edge AI-enabled platform transforms fragmented and isolated data into actionable operational intelligence, delivering crucial insights that enhance decision-making processes and improve outcomes across various scenarios. Currently, we proudly serve hundreds of clients across over 30 states and two countries, impacting more than 125 million lives, and we are poised for further growth as we expand into the enterprise sector and international markets.

Team
We believe that empathy is key to enhancing our solutions. Our engineering team prioritizes understanding how users interact with our products, which guides us in finding the best solutions. You'll have the chance to collaborate closely with our onsite team to explore the diverse use cases that Peregrine addresses. We value both ownership and teamwork. In this role, you will be responsible for significant features while working alongside fellow engineers to bring them to fruition. We hold humility and empathy in high regard as essential traits for crafting effective solutions, and you will engage directly with our deployment team and users to iterate on problem-solving. Creativity and resilience are vital as we pursue our vision.

Role
We are in search of a Staff Data Infrastructure Engineer to join our dynamic team. In this role, you will take full ownership of the data layer that is foundational to all of Peregrine's operations. You will design and construct the systems responsible for ingesting, storing, and serving vast amounts of real-time operational data, empowering our clients to make critical decisions quickly and confidently. This senior individual contributor position is ideal for someone who excels at tackling complex technical challenges and possesses the experience and judgement necessary to influence key infrastructure decisions. You will engage with a variety of intricate challenges, including:
Designing and managing a high-throughput, real-time data integration platform across diverse customer environments.
Architecting a scalable open table format layer to ensure reliable data storage at petabyte scale.
Building and optimizing distributed data processing pipelines using Apache Spark and related streaming technologies.
Enhancing performance, reliability, and cost efficiency across the entire data infrastructure stack.
Collaborating with platform and product engineering teams to define data contracts, schemas, and integration pathways.
Full-time|$100.6K/yr - $148K/yr|On-site|San Francisco, CA; New York, NY; Seattle, WA; Phoenix, AZ
Join DoorDash's Go-To-Market Technology (GTMT) team as an Analytics Engineer, where you will harness data to fuel our rapid growth. You will create innovative solutions that streamline business processes and enhance the productivity of our GTM teams. In this role, you will dive deep into data analytics, build scalable tools, and transform insights into actionable strategies that drive business success. Collaborating with cross-functional teams, you will automate workflows, ensure data integrity, and leverage AI capabilities to elevate our data infrastructure.
The Bot Company
At The Bot Company, we are on a mission to create an innovative robot that enhances everyday life in homes everywhere.

Located in the heart of San Francisco, our compact team comprises talented engineers, designers, and operators hailing from esteemed organizations such as Tesla, Cruise, OpenAI, Google, and Pixar. With a track record of delivering exceptional products to hundreds of millions of users, we understand the intricacies involved in crafting remarkable experiences.

Our intentionally lean structure fosters swift decision-making while eliminating unnecessary bureaucracy. Each team member operates as an individual contributor, endowed with substantial autonomy, ownership, and accountability. We thrive on a culture of rapid iteration and efficient execution, working collaboratively across the technology stack.
Position Overview
Join OpenEvidence as a Data Infrastructure Software Engineer, where you will engineer comprehensive systems that drive essential product and research operations. Your focus will be on optimizing performance, ensuring scalability, and enhancing accuracy, while enjoying the autonomy to manage the infrastructure that assists healthcare professionals in navigating complex clinical decisions in real-time. We value exceptional creators who thrive in versatile roles. Our engineers engage across various products and projects, taking ownership wherever they can make the most significant impact.

About OpenEvidence
OpenEvidence is the leading medical AI platform globally, utilized by over 40% of clinicians in the U.S. in just over a year through organic product-led growth. As a $12 billion company, our engineering team comprises 30 talented individuals from MIT, Harvard, and Stanford. We believe that groundbreaking products are born from a small group of exceptional builders, driven by focused goals and empowered to take ownership and act swiftly. We are expanding our team to capitalize on an unparalleled opportunity to set the standard for medical AI platforms. If you are a top-tier engineer or scientist eager to push the boundaries and achieve tangible outcomes that affect millions of lives, we want to connect with you.

Our Culture
We expect our work to be performed at an elite level. The journey from concept to execution and scaling is akin to a professional sport, where excellence is non-negotiable. We believe that the creation of innovative technologies is only achievable through complete ownership. Significant achievements happen when individuals take the initiative to see them through.

Your Profile
This role is not for those seeking a 9-to-5 job or merely looking to write papers. If you are ready to dive into the trenches, tackle challenges head-on, and create something from scratch that could impact millions and drive substantial revenue, you might be the perfect fit. We seek brilliant builders who are intelligent, ambitious, resourceful, self-reliant, detail-oriented, driven, hardworking, and humble. Does this sound rare? It is, as we have only found 30 of them so far, and we are eager to discover more.
Full-time|Hybrid|Houston, TX; San Francisco, CA
About Giga Energy
At Giga Energy, we are pioneering the development, construction, and operation of AI-driven energy infrastructure that powers the contemporary world. Our mission is to revolutionize the energy infrastructure experience for the better. By integrating site origination, development, infrastructure manufacturing, and power market engagement under a single operating system, we ensure efficiency and speed in project delivery. Established in Texas, Giga collaborates with AI hyperscalers, data center operators, and various energy infrastructure teams to accelerate project timelines.

Why Join Us
The Pace: You will take the lead on projects progressing at an unmatched speed across the industry.
The Impact: You will be the catalyst that transforms ideas into tangible contracts and permits.
The Team: Collaborate with an elite team of engineers and builders who are shaping the future of AI infrastructure.

Key Responsibilities
The expansion of AI infrastructure represents one of the most significant capital deployment cycles of our time, and Giga Energy is at the forefront of this movement. We design and manufacture AI power infrastructure with industry-leading lead times, having successfully delivered 4.5 GW to date. Our comprehensive product offerings include cooling solutions, transformers, and switchboards specifically designed for high-density GPU clusters. We provide end-to-end site origination and development, complemented by a fully managed hosting model that enables clients to scale GPU clusters without the burdens of full development or construction. All operations are handled in-house from our facilities in Long Beach, CA, and Houston, TX, ensuring faster lead times and predictable pricing that mitigates supply chain risks for critical AI initiatives.

As the Director of Data Center Solutions, you will be responsible for executing Giga's strategy to become the go-to partner for AI infrastructure, site development, and colocation services for the largest hyperscalers, GPU cloud providers, and enterprise AI operators. This role is pivotal in driving market presence rather than just managing relationships. You will leverage Giga's full suite of solutions, from turnkey, rack-ready sites with direct fiber and on-site power to flexible hosting models tailored to site specifications, alongside prefabricated modular deployments engineered for high-density compute. You will lead a high-performing team of business professionals to achieve these goals.
Founded in 2007, Airbnb has transformed the way people experience travel, connecting over 5 million hosts with more than 2 billion guests worldwide. Our platform enables unique stays and authentic experiences, fostering connections with local communities.

The Team You Will Join:
As a pivotal member of the Data Warehouse Infrastructure team, you will help shape the backbone of Airbnb's big data capabilities, enabling hundreds of engineers to efficiently collect, manage, and analyze vast amounts of data. We leverage cutting-edge open-source technologies such as Hadoop, Spark, Trino, Iceberg, and Airflow.

Typical Responsibilities:
Design and architect Airbnb's next-generation big data compute platform to enhance data ETL, analytics, and machine learning efforts.
Oversee the platform's operations, focusing on improving reliability, performance, observability, and cost-effectiveness.
Create high-quality, maintainable, and self-documenting code while engaging actively in code review processes.
Contribute to open-source projects, making a significant impact on the industry.
At Judgment Labs, we specialize in developing cutting-edge infrastructure for Agent Behavior Monitoring (ABM). Unlike conventional observability tools that merely track exceptions and latency, our ABM technology identifies behavioral anomalies, such as instruction drifts and context retrieval losses, in large-scale production settings.

Our solutions are trusted by numerous teams working on autonomous agents to gain insights into system behavior post-deployment. Rather than simply reacting to incidents, our clients analyze patterns across conversations and workflows, correlate regressions with specific interaction types, and identify critical points of reliability failure. Recently, we secured over $30 million across two funding rounds from notable investors like Lightspeed, SV Angel, and Valor Equity Partners.

The Role:
We are seeking a Senior Data Infrastructure Engineer to architect and enhance the real-time data pipelines essential for robust agent behavior analysis at scale. This position plays a vital role in processing hundreds of thousands of traces per second, executing LLM-based scoring and clustering in near-real-time, and ensuring low-latency query performance, which allows teams to monitor agent behavior as it unfolds. Ideal candidates will have experience designing petabyte-scale data systems, optimizing OLAP database performance, and managing the full data lifecycle from ingestion to analytics.

What You'll Do:
Design and automate large-scale, high-performance streaming and batch data processing systems to support Judgment's behavioral analysis products.
Collaborate closely with infrastructure and backend teams to enhance scalability, data governance, and operational efficiency.
Promote best practices in software engineering for data infrastructure at scale.
Uphold high standards for data quality and engineering: ensuring reliability, efficiency, documentation, testability, and maintainability.
Craft data models for optimal storage and access, ensuring efficient data flows to meet critical product requirements.
Enhance OLAP database performance through careful schema design, partitioning strategies, storage optimization, and access pattern analysis.
Join Matter Intelligence as a Data and Machine Learning Infrastructure Engineer, where you will play a pivotal role in shaping the future of data-driven decision-making. You will be part of a dynamic team focused on building and optimizing infrastructure that supports innovative machine learning applications. Your expertise will help us enhance our data pipelines and ensure seamless integration of machine learning models into production.
Full-time|$153K/yr - $376K/yr|Remote|San Francisco, CA • New York, NY • United States
At Figma, we are expanding our team of dedicated creatives and innovators committed to making design accessible for everyone. Our platform empowers teams to transform ideas into reality—whether you're brainstorming, prototyping, converting designs into code, or utilizing AI for enhancements. From concept to product, Figma enables teams to optimize workflows, accelerate processes, and collaborate in real-time from anywhere in the world. If you're passionate about shaping the future of design and teamwork, we invite you to join us!

The Data Platform team at Figma is responsible for constructing and managing the essential systems that drive analytics, AI/ML initiatives, and data-informed decision-making across our organization. We cater to a wide array of stakeholders, including AI researchers, machine learning engineers, data scientists, product engineers, and business teams that depend on data for insights and strategic planning. Our team is tasked with owning and scaling critical platforms such as the Snowflake data warehouse, ML Datalake, orchestration and pipeline infrastructure, and extensive data ingestion and processing systems, overseeing all data transactions that occur within these platforms.

Despite our small size, we tackle significant, high-impact challenges. In the upcoming years, we are focused on developing the data infrastructure layer for Figma's AI-driven products, enhancing cost and performance efficiencies across our data stack, scaling our ingestion and reverse ETL capabilities for new product applications, and reinforcing data quality, reliability, and compliance at every level. If you are enthusiastic about creating scalable, high-performance data platforms that empower teams across Figma, we would love to connect with you!

This is a full-time role that can be performed from one of our US hubs or remotely within the United States.
At Plaid, we believe in the power of data-driven decision-making. Our data culture demands robust and scalable data systems that ensure accuracy and completeness. As a Senior Software Engineer focusing on Data Infrastructure, you will play a pivotal role in empowering teams across engineering, product, and business sectors to swiftly and securely extract valuable data insights. Your work will directly enhance our ability to serve customers effectively. You will be responsible for building and optimizing our data and machine learning infrastructure, allowing Plaid engineers to innovate and iterate on products built on consumer-permissioned financial data. Our Data Infrastructure engineers are experts in Data Warehousing, Data Lakehouse architecture, Spark, Workflow Orchestration, and Streaming technologies. You will enhance our existing data pipelines for performance and cost efficiency while creating intuitive abstractions that simplify the development process for other engineers at Plaid.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our vision is to enhance human potential by advancing collaborative general intelligence. We are dedicated to creating a future where individuals have the resources and knowledge to harness AI for their specific objectives and aspirations.

Our team comprises scientists, engineers, and innovators who have developed some of the most popular AI products, including ChatGPT and Character.ai, as well as influential open-weight models like Mistral, along with highly regarded open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
We are seeking a talented engineer to enhance our data infrastructure. You will become part of a dynamic, high-impact team tasked with designing and scaling the foundational infrastructure for distributed training pipelines, multimodal data catalogs, and sophisticated processing systems that manage petabytes of data. Our infrastructure is pivotal; it serves as the foundation for every groundbreaking achievement. You will collaborate directly with researchers to expedite experiments, develop novel datasets, optimize infrastructure efficiency, and derive essential insights from our data repositories.

If you are passionate about distributed systems, large-scale data mining, and open-source tools such as Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building innovative solutions from scratch, we encourage you to apply.

Note: This is an evergreen role that we keep open continuously for expressions of interest. We receive a high volume of applications, and while there may not always be an immediate position that aligns perfectly with your skills and experience, we encourage you to apply. We regularly review applications and reach out as new opportunities arise. You are welcome to reapply after gaining more experience, but please refrain from applying more than once every six months. We may also post for specific roles for particular projects or team needs, and in those cases, you are welcome to apply directly in addition to this evergreen role.
About Our Team
At OpenAI, our Data Platform team is at the heart of our innovative approaches to data management, powering essential product, research, and analytics workflows. We manage some of the largest Spark compute fleets in production, architect data lakes and metadata systems on Iceberg and Delta, and envision exabyte-scale architectures. Our high-throughput streaming platforms utilize Kafka and Flink, while our orchestration is powered by Airflow. We also support machine learning feature engineering tools such as Chronon. Our mission is to provide secure, reliable, and efficient data access at scale, thereby enhancing intelligent, AI-assisted data workflows.

Join us in building and maintaining these core platforms that are foundational to OpenAI's products, research, and analytics capabilities. We are not just scaling infrastructure; we are transforming the way people engage with data. Our vision includes intelligent interfaces and AI-powered workflows that make data interactions faster, more reliable, and intuitive.

About the Position
In this role, you will focus on constructing and managing data infrastructure that supports extensive compute fleets and storage systems optimized for high performance and scalability. You will be instrumental in designing, developing, and operating the next generation of data infrastructure at OpenAI. Your responsibilities will encompass scaling and securing big data compute and storage platforms, building and maintaining high-throughput streaming systems, ensuring low-latency data ingestion, and facilitating secure, governed data access for machine learning and analytics. You will also prioritize reliability and performance at extreme scales. You will have complete ownership of the full lifecycle: from architecture to implementation, production operations, and on-call responsibilities.

You should be experienced with platforms such as Spark, Kafka, Flink, Airflow, Trino, or Iceberg. Familiarity with infrastructure tools like Terraform, along with expertise in debugging large-scale distributed systems, is essential. A passion for addressing data infrastructure challenges in the AI domain is a must.

This role is based in San Francisco, CA. We offer a hybrid work model requiring 3 days in the office each week and provide relocation assistance for new hires.

Responsibilities:
Design, build, and maintain data infrastructure systems including distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, ensuring they are scalable, reliable, and secure.
Ensure our data platform can scale significantly while maintaining reliability and efficiency.
Enhance company productivity by empowering your fellow engineers and teammates through innovative data solutions.
Foxglove develops data infrastructure for robotics teams operating in real-world environments such as factories and warehouses. As robots leave the lab, engineers need reliable tools for analyzing data, diagnosing issues, and improving system performance. Foxglove delivers observability, visualization, and data management solutions designed to help teams manage large volumes of multimodal sensor data from deployed fleets.

Role overview
This Software Engineer - Robotics Data Infrastructure position centers on building and optimizing the systems behind Foxglove’s products. The scope covers desktop and web visualization tools, backend services for data ingestion and streaming, and client libraries running directly on robots. Work ranges from enhancing decoding performance in Rust, to extending MCAP tooling in C++, integrating new data sources with TypeScript, and occasionally working with customers to resolve performance issues.

What you will do
Design, build, and deploy product features from start to finish, incorporating feedback from users.
Work across the stack: from Rust and C++ libraries on devices, to backend cloud services, to browser-based visualization tools.
Identify and address performance bottlenecks in data pipelines, including ingestion, decoding, streaming, and rendering.
Contribute to MCAP and other open-source libraries used by the robotics community.
Collaborate with customers and robotics engineers to gather requirements and validate new solutions.
Maintain high engineering standards and help foster a culture of ownership within the team.
Design systems for efficient storage and querying of petabyte-scale robotics data.

Requirements
At least 5 years of experience developing production software.
Strong proficiency in Rust, C++, and TypeScript, with a willingness to learn new languages or frameworks as needed.

Location
This position is based in San Francisco, CA.
About Us
At Roboflow, our mission is to empower developers to make the world programmable through advanced artificial intelligence solutions. We believe that vision is a fundamental way we comprehend our environment, and soon, this understanding will be reflected in the software we utilize.

We are dedicated to creating tools, fostering community, and providing resources that simplify the development and deployment of computer vision models. With over 1 million developers, including teams from half of the Fortune 100, leveraging Roboflow's open-source and hosted machine learning tools, we are on a mission to enhance various industries—from accelerating cancer research through cell counting to improving construction site safety, digitizing floor plans, preserving coral reef ecosystems, guiding drone operations, and much more.

Our compact team is driven by a culture of collaboration, where we believe that our users' success is our success. One of our team members aptly described us as a company of
Full-time|$160K/yr - $225K/yr|Hybrid|San Francisco, CA
About Fable Security
At Fable Security, we recognize that AI-driven threats and human error pose significant risks to enterprise security. Cybercriminals exploit human behavior, which is responsible for over 70% of security breaches. Our mission is to empower individuals with the right tools, transforming them from targets into an active line of defense.

We have developed a human risk platform that effectively shapes employee behavior. Our user-friendly and scalable platform integrates complex employee data, identifies risky behaviors, and automatically delivers timely, relevant interventions where employees are most engaged—in real time.

Supported by renowned investors such as Redpoint Ventures and Greylock Partners, and founded by members of the Abnormal Security team, Fable is addressing one of cybersecurity’s most pressing challenges within a multi-billion-dollar market. Our diverse team includes alumni from Meta, Twitter, and prestigious universities like Columbia, Stanford, and UCLA. As we experience rapid growth, this is a prime opportunity to contribute to and influence the future of security.

Why Join Us
Help us build and scale the core data infrastructure that drives a groundbreaking product.
Collaborate with engineering, data science, and product teams to operationalize data effectively at scale.
Be part of a small, elite team where your contributions will have a significant impact.
As part of an early-stage company, every engineer plays a crucial role in shaping product functionality and evolution. You will define not only the technical architecture but also the company’s data philosophy.

Your Role
In the position of Data Infrastructure Engineer, you will be responsible for the architecture, scalability, and reliability of our data platform. You will design and construct systems that support everything from real-time product functionalities to internal analytics and machine learning processes, covering the spectrum from data ingestion to production-ready datasets. Additionally, you will establish best practices that underpin our data-driven products. This role is highly cross-functional, requiring close collaboration with engineering, data, and product teams to ensure our data foundation evolves in tandem with our growth.

Responsibilities
Design, develop, and sustain scalable data systems.
Implement best practices for data architecture and management.
Collaborate with cross-functional teams to facilitate data-driven decision-making.
Full-time|$162K/yr - $216K/yr|Hybrid|San Francisco, California, United States
Who We Are
Baton is Ryder’s innovative product development division dedicated to leveraging cutting-edge technologies to transform the transportation and logistics landscape. Managing over $10 billion in freight, our technology has a significant impact across the U.S. economy.

We are committed to creating and delivering software that not only meets but exceeds the needs of Ryder and its 50,000+ clients, which includes some of the most recognized brands globally. Our projects range from user-centric applications to the robust data platform that will drive the future of Ryder’s innovations.

Baton’s mission: To enable a supply chain that operates on autopilot.

Since Ryder’s acquisition of Baton in 2022, we have been operating with the agility of a startup while benefiting from the extensive reach of a Fortune 500 company. If you're passionate about tackling intricate challenges and making a real impact in the backbone of the American economy, you’ll thrive with us.

Role: Software Engineer - Infrastructure
Department: Data Platform
Location: Hayes Valley, San Francisco, CA
Decagon is seeking an Engineering Manager to lead its AI & Data Infrastructure team in San Francisco. This role centers on guiding engineers as they develop AI solutions and robust data frameworks to advance Decagon’s technology roadmap.

Role overview
The Engineering Manager will oversee a team dedicated to AI and data infrastructure initiatives. The position involves hands-on leadership, ensuring projects move forward and align with company objectives.

What you will do
Lead and mentor engineers working on AI and data infrastructure projects
Drive project execution to enhance product capabilities
Foster a collaborative and supportive team environment
Oversee strategic planning and allocate resources for the team
Manage team performance and encourage professional growth

Requirements
Experience leading technical teams in AI and data infrastructure
Strong leadership and clear communication abilities
Skill in strategic planning and resource management
Dedication to building technology solutions that make a difference

This position offers the chance to shape Decagon’s products and technology direction through AI and data-driven work.
Apr 23, 2026