Experience Level
Entry Level
Qualifications
- Proficient in SQL and data modeling.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Strong understanding of ETL processes and data warehousing.
- Familiarity with programming languages such as Python or Java.
- Excellent problem-solving skills and attention to detail.
- Ability to work effectively in a team-oriented environment.
About the job
Join our innovative team at alljoined as a Data Infrastructure Engineer where you will play a pivotal role in shaping our data architecture and ensuring the reliability and efficiency of our data systems. You will collaborate with cross-functional teams to design and implement scalable data solutions that empower data-driven decision-making.
About alljoined
At alljoined, we are committed to creating a collaborative and inclusive work environment where innovation thrives. Our team is dedicated to leveraging cutting-edge technologies to revolutionize data management and analytics. Join us in our mission to empower organizations with robust data solutions.
Similar jobs
Engineering Manager, Workflow Orchestration, Data Infrastructure
Full-time|$204K/yr - $255K/yr|On-site|United States
Founded in 2007, Airbnb has transformed from a small startup welcoming three guests into a global community of over 5 million hosts and more than 2 billion guest arrivals in virtually every country. Our platform offers distinctive stays and experiences, fostering authentic connections between guests and local communities.

Join Our Team
As a member of the Workflow Orchestration team within Airbnb’s Data Infrastructure organization, you'll play a vital role in managing the orchestration layer that coordinates, executes, and monitors sophisticated data workflows across both batch and streaming domains. Our objective is to equip data engineers, ML teams, analytics, and operational applications with scalable, reliable, and observable orchestration platforms, ensuring that essential business workflows operate seamlessly.

Your Impact
In the role of Engineering Manager, you will:
- Lead and develop a high-performing team tasked with designing, implementing, and maintaining distributed orchestration infrastructure that handles tens of thousands of data workflows daily.
- Establish the long-term technical vision and roadmap for orchestration, aligning it with Airbnb’s broader Data Infrastructure and Analytics platform strategy.
- Promote the adoption of best-in-class workflow paradigms and tools across data engineering, reliability, and ML teams, ensuring consistency, performance, and operational excellence.
- Collaborate closely with cross-functional teams in Data Platform, Compute, Storage, Analytics, and ML Infrastructure to enhance orchestration capabilities within the larger data ecosystem.
- Mentor and coach engineers to foster strong technical judgment, clarity of thought, and a sense of ownership within the team.

A Day in Your Life
- Work alongside senior engineering leaders to shape multi-year strategies for workflow orchestration and execution platforms.
- Stay involved in architectural decisions, review designs, and help resolve technical challenges.
- Collaborate with product and engineering leaders across Data Infrastructure to prioritize investments that balance reliability, developer experience, and cost.
- Ensure your team adheres to strong delivery discipline, optimizing workflows and practices.
Role Overview
Crusoe Technologies is seeking a Senior Staff Software Engineer focused on Managed Orchestration to help shape the direction of our cloud infrastructure. This position is based in San Francisco, CA.

What You Will Do
- Design and implement scalable orchestration solutions for cloud infrastructure.
- Lead a team of engineers, providing technical guidance and mentorship.
- Work closely with cross-functional groups to integrate services and products smoothly.
- Apply deep technical expertise to drive the development of new technologies that improve operational efficiency and customer experience.

About Crusoe Technologies
Crusoe Technologies builds innovative solutions for cloud computing with a focus on efficiency and sustainability.
Full-time|$200K/yr - $250K/yr|On-site|San Francisco
Astronomer is committed to empowering data teams to bring essential software, analytics, and AI solutions to life. We are the innovative force behind Astro, the leading unified DataOps platform powered by Apache Airflow®. Our platform accelerates the development of reliable data products that reveal insights, harness AI potential, and drive data-powered applications. Trusted by over 800 enterprises worldwide, Astronomer enables businesses to maximize their data capabilities. Discover more at www.astronomer.io.

About This Role
As a Sales Engineer at Astronomer, you will play a crucial role as a partner to our clients, assisting them in deploying robust data workflows that enhance their business outcomes. This position offers the opportunity to engage with cutting-edge technology, help customers overcome intricate data challenges, and contribute to the evolution of our product through valuable client insights. This role is perfect for individuals eager to make a significant impact while becoming proficient in workflow orchestration and Apache Airflow.

Key Responsibilities
- Solve Real-World Challenges: Create and implement proof-of-concept solutions that enable customers to address genuine data challenges, guiding them from concept through to production.
- Act as a Trusted Advisor: Conduct demonstrations and provide technical support to engineering teams, illustrating how our platform can revolutionize their workflows.
- Contribute to Community Growth: Engage with the Apache Airflow community by producing technical content and best practices, establishing Astronomer as a thought leader in workflow orchestration.
- Shape Product Development: Serve as a liaison by collecting field insights and delivering critical feedback to the Product team to influence the future direction of our platform.
- Grow Your Expertise: Achieve mastery in Airflow, workflow orchestration, and the data engineering landscape while collaborating across departments and working on impactful projects.

Qualifications
- Data Engineering Expertise: A solid understanding of essential data engineering concepts, including orchestration, ELT, Git, and role-based access control (RBAC), along with practical experience or familiarity with Apache Airflow in a client setting.
- Professional Experience: A minimum of 2 years in a Sales Engineering, Solutions Engineering, Consulting, or similar role within the data domain.
Decagon is seeking an Engineering Manager to lead its AI & Data Infrastructure team in San Francisco. This role centers on guiding engineers as they develop AI solutions and robust data frameworks to advance Decagon’s technology roadmap.

Role overview
The Engineering Manager will oversee a team dedicated to AI and data infrastructure initiatives. The position involves hands-on leadership, ensuring projects move forward and align with company objectives.

What you will do
- Lead and mentor engineers working on AI and data infrastructure projects
- Drive project execution to enhance product capabilities
- Foster a collaborative and supportive team environment
- Oversee strategic planning and allocate resources for the team
- Manage team performance and encourage professional growth

Requirements
- Experience leading technical teams in AI and data infrastructure
- Strong leadership and clear communication abilities
- Skill in strategic planning and resource management
- Dedication to building technology solutions that make a difference

This position offers the chance to shape Decagon’s products and technology direction through AI and data-driven work.
Join Sentry as an Engineering Manager for Issue Workflow

Are you passionate about enhancing the developer experience? At Sentry, we are dedicated to helping developers create high-quality software more efficiently. With over $217 million in funding and a user base exceeding 100,000 organizations, including industry giants like Disney, Microsoft, and Atlassian, we are at the forefront of performance and error monitoring solutions. Our hybrid work model fosters collaboration, with designated in-office days on Mondays, Tuesdays, and Thursdays. If you are eager to build innovative software monitoring tools that improve the digital landscape, we want you on our team.

Your Impact in This Role
As an Engineering Manager, you will guide a talented team responsible for our most utilized product features: issue details, issue search, and alerts. Your leadership will play a pivotal role in shaping the core experience that millions of developers rely on to debug and deploy confidently.

Key Responsibilities
- Lead and nurture a team of engineers dedicated to Sentry's highest-trafficked product surfaces.
- Provide technical and product insights, actively participating in code reviews and architectural discussions.
- Set team objectives, eliminate obstacles, and mediate conflicts effectively.
- Engage with Sentry's customers to gain insights into their product usage and identify pain points.
- Establish clear goals, delegate tasks efficiently, and ensure timely project execution.

Why You’ll Enjoy Working Here
- You are a product-focused leader who prioritizes developer experience.
- You thrive in a collaborative environment where your contributions directly impact product quality.
- You are excited to mentor and grow your team, fostering a culture of continuous improvement.
About Sygaldry Technologies
Sygaldry Technologies develops quantum-accelerated AI servers in San Francisco, focusing on faster AI training and inference. By combining quantum technology with artificial intelligence, the team addresses challenges in computing costs and energy efficiency. Their AI servers integrate multiple qubit types within a fault-tolerant system, aiming for a balance of cost, scalability, and speed. The company values optimism, rigor, and a drive to solve complex problems in physics, engineering, and AI.

Role Overview: ML Infrastructure Engineer
The ML Infrastructure Engineer joins the AI & Algorithms team, which includes research scientists, applied mathematicians, and quantum algorithm specialists. This role centers on building and maintaining the compute infrastructure that powers advanced research. The systems you build will support reliable GPU access, reproducible experiments, and scalable workloads, so researchers can focus on their core work without needing deep cloud expertise. Expect to design and manage compute platforms for a range of tasks, including quantum circuit simulation, large-scale numerical optimization, model training, tensor network contractions, and high-throughput data generation. These workloads span multiple cloud providers and on-premises GPU servers.

Key Responsibilities
- Develop compute abstractions for diverse workloads, such as GPU-accelerated simulations, distributed training, high-throughput CPU jobs, and interactive analyses using frameworks like PyTorch and JAX.
- Set up infrastructure to support experiment tracking and reproducibility.
- Create developer tools that make cloud computing feel local, streamlining environment setup, job submission, monitoring, and artifact management.
- Scale experiments from single-GPU prototypes to large, multi-node production runs.

Multi-Cloud GPU Orchestration
- Design orchestration strategies for workloads across multiple cloud providers, optimizing job routing for cost, availability, and capability.
- Monitor and improve cloud spending, keeping track of credit balances, burn rates, and expiration dates.
Full-time|$200K/yr - $275K/yr|On-site|San Francisco, CA
At Peregrine Technologies, a company backed by top-tier Silicon Valley investors, we empower public safety organizations, state and local governments, federal agencies, and private-sector entities to tackle societal challenges with speed and precision. Our AI-enabled platform transforms fragmented and isolated data into actionable operational intelligence, delivering crucial insights that improve decision-making and outcomes across a wide range of scenarios. We currently serve hundreds of clients across more than 30 states and two countries, impacting more than 125 million lives, and we are poised for further growth as we expand into the enterprise sector and international markets.

Team
We believe that empathy is key to enhancing our solutions. Our engineering team prioritizes understanding how users interact with our products, which guides us in finding the best solutions. You'll have the chance to collaborate closely with our onsite team to explore the diverse use cases that Peregrine addresses. We value both ownership and teamwork. In this role, you will be responsible for significant features while working alongside fellow engineers to bring them to fruition. We hold humility and empathy in high regard as essential traits for crafting effective solutions, and you will engage directly with our deployment team and users to iterate on problem-solving. Creativity and resilience are vital as we pursue our vision.

Role
We are looking for a Staff Data Infrastructure Engineer to join our team. In this role, you will take full ownership of the data layer that is foundational to all of Peregrine's operations. You will design and build the systems responsible for ingesting, storing, and serving vast amounts of real-time operational data, empowering our clients to make critical decisions quickly and confidently.

This senior individual contributor position is ideal for someone who excels at tackling complex technical challenges and has the experience and judgment to influence key infrastructure decisions. You will engage with a variety of intricate challenges, including:
- Designing and managing a high-throughput, real-time data integration platform across diverse customer environments.
- Architecting a scalable open table format layer to ensure reliable data storage at petabyte scale.
- Building and optimizing distributed data processing pipelines using Apache Spark and related streaming technologies.
- Enhancing performance, reliability, and cost efficiency across the entire data infrastructure stack.
- Collaborating with platform and product engineering teams to define data contracts, schemas, and integration pathways.
Join our innovative team at Crusoe as a Staff Product Manager for Orchestration. In this pivotal role, you will lead our efforts in enhancing product orchestration strategies, ensuring seamless integration and execution of our technology solutions. Your expertise will guide cross-functional teams, drive product vision, and ultimately contribute to our mission of transforming the technology landscape.
The Bot Company
At The Bot Company, we are on a mission to create an innovative robot that enhances everyday life in homes everywhere. Located in the heart of San Francisco, our compact team comprises talented engineers, designers, and operators from organizations such as Tesla, Cruise, OpenAI, Google, and Pixar. With a track record of delivering exceptional products to hundreds of millions of users, we understand what it takes to craft remarkable experiences.

Our intentionally lean structure fosters swift decision-making while eliminating unnecessary bureaucracy. Each team member operates as an individual contributor with substantial autonomy, ownership, and accountability. We thrive on a culture of rapid iteration and efficient execution, working collaboratively across the technology stack.
Full-time|$180K/yr - $210K/yr|On-site|San Francisco, CA - US
At Crusoe, our mission is to accelerate the fusion of energy and intelligence. We are building the infrastructure that empowers individuals to innovate boldly with AI, ensuring that our advancements come without compromises in scale, speed, or sustainability. Join us in revolutionizing AI with sustainable technology, where you will spearhead impactful innovations, contribute to meaningful projects, and collaborate with a team that is reshaping the future of responsible cloud infrastructure.

About the Role
We are looking for a talented Senior Software Engineer to join our cloud software team. Your role will be pivotal in enhancing our state-of-the-art infrastructure. You will leverage your expertise to design and scale our carbon-reducing operating model while managing essential hardware, software, and networking components. In this position, you will write and review code, develop proposals, and contribute to architectural documents. You will assess tools and frameworks, weighing their implications for reliability, scalability, operational costs, and ease of implementation. Your knowledge of orchestration and optimization will be crucial in advancing our managed Kubernetes and AI training clusters, ensuring they maintain a competitive edge in reliability and performance.

What You'll Be Working On
- Develop scalable and resilient software solutions that align with the strategic goals outlined in the Crusoe Cloud roadmap.
- Collaborate with tech leads and engineers to foster an environment of creativity and technical excellence, driving the development of innovative cloud solutions.
- Stay current with cloud software trends and techniques, integrating these insights to keep Crusoe’s innovations at the forefront of the industry.
- Although you won’t have formal management responsibilities, mentor colleagues by sharing knowledge and guiding technical discussions.

What You'll Bring to the Team
- 5-7 years of experience in software engineering, with strong expertise in systems engineering.
- 2+ years of programming experience in Go.
- Experience with Kubernetes and Linux engineering, including debugging capabilities.
- A proactive attitude toward problem-solving and continuous learning.
OpenAI seeks a Data Center Infrastructure Engineering Program Manager to focus on network and whitespace projects. This position is based in San Francisco.

Role overview
This role manages the planning and delivery of infrastructure projects that support OpenAI’s AI systems. The work centers on both network operations and whitespace management within data centers.

What you will do
- Lead the planning, execution, and completion of infrastructure projects critical to OpenAI’s technology.
- Coordinate with teams across disciplines to align on project milestones and address obstacles as they arise.
- Identify and implement ways to improve operational efficiency in data center network and whitespace functions.
- Help monitor and maintain dependable network performance within the data center setting.
Position Overview
Join OpenEvidence as a Data Infrastructure Software Engineer, where you will build comprehensive systems that drive essential product and research operations. Your focus will be on optimizing performance, ensuring scalability, and enhancing accuracy, with the autonomy to manage the infrastructure that helps healthcare professionals navigate complex clinical decisions in real time. We value exceptional builders who thrive in versatile roles. Our engineers engage across various products and projects, taking ownership wherever they can make the most significant impact.

About OpenEvidence
OpenEvidence is the leading medical AI platform globally, used by over 40% of clinicians in the U.S. in just over a year through organic product-led growth. As a $12 billion company, our engineering team comprises 30 talented individuals from MIT, Harvard, and Stanford. We believe that groundbreaking products are born from a small group of exceptional builders, driven by focused goals and empowered to take ownership and act swiftly. We are expanding our team to capitalize on an unparalleled opportunity to set the standard for medical AI platforms. If you are a top-tier engineer or scientist eager to push boundaries and achieve tangible outcomes that affect millions of lives, we want to connect with you.

Our Culture
We expect our work to be performed at an elite level. The journey from concept to execution and scaling is akin to a professional sport, where excellence is non-negotiable. We believe that innovative technologies can only be created through complete ownership. Significant achievements happen when individuals take the initiative to see them through.

Your Profile
This role is not for those seeking a 9-to-5 job or merely looking to write papers. If you are ready to dive into the trenches, tackle challenges head-on, and create something from scratch that could impact millions and drive substantial revenue, you might be the perfect fit. We seek brilliant builders who are intelligent, ambitious, resourceful, self-reliant, detail-oriented, driven, hardworking, and humble. Does this sound rare? It is: we have found only 30 of them so far, and we are eager to discover more.
Full-time|$100.6K/yr - $148K/yr|On-site|San Francisco, CA; New York, NY; Seattle, WA; Phoenix, AZ
Join DoorDash's Go-To-Market Technology (GTMT) team as an Analytics Engineer, where you will harness data to fuel our rapid growth. You will create innovative solutions that streamline business processes and enhance the productivity of our GTM teams. In this role, you will dive deep into data analytics, build scalable tools, and transform insights into actionable strategies that drive business success. Collaborating with cross-functional teams, you will automate workflows, ensure data integrity, and leverage AI capabilities to elevate our data infrastructure.
Founded in 2007, Airbnb has transformed the way people experience travel, connecting over 5 million hosts with more than 2 billion guests worldwide. Our platform enables unique stays and authentic experiences, fostering connections with local communities.

The Team You Will Join
As a pivotal member of the Data Warehouse Infrastructure team, you will help shape the backbone of Airbnb's big data capabilities, enabling hundreds of engineers to efficiently collect, manage, and analyze vast amounts of data. We leverage cutting-edge open-source technologies such as Hadoop, Spark, Trino, Iceberg, and Airflow.

Typical Responsibilities
- Design and architect Airbnb's next-generation big data compute platform to enhance data ETL, analytics, and machine learning efforts.
- Oversee the platform's operations, focusing on improving reliability, performance, observability, and cost-effectiveness.
- Create high-quality, maintainable, and self-documenting code while engaging actively in code review processes.
- Contribute to open-source projects, making a significant impact on the industry.
At Judgment Labs, we develop infrastructure for Agent Behavior Monitoring (ABM). Unlike conventional observability tools that merely track exceptions and latency, our ABM technology identifies behavioral anomalies, such as instruction drift and context retrieval losses, in large-scale production settings. Our solutions are trusted by numerous teams working on autonomous agents to gain insights into system behavior post-deployment. Rather than simply reacting to incidents, our clients analyze patterns across conversations and workflows, correlate regressions with specific interaction types, and identify critical points of reliability failure. Recently, we secured over $30 million across two funding rounds from investors including Lightspeed, SV Angel, and Valor Equity Partners.

The Role
We are seeking a Senior Data Infrastructure Engineer to architect and enhance the real-time data pipelines essential for robust agent behavior analysis at scale. This position plays a vital role in processing hundreds of thousands of traces per second, executing LLM-based scoring and clustering in near real time, and ensuring low-latency query performance, which allows teams to monitor agent behavior as it unfolds. Ideal candidates will have experience designing petabyte-scale data systems, optimizing OLAP database performance, and managing the full data lifecycle from ingestion to analytics.

What You'll Do
- Design and automate large-scale, high-performance streaming and batch data processing systems to support Judgment's behavioral analysis products.
- Collaborate closely with infrastructure and backend teams to enhance scalability, data governance, and operational efficiency.
- Promote best practices in software engineering for data infrastructure at scale.
- Uphold high standards for data quality and engineering: reliability, efficiency, documentation, testability, and maintainability.
- Craft data models for optimal storage and access, ensuring efficient data flows that meet critical product requirements.
- Enhance OLAP database performance through careful schema design, partitioning strategies, storage optimization, and access pattern analysis.
Join Matter Intelligence as a Data and Machine Learning Infrastructure Engineer, where you will play a pivotal role in shaping the future of data-driven decision-making. You will be part of a dynamic team focused on building and optimizing infrastructure that supports innovative machine learning applications. Your expertise will help us enhance our data pipelines and ensure seamless integration of machine learning models into production.
Full-time|$200K/yr - $400K/yr|Remote|San Francisco
At Inferact, we are on a mission to establish vLLM as the premier AI inference engine, aiming to propel AI advancements by making inference more efficient and cost-effective. Our company was founded by the original creators and core maintainers of vLLM, placing us at a unique intersection of models and hardware, a position we have cultivated over many years.

About the Role
We are seeking a talented Cloud Orchestration Engineer to develop and maintain the operational framework that keeps vLLM running seamlessly at scale. In this role, you will design systems for cluster management, deployment automation, and production monitoring, enabling teams across the globe to deploy AI models effortlessly. Your work will ensure that vLLM deployments are observable, debuggable, and recoverable, transforming operational complexity into reliable infrastructure.
Join us in building the data infrastructure that empowers robots to operate reliably in the real world.

As robotics transitions from research environments into practical applications in factories, warehouses, vehicles, and other deployments, reliable data becomes critical. Engineers must analyze data when robots malfunction, behave unpredictably, or require enhancements. At Foxglove, we provide the observability, visualization, and data infrastructure necessary for this analysis. Our tools help robotics and autonomous systems teams process, store, query, replay, and analyze vast amounts of multimodal sensor data from both live systems and production fleets.

About the Role
We are seeking a Data Infrastructure Solutions Engineer to facilitate the integration of Foxglove into our clients' large-scale data ecosystems. Collaborating with engineering teams across diverse stacks and frameworks, you will design efficient data ingestion and storage workflows, troubleshoot complex performance challenges, and ensure dependable data interoperability at scale. This position is perfect for individuals passionate about connecting intricate technical systems with the practical needs of customers. Your insights and expertise will help shape the evolution of Foxglove to better support data-driven growth in the robotics and autonomy landscape.

Key Responsibilities
- Technical Solution Design: Work with customer engineering teams to assess their data architecture and craft scalable ingestion, storage, and visualization workflows that meet their specific systems and requirements.
- Demonstrations and Evaluations: Create and present technical demonstrations and proof-of-concept projects that replicate real-world data pipelines, illustrating how Foxglove integrates with current infrastructure and tools.
- Onboarding and Integration: Assist customers from initial setup through production deployment, ensuring smooth data flow between their systems and Foxglove.
- Problem-Solving and Troubleshooting: Identify and resolve complex data-related issues, performance bottlenecks, and interoperability challenges during evaluations and initial deployments, collaborating closely with both client and internal engineering teams.
- Product Feedback: Share structured insights and feature suggestions with Foxglove's product team based on direct technical interactions and practical data infrastructure applications.
- Collaboration with Sales: Act as the technical liaison in sales engagements, providing expertise to help illustrate Foxglove's value proposition.
Full-time|$153K/yr - $376K/yr|Remote|San Francisco, CA • New York, NY • United States
At Figma, we are expanding our team of dedicated creatives and innovators committed to making design accessible for everyone. Our platform empowers teams to transform ideas into reality, whether you're brainstorming, prototyping, converting designs into code, or using AI for enhancements. From concept to product, Figma enables teams to optimize workflows, accelerate processes, and collaborate in real time from anywhere in the world. If you're passionate about shaping the future of design and teamwork, we invite you to join us.

The Data Platform team at Figma builds and manages the essential systems that drive analytics, AI/ML initiatives, and data-informed decision-making across our organization. We serve a wide array of stakeholders, including AI researchers, machine learning engineers, data scientists, product engineers, and business teams that depend on data for insights and strategic planning. Our team owns and scales critical platforms such as the Snowflake data warehouse, the ML Datalake, orchestration and pipeline infrastructure, and extensive data ingestion and processing systems, overseeing all data transactions on these platforms.

Despite our small size, we tackle significant, high-impact challenges. In the coming years, we are focused on developing the data infrastructure layer for Figma's AI-driven products, improving cost and performance efficiency across our data stack, scaling our ingestion and reverse ETL capabilities for new product applications, and reinforcing data quality, reliability, and compliance at every level. If you are enthusiastic about creating scalable, high-performance data platforms that empower teams across Figma, we would love to connect with you.

This is a full-time role that can be performed from one of our US hubs or remotely within the United States.
Apr 7, 2026