Experience Level
Manager
Qualifications
What You Will Do
- Lead the AI Cloud Core Platform team of approximately six engineers, overseeing all aspects of Cloud Platform and governance capabilities.
- Drive the execution of roadmap features, including cluster lifecycle automation.
- Collaborate closely with Product and Design teams to ensure the user experience aligns with the needs of enterprise customers.
- Balance rapid feature delivery with strategic investments in scalability, observability, and platform design.
- Recruit, mentor, and cultivate a team of engineers, providing guidance and career development.
- Work in tandem with other Lambda teams (Control Plane, Billing, Platform) to guarantee seamless and integrated delivery across the stack.
- Foster a culture of high performance, documentation, humility, and curiosity.
- Maintain a product-focused approach in leadership and execution, prioritizing customer needs with an emphasis on feature velocity, reliability, and security.
- Shape a culture of sustainable, empathetic, and high-velocity engineering, emphasizing cross-team collaboration, documentation, and data-driven decision-making.
About the job
Lambda, recognized as The Superintelligence Cloud, is a pioneering force in AI cloud infrastructure, empowering tens of thousands of customers, from AI researchers to large enterprises and hyperscalers. Our mission is to make computational power as accessible as electricity, providing everyone the capability of superintelligence—one person, one GPU.
Join us in our quest to build the world’s leading AI cloud platform.
Note: This role mandates in-office presence in our San Francisco location four days a week; Lambda’s designated remote work day is Tuesday.
As an Engineering Manager at Lambda, you will lead the charge in developing and scaling our cloud offerings, which encompass the Lambda website, cloud APIs, and internal tools for deployment, management, and maintenance.
About Lambda
At Lambda, we strive to revolutionize the AI cloud landscape by providing cutting-edge solutions that empower users and organizations alike. Our innovative approach and commitment to excellence are what set us apart in the rapidly evolving technology sector.
About Us
At novita-ai, we are a rapidly growing global provider of AI cloud infrastructure, leading the charge in the artificial intelligence revolution. Our innovative platform equips developers and enterprises with powerful, scalable, and user-friendly solutions such as Model APIs, GPU Instances, and Serverless Computing. As organizations around the globe strive to integrate AI into their offerings, we serve as the essential engine that fuels their innovative efforts.

Join our world-class team and contribute to our expanding customer base. This unique opportunity allows you to be part of a dynamic company in a hyper-growth market, where your technical skills will directly impact customer success and drive our business forward.

The Role
As a Solutions Engineer, you will act as the primary technical leader and trusted advisor for our clients throughout their journey. You will collaborate closely with the sales team to bridge the gap between complex customer challenges and our sophisticated technical solutions. Your mission is to build technical credibility, demonstrate the capabilities of our platform, and design tailored solutions that empower our clients to achieve their AI-related business objectives.

What You'll Do
- Technical Discovery & Solution Design: Collaborate with Account Executives to gain a deep understanding of customer needs, technical requirements, and business goals. Develop elegant and effective solutions utilizing our AI infrastructure stack (Model APIs, GPU Instances, Serverless).
- Product Demonstration & Proof of Concept (POC): Conduct engaging, customized product demonstrations and interactive workshops. Plan, manage, and execute successful POCs, showcasing the value and performance of our platform within the client's environment.
- Technical Evangelism & Trusted Advisory: Communicate the value proposition of our platform to diverse audiences, including both technical and non-technical stakeholders, from engineers to C-level executives. Establish yourself as the go-to expert for customers on best practices in AI infrastructure.
- Sales Enablement & Market Feedback Loop: Create and maintain technical sales materials, including whitepapers, best practice guides, and demo scripts. Serve as the voice of the customer, relaying valuable feedback from the field to our Product and Engineering teams to influence our product roadmap.
- Onboarding & Implementation Guidance: Facilitate a seamless post-sales transition by providing initial onboarding support and architectural guidance, setting customers up for sustained success.
At ClickUp, we are not just creating software, we are crafting the future of work! In a landscape saturated with work complexities, we envisioned a better solution. This vision led to the development of the first genuinely integrated AI workspace, seamlessly combining tasks, documents, chat, calendar, and enterprise search, all enhanced by context-sensitive AI. This empowers millions of teams to break down barriers, reclaim their time, and elevate productivity to new heights. Here at ClickUp, you will have the chance to learn, leverage, and innovate with AI in ways that will not only transform our product but also redefine the future of work itself. Join us in being part of a daring, innovative team that is reshaping possibilities!

Role Overview
We are in search of a highly talented and experienced Staff AI Engineer – AI Platform to become a vital member of our ClickUp Engineering team. In this pivotal role, you will significantly contribute to the development of our core AI platform and directly utilize large language models (LLMs) to implement intelligent features across ClickUp. Your focus will be on back-end systems that facilitate scalable, dependable, and secure AI-driven capabilities, while also engaging hands-on with LLMs to address real user challenges and propel product innovation.

Key Responsibilities
- Design, architect, and implement scalable AI platform services to support the deployment, orchestration, and lifecycle management of LLMs and other AI models.
- Utilize LLMs and other AI technologies to build and enhance ClickUp's intelligent features, collaborating closely with product and engineering teams to deliver impactful solutions.
- Develop and maintain robust APIs and backend systems that facilitate seamless integration of AI-enhanced features into ClickUp's core platform.
- Create infrastructure for model serving, monitoring, logging, and automated evaluation to ensure high reliability and performance of AI services in production.
- Integrate with various LLM providers (e.g., OpenAI, Anthropic, Google) and manage model selection, routing, and fallback strategies for optimal performance and cost-efficiency.
- Promote best practices in AI privacy, security, and compliance, including data anonymization and secure data handling.
- Optimize platform performance, scalability, and cost-efficiency, utilizing cloud-native technologies and distributed systems.
- Stay abreast of advancements in AI infrastructure, MLOps, and LLM applications, and proactively apply relevant innovations to ClickUp's AI platform.
- Collaborate cross-functionally with teams to drive AI initiatives.
Full-time|$179.4K/yr - $224.3K/yr|On-site|San Francisco, CA; New York, NY
In a world where software is rapidly evolving, artificial intelligence (AI) is at the forefront, transforming how we interact with technology. At Scale AI, we recognize the immense potential of AI to enhance human capabilities, offering personalized support across various aspects of life, from coaching and tutoring to shopping and travel guidance. As enterprises, startups, and governments rush to integrate large language models (LLMs) into their operations, it is crucial to ensure these systems are safe, aligned, and effective. This involves rigorous human evaluation and reinforcement learning through human feedback (RLHF) during all stages of model development.

Our innovative products, including the Generative AI Data Engine, SGP, and Donovan, are designed to empower the most advanced LLMs and generative models globally. By leveraging world-class RLHF, human data generation, model evaluation, safety, and alignment, we are shaping the future of human-AI interaction.

As a member of our Platform Engineering team, you will play a pivotal role in designing and developing the foundational platforms that support Scale's operations. Your responsibilities will include architecting our core cloud infrastructure, enhancing our data lifecycle, and transforming the software development process for engineers at Scale. You will gain invaluable insights into the AI landscape as it develops within diverse sectors.
About Anything
Anything is a pioneering AI product engineering company, empowering the next generation of entrepreneurs. Our innovative AI agent transforms English into fully functional applications, encapsulating everything needed to monetize online ventures, including mobile solutions, web interfaces, design, AI capabilities, backend services, infrastructure, and payment systems. Since our launch on August 7th, we have achieved $5 million in revenue and are rapidly expanding. Discover more at anything.com.

Role Overview
What You Will Do
We are looking for individuals eager to accelerate their growth and make a significant impact. In this role, you will develop systems that support millions of applications and billions of users, addressing the challenges that arise in a high-demand environment. You will design and maintain the runtime, control plane, and isolation boundaries essential for safely executing user-generated applications at scale.

Your innovative solutions will utilize platform telemetry, execution data, and feedback loops to enhance code generation and application performance, all powered by our AI-centric platform. You will take ownership of key components of the platform from architecture and implementation to operational production and iteration.

Operational Responsibilities
- Design and manage multi-tenant cloud infrastructure, focusing on isolation, deployment, observability, and cost control for customer applications.
- Ensure top-tier reliability and performance for our platform.
- Conduct research to inform decisions regarding technology choices and service providers.
- Collaborate closely with product teams to develop platform features that drive product innovation.
- Stay informed about the latest advancements in infrastructure research and development.

Successful platform management requires composure under pressure. We value self-assurance coupled with curiosity and a commitment to evidence-based decision-making.

Key Performance Metrics
Your effectiveness will be evaluated based on:
1. Runtime Infrastructure: Develop and oversee scalable, low-latency infrastructure for user applications.
2. Platform Reliability: You will ensure the platform's uptime and reliability, preventing failures from affecting multiple customers. Our users expect high availability and rapid issue resolution.
3. Platform Support for Product Features: You will create the platform features essential to support our product roadmap, ensuring seamless integration and performance.
About Abridge
Founded in 2018, Abridge is dedicated to enhancing understanding in healthcare through our innovative AI-powered platform. We specialize in transforming medical conversations into structured clinical notes in real-time, enabling clinicians to prioritize patient care. Our enterprise-grade technology seamlessly integrates with electronic medical records (EMRs) to ensure accuracy and trust in AI-generated summaries.

As pioneers in generative AI for healthcare, we are setting the industry benchmarks for responsible AI deployment across health systems. Our diverse team consists of practicing MDs, AI scientists, PhDs, creatives, technologists, and engineers united in their mission to empower patients and make healthcare more comprehensible. We have offices located in San Francisco's Mission District, New York's SoHo neighborhood, and East Liberty in Pittsburgh.

The Role
Join us as an AI Platform Engineer, where your work will significantly impact the healthcare sector. You will collaborate with a multidisciplinary team of researchers, clinical scientists, and product engineers to design and develop the runtime, orchestration engine, and evaluation platform necessary for agentic orchestration and LLM-driven workflows.

What You'll Do
- Create GenAI systems that transform LLMs into composable, reliable tools, utilizing retrieval, tool use, agentic reasoning, and structured outputs.
- Develop a highly reliable and scalable agent runtime that includes orchestration, shared state and memory, tool-calling interfaces, and scheduling focused on cost, latency, and quality.
- Build secure, sandboxed environments for agent actions and code, optimizing for cold start, isolation, and observability.
- Deliver unified interfaces for multiple model sizes and providers; integrate with open tool ecosystems such as MCP-style connectors.
- Create an evaluation platform for both online and offline assessments, A/B testing, safety checks, and regression gates that enhance agent reliability over time.
- Collaborate with Research to bring new agent capabilities from prototype to production.

What You'll Bring
- Demonstrated experience in building agent applications with tool-calling, context engineering, and related technologies.
- Strong problem-solving skills and the ability to work in a fast-paced, collaborative environment.
- Familiarity with generative AI technologies and their applications in healthcare.
Full-time|$210K/yr - $240K/yr|On-site|San Francisco, California, United States
Who Are We?
Postman is the leading global API platform, utilized by over 45 million developers and 500,000 organizations, including 98% of the Fortune 500. We are dedicated to fostering an API-first world by simplifying every stage of the API lifecycle and enhancing collaboration, enabling users to build superior APIs more efficiently.

Headquartered in San Francisco, Postman has offices in Boston, New York, Austin, Tokyo, London, and Bangalore, the birthplace of our company. We are a privately held organization, supported by investors like Battery Ventures, BOND, Coatue, CRV, Insight Partners, and Nexus Venture Partners. Discover more at postman.com or connect with us on X at @getpostman.

P.S.: We highly encourage you to read The "API-First World" graphic novel to gain insight into our vision at Postman.

The Opportunity
We are seeking a talented Senior Backend Engineer to join our Cloud Platform team, where you will be instrumental in developing the core systems and services that drive Postman's internal platform. You will design and implement new backend services responsible for deploying, scaling, and managing our infrastructure and product services, utilizing Java, Spring Boot, and Hibernate (JPA) alongside cloud-native technologies such as Kubernetes, ArgoCD, Istio, and Terraform.

Your contributions will have a significant impact, as the systems you create will be employed across Postman engineering, enabling quicker delivery, improved scalability, and an enhanced developer experience. Additionally, you will have the chance to contribute to open-source projects, extending your influence beyond Postman's ecosystem.

What You'll Do
- Design and develop backend services in Java and Spring Boot to enhance Postman's internal Cloud Platform.
- Architect new services for managing service deployment, lifecycle, and scaling across Kubernetes clusters.
- Implement GitOps workflows (ArgoCD) to facilitate continuous delivery.
- Integrate with cloud-native tools such as Istio, Helm, and Terraform.
- Adhere to robust software engineering practices, including testing, code reviews, CI/CD, and comprehensive documentation.
- Drive innovation by participating in open-source initiatives where applicable.
- Collaborate with Cloud Architects, DevOps engineers, and product teams to ensure alignment with business objectives.
- Ensure the performance, reliability, and scalability of the systems you develop.
About Brain Co.
At Brain Co., we are dedicated to developing cutting-edge AI systems that enhance critical workflows for some of the globe's most vital institutions. Our platform functions within high-stakes, highly regulated environments where security is not merely an add-on; it is a fundamental requirement.

As a Security Engineer at Brain Co., you will be responsible for designing and building the security architecture that safeguards AI systems deployed within government entities, energy providers, and healthcare organizations. Your work will span cloud infrastructure, application layers, and compliance-driven environments, ensuring our platform remains secure by default, auditable, and resilient at scale.

This role is pivotal within our Platform organization. You will collaborate closely with teams in Infrastructure, AI/ML, and Product Engineering to integrate security into our system design, deployment, and operational processes, rather than applying it retroactively.
Full-time|$109K/yr - $160K/yr|On-site|Livingston, NJ / New York, NY / Sunnyvale, CA / San Francisco, CA / Bellevue, WA
CoreWeave is The Essential Cloud for AI™, designed and built by pioneers for pioneers. We empower innovators to confidently build and scale AI through our advanced technology, tools, and expert teams. Trusted by top AI labs, startups, and global enterprises, CoreWeave combines exceptional infrastructure performance with profound technical expertise to drive innovation. Founded in 2017, we became a publicly traded company (Nasdaq: CRWV) in March 2025. Discover more at www.coreweave.com.

What You'll Do
About the Team
The Enterprise Systems team at CoreWeave is tasked with constructing, maintaining, and scaling the internal platforms that facilitate collaboration and productivity across the organization. This encompasses tools such as Atlassian (Jira, Confluence) and Asana, supporting our engineering, product, and business teams, along with external partners. Our focus is on ensuring reliability, scalability, and the ongoing enhancement of internal tools to empower teams to operate efficiently and effectively.

About the Role
In the role of a Productivity Platforms Engineer, you will be instrumental in the daily administration and enhancement of CoreWeave's collaboration and work management tools. Collaborating closely with seasoned engineers, you will maintain system reliability, troubleshoot issues, and implement improvements to optimize team workflows. This position involves hands-on configuration, user support, and gaining exposure to automation and integrations. Over time, you will assume responsibility for specific tools and workflows as you develop your technical expertise.
Full-time|$190K/yr - $230K/yr|Remote|Remote with offices in San Francisco, CA / New York, NY / Minneapolis, MN
Dagster Labs develops tools that enable organizations to build scalable and efficient data platforms. The company's core offerings include Dagster, an open-source project popular among developers, and Dagster+, a managed cloud solution. These products support thousands of teams, ranging from early-stage startups to established enterprises, in their analytics, machine learning, and AI initiatives. With the rapid growth of AI, the need for reliable, high-quality data has never been greater. Dagster Labs is dedicated to simplifying the testing, comprehension, and usability of data platforms. Many top AI companies have adopted Dagster as a foundational part of their technology stack.

Team culture
The team operates with strong funding and a collaborative spirit. High standards, open communication, and a focus on trust and curiosity shape the work environment. The company values a workplace free from egos and unnecessary drama.

Locations
This is a remote-first company with offices in San Francisco, New York, and Minneapolis.
Full-time|On-site|San Francisco, CA | New York City, NY
Role overview
The Product Marketing Lead for the Claude Platform at Anthropic will shape and carry out marketing strategies for AI products within the Cloud division. This position is based in either San Francisco, CA or New York City, NY.

What you will do
- Collaborate with product teams to turn technical features into clear, compelling stories tailored for specific audiences.
- Create and execute go-to-market plans for both new and existing products.
- Organize and manage promotional campaigns that boost product awareness and adoption.
- Study market trends and use those insights to position the Claude Platform for success.
- Share the product's value with customers and stakeholders through a variety of channels.
Full-time|$120K/yr - $160K/yr|On-site|New York, San Francisco, Munich or London
We appreciate your interest in joining the Uncountable Engineering team!

About the Role
Uncountable is on the lookout for passionate software engineers who specialize in deploying Generative AI within software applications.

Our software platform is utilized by scientists at top R&D organizations to efficiently structure and analyze experimental data. The researchers leveraging our platform are tackling significant challenges in materials science, chemistry, biotechnology, and other scientific domains. Our mission is to empower them to overcome these challenges with greater speed and efficiency.

In this role, you will harness the latest advancements in large language models (LLMs) to address the everyday obstacles faced by scientists. You will be responsible for developing AI-enhanced search and visualization tools, innovative user experiences utilizing multimodal LLMs, intelligent research assistants, and much more. This is a rare chance to be at the forefront of accelerating scientific innovation through LLMs and generative AI.

Your Responsibilities:
- Design and implement LLM-powered features from start to finish, encompassing both LLM-specific tasks (prompting, latency optimization, behavior tuning/testing) and conventional full-stack development (frontend/backend coding, API design, and database architecture).
- Create a robust in-house LLM architecture to meet the evolving demands of our product (including API integration, testing, observability, caching, and infrastructure for fine-tuning and inference).

At Uncountable, we pride ourselves on a culture of continuous deployment and rapid iteration. You will have the opportunity to release your work frequently to our users and refine it based on their feedback.

The features you develop will often represent groundbreaking approaches to software applications, allowing you to take on diverse roles as a designer, researcher, engineer, product manager, and more to craft outstanding products. This position is ideal for individuals eager to quickly enhance their skills and perspectives or those aspiring to start their own ventures in the future.

Salary Range: $120K-$160K + Equity
ABOUT RETELL AI
At Retell AI, we are revolutionizing the call center experience using cutting-edge voice AI technology. In just 18 months since our inception, thousands of companies have leveraged our AI voice agents to streamline sales, support, and logistics operations that previously required extensive human teams. Supported by prominent investors such as Y Combinator and Alt Capital, we have grown from $5M to an impressive $36M ARR with a dedicated team of 20.

Our ambitious vision for 2026 is to create a state-of-the-art customer experience platform where entire contact centers are driven by AI. Rather than relying on basic automation that necessitates constant human oversight, we are developing intelligent AI "workers" to function as frontline agents, QA analysts, and managers, constantly executing, monitoring, and enhancing customer interactions.

As we rapidly expand, we seek passionate innovators eager to solve complex technical challenges, move swiftly, and make a meaningful impact at one of the fastest-growing voice AI startups. Join us in building the future!

Ranked among the top 50 AI applications in the a16z list: https://tinyurl.com/5853dt2x
Ranked #4 on Brex's Fast-Growing Software Vendors of 2025: https://www.brex.com/journal/brex-benchmark-december-2025
One of the top startups on the Lean AI leaderboard: https://leanaileaderboard.com/

THE ROLE
We are in search of a Principal/Staff Engineer to spearhead the technical direction of our core platform. This is an individual contributor role designed for someone who excels in uncertainty, acts swiftly, and elevates the standards of those around them.

You will engage with various systems, infrastructure, and product surfaces while collaborating closely with engineering teams, product managers, and leadership to scale successful initiatives and innovate for the future.

This role is not about merely addressing tickets; you will identify challenges, engineer solutions, and deliver impactful results.

KEY RESPONSIBILITIES
- Lead the design and evolution of our core platform and systems architecture.
- Oversee complex technical projects from inception to production.
- Make strategic technical decisions that optimize for speed, reliability, and scalability.
- Collaborate across teams to facilitate knowledge sharing and best practices.
Full-time|$146.3K/yr - $195K/yr|Remote|Remote, USA
About Stitch Fix, Inc.
Stitch Fix (NASDAQ: SFIX) stands as the premier online personal styling service, dedicated to helping individuals uncover styles that they will adore and that flatter their unique figures. We understand that few experiences are as personal as getting dressed, yet finding clothing that fits well and looks great can be daunting. Stitch Fix addresses this challenge by merging the expertise of skilled stylists with cutting-edge AI and recommendation algorithms. Our unique blend of exclusive and nationally recognized brands caters to each client's distinct preferences and requirements, allowing them to express their personal style effortlessly without spending hours in stores or browsing through countless online options. Founded in 2011 and headquartered in San Francisco, we are transforming the retail landscape.

About the Role
We are seeking a dynamic Manager of Data & AI Platform Engineering to spearhead our team of engineers focused on core data, machine learning, and generative AI platforms. You will play a crucial role in realizing our vision and driving the technical implementation of systems that facilitate AI-driven, data-centric experiences throughout the organization. This will empower richer personalization, enhanced decision-making, intelligent automation, and innovation across the enterprise.

You will help refine our technical infrastructure to support next-generation AI applications, including unified signals, adaptive and context-aware models, semantic understanding, retrieval-based intelligence, and advanced machine learning workflows.
ABOUT MITHRL
At Mithrl, we envision a future where groundbreaking medicines reach patients in mere months, not years, and where scientific innovations occur at unprecedented speeds.

Mithrl is pioneering the world's first commercially available AI Co-Scientist, a transformative discovery engine that converts complex biological data into actionable insights in minutes. Through natural language queries, scientists can engage with Mithrl to receive thorough analyses, novel targets, hypotheses, and patent-ready reports.

Our impressive achievements include:
- 12X year-over-year revenue growth
- Trusted by leading biotech firms and major pharmaceutical companies across three continents
- Facilitating significant breakthroughs from target discovery to patient outcomes.

ABOUT THE ROLE
We are seeking a Platform Solutions Engineer who will act as the technical liaison between Mithrl's platform and our clients. This position requires a unique blend of skills, as you will architect solutions for intricate customer environments and manage DevOps responsibilities to ensure secure, scalable, and reliable deployments.

In this role, you will design and implement integrations, guide clients on best practices for incorporating Mithrl into their R&D workflows, and collaborate with internal teams to facilitate seamless onboarding and deployment. Concurrently, you will construct and maintain cloud infrastructure, CI systems, monitoring, and automation that support Mithrl's large-scale operations.

If you thrive on blending customer-focused problem-solving with advanced engineering and DevOps tasks, this position offers a unique chance to influence the technical framework of Mithrl as we expand.

WHAT YOU WILL DO
- Architect technical solutions for customers integrating Mithrl into their cloud and on-premise environments
- Collaborate closely with Account Executives, customer success teams, and scientific users to understand workflows and design scalable deployment strategies
- Develop and maintain secure and reliable deployment infrastructure in AWS, including compute environments, networking, container orchestration, and storage
- Oversee and enhance CI pipelines, automated testing environments, and release procedures
- Create monitoring, logging, and alerting systems that provide engineering and customer success teams with clear visibility into operations.
About the Team
Join OpenAI's Forward Deployed Engineering (FDE) team, where innovation meets real-world application. Our organization operates at the crucial intersection of product development, engineering, research, and market strategy. We collaborate with design partners to transform raw customer insights into tangible software solutions, standardized processes, and robust products.

The FDE Platform team plays a pivotal role in amplifying the impact of the FDE organization on OpenAI's overall platform and product offerings. We offer hands-on support by integrating with customer-focused FDE pods to assist in architecture, product design, refactoring, and development. This role is ideal for collaborative software engineers passionate about creating cutting-edge products alongside fellow innovators.

About the Role
As the Engineering Manager for Platform FDE, you will lead a dedicated team that enhances the broader FDE organization's capabilities. Your primary objective will be to accelerate the transformation of client-driven ideas into sustainable, scalable platform solutions by providing essential product and engineering support to customer-focused pods.

You will work closely with customer-oriented FDE teams that focus on delivery and client outcomes, embedding Platform FDE resources where they can make the most significant impact. This involves direct participation in architectural design, product refinement, modular development, and establishing reusable abstractions, all while maintaining the pods' ownership of customer insights and daily operations. Additionally, you will collaborate with our B2B Platform Team and other key stakeholders to align on what should be generalized, what should remain tailored to specific clients, and what defines a successful handoff.

This position demands a blend of engineering leadership and strong product-oriented decision-making. You will manage a focused portfolio of platform development initiatives with clear hypotheses, success metrics, and measurable adoption goals. You will be accountable for converting deployment signals into reusable capabilities that can be seamlessly transitioned to long-term stakeholders. Your leadership will involve influencing multiple stakeholders, maintaining high technical standards, and establishing clear accountability between your team, client pods, and partner organizations.

This role is primarily based in San Francisco or New York City and follows a hybrid work model requiring three days in the office per week. We provide relocation assistance, and travel is project-based and typically low.
Full-time|$165K/yr - $242K/yr|On-site|Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA / San Francisco, CA
CoreWeave is seeking a Security Engineering Manager to lead the Platform Security team. This position is based in Livingston, NJ, New York, NY, Sunnyvale, CA, Bellevue, WA, or San Francisco, CA. The team's mission is to embed security into CoreWeave's Kubernetes-based platform and public cloud environments, supporting high-performance infrastructure for AI and machine learning workloads.

Role overview
This manager will oversee and expand the Platform Security engineering team, reporting to the Senior Director of Security Foundations. The focus is on hands-on leadership and technical execution, with an emphasis on building and implementing security controls rather than policy development. The role requires close collaboration with Infrastructure, Platform Engineering, Site Reliability Engineering, and other security teams to ensure security measures keep pace with business growth and evolving needs.

What you will do
Lead and grow the Platform Security engineering team.
Integrate security into Kubernetes infrastructure and public cloud platforms such as AWS, GCP, and Azure.
Define and execute strategies for cloud security posture, workload isolation, platform guardrails, image integrity, and multi-cloud security.
Develop and implement security controls across CoreWeave's infrastructure.
Work closely with other technical teams to align platform security with business needs.

The Platform Security team
The Platform Security team at CoreWeave engineers systems that enforce security at the infrastructure layer. Their work spans both CoreWeave's own Kubernetes-based platform and third-party public cloud environments. The team supports GPU-accelerated infrastructure for demanding AI and machine learning workloads, ensuring that both customer and internal services remain secure as CoreWeave's global presence expands.
Role overview
Harvey seeks a Staff Software Engineer to strengthen the AI Platform team in San Francisco. The position focuses on designing and building advanced software solutions that extend the platform's AI features. Collaboration is central to this role: the engineer will work closely with colleagues across different teams to deliver systems that are both scalable and efficient. Every project supports Harvey's broader mission and long-term goals.

What you will do
Design and implement software components that enhance AI capabilities
Work with teams throughout the company to deliver reliable, high-performance systems
Contribute to platform improvements that support Harvey's mission

Location
This position is based in San Francisco.
Airbyte stands at the forefront of open-source data movement, enabling data teams to seamlessly transfer information from diverse applications, APIs, unstructured sources, and databases to data warehouses, lakes, and AI applications. With tens of thousands of connectors and adoption across hundreds of thousands of companies, we have demonstrated the viability of large-scale data integration. Our ongoing mission is to build an advanced agentic data infrastructure, designed for AI agents that require swift and accurate access to data across numerous sources. We aim to make data universally accessible and actionable.

Having secured $181M from leading investors such as Benchmark, Accel, Altimeter, Coatue, and Y Combinator, we are committed to a product-led growth strategy where we create exceptional solutions that resonate with our users. This funding empowers us to explore innovative avenues while maintaining a nimble and experimental approach in an AI-driven landscape.

The Role:
As a critical member of the Data Replication team, you will serve as an infrastructure and reliability engineer within a full-stack product team that executes over 3 million sync jobs weekly, facilitating thousands of data use cases across various regions and cloud environments. You will be responsible for building and maintaining the infrastructure, establishing reliability standards, reducing incidents, and streamlining the shipping process for engineers through better tooling. You should feel equally at home working with Terraform files, Kubernetes clusters, and postmortem documentation.

We encourage our engineers to actively leverage AI as a force multiplier—utilizing agentic tools to automate repetitive tasks, enhance incident response, and develop smarter internal tooling. If you haven't yet embraced this approach, we hope you're eager to start. We value how you work just as much as what you produce. Trust, transparency, and craftsmanship are paramount here.

What You'll Do:
Take ownership of the infrastructure that supports the Data Replication platform, including Kubernetes clusters, CI/CD pipelines, secrets management, networking, and cloud resource configuration across AWS and GCP.
Collaborate with product engineers to ensure reliable integration of product features with infrastructure.
Enhance observability, alerting, and anomaly detection systems with a focus on LLM automation.
Develop and improve AI-augmented release and internal tooling, including canary deployments, progressive rollouts, automated release qualification, and rollback automation.
Establish high standards for infrastructure within the team by creating self-serve tools, writing runbooks, and mentoring engineers.
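The canary deployment and rollback automation listed above comes down to a promotion gate: compare the canary's health signal against the stable baseline and roll back when it degrades. A minimal sketch, assuming error rate is the only signal and using an illustrative tolerance (this is not Airbyte's actual tooling):

```python
def canary_healthy(canary_error_rate: float,
                   baseline_error_rate: float,
                   tolerance: float = 0.005) -> bool:
    """Gate: the canary may be promoted only if its error rate stays
    within `tolerance` of the stable baseline's error rate."""
    return canary_error_rate <= baseline_error_rate + tolerance

def decide(canary_err: float, baseline_err: float) -> str:
    """Return the action an automated rollout controller would take."""
    return "promote" if canary_healthy(canary_err, baseline_err) else "rollback"
```

A real controller would evaluate several metrics over a time window and shift traffic gradually; this only shows the shape of the decision.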
Lambda, recognized as The Superintelligence Cloud, is a pioneering force in AI cloud infrastructure, empowering tens of thousands of customers, from AI researchers to large enterprises and hyperscalers. Our mission is to make computational power as accessible as electricity, providing everyone the capability of superintelligence—one person, one GPU.

Join us in our quest to build the world's leading AI cloud platform.

Note: This role mandates in-office presence in our San Francisco location four days a week; Lambda's designated remote work day is Tuesday.

As an Engineering Manager at Lambda, you will lead the development and scaling of our cloud offerings, which encompass the Lambda website, cloud APIs, and internal tools for deployment, management, and maintenance.
About Us:
At novita-ai, we are a rapidly growing global provider of AI cloud infrastructure, leading the charge in the artificial intelligence revolution. Our platform equips developers and enterprises with powerful, scalable, and user-friendly solutions such as Model APIs, GPU Instances, and Serverless Computing. As organizations around the globe strive to integrate AI into their offerings, we serve as the essential engine that fuels their innovation.

Join our world-class team and contribute to our expanding customer base. This is a unique opportunity to be part of a dynamic company in a hyper-growth market, where your technical skills will directly impact customer success and drive our business forward.

The Role:
As a Solutions Engineer, you will act as the primary technical leader and trusted advisor for our clients throughout their journey. You will collaborate closely with the sales team to bridge the gap between complex customer challenges and our technical solutions. Your mission is to build technical credibility, demonstrate the capabilities of our platform, and design tailored solutions that empower our clients to achieve their AI-related business objectives.

What You'll Do:
Technical Discovery & Solution Design: Collaborate with Account Executives to gain a deep understanding of customer needs, technical requirements, and business goals. Develop elegant and effective solutions utilizing our AI infrastructure stack (Model APIs, GPU Instances, Serverless).
Product Demonstration & Proof of Concept (POC): Conduct engaging, customized product demonstrations and interactive workshops. Plan, manage, and execute successful POCs, showcasing the value and performance of our platform within the client's environment.
Technical Evangelism & Trusted Advisory: Communicate the value proposition of our platform to diverse audiences, both technical and non-technical, from engineers to C-level executives. Establish yourself as the go-to expert for customers on best practices in AI infrastructure.
Sales Enablement & Market Feedback Loop: Create and maintain technical sales materials, including whitepapers, best practice guides, and demo scripts. Serve as the voice of the customer, relaying feedback from the field to our Product and Engineering teams to influence our product roadmap.
Onboarding & Implementation Guidance: Facilitate a seamless post-sales transition by providing initial onboarding support and architectural guidance, setting customers up for sustained success.
At ClickUp, we are not just creating software, we are crafting the future of work! In a landscape saturated with work complexities, we envisioned a better solution. This vision led to the development of the first genuinely integrated AI workspace, seamlessly combining tasks, documents, chat, calendar, and enterprise search, all enhanced by context-sensitive AI. This empowers millions of teams to break down barriers, reclaim their time, and elevate productivity to new heights. Here at ClickUp, you will have the chance to learn, leverage, and innovate with AI in ways that will not only transform our product but also redefine the future of work itself. Join us in being part of a daring, innovative team that is reshaping possibilities!

Role Overview:
We are in search of a highly talented and experienced Staff AI Engineer – AI Platform to join our ClickUp Engineering team. In this pivotal role, you will contribute significantly to the development of our core AI platform and directly utilize large language models (LLMs) to implement intelligent features across ClickUp. Your focus will be on back-end systems that facilitate scalable, dependable, and secure AI-driven capabilities, while also engaging hands-on with LLMs to address real user challenges and propel product innovation.

Key Responsibilities:
Design, architect, and implement scalable AI platform services to support the deployment, orchestration, and lifecycle management of LLMs and other AI models.
Utilize LLMs and other AI technologies to build and enhance ClickUp's intelligent features, collaborating closely with product and engineering teams to deliver impactful solutions.
Develop and maintain robust APIs and backend systems that facilitate seamless integration of AI-enhanced features into ClickUp's core platform.
Create infrastructure for model serving, monitoring, logging, and automated evaluation to ensure high reliability and performance of AI services in production.
Integrate with various LLM providers (e.g., OpenAI, Anthropic, Google) and manage model selection, routing, and fallback strategies for optimal performance and cost-efficiency.
Promote best practices in AI privacy, security, and compliance, including data anonymization and secure data handling.
Optimize platform performance, scalability, and cost-efficiency, utilizing cloud-native technologies and distributed systems.
Stay abreast of advancements in AI infrastructure, MLOps, and LLM applications, and proactively apply relevant innovations to ClickUp's AI platform.
Collaborate cross-functionally with teams to drive AI initiatives.
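The provider routing and fallback responsibility above can be sketched as a priority-ordered provider list with per-provider retries. Everything here (provider names, stub calls, retry policy) is a hypothetical illustration, not ClickUp's implementation:

```python
class ProviderError(Exception):
    """Raised when a provider call fails (rate limit, outage, etc.)."""

def call_primary(prompt: str) -> str:
    # Stub standing in for the primary provider's SDK; it always
    # fails here to demonstrate the fallback path.
    raise ProviderError("rate limited")

def call_fallback(prompt: str) -> str:
    # Stub standing in for a secondary provider.
    return f"completion for: {prompt}"

# Priority order: cheapest/fastest first, fallbacks after.
PROVIDERS = [("primary", call_primary), ("fallback", call_fallback)]

def complete(prompt: str, retries_per_provider: int = 2) -> str:
    """Try each provider in order; move on after repeated failures."""
    last_error = None
    for name, call in PROVIDERS:
        for _ in range(retries_per_provider):
            try:
                return call(prompt)
            except ProviderError as err:
                last_error = err  # a real router would log and back off
    raise RuntimeError(f"all providers exhausted: {last_error}")
```

A production router would add cost/latency-aware model selection and circuit breakers, but the control flow stays the same.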
Full-time|$179.4K/yr - $224.3K/yr|On-site|San Francisco, CA; New York, NY
In a world where software is rapidly evolving, artificial intelligence (AI) is at the forefront, transforming how we interact with technology. At Scale AI, we recognize the immense potential of AI to enhance human capabilities, offering personalized support across various aspects of life—from coaching and tutoring to shopping and travel guidance. As enterprises, startups, and governments rush to integrate large language models (LLMs) into their operations, it is crucial to ensure these systems are safe, aligned, and effective. This requires rigorous human evaluation and reinforcement learning from human feedback (RLHF) during all stages of model development.

Our products, including the Generative AI Data Engine, SGP, and Donovan, are designed to empower the most advanced LLMs and generative models globally. By leveraging world-class RLHF, human data generation, model evaluation, safety, and alignment, we are shaping the future of human-AI interaction.

As a member of our Platform Engineering team, you will play a pivotal role in designing and developing the foundational platforms that support Scale's operations. Your responsibilities will include architecting our core cloud infrastructure, enhancing our data lifecycle, and transforming the software development process for engineers at Scale. You will gain invaluable insight into the AI landscape as it develops across diverse sectors.
About Anything
Anything is a pioneering AI product engineering company, empowering the next generation of entrepreneurs. Our AI agent transforms English into fully functional applications, encapsulating everything needed to monetize online ventures: mobile solutions, web interfaces, design, AI capabilities, backend services, infrastructure, and payment systems. Since our launch on August 7th, we have achieved $5 million in revenue and are rapidly expanding. Discover more at anything.com.

Role Overview
What You Will Do
We are looking for individuals eager to accelerate their growth and make a significant impact. In this role, you will develop systems that support millions of applications and billions of users, addressing the challenges that arise in a high-demand environment. You will design and maintain the runtime, control plane, and isolation boundaries essential for safely executing user-generated applications at scale. Your solutions will utilize platform telemetry, execution data, and feedback loops to enhance code generation and application performance, all powered by our AI-centric platform. You will take ownership of key components of the platform, from architecture and implementation to production operation and iteration.

Operational Responsibilities
Design and manage multi-tenant cloud infrastructure, focusing on isolation, deployment, observability, and cost control for customer applications.
Ensure top-tier reliability and performance for our platform.
Conduct research to inform decisions regarding technology choices and service providers.
Collaborate closely with product teams to develop platform features that drive product innovation.
Stay informed about the latest advancements in infrastructure research and development.

Successful platform management requires composure under pressure. We value self-assurance coupled with curiosity and a commitment to evidence-based decision-making.

Key Performance Metrics
Your effectiveness will be evaluated on:
1. Runtime Infrastructure: Develop and oversee scalable, low-latency infrastructure for user applications.
2. Platform Reliability: Ensure the platform's uptime and reliability, preventing failures from affecting multiple customers. Our users expect high availability and rapid issue resolution.
3. Platform Support for Product Features: Create the platform features essential to support our product roadmap, ensuring seamless integration and performance.
About Abridge
Founded in 2018, Abridge is dedicated to enhancing understanding in healthcare through our AI-powered platform. We specialize in transforming medical conversations into structured clinical notes in real time, enabling clinicians to prioritize patient care. Our enterprise-grade technology integrates seamlessly with electronic medical records (EMRs) to ensure accuracy and trust in AI-generated summaries.

As pioneers in generative AI for healthcare, we are setting the industry benchmarks for responsible AI deployment across health systems. Our diverse team consists of practicing MDs, AI scientists, PhDs, creatives, technologists, and engineers united in their mission to empower patients and make healthcare more comprehensible. We have offices in San Francisco's Mission District, New York's SoHo neighborhood, and East Liberty in Pittsburgh.

The Role
Join us as an AI Platform Engineer, where your work will significantly impact the healthcare sector. You will collaborate with a multidisciplinary team of researchers, clinical scientists, and product engineers to design and develop the runtime, orchestration engine, and evaluation platform necessary for agentic orchestration and LLM-driven workflows.

What You'll Do
Create GenAI systems that transform LLMs into composable, reliable tools, utilizing retrieval, tool use, agentic reasoning, and structured outputs.
Develop a highly reliable and scalable agent runtime that includes orchestration, shared state and memory, tool-calling interfaces, and scheduling focused on cost, latency, and quality.
Build secure, sandboxed environments for agent actions and code, optimizing for cold start, isolation, and observability.
Deliver unified interfaces for multiple model sizes and providers; integrate with open tool ecosystems such as MCP-style connectors.
Create an evaluation platform for both online and offline assessments, A/B testing, safety checks, and regression gates that enhance agent reliability over time.
Collaborate with Research to bring new agent capabilities from prototype to production.

What You'll Bring
Demonstrated experience building agent applications with tool-calling, context engineering, and related technologies.
Strong problem-solving skills and the ability to work in a fast-paced, collaborative environment.
Familiarity with generative AI technologies and their applications in healthcare.
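The agent runtime described above centers on one loop: call the model, execute any tool it requests, feed the result back, and stop when it produces a final answer. A toy sketch with a scripted stand-in for the model; the tool names and behavior are invented for illustration and are not Abridge's system:

```python
import json

# Hypothetical tool registry: name -> callable. A real runtime would
# expose schema-described tools to the model.
TOOLS = {
    "lookup_medication": lambda name: {"name": name, "class": "ACE inhibitor"},
}

def fake_llm(messages):
    """Stand-in for a model call: requests a tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup_medication", "args": {"name": "lisinopril"}}
    return {"answer": "lisinopril is an ACE inhibitor"}

def run_agent(user_msg, max_steps=5):
    """Minimal agent loop: call model, run requested tools, repeat."""
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        out = fake_llm(messages)
        if "answer" in out:
            return out["answer"]
        # Execute the requested tool and append its result to the context.
        result = TOOLS[out["tool"]](**out["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    raise RuntimeError("step budget exhausted")
```

A production runtime adds the pieces the posting lists around this loop: shared state/memory, sandboxed execution, scheduling, and evaluation gates.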
Full-time|$210K/yr - $240K/yr|On-site|San Francisco, California, United States
Who Are We?
Postman is the leading global API platform, used by over 45 million developers and 500,000 organizations, including 98% of the Fortune 500. We are dedicated to fostering an API-first world by simplifying every stage of the API lifecycle and enhancing collaboration, enabling users to build superior APIs more efficiently.

Headquartered in San Francisco, Postman has offices in Boston, New York, Austin, Tokyo, London, and Bangalore—the birthplace of our company. We are a privately held organization, supported by investors like Battery Ventures, BOND, Coatue, CRV, Insight Partners, and Nexus Venture Partners. Discover more at postman.com or connect with us on X at @getpostman.

P.S.: We highly encourage you to read "The API-First World" graphic novel to gain insight into our vision at Postman.

The Opportunity
We are seeking a talented Senior Backend Engineer to join our Cloud Platform team, where you will be instrumental in developing the core systems and services that drive Postman's internal platform. You will design and implement new backend services responsible for deploying, scaling, and managing our infrastructure and product services, using Java, Spring Boot, and Hibernate (JPA) alongside cloud-native technologies such as Kubernetes, ArgoCD, Istio, and Terraform.

Your contributions will have significant impact: the systems you create will be used across Postman engineering, enabling quicker delivery, improved scalability, and an enhanced developer experience. Additionally, you will have the chance to contribute to open-source projects, extending your influence beyond Postman's ecosystem.

What You'll Do
Design and develop backend services in Java and Spring Boot to enhance Postman's internal Cloud Platform.
Architect new services for managing service deployment, lifecycle, and scaling across Kubernetes clusters.
Implement GitOps workflows (ArgoCD) to facilitate continuous delivery.
Integrate with cloud-native tools such as Istio, Helm, and Terraform.
Adhere to robust software engineering practices, including testing, code reviews, CI/CD, and comprehensive documentation.
Drive innovation by participating in open-source initiatives where applicable.
Collaborate with Cloud Architects, DevOps engineers, and product teams to ensure alignment with business objectives.
Ensure the performance, reliability, and scalability of the systems you develop.
About Brain Co.
At Brain Co., we are dedicated to developing cutting-edge AI systems that enhance critical workflows for some of the globe's most vital institutions. Our platform operates in high-stakes, highly regulated environments where security is not merely an add-on — it is a fundamental requirement.

As a Security Engineer at Brain Co., you will design and build the security architecture that safeguards AI systems deployed within government entities, energy providers, and healthcare organizations. Your work will span cloud infrastructure, application layers, and compliance-driven environments, ensuring our platform remains secure by default, auditable, and resilient at scale.

This role is pivotal within our Platform organization. You will collaborate closely with teams in Infrastructure, AI/ML, and Product Engineering to integrate security into our system design, deployment, and operational processes — rather than applying it retroactively.
Full-time|$109K/yr - $160K/yr|On-site|Livingston, NJ / New York, NY / Sunnyvale, CA / San Francisco, CA / Bellevue, WA
CoreWeave is The Essential Cloud for AI™, designed and built by pioneers for pioneers. We empower innovators to confidently build and scale AI through our advanced technology, tools, and expert teams. Trusted by top AI labs, startups, and global enterprises, CoreWeave combines exceptional infrastructure performance with deep technical expertise to drive innovation. Founded in 2017, we became a publicly traded company (Nasdaq: CRWV) in March 2025. Discover more at www.coreweave.com.

What You'll Do
About the Team
The Enterprise Systems team at CoreWeave is tasked with building, maintaining, and scaling the internal platforms that facilitate collaboration and productivity across the organization. This encompasses tools such as Atlassian (Jira, Confluence) and Asana, supporting our engineering, product, and business teams, along with external partners. Our focus is on reliability, scalability, and the ongoing enhancement of internal tools to empower teams to operate efficiently and effectively.

About the Role
As a Productivity Platforms Engineer, you will be instrumental in the daily administration and enhancement of CoreWeave's collaboration and work management tools. Collaborating closely with seasoned engineers, you will maintain system reliability, troubleshoot issues, and implement improvements to optimize team workflows. This position involves hands-on configuration, user support, and exposure to automation and integrations. Over time, you will take ownership of specific tools and workflows as you develop your technical expertise.
Full-time|$190K/yr - $230K/yr|Remote|Remote with offices in San Francisco, CA / New York, NY / Minneapolis, MN
Dagster Labs develops tools that enable organizations to build scalable and efficient data platforms. The company's core offerings include Dagster, an open-source project popular among developers, and Dagster+, a managed cloud solution. These products support thousands of teams, ranging from early-stage startups to established enterprises, in their analytics, machine learning, and AI initiatives. With the rapid growth of AI, the need for reliable, high-quality data has never been greater. Dagster Labs is dedicated to simplifying the testing, comprehension, and usability of data platforms. Many top AI companies have adopted Dagster as a foundational part of their technology stack.

Team culture
The team operates with strong funding and a collaborative spirit. High standards, open communication, and a focus on trust and curiosity shape the work environment. The company values a workplace free from egos and unnecessary drama.

Locations
This is a remote-first company with offices in San Francisco, New York, and Minneapolis.
Full-time|On-site|San Francisco, CA / New York City, NY
Role overview
The Product Marketing Lead for the Claude Platform at Anthropic will shape and carry out marketing strategies for AI products within the Cloud division. This position is based in either San Francisco, CA or New York City, NY.

What you will do
Collaborate with product teams to turn technical features into clear, compelling stories tailored for specific audiences.
Create and execute go-to-market plans for both new and existing products.
Organize and manage promotional campaigns that boost product awareness and adoption.
Study market trends and use those insights to position the Claude Platform for success.
Share the product's value with customers and stakeholders through a variety of channels.
Full-time|$120K/yr - $160K/yr|On-site|New York, San Francisco, Munich or London
We appreciate your interest in joining the Uncountable Engineering team!

About the Role
Uncountable is on the lookout for passionate software engineers who specialize in deploying Generative AI within software applications.

Our software platform is used by scientists at top R&D organizations to efficiently structure and analyze experimental data. The researchers leveraging our platform are tackling significant challenges in materials science, chemistry, biotechnology, and other scientific domains. Our mission is to empower them to overcome these challenges with greater speed and efficiency.

In this role, you will harness the latest advancements in large language models (LLMs) to address the everyday obstacles faced by scientists. You will be responsible for developing AI-enhanced search and visualization tools, innovative user experiences utilizing multimodal LLMs, intelligent research assistants, and much more. This is a rare chance to be at the forefront of accelerating scientific innovation through LLMs and generative AI.

Your Responsibilities:
Design and implement LLM-powered features from start to finish, encompassing both LLM-specific tasks (prompting, latency optimization, behavior tuning/testing) and conventional full-stack development (frontend/backend coding, API design, and database architecture).
Create a robust in-house LLM architecture to meet the evolving demands of our product (including API integration, testing, observability, caching, and infrastructure for fine-tuning and inference).

At Uncountable, we pride ourselves on a culture of continuous deployment and rapid iteration. You will have the opportunity to release your work frequently to our users and refine it based on their feedback. The features you develop will often represent groundbreaking approaches to software applications, allowing you to take on diverse roles as a designer, researcher, engineer, product manager, and more to craft outstanding products. This position is ideal for individuals eager to quickly enhance their skills and perspectives or those aspiring to start their own ventures in the future.

Salary Range: $120K-$160K + Equity
ABOUT RETELL AI
At Retell AI, we are revolutionizing the call center experience using cutting-edge voice AI technology. In just 18 months since our inception, thousands of companies have leveraged our AI voice agents to streamline sales, support, and logistics operations that previously required extensive human teams. Supported by prominent investors such as Y Combinator and Alt Capital, we have grown from $5M to $36M ARR with a dedicated team of 20.

Our ambitious vision for 2026 is to create a state-of-the-art customer experience platform where entire contact centers are driven by AI. Rather than relying on basic automation that necessitates constant human oversight, we are developing intelligent AI "workers" to function as frontline agents, QA analysts, and managers—constantly executing, monitoring, and enhancing customer interactions.

As we rapidly expand, we seek passionate innovators eager to solve complex technical challenges, move swiftly, and make a meaningful impact at one of the fastest-growing voice AI startups. Join us in building the future!

Ranked among the top 50 AI applications in the a16z list: https://tinyurl.com/5853dt2x
Ranked #4 on Brex's Fast-Growing Software Vendors of 2025: https://www.brex.com/journal/brex-benchmark-december-2025
One of the top startups on the Lean AI leaderboard: https://leanaileaderboard.com/

THE ROLE
We are in search of a Principal/Staff Engineer to spearhead the technical direction of our core platform. This is an individual contributor role designed for someone who excels in uncertainty, acts swiftly, and raises the standards of those around them.

You will engage with various systems, infrastructure, and product surfaces while collaborating closely with engineering teams, product managers, and leadership to scale successful initiatives and innovate for the future. This role is not about merely addressing tickets; you will identify challenges, engineer solutions, and deliver impactful results.

KEY RESPONSIBILITIES
Lead the design and evolution of our core platform and systems architecture.
Oversee complex technical projects from inception to production.
Make strategic technical decisions that optimize for speed, reliability, and scalability.
Collaborate across teams to facilitate knowledge sharing and best practices.
Full-time | $146.3K/yr - $195K/yr | Remote | Remote, USA
About Stitch Fix, Inc.
Stitch Fix (NASDAQ: SFIX) is the premier online personal styling service, dedicated to helping individuals discover styles they will love and that flatter their unique figures. We understand that few experiences are as personal as getting dressed, yet finding clothing that fits well and looks great can be daunting. Stitch Fix addresses this challenge by combining the expertise of skilled stylists with cutting-edge AI and recommendation algorithms. Our blend of exclusive and nationally recognized brands caters to each client's distinct preferences and requirements, letting them express their personal style without spending hours in stores or browsing countless online options. Founded in 2011 and headquartered in San Francisco, we are transforming the retail landscape.

About the Role
We are seeking a dynamic Manager of Data & AI Platform Engineering to lead our team of engineers focused on core data, machine learning, and generative AI platforms. You will play a crucial role in realizing our vision and driving the technical implementation of systems that enable AI-driven, data-centric experiences across the organization, empowering richer personalization, better decision-making, intelligent automation, and enterprise-wide innovation.

You will help evolve our technical infrastructure to support next-generation AI applications, including unified signals, adaptive and context-aware models, semantic understanding, retrieval-based intelligence, and advanced machine learning workflows.
ABOUT MITHRL
At Mithrl, we envision a future where groundbreaking medicines reach patients in months, not years, and scientific innovation happens at unprecedented speed.

Mithrl is building the world's first commercially available AI Co-Scientist, a discovery engine that converts complex biological data into actionable insights in minutes. Through natural language queries, scientists can ask Mithrl for thorough analyses, novel targets, hypotheses, and patent-ready reports.

Our achievements include:
- 12X year-over-year revenue growth
- Trusted by leading biotech firms and major pharmaceutical companies across three continents
- Facilitating significant breakthroughs from target discovery to patient outcomes

ABOUT THE ROLE
We are seeking a Platform Solutions Engineer to act as the technical liaison between Mithrl's platform and our clients. This position requires a unique blend of skills: you will architect solutions for intricate customer environments and take on DevOps responsibilities to ensure secure, scalable, and reliable deployments.

In this role, you will design and implement integrations, guide clients on best practices for incorporating Mithrl into their R&D workflows, and collaborate with internal teams to enable smooth onboarding and deployment. You will also build and maintain the cloud infrastructure, CI systems, monitoring, and automation that support Mithrl's large-scale operations.

If you thrive on blending customer-focused problem-solving with advanced engineering and DevOps work, this position offers a unique chance to shape Mithrl's technical foundation as we expand.

WHAT YOU WILL DO
- Architect technical solutions for customers integrating Mithrl into their cloud and on-premise environments
- Collaborate closely with Account Executives, customer success teams, and scientific users to understand workflows and design scalable deployment strategies
- Develop and maintain secure, reliable deployment infrastructure in AWS, including compute environments, networking, container orchestration, and storage
- Oversee and enhance CI pipelines, automated testing environments, and release procedures
- Create monitoring, logging, and alerting systems that give engineering and customer success teams clear visibility into operations
About the Team
Join OpenAI's Forward Deployed Engineering (FDE) team, where innovation meets real-world application. Our organization operates at the intersection of product development, engineering, research, and market strategy. We collaborate with design partners to transform raw customer insights into tangible software solutions, standardized processes, and robust products.

The FDE Platform team amplifies the impact of the FDE organization on OpenAI's overall platform and product offerings. We offer hands-on support by embedding with customer-focused FDE pods to assist in architecture, product design, refactoring, and development. This role is ideal for collaborative software engineers passionate about building cutting-edge products alongside fellow innovators.

About the Role
As the Engineering Manager for Platform FDE, you will lead a dedicated team that enhances the broader FDE organization's capabilities. Your primary objective is to accelerate the transformation of client-driven ideas into sustainable, scalable platform solutions by providing product and engineering support to customer-focused pods.

You will work closely with customer-oriented FDE teams focused on delivery and client outcomes, embedding Platform FDE resources where they can make the most significant impact. This involves direct participation in architectural design, product refinement, modular development, and establishing reusable abstractions, all while the pods retain ownership of customer insights and daily operations. You will also collaborate with our B2B Platform Team and other key stakeholders to align on what should be generalized, what should remain tailored to specific clients, and the criteria for a successful handoff.

This position demands a blend of engineering leadership and strong product-oriented decision-making. You will manage a focused portfolio of platform development initiatives with clear hypotheses, success metrics, and measurable adoption goals. You will be accountable for converting deployment signals into reusable capabilities that can be transitioned to long-term stakeholders. Your leadership will involve influencing multiple stakeholders, maintaining high technical standards, and establishing clear accountability between your team, client pods, and partner organizations.

This role is based in San Francisco or New York City and follows a hybrid work model requiring three days in the office per week. We provide relocation assistance, and travel is project-based and typically low.
Full-time | $165K/yr - $242K/yr | On-site | Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA / San Francisco, CA
CoreWeave is seeking a Security Engineering Manager to lead the Platform Security team. This position is based in Livingston, NJ; New York, NY; Sunnyvale, CA; Bellevue, WA; or San Francisco, CA. The team's mission is to embed security into CoreWeave's Kubernetes-based platform and public cloud environments, supporting high-performance infrastructure for AI and machine learning workloads.

Role overview
This manager will oversee and expand the Platform Security engineering team, reporting to the Senior Director of Security Foundations. The focus is on hands-on leadership and technical execution, with an emphasis on building and implementing security controls rather than policy development. The role requires close collaboration with Infrastructure, Platform Engineering, Site Reliability Engineering, and other security teams to ensure security measures keep pace with business growth and evolving needs.

What you will do
- Lead and grow the Platform Security engineering team.
- Integrate security into Kubernetes infrastructure and public cloud platforms such as AWS, GCP, and Azure.
- Define and execute strategies for cloud security posture, workload isolation, platform guardrails, image integrity, and multi-cloud security.
- Develop and implement security controls across CoreWeave's infrastructure.
- Work closely with other technical teams to align platform security with business needs.

The Platform Security team
The Platform Security team at CoreWeave engineers systems that enforce security at the infrastructure layer. Their work spans both CoreWeave's own Kubernetes-based platform and third-party public cloud environments. The team supports GPU-accelerated infrastructure for demanding AI and machine learning workloads, ensuring that both customer and internal services remain secure as CoreWeave's global presence expands.
Role overview
Harvey seeks a Staff Software Engineer to strengthen the AI Platform team in San Francisco. The position focuses on designing and building advanced software that extends the platform's AI features. Collaboration is central to this role: the engineer will work closely with colleagues across teams to deliver systems that are both scalable and efficient. Every project supports Harvey's broader mission and long-term goals.

What you will do
- Design and implement software components that enhance AI capabilities
- Work with teams throughout the company to deliver reliable, high-performance systems
- Contribute to platform improvements that support Harvey's mission

Location
This position is based in San Francisco.
Airbyte is at the forefront of open-source data movement, enabling data teams to move information from diverse applications, APIs, unstructured sources, and databases to data warehouses, lakes, and AI applications. With tens of thousands of connectors and adoption across hundreds of thousands of companies, we have demonstrated the viability of large-scale data integration. Our ongoing mission is to build an advanced agentic data infrastructure, designed for AI agents that need fast, accurate access to data across numerous sources. We aim to make data universally accessible and actionable.

Having raised $181M from leading investors such as Benchmark, Accel, Altimeter, Coatue, and Y Combinator, we are committed to a product-led growth strategy: we build exceptional solutions that resonate with our users. This funding lets us explore innovative avenues while staying nimble and experimental in an AI-driven landscape.

The Role:
As a critical member of the Data Replication team, you will serve as an infrastructure and reliability engineer within a full-stack product team that executes over 3 million sync jobs weekly, powering thousands of data use cases across regions and cloud environments. You will build and maintain the infrastructure, establish reliability standards, reduce incidents, and streamline shipping for engineers through better tooling. You should feel equally at home with Terraform files, Kubernetes clusters, and postmortem documentation.

We encourage our engineers to actively use AI as a force multiplier: applying agentic tools to automate repetitive tasks, improve incident response, and build smarter internal tooling. If you haven't yet embraced this approach, we hope you're eager to start. We value how you work just as much as what you produce. Trust, transparency, and craftsmanship are paramount here.

What You'll Do:
- Own the infrastructure that supports the Data Replication platform, including Kubernetes clusters, CI/CD pipelines, secrets management, networking, and cloud resource configuration across AWS and GCP.
- Collaborate with product engineers to ensure product features integrate reliably with infrastructure.
- Enhance observability, alerting, and anomaly detection systems, with a focus on LLM automation.
- Develop and improve AI-augmented release and internal tooling, including canary deployments, progressive rollouts, automated release qualification, and rollback automation.
- Set high standards for infrastructure within the team by creating self-serve tools, writing runbooks, and mentoring engineers.