Infrastructure Security Engineer Member Of Technical Staff jobs in San Francisco – Browse 6,212 openings on RoboApply Jobs
Infrastructure Security Engineer Member Of Technical Staff jobs in San Francisco
Open roles matching “Infrastructure Security Engineer Member Of Technical Staff” in San Francisco. 6,212 active listings on RoboApply Jobs.
6,212 jobs found
Infrastructure Security Engineer - Member of Technical Staff
Experience Level
Entry Level
Qualifications
Strong understanding of security principles and practices. Experience with network security, threat modeling, and risk assessment. Proficiency in security tools and technologies. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment.
About the job
About the Role
Reflection AI is hiring a Member of Technical Staff focused on Infrastructure Security in San Francisco. This position plays a key part in protecting the company’s infrastructure from security threats.
What You Will Do
Work with teams across the company to design, implement, and monitor security protocols and systems
Help safeguard digital assets by maintaining the integrity and security of infrastructure
About Reflection AI
Reflection AI is at the forefront of AI technology, dedicated to innovating and enhancing digital security solutions. We foster a dynamic and inclusive work culture that encourages creativity and collaboration. Our commitment to personal and professional growth makes us a great place to advance your career.
About Vapi: At Vapi, we are revolutionizing communication by making voice the primary interface for human interaction. Our platform offers unparalleled configurability for deploying voice agents. In just two years, we have attracted over 600,000 developers, with more than 2,000 joining daily. Experience Vapi now!

Why We Need You: We handle millions of calls daily, with thousands occurring concurrently. Every call generates a new audio packet every 20 milliseconds, requiring responses in under 1 second. We are scaling this operation to manage hundreds of millions of calls. This challenge is exciting and incredibly rewarding.

Your Responsibilities:
30 Days: Get acquainted with our multi-cluster, multi-cloud infrastructure.
60 Days: Launch a new service such as Anycast Global Router.
90 Days: Take ownership of a domain, such as GPU inference clusters.

Your Profile: You have experience from Series B to F funding stages. You have successfully scaled large, resilient, and high-performance systems. Bonus points if you've founded your own startup!

Why Choose Vapi:
Generational Impact: Create the human interface for every business.
Ownership Culture: 70% of our team are previous founders.
Supportive Team: Our founders, Jordan and Nikhil, bring that friendly Canadian spirit.
Top Investors: Backed by Y Combinator, KP Seed, and Bessemer Series A.

What We Provide:
Equity Ownership: Competitive salary with excellent equity options.
Health Coverage: Comprehensive medical, dental, and vision plans.
Team Bonding: We enjoy spending time together, including quarterly off-site events.
Flexible Time Off: Take the time you need to recharge.
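As a rough back-of-envelope sketch of the throughput figures quoted above: one audio packet every 20 ms means 50 packets per second per call, so packet volume scales linearly with concurrency. The concurrency number below is a hypothetical illustration, not a figure from the posting.

```python
# Back-of-envelope packet-rate estimate based on the 20 ms figure above.
PACKET_INTERVAL_MS = 20                                  # one audio packet every 20 ms per call
packets_per_call_per_sec = 1000 // PACKET_INTERVAL_MS    # 50 packets/s per call

concurrent_calls = 1_000_000                             # hypothetical concurrency, for scale only
total_packets_per_sec = packets_per_call_per_sec * concurrent_calls

print(packets_per_call_per_sec)   # 50
print(total_packets_per_sec)      # 50000000
```

At a million concurrent calls, every component on the hot path has to absorb tens of millions of packets per second while still meeting the sub-second response budget.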
Join runlayer as a Member of the Technical Staff specializing in Security, where you will play a pivotal role in safeguarding our systems and data. You will collaborate with a talented team of engineers to design and implement security protocols, ensuring the integrity and confidentiality of our information. This is an exciting opportunity to work in a fast-paced environment that values innovation and continuous improvement.
About Cogent Security
Cogent Security is an innovative Applied AI Lab dedicated to developing the next generation of AI agents for cybersecurity. As cyber threats evolve, so do our defenses. Our 'AI Taskforce' analyzes vast amounts of enterprise data to proactively address potential breaches before they can escalate. To maintain our edge, we integrate pioneering research with practical implementation. In addition to our core product initiatives, Cogent Research acts as our applied AI laboratory, fueling our ability to create truly intelligent security workflows. Since our launch, Cogent has rapidly expanded, collaborating with Fortune 500 companies to secure some of the world's most intricate production environments.

Supported by Greylock, we've assembled a team of exceptional talent in applied AI, including individuals from:
Renowned universities such as Stanford, Berkeley, Penn, Duke, Carnegie Mellon, and Waterloo
High-growth companies including Scale AI, Databricks, Stripe, Tesla, and Coinbase
Leading cybersecurity experts from Wiz, Abnormal AI, and Zscaler
Prestigious research institutions like DeepMind and SAIL

About the Role
In the role of Senior Frontend Engineer, you will take ownership of the frontend platform and user experience, empowering customers to visualize, understand, and act upon complex security data with assurance. This position is classified as Staff+ level; you will have previously operated as a Staff, Senior Staff, or Principal Frontend Engineer and will be eager to define the technical pathway for a new platform. Your responsibilities will include designing frontend infrastructure, developing 'golden' component libraries, and enhancing the platform's AI capabilities, all while delivering customer-facing features. You will balance in-depth architectural design with hands-on development, ensuring our frontend stack is both sophisticated and robust as we scale.
Over time, you will play a key role in mentoring junior frontend engineers, laying the groundwork for a strong team based on the foundations you establish.
Mandolin is developing AI-powered clinical and financial infrastructure for healthcare, aiming to accelerate the delivery of new treatments. The team partners with leading healthcare institutions across the United States and manages over $10 billion in drug spend. Backing comes from investors including Greylock, SV Angel, Maverick, SignalFire, and founders from Vercel, Decagon, and Yahoo.

Role overview
This San Francisco-based Technical Staff Member - Security position centers on building secure, reliable systems as Mandolin approaches a major public launch. With platform usage increasing, the company needs its infrastructure to meet enterprise standards for reliability and security while supporting efficient developer workflows. The role requires a DevSecOps leader to establish secure cloud operations and define security best practices as the organization grows, especially when handling sensitive healthcare data.

What you will do
Design Zero-Trust Infrastructure on Public Cloud: Architect resilient cloud environments using Pulumi. Apply Zero Trust Networking principles and enforce service-to-service authentication with mTLS. Set up autoscaling and high-availability networking for Kubernetes (GKE) and serverless workloads, balancing strong security with cost control.
Lead Proactive Security and Threat Hunting: Go beyond standard scanning by implementing in-depth threat hunting across code repositories and CI/CD pipelines. Deploy and operationalize a SIEM to analyze data from cloud logs, Kubernetes audit trails, and application telemetry.
Secure the SDLC and Developer Experience: Oversee the security toolchain from code commit through deployment. Integrate SAST, dependency scanning, and container image scanning (aligned with OWASP) into GitHub Workflows and ArgoCD rollouts, supporting rapid development without compromising security.
Requirements This role is for an experienced security professional ready to take ownership of Mandolin’s cloud infrastructure and software delivery security. The focus is on building and maintaining secure systems, not just achieving compliance.
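The service-to-service mTLS requirement described in this posting can be illustrated with a minimal sketch using Python's standard `ssl` module: each service presents its own certificate and verifies the peer against a shared internal CA. The certificate file names below are hypothetical placeholders, not anything from the posting.

```python
import ssl

# Minimal mTLS context sketch. All file paths are hypothetical placeholders;
# in practice these would be issued by the organization's internal CA.

def make_server_context():
    """Server side: present our cert and REQUIRE a client certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("service-a.crt", "service-a.key")  # this service's identity
    ctx.load_verify_locations("internal-ca.crt")           # trust only the internal CA
    ctx.verify_mode = ssl.CERT_REQUIRED                    # reject clients without a cert
    return ctx

def make_client_context():
    """Client side: verify the server AND present our own certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_cert_chain("service-b.crt", "service-b.key")  # client identity for mTLS
    ctx.load_verify_locations("internal-ca.crt")           # verify the server's cert chain
    return ctx
```

Wrapping each service's sockets with these contexts is what makes the authentication mutual: a connection only succeeds when both sides hold a certificate signed by the shared CA.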
Full-time|$100K/yr - $300K/yr|On-site|San Francisco, CA
About Cogent Security
Cogent Security is an innovative Applied AI Lab pioneering the future of AI agents in cybersecurity. In a world where cyber threats evolve at unprecedented speed, our 'AI Taskforce' analyzes vast amounts of enterprise data to proactively address vulnerabilities and avert critical breaches. We remain at the forefront of technology by merging cutting-edge research with practical applications. Our dedicated Cogent Research team fuels our mission, ensuring we develop truly effective security workflows powered by AI. Since our inception, Cogent has rapidly grown, collaborating with Fortune 500 companies to safeguard the most intricate production environments globally.

Supported by Greylock, our team comprises some of the brightest minds in applied AI, including experts from:
Renowned universities such as Stanford, Berkeley, Penn, Duke, Carnegie Mellon, and Waterloo
High-growth unicorn companies like Scale AI, Databricks, Stripe, Tesla, and Coinbase
Leading cybersecurity specialists from Wiz, Abnormal AI, and Zscaler
Prestigious research institutions including DeepMind and SAIL

About the Role
As we embark on building a suite of backend services and integrations with our design partners, we seek passionate and skilled Backend Engineers at both Senior and Staff levels, eager to thrive in the Applied AI domain.

Responsibilities
Design and implement critical backend subsystems and integration platforms. Comprehend business objectives and customer requirements to engineer backend subsystems that align with our technology strategies. Adapt systems to meet the evolving needs of design partners and clients. Incorporate non-functional requirements such as compliance and security into system design. Establish scalable infrastructure foundations: prepare for future growth in customer base, headcount, and data management by collaborating with your team to enhance infrastructure.
At Catalog, we are pioneering the commerce infrastructure for AI: creating the essential framework that enables digital agents to not only explore the web but also comprehend, analyze, and engage with products. Our innovations drive the future of AI-driven shopping experiences, fundamentally transforming how consumers discover and purchase items online.

Role Overview
As a Technical Staff Member, you will be instrumental in developing core systems, shaping our engineering culture, and transitioning our vision from prototype to a robust platform. This role requires full-stack expertise and a commitment to owning and resolving challenges from start to finish.

Who You Are
You have experience creating beloved and trusted products from the ground up. You combine technical proficiency with a keen product sense and data-driven intuition. You are well-versed in AI technologies. You prioritize speed, write clean code, and ensure thorough instrumentation. You seek a high level of ownership within a small, talent-rich team based in San Francisco.

Challenges You Will Tackle
Develop and deploy agentic-search APIs that deliver structured, real-time product data in milliseconds. Build checkout systems enabling agents to conduct transactions with any merchant. Create an embeddings and retrieval layer that optimizes recall, precision, and cost efficiency. Establish a product graph and ranking pipeline that adapts based on actual user outcomes.

Preferred Qualifications
Proven experience shipping data-centric products in a live environment. Experience with recommendation systems or information retrieval methodologies. Familiarity with API development, search indexing, and data pipeline construction.

Our Work Culture
We operate as a small, high-trust, highly motivated team, fostering in-person collaboration in North Beach, San Francisco.
Our process involves debate, decision-making, and execution. If your profile aligns with our needs, we will contact you to arrange 2-3 brief technical interviews, followed by an onsite meeting in our office where you will collaborate on a small project, exchange ideas, and meet the team.
At Magic, our mission is to create safe AGI that propels humanity forward in addressing the world's most critical challenges. We believe that the key to achieving safe AGI lies in automating research and code generation to enhance models and resolve alignment issues more effectively than humans alone. Our approach integrates frontier-scale pre-training, domain-specific reinforcement learning, ultra-long context, and inference-time computation to realize this vision.

Role Overview
As a member of our Supercomputing Platform & Infrastructure team, you will design, construct, and manage the extensive GPU infrastructure that underpins Magic's model training and inference. A key aspect of the role involves Terraform-driven infrastructure-as-code to build and maintain our infrastructure, ensuring reproducibility, reliability, and operational clarity across clusters comprising thousands of GPUs. Magic's long-context models exert continuous demands on compute, networking, and storage systems. The infrastructure must support long-running distributed jobs, high-throughput data movement, and stringent availability requirements, necessitating designs that are automated, observable, and resilient. You will take ownership of the systems and IaC foundations that enable these capabilities. The position has the potential to expand into broader responsibilities encompassing supercomputing platform architecture, influencing how Magic scales GPU clusters and improves infrastructure reliability as model workloads grow.

Key Responsibilities
Design and manage large-scale GPU clusters for model training and inference. Construct and sustain infrastructure using Terraform across both cloud and hybrid environments. Develop modular, scalable IaC frameworks for provisioning compute, networking, and storage resources. Improve deployment reproducibility, maintain environment consistency, and ensure operational safety. Optimize networking and storage architectures for high-throughput AI workloads. Automate fault detection and recovery across distributed clusters. Diagnose complex cross-layer issues involving hardware, drivers, networking, storage, operating systems, and cloud environments. Enhance observability, monitoring, and reliability of essential platform systems.

Qualifications
Strong foundation in systems engineering principles. Extensive hands-on experience with Terraform, including module design, state management, environment isolation, and large-scale implementations.
About Us
At Parallel, we are a pioneering web infrastructure company dedicated to empowering businesses across various sectors, including sales, marketing, insurance, and software development. Our innovative products enable organizations to create cutting-edge AI agents with robust and flexible programmatic access to the web. Having successfully raised $130 million from investors such as Kleiner Perkins, Index Ventures, and Spark Capital, our mission is to reshape the web for AI applications. We are assembling a talented team of engineers, designers, marketers, and operational experts to help us achieve this vision.

Job Overview
As a member of our technical staff, you will play a crucial role in building, operating, and scaling our infrastructure, particularly around large language models. Your responsibilities will include ensuring system reliability and cost-efficiency as we expand, anticipating potential bottlenecks, evolving our architecture to meet growing demands, and developing the tools that enhance engineering productivity.

About You
You possess a deep understanding of distributed systems, cloud platforms, performance optimization, and scalable architecture. You are adept at balancing trade-offs between cost, reliability, and speed, and you are passionate about enabling teams to innovate rapidly and confidently while supporting products that serve millions of users seamlessly.
Our Mission
At Reflection AI, our goal is to develop open superintelligence and make it universally accessible. We are pioneering open-weight models tailored for individuals, agents, enterprises, and even entire nations. Our diverse team comprises talented AI researchers and industry veterans from organizations such as DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and many more.

Role Overview
Construct and enhance distributed training systems that drive the pre-training of cutting-edge models. Collaborate with research teams to design and execute extensive training runs for foundation models. Create infrastructure that enables efficient training across thousands of GPUs using contemporary distributed training frameworks. Improve training throughput, stability, and efficiency for large-scale model training. Work closely with pre-training researchers to convert experimental concepts into scalable, production-ready training systems. Boost performance of distributed training jobs by optimizing communication, memory management, and GPU utilization. Develop and maintain training pipelines that accommodate large-scale datasets, checkpointing, and iterative experiments. Identify and resolve performance bottlenecks within distributed training systems, including model parallelism, GPU communication, and training runtime environments. Contribute to systems that promote rapid experimentation and iteration on novel training methods.
At Composio, we are developing advanced infrastructure that enables agents to seamlessly interact with essential work tools such as GitHub, Gmail, Notion, Salesforce, and more. Our dedicated team of engineers is committed to tackling challenges ranging from contextual understanding to search functionality, ensuring we provide an exceptional bridge between your agents and their tools. Having secured a $25M Series A from Lightspeed, alongside prominent angel investors like Guillermo Rauch (CEO of Vercel), Dharmesh Shah (CTO of HubSpot), and Gokul Rajaram, we have experienced remarkable growth, tripling our ARR since the start of this year. Our clientele includes notable names from Y Combinator cohorts to Wabi, Glean, Zoom, and beyond.

Your Role
Enhance the experience of teams using our platform by refining our core APIs and SDK. Create intuitive interfaces for both frontend and SDK applications. Take ownership of product development from concept through to production. Collaborate closely with customers to cultivate their loyalty while enhancing the product. Craft clear and concise documentation.
Join Composio, where we are revolutionizing the infrastructure that empowers agents to seamlessly connect with the tools you use daily, including GitHub, Gmail, Notion, Salesforce, and more. Our dedicated team of engineers is tackling challenges from context management to search optimization, striving to create the most efficient bridge between your agents and their essential tools. Having secured a $25M Series A from Lightspeed, along with support from prominent angels such as Guillermo Rauch (CEO of Vercel), Dharmesh Shah (CTO of HubSpot), and Gokul Rajaram, we have experienced significant growth, tripling our ARR this year. Our customers range from fellow Y Combinator alumni to established companies like Wabi, Glean, and Zoom.

Your Responsibilities
Enhance our platform primitives and APIs, including authentication, automatic refreshes, triggers, tool search, planning, and sandbox management. Oversee multiple runtimes for code execution across Lambdas and Firecracker. Optimize performance through tracing, CPU/heap profiling, database query enhancements, and workflow optimization. Collaborate closely with product engineering teams and customers to effectively manage their workloads and improve our product. Produce clear and comprehensive documentation.

Essential Qualifications
Core Platform Engineering Skills: extensive experience scaling backend distributed systems, maintaining reliable systems while delivering quickly, and managing multiple platform components simultaneously.
AI Expertise: familiarity with building and working with language models.
Linux Proficiency: comfortable working in a Linux environment.
Effective Communication: ability to write well-structured documentation and articulate complex ideas clearly.
Interpersonal Skills: ability to cultivate trust and acknowledge areas for growth.

Preferred Qualifications
Experience with cloud infrastructure and serverless architecture.
Full-time|$170K/yr - $230K/yr|On-site|Palo Alto / San Francisco Bay Area
Mithril is building AI infrastructure to make GPU computing accessible for enterprises, AI startups, and research organizations. The company's customers include LG AI Research, Saronic, and the Broad Institute. Mithril was founded by a former Google DeepMind research scientist and a Stanford CS PhD, and has raised $80 million in seed and Series A funding from Sequoia Capital, Lightspeed Venture Partners, and others. Platform revenue has grown more than sixfold in the past year. Fast Company recognized Mithril as the 8th Most Innovative Company in Artificial Intelligence for 2026. The team is transitioning from bare-metal operations to a cloud-native, multi-provider platform, introducing an auction and flexibility model. This is an opportunity to help shape the platform from its early stages.

Role overview
The Software Engineer - Technical Staff Member will work across three main areas:
Consumption: developer-facing product, billing, and API
Platform: orchestration and marketplace solutions
Supply: cloud provider integrations and capacity management
Engineers at Mithril take on significant ownership, building features end-to-end that support critical customer workloads and drive revenue. The scope includes backend systems, marketplace logic, and customer interfaces. Architectural decisions here have a direct impact on Mithril's growth and scalability.

What makes this role unique
This position blends deep systems work with product-facing challenges. Engineers contribute to the orchestration engine that manages GPU capacity across providers, as well as the interfaces customers use to reserve, bid on, and utilize resources. The systems built in this role handle financial transactions, real workloads, and market mechanisms such as spot auctions, reservation pricing, and capacity allocation. For those interested in the mechanics of GPU infrastructure markets and building the technology behind them, this role offers direct involvement.
Location This role is based in Palo Alto or the San Francisco Bay Area.
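To make the spot-auction mechanics mentioned in this posting concrete, here is a deliberately simplified allocation sketch (a hypothetical illustration, not Mithril's actual mechanism): serve the highest-priced bids first and clear at the lowest accepted bid, uniform-price style.

```python
def clear_spot_auction(bids, capacity):
    """Allocate `capacity` GPU units to the highest bidders.

    `bids` is a list of (bidder, price_per_gpu, gpus_requested) tuples.
    Returns (allocations, clearing_price), where the clearing price is
    the lowest bid that actually received capacity. Toy sketch only.
    """
    allocations = {}
    clearing_price = 0.0
    # Serve the highest-priced bids first.
    for bidder, price, requested in sorted(bids, key=lambda b: b[1], reverse=True):
        if capacity == 0:
            break
        granted = min(requested, capacity)
        allocations[bidder] = granted
        capacity -= granted
        clearing_price = price  # lowest price served so far
    return allocations, clearing_price

# Hypothetical bids: 60 GPUs available, demand exceeds supply.
bids = [("a", 2.50, 40), ("b", 3.10, 30), ("c", 1.80, 50)]
alloc, price = clear_spot_auction(bids, capacity=60)
print(alloc, price)  # b gets its full 30, a gets 30 of 40, c gets nothing; clears at 2.5
```

A production marketplace would layer reservation pricing, preemption, and per-provider capacity constraints on top of this kind of core allocation loop.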
TierZero seeks a Founding Member of Technical Staff to join the team in San Francisco. This in-person position requires working from the SF headquarters at least three days per week.

Role overview
This role centers on close collaboration with a group of engineers who have collectively delivered over $10 billion in value during their careers. Expect to work side by side with teammates, sharing ideas and building strong connections in the office. Priorities often shift, so adaptability and comfort with change are important.

Key responsibilities
Work directly with experienced engineers to design and build new products. Prioritize customer needs and satisfaction in product decisions. Develop solutions using large language models (LLMs), the Model Context Protocol (MCP), cloud infrastructure, and observability tools.

Requirements
Minimum 5 years of professional engineering experience or a strong record of open-source contributions. Experience in startups and familiarity with their unique challenges is a plus.

Location
This position is based in San Francisco. In-office presence is required three days each week for collaboration.
About TierZero
TierZero helps engineering teams use AI to build and ship code more efficiently. The platform targets the bottleneck of human speed in production, giving teams tools for faster incident response, better operational visibility, and shared knowledge. TierZero is backed by $7M in funding from investors including Accel and SV Angel. Companies like Discord, Drata, and Framer trust TierZero to strengthen their infrastructure for AI-driven engineering.

Role Overview: Founding Member of Technical Staff
This is an on-site role based at TierZero's San Francisco headquarters, with three days a week in the office. As a founding member, direct collaboration with the CEO, CTO, and early customers shapes the direction of both product and systems. The work spans hands-on development and close engagement with users and leadership.

What You Will Do
Design and build intelligent AI systems to analyze large volumes of unstructured data. Deliver full-stack features based on real user feedback. Improve the product experience so AI agents are both reliable and easy for engineers to use. Develop systems that automatically evaluate LLM outputs and advance agentic reasoning using self-play and feedback loops. Create machine learning pipelines, including data ingestion, feature generation, embedding stores, retrieval-augmented generation (RAG), vector search, and graph databases. Prototype with open-source and new LLMs, comparing their strengths and weaknesses. Build scalable infrastructure for long-running, multi-step agents, with attention to memory, state, and asynchronous workflows.

What We Look For
Over five years of relevant professional or open-source experience. Comfort working in environments with uncertainty and evolving challenges. Strong product focus and a drive for customer satisfaction. Interest in large language models (LLMs), the Model Context Protocol (MCP), cloud infrastructure, and observability tools. Previous startup experience is a plus.
Location This position is based in San Francisco. Expect to work on-site three days per week at TierZero’s HQ.
TierZero builds tools that help engineering teams deliver and manage code efficiently. The platform enables quicker incident response, clearer operational visibility, and shared knowledge among engineers. Backed by $7 million from investors like Accel and SV Angel, TierZero supports clients such as Discord, Drata, and Framer as they strengthen infrastructure for AI-driven work. This in-person role is based at TierZero's San Francisco headquarters, with a hybrid schedule requiring three days onsite each week. As a founding member of the technical staff, you will work directly with the CEO, CTO, and customers to influence the direction of TierZero's core products and systems. The position calls for flexibility as priorities shift and close collaboration across the company.

What you will do
Design and develop AI systems that handle large volumes of unstructured data. Build full-stack product features, informed by direct feedback from users. Enhance the product so agents are intelligent, reliable, and easy for engineers to use. Create systems to automatically evaluate outputs from large language models and improve agentic reasoning through self-play and feedback. Construct machine learning pipelines, including data ingestion, feature creation, embedding stores, retrieval-augmented generation (RAG) pipelines, vector search, and graph databases. Experiment with open-source and emerging large language models to compare different approaches. Develop scalable infrastructure for long-running, multi-step agents, including memory, state management, and asynchronous workflows.

Requirements
Interest in working with large language models, managed cloud platforms, cloud infrastructure, and observability tools. At least 5 years of professional experience or significant open-source contributions. Comfort with shifting priorities and tackling new technical problems. Strong product focus and commitment to customer outcomes. Openness to learning from a team with a track record of delivering over $10 billion in value. Ability to work onsite in San Francisco three days per week. Bonus: experience in a startup setting and familiarity with startup dynamics.
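The retrieval-augmented generation components these TierZero postings list (embedding store, vector search) can be sketched minimally: embed documents, rank them by cosine similarity to a query vector, and feed the top hits to the model. The document names and embedding values below are invented purely for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embedding store: document -> vector (hypothetical values for illustration;
# a real pipeline would compute these with an embedding model).
store = {
    "incident runbook": [0.9, 0.1, 0.0],
    "billing faq":      [0.1, 0.9, 0.1],
    "deploy guide":     [0.7, 0.2, 0.3],
}

def retrieve(query_vec, k=2):
    """Vector search: return the k documents most similar to the query."""
    ranked = sorted(store, key=lambda d: cosine(store[d], query_vec), reverse=True)
    return ranked[:k]

# In a full RAG pipeline, the retrieved documents would be prepended
# to the LLM prompt as grounding context.
print(retrieve([1.0, 0.0, 0.1]))  # ['incident runbook', 'deploy guide']
```

The same shape scales up by swapping the dictionary for a dedicated vector index and the toy vectors for model-generated embeddings.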
Join Us in Revolutionizing AI Infrastructure
At Meter, we are pioneering the application of cutting-edge AI technology to transform the way the internet is constructed, monitored, and managed. Our vertical integration encompasses the entire enterprise networking stack: from hardware and firmware to operating systems and operations. This unique position offers us comprehensive visibility and control over the entire stack via a single API, along with a proprietary dataset that is unmatched in the industry, paving the way for complete end-to-end automation. Our solutions are already in use by Fortune 500 companies, educational institutions, manufacturing facilities, and cloud-scale clients.

We are assembling a founding core engineering team dedicated to developing and training models that can comprehend these systems, enhance operational efficiency, predict failures, and resolve issues proactively. In essence, you will be instrumental in creating the decision-making framework that underpins the infrastructure of the modern world. You will collaborate closely with our founders, playing a key role in shaping the future of one of the most impactful applications of models available today.

Learn more about us at meter.ai.
About tierzero
tierzero builds tools that help engineering teams manage production code with stronger incident response, better operational visibility, and collaborative knowledge sharing. Companies like Discord, Drata, and Framer use tierzero to support their infrastructure in an AI-driven landscape. Backed by $7 million from investors including Accel and SV Angel, tierzero is growing quickly from its San Francisco headquarters.

Role Overview: Founding Member of Technical Staff
This is a hands-on role shaping tierzero's core product and systems from the ground up. The founding technical team works closely with the CEO, CTO, and early customers to solve real engineering challenges. The position is based in San Francisco, with a hybrid schedule: three days each week in the office.

What You'll Do
Design and build intelligent AI systems that process large volumes of unstructured data. Deliver full-stack features informed by real-time user feedback. Improve usability so AI agents are both effective and trustworthy for engineers. Develop systems for automated evaluation of LLM outputs, including feedback loops and self-play. Construct machine learning pipelines for data ingestion, feature generation, embedding storage, retrieval-augmented generation (RAG), vector search, and graph databases. Prototype with open-source LLMs to understand their strengths and weaknesses. Create scalable infrastructure for complex, multi-step agents, focusing on memory, state management, and asynchronous workflows.

Who We're Looking For
5+ years of professional experience or significant open-source contributions. Interest in LLMs, MCPs, cloud infrastructure, and observability tools. Comfort working in changing, ambiguous situations. Product-focused and customer-first mindset. Experience learning from and collaborating with engineers from diverse backgrounds. Bonus: previous experience in a startup setting.

Work Location
Hybrid schedule: three days per week in-person at the San Francisco HQ.
At Chroma, we are at the forefront of AI data infrastructure, providing top-tier retrieval solutions that empower developers worldwide. Join us as we navigate the nascent stages of AI technology, and become part of a team that values curiosity and dedication to mastering your craft. There is significant work ahead, and we invite you to contribute to our mission.
Join our dynamic team at Adyen as a Technical Staff Member in San Francisco! We are seeking innovative minds passionate about technology and problem-solving. In this role, you will collaborate with cross-functional teams to craft solutions that enhance our services and improve customer experiences.
About the Role
Reflection AI is hiring a Member of Technical Staff focused on Infrastructure Security in San Francisco. This position plays a key part in protecting the company’s infrastructure from security threats.
What You Will Do
- Work with teams across the company to design, implement, and monitor security protocols and systems
- Help safeguard digital assets by maintaining the integrity and security of infrastructure
About Vapi
At Vapi, we are revolutionizing communication by making voice the primary interface for human interaction. Our platform offers unparalleled configurability for deploying voice agents. In just two years, we have attracted over 600,000 developers, with more than 2,000 joining daily. Experience Vapi now!
Why We Need You
We handle millions of calls daily, with thousands occurring concurrently. Every call generates a new audio packet every 20 milliseconds, requiring responses in under 1 second. We are scaling this operation to manage hundreds of millions of calls. This challenge is exciting and incredibly rewarding.
Your Responsibilities
- 30 Days: Get acquainted with our multi-cluster, multi-cloud infrastructure.
- 60 Days: Launch a new service such as Anycast Global Router.
- 90 Days: Take ownership of a domain, such as GPU inference clusters.
Your Profile
- You have experience from Series B to F funding stages.
- You have successfully scaled large, resilient, and high-performance systems.
- Bonus points if you've founded your own startup!
Why Choose Vapi
- Generational Impact: Create the human interface for every business.
- Ownership Culture: 70% of our team are previous founders.
- Supportive Team: Our founders, Jordan and Nikhil, bring that friendly Canadian spirit.
- Top Investors: Backed by Y Combinator, KP Seed, and Bessemer Series A.
What We Provide
- Equity Ownership: Competitive salary with excellent equity options.
- Health Coverage: Comprehensive medical, dental, and vision plans.
- Team Bonding: We enjoy spending time together, including quarterly off-site events.
- Flexible Time Off: Take the time you need to recharge.
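The packet timing in the Vapi listing implies concrete throughput numbers; a quick back-of-envelope sketch of the ingest rate (the 10,000-concurrent-calls figure is an assumed example for illustration, not a number from the listing):

```python
# One audio packet per call every 20 ms, with sub-second response deadlines.
PACKET_INTERVAL_MS = 20

def packets_per_second(concurrent_calls: int) -> int:
    """Total packets arriving each second across all concurrent calls."""
    per_call = 1000 // PACKET_INTERVAL_MS  # 50 packets/s per call
    return concurrent_calls * per_call

# At an assumed 10,000 concurrent calls, the platform must ingest
# half a million packets every second.
print(packets_per_second(10_000))  # 500000
```

Scaling the assumed concurrency by another order of magnitude pushes the system into millions of packets per second, which is why the listing emphasizes multi-cluster, multi-cloud infrastructure.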
Join runlayer as a Member of the Technical Staff specializing in Security, where you will play a pivotal role in safeguarding our systems and data. You will collaborate with a talented team of engineers to design and implement security protocols, ensuring the integrity and confidentiality of our information. This is an exciting opportunity to work in a fast-paced environment that values innovation and continuous improvement.
About Cogent Security
Cogent Security is an innovative Applied AI Lab dedicated to developing the next generation of AI agents for cybersecurity. As cyber threats evolve, so do our defenses. Our 'AI Taskforce' analyzes vast amounts of enterprise data to proactively address potential breaches before they can escalate. To maintain our edge, we integrate pioneering research with practical implementation. In addition to our core product initiatives, Cogent Research acts as our applied AI laboratory, fueling our ability to create truly intelligent security workflows. Since our launch, Cogent has rapidly expanded, collaborating with Fortune 500 companies to secure some of the world's most intricate production environments. Supported by Greylock, we've assembled a team of exceptional talent in applied AI, including individuals from:
- Renowned universities such as Stanford, Berkeley, Penn, Duke, Carnegie Mellon, and Waterloo
- High-growth companies including Scale AI, Databricks, Stripe, Tesla, and Coinbase
- Leading cybersecurity experts from Wiz, Abnormal AI, and Zscaler
- Prestigious research institutions like DeepMind and SAIL
About the Role
In the role of Senior Frontend Engineer, you will take ownership of the frontend platform and user experience, empowering customers to visualize, understand, and act upon complex security data with assurance. This position is classified as Staff+ level; you will have previously operated as a Staff, Senior Staff, or Principal Frontend Engineer and will be eager to define the technical pathway for a new platform. Your responsibilities will include designing frontend infrastructure, developing 'golden' component libraries, and enhancing the platform’s AI capabilities, all while delivering customer-facing features. You will balance in-depth architectural design with hands-on development, ensuring our frontend stack is both sophisticated and robust as we scale.
Over time, you will play a key role in mentoring junior frontend engineers, laying the groundwork for a strong team based on the foundations you establish.
Mandolin is developing AI-powered clinical and financial infrastructure for healthcare, aiming to accelerate the delivery of new treatments. The team partners with leading healthcare institutions across the United States and manages over $10 billion in drug spend. Backing comes from investors including Greylock, SV Angel, Maverick, SignalFire, and founders from Vercel, Decagon, and Yahoo.
Role overview
This San Francisco-based Technical Staff Member - Security position centers on building secure, reliable systems as Mandolin approaches a major public launch. With platform usage increasing, the company needs its infrastructure to meet enterprise standards for reliability and security while supporting efficient developer workflows. The role requires a DevSecOps leader to establish secure cloud operations and define security best practices as the organization grows, especially when handling sensitive healthcare data.
What you will do
- Design Zero-Trust Infrastructure on Public Cloud: Architect resilient cloud environments using Pulumi. Apply Zero Trust Networking principles and enforce service-to-service authentication with mTLS. Set up autoscaling and high-availability networking for Kubernetes (GKE) and serverless workloads, balancing strong security with cost control.
- Lead Proactive Security and Threat Hunting: Go beyond standard scanning by implementing in-depth threat hunting across code repositories and CI/CD pipelines. Deploy and operationalize a SIEM to analyze data from cloud logs, Kubernetes audit trails, and application telemetry.
- Secure the SDLC and Developer Experience: Oversee the security toolchain from code commit through deployment. Integrate SAST, dependency scanning, and container image scanning (aligned with OWASP) into GitHub Workflows and ArgoCD rollouts, supporting rapid development without compromising security.
Requirements
This role is for an experienced security professional ready to take ownership of Mandolin’s cloud infrastructure and software delivery security. The focus is on building and maintaining secure systems, not just achieving compliance.
Full-time|$100K/yr - $300K/yr|On-site|San Francisco, CA
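The service-to-service mTLS requirement in the Mandolin listing boils down to both peers presenting certificates. A minimal sketch of the server side using Python's standard `ssl` module (the commented-out file paths are placeholders; a real deployment would load workload certificates issued by an internal CA):

```python
import ssl

def make_mtls_server_context() -> ssl.SSLContext:
    """Build a TLS server context that *requires* a client certificate,
    which is the core of service-to-service mutual TLS."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Reject any peer that does not present a valid certificate.
    ctx.verify_mode = ssl.CERT_REQUIRED
    # In production you would also load this service's own cert/key and
    # the CA that signs workload certificates (paths are hypothetical):
    # ctx.load_cert_chain("service.crt", "service.key")
    # ctx.load_verify_locations("internal-ca.pem")
    return ctx

ctx = make_mtls_server_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

In practice a service mesh or sidecar usually manages certificate issuance and rotation; the context above only shows the policy being enforced at the TLS layer.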
About Cogent Security
Cogent Security is an innovative Applied AI Lab pioneering the future of AI agents in the realm of cybersecurity. In a world where cyber threats evolve at unprecedented speeds, our 'AI Taskforce' analyzes vast amounts of enterprise data to proactively address vulnerabilities and avert critical breaches. We remain at the forefront of technology by merging cutting-edge research with practical applications. Our dedicated Cogent Research team fuels our mission, ensuring we develop truly effective security workflows powered by AI. Since our inception, Cogent has rapidly grown, collaborating with Fortune 500 companies to safeguard the most intricate production environments globally. Supported by Greylock, our team comprises some of the brightest minds in applied AI, including experts from:
- Renowned universities such as Stanford, Berkeley, Penn, Duke, Carnegie Mellon, and Waterloo.
- High-growth unicorn companies like Scale AI, Databricks, Stripe, Tesla, and Coinbase.
- Leading cybersecurity specialists from Wiz, Abnormal AI, and Zscaler.
- Prestigious research institutions including DeepMind and SAIL.
About the Role
As we embark on building a suite of backend services and integrations with our design partners, we seek passionate and skilled Backend Engineers at both Senior and Staff levels, eager to thrive in the Applied AI domain.
Responsibilities
- Design and implement critical backend subsystems and integration platforms: Comprehend business objectives and customer requirements to engineer backend subsystems that align with our technology strategies. Adapt systems to meet evolving needs of design partners and clients. Incorporate non-functional requirements such as compliance and security into system design.
- Establish scalable infrastructure foundations: Prepare for future growth in customer base, headcount, and data management by collaborating with your team to enhance infrastructure.
At Catalog, we are pioneering the commerce infrastructure for AI: creating the essential framework that enables digital agents to not only explore the web but also comprehend, analyze, and engage with products. Our innovations drive the future of AI-driven shopping experiences, fundamentally transforming how consumers discover and purchase items online.
Role Overview
As a Technical Staff Member, you will be instrumental in developing core systems, shaping our engineering culture, and transitioning our vision from prototype to a robust platform. This role requires full-stack expertise and a commitment to owning and resolving challenges from start to finish.
Who You Are
- You have experience creating beloved and trusted products from the ground up.
- You combine technical proficiency with a keen product sense and data-driven intuition.
- You are well-versed in AI technologies.
- You prioritize speed, write clean code, and ensure thorough instrumentation.
- You seek a high level of ownership within a small, talent-rich team based in San Francisco.
Challenges You Will Tackle
- Develop and deploy agentic-search APIs that deliver structured and real-time product data in milliseconds.
- Build checkout systems enabling agents to conduct transactions with any merchant.
- Create an embeddings and retrieval layer that optimizes recall, precision, and cost efficiency.
- Establish a product graph and ranking pipeline that adapts based on actual user outcomes.
Preferred Qualifications
- Proven experience shipping data-centric products in a live environment.
- Experience with recommendation systems or information retrieval methodologies.
- Familiarity with API development, search indexing, and data pipeline construction.
Our Work Culture
We operate with a small, high-trust, and highly motivated team, fostering an environment of in-person collaboration in North Beach, San Francisco.
Our process involves debate, decision-making, and execution. If your profile aligns with our needs, we will contact you to arrange 2-3 brief technical interviews, followed by an onsite meeting in our office where you will collaborate on a small project, exchange ideas, and meet the team.
At Magic, our mission is to create safe AGI that propels humanity forward in addressing the world’s most critical challenges. We believe that the key to achieving safe AGI lies in automating research and code generation to enhance models and resolve alignment issues more effectively than humans alone. Our unique approach integrates frontier-scale pre-training, domain-specific reinforcement learning, ultra-long context, and inference-time computation to realize this vision.
Role Overview
As a vital member of our Supercomputing Platform & Infrastructure team, you will be instrumental in designing, constructing, and managing the extensive GPU infrastructure that underpins Magic’s model training and inference processes. A key aspect of your role will involve leveraging Terraform-driven infrastructure-as-code methodologies to build and maintain our infrastructure, ensuring reproducibility, reliability, and operational clarity across clusters comprising thousands of GPUs. Magic’s long-context models exert continuous demands on compute, networking, and storage systems. The infrastructure must support long-running distributed jobs, high-throughput data movement, and stringent availability requirements, necessitating designs that are automated, observable, and resilient. You will take ownership of the systems and IaC foundations that facilitate these capabilities. This position has the potential to expand into broader responsibilities encompassing supercomputing platform architecture, influencing how Magic scales GPU clusters and enhances infrastructure reliability as model workloads expand.
Key Responsibilities
- Design and manage large-scale GPU clusters for model training and inference.
- Construct and sustain infrastructure utilizing Terraform across both cloud and hybrid environments.
- Develop modular, scalable IaC frameworks for provisioning compute, networking, and storage resources.
- Enhance deployment reproducibility, maintain environment consistency, and ensure operational safety.
- Optimize networking and storage architectures for high-throughput AI workloads.
- Automate fault detection and recovery mechanisms across distributed clusters.
- Diagnose complex cross-layer issues involving hardware, drivers, networking, storage, operating systems, and cloud environments.
- Enhance observability, monitoring, and reliability of essential platform systems.
Qualifications
- Strong foundation in systems engineering principles.
- Extensive hands-on experience with Terraform, including module design, state management, environment isolation, and large-scale implementations.
About Us
At Parallel, we are a pioneering web infrastructure company dedicated to empowering businesses across various sectors, including sales, marketing, insurance, and software development. Our innovative products enable organizations to create cutting-edge AI agents with robust and flexible programmatic access to the web. Having successfully raised $130 million from esteemed investors such as Kleiner Perkins, Index Ventures, and Spark Capital, our mission is to reshape the web for AI applications. We are assembling a talented team of engineers, designers, marketers, and operational experts to help us achieve this vision.
Job Overview
As a member of our technical staff, you will play a crucial role in building, operating, and scaling our infrastructure, particularly around large language models. Your responsibilities will include ensuring system reliability and cost-efficiency as we expand, anticipating potential bottlenecks, evolving our architecture to meet growing demands, and developing the tools that enhance engineering productivity.
About You
You possess a deep understanding of distributed systems, cloud platforms, performance optimization, and scalable architecture. You are adept at balancing trade-offs between cost, reliability, and speed, and you are passionate about enabling teams to innovate rapidly and confidently while supporting products that serve millions of users seamlessly.
Our Mission
At Reflection AI, our goal is to develop open superintelligence and make it universally accessible. We are pioneering open weight models tailored for individuals, agents, enterprises, and even entire nations. Our diverse team comprises talented AI researchers and industry veterans from prestigious organizations such as DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and many more.
Role Overview
- Construct and enhance distributed training systems that drive the pre-training of cutting-edge models.
- Collaborate with research teams to design and execute extensive training runs for foundational models.
- Create infrastructure that facilitates efficient training across thousands of GPUs leveraging contemporary distributed training frameworks.
- Enhance training throughput, stability, and efficiency for extensive model training tasks.
- Work closely with pre-training researchers to convert experimental concepts into scalable, production-ready training systems.
- Boost performance of distributed training tasks through optimization of communication, memory management, and GPU utilization.
- Develop and maintain training pipelines that accommodate large-scale datasets, checkpointing, and iterative experiments.
- Identify and resolve performance bottlenecks within distributed training systems, including model parallelism, GPU communication, and training runtime environments.
- Contribute to the creation of systems that promote swift experimentation and iteration on novel training methods.
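The GPU-communication work described in the Reflection AI listing centers on collective operations such as all-reduce, where every data-parallel worker ends up holding the averaged gradient. A framework-free, single-process toy of the end result (real systems use NCCL or a similar library over thousands of GPUs; this only illustrates the semantics):

```python
def all_reduce_mean(per_worker_grads: list[list[float]]) -> list[float]:
    """Simulate the result of an all-reduce (mean) across data-parallel
    workers: each worker's local gradient is summed elementwise, then
    divided by the worker count, and every worker receives the result."""
    n_workers = len(per_worker_grads)
    dim = len(per_worker_grads[0])
    summed = [sum(g[i] for g in per_worker_grads) for i in range(dim)]
    return [s / n_workers for s in summed]

# Two workers with different local gradients agree on one average.
grads = [[1.0, 2.0], [3.0, 6.0]]
print(all_reduce_mean(grads))  # [2.0, 4.0]
```

Optimizing this step in production means overlapping the communication with backward-pass compute and choosing topologies (e.g. ring or tree reductions) that keep per-GPU bandwidth constant as the cluster grows.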
At Composio, we are developing advanced infrastructure that enables agents to seamlessly interact with essential work tools such as GitHub, Gmail, Notion, Salesforce, and more. Our dedicated team of engineers is committed to tackling challenges ranging from contextual understanding to search functionalities, ensuring we provide an exceptional bridge between your agents and their tools. Having secured a $25M Series A funding from Lightspeed, alongside prominent angel investors like Guillermo Rauch (CEO of Vercel), Dharmesh Shah (CTO of HubSpot), and Gokul Rajaram, we have experienced remarkable growth, tripling our ARR at the start of this year. Our clientele includes notable names from Y Combinator cohorts to Wabi, Glean, Zoom, and beyond.
Your Role
- Enhance the experience of teams utilizing our platform by refining our core APIs and SDK.
- Create intuitive interfaces for both frontend and SDK applications.
- Take ownership of product development from concept through to production.
- Collaborate closely with customers to cultivate their loyalty while enhancing the product.
- Craft clear and concise documentation.
Join Composio, where we are revolutionizing the infrastructure that empowers agents to seamlessly connect with the tools you utilize daily, including GitHub, Gmail, Notion, Salesforce, and more. Our dedicated team of engineers is tackling challenges from context management to search optimization, striving to create the most efficient bridge between your agents and their essential tools. Having secured a $25M Series A funding from Lightspeed, along with support from prominent angels such as Guillermo Rauch (CEO of Vercel), Dharmesh Shah (CTO of HubSpot), and Gokul Rajaram, we have experienced significant growth, tripling our ARR this year. Our customers range from fellow Y Combinator alumni to established companies like Wabi, Glean, and Zoom.
Your Responsibilities
- Enhance our platform primitives and APIs, including authentication, automatic refreshes, triggers, tool search, planning, and sandbox management.
- Oversee multiple runtimes for code execution across Lambdas and Firecracker.
- Optimize performance through tracing, CPU/heap profiling, database query enhancements, and workflow optimization.
- Collaborate closely with product engineering teams and customers to effectively manage their workloads and improve our product.
- Produce clear and comprehensive documentation.
Essential Qualifications
- Core Platform Engineering Skills: Extensive experience in scaling backend distributed systems, maintaining reliable systems while delivering quickly, and managing multiple platform components simultaneously.
- AI Expertise: Familiarity with building and working with language models.
- Linux Proficiency: Comfortable working in a Linux environment.
- Effective Communication: Ability to write well-structured documentation and articulate complex ideas clearly.
- Interpersonal Skills: Cultivate trust and acknowledge areas for growth.
Preferred Qualifications
- Experience with cloud infrastructure and serverless architecture.
Full-time|$170K/yr - $230K/yr|On-site|Palo Alto / San Francisco Bay Area
Mithril is building AI infrastructure to make GPU computing accessible for enterprises, AI startups, and research organizations. The company’s customers include LG AI Research, Saronic, and the Broad Institute. Mithril was founded by a former Google DeepMind research scientist and a Stanford CS PhD, and has raised $80 million in seed and Series A funding from Sequoia Capital, Lightspeed Venture Partners, and others. Platform revenue has grown more than sixfold in the past year. Fast Company recognized Mithril as the 8th Most Innovative Company in Artificial Intelligence for 2026. The team is transitioning from bare-metal operations to a cloud-native, multi-provider platform, introducing an auction and flexibility model. This is an opportunity to help shape the platform from its early stages.
Role overview
The Software Engineer - Technical Staff Member will work across three main areas:
- Consumption: Developer-facing product, billing, and API
- Platform: Orchestration and marketplace solutions
- Supply: Cloud provider integrations and capacity management
Engineers at Mithril take on significant ownership, building features end-to-end that support critical customer workloads and drive revenue. The scope includes backend systems, marketplace logic, and customer interfaces. Architectural decisions here have a direct impact on Mithril’s growth and scalability.
What makes this role unique
This position blends deep systems work with product-facing challenges. Engineers contribute to the orchestration engine that manages GPU capacity across providers, as well as the interfaces customers use to reserve, bid, and utilize resources. The systems built in this role handle financial transactions, real workloads, and market mechanisms such as spot auctions, reservation pricing, and capacity allocation. For those interested in the mechanics of GPU infrastructure markets and building the technology behind them, this role offers direct involvement.
Location This role is based in Palo Alto or the San Francisco Bay Area.
TierZero seeks a Founding Member of Technical Staff to join the team in San Francisco. This in-person position requires working from the SF headquarters at least three days per week.
Role overview
This role centers on close collaboration with a group of engineers who have collectively delivered over $10 billion in value during their careers. Expect to work side by side with teammates, sharing ideas and building strong connections in the office. The environment often shifts, so adaptability and comfort with changing priorities are important.
Key responsibilities
- Work directly with experienced engineers to design and build new products
- Prioritize customer needs and satisfaction in product decisions
- Develop solutions using large language models (LLMs), the Model Context Protocol (MCP), cloud infrastructure, and observability tools
Requirements
- Minimum 5 years of professional engineering experience or a strong record of open-source contributions
- Experience in startups and familiarity with their unique challenges is a plus
Location
This position is based in San Francisco. In-office presence is required three days each week for collaboration.
About TierZero
TierZero helps engineering teams use AI to build and ship code more efficiently. The platform targets the bottleneck of human speed in production, giving teams tools for faster incident response, better operational visibility, and shared knowledge. TierZero is backed by $7M in funding from investors including Accel and SV Angel. Companies like Discord, Drata, and Framer trust TierZero to strengthen their infrastructure for AI-driven engineering.
Role Overview: Founding Member of Technical Staff
This is an on-site role based at TierZero’s San Francisco headquarters, with three days a week in the office. As a founding member, direct collaboration with the CEO, CTO, and early customers shapes the direction of both product and systems. The work spans hands-on development and close engagement with users and leadership.
What You Will Do
- Design and build intelligent AI systems to analyze large volumes of unstructured data.
- Deliver full-stack features based on real user feedback.
- Improve the product experience so AI agents are both reliable and easy for engineers to use.
- Develop systems that automatically evaluate LLM outputs and advance agentic reasoning using self-play and feedback loops.
- Create machine learning pipelines, including data ingestion, feature generation, embedding stores, retrieval-augmented generation (RAG), vector search, and graph databases.
- Prototype with open-source and new LLMs, comparing their strengths and weaknesses.
- Build scalable infrastructure for long-running, multi-step agents, with attention to memory, state, and asynchronous workflows.
What We Look For
- Over five years of relevant professional or open-source experience.
- Comfort working in environments with uncertainty and evolving challenges.
- Strong product focus and a drive for customer satisfaction.
- Interest in large language models (LLMs), the Model Context Protocol (MCP), cloud infrastructure, and observability tools.
- Previous startup experience is a plus.
Location This position is based in San Francisco. Expect to work on-site three days per week at TierZero’s HQ.
TierZero builds tools that help engineering teams deliver and manage code efficiently. The platform enables quicker incident response, clearer operational visibility, and shared knowledge among engineers. Backed by $7 million from investors like Accel and SV Angel, TierZero supports clients such as Discord, Drata, and Framer as they strengthen infrastructure for AI-driven work. This in-person role is based at TierZero's San Francisco headquarters, with a hybrid schedule requiring three days onsite each week. As a founding member of the technical staff, work directly with the CEO, CTO, and customers to influence the direction of TierZero’s core products and systems. The position calls for flexibility as priorities shift and close collaboration across the company.
What you will do
- Design and develop AI systems that handle large volumes of unstructured data.
- Build full-stack product features, informed by direct feedback from users.
- Enhance the product so agents are intelligent, reliable, and easy for engineers to use.
- Create systems to automatically evaluate outputs from large language models and improve agentic reasoning through self-play and feedback.
- Construct machine learning pipelines, including data ingestion, feature creation, embedding stores, retrieval-augmented generation (RAG) pipelines, vector search, and graph databases.
- Experiment with open-source and emerging large language models to compare different approaches.
- Develop scalable infrastructure for long-running, multi-step agents, including memory, state management, and asynchronous workflows.
Requirements
- Interest in working with large language models, the Model Context Protocol (MCP), cloud infrastructure, and observability tools.
- At least 5 years of professional experience or significant open-source contributions.
- Comfort with shifting priorities and tackling new technical problems.
- Strong product focus and commitment to customer outcomes.
- Openness to learning from a team with a track record of delivering over $10 billion in value.
- Ability to work onsite in San Francisco three days per week.
- Bonus: Experience in a startup setting and familiarity with startup dynamics.
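The embedding-store and vector-search pipelines that the TierZero listings describe rest on nearest-neighbor retrieval over embeddings. A minimal cosine-similarity sketch (the two-dimensional toy vectors and document names stand in for real embedding-model output; production systems use a vector database and approximate search):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def top_k(query: list[float], store: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    """Rank stored (doc_id, embedding) pairs by similarity to the query."""
    ranked = sorted(store, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Hypothetical documents with toy embeddings.
store = [
    ("incident-runbook", [1.0, 0.0]),
    ("oncall-guide",     [0.9, 0.1]),
    ("holiday-menu",     [0.0, 1.0]),
]
print(top_k([1.0, 0.05], store, k=2))  # ['incident-runbook', 'oncall-guide']
```

In a RAG pipeline the retrieved documents are then packed into the LLM prompt as context; recall, precision, and cost trade off through the choice of embedding model, index structure, and `k`.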
Join Us in Revolutionizing AI Infrastructure
At Meter, we are pioneering the application of cutting-edge AI technology to transform the way the internet is constructed, monitored, and managed. Our vertical integration encompasses the entire enterprise networking stack: from hardware and firmware to operating systems and operations. This unique position offers us comprehensive visibility and control over the entire stack via a singular API, along with a proprietary dataset that is unmatched in the industry, paving the way for complete end-to-end automation. Our solutions are already in use by Fortune 500 companies, educational institutions, manufacturing facilities, and cloud-scale clients. We are in the process of assembling a founding core engineering team dedicated to developing and training models that can comprehend these systems, enhance operational efficiency, predict failures, and resolve issues proactively. In essence, you will be instrumental in creating the decision-making framework that underpins the infrastructure of the modern world. You will collaborate closely with our founders, playing a key role in shaping the future of one of the most impactful applications of models available today. Learn more about us at meter.ai.
Mar 6, 2026