Member of Technical Staff AI Platform Architecture Infrastructure jobs in San Francisco – Browse 5,698 openings on RoboApply Jobs

Member of Technical Staff AI Platform Architecture Infrastructure jobs in San Francisco

Open roles matching “Member of Technical Staff AI Platform Architecture Infrastructure” in San Francisco. 5,698 active listings on RoboApply Jobs.

5,698 jobs found

1 - 20 of 5,698 Jobs
Postman, Inc.
Full-time|$256K/yr - $276K/yr|On-site|San Francisco, California, United States

Who Are We?
Postman is the leading API platform worldwide, empowering over 45 million developers and 500,000 organizations, including 98% of the Fortune 500. We're committed to fostering an API-first world by simplifying the API lifecycle and enhancing collaboration, enabling users to create superior APIs with increased speed. Headquartered in San Francisco, we have expanded our presence with offices in Boston, New York, Austin, Tokyo, London, and Bangalore, the birthplace of Postman. As a privately held company, we are backed by esteemed investors such as Battery Ventures, BOND, Coatue, CRV, Insight Partners, and Nexus Venture Partners. Discover more about us at postman.com or connect with us on X via @getpostman. P.S.: We highly encourage you to explore The "API-First World" graphic novel for insights into our vision and the larger narrative.

The Opportunity
As a Member of Technical Staff focusing on AI Infrastructure, you'll be instrumental in developing and maintaining the core systems and distributed infrastructure crucial for AI model post-training, inference, and data pipelines. Your role will involve close collaboration with engineering and research teams to ensure the performance, scalability, and reliability of our essential AI systems.

What You’ll Do
- Design and implement large-scale, distributed AI infrastructure and services.
- Enhance performance for GPU/xPU accelerators and cloud environments.
- Develop tools for observability, reliability, and scalability of AI workloads.
- Collaborate with cross-functional teams to define AI infrastructure requirements and roadmap.
- Contribute to architectural design and ensure system longevity.

About You
- Experience with GenAI infrastructure systems, distributed systems, cloud computing, and high-performance infrastructure.
- Proficient in programming languages such as Python, Go, or equivalent.
- Understanding of scaling challenges specific to AI workloads and accelerators.

Mar 19, 2026
Composio
Full-time|On-site|San Francisco

Join Composio, where we are revolutionizing the infrastructure that empowers agents to seamlessly connect with the tools you utilize daily, including GitHub, Gmail, Notion, Salesforce, and more. Our dedicated team of engineers is tackling challenges from context management to search optimization, striving to create the most efficient bridge between your agents and their essential tools.

Having secured a $25M Series A funding from Lightspeed, along with support from prominent angels such as Guillermo Rauch (CEO of Vercel), Dharmesh Shah (CTO of HubSpot), and Gokul Rajaram, we have experienced significant growth, tripling our ARR this year. Our customers range from fellow Y Combinator alumni to established companies like Wabi, Glean, and Zoom.

Your Responsibilities
- Enhance our platform primitives and APIs, including authentication, automatic refreshes, triggers, tool search, planning, and sandbox management.
- Oversee multiple runtimes for code execution across Lambdas and Firecracker.
- Optimize performance through tracing, CPU/heap profiling, database query enhancements, and workflow optimization.
- Collaborate closely with product engineering teams and customers to effectively manage their workloads and improve our product.
- Produce clear and comprehensive documentation.

Essential Qualifications
- Core platform engineering skills: extensive experience in scaling backend distributed systems, maintaining reliable systems while delivering quickly, and managing multiple platform components simultaneously.
- AI expertise: familiarity with building and working with language models.
- Linux proficiency: comfortable working in a Linux environment.
- Effective communication: ability to write well-structured documentation and articulate complex ideas clearly.
- Interpersonal skills: you cultivate trust and acknowledge areas for growth.

Preferred Qualifications
- Experience with cloud infrastructure and serverless architecture.

Feb 10, 2026
Magic
Full-time|On-site|San Francisco

At Magic, our mission is to create safe AGI that propels humanity forward in addressing the world’s most critical challenges. We believe that the key to achieving safe AGI lies in automating research and code generation to enhance models and resolve alignment issues more effectively than humans alone. Our unique approach integrates frontier-scale pre-training, domain-specific reinforcement learning, ultra-long context, and inference-time computation to realize this vision.

Role Overview
As a vital member of our Supercomputing Platform & Infrastructure team, you will be instrumental in designing, constructing, and managing the extensive GPU infrastructure that underpins Magic’s model training and inference processes. A key aspect of your role will involve leveraging Terraform-driven infrastructure-as-code methodologies to build and maintain our infrastructure, ensuring reproducibility, reliability, and operational clarity across clusters comprising thousands of GPUs.

Magic’s long-context models exert continuous demands on compute, networking, and storage systems. The infrastructure must support long-running distributed jobs, high-throughput data movement, and stringent availability requirements, necessitating designs that are automated, observable, and resilient. You will take ownership of the systems and IaC foundations that facilitate these capabilities. This position has the potential to expand into broader responsibilities encompassing supercomputing platform architecture, influencing how Magic scales GPU clusters and enhances infrastructure reliability as model workloads expand.

Key Responsibilities
- Design and manage large-scale GPU clusters for model training and inference.
- Construct and sustain infrastructure utilizing Terraform across both cloud and hybrid environments.
- Develop modular, scalable IaC frameworks for provisioning compute, networking, and storage resources.
- Enhance deployment reproducibility, maintain environment consistency, and ensure operational safety.
- Optimize networking and storage architectures for high-throughput AI workloads.
- Automate fault detection and recovery mechanisms across distributed clusters.
- Diagnose complex cross-layer issues involving hardware, drivers, networking, storage, operating systems, and cloud environments.
- Enhance observability, monitoring, and reliability of essential platform systems.

Qualifications
- Strong foundation in systems engineering principles.
- Extensive hands-on experience with Terraform, including module design, state management, environment isolation, and large-scale implementations.

Jan 25, 2024
Parallel
Full-time|On-site|San Francisco or Palo Alto

About Us
At Parallel, we are a pioneering web infrastructure company dedicated to empowering businesses across various sectors, including sales, marketing, insurance, and software development. Our innovative products enable organizations to create cutting-edge AI agents with robust and flexible programmatic access to the web. Having successfully raised $130 million from esteemed investors such as Kleiner Perkins, Index Ventures, and Spark Capital, our mission is to reshape the web for AI applications. We are assembling a talented team of engineers, designers, marketers, and operational experts to help us achieve this vision.

Job Overview
As a member of our technical staff, you will play a crucial role in building, operating, and scaling our infrastructure, particularly around large language models. Your responsibilities will include ensuring system reliability and cost-efficiency as we expand, anticipating potential bottlenecks, evolving our architecture to meet growing demands, and developing the tools that enhance engineering productivity.

About You
You possess a deep understanding of distributed systems, cloud platforms, performance optimization, and scalable architecture. You are adept at balancing trade-offs between cost, reliability, and speed, and you are passionate about enabling teams to innovate rapidly and confidently while supporting products that serve millions of users seamlessly.

Aug 14, 2025
Parallel
Full-time|On-site|San Francisco or Palo Alto

About Us
At Parallel, we redefine web infrastructure to empower businesses across various sectors including sales, marketing, insurance, and coding. Our innovative products enable the creation of top-tier AI agents, providing them with flexible and robust programmatic access to the web. Having secured $130 million in funding from prominent investors like Kleiner Perkins, Index Ventures, Spark Capital, Khosla Ventures, First Round, and Terrain, we are on a mission to construct the web tailored for AI applications. We are assembling an elite team of engineers, designers, marketers, sellers, researchers, and operational experts to realize this vision.

Job Role
As a member of our research team, your primary objective will be to explore methods to train and scale a model capable of serving a comprehensive web index.

Your Profile
You possess a profound understanding of modern AI models and training methodologies. You enjoy engaging in discussions about the convergence of search algorithms, recommendation systems, and transformer models. You are passionate about ensuring your research translates into practical applications that impact millions.

Life at Parallel
Our team operates in a fully in-person environment, collaborating between our headquarters in Palo Alto and our San Francisco office. We maintain a flat organizational structure that values talent and is committed to tackling both technical and creative challenges. We are looking for individuals who are equally passionate about leveraging science, creativity, and consistency to address significant and complex challenges, leading to substantial outcomes.

Our core values include:
- Own Customer Impact: We take responsibility for delivering real-world results for our clients.
- Obsess Over Craft: We believe in perfecting every detail, as quality compounds over time.
- Accelerate Change: We prioritize swift shipping, rapid adaptation, and the implementation of pioneering ideas.
- Create Win-Wins: We strive to transform trade-offs into advantages.
- Make High-Conviction Bets: We embrace experimentation, learning from failures to achieve extraordinary successes.

Compensation & Benefits
- Competitive salary
- Generous equity options
- Visa sponsorship available
- 401(k) retirement plans
- Daily lunches & office snacks
- Dinner provided at the office
- Unlimited vacation policy
- Caltrain pass reimbursement

Jun 13, 2025
Listen Labs
Full-time|On-site|San Francisco, CA

Overview
Join Listen Labs as we respond to a surge in market demand with an ambitious 6-month product roadmap. We are expanding our engineering team and are on the lookout for a highly skilled technical expert (our current team includes three IOI medalists) who is eager to build a transformative product that reshapes decision-making for businesses. If you have a passion for solving intricate problems from start to finish, we want to connect with you.

About Listen Labs
Listen Labs is an AI-driven research platform designed to help teams quickly extract insights from customer interviews in a matter of hours rather than months. We empower our clients by enabling them to analyze conversations, identify key themes, and make faster, more informed product decisions.

Why Work with Us?
- Exceptional team: Founded by seasoned entrepreneurs with a successful AI exit, along with talent from renowned companies such as Jane Street, Twitter, Stripe, Affirm, Bain, and Goldman Sachs, our team boasts impressive credentials including IOI and ICPC backgrounds.
- Rapid growth: As a 40-person team backed by Sequoia Capital, we have achieved a remarkable growth trajectory, scaling from $0 to a $14 million run-rate in less than a year. We prioritize craftsmanship and thrive on collaboration with individuals who take ownership.
- Impressive traction: We are experiencing rapid growth across various sectors, securing enterprise clients such as Google, Microsoft, Nestlé, and Procter & Gamble.
- Proven performance: We maintain an industry-leading win rate driven by our uniquely differentiated product.
- Market validation: We consistently attract customers from diverse segments, achieving six-figure contracts that facilitate quick expansions.
- Viral product: Our interviews reach tens of thousands of viewers, promoting product-led growth, organic expansion, and daily interest from Fortune 500 companies.

Technical Challenges Await
- Research agent development: Unlike traditional software purchases, hiring McKinsey offers valuable opinions, expertise, and execution. We aim to provide users with an AI agent that possesses complete knowledge about our platform and best research practices, assisting them with project setup, conducting interviews, and analyzing responses.
- Human database creation: One of our core offerings is the ability to identify target users effectively (e.g., "power users of ChatGPT and Excel"). We are building a comprehensive database that connects users with the insights they need.

Feb 25, 2026
Reflection AI
Full-time|On-site|San Francisco

Our Mission
At Reflection AI, our goal is to develop open superintelligence and make it universally accessible. We are pioneering open-weight models tailored for individuals, agents, enterprises, and even entire nations. Our diverse team comprises talented AI researchers and industry veterans from prestigious organizations such as DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and many more.

Role Overview
- Construct and enhance distributed training systems that drive the pre-training of cutting-edge models.
- Collaborate with research teams to design and execute extensive training runs for foundational models.
- Create infrastructure that facilitates efficient training across thousands of GPUs leveraging contemporary distributed training frameworks.
- Enhance training throughput, stability, and efficiency for extensive model training tasks.
- Work closely with pre-training researchers to convert experimental concepts into scalable, production-ready training systems.
- Boost performance of distributed training tasks through optimization of communication, memory management, and GPU utilization.
- Develop and maintain training pipelines that accommodate large-scale datasets, checkpointing, and iterative experiments.
- Identify and resolve performance bottlenecks within distributed training systems, including model parallelism, GPU communication, and training runtime environments.
- Contribute to the creation of systems that promote swift experimentation and iteration on novel training methods.

Mar 24, 2026
Reflection AI
Full-time|On-site|San Francisco

About the Role
Reflection AI is hiring a Member of Technical Staff focused on Infrastructure Security in San Francisco. This position plays a key part in protecting the company’s infrastructure from security threats.

What You Will Do
- Work with teams across the company to design, implement, and monitor security protocols and systems.
- Help safeguard digital assets by maintaining the integrity and security of infrastructure.

Apr 16, 2026
Vapi
Full-time|On-site|San Francisco

About Vapi
At Vapi, we are revolutionizing communication by making voice the primary interface for human interaction. Our platform offers unparalleled configurability for deploying voice agents. In just two years, we have attracted over 600,000 developers, with more than 2,000 joining daily. Experience Vapi now!

Why We Need You
We handle millions of calls daily, with thousands occurring concurrently. Every call generates a new audio packet every 20 milliseconds, requiring responses in under 1 second. We are scaling this operation to manage hundreds of millions of calls. This challenge is exciting and incredibly rewarding.

Your Responsibilities
- 30 days: Get acquainted with our multi-cluster, multi-cloud infrastructure.
- 60 days: Launch a new service such as Anycast Global Router.
- 90 days: Take ownership of a domain, such as GPU inference clusters.

Your Profile
- You have experience from Series B to F funding stages.
- You have successfully scaled large, resilient, and high-performance systems.
- Bonus points if you've founded your own startup!

Why Choose Vapi
- Generational impact: Create the human interface for every business.
- Ownership culture: 70% of our team are previous founders.
- Supportive team: Our founders, Jordan and Nikhil, bring that friendly Canadian spirit.
- Top investors: Backed by Y Combinator, KP Seed, and Bessemer (Series A).

What We Provide
- Equity ownership: Competitive salary with excellent equity options.
- Health coverage: Comprehensive medical, dental, and vision plans.
- Team bonding: We enjoy spending time together, including quarterly off-site events.
- Flexible time off: Take the time you need to recharge.
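The scale described in this listing implies substantial packet throughput. As a rough back-of-the-envelope sketch (the 10,000 concurrent-call figure below is a hypothetical assumption; the listing only says "thousands occurring concurrently"):

```python
# Rough packet-throughput arithmetic for a real-time voice platform,
# using the listing's stated 20 ms packet interval.
PACKET_INTERVAL_MS = 20        # one audio packet per call every 20 ms
concurrent_calls = 10_000      # hypothetical concurrency, not from the listing

# 1000 ms / 20 ms = 50 packets per second for each active call
packets_per_call_per_sec = 1000 // PACKET_INTERVAL_MS

# Aggregate packets the platform must ingest every second
total_packets_per_sec = concurrent_calls * packets_per_call_per_sec

print(packets_per_call_per_sec)  # 50
print(total_packets_per_sec)     # 500000
```

At the assumed concurrency, that is half a million inbound audio packets per second, each needing a response path that completes in under one second.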

Jul 29, 2025
tierzero
Full-time|Hybrid|SF HQ

About tierzero
tierzero builds tools that help engineering teams manage production code with stronger incident response, better operational visibility, and collaborative knowledge sharing. Companies like Discord, Drata, and Framer use tierzero to support their infrastructure in an AI-driven landscape. Backed by $7 million from investors including Accel and SV Angel, tierzero is growing quickly from its San Francisco headquarters.

Role Overview: Founding Member of Technical Staff
This is a hands-on role shaping tierzero’s core product and systems from the ground up. The founding technical team works closely with the CEO, CTO, and early customers to solve real engineering challenges. The position is based in San Francisco, with a hybrid schedule: three days each week in the office.

What You’ll Do
- Design and build intelligent AI systems that process large volumes of unstructured data
- Deliver full-stack features informed by real-time user feedback
- Improve usability so AI agents are both effective and trustworthy for engineers
- Develop systems for automated evaluation of LLM outputs, including feedback loops and self-play
- Construct machine learning pipelines for data ingestion, feature generation, embedding storage, retrieval-augmented generation (RAG), vector search, and graph databases
- Prototype with open-source LLMs to understand their strengths and weaknesses
- Create scalable infrastructure for complex, multi-step agents, focusing on memory, state management, and asynchronous workflows

Who We’re Looking For
- 5+ years of professional experience or significant open-source contributions
- Interest in LLMs, MCPs, cloud infrastructure, and observability tools
- Comfort working in changing, ambiguous situations
- Product-focused and customer-first mindset
- Experience learning from and collaborating with engineers from diverse backgrounds
- Bonus: previous experience in a startup setting

Work Location
Hybrid schedule: three days per week in-person at the San Francisco HQ.

Apr 16, 2026
Catalog
Full-time|On-site|San Francisco

At Catalog, we are pioneering the commerce infrastructure for AI, creating the essential framework that enables digital agents to not only explore the web but also comprehend, analyze, and engage with products. Our innovations drive the future of AI-driven shopping experiences, fundamentally transforming how consumers discover and purchase items online.

Role Overview
As a Technical Staff Member, you will be instrumental in developing core systems, shaping our engineering culture, and transitioning our vision from prototype to a robust platform. This role requires full-stack expertise and a commitment to owning and resolving challenges from start to finish.

Who You Are
- You have experience creating beloved and trusted products from the ground up.
- You combine technical proficiency with a keen product sense and data-driven intuition.
- You are well-versed in AI technologies.
- You prioritize speed, write clean code, and ensure thorough instrumentation.
- You seek a high level of ownership within a small, talent-rich team based in San Francisco.

Challenges You Will Tackle
- Develop and deploy agentic-search APIs that deliver structured and real-time product data in milliseconds.
- Build checkout systems enabling agents to conduct transactions with any merchant.
- Create an embeddings and retrieval layer that optimizes recall, precision, and cost efficiency.
- Establish a product graph and ranking pipeline that adapts based on actual user outcomes.

Preferred Qualifications
- Proven experience shipping data-centric products in a live environment.
- Experience with recommendation systems or information retrieval methodologies.
- Familiarity with API development, search indexing, and data pipeline construction.

Our Work Culture
We operate with a small, high-trust, and highly motivated team, fostering an environment of in-person collaboration in North Beach, San Francisco. Our process involves debate, decision-making, and execution. If your profile aligns with our needs, we will contact you to arrange 2-3 brief technical interviews, followed by an onsite meeting in our office where you will collaborate on a small project, exchange ideas, and meet the team.

Oct 15, 2025
Liquid AI
Full-time|On-site|San Francisco

About Liquid AI
Originating from MIT CSAIL, Liquid AI specializes in creating versatile AI systems that operate efficiently across various platforms, from data center accelerators to on-device hardware, focusing on low latency, minimal memory consumption, privacy, and dependability. Our collaborations extend across industries including consumer electronics, automotive, life sciences, and financial services. As we undergo rapid expansion, we are on the lookout for outstanding individuals to join our journey.

The Opportunity
The Vision-Language Models (VLM) team is dedicated to developing cutting-edge vision-language models that function seamlessly on devices, adhering to stringent latency and memory requirements without compromising quality. Having already launched four premier models, we are excited about what lies ahead. This team is responsible for the complete VLM pipeline, encompassing research on novel architectures, training algorithms, data curation, evaluation, and deployment. You will be part of a dedicated, hands-on team that directly engages with models and works closely with our pretraining, post-training, and infrastructure teams. Your success will be gauged by the performance of the models we deliver.

Oct 23, 2025
Chroma
Full-time|On-site|San Francisco, CA

At Chroma, we are at the forefront of AI data infrastructure, providing top-tier retrieval solutions that empower developers worldwide. Join us as we navigate the nascent stages of AI technology, and become part of a team that values curiosity and dedication to mastering your craft. There is significant work ahead, and we invite you to contribute to our mission.

Sep 9, 2024
Stuut AI
Full-time|On-site|San Francisco

At Stuut AI, we are revolutionizing accounts receivable for B2B companies, enhancing collections to be smarter and more efficient. Our platform is increasingly being adopted by finance teams in various sectors, including industrials, chemicals, and manufacturing, with clients ranging from Fortune 10 enterprises to growing midmarket firms. Our innovative approach is supported by esteemed investors such as Andreessen Horowitz, Khosla Ventures, Activant, 1984 Ventures, and Page One.

Position Overview
We are seeking a dedicated Technical Staff Member focused on Internal AI Tooling. This pivotal role will involve constructing the foundational systems that allow Stuut to scale effectively. Your primary responsibility will be to design and implement internal infrastructure, automation, and AI-driven workflows to enhance operational efficiency across various departments, starting with marketing and extending to sales, operations, and product development.

This is a significant role for a proactive individual who thrives on transforming manual or disjointed processes into scalable systems. Collaboration with leadership and cross-functional teams will be essential as you design AI agents, automation pipelines, and internal tools that streamline operations and unlock new capabilities. We are transitioning to an agent-first model, not only in our products but also in our operational approach. This role is central to that evolution.

Mar 16, 2026
TierZero
Full-time|Hybrid|SF HQ

About TierZero
TierZero helps engineering teams use AI to build and ship code more efficiently. The platform targets the bottleneck of human speed in production, giving teams tools for faster incident response, better operational visibility, and shared knowledge. TierZero is backed by $7M in funding from investors including Accel and SV Angel. Companies like Discord, Drata, and Framer trust TierZero to strengthen their infrastructure for AI-driven engineering.

Role Overview: Founding Member of Technical Staff
This is an on-site role based at TierZero’s San Francisco headquarters, with three days a week in the office. As a founding member, you will collaborate directly with the CEO, CTO, and early customers to shape the direction of both product and systems. The work spans hands-on development and close engagement with users and leadership.

What You Will Do
- Design and build intelligent AI systems to analyze large volumes of unstructured data.
- Deliver full-stack features based on real user feedback.
- Improve the product experience so AI agents are both reliable and easy for engineers to use.
- Develop systems that automatically evaluate LLM outputs and advance agentic reasoning using self-play and feedback loops.
- Create machine learning pipelines, including data ingestion, feature generation, embedding stores, retrieval-augmented generation (RAG), vector search, and graph databases.
- Prototype with open-source and new LLMs, comparing their strengths and weaknesses.
- Build scalable infrastructure for long-running, multi-step agents, with attention to memory, state, and asynchronous workflows.

What We Look For
- Over five years of relevant professional or open-source experience.
- Comfort working in environments with uncertainty and evolving challenges.
- Strong product focus and a drive for customer satisfaction.
- Interest in large language models (LLMs), the Model Context Protocol (MCP), cloud infrastructure, and observability tools.
- Previous startup experience is a plus.

Location
This position is based in San Francisco. Expect to work on-site three days per week at TierZero’s HQ.

Apr 15, 2026
TierZero
Full-time|Hybrid|SF HQ

TierZero builds tools that help engineering teams deliver and manage code efficiently. The platform enables quicker incident response, clearer operational visibility, and shared knowledge among engineers. Backed by $7 million from investors like Accel and SV Angel, TierZero supports clients such as Discord, Drata, and Framer as they strengthen infrastructure for AI-driven work.

This in-person role is based at TierZero's San Francisco headquarters, with a hybrid schedule requiring three days onsite each week. As a founding member of the technical staff, you will work directly with the CEO, CTO, and customers to influence the direction of TierZero’s core products and systems. The position calls for flexibility as priorities shift and close collaboration across the company.

What You Will Do
- Design and develop AI systems that handle large volumes of unstructured data.
- Build full-stack product features, informed by direct feedback from users.
- Enhance the product so agents are intelligent, reliable, and easy for engineers to use.
- Create systems to automatically evaluate outputs from large language models and improve agentic reasoning through self-play and feedback.
- Construct machine learning pipelines, including data ingestion, feature creation, embedding stores, retrieval-augmented generation (RAG) pipelines, vector search, and graph databases.
- Experiment with open-source and emerging large language models to compare different approaches.
- Develop scalable infrastructure for long-running, multi-step agents, including memory, state management, and asynchronous workflows.

Requirements
- Interest in working with large language models, managed cloud platforms, cloud infrastructure, and observability tools.
- At least 5 years of professional experience or significant open-source contributions.
- Comfort with shifting priorities and tackling new technical problems.
- Strong product focus and commitment to customer outcomes.
- Openness to learning from a team with a track record of delivering over $10 billion in value.
- Ability to work onsite in San Francisco three days per week.
- Bonus: experience in a startup setting and familiarity with startup dynamics.

Apr 24, 2026
TierZero
Full-time|Hybrid|SF HQ

TierZero seeks a Founding Member of Technical Staff to join the team in San Francisco. This in-person position requires working from the SF headquarters at least three days per week.

Role Overview
This role centers on close collaboration with a group of engineers who have collectively delivered over $10 billion in value during their careers. Expect to work side by side with teammates, sharing ideas and building strong connections in the office. The environment often shifts, so adaptability and comfort with changing priorities are important.

Key Responsibilities
- Work directly with experienced engineers to design and build new products
- Prioritize customer needs and satisfaction in product decisions
- Develop solutions using large language models (LLMs), the Model Context Protocol (MCP), cloud infrastructure, and observability tools

Requirements
- Minimum 5 years of professional engineering experience or a strong record of open-source contributions
- Experience in startups and familiarity with their unique challenges is a plus

Location
This position is based in San Francisco. In-office presence is required three days each week for collaboration.

Apr 23, 2026
Perplexity
Full-time|On-site|San Francisco

Join Perplexity as a pioneering Technical Staff Member in one of the most innovative engineering roles in the AI sector. Collaborate closely with our senior leadership to drive a diverse array of strategic technology initiatives that encompass AI policy, law, operations, and corporate affairs. Reporting directly to the VP of Global Affairs & Deputy CTO, you will gain insight into the critical policy matters shaping the AI landscape. This role offers a holistic perspective on the operations and decision-making processes of leading AI companies and their engagement with society.

If you are passionate about blending technical proficiency with interests in law, policy, and corporate matters, you'll find your place here. Your contributions will challenge the status quo, supported by a team of mentors dedicated to fostering your growth and development.

Responsibilities
This role demands an interdisciplinary approach firmly anchored in engineering principles. Your daily tasks may include:
- Utilizing AI agents to investigate critical policy and regulatory challenges relevant to Perplexity.
- Developing automated systems for the collection, prioritization, and patenting of Perplexity's innovations, in coordination with our AI researchers and IP attorneys.
- Collaborating with our Legal team and external counsel on advanced litigation issues in U.S. federal courts.
- Designing privacy and compliance frameworks that grow alongside our expanding suite of pioneering AI products.
- Assisting in regulatory submissions and responses that necessitate a thorough understanding of AI technologies, Perplexity’s systems, and the wider policy context.
- Working with our People team to deploy software solutions for recruitment, labor compliance, and performance evaluation.
- Engaging in discussions with Capitol Hill, executive branch officials, international stakeholders, and civil society as a subject-matter expert on AI technology and policy.

This overview captures just a fraction of your potential activities; each day will be unique, providing you with opportunities across various domains within our organization.

Apr 2, 2026
Reka
Full-time|Remote|US, UK, Remote

As a Technical Staff Member specializing in Machine Learning, you will:
- Engage in the complete development lifecycle of innovative large-scale deep learning models.
- Curate datasets, architect solutions, implement algorithms, and train and assess models to enhance our offerings.
- Work collaboratively with engineers and researchers to convert groundbreaking research into real-world applications.
- Join us at a pivotal time, take on diverse roles, and contribute to building transformative products from the ground up!

Aug 1, 2023
Cohere
Full-time|On-site|San Francisco

Cohere builds and deploys advanced AI models used by developers and enterprises. These models support applications like content generation, semantic search, retrieval-augmented generation (RAG), and intelligent agents. The team’s work aims to make AI more accessible and practical for real-world use. Each person at Cohere plays a direct role in strengthening the models and increasing their value for clients. The company values practical outcomes and continuous improvement, focusing on delivering reliable technology to users. The team includes researchers, engineers, designers, and professionals from a wide range of backgrounds. Cohere believes that diverse perspectives help create better products. The company welcomes those interested in shaping the future of AI to join its mission.

Apr 28, 2026
