Software Engineer Infrastructure For Consumer Devices jobs in San Francisco – Browse 5,730 openings on RoboApply Jobs
Software Engineer Infrastructure For Consumer Devices jobs in San Francisco
Open roles matching “Software Engineer Infrastructure For Consumer Devices” with location signals for San Francisco. 5,730 active listings on RoboApply Jobs.
5,730 jobs found
Software Engineer - Infrastructure for Consumer Devices
Experience Level
Entry Level
Qualifications
Strong programming skills in languages such as Python, C++, or Java.
Experience with cloud infrastructure and deployment.
Understanding of networking concepts and protocols.
Ability to work collaboratively in a fast-paced environment.
A problem-solving mindset with a passion for technology.
About the job
About the Role
OpenAI is hiring a Software Engineer to work on Infrastructure for Consumer Devices in San Francisco. This position centers on building scalable systems that support and improve the experience of users interacting with AI-powered devices.
What You Will Do
Design and develop infrastructure for consumer-facing devices
Work on systems that directly impact how users interact with AI technology
Apply technical expertise to create solutions that scale as usage grows
About OpenAI
OpenAI is at the forefront of artificial intelligence research and deployment, dedicated to ensuring that AI benefits all of humanity. Our team is composed of highly skilled individuals committed to pushing the boundaries of technology.
Role Overview
OpenAI seeks a Graphics Software Engineer to shape the graphics features of its consumer devices. The position focuses on developing and refining graphics algorithms that enhance the experience for users of new hardware products.
What You Will Do
Create and implement graphics algorithms tailored for consumer hardware.
Collaborate with engineering teams to add graphics features that improve product usability and engagement.
Use expertise in software development and graphics programming to address complex technical problems.
Location
This role is based in San Francisco.
About the Role OpenAI is hiring a Software Engineer for the Engineering Acceleration team, working on Consumer Devices in San Francisco. This team builds and improves products that shape how people use technology in daily life. The role involves developing new features and strengthening existing systems for consumer-facing devices.
About the Team
Join the dynamic Consumer Devices team at OpenAI, where we revolutionize the integration of AI into tangible products. Our team is at the forefront of innovation, developing comprehensive hardware and software systems that merge custom silicon, embedded solutions, operating systems, and cloud technologies to create scalable, reliable, production-ready devices.
About the Role
We are looking for a passionate Operating Systems Engineer to strengthen the operating system foundations of OpenAI's products. This role is ideal for seasoned developers who excel at building foundational platform software and tackling complex challenges in security, privacy, performance, power efficiency, and reliability. Your work will span the OS kernel, core services, security and privacy frameworks, performance optimization, and the integration of applications and user interfaces with the system. The position emphasizes in-depth debugging and accountability throughout the development cycle.
You will collaborate closely with teams in embedded systems, firmware, hardware, applications, and product engineering. Familiarity with hardware bring-up is advantageous but not mandatory.
What You Will Do
Contribute to comprehensive OS functionality, including the kernel, userspace services, application frameworks, UI toolkits, and APIs.
Develop, integrate, and maintain OS components such as scheduling, memory management, filesystems, drivers, IPC/RPC, and security-related subsystems.
Build and manage core OS services and daemons, including service management, device discovery, networking, time management, logging, update hooks, and crash handling.
Design and implement robust security and privacy mechanisms: secure boot and measured boot where applicable; mandatory access controls and sandboxing; secrets management, secure storage, key handling, and least-privilege system design; and privacy-preserving telemetry with user-consent-driven system behaviors.
Establish a performance and power management discipline: instrumentation, profiling, and regression detection for boot time, latency, throughput, and memory usage; and workflows for power measurement, battery- and thermal-aware tuning, and prevention of energy regressions.
About Our Team
The Future of Computing Research team is a dynamic applied research unit within the Consumer Devices group at OpenAI. We pioneer innovative methods, models, and evaluation frameworks that advance our vision for the future of computing. Our focus lies at the cutting edge of multimodal AI: transforming emerging model capabilities into product experiences that are functional, enjoyable, and foster long-term trust.
Our research explores a new generation of AI systems capable of learning and evolving over time, adapting to individual needs, and enhancing daily life. This includes long-term memory, user modeling, and personalized systems aligned with broader human goals, values, and well-being.
We collaborate closely across research, engineering, design, product management, and safety to define what it means to build AI systems that recognize and respond to user needs in a contextually aware and respectful manner, ensuring demonstrable benefits.
About the Position
We are seeking a passionate Research Engineer/Scientist to join our Future of Computing Research team, focusing on Reinforcement Learning from Human Feedback (RLHF) and post-training techniques for personalized multimodal AI systems.
In this role, you will establish the learning and evaluation foundations that allow models to become increasingly context-aware, adaptive, and useful over time. You will tackle challenges such as reward modeling, preference learning, long-horizon evaluation, and policy improvement for systems that must make high-quality behavioral decisions in real-world settings. Our success is measured not just by improved benchmark performance but by improved model behavior in actual use.
The ideal candidate is enthusiastic about moving beyond one-turn assistant interactions toward systems that learn and grow through feedback, using richer signals and training against meaningful notions of user value. This requires a thoughtful approach to reward design, feedback mechanisms, and evaluation frameworks that assess the long-term benefits of interventions.
This position is based in San Francisco, CA, with a hybrid work model of four days in the office each week. We also provide relocation assistance for new hires.
Key Responsibilities
Develop RLHF and post-training strategies for multimodal models.
Create reward models and preference-learning pipelines that foster adaptive, personalized model behavior.
Conduct long-term evaluation and policy refinement to improve user interactions.
Join Our Team
The Future of Computing Research team is part of the Consumer Devices division at OpenAI, pioneering innovative methods and models in service of our mission to develop AGI that benefits all of humanity.
Your Role
As a Research Engineer/Scientist on our team, you will collaborate with world-class ML researchers and exceptional design experts to expand the limits of model capabilities.
This position is located in San Francisco, CA, following a hybrid work model with four days in the office each week. We provide relocation assistance for new hires.
Responsibilities
Train and assess state-of-the-art models, focusing on aspects crucial to our vision for future consumer devices.
Transform emerging research capabilities into practical solutions.
Contribute to defining the software landscape of the future.
Ideal Candidate
Possesses 5+ years of relevant experience.
Has a strong research background in training language models for UI generation and evaluating the effectiveness of generated UIs.
Enjoys cross-disciplinary collaboration across a diverse research landscape.
Conducts rigorous scientific investigations to ensure confidence in experimental outcomes.
Has hands-on experience training models for language comprehension and perception.
About OpenAI
OpenAI is an AI research and deployment company committed to ensuring that artificial general intelligence is developed safely and benefits all of humanity. We strive to push the limits of AI capabilities while prioritizing safety and human needs, and we aim to incorporate a wide array of perspectives, voices, and experiences that reflect the full spectrum of humanity.
We are proud to be an equal opportunity employer, valuing diversity and inclusion in our workplace.
About Our Team
Join our Quality Assurance Software Engineering team, where we ensure the excellence and dependability of our device software. We create and maintain automated testing frameworks, hardware-in-the-loop labs, and efficient release pipelines that keep quality signals trustworthy, enabling swift and secure product launches. Our work spans infrastructure, automation, and cross-team collaboration to ensure every release meets the highest standards.
About the Position
As a Quality Assurance Software Engineer, you will own the automated validation process for our device software: developing test frameworks, running regression testing, overseeing hardware-in-the-loop labs, and managing release gates. You will build systems that keep quality signals trustworthy, integrate them into our CI/CD processes, and streamline repeatable procedures for QA vendor technicians.
We seek engineers with deep expertise in software quality, automation, and hardware-software integration who are passionate about building scalable, reliable validation systems.
This position is located in San Francisco, CA, operating on a hybrid work model of four days in the office each week, and we provide relocation assistance to new hires.
Key Responsibilities
Test Infrastructure & Frameworks: Design, implement, and maintain a cohesive test framework for device software (unit, integration, system/end-to-end), integrating adapters for GitHub/Linear/Slack and ensuring reproducible runs.
CI/CD Integration & Releases: Integrate test suites with Buildkite, enforce promotion criteria (staging/prod), automate regression filing, and publish traceable artifacts and release notes.
Hardware-in-the-loop Lab Design & Orchestration: Plan and build out racks, power/networking, and orchestration for device testing; enable automated flashing, provisioning, and telemetry capture.
Automation Tooling: Create tools for API/firmware validation, result triage, log capture, and reproducible bug reports.
Quality Signals, Metrics, and Flake Control: Develop dashboards and alerts for pass rates, stability, and release readiness; identify and quarantine flaky tests; lead root-cause analysis with stakeholders; and monitor DORA-style delivery metrics to ensure release health.
Vendor Enablement: Draft clear procedures for QA vendor technicians, review their reports, and manage a queue for rig maintenance and repairs.
Cross-Team Collaboration: Work with embedded and system software teams to improve testability and streamline processes.
Location: San Francisco, CA (Hybrid: 4 days onsite/week). Relocation assistance available.
About Our Team
At OpenAI, we build foundational platform software that keeps our consumer products reliable, secure, and high-performing. Our team works across system layers, collaborating closely with engineering partners to deliver exceptional capabilities from initial concept to final launch.
Role Overview
We are looking for a passionate Systems Software Engineer to lead the design, implementation, and debugging of critical platform components and the pipelines that build and update system images. Your focus will span operating system layers, emphasizing performance optimization, security enhancements, and in-depth system debugging to deliver production-grade systems.
Key Responsibilities
Design and develop robust system-level components and services in both kernel and user space.
Configure and maintain essential OS platform services (init, services, networking, security policies) and related tools.
Build and manage image and update pipelines, ensuring reliability, reproducibility, and rollback safety.
Instrument system performance through profiling and tracing; improve CPU, memory, I/O, and energy efficiency.
Own platform observability and reliability, including logging, crash capture, watchdogs, and diagnostics.
Collaborate with cross-functional teams to define interfaces and deliver end-to-end features.
Establish and promote strong engineering practices such as code reviews, continuous integration, reproducible builds, and effective release management.
Work alongside external vendors to support builds and deployments.
You Will Excel in This Role If You:
Have shipped production systems software on modern operating systems.
Are proficient in C/C++ and a scripting language, with a strong understanding of OS internals, including concurrency, memory management, filesystems, networking, and power management.
Demonstrate exceptional systems debugging skills using debuggers, tracers, profilers, and logs across kernel/user-space boundaries.
Understand how to configure platform services and interfaces, translating requirements into stable, well-documented APIs.
Know user-space foundations, including service management, IPC, networking, packaging, and automation.
Have experience collaborating with external partners to deliver high-quality software.
About Us
At Sierra, we are revolutionizing the way businesses engage with their customers by building a cutting-edge platform that harnesses the power of AI. Our headquarters is in San Francisco, with additional offices in Atlanta, New York, London, France, Singapore, and Japan.
Our company culture is rooted in our core values: Trust, Customer Obsession, Craftsmanship, Intensity, and Family. These principles guide our actions and foster an environment where innovation thrives.
Sierra was co-founded by Bret Taylor, who currently serves as Board Chair of OpenAI and has a rich history with Salesforce and Facebook, and Clay Bavor, who previously led Google Labs and spearheaded initiatives like Google Lens and Project Starline.
Your Role
As a Software Engineer focusing on Infrastructure at Sierra, you will design, build, and maintain the foundational systems that power our AI platform. Your work will ensure that our infrastructure is secure, reliable, and scalable, allowing product teams to move with agility and confidence.
Guarantee the reliability, scalability, and performance of our platform and LLM inference serving as traffic grows.
Develop and manage cloud infrastructure using Terraform to create secure, scalable, and reproducible environments.
Build and run a self-service infrastructure platform that lets engineering teams deploy and operate services independently.
Own and improve CI/CD pipelines and release management processes, enabling rapid and reliable deployments across Sierra's platform.
Design and manage distributed systems built on distributed databases, retrieval systems, and machine learning models.
Develop and maintain core data serving abstractions along with essential authentication and security features (SSO, RBAC, authentication controls).
Integrate our technology stack with enterprise customer environments in a scalable and maintainable manner.
At Exa, we are on a mission to create a cutting-edge search engine from the ground up, designed to cater to the diverse needs of AI applications. Our team is building a robust infrastructure that enables us to crawl the internet, train advanced embedding models for indexing, and develop high-performance vector databases using Rust. Additionally, we manage a significant $5M H200 GPU cluster that powers tens of thousands of machines.The Infrastructure Team at Exa is responsible for developing the essential tools and infrastructure that support our entire system. We are looking for talented infrastructure engineers to help us scale our capabilities rapidly. Your work could involve orchestrating GPU clusters with Kubernetes, implementing map-reduce batch jobs on Ray, or creating top-tier observability tools that set industry standards.
Join Our Innovative Team
At OpenAI, our Consumer Products Research team is shaping the future of computing. We explore cutting-edge modalities, interaction patterns, and system behaviors, and engineer them into robust prototypes. The Neosensing team operates at the confluence of sensing technologies, edge algorithms, and systems engineering. We develop comprehensive software solutions that turn novel signals into reliable capabilities, including collection tools, integration protocols, and stable on-device loops that perform reliably in dynamic environments. We are passionate about software excellence and rapid iteration, emphasizing clean interfaces, debuggability, observability, and high performance even under strict device constraints.
Your Contribution
As a Software Engineer on our Consumer Products Research team, you will bridge the gap between algorithm development and shippable systems. Collaborating closely with algorithm engineers, you will turn prototypes into robust interfaces, dependable data pipelines, and optimized on-device solutions, with a sharp focus on performance, observability, and resilience against real-world conditions.
This role prioritizes software development: we seek a candidate who is passionate about writing high-quality code, takes pride in engineering craftsmanship, and is willing to dive into algorithmic details to ensure seamless end-to-end functionality.
Work Environment
This position is based in San Francisco, CA, and follows a hybrid work model with four days in the office each week. Relocation assistance is available for new hires.
Key Responsibilities
Develop and deploy production software for sensing algorithms, transforming algorithm prototypes into reliable end-to-end systems.
Own and improve critical components of the Python shipping pipeline, including integration surfaces, evaluation hooks, and performance quality safeguards.
Create embedded and on-device software in an RTOS environment (e.g., Zephyr) and deploy models across various device runtimes and hardware accelerators.
Refine real-time on-device perception loops (e.g., detection/tracking pipelines) for stability, low latency, and efficient use of power and memory.
Design and build data collection and instrumentation tools that enable new sensing modalities and speed the path from prototype to dataset to model to device.
Collaborate cross-functionally with algorithms, human data, and firmware/hardware teams to debug, profile, and harden systems against real-world variability.
Join Handshake as a Staff Software Engineer focused on enhancing consumer experience. In this pivotal role, you'll leverage your expertise to design and develop innovative software solutions that elevate user engagement and satisfaction. Collaborate closely with cross-functional teams to implement best practices and ensure a seamless user journey across all platforms.
Who We Are
Serval is an AI-driven automation platform redefining operational efficiency for enterprises. Our intelligent agents comprehend and execute real-world workflows, replacing outdated manual processes with adaptive, self-learning software. Since our inception in early 2024, we have earned the trust of industry leaders such as General Motors, Notion, Perplexity, Vercel, Mercor, LangChain, and Verkada, streamlining high-volume operational tasks across their organizations.
At the heart of Serval is an agentic AI platform that turns natural language into actionable workflows. Our agents not only respond to queries but also reason, act across systems, and continuously improve. What started as a solution for operational tasks has rapidly expanded into a versatile AI automation layer used across IT, HR, Finance, Security, Legal, and Engineering.
Our mission is to eliminate repetitive, manual tasks within enterprises, empowering teams through intelligent automation. In the long run, we aim to build a universal AI operations layer: a system of agents that integrates across business functions and keeps modern companies moving.
We are backed by renowned investors including Sequoia Capital, Redpoint Ventures, Meritech, First Round, General Catalyst, and Elad Gil, and founded by seasoned product and engineering leaders from Verkada.
Role Overview
As a Senior Software Engineer in Infrastructure at Serval, you will develop and scale the core systems that power our AI agents and workflow automation platform. A crucial aspect of this role is enabling and supporting self-hosted deployments for enterprise clients that need on-premises or private cloud environments. We are looking for engineers with deep expertise in distributed systems, infrastructure-as-code, production operations, and customer-facing support who want to shape the technical architecture of a rapidly evolving platform.
What You'll Do
Design, implement, and operate large-scale distributed systems that power Serval's AI agents, workflow orchestration, and data pipelines.
Create and maintain Terraform modules to provision and manage cloud infrastructure across AWS, GCP, or Azure environments.
Develop and maintain deployment packages, installation scripts, and infrastructure templates that let customers self-host Serval in their own environments.
Provide technical support and guidance to enterprise customers during installation and deployment.
About Us
At Imprint, we are revolutionizing co-branded credit cards and innovative financial solutions, focusing on smarter, more rewarding, brand-first experiences. We partner with renowned brands such as Crate & Barrel, Rakuten, Booking.com, H-E-B, Fetch, and Brooks Brothers to build modern credit programs that enhance customer loyalty, unlock savings, and stimulate growth. Our platform integrates advanced payment technologies, intelligent underwriting, and a seamless user experience, enabling brands to offer impactful financial products without the complexities of becoming a bank.
Co-branded credit cards represent over $300 billion in U.S. annual spending, yet many are still managed on outdated banking systems. Imprint is the modern alternative: flexible, technology-driven, and tailored for today's consumers. Backed by notable investors like Kleiner Perkins, Thrive Capital, and Khosla Ventures, we are assembling a world-class team dedicated to reshaping how people pay and how brands grow. If you thrive in fast-paced environments, enjoy tackling complex challenges, and aspire to make a significant impact, we would be delighted to meet you.
Discover more about us on Imprint's Technology Blog.
The Team
The Tech Platform Engineering Team at Imprint is democratizing access to advanced technologies, empowering teams across our organization to innovate and excel. Our commitment to redefining the Fintech landscape drives us to build secure, highly available infrastructure while equipping our engineers with comprehensive development tools, allowing them to rapidly create world-class products.
Your Role
Design, build, and manage cloud and web infrastructure with a strong emphasis on security, reliability, and scalability.
Implement and maintain infrastructure components across computing, networking, and data platforms.
Follow security best practices in cloud infrastructure, ensuring proper access control, network isolation, and secure communication between services.
Monitor system health and participate in incident response, root cause analysis, and reliability improvements.
Collaborate with platform, security, and product engineers to deliver safe and efficient infrastructure solutions.
About the Role
Join our team at vooma as a Backend & Infrastructure Software Engineer, where you will shape the technical infrastructure of a transformative company. If you are passionate about building not only resilient systems but also the foundational architecture of a groundbreaking enterprise from the outset, this position is for you.
We are looking for someone who excels at crafting infrastructure that is elegant, dependable, and secure, even under high demand. You thrive on the challenge of scaling systems that enable intelligent agents and take pride in building reliable foundations that others can depend on.
Your Key Responsibilities Include:
Design and maintain secure, scalable infrastructure tailored for AI-powered agents in production environments.
Deploy and optimize AI-driven services to meet high availability and performance standards.
Manage infrastructure as code, cloud environments, and CI/CD pipelines.
Implement monitoring, observability, and alerting to ensure the reliability of our infrastructure.
Contribute to infrastructure security and adhere to best practices.
You Should Have:
Experience deploying and productionizing machine learning or AI-centric workloads.
Proficiency in building secure, scalable infrastructure on platforms such as AWS, Azure, or GCP.
In-depth knowledge of backend systems, networking, and container orchestration technologies (e.g., Kubernetes).
Understanding of infrastructure security principles and compliance standards (e.g., SOC 2).
A proactive, hands-on mindset with a strong drive to solve problems end to end.
Full-time|$300K/yr|On-site|San Francisco
ABOUT BASETEN
Join Baseten, where we power mission-critical AI inference for leading companies like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. Our blend of applied AI research, robust infrastructure, and intuitive developer tools lets organizations at the forefront of AI innovation deploy state-of-the-art models into production. We recently secured a $300M Series E funding round backed by investors such as BOND, IVP, Spark Capital, Greylock, and Conviction. Be part of our rapid growth and help shape the platform engineers trust for launching AI products.
THE ROLE
As an Infrastructure Software Engineer at Baseten, you will develop and maintain the ML inference platform that powers AI applications in production. Your contributions will enhance the core infrastructure that lets developers deploy, scale, and monitor machine learning models with exceptional performance.
EXAMPLE INITIATIVES
You will work on innovative projects within our Infrastructure team, including:
Multi-cloud capacity management
Inference on B200 GPUs
Multi-node inference
Fractional H100 GPUs for efficient model serving
RESPONSIBILITIES
Design and develop infrastructure components for our ML inference platform, primarily in Python and Go.
Implement and maintain Kubernetes deployments for optimal model serving.
Contribute to the orchestration layer for model deployments.
Build and enhance monitoring systems to track model performance metrics effectively.
Develop efficient resource management solutions to optimize performance.
Full-time|$150K/yr - $200K/yr|On-site|San Francisco, CA
At Sift, we are revolutionizing the way cutting-edge machines are constructed, tested, and managed. Our innovative platform provides engineers with real-time visibility into high-frequency telemetry, effectively removing bottlenecks and facilitating quicker, more dependable development.Sift originated from our experience at SpaceX, contributing to projects like Dragon, Falcon, Starlink, and Starship, where the demands of scaling telemetry, debugging flight systems, and ensuring mission reliability necessitated a new kind of infrastructure. Founded by a talented team from SpaceX, Google, and Palantir, Sift is tailored for mission-critical systems where precision and scalability are imperative.As one of the pioneering engineers at Sift, your role will extend beyond just coding—you will play a crucial part in defining the architecture, shaping the product, and influencing the culture of a company dedicated to addressing real engineering challenges. If you're eager to take on intricate technical obstacles and build foundational systems that support complex machines from the ground up, we would love to connect with you.
Join Ivo's Engineering Team!
At Ivo, we are pioneers in the tech industry. Our engineers are innovators who have created groundbreaking solutions such as:
• An AI agent that seamlessly integrates with MS Word to enhance document editing [2023]
• Revolutionizing embedding models with agentic RAG technology [2023]
• Advanced LLM-based legal fact extraction capabilities [2024]
• A legal assistant designed to search extensive contract databases without compromising accuracy [2024]
• Clustering legal documents from the same lineage [2025]
• Automatic deviation analysis to uncover hidden risks in vast contract databases [2025]
• Merging contracts with their amendments to create a "composite" contract timeline that has moved our clients to tears [2025]
Role Overview
As an Infrastructure Engineer at Ivo, you will lay the groundwork for our platform's future. Your responsibilities will include:
• Designing and owning the future of our infrastructure, with the freedom to innovate.
• Managing multiple customer deployments, each with its own tailored containers, databases, and VPCs.
• Instrumenting our systems to identify performance bottlenecks and errors.
• Aggregating metrics and logs into clear dashboards and setting up pager alerts.
• Leading infrastructure-related incidents and being on call as necessary.
• Enhancing our CI/CD system to reduce deployment time from ~12 minutes.
If you're passionate about LLMs, you'll thrive on our engineering team, where you'll have the opportunity to:
• Develop real-time LLM evaluations to monitor the accuracy of our responses.
• Collaborate with talented engineers to push the boundaries of DevOps.
Astranis is seeking a talented and motivated Software Engineer to join our Infrastructure team. In this role, you will be at the forefront of developing and maintaining critical software systems that support our innovative satellite technology. You'll collaborate with cross-functional teams to design, implement, and optimize our infrastructure solutions, ensuring high reliability and performance.
Full-time|$196K/yr - $220K/yr|On-site|San Francisco Bay Area
At Discord, we connect over 200 million users every month, with the majority of them engaging in their favorite pastime: gaming. Our platform is not just about chatting; it’s a vibrant community where over 90% of our users immerse themselves in games, collectively spending a staggering 1.5 billion hours playing thousands of unique titles on Discord each month. We are on a mission to enhance the gaming experience by facilitating seamless communication and interaction among players.

We are actively searching for skilled Senior Software Engineers to join our Consumer Revenue teams, which are pivotal in shaping premium experiences at Discord. In this role, you will focus on developing features for our premium offerings, including Nitro, the shop, boosting, user identity, and more. Your contributions will play a crucial role in delivering high-quality premium experiences for our subscribers while preserving the core functionalities for our free users. You will be instrumental in driving the revenue that supports Discord’s overarching mission.

This role involves close collaboration across various departments such as Product, Data Science, Design, and Marketing to design and implement top-tier consumer experiences. You will manage projects throughout their lifecycle, from backend data modeling and API business logic to creating polished user-facing interfaces. Our infrastructure, platform, and product teams will support you as you strive to build the best premium Discord experience for our users.

Explore some of our latest initiatives like Nitro, shop, boosting, user identity, and more. For deeper insights into Discord Engineering, check out our engineering blog!
About the Role
OpenAI is hiring a Software Engineer to work on Infrastructure for Consumer Devices in San Francisco. This position centers on building scalable systems that support and improve the experience of users interacting with AI-powered devices.

What You Will Do
Design and develop infrastructure for consumer-facing devices
Work on systems that directly impact how users interact with AI technology
Apply technical expertise to create solutions that scale as usage grows
Role overview
OpenAI seeks a Graphics Software Engineer to influence the graphics features of its consumer devices. The position focuses on developing and refining graphics algorithms that enhance the experience for users of new hardware products.

What you will do
Create and implement graphics algorithms tailored for consumer hardware.
Collaborate with engineering teams to add graphics features that improve product usability and engagement.
Use expertise in software development and graphics programming to address complex technical problems.

Location
This role is based in San Francisco.
About the Role
OpenAI is hiring a Software Engineer for the Engineering Acceleration team, working on Consumer Devices in San Francisco. This team builds and improves products that shape how people use technology in daily life. The role involves developing new features and strengthening existing systems for consumer-facing devices.
About the Team
Join the dynamic Consumer Devices team at OpenAI, where we revolutionize the integration of AI into tangible products. Our team is at the forefront of innovation, developing comprehensive hardware and software systems that merge custom silicon, embedded solutions, operating systems, and cloud technologies to create scalable, reliable devices ready for production.

About the Role
We are in search of a passionate Operating Systems Engineer to fortify the operating system foundations for OpenAI's groundbreaking products. This role is ideal for seasoned developers who excel at creating foundational platform software and tackling complex challenges related to security, privacy, performance, power efficiency, and reliability. Your expertise will span the OS kernel, core services, security and privacy frameworks, performance optimization, and the integration of applications and user interfaces with the system. This position emphasizes in-depth debugging and accountability throughout the development cycle.

Collaboration will be essential as you work closely with teams from embedded systems, firmware, hardware, applications, and product engineering.
Familiarity with hardware bring-up is advantageous but not mandatory.

What You Will Do
Contribute to comprehensive OS functionalities, including kernel, userspace services, application frameworks, UI toolkits, and APIs.
Develop, integrate, and sustain OS components, which encompass scheduling, memory management, filesystems, drivers, IPC/RPC, and security-related subsystems.
Build and manage core OS services and daemons such as service management, device discovery, networking, time management, logging, update hooks, and crash handling.
Design and implement robust security and privacy mechanisms:
  Integrate secure boot and measured boot processes as applicable.
  Implement mandatory access controls and sandboxing.
  Manage secrets, secure storage, key handling, and design systems with least-privilege principles.
  Develop privacy-preserving telemetry and user-consent-driven system behaviors.
Establish a performance and power management discipline:
  Conduct instrumentation, profiling, and regression detection for boot time, latency, throughput, and memory usage.
  Implement workflows for power measurement, battery and thermal-aware tuning, and strategies to prevent energy regression.
About Our Team
The Future of Computing Research team is a dynamic applied research unit within the Consumer Devices group at OpenAI. We are dedicated to pioneering innovative methods, models, and evaluation frameworks that propel our vision for the future of computing. Our focus lies at the cutting edge of multimodal AI, transforming emerging model capabilities into product experiences that are not only functional and enjoyable but also foster long-term trust.

Our research delves into a new generation of AI systems capable of learning and evolving over time, adapting to individual needs, and enhancing daily life. This includes exploring long-term memory, user modeling, and personalized systems aligned with broader human goals, values, and overall well-being.

We collaborate closely across multiple disciplines, including research, engineering, design, product management, and safety, to define what it means to build AI systems that recognize and respond to user needs in a contextually aware and respectful manner, ensuring demonstrable benefits.

About the Position
We are seeking a passionate Research Engineer/Scientist to join our Future of Computing Research team, focusing on Reinforcement Learning from Human Feedback (RLHF) and post-training techniques for personalized multimodal AI systems.

In this role, you will be instrumental in establishing the learning and evaluation foundations necessary for models to become increasingly context-aware, adaptive, and useful over time. You will tackle challenges such as reward modeling, preference learning, long-horizon evaluation, and policy improvement for systems that are required to make high-quality behavioral decisions in real-world settings.
Our success is measured not just by improved benchmark performance but by enhanced model behavior in actual use cases.

The ideal candidate is enthusiastic about advancing beyond simplistic one-turn assistant interactions towards systems that learn and grow through feedback, utilizing richer signals and training against meaningful notions of user value. This requires a thoughtful approach to reward design, feedback mechanisms, and evaluation frameworks that assess the long-term benefits of interventions.

This position is based in San Francisco, CA, with a hybrid work model of four days in the office each week. We also provide relocation assistance for new hires.

Key Responsibilities:
Develop RLHF and post-training strategies for multimodal models.
Create reward models and preference-learning pipelines to foster adaptive, personalized model behavior.
Engage in long-term evaluation and policy refinement to enhance user interactions.
Join Our Team
The Future of Computing Research team is part of the Consumer Devices division at OpenAI, dedicated to pioneering innovative methods and models that align with our mission to develop AGI that benefits humanity as a whole.

Your Role
As a Research Engineer/Scientist on our team, you will collaborate with world-class ML researchers and exceptional design experts to expand the limits of model capabilities.

This position is located in San Francisco, CA, following a hybrid work model with four days in the office each week. We provide relocation assistance for new hires.

Responsibilities:
Train and assess state-of-the-art models focusing on aspects crucial to our vision for future consumer devices.
Overcome challenges to transform emerging research capabilities into practical solutions.
Contribute to defining the software landscape for the future.

Ideal Candidate:
Possesses 5+ years of relevant experience.
Has a strong research background in training language models for UI generation and evaluating the effectiveness of generated UIs.
Enjoys cross-disciplinary collaboration across a diverse research landscape.
Conducts rigorous scientific investigations to ensure confidence in experimental outcomes.
Has hands-on experience in training models for language comprehension and perception.

About OpenAI
OpenAI is an AI research and deployment organization committed to ensuring that artificial general intelligence is developed in a manner that is safe and beneficial for all of humanity. We strive to push the limits of AI capabilities while prioritizing safety and addressing human needs. Our mission is to incorporate a wide array of perspectives, voices, and experiences to reflect the full spectrum of humanity.

We are proud to be an equal opportunity employer, valuing diversity and inclusion in our workplace.
About Our Team
Join our dedicated Quality Assurance Software Engineering team, where we prioritize the excellence and dependability of our device software. We create and uphold automated testing frameworks, hardware-in-the-loop labs, and efficient release pipelines that guarantee quality signals are reliable, facilitating swift and secure product launches. Our collaborative environment encompasses infrastructure, automation, and cross-team synergy to ensure every release adheres to the highest standards.

About the Position
As a Quality Assurance Software Engineer, you will take ownership of the automated validation process for our device software. This encompasses developing test frameworks, conducting regression testing, overseeing hardware-in-the-loop labs, and managing release gates. You will construct systems that ensure quality signals remain trustworthy, integrate them into our CI/CD processes, and streamline the execution of repeatable procedures for QA vendor technicians.

We seek engineers with extensive expertise in software quality, automation, and hardware-software integration, who are passionate about building scalable and reliable validation systems.

This position is located in San Francisco, CA, operating on a hybrid work model of four days in the office each week, and we provide relocation assistance to new hires.

Key Responsibilities:
Test Infrastructure & Frameworks: Design, implement, and maintain a cohesive test framework for device software (unit, integration, system/end-to-end), integrating adapters for GitHub/Linear/Slack and ensuring reproducible runs.
CI/CD Integration & Releases: Seamlessly integrate test suites with Buildkite, enforce promotion criteria (staging/prod), automate regression filing, and publish traceable artifacts and release notes.
Hardware-in-the-Loop Lab Design & Orchestration: Strategically plan and establish racks, power/networking, and orchestration for device testing; facilitate automated flashing, provisioning, and telemetry capture.
Automation Tooling: Create tools for API/firmware validation, result triage, log capture, and reproducible bug reports.
Quality Signals, Metrics, and Flake Control: Develop dashboards and alerts for pass rates, stability, and release readiness; identify and quarantine flaky tests; lead root-cause analysis with stakeholders; and monitor DORA-style delivery metrics to ensure release health.
Vendor Enablement: Draft clear procedures for QA vendor technicians, review their reports, and manage a queue for rig maintenance and repairs.
Cross-Team Collaboration: Collaborate with embedded and system software teams to enhance testability and streamline processes.
Location: San Francisco, CA (Hybrid: 4 days onsite/week). Relocation assistance available.

About Our Team:
At OpenAI, we are at the forefront of technology, creating foundational platform software that ensures our consumer products are reliable, secure, and high-performing. Our team collaborates across various system layers, working closely with engineering partners to deliver exceptional capabilities from initial concept to final launch.

Role Overview:
We are looking for a passionate Systems Software Engineer to lead the design, implementation, and debugging of critical platform components and the pipelines that build and update system images. Your focus will span across operating system layers, emphasizing performance optimization, security enhancements, and in-depth system debugging to deliver production-grade systems that exceed expectations.

Key Responsibilities:
Design and develop robust system-level components and services within both kernel and user spaces.
Configure and maintain essential OS platform services (init, services, networking, security policies) and related tools.
Build and manage image and update pipelines, ensuring their reliability, reproducibility, and rollback safety.
Instrument system performance through profiling and tracing; enhance CPU, memory, I/O, and energy efficiency.
Oversee platform observability and reliability, including logging, crash capture, watchdogs, and diagnostics.
Collaborate with cross-functional teams to define interfaces and deliver comprehensive end-to-end features.
Establish and promote strong engineering practices such as code reviews, continuous integration, reproducible builds, and effective release management.
Work alongside external vendors to support builds and deployments.

You Will Excel in This Role If You:
Have successfully launched production systems software on modern operating systems.
Possess proficiency in C/C++ and a scripting language, with a strong understanding of OS internals including concurrency, memory management, filesystems, networking, and power management.
Demonstrate exceptional systems debugging skills utilizing debuggers, tracers, profilers, and logs across kernel/user-space boundaries.
Comprehend the configuration of platform services and interfaces, effectively translating requirements into stable, well-documented APIs.
Are knowledgeable about user-space foundations including service management, IPC, networking, packaging, and automation.
Have experience collaborating with external partners to deliver high-quality software solutions.
About Us
At Sierra, we are revolutionizing the way businesses engage with their customers by building a cutting-edge platform that harnesses the power of AI. Our headquarters is located in the vibrant city of San Francisco, with additional offices expanding in Atlanta, New York, London, France, Singapore, and Japan.

Our company culture is deeply rooted in our core values: Trust, Customer Obsession, Craftsmanship, Intensity, and Family. These principles guide our actions and foster an environment where innovation thrives.

Sierra was co-founded by visionary leaders Bret Taylor, who currently serves as the Board Chair of OpenAI and has a rich history with Salesforce and Facebook, and Clay Bavor, who previously led Google Labs and spearheaded initiatives like Google Lens and Project Starline.

Your Role
As a Software Engineer focusing on Infrastructure at Sierra, you will play a pivotal role in designing, constructing, and maintaining the foundational systems that empower our AI platform. Your expertise will ensure that our infrastructure is not only secure and reliable but also scalable, allowing product teams to execute their work with agility and confidence.

Guarantee the reliability, scalability, and performance of our platform and LLM inference serving in response to increasing traffic demands.
Develop and oversee cloud infrastructure using Terraform to create secure, scalable, and reproducible environments.
Establish and manage a self-service infrastructure platform to empower engineering teams in deploying and operating services independently.
Take ownership of and improve CI/CD pipelines and release management processes, facilitating rapid and reliable deployments across Sierra’s platform.
Design and manage distributed systems utilizing distributed databases, retrieval systems, and machine learning models.
Develop and sustain core data serving abstractions along with essential authentication and security features (SSO, RBAC, authentication controls).
Effectively navigate and integrate our technology stack with enterprise customer environments in a scalable and maintainable manner.
At Exa, we are on a mission to create a cutting-edge search engine from the ground up, designed to cater to the diverse needs of AI applications. Our team is building a robust infrastructure that enables us to crawl the internet, train advanced embedding models for indexing, and develop high-performance vector databases using Rust. Additionally, we manage a significant $5M H200 GPU cluster that powers tens of thousands of machines.The Infrastructure Team at Exa is responsible for developing the essential tools and infrastructure that support our entire system. We are looking for talented infrastructure engineers to help us scale our capabilities rapidly. Your work could involve orchestrating GPU clusters with Kubernetes, implementing map-reduce batch jobs on Ray, or creating top-tier observability tools that set industry standards.
Join Our Innovative Team
At OpenAI, our Consumer Products Research team is at the forefront of shaping the future of computing. We delve into cutting-edge modalities, interaction patterns, and system behaviors, engineering them into robust prototypes. The Neosensing team operates at the confluence of sensing technologies, edge algorithms, and systems engineering. We develop comprehensive software solutions that transform novel signals into reliable capabilities, including collection tools, integration protocols, and stable on-device loops that perform reliably in dynamic environments. We are passionate about software excellence and rapid iteration, emphasizing clean interfaces, debuggability, observability, and high performance even under strict device constraints.

Your Contribution
As a Software Engineer in our Consumer Products Research team, you will bridge the gap between algorithm development and implementable systems. Collaborating closely with algorithm engineers, you will convert prototypes into robust interfaces, dependable data pipelines, and optimized on-device solutions, with a sharp focus on performance, observability, and resilience against real-world challenges.

This role prioritizes software development, seeking a candidate who is passionate about writing high-quality code, takes pride in engineering craftsmanship, and is willing to dive deep into algorithmic intricacies to ensure seamless end-to-end functionality.

Work Environment
This position is based in San Francisco, CA, and follows a hybrid work model with four days in the office each week. Relocation assistance is available for new hires.

Key Responsibilities:
Develop and deploy pioneering production software for sensing algorithms, transforming algorithm prototypes into reliable end-to-end systems.
Manage and enhance critical components of the Python shipping pipeline, including integration surfaces, evaluation hooks, and performance quality safeguards.
Create embedded and on-device software within an RTOS environment (e.g., Zephyr) and implement models across various device runtimes and hardware accelerators.
Refine real-time on-device perception loops (e.g., detection/tracking pipelines) to ensure stability, low latency, and efficient use of power and memory.
Design and develop data collection and instrumentation tools that facilitate the introduction of new sensing modalities and expedite the process from prototype to dataset to model to device.
Collaborate cross-functionally with teams in algorithms, human data, and firmware/hardware to debug, profile, and enhance systems against real-world variability.
Join Handshake as a Staff Software Engineer focused on enhancing consumer experience. In this pivotal role, you'll leverage your expertise to design and develop innovative software solutions that elevate user engagement and satisfaction. Collaborate closely with cross-functional teams to implement best practices and ensure a seamless user journey across all platforms.
Who We Are
Serval is an innovative AI-driven automation platform redefining operational efficiency for enterprises. Our intelligent agents seamlessly comprehend and execute real-world workflows, replacing outdated manual processes with adaptive, self-learning software. Since our inception in early 2024, we have garnered the trust of industry leaders such as General Motors, Notion, Perplexity, Vercel, Mercor, LangChain, and Verkada, streamlining high-volume operational tasks across their organizations.

At the heart of Serval is a cutting-edge agentic AI platform that transforms natural language into actionable workflows. Our agents not only respond to queries but also reason, act across various systems, and continuously enhance their performance. What started as a solution for operational tasks has rapidly expanded into a versatile AI automation layer utilized across IT, HR, Finance, Security, Legal, and Engineering sectors.

Our mission is to eradicate repetitive, manual tasks within enterprises, empowering teams through intelligent automation. In the long run, we aim to establish a universal AI operations layer: a system of agents that integrates across business functions, maintaining the momentum of modern companies.

We are proud to be backed by renowned investors including Sequoia Capital, Redpoint Ventures, Meritech, First Round, General Catalyst, and Elad Gil, and founded by seasoned product and engineering leaders from Verkada.

Role Overview
As a Senior Software Engineer in Infrastructure at Serval, you will be pivotal in developing and scaling the core systems that empower our AI agents and workflow automation platform. A crucial aspect of this role involves enabling and supporting self-hosted deployments for enterprise clients needing on-premises or private cloud environments. We are looking for engineers with profound expertise in distributed systems, infrastructure-as-code, production operations, and customer-facing support, who aspire to influence the technical architecture of a rapidly evolving platform.

What You'll Do
Design, implement, and operate large-scale distributed systems that power Serval's AI agents, workflow orchestration, and data pipelines.
Create and maintain Terraform modules to provision and manage cloud infrastructure across AWS, GCP, or Azure environments.
Develop and sustain deployment packages, installation scripts, and infrastructure templates, enabling customers to self-host Serval in their own environments.
Provide technical support and guidance to enterprise customers during installation and deployment phases.
About Us
At Imprint, we are revolutionizing the world of co-branded credit cards and innovative financial solutions, focusing on smarter, more rewarding, and brand-first experiences. We collaborate with renowned brands such as Crate & Barrel, Rakuten, Booking.com, H-E-B, Fetch, and Brooks Brothers to establish modern credit programs that enhance customer loyalty, unlock savings, and stimulate growth. Our robust platform integrates advanced payment technologies, intelligent underwriting, and a seamless user experience, enabling brands to offer impactful financial products without the complexities of becoming a bank.

Co-branded credit cards represent over $300 billion in U.S. annual spending, yet many are still managed by outdated banking systems. Imprint stands as the modern alternative: flexible, technology-driven, and tailored for today’s consumers. Supported by notable investors like Kleiner Perkins, Thrive Capital, and Khosla Ventures, we are assembling a world-class team dedicated to reshaping payment methods and driving brand growth. If you thrive in fast-paced environments, enjoy tackling complex challenges, and aspire to make a significant impact, we would be delighted to meet you.

Discover more about us on Imprint's Technology Blog.

The Team
The Tech Platform Engineering Team at Imprint is pioneering the democratization of access to advanced technologies, empowering teams across our organization to innovate and excel. Our commitment to redefining the Fintech landscape drives us to build secure, highly available infrastructures while equipping our engineers with comprehensive development tools, allowing them to rapidly create world-class products.

Your Role
Design, build, and manage cloud and web infrastructure with a strong emphasis on security, reliability, and scalability.
Implement and maintain infrastructure components across computing, networking, and data platforms.
Adhere to security best practices in cloud infrastructure, ensuring proper access control, network isolation, and secure communication between services.
Monitor system health and engage in incident response, root cause analysis, and reliability enhancements.
Collaborate with platform, security, and product engineers to deliver safe and efficient infrastructure solutions.
About the Role
Join our pioneering team at vooma as a Backend & Infrastructure Software Engineer, where you will play a critical role in shaping the technical infrastructure of a transformative company. If you are passionate about creating not only resilient systems but also the foundational architecture of a groundbreaking enterprise from the outset, this position is ideal for you.

We are looking for someone who excels at crafting infrastructure that is elegant, dependable, and secure, even under high-demand scenarios. You thrive on the challenge of scaling systems that enable intelligent agents and take pride in establishing reliable foundations that others can rely on.

Your Key Responsibilities Include:
Design and maintain secure, scalable infrastructure tailored for AI-powered agents in production environments.
Deploy and optimize AI-driven services to meet high availability and performance standards.
Manage infrastructure as code, alongside cloud environments and CI/CD pipelines.
Implement monitoring, observability, and alerting systems to ensure the reliability of our infrastructure.
Contribute to infrastructure security and adhere to best practices.

You Should Have:
Experience in deploying and productionizing machine learning or AI-centric workloads.
Proficiency in developing secure, scalable infrastructures on platforms such as AWS, Azure, or GCP.
In-depth knowledge of backend systems, networking, and container orchestration technologies (e.g., Kubernetes).
Understanding of infrastructure security principles and compliance standards (e.g., SOC2).
A proactive and hands-on mindset, with a strong drive to solve challenges from start to finish.
Full-time|$300K/yr - $300K/yr|On-site|San Francisco
ABOUT BASETEN
Join Baseten, where we drive mission-critical AI inference for leading companies like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. Our unique blend of applied AI research, robust infrastructure, and intuitive developer tools empowers organizations at the forefront of AI innovation to deploy state-of-the-art models into production. Recently, we secured a $300M Series E funding round, backed by esteemed investors such as BOND, IVP, Spark Capital, Greylock, and Conviction. Be a part of our rapid growth and help shape the platform that engineers trust for launching AI products.

THE ROLE
As an Infrastructure Software Engineer at Baseten, you will play a pivotal role in developing and maintaining our ML inference platform that powers AI applications in production. Your contributions will enhance the core infrastructure, enabling developers to deploy, scale, and monitor machine learning models with exceptional performance.

EXAMPLE INITIATIVES
You will engage in innovative projects within our Infrastructure team, including:
Multi-cloud capacity management
Inference on B200 GPUs
Multi-node inference
Fractional H100 GPUs for efficient model serving

RESPONSIBILITIES
Design and develop infrastructure components for our ML inference platform, primarily using Python and Go.
Implement and maintain Kubernetes deployments for optimal model serving.
Contribute to the orchestration layer for model deployments.
Build and enhance monitoring systems to track model performance metrics effectively.
Develop efficient resource management solutions to optimize performance.
Full-time|$150K/yr - $200K/yr|On-site|San Francisco, CA
At Sift, we are revolutionizing the way cutting-edge machines are constructed, tested, and managed. Our innovative platform provides engineers with real-time visibility into high-frequency telemetry, effectively removing bottlenecks and facilitating quicker, more dependable development.Sift originated from our experience at SpaceX, contributing to projects like Dragon, Falcon, Starlink, and Starship, where the demands of scaling telemetry, debugging flight systems, and ensuring mission reliability necessitated a new kind of infrastructure. Founded by a talented team from SpaceX, Google, and Palantir, Sift is tailored for mission-critical systems where precision and scalability are imperative.As one of the pioneering engineers at Sift, your role will extend beyond just coding—you will play a crucial part in defining the architecture, shaping the product, and influencing the culture of a company dedicated to addressing real engineering challenges. If you're eager to take on intricate technical obstacles and build foundational systems that support complex machines from the ground up, we would love to connect with you.
Join Ivo's Engineering Team!At Ivo, we are pioneers in the tech industry. Our engineers are innovators who have created groundbreaking solutions such as:• An AI agent that seamlessly integrates with MS Word to enhance document editing [2023]• Revolutionizing embedding models with agentic RAG technology [2023]• Advanced LLM-based legal fact extraction capabilities [2024]• A legal assistant designed to search extensive contract databases without compromising accuracy [2024]• Clustering legal documents from the same lineage [2025]• Automatic deviation analysis to uncover hidden risks in vast contract databases [2025]• Merging contracts with their amendments to create a “composite” contract timeline that has moved our clients to tears [2025]Role OverviewAs an Infrastructure Engineer at Ivo, you will lay the groundwork for our platform's future. Your responsibilities will include:• Designing and owning the future of our infrastructure, allowing you the freedom to innovate.• Managing multiple customer deployments, ensuring each receives tailored containers, databases, and VPCs.• Instrumenting our systems to identify performance bottlenecks and errors.• Aggregating metrics and logs into visually appealing dashboards and setting up pager alerts.• Leading infrastructure-related incidents and being on-call as necessary.• Enhancing our CI/CD system to reduce deployment time from ~12 minutes.If you're passionate about LLMs, you'll thrive in our engineering team, where you’ll have the opportunity to:• Develop real-time LLM evaluations to monitor the accuracy of our responses.• Collaborate with talented engineers to push the boundaries of DevOps.
Astranis is seeking a talented and motivated Software Engineer to join our Infrastructure team. In this role, you will be at the forefront of developing and maintaining critical software systems that support our innovative satellite technology. You'll collaborate with cross-functional teams to design, implement, and optimize our infrastructure solutions, ensuring high reliability and performance.
Full-time|$196K/yr - $220K/yr|On-site|San Francisco Bay Area
At Discord, we connect over 200 million users every month, with the majority of them engaging in their favorite pastime: gaming. Our platform is not just about chatting; it's a vibrant community where over 90% of our users immerse themselves in games, collectively spending a staggering 1.5 billion hours playing thousands of unique titles on Discord each month. We are on a mission to enhance the gaming experience by facilitating seamless communication and interaction among players.

We are actively searching for skilled Senior Software Engineers to join our Consumer Revenue teams, which are pivotal in shaping premium experiences at Discord. In this role, you will focus on developing features for our premium offerings, including Nitro, the shop, boosting, user identity, and more. Your contributions will play a crucial role in delivering high-quality premium experiences for our subscribers while preserving the core functionalities for our free users. You will be instrumental in driving the revenue that supports Discord's overarching mission.

This role involves close collaboration across various departments such as Product, Data Science, Design, and Marketing to design and implement top-tier consumer experiences. You will manage projects throughout their lifecycle, from backend data modeling and API business logic to creating polished user-facing interfaces. Our infrastructure, platform, and product teams will support you as you strive to build the best premium Discord experience for our users.

Explore some of our latest initiatives like Nitro, shop, boosting, user identity, and more. For deeper insights into Discord Engineering, check out our engineering blog!
Mar 18, 2026