High-Performance Computing (HPC) Hardware Engineer jobs in San Francisco – Browse 5,358 openings on RoboApply Jobs

High-Performance Computing (HPC) Hardware Engineer jobs in San Francisco

Open roles matching “High-Performance Computing (HPC) Hardware Engineer” with location signals for San Francisco. 5,358 active listings on RoboApply Jobs.

5,358 jobs found

sfcompute
Full-time|On-site|San Francisco, CA

At sfcompute, we are on a mission to de-risk the largest infrastructure build-out in history.

When financing GPU clusters and the data centers that support them, having a contract in place, what we call an "offtake," is crucial: it ensures that customers have signed on to lease the cluster even before it's constructed. The financing process for GPU clusters carries inherent risk because margins are thin and volumes are large. Lenders hesitate to take on the risk that developers default on their loans, while developers are wary of being unable to sell their clusters. This dynamic pushes risk onto customers via fixed-price, long-term contracts.

If customer risk isn't effectively mitigated, a market bubble can form. Unlike traditional SaaS models, application-layer companies sign multi-year contracts for compute and inference while offering their own customers monthly subscriptions. A miscalculation in purchasing can spell disaster; a small change in revenue growth is the difference between profit and bankruptcy. Imagine a world where companies could exit their contracts by selling them back to the market.

As AI scales, compute will increasingly be available only to those who can manage the associated risk. A small startup in a San Francisco Victorian cannot feasibly commit to a 5-year, take-or-pay contract for a $100 million supercomputer, but it might buy a month of capacity that someone else has sold back. That's the market we're building: a liquid marketplace for GPU offtake.

About the Role
As part of our infrastructure team, you will help design and deploy some of the most powerful GPU clusters in existence; even the smaller clusters we run today would have ranked in the TOP500 five years ago. Your responsibilities will include participating in on-call rotations, deploying new environments, troubleshooting issues, and embracing automation to facilitate large-scale deployments. As a member of a small but dynamic team, you'll have the opportunity to shape company culture, mentor junior engineers, and engage directly with our customers.

Feb 25, 2026
sfcompute
Full-time|On-site|San Francisco, CA

At sfcompute, we are pioneering a new approach to GPU cluster financing, enabling the largest infrastructure build-out in history while effectively mitigating risk.

Securing financing for GPU clusters and the infrastructure they require involves inherent risk. Our model lets developers lease clusters through fixed-price, long-term contracts, offloading risk to the customer while maintaining financial stability. As AI and computational demands grow, our mission is to democratize access to powerful computing resources: a liquid market for GPU offtake that lets startups and smaller enterprises thrive without long-term contracts they cannot feasibly carry.

Role Overview
Join our infrastructure team, responsible for architecting and deploying cutting-edge GPU clusters globally. You'll play a crucial role in maintaining operational excellence, participating in on-call rotations, and driving automation to facilitate large-scale deployments. As a key member of our small but ambitious team, you will help shape our culture, mentor junior engineers, and learn directly from our customers.

Feb 25, 2026
OpenAI
Full-time|On-site|San Francisco

About Our Team
Join the Fleet team at OpenAI, where we power research and product innovation through our computing infrastructure. We manage systems spanning data centers, GPUs, and networking, ensuring performance, availability, and efficiency. Our work enables OpenAI's models to run seamlessly at scale, supporting both internal research and external products like ChatGPT, and we prioritize safety, reliability, and the ethical deployment of AI technology.

About the Role
As a Software Engineer on the Fleet High Performance Computing (HPC) team, you will ensure the reliability and uptime of OpenAI's compute fleet. Minimizing hardware failures is essential: even minor hardware issues can set back research training and interrupt services, and with today's large supercomputers the stakes have never been higher. Because we operate at the cutting edge, we are often the first to troubleshoot complex, state-of-the-art systems at scale. Our team fosters autonomy and ownership, and in this role you will focus on deep system investigations and automated solutions. We seek individuals who dive into challenges, investigate thoroughly, and build scalable automation for detection and remediation.

Key Responsibilities:
- Develop and maintain automation systems for provisioning and managing server fleets.
- Create tools to monitor server health, performance metrics, and lifecycle events.
- Collaborate with teams across clusters, networking, and infrastructure.
- Work closely with external operators to maintain a high level of service quality.
- Identify and resolve performance bottlenecks and inefficiencies in the system.
- Continuously enhance automation to minimize manual intervention.

You Will Excel in This Role if You Have:
- Experience managing large-scale server environments.
- A blend of systems programming and infrastructure-management skills.
- Strong problem-solving abilities and a methodical approach to troubleshooting.
- Familiarity with high-performance computing technologies and tools.
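The "automation for detection and remediation" this listing describes can be pictured as a triage loop over fleet health signals. A minimal illustrative sketch follows; the node names, thresholds, and remediation actions are hypothetical, not OpenAI's actual policy.

```python
"""Sketch of a detect-and-remediate pass over a server fleet (assumed policy)."""
from dataclasses import dataclass

@dataclass
class NodeHealth:
    name: str
    ecc_errors: int    # correctable ECC errors since last scrub
    nvlink_down: int   # NVLink lanes reporting down
    temp_c: float      # hottest GPU temperature on the node

def triage(node: NodeHealth) -> str:
    """Map raw health signals to a remediation action (illustrative thresholds)."""
    if node.nvlink_down > 0:
        return "drain"     # remove from scheduler, open a repair ticket
    if node.ecc_errors > 1000:
        return "reboot"    # excessive ECC often clears after a reboot/row-remap
    if node.temp_c > 90.0:
        return "throttle"  # cap clocks until thermals recover
    return "healthy"

fleet = [
    NodeHealth("gpu-node-01", ecc_errors=12, nvlink_down=0, temp_c=68.0),
    NodeHealth("gpu-node-02", ecc_errors=4096, nvlink_down=0, temp_c=71.5),
    NodeHealth("gpu-node-03", ecc_errors=0, nvlink_down=2, temp_c=65.0),
]
actions = {n.name: triage(n) for n in fleet}
print(actions)
```

In practice such a loop would consume telemetry from a monitoring pipeline rather than hardcoded records; the point is that checks are ordered by severity so the most disruptive fault wins.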

Feb 5, 2026
Sciforium
Full-time|On-site|San Francisco

At Sciforium, we are at the forefront of AI infrastructure, building advanced multimodal AI models and a high-efficiency serving platform. With substantial backing from AMD and a dedicated team of engineers, we are rapidly expanding to support the next generation of frontier AI models and real-time applications.

About the Role
We are looking for a Senior HPC & GPU Infrastructure Engineer to own the health, reliability, and performance of our GPU compute cluster. As the primary custodian of our high-density accelerator environment, you will be the link between hardware operations, distributed systems, and machine learning workflows. The role spans hands-on Linux systems engineering and GPU driver setup through maintenance of the ML software stack (CUDA/ROCm, PyTorch, JAX, vLLM). If you are passionate about optimizing hardware performance, enjoy troubleshooting GPUs at scale, and want to build world-class AI infrastructure, we would love to hear from you.

Your Responsibilities
1. System Health & Reliability (SRE)
- On-Call Response: Be the primary responder for system outages, GPU failures, node crashes, and other cluster-wide incidents, resolving issues rapidly to minimize downtime.
- Cluster Monitoring: Develop and maintain monitoring for GPU health, thermal behavior, PCIe/NVLink topology issues, memory errors, and general system load.
- Vendor Liaison: Work with data center personnel, hardware vendors, and on-site technicians on repairs, RMA processing, and physical maintenance of the cluster.

2. Linux & Network Administration
- OS Management: Oversee installation, patching, and maintenance of Linux distributions (Ubuntu/CentOS/RHEL), ensuring consistent configuration, kernel tuning, and automation across large node fleets.
- Security & Access Controls: Set up VPNs, iptables/firewalls, SSH hardening, and network routing to secure the computing infrastructure.
- Identity & Storage Management: Manage LDAP/FreeIPA/AD for user identity and administer distributed file systems such as NFS, GPFS, or Lustre.

3. GPU & ML Stack Engineering
- Deployment & Bring-Up: Lead the deployment of new GPU nodes, including BIOS configuration and software integration for optimal performance.
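The GPU-health monitoring this role owns often starts with tools like `nvidia-smi`/`rocm-smi` in machine-readable mode. Below is a small illustrative parser over fabricated CSV output; the column layout and thresholds are assumptions for the sketch, not a vendor-documented schema.

```python
# Sketch: flag unhealthy GPUs from CSV telemetry of the kind
# `nvidia-smi --query-gpu=... --format=csv,noheader` emits.
# Assumed columns: index, temperature (C), corrected ECC errors, Xid error count.

CSV = """\
0, 62, 0, 0
1, 93, 0, 2
2, 65, 4096, 0
"""

def parse_gpu_health(csv_text, max_temp=85, max_ecc=1024):
    """Return (gpu_index, reason) pairs for GPUs needing attention."""
    flagged = []
    for line in csv_text.strip().splitlines():
        idx, temp, ecc, xid = (int(x) for x in line.split(","))
        if xid > 0:
            flagged.append((idx, "xid"))      # driver-reported Xid errors: investigate first
        elif temp > max_temp:
            flagged.append((idx, "thermal"))  # likely airflow or heatsink issue
        elif ecc > max_ecc:
            flagged.append((idx, "ecc"))      # memory degradation candidate
    return flagged

print(parse_gpu_health(CSV))  # [(1, 'xid'), (2, 'ecc')]
```

A production monitor would export these flags as metrics (e.g. to Prometheus) rather than printing, but the ordering of checks, hard errors before soft thresholds, is the part that carries over.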

Jan 7, 2026
Crusoe
Full-time|$208K/yr - $253K/yr|On-site|San Francisco, CA - US

At Crusoe, our mission is to advance the availability of energy and intelligence. We are building the technology for a future where people can harness AI ambitiously without compromising on scale, speed, or sustainability. Join us in revolutionizing AI with sustainable infrastructure: you will be at the forefront of innovation, making a significant impact alongside a team shaping responsible, transformative cloud infrastructure.

About This Role:
We are looking for a Hardware Production/Sustaining Engineer to strengthen Crusoe's Hardware Systems Engineering team. The position fills critical skill gaps in debugging, validation, and production support for high-performance computing systems. You will manage the entire hardware lifecycle, from prototype bring-up to large-scale production, focusing on automation, deep troubleshooting, and reliability across Crusoe Cloud's GPU- and CPU-based infrastructure. You will collaborate with cross-functional teams to support, debug, and improve hardware platforms at scale, with particular focus on PCIe, InfiniBand, and NVMe/storage, identified as key areas for expanded expertise. Your contributions will directly influence Crusoe's ability to deploy and maintain sustainable, AI-driven computing systems with exceptional performance and reliability.

Your Responsibilities Will Include:
- Leading the complete hardware development and sustaining lifecycle: feasibility studies, bring-up, validation, deployment, and ongoing production support.
- Creating and maintaining automation frameworks and scripts for hardware testing, diagnostics, and continual reliability improvements.
- Executing in-depth troubleshooting and debugging across:
  - PCIe (link training, topology, and performance issues)
  - InfiniBand (fabric debugging, throughput, and connectivity challenges)
  - NVMe/storage (performance bottlenecks, firmware interactions, and failure analysis)
- Performing extensive system validation and characterization for GPU, CPU, and high-performance computing platforms.
- Assisting in end-to-end integration and solution testing so Crusoe Cloud products meet performance, reliability, and scalability standards.
- Collaborating with mechanical, thermal, firmware, software, and manufacturing teams to troubleshoot and enhance system performance.
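One concrete form the PCIe link-training debugging mentioned above takes is comparing a device's negotiated link (`LnkSta` in `lspci -vv` output) against its capability (`LnkCap`): a card capable of 16GT/s x16 that trained at 8GT/s x8 is silently running at a fraction of its bandwidth. A minimal sketch over fabricated sample text:

```python
# Sketch: detect PCIe link degradation from lspci-style LnkCap/LnkSta lines.
# The SAMPLE text is fabricated; a real check would iterate over `lspci -vv` output.
import re

SAMPLE = """\
LnkCap: Port #0, Speed 16GT/s, Width x16
LnkSta: Speed 8GT/s, Width x8
"""

def link_degraded(lspci_text):
    """True if the negotiated link (LnkSta) falls short of capability (LnkCap)."""
    cap = re.search(r"LnkCap:.*Speed (\S+), Width x(\d+)", lspci_text)
    sta = re.search(r"LnkSta:.*Speed (\S+), Width x(\d+)", lspci_text)
    speed_ok = cap.group(1) == sta.group(1)          # naive string compare (sketch only)
    width_ok = int(cap.group(2)) <= int(sta.group(2))
    return not (speed_ok and width_ok)

print(link_degraded(SAMPLE))  # True: trained at 8GT/s x8 instead of 16GT/s x16
```

In a fleet this check typically runs per device after every reboot, since links can retrain downgraded after marginal seating, riser, or signal-integrity faults.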

Feb 19, 2026
Samsara
Full-time|$124.1K/yr - $208.5K/yr|Hybrid|San Francisco - SF9

Who We Are
Samsara (NYSE: IOT) is the pioneer of the Connected Operations™ Cloud, a platform that lets businesses reliant on physical operations harness Internet of Things (IoT) data. We provide actionable insights that improve safety, efficiency, and sustainability across vital industries such as agriculture, construction, transportation, and manufacturing, sectors that represent over 40% of global GDP. Joining Samsara means helping define the future of physical operations, working on solutions including Video-Based Safety, Vehicle Telematics, and Equipment Monitoring in a supportive environment that fosters innovation and long-term impact.

About the Role:
We are seeking a Senior Hardware Systems Engineer for our rapidly expanding product line. You will lead the electrical engineering side of product architecture and design, grounded in feasibility, design, and cost analyses, covering component selection, thermal management, and antenna design. You will leverage extensive telemetry and direct customer insights to refine our product designs. Collaborating closely with Product Management, Firmware, and Hardware leadership, you will influence key engineering decisions while mentoring fellow engineers, and you will work with our US and Taiwan EE teams as well as our Supply Chain and laboratory resources.

This role is hybrid, requiring three days a week in our San Francisco, CA office with two days remote. Travel may be required up to 25% of the time, and proximity to an international airport is essential. We offer relocation assistance and welcome candidates from across the U.S. who are willing to relocate to the Bay Area.

Feb 11, 2026
OpenAI
Full-time|Hybrid|San Francisco

About Our Team
At OpenAI, our Hardware organization develops silicon and system-level solutions tailored to the demands of advanced AI workloads. We build the next generation of AI-native silicon, collaborating closely with software and research partners to co-design hardware that is tightly integrated with AI models. Beyond delivering production-grade silicon for OpenAI's supercomputing infrastructure, we build custom design tools and methodologies for AI-specific acceleration and optimization.

About This Role
As a member of our hardware optimization and co-design team, you will help co-design future hardware from multiple vendors for programmability and high performance. You will partner with our kernel, compiler, and machine learning engineers to understand their requirements across ML techniques, algorithms, numerical approximations, programming expressivity, and compiler optimizations, and you will advocate for these constraints to shape future hardware architectures for efficient training and inference. If you are passionate about distributing large language models efficiently across devices, optimizing system-wide networking bottlenecks, and customizing a platform's compute pipeline and memory hierarchy while simulating workloads at multiple abstraction levels, this role is for you.

This position is based in San Francisco, CA, on a hybrid model of three days in the office each week, with relocation assistance available for new hires.

Key Responsibilities:
- Co-design future hardware with vendors, focusing on programmability and performance.
- Support hardware vendors in developing optimal kernels and integrating support into our compiler.
- Generate performance estimates for critical kernels across diverse hardware configurations, informing decisions on compute-core and memory-hierarchy features.
- Create system performance models at various abstraction levels and run analyses that guide scaling and front-end networking decisions.
- Engage with machine learning, kernel, and compiler engineers to align on high-performance accelerator needs.
- Coordinate and communicate with internal and external partners.
- Shape the roadmap for hardware partners to optimize their products for our AI capabilities.

Feb 11, 2026
OpenAI
Full-time|On-site|San Francisco

About the Team
The Scaling team at OpenAI forms the architectural and engineering foundation of our infrastructure. We design and build the systems that deploy and operate next-generation AI models, spanning system software, networking, platform architecture, fleet-level monitoring, and performance.

About the Role
We are seeking a software engineer who can turn early-stage, sometimes chaotic, pre-production hardware into stable, operational systems. You will be pivotal in bootstrapping, imaging, Kubernetes control-plane integration, and observability. The role bridges early hardware bring-up, provisioning automation, fleet and cluster management, and integration with lab or cloud services, converting new SKUs into usable capacity for our internal stakeholders.

Key Responsibilities
- Own the end-to-end bring-up and bootstrapping of new systems and compute nodes, from bare metal or early access in lab, production, or cloud settings to schedulable fleet capacity, including image building, user-data/configuration, cluster joining, and readiness gates.
- Develop and maintain top-tier golden-image and provisioning workflows across lab and production environments, working with partner-provided base images while ensuring OS/version compatibility.
- Collaborate with partner teams to integrate nodes into our fleet infrastructure and Infrastructure as Code (IaC) pipelines (Terraform, Chef, etc.), ensuring cloud resources align with our internal lifecycle expectations.
- Work closely with scheduling and platform owners so new hardware is accessible and properly scheduled, covering pool definitions, network connectivity, routing, admission controls, and platform-specific requirements.
- Ensure registration and inventory accuracy, tracking nodes and their metadata end to end.
- Partner with teams to establish baseline health and telemetry monitoring for bring-up, including critical health signals, pass/fail assessments, and automated reporting for initial ramp decisions.
- Troubleshoot across layers: PXE/boot-loader, UEFI/BIOS, BMC, OS bring-up, NIC/network accessibility, kubelet/control-plane connectivity, storage limitations, and early lab/rack scenarios.
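The "readiness gates" idea in the responsibilities above is essentially an ordered checklist a node must clear before it counts as schedulable capacity. A minimal sketch, where the gate names and their ordering are assumptions chosen for illustration:

```python
# Sketch: ordered readiness gates for node bring-up. Later gates presuppose
# earlier ones (e.g. kubelet cannot join before the OS is imaged and the NIC is up).

GATES = ["bios_settings", "os_imaged", "nic_up", "kubelet_joined", "burn_in"]

def readiness(node_state):
    """Return (schedulable, first_failed_gate), evaluating gates in order."""
    for gate in GATES:
        if not node_state.get(gate, False):
            return False, gate
    return True, None

node = {"bios_settings": True, "os_imaged": True, "nic_up": True,
        "kubelet_joined": False, "burn_in": False}
print(readiness(node))  # (False, 'kubelet_joined')
```

Reporting the first failed gate, rather than a bare pass/fail, is what makes ramp dashboards actionable: it tells the operator which bring-up stage to debug.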

Mar 27, 2026
Sciforium
Full-time|On-site|San Francisco

At Sciforium, we are at the forefront of AI infrastructure, building next-generation multimodal AI models and a proprietary high-efficiency serving platform. With substantial funding and direct collaboration from AMD, supported by their engineers, our team is rapidly expanding to build the complete stack that powers cutting-edge AI models and real-time applications.

About the Role
We are looking for a GPU Kernel Engineer eager to squeeze maximum performance from modern accelerators. You will design and optimize the custom GPU kernels that drive our large-scale AI systems, working across the hardware-software stack from low-level kernel development to integrating optimized operations into high-level machine learning frameworks for large-scale training and inference. The position suits someone who excels at the intersection of GPU programming, systems engineering, and state-of-the-art AI workloads, and who wants to contribute significantly to the efficiency and scalability of our machine learning platform.

Key Responsibilities
- Develop, implement, and optimize custom GPU kernels using C++, PTX, CUDA, ROCm, Triton, and/or JAX Pallas.
- Profile and tune the end-to-end performance of machine learning operations, particularly for large-scale LLM training and inference.
- Integrate low-level GPU kernels into frameworks such as PyTorch, JAX, and our internal runtimes.
- Create performance models, pinpoint bottlenecks, and deliver kernel-level improvements that materially accelerate AI workloads.
- Collaborate with machine learning researchers, distributed systems engineers, and model-serving teams to optimize computational performance across the entire stack.
- Engage closely with hardware vendors (NVIDIA/AMD) and stay current on GPU architecture and compiler/toolchain advancements.
- Contribute to tools, documentation, benchmarking suites, and testing frameworks that ensure correctness and performance reproducibility.

Must-Haves
- 5+ years of industry or research experience in GPU kernel development or high-performance computing.
- Bachelor's, Master's, or PhD in Computer Science, Computer Engineering, Electrical Engineering, Applied Mathematics, or a related discipline.
- Strong programming proficiency in C++ and Python, and familiarity with machine learning frameworks.
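The "performance models" responsibility above often begins with a back-of-envelope roofline estimate: is a kernel limited by compute throughput or by memory bandwidth? A tiny sketch with illustrative, assumed hardware numbers (not any specific NVIDIA/AMD part):

```python
# Roofline sketch: classify a kernel as compute- or memory-bound.
# PEAK numbers are hypothetical placeholders for a modern accelerator.

PEAK_TFLOPS = 200.0    # assumed peak FP16 throughput, TFLOP/s
PEAK_BW_GBS = 3000.0   # assumed HBM bandwidth, GB/s

def roofline_bound(flops, bytes_moved):
    """Compare arithmetic intensity (FLOP/byte) to the machine balance point."""
    intensity = flops / bytes_moved
    ridge = (PEAK_TFLOPS * 1e12) / (PEAK_BW_GBS * 1e9)  # ~67 FLOP/byte here
    return "compute-bound" if intensity >= ridge else "memory-bound"

# Elementwise add over fp16: 1 FLOP per element, 6 bytes moved (2 reads + 1 write).
print(roofline_bound(flops=1e6, bytes_moved=6e6))  # memory-bound
```

This is why elementwise and normalization kernels are tuned for bandwidth (vectorized loads, fusion) while large matmuls are tuned for compute (tensor-core tiling): the model tells you which ceiling you are under before you profile a single line.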

Dec 6, 2025
OpenAI
Full-time|Hybrid|San Francisco

About Our Team
At OpenAI, our Hardware team develops silicon and comprehensive system solutions tailored to the needs of advanced AI workloads. We craft the next generation of AI-native silicon, collaborating closely with software engineers and research teams so our hardware integrates seamlessly with AI models. Beyond creating production-grade silicon for OpenAI's supercomputing infrastructure, we build custom design tools and methodologies that enable hardware optimized specifically for AI.

About the Role
As a Software Engineer on the Scaling team, you will design and optimize the foundational stack that manages computation and data flow across OpenAI's supercomputing clusters. Your responsibilities include crafting high-performance runtimes, developing custom kernels, enhancing compiler infrastructure, and building scalable simulation systems to validate and optimize distributed training workloads. You will work at the intersection of systems programming, machine learning infrastructure, and high-performance computing, creating intuitive developer APIs alongside highly efficient runtime systems, and balancing usability and introspection against stability and performance across our dynamic hardware landscape.

This role is based in San Francisco, CA, with a hybrid work model (three days in-office per week). Relocation assistance is provided.

Key Responsibilities:
- Design and implement APIs and runtime components that efficiently manage computation and data movement for diverse ML workloads.
- Enhance compiler infrastructure with optimizations and compiler passes that accommodate evolving hardware.
- Engineer and refine compute and data kernels for precision, high performance, and compatibility across simulation and production settings.
- Analyze and optimize system bottlenecks: I/O, memory hierarchy, and interconnects at local and distributed scales.
- Build simulation infrastructure to validate runtime behavior, test changes to the training stack, and support early hardware and systems development.
- Rapidly deploy runtime and compiler updates to new supercomputing builds in close collaboration with hardware and research teams.
- Work across a varied tech stack, primarily Rust and Python, with the chance to influence architectural decisions in the training framework.

Oct 31, 2025
Crusoe
Full-time|$172K/yr - $209K/yr|On-site|San Francisco, CA - US

At Crusoe, our mission is to propel the availability of energy and intelligence. We are designing the engine for a future where people can innovate ambitiously with AI while upholding standards of scale, speed, and sustainability. Join the AI revolution powered by sustainable technology at Crusoe: you will spearhead significant innovations, make a lasting impact, and collaborate with a team leading the charge in responsible, transformative cloud infrastructure.

About This Role:
We are looking for a Hardware Production / Sustaining Engineer to strengthen Crusoe's Hardware Systems Engineering team and address critical skill gaps in debugging, validation, and production support of high-performance computing systems. You will oversee the entire hardware lifecycle, from prototype bring-up to mass production, while driving automation, resolving intricate issues, and ensuring reliability across Crusoe Cloud's GPU- and CPU-based infrastructure. You will collaborate closely with cross-functional teams to support, debug, and optimize hardware platforms at scale, with a specific focus on PCIe, InfiniBand, and NVMe/storage, recognized as vital areas for enhanced expertise. Your contributions will directly shape Crusoe's ability to deploy and manage sustainable, AI-first computing systems with world-class performance and reliability.

What You'll Be Working On:
- Lead the entire hardware development and sustaining lifecycle: feasibility, bring-up, validation, deployment, and ongoing production support.
- Create and maintain scripting and automation frameworks for hardware testing, diagnostics, and continuous reliability improvements.
- Guide deep troubleshooting and debugging across:
  - PCIe (link training, topology, performance issues)
  - InfiniBand (fabric debugging, throughput, connectivity issues)
  - NVMe/storage (performance bottlenecks, firmware interactions, failure analysis)
- Perform thorough system validation and characterization for GPU, CPU, and high-performance computing platforms.
- Assist in end-to-end integration and solution testing so Crusoe Cloud products meet performance, reliability, and scalability standards.
- Work in tandem with mechanical, thermal, firmware, software, and manufacturing teams to resolve system-level challenges.

Feb 19, 2026
Gridware
Full-time|On-site|San Francisco, CA

About Gridware
Gridware is a technology firm headquartered in San Francisco, committed to safeguarding and enhancing the reliability of the electrical grid. We pioneered an approach to grid management called Active Grid Response (AGR), which monitors the electrical, physical, and environmental factors influencing grid safety and reliability. The AGR platform uses high-precision sensors to identify potential issues early, enabling proactive maintenance and fault resolution; the goal is better safety, fewer outages, and optimal grid performance. We are backed by prominent climate-tech and Silicon Valley investors. To learn more, visit www.Gridware.io.

About the Role
We are seeking a Senior Hardware Reliability Engineer to lead reliability testing, analysis, and lifetime modeling of outdoor electronic assemblies. The role concentrates on the electronic components of our products, working closely with our mechanically focused Reliability Engineer and with the broader hardware and cross-functional teams.

Feb 21, 2026
Echo Neurotechnologies

Senior Hardware Test Engineer

Full-time|On-site|San Francisco

Company Overview
Echo Neurotechnologies is a pioneering startup specializing in Brain-Computer Interface (BCI) technology. We push the boundaries of innovation through state-of-the-art hardware engineering and artificial intelligence, building transformative technologies that empower individuals with disabilities and enhance their autonomy and quality of life.

Team Culture
Join a dedicated team of passionate, skilled professionals. In our dynamic early-stage environment, you will influence key decisions with lasting impact. We prioritize continuous learning and development and promote cross-functional collaboration where your input is essential to our collective success.

Role Overview
We are looking for a seasoned Senior Hardware Test Engineer to validate our custom Echo hardware systems. You will lead testing of our specialized hardware devices and subsystems while developing and implementing custom test systems.

Primary Responsibilities
- Conduct in-house design verification tests
- Coordinate testing with external laboratories
- Collaborate with the engineering team to create tailored testing solutions
- Work alongside design engineers to characterize unique hardware
- Prepare tests for vendor transfer

Mar 4, 2026
Multiply Labs
Full-time|On-site|San Francisco

About Multiply Labs
Multiply Labs is a startup in San Francisco, California, backed by renowned technology and life-sciences investors such as Casdin Capital, Lux Capital, and Y Combinator. Our goal is to build the world's leading robotic systems and use them to make groundbreaking life-saving therapies accessible to everyone.

We are transforming cell-therapy manufacturing with advanced robotic systems that automate and scale production of these crucial treatments. Our robots let biopharma companies produce cell therapies efficiently without overhauling their existing processes, minimizing regulatory hurdles and risk. Compared with traditional methods, which are labor-intensive and costly (often exceeding $1M per patient), our robotic approach aims to make these vital treatments more affordable and reachable for those who need them. To learn more and see our robots in action, visit www.multiplylabs.com and follow us on LinkedIn.

Position Overview
We are looking for a Hardware Reliability Engineer to join Multiply Labs' Reliability Engineering team. As a founding member, you will collaborate closely with the Hardware Product and Systems Integration teams to improve our designs throughout the entire development lifecycle, from initial prototypes to fully deployed GMP production systems. Your work directly supports the delivery of life-saving therapies by ensuring our robots operate seamlessly in a high-stakes biotech environment.

Jan 28, 2026
Tacit
Full-time|$210K/yr - $260K/yr|On-site|San Francisco

Tacit is an early-stage deep-tech startup in San Francisco, backed by General Catalyst, Khosla Ventures, and Greylock Partners. The team draws on backgrounds from Stanford, BrainGate, Oculus, and Tesla. The company is developing new hardware to advance human-computer interaction, with project details still confidential. The focus is on solving complex engineering challenges that could change how people use technology.

Role overview
The Head of Hardware will lead Tacit's transition from research prototypes to a consumer-ready product. This senior leader shapes the technical architecture and builds the team responsible for delivering reliable hardware at scale. The position works directly with company leadership, driving hardware strategy, resource planning, and execution. The main objective is to establish a high-performing hardware group that consistently ships quality consumer products.

What you will do
- Lead the move from research prototypes to a production-ready consumer hardware platform.
- Own the hardware roadmap and system architecture, overseeing progress from early builds to mass production.
- Design the hardware organization, define team roles, and set hiring priorities in collaboration with leadership.
- Establish and uphold standards for product readiness, with attention to quality, reliability, manufacturability, and testing.
- Shape and execute the hardware hiring strategy, including role definitions, sequencing, and technical requirements.
- Build and lead a multidisciplinary hardware team spanning electrical, mechanical, RF, firmware, testing, and manufacturing.
- Encourage open technical discussion, clear ownership, and a product-driven approach within the team.
- Make key system-level trade-offs across performance, power, cost, reliability, and timelines, balancing immediate needs and long-term growth.
- Enhance design quality through DFM/DFA, manufacturing test planning, and strong quality processes.
- Collaborate closely with product, industrial design, and software/ML teams to translate product requirements into hardware specifications.

Apr 27, 2026
Apply
companyFlux logo
Full-time|On-site|San Francisco Office

Why Join Flux?
At Flux, we are redefining hardware development by introducing the pioneering AI Hardware Engineer. Our mission is to democratize cutting-edge hardware creation and transform the global landscape of electronics design and manufacturing.

Your Role
We are seeking a seasoned leader in electrical engineering to spearhead our hardware design practices and principles. You will play a crucial role in enhancing our ECAD tool's capabilities and ensuring the integrity of our AI Hardware Engineer's knowledge and design strategies. Collaborating closely with product leadership, you will help shape and prioritize the advancement of AI capabilities, evaluation frameworks, and scalable strategies that empower our community to execute successful PCB projects. This position offers the opportunity to significantly expand our Hardware Design function and team.

This is a high-impact, multidisciplinary role ideal for someone with a proven track record of developing complex electronics for mass production (such as smartphones, wearables, high-performance computing devices, networking equipment, and robotics) who enjoys translating real-world engineering challenges into innovative tools and AI functionalities.

Key Responsibilities
Lead ECAD Feature Development – Define product requirements for essential ECAD functionalities, including placement/routing, constraints management, stackup, impedance control, DRC/DFM, manufacturing outputs, and prototyping workflows. Collaborate on specifications and validate through internal testing, beta programs, and comprehensive user feedback.
Collaborate with AI/ML Teams – Educate the world's first AI hardware engineer on effective product design practices. Establish best practices, guidelines, and evaluation frameworks; curate datasets; assess AI-generated designs; and cultivate the AI expertise needed for hardware that works on the first production run, avoiding common pitfalls.
Grow the Hardware Design Team – Start as an active manager overseeing two team members, progressively refining the team's responsibilities and scaling it to create reference designs, demos, product specifications, training materials, and feedback mechanisms.
Prototype Real Hardware – Oversee rapid prototyping (from schematic to realization) to showcase product and AI capabilities. Instrument designs for signal integrity, power integrity, and thermal validation; use the data to close the feedback loop.
Improve User Experience – Own and improve current user experience metrics, feedback loops, and user engagement processes to inform our iterative product development.

Jan 21, 2026
Apply
companyPhysical Intelligence logo
Full-time|On-site|San Francisco

Physical Intelligence seeks a Hardware Systems Engineering Intern to support its core hardware team in San Francisco. This group maintains the infrastructure behind a robotic fleet that performs real-world tasks, from washing dishes for hours to brewing coffee in both warehouse and unpredictable outdoor settings.

Role overview
The hardware team operates at the intersection of mechanical, electrical, and systems engineering. Team members work closely with software, controls, and manufacturing engineers to transition robots from prototypes into production. Daily work includes developing and running test protocols, troubleshooting failures in the field, and building systems that ensure reliability in a range of environments.

What you will do
Cross-disciplinary problem solving: Address challenges that combine mechanical, electrical, and control systems. Analyze ambiguous issues, identify root causes, and design practical fixes.
Reliability analysis: Collect data and conduct reliability studies to spot failure trends, monitor system uptime, and contribute to investigations using methods such as RCCA and FMEA.
Failure tracking: Build failure Pareto charts to highlight key failure causes and timing. Work with engineers across disciplines to monitor system performance throughout different builds and deployments.
Tooling and process design: Design and construct tools, jigs, and processes that improve speed, safety, and reliability for the robot fleet.
Test protocols: Execute and refine testing procedures for new hardware, field repairs, and post-service checks.
Build and production support: Assist with hardware builds, inventory tracking, materials handling, and vendor coordination.
Configuration and serialization: Help implement configuration, serialization, and test tracking systems to streamline service and replacement processes.

Requirements
Pursuing or completed a Bachelor's degree in Mechanical or Mechatronics Engineering.
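The failure-tracking work described above centers on Pareto analysis: ranking failure causes so the few that drive most downtime stand out. A minimal sketch of that calculation, with an entirely hypothetical failure log and function name:

```python
from collections import Counter

def pareto(failures):
    """Rank failure causes by count, most frequent first.

    Returns (cause, count, cumulative_fraction) tuples; the cumulative
    fraction shows what share of all failures the top causes explain.
    """
    counts = Counter(failures).most_common()
    total = sum(n for _, n in counts)
    ranked, running = [], 0
    for cause, n in counts:
        running += n
        ranked.append((cause, n, running / total))
    return ranked

# Hypothetical field-failure log for a small robot fleet
log = ["gripper jam", "gripper jam", "cable wear", "gripper jam",
       "sensor drift", "cable wear", "gripper jam"]
for cause, n, cum in pareto(log):
    print(f"{cause}: {n} ({cum:.0%} cumulative)")
```

Engineers would typically plot the counts as descending bars with the cumulative fraction as an overlaid line; the table form above carries the same information.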

Apr 24, 2026
Apply
companyOpenAI logo
Full-time|On-site|San Francisco

About the Team
At OpenAI, our Hardware organization is at the forefront of developing silicon and system-level solutions tailored specifically for the demanding requirements of advanced AI workloads. Our team is dedicated to pioneering the next generation of AI-native silicon, collaborating intimately with software and research partners to design hardware that is tightly integrated with AI models. Beyond delivering high-quality silicon for OpenAI's supercomputing infrastructure, we innovate by creating custom design tools and methodologies that propel hardware solutions optimized for AI.

About the Role
In this pivotal role, you will enhance the tooling ecosystem that hardware engineers depend on daily, encompassing hardware compilers, IR transformations, simulation, debugging, and automation infrastructure. Your contributions will bridge software engineering, compiler concepts, and practical hardware workflows, significantly influencing the speed and efficiency of designing next-generation AI systems. Collaborating closely with architects, RTL designers, and verification engineers, you will translate real engineering challenges into sustainable, scalable tooling solutions.

Key Responsibilities
Develop and enhance software tools to increase the efficiency of hardware teams: compilation, IR transformations, RTL generation, simulation, debugging, and automation.
Extend and integrate hardware compiler stacks (including frontends, IR passes, lowering, scheduling, and code generation to Verilog/SystemVerilog) and connect these with actual design workflows.
Enhance developer experience and system reliability: focus on reproducible builds, improved error messaging, quicker iteration cycles, and robust CI and regression infrastructure.
Collaborate with designers and verification engineers to transform identified pain points into effective solutions.
Engage with RTL as required: analyze and interpret Verilog/SystemVerilog to troubleshoot issues, validate tool outputs, and enhance debuggability.
Be prepared to delve deeply into the stack when necessary, including gate-level perspectives, synthesis outcomes, and implementation artifacts.
Support PPA optimization efforts by creating analysis and automation around area, timing, and power trade-offs, while improving the tools that influence these aspects.

Ideal Candidate Profile
Proven experience in developing and maintaining software (through projects, internships, research, open-source contributions, or similar).

Mar 2, 2026
Apply
companyRocky Talkie logo
Full-time|On-site|San Francisco, California, United States

The Role
Rocky Talkie is on the lookout for an innovative and entrepreneurial engineering leader to spearhead our hardware engineering initiatives. In this impactful position, you will be at the forefront of designing, developing, and manufacturing products that align with our mission to enhance safety in adventure sports while revitalizing the outdated two-way radio industry. Your hands-on leadership will guide our team through the complete product development lifecycle, ensuring that we maintain our commitment to simplicity, durability, and activity-centric design principles.

As a pivotal member of our leadership team, you will collaborate closely with engineering, product, and development partners to translate our product vision into reality. We are in search of a passionate and experienced individual with a strong track record of leading consumer electronics projects from initial concept to mass production. If you thrive in a small team environment and enjoy getting involved in technical work, this role is an excellent fit for you. Rocky Talkie is poised for growth, and we celebrate and reward excellence.

Location: 4 days in person (SF) per week

Mar 27, 2026
Apply
companyAtomic Semi logo
Full-time|$120K/yr - $185K/yr|On-site|San Francisco Office

Join Our Innovative Team at Atomic Semi
At Atomic Semi, we are pioneering the future of semiconductor manufacturing with our state-of-the-art, compact fabrication facilities. Our approach leverages current technologies and innovative simplifications, enabling us to create our own tools for rapid iteration and enhancement. We are assembling a select group of exceptional engineers (mechanical, electrical, hardware, computer, and process specialists) who will take ownership of the entire stack, from atomic structures to architectural designs. Our optimistic team is dedicated to advancing technology boundaries. We believe that smaller, faster, and self-built solutions are the key to success.

Our facility is equipped with 3D printers, a variety of microscopes, e-beam writers, and general fabrication equipment. If there's something we need, we'll innovate to create it. Founded by Sam Zeloof and Jim Keller, our team combines practical chip-making experience with decades of industry leadership.

Role Overview
As an Electronic Hardware Design Engineer, you will collaborate with a talented team to design cutting-edge lithography systems, deposition tools, vacuum chambers, and high-precision motion control systems. You will tackle challenges, prototype innovative solutions, and bring your designs to production, contributing to transformative advancements in integrated circuit manufacturing. Your projects may involve high-voltage low-noise converters, motor drivers, attofarad sensors, and millikelvin temperature controllers, all in a single initiative.

Mar 13, 2026
