Hardware Reliability Engineer
Experience Level
Entry Level
Qualifications
About Multiply Labs
Multiply Labs is at the forefront of technology in the life sciences, dedicated to making advanced therapies available and affordable through state-of-the-art robotic innovations.
Similar jobs
About Our Team
At OpenAI, our Hardware organization is pioneering the development of cutting-edge silicon and system-level solutions tailored to meet the distinctive needs of advanced AI workloads. We are dedicated to building the next generation of AI silicon, collaborating closely with software engineers and research partners to co-design hardware that integrates seamlessly with our AI models. Our mission includes not only delivering high-quality, production-grade silicon for OpenAI's supercomputing infrastructure but also creating custom design tools and methodologies that foster innovation and enable hardware optimized specifically for AI applications.

About the Role
We are looking for a talented Research Hardware Co-Design Engineer to operate at the intersection of model research and silicon/system architecture. In this role, you will play a critical part in shaping the numerics, architecture, and technology strategy for the future of OpenAI's silicon, in collaboration with both Research and Hardware teams. Your responsibilities will include diagnosing discrepancies between theoretical performance and real-world measurements, writing quantization kernels, assessing the risks associated with numerics through model evaluations, quantifying system architecture trade-offs, and implementing innovative numeric RTL. This is a hands-on position for people who are passionate about tackling challenging problems, seeking practical solutions, and driving them to production. Strong prioritization and transparent communication skills are vital for success in this role.

Location: San Francisco, CA (Hybrid: 3 days/week onsite). Relocation assistance available.

Key Responsibilities:
Enhance our roofline simulator to monitor evolving workloads and deliver analyses that quantify the impact of architectural decisions, supporting technology exploration.
Identify and resolve discrepancies between performance simulations and actual measurements; effectively communicate root causes, bottlenecks, and incorrect assumptions.
Develop emulation kernels for low-precision numerics and lossy compression techniques, equipping Research with the insights needed to balance efficiency with model quality.
Prototype numeric modules by advancing RTL through synthesis; either hand off innovative numeric solutions cleanly or occasionally take ownership of an RTL module from start to finish.
Proactively engage with new ML workloads, prototype them using rooflines and/or functional simulations, and initiate evaluations of new opportunities or risks.
Gain a holistic understanding of the transition from ML science to hardware optimization, breaking down this comprehensive objective into actionable short-term deliverables.
Foster collaborative relationships across diverse teams with varying goals and expertise, ensuring that progress remains unimpeded.
Clearly articulate design trade-offs with explicit assumptions and rationale.
Pluralis Research
Overview
Pluralis Research is at the forefront of Protocol Learning, innovating a decentralized approach to train and deploy AI models that democratizes access beyond just well-funded corporations. By aggregating computational resources from diverse participants, we incentivize collaboration while safeguarding against centralized control of model weights, paving the way for a truly open and cooperative environment for advanced AI.

We are seeking a talented Machine Learning Training Platform Engineer to design, develop, and scale the core infrastructure that powers our decentralized ML training platform. In this role, you will have ownership over essential systems including infrastructure orchestration, distributed computing, and service integration, facilitating ongoing experimentation and large-scale model training.

Responsibilities
Multi-Cloud Infrastructure: Create resource management systems that provision and orchestrate computing resources across AWS, GCP, and Azure using infrastructure-as-code tools like Pulumi or Terraform. Manage dynamic scaling, state synchronization, and concurrent operations across hundreds of diverse nodes.
Distributed Training Systems: Design fault-tolerant infrastructure for distributed machine learning, including GPU clusters, the NVIDIA runtime, S3 checkpointing, large-dataset management and streaming, health monitoring, and resilient retry strategies.
Real-World Networking: Develop systems that simulate and manage real-world network conditions (such as bandwidth shaping, latency injection, and packet loss) while accommodating dynamic node churn and ensuring efficient data flow across workers with varying connectivity, as our training occurs on consumer nodes and non-co-located infrastructure.
Pluralis Research
Overview
Pluralis Research is at the forefront of innovation in Protocol Learning, specializing in the collaborative training of foundational models. Our approach ensures that no single participant ever has or can obtain a complete version of the model. This initiative aims to create community-driven, collectively owned frontier models that operate on self-sustaining economic principles.

We are seeking experienced Senior or Staff Machine Learning Engineers with over 5 years of expertise in distributed systems and large-scale machine learning training. In this role, you will design and implement a groundbreaking substrate for training distributed ML models that function effectively over consumer-grade internet connections.
Scale AI, Inc.
Join Scale AI's ML platform team (RLXF) as a Machine Learning Research Engineer, where you will play a pivotal role in developing our advanced distributed framework for training and inference of large language models. This platform is vital for enabling machine learning engineers, researchers, data scientists, and operators to conduct rapid and automated training, as well as evaluation of LLMs and data quality.

At Scale, we occupy a unique position in the AI landscape, serving as an essential provider of training and evaluation data along with comprehensive solutions for the entire ML lifecycle. You will collaborate closely with Scale's ML teams and researchers to enhance the foundational platform that underpins our ML research and development initiatives. Your contributions will be crucial in optimizing the platform to support the next generation of LLM training, inference, and data curation.

If you are passionate about driving the future of AI through groundbreaking innovations, we want to hear from you!
OpenAI
About the Team
Join OpenAI’s innovative Hardware organization, where we are dedicated to developing state-of-the-art silicon and system-level solutions tailored for the complex demands of advanced AI workloads. Our team is at the forefront of crafting the next generation of AI-native silicon, collaborating closely with software and research partners to create hardware that seamlessly integrates with AI models. We not only deliver production-grade silicon for OpenAI’s supercomputing infrastructure but also engineer custom design tools and methodologies that foster innovation and optimize hardware specifically for AI applications.

About the Role
We are seeking a talented RTL Engineer to take charge of designing and implementing critical compute, memory, and interconnect components for our bespoke AI accelerator. You will engage closely with architecture, verification, physical design, and machine learning engineers to translate AI workloads into highly efficient hardware structures. This is an exciting hands-on design role that offers significant ownership over definition, modeling, and implementation.

This position is based in San Francisco, CA, and follows a hybrid work model requiring 3 days in the office weekly. We provide relocation assistance for new employees.

Responsibilities
Develop clean, production-quality microarchitecture and RTL for key accelerator subsystems.
Contribute to architectural studies, including performance modeling and feasibility assessments.
Collaborate with software, simulator, and compiler teams to ensure effective hardware/software co-design and optimal workload integration.
Work with design verification (DV) and physical design (PD) teams to guarantee functional correctness, timing closure, area/power targets, and smooth integration.
Construct and evaluate performance and functional models to validate design intent.
Engage in design reviews, documentation, and support throughout the entire silicon lifecycle.

You Might Excel In This Role If You Have:
Advanced research or industry experience in computer architecture and AI/ML hardware-software co-design, encompassing workload analysis, dataflow mapping, or optimization of accelerator algorithms.
Proficiency in writing production-quality RTL in Verilog/SystemVerilog, with a proven track record of successfully delivering complex blocks to tape-out.
Experience in developing hardware design models or architectural simulators, preferably for AI/ML or high-performance computing systems.
Familiarity with industry-standard design tools (lint, CDC/RDC, synthesis, STA) and methodologies.
Strong analytical skills and a collaborative mindset.
ML/AI Research Engineer - Founding Team at Agentic AI Lab
Location: San Francisco Bay Area
Type: Full-Time
Compensation: Competitive salary + meaningful equity (founding tier)

At fabrion, backed by 8VC, we are assembling a top-tier team dedicated to addressing one of the most pressing infrastructure challenges in the industry.

About the Role
Join us in shaping the future of enterprise AI infrastructure, focusing on agents, retrieval-augmented generation (RAG), knowledge graphs, and multi-tenant governance. As an ML/AI Research Engineer, you will spearhead the design, training, evaluation, and optimization of agent-native AI models. Your work will integrate LLMs, vector search, graph reasoning, and reinforcement learning, establishing the intelligence layer for our enterprise data fabric. This role goes beyond prompt engineering; it encompasses the entire ML lifecycle, from data curation and fine-tuning to thorough evaluation, interpretability, and deployment, all while considering cost-effectiveness, alignment, and agent coordination.

Core Responsibilities
Fine-tune and assess open-source LLMs (e.g., LLaMA 3, Mistral, Falcon, Mixtral) for enterprise applications, leveraging both structured and unstructured data.
Construct and enhance RAG pipelines utilizing LangChain, LangGraph, LlamaIndex, or Dust, integrating with our vector databases and internal knowledge graphs.
Train agent architectures (ReAct, AutoGPT, BabyAGI, OpenAgents) using enterprise task datasets.
Develop embedding-based memory and retrieval chains employing token-efficient chunking strategies.
Create reinforcement learning pipelines to enhance agent behaviors (e.g., RLHF, DPO, PPO).
Establish scalable evaluation harnesses for LLM and agent performance, including synthetic evaluations, trace capture, and explainability tools.
Contribute to model observability, drift detection, error classification, and alignment efforts.
Optimize inference latency and GPU resource utilization across both cloud and on-premises environments.

Desired Experience
Model Training: Deep understanding of machine learning principles and hands-on experience with model training.
Sygaldry Technologies
About Sygaldry Technologies
Sygaldry Technologies develops quantum-accelerated AI servers in San Francisco, focusing on faster AI training and inference. By combining quantum technology with artificial intelligence, the team addresses challenges in computing costs and energy efficiency. Their AI servers integrate multiple qubit types within a fault-tolerant system, aiming for a balance of cost, scalability, and speed. The company values optimism, rigor, and a drive to solve complex problems in physics, engineering, and AI.

Role Overview: ML Infrastructure Engineer
The ML Infrastructure Engineer joins the AI & Algorithms team, which includes research scientists, applied mathematicians, and quantum algorithm specialists. This role centers on building and maintaining the compute infrastructure that powers advanced research. The systems you build will support reliable GPU access, reproducible experiments, and scalable workloads, so researchers can focus on their core work without needing deep cloud expertise. Expect to design and manage compute platforms for a range of tasks, including quantum circuit simulation, large-scale numerical optimization, model training, tensor network contractions, and high-throughput data generation. These workloads span multiple cloud providers and on-premises GPU servers.

Key Responsibilities
Develop compute abstractions for diverse workloads, such as GPU-accelerated simulations, distributed training, high-throughput CPU jobs, and interactive analyses using frameworks like PyTorch and JAX.
Set up infrastructure to support experiment tracking and reproducibility.
Create developer tools that make cloud computing feel local, streamlining environment setup, job submission, monitoring, and artifact management.
Scale experiments from single-GPU prototypes to large, multi-node production runs.
Multi-Cloud GPU Orchestration: Design orchestration strategies for workloads across multiple cloud providers, optimizing job routing for cost, availability, and capability. Monitor and improve cloud spending, keeping track of credit balances, burn rates, and expiration dates.
AfterQuery
Join AfterQuery as an AI/ML Research Intern and immerse yourself in groundbreaking artificial intelligence projects. This internship is designed for exceptional undergraduate and master's students eager to collaborate with our research team on advanced reasoning and agentic models. You will have the opportunity to access specialized datasets and work closely with industry experts, contributing to exciting AI research that could lead to co-authored papers and presentations at prestigious AI conferences.

We invite students currently enrolled in relevant programs to apply. This role requires a commitment of 10 to 40 hours per week, adaptable to the needs of the company.
About Liquid AI
Liquid AI, a pioneering company spun out of MIT CSAIL, is at the forefront of developing general-purpose AI systems that operate efficiently across various platforms, from data center accelerators to on-device hardware. Our commitment to low latency, minimal memory usage, privacy, and reliability allows us to partner with some of the most esteemed enterprises in consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are seeking exceptional talent to join our innovative journey.

The Opportunity
Join our cutting-edge Audio team, where we are developing advanced speech-language models capable of handling Speech-to-Text (STT), Text-to-Speech (TTS), and speech-to-speech tasks within a unified architecture. This pivotal role supports applied audio model development, directly collaborating with the technical lead to deliver production systems that operate on-device under real-time constraints. You will take ownership of key workstreams encompassing data pipelines, evaluation systems, and customer deployments. If you are eager to tackle unique technical challenges within a small, elite team where your contributions are impactful, this is the role for you.

What We're Looking For
We are seeking an individual who:
Builds first, theorizes later: You prioritize shipping working systems over theoretical models; production-grade code is your default.
Owns outcomes end-to-end: You take full responsibility for everything from data pipelines to customer deployments and don't shy away from challenges.
Thrives under constraints: On-device, low-latency, memory-constrained environments motivate you. You view constraints as opportunities for innovative design.
Ramps quickly on new territory: You are comfortable closing knowledge gaps swiftly and actively seek feedback to drive results.

The Work
Develop and scale data pipelines for audio model training, including preprocessing, augmentation, and quality filtering at scale.
Design, implement, and maintain evaluation systems that assess multimodal performance across both internal and public benchmarks.
Fine-tune and adapt audio models to cater to customer-specific use cases, taking charge from requirement gathering through to deployment.
Contribute production code to the core audio repository while collaborating closely with infrastructure and research teams.
Facilitate experimentation under real hardware constraints, transitioning smoothly between customer-focused projects and core development initiatives.
Join Merge Labs, a pioneering research facility dedicated to merging biological and artificial intelligence to enhance human capabilities, agency, and experience. We aim to achieve this by crafting innovative brain-computer interfaces that communicate with the brain at high bandwidth, seamlessly integrate with cutting-edge AI, and prioritize safety and accessibility for all users.

About the Team:
At Merge Labs, we are on a mission to revolutionize brain-computer interfaces by leveraging advancements in synthetic biology, neuroscience, AI, and non-invasive imaging technologies. Our cross-functional data science team is situated at the convergence of computational modeling, neuroscience, and biomolecular engineering. This collaborative unit works closely with wet-lab scientists, automation specialists, and data engineers to develop machine learning frameworks that facilitate rapid molecule discovery and device enhancement.

About the Role:
We are seeking a talented Senior / Principal ML Scientist to architect and scale Bayesian optimization and reinforcement learning frameworks that guide molecular engineering initiatives through iterative design-build-test-learn (DBTL) cycles. You will start with a fresh approach to construct the company's closed-loop optimization infrastructure, establishing the data and modeling foundations that link experiments with these ML frameworks. Over time, you will transition prototypes into operational pipelines, significantly enhancing experimental throughput and discovery success across various biomolecular and neuroengineering domains.

Key Responsibilities:
Develop the scientific and engineering framework for active learning and closed-loop optimization, encompassing data ingestion, ML modeling, and library design.
Collaborate with wet-lab scientists to establish feasible optimization objectives while incorporating domain-specific priors and constraints.
Create prototypes for representation learning and acquisition strategies utilizing both internal and public datasets; benchmark and validate model performance.
Integrate machine learning models with experimental data streams, making them accessible to non-domain experts for broader utilization.
Extend machine learning frameworks to accommodate multi-objective or constrained optimization challenges.
Stay abreast of the latest advancements in Bayesian optimization, active learning, and reinforcement learning, and prototype innovative algorithms to enhance the company's capabilities.
Who We Are
Samsara (NYSE: IOT) is at the forefront of the Connected Operations™ Cloud, a transformative platform that empowers businesses reliant on physical operations to tap into Internet of Things (IoT) data. Our aim is to provide actionable insights that enhance safety, efficiency, and sustainability across vital industries such as agriculture, construction, transportation, and manufacturing. By digitally transforming these sectors, which represent over 40% of global GDP, we are contributing to a more efficient and sustainable economy.

Joining Samsara means being part of a team that is defining the future of physical operations. You will engage in cutting-edge solutions, including Video-Based Safety, Vehicle Telematics, and Equipment Monitoring, within a supportive environment that fosters innovation and long-term impact.

About the Role:
We are seeking a Senior Hardware Systems Engineer to enhance our rapidly expanding product line. Your primary responsibility will involve leading the electrical engineering components of product architecture and design, grounded in comprehensive feasibility, design, and cost analyses. This encompasses critical aspects such as component selection, thermal management, and antenna design. You will leverage extensive telemetry and direct customer insights to inform and refine our product designs. Collaborating closely with Product Management, Firmware, and Hardware leadership, you will influence key engineering decisions while mentoring fellow engineers. The role will also require interaction with our US and Taiwan EE teams, as well as our Supply Chain and laboratory resources, to achieve our project goals effectively.

This role is hybrid, requiring you to be in our San Francisco, CA office three days a week, with the flexibility to work remotely for two days. Travel may be necessary up to 25% of the time, and proximity to an international airport is essential.
We offer relocation assistance for this position and welcome candidates from across the U.S. who are willing to relocate to the Bay Area.
About Our Team
At OpenAI, our Hardware organization is at the forefront of developing cutting-edge silicon and system-level solutions tailored for the specific demands of advanced AI workloads. Our team is dedicated to creating the next generation of AI-native silicon, collaborating closely with software and research partners to co-design hardware that is seamlessly integrated with AI models. We not only deliver production-grade silicon for OpenAI’s supercomputing infrastructure but also innovate custom design tools and methodologies that drive acceleration and optimization specific to AI.

About This Role
As a member of our hardware optimization and co-design team, you will play a crucial role in co-designing future hardware from various vendors, focusing on programmability and high performance. You will partner with our kernel, compiler, and machine learning engineers to understand their distinct requirements concerning ML techniques, algorithms, numerical approximations, programming expressivity, and compiler optimizations. Your advocacy for these constraints will help shape and influence future hardware architectures aimed at efficient training and inference for our models. If you are passionate about efficiently distributing large language models across devices, optimizing system-wide networking bottlenecks, and customizing the compute pipeline and memory hierarchy of hardware platforms while simulating workloads at various abstraction levels, then this opportunity is perfect for you!

This position is based in San Francisco, CA, utilizing a hybrid work model of three days in the office each week, with relocation assistance available for new hires.

Key Responsibilities:
Collaborate with hardware vendors on the co-design of future hardware, focusing on programmability and performance.
Support hardware vendors in developing optimal kernels and integrating support within our compiler.
Generate performance estimates for critical kernels across diverse hardware configurations, influencing decisions regarding compute core and memory hierarchy features.
Create system performance models at various abstraction levels and conduct analyses to guide decisions on scaling and front-end networking.
Engage with machine learning engineers, kernel engineers, and compiler developers to align on high-performance accelerator needs.
Facilitate communication and coordination with internal and external partners.
Shape the roadmap for hardware partners to optimize their products for our AI capabilities.
About Our Team
At OpenAI, our Hardware team is at the forefront of developing cutting-edge silicon and comprehensive system solutions tailored to the specific needs of advanced AI workloads. We pride ourselves on crafting the next generation of AI-native silicon, collaborating closely with software engineers and research teams to ensure our hardware is seamlessly integrated with AI models. Our mission extends beyond creating production-grade silicon for OpenAI’s supercomputing infrastructure; we also innovate custom design tools and methodologies that spark innovation and enable hardware specifically optimized for AI.

About the Role
As a Software Engineer on the Scaling team, you will play a pivotal role in designing and optimizing the foundational stack that manages computation and data flow across OpenAI’s supercomputing clusters. Your responsibilities will include crafting high-performance runtimes, developing custom kernels, enhancing compiler infrastructure, and building scalable simulation systems to validate and optimize distributed training workloads.

This position sits at the intersection of systems programming, machine learning infrastructure, and high-performance computing, where you will create intuitive developer APIs alongside highly efficient runtime systems. You will balance usability and introspection with the imperative for stability and performance across our dynamic hardware landscape.

This role is based in San Francisco, CA, featuring a hybrid work model (three days in-office per week). Relocation assistance is provided.

Key Responsibilities:
Design and implement APIs and runtime components to efficiently manage computation and data movement for diverse ML workloads.
Enhance compiler infrastructure by developing optimizations and compiler passes to accommodate evolving hardware advancements.
Engineer and refine compute and data kernels, ensuring precision, high performance, and compatibility across simulation and production settings.
Analyze and optimize system bottlenecks, focusing on I/O, memory hierarchy, and interconnects at both local and distributed scales.
Create simulation infrastructure to validate runtime behaviors, test modifications to the training stack, and support the early development of hardware and systems.
Quickly deploy updates to the runtime and compiler across new supercomputing builds in close collaboration with hardware and research teams.
Work across a varied tech stack, primarily utilizing Rust and Python, with a chance to influence architectural decisions within the training framework.
At Merge Labs, we are at the forefront of research, dedicated to merging biological and artificial intelligence to enhance human capabilities and experiences. Our innovative journey involves crafting revolutionary brain-computer interfaces that communicate with the brain at high bandwidth, integrate with cutting-edge AI technologies, and are designed to be safe and accessible to all.

About Our Team:
We are pioneering the next wave of brain-computer interfaces by leveraging breakthroughs in synthetic biology, neuroscience, AI, and non-invasive imaging. To propel this mission, we are assembling a multidisciplinary data science team that intersects computational modeling, neuroscience, and biomolecular engineering. This team collaborates closely with wet-lab scientists, automation engineers, and data engineers to develop machine learning frameworks that expedite molecule discovery and enhance device optimization.

Role Overview:
We are seeking a Senior/Principal ML Scientist to architect and scale de novo design frameworks that steer molecular engineering initiatives through iterative design-build-test-learn (DBTL) cycles. Initially, you will establish the company’s closed-loop optimization infrastructure, developing the data and modeling frameworks that link experiments to these machine learning systems. As the role progresses, you will play a vital part in transitioning these prototypes into operational pipelines that significantly boost experimental throughput and success in biomolecular and neuroengineering domains.

Key Responsibilities:
Develop scientific and engineering frameworks in partnership with data engineering and MLOps for de novo design and closed-loop optimization, encompassing data ingestion, ML modeling, and library architecture.
Work alongside wet-lab scientists to establish feasible optimization goals and incorporate domain-specific priors and constraints.
Prototype de novo design frameworks utilizing both internal and public datasets; evaluate and validate model efficacy.
Integrate ML models with experimental data streams, facilitating accessibility for non-expert users.
Expand ML frameworks to accommodate multi-objective or constrained optimization challenges.
Stay abreast of advancements in de novo design research and prototype innovative algorithms to enhance the company’s discovery and development processes.
About Gridware
Gridware is an innovative technology firm headquartered in San Francisco, committed to safeguarding and enhancing the reliability of the electrical grid. We have pioneered a revolutionary approach to grid management known as Active Grid Response (AGR), which meticulously monitors the electrical, physical, and environmental factors influencing grid safety and reliability. Our state-of-the-art AGR platform leverages high-precision sensors to identify potential issues at an early stage, facilitating proactive maintenance and fault resolution. This holistic strategy is designed to bolster safety, minimize outages, and ensure optimal grid performance. We are proud to be supported by prominent climate-tech and Silicon Valley investors. To learn more, visit www.Gridware.io.

About the Role
We are seeking a skilled Senior Hardware Reliability Engineer to lead reliability testing, analysis, and lifetime modeling of various outdoor electronic assemblies. This pivotal role will concentrate on the electronic components of our products, collaborating closely with our mechanical-focused Reliability Engineer and engaging with the broader hardware and cross-functional teams.
Echo Neurotechnologies
Company Overview
Echo Neurotechnologies is a pioneering startup specializing in Brain-Computer Interface (BCI) technology. We are committed to pushing the boundaries of innovation through state-of-the-art hardware engineering and artificial intelligence solutions. Our goal is to create transformative technologies that empower individuals with disabilities, enhancing their autonomy and overall quality of life.

Team Culture
Become a part of our dedicated team of passionate and skilled professionals. In our dynamic early-stage environment, you will have the chance to influence key decisions that will have lasting impacts. We prioritize continuous learning and development, promoting cross-functional collaboration where your input is essential to our collective success.

Role Overview
We are on the lookout for a seasoned Senior Hardware Test Engineer to validate our custom Echo hardware systems. In this role, you will lead the testing processes for our specialized hardware devices and subsystems while developing and implementing custom test systems.

Primary Responsibilities
Conduct in-house design verification tests
Coordinate testing with external laboratories
Collaborate with the engineering team to create tailored testing solutions
Work alongside design engineers to characterize unique hardware
Prepare tests for vendor transfer
At Runway ML, we are revolutionizing the intersection of art and science through innovative AI technology. Our mission is to build sophisticated world models that transcend traditional artificial intelligence limitations. We believe that to tackle the most pressing challenges, such as robotics, disease, and scientific breakthroughs, we need systems that can learn from experiences just like humans do. By simulating these experiences, we can expedite progress in ways that were previously unimaginable.

Our diverse and driven team consists of creative thinkers who are passionate about pushing boundaries and achieving the extraordinary. If you share this ambition and are eager to contribute to our groundbreaking work, we invite you to join us.

About the Role
*We are open to hiring remotely across North America. We also have offices in NYC, San Francisco, and Seattle.
We are on the lookout for a highly skilled and intellectually inquisitive Technical Accounting Manager to be our go-to authority on intricate accounting issues. This position offers significant visibility and is ideal for a professional adept at interpreting complex accounting guidelines, formulating sound conclusions, and translating technical insights into practical accounting practices.
Air Apps
Join Our Team at Air Apps
At Air Apps, we are on a mission to revolutionize resource management through innovative technology. Founded in 2018 in Lisbon, Portugal, we have expanded our reach with offices in both Lisbon and San Francisco, boasting over 100 million downloads globally. Our vision is to create the world's first AI-powered Personal & Entrepreneurial Resource Planner (PRP), and we are looking for passionate individuals to help us achieve this goal.

Our commitment to challenging the status quo drives us to push the boundaries of AI-driven solutions that make a real impact. Here, you will have the opportunity to be a creative force, developing products that empower individuals worldwide.

Join us as we embark on this journey to redefine how people plan, work, and live.
Multiply Labs
About Multiply Labs
Multiply Labs is an innovative startup located in San Francisco, California, backed by renowned investors in technology and life sciences such as Casdin Capital, Lux Capital, and Y Combinator. Our goal is to develop the world's leading robotic systems and use them to make groundbreaking life-saving therapies accessible to everyone.

We are transforming the manufacturing of cell therapies by building advanced robotic systems that automate and scale the production of these crucial treatments. Our robots enable biopharma companies to produce cell therapies efficiently without overhauling their existing processes, minimizing regulatory hurdles and risk. Unlike traditional methods, which are labor-intensive and costly (often exceeding $1M per patient), our robotic solutions aim to make these vital treatments more affordable and reachable for those who need them.

To learn more and see our robots in action, visit www.multiplylabs.com and follow us on LinkedIn.

Position Overview
We are looking for a dedicated Hardware Reliability Engineer to join Multiply Labs' Reliability Engineering team. As a founding member, you will collaborate closely with the Hardware Product and Systems Integration teams to improve our designs throughout the entire development lifecycle, from initial prototypes to fully deployed GMP production systems. Your contributions will directly support the delivery of life-saving therapies by ensuring our robots operate reliably in a high-stakes biotech environment.
Tacit
Tacit is an early-stage deep-tech startup in San Francisco, backed by General Catalyst, Khosla Ventures, and Greylock Partners. The team draws on backgrounds from Stanford, BrainGate, Oculus, and Tesla. The company is developing new hardware to advance human-computer interaction, with project details still confidential. The focus is on solving complex engineering challenges that could change how people use technology.

Role overview
The Head of Hardware will lead Tacit's transition from research prototypes to a consumer-ready product. This senior leader shapes the technical architecture and builds the team responsible for delivering reliable hardware at scale. The position works directly with company leadership, driving hardware strategy, resource planning, and execution. The main objective is to establish a high-performing hardware group that consistently ships quality consumer products.

What you will do
- Lead the move from research prototypes to a production-ready consumer hardware platform.
- Own the hardware roadmap and system architecture, overseeing progress from early builds to mass production.
- Design the hardware organization, define team roles, and set hiring priorities in collaboration with leadership.
- Establish and uphold standards for product readiness, with attention to quality, reliability, manufacturability, and testing.
- Shape and execute the hardware hiring strategy, including role definitions, sequencing, and technical requirements.
- Build and lead a multidisciplinary hardware team spanning electrical, mechanical, RF, firmware, testing, and manufacturing.
- Encourage open technical discussion, clear ownership, and a product-driven approach within the team.
- Make key system-level trade-offs across performance, power, cost, reliability, and timelines, balancing immediate needs and long-term growth.
- Enhance design quality through DFM/DFA, manufacturing test planning, and strong quality processes.
- Collaborate closely with product, industrial design, and software/ML teams to translate product requirements into hardware specifications.

