Experience Level
Entry Level
Qualifications
The ideal candidate will possess a strong background in computer science or a related field, with a focus on artificial intelligence and optimization techniques. Experience with machine learning frameworks and proficiency in programming languages such as Python or C++ are essential. A passion for research and a problem-solving mindset will be key to your success.
About the job
Join Zyphra as a Research Engineer specializing in AI Performance and Kernel Optimization. In this role, you will work at the forefront of AI technologies, developing and optimizing kernel solutions that enhance the performance of our systems. You will collaborate with cross-functional teams, leveraging your expertise to drive innovation and efficiency.
About Zyphra
Zyphra is a leader in AI technology, dedicated to pushing the boundaries of what's possible. Our innovative solutions empower businesses to harness the power of artificial intelligence, driving efficiency and growth. We foster a collaborative and dynamic work environment where creativity and innovation thrive.
Similar jobs
About Kernel
Kernel is a cutting-edge developer platform that offers Lightning-Fast Browsers-as-a-Service for browser automations and web agents. Our API and MCP server empower developers to effortlessly launch browsers in the cloud without the hassle of managing infrastructure. Our serverless browser platform takes care of the complex aspects: autoscaling reliable browser infrastructure, observability, and intricate web interactions, enabling developers to concentrate on the functionality of their agents rather than the underlying details. Kernel transforms AI into a tangible, practical, and powerful tool, allowing developers to deploy agents capable of genuine interaction with the digital landscape.

We pride ourselves on being trusted by teams at Cash App, Rye, and numerous others for deep research, QA automation, and real-time web analysis. We have secured $22M in funding from top investors including Accel, YCombinator, Vercel, Paul Graham, Solomon Hykes (Docker), David Cramer (Sentry), Charlie Marsh (Astral), and more. With just one line of code, you can deploy any web agent to our cloud. The rest is in your hands. If you are passionate about building essential infrastructure for the next wave of AI applications, we would love to hear from you.

About the Role
As a Product Engineer at Kernel, you will be a full-stack engineer who values product development as much as coding. You can translate strong product instincts into code, from pixel-perfect UI decisions to backend API architecture, and you proactively contribute to the specification process rather than waiting for one to be provided. You will collaborate closely with our co-founders to define product direction, deliver full-stack features end to end, and keep Kernel polished yet powerful.

Your Responsibilities
Lead the full-stack implementation of user-facing product surfaces: dashboard, onboarding, website, and core product functionality.
Influence the product roadmap by integrating customer feedback, analyzing usage patterns, and leveraging your own insights into developer needs.
Enhance the developer experience across our SDK, documentation, CLI, and API, delivering the kind of seamless experience that makes developers say "this just works."
Rapidly prototype and iterate, bringing features from concept to production with minimal oversight.
Help shape the standards for building a superior developer product at Kernel.

Your Qualifications
Comfortable taking ownership of features from frontend to backend, with a holistic understanding of product development.
A strong passion for creating seamless user experiences and the ability to translate product vision into functional code.
Experience working in a fast-paced environment with a focus on agile methodologies.
At Magic, our goal is to develop safe AGI that propels humanity forward by addressing some of the most pressing challenges we face. We are committed to harnessing automated research and code generation to enhance models and improve alignment in ways that surpass human capabilities. Our methodology integrates cutting-edge pre-training, domain-specific reinforcement learning, ultra-long context, and advanced inference-time computing.

Role Overview
As a Kernel Engineer, you will design, implement, and maintain high-performance kernels, optimizing throughput and minimizing latency during both training and inference. Magic's extended context windows present unique kernel optimization challenges, particularly around memory efficiency, data movement, and sustained throughput.

Key Responsibilities
Design and develop kernels that enable high-performance long-context functionality.
Own kernel design, implementation, and deployment, and ensure production reliability.
Emphasize robustness, thorough testing, and functional accuracy while striving for optimal performance.
Assess the feasibility of porting Magic's compute kernels to other hardware platforms.
Collaborate with the training, inference, and reinforcement learning teams to co-design kernels.
Explore our work through Magic-Attention, presented at GTC 2026.

Qualifications
Experience in low-level programming for AI accelerators, such as NVIDIA Blackwell or Google TPUs.
Proficiency in developing and optimizing GPU kernels using frameworks such as NCCL, MSCCLPP, CUTLASS, CuTeDSL, Triton, Quack, and Flash Attention.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our ambition is to enhance human potential by advancing collaborative general intelligence. We envision a future where individuals have the tools and knowledge to harness AI for their distinct requirements and aspirations. Our team comprises dedicated scientists, engineers, and innovators who have contributed to some of the most renowned AI products, including ChatGPT and Character.ai, open-weight models like Mistral, and influential open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
We are seeking an Infrastructure Research Engineer to architect, optimize, and sustain the computational frameworks that enable large-scale language model training. You will create high-performance machine learning kernels (e.g., CUDA, CuTe, Triton), enable effective low-precision arithmetic, and enhance the distributed computing infrastructure essential for training large models. This position is ideal for an engineer who thrives in close collaboration with hardware and research disciplines. You will partner with researchers and systems architects to merge algorithmic design with hardware efficiency. Your responsibilities will include prototyping new kernel implementations, evaluating performance across hardware generations, and helping establish the numerical and parallelism strategies crucial for scaling next-generation AI systems.

Note: This is an evergreen role that remains open continuously for expressions of interest. We receive numerous applications, and there may not always be an immediate opportunity that aligns with your qualifications. We still encourage you to apply; we regularly review applications and will reach out as new positions open. You are welcome to reapply after gaining additional experience, but please do not apply more than once every six months. You may also notice postings for specific roles serving particular projects or team needs; in such cases, you are encouraged to apply directly alongside this evergreen listing.

What You'll Do
Design and develop custom ML kernels (e.g., CUDA, CuTe, Triton) for key LLM operations such as attention, matrix multiplication, gating, and normalization, optimized for contemporary GPU and accelerator architectures.
Design compute primitives that alleviate memory bandwidth bottlenecks and improve kernel compute efficiency.
Collaborate with research teams to align kernel-level optimizations with model architecture and algorithmic objectives.
Create and maintain a library of reusable kernels and performance benchmarks that underpin internal model training.
Contribute to the stability and scalability of our infrastructure as the demands of AI development grow.
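For context on what these kernel roles optimize: the attention operation named above is, at its core, a matmul-softmax-matmul pipeline. The sketch below is a plain NumPy reference (illustrative only, not any company's code); production CUDA/Triton kernels fuse and tile these steps so the full score matrix never materializes in GPU memory.

```python
import numpy as np

def attention(q, k, v):
    """Reference scaled dot-product attention over the last axis.

    Fused GPU kernels (FlashAttention-style) tile this computation to avoid
    storing the full (seq_q, seq_k) score matrix; this version only shows
    the unfused math that such kernels must reproduce exactly.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (seq_q, seq_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # subtract row max: stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # rows now sum to 1
    return weights @ v                             # (seq_q, d) weighted values

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(q, k, v)
```

A kernel engineer's correctness bar is that the optimized implementation matches a reference like this to within the chosen precision's tolerance.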
At Sciforium, we are at the forefront of AI infrastructure, building next-generation multimodal AI models and a proprietary high-efficiency serving platform. With substantial funding and direct collaboration with AMD, supported by their engineers, our team is rapidly expanding to develop the complete stack that powers cutting-edge AI models and real-time applications.

About the Role
We are looking for a talented GPU Kernel Engineer eager to explore and maximize performance on modern accelerators. You will design and optimize the custom GPU kernels that drive our large-scale AI systems, working across the hardware-software stack: low-level kernel development plus integrating optimized operations into high-level machine learning frameworks for large-scale training and inference. This position is ideal for someone who excels at the intersection of GPU programming, systems engineering, and state-of-the-art AI workloads, and wants to contribute significantly to the efficiency and scalability of our machine learning platform.

Key Responsibilities
Develop, implement, and enhance custom GPU kernels using C++, PTX, CUDA, ROCm, Triton, and/or JAX Pallas.
Profile and fine-tune the end-to-end performance of machine learning operations, particularly for large-scale LLM training and inference.
Integrate low-level GPU kernels into frameworks such as PyTorch, JAX, and our proprietary internal runtimes.
Create performance models, pinpoint bottlenecks, and deliver kernel-level enhancements that significantly boost AI workloads.
Collaborate with machine learning researchers, distributed systems engineers, and model-serving teams to optimize computational performance across the entire stack.
Engage closely with hardware vendors (NVIDIA/AMD) and stay current on GPU architecture and compiler/toolchain advancements.
Contribute to tools, documentation, benchmarking suites, and testing frameworks that ensure correctness and performance reproducibility.

Must-Haves
5+ years of industry or research experience in GPU kernel development or high-performance computing.
Bachelor's, Master's, or PhD in Computer Science, Computer Engineering, Electrical Engineering, Applied Mathematics, or a related discipline.
Strong programming proficiency in C++ and Python, and familiarity with machine learning frameworks.
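The "performance models" mentioned in roles like this usually start with a roofline-style check: is a kernel limited by compute throughput or memory bandwidth? A minimal sketch of that arithmetic (the GPU peak numbers below are hypothetical, and real profiling relies on vendor tools such as Nsight or rocprof):

```python
def roofline_bound(flops, bytes_moved, peak_flops, peak_bw):
    """Classify a kernel via a simple roofline model.

    A kernel whose arithmetic intensity (FLOPs per byte moved) exceeds the
    machine's ridge point is compute-bound; below it, memory-bound.
    """
    intensity = flops / bytes_moved          # FLOPs per byte of DRAM traffic
    ridge = peak_flops / peak_bw             # intensity where the roofline bends
    attainable = min(peak_flops, intensity * peak_bw)
    kind = "compute-bound" if intensity >= ridge else "memory-bound"
    return kind, attainable

# Example: square fp16 GEMM, M = N = K = 4096, assuming ideal operand reuse,
# on a hypothetical accelerator with 1000 TFLOP/s peak and 3.35 TB/s bandwidth.
m = n = k = 4096
flops = 2 * m * n * k                        # one multiply-add per output element per k
bytes_moved = 2 * (m * k + k * n + m * n)    # fp16 (2 bytes) for A, B, and C
kind, perf = roofline_bound(flops, bytes_moved, 1000e12, 3.35e12)
```

At this size the GEMM lands well above the ridge point, so tuning effort goes into compute utilization rather than memory traffic; small or skinny matmuls flip to memory-bound and call for fusion instead.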
ABOUT BASETEN
At Baseten, we empower the world's leading AI firms, such as Cursor, Notion, and OpenEvidence, by delivering mission-critical inference. Our blend of applied AI research, robust infrastructure, and user-friendly developer tools enables AI pioneers to deploy groundbreaking models effectively. With our recent $300M Series E funding round backed by investors like BOND and IVP, we're on an exciting growth trajectory. Join our team and contribute to the platform that drives the next generation of AI products.

THE ROLE
We are looking for an experienced Senior GPU Kernel Engineer to join our team at the forefront of AI acceleration. Your programming expertise will directly enhance the performance of cutting-edge machine learning models: you'll develop highly efficient GPU kernels that optimize computational processes and enable transformative AI applications. You'll thrive in a fast-paced, intellectually challenging environment where your technical skills are pivotal, and your contributions will directly affect production systems serving millions of users. This position offers exceptional opportunities for career advancement for engineers enthusiastic about low-level optimization and impactful systems engineering.

EXAMPLE INITIATIVES
As part of our Model Performance team, you will engage in projects like:
Baseten Embeddings Inference: the fastest embeddings solution available.
The Baseten Inference Stack.
Model performance optimization.

RESPONSIBILITIES
Design and develop high-performance GPU kernels for essential machine learning operations, including matrix multiplications and attention mechanisms.
Collaborate with cross-functional teams to drive performance improvements and implement optimizations.
Debug and refine kernel code for maximal efficiency and reliability.
Stay abreast of the latest advancements in GPU technology and machine learning frameworks.
Join 8vc as a Product Engineer, where you will play a crucial role in shaping the future of technology. We are seeking an innovative and detail-oriented engineer to collaborate with our dynamic team and drive product development from concept to launch. You will leverage your technical expertise to solve complex problems and enhance user experiences.
Join the Firetiger Team
At Firetiger, our vision is to empower entrepreneurs to create one-person companies that generate over $1 billion in revenue. We are transforming the future of software companies by envisioning a world where autonomous agents take over the majority of operational tasks, reducing the need for human intervention. Unlike traditional software engineering tools designed for manual operation, Firetiger pioneers an approach where success is defined by our customers, and our agents ensure those goals are consistently met through continuous monitoring and action.

Founded by industry leaders Rustam and Achille, both former executives from Cloudflare and Segment, we are backed by Sequoia Capital and a network of founders and executives from top tech companies. We are seeking a talented Product Engineer to create seamless interfaces between human users and AI agents, focusing on workflows, visualizations, and interaction patterns that simplify autonomous technical operations.
About Reducto
At Reducto, we empower AI teams to ingest real-world enterprise data with unparalleled accuracy. Our solutions unlock vast amounts of enterprise data, from financial statements to health records, previously trapped in unstructured formats like PDFs and spreadsheets. By training vision models to interpret documents as humans do, we enable teams to build products, train models, and automate processes at scale.

Having grown revenue 7x year over year, we work with hundreds of companies, including leading AI teams such as Harvey, Vanta, and Scale, as well as major enterprises including FAANG companies and top trading firms. Backed by over $100 million from top-tier investors including A16z, Benchmark, and First Round Capital, we are searching for a talented Product Engineer to join our team.

The Role
As a Product Engineer, you will develop features that let our customers design, build, test, and evaluate their essential document workflows. Our mission is to democratize access to document data for everyone, regardless of technical background. You will help create a product suite that spans document storage through the export of parsed data for downstream applications. Your work will range from interactive document viewers to data extraction workflows, crafting user experiences that simplify complex AI functionality.
Join Scribe as a Senior Product Engineer, where you'll play a crucial role in developing innovative products that enhance user experience. In this position, you will leverage your expertise to design, implement, and optimize cutting-edge solutions. Collaborating with cross-functional teams, you'll contribute to the entire product lifecycle, ensuring that our offerings meet the highest standards of quality and functionality.
About Metaview
At Metaview, we are revolutionizing the workplace by eliminating toil, the barrier to progress. We believe in the power of human-computer symbiosis to enhance productivity. Our AI copilot transforms how hiring is approached, leading to:
Increased flow state, reduced toil.
Enhanced insights, less guesswork.
Focus on human-centricity over process-centricity.

Founded in 2018 by Siadhal and Shahriar, both ex-Uber and ex-Palantir, Metaview has grown to support thousands of users at leading companies like Quora, Robinhood, Brex, Elastic, and Replit. We're proud of our small yet high-performing team, dedicated to doing great work.

About the Role
As a Product Engineer at Metaview, you will bring a customer-centric, product-focused mindset. You will lead the design, development, and operation of our products, taking ownership of your work in a collaborative and dynamic environment. While we seek full-stack talent, this role leans toward backend development: your primary responsibilities will revolve around backend coding, but you will also engage with other parts of the stack to maintain high productivity. We value clean and effective code, and while familiarity with our languages and tools is appreciated, a willingness to learn and adapt is essential as our product evolves. Currently, our technology stack includes:
Python for backend application development.
TypeScript and React for frontend application development.
Serverless Framework for application management.
About Us
At ResiQuant, we are dedicated to addressing a critical challenge in disaster resilience: the availability of accurate, standardized, and dynamic property data. As climate change and natural disasters such as earthquakes, hurricanes, and wildfires increasingly affect our world, risk decisions are often made with incomplete or erroneous data. Our mission is to equip insurers, financial institutions, and asset managers with AI-driven systems that integrate structural engineering knowledge, bridging crucial data gaps and providing precise, actionable insights into buildings for better decision-making.

Founded by Stanford PhDs, we are devoted to using state-of-the-art AI to safeguard communities and businesses against the impacts of disasters. We envision a future where every organization has access to premier property risk intelligence, fostering resilience and protecting what matters most.

The Role
You will design and manage the backend systems that drive AI-powered property risk intelligence at enterprise scale, building the APIs, data pipelines, and model-integrated services that provide reliable, real-time insights trusted by insurers for billion-dollar underwriting decisions. As an integral member of our early team, you will collaborate closely with the founders to establish the technical foundation of the platform and ensure that cutting-edge AI is applied safely and effectively within the insurance sector.

What You'll Do
Take ownership of backend features from design through deployment.
Design scalable services and integrations tailored for Fortune 500 insurers.
Build pipelines to ingest, validate, and transform large property datasets.
Develop and enhance AI-assisted backend functionality (API design, retrieval-backed features, monitoring).
Ensure observability, reliability, and performance across all systems.
Collaborate with founders, engineers, and customers to rapidly iterate on impactful features.

Who You Are
0–5 years of engineering experience (1–3 years preferred). Outstanding new graduates are encouraged to apply.
Bachelor's degree in Computer Science or a related field.
Proficient in Python, databases, and cloud platforms (AWS/GCP).
Experience delivering AI-assisted or data-intensive services (e.g., model APIs, retrieval pipelines).
Strong understanding of backend engineering principles and practices.
Full-time|$150K/yr - $200K/yr|On-site|San Francisco
Anara is seeking a talented Software Engineer to join our team in revolutionizing scientific research. In this role, you will develop tools that redefine the landscape of scientific discovery for generations.

Your Responsibilities
Design and implement scalable features and infrastructure for cutting-edge AI-driven research tools.
Collaborate directly with customers to gather insights and enhance the user experience.
Continuously monitor and optimize system performance and costs, making strategic trade-offs to maintain agility.
Establish best practices for prompt engineering and for deploying new models and workflows.
Lead the design and automation of evaluation pipelines to consistently assess, benchmark, and improve AI agent quality and reliability.
About Rejigg
Rejigg is transforming small business ownership. Our AI-driven marketplace connects entrepreneurs looking to buy and sell businesses, automates due diligence, and streamlines transactions, making entrepreneurship more attainable and ownership transitions simpler. Today, buying and selling a business is often disjointed, opaque, and laden with intermediaries. We are changing that, and it's working: we serve tens of thousands of buyers and thousands of sellers, and we are on track to grow tenfold year over year.

About the Role
As a Product Engineer at Rejigg, your code will directly influence one of the most significant decisions in a person's life: passing on their legacy or embarking on ownership. The work is inherently emotional and complex, and it demands meticulous attention to detail. Joining our small but dedicated engineering team means working closely with the founders to translate customer insights into user-friendly, fast, and dependable products. If you enjoy taking full ownership of problems, focusing on user experience, and delivering rapidly through tight feedback loops, this role is for you. We are a well-funded, close-knit team in San Francisco, building at the intersection of AI and marketplaces while collaborating with real people. This is a unique opportunity to take substantial ownership, accelerate your career growth, and help shape our development approach.

What You Will Do
Design, develop, and deploy product features from conception to production.
Collaborate closely with founders, customers, and go-to-market teams to transform insights into products.
Engage across the entire technology stack, including front-end, back-end, database, and infrastructure, with help available as needed.
Improve product quality through performance tuning, reliability improvements, and refinement of the user experience.
Contribute to the evolution of our architecture and engineering practices as we grow.

What We Are Looking For
Demonstrated experience building and deploying production software, or a solid portfolio of relevant projects.
A strong willingness to work across the stack, or deep expertise in one area with a desire to learn others.
Exceptional product instincts with a focus on user experience, speed, and clarity of implementation.
The ability to act swiftly, communicate effectively, and take ownership of outcomes.
Genuine interest in small businesses, entrepreneurship, marketplaces, or ownership transitions.
Eagerness to collaborate in person most days in a vibrant office environment.
Hayden AI creates mobile perception systems that help transit agencies and city governments solve transportation challenges. The team's technology supports bus lane and stop enforcement, improves street safety, and helps make transit systems more efficient and sustainable. This Senior Firmware Engineer role is based at Hayden AI's San Francisco headquarters. The position sits within the Device Software team and focuses on the low-level software stack powering the company's edge AI systems. Work centers on direct interaction with hardware to ensure reliable, high-performance operation in real-world settings.

Responsibilities
Develop and maintain Linux kernel modules and device drivers for embedded platforms.
Integrate hardware and software layers for edge AI devices to ensure stable operation.
Work closely with hardware engineers and other software teams to deliver high-performance solutions.
Troubleshoot and resolve firmware issues in deployed environments.

Requirements
Significant experience with Linux kernel and device driver development.
Strong background in embedded systems and low-level programming.
Comfort working directly with hardware and debugging complex system interactions.
Experience with edge AI or similar real-time systems is a plus.
Full-time|$190.9K/yr - $232.8K/yr|On-site|San Francisco, California
P-1285

About This Role
Join our team at Databricks as a Staff Software Engineer specializing in GenAI Performance and Kernels. In this role, you will design, implement, and optimize the high-performance GPU kernels that drive our GenAI inference stack. You will lead the development of finely tuned, low-level compute paths, balancing hardware efficiency with versatility, while mentoring fellow engineers in kernel-level performance engineering. Collaborating closely with machine learning researchers, systems engineers, and product teams, you will advance the state of the art in inference performance at scale.

What You Will Do
Lead the design, implementation, benchmarking, and maintenance of essential compute kernels (such as attention, MLP, softmax, layernorm, and memory management) tailored for diverse hardware backends (GPUs, accelerators).
Steer the performance roadmap for kernel-level enhancements, focusing on areas like vectorization, tensorization, tiling, fusion, mixed precision, sparsity, quantization, memory reuse, scheduling, and auto-tuning.
Integrate kernel optimizations seamlessly with higher-level machine learning systems.
Develop and maintain profiling, instrumentation, and verification tools to catch correctness and performance regressions, numerical discrepancies, and hardware utilization inefficiencies.
Conduct performance investigations and root-cause analyses of inference bottlenecks such as memory bandwidth, cache contention, kernel launch overhead, and tensor fragmentation.
Create coding patterns, abstractions, and frameworks that modularize kernels for reuse, cross-backend compatibility, and maintainability.
Influence architectural decisions that affect kernel efficiency (including memory layout, dataflow scheduling, and kernel fusion boundaries).
Guide and mentor engineers focused on low-level performance, conducting code reviews and establishing best practices.
Collaborate with infrastructure, tooling, and machine learning teams to land kernel-level optimizations in production and assess their impact.
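Layernorm, one of the compute kernels this role covers, pairs a per-row mean/variance reduction with an elementwise scale-and-shift; fused kernels do both in a single pass over the data to save memory bandwidth. A plain NumPy reference of the unfused math (illustrative only, not Databricks code):

```python
import numpy as np

def layernorm(x, gamma, beta, eps=1e-5):
    """Reference layer normalization over the last axis.

    eps guards the square root against zero variance; a fused GPU kernel
    computes the reduction and the gamma/beta scale-shift in one pass.
    """
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)        # population variance (ddof=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.random.default_rng(1).standard_normal((2, 16))
y = layernorm(x, gamma=np.ones(16), beta=np.zeros(16))
```

A reference like this also serves as the correctness oracle when verifying optimized kernels: each row of the output should have mean ≈ 0 and standard deviation ≈ 1 when gamma is 1 and beta is 0.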
Partly is headquartered in the UK, with a Product and Engineering HQ in Christchurch, New Zealand, and an emerging presence in San Francisco, USA. If you are located outside of a Hub, we will cover your travel and accommodation for one week each quarter to join us for our 'Season Openers'. This position is based in San Francisco.

Our Mission
Partly is dedicated to revolutionizing the replacement parts industry by connecting the world's parts through our global platform. Our vision is to promote sustainability by empowering individuals to repair and maintain their belongings. Founded by former Rocket Lab engineers, we apply advanced technology to complex challenges in a $1.9 trillion market. Having tripled our team in the past year, we aim to double our workforce again in the coming year, with a team that spans Europe and Australasia. We provide robust digital infrastructure to some of the largest enterprises and the most dynamic startups, enabling them to catalogue and manage parts online effectively. We are backed by prominent investors such as Blackbird Ventures (Canva, CultureAmp), Square Peg, Octopus Ventures, Icehouse, and notable figures including Peter Beck (Rocket Lab), Akshay Kothari (Notion co-founder), and Dylan Field (Figma co-founder). We are committed to building a world-class team and fostering a work environment where individuals can excel; our core values are integral to every aspect of the experience at Partly. Curious about our culture and the challenges we address? Hear from our team here: https://shorturl.at/iAFUX

Role Overview
In this position, you will collaborate with our product and engineering teams to solve exciting challenges and develop exceptional software.
Responsibilities (Forward Deployed Engineer)
Engage directly with customers in your local market, on-site as required, to gain a deep understanding of their workflows, limitations, and definitions of success.
Lead technical discovery, solution design, and delivery for high-impact projects.
Mar 4, 2026