About the job
About Our Team
Join OpenAI’s Forward Deployed Engineering team, where we partner with leading semiconductor companies to deploy cutting-edge AI systems across chip design, verification, and tooling. We operate at the crossroads of customer engagement and core platform development, working closely with clients to turn advanced model capabilities into solutions that shorten design cycles, improve verification quality, and drive innovation.
Our work converts initial, high-touch deployments into scalable patterns, reference architectures, and evaluation methodologies that apply across the semiconductor landscape: from chip designers to EDA vendors and, in time, fabrication partners.
About the Role
We are seeking a Forward Deployed Engineer (FDE) to spearhead comprehensive deployments of OpenAI’s models within semiconductor and chip design firms. You will collaborate with customers who possess deep expertise in hardware architecture, RTL, verification, and performance engineering, translating intricate workflows, extensive codebases, and long-standing toolchains into operational AI systems.
Your responsibilities will span the full range of semiconductor workflows, from chip design and verification to tooling and adjacent manufacturing systems. You will play a vital role in expanding OpenAI’s footprint across the stack, shaping how frontier models are applied throughout the chip lifecycle.
Success will be measured by production adoption, reductions in cycle time, gains in engineer productivity, and evaluation-informed feedback loops that guide product, model, and platform strategy. Close collaboration with Product, Research, GTM, and Partnerships teams will be essential to turn early wins into a robust semiconductor vertical offering.
This position involves working in environments where precision, scalability, and trust are paramount: regressions can cost weeks, failures can delay tape-out, and credibility is earned through unwavering technical rigor.
This role is based in San Francisco, with a hybrid work model of three days in-office each week. Relocation assistance is provided, and up to 50% travel is expected.
Your Responsibilities
Design and deploy production-grade AI systems built on OpenAI’s models, owning integrations with RTL repositories, verification environments, simulators, and internal tools.
Lead discovery and scoping from initial engagement through production rollout, converting ambiguous engineering challenges into hypothesis-driven use cases with measurable results.
Collaborate with cross-functional teams to ensure the successful integration and optimization of AI systems.