About the job
Cerebras Systems is at the forefront of AI technology, creator of the largest AI chip in the world, 56 times larger than a traditional GPU. Our wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip, so practitioners program one device instead of a cluster. This design enables unparalleled training and inference speeds, allowing machine learning practitioners to run large-scale ML applications without the complexity of managing fleets of GPUs or TPUs.
Our clientele includes leading model labs, major enterprises, and pioneering AI-native startups. Recently, OpenAI announced a multi-year collaboration with Cerebras to deploy 750 megawatts of compute capacity and transform critical workloads through ultra-high-speed inference.
Thanks to our groundbreaking wafer-scale architecture, Cerebras Inference is the fastest Generative AI inference solution in the world, more than 10 times faster than GPU-based hyperscale cloud inference services. This speed advantage transforms the user experience in AI applications, enabling real-time iteration and unlocking greater intelligence through additional agentic computation.
The Role
We are looking for a highly skilled and motivated Manufacturing Bring-up Engineer to join our dynamic team. In this pivotal role, you will execute, refine, and advance our system-level bring-up process within the manufacturing pipeline. This is a high-visibility position that requires deep technical expertise, strong coordination, and close collaboration to ensure our products transition seamlessly from manufacturing to our customers.

