About the job
Cerebras Systems is at the forefront of AI technology, delivering the world's largest AI chip, 56 times larger than a traditional GPU. Our wafer-scale architecture combines the compute of many GPUs onto a single chip, delivering unmatched training and inference speeds. This approach lets machine learning practitioners run large ML applications without the complexity of orchestrating clusters of GPUs or TPUs.
Cerebras works with leading model labs, global corporations, and pioneering AI-native startups. OpenAI recently established a multi-year partnership with us to deploy 750 megawatts of power, dramatically accelerating key workloads through ultra-fast inference.
Built on our wafer-scale architecture, Cerebras Inference is the fastest generative AI inference solution available, running more than ten times faster than GPU-based hyperscale cloud inference services. This leap in performance is reshaping the user experience of AI applications, enabling real-time iteration and greater intelligence through additional agentic computation.
The Role
We are seeking a highly skilled and driven Manufacturing Bring-up Engineer to join our dynamic team. In this pivotal role, you will own the execution and continuous improvement of our system-level bring-up processes within the manufacturing pipeline. This high-visibility position demands strong technical expertise, coordination, and collaboration to ensure the successful delivery of our products to customers.