
Artificial Intelligence • Hardware • Healthcare Insurance
Cerebras Systems develops advanced AI hardware built around the Cerebras Wafer Scale Engine, which delivers AI inference performance that outpaces traditional GPU setups. Its technology enables organizations such as Mayo Clinic and AlphaSense to run state-of-the-art AI models with exceptional speed and efficiency, and flexible deployment options spanning cloud and on-premises solutions make these capabilities accessible to teams across industries.
201 - 500 employees
Founded 2016
🤖 Artificial Intelligence
🔧 Hardware
⚕️ Healthcare Insurance
August 8

• Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs.
• You will play a critical hands-on role in ensuring the success of our strategic customers.
• You will lead deep technical engagements across a portfolio of high-value accounts.
• This is a senior individual contributor role for an experienced customer-facing technologist.
• Bachelor’s or Master’s degree in a technical field such as Computer Science, Electrical Engineering, or a related discipline.
• 10+ years of professional experience, with 5–7+ years in customer-facing technical roles (e.g., Customer Success Engineering, Solutions Engineering, Technical Account Management).
• Strong foundation in LLM inference workloads, AI/ML systems, distributed computing, and infrastructure.
• Strong plus: experience deploying and optimizing LLM inference workloads, with a solid understanding of latency, throughput, and token-level performance metrics, as well as the inference toolchains, APIs, and optimization techniques relevant to large-scale model serving.
• Exceptional communication and collaboration skills; ability to interface with developers, architects, and executive stakeholders.
• Comfortable leading complex technical discussions and resolving high-severity issues in real time.
• Passion for customer advocacy and delivering measurable impact through high-touch engagement.
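For candidates unfamiliar with the token-level performance metrics the requirements above mention, here is a minimal sketch of how the common ones are computed. The helper name and the timestamp-list input are illustrative assumptions, not Cerebras tooling: it derives time-to-first-token (TTFT), end-to-end throughput, and average inter-token latency from per-token arrival times.

```python
# Hypothetical helper (not Cerebras tooling): compute basic LLM-serving
# latency metrics from the request start time and the arrival time of
# each generated token, all in seconds.

def token_metrics(request_start, token_times):
    """Return TTFT, tokens/sec, and mean inter-token latency."""
    if not token_times:
        raise ValueError("no tokens received")
    ttft = token_times[0] - request_start      # time to first token
    total = token_times[-1] - request_start    # end-to-end latency
    # Throughput over the whole generation window.
    throughput = len(token_times) / total if total > 0 else float("inf")
    # Mean gap between consecutive tokens (decode-phase latency).
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    itl = sum(gaps) / len(gaps) if gaps else 0.0
    return {"ttft_s": ttft, "tokens_per_s": throughput, "inter_token_s": itl}

# Example: 5 tokens, the first arriving after 0.2 s, then one every 0.05 s.
m = token_metrics(0.0, [0.20, 0.25, 0.30, 0.35, 0.40])
print(m["ttft_s"], m["tokens_per_s"])
```

In practice these timestamps would come from a streaming inference API; TTFT reflects prompt-processing (prefill) cost, while inter-token latency reflects decode speed, which is why the two are reported separately.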
• Build a breakthrough AI platform beyond the constraints of the GPU.
• Publish and open-source cutting-edge AI research.
• Work on one of the fastest AI supercomputers in the world.
• Enjoy job stability with startup vitality.
• A simple, non-corporate work culture that respects individual beliefs.