Compiler Deep Learning Engineer

November 21


NVIDIA

Artificial Intelligence • Gaming • Automotive

NVIDIA is a leading technology company specializing in accelerated computing and artificial intelligence. NVIDIA pioneers advancements in graphics processing units (GPUs), cloud computing, data centers, and virtual reality, with a focus on the gaming, automotive, healthcare, and robotics industries. The company's innovations, such as NVIDIA Omniverse, transform traditional digital processes by enabling high-fidelity simulation and rendering. Its applications span industries from autonomous vehicles (NVIDIA DRIVE) to healthcare (NVIDIA Clara), along with AI-driven analytics and workflows.

10,000+ employees

Founded 1993

🤖 Artificial Intelligence

🎮 Gaming

📋 Description

• Develop debugger support in an MLIR-based compiler stack to enable debugging of novel GPU programming paradigms
• Enable debugger support in various programming languages and domain-specific languages (DSLs) targeting NVIDIA GPUs
• Work with other internal compiler and developer-tools teams to ensure seamless debugging experiences across the NVIDIA software and developer tooling stack
• Collaborate closely with research, libraries, and product teams at NVIDIA to identify debugger features that can effectively improve developer productivity and efficiency

🎯 Requirements

• Bachelor's, Master's, or Ph.D. in Computer Science, Computer Engineering, or a related field (or equivalent experience)
• 4+ years of relevant work or research experience in compiler development, debugging tools, or related areas
• Strong C/C++ programming and software design skills, including debugging, performance analysis, and test design
• Ability to work independently, define project goals and scope, and lead your own development effort
• Excellent communication and collaboration skills, with a passion for working in dynamic, cross-functional teams
• Strong track record in MLIR compiler engineering, with deep knowledge of compiler internals and optimizer/code-generation pipelines
• Experience building debugger support for programming languages or DSLs, especially those targeting GPUs
• Technical understanding of debugging formats such as DWARF
• Knowledge of CPU and/or GPU architecture
• CUDA or OpenCL programming experience
• Experience with the following technologies: MLIR, LLVM, XLA, TVM, deep learning models and algorithms, and deep learning framework design

🏖️ Benefits

• Equity
• Benefits

Apply Now
