Staff Machine Learning Engineer


August 8


Tintri

Hardware • Cloud Computing • SaaS

Tintri is a leading provider of data management solutions designed specifically for virtualized workloads. Focused on simplifying IT operations, Tintri offers cloud and on-premises storage platforms that optimize data management and improve performance through automation and analytics. Its solutions are built for environments that require efficient workload management, data protection, and visibility, helping organizations manage complex IT infrastructure more effectively.

201 - 500 employees

Founded 2008

🔧 Hardware

☁️ SaaS

📋 Description

• This is an incredible opportunity to be part of a company that has been at the forefront of AI and high-performance data storage innovation for over two decades.

• DDN is the global leader in AI and multi-cloud data management at scale.

• Our commitment to innovation, customer success, and market leadership makes this an exciting and rewarding role for a driven professional looking to make a lasting impact in the world of AI and data storage.

• We are seeking a talented and experienced Sr ML Engineer to help us optimize training, inference, and Retrieval-Augmented Generation (RAG) pipelines for high-performance AI applications.

• You will lead the development of connectors to open-source frameworks for data streaming.

• Collaborating closely with software developers, product teams, and partners, you will lead experiments with state-of-the-art models using open-source tools and cloud platforms.

🎯 Requirements

• Bachelor’s or Master’s degree in Computer Science, Data Science, Machine Learning, or related fields.

• 4+ years of experience in machine learning operations (MLOps) or related roles.

• Proven expertise in building and scaling AI/ML pipelines.

• Strong understanding of machine learning frameworks and libraries (TensorFlow, PyTorch, NVIDIA NeMo, vLLM, TensorRT-LLM).

• Experience in deploying open-source vector databases at scale.

• Solid understanding of cloud infrastructure (AWS, GCP, Azure) and distributed computing.

• Proficiency with containerization tools (Docker, Kubernetes) and infrastructure as code.

• Excellent problem-solving and troubleshooting skills, with attention to detail and performance optimization.

• Strong communication and collaboration skills.

• Implementation-level understanding of ML frameworks, data loaders, and data formats.

• Experience with scaling RAG pipelines and integrating them with generative AI models.

• Experience in operationalizing AI/ML models in production environments.

• Participation in a team on-call rotation providing seven-day-a-week out-of-hours coverage, including after-hours and weekend support work when required.
