Senior MLOps Engineer

July 8

Apply Now

Fortytwo

Artificial Intelligence • B2B • SaaS

Fortytwo is a decentralized AI network that lets anyone contribute to planetary-scale intelligence by running AI models on everyday hardware. The platform enables users to participate in a community-driven AI ecosystem in which many consumer devices collaborate to improve the speed and accuracy of AI inference. By processing user requests collectively, Fortytwo's nodes deliver faster, more efficient AI while operating without centralized control, keeping the network open and accessible.

📋 Description

• Deploy scalable, production-ready ML services with optimized infrastructure and auto-scaling Kubernetes clusters.
• Optimize GPU resources using MIG (Multi-Instance GPU) and NOS (Node Offloading System).
• Manage cloud storage (e.g., S3) to ensure high availability and performance.
• Integrate state-of-the-art ML techniques, such as LoRA and model merging, into workflows.
• Deploy and manage large language models (LLMs), small language models (SLMs), and large multimodal models (LMMs).
• Serve ML models using technologies such as Triton Inference Server.
• Optimize models with ONNX and TensorRT for efficient deployment.
• Develop Retrieval-Augmented Generation (RAG) systems integrating spreadsheet, math, and compiler processors.
• Set up monitoring and logging solutions using Grafana, Prometheus, Loki, Elasticsearch, and OpenSearch.
• Write and maintain CI/CD pipelines using GitHub Actions for seamless deployments.
• Create Helm templates for rapid Kubernetes node deployment.
• Automate workflows using cron jobs and Airflow DAGs.
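Among these responsibilities, the RAG system is the most algorithmic. The sketch below illustrates only the retrieval step, using a toy bag-of-words cosine similarity in place of a real embedding model; the corpus sentences and function names are hypothetical examples, not part of Fortytwo's stack.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Hypothetical corpus; a real system would index far larger documents.
corpus = [
    "Helm charts package Kubernetes manifests for repeatable deployment.",
    "Triton Inference Server serves models exported to ONNX or TensorRT.",
    "Airflow DAGs schedule recurring data and training workflows.",
]

top = retrieve("serve ONNX models", corpus)
```

A production pipeline would replace `embed` with a learned embedding model and an approximate nearest-neighbor index, then pass the retrieved passages to the generator.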

🎯 Requirements

• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
• Proficiency in Kubernetes, Helm, and containerization technologies.
• Experience with GPU optimization (MIG, NOS) and cloud platforms (AWS, GCP, Azure).
• Strong knowledge of monitoring tools (Grafana, Prometheus) and scripting languages (Python, Bash).
• Hands-on experience with CI/CD tools and workflow management systems.
• Familiarity with Triton Inference Server, ONNX, and TensorRT for model serving and optimization.
• 5+ years of experience in MLOps or ML engineering roles.
• Experience with advanced ML techniques, such as multi-sampling and dynamic temperatures.
• Knowledge of distributed training and large-model fine-tuning.
• Proficiency in Go or Rust.
• Experience designing and implementing highly secure MLOps pipelines, including secure model deployment and data encryption.
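One requirement in this list, sampling with temperatures, can be sketched concretely. The snippet below is a minimal illustration rather than Fortytwo's implementation: temperature rescales logits before the softmax, and "multi-sampling" is shown as a simple majority vote over repeated draws (the aggregation rule is an assumption; a dynamic temperature would just vary per step).

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; temperature < 1 sharpens, > 1 flattens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0, rng=None):
    """Draw one token index from the temperature-scaled distribution."""
    rng = rng or random.Random()
    probs = softmax(logits, temperature)
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

def multi_sample(logits, n=5, temperature=0.8, seed=0):
    """Multi-sampling as a majority vote over n independent draws."""
    rng = random.Random(seed)
    draws = [sample_token(logits, temperature, rng) for _ in range(n)]
    return max(set(draws), key=draws.count)
```

Lowering the temperature concentrates probability mass on the highest logit, so repeated draws agree more often; raising it spreads mass out, which is useful when diverse candidates are wanted before voting.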

🏖️ Benefits

• Engage in meaningful AI research – Work on decentralized inference, multi-agent systems, and efficient model deployment with a team that values rigorous, first-principles thinking.
• Build scalable and sustainable AI – Design AI systems that reduce reliance on massive compute clusters, making advanced models more efficient, accessible, and cost-effective.
• Collaborate with a highly technical team – Join engineers and researchers who are deeply experienced, intellectually curious, and motivated by solving hard problems.

