Distributed Cloud – GenAI Engineer


October 11


Devoteam

Artificial Intelligence • Cloud Managed Services • Cybersecurity

Devoteam is a multinational technology consulting company that specializes in delivering innovative solutions in areas such as Artificial Intelligence, Cloud Managed Services, and Cybersecurity. With a focus on business transformation, Devoteam helps organizations optimize their IT infrastructure and migrate to cloud-native architectures. The company partners with leading technology providers such as AWS, Google Cloud, and Microsoft to drive digital transformation and sustainable IT practices across various industries.

📋 Description

• Design, develop, and implement GenAI applications and prototypes, focusing on practical use cases built on LLMs (e.g., GPT, Llama, Gemini) and supporting frameworks.
• Implement advanced techniques such as prompt engineering and Retrieval-Augmented Generation (RAG) pipelines to ground models in proprietary data and improve accuracy (a minimal RAG sketch follows this list).
• Perform model fine-tuning and adaptation (e.g., LoRA) on pre-trained models to optimize performance for specific domain tasks.
• Build scalable MLOps pipelines for the deployment, monitoring, and continuous improvement of GenAI models in production environments.
• Collaborate with Data Scientists, Product Managers, and Software Engineers to integrate GenAI capabilities seamlessly into core products.
• Ensure the ethical, secure, and responsible deployment of generative models, managing risks related to bias and data privacy.
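For orientation, here is a minimal sketch of the RAG pattern referenced above. It assumes the openai (v1) and chromadb Python clients with an OPENAI_API_KEY set in the environment; the model names and sample documents are illustrative placeholders, and a framework named in the posting (LangChain or LlamaIndex) would typically replace this hand-rolled retrieval step in production.

```python
# Minimal RAG sketch: embed documents, retrieve the closest ones for a question,
# and ask the LLM to answer only from the retrieved context.
import chromadb
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

docs = [
    "Devoteam partners with AWS, Google Cloud, and Microsoft.",
    "RAG grounds LLM answers in proprietary documents retrieved at query time.",
]

# Embed the documents and index them in an in-memory vector store.
embeddings = [
    d.embedding
    for d in client.embeddings.create(model="text-embedding-3-small", input=docs).data
]
collection = chromadb.Client().create_collection("kb")
collection.add(ids=[str(i) for i in range(len(docs))], documents=docs, embeddings=embeddings)

def answer(question: str) -> str:
    # Retrieve the most relevant chunks, then constrain the model to them.
    q_emb = client.embeddings.create(
        model="text-embedding-3-small", input=[question]
    ).data[0].embedding
    hits = collection.query(query_embeddings=[q_emb], n_results=2)["documents"][0]
    context = "\n".join(hits)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    chat = client.chat.completions.create(
        model="gpt-4o-mini", messages=[{"role": "user", "content": prompt}]
    )
    return chat.choices[0].message.content

print(answer("How does RAG improve answer accuracy?"))
```

A production pipeline would add document chunking, metadata filtering, caching, and evaluation on top of this skeleton.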

🎯 Requirements

• 3+ years of experience in Machine Learning Engineering, AI development, or a closely related software engineering role.
• Mandatory hands-on experience with Large Language Models (LLMs) and key GenAI concepts (e.g., the Transformer architecture).
• Strong proficiency in Python and practical experience with GenAI frameworks such as LangChain, LlamaIndex, or similar.
• Proven ability to design and implement Retrieval-Augmented Generation (RAG) architectures for production use cases.
• Preferred: experience with vector databases (e.g., Pinecone, Chroma, Milvus) used in RAG implementations.
• Knowledge of model efficiency and serving techniques such as quantization and model distillation (a minimal quantized-loading sketch follows this list).
• Contributions to open-source ML/GenAI projects.
• Solid experience with model deployment, MLOps practices, and containerization (Docker, Kubernetes).
• Experience with cloud platforms (AWS, GCP, or Azure) and their AI/ML services (e.g., Vertex AI, SageMaker) is highly valued.
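As an illustration of the model-efficiency item above, here is a minimal sketch of loading a causal LM with 4-bit NF4 quantization. It assumes the Hugging Face transformers, bitsandbytes, and accelerate libraries plus a CUDA-capable GPU; the model ID is a placeholder (gated models require access approval), and the prompt is purely illustrative.

```python
# Minimal 4-bit quantized loading sketch: NF4 quantization cuts memory use
# roughly 4x versus fp16, at some cost in output quality.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM works

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers on available devices automatically
)

prompt = "Summarize the benefit of retrieval-augmented generation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```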

🏖️ Benefits

• Health insurance
• Professional development opportunities

Apply Now
