Mid-Level AI/ML Engineer - Data Scientist Background


August 13


Kolomolo

Based in Stockholm, Sweden, and Kraków, Poland, Kolomolo is an IT consultancy specializing in cloud computing, AI, and IoT, and a proud AWS partner. Our mission, "To be leaders in digital modernization," drives us to help companies leverage cutting-edge technologies through the expertise of our talented team.

11 - 50 employees

📋 Description

• Build and maintain knowledge graphs using graph databases such as Neo4j, leveraging Cypher or Gremlin for queries.
• Process large-scale structured and unstructured data into graph databases using ETL pipelines.
• Develop and implement graph-based AI systems that convert natural language queries into graph structures and vice versa.
• Design and deploy modular RAG systems using MCP and intelligent agents.
• Apply agentic design patterns to enable autonomous task execution in AI systems.
• Implement context engineering and prompt engineering techniques to optimize AI performance.
• Collaborate with cross-functional teams to integrate AI models into scalable microservices.
• Learn and apply AWS Bedrock and SageMaker for training, fine-tuning, and deploying machine learning models.
• Work on small-scale optimization models for predictive analytics and operational efficiency.
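To give candidates a feel for the RAG work described above, here is a minimal sketch of the retrieval step: ranking documents by cosine similarity between embedding vectors. The corpus and vectors are toy values invented for illustration; in practice the embeddings would come from a real embedding model and live in a vector or graph database.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, corpus, top_k=2):
    """Return the texts of the top_k documents most similar to the query."""
    ranked = sorted(corpus,
                    key=lambda d: cosine_similarity(query_vec, d["vec"]),
                    reverse=True)
    return [d["text"] for d in ranked[:top_k]]

# Toy corpus; real embeddings would be produced by an embedding model.
corpus = [
    {"text": "graph databases",  "vec": [1.0, 0.1, 0.0]},
    {"text": "cloud billing",    "vec": [0.0, 1.0, 0.2]},
    {"text": "knowledge graphs", "vec": [0.9, 0.2, 0.1]},
]
print(retrieve([1.0, 0.0, 0.0], corpus))
# → ['graph databases', 'knowledge graphs']
```

The retrieved texts would then be passed to a language model as context, which is the "augmented generation" half of RAG.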

🎯 Requirements

• Strong background in mathematics, statistics, and graph theory (degree centrality, eigenvalues and eigenvectors, Dijkstra's algorithm, etc.).
• Proven expertise in AI/ML engineering, including vectorization, embeddings, multi-modal AI, and retrieval-augmented generation (RAG).
• Hands-on experience with graph and vector databases (e.g., Neo4j, pgvector, Milvus) and traversal/query languages (Cypher, Gremlin).
• Knowledge of agentic systems and agentic design patterns.
• Experience building modular RAG systems with MCP and agents.
• Strong Python skills with libraries such as Polars, scikit-learn, pandas, and NumPy.
• Familiarity with regression and optimization techniques such as logistic regression, Bayesian optimization, and ARIMA.
• Experience with anomaly detection and covariate-shift detection techniques.
• Understanding of FastAPI and microservice architecture.
• Experience with Docker for containerized deployments.
• Solid data engineering skills, including large-scale pipeline processing, multiprocessing, and multithreading.
• Nice to have: experience with vector databases such as Pinecone, ChromaDB, or Milvus.
• Nice to have: knowledge of out-of-distribution analysis and semantic similarity techniques.
• Nice to have: experience with multi-modal AI systems.
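Since the requirements name Dijkstra's algorithm explicitly, here is a minimal Python sketch of the kind of graph-theory fluency expected. The graph and its edge weights are invented for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a non-negatively weighted graph.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns a dict mapping each reachable node to its shortest distance.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already found a shorter path
        for neighbor, weight in graph.get(node, []):
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist

# Toy directed graph, invented for illustration.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(graph, "A"))  # → {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

The same priority-queue pattern underlies many graph-traversal tasks that come up when querying knowledge graphs at scale.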

🏖️ Benefits

• Competitive salary and benefits.
• Career development opportunities in a growing tech company.
• Continuous learning culture: mentorship, internal training, and certifications.
• Flexible, agile work environment (remote, hybrid, or on-site in Kraków).
• Office perks: great coffee, tea, fresh fruit, snacks, and a fun atmosphere.
• Flat management structure, where your voice matters.
• Regular team events and a social, supportive work culture.

Apply Now