
Telecommunications
Київстар is a Ukrainian telecommunications company that offers a wide range of services for private individuals, businesses, and the public sector. These services include mobile communication, home internet, television services, and international roaming. Київстар provides various packages and tariffs to suit different needs, such as SIM cards, eSIM, internet packages, and bundles combining internet, TV, and mobile services. The company also supports social initiatives, facilitates online doctor appointments, and provides technical support and customer service through multiple channels. Київстар is committed to offering quality telecommunication solutions and fostering digital connectivity in Ukraine.
1001 - 5000 employees
Founded 1994
📡 Telecommunications
August 14
Airflow
Ansible
Apache
AWS
Azure
BigQuery
Cloud
Distributed Systems
Docker
EC2
ETL
Flask
Google Cloud Platform
Grafana
Hadoop
Java
Jenkins
Kafka
Kubernetes
Microservices
Prometheus
Python
Ray
Shell Scripting
Spark
TensorFlow
Terraform
Go

About us:
• Kyivstar.Tech is a Ukrainian hybrid IT company and a resident of Diia.City.
• We are a subsidiary of Kyivstar, one of Ukraine's largest telecom operators.
• Our mission is to change lives in Ukraine and around the world by creating technological solutions and products that unleash the potential of businesses and meet users' needs.
• More than 500 KS.Tech specialists work daily across mobile and web solutions, as well as the design, development, support, and technical maintenance of high-performance systems and services.
• We believe in innovations that bring genuine, qualitative change, and we constantly challenge conventional approaches and solutions. Each of us embraces an entrepreneurial culture that keeps us evolving and creating something new.

We are hiring an MLOps Engineer specializing in Large Language Model (LLM) infrastructure to design and maintain the robust platform on which our AI models are developed, deployed, and monitored. As an MLOps Engineer, you will build the backbone of our machine learning operations, from scalable training pipelines to reliable deployment systems, ensuring that our NLP models (including LLMs) can be trained on large datasets and served to end users efficiently. This role sits at the intersection of software engineering, DevOps, and machine learning, and is crucial for accelerating our R&D in the Ukrainian LLM project. You will work closely with data scientists and software engineers to implement best-in-class infrastructure and workflows for continuous delivery of AI innovations.

Responsibilities:
• Design and implement modern, scalable ML infrastructure (cloud-native or on-premises) to support both experimentation and production deployment of NLP/LLM models. This includes setting up systems for distributed model training (leveraging GPUs or TPUs across multiple nodes) and high-throughput model serving (APIs, microservices).
• Develop end-to-end pipelines for model training, validation, and deployment. Automate the ML workflow from data ingestion and feature processing to model training and evaluation, using technologies like Docker and CI/CD pipelines to ensure reproducibility and reliability (a rough pipeline sketch follows this list).
• Collaborate with data scientists and ML engineers to design MLOps solutions that meet model performance and latency requirements. Architect deployment patterns (batch, real-time, streaming inference) appropriate for each use case (e.g., a real-time chatbot vs. offline analysis).
• Implement and uphold MLOps best practices, including automated testing of ML code, continuous integration/continuous deployment for model updates, and rigorous version control for code, data, and model artifacts. Ensure every model and dataset is properly versioned and reproducible.
• Set up monitoring and alerting for deployed models and data pipelines. Use tools to track model performance (latency, throughput) and accuracy drift in production. Implement logging and observability frameworks to quickly detect anomalies or degradations in model outputs.
• Manage and optimize our Kubernetes-based deployment environments. Containerize ML services and use orchestration (Kubernetes, Docker Swarm, or similar) to scale the model-serving infrastructure. Handle cluster provisioning, health, and upgrades, possibly using Helm charts to manage LLM services.
• Maintain infrastructure as code (e.g., Terraform, Ansible) for provisioning cloud resources and ML infrastructure, enabling reproducible and auditable changes to the environment. Ensure our infrastructure is scalable, cost-effective, and secure.
• Perform code reviews and provide guidance to other engineers (both MLOps and ML developers) on building efficient and maintainable pipelines. Troubleshoot issues across the ML lifecycle, from data-processing bottlenecks to model deployment failures, and continuously improve system robustness.
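For illustration only, the end-to-end training pipeline described above could be wired together as an Airflow DAG along the following lines (Airflow appears in the stack listed above). This is a minimal sketch assuming Airflow 2.x; the DAG id, task names, and placeholder callables are hypothetical, not the team's actual pipeline.

# Minimal sketch of a daily training pipeline as an Airflow 2.x DAG.
# All names, the schedule, and the task bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_data():
    # Pull the latest raw data into working storage (placeholder).
    ...


def train_model():
    # Launch the (possibly distributed) training job (placeholder).
    ...


def evaluate_and_register():
    # Evaluate on a held-out set and register the artifact only if it
    # improves on the current production model (placeholder logic).
    ...


with DAG(
    dag_id="llm_training_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_data", python_callable=ingest_data)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    register = PythonOperator(
        task_id="evaluate_and_register", python_callable=evaluate_and_register
    )

    ingest >> train >> register

In a setup like this, each task would typically run from a versioned container image, and the DAG definition itself would live in version control and be deployed through the same CI/CD pipeline as the rest of the infrastructure.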
Requirements:
• Experience & Background: 4+ years of experience in DevOps, MLOps, or ML infrastructure roles. Strong foundation in software engineering and DevOps principles as they apply to machine learning. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is preferred.
• Cloud & Infrastructure: Extensive experience with cloud platforms (AWS, GCP, or Azure) and with designing cloud-native applications for ML. Comfortable using cloud services for compute (EC2, GCP Compute Engine, Azure VMs), storage (S3, Cloud Storage), container registries, and serverless components where appropriate. Experience managing infrastructure with infrastructure-as-code tools such as Terraform or CloudFormation.
• Containerization & Orchestration: Proficiency with container technologies (Docker) and orchestration with Kubernetes. Ability to deploy, scale, and manage complex applications on Kubernetes clusters; experience with tools like Helm for Kubernetes package management. Knowledge of container security and of networking basics in distributed systems.
• CI/CD & Automation: Strong experience implementing CI/CD pipelines for ML projects. Familiarity with tools like Jenkins, GitLab CI, or GitHub Actions for automating the testing and deployment of ML code and models. Experience with specialized ML CI/CD (e.g., TensorFlow Extended (TFX), MLflow for model deployment) and GitOps workflows (Argo CD) is a plus.
• Programming & Scripting: Strong coding skills in Python, with experience writing pipelines or automation scripts for ML tasks. Familiarity with shell scripting and with one or more general-purpose languages (Go, Java, or C++) for infrastructure tooling. Ability to debug and optimize code for performance, both in data pipelines and in model inference code.
• ML Pipeline Knowledge: Solid understanding of the machine learning lifecycle and its tooling. Experience building or maintaining ML pipelines, for example with frameworks like Kubeflow or Airflow, or with custom solutions. Knowledge of model-serving frameworks (TensorFlow Serving, TorchServe, NVIDIA Triton, or custom Flask/FastAPI servers for ML); a minimal serving sketch follows this list.
• Monitoring & Reliability: Experience setting up monitoring for applications and models (using Prometheus, Grafana, CloudWatch, or similar) and implementing alerting for anomalies. Understanding of model performance metrics and how to track them in production (e.g., accuracy on a validation stream, response latency). Familiarity with A/B testing and canary deployments for model updates in production.
• Security & Compliance: Basic understanding of security best practices in ML deployments, including data encryption, access control, and handling sensitive data in compliance with regulations. Experience implementing authentication/authorization for model endpoints and ensuring that infrastructure complies with organizational security policies.
• Team Collaboration: Excellent collaboration skills for working with cross-functional teams. Experience working with data scientists to translate model requirements into scalable infrastructure. Strong documentation habits: system designs, operational runbooks, and lessons learned.
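As a loose illustration of the serving and monitoring requirements above (custom FastAPI servers, Prometheus metrics), here is a minimal sketch of an instrumented inference endpoint. The route, metric names, and the run_model stand-in are assumptions made for the example, not an actual Kyivstar.Tech service.

# Minimal sketch: a FastAPI inference endpoint with basic Prometheus
# instrumentation. The model call is a stand-in placeholder.
import time

from fastapi import FastAPI
from prometheus_client import Counter, Histogram, make_asgi_app
from pydantic import BaseModel

app = FastAPI()
app.mount("/metrics", make_asgi_app())  # scrape target for Prometheus

REQUESTS = Counter("inference_requests_total", "Total inference requests")
LATENCY = Histogram("inference_latency_seconds", "Inference latency in seconds")


class Prompt(BaseModel):
    text: str


def run_model(text: str) -> str:
    # Placeholder for the real model call (e.g. a Triton or TorchServe client).
    return text[::-1]


@app.post("/generate")
def generate(prompt: Prompt) -> dict:
    REQUESTS.inc()
    start = time.perf_counter()
    output = run_model(prompt.text)
    LATENCY.observe(time.perf_counter() - start)
    return {"output": output}

Run with an ASGI server such as uvicorn (e.g., uvicorn app:app); the /metrics path then becomes a scrape target for Prometheus, with dashboards in Grafana and alerts hung off the recorded latency histogram.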
What we offer:
• Office or remote: it's up to you. You can work from anywhere, and we will arrange your workplace.
• Remote onboarding.
• Performance bonuses for everyone (annual or quarterly, depending on the role).
• Employee training: the opportunity to learn through the company's library, internal resources, and partner programs.
• Health and life insurance.
• Wellbeing program and corporate psychologist.
• Reimbursement of Kyivstar mobile communication expenses.