
Telecommunications
Київстар is a Ukrainian telecommunications company that offers a wide range of services for private individuals, businesses, and the public sector. These services include mobile communication, home internet, television services, and international roaming. Київстар provides various packages and tariffs to suit different needs, such as SIM cards, eSIM, internet packages, and bundles combining internet, TV, and mobile services. The company also supports social initiatives, facilitates online doctor appointments, and provides technical support and customer service through multiple channels. Київстар is committed to offering quality telecommunication solutions and fostering digital connectivity in Ukraine.
August 15
Airflow
Amazon Redshift
Apache
AWS
Azure
BigQuery
Cloud
Docker
ETL
Google Cloud Platform
HDFS
Jenkins
Kafka
Kubernetes
MongoDB
MySQL
NoSQL
Postgres
Python
Selenium
Spark
SQL
Tableau
Terraform

Responsibilities
• Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information. Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
• Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to our language modeling efforts.
• Implement NLP/LLM-specific data processing: cleaning and normalization of text (e.g., filtering of toxic content, de-duplication, de-noising) and detection and removal of personal data.
• Build task-specific SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as teacher.
• Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
• Automate data processing workflows and ensure their scalability and reliability. Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles (see the sketch after this list).
• Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs. Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
• Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models. Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
• Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
• Manage data security, access, and compliance. Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
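To make the orchestration duty above concrete, here is a minimal sketch of an Airflow DAG (TaskFlow API, Airflow 2.x) that chains extraction, cleaning, and loading of a text corpus. The DAG name, task bodies, and daily schedule are illustrative assumptions, not the team's actual pipeline:

```python
# Illustrative Airflow DAG for a daily text-ingestion pipeline;
# every task body here is a hypothetical placeholder.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def text_corpus_pipeline():
    @task
    def extract() -> list[str]:
        # Placeholder: crawl pages or pull documents from a source API.
        return ["  Перший документ  ", "Перший документ", "Другий документ"]

    @task
    def clean(docs: list[str]) -> list[str]:
        # Normalize whitespace and drop exact duplicates.
        seen, cleaned = set(), []
        for doc in docs:
            text = " ".join(doc.split())
            if text not in seen:
                seen.add(text)
                cleaned.append(text)
        return cleaned

    @task
    def load(docs: list[str]) -> None:
        # Placeholder: write the cleaned corpus to a data lake or warehouse.
        print(f"Loaded {len(docs)} documents")

    load(clean(extract()))


text_corpus_pipeline()
```

In a real pipeline each step would also emit metrics and data-quality alerts, in line with the monitoring and alerting responsibilities listed above.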
• Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor’s or Master’s degree in Computer Science, Engineering, or related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage. • NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given our project’s focus. Understanding of FineWeb2 or similar processing pipelines approach. • Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data. • Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development. • Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL) including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as our NLP applications may require embedding storage and fast similarity search. • Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus. • Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks. • Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.
We offer
• Office or remote – it's up to you. You can work from anywhere, and we will arrange your workplace.
• Remote onboarding.
• Performance bonuses for everyone (annual or quarterly, depending on the role).
• Employee training: the opportunity to learn through the company's library, internal resources, and programs from partners.
• Health and life insurance.
• Wellbeing program and corporate psychologist.
• Reimbursement of Kyivstar mobile communication expenses.