Lead Data Engineer

October 12

Valtech

B2B • Marketing • SaaS

Valtech is a global digital agency focused on experience innovation. They transform businesses through a combination of technology, marketing, and data strategies, helping companies elevate their digital presence, drive commerce strategies, advance enterprise digital transformations, and unlock marketing and performance potential. They also use data and AI to help organizations harness the power of their information. With offices around the world, Valtech partners with businesses to shape their digital futures, offering a range of services and insights designed to enhance customer experiences.

5001 - 10000 employees

Founded 1997

🤝 B2B

☁️ SaaS

📋 Description

• Lead the technical direction of data engineering projects, including architectural design, technology selection, and standards definition
• Mentor and guide engineers, fostering a high-performance, collaborative culture
• Ensure adherence to best practices in data engineering, governance, and security
• Design and build streaming and batch data pipelines using Spark, Delta Lake, and modern frameworks
• Develop and optimize workflows for data ingestion, transformation, and curation across Databricks, Fabric, Snowflake, AWS, or GCP
• Leverage Microsoft Fabric capabilities such as OneLake, Lakehouses, Power BI integration, and governance tooling to build end-to-end data solutions
• Define and manage logical and physical data models to support analytics, AI, and business processes
• Oversee orchestration and automation using tools like Lakeflow, Airflow, dbt, or equivalent
• Establish and enforce data governance, lineage, and security best practices across cloud environments
• Collaborate with analysts, data scientists, and business stakeholders to ensure alignment of solutions with business needs
• Continuously improve systems for performance, reliability, and cost efficiency

🎯 Requirements

• Advanced knowledge of Apache Spark, Delta Lake, Unity Catalog, and streaming technologies
• Strong programming skills in Python and SQL
• Solid background in data modeling, governance, and security
• Proven ability to lead engineering teams and make architectural decisions at scale
• Experience in solution design, platform modernization, and cloud architecture
• Hands-on experience with one or more of Databricks, Fabric, Snowflake, AWS, or GCP
• Experience with Azure data services (Data Lake, Synapse, Data Factory, etc.) is considered highly relevant and transferable to Fabric
• Strong leadership, communication, and stakeholder management skills
• Ability to mentor, coach, and inspire team members
• Strategic mindset with a focus on scalability, innovation, and efficiency
• Experience with data architecture frameworks and enterprise design patterns (nice to have)
• Familiarity with CI/CD pipelines and infrastructure-as-code (Terraform, ARM, CloudFormation) (nice to have)
• Exposure to machine learning pipelines and AI-driven solutions (nice to have)

🏖️ Benefits

• Medical insurance
• Sports reimbursement budget
• Home office support
• A number of free psychological and legal consultations
• Maternity and paternity leave support
• Internal workshops and learning initiatives
• English language classes compensation
• Professional certifications reimbursement
• Participation in professional local and global communities
• Growth Framework to manage expectations and define the steps toward your chosen career path
• Mentoring program with the ability to become a mentor or a mentee to grow into a higher position
• Valtech Ukraine has a system of progressive benefits packages: the longer you stay with the company, the more benefits you get


Similar Jobs

October 7

Senior Data Engineer developing and maintaining big data pipelines at Stellar for AdTech solutions. Collaborating with cross-functional teams on data architecture and governance.

Apache • AWS • Azure • Cloud • Google Cloud Platform • Hadoop • Kafka • NoSQL • PySpark • Python • Spark • SQL

September 11

Data Engineer building and optimizing ETL/ELT pipelines on GCP for SpinLab's slot-gaming analytics. Focus on scalability, FinOps, and productionizing ML workflows.

Airflow • AWS • BigQuery • Cloud • Docker • ETL • Google Cloud Platform • Kubernetes • Linux • Python • SQL • Terraform

August 27

Design and maintain Python-based ETL and Azure cloud-native solutions for scalable analytics at Atlas Technica, a provider of outsourced IT management.

Azure • Cloud • Docker • ETL • Python • SQL • Unix • Vault

August 15

Data Engineer (NLP-focused) building ETL pipelines for Ukrainian text data, enabling NLP/LLM research across Kyivstar.Tech's language initiatives.

Airflow • Amazon Redshift • Apache • AWS • Azure • BigQuery • Cloud • Docker • ETL • Google Cloud Platform • HDFS • Jenkins • Kafka • Kubernetes • MongoDB • MySQL • NoSQL • Postgres • Python • Selenium • Spark • SQL • Tableau • Terraform
