
We help our customers digitize their businesses in the fields of data platforms and artificial intelligence.
11 - 50 employees
September 22
🇩🇪 Germany – Remote
⏰ Full Time
🟢 Junior
🟡 Mid-level
💰 Data Engineer
🚫👨‍🎓 No degree required
🗣️🇩🇪 German Required

• Build and operate modern data platforms with a focus on data engineering
• Develop scalable data pipelines and ETL/ELT processes on Microsoft Azure
• Develop stable batch and streaming pipelines with Azure Data Factory and Databricks/Spark (sketched below)
• Ensure reliability using best practices such as Delta Lake, automated workflows and VNet security
• Take responsibility for sub-projects in data integration, e.g. setting up data paths from operational systems to a data lakehouse
• Implement and optimize ETL/ELT processes and ensure data quality
• Use Azure data services for storage and processing (Data Lake, Azure SQL, Databricks, MS Fabric)
• Participate in setting up CI/CD pipelines (Azure DevOps or GitHub) for automated deployments
• Collaborate closely with Data Scientists and Analytics teams to provide data for analytics and ML
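For illustration only, not part of the posting: a minimal PySpark Structured Streaming sketch of the kind of batch/streaming pipeline described above, reading JSON events from a landing path and appending them to a Delta Lake table. The paths, schema and app name are assumptions, and Delta support is assumed to be preconfigured, as it is on Databricks.

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Hypothetical schema for an operational "orders" event feed.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

# Read newline-delimited JSON files as they land in the (assumed) landing zone.
events = spark.readStream.schema(schema).json("/mnt/landing/orders/")

# Append to a Delta table; the checkpoint makes the stream restartable.
(events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders/")
    .outputMode("append")
    .start("/mnt/lakehouse/bronze/orders"))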
• 2-4 years of experience in data engineering or data platform development
• Solid knowledge of SQL and programming (Python or Scala)
• Experience with Azure Data Services (e.g. Azure Data Factory, Azure Databricks, Synapse)
• Familiarity with data modeling (e.g. Star Schema, Kimball; see the example below)
• Experience in data platform monitoring & performance optimization
• Experience with version control (Git) and DevOps practices (Continuous Integration, Infrastructure as Code)
• Communication & collaboration: working with data scientists, analysts and clients, and translating technical concepts for non-technical audiences
• Problem solving & analytical thinking: optimizing data streams, identifying bottlenecks and finding creative solutions
• Language skills: fluent German, good English
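Again for illustration only: a minimal Kimball-style star schema query in Spark SQL, joining a hypothetical fact_sales fact table to a dim_date dimension on a surrogate date_key. All table and column names are assumptions, and both tables are assumed to be registered in the catalog.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

# Monthly revenue from a fact table joined to a conformed date dimension.
revenue_by_month = spark.sql("""
    SELECT d.year, d.month, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_date d ON f.date_key = d.date_key
    GROUP BY d.year, d.month
    ORDER BY d.year, d.month
""")
revenue_by_month.show()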
September 13
Data Engineer building cargo-tracking models and back-end pipelines for Kpler's maritime trade intelligence.
AWS
Cloud
Docker
ElasticSearch
Flask
Kafka
Kubernetes
Python
Scala
Spark
SQL
Terraform
August 28
Data Engineer – AI building scalable data backends and operationalising models into production at smartclip
🇩🇪 Germany – Remote
💰 Series A on 2008-12
⏰ Full Time
🟡 Mid-level
🟠 Senior
💰 Data Engineer
Apache
AWS
Docker
ETL
Google Cloud Platform
Grafana
Hadoop
Java
JavaScript
Jenkins
Kubernetes
Linux
Node.js
Open Source
Prometheus
Python
React
Scala
Shell Scripting
Spark
SQL
TypeScript
August 20
Data Engineer collaborates with Operations and Data Science to evolve data models and pipelines. Fully remote within Germany, with Würzburg or Berlin offices.
🗣️🇩🇪 German Required
ETL
Pandas
Python
August 14
Collaborate with Data Engineering and Data Science to enhance data models and pipelines. Enable AI-powered sustainability in retail through scalable data infrastructure.
🗣️🇩🇪 German Required
ETL
Pandas
Python
August 8
Join virtual7 as a Data Warehouse ETL developer driving digitalization in the public sector.
🗣️🇩🇪 German Required
ETL
Oracle
SQL