Data Engineer

Job not on LinkedIn

September 22

🗣️🇩🇪 German Required

Apply Now

paiqo

We help our customers digitize their businesses in the field of data platform and artificial intelligence.

11 - 50 employees

📋 Description

• Build and operate modern data platforms with a focus on data engineering
• Develop scalable data pipelines and ETL/ELT processes on Microsoft Azure
• Build stable batch and streaming pipelines with Azure Data Factory and Databricks/Spark
• Ensure reliability using best practices such as Delta Lake, automated workflows, and VNet security
• Take responsibility for sub-projects in data integration, e.g. setting up the data path from operational systems to a data lakehouse
• Implement and optimize ETL/ELT routes and ensure data quality
• Use Azure data services for storage and processing (Data Lake, Azure SQL, Databricks, MS Fabric)
• Participate in setting up CI/CD pipelines (Azure DevOps or GitHub) for automated deployments
• Collaborate closely with Data Scientists and Analytics teams to provide data for analytics and ML

🎯 Requirements

• 2–4 years of experience in data engineering or data platform development
• Solid knowledge of SQL and programming (Python or Scala)
• Experience with Azure Data Services (e.g. Azure Data Factory, Azure Databricks, Synapse)
• Familiarity with data modeling (e.g. Star Schema, Kimball)
• Experience in data platform monitoring and performance optimization
• Experience with version control (Git) and DevOps practices (Continuous Integration, Infrastructure as Code)
• Communication & collaboration: working with data scientists, analysts, and clients, and translating technical concepts for non-technical people
• Problem solving & analytical thinking: optimizing data streams, identifying bottlenecks, and finding creative solutions
• Language skills: fluent German, good English


Similar Jobs

September 13

Data Engineer building cargo-tracking models and back-end pipelines for Kpler's maritime trade intelligence.

AWS • Cloud • Docker • ElasticSearch • Flask • Kafka • Kubernetes • Python • Scala • Spark • SQL • Terraform

August 28

Data Engineer – AI building scalable data backends and operationalising models into production at smartclip

Apache • AWS • Docker • ETL • Google Cloud Platform • Grafana • Hadoop • Java • JavaScript • Jenkins • Kubernetes • Linux • Node.js • Open Source • Prometheus • Python • React • Scala • Shell Scripting • Spark • SQL • TypeScript

August 20

Data Engineer collaborates with Operations and Data Science to evolve data models and pipelines. Fully remote within Germany, with Würzburg or Berlin offices.

🗣️🇩🇪 German Required

ETL • Pandas • Python

August 14

Collaborate with Data Engineering and Data Science to enhance data models and pipelines. Enable AI-powered sustainability in retail through scalable data infrastructure.

🗣️🇩🇪 German Required

ETL • Pandas • Python

August 8

Join virtual7 as a Data Warehouse ETL developer driving digitalization in the public sector.

🗣️🇩🇪 German Required

ETL • Oracle • SQL

Built by Lior Neu-ner. I'd love to hear your feedback — Get in touch via DM or support@remoterocketship.com