Senior Data Engineer


September 17


Rithum

eCommerce • Marketing • AI

Rithum is a comprehensive e-commerce platform designed to empower brands, retailers, and suppliers to effectively launch and scale their businesses. Offering a variety of solutions including multichannel marketing, fulfillment management, and AI-driven supplier discovery, Rithum helps users optimize their online presence and streamline operations across a vast network of marketplaces. With a focus on flexibility and efficiency, Rithum aims to transform commerce by creating profitable and engaging shopping experiences.

501 - 1000 employees

Founded 1997

🛍️ eCommerce

📋 Description

• Design and implement scalable ETL/ELT workflows for batch and streaming data using AWS primitives (S3, Kinesis, Glue, Redshift, Athena)
• Architect and maintain cloud-native data platforms with automated ingestion, transformation, and governance using dbt, Apache Spark, Delta Lake, Airflow, and Databricks
• Work with Product, BI, Support, Data Scientists, and Engineers to support data needs and resolve technical challenges
• Optimize data lake/lakehouse infrastructure to support AI workloads and large-scale analytics
• Ensure data quality, lineage, and observability, and develop and enforce data governance and privacy protections
• Partner with Data Scientists to optimize pipelines for model training, inference, and continuous learning
• Build self-healing data pipelines with AI-driven error detection, root-cause analysis, and automated remediation
• Implement intelligent data lineage tracking and AI-assisted data discovery systems with natural language interfaces
• Leverage AI coding assistants to accelerate development, generate complex SQL, and optimize data pipeline code
• Develop data quality monitoring (anomaly detection, profiling) and ML-driven pipeline orchestration
• Generate and maintain living documentation and participate in the full software development lifecycle
• Mentor junior engineers, lead tool evaluation and adoption, and drive innovation in data architecture
• Participate in on-call rotation as needed

🎯 Requirements

• 3+ years of experience in data engineering, including building and maintaining large-scale data pipelines
• Extensive experience with SQL RDBMSs (SQL Server or similar), including dimensional modeling with star schemas and foundational data warehousing concepts
• Hands-on experience with AWS services such as Redshift, Athena, S3, Kinesis, Lambda, and Glue
• Experience with dbt, Databricks, or similar data platform tooling
• Experience working with structured and unstructured data and implementing data quality frameworks
• Demonstrated experience using AI coding tools (GitHub Copilot, Cursor, or similar), with an understanding of prompt engineering
• Understanding of AI/ML concepts and data requirements, including feature stores, model versioning, and real-time inference pipelines
• Excellent communication and collaboration skills
• Preferred: Bachelor's or Master's degree in Computer Science, Engineering, or a related field
• Preferred: Experience in a SaaS or e-commerce environment with AI/ML products
• Preferred: Knowledge of stream processing frameworks such as Kafka, Flink, or Spark Structured Streaming
• Preferred: Familiarity with LLMOps and AI model deployment patterns in data infrastructure
• Preferred: Experience with AI-powered data tools such as automated data catalogs, intelligent monitoring systems, or AI-assisted query optimization
• Preferred: Experience with containerization and orchestration tools like Docker and Kubernetes
• Willingness to travel up to 10%

🏖️ Benefits

• Medical, dental, and psychology benefits
• Life insurance and disability benefits
• Competitive time-off package with 25 days of PTO, 13 company-paid holidays, 2 wellness days, and 1 paid volunteer day
• Voucher program for transportation, meals, and childcare
• Work from the Madrid co-working space, if desired
• Remote working stipend: €40/month, automatically applied in payroll
• Access to wellbeing tools such as the Calm app and an Employee Assistance Program
• Professional development stipend and learning and development offerings
• Charitable contribution match per team member
• Industry-competitive compensation and total rewards benefits
• Remote-first working conditions and generous time off


Similar Jobs

August 20

Mid Data Engineer building data architecture and storage solutions for Volkswagen Group Services. Lead technical data strategy and implement cloud-based data platforms.

AWS • Azure • Cloud • NoSQL • Spark • SQL • Vault

August 12

Join Prima’s Engineering team to bridge ML/data science with engineering. Build data products and pipelines for motor insurance growth.

Airflow • Amazon Redshift • Apache • AWS • Cloud • Kafka • NoSQL • Numpy • Open Source • Pandas • Postgres • Python • RDBMS • Scikit-Learn • Spark • SQL

August 7

Data Engineer developing Azure and Big Data solutions for a global IT company. Collaborates in a skilled development team with focus on CI/CD and data integrity.

Azure • Kafka • Spark • SQL • SSIS

July 30

Seeking a Senior Data Engineer to architect scalable data pipelines using Google Cloud Platform and Databricks.

Airflow • Ansible • Apache • BigQuery • Cloud • Docker • ETL • Google Cloud Platform • Grafana • Java • Jenkins • Kafka • Kubernetes • Microservices • Prometheus • Python • RabbitMQ • Scala • Spark • Terraform

July 11

Join Volkswagen as a Mid Data Engineer in Barcelona to enhance enterprise data quality, integrity, and accessibility.

Airflow • ETL • MySQL • Oracle • Postgres • Python • SQL • Vault
