Data Engineer, Technical Referent

September 19

Apply Now

dLocal

Fintech • eCommerce

dLocal is a payment solutions platform that connects global merchants with consumers in emerging markets. The company offers a variety of services including payins, payouts, and real-time fraud management to enhance payment processes for businesses. As a Nasdaq-listed company, dLocal supports transactions in over 40 countries, enabling local payment methods and currencies for billions of emerging market consumers. dLocal’s platform simplifies cross-border payments, providing a comprehensive solution for eCommerce, shared economy, digital media, financial services, and more, with an emphasis on high-growth markets such as Africa, Asia, and Latin America.

201 - 500 employees

Founded 2016

💳 Fintech

🛍️ eCommerce

💰 $150M Venture Round (April 2021)

📋 Description

• Architect and evolve scalable infrastructure to ingest, process, and serve large volumes of data efficiently

• Lead improvements to existing frameworks and pipelines to ensure performance, reliability, and cost-efficiency

• Establish and maintain robust data governance practices that empower cross-functional teams to access and trust data

• Transform raw datasets into clean, usable formats for analytics, modeling, and reporting

• Investigate and resolve complex data issues, ensuring data accuracy and system resilience

• Maintain high standards for code quality, testing, and documentation, with a strong focus on reproducibility and observability

• Stay current with industry trends and emerging technologies to continuously raise the bar on our engineering practices

• Act as a go-to expert: mentor other engineers and influence architectural and technical decisions across the company

🎯 Requirements

• Bachelor’s degree in Computer Engineering, Data Engineering, or a related technical field

• Proven experience in data engineering or backend software development, ideally in cloud-native environments

• Deep expertise in Python and SQL

• Strong experience with distributed data processing frameworks such as Apache Spark

• Solid understanding of cloud platforms (AWS and GCP)

• Strong analytical thinking and problem-solving skills

• Able to work autonomously and collaboratively, balancing hands-on work with technical leadership

• Nice to have: experience designing and maintaining DAGs with Apache Airflow or similar orchestration tools

• Nice to have: familiarity with modern data and table formats (e.g., Parquet, Delta Lake, Iceberg)

• Nice to have: Master’s degree in a relevant field

• Nice to have: prior experience mentoring engineers and influencing architectural decisions at scale

🏖️ Benefits

• Remote work: work from anywhere or from one of our offices around the globe!*

• Flexibility: we have flexible schedules and we are driven by performance.

• Fintech industry: work in a dynamic, ever-evolving environment with plenty to build and room to boost your creativity.

• Referral bonus program: our internal talents are the best recruiters. Refer someone ideal for a role and get rewarded.

• Learning & development: get access to a Premium Coursera subscription.

• Language classes: we provide free English, Spanish, or Portuguese classes.

• Social budget: you'll get a monthly budget to chill out with your team (in person or remotely) and deepen your connections!

• dLocal Houses: want to rent a house to spend a week anywhere in the world coworking with your team? We’ve got your back!

• Additional benefits tailored per country, including travel, health, and learning perks.


Similar Jobs

September 17

Build and maintain scalable ETL/ELT pipelines and lakehouse infrastructure; enable AI-driven analytics and data governance for Rithum's commerce platform.

Airflow, Amazon Redshift, Apache, AWS, Cloud, Docker, ETL, Kafka, Kubernetes, RDBMS, Spark, SQL

September 17

Senior Data Engineer building AI-enhanced data infrastructure for Rithum’s commerce network. Design scalable ETL/ELT pipelines and mentor engineers.

Airflow, Amazon Redshift, Apache, AWS, Cloud, Docker, ETL, Kafka, Kubernetes, RDBMS, Spark, SQL

August 20

Mid Data Engineer building data architecture and storage solutions for Volkswagen Group Services. Lead technical data strategy and implement cloud-based data platforms.

AWS, Azure, Cloud, NoSQL, Spark, SQL, Vault

August 12

Join Prima’s Engineering team to bridge ML/data science with engineering. Build data products and pipelines for motor insurance growth.

Airflow, Amazon Redshift, Apache, AWS, Cloud, Kafka, NoSQL, Numpy, Open Source, Pandas, Postgres, Python, RDBMS, Scikit-Learn, Spark, SQL

August 7

Data Engineer developing Azure and Big Data solutions for a global IT company. Collaborate in a skilled development team with a focus on CI/CD and data integrity.

Azure, Kafka, Spark, SQL, SSIS

Built by Lior Neu-ner. I'd love to hear your feedback. Get in touch via DM or support@remoterocketship.com