Data Engineer


October 14

Pavago

Recruitment • B2B • HR Tech

Pavago is a company specializing in offshore recruitment, offering comprehensive solutions to secure top global talent for operations, sales, marketing, and more at significantly reduced costs. Pavago provides an all-in-one service that includes talent acquisition, hiring, onboarding, payroll management, and compliance assurance. With a focus on sourcing the top 1% of offshore talent, Pavago simplifies the recruitment process through a proven five-step method that ensures cultural and skill fit. Catering to small and medium-sized businesses, Pavago acts as a partner for operational and administrative functions, letting clients focus on growth while benefiting from dedicated support and ongoing training packages.

đź“‹ Description

• Build and maintain ETL/ELT pipelines using Python, SQL, or Scala.
• Orchestrate workflows with Airflow, Prefect, Dagster, or Luigi (a minimal sketch follows this list).
• Ingest structured and unstructured data from APIs, SaaS platforms, relational databases, and streaming sources.
• Manage data warehouses (Snowflake, BigQuery, Redshift).
• Design schemas (star/snowflake) optimized for analytics.
• Implement partitioning, clustering, and query performance tuning.
• Implement validation checks, anomaly detection, and logging for data integrity.
• Enforce naming conventions, lineage tracking, and documentation (dbt, Great Expectations).
• Maintain compliance with GDPR, HIPAA, or industry-specific regulations.
• Develop and monitor streaming pipelines with Kafka, Kinesis, or Pub/Sub.
• Ensure low-latency ingestion for time-sensitive use cases.
• Partner with analysts and data scientists to provide curated, reliable datasets.
• Support BI teams in building dashboards (Tableau, Looker, Power BI).
• Document data models and pipelines for knowledge transfer.
• Containerize data services with Docker and orchestrate them in Kubernetes.
• Automate deployments via CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI).
• Manage cloud infrastructure using Terraform or CloudFormation.
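
To make the orchestration bullet concrete, here is a minimal sketch of a daily ELT workflow as an Airflow DAG, assuming Airflow 2.4+. The DAG id, task names, and the source/warehouse details are hypothetical placeholders, not part of this posting.

```python
# Minimal sketch of a daily ELT workflow in Airflow (2.4+).
# All names (dag_id, tasks, source API, warehouse table) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    """Pull one day's records from a (hypothetical) source API."""
    ...  # e.g. requests.get("https://api.example.com/orders", params=...)


def load_to_warehouse(**context):
    """Load the extracted batch into a staging table in the warehouse."""
    ...  # e.g. an INSERT/COPY issued via the relevant provider hook


with DAG(
    dag_id="orders_daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # one scheduled run per day
    catchup=False,      # skip backfilling historical runs
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load  # load runs only after extraction succeeds
```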

🎯 Requirements

• 3+ years in data engineering or back-end development.
• Strong Python and SQL skills (a brief illustrative check follows this list).
• Experience with at least one major data warehouse (Snowflake, Redshift, BigQuery).
• Familiarity with pipeline orchestration tools (Airflow, Prefect).
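
As one illustration of the Python-plus-SQL work described above (for example, the validation checks mentioned in the description), here is a minimal, self-contained sketch of a row-count integrity check. sqlite3 stands in for a real warehouse connection, and the table and function names are hypothetical.

```python
# Minimal sketch of a data-integrity check in Python + SQL.
# sqlite3 stands in for a real warehouse connection; names are hypothetical.
import sqlite3


def check_min_row_count(conn, table: str, expected_min: int) -> None:
    """Fail loudly if a table has fewer rows than expected."""
    # Note: only interpolate trusted, hard-coded table names into SQL.
    (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    if count < expected_min:
        raise ValueError(f"{table}: found {count} rows, expected >= {expected_min}")


# Demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stg_orders (order_id INTEGER)")
conn.executemany("INSERT INTO stg_orders VALUES (?)", [(i,) for i in range(100)])
check_min_row_count(conn, "stg_orders", expected_min=50)  # passes silently
```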

Similar Jobs

October 14

Data Engineer to work on challenging projects with major clients, joining the talent pool (Banco de Talentos) for future opportunities.

🗣️🇧🇷🇵🇹 Portuguese Required

AWS • Azure • Cloud • ETL • Google Cloud Platform • SQL • Tableau

October 11

Data Engineer developing scalable data pipelines for international clients using Azure Cloud and Databricks. Collaborating with interdisciplinary teams to apply DataOps best practices.

🗣️🇧🇷🇵🇹 Portuguese Required

Azure • Cloud • Kubernetes • Linux • Spark • Vault

October 8

Data Architect leading governance and technical solutions at EVT, working with a collaborative team to promote digital transformation for corporate clients.

🗣️🇧🇷🇵🇹 Portuguese Required

AWS • Azure • Cloud • Python • Terraform • Unity

October 8

Extractta

201 - 500 employees

Senior Data Engineer responsible for building data pipelines and ensuring data governance at Extractta, working in a collaborative environment focused on continuous improvement.

🗣️🇧🇷🇵🇹 Portuguese Required

Airflow • Amazon Redshift • AWS • Azure • BigQuery • Cloud • ETL • Google Cloud Platform • Java • Kafka • NoSQL • Python • Scala • Spark • SQL

October 6

Data Engineer at Paschoalotto, designing and maintaining reliable ETL/ELT data pipelines for innovative digital customer solutions. Collaborating with analysts and architects to ensure data quality and governance.

🗣️🇧🇷🇵🇹 Portuguese Required

Airflow • Apache • Azure • ETL • Hadoop • Kafka • MongoDB • NoSQL • Postgres • Python • RabbitMQ • Redis • Spark • SQL • .NET
