Data Engineer Master

Job not on LinkedIn

Yesterday

🗣️🇧🇷🇵🇹 Portuguese Required


CI&T

Artificial Intelligence • Cloud Services • SaaS

CI&T is a global tech transformation specialist that helps organizations navigate their technology journeys. Its services span application modernization, cloud solutions, AI-driven data analytics, and customer experience, enabling businesses to accelerate growth and maximize operational efficiency. The company also emphasizes digital product design, strategy consulting, and immersive experiences, providing robust support for enterprises across industries.

5,001-10,000 employees

Founded 1995

🤖 Artificial Intelligence

☁️ SaaS

💰 $5.5M Venture Round in April 2014

📋 Description

• Define, architect, and implement scalable data platforms and end-to-end ELT pipelines aligned with modern Lakehouse principles.
• Collaborate closely with cross-functional teams across the US, Colombia, and Brazil to ensure our data ecosystem is reliable, future-proof, and compliant with enterprise architecture standards.
• Demonstrate deep technical expertise, strong architectural thinking, and the ability to influence and mentor engineering teams.
• Use fluent English to communicate with global stakeholders, present architectural recommendations, and ensure alignment across distributed teams.

🎯 Requirements

• Expert-level SQL, with a proven ability to optimize, refactor, and validate large-scale transformations.
• Advanced Python (or a similar language) for automation, orchestration, and pipeline development.
• Hands-on architecture and engineering experience with Snowflake, including performance tuning, security, data governance, dynamic tables, and workload management.
• Advanced dbt expertise, including transformation logic, testing, documentation, deployment patterns, and CI/CD integration.
• Proven production experience with Data Vault 2.0, including Hubs, Links, Satellites, PIT tables, multi-active satellites, and Business Vault patterns.
• Experience with AutomateDV or equivalent frameworks is a strong asset.
• Deep understanding of Data Lakehouse architectures, including medallion zone structures, incremental ingestion, and open table formats (Iceberg, Delta; Hudi is a plus).
• Solid foundation in data modeling best practices, including normalized models, dimensional modeling, historization, and scalable enterprise patterns.
• Ability to translate complex business requirements into robust, extensible architectural designs.
• Experience orchestrating ELT/ETL workflows with Airflow, including DAG design, dependency strategies, and dynamic task generation.
• Familiarity with modern orchestration frameworks such as Prefect, Dagster, or AWS Glue.
• Comfortable with CI/CD pipelines using GitHub Actions or similar tools, integrating dbt testing and Snowflake deployments.
• Understanding of infrastructure automation, configuration-as-code, and environment management.
• Experience with data observability platforms (Monte Carlo, Datafold, Great Expectations) is a plus.
• Knowledge of Docker or Kubernetes for reproducibility and scalable deployments is a plus.
• Familiarity with Kafka, AMQP, or other message brokers and event-driven architectures is a plus.
• Experience with REST/GraphQL APIs, streaming ingestion (Kinesis, Firehose), or real-time processing is a plus.
• Experience supporting hybrid architectures, multi-cloud designs, or enterprise Lakehouse strategies is a plus.
• Passion for modern data architecture, distributed systems, and scalable design.
• A natural ability to mentor engineers, elevate teams, and drive technical excellence.
• Thrive in collaborative, multicultural environments, bringing a team-first mindset.
• Bring a data-driven, continuous-improvement mindset and be comfortable challenging the status quo.
• Align with our culture of diversity, inclusion, and respectful collaboration.

🏖️ Benefits

• Health and dental insurance
• Meal and food allowance
• Childcare assistance
• Extended paternity leave
• Partnerships with gyms and health & wellness professionals via Wellhub (Gympass) and TotalPass
• Profit sharing and results participation (PLR)
• Life insurance
• Continuous learning platform (CI&T University)
• Employee discount club
• Free online platform dedicated to physical, mental, and overall well-being
• Pregnancy and responsible parenting course
• Partnerships with online learning platforms
• Language learning platform
• And many more!

Apply Now

Similar Jobs

Yesterday

Data Engineer working remotely to develop and manage data transformation pipelines for Vobi, contributing to the digitalization of Brazil's construction industry.

Airflow, AWS, PySpark, Python, Spark, SQL

Yesterday

Senior Data Engineering Analyst at Serasa Experian developing software solutions and ensuring data pipeline quality. Collaborating within a squad to design and implement high-performance data architectures.

🗣️🇧🇷🇵🇹 Portuguese Required

Amazon Redshift, AWS, Docker, EC2, Grafana, Jenkins, Kafka, Kubernetes, Python, Scala

2 days ago

Data Engineer tasked with full data flow management, from ingestion to cloud storage, for media players. Mentoring the team and developing data governance frameworks.

🗣️🇧🇷🇵🇹 Portuguese Required

Airflow, Amazon Redshift, AWS, Azure, BigQuery, Cloud, ETL, Google Cloud Platform, IoT, Python, Spark, SQL

3 days ago

Data Operations Engineer focused on supporting data pipeline development and maintenance. Engaging in analytics and machine learning workflows while growing technical expertise.

AWS, Azure, Cloud, Python, SQL

3 days ago

DataOps Engineer building and managing world-class data ecosystems at Gipsyy. Ensuring robust and secure data pipelines for innovative analytics and machine learning teams.

🗣️🇧🇷🇵🇹 Portuguese Required

Airflow, Apache, AWS, Google Cloud Platform, Grafana, Jenkins, Kafka, Prometheus, Python, Spark, SQL, Terraform

Built by Lior Neu-ner. I'd love to hear your feedback — Get in touch via DM or support@remoterocketship.com