Data Tech Lead – DataOps


November 14

🗣️🇧🇷🇵🇹 Portuguese Required


CI&T

Artificial Intelligence • Cloud Services • SaaS

CI&T is a global tech transformation specialist that helps organizations navigate their technology journeys. Its services span application modernization, cloud solutions, AI-driven data analytics, and customer experience, helping businesses accelerate growth and improve operational efficiency. The company also emphasizes digital product design, strategy consulting, and immersive experiences, supporting enterprises across a range of industries.

5,001-10,000 employees

Founded 1995

🤖 Artificial Intelligence

☁️ SaaS

💰 $5.5M Venture Round (April 2014)

📋 Description

• Design, build, and optimize data ingestion and transformation pipelines using Databricks and other modern cloud-based data platforms (a minimal pipeline sketch follows this list).
• Implement and enforce data contracts, ensuring schema consistency and compatibility across services.
• Develop and integrate data quality checks (validation, anomaly detection, reconciliation) into pipelines.
• Apply DataOps best practices, including CI/CD for data workflows, observability, monitoring, and automated testing.
• Collaborate with product, analytics, and engineering teams to understand requirements and deliver reliable, production-grade data solutions.
• Drive improvements in data performance, cost optimization, and scalability.
• Contribute to architectural decisions around data modeling, governance, and integration patterns.
• Mentor junior data engineers and developers, providing code reviews, knowledge sharing, and best-practice guidance.
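To make the pipeline and quality-gate responsibilities concrete, below is a minimal PySpark and Delta Lake sketch of the kind of work this role involves. It is an illustration only: the table names, landing path, column names, and the 99% pass-rate threshold are hypothetical, not details from the posting.

```python
# Minimal sketch, assuming a Databricks-style environment with Delta Lake.
# All names and thresholds here are hypothetical illustrations.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingestion").getOrCreate()

# Ingest raw JSON landed by an upstream service (hypothetical path).
raw = spark.read.json("/mnt/landing/orders/")

# Transform: normalize types and drop rows that fail basic validation.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("order_id").isNotNull())
)

# Inline data quality gate: abort the run if too many rows were dropped.
total, kept = raw.count(), orders.count()
if total > 0 and kept / total < 0.99:  # 99% threshold is an assumption
    raise ValueError(f"Quality gate failed: {kept}/{total} rows passed validation")

# Write to a Delta table; Delta's schema enforcement rejects incompatible writes.
orders.write.format("delta").mode("append").saveAsTable("analytics.orders")
```

In a production DataOps setup, a gate like this would more likely live in a dedicated quality framework (Great Expectations, Soda, or dbt tests, all named in the requirements below) and run under CI/CD with monitoring, rather than inline in the job.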

🎯 Requirements

• Proven experience building and managing large-scale data pipelines in Databricks (PySpark, Delta Lake, SQL).
• Strong programming skills in Python and SQL for data processing and transformation.
• Deep understanding of ETL/ELT frameworks, data warehousing, and distributed data processing.
• Solid experience with data pipeline orchestration tools (e.g., Airflow, Dagster, Prefect, dbt, Azure Data Factory).
• Hands-on experience with modern DataOps practices: version control (Git), CI/CD pipelines, automated testing, and infrastructure-as-code.
• Familiarity with cloud platforms (AWS, Azure, or GCP) and related data services.
• Strong problem-solving skills with the ability to troubleshoot performance, scalability, and reliability issues.
• Advanced English proficiency is essential.
• Strong understanding of the business domain and end-to-end business considerations.
• Experience with data contracts, schema evolution, and ensuring compatibility across services (see the schema-check sketch after this list).
• Experience with Databricks Asset Bundles.
• Expertise in data quality frameworks (e.g., Great Expectations, Soda, dbt tests, or custom-built solutions).
• Familiarity with dbt, Atlan, and Soda.
• Experience integrating with Power BI.
• Experience with data streaming technologies and real-time data processing.
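As one way to picture the data-contract requirement, the sketch below validates a DataFrame's schema against an agreed contract before downstream use. The contract fields, table name, and compatibility policy (extra columns tolerated, missing or retyped columns rejected) are assumptions made for the example.

```python
# Hypothetical data-contract check; in practice the expected schema would
# come from a shared registry or contract repository, not be inlined.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql.types import (
    StructType, StructField, StringType, DecimalType, TimestampType,
)

EXPECTED = StructType([
    StructField("order_id", StringType(), nullable=False),
    StructField("amount", DecimalType(18, 2), nullable=True),
    StructField("order_ts", TimestampType(), nullable=True),
])

def check_contract(df: DataFrame, expected: StructType) -> None:
    """Fail fast if the producer's schema drifts from the agreed contract.

    Extra columns are tolerated (backward-compatible additions); missing
    or retyped columns are treated as breaking changes.
    """
    actual = {f.name: f.dataType for f in df.schema.fields}
    for field in expected.fields:
        if field.name not in actual:
            raise ValueError(f"Contract violation: missing column '{field.name}'")
        if actual[field.name] != field.dataType:
            raise ValueError(
                f"Contract violation: column '{field.name}' is "
                f"{actual[field.name]}, expected {field.dataType}"
            )

spark = SparkSession.builder.getOrCreate()
check_contract(spark.table("analytics.orders"), EXPECTED)
```

Running a check like this at pipeline start-up or in CI means an incompatible producer change fails fast instead of silently corrupting downstream tables.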

🏖️ Benefits

• Health and dental insurance
• Meal and food allowance
• Childcare assistance
• Extended paternity leave
• Partnerships with gyms and health & wellness professionals via Wellhub (Gympass) and TotalPass
• Profit sharing and results participation (PLR)
• Life insurance
• Continuous learning platform (CI&T University)
• Employee discount club
• Free online platform dedicated to physical, mental, and overall well-being
• Pregnancy and responsible parenting course
• Partnerships with online learning platforms
• Language learning platform
• And many more!


Similar Jobs

November 13

Data Engineer responsible for creating and managing data pipelines and architectures for a corporate data platform. Join a collaborative environment with opportunities for personal and professional growth.

🗣️🇧🇷🇵🇹 Portuguese Required

AWS • Azure • ETL • Google Cloud Platform • PySpark • Python • Spark • SQL

November 13

Data Engineer responsible for developing and optimizing data pipelines in Azure Cloud for banking projects. Focus on credit loans with a 100% home-office work model.

🗣️🇧🇷🇵🇹 Portuguese Required

Azure • Cloud • ETL • Python

November 13

Data Engineer developing and maintaining efficient data pipelines using Apache Spark at Serasa Experian, collaborating with multidisciplinary teams on data governance and architecture improvements.

🗣️🇧🇷🇵🇹 Portuguese Required

Airflow • Apache • AWS • Azure • Cloud • ETL • Google Cloud Platform • Python • Scala • Spark

November 13

Senior SAP consultant handling data migration projects at IT2YOU. Requires advanced English or Spanish and experience in SAP rollout.

🗣️🇧🇷🇵🇹 Portuguese Required

🗣️🇪🇸 Spanish Required

November 12

Data Engineer role responsible for cleaning, transforming, and analyzing data for credit solutions at OPEA. Working with data pipelines, reporting, and collaborating on strategic business decisions.

🗣️🇧🇷🇵🇹 Portuguese Required

Airflow • Amazon Redshift • Apache • AWS • Java • Python • Scala • SQL
