Data Architect, AWS


🗣️🇧🇷🇵🇹 Portuguese Required


CI&T

Artificial Intelligence • Cloud Services • SaaS

CI&T is a global tech transformation specialist focusing on helping organizations navigate their technology journey. With services spanning from application modernization and cloud solutions to AI-driven data analytics and customer experience, CI&T empowers businesses to accelerate their growth and maximize operational efficiency. The company emphasizes digital product design, strategy consulting, and immersive experiences, ensuring a robust support system for enterprises in various industries.

5001 - 10000 employees

Founded 1995

🤖 Artificial Intelligence

☁️ SaaS

💰 $5.5M Venture Round on 2014-04

📋 Description

• Support clients in learning and using AWS services, including Athena, Glue, Lambda, S3, DynamoDB, NoSQL, Relational Database Service (RDS), Amazon EMR, OpenSearch, and Amazon Redshift.
• Perform on-site technical consulting when required: understand client requirements and develop data and analytics service offerings.
• Demonstrate and apply AWS services to enable distributed computing solutions in private and public cloud environments. In consulting engagements you will support migrations of existing applications to the cloud and design and develop new applications that utilize AWS services.
• Collaborate with AWS engineering and support teams to report partner and customer needs and feedback, including sharing real-world implementation cases and recurring issues and suggesting improvements and new features for the technology roadmap.
• Share real-world implementations and recommend new features that facilitate adoption and increase the value obtained from AWS cloud services.
• Engage with clients' business and technology stakeholders to build a compelling vision of a data-driven company within their environment.

🎯 Requirements

• Solid experience deploying a data lake or lakehouse in production environments.
• Strong experience with distributed-processing APIs such as PySpark or Flink.
• Experience designing and deploying Lambda or Kappa data architectures in production environments.
• Proficiency in at least one programming language for data processing (Python, Scala, Java, etc.).
• Experience with one or more relevant technologies, such as Sqoop, Flume, Kafka, Oozie, Hue, Zookeeper, HCatalog, Solr, Avro, Pig, Hive, or Spark SQL.
• Understanding of industry database and analytics technologies, including MPP and NoSQL databases, data warehouse design, BI reporting, and dashboard development.
• Differentials:
  • Knowledge of AWS CloudFormation, Terraform, Ansible, or other Infrastructure-as-Code (IaC) tools.
  • Knowledge of Git, CI/CD, and MLOps pipelines.
  • Experience using LLMs, building agents, and prompt engineering.
  • English language proficiency.
  • AWS certification.

🏖️ Benefits

• Health and dental plan.
• Meal and food allowance.
• Childcare assistance.
• Extended parental leave.
• Partnerships with gyms and health and wellness professionals via Wellhub (Gympass) / TotalPass.
• Profit sharing (PLR).
• Life insurance.
• Continuous learning platform (CI&T University).
• Discount club.
• Free online platform dedicated to promoting physical and mental health and well-being.
• Expectant parent and responsible-parenting course.
• Partnerships with online course platforms.
• Language learning platform.
• And many others.

