Mid/Senior Data Engineer

Job not on LinkedIn

6 days ago

🗣️🇧🇷🇵🇹 Portuguese Required


Alterdata Software

Software • B2B • Productivity

Alterdata Software develops comprehensive software solutions for sectors including accounting, real estate, and corporate management. Its agile, effective ERP systems are designed to simplify day-to-day business management for companies of all sizes, with a focus on innovation, productivity, and control for its users.

1001 - 5000 employees

Founded 1989

🤝 B2B

⚡ Productivity

📋 Description

• Data Architecture on AWS: Design, build, and maintain the Data Lake (S3) and Data Warehouse (Redshift), ensuring high availability and efficient partitioning.
• Pipeline Engineering (ETL/ELT): Develop data ingestion and transformation routines using AWS Glue and EMR (Spark), prioritizing performance and cost efficiency (see the sketch after this list).
• Infrastructure as Code (IaC): Provision and manage all data resources (databases, buckets, instances) using Terraform. No resources are created manually in the AWS Console.
• Automation and CI/CD: Implement continuous integration and delivery pipelines for data workflows using GitLab CI, ensuring versioning and automated testing.
• Serverless and Event-Driven Architectures: Build event-driven architectures using AWS Lambda, EventBridge, and Step Functions for real-time and batch processing.
• Governance and Quality: Configure access policies and the data catalog via AWS Lake Formation, ensuring compliance with LGPD (Brazilian General Data Protection Law) and information security.
• Monitoring and Observability: Implement logging and metrics (CloudWatch) to ensure pipeline health and proactively address incidents.
• Collaboration: Work closely with data analysts, data scientists, and software engineers to understand requirements and deliver effective data solutions.
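The posting itself contains no code, but as a rough illustration of the Glue/Spark pipeline work described above, a minimal catalog-to-S3 ETL sketch might look like the following. The database, table, and bucket names (raw_zone, raw_events, s3://example-curated-bucket/events/) are hypothetical placeholders, not details from the listing.

```python
# Minimal sketch of a Glue-style PySpark ETL job (names and paths are assumptions).
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw data registered in the Glue Data Catalog (database/table names are hypothetical).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="raw_events"
).toDF()

# Light transformation: parse the event timestamp and derive partition columns.
curated = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("year", F.year("event_ts"))
       .withColumn("month", F.month("event_ts"))
)

# Write partitioned Parquet to the curated layer of the Data Lake (bucket is hypothetical).
(
    curated.write.mode("append")
           .partitionBy("year", "month")
           .parquet("s3://example-curated-bucket/events/")
)

job.commit()
```

In line with the IaC and CI/CD responsibilities above, a job like this would be provisioned through Terraform and deployed via GitLab CI rather than created by hand in the console.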

🎯 Requirements

• Education: Bachelor's degree in Computer Science, Data Engineering, Software Engineering, or an equivalent field.
• Prior Experience: Proven experience as a Data Engineer or in a similar role.
• Languages: Advanced proficiency in Python (focused on data manipulation and scripting) and advanced SQL (see the sketch after this list).
• AWS Core: Hands-on experience with S3, Redshift, Glue, Lambda, Athena, and IAM.
• IaC (Mandatory): Experience with Terraform for infrastructure management.
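As a similarly hedged illustration of the Python, SQL, and Athena skills listed above, the sketch below submits an aggregate query to Athena with boto3 and prints the first page of results. The analytics database, orders table, and results bucket are assumed placeholders; credentials and region come from the environment.

```python
# Minimal sketch: run a SQL aggregation on Athena from Python (names are assumptions).
import time

import boto3

athena = boto3.client("athena")

QUERY = """
SELECT customer_id, COUNT(*) AS orders
FROM orders
GROUP BY customer_id
ORDER BY orders DESC
LIMIT 10
"""

# Submit the query; Athena writes results to the given S3 location (bucket is hypothetical).
execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

# Print the rows from the first page of results (the first row is the header).
if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```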

🏖️ Benefits

• This position is also open to candidates with disabilities (PwD).

Apply Now

Similar Jobs

November 28

Data Engineer at Semantix implementing data solutions and managing data processing using Big Data technologies. Collaborate with teams to enhance data reliability and efficiency.

🗣️🇧🇷🇵🇹 Portuguese Required

Skills: Azure, Cloud, ETL, Hadoop, MongoDB, PySpark, Python, SQL

November 27

Data Engineer focused on building and maintaining ETL pipelines using Databricks, joining Join's creative tech team to enhance data processes in a fully remote role.

🗣️🇧🇷🇵🇹 Portuguese Required

Skills: Cloud, ETL, PySpark, SQL

November 27

Data Engineer responsible for building and optimizing data pipelines to support strategic decision-making at Stone Co., a leading fintech in Brazil.

🗣️🇧🇷🇵🇹 Portuguese Required

Skills: Airflow, Cloud, ETL, Google Cloud Platform, Python, SQL

November 26

Keyrus

1001 - 5000 employees

🤝 B2B

Senior Data Engineer developing and maintaining efficient data pipelines using AWS technologies at Keyrus, collaborating with internal teams to ensure data integrity and solution scalability.

🗣️🇧🇷🇵🇹 Portuguese Required

Skills: Amazon Redshift, AWS, ETL, PySpark, Python, RDBMS, SQL

November 26

Senior Data Engineer at Valtech, designing and maintaining ETL/ELT pipelines for scalable data processing and supporting AI initiatives.

Skills: Airflow, Apache, Azure, BigQuery, Cassandra, Cloud, ETL, Google Cloud Platform, Java, Kafka, MongoDB, MySQL, NoSQL, Postgres, Python, Scala, Spark, SQL
