
Artificial Intelligence • Energy • B2B
Genesis Data Culture is a Brazilian AI and data-science company that builds advanced machine learning and generative-AI solutions specifically for the electric power sector. It provides data consulting, predictive-maintenance systems, asset-management analytics and digital-transformation services that improve operational performance and deliver measurable ROI for utilities and public-sector operators. The firm is recognized by national innovation programs and has delivered generative-AI search tools and predictive diagnostic models for major grid operators and power-generation assets.
September 25
🗣️🇧🇷🇵🇹 Portuguese Required

• Design and implement data ingestion, processing, and delivery pipelines using the AWS ecosystem
• Write clean, testable, well-documented Python code
• Critically review code and infrastructure to avoid technical debt
• Work with infrastructure as code (Terraform) for provisioning
• Use AWS services such as S3, Glue, Lambda, Athena, and Step Functions to deliver data solutions
• Collaborate with the team to solve real problems and take ownership of deliverables
• Solid experience with AWS: S3, Glue, Lambda, Athena, Step Functions, and other services for data ingestion, processing, and delivery
• Strong Python skills with good practices (logging, modularization, testing, version control)
• Ability to critically read code and infrastructure; interest in avoiding technical debt
• Experience with Infrastructure as Code (Terraform)
• Understanding of data architecture and solving real problems with depth and attention to detail
• Writes clean, testable, well-documented code
• Level: Mid-level (Pleno)
• Availability: Full-time
• Cover letter required explaining background, skills, and motivation
• 100% remote
• Full-time with flexible hours
• Contract type: PJ (independent contractor)
• Salary range: up to R$ 10,000/month
September 25
Develop scalable Scala and Spark data pipelines, implement a medallion lakehouse architecture, and optimize production workflows at Serasa Experian, a Brazilian datatech.
🗣️🇧🇷🇵🇹 Portuguese Required
Airflow
Apache
AWS
Azure
Cloud
ETL
Google Cloud Platform
Python
Scala
Spark
September 24
Airflow
Docker
ETL
Hadoop
Jenkins
Kafka
Kubernetes
OpenShift
Postgres
Python
Spark
SQL
September 18
Data Engineer focused on Power BI and analytics at GFT. Implement BI solutions and data governance.
🗣️🇧🇷🇵🇹 Portuguese Required
SQL
September 18
Data Engineer integrating SAP with Azure data platforms at GFT. Build Synapse/ADF pipelines, PySpark transforms, and data governance.
🗣️🇧🇷🇵🇹 Portuguese Required
Azure
PySpark
Python
Spark
SQL
September 15
Senior Data Engineer at GFT building AWS-based big data, ETL, and data mesh solutions. Work with PySpark, SQL, NoSQL, and batch and real-time ingestion.
🗣️🇧🇷🇵🇹 Portuguese Required
AWS
DynamoDB
ETL
Hadoop
MongoDB
NoSQL
PySpark
Python
Spark
SQL
Terraform