GCP Data Engineer

Job not on LinkedIn

November 6

🗣️🇪🇸 Spanish Required


Derevo

Artificial Intelligence • Data Analytics • Technology Consulting

Derevo is a company that empowers organizations and individuals to unlock the value of their data through comprehensive analytics processes and platforms. It focuses on full-cycle analytics: creating, integrating, and analyzing data, and fostering a data-driven culture. With over 10 years of experience, Derevo has been an ally in driving organizational change through data. Its services include data creation, digital transformation, data integration, data analytics, and data sharing. Derevo's nearshore development model uses global teams to deliver sustainable value, combining competitive analytics rates with cultural alignment for clients, particularly in the U.S. and Canada. The company works with advanced technologies such as AI and machine learning to deliver business intelligence solutions across a range of industries, backed by strong partnerships with leading technology providers.

51 - 200 employees

Founded 2013

🤖 Artificial Intelligence

📋 Description

• You will be a key member of the Data Integration team, creating and implementing modern data architectures with high quality and scalability.
• You will design, maintain, and optimize parallel processing systems, applying best practices for storage and management across Data Warehouses, Data Lakes, and Lakehouses.
• You will drive Big Data analytics solutions within the Google Cloud Platform (GCP) ecosystem.
• You will actively participate in data integration and transformation projects, collaborating with multidisciplinary teams.
• You will design and run ETL/ELT processes on BigQuery, Dataflow, Composer, and other GCP services.
• You will develop pipelines in Python (PySpark) and Apache Beam, processing structured and semi-structured data (a minimal Beam sketch follows this list).
• You will implement analytical data models optimized through partitioning, clustering, and materialized views.
• You will apply security and governance strategies with IAM, Dataplex, and Cloud DLP.
• You will analyze and validate data quality to ensure consistency and accuracy.
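As a rough illustration of the pipeline work described above, here is a minimal, hypothetical Apache Beam batch job in Python that loads semi-structured JSON events from Cloud Storage into BigQuery. The bucket, project, table, and field names are placeholders, not anything specified by the role; a real Dataflow job would add validation, error handling, and deployment configuration.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(line: str) -> dict:
    """Turn one JSON line into the flat row shape BigQuery expects."""
    record = json.loads(line)
    return {
        "user_id": record.get("user_id"),
        "event_ts": record.get("event_ts"),
        "amount": float(record.get("amount", 0)),
    }


def run() -> None:
    # The default DirectRunner is enough to test locally; passing
    # --runner=DataflowRunner (plus project/region) runs it on GCP.
    with beam.Pipeline(options=PipelineOptions()) as pipeline:
        (
            pipeline
            | "ReadJsonLines" >> beam.io.ReadFromText("gs://example-bucket/events/*.json")
            | "ParseEvents" >> beam.Map(parse_event)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.events",  # placeholder table
                schema="user_id:STRING,event_ts:TIMESTAMP,amount:FLOAT64",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )


if __name__ == "__main__":
    run()
```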

🎯 Requirements

• At least 5 years of experience as a Data Engineer working with Google Cloud Platform (GCP).
• Proficiency in BigQuery, Dataflow, Composer, Pub/Sub, Datastream, and Storage.
• Experience with Spark, PySpark, and data pipeline development.
• Strong command of advanced SQL (T-SQL, Spark SQL).
• Experience designing and maintaining Data Warehouses and Data Lakes.
• Knowledge of governance and security strategies (Row-Level/Column-Level Security, IAM); see the sketch after this list.
• Intermediate/advanced English (B2 or higher).
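As a concrete, hypothetical sketch of the row-level security item above: in BigQuery, row-level security is defined with `CREATE ROW ACCESS POLICY` DDL, which can be issued through the official Python client. The project, dataset, table, column, and group below are all placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project

# Row-level security: only members of the granted group can read rows
# where the FILTER USING expression evaluates to TRUE.
ddl = """
CREATE OR REPLACE ROW ACCESS POLICY mx_sales_only
ON `example-project.analytics.sales`
GRANT TO ('group:mx-analysts@example.com')
FILTER USING (country = 'MX')
"""
client.query(ddl).result()  # block until the DDL job completes
```

Column-level security, by contrast, is typically handled with policy tags managed through Dataplex/Data Catalog rather than DDL on the table itself.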

🏖️ Benefits

• WELLNESS: We will support your overall well-being through personal, professional, and financial balance.
• LET’S RELEASE YOUR POWER: You will have the opportunity to specialize in different areas and technologies, achieving interdisciplinary growth.
• WE CREATE NEW THINGS: We like to think outside the box. You will have the freedom and training you need to create innovative solutions.
• WE GROW TOGETHER: You will take part in cutting-edge, multinational technology projects with international teams.


Similar Jobs

November 4

Data Engineer developing and maintaining data pipelines for a global agile consultancy, using the Modern Data Stack with expertise in Snowflake and Azure Data Factory.

🇲🇽 Mexico – Remote

💵 $50k - $65k / month

💰 Post-IPO Equity on 2007-03

⏰ Full Time

🟡 Mid-level

🟠 Senior

🚰 Data Engineer

🗣️🇪🇸 Spanish Required

Azure • ETL • ITSM • Python • SDLC • ServiceNow • SQL

November 4

Data Engineer at Inetum designing and operating petabyte-scale data systems, building real-time data pipelines and collaborating in agile environments to deliver scalable solutions.

🗣️🇪🇸 Spanish Required

Airflow • AWS • Azure • Cassandra • Google Cloud Platform • Hadoop • HBase • Java • Kafka • Oracle • Python • Spark • SQL

November 1

Data Engineer at Inetum focusing on operating large-scale data systems and building real-time data pipelines. Collaborating with engineers and product managers to build robust technical solutions.

🗣️🇪🇸 Spanish Required

Airflow • Azure • Cassandra • Cloud • Distributed Systems • Google Cloud Platform • Hadoop • HBase • Java • Kafka • Oracle • Python • Spark • SQL

October 31

Data Engineer responsible for architecting and maintaining data pipelines in a medallion architecture. Working for one of the largest insurers in the U.S., enabling high-quality datasets.

Airflow • Apache • AWS • Cloud • ElasticSearch • Java • Jenkins • Kubernetes • Logstash • Python • Spark • Spring • Spring Boot • Vault

October 31

Data Engineer role supporting a leading US insurance provider in optimizing data architecture and reporting analysis. Seeking candidates with strong technical skills in ETL and visualization.

ETL • ITSM • Python • SQL
