Senior Data Engineer, Databricks, GCP – Marketing Analytics

2 hours ago


Truelogic Software

SaaS • B2B • Enterprise

Truelogic Software is a nearshore software development company specializing in agile staff augmentation services. They focus on providing custom outsourced software development with a team of highly skilled engineers from Latin America. Truelogic Software partners with both startups and Fortune 500 companies, offering solutions that align with their clients' time zones and ensuring high-quality outcomes through collaboration and responsiveness. With a presence in over 25 countries, Truelogic emphasizes remote work for better quality of life, and their engineers are experienced in various industries, delivering a wide range of successful projects globally.

501 - 1000 employees

Founded 2004

📋 Description

• Lead the design and implementation of scalable data pipelines using Databricks and GCP (BigQuery, Cloud Storage, Cloud Composer, etc.).
• Build and optimize data transformations, ingestion layers, and processing workflows to support clean room and identity graph workloads.
• Develop identity graph structures, identity stitching processes, and privacy-preserving data integrations.
• Partner with product managers, data scientists, and senior technical leaders to understand data requirements and deliver robust engineering solutions.
• Present technical designs, project plans, and effort estimates to senior leadership with clarity and confidence.
• Ensure pipeline reliability, cost optimization, and performance tuning across GCP and Databricks environments.
• Collaborate with cross-functional teams to design and support consumer identity, audience modeling, and clean room use cases.
• Maintain clear documentation of architectures, workflows, standards, and best practices.
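To give candidates a feel for the identity-stitching work mentioned above, here is a minimal, illustrative sketch (not the team's actual implementation): pairwise identifier matches are merged into unified profiles with a union-find structure. Production identity graphs would add match confidence scores, privacy controls, and run at scale on Spark/Databricks.

```python
# Minimal identity-stitching sketch: union-find over pairwise
# identifier matches (e.g. email <-> device ID links).
# Illustrative only; all identifiers below are made up.

def stitch_identities(edges):
    """Group linked identifiers into unified profiles via union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for a, b in edges:
        union(a, b)

    # Collect each identifier under its root to form profiles.
    clusters = {}
    for node in parent:
        clusters.setdefault(find(node), set()).add(node)
    return list(clusters.values())

matches = [
    ("email:ana@example.com", "device:abc123"),
    ("device:abc123", "cookie:xyz"),
    ("email:bob@example.com", "device:def456"),
]
profiles = stitch_identities(matches)  # two unified profiles
```

The transitive link (email → device → cookie) is what "stitching" refers to: identifiers that never co-occur directly still end up in the same profile.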

🎯 Requirements

• 5+ years of experience in Data Engineering, building scalable data platforms and pipelines.
• Strong hands-on expertise in Databricks, GCP, and BigQuery.
• Deep understanding of distributed systems, data architecture, and ETL/ELT patterns.
• Experience with identity resolution, identity graphs, or consumer data engineering is a strong plus.
• Excellent problem-solving skills with attention to detail.
• Strong communication skills and the ability to articulate work to technical and non-technical leaders.
• Ability to work independently, drive complex initiatives, and take ownership of deliverables.
• Bachelor's degree in Computer Science or a related field.
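The ETL/ELT distinction in the requirements can be shown in a few lines: in ELT, raw data is loaded first and transformed inside the warehouse with SQL. The sketch below uses sqlite3 as a stand-in for BigQuery (an assumption made purely for portability; the pattern is the same).

```python
# Minimal ELT sketch: load raw events as-is, then transform
# in-warehouse with SQL. sqlite3 stands in for BigQuery here.
import sqlite3

raw_events = [
    ("u1", "click", 3),
    ("u1", "click", 2),
    ("u2", "view", 7),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id TEXT, action TEXT, n INT)")
conn.executemany("INSERT INTO raw_events VALUES (?, ?, ?)", raw_events)

# The "T" of ELT runs inside the warehouse, not in the ingestion job.
conn.execute("""
    CREATE TABLE user_actions AS
    SELECT user_id, action, SUM(n) AS total
    FROM raw_events
    GROUP BY user_id, action
""")
totals = dict(conn.execute(
    "SELECT user_id, total FROM user_actions ORDER BY user_id"))
```

In ETL, by contrast, the aggregation would happen in the ingestion job before the load step, and only `user_actions` would reach the warehouse.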

🏖️ Benefits

• 100% Remote Work: Enjoy the freedom to work from the location that helps you thrive. All it takes is a laptop and a reliable internet connection.
• Highly Competitive USD Pay: Earn market-leading compensation in USD that goes beyond typical market offerings.
• Paid Time Off: We value your well-being. Our paid time off policies ensure you have the chance to unwind and recharge when needed.
• Work with Autonomy: Enjoy the freedom to manage your time as long as the work gets done. Focus on results, not the clock.
• Work with Top American Companies: Grow your expertise on innovative, high-impact projects with industry-leading U.S. companies.

Apply Now

Similar Jobs

Yesterday

Lead the design and implementation of scalable Lakehouse data platforms and ELT pipelines. Mentor engineering teams and align architecture across distributed, multicultural stakeholders.

🗣️🇧🇷🇵🇹 Portuguese Required

Airflow

AWS

Cloud

Distributed Systems

Docker

ETL

GraphQL

Kafka

Kubernetes

Python

SQL

Vault

Yesterday

Data Engineer working remotely to develop and manage data transformation pipelines for Vobi. Contributing to the digitalization of the construction industry in Brazil.

Airflow

AWS

PySpark

Python

Spark

SQL

Yesterday

Data Architect (AWS) working with clients on AWS data solutions and analysis. Supporting migrations and developments in private and public cloud environments.

🗣️🇧🇷🇵🇹 Portuguese Required

Amazon Redshift

Ansible

AWS

DynamoDB

Java

Kafka

NoSQL

PySpark

Python

Scala

Spark

SQL

Terraform

Zookeeper

Yesterday

Senior Data Engineering Analyst at Serasa Experian developing software solutions and ensuring data pipeline quality. Collaborating within a squad to design and implement high-performance data architectures.

🗣️🇧🇷🇵🇹 Portuguese Required

Amazon Redshift

AWS

Docker

EC2

Grafana

Jenkins

Kafka

Kubernetes

Python

Scala

Built by Lior Neu-ner. I'd love to hear your feedback — Get in touch via DM or support@remoterocketship.com