Data Engineer, Databricks

Yesterday

Fluent, Inc

Fluent (NASDAQ:FLNT) is the trusted acquisition partner for both established and growing brands. Leveraging our proprietary first-party data asset, Fluent creates marketing programs that deliver better digital advertising experiences for consumers and measurable results for advertisers. Founded in 2010, the company is headquartered in New York City. For more information, visit www.fluentco.com.

201 - 500 employees

📋 Description

• Design, build, and maintain scalable data pipelines and ETL processes using SQL, Python, and modern data stack tools (e.g., Databricks, Snowflake, AWS, GCP, Azure); a brief illustrative sketch follows this list.
• Develop and manage large, complex datasets that support analytics, modeling, and activation.
• Create data models and schemas that enable downstream use in reporting, BI, and audience segmentation.
• Troubleshoot and optimize data workflows for performance and reliability.
• Partner directly with clients and internal teams to translate business goals into data solutions.
• Present technical concepts clearly to non-technical audiences, both in meetings and documentation.
• Collaborate with data scientists, analysts, and engineers to improve data accessibility and governance.
• Ensure data integrity, compliance, and best practices in data handling.
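For illustration only, here is a minimal sketch of the kind of pipeline work described above, assuming a PySpark/Databricks-style environment; the storage path, table names, and column names are hypothetical and not taken from the posting.

```python
# Hypothetical PySpark ETL sketch: read raw events, clean and deduplicate
# them, and publish a daily summary table for reporting or segmentation.
# Paths, tables, and columns are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_etl").getOrCreate()

# Extract: load raw event data from a hypothetical location.
raw = spark.read.parquet("s3://example-bucket/raw/events/")

# Transform: drop malformed rows, derive a date column, and deduplicate.
cleaned = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_timestamp"))
       .dropDuplicates(["event_id"])
)

# Load: aggregate into a daily summary suitable for BI or audience work.
daily = cleaned.groupBy("event_date", "campaign_id").agg(
    F.countDistinct("user_id").alias("unique_users"),
    F.count("*").alias("events"),
)
daily.write.mode("overwrite").saveAsTable("analytics.daily_campaign_summary")
```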

🎯 Requirements

• 5+ years of experience in data engineering, preferably in AdTech, MarTech, or audience/data-driven businesses.
• Advanced SQL and Python skills (experience with PySpark a plus).
• Strong understanding of APIs, data warehouses, and data lake architecture.
• Previous experience in a Databricks environment.
• Experience integrating multiple data sources and cleaning and transforming large datasets; a brief illustrative sketch follows this list.
• Familiarity with BI tools (Tableau, Power BI, or Looker).
• Excellent communication skills: you can talk data with engineers and business outcomes with clients.
• Comfort presenting technical findings and recommendations to senior stakeholders.
• A plus: experience with clean rooms, identity resolution, or audience activation workflows.
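As a hedged illustration of the combined SQL and Python work listed above, the sketch below joins two hypothetical source tables on a normalized email field; the table and column names are assumptions, not part of the posting.

```python
# Hypothetical example of integrating two data sources with PySpark and SQL.
# Table names (crm.contacts, web.signups) and columns are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_integration").getOrCreate()

# Normalize the join key on both sides before matching.
contacts = (
    spark.table("crm.contacts")
         .select("customer_id", F.lower(F.trim("email")).alias("email"))
)
signups = (
    spark.table("web.signups")
         .withColumn("email", F.lower(F.trim("email")))
)

# A left join keeps every signup and attaches a customer_id where one matches.
enriched = signups.join(contacts, on="email", how="left")
enriched.createOrReplaceTempView("enriched_signups")

# The same result can be checked with plain SQL on the temporary view.
spark.sql(
    "SELECT COUNT(*) AS matched FROM enriched_signups WHERE customer_id IS NOT NULL"
).show()
```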

🏖️ Benefits

• Competitive compensation
• Ample career and professional growth opportunities
• New headquarters with an open floor plan to drive collaboration
• Health, dental, and vision insurance
• Pre-tax savings plans and transit/parking programs
• 401K with competitive employer match
• Volunteer and philanthropic activities throughout the year
• Educational and social events
• The amazing opportunity to work for a high-flying performance marketing company!

Apply Now

Similar Jobs

3 days ago

Data Engineer at eServices focuses on creating data solutions for accessibility and analysis. Collaborates with teams to enhance data quality and support business decisions.
Tags: Amazon Redshift, AWS, Azure, Cloud, ETL, Google Cloud Platform, Open Source, Oracle, Spark

November 28

Team Lead overseeing a high-performing data engineering team at Q4, an AI-driven investor relations platform. Responsible for building data pipelines and mentoring team members.
Tags: Amazon Redshift, AWS, Cassandra, EC2, ETL, NoSQL, Postgres, SDLC, SQL

November 25

Senior Data Engineer responsible for building scalable data solutions and supporting teams leveraging data at Jobber. Transforming operations and enhancing workflows within a cloud infrastructure.
Tags: Airflow, Amazon Redshift, AWS, Cloud, ETL, Python, Spark, SQL, Terraform

November 22

Senior Data Engineer designing and implementing data warehouses and pipelines for Leap Tools. Collaborating with engineering, ML, and product teams on data strategy.
Tags: Distributed Systems, Python, SQL

November 20

Hopper (201 - 500 employees)

Data Engineer responsible for building robust data pipelines and analytics systems for Hopper’s advertising business. Collaborate with engineering teams to ensure data integrity and enable insights.
Tags: Airflow, Amazon Redshift, BigQuery, Cloud, ETL, Kafka, Python, Scala, SQL