Senior Data Engineer


March 21


flexEngage

Retail • Marketing • eCommerce

flexEngage specializes in delivering custom transactional messages, including digital receipts, designed to reduce customer churn for brand-driven retailers. By transforming traditional point-of-sale communications into engaging post-purchase marketing, flexEngage strengthens customer loyalty and encourages repeat business. The platform integrates with existing POS systems, enabling retailers to market effectively and create personalized customer interactions that foster lasting relationships beyond the initial purchase.

11–50 employees

Founded 2014

🛒 Retail

🛍️ eCommerce

💰 $1M Venture Round in January 2020

📋 Description

• Enable the team to build data-driven products by providing access to rich datasets at scale.
• Write maintainable, performant, and cost-efficient code validated with automated tests.
• Maintain the data warehouse with timely, high-quality data.
• Create and maintain documentation for the systems and repositories.
• Develop features and improvements to flexEngage products.
• Participate in all aspects of the development process, including story grooming, technical design, coding, code reviews, QA, and delivery.
• Collaborate with Product Management, other engineers, and stakeholders on the development of new features.
• Establish and advocate for data engineering standards during feature development and code reviews.
• Identify areas of improvement in the codebase and implement changes.

🎯 Requirements

• Bachelor's degree in Computer Science or a related field, or equivalent practical experience.
• 3+ years of hands-on experience deploying production-quality code.
• Professional experience using Python for data processing.
• Professional experience implementing ETL best practices at scale.
• Professional experience with data pipeline tools such as Airflow (a minimal sketch follows this list).
• Deep understanding of SQL and analytical data warehouses (Snowflake preferred).
• Experience using cloud environments (AWS preferred).
• Experience provisioning infrastructure through code (e.g., Terraform, CloudFormation, Ansible).
• Experience using CI/CD tools such as Bitbucket Pipelines or Jenkins.
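As a rough illustration of the Airflow experience listed above, here is a minimal sketch of a daily ETL pipeline, assuming Airflow 2.x. The DAG id, task names, and callables are hypothetical placeholders, not part of flexEngage's actual stack:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders():
    # Hypothetical placeholder: pull raw order data from a source system.
    pass

def load_to_warehouse():
    # Hypothetical placeholder: load transformed data into the warehouse.
    pass

with DAG(
    dag_id="daily_orders_etl",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # run once per day
    catchup=False,               # skip backfilling past runs
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load              # extract must complete before load

A scheduler-driven DAG like this is the usual unit of work with Airflow; each task is written, tested, and versioned like any other production code.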

🏖️ Benefits

• Competitive compensation plan in line with experience and ability to drive results at target retailers.
• Annual base salary, stock options, 401(k), and health benefits.
• Unlimited PTO and flexible/remote schedules.


Similar Jobs

March 20

Manage a data engineering team at BioIntelliSense to build scalable data infrastructure.

AWS • Azure • Cloud • ETL • Google Cloud Platform • Kafka • Spark • SQL

March 19

Design and maintain scalable data management systems for clients using Azure technologies at Tkxel.

Apache • Azure • ETL • Node.js • PySpark • Spark • SQL

March 11

Experienced Data Engineer role migrating data infrastructure to Snowflake and building data pipelines. Collaborate with cross-functional teams to ensure data quality and implement CI/CD processes for deployments.

AWS • Azure • Cloud • ETL • Oracle • SQL

March 10

Remote Data Architect role at Pivotal Solutions. Focus on designing ETL processes with AWS and data analytics.

Airflow • Amazon Redshift • AWS • Azure • Cloud • ETL • Google Cloud Platform • Java • MS SQL Server • Node.js • Python • Scala • Spark • SQL

March 10

Develop data models and ETL pipelines for IRS initiatives using Palantir technology. Collaborate with federal stakeholders on high-impact projects.

ETL • Node.js • Python • SQL • Tableau
