Senior Data Engineer - Business Enablement


July 4

Apply Now

ezCater

B2B • eCommerce • Food and Beverage

ezCater is a platform that provides businesses with catering solutions, allowing companies to order food for meetings and events from a diverse range of restaurants and caterers. The platform enables users to easily manage orders, ensuring that all dietary preferences and business needs are met seamlessly. With a focus on corporate clients, ezCater streamlines the catering process, making it more efficient for organizations to feed their teams and guests.

501 - 1000 employees

Founded 2007

🤝 B2B

🛍️ eCommerce

💰 $100M Series D in December 2021

📋 Description

• Are you passionate about data? How about leveraging data to drive meaningful impact across a fast-growing two-sided marketplace?
• Do you have opinions on how best to enable data scientists across a billion-dollar company to do elastic workforce planning or real-time customer lifetime value prediction? Then we should definitely talk!
• The Data Technology team at ezCater is growing! As we look toward the second half of 2025 and beyond, data is a key strategic component across the company, from advanced real-time machine learning to business intelligence and data governance. Data is our differentiator and how we will drive real, meaningful impact on the $60+ billion catering industry.
• We are hiring a Senior Data Engineer to join our expanding team in solving complex data and platform challenges to accelerate our growing business. The ideal candidate lives and breathes data while driving systems and architecture best practices. They care about driving business impact by producing solid, efficient infrastructure alongside accurate, performant data.
• You will have the opportunity to work directly with executive stakeholders as we embark on a massive-scale data modeling effort across the organization, so flexibility and the ability to translate business requests into implementation are key.

🎯 Requirements

• Strong experience with data warehousing, data lakes, ELT processes, and enterprise data platforms such as Snowflake (preferred), Redshift, and BigQuery.
• Experience building performant data pipelines across disparate systems.
• Experience with cloud platforms such as AWS (preferred), GCP, and Azure.
• Mastery of SQL and experience with Python.
• Ability to work independently and collaboratively.
• An open mind and a willingness to be flexible. We have a large and complex business, and we believe in driving real value out of every project we do.
• Our stack is Snowflake, dbt, Fivetran, Airflow, AWS, SageMaker, MLflow, Kubernetes + Docker, Monte Carlo, Hightouch, and Python for custom ETL and data science integrations. Experience with the above is a nice-to-have; a desire to learn is a must.
• A sharp mind, a soft heart, and a large funny bone.
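The requirements mention Python for custom ETL alongside a SQL warehouse. As a rough illustration only (not ezCater's actual code: the table, column names, and sample records are invented, and an in-memory SQLite database stands in for a warehouse like Snowflake), a minimal extract-transform-load step might look like:

```python
import sqlite3

# Hypothetical raw order records, as an extract step might return them.
RAW_ORDERS = [
    {"order_id": 1, "total_cents": 12950, "status": "completed"},
    {"order_id": 2, "total_cents": 8400, "status": "cancelled"},
    {"order_id": 3, "total_cents": 21075, "status": "completed"},
]

def transform(rows):
    # Keep only completed orders and convert cents to dollars.
    return [
        (r["order_id"], r["total_cents"] / 100.0)
        for r in rows
        if r["status"] == "completed"
    ]

def load(rows, conn):
    # Load transformed rows into a warehouse table
    # (SQLite here is a local stand-in for Snowflake).
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, total_usd REAL)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(RAW_ORDERS), conn)
total = conn.execute("SELECT SUM(total_usd) FROM orders").fetchone()[0]
print(total)  # 340.25
```

In practice a pipeline like this would be orchestrated by Airflow and the SQL transformations would live in dbt models; the sketch only shows the shape of a single custom ETL step.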

🏖️ Benefits

• Market-competitive salary
• Stock options that you'll help make worth a lot
• 12 paid holidays
• Flexible PTO
• 401(k) with ezCater match
• Health/dental/FSA
• Long-term disability insurance
• Mental health and family planning resources
• Remote-hybrid work from our awesome Boston office, your home, or a mixture of both
• A tremendous amount of responsibility and autonomy
• Wicked awesome co-workers
• Relish (and many more goodies) when you're in our office
• Knowing that you helped transform the food-for-work space


Similar Jobs

July 4

Join Wynd Labs as a Data Engineer to optimize data pipelines supporting AI initiatives. Build robust systems for seamless data accessibility.

Airflow, Amazon Redshift, Apache, AWS, Azure, Cloud, Docker, ETL, Google Cloud Platform, Java, Kubernetes, Node.js, Python, Scala, SQL, Terraform

July 2

Horizon3.ai

51 - 200 employees

Join Horizon3.ai as a Data Engineer to build robust data infrastructure ensuring performance and scalability.

Amazon Redshift, AWS, Cloud, Cyber Security, Docker, Kafka, Kubernetes, Python, Terraform

July 2

As a Cloud Architect, design and manage data-driven cloud architectures using AWS for clients.

AWS, Azure, Cassandra, Cloud, ETL, Hadoop, Informatica, Java, Kafka, Keras, MongoDB, MySQL, NoSQL, Postgres, Python, PyTorch, Scala, Scikit-Learn, Spark, SQL, Tensorflow

June 30

Lead design of data pipelines and mentor engineers while ensuring high-performance data solutions.

Airflow, Amazon Redshift, Apache, AWS, Azure, BigQuery, Cloud, Distributed Systems, ETL, Google Cloud Platform, Kafka, MySQL, Oracle, Postgres, Python, Spark, SQL

June 30

As a Data Engineer, build scalable data pipelines for analytics and machine learning solutions, joining a supportive team at DevIQ.

Airflow, Apache, AWS, Azure, Cloud, ETL, Pandas, Python, Spark, SQL, Tableau, Unity

Built by Lior Neu-ner. I'd love to hear your feedback — Get in touch via DM or support@remoterocketship.com