Data Engineer

Job not on LinkedIn

August 19

Apply Now

Burq

eCommerce • Logistics • SaaS

Burq is a comprehensive delivery platform that enables businesses across various industries, including e-commerce, food, floral, health & wellness, and construction, to offer on-demand delivery services. By integrating with Burq, companies can connect seamlessly with multiple delivery providers, allowing them to scale quickly to new markets and improve customer satisfaction with same-day and efficient delivery options. Burq also provides advanced tracking technology and does not charge commissions, ensuring optimal delivery solutions for businesses. With a focus on easy integration and a broad network of delivery providers, Burq empowers businesses to offer reliable delivery services without the need to build their own infrastructure.

11 - 50 employees

🛍️ eCommerce

☁️ SaaS

💰 Pre-Seed Round, June 2021

📋 Description

About Burq:

• Burq started with an ambitious mission: how can we turn the complex process of offering delivery into a simple, turnkey solution?
• It's a big mission, and now we want you to join us to make it even bigger!
• We're already backed by some of the Valley's leading venture capitalists, including Village Global, the fund whose investors include Bill Gates, Jeff Bezos, Mark Zuckerberg, Reid Hoffman, and Sara Blakely. We have assembled a world-class team all over the globe.
• We operate at scale, but we're still a small team relative to the opportunity. We have a staggering amount of work ahead. That means you have an unprecedented opportunity to grow while doing the most important work of your career.

The Role:

• As one of our first Data Engineers, you will be responsible for designing, building, and maintaining the pipelines and infrastructure that power our data-driven decision-making. You'll work closely with product, operations, and engineering teams to ensure that data is clean, reliable, and ready to drive insights, from optimizing delivery routes to improving customer experiences.
• This is a unique opportunity to build scalable data systems from the ground up and shape the foundation of our analytics and AI capabilities.

What You'll Do:

• Design & Build Pipelines: Develop and maintain scalable ETL/ELT processes to ingest, clean, and transform data from multiple sources (internal systems, third-party APIs, IoT devices).
• Data Modeling: Design and implement efficient data models for analytics, machine learning, and operational systems.
• Infrastructure: Own the data infrastructure, leveraging cloud-native solutions (e.g., AWS, GCP, or Azure) and modern data tools.
• Collaboration: Partner with data scientists, analysts, and software engineers to deliver data products that enable smarter decision-making.
• Data Quality: Implement robust monitoring, validation, and governance to ensure accuracy, security, and compliance.
• Scalability: Architect solutions that can handle rapid growth in data volume and complexity as the business scales.
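For a concrete flavor of the pipeline work above, here is a minimal, self-contained ETL sketch in Python. The event fields, table name, and data-quality rule are all invented for illustration, and SQLite stands in for a cloud warehouse such as Snowflake or BigQuery; a production pipeline would look very different.

```python
import sqlite3

# Hypothetical raw delivery events, as they might arrive from a third-party
# API. All field names here are illustrative assumptions, not Burq's schema.
RAW_EVENTS = [
    {"order_id": "A1", "status": "delivered", "minutes": "42"},
    {"order_id": "A2", "status": "DELIVERED", "minutes": "35"},
    {"order_id": None, "status": "delivered", "minutes": "10"},  # bad record
]

def extract():
    """Stand-in for pulling records from an API or message queue."""
    return RAW_EVENTS

def transform(events):
    """Drop invalid rows and normalize types and casing."""
    clean = []
    for e in events:
        if not e.get("order_id"):
            continue  # basic data-quality gate: skip rows missing a key
        clean.append((e["order_id"], e["status"].lower(), int(e["minutes"])))
    return clean

def load(rows, conn):
    """Load into a warehouse table (SQLite stands in for the real warehouse)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS deliveries "
        "(order_id TEXT PRIMARY KEY, status TEXT, minutes INTEGER)"
    )
    conn.executemany("INSERT OR REPLACE INTO deliveries VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load(transform(extract()), conn)
    # The bad record is dropped, leaving 2 clean rows in the table.
    print(conn.execute("SELECT COUNT(*) FROM deliveries").fetchone()[0])
```

The same extract/transform/load split scales up naturally: in practice each stage becomes a task in an orchestrator, and the validation step grows into the monitoring and governance work the role describes.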

🎯 Requirements

• Experience: 3+ years of experience in data engineering, preferably in a startup or high-growth environment.
• Technical Skills:
• Proficiency with SQL and at least one programming language (Python, Scala, or Java).
• Experience with cloud data warehouses (Snowflake, BigQuery, or Redshift).
• Familiarity with workflow orchestration tools (Airflow, Dagster, Prefect).
• Hands-on experience with data streaming (Kafka, Kinesis) is a plus.
• Mindset: A builder mentality, comfortable with ambiguity, fast iterations, and working in a small but mighty team.
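The orchestration tools listed here (Airflow, Dagster, Prefect) all boil down to running tasks in dependency order. The toy sketch below uses only the Python standard library to show that core idea; the task names and edges are invented, and a real orchestrator adds scheduling, retries, and observability on top.

```python
# A toy illustration (stdlib only) of the dependency-ordered execution that
# orchestrators such as Airflow, Dagster, or Prefect provide at scale.
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on.
TASKS = {
    "extract_orders": set(),
    "extract_providers": set(),
    "transform_deliveries": {"extract_orders", "extract_providers"},
    "load_warehouse": {"transform_deliveries"},
}

def run_pipeline(tasks):
    """Execute tasks in an order that respects their dependencies."""
    executed = []
    for name in TopologicalSorter(tasks).static_order():
        executed.append(name)  # a real runner would invoke the task here
    return executed

if __name__ == "__main__":
    print(run_pipeline(TASKS))  # both extracts run before transform, then load
```

In Airflow the same structure would be expressed as a DAG of operators with `>>` dependencies; the topological ordering shown here is what the scheduler computes under the hood.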

🏖️ Benefits

• Competitive salary, stock options, and performance-based bonuses
• Fully remote
• Comprehensive medical, vision, and dental insurance


Similar Jobs

August 16

Senior Data Architect guiding cloud data platforms on AWS/Azure/GCP; mentors teams and enables AI/ML with Snowflake, Snowpark, Python.

AWS

Azure

Cloud

Google Cloud Platform

Node.js

Python

SQL

Tableau

August 16

Data Engineer at Trojan/Veralto builds scalable ELT/ETL pipelines with Matillion/Snowflake; enables analytics and data integrity.

AWS

Azure

Cloud

ETL

Google Cloud Platform

Matillion

August 9

Data Engineer at Versa Networks designs, builds, and maintains data pipelines; remote Canada role leveraging Airflow, Spark, Python, and cloud tech to enable AI/ML workflows.

Airflow

Apache

BigQuery

Cloud

Docker

Google Cloud Platform

Kubernetes

Python

Ray

Rust

Spark

Terraform

Go

July 5

Join Unity to enhance data pipelines for Deep Learning models in AdTech.

AWS

Cloud

Google Cloud Platform

Java

Python

PyTorch

Scala

Spark

TensorFlow

Unity

July 5

Become a key contributor in data engineering at Unity, enhancing data pipelines for machine learning.

AWS

Cloud

Google Cloud Platform

Java

Python

PyTorch

Scala

Spark

TensorFlow

Unity

Built by Lior Neu-ner. I'd love to hear your feedback — Get in touch via DM or support@remoterocketship.com