Data Engineer

Job not on LinkedIn

November 5

Apply Now
Loop

eCommerce • SaaS • Enterprise

Loop is a comprehensive returns management platform designed for eCommerce brands. It helps businesses optimize operations related to returns, post-purchase experiences, and reverse logistics. The platform offers features like automated returns, fraud prevention, exchanges, insights, and workflows, all customizable to fit any brand's needs. Loop integrates seamlessly with popular applications, especially within the Shopify ecosystem, and allows businesses to handle millions of returns efficiently, driving customer loyalty and retaining revenue. The company's service is scalable, secure, and supported by an enterprise-level service delivery team.

51 - 200 employees

🛍️ eCommerce

☁️ SaaS

🏢 Enterprise

💰 $65M Series B (July 2021)

📋 Description

• Maintain and optimize existing data pipelines and warehouse solutions for performance, reliability, and cost efficiency.

• Support internal analytics and ML teams with data modeling, schema updates, and ad hoc data needs.

• Contribute to dbt projects and assist in ensuring data quality, observability, and accessibility.

• Write clean, tested, and documented code, and participate in code reviews.

• Collaborate with senior data engineers to understand and contribute to new ingestion sources, ML pipelines, and other forward-looking initiatives.

• Ensure internal stakeholders can access and use data effectively, enabling faster business insights and decision-making.

🎯 Requirements

• 4 years of hands-on experience building and maintaining data pipelines and data sets in a cloud environment (Snowflake, GBQ, Redshift, etc.). *We're expecting top candidates to have hands-on experience with Snowflake, specifically!

• 2+ years of Python experience, creating reliable workflows and data processing scripts.

• Strong SQL skills and experience with data modeling.

• Experience with dbt or similar transformation tools.

• Familiarity with distributed systems and ETL/ELT processes.

• Nice to have: Experience with data observability, lineage, or governance tools.

• Nice to have: Exposure to BI tools and supporting analytics teams.

• Nice to have: Experience working on cross-functional data projects.

• Nice to have: Familiarity with Fivetran, Kafka, or modern data integration platforms.

🏖️ Benefits

• Medical, dental, and vision insurance

• Flexible PTO

• Company holidays

• Sick & safe leave

• Parental leave

• 401k

• Monthly wellness benefit

• Home workstation benefit

• Phone/internet benefit

• Equity

Similar Jobs

November 5

Millennium Data Architect supporting modernization efforts for the Department of Veterans Affairs. Requires expertise in data modeling and collaboration with cross-functional teams.

Ansible

Cloud

Python

Terraform

November 5

Data Engineer developing and maintaining data pipelines and ensuring accuracy for insurance-related tasks. Collaborating with teams to support data-driven decision-making processes.

Azure

Cloud

ETL

PySpark

Python

SQL

Unity

November 5

Data Engineer responsible for the end-to-end data lifecycle at Turnkey. Collaborating with Engineering, Operations, and Product teams to ensure data infrastructure scalability and reliability.

AWS

ETL

Grafana

Kubernetes

Python

SQL

Go

November 5

Lead Data Engineer designing and implementing a scalable Customer Data Platform for AI-driven communication. Collaborating with cross-functional teams to enhance customer engagement using data-driven insights.

🗣️🇪🇸 Spanish Required

Airflow

Apache

Cloud

Python

Spark

November 5

Senior Data Engineer enabling data insights and analytics for UpGuard's cybersecurity solutions. Building and maintaining data pipelines while ensuring data integrity and security governance.

BigQuery

Docker

Kubernetes

Built by Lior Neu-ner. I'd love to hear your feedback — Get in touch via DM or support@remoterocketship.com