Data Engineer

Job not on LinkedIn

November 9

Apply Now

The Home Depot

Retail • B2C

The Home Depot is a leading home improvement retailer, offering a wide range of building materials, home improvement products, lawn and garden products, and related services. The company operates both physical stores and an online platform, providing comprehensive solutions for DIY enthusiasts, professional contractors, and homeowners. The Home Depot is committed to diversity, equity, and inclusion, providing employment opportunities and benefits to a diverse workforce. Additionally, the company places a high emphasis on customer service and associate engagement to maintain its position as a trusted leader in the home improvement industry.

10,000+ employees

Founded 1978

🛒 Retail

👥 B2C

💰 Debt Financing on 2007-07

📋 Description

• Incorporate business knowledge into the solution approach.
• Develop trust and collaboration with internal customers and cross-functional teams.
• Work with project teams and business partners to determine project goals.
• Communicate insights and recommendations effectively to both technical and non-technical leaders and business customers/partners.
• Prepare reports, updates, and/or presentations on the progress of a project or solution.
• Develop algorithms that transform data into useful, actionable information.
• Design and develop algorithms and models to run against large datasets and generate business insights.
• Execute tasks with a high level of efficiency and quality.

🎯 Requirements

• Must be 18 years of age or older.
• Must be legally permitted to work in the United States.
• Master's degree in a quantitative field (Analytics, Computer Science, Math, Physics, Statistics, etc.) or relevant work experience.
• Demonstrated knowledge of developing and testing ETL jobs/pipelines, configuring orchestration, automating CI/CD, writing automation scripts, and supporting pipelines in production.
• Experience with high-level programming languages such as Python.
• Experience defining and capturing metadata and rules associated with ETL processes.
• Experience building batch and streaming pipelines.
• Experience writing analytical SQL queries.
• Ability to stitch together and maintain data from multiple sources.
• Ability to use JavaScript, front-end development frameworks (React, Nucleus), and QA apps (Retina, KPI Shield, Alert Goose).
• Ability to produce tags for site data.
• Ability to code in Python and Google BigQuery to stitch together and enrich raw data from multiple sources.
• Ability to use PySpark, Airflow, and Dataproc to engineer and automate data pipelines.
• Ability to optimize pipeline run times and lower slot/storage consumption costs.
• Ability to prioritize requests and manage a product roadmap.
• Strong verbal and written communication skills at all levels.
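As a minimal, hedged sketch of the "analytical SQL" and "stitching data from multiple sources" skills listed above: the example below joins two hypothetical tables (invented names, using Python's built-in sqlite3 in place of BigQuery) and rolls up revenue per region.

```python
import sqlite3

# Hypothetical data: a raw sales feed and a store dimension, standing in
# for the kind of multi-source data named in the requirements.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE sales (store_id INTEGER, amount REAL)")
cur.execute("CREATE TABLE stores (store_id INTEGER, region TEXT)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [(1, 19.99), (1, 5.49), (2, 102.00)])
cur.executemany("INSERT INTO stores VALUES (?, ?)",
                [(1, "Southeast"), (2, "Midwest")])

# Stitch the two sources with a JOIN, then aggregate revenue by region.
cur.execute("""
    SELECT st.region, ROUND(SUM(s.amount), 2) AS revenue
    FROM sales s
    JOIN stores st ON st.store_id = s.store_id
    GROUP BY st.region
    ORDER BY revenue DESC
""")
totals = dict(cur.fetchall())
print(totals)  # {'Midwest': 102.0, 'Southeast': 25.48}
```

In a production pipeline the same join/aggregate logic would typically run as a BigQuery query scheduled by an Airflow DAG; this sketch only shows the SQL shape, not the orchestration.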

🏖️ Benefits

• Health insurance
• Paid time off
• Flexible work arrangements
• Professional development opportunities


Similar Jobs

November 7

Clinical Data Engineer at Artisight, developing Python integrations for clinical data workflows in smart hospital solutions. Join a team focused on improving healthcare with innovative technology.

Docker

Linux

Postgres

Python

November 6

Data Engineer optimizing SQL environments and managing data pipelines for an analytics-driven organization. Collaborate with the Data Science team to enhance reporting capabilities and AI applications.

AWS

Cloud

ETL

Python

SQL

November 6

Data Engineer developing and optimizing healthcare claims data infrastructure at MedScout. Collaborating with cross-functional teams to drive business impact through actionable insights.

Amazon Redshift

AWS

BigQuery

PySpark

Python

SQL

November 5

Data Engineer at Loop creating tools for merchants and supporting data-driven decision making across teams. Collaborate on data quality, modeling, and new ingestion sources in a remote setting.

Amazon Redshift

Cloud

Distributed Systems

ETL

Kafka

Python

SQL

November 5

Data Engineer responsible for the end-to-end data lifecycle at Turnkey. Collaborating with Engineering, Operations, and Product teams to ensure data infrastructure scalability and reliability.

AWS

ETL

Grafana

Kubernetes

Python

SQL

Go

Built by Lior Neu-ner. I'd love to hear your feedback — Get in touch via DM or support@remoterocketship.com