Lead Data Engineer


September 9


Hopscotch Health

Healthcare Insurance

Hopscotch Health is a healthcare provider focused on delivering exceptional primary care to rural communities, with a particular emphasis on Medicare patients and seniors. The company is committed to transforming lives through accessible, proactive, value-based care, providing comprehensive healthcare both in its clinics and beyond. Hopscotch Health builds a community of care centered on the needs of its patients, partnering with local organizations to enhance patient support and care quality, and is dedicated to strong patient relationships, ensuring personalized attention and a full care team to address each patient's healthcare needs.

📋 Description

• Own our data platform: architect, build, and maintain our data model, data pipelines, and data access layer that support our care delivery, operations, and analytics
• Ensure data quality & governance: implement practices that make our data trustworthy and usable for all stakeholders, from clinicians to operators
• Monitor and optimize data infrastructure: troubleshoot issues, improve system performance, and drive efficiency through automation and process enhancements
• Lead with technical depth: review code, mentor engineers, and set engineering best practices across the team
• Think big picture: design today's infrastructure with tomorrow's needs in mind, laying the foundation for advanced analytics and AI
• Be hands-on: flex across the stack, from backend data engineering to occasional contributions to frontend applications that bring data to life
• Collaborate with cross-functional teams: work with product managers, analytics, operations, and care delivery stakeholders to translate business needs into scalable data solutions
• Report to the Director of Engineering as a technical leader (not a people manager), mentoring junior engineers and shaping the long-term direction of the data ecosystem

🎯 Requirements

• 5+ years of professional experience in data engineering and software architecture shipping production-level code, preferably in healthcare services
• Strong programming skills (Python, PySpark, or similar) with proven experience building and scaling reusable, testable, and maintainable data pipelines
• Experience designing and optimizing data models, warehouses, and/or lakehouses
• Familiarity with modern data platforms and cloud ecosystems (e.g. Palantir Foundry, dbt, Databricks, Snowflake, AWS/GCP/Azure)
• Track record of ensuring data quality, governance, and reliability
• Ability to balance high-level architecture with tactical execution
• Can run with problems: take high-level business and technical challenges, frame them into actionable work, and deliver solutions with minimal oversight
• Customer-focused, with a passion for serving the patients and providers who make healthcare possible
• Results-driven, focusing time and resources on the most important priorities
• Comfortable with ambiguity, bringing a structured, proactive approach to execution
• Energized by building relationships and finding creative solutions
• Articulate and succinct when communicating complex concepts, both verbally and in writing

🏖️ Benefits

• Paid holidays + PTO
• Company-sponsored medical, dental, and vision insurance for you + your family
• FREE short-term and long-term disability insurance
• FREE $100k life insurance policy
• 401k plan with 4% company match + no vesting period
• $720 - $1,000 added to employee Health Savings Account annually for eligible health plans
• And more!

