Data Engineer

November 6

Amazon Redshift

AWS

BigQuery

PySpark

Python

SQL


MedScout

Healthcare Insurance • SaaS • eCommerce

MedScout leverages the latest healthcare claims data to provide actionable insights for MedTech commercial teams. Its platform simplifies market analysis, strategy development, and sales execution, making it easy to use for sales reps, sales management, and marketing teams. MedScout's proprietary dataset includes extensive claims from Medicare and commercial payers, enabling users to identify and target best-fit physicians and healthcare sites, optimize territories, and understand referral behaviors. With an emphasis on data integration and usability, MedScout aims to enhance team effectiveness and boost sales revenue.

2 - 10 employees

⚕️ Healthcare Insurance

☁️ SaaS

🛍️ eCommerce

📋 Description

• You will design, implement, and maintain scalable data pipelines to process large volumes of healthcare claims data using Databricks, Python, and PySpark, ensuring high data quality and performance optimization for downstream analytics (a minimal sketch follows this list).
• You will develop processes to integrate multiple data sources, including healthcare claims databases, into a unified data model that powers MedScout's sales enablement platform.
• You will work with Product, Customer Success, and Sales leaders to understand what our customers are looking to achieve with our platform, and use those insights to inform and validate your design and implementation decisions.
• You will collaborate with data scientists and analysts to implement data transformations that efficiently deliver advanced analytics, market insights, and predictive modeling capabilities for the platform.
• You will troubleshoot and resolve complex data pipeline issues, optimize system performance, and contribute to the continuous improvement of MedScout's data infrastructure and engineering practices.
• You will optimize workloads and cluster configurations to reduce compute costs while maintaining performance, including implementing auto-scaling policies, right-sizing clusters, and monitoring resource utilization patterns.
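
To ground the first responsibility, here is a minimal PySpark sketch of the kind of claims pipeline step described above. It is an illustrative assumption only: the table names (raw.medicare_claims, analytics.provider_procedure_volume) and columns (claim_id, npi, procedure_code) are hypothetical, not details of MedScout's actual platform.

```python
# Illustrative sketch only -- table names and columns are hypothetical,
# not taken from MedScout's actual platform.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_pipeline").getOrCreate()

# Read raw claims from a (hypothetical) bronze table
raw = spark.read.table("raw.medicare_claims")

# Basic cleanup: drop rows without a claim ID, deduplicate on it,
# and zero-pad provider NPIs to the standard 10 digits
claims = (
    raw.filter(F.col("claim_id").isNotNull())
       .dropDuplicates(["claim_id"])
       .withColumn("npi", F.lpad(F.col("npi").cast("string"), 10, "0"))
)

# Aggregate to a provider/procedure-level table for downstream analytics
provider_volume = (
    claims.groupBy("npi", "procedure_code")
          .agg(F.count("*").alias("claim_count"))
)

provider_volume.write.mode("overwrite").saveAsTable(
    "analytics.provider_procedure_volume"
)
```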

🎯 Requirements

• 3+ years of experience building, maintaining, and operating data pipelines in a modern data warehouse like Databricks, Snowflake, BigQuery, or Amazon Redshift.
• You have strong Python, SQL, and PySpark skills.
• You have experience with data transformation tools like dbt or SQLMesh.
• You have a solid understanding of data modeling and schema design, particularly in contexts involving complex relationships and high-volume data processing.
• You have experience with data quality frameworks, including automated testing, validation, and monitoring of data pipelines (see the sketch after this list).
• You are familiar with modern software development practices, including version control (Git), CI/CD, and infrastructure as code.
• You work effectively with cross-functional teams, translating business requirements into technical specifications and communicating complex technical concepts to non-technical stakeholders.
• Bonus: You have experience working with healthcare claims data or an understanding of claims data structures.
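
As a rough illustration of the data quality requirement above, the sketch below shows one common pattern: hard assertions that fail a pipeline run before bad data reaches downstream consumers. The checks and the table name are assumptions for illustration, not a prescribed framework.

```python
# Minimal data quality gate sketch -- checks and table name are
# hypothetical illustrations, not a specific framework's API.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_quality_checks").getOrCreate()
df = spark.read.table("analytics.provider_procedure_volume")

# Each check maps a descriptive name to a boolean pass/fail result
checks = {
    "table_not_empty": df.count() > 0,
    "no_null_npi": df.filter(F.col("npi").isNull()).count() == 0,
    "no_negative_counts": df.filter(F.col("claim_count") < 0).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # Fail loudly so the orchestrator blocks the run
    raise ValueError(f"Data quality checks failed: {failed}")
```

In practice a team might express the same assertions declaratively, for example as dbt tests; the point is that validation runs automatically inside the pipeline rather than ad hoc.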

🏖️ Benefits

• Fully covered healthcare and a great vision, dental, and 401k package.
• You will feel heard. You will hear, "Yes, let's do that!" and then have the opportunity to execute your ideas successfully.
• Remote-first culture and quarterly on-sites with the rest of the MedScout team.
• We stay in nice hotels and eat well when we travel for work. No one feels like a badass walking into a Quality Inn.
• Generous budget for learning and development, plus any tools you feel would make you more effective.

Apply Now

Similar Jobs

November 5

PLUM Commercial Real Estate Lending

11 - 50

💸 Finance

🏠 Real Estate

🤝 B2B

Senior Data Engineer designing scalable data pipelines and infrastructure for AI-driven fintech solutions. Collaborating with cross-functional teams using modern data stack tools.

🇺🇸 United States – Remote

⏰ Full Time

🟠 Senior

🚰 Data Engineer

November 5

Loop

51 - 200

🛍️ eCommerce

☁️ SaaS

🏢 Enterprise

Data Engineer at Loop creating tools for merchants and supporting data-driven decision-making across teams. Collaborating on data quality, modeling, and new ingestion sources in a remote setting.

🇺🇸 United States – Remote

💵 $118.4k - $177.6k / year

💰 $65M Series B on 2021-07

⏰ Full Time

🟡 Mid-level

🟠 Senior

🚰 Data Engineer

🦅 H1B Visa Sponsor

November 5

GovCIO

1001 - 5000

🏛️ Government

🏢 Enterprise

🔒 Cybersecurity

Millennium Data Architect supporting modernization efforts for the Department of Veterans Affairs. Requires expertise in data modeling and collaboration with cross-functional teams.

🇺🇸 United States – Remote

💵 $150k - $170k / year

⏰ Full Time

🟠 Senior

🔴 Lead

🚰 Data Engineer

November 5

iShare Inc.

11 - 50

🤝 B2B

🏢 Enterprise

Data Engineer developing and maintaining data pipelines and ensuring accuracy for insurance-related tasks. Collaborating with teams to support data-driven decision-making processes.

🇺🇸 United States – Remote

💵 $150k - $190k / year

⏰ Full Time

🟠 Senior

🔴 Lead

🚰 Data Engineer

November 5

Turnkey

11 - 50

₿ Crypto

💳 Fintech

🔌 API

Data Engineer responsible for the end-to-end data lifecycle at Turnkey. Collaborating with Engineering, Operations, and Product teams to ensure data infrastructure scalability and reliability.

🇺🇸 United States – Remote

💵 $175k - $250k / year

⏰ Full Time

🟡 Mid-level

🟠 Senior

🚰 Data Engineer

🦅 H1B Visa Sponsor
