Data Engineer

October 20


InductiveHealth Informatics

Healthcare Insurance • Security • Artificial Intelligence

InductiveHealth Informatics is a leading provider of integrated solutions for disease surveillance and public health data management. The company operates platforms such as EpiTrax, ESSENCE, and NBS to deliver configurable electronic disease surveillance, including AI-enhanced data migration and open-source technology for electronic laboratory and case reporting. With over a decade of experience in public health technology, InductiveHealth supports public health agencies worldwide by managing jurisdictional surveillance systems and providing data infrastructure that complies with information security standards. The company plays a pivotal role in public health intervention by rapidly detecting outbreaks and maintaining secure, scalable data environments. Its success stories include the COVID-19 Data Tracker, which delivered critical epidemiological data during a global health crisis.

📋 Description

• Play a key role in shaping the future of our data interoperability platform.
• Design and operate secure, high-performance data pipelines for real-time and batch public health data flows.
• Architect and implement scalable solutions that move sensitive health data reliably and meet strict security and compliance standards.
• Build and optimize ETL/ELT pipelines using NiFi, Kafka, Python, and SQL.
• Collaborate with interoperability, product, and data science teams to migrate legacy pipelines to modern architectures.
• Fine-tune data lakes and ensure smooth data exchange across systems.
• Define monitoring and alerting strategies and create runbooks to support operational teams.
• Troubleshoot production issues to maintain stable and performant pipelines.
• Drive continuous improvement across the data ecosystem through collaboration, innovation, and strong execution.
• Contribute directly to building and delivering high-quality, scalable integrations that drive rapid product progress.
• Collaborate closely with teammates to share knowledge and ensure smooth, efficient development workflows.
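To make the ETL/ELT duties above concrete, here is a minimal, hypothetical sketch of one pipeline step: validating incoming case-report records and loading them into a relational store. It uses only the Python standard library; the field names and table schema are illustrative assumptions, not InductiveHealth's actual data model.

```python
import sqlite3
from datetime import datetime

# Illustrative required fields for a case-report record (assumed, not a real schema).
REQUIRED_FIELDS = {"case_id", "condition", "reported_at"}

def validate(record: dict) -> bool:
    """Accept only records with all required fields and a parseable ISO timestamp."""
    if not REQUIRED_FIELDS <= record.keys():
        return False
    try:
        datetime.fromisoformat(record["reported_at"])
    except ValueError:
        return False
    return True

def load(records, conn) -> tuple:
    """Upsert validated records into SQLite; return (loaded, rejected) counts."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS case_reports ("
        "case_id TEXT PRIMARY KEY, condition TEXT, reported_at TEXT)"
    )
    loaded = rejected = 0
    for rec in records:
        if not validate(rec):
            rejected += 1
            continue
        conn.execute(
            "INSERT OR REPLACE INTO case_reports VALUES (?, ?, ?)",
            (rec["case_id"], rec["condition"], rec["reported_at"]),
        )
        loaded += 1
    conn.commit()
    return loaded, rejected

if __name__ == "__main__":
    raw = [
        {"case_id": "C1", "condition": "influenza", "reported_at": "2024-05-01T10:00:00"},
        {"case_id": "C2", "condition": "measles"},  # missing reported_at -> rejected
    ]
    conn = sqlite3.connect(":memory:")
    print(load(raw, conn))  # (1, 1)
```

In production the same validate-then-load pattern would sit behind a NiFi processor or a Kafka consumer rather than an in-memory batch, with rejected records routed to a dead-letter queue for the monitoring and runbook workflows the role describes.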

🎯 Requirements

• Bachelor’s degree in Computer Science, Data Engineering, Information Systems, or a related field.
• A minimum of 3 years of experience implementing enterprise-grade data pipelines, OR 7 years of equivalent professional experience implementing enterprise-grade data pipelines.
• Someone who thrives at the intersection of technical complexity and mission-driven impact.
• Proven ability to build robust, production-grade pipelines from the ground up and work confidently with distributed data systems.
• A curious, proactive problem solver who excels in fast-paced, agile environments.
• Hands-on expertise with NiFi, Kafka, Python, and SQL, including designing, operating, and optimizing ETL/ELT pipelines in the cloud.
• Experience using or integrating with OpenSearch/Elasticsearch.
• Familiarity with CI/CD and DevOps practices; able to partner with DevOps/SRE to integrate pipelines into existing tooling (not responsible for platform design).
• Practical experience using relational databases and existing schemas (implementing against established data models; extending under guidance).
• Strong understanding of data security and compliance requirements for sensitive data.
• Excellent cross-functional collaborator with clear communication and strong technical documentation skills.
• A growth mindset with a passion for solving complex interoperability challenges elegantly and efficiently.
• Strong communication skills to translate complex engineering concepts into clear strategies for cross-functional partners.
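As one concrete illustration of the OpenSearch/Elasticsearch requirement, the sketch below builds the NDJSON body for a `_bulk` indexing request: one action-metadata line followed by one source line per document, terminated by a newline. The index name and fields are invented for the example, and no cluster is contacted.

```python
import json

def build_bulk_body(index: str, docs: list, id_field: str = "case_id") -> str:
    """Serialize documents into the NDJSON body expected by the _bulk API:
    an action line, then the document source, repeated per document."""
    lines = []
    for doc in docs:
        # Action line: index this document under the given _index and _id.
        lines.append(json.dumps({"index": {"_index": index, "_id": doc[id_field]}}))
        # Source line: the document itself.
        lines.append(json.dumps(doc))
    # The bulk API requires the body to end with a newline.
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    body = build_bulk_body("case-reports", [{"case_id": "C1", "condition": "influenza"}])
    print(body)
```

Building the body as a pure function keeps it unit-testable; the string can then be POSTed to `/_bulk` with any HTTP client.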

🏖️ Benefits

• Virtual-first, remote organization and culture
• Flexible Paid Time Off (PTO)
• 401(k) retirement plan with corporate matching
• Medical, prescription, vision, and dental coverage (multiple plans based on your needs)
• Short Term and Long Term Disability (for employee)
• Life Insurance (for employee)
• New Team Member support for home office setup

Similar Jobs

October 20

Data Engineer focused on data pipelines and strategies at Roadie. Collaborating with teams to provide insights through scaled data integrations.

Airflow • Amazon Redshift • Apache • AWS • BigQuery • Cloud • Docker • ETL • Kafka • Kubernetes • Postgres • PySpark • Python • Scala • Spark • SQL • Tableau • Terraform

October 20

Join Zapier as a Senior Data Engineer, architecting robust data systems and collaborating with cross-functional teams. Shape the future of data usage in Zapier products and enhance data access.

🇺🇸 United States – Remote

💵 $170.7k - $256.1k / year

💰 Secondary Market on 2021-04

⏰ Full Time

🟠 Senior

🚰 Data Engineer

Cloud • Distributed Systems • Python • Spark • SQL • TypeScript

October 20

Data Engineer at Zapier responsible for building and scaling data systems powering products. Collaborate with teams to improve data accessibility and reliability.

🇺🇸 United States – Remote

💵 $141.1k - $211.7k / year

💰 Secondary Market on 2021-04

⏰ Full Time

🟡 Mid-level

🟠 Senior

🚰 Data Engineer

AWS • Azure • Cloud • Google Cloud Platform • Python • Spark • SQL • TypeScript

October 19

Senior Data Engineer focused on architecting business metrics for Netflix's Games portfolio. Leveraging high-quality data for analytics needs and working with technical teams.

Hadoop • Java • Python • Scala • Spark • SQL

October 18

Sr. Data Engineer at InStride contributing to data architecture and management. Leading a team to ensure scalable data solutions and effective collaboration across departments.

Cloud • MongoDB • Python • TypeScript

Built by Lior Neu-ner. I'd love to hear your feedback — Get in touch via DM or support@remoterocketship.com