Senior Data Engineer – Vital CDM

Yesterday

Health Catalyst

Healthcare Insurance • Artificial Intelligence • SaaS

Health Catalyst is a leading provider of data and analytics technology and services to healthcare organizations, committed to being the catalyst for massive, measurable, data-informed healthcare improvement. The company empowers organizations with AI-enabled insights and comprehensive data solutions to drive scalable, measurable improvements in patient outcomes, operational efficiency, and financial performance. With a focus on population health management, clinical quality, and patient engagement, Health Catalyst aims to transform healthcare through data-driven decision-making.

📋 Description

• Support the Product Development department
• Work with a team of web application and data engineers to implement database solutions
• Help scale and refactor an existing public-facing website database and related services
• Move resources to Azure
• Create and release new features for the product
• Manage related backend data services
• Collaborate with Product Managers and developers within an Agile/Scrum methodology
• Build solutions that are automated, scalable, and sustainable
• Evaluate and analyze the current system architecture
• Create scalable solutions to improve uptime and responsiveness
• Drive development end to end for on-time delivery of high-quality solutions
• Research, identify, analyze, and correct technical issues
• Resolve complex data issues and perform data quality checks (see the sketch after this list)
• Translate business requirements into data mapping specifications
• Integrate client data into the product suite
• Maintain and optimize several complex databases
• Investigate and troubleshoot complicated database application and stability issues
• Ensure MSSQL databases are operational
• Guide efforts in all areas of database design, performance, and reliability
• Participate in code reviews that include database changes
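Several of the duties above center on data quality checks against the product's MSSQL databases. As a rough illustration only (not the team's actual tooling), here is a minimal Python sketch using pyodbc; the connection string, table, and column names (dbo.Encounters, patient_id, admit_date) are invented placeholders, not details from the posting.

```python
import pyodbc  # assumes the Microsoft ODBC Driver for SQL Server is installed

# Placeholder connection details, invented for illustration.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=SampleDb;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# A basic quality check: count rows violating simple invariants
# (NULL keys, dates in the future). Real checks would be driven by
# the data mapping specifications the role produces.
cursor.execute("""
    SELECT
        SUM(CASE WHEN patient_id IS NULL THEN 1 ELSE 0 END) AS null_keys,
        SUM(CASE WHEN admit_date > GETDATE() THEN 1 ELSE 0 END) AS future_dates,
        COUNT(*) AS total_rows
    FROM dbo.Encounters;
""")
null_keys, future_dates, total_rows = cursor.fetchone()
print(f"{null_keys} NULL keys, {future_dates} future dates, {total_rows} rows total")
```

Checks like this are typically wired into a scheduled job or CI step so data regressions surface before a release.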

🎯 Requirements

• BS or MS in Computer Science, or equivalent professional experience
• 6+ years of MSSQL Server and/or RDBMS experience with current technology (required)
• 6+ years of SQL optimization experience (required): index optimization strategies, data normalization/denormalization strategies, plan analysis, recompilation, caching and buffering, optimization tools such as SQL Server Extended Events or similar, and statistics and their role
• 3+ years of experience with high-transaction OLTP environments of 4+ TB
• A solid understanding of data structures (e.g., XML/SGML/DTD/JSON)
• A solid understanding of parsing and transforming JSON data in SQL Server (see the sketch after this list)
• Experience writing complex and efficient SQL stored procedures
• Deep working knowledge of SQL Server, including order of operations, transactions and concurrency, FileTables and security, brokering technologies, transactional replication, indexing strategies and maintenance, backup and recovery models, and multi-node clustering and high availability
• Familiarity with Git and branching strategies
• Familiarity with creating and/or consuming REST APIs using C#, NodeJS, Python, etc.
• Familiarity with NoSQL (MongoDB and/or Elasticsearch)
• Azure knowledge highly desired
• Demonstrable experience implementing enterprise-scale, high-volume, high-availability systems
• Demonstrated ability to deliver major critical projects
• Experience with Agile and Scrum team development environments
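The JSON requirement above points at SQL Server's built-in JSON support (OPENJSON and related functions, available since SQL Server 2016). Below is a minimal sketch of shredding a JSON payload into typed rows, again using pyodbc; the server, database, and field names are invented placeholders, and it assumes a reachable local instance.

```python
import json
import pyodbc  # assumes the Microsoft ODBC Driver for SQL Server is installed

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=SampleDb;Trusted_Connection=yes;"
)

# OPENJSON (SQL Server 2016+) turns a JSON document into a rowset;
# the WITH clause maps JSON paths onto typed columns.
query = """
    SELECT j.patient_id, j.event_type
    FROM OPENJSON(?)
    WITH (
        patient_id INT           '$.patientId',
        event_type NVARCHAR(50)  '$.eventType'
    ) AS j;
"""

payload = json.dumps([
    {"patientId": 1, "eventType": "admit"},
    {"patientId": 2, "eventType": "discharge"},
])

cursor = conn.cursor()
for row in cursor.execute(query, payload):
    print(row.patient_id, row.event_type)
```

From there, the shredded rows can be transformed and loaded into relational tables with ordinary INSERT ... SELECT or MERGE statements.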

🏖️ Benefits

• Mentoring and sponsorship programs
• Remote-work friendliness
• Career development
• Company equity
• Flexible PTO

Similar Jobs

2 days ago

Senior Data Engineer managing and optimizing data systems within the GCP stack for Second Nature. Leading technical projects and collaborating with cross-functional teams to enhance data insights.

🇺🇸 United States – Remote

💵 $140k - $150k / year

💰 $16.4M Series C (March 2020)

⏰ Full Time

🟠 Senior

🚰 Data Engineer

BigQuery

Cloud

Docker

Google Cloud Platform

Python

SQL

2 days ago

Senior Data Engineer managing and transforming data pipelines at Experian. Utilizing dbt and Snowflake for product development and research in fraud detection.

AWS

Cloud

Python

SQL

Terraform

2 days ago

Data Architect designing a data foundation to support AI/ML applications at Leidos. Responsible for creating big data systems and ensuring data access and documentation.

ETL

TypeScript

2 days ago

Data Engineer building and scaling cloud-native data pipelines using Snowflake and dbt for investment research. Solving data challenges and delivering trusted data to stakeholders across the business.

Airflow

AWS

Cloud

Python

SQL

2 days ago

Data Engineer at Cayuse responsible for designing and maintaining scalable data pipelines. Collaborating with health programs on data specifications and ensuring data integrity and governance.

Airflow

ETL

MySQL

Oracle

Pandas

Python

SQL
