Clinical Data Engineer

November 25

ICON plc

Pharmaceuticals • Biotechnology • Healthcare Insurance

ICON plc is a global provider of clinical, consulting, and commercial services across the healthcare and pharmaceutical industries. It offers a wide range of clinical solutions, including decentralised clinical trials, cardiac safety solutions, early clinical laboratories, and medical imaging, as well as technology solutions that support drug development from early phases through post-marketing. Its therapeutic expertise spans cardiovascular, oncology, internal medicine, and more, with a strong focus on transformative therapies such as cell and gene therapies and biosimilars. ICON is recognised as a leading Contract Research Organisation (CRO) and regularly provides thought leadership and contributes to industry publications and events. Its services are tailored to optimise clinical trials, offer regulatory intelligence, and deliver real-world evidence through robust data analysis and patient-centric solutions.

📋 Description

• Serve as a technical expert in building data pipelines for the ingestion and delivery of clinical data at the study level, supporting study start-up, conduct, and close-out activities.
• Develop robust data pipelines for integrating heterogeneous data sources.
• Identify, design, and implement scalable data delivery solutions, automating manual processes whenever possible.
• Develop and implement comprehensive data integrity and quality checks throughout the data ingestion process (see the illustrative sketch after this list).
• Design and build infrastructure for optimal data extraction, transformation, and loading (ETL/ELT) using cloud platforms such as AWS and Azure.
• Collaborate with downstream users, including statistical programmers, SDTM programmers, analytics, and clinical data programmers, to ensure deliverables meet end-user requirements.
• Appropriately escalate issues to CDE leadership as needed.
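For illustration only, and not part of the job description: the sketch below shows the kind of ingestion-time integrity check referred to above, using Python and pandas. The column names (SUBJID, VISITNUM, AVAL), the specific checks, and the pandas-based approach are assumptions made for the example rather than requirements stated by ICON.

```python
# Hypothetical sketch of an ingestion-time data quality step.
# Column names and checks are illustrative, not from the posting.
import pandas as pd


def quality_check(df: pd.DataFrame) -> list[str]:
    """Return a list of data-integrity findings; an empty list means clean."""
    findings = []
    if df["SUBJID"].isna().any():
        findings.append("missing subject identifiers")
    duplicate_rows = int(df.duplicated(subset=["SUBJID", "VISITNUM"]).sum())
    if duplicate_rows:
        findings.append(f"{duplicate_rows} duplicate subject/visit rows")
    if (df["AVAL"] < 0).any():
        findings.append("negative analysis values")
    return findings


if __name__ == "__main__":
    # Stand-in for data ingested from an upstream clinical source.
    raw = pd.DataFrame(
        {
            "SUBJID": ["001", "002", "002", None],
            "VISITNUM": [1, 1, 1, 2],
            "AVAL": [5.2, 4.8, 4.8, -1.0],
        }
    )
    issues = quality_check(raw)
    if issues:
        # In a real pipeline these findings would be logged and escalated.
        print("Quality checks failed:", "; ".join(issues))
    else:
        raw.to_csv("clean_study_data.csv", index=False)
```

In a production setting, checks like these would typically run inside an orchestrated ETL/ELT job on AWS or Azure, with findings logged and escalated to CDE leadership as described above.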

🎯 Requirements

• Extensive experience developing ELT and ETL solutions for data warehouses and data lakes.
• Proficient with Python, R, RShiny, SQL, and NoSQL databases.
• Hands-on cloud experience with AWS, Azure, or GCP.
• Familiarity with GitLab, GitHub, and Jenkins for version control and CI/CD.
• Proven expertise in deploying data pipelines in cloud environments.
• Skilled in setting up and managing data warehouses and data lakes (e.g., Snowflake, Amazon Redshift).
• Adept at designing, developing, and maintaining scalable data pipelines for large datasets.
• Strong understanding of database concepts, with working knowledge of XML, JSON, and API integrations (illustrated below).
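Again purely as an illustration of the JSON/API integration pattern named in the last requirement: the sketch below parses a JSON payload and loads it into a relational staging table. The payload shape, the table layout, and the use of SQLite as a local stand-in for a warehouse such as Snowflake or Amazon Redshift are all assumptions for the example; in practice the payload would come from an authenticated study API and the load step would target the warehouse or data lake.

```python
# Hypothetical sketch: flatten a JSON payload into rows and load them
# into a staging table. SQLite stands in for a real warehouse here.
import json
import sqlite3

# In practice this payload would be returned by an authenticated API call.
payload = json.loads("""
{
  "study": "ABC-123",
  "records": [
    {"subject": "001", "visit": 1, "test": "ALT", "value": 31},
    {"subject": "002", "visit": 1, "test": "ALT", "value": 28}
  ]
}
""")

rows = [
    (payload["study"], r["subject"], r["visit"], r["test"], r["value"])
    for r in payload["records"]
]

con = sqlite3.connect("staging.db")
con.execute(
    "CREATE TABLE IF NOT EXISTS lab_results "
    "(study TEXT, subject TEXT, visit INTEGER, test TEXT, value REAL)"
)
con.executemany("INSERT INTO lab_results VALUES (?, ?, ?, ?, ?)", rows)
con.commit()
con.close()
```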

🏖️ Benefits

• Various annual leave entitlements.
• A range of health insurance offerings to suit you and your family’s needs.
• Competitive retirement planning offerings to maximise savings and plan with confidence for the years ahead.
• Global Employee Assistance Programme, TELUS Health, offering 24-hour access to a global network of over 80,000 independent specialised professionals to support you and your family’s well-being.
• Life assurance.
• Flexible country-specific optional benefits, including childcare vouchers, bike purchase schemes, discounted gym memberships, subsidised travel passes, and health assessments, among others.

Apply Now

Similar Jobs

• November 25: Senior Azure Data Engineer overseeing data engineering projects for global teams, collaborating with stakeholders to ensure scalability, efficiency, and compliance in data solutions. (Azure, Cloud, ETL, PySpark, Python, SQL)

• November 21: Enterprise Data Architect at Aptus Data Labs leading data architecture strategy and digital transformation, designing data platforms using AWS and Databricks for analytics and AI initiatives. (Airflow, AWS, Cloud, ETL, Kafka, PySpark, Python, Spark, SQL, Unity)

• November 20: Big Data Engineer for Weekday's client focusing on real-time streaming systems and large-scale data processing, building high-performance, low-latency pipelines using Java and modern big data technologies. (Apache, Distributed Systems, Hadoop, Java, Kafka, Spark)

• November 18: Senior Data Engineer at Smart Working building scalable data solutions using Azure and SQL for global teams, engaging with clients and delivering measurable improvements across projects. (Azure, Cloud, Spark, SQL)
