Scientific Data Architect

November 12


TetraScience

Artificial Intelligence • Biotechnology • SaaS

TetraScience transforms raw scientific data into AI-native datasets for advanced scientific applications. Working closely with leading biopharmaceutical companies, it accelerates insights and ensures data integrity across the scientific value chain. Its platform supports next-generation lab data management, AI-driven scientific outcomes, and compliance with industry standards. As the first company to offer a data and AI cloud built specifically for science, TetraScience helps clients liberate, unify, and transform their data, breaking down traditional data silos and boosting scientific productivity through a flexible, open, and collaborative infrastructure.

📋 Description

• You will be a critical team member in a unique partnership to industrialize Scientific AI. You will engage directly with customers onsite a couple of days per week in the Copenhagen Region, building strong relationships, deeply understanding their scientific data challenges and requirements, and accelerating solutions.
• Design and implement extensible, reusable data models that efficiently capture and organize data for scientific use cases, ensuring scalability and future adaptability.
• Translate scientific data workflows into robust solutions leveraging the Tetra Data Platform.
• Own, scope, prototype, and implement solutions, including:
  - Data model design (tabular and JSON)
  - Python-based parser development
  - Lab software (e.g., ELN/LIMS) integration via APIs
  - Data visualization and app development in Python, using app frameworks like Streamlit and plotting tools like HoloViews and Plotly (see the illustrative sketch after this list)
• Collaborate with Scientific Business Analysts (SBAs), customer scientists, and applied AI engineers to develop and deploy models (ML, AI, mechanistic, statistical, hybrid).
• Programmatically interrogate proprietary instrument output files.
• Iterate dynamically with scientific end users and technical stakeholders to rapidly drive solution development and adoption through regular demos and meetings.
• Proactively communicate implementation progress and deliver demos to customer stakeholders.
• Collaborate with the product team to build and prioritize the roadmap by understanding customers' pain points within and beyond the Tetra Data Platform.
• Rapidly learn new technologies (e.g., new AWS services or scientific analysis applications) to develop and troubleshoot use cases.
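To ground the data visualization bullet above: a minimal sketch of the kind of lightweight Python data app this role describes, built with Streamlit and Plotly. This is illustrative only, not TetraScience code; the CSV layout and column names (sample_id, concentration, response) are hypothetical.

```python
# Illustrative sketch only: a minimal Streamlit + Plotly app for exploring
# assay results, assuming a hypothetical CSV with columns
# "sample_id", "concentration", and "response".
import pandas as pd
import plotly.express as px
import streamlit as st

st.title("Assay Results Explorer")

uploaded = st.file_uploader("Upload assay results (CSV)", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)

    # Let the scientist pick one sample to inspect.
    sample = st.selectbox("Sample", sorted(df["sample_id"].unique()))
    subset = df[df["sample_id"] == sample]

    # Log-scaled dose-response scatter, a common view in lead optimization.
    fig = px.scatter(
        subset,
        x="concentration",
        y="response",
        log_x=True,
        title=f"Dose-response for {sample}",
    )
    st.plotly_chart(fig)
```

Run with `streamlit run app.py`; in a role like this, the data source would more likely be the Tetra Data Platform than a file upload.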

🎯 Requirements

• PhD with 7+ years, or Master's with 10+ years, of industry experience in life sciences, with extensive domain knowledge in drug discovery (target identification through lead optimization), preclinical development, CMC (all drug modalities), or product quality testing.
• Proven track record of defining, designing, prototyping, and implementing productized AI/ML-driven use cases in cloud environments.
• Experience collaborating with cross-functional teams, including product managers, software engineers, and scientific stakeholders.
• Extensive experience with exploratory data analysis and workflow optimization, enabling scientific outcomes not previously possible.
• Excellent communication and storytelling abilities, with experience engaging audiences ranging from scientists to executive stakeholders.
• Experience advising scientists in a consulting capacity to advance research, development, and quality-testing outcomes.

🏖️ Benefits

• Competitive salary and equity in a fast-growing company.
• Supportive, team-oriented culture of continuous improvement.
• Generous paid time off (PTO).
• Flexible working arrangements: remote work when not at customer sites.

Apply Now

Similar Jobs

November 11

Technical Project Manager supporting data engineering for a digital media agency. Managing vendor relationships and ensuring accurate data delivery in a fast-paced environment.

November 11

Senior Data Engineer responsible for designing and optimizing data infrastructure at Spring & Bond. Working with analysts and mentoring junior engineers to develop scalable ETL pipelines and data solutions.

Amazon Redshift • AWS • BigQuery • Cloud • ETL • Pandas • Python • SQL

November 11

Data Engineer at Kraken developing blockchain data solutions for institutional clients. Contributing to staking services and data integrity with collaborative, innovative efforts.

Airflow • Apache • AWS • Docker • Kubernetes • Python • Rust • SQL • TypeScript

November 10

Data Engineer at Screenverse responsible for building ETL pipelines and data infrastructure. Focus on data quality and reporting for dynamic digital screen network.

Elixir • ETL • Postgres • SQL

November 9

Data Architect Lead supporting Master Data Optimization project for Agentic Dream. Designing scalable, AI-enabled architecture integrating data from 11+ ERP systems.

Azure • Cloud • ERP • ETL • SQL
