Senior Data Engineer – Technical Intelligence

September 23

Input Output (IOHK)

Crypto • Web 3

Input Output (IOHK) is a technology company focused on research and development in the field of blockchain and cryptocurrencies. It is known for its key role in developing the Cardano blockchain platform, which aims to offer a more secure and scalable environment for digital transactions and smart contracts. IOHK places a strong emphasis on academic research and peer-reviewed development methods, contributing to its reputation in the blockchain space.

201 - 500 employees

Founded 2015

📋 Description

• Own and drive the end-to-end data process for intelligence (business and technical) across all company ventures, including acquiring, processing (ETL), modeling with AI/ML, and reporting.
• Contribute to an automated process that leverages a data warehouse alongside ML/AI on AWS to generate insights.
• Serve as the competitive-positioning expert in the blockchain landscape, providing data insights to fulfill stakeholder requirements.
• Architect and build performant, scalable data engineering pipelines on AWS; drive data system design, modeling, data quality, and delivery (a minimal pipeline sketch follows this list).
• Work with senior leadership and stakeholders to understand data value, define key metrics for competitive positioning, and gather requirements.
• Collaborate with data engineers, analysts, data scientists, and ML/AI engineers to operationalize models and integrate LLM-related logic into data pipelines.
• Provide direct exposure to strategic decision-making, analyzing company strategy and new venture planning from a quantitative perspective.
• Use Scrum methodologies to drive prioritization, planning, and execution of tasks for the data intelligence team.
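The acquire-process-report flow described above is the kind of work typically coordinated with a scheduler such as Apache Airflow (named in the requirements below). A minimal sketch, assuming Airflow 2.4+; every DAG, task, function, and service name here is invented for illustration and is not taken from the posting:

```python
# A minimal, hypothetical Airflow DAG sketching the acquire -> transform -> report
# flow described above (assumes Airflow 2.4+). All DAG, task, and function names
# are invented for illustration and are not taken from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_raw_events():
    """Placeholder: pull source data into a landing zone, e.g. S3."""
    print("extracting raw events")


def transform_to_warehouse():
    """Placeholder: run the ETL step, e.g. a Glue/PySpark job feeding Redshift."""
    print("transforming and loading")


def publish_insights():
    """Placeholder: refresh the reporting layer consumed by stakeholders."""
    print("publishing insights")


with DAG(
    dag_id="competitive_intelligence_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_raw_events)
    transform = PythonOperator(task_id="transform", python_callable=transform_to_warehouse)
    publish = PythonOperator(task_id="publish", python_callable=publish_insights)

    extract >> transform >> publish
```

In a real deployment the placeholder callables would hand off to managed services (for example Glue jobs or Redshift loads) rather than run the work inside Airflow itself.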

🎯 Requirements

• An MSc/PhD in Computer Science, AI, or a related field is a strong plus.
• Typically 7+ years of professional experience in data engineering or data science.
• Expertise in data modeling and data warehousing, including building and optimizing data schemas and data lakes.
• Hands-on experience with AWS data services such as Glue, Redshift, S3, Lambda, and Athena.
• Advanced proficiency in Python, SQL, and PySpark (see the sketch after this list).
• Hands-on experience with big data frameworks such as Apache Spark and dbt, workflow orchestration tools such as Apache Airflow or AWS Step Functions, and Infrastructure-as-Code tools such as CloudFormation or Terraform.
• Hands-on experience with MLOps, operationalizing machine learning models in production.
• Experience leading and mentoring data engineers and driving data system projects from conception to completion.
• Solid critical thinking, research, and problem-solving skills, with the ability to quickly grasp any data-related domain.
• Solid foundation in large-scale data systems, including data warehousing and data processing: parallel processing, data partitioning, and cost efficiency.
• Solid data programming skills covering algorithms, data structures, design patterns, and relational databases.
• Solid software development practices: version control, testing, and CI/CD.
• Knowledge of Machine Learning (ML), Natural Language Processing (NLP), or Deep Learning (DL).
• Knowledge of the blockchain domain and a passion for gaining further breadth and depth in it.
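To illustrate the Python/SQL/PySpark proficiency the list above asks for, here is a minimal PySpark ETL sketch: read raw data, derive a simple daily metric, and write a partitioned table. All paths, column names, and metrics are hypothetical, not taken from the posting.

```python
# A minimal, hypothetical PySpark sketch of the kind of ETL task the
# requirements describe: read raw events, aggregate a daily metric, and
# write a partitioned table. Paths, columns, and names are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Read raw events from a (hypothetical) S3 landing zone.
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Model a simple daily metric per project, e.g. transaction counts.
daily_counts = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("project", "day")
    .agg(F.count("*").alias("tx_count"))
)

# Write back partitioned by day, queryable from e.g. Athena or Redshift Spectrum.
(
    daily_counts.write
    .mode("overwrite")
    .partitionBy("day")
    .parquet("s3://example-bucket/curated/daily_tx_counts/")
)
```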

🏖️ Benefits

• Remote work
• Laptop reimbursement
• New starter package to buy hardware essentials (headphones, monitor, etc.)
• Learning & Development opportunities
• Competitive PTO

Apply Now

Similar Jobs

August 26

Kainos

1001 - 5000 employees

Lead development of big-data ETL/ELT pipelines and components; mentor engineers; ensure production readiness for Kainos' data solutions.

AWS • Azure • ETL • Google Cloud Platform • Java • Kafka • Python • Scala • SQL

August 19

Data Engineer at Burq designs scalable data pipelines powering analytics and AI capabilities; collaborates with product, ops, and engineering teams.

Airflow • Amazon Redshift • AWS • Azure • BigQuery • Cloud • ETL • Google Cloud Platform • IoT • Java • Kafka • Python • Scala • SQL

August 16

Hands-on lead role building scalable data pipelines and architectures; mentors engineers while aligning to company analytics and product needs.

Airflow • Apache • AWS • Docker • ETL • Kafka • Kubernetes • Python • Spark • SQL • Terraform

July 30

Kainos

1001 - 5000 employees

Join Kainos as a Data Architect, providing guidance on data architecture to drive impactful solutions.

July 29

Join Zenobē as a Senior Data Engineer to design, build, and evolve data pipelines for clean power solutions.

AWS • Cloud • ETL • Hadoop • IoT • Java • Kafka • NoSQL • Python • Scala • Spark • SQL • Tableau
