Mid-Level Data Engineer

October 1

Kyriba

Fintech • Enterprise • Finance

Kyriba is a leading provider of financial technology solutions, offering secure, AI-powered data integration and liquidity management for enterprises. The platform connects ERPs, banks, and applications to provide real-time cash visibility, improve operational efficiency, and support all aspects of enterprise liquidity management. Its offerings include real-time treasury management, risk management, payments, and connectivity solutions, built for finance professionals and addressing complex liquidity challenges through advanced data automation and integration. Kyriba serves industries such as finance, technology, retail, manufacturing, and insurance, and its mission is to improve financial health and resilience by optimizing liquidity performance and strategic financial decision-making for organizations of all sizes.

501 - 1000 employees

Founded 2000

📋 Description

• Design, implement, and optimize ETL pipelines using Databricks and AWS S3 to support analytics, ML, BI, and automation
• Build and maintain data architectures for structured and unstructured data, ensuring data quality, lineage, and security
• Integrate data from multiple sources, including external APIs and on-premise systems, to create a unified data environment
• Collaborate with Data Scientists and ML Engineers to deliver datasets and features for model training, validation, and inference
• Develop and operationalize ML/GenAI pipelines, automating data preprocessing, feature engineering, model deployment, and monitoring (e.g., Databricks MLflow)
• Support deployment and maintenance of GenAI models and LLMs in production environments
• Provide clean, reliable data sources for reporting and dashboarding via QlikView and enable self-service BI
• Partner with Automation Specialists to design and implement data-driven automated workflows using MuleSoft
• Implement data governance, security, and compliance best practices and document data flows, pipelines, and architectures
• Collaborate across teams (data science, BI, business, IT) to align data engineering efforts with strategic objectives

🎯 Requirements

• Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or related field
• Proven experience as a Data Engineer or similar role (3+ years)
• Expertise in Databricks and AWS S3
• Strong programming skills in Python (preferred for ML/automation), SQL, and/or Scala
• Experience building data pipelines for analytics, ML, BI, and automation use cases
• Familiarity with ML frameworks (scikit-learn, TensorFlow, PyTorch) and MLOps tools (Databricks MLflow, AWS SageMaker)
• Familiarity with GenAI libraries (HuggingFace, LangChain) and LLM deployment
• Experience supporting BI/reporting solutions, preferably with QlikView
• Hands-on experience with automation/integration platforms such as MuleSoft is a strong plus
• Understanding of data governance, security, quality, and compliance
• Excellent communication, collaboration, and problem-solving skills
• Nice to have: experience deploying GenAI/LLM models at scale; API development; DevOps/CI/CD for data solutions; relevant AWS, Databricks, or QlikView certifications

Similar Jobs

September 26

Lead a team to design Azure Medallion architectures, implement CI/CD data pipelines, and integrate vector search/AI while collaborating with US stakeholders.

Azure • PySpark • Python • SQL

September 24

Lead Oracle PL/SQL development, optimize SQL, support production incidents, and mentor the PL/SQL team at SQLI.

🗣️🇫🇷 French Required

Oracle • SQL

September 10

Senior Data Engineer for a prominent game studio, developing data solutions and analytics. Collaborate with teams to enhance the player experience through data while operating large-scale systems.

Ansible • AWS • EC2 • Java • Kafka • PySpark • Python • Ruby on Rails • Scala • Spark • Terraform

September 4

Data Engineer building Microsoft Fabric, Delta Lake, and Apache Spark pipelines on Azure for a global professional services firm. Design, optimize, and secure scalable data transformation workflows.

Apache • Azure • PySpark • Python • Scala • Spark

July 5

Work as a Cloud Data Architect on GCP, providing data solutions for clients.

🗣️🇵🇱 Polish Required

Apache • BigQuery • Cloud • ETL • Google Cloud Platform • SQL • Terraform
