Mid-Level Data Engineer

October 1

Apply Now

Kyriba

Fintech • Enterprise • Finance

Kyriba is a leading provider of financial technology solutions that offer secure, AI-powered data integration and liquidity management services for enterprises. The platform seamlessly connects ERPs, banks, and apps to provide real-time cash visibility, enhance operational efficiency, and support all aspects of enterprise liquidity management. Kyriba's offerings include real-time treasury management, risk management, payments, and connectivity solutions. The solutions are tailored for finance professionals and address complex liquidity challenges through advanced data automation and integration capabilities, supporting industries such as finance, technology, retail, manufacturing, and insurance. Kyriba's mission is to improve financial health and resilience by optimizing liquidity performance and strategic financial decision-making for organizations of various sizes.

501 - 1000 employees

Founded 2000

💳 Fintech

🏢 Enterprise

💸 Finance

📋 Description

• Design, implement, and optimize ETL pipelines using Databricks and AWS S3 to support analytics, ML, BI, and automation

• Build and maintain data architectures for structured and unstructured data, ensuring data quality, lineage, and security

• Integrate data from multiple sources, including external APIs and on-premise systems, to create a unified data environment

• Collaborate with Data Scientists and ML Engineers to deliver datasets and features for model training, validation, and inference

• Develop and operationalize ML/GenAI pipelines, automating data preprocessing, feature engineering, model deployment, and monitoring (e.g., Databricks MLflow)

• Support deployment and maintenance of GenAI models and LLMs in production environments

• Provide clean, reliable data sources for reporting and dashboarding via QlikView, and enable self-service BI

• Partner with Automation Specialists to design and implement data-driven automated workflows using MuleSoft

• Implement data governance, security, and compliance best practices, and document data flows, pipelines, and architectures

• Collaborate across teams (data science, BI, business, IT) to align data engineering efforts with strategic objectives

🎯 Requirements

• Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field

• Proven experience as a Data Engineer or in a similar role (3+ years)

• Expertise in Databricks and AWS S3

• Strong programming skills in Python (preferred for ML/automation), SQL, and/or Scala

• Experience building data pipelines for analytics, ML, BI, and automation use cases

• Familiarity with ML frameworks (scikit-learn, TensorFlow, PyTorch) and MLOps tools (Databricks MLflow, AWS SageMaker)

• Familiarity with GenAI libraries (Hugging Face, LangChain) and LLM deployment

• Experience supporting BI/reporting solutions, preferably with QlikView

• Hands-on experience with automation/integration platforms such as MuleSoft is a strong plus

• Understanding of data governance, security, quality, and compliance

• Excellent communication, collaboration, and problem-solving skills

• Nice to have: experience deploying GenAI/LLM models at scale; API development; DevOps/CI/CD for data solutions; relevant AWS, Databricks, or QlikView certifications


Similar Jobs

September 26

Madiff

51 - 200

🤝 B2B

🤖 Artificial Intelligence

Lead team to design Azure Medallion architectures, implement CI/CD data pipelines, and integrate vector search/AI while collaborating with US stakeholders.

🇵🇱 Poland – Remote

⏰ Full Time

🟠 Senior

🚰 Data Engineer

September 24

SQLI

1001 - 5000

🛍️ eCommerce

☁️ SaaS

Lead Oracle PL/SQL development, optimize SQL, support production incidents, and mentor the PL/SQL team at SQLI.

🇵🇱 Poland – Remote

⏰ Full Time

🟠 Senior

🚰 Data Engineer

🗣️🇫🇷 French Required

September 10

Look4IT

11 - 50

🎯 Recruiter

☁️ SaaS

Senior Data Engineer for a prominent game studio, developing data solutions and analytics. Collaborate with teams to enhance the player experience through data while operating large-scale systems.

🇵🇱 Poland – Remote

⏰ Full Time

🟠 Senior

🚰 Data Engineer

September 4

Infotree Global Solutions

1001 - 5000

🎯 Recruiter

👥 HR Tech

🏢 Enterprise

Data Engineer building Microsoft Fabric, Delta Lake and Apache Spark pipelines on Azure for a global professional services firm. Design, optimize and secure scalable data transformation workflows.

🇵🇱 Poland – Remote

⏰ Full Time

🟡 Mid-level

🟠 Senior

🚰 Data Engineer

July 5

Future Processing

1001 - 5000

🤖 Artificial Intelligence

☁️ SaaS

🔒 Cybersecurity

Work as a Cloud Data Architect in GCP, providing data solutions for clients.

🇵🇱 Poland – Remote

💵 zł165 - zł245 / hour

⏰ Full Time

🟡 Mid-level

🟠 Senior

🚰 Data Engineer

🗣️🇵🇱 Polish Required
