Data Architect / Platform Specialist – Enterprise, Databricks

October 1

Apply Now

Kyriba

Fintech • Enterprise • Finance

Kyriba is a leading provider of financial technology solutions that offer secure, AI-powered data integration and liquidity management services for enterprises. The platform seamlessly connects ERPs, banks, and apps to provide real-time cash visibility, enhance operational efficiency, and support all aspects of enterprise liquidity management. Kyriba's offerings include real-time treasury management, risk management, payments, and connectivity solutions. The solutions are tailored for finance professionals and address complex liquidity challenges through advanced data automation and integration capabilities, supporting industries such as finance, technology, retail, manufacturing, and insurance. Kyriba's mission is to improve financial health and resilience by optimizing liquidity performance and strategic financial decision-making for organizations of various sizes.

501 - 1000 employees

Founded 2000


📋 Description

• Design, implement, and evolve enterprise data architectures spanning multiple business domains and use cases.

• Define and enforce architectural standards and best practices for data modeling, integration, and governance.

• Ensure data solutions are scalable, secure, and optimized for reporting, BI, advanced analytics, ML, and GenAI workloads.

• Lead Databricks platform implementation and apply Databricks data design patterns, including Delta Lake architecture and unified analytics.

• Architect Databricks environments to support batch, streaming, real-time, and advanced analytics; integrate with AWS S3 and enterprise platforms.

• Act as the primary interface between data, IT, business, and analytics teams; drive data standardization across finance, operations, HR, supply chain, and customer domains.

• Architect and optimize data flows for operational and analytical reporting, BI dashboards (e.g., QlikView), and self-service analytics.

• Partner with Data Scientists and ML Engineers to ensure ML/GenAI readiness (feature stores, model training, scalable inference).

• Implement enterprise data governance, data quality, security, and compliance frameworks; oversee metadata management, lineage, and cataloging.

• Evaluate and adopt emerging technologies; foster continuous improvement and best practices in data architecture and platform engineering.
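The Delta Lake architecture mentioned above is typically organized as medallion layers (bronze for raw landing, silver for cleaned data, gold for business aggregates). As a rough illustration only, the sketch below mimics that layering in plain Python standing in for Spark DataFrames; the feed name, field names, and cleaning rules are all hypothetical, not Kyriba's actual pipeline.

```python
# Illustrative bronze/silver/gold (medallion) layering, as commonly used
# in Delta Lake architectures. Plain Python dicts stand in for Spark
# DataFrames; all field names and rules here are hypothetical.
from collections import defaultdict


def bronze_ingest(raw_rows):
    """Bronze: land raw records as-is, tagged with their source."""
    return [dict(row, _source="erp_feed") for row in raw_rows]


def silver_clean(bronze_rows):
    """Silver: drop malformed rows and normalize types."""
    cleaned = []
    for row in bronze_rows:
        if row.get("entity") is None or row.get("amount") is None:
            continue  # a real pipeline would quarantine these rows
        cleaned.append({"entity": row["entity"], "amount": float(row["amount"])})
    return cleaned


def gold_aggregate(silver_rows):
    """Gold: business-level totals ready for a BI dashboard."""
    totals = defaultdict(float)
    for row in silver_rows:
        totals[row["entity"]] += row["amount"]
    return dict(totals)


raw = [
    {"entity": "EMEA", "amount": "100.5"},
    {"entity": "EMEA", "amount": "49.5"},
    {"entity": None, "amount": "10"},  # malformed: dropped at silver
    {"entity": "APAC", "amount": "200"},
]
print(gold_aggregate(silver_clean(bronze_ingest(raw))))
# {'EMEA': 150.0, 'APAC': 200.0}
```

In a real Databricks implementation each layer would be a Delta table, with streaming or batch jobs promoting data between layers.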

🎯 Requirements

• Bachelor’s or Master’s degree in Computer Science, Information Systems, Engineering, or a related field.

• Extensive experience as a Data Architect or Platform Specialist supporting multiple business domains across large organizations.

• Proven expertise in designing and implementing data architectures on Databricks and AWS S3.

• Deep knowledge of data modeling, data warehousing, ETL/ELT, and cloud data platforms.

• Experience with Databricks best practices for reporting, BI, ML, and GenAI.

• Strong understanding of BI tools (e.g., QlikView) and their integration with enterprise data platforms.

• Familiarity with ML/GenAI architectures, workflows, and operationalization.

• Comprehensive knowledge of data governance, security, and compliance frameworks.

• Outstanding communication, leadership, and stakeholder management skills.

• Nice to have: certifications in Databricks, AWS, or enterprise architecture frameworks (e.g., TOGAF).

• Nice to have: experience with data mesh, data fabric, or modern data stack concepts.

• Nice to have: exposure to automation and integration platforms (e.g., MuleSoft).


Similar Jobs

September 26

Lead a team to design Azure Medallion architectures, implement CI/CD data pipelines, and integrate vector search/AI while collaborating with US stakeholders.

Azure • PySpark • Python • SQL

September 24

Lead Oracle PL/SQL development, optimize SQL, support production incidents, and mentor the PL/SQL team at SQLI.

🗣️🇫🇷 French Required

Oracle • SQL

September 10

Senior Data Engineer for a prominent game studio, developing data solutions and analytics. Collaborate with teams to enhance the player experience through data while operating large-scale systems.

Ansible • AWS • EC2 • Java • Kafka • PySpark • Python • Ruby on Rails • Scala • Spark • Terraform

September 4

Data Engineer building Microsoft Fabric, Delta Lake, and Apache Spark pipelines on Azure for a global professional services firm. Design, optimize, and secure scalable data transformation workflows.

Apache • Azure • PySpark • Python • Scala • Spark

July 5

Work as a Cloud Data Architect in GCP, providing data solutions for clients.

🗣️🇵🇱 Polish Required

Apache • BigQuery • Cloud • ETL • Google Cloud Platform • SQL • Terraform
