Data Architect


16 hours ago


nDeavour Consulting

Recruitment • HR Tech • B2B

nDeavour Consulting is a recruitment and career counseling company headquartered in Sofia, Bulgaria. With a decade of experience, nDeavour focuses on helping individuals and businesses achieve their growth goals by providing exceptional job opportunities and professional guidance. The company is guided by principles of confidentiality, agility, pride, honesty, transparency, bravery, and responsibility, which foster strong relationships with clients and candidates alike.

2 - 10 employees

Founded 2019


📋 Description

• Define the enterprise data architecture roadmap aligned with strategic business objectives.
• Champion best practices for data modelling, integration, governance, and metadata management across domains.
• Design canonical data models, entity relationships, and standards for schema evolution and semantic layers.
• Architect modern lakehouse platforms using Databricks, Delta Lake, and Unity Catalog.
• Define ingestion and transformation patterns, including orchestration using tools such as Airflow.
• Implement bronze / silver / gold layering strategies for structured, trusted, and scalable data pipelines.
• Establish data lineage, quality frameworks, cataloguing, and compliance processes.
• Partner with engineering teams to integrate data into microservices and operational systems.
• Contribute to real-time and batch pipeline designs that support analytics and ML/AI use cases.
• Define and maintain data governance principles, security standards, and access controls.
• Lead adoption of Unity Catalog for centralized metadata and policy enforcement.
• Mentor senior engineers and act as a thought leader, influencing architecture and best practices across teams.
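The bronze / silver / gold (medallion) layering mentioned above can be sketched in miniature. In practice these layers would be Delta tables processed with Spark; plain Python dicts stand in here, and the field names and quality rules are hypothetical illustrations, not part of the role's actual stack.

```python
# Framework-free sketch of medallion layering: bronze (raw) -> silver (cleaned) -> gold (curated).
# Fields ("id", "customer", "amount") and rules are hypothetical examples.

def to_silver(bronze_rows):
    """Bronze -> Silver: drop rows failing basic quality checks, then deduplicate on 'id'."""
    seen, silver = set(), []
    for row in bronze_rows:
        if row.get("id") is None or row.get("amount") is None:
            continue  # quality rule: required fields must be present
        if row["id"] in seen:
            continue  # deduplication: keep the first occurrence only
        seen.add(row["id"])
        silver.append(row)
    return silver

def to_gold(silver_rows):
    """Silver -> Gold: aggregate to a business-ready metric (total amount per customer)."""
    totals = {}
    for row in silver_rows:
        totals[row["customer"]] = totals.get(row["customer"], 0) + row["amount"]
    return totals

bronze = [
    {"id": 1, "customer": "acme", "amount": 100},
    {"id": 1, "customer": "acme", "amount": 100},    # duplicate ingest
    {"id": 2, "customer": "acme", "amount": 50},
    {"id": 3, "customer": "globex", "amount": None},  # fails quality check
    {"id": 4, "customer": "globex", "amount": 75},
]

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'acme': 150, 'globex': 75}
```

The design point of the medallion pattern is that each layer has a single, auditable contract: bronze preserves raw ingests, silver enforces quality and uniqueness, and gold serves aggregated, consumption-ready data.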

🎯 Requirements

• Proven experience architecting enterprise-scale data platforms (lakehouse, data mesh, or hybrid).
• Strong expertise in Databricks, Delta Lake, Spark/PySpark, and orchestration tools like Airflow.
• Deep understanding of conceptual, logical, and physical data modelling.
• Strong knowledge of data governance, lineage, and data quality frameworks.
• Ability to design scalable real-time and batch data pipelines.
• Strong programming skills in SQL and Python for prototyping and validation.
• Familiarity with AWS data services (S3, Glue, Lambda, Redshift, Kinesis, Step Functions) is a plus.
• Excellent communication skills, able to translate architectural concepts for both technical and non-technical audiences.
• Experience mentoring engineers and guiding teams on architecture, standards, and data best practices.
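"SQL and Python for prototyping and validation" typically means quickly expressing data-quality rules as queries. A minimal sketch, using sqlite3 as a stand-in for a warehouse; the table, columns, and rules are hypothetical examples, not part of the posting:

```python
# Prototype two data-quality rules (null check, duplicate-key check) as SQL run from Python.
# sqlite3 stands in for a real warehouse; schema and rules are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "acme", 100.0), (2, "acme", 50.0), (2, "acme", 50.0), (3, None, 75.0)],
)

def run_quality_checks(conn):
    """Return a mapping of rule name -> number of violating rows."""
    checks = {
        "null_customer": "SELECT COUNT(*) FROM orders WHERE customer IS NULL",
        "duplicate_order_id": (
            "SELECT COALESCE(SUM(n - 1), 0) FROM "
            "(SELECT COUNT(*) AS n FROM orders GROUP BY order_id HAVING n > 1)"
        ),
    }
    return {name: conn.execute(sql).fetchone()[0] for name, sql in checks.items()}

print(run_quality_checks(conn))  # {'null_customer': 1, 'duplicate_order_id': 1}
```

The same rules would later be promoted into whatever quality framework the platform standardizes on; prototyping them as plain SQL first keeps the validation logic portable.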

🏖️ Benefits

• Remote Office – flexible hybrid working model
• Parking Space – free parking spots
• Fun Office Space – game zone and relaxation area
• Health Insurance – additional private health insurance, including a dental care plan
• Personal Development – company-sponsored training budget to further develop your skills
• Employee Referral Program – bonus for referring a friend
• Holidays – 5 extra days of leave after your 1st and 5th year
• Social Events – we love to celebrate our success together
• Family Insurance – add insurance to a family member
• Sports Cards – 100% sponsored by the company


Similar Jobs

19 hours ago

Data Engineer managing and implementing the data architecture for NextGen Healthcare systems. Collaborating with internal teams and external partners to support data-driven decisions.

ETL • Java • Python • Scala • Spark • SQL

Yesterday

Senior Software Engineer responsible for architectural decisions in data engineering at Abnormal AI. Driving the reliability and performance of business-critical data pipelines for AI products.

Airflow • AWS • Distributed Systems • PySpark • Python • Spark • SQL

Yesterday

Data Architect crafting enterprise data architecture for Dynatron, boosting analytics and ML. Involved in real-time data processing and governance in the automotive service industry.

Amazon Redshift • AWS • Azure • BigQuery • Cloud • Distributed Systems • Google Cloud Platform • Kafka • Pulsar • Python • Scala • SQL • Vault

Yesterday

Data Migration Engineer designing and executing complex data migrations for public safety software. Collaborating with teams to ensure high-quality data integration from legacy systems.

ETL • SQL

Yesterday

Senior Data Engineer at ProducePay designing and optimizing data platform for analytics needs. Owning data lifecycle, ensuring reliability, security, and scalability of data architecture across teams.

Airflow • Apache • AWS • EC2 • Linux • Pandas • Postgres • Python • Scala • Spark • SQL • Terraform