Enterprise Data Architect – AWS, Databricks


November 21


Aptus Data Labs

Artificial Intelligence • SaaS • B2B

Aptus Data Labs is a data engineering and enterprise AI company that builds scalable AI platforms, generative intelligence solutions, and data modernization services for large organizations. It delivers industry-focused AI and analytics products (including aptplan, aptGenAI, and other platforms) alongside services covering advisory, cloud migration, MLOps/LLMOps, AI governance, and on-demand talent, helping pharmaceutical, banking, manufacturing, retail, and other enterprises accelerate decision-making, compliance, and operational efficiency. Aptus partners with cloud and AI providers, offers pre-built accelerators and IP, and focuses on B2B deployments and enterprise-scale SaaS solutions.

📋 Description

• Lead the enterprise data architecture strategy
• Architect and implement data lakehouse solutions using Databricks on AWS (see the sketch after this list)
• Design end-to-end data pipelines, integration frameworks, and governance models
• Define data models, metadata management, and data quality frameworks
• Collaborate with data engineering, AI/ML, analytics, and business teams
• Evaluate and integrate emerging technologies in data mesh and automation frameworks
• Provide technical leadership and mentorship to data engineering and architecture teams
• Establish best practices for data security, lineage, compliance, and cloud cost optimization
• Partner with business stakeholders to define data modernization roadmaps and cloud migration strategies
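
To make the lakehouse responsibility concrete, here is a minimal sketch of a bronze-to-silver Delta Lake step of the kind this role would architect. It is illustrative only: the S3 path, column names, and table name are hypothetical, and on Databricks a preconfigured `spark` session already exists, so the session builder below is only needed when running open-source Delta Lake locally (`pip install delta-spark`).

```python
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession, functions as F

# On Databricks a `spark` session is already provided; this builder is only
# needed for a local open-source Delta Lake run.
builder = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Bronze: raw JSON as landed (bucket and columns are hypothetical).
raw = spark.read.json("s3://example-bucket/bronze/orders/")

# Silver: deduplicate, conform types, and derive a partition column.
silver = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("load_date", F.to_date("order_ts"))
)

spark.sql("CREATE SCHEMA IF NOT EXISTS analytics")
(
    silver.write.format("delta")
    .mode("overwrite")
    .partitionBy("load_date")
    .saveAsTable("analytics.orders_silver")
)
```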

🎯 Requirements

• 18+ years of experience
• Strong experience in Data Architecture, Data Engineering, or related domains
• Proven experience architecting enterprise-scale data platforms using AWS
• Hands-on expertise in Databricks (Delta Lake, Spark, Unity Catalog, MLflow); see the upsert sketch after this list
• Strong experience with data modeling and ETL/ELT pipelines
• Deep understanding of data governance, master data management, and data cataloging tools
• Proficient in SQL, Python, PySpark, and API-based data integration
• Experience with the modern data stack (Snowflake, dbt, Airflow, Kafka, etc.) is a plus
• Strong understanding of AI/ML data readiness and metadata design
• Excellent communication and leadership skills to collaborate with technical and business teams
• Certifications in AWS or Databricks preferred
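
As a companion to the pipeline above, here is a sketch of the hands-on Delta Lake work the requirements imply: an incremental MERGE upsert into the silver table. It assumes the `spark` session and `analytics.orders_silver` table from the previous sketch; all paths and names remain hypothetical.

```python
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Incremental batch of new/changed orders, conformed to the silver schema
# (path and columns are hypothetical).
updates = (
    spark.read.json("s3://example-bucket/bronze/orders_increment/")
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("load_date", F.to_date("order_ts"))
)

# MERGE keeps the silver table idempotently up to date: matched order_ids
# are updated in place, unseen order_ids are inserted.
(
    DeltaTable.forName(spark, "analytics.orders_silver")
    .alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```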

🏖️ Benefits

• Health insurance
• Paid time off
• Flexible work hours
• Professional development opportunities


Similar Jobs

November 20

Big Data Engineer for a Weekday client, focusing on real-time streaming systems and large-scale data processing. Builds high-performance, low-latency pipelines using Java and modern big data technologies.

Apache • Distributed Systems • Hadoop • Java • Kafka • Spark

November 18

Senior Data Engineer building scalable data solutions using Azure and SQL for global teams at Smart Working. Engages with clients and delivers measurable improvements across projects.

Azure • Cloud • Spark • SQL

November 18

Senior Data Engineer at WIN Home Inspection, transforming the real estate industry through data insights. Collaborates with teams to design BI dashboards and analyze data trends.

AWS • Cloud • ETL • Google Cloud Platform • Python • SQL • Tableau

November 18

Senior Data Engineer at Netomi, using data engineering techniques to create scalable data solutions for enterprise customer experience.

Airflow • Amazon Redshift • Apache • AWS • Azure • BigQuery • Cloud • Distributed Systems • Docker • ETL • Google Cloud Platform • Hadoop • Kafka • Kubernetes • Python • Spark • SQL
