Principal Data Engineer

November 10

Meazure Learning

Meazure Learning specializes in developing, delivering, proctoring, and analyzing higher-education and credentialing exams. It provides end-to-end solutions, including remote exam proctoring, test program consultation, item writing, and advanced reporting technologies. With a focus on improving performance outcomes and streamlining assessment processes, Meazure Learning offers a comprehensive suite of software designed to optimize the exam experience for both test-takers and test administrators.

1001 - 5000 employees

Founded 2008

📚 Education

🤝 B2B

☁️ SaaS

📋 Description

• Design, develop, and maintain scalable and reliable data pipelines that process and transform large datasets from a variety of sources.
• Build hybrid data ecosystems integrating an on-prem SQL Server Data Warehouse (DW) with cloud-native platforms for seamless migration and interoperability.
• Define and lead enterprise data architecture strategy across multi-cloud environments (Azure, AWS, etc.).
• Architect, design, and optimize large-scale, high-performance data platforms leveraging Azure Synapse, Delta Lake, Databricks, and Amazon Redshift.
• Implement Medallion architecture and scalable data lakehouse solutions on Azure Data Lake Storage Gen2 and Amazon S3 (see the sketch after this list).
• Develop semantic models and optimized datasets to enable analytics in Power BI Premium and Amazon QuickSight.
• Architect and manage data models, databases, and data warehouse/lakehouse solutions, ensuring performance, scalability, and data integrity.
• Drive adoption of Data Mesh principles, enabling decentralized ownership and domain-oriented data architecture.
• Design and orchestrate robust ETL/ELT pipelines using Azure Data Factory, AWS Glue, SSIS, and modern orchestration frameworks.
• Modernize legacy ETL frameworks (SSIS) into cloud-native ADF or Glue pipelines for scalability and reusability.
• Integrate structured, semi-structured, and unstructured data across systems for large-scale data transformations.
• Build and maintain SSAS Tabular and Multidimensional cubes, integrating with Power BI and modern BI ecosystems.
• Implement data ingestion, transformation, and serving pipelines following Medallion and modular architecture.
• Enable real-time and streaming data processing using Kafka, Kinesis, or Azure Event Hubs.
• Drive CI/CD adoption for data platforms using Azure DevOps, GitHub Actions, or AWS CodePipeline for automated validation and deployments.
• Implement automated data testing, lineage tracking, and observability frameworks to improve transparency and reliability.
• Lead the design of enterprise data models using Star, Snowflake, and Data Vault 2.0 methodologies across SQL Server DW and Azure Synapse.
• Define and implement data quality, validation, and governance frameworks to ensure accuracy and compliance.
• Lead the implementation of metadata management and data catalogs using Azure Purview or the AWS Glue Data Catalog.
• Ensure data lineage, RBAC, and encryption standards are consistently enforced across environments.
• Mentor and guide junior and mid-level data engineers, fostering a collaborative and growth-oriented environment.
• Lead the evaluation and adoption of emerging technologies such as Microsoft Fabric, Databricks, and real-time analytics platforms.
• Partner with data scientists, analysts, and other engineering teams to develop robust, scalable data solutions.
• Collaborate with stakeholders in other departments to identify data needs, provide technical expertise, and deliver data-driven solutions.
• Monitor, troubleshoot, and optimize the performance of data pipelines and data infrastructure, ensuring high availability and low latency.
• Drive performance tuning and optimization of SQL Server DW, Synapse, and Redshift workloads for efficiency and scalability.
• Implement data observability solutions for proactive issue detection, latency reduction, and pipeline reliability.
• Continuously enhance scalability and resilience through containerized and serverless architectures (Kubernetes, Lambda, Azure Functions).
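
For illustration, a minimal PySpark sketch of the Medallion (bronze → silver → gold) pattern on Delta Lake referenced in the list above. This is a sketch only, assuming a Spark environment with Delta Lake configured; the bucket, paths, table names, and columns are hypothetical and not taken from the posting.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw source files as-is, stamped with ingestion metadata.
bronze = (
    spark.read.json("s3a://example-bucket/raw/exam_events/")  # hypothetical source
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").save("/lake/bronze/exam_events")

# Silver: cleanse and conform (deduplicate, enforce types, drop bad rows).
silver = (
    spark.read.format("delta").load("/lake/bronze/exam_events")
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("candidate_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/lake/silver/exam_events")

# Gold: business-level aggregates served to BI semantic models
# (e.g., Power BI or QuickSight datasets).
gold = silver.groupBy("exam_id", F.to_date("event_ts").alias("exam_date")).agg(
    F.countDistinct("candidate_id").alias("candidates"),
    F.avg("score").alias("avg_score"),
)
gold.write.format("delta").mode("overwrite").save("/lake/gold/exam_daily_summary")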

🎯 Requirements

• 15+ years of experience in data engineering, with a strong focus on designing and building data pipelines, ETL/ELT processes, and data warehousing solutions.
• Advanced proficiency in SQL, MPP performance tuning, and large-scale data transformations using Python and PySpark.
• Strong knowledge of Snowflake and Data Vault 2.0 modeling techniques (see the sketch after this list).
• Strong understanding of data architecture, data modeling, and database design principles, with hands-on experience across relational, NoSQL, and big data technologies.
• Strong analytical and problem-solving skills, with the ability to troubleshoot complex data engineering issues and propose innovative solutions.
• Experience leading cross-functional teams, mentoring junior engineers, and collaborating with business stakeholders to design and implement data solutions.
• Willingness to adapt and continuously learn in a fast-paced and evolving data landscape.
• Excellent communication skills, both written and verbal, with the ability to convey complex technical concepts to non-technical stakeholders.
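
To give a flavor of the Data Vault 2.0 modeling mentioned above, here is a minimal PySpark sketch that derives a hub and a satellite from a conformed source table. The table, path, and column names are hypothetical, and it assumes the same Spark-with-Delta setup as the earlier sketch.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dv2-sketch").getOrCreate()

# Hypothetical conformed source table in the silver layer.
src = spark.read.format("delta").load("/lake/silver/candidates")

# Hub: one row per business key, identified by a deterministic hash key.
hub = (
    src.select("candidate_id").distinct()
    .withColumn("hub_candidate_hk", F.sha2(F.col("candidate_id").cast("string"), 256))
    .withColumn("load_ts", F.current_timestamp())
    .withColumn("record_source", F.lit("silver.candidates"))
)

# Satellite: descriptive attributes keyed by the same hash key, with a
# hash_diff column so incremental loads can detect changed records cheaply.
sat = (
    src.withColumn("hub_candidate_hk", F.sha2(F.col("candidate_id").cast("string"), 256))
    .withColumn("hash_diff", F.sha2(F.concat_ws("||", "name", "email", "country"), 256))
    .withColumn("load_ts", F.current_timestamp())
    .select("hub_candidate_hk", "hash_diff", "name", "email", "country", "load_ts")
)

hub.write.format("delta").mode("append").save("/vault/hub_candidate")
sat.write.format("delta").mode("append").save("/vault/sat_candidate_details")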

🏖️ Benefits

• Company-Sponsored Health Insurance
• Competitive Pay
• Remote Work
• Healthy Work Culture
• Career Growth Opportunities
• Learning and Development Programs
• Referral Award Program
• Company-Provided IT Equipment (for remote team members)
• Transportation Program (for on-site team members)
• Company-Provided Meals (for on-site team members)
• 14 Company-Provided Holidays
• Generous Leave Program

Similar Jobs

November 9

Exavalu

201 - 500

🤝 B2B

🏦 Banking

⚕️ Healthcare Insurance

AWS Data Engineer building scalable data pipelines on AWS at Exavalu, designing and implementing ETL/ELT and CI/CD pipelines.

🇮🇳 India – Remote

⏰ Full Time

🟠 Senior

🔴 Lead

🚰 Data Engineer

November 5

Duck Creek Technologies

1001 - 5000

☁️ SaaS

🏢 Enterprise

Data Engineer II at Duck Creek, designing scalable data solutions and optimizing data processes for insurance software. Delivering insights and maintaining data quality in cloud environments.

🇮🇳 India – Remote

💰 $230M Private Equity Round on 2020-06

⏰ Full Time

🟠 Senior

🔴 Lead

🚰 Data Engineer

November 5

Duck Creek Technologies

1001 - 5000

☁️ SaaS

🏢 Enterprise

Data Engineer II designing and managing scalable data pipelines and ETL processes at Duck Creek. Responsible for data quality and integrity, and for mentoring junior engineers.

🇮🇳 India – Remote

💰 $230M Private Equity Round on 2020-06

⏰ Full Time

🟠 Senior

🔴 Lead

🚰 Data Engineer

November 5

Duck Creek Technologies

1001 - 5000

☁️ SaaS

🏢 Enterprise

Data Engineer II creating data pipelines and ETL processes for Duck Creek, an insurance SaaS leader. Contributing to software solutions, mentoring, and ensuring data integrity.

🇮🇳 India – Remote

💰 $230M Private Equity Round on 2020-06

⏰ Full Time

🟠 Senior

🔴 Lead

🚰 Data Engineer

November 5

Duck Creek Technologies

1001 - 5000

☁️ SaaS

🏢 Enterprise

Data Engineer II designing data solutions for Duck Creek's SaaS products, collaborating on data architecture and implementation. Focusing on scalable pipelines and data integrity standards.

🇮🇳 India – Remote

💰 $230M Private Equity Round on 2020-06

⏰ Full Time

🟠 Senior

🔴 Lead

🚰 Data Engineer
