
Education • B2B • SaaS
Meazure Learning is a company that specializes in developing, delivering, proctoring, and analyzing higher-education and credentialing exams. They provide end-to-end solutions, including remote exam proctoring, test program consultation, item writing, and advanced reporting technologies. With a focus on enhancing performance outcomes and streamlining assessment processes, Meazure Learning offers a comprehensive suite of software designed to optimize the exam experience for both test-takers and test administrators.
November 10

• Design, develop, and maintain scalable, reliable data pipelines that process and transform large datasets from a variety of sources.
• Build hybrid data ecosystems integrating the on-prem SQL Server Data Warehouse (DW) with cloud-native platforms for seamless migration and interoperability.
• Define and lead enterprise data architecture strategy across multi-cloud environments (Azure, AWS, etc.).
• Architect, design, and optimize large-scale, high-performance data platforms leveraging Azure Synapse, Delta Lake, Databricks, and Amazon Redshift.
• Implement Medallion architecture and scalable data lakehouse solutions on Azure Data Lake Storage Gen2 and AWS S3.
• Develop semantic models and optimized datasets to enable analytics in Power BI Premium and AWS QuickSight.
• Architect and manage data models, databases, and data warehouse/lakehouse solutions, ensuring performance, scalability, and data integrity.
• Drive adoption of Data Mesh principles, enabling decentralized ownership and domain-oriented data architecture.
• Design and orchestrate robust ETL/ELT pipelines using Azure Data Factory, AWS Glue, SSIS, and modern orchestration frameworks.
• Modernize legacy ETL frameworks (SSIS) into cloud-native ADF or Glue pipelines for scalability and reusability.
• Integrate structured, semi-structured, and unstructured data across systems for large-scale data transformations.
• Build and maintain SSAS Tabular and Multidimensional cubes, integrating with Power BI and modern BI ecosystems.
• Implement data ingestion, transformation, and serving pipelines following Medallion and modular architecture.
• Enable real-time and streaming data processing using Kafka, Kinesis, or Azure Event Hubs.
• Drive CI/CD adoption for data platforms using Azure DevOps, GitHub Actions, or AWS CodePipeline for automated validation and deployments.
• Implement automated data testing, lineage tracking, and observability frameworks to improve transparency and reliability.
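The Medallion (bronze → silver → gold) layering named in the responsibilities above can be sketched, purely illustratively, in plain Python. The record shape (`exam_id`, `score`) and function names (`to_silver`, `to_gold`) are hypothetical; in the actual role Spark/Delta would implement the same pattern at scale.

```python
# Minimal sketch of Medallion layering, assuming a hypothetical exam-score
# feed. Plain Python stands in for Spark/Delta; all names are illustrative.
from collections import defaultdict

def to_silver(bronze_rows):
    """Bronze -> silver: drop malformed raw records, normalize types."""
    silver = []
    for row in bronze_rows:
        if not row.get("exam_id") or row.get("score") is None:
            continue  # a real pipeline would quarantine these for review
        silver.append({"exam_id": str(row["exam_id"]),
                       "score": float(row["score"])})
    return silver

def to_gold(silver_rows):
    """Silver -> gold: aggregate cleansed rows into a serving-layer summary."""
    totals = defaultdict(lambda: [0.0, 0])
    for row in silver_rows:
        totals[row["exam_id"]][0] += row["score"]
        totals[row["exam_id"]][1] += 1
    return {k: s / n for k, (s, n) in totals.items()}

bronze = [{"exam_id": 1, "score": 80},       # clean record
          {"exam_id": 1, "score": "90"},     # wrong type, recoverable
          {"exam_id": None, "score": 70}]    # malformed, dropped
gold = to_gold(to_silver(bronze))
# gold == {"1": 85.0}
```

The key design point the pattern enforces is that each layer is reproducible from the one below it, so cleansing rules can change without re-ingesting raw data.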
• Lead design of enterprise data models using Star, Snowflake, and Data Vault 2.0 methodologies across SQL Server DW and Azure Synapse.
• Define and implement data quality, validation, and governance frameworks to ensure accuracy and compliance.
• Lead implementation of metadata management and data catalogs using Azure Purview or the AWS Glue Data Catalog.
• Ensure data lineage, RBAC, and encryption standards are consistently enforced across environments.
• Mentor and guide junior and mid-level data engineers, fostering a collaborative and growth-oriented environment.
• Lead the evaluation and adoption of emerging technologies such as Microsoft Fabric, Databricks, and real-time analytics platforms.
• Partner with data scientists, analysts, and other engineering teams to develop robust, scalable data solutions.
• Collaborate with stakeholders in other departments to identify data needs, provide technical expertise, and deliver data-driven solutions.
• Monitor, troubleshoot, and optimize the performance of data pipelines and data infrastructure, ensuring high availability and low latency.
• Drive performance tuning and optimization of SQL Server DW, Synapse, and Redshift workloads for efficiency and scalability.
• Implement data observability solutions for proactive issue detection, latency reduction, and pipeline reliability.
• Continuously enhance scalability and resilience through containerized and serverless architectures (Kubernetes, Lambda, Azure Functions).
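Data Vault 2.0, listed above, keys its hubs, links, and satellites on deterministic hashes of normalized business keys rather than sequence numbers, so loads can run in parallel across sources. A minimal sketch of that one idea follows; the `hash_key` helper and the `EXAM-001` business key are hypothetical.

```python
# Sketch of a Data Vault 2.0-style hash key: a deterministic digest over
# normalized business keys. The helper name and sample key are illustrative.
import hashlib

def hash_key(*business_keys):
    """Normalize (trim, uppercase) each key, join with a sanding delimiter,
    and hash, so every source system derives the same surrogate key."""
    normalized = "||".join(str(k).strip().upper() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# Two sources spelling the same business key differently agree on the hash:
hub_row = {"exam_hk": hash_key(" exam-001 "), "exam_bk": "EXAM-001"}
assert hub_row["exam_hk"] == hash_key("EXAM-001")
```

Because the key is computed, not looked up, hub and satellite loads need no cross-table coordination, which is what makes the pattern suit the parallel, multi-source loads described in this role.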
• 15+ years of experience in data engineering, with a strong focus on designing and building data pipelines, ETL/ELT processes, and data warehousing solutions.
• Advanced proficiency in SQL, MPP performance tuning, and large-scale data transformations using Python, PySpark, etc.
• Strong knowledge of Snowflake and Data Vault 2.0 modeling techniques.
• Strong understanding of data architecture, data modeling, and database design principles, with hands-on experience in relational, NoSQL, and big data technologies.
• Strong analytical and problem-solving skills, with the ability to troubleshoot complex data engineering issues and propose innovative solutions.
• Experience leading cross-functional teams, mentoring junior engineers, and collaborating with business stakeholders to design and implement data solutions.
• Willingness to adapt and continuously learn in a fast-paced, evolving data landscape.
• Excellent written and verbal communication skills, with the ability to convey complex technical concepts to non-technical stakeholders.
• Company-Sponsored Health Insurance
• Competitive Pay
• Remote Work
• Healthy Work Culture
• Career Growth Opportunities
• Learning and Development Programs
• Referral Award Program
• Company-Provided IT Equipment (for remote team members)
• Transportation Program (for on-site team members)
• Company-Provided Meals (for on-site team members)
• 14 Company-Provided Holidays
• Generous Leave Program
November 9
AWS Data Engineer building scalable data pipelines on AWS Cloud at Exavalu, engaged in the design and implementation of ETL/ELT and CI/CD pipelines.
AWS
Cloud
ETL
Postgres
PySpark
SQL
November 5
Data Engineer II at Duck Creek, designing scalable data solutions and optimizing data processes for insurance software. Delivering insights and maintaining data quality in cloud environments.
🇮🇳 India – Remote
💰 $230M Private Equity Round on 2020-06
⏰ Full Time
🟡 Senior
🔴 Lead
💰 Data Engineer
Azure
Cloud
ETL
SQL
Terraform
November 5
Data Engineer II designing and managing scalable data pipelines and ETL processes at Duck Creek. Responsible for data quality and integrity, and for mentoring junior engineers.
🇮🇳 India – Remote
💰 $230M Private Equity Round on 2020-06
⏰ Full Time
🟡 Senior
🔴 Lead
💰 Data Engineer
Azure
Cloud
ETL
SQL
Terraform
November 5
Data Engineer II creating data pipelines and ETL processes for Duck Creek, an insurance SaaS leader. Contributing to software solutions, mentoring, and ensuring data integrity.
🇮🇳 India – Remote
💰 $230M Private Equity Round on 2020-06
⏰ Full Time
🟡 Senior
🔴 Lead
💰 Data Engineer
Azure
Cloud
ETL
SQL
Terraform
November 5
Data Engineer II designing data solutions for Duck Creek's SaaS products, collaborating on data architecture and implementation. Focusing on scalable pipelines and data integrity standards.
🇮🇳 India – Remote
💰 $230M Private Equity Round on 2020-06
⏰ Full Time
🟡 Senior
🔴 Lead
💰 Data Engineer
Azure
Cloud
ETL
SQL
Terraform