Senior Data Engineer – Microsoft Fabric

November 26

Accellor

AI • Enterprise • SaaS

Accellor delivers AI-driven solutions across industries, focusing on improving efficiency and engagement through advanced applications and data strategies. Its services span artificial intelligence for enterprise applications, product engineering, and cloud services, serving sectors such as healthcare, manufacturing, financial services, real estate, retail, travel, and hospitality. Accellor partners with technology leaders such as Salesforce and Microsoft, building on platforms like Dynamics 365 to deliver personalized, intelligent business applications. Committed to responsible AI practices, Accellor helps organizations harness the potential of data and AI to drive strategic decisions, automate operations, and provide superior experiences.

201 - 500 employees

🏢 Enterprise

☁️ SaaS

📋 Description

• Design and implement scalable data pipelines and ETL/ELT processes within Microsoft Fabric using a code-first approach
• Develop and maintain notebooks, data pipelines, workspaces, and other Fabric item configurations
• Build and optimize data architectures using Delta tables, lakehouse, and data warehouse patterns
• Implement data modelling solutions, including star schema, snowflake schema, and slowly changing dimensions (SCDs)
• Performance-tune Delta, Spark, and SQL workloads through partitioning, liquid clustering, and other advanced optimization techniques
• Develop and deploy Fabric solutions using CI/CD practices via Azure DevOps
• Integrate and orchestrate data workflows using Fabric Data Agents and REST APIs
• Collaborate with the development team and stakeholders to translate business requirements into technical solutions
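One of the responsibilities above, slowly changing dimensions, is a concrete technique worth illustrating. Below is a minimal SCD Type 2 sketch using pandas; in a Fabric lakehouse this logic would typically run as a Delta `MERGE` in PySpark or SQL instead, and all table and column names here are hypothetical, not from the posting:

```python
from datetime import date

import pandas as pd

# Illustrative dimension table (SCD Type 2): one "current" row per key,
# with validity dates tracking history. Names are hypothetical.
dim = pd.DataFrame({
    "customer_id": [1, 2],
    "city": ["Austin", "Denver"],
    "valid_from": [date(2023, 1, 1)] * 2,
    "valid_to": [None, None],
    "is_current": [True, True],
})

# Incoming change: customer 2 moved to Seattle.
updates = pd.DataFrame({"customer_id": [2], "city": ["Seattle"]})
today = date(2024, 6, 1)


def scd2_merge(dim: pd.DataFrame, updates: pd.DataFrame, today: date) -> pd.DataFrame:
    """Expire changed rows and append new current versions (SCD Type 2)."""
    dim = dim.copy()
    upd = dict(zip(updates["customer_id"], updates["city"]))
    incoming = dim["customer_id"].map(upd)

    # A row changes only if it is current, has an update, and the value differs.
    changed = dim["is_current"] & incoming.notna() & (dim["city"] != incoming)

    # Close out the old versions.
    dim.loc[changed, "valid_to"] = today
    dim.loc[changed, "is_current"] = False

    # Append the new current versions.
    new_rows = pd.DataFrame({
        "customer_id": dim.loc[changed, "customer_id"].values,
        "city": incoming[changed].values,
        "valid_from": today,
        "valid_to": None,
        "is_current": True,
    })
    return pd.concat([dim, new_rows], ignore_index=True)
```

In production, the same expire-then-insert pattern is usually expressed as a Delta `MERGE` over the dimension table rather than an in-memory DataFrame operation.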

🎯 Requirements

• **Microsoft Fabric Expertise:**
  • Hands-on experience with Fabric notebooks, pipelines, and workspace configuration
  • Fabric Data Agent implementation and orchestration
  • Fabric CLI and CI/CD deployment practices

• **Programming & Development:**
  • Python (advanced proficiency)
  • PySpark for distributed data processing
  • Pandas and Polars for data manipulation
  • Experience with Python libraries for data engineering workflows
  • REST API development and integration

• **Data Platform & Storage:**
  • Delta Lake and Iceberg table formats
  • Delta table optimization techniques (partitioning, Z-ordering, liquid clustering)
  • Spark performance tuning and optimization
  • SQL query optimization and performance tuning

• **Development Environment:**
  • Visual Studio Code
  • Azure DevOps for CI/CD and deployment pipelines
  • Experience with both code-first and low-code development approaches

• **Data Modeling:**
  • Data warehouse dimensional modeling (star schema, snowflake schema)
  • Slowly Changing Dimensions (SCD Types 1, 2, and 3)
  • Modern lakehouse architecture patterns
  • Metadata-driven approaches

• **Preferred Qualifications:**
  • 5+ years of data engineering experience
  • Previous experience with large-scale data platforms and enterprise analytics solutions
  • Strong understanding of data governance and security best practices
  • Experience with Agile/Scrum methodologies
  • Excellent problem-solving and communication skills
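The "metadata-driven approaches" requirement is worth unpacking: instead of hand-writing one pipeline per table, a small catalog of table metadata drives a single generic loader. Here is a minimal sketch in plain Python; all table names, load modes, and the `last_watermark` placeholder are hypothetical, not from the posting:

```python
# Sketch of a metadata-driven pipeline driver: pipeline behaviour is
# declared as data, and one generic loader interprets it.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TableConfig:
    name: str
    load_mode: str                          # "full" or "incremental"
    watermark_column: Optional[str] = None  # required for incremental loads


def load_table(cfg: TableConfig) -> str:
    """Generate a load statement by dispatching on metadata,
    instead of hard-coding one pipeline per table."""
    if cfg.load_mode == "full":
        return f"TRUNCATE {cfg.name}; INSERT INTO {cfg.name} SELECT * FROM src.{cfg.name}"
    if cfg.load_mode == "incremental":
        if not cfg.watermark_column:
            raise ValueError(f"{cfg.name}: incremental load needs a watermark column")
        return (
            f"INSERT INTO {cfg.name} SELECT * FROM src.{cfg.name} "
            f"WHERE {cfg.watermark_column} > last_watermark('{cfg.name}')"
        )
    raise ValueError(f"unknown load mode: {cfg.load_mode}")


# The catalog that drives the whole pipeline — adding a table means
# adding a row here, not writing new code.
CATALOG = [
    TableConfig("dim_customer", "full"),
    TableConfig("fact_orders", "incremental", watermark_column="order_ts"),
]

if __name__ == "__main__":
    for cfg in CATALOG:
        print(load_table(cfg))
```

In Fabric, such a catalog typically lives in a lakehouse table or a JSON config, and the loader becomes a parameterized notebook or pipeline invoked once per entry.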
