BI Data Engineer II


November 19


Jamf

Enterprise • Education • Healthcare

Jamf is a leading provider of Apple device management and security solutions. The company offers a comprehensive platform designed to simplify the management and security of Apple products in business, education, and healthcare environments. With a range of products, including Jamf Pro for business and higher education, Jamf School for K-12, and Jamf Protect for endpoint security, Jamf enables organizations to enhance the productivity and security of their Apple devices. The company is trusted by top technology companies and brands to automate, scale, and secure Apple IT workflows, ensuring modern, efficient operations while safeguarding user privacy. Jamf is dedicated to creating solutions that are enterprise secure and consumer simple, helping its clients manage and secure the Apple experience with ease.

1001 - 5000 employees

Founded 2002

🏢 Enterprise

📚 Education

💰 $300M Post-IPO Secondary on 2021-09

📋 Description

• Design, build, maintain, and improve the data platform infrastructure (Snowflake environments, Airflow workflows, orchestration, CI/CD pipelines for dbt transformations).
• Develop and maintain Terraform (or equivalent IaC) definitions for provisioning data infrastructure (compute, storage, permissions, and networking where needed).
• Automate deployment of data transformations (e.g. dbt CI/CD, staging/production pipelines).
• Ensure data platform availability, reliability, security, and performance (e.g. enforce roles and permissions in Snowflake, resource monitoring, concurrency/usage optimization).
• Instrument monitoring, logging, and alerting of data workflows (Airflow / Kubernetes / dbt jobs).
• Collaborate with Data Engineers, Analysts, and Architects to define platform capabilities and set standards and best practices for schema design, governance, version control, and performance.
• Run capacity planning and ensure cost efficiency and a sound scaling strategy (e.g. Snowflake warehouse sizing, concurrency limits, cluster autoscaling).
• Facilitate onboarding of teams to the data platform: document usage patterns and create templates or utilities (for example, dbt macros and shared libraries).
• Participate in architecture reviews and evaluate new platform tooling (e.g. enhancements to orchestration, transformation frameworks, and security strategy).
• Troubleshoot critical incidents and participate in incident and post-mortem cycles for platform issues.
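The orchestration duties above (Airflow workflows, dbt CI/CD) boil down to running tasks in dependency order. A minimal sketch of that idea in pure Python, using the standard library's `graphlib`; the task names (`extract`, `dbt_run`, `dbt_test`, etc.) are illustrative, not Jamf's actual pipeline:

```python
from graphlib import TopologicalSorter

# Illustrative task graph for a daily transformation pipeline:
# each key maps a task to the set of tasks it depends on
# (an Airflow-style DAG, in miniature).
DAG = {
    "extract": set(),
    "load_snowflake": {"extract"},
    "dbt_run": {"load_snowflake"},
    "dbt_test": {"dbt_run"},
    "publish_metrics": {"dbt_test"},
}

def run_pipeline(dag, runner):
    """Execute tasks in dependency order; `runner` is called once per task."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        runner(task)
    return order

# Example: collect the execution order instead of doing real work.
executed = []
order = run_pipeline(DAG, executed.append)
```

In a real platform, `runner` would trigger Airflow operators or `dbt run` / `dbt test` jobs; the dependency-ordering logic is the same.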

🎯 Requirements

• Bachelor’s degree in Mathematics, Computer Science, or a related field.
• 2-4 years of experience building data pipelines with Python (Required).
• 2-4 years of experience with a data warehouse or other cloud-based database technology, with strong proficiency in SQL (Required).
• Exposure to Infrastructure-as-Code (IaC) tooling such as Terraform, or equivalent DevOps experience (Required).
• Experience with Docker and Kubernetes (Required).
• Experience working with dbt (Preferred).
• Strong experience with cloud infrastructure: AWS (EC2, ECR, S3, Glue, RDS, etc.) or an equivalent public cloud provider.
• Hands-on experience with CI/CD, version control, and unit/integration testing for data pipelines.
• Comfortable working in agile teams and mentoring others.
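To make the "unit/integration testing for data pipelines" requirement concrete, here is a minimal sketch of a tested pipeline transform. The function and field names (`normalize_device_record`, `serial`, `os_version`, `last_seen`) are hypothetical, not a real Jamf schema:

```python
def normalize_device_record(raw: dict) -> dict:
    """Clean one device-inventory record before loading it downstream.

    Hypothetical schema for illustration: uppercase the serial number,
    default a missing/empty OS version to "unknown", pass through last_seen.
    """
    return {
        "serial": raw["serial"].strip().upper(),
        "os_version": raw.get("os_version") or "unknown",
        "last_seen": raw.get("last_seen"),
    }

# The kind of unit test a CI pipeline would run on every commit:
record = normalize_device_record({"serial": " c02xyz ", "os_version": ""})
```

In practice such assertions live in a pytest suite wired into the CI/CD pipeline, so a broken transform fails the build before it reaches staging.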

🏖️ Benefits

• Named one of Forbes' Most Trusted Companies in 2024.
• We know that big ideas can come from anyone, so we empower everyone to make an impact. Our more-than-90% employee retention rate agrees!
• You will have the opportunity to make a real and meaningful impact for more than 50,000 global customers with the best Apple device management solution in the world.
• We put people over profits, which is why our customers keep coming back to us.
• We encourage you to simply be you. We constantly seek and value different perspectives to ensure Jamf is a place where everyone feels comfortable and can be successful.
• 23 of the world's 25 most valuable brands (as ranked by Forbes) rely on Jamf to do their best work.
• Over 100,000 Jamf Nation users, the largest online IT community in the world.


Similar Jobs

November 19

Data Engineer specializing in Data Mesh development for a technology company optimizing data processing infrastructure across products. Seeking an experienced professional with strong Apache Spark skills.

Apache • AWS • ETL • Jenkins • Spark • Terraform

November 4

Senior Data Architect shaping global data platforms for a leading media and information services company. Designing scalable, cloud-native architectures that support millions of professionals worldwide.

Airflow • Apache • AWS • BigQuery • Cloud • Google Cloud Platform • Kafka • Python

November 1

Data Engineer optimizing database performance and resolving user incidents for Inetum Polska in Poland. Collaborating on small-scale developments based on user requirements and ensuring system efficiency.

SQL • SSIS

October 31

DWH Engineer leveraging cloud technologies in media & advertising industry. Analyzing datasets and designing data warehouse models with innovative team.

AWS • Cloud • SQL

October 30

Data Engineer designing and maintaining Azure-based data architecture for a healthcare technology client. Collaborating on innovative healthcare solutions and data engineering projects.

Azure • BigQuery • Cloud • ETL • Python • Spark • SQL

Built by Lior Neu-ner. I'd love to hear your feedback — Get in touch via DM or support@remoterocketship.com