Remote Senior Data Engineer - ETL Data Modeling


February 12


RemoteStar

B2B • Recruitment • SaaS

RemoteStar is a global recruitment service that specializes in hiring top-quality tech talent. By assembling diverse teams with vetted developers from various regions, RemoteStar ensures high-quality staffing while maximizing cost efficiency for companies. The service includes a rigorous vetting process, technical matching, and full onboarding support, allowing businesses to focus on their core operations while RemoteStar handles the administrative aspects of recruitment and team management.

11 - 50 employees

Founded 2020


📋 Description

• The Remote Senior Data Engineer (ETL Data Modeling) plays a pivotal role in growing our externally facing technical platform, supporting our customers' needs, and driving technical excellence within the team.
• Implement ETL/ELT pipelines within and outside of a data warehouse using Python, PySpark, and Snowflake SnowSQL (a minimal sketch follows this list).
• Support the Redshift DWH to Snowflake migration.
• Design, implement, and support data warehouse/data lake infrastructure using the AWS big data stack: Python, Redshift, Snowflake, Glue/Lake Formation, EMR/Spark/Scala, etc.
• Work with data analysts to scale value-creating capabilities, including data integrations and transformations, model features, and statistical and machine learning models.
• Work with Product Managers, Finance, Service Engineering, and Sales teams on a day-to-day basis to support their new analytics requirements.
• Implement and uphold data quality and data governance practices, including data profiling and data validation, to maintain data integrity and security throughout the data lifecycle.
• Leverage open-source technologies to build robust and cost-effective data solutions.
• Develop and maintain streaming pipelines using technologies such as Apache Kafka.
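By way of illustration, here is a minimal sketch of the kind of ELT job the first responsibilities describe: a PySpark batch that lands semi-structured data, applies typed transformations, and loads into Snowflake via the spark-snowflake connector. Every path, table name, and connection option below is a hypothetical placeholder, not part of the posting.

```python
# Minimal ELT sketch: land raw JSON, apply typed transformations, and
# load into Snowflake via the spark-snowflake connector. All paths,
# table names, and connection options are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-elt").getOrCreate()

# Extract: read semi-structured source data (hypothetical S3 location).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: enforce types, derive columns, and drop malformed records.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
       .filter(F.col("order_id").isNotNull())
)

# Load: append into a Snowflake staging table.
sf_options = {
    "sfURL": "example.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "********",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "ETL_WH",
}
(orders.write
       .format("net.snowflake.spark.snowflake")
       .options(**sf_options)
       .option("dbtable", "STG_ORDERS")
       .mode("append")
       .save())
```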

🎯 Requirements

• 5+ years of total IT experience, including 3+ years in data integration, ETL/ELT development, and database or data warehouse design.
• Broad expertise and experience with distributed systems, streaming systems, and data engineering tools such as Kubernetes, Kafka, Airflow, and Dagster.
• Experience with data transformation and ETL/ELT tools and technologies such as AWS Glue and dbt for transforming structured, semi-structured, and unstructured datasets.
• Experience ingesting and integrating data from API/JDBC/CDC sources.
• Deep knowledge of Python, SQL, relational/non-relational database design, and master data strategies.
• Experience defining, architecting, and rolling out data products, including ownership of data products through their entire lifecycle.
• Deep understanding of star and snowflake dimensional modeling.
• Experience with relational databases, including SQL queries, database definition, and schema design.
• Experience with data warehouses, distributed data platforms, and data lakes.
• Strong proficiency in SQL and at least one programming language (e.g., Python, Scala, JavaScript).
• Familiarity with data orchestration tools such as Apache Airflow, and the ability to design and manage complex data workflows (see the DAG sketch after this list).
• Familiarity with agile methodologies, sprint planning, and retrospectives.
• Proficiency with version control systems (Bitbucket/Git).
• Ability to work in a fast-paced startup environment and adapt to changing requirements across several concurrent projects.
• Excellent verbal and written communication skills.
• Preferred/bonus skills: Redshift-to-Snowflake migration experience; experience with DevOps technologies such as Terraform, CloudFormation, and Kubernetes; knowledge of machine learning techniques (not mandatory, but highly preferable); experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases).
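To make the orchestration requirement concrete, here is a minimal sketch of an Airflow DAG wiring an extract, transform, and validate step into a daily run. It assumes Airflow 2.4+ (for the `schedule` argument); the DAG id, task names, and callables are hypothetical placeholders.

```python
# Minimal Airflow DAG sketch (assumes Airflow 2.4+ for the `schedule`
# argument). DAG id, task names, and callables are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    """Pull incremental rows from an API/JDBC/CDC source (placeholder)."""


def transform_orders(**context):
    """Run the PySpark transformation job (placeholder)."""


def validate_orders(**context):
    """Data quality checks: row counts, null rates, key integrity (placeholder)."""


with DAG(
    dag_id="orders_daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)
    validate = PythonOperator(task_id="validate", python_callable=validate_orders)

    # Linear dependency chain: extract, then transform, then validate.
    extract >> transform >> validate
```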

🏖️ Benefits

• Dynamic working environment in an extremely fast-growing company
• Work in an international environment
• Pleasant working environment with very little hierarchy
• Intellectually challenging work that plays a major role in clients' success and scalability
• Flexible working hours
