Senior Data Engineer

Job not on LinkedIn

October 10

Apply Now

Rebrandly

B2B • SaaS • API

Rebrandly is a leading link management platform that empowers users to transform every link into a powerful, branded connection. As the #1 platform for branded short links, Rebrandly offers essential tools to scale brands, businesses, data, and systems through reliable link management solutions. The company provides in-depth click data analysis to enhance link performance and elevate user experience. Committed to security and compliance, Rebrandly is SOC 2 Type II certified and GDPR, HIPAA, and CCPA compliant. Millions of users trust Rebrandly's infrastructure to enhance their digital presence and drive conversions.

51 - 200 employees

Founded 2015


📋 Description

Own the Entire Data Domain
• Serve as the sole data engineer responsible for architecting, building, and maintaining our complete data infrastructure
• Make critical architectural decisions that will shape our data strategy for years to come
• Build a robust data platform that scales with our rapid growth from tens of thousands to millions of users

Build on AWS Foundation
• Design and optimize our AWS Redshift data warehouse as the central hub for analytics
• Implement efficient data ingestion from AWS RDS, AWS DynamoDB, and other NoSQL sources
• Leverage AWS services (Lambda, Glue, Kinesis, S3) to create a modern data stack
• Optimize query performance and manage Redshift cluster scaling strategies

Implement Reverse ETL
• Build sophisticated reverse ETL pipelines to operationalize insights
• Push enriched data back to production systems, CRM, marketing automation, and customer success tools
• Create real-time data activation frameworks that enhance user experiences
• Design feedback loops that enable data-driven product features

Drive Scale-up Success
• Rapidly prototype and deploy data solutions to support aggressive growth targets
• Balance speed with reliability in a fast-moving environment
• Build self-service analytics capabilities to empower teams across the organization
• Establish data governance practices that scale with minimal overhead

Technical Leadership
• Define data engineering best practices and standards for the organization
• Mentor other engineers on data concepts and AWS technologies
• Partner with stakeholders to translate business needs into technical solutions
• Champion data quality and reliability across all systems

🎯 Requirements

• 5+ years of hands-on experience with AWS Redshift, including performance optimization, cluster management, and advanced SQL
• 3+ years working with AWS RDS, DynamoDB, and NoSQL data modeling patterns
• Proven experience building reverse ETL pipelines and data activation frameworks
• Strong Python or similar programming skills for data pipeline development
• Experience as a solo data engineer or leading data initiatives independently
• Track record of building data infrastructure in scale-up environments (Series A-C)
• Expertise in modern data stack tools (dbt, Airflow, Census, Fivetran, or similar)
• Experience with streaming data architectures and real-time analytics
• Deep understanding of data warehouse design patterns and dimensional modeling
• Proficiency in Infrastructure as Code (Terraform preferred) for AWS resources
• Strong written and verbal communication skills with proven ability to work effectively in remote distributed teams
• Demonstrated experience in independent project management with a track record of owning and delivering complex data projects on schedule
• Previous experience working in SaaS companies with an understanding of SaaS-specific data challenges and metrics

🏖️ Benefits

• Competitive compensation
• Complete ownership of the data engineering domain
• Direct impact on company strategy and growth
• Opportunity to build a data platform from scratch
• Remote-first culture with flexible working arrangements


Similar Jobs

October 10

Data Architect leading strategic design and implementation of global data model at InPost Group. Focused on building scalable data ecosystems and optimizing performance across various data initiatives.

🗣️🇵🇱 Polish Required

Azure • ETL • Python • Spark • SQL

October 1

Data Engineer building Databricks and AWS S3 pipelines for Kyriba's liquidity SaaS; enabling ML/GenAI, BI, automation, and governance.

AWS • ETL • Python • PyTorch • Scala • Scikit-Learn • SQL • TensorFlow

September 26

Lead a team to design Azure Medallion architectures, implement CI/CD data pipelines, and integrate vector search/AI while collaborating with US stakeholders.

Azure • PySpark • Python • SQL

September 24

Lead Oracle PL/SQL development, optimize SQL, support production incidents, and mentor the PL/SQL team at SQLI.

🗣️🇫🇷 French Required

Oracle • SQL
