Data Engineer

August 15

Apply Now

Human Interest

Finance • Fintech

Human Interest provides affordable, full-service 401(k) and 403(b) retirement plans, primarily targeting small and medium-sized businesses. Founded in 2015, the company aims to streamline the retirement planning process through seamless integrations with over 500 leading payroll systems. It emphasizes low-cost funds and zero transaction fees, and offers customizable retirement solutions to meet the diverse needs of employers and their employees. Human Interest also provides investment advisory services through its subsidiary, Human Interest Advisors LLC. Its goal is to make retirement benefits accessible to businesses of all sizes, enhancing employee financial wellness and satisfaction.

501 - 1000 employees

Founded 2015

💸 Finance

💳 Fintech

💰 $200M Series D on 2021-08

📋 Description

• Build and optimize data models in dbt Core to create reliable, efficient, and accessible data for downstream reporting and analysis, with a strong understanding of end-user needs.

• Design, develop, and maintain scalable data ingestion and orchestration using Meltano, Snowpipe, Airflow, and other tools.

• Manage and automate data infrastructure in AWS using Terraform.

• Collaborate with Data Analysts and Software Engineers to clarify data requirements and translate them into effective data engineering solutions.

• Proactively identify and implement improvements in data orchestration, cost/performance management, and security within Snowflake.

• Develop new data ingestion pipelines from various source systems into Snowflake, including full-stack development for brand-new pipelines from ingestion to data modeling of core user-facing tables.

• Implement efficient testing within dbt to detect system changes and ensure data quality, contributing to the operational health of the data platform.
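The dbt testing responsibility above typically takes the form of schema-level generic tests. A minimal sketch, assuming a hypothetical core user-facing `users` model (the model, column names, and accepted values here are illustrative, not from the posting):

```yaml
# models/schema.yml — hypothetical dbt generic tests for a core user-facing table
version: 2

models:
  - name: users            # illustrative model name
    description: "Core user-facing table modeled from ingested source data"
    columns:
      - name: user_id
        tests:
          - unique         # detect duplicate rows introduced upstream
          - not_null       # catch ingestion gaps early
      - name: plan_type
        tests:
          - accepted_values:
              values: ['401k', '403b']   # flag unexpected source-system changes
```

Running `dbt test` against a config like this surfaces data-quality regressions before they reach downstream reporting.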

🎯 Requirements

• 3+ years of experience as a Data Engineer with a strong focus on data pipeline development and data warehousing, consistently delivering high-quality work on a timely basis.

• Strong hands-on experience with data modeling, including knowledge of general design patterns and architectural approaches.

• Hands-on experience with cloud data warehouses.

• Strong Python and SQL skills and experience with data manipulation and analysis, with the ability to quickly absorb and synthesize complex information.

• Experience with data ingestion tools and ETL/ELT processes.

• Experience with Airflow.

• A proactive mindset, keeping an eye out for areas to improve our data infrastructure.

• Ability to independently define projects and clarify requirements, drawing on mentorship for complex projects.

• Excellent problem-solving skills and attention to detail, with a high-level understanding of how downstream users leverage data.

🏖️ Benefits

• A great 401(k) plan: Our own! Our 401(k) includes a dollar-for-dollar employer match up to 4% of compensation (immediately vested) and $0 plan fees.

• Top-of-the-line health plans, as well as dental and vision insurance.

• Competitive time off and parental leave.

• Addition Wealth: Unlimited access to digital tools, financial professionals, and a knowledge center to help you understand your equity and support your financial wellness.

• Lyra: Enhanced mental health support for employees and dependents.

• Carrot: Fertility healthcare and family-forming benefits.

• Candidly: Student loan resources to help you and your family plan, borrow, and repay student debt.

• Monthly work-from-home stipend; quarterly lifestyle stipend.

• Engaging team-building experiences, ranging from virtual social events to team offsites, promoting collaboration and camaraderie.


Similar Jobs

August 15

Remote Data Engineer building scalable data pipelines with Snowflake and dbt; the role can be based in Latin America or the US, working Pacific Time hours.

AWS

Cloud

ETL

Node.js

SQL

August 13

Allata

201 - 500

🤝 B2B

The Lead Data Engineer (Databricks) at Allata builds data pipelines for enterprise platforms, contributing to analytics, data lakehouse implementations, and AI-driven decision support.

AWS

Azure

Cloud

ETL

Jenkins

MS SQL Server

Oracle

PySpark

Spark

SQL

August 13

Senior Data Engineer at Bitly designs and builds a next-gen data platform powering analytics, experimentation, and AI features; collaborates across teams to deliver data to internal and customer applications.

Airflow

Cloud

Docker

Google Cloud Platform

Kubernetes

Python

Spark

SQL

Terraform

Go

August 9

Design, build, and maintain production data pipelines for Versa Networks. Collaborate with data scientists and software engineers to ensure scalable, reliable data infrastructure.

Airflow

Apache

BigQuery

Cloud

Docker

Google Cloud Platform

Kubernetes

Python

Ray

Rust

Spark

Terraform

Go

Built by Lior Neu-ner. I'd love to hear your feedback — Get in touch via DM or support@remoterocketship.com