
Finance • Non-profit • SaaS
Givzey is a platform that offers intelligent solutions for the fundraising process, including smart gift agreements, dynamic workflows, automated pledge reminders, and intelligent invoicing. The company aims to streamline and scale multi-year giving and one-time gifts, enhancing the donor experience while providing tools for fundraising teams to improve efficiency. Givzey is utilized by educational institutions and other organizations to secure and manage donations effectively.
May 26

• We’re looking for a Data Engineer to architect and scale the data backbone that powers our AI‑driven donor engagement platform.
• You’ll design and own modern, cloud‑native data pipelines and infrastructure that deliver clean, trusted, and timely data to our ML and product teams, fueling innovation that revolutionizes the nonprofit industry.
• We’re a collaborative, agile team that values curiosity, autonomy, and purpose.
• Whether you're refining AI-driven experiences or architecting tools for the future of giving, your work will help shape meaningful technology that makes a difference.
• US Citizenship
• Bachelor’s or Master’s in Computer Science, Data Engineering, or a related field
• 2+ years of hands-on experience building and maintaining modern data pipelines using Python-based ETL/ELT frameworks
• Strong Python skills, including deep familiarity with pandas and comfort writing production-grade code for data transformation
• Fluent in SQL, with a practical understanding of data modeling, query optimization, and warehouse performance trade-offs
• Experience orchestrating data workflows using modern orchestration frameworks (e.g., Dagster, Airflow, or Prefect)
• Cloud proficiency (AWS preferred): S3, Glue, Redshift or Snowflake, Lambda, Step Functions, or similar services on other clouds
• Proven track record of building performant ETL/ELT pipelines from scratch and optimizing them for cost and scalability
• Experience with distributed computing and containerized environments (Docker, ECS/EKS)
• Solid data modeling and database design skills across SQL and NoSQL systems
• Strong communication & collaboration abilities within cross‑functional, agile teams
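For context on the stack named above, here is a minimal, purely illustrative sketch of the kind of pandas-based ELT step, orchestrated with Dagster, that this role describes. The file path, column names, and asset name are hypothetical assumptions, not details from the posting.

```python
# Hypothetical sketch only: a small Dagster asset that cleans raw donation
# records with pandas. Path and column names are illustrative assumptions.
import pandas as pd
from dagster import asset


@asset
def clean_donations() -> pd.DataFrame:
    """Load raw donation records, normalize types, and drop bad rows."""
    raw = pd.read_csv("data/raw_donations.csv")  # illustrative local path

    # Coerce amounts to numeric; invalid values become NaN and are dropped.
    raw["amount"] = pd.to_numeric(raw["amount"], errors="coerce")
    raw = raw.dropna(subset=["amount", "donor_id"])

    # Normalize timestamps to UTC and de-duplicate repeated records.
    raw["donated_at"] = pd.to_datetime(raw["donated_at"], utc=True)
    return raw.drop_duplicates(subset=["donor_id", "donated_at"])
```

With Dagster installed, an asset like this could be materialized locally via `dagster dev`; in production it would typically read from and write to warehouse or object storage instead of a local CSV.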
April 30
Senior Data Engineers deliver modern data solutions through collaboration with clients and teams.
Azure
MS SQL Server
Oracle
Python
RDBMS
Scala
Spark
SQL
Tableau
April 30
Join Futuralis as a Data Engineer to design and optimize critical data workflows. Drive scalable solutions and provide technical leadership across projects.
AWS
Distributed Systems
DynamoDB
Java
Node.js
PySpark
Python
SDLC
Spark
April 28
Join Meetsta to design and develop next-gen mobile applications and improve user engagement.
GRPC
iOS
April 24
PrismHR is hiring Data Engineers to build and optimize data architectures. Join a collaborative team in a remote environment.
🇺🇸 United States – Remote
💰 Private Equity Round on 2019-12
⏰ Full Time
🟡 Mid-level
🟠 Senior
🚰 Data Engineer
🦅 H1B Visa Sponsor
Apache
Cloud
ETL
Kafka
Ruby
Scala
Spark
Go