November 25
• Architecture & storage: Design and implement our data storage strategy (warehouse, lake, transactional stores) with scalability, reliability, security, and cost in mind
• Pipelines & ETL: Build and maintain robust data pipelines (batch/stream), including orchestration, testing, documentation, and SLAs
• Reliability & cost control: Optimize compute/storage (e.g., spot, autoscaling, lifecycle policies) and reduce pipeline fragility
• Engineering excellence: Refactor research code into reusable components; enforce repo structure, testing, logging, and reproducibility
• Cross-functional collaboration: Work with DS/Analytics/Engineers to turn prototypes into production systems; provide mentorship and technical guidance
• Roadmap & standards: Drive the technical vision for data platform capabilities and establish architectural patterns that become team standards
• Experience: 5+ years in data engineering, including ownership of data infrastructure for large-scale systems
• Software engineering strength: Strong coding, debugging, performance analysis, testing, and CI/CD discipline; reproducible builds
• Cloud & containers: Production experience on AWS with Docker and Kubernetes (EKS/ECS or equivalent)
• IaC: Terraform or CloudFormation for managed, reviewable environments
• Data engineering: Expert SQL, data modeling, schema design, and modern orchestration (Airflow/Step Functions) and ETL tools
• Warehouses & data lakes: Databricks (required), Spark, Redshift, and data lake formats (Parquet)
• Monitoring/observability: Data monitoring (quality, drift, performance) and pipeline alerting
• Collaboration: Excellent communication; comfortable working with data scientists, analysts, and engineers in a fast-paced startup
• PySpark/Glue/Dask/Kafka: Experience with large-scale batch/stream processing
• Analytics platforms: Experience integrating third-party data
• Experience building data products on medallion architectures
• Be mission-oriented: Proactive and self-driven with a strong sense of initiative; takes ownership, goes beyond expectations, and does what's needed to get the job done
• Competitive compensation, flexible remote work
• Unlimited Responsible PTO
• Great opportunity to join a growing, cash-flow-positive company while having a direct impact on Nift's revenue, growth, scale, and future success
November 25
Senior Data Architect managing scalable data architecture for financial insights and analytics at Curinos. Leading the development of data strategy and ensuring data governance across reporting functions.
🇺🇸 United States – Remote
💵 $150k - $160k / year
⏰ Full Time
🟠 Senior
🚰 Data Engineer
🦅 H1B Visa Sponsor
Azure
ETL
PySpark
Python
Scala
SQL
Unity
November 25
10,000+ employees
Data Migration Lead overseeing the full lifecycle of data migration for EHR implementation. Collaborating with stakeholders to ensure data integrity and compliance in healthcare technology.
🇺🇸 United States – Remote
💵 $130k - $216k / year
💰 Grant on 2023-02
⏰ Full Time
🟠 Senior
🚰 Data Engineer
🦅 H1B Visa Sponsor
Cloud
ETL
Oracle
November 25
GenAI Data Engineer designing and scaling AI data systems powering Dynatron’s intelligent SaaS platform. Collaborate on AI infrastructure and machine learning capabilities for automotive services.
Airflow
Amazon Redshift
AWS
Cloud
Python
SQL
November 25
Data Engineer transforming healthcare data into actionable insights at Atropos Health. Collaborating with interdisciplinary teams to build and maintain data pipelines and ensure data quality.
🇺🇸 United States – Remote
💵 $100k - $130k / year
💰 $14M Series A on 2022-08
⏰ Full Time
🟢 Junior
🟡 Mid-level
🚰 Data Engineer
Amazon Redshift
BigQuery
Cloud
Python
SQL
November 24
Data Engineer contributing to enterprise data platform projects focusing on data pipelines and logic engines. Collaborating with teams to ensure effective data management and integration.
AWS
Azure
Cloud
ETL
Jenkins
MS SQL Server
Oracle
PySpark
Spark
SQL