Data Engineer

Job not on LinkedIn

August 28

Apply Now

Saaf Finance

Artificial Intelligence • Finance • Fintech

Saaf Finance is a company that specializes in automating loan origination using AI and advanced technologies. Their platform integrates with partners' data to provide a seamless digital experience for consumers and lenders, focusing on efficiency, accuracy, and security. Saaf's solutions are tailored for mortgage products including Home Equity Loans and First Mortgages, providing customizable underwriting, compliance, and integrated services. The company aims to enhance revenue growth, operational efficiency, and competitive edge for lenders while offering enhanced security and digital packaging for the secondary market.

2 - 10 employees

🤖 Artificial Intelligence

💸 Finance

💳 Fintech

📋 Description

• Saaf Finance is building the AI workforce for the mortgage industry, reimagining how loans are underwritten and processed. Backed by leading financial institutions, we’re scaling fast and looking for a Data Engineer who thrives on solving complex data challenges and wants to shape the infrastructure behind one of the most data-intensive industries in the world.

• As a Data Engineer at Saaf, you’ll own the backbone of our AI-driven platform: the data. From borrower interactions to loan performance metrics, every workflow we build depends on data being accurate, reliable, and available in real time. This is a hands-on role where you’ll work closely with engineering, product, and data teams to design and operate production-grade infrastructure that powers analytics, product features, and next-generation agentic automation in mortgage origination.

• This role is perfect for someone who enjoys building from scratch, wants to push the boundaries of how data pipelines can support AI systems, and is excited to reimagine financial workflows at scale.

Key Responsibilities

• Data Pipeline Development – Design, implement, and maintain ETL/ELT pipelines for structured and unstructured datasets from internal and external sources.

• Data Warehousing – Build and optimize warehouses and marts (Snowflake, BigQuery, or similar) for analytics, reporting, and product use cases.

• Integration – Ingest data from APIs, SaaS platforms such as CRM and financial data APIs, and internal systems into the core data platform.

• Data Modeling – Design, implement, and maintain conceptual, logical, and physical data models to ensure scalable, consistent, and high-quality datasets for downstream analytics and applications.

• Data Quality and Governance – Implement validation, schema management, and robust documentation to ensure data accuracy and compliance (a minimal validation sketch follows this list).

• Performance Optimization – Monitor and fine-tune pipeline and warehouse performance for scalability and cost efficiency.

• Security and Compliance – Apply data security and privacy controls aligned with financial regulatory requirements, ensuring full traceability of every transformation.

• Analytics Enablement – Provide clean, consistent datasets for analysts, product managers, and operational teams to support fast, data-driven decisions.
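For illustration only: a minimal Python sketch of the kind of schema and type validation the Data Quality and Governance responsibility refers to. The record format and field names (loan_id, principal, originated_at) are hypothetical, not Saaf's actual schema.

```python
# Hypothetical example: validate a batch of loan records before loading it downstream.
# Field names and types are invented for illustration; they are not Saaf's schema.
from dataclasses import dataclass
from datetime import datetime
from typing import Any

REQUIRED_FIELDS = {
    "loan_id": str,
    "principal": float,
    "originated_at": str,  # expected to be an ISO-8601 timestamp
}


@dataclass
class ValidationError:
    record_index: int
    field: str
    reason: str


def validate_batch(records: list[dict[str, Any]]) -> list[ValidationError]:
    """Return a list of problems; an empty list means the batch is safe to load."""
    errors: list[ValidationError] = []
    for i, record in enumerate(records):
        # Presence and type checks for every required field.
        for field, expected_type in REQUIRED_FIELDS.items():
            if field not in record:
                errors.append(ValidationError(i, field, "missing"))
            elif not isinstance(record[field], expected_type):
                errors.append(ValidationError(i, field, f"expected {expected_type.__name__}"))
        # Timestamps must parse, so downstream models never see malformed dates.
        ts = record.get("originated_at")
        if isinstance(ts, str):
            try:
                datetime.fromisoformat(ts)
            except ValueError:
                errors.append(ValidationError(i, "originated_at", "not ISO-8601"))
    return errors


if __name__ == "__main__":
    batch = [
        {"loan_id": "L-1001", "principal": 250_000.0, "originated_at": "2024-05-01T00:00:00"},
        {"loan_id": "L-1002", "principal": "250000", "originated_at": "not-a-date"},
    ]
    for err in validate_batch(batch):
        print(f"record {err.record_index}: {err.field} -> {err.reason}")
```

In a production pipeline these checks would more likely be expressed as warehouse-native tests (for example dbt tests) rather than application code; Python is used here only to keep the sketch self-contained.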

🎯 Requirements

Technical Expertise

• Strong SQL and Python development skills for data transformation and automation.
• Experience with modern ETL/ELT frameworks such as dbt.
• Proficiency with cloud platforms (AWS preferred) and serverless data services.
• Strong experience with data warehouse technologies (Snowflake preferred).
• Skilled in API integrations and ingestion from third-party systems.

Data Operations

• Proficient in data modeling (Kimball/Star schema, Data Vault); see the star-schema sketch after this list.
• Experience implementing CI/CD practices for data workflows.
• Ability to set up logging, monitoring, and alerting for data jobs.

Bonus Skills

• Experience building agentic workflows and orchestrating multi-step automated processes that act on data in real time.
• Familiarity with the data engineering patterns and infrastructure required by the recent wave of AI-powered tools and automation platforms.
• Experience working with financial datasets and APIs in a high-compliance environment.
• Understanding of data privacy regulations such as GDPR and CCPA.

Qualifications

• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
• 3+ years in a data engineering or similar backend data-focused role.
• Proven track record of delivering production-grade data pipelines at scale.
• Experience collaborating closely with product managers, data scientists, and full-stack engineers.
• Startup mindset: hands-on, resourceful, and comfortable operating in a fast-paced environment.
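To illustrate the Kimball/Star-schema modeling mentioned under Data Operations, here is a small Python sketch that splits raw loan records into a borrower dimension with surrogate keys and a fact table that references them. All table and column names (dim_borrower-style rows, loan facts, borrower_email) are invented for the example, not an actual warehouse design.

```python
# Hypothetical example of a Kimball-style dimension/fact split with surrogate keys.
# Names and fields are illustrative only; a real model would be defined in the warehouse.
from typing import Any


def build_star(loans: list[dict[str, Any]]) -> tuple[list[dict], list[dict]]:
    """Split raw loan records into a borrower dimension and a loan fact table."""
    surrogate_by_natural_key: dict[str, int] = {}  # borrower_email -> surrogate key
    dim_rows: list[dict] = []
    fact_rows: list[dict] = []

    for loan in loans:
        natural_key = loan["borrower_email"]
        # Assign a surrogate key the first time a borrower appears.
        if natural_key not in surrogate_by_natural_key:
            surrogate = len(surrogate_by_natural_key) + 1
            surrogate_by_natural_key[natural_key] = surrogate
            dim_rows.append({"borrower_key": surrogate, "borrower_email": natural_key})
        # Fact rows reference the dimension only through the surrogate key.
        fact_rows.append({
            "borrower_key": surrogate_by_natural_key[natural_key],
            "loan_id": loan["loan_id"],
            "principal": loan["principal"],
        })
    return dim_rows, fact_rows


if __name__ == "__main__":
    dims, facts = build_star([
        {"loan_id": "L-1", "borrower_email": "a@example.com", "principal": 300_000},
        {"loan_id": "L-2", "borrower_email": "a@example.com", "principal": 50_000},
    ])
    print(dims)   # one dimension row per distinct borrower
    print(facts)  # one fact row per loan, keyed to the dimension
```

In practice this transformation would normally live in SQL or dbt models inside the warehouse; the sketch only shows the shape of the dimension/fact split and the surrogate-key assignment.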

🏖️ Benefits

• Competitive salary
• High ownership from day one — your work will directly shape core systems and products
• Fast-paced environment with quick decision cycles and minimal bureaucracy
• Remote-first team with flexibility on work hours and location
• Direct access to founders and cross-functional teams — no layers, no silos
• Clear expectations, regular feedback, and support for professional growth
• Work on real problems in a complex, high-impact industry


Similar Jobs

August 27

Liven

51 - 200

☁️ SaaS

Senior Data Engineer building and operating scalable ELT pipelines for Liven, a hospitality technology provider. Driving data reliability and analytics across global venues.

🇮🇳 India – Remote

⏰ Full Time

🟠 Senior

🚰 Data Engineer

August 25

BrightEdge

201 - 500

☁️ SaaS

🤝 B2B

🏢 Enterprise

Big Data Engineer designing and optimizing terabyte-scale pipelines for BrightEdge's SEO platform.

🇮🇳 India – Remote

💰 $42.8M Series D on 2013-06

⏰ Full Time

🟡 Mid-level

🟠 Senior

🚰 Data Engineer

August 16

Databricks

1001 - 5000

🤖 Artificial Intelligence

🏢 Enterprise

☁️ SaaS

Sr. Big Data Engineer for Databricks Professional Services; leads end-to-end big data and AI projects for clients. Mentors teams and designs architectures.

🇮🇳 India – Remote

💰 $1.6B Series H on 2021-08

⏰ Full Time

🟠 Senior

🚰 Data Engineer

August 15

OpenFX

1 - 10

💳 Fintech

🏦 Banking

🛍️ eCommerce

Data Engineer and Data Scientist shaping ETL pipelines and analytics at OpenFX, enabling data-driven growth for cross-border payments.

🇮🇳 India – Remote

⏰ Full Time

🟡 Mid-level

🟠 Senior

🚰 Data Engineer

August 11

InOrg Global

51 - 200

🏢 Enterprise

☁️ SaaS

🤝 B2B

Design and maintain Databricks pipelines at V4C.ai, implementing Delta Lake and Spark; collaborate with analysts and stakeholders.

🇮🇳 India – Remote

⏰ Full Time

🟡 Mid-level

🟠 Senior

🚰 Data Engineer
