Senior Data Engineer – AI Solutions


November 24


Quisitive

SaaS • Consulting • Enterprise

Quisitive is a leading Microsoft Cloud Partner that provides integration and consulting services to help businesses innovate and thrive with Microsoft cloud technologies. The company specializes in application development, data and analytics, artificial intelligence, business applications, security, digital workplace solutions, and infrastructure management. Quisitive helps organizations across manufacturing, healthcare, education, energy, and the public sector navigate and implement cloud-based solutions, and with tailored products such as MazikCare for healthcare and ShopFloor for manufacturing, it improves operational efficiency and drives digital transformation for its clients.

501 - 1000 employees

☁️ SaaS

🏢 Enterprise

📋 Description

• Building and maintaining backend data ingestion and embedding pipelines
• Setting up environments, cloning repositories, and running pipelines in JupyterHub
• Working on large-scale ETL processes, including converting Iceberg tables to Parquet and exporting data to S3 buckets (a minimal sketch of this pattern follows the list)
• Designing and optimizing schemas for Neo4j-based graph solutions
• Integrating knowledge workflows and KB articles into graph structures for advanced retrieval
• Troubleshooting data quality issues and optimizing Spark jobs for efficiency
• Implementing retry mechanisms and debugging full-stack issues related to large file operations
• Managing secure access using JWT and Kerberos authentication
• Handling credentials for Oracle DB and API clients via HashiCorp Vault
• Working with GitLab for source control and Jira for project tracking
• Supporting migration efforts from Azure DevOps to GitLab/Jira environments
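The Iceberg-to-Parquet export called out above is representative of the pipeline work in this role. Below is a minimal PySpark sketch of that pattern; it assumes a Spark session already configured with an Iceberg catalog and S3 credentials, and the catalog, table, and bucket names are hypothetical placeholders rather than details from the posting.

```python
# Illustrative sketch only: catalog, table, and bucket names are hypothetical.
from pyspark.sql import SparkSession

# Assumes a Spark session whose Iceberg catalog and S3 credentials are
# already configured (e.g. via spark.sql.catalog.* and Hadoop S3A settings).
spark = (
    SparkSession.builder
    .appName("iceberg-to-parquet-export")
    .getOrCreate()
)

# Read a snapshot of the Iceberg table through the configured catalog.
df = spark.read.table("my_catalog.analytics.kb_articles")

# Write the snapshot out as Parquet files in an S3 bucket.
df.write.mode("overwrite").parquet("s3a://example-export-bucket/kb_articles/")
```

In practice the output path, partitioning, and write mode would follow the team's existing pipeline conventions.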

🎯 Requirements

• Strong proficiency in Python for data processing and pipeline development
• Hands-on experience with Spark, Iceberg, and large-scale data frameworks
• Familiarity with Neo4j, LangChain, and LLM integration for AI-driven solutions
• Experience with Oracle DB, PostgreSQL, and pgvector for embedding strategies (see the sketch after this list)
• Comfortable working with S3 buckets, Parquet, and CSV formats
• Exposure to embedding models such as BGE-M3 and Nomic
• Understanding of AI-powered retrieval and recommendation systems
• Experience using JupyterHub for testing and debugging
• Proficiency with Power BI for dashboard development and reporting
• Knowledge of Kerberos authentication and secrets management with HashiCorp Vault
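To make the PostgreSQL/pgvector item above concrete, here is a minimal sketch of storing and querying embeddings with psycopg2. The connection settings, table name, and toy three-dimensional vector are assumptions for illustration (embeddings from models like BGE-M3 or Nomic have far higher dimensionality), and it presumes the pgvector extension is already installed in the target database.

```python
# Minimal illustration: connection settings, table name, and vector size are assumptions.
import psycopg2

# Toy embedding standing in for the output of a model such as BGE-M3 or Nomic.
embedding = [0.12, -0.03, 0.87]

# Assumes a database where `CREATE EXTENSION vector;` has already been run.
conn = psycopg2.connect(host="localhost", dbname="kb", user="etl")

with conn, conn.cursor() as cur:
    cur.execute(
        """
        CREATE TABLE IF NOT EXISTS kb_embeddings (
            id serial PRIMARY KEY,
            content text,
            embedding vector(3)
        )
        """
    )
    # pgvector accepts a bracketed list literal cast to the vector type.
    cur.execute(
        "INSERT INTO kb_embeddings (content, embedding) VALUES (%s, %s::vector)",
        ("example KB article chunk", str(embedding)),
    )
    # Nearest-neighbour lookup by cosine distance (the <=> operator).
    cur.execute(
        "SELECT content FROM kb_embeddings "
        "ORDER BY embedding <=> %s::vector LIMIT 5",
        (str(embedding),),
    )
    print(cur.fetchall())
```

A production pipeline would batch inserts and index the embedding column (for example with an HNSW or IVFFlat index) rather than inserting rows one at a time.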

🏖️ Benefits

• Passionate team members
• Challenging projects
• A great place to work

Apply Now

Similar Jobs

November 24

RTX

10,000+ employees

🚀 Aerospace

Senior Data Engineer role at RTX focusing on building scalable data pipelines for aerospace challenges. Collaborating with cross-functional teams to deliver cutting-edge AI and data solutions.

AWS • Azure • Cloud • ETL • Matillion • Python • Spark

November 24

Data Engineer developing data workflows to manage and optimize data quality for Cherokee Nation Integrated Health. Key contributor in cross-functional teams handling various data processing tasks.

Amazon Redshift • AWS • DynamoDB • EC2 • ETL • NoSQL • Numpy • Pandas • Postgres • Python • SQL • TypeScript

November 24

Data Architect at 3Cloud supporting Azure Data Platform solutions and mentoring junior teams. Lead client support and design data-driven architectures for various industries.

Azure • SQL

November 24

Senior Data Engineer at AxisCare shaping data lifecycle for AI, ML, and BI solutions. Collaborating with teams to deliver impactful data infrastructure in home health care.

Amazon Redshift • AWS • Cloud • ETL • MySQL • Python • SQL • Tableau

November 22

Data Engineering Analyst Lead/Scientist at Experian transforming trade data into consumer features for decisions across the credit lifecycle. Reporting to the Director of Data Engineering, collaborating with various teams.

AWS • Cloud • Python • SQL • Tableau
