
Recruitment • HR Tech • B2B
nDeavour Consulting is a recruitment and career counseling company headquartered in Sofia, Bulgaria. With a decade of experience, nDeavour focuses on helping individuals and businesses achieve their growth goals by providing exceptional job opportunities and professional guidance. The company prides itself on its principles of confidentiality, agility, pride, honesty, transparency, bravery, and responsibility, which foster strong relationships with clients and candidates alike.
16 hours ago

• Design and implement robust, scalable data ingestion and transformation pipelines using Databricks, PySpark, and distributed processing. Utilize Airflow (or similar) for reliable workflow orchestration.
• Implement Delta Lake principles, focusing on CDC and schema evolution. Establish and integrate data quality frameworks (e.g., Great Expectations) within CI/CD pipelines for data integrity.
• Develop and optimize complex SQL and Python scripts. Integrate diverse data sources (APIs, S3 files, etc.) and handle both structured and unstructured data.
• Support the implementation of data governance and cataloguing solutions (e.g., Unity Catalog). Proactively investigate and improve inconsistent legacy datasets.
• Guide and manage AI agents for code generation, ensuring quality and stability. Work pragmatically and collaboratively to drive technical solutions.
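The data-quality gating described above can be sketched without the real framework: Great Expectations defines expectations that are validated against a batch of data, and a CI/CD step fails the pipeline when any expectation is violated. Below is a toy, stdlib-only Python sketch of that pattern; all function names and the sample rows are illustrative, not the actual Great Expectations API.

```python
# Toy data-quality gate in the spirit of Great Expectations (illustrative only;
# the real framework has its own expectation-suite and checkpoint API).

def expect_column_values_not_null(rows, column):
    """Fail for any row where the column is missing or None."""
    failures = [i for i, r in enumerate(rows) if r.get(column) is None]
    return {"success": not failures, "failing_rows": failures}

def expect_column_values_between(rows, column, low, high):
    """Fail for any non-null value outside [low, high] (nulls are skipped,
    mirroring how range checks usually ignore missing values)."""
    failures = [i for i, r in enumerate(rows)
                if r.get(column) is not None and not (low <= r[column] <= high)]
    return {"success": not failures, "failing_rows": failures}

def validate(rows, expectations):
    """Run all expectations; a CI/CD step would fail the build if any fail."""
    results = [fn(rows) for fn in expectations]
    return all(r["success"] for r in results), results

rows = [{"price": 10.0}, {"price": None}, {"price": 250.0}]
ok, results = validate(rows, [
    lambda r: expect_column_values_not_null(r, "price"),
    lambda r: expect_column_values_between(r, "price", 0, 100),
])
```

Here `ok` is False: row 1 fails the not-null check and row 2 fails the range check, which is exactly the signal a CI/CD gate would use to block a deploy.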
• 5+ years of professional experience in data engineering, focused on cloud and distributed processing environments.
• Strong experience with Databricks, PySpark, distributed processing, and Delta Lake. Deep knowledge of CDC and schema evolution.
• Expert SQL optimization and Python skills. Hands-on experience with Airflow (or similar).
• Familiarity with relevant AWS components. Good understanding of CI/CD for data workflows and implementing data quality frameworks.
• Knowledge of streaming (Kafka/Kinesis) and exposure to Unity Catalog or similar governance tools.
• Proactive problem-solver; comfortable working with complex, inconsistent data without needing explicit specs.
• Able to work independently in a fast-paced rebuild environment.
• Collaborative, pragmatic, thrives on rapid iteration.
• Skilled at directing AI agents to code while maintaining high quality standards.
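The CDC and schema-evolution requirements above can be illustrated conceptually. This is a minimal, framework-free Python sketch of applying a CDC change feed (inserts, updates, deletes) to a keyed table while tolerating newly appearing columns; at scale this is roughly what a Delta Lake MERGE with schema evolution enabled performs. The function, field, and record names here are illustrative assumptions, not from any posting.

```python
# Minimal sketch of applying CDC events with naive schema evolution
# (illustrative only; production pipelines would use Delta Lake MERGE).

def apply_cdc(table: dict, changes: list) -> dict:
    """Apply CDC events to a table keyed by 'id'.

    Each event carries an '_op' field: 'insert', 'update', or 'delete'.
    Columns not previously seen are simply kept (naive schema evolution).
    """
    for event in changes:
        op = event["_op"]
        key = event["id"]
        row = {k: v for k, v in event.items() if k != "_op"}
        if op == "delete":
            table.pop(key, None)          # tolerate deletes for absent keys
        elif op == "insert":
            table[key] = row              # full replacement on insert
        elif op == "update":
            # Merge new values over existing ones; new columns just appear.
            table.setdefault(key, {}).update(row)
        else:
            raise ValueError(f"unknown CDC op: {op}")
    return table


table = {1: {"id": 1, "name": "alice"}}
changes = [
    {"_op": "update", "id": 1, "email": "a@example.com"},  # new column arrives
    {"_op": "insert", "id": 2, "name": "bob"},
    {"_op": "delete", "id": 1},
]
result = apply_cdc(table, changes)
```

After the three events, only row 2 remains; the update on row 1 demonstrates a new column being absorbed before the row is deleted by the later event, preserving event order as a real CDC consumer must.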
• Remote Office – Flexible hybrid form of working
• Parking Space – We provide free parking spots
• Fun Office Space – We offer a game zone and a relaxation area
• Health Insurance – Additional private health insurance, including a dental care plan
• Personal Development – Company-sponsored training budget to further develop your skills
• Employee Referral Program – Receive a bonus for referring a friend
• Holidays – Enjoy an extra 5 days after your 1st and 5th year
• Social Events – We love to celebrate our success together
• Family Insurance – Add insurance to a family member
• Sports Cards – 100% sponsored by the company
19 hours ago
Data Engineer managing and implementing the data architecture for NextGen Healthcare systems. Collaborating with internal teams and external partners to support data-driven decisions.
🇺🇸 United States – Remote
💰 Venture Round on 2015-02
⏰ Full Time
🟡 Mid-level
🟠 Senior
🚰 Data Engineer
🦅 H1B Visa Sponsor
ETL
Java
Python
Scala
Spark
SQL
Yesterday
501 - 1000 employees
Senior Software Engineer responsible for architectural decisions in data engineering at Abnormal AI. Driving the reliability and performance of business-critical data pipelines for AI products.
🇺🇸 United States – Remote
💵 $176k - $207k / year
⏰ Full Time
🟠 Senior
🚰 Data Engineer
🦅 H1B Visa Sponsor
Airflow
AWS
Distributed Systems
PySpark
Python
Spark
SQL
Yesterday
Data Architect crafting enterprise data architecture for Dynatron, boosting analytics and ML. Involved in real-time data processing and governance in automotive service industry.
Amazon Redshift
AWS
Azure
BigQuery
Cloud
Distributed Systems
Google Cloud Platform
Kafka
Pulsar
Python
Scala
SQL
Vault
Yesterday
Senior Data Engineer at ProducePay designing and optimizing data platform for analytics needs. Owning data lifecycle, ensuring reliability, security, and scalability of data architecture across teams.
Airflow
Apache
AWS
EC2
Linux
Pandas
Postgres
Python
Scala
Spark
SQL
Terraform
Yesterday
Data Engineer contributing to scalable data infrastructure and pipelines at Rhino + Jetty merger. Collaborating with cross-functional teams for data integration and analytics capabilities.
🇺🇸 United States – Remote
💵 $135k - $175k / year
⏰ Full Time
🟡 Mid-level
🟠 Senior
🚰 Data Engineer
🦅 H1B Visa Sponsor
Airflow
BigQuery
Cloud
Python
SQL
Tableau