Fullstack Data Engineer

November 10

Codvo.ai

AI • Cybersecurity • SaaS

Codvo.ai is a technology company that delivers strategic enterprise solutions through AI-driven innovation. It focuses on turning enterprise data into measurable value, helping businesses accelerate growth with custom AI implementations tailored to the specific challenges of each industry. Its services span AI/ML automation, application development, data analytics, cybersecurity, and digital transformation, so organizations can thrive in a rapidly evolving digital landscape.

51 - 200 employees

Founded 2019

🔒 Cybersecurity

☁️ SaaS

📋 Description

• Design and develop ETL/ELT pipelines on platforms such as Databricks (PySpark, Delta Lake, SQL), Informatica, Teradata, and Snowflake (see the sketch after this list)
• Architect data models (batch and streaming) for analytics, ML, and reporting
• Optimize performance of large-scale distributed data processing jobs
• Implement CI/CD pipelines for Databricks workflows using GitHub Actions, Azure DevOps, or similar
• Build and maintain APIs, dashboards, or applications that consume processed data (the full-stack aspect)
• Collaborate with data scientists, analysts, and business stakeholders to deliver solutions
• Ensure data quality, lineage, governance, and security compliance
• Deploy solutions across cloud environments (Azure, AWS, or GCP)
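To make the first responsibility concrete, here is a minimal PySpark + Delta Lake batch pipeline sketch. It assumes a Spark runtime where Delta Lake is available (e.g., Databricks); the input path, table name, and columns are invented for illustration and are not taken from the posting.

```python
# Minimal batch ETL sketch: raw CSV -> cleansed Delta table.
# Assumes a Spark runtime with Delta Lake support (e.g., Databricks).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw files landed in cloud storage (path is hypothetical).
raw = spark.read.option("header", True).csv("/mnt/landing/orders/")

# Transform: deduplicate, apply types, and filter (columns are hypothetical).
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Load: persist as a Delta table for downstream analytics and reporting.
clean.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_clean")
```

On Databricks, a job like this would typically be scheduled through Databricks Workflows and promoted between environments by the CI/CD pipeline mentioned above.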

🎯 Requirements

• 4–7 years of experience in data engineering, with deep expertise in Databricks
• Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
• Strong in PySpark, Delta Lake, and Databricks SQL
• Experience with Databricks Workflows, Unity Catalog, and Delta Live Tables
• Python (mandatory), SQL (expert)
• Exposure to Java/Scala (for Spark jobs)
• Knowledge of APIs, microservices (FastAPI/Flask), or basic front-end (React/Angular) is a plus (illustrated below)
• Proficiency with at least one of: Azure Databricks, AWS Databricks, or GCP Databricks
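As a rough illustration of the FastAPI exposure mentioned above, the sketch below serves records from processed data. The endpoint, model, and in-memory data are invented for the example; a real service would query the Delta table produced by a pipeline like the one sketched earlier.

```python
# Minimal FastAPI sketch serving processed records (all names hypothetical).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-api")

class Order(BaseModel):
    order_id: str
    amount: float

# In-memory stand-in for a query against the processed analytics table.
_ORDERS = {"A-1001": Order(order_id="A-1001", amount=42.5)}

@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: str) -> Order:
    order = _ORDERS.get(order_id)
    if order is None:
        raise HTTPException(status_code=404, detail="order not found")
    return order
```

Run with `uvicorn app:app` (assuming the file is named app.py); this is the kind of thin data-serving layer the "full-stack aspect" of the role points at.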

🏖️ Benefits

• Flexible work arrangements
• Professional development

Apply Now

Similar Jobs

November 9

AWS Data Engineer building scalable data pipelines on AWS Cloud at Exavalu, designing and implementing ETL/ELT and CI/CD pipelines.

AWS • Cloud • ETL • Postgres • PySpark • SQL

November 5

Data Engineer II designing and managing scalable data pipelines and ETL processes at Duck Creek. Responsible for data quality and integrity, and for mentoring junior engineers.

Azure • Cloud • ETL • SQL • Terraform

November 5

Technical lead for large-scale software projects at Duck Creek, driving innovation in insurance technology. Collaborate with teams to deliver data solutions and ensure software performance on a global scale.

AWS • Azure • Cloud • ETL • SQL

November 5

Data Engineer I at Duck Creek designing, coding, and managing scalable data pipelines. Collaborating with architects, analysts, and junior engineers in a remote-first environment.

Azure • Cloud • ETL • SQL

November 5

Data Engineer II at Duck Creek, designing scalable data solutions and optimizing data processes for insurance software. Delivering insights and maintaining data quality in cloud environments.

Azure • Cloud • ETL • SQL • Terraform
