
AI • Cybersecurity • SaaS
Codvo.ai is a technology company that delivers strategic enterprise solutions through AI-driven innovation. They focus on turning enterprise data into measurable value, helping businesses accelerate growth with custom AI implementations tailored to the specific challenges of different industries. Their service offerings span AI/ML automation, application development, data analytics, cybersecurity, and digital transformation, so that organizations can thrive in a rapidly evolving digital landscape.
November 10
Angular
AWS
Azure
Cloud
ETL
Flask
Google Cloud Platform
Informatica
Java
Microservices
PySpark
Python
React
Scala
Spark
SQL
Unity

• Design and develop ETL/ELT pipelines on platforms like Databricks (PySpark, Delta Lake, SQL), Informatica, Teradata, and Snowflake (see the sketch after this list)
• Architect data models (batch and streaming) for analytics, ML, and reporting
• Optimize performance of large-scale distributed data processing jobs
• Implement CI/CD pipelines for Databricks workflows using GitHub Actions, Azure DevOps, or similar
• Build and maintain APIs, dashboards, or applications that consume processed data (full-stack aspect)
• Collaborate with data scientists, analysts, and business stakeholders to deliver solutions
• Ensure data quality, lineage, governance, and security compliance
• Deploy solutions across cloud environments (Azure, AWS, or GCP)
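For context on the kind of work the Databricks bullet describes, here is a minimal batch ETL sketch in PySpark with Delta Lake. It is an illustration only, not part of the posting; the paths, table names, and columns are hypothetical.

```python
# Minimal batch ETL sketch in PySpark + Delta Lake.
# Paths, table names, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders_etl")
    # Delta Lake support; on Databricks these settings are preconfigured.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Extract: read raw CSV files landed by an upstream ingestion job.
raw = spark.read.option("header", True).csv("/mnt/raw/orders/")

# Transform: cast types, drop malformed rows, add a load timestamp.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "order_ts"])
       .withColumn("_loaded_at", F.current_timestamp())
)

# Load: write to a Delta table partitioned by order date for analytics.
(clean.withColumn("order_date", F.to_date("order_ts"))
      .write.format("delta")
      .mode("append")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders_clean"))
```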
• 4–7 years of experience in data engineering, with deep expertise in Databricks
• Bachelor's or Master's in Computer Science, Data Engineering, or a related field
• Strong in PySpark, Delta Lake, and Databricks SQL
• Experience with Databricks Workflows, Unity Catalog, and Delta Live Tables (a minimal example follows this list)
• Python (mandatory), SQL (expert)
• Exposure to Java/Scala (for Spark jobs)
• Knowledge of APIs, microservices (FastAPI/Flask), or basic front-end (React/Angular) is a plus
• Proficiency with at least one of: Azure Databricks, AWS Databricks, or GCP Databricks
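The requirements above mention Delta Live Tables; as a rough illustration of what that looks like, a minimal DLT pipeline definition might resemble the sketch below. Table names, paths, and columns are hypothetical, and the code only runs inside a Databricks Delta Live Tables pipeline, where the runtime supplies `spark` and the `dlt` module.

```python
# Hypothetical Delta Live Tables definition; runs only in a Databricks DLT pipeline,
# where `spark` and the `dlt` module are provided by the runtime.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested from cloud storage (path is a placeholder).")
def orders_raw():
    # Auto Loader incrementally picks up new JSON files from the landing zone.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/raw/orders/")
    )

@dlt.table(comment="Cleaned orders for analytics and reporting.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # data quality rule
def orders_clean():
    return (
        dlt.read_stream("orders_raw")
        .withColumn("amount", F.col("amount").cast("double"))
        .withColumn("_loaded_at", F.current_timestamp())
    )
```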
• Flexible work arrangements
• Professional development
November 9
AWS Data Engineer building scalable data pipelines on AWS Cloud at Exavalu. Designing and implementing ETL/ELT and CI/CD pipelines.
AWS
Cloud
ETL
Postgres
PySpark
SQL
November 5
Data Engineer II designing and managing scalable data pipelines and ETL processes at Duck Creek. Responsible for data quality and integrity, and for mentoring junior engineers.
Azure
Cloud
ETL
SQL
Terraform
November 5
Technical lead for large-scale software projects at Duck Creek, driving innovation in insurance technology. Collaborating with teams to deliver data solutions and ensure software performance on a global scale.
AWS
Azure
Cloud
ETL
SQL
November 5
Data Engineer I at Duck Creek designing, coding, and managing scalable data pipelines. Collaborating with architects, analysts, and junior engineers in a remote-first environment.
🇮🇳 India – Remote
💰 $230M Private Equity Round on 2020-06
⏰ Full Time
🟡 Mid-level
🟠 Senior
🚰 Data Engineer
Azure
Cloud
ETL
SQL
November 5
Data Engineer II at Duck Creek, designing scalable data solutions and optimizing data processes for insurance software. Delivering insights and maintaining data quality in cloud environments.
Azure
Cloud
ETL
SQL
Terraform