Data Engineer

Job not on LinkedIn

August 11

Apply Now

InOrg Global

Enterprise • SaaS • B2B

InOrg Global is a company specializing in managed services and innovative business strategies. They focus on providing AI/ML services, strategic operations, and workspace management to enhance productivity and efficiency. InOrg offers a unique Build-Operate-Transfer (BOT) model, guiding businesses through design, development, operation, and seamless transition of control to ensure sustained success. The company is dedicated to empowering digital disruptors and optimizing business processes with expertise in global capability centers, helping clients expand their global reach and achieve operational excellence.

📋 Description

• Design, build, and maintain scalable data pipelines using Databricks and Apache Spark.
• Integrate data from various sources into data lakes or data warehouses.
• Implement and manage Delta Lake architecture for reliable, versioned data storage.
• Ensure data quality, performance, and reliability through testing and monitoring.
• Collaborate with data analysts, scientists, and stakeholders to meet data needs.
• Automate workflows and manage job scheduling within Databricks.
• Maintain clear and thorough documentation of data workflows and architecture.
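The duties above center on pipeline reliability and data quality. As an illustrative sketch only (plain Python rather than Spark, with hypothetical validation rules not taken from the posting), a row-level quality gate might look like:

```python
# Minimal data-quality gate: validate rows before loading them downstream.
# The rules below (non-null id, non-negative amount) are hypothetical examples.

def validate_row(row: dict) -> list[str]:
    """Return a list of human-readable issues found in one record."""
    issues = []
    if row.get("id") is None:
        issues.append("missing id")
    amount = row.get("amount")
    if amount is not None and amount < 0:
        issues.append(f"negative amount: {amount}")
    return issues

def partition_rows(rows):
    """Split records into (clean, rejected) so bad data never reaches the warehouse."""
    clean, rejected = [], []
    for row in rows:
        issues = validate_row(row)
        if issues:
            rejected.append((row, issues))
        else:
            clean.append(row)
    return clean, rejected

rows = [
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": 5.0},   # rejected: missing id
    {"id": 2, "amount": -3.0},     # rejected: negative amount
]
good, bad = partition_rows(rows)
print(len(good), len(bad))  # → 1 2
```

In a Spark pipeline the same idea would typically be expressed as DataFrame filters or expectations, with rejected rows quarantined for monitoring rather than silently dropped.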

🎯 Requirements

• Experience: 3+ years in data engineering with strong exposure to Databricks and big data tools.
• Technical Skills: Proficient in Python or Scala for ETL development; strong understanding of Spark, Delta Lake, and Databricks SQL; familiarity with REST APIs, including the Databricks REST API.
• Cloud Platform: Experience with AWS, Azure, or GCP.
• Data Modeling: Familiarity with data lakehouse concepts and dimensional modeling.
• Version Control & CI/CD: Comfortable using Git and pipeline automation tools.
• Soft Skills: Strong problem-solving abilities, attention to detail, and teamwork.

Nice to Have:

• Certifications: Databricks Certified Data Engineer Associate/Professional.
• Workflow Tools: Experience with Airflow or Databricks Workflows.
• Monitoring: Familiarity with Datadog, Prometheus, or similar tools.
• ML Pipelines: Exposure to MLflow or model integration in pipelines.


Similar Jobs

August 9

Lead the architecture planning and design for data solutions in a remote role.

Skills: Amazon Redshift, AWS, Azure, Cloud, ETL, Hadoop, Spark

August 8

Join Teamified as a Senior AI Data Engineer to design and develop AI solutions.

Skills: Django, MongoDB, NoSQL, Python, RDBMS, SQL


July 31

Lead projects for designing and maintaining a data analytics platform at Cummins, engaging stakeholders.

Skills: AWS, Azure, Cassandra, Cloud, DynamoDB, ERP, ETL, Hadoop, HBase, IoT, Java, Kafka, MongoDB, NoSQL, Open Source, Python, Scala, SDLC, Spark, SQL

July 29

Data Engineer role at Amgen focusing on developing data pipelines for clinical data analytics and innovation.

Skills: AWS, Cloud, Jenkins, Kubernetes, Postgres, Python, SQL, Tableau, Vault

Built by Lior Neu-ner. I'd love to hear your feedback — Get in touch via DM or support@remoterocketship.com