
Enterprise • SaaS • B2B
InOrg Global specializes in managed services and business strategy. The company provides AI/ML services, strategic operations, and workspace management to improve productivity and efficiency, and offers a Build-Operate-Transfer (BOT) model that guides businesses through design, development, and operation before handing control back to the client for sustained success. With expertise in global capability centers, InOrg helps digital disruptors optimize business processes, expand their global reach, and achieve operational excellence.
June 11

Responsibilities:
• Design, build, and maintain scalable data pipelines using Databricks and Apache Spark (a minimal sketch follows this list).
• Integrate data from various sources into data lakes or data warehouses.
• Implement and manage Delta Lake architecture for reliable, versioned data storage.
• Ensure data quality, performance, and reliability through testing and monitoring.
• Collaborate with data analysts, scientists, and stakeholders to meet data needs.
• Automate workflows and manage job scheduling within Databricks.
• Maintain clear and thorough documentation of data workflows and architecture.
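To make the pipeline and Delta Lake bullets concrete, here is a minimal PySpark sketch of a batch job that lands raw data in a Delta table behind a basic quality gate. The bucket path, table name, and column names (event_id, ingested_at) are illustrative assumptions, not details from the posting.

```python
# Minimal sketch of a batch pipeline writing to a Delta table.
# Paths, table, and column names are placeholders, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Ingest raw data from cloud storage (path is a placeholder).
raw = spark.read.json("s3://example-bucket/raw/events/")

# Basic quality gate: drop records missing a key field, then deduplicate.
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .dropDuplicates(["event_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

# Write as a Delta table; Delta's transaction log keeps versioned history,
# so earlier snapshots stay queryable.
clean.write.format("delta").mode("append").saveAsTable("events_clean")

# Time travel: read an earlier version for auditing or rollback checks.
v0 = spark.sql("SELECT * FROM events_clean VERSION AS OF 0")
```

The versioned storage the posting mentions is what makes the final time-travel query possible: each write appends a new version rather than overwriting the old one.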
Requirements:
• 3+ years in data engineering with strong exposure to Databricks and big data tools.
• Proficient in Python or Scala for ETL development.
• Strong understanding of Spark, Delta Lake, and Databricks SQL.
• Familiar with REST APIs, including the Databricks REST API (see the sketch after this list).
• Experience with AWS, Azure, or GCP.
• Familiarity with data lakehouse concepts and dimensional modeling.
• Comfortable using Git and pipeline automation tools.
• Strong problem-solving abilities, attention to detail, and teamwork.
• Databricks Certified Data Engineer Associate/Professional (nice to have).
• Experience with Airflow or Databricks Workflows.
• Familiarity with Datadog, Prometheus, or similar tools.
• Exposure to MLflow or model integration in pipelines.
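As a hedged illustration of the Databricks REST API requirement, the sketch below lists configured jobs and triggers a run through the documented Jobs API 2.1. The workspace host, token, and job_id are placeholders.

```python
# Sketch: list and trigger Databricks Jobs via the REST API (Jobs API 2.1).
# DATABRICKS_HOST, DATABRICKS_TOKEN, and the job_id are placeholders.
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]  # personal access token
headers = {"Authorization": f"Bearer {token}"}

# List configured jobs in the workspace.
resp = requests.get(f"{host}/api/2.1/jobs/list", headers=headers, timeout=30)
resp.raise_for_status()
for job in resp.json().get("jobs", []):
    print(job["job_id"], job["settings"]["name"])

# Trigger a run of a specific job (job_id 123 is illustrative).
run = requests.post(
    f"{host}/api/2.1/jobs/run-now",
    headers=headers,
    json={"job_id": 123},
    timeout=30,
)
run.raise_for_status()
print("run_id:", run.json()["run_id"])
```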
May 15
As a Senior Data Engineer at DataRobot, you will develop analytic data products in a cloud environment. This role requires strong data engineering skills and collaboration with analysts and scientists.
Airflow
Amazon Redshift
AWS
Azure
Cloud
EC2
ETL
Google Cloud Platform
Postgres
Python
Scala
Spark
SQL
Terraform
April 30
Join Hitachi Solutions as an Azure Data Architect, designing scalable data solutions on Microsoft Azure.
Azure
Cloud
ETL
MS SQL Server
Oracle
Python
RDBMS
Scala
Spark
SQL
Tableau
Unity
April 22
Join Zingtree as a Senior Data Engineer to design and build data systems for process automation.
Apache
AWS
Cloud
Kafka
Kubernetes
Spark
SQL
Go
April 22
Leads projects for Cummins' analytics platform, collaborating with stakeholders to deliver data solutions.
Cassandra
Cloud
Distributed Systems
DynamoDB
ETL
Hadoop
HBase
IoT
Java
Kafka
MongoDB
Neo4j
Open Source
PySpark
Python
Scala
SDLC
Spark
SQL
SSIS
April 6
Manage the technology platform behind WEX's data lake, ensuring scalability and reliability.
AWS
Azure
Cloud
Dart
Kubernetes
SDLC