Senior Data Engineer

October 30

Apply Now
Shuru

Artificial Intelligence • B2B • Enterprise

Shuru is a product, AI, and technology consulting firm that partners with businesses to deliver strategic consulting, full-cycle product and custom software development, and curated engineering team extension. Its AI-native engineering teams build scalable AI applications and provide data engineering and analytics, cloud/DevOps, and API integration services to modernize systems and accelerate product delivery. Shuru operates globally with a remote-first model and emphasizes high ownership, design thinking, and measurable outcomes for enterprise and startup clients.

📋 Description

• Design and implement scalable data pipelines for ingestion, validation, cleanup, and normalization
• Build and maintain ETL/ELT processes to support both batch and real-time data processing
• Develop data quality frameworks and monitoring systems to ensure data integrity
• Optimize pipeline performance and troubleshoot data flow issues
• Deploy and manage data infrastructure on AWS and GCP platforms
• Implement Infrastructure as Code (IaC) using Terraform for reproducible deployments
• Design and maintain data lakes, data warehouses, and streaming architectures
• Ensure security, compliance, and cost optimization across cloud resources
• Implement event-driven architectures using technologies like Kafka, Kinesis, or Pub/Sub

🎯 Requirements

• 5+ years of experience in data engineering or a related field
• Hands-on experience with data pipeline development and maintenance
• Strong proficiency in Python, SQL, and at least one other programming language (Java, Scala, Go)
• Extensive experience with AWS services (S3, Redshift, EMR, Kinesis, Lambda, Glue)
• Solid experience with GCP services (BigQuery, Dataflow, Pub/Sub, Cloud Storage)
• Proven experience with Infrastructure as Code using Terraform
• Strong background in streaming technologies (Apache Kafka, Amazon Kinesis, Google Pub/Sub)
• Experience with data processing frameworks (Apache Spark, Apache Beam, Airflow)
• Proficiency with data warehousing solutions (Redshift, BigQuery, Snowflake)
• Knowledge of NoSQL databases (MongoDB, Cassandra, DynamoDB)
• Familiarity with containerization (Docker, Kubernetes)
• Experience with monitoring and observability tools (CloudWatch, Stackdriver, Datadog)

🏖️ Benefits

• Competitive salary and benefits package
• Opportunity to work with a team of experienced product and tech leaders
• A flexible work environment with remote working options
• Continuous learning and development opportunities
• Chance to make a significant impact on diverse and innovative projects

Similar Jobs

October 29

Senior Data Engineer at Channel Factory, developing scalable data infrastructure for analytics in marketing tech. Collaborating with teams and modernizing ETL processes for optimal data handling.

Tags: Apache, AWS, Distributed Systems, EC2, ETL, Postgres, PySpark, Python, RabbitMQ, RDBMS, SQL

October 29

Data Architect at Yuno, designing and evolving end-to-end data architecture solutions. Collaborating with teams to deliver production-ready products in payment infrastructure.

Tags: AWS, ETL, Java, Python, Spark, SQL

October 29

Data Engineer modernizing secure data infrastructure for military healthcare programs. Building data pipelines, ingesting data, and optimizing performance in compliance with DoD standards.

Tags: Airflow, Apache, AWS, ETL, Python, Spark, SQL

October 29

Senior Consultant - Data Engineering at 3Cloud, focusing on Microsoft technologies and data solutions. Collaborating with teams to implement reliable and scalable data engineering solutions.

Tags: Azure, ETL, Python, SQL, SSIS

October 29

Data Engineer supporting federal government enterprise data programs with AWS and Databricks environments. Designing and optimizing scalable data solutions for high-quality business insights.

Tags: AWS, ETL, Python, Spark, SQL