Data Engineer - AWS

Job not on LinkedIn

May 23


Tiger Analytics

Artificial Intelligence • B2B • Consulting

Tiger Analytics is a leading AI and analytics consulting firm that specializes in leveraging data science and machine learning to provide strategic business insights across various industries. They offer services in data strategy, AI engineering, and business intelligence to enable data-driven decision-making and digital transformation for their clients. Tiger Analytics collaborates with top technology partners like Microsoft, Google Cloud, and AWS to deliver cutting-edge solutions. They serve a diverse range of sectors including consumer packaged goods, healthcare, and finance, helping businesses operationalize insights and differentiate with AI and machine learning technologies.

1001 - 5000 employees

Founded 2011

🤖 Artificial Intelligence

🤝 B2B

📋 Description

• Tiger Analytics is a fast-growing advanced analytics consulting firm.

• Our consultants bring deep expertise in Data Engineering, Data Science, Machine Learning, and AI.

• We are the trusted analytics partner for multiple Fortune 500 companies.

• As an AWS Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on AWS cloud infrastructure.

• You will work closely with cross-functional teams to support data analytics, machine learning, and business intelligence initiatives.

• The ideal candidate will have strong experience with AWS services, Databricks, and Snowflake.

• Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, and Amazon Redshift.

• Implement data processing and transformation workflows using Databricks, Apache Spark, and SQL to support analytics and reporting requirements.

• Build and maintain orchestration workflows using Apache Airflow to automate data pipeline execution, scheduling, and monitoring.

• Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver scalable data solutions.

• Optimize data pipelines for performance, reliability, and cost-effectiveness, leveraging AWS best practices and cloud-native technologies.
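The extract-transform-load responsibilities above can be sketched in plain Python. This is a minimal, hypothetical flow: in a real AWS deployment the extract step would read raw records from Amazon S3 and the load step would write to Amazon Redshift or a Glue catalog table — the in-memory lists here are stand-ins, and all record fields are invented for illustration.

```python
# Minimal ETL sketch: each stage is a plain function so the flow can be
# tested locally before wiring it to S3 / Glue / Redshift equivalents.

def extract():
    # Stand-in for reading raw records from Amazon S3.
    return [
        {"order_id": 1, "amount": "19.99", "region": "us-east-1"},
        {"order_id": 2, "amount": "5.00", "region": "eu-west-1"},
        {"order_id": 3, "amount": "bad", "region": "us-east-1"},
    ]

def transform(rows):
    # Cast amounts to float and drop malformed records.
    clean = []
    for row in rows:
        try:
            clean.append({**row, "amount": float(row["amount"])})
        except ValueError:
            continue  # in production, route to a dead-letter location instead
    return clean

def load(rows):
    # Stand-in for a COPY into Amazon Redshift; returns the loaded row count.
    return len(rows)

def run_pipeline():
    return load(transform(extract()))
```

In an Airflow deployment, each of these functions would typically become its own task so that scheduling, retries, and monitoring apply per stage.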

🎯 Requirements

• 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.

• Hands-on experience designing and building data pipelines.

• Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, and Amazon Redshift.

• Strong experience with Databricks and PySpark for data processing and analytics.

• Solid understanding of data modeling, database design principles, SQL, and Spark SQL.

• Experience with version control systems (e.g., Git) and CI/CD pipelines.

• Excellent communication skills and the ability to collaborate effectively with cross-functional teams.

• Strong problem-solving skills and attention to detail.
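As a rough illustration of the SQL and Spark SQL skills listed above, the aggregation below runs on Python's built-in sqlite3 module; the GROUP BY rollup itself is the kind of query that would run largely unchanged as Spark SQL on Databricks via `spark.sql(...)`. The table and column names are hypothetical.

```python
import sqlite3

# In-memory database standing in for a Databricks / Redshift table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 10.0), ("east", 5.0), ("west", 7.5)],
)

# A per-region rollup; the same statement pattern works as Spark SQL.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 15.0), ('west', 7.5)]
```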

🏖️ Benefits

• This position offers an excellent opportunity for significant career development in a fast-growing, challenging entrepreneurial environment with a high degree of individual responsibility.

Apply Now

Similar Jobs

April 22

Join Sunrise Robotics to design and implement data processes enhancing intelligent robotics in manufacturing.

Airflow

Apache

Assembly

Cassandra

Cloud

Grafana

Java

MongoDB

Python

Scala

Spark

SQLite

Terraform

Unity

March 25

Join a consulting firm as a data engineer, building data tools for various clients.

Airflow

ETL

NumPy

Pandas

Python

Scikit-Learn

SQL

TensorFlow

February 5

Senior Information Architect/Data Engineer role at Fitch focusing on cloud data platform architecture and innovation.

Amazon Redshift

AWS

Cloud

Hadoop

Oracle

SDLC

SQL
