Data Engineer

3 days ago

Solvd, Inc.

SaaS • B2B • Enterprise

Solvd, Inc. is a global software engineering and consulting company that delivers comprehensive end-to-end solutions. Founded in 2011, Solvd offers core engineering services, including software development for web and mobile platforms, digital experience and design, data and AI/ML solutions, and cloud platform modernization. The company is an AWS partner and is known for optimizing resources, solving complex problems, and enhancing user satisfaction. Solvd emphasizes innovation and quality assurance, employing over 800 international professionals across 8 offices worldwide. The company prides itself on transforming businesses through feature-rich digital products, manual and automated testing, and cutting-edge technology solutions. Its proprietary tools, Zebrunner and Carina, help improve development and QA processes. Solvd operates with a focus on collaboration, top talent cultivation, and customized technical solutions tailored to individual client needs, serving clients across 15 countries.

501 - 1000 employees

Founded 2010

☁️ SaaS

🤝 B2B

🏢 Enterprise

📋 Description

• Design, build, and optimize scalable data infrastructure for risk and compliance workloads
• Develop batch and streaming pipelines using modern big data technologies
• Implement robust ETL and ELT workflows across diverse structured and unstructured data sources (a minimal sketch follows this list)
• Work extensively with AWS services including EMR, Redshift, S3, Glue, Lambda, Kinesis, and related data tools
• Build high-throughput, low-latency systems that support real-time or near-real-time decision making
• Partner with data scientists, analysts, and engineering teams to deliver reliable and well-documented datasets
• Translate business requirements into technical specifications and scalable data solutions
• Mentor team members and contribute to best practices and reusable frameworks within Solvd
• Streamline reporting, analytics, and data preparation processes
• Replace manual workflows with automated, repeatable systems
• Support ongoing performance tuning, monitoring, and platform optimization
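
For orientation only, here is a minimal sketch (not taken from the posting) of the kind of batch ETL step described above: a PySpark job that reads raw JSON events from S3, applies light cleaning, and writes partitioned Parquet back to S3. The bucket names, paths, and column names are assumptions chosen for illustration.

```python
# Minimal batch ETL sketch with PySpark: read raw JSON events from S3,
# apply basic cleaning, and write partitioned Parquet back to S3.
# Bucket names, paths, and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("risk-events-batch-etl").getOrCreate()

# Extract: raw, semi-structured events landed by an upstream producer.
raw = spark.read.json("s3://example-raw-bucket/risk-events/2025/11/")

# Transform: drop malformed rows, normalize timestamps, derive a date partition.
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Load: columnar, partitioned output suitable for downstream warehouse queries.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-curated-bucket/risk-events/"))

spark.stop()
```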

🎯 Requirements

• Bachelor’s degree in Computer Science, Engineering, Mathematics, or similar
• 3 or more years of hands-on experience in data engineering or related fields
• Strong proficiency in SQL and data modeling for analytics and warehousing (illustrated in the sketch after this list)
• Experience building ETL or ELT pipelines at scale
• Experience with big data technologies such as Hadoop, Hive, Spark, HBase, or EMR
• Knowledge of distributed systems and data storage principles
• Basic scripting skills in Python or Scala
• Familiarity with machine learning concepts
• Preferred: Experience with AWS data services such as Redshift, S3, Glue, EMR, Kinesis, Firehose, and Lambda
• Preferred: Experience working with non-relational data stores such as document, key-value, column-family, or graph databases
• Preferred: Experience collaborating with cross-functional risk, compliance, or analytics teams
• Preferred: Strong understanding of data governance, quality, and security best practices
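
As a hedged illustration of the SQL and data-modeling proficiency listed above, the sketch below defines a tiny star schema (one dimension table, one fact table) and a sample analytical query, run through Spark SQL so the example stays self-contained. All table and column names are hypothetical, not details from the posting.

```python
# Hedged sketch of warehouse-style data modeling: a small star schema
# (one dimension, one fact) created and queried via Spark SQL.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("warehouse-model-sketch").getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key BIGINT,
        customer_id  STRING,
        risk_tier    STRING,
        country      STRING
    ) USING parquet
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_transaction (
        transaction_id STRING,
        customer_key   BIGINT,
        amount         DECIMAL(18, 2),
        currency       STRING,
        event_date     DATE
    ) USING parquet
    PARTITIONED BY (event_date)
""")

# A typical analytical query over the model: total amount per risk tier.
spark.sql("""
    SELECT d.risk_tier, SUM(f.amount) AS total_amount
    FROM fact_transaction f
    JOIN dim_customer d ON f.customer_key = d.customer_key
    GROUP BY d.risk_tier
""").show()

spark.stop()
```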

🏖️ Benefits

• Shape real-world AI-driven projects across key industries, working with clients from startup innovation to enterprise transformation.
• Be part of a global team with equal opportunities for collaboration across continents and cultures.
• Thrive in an inclusive environment that prioritizes continuous learning, innovation, and ethical AI standards.

Apply Now

Similar Jobs

3 days ago

Data Operations Engineer building and maintaining robust data pipelines. Ensuring data quality for analytics and machine learning initiatives in a collaborative environment.

Airflow

AWS

Azure

Cloud

Python

Spark

SQL

November 26

Senior Data Engineer developing scalable data pipelines for Valtech's global clients. Driving innovation and collaboration in data engineering to enhance AI capabilities.

🇨🇴 Colombia – Remote

⏰ Full Time

🟠 Senior

🚰 Data Engineer

Airflow

Apache

Azure

BigQuery

Cassandra

Cloud

ETL

Google Cloud Platform

Java

Kafka

MongoDB

MySQL

NoSQL

Postgres

Python

Scala

Spark

SQL

November 21

Data Engineer at Yuno, focusing on ETL processes and scalable data solutions for payment infrastructure. Join a remote team building high-performance payment capabilities globally.

ETL

Java

Python

Spark

SQL

November 20

Mid-Level Data Engineer at Lean Tech contributing to data solutions in software development. Engaging with teams across Latin America and the United States for scalable data pipelines.

Google Cloud Platform

Python

SQL

November 18

Data Engineer designing, building and optimizing data flows on Azure Cloud for international high-impact projects. Collaborating with business and data science teams to deliver scalable solutions.

🗣️🇪🇸 Spanish Required

Azure

Cloud

ETL

NoSQL

PySpark

Python

SQL

Unity

Vault
