Data Engineer

August 8

Apply Now

Wynd Labs

Artificial Intelligence • Data

Wynd Labs is focused on making public web data accessible for artificial intelligence applications. The company provides enterprise customers with robust web scraping networks for accessing diverse data, such as financial insights for small to medium cap equities. Additionally, Wynd Labs contributes to AI development by offering AI-oriented datasets and designing native AI models to assist in data scraping and structuring. Their platform supports a wide range of business solutions, including AI training data, price scraping, ad verification, and financial data services.

11 - 50 employees

Founded 2018

🤖 Artificial Intelligence

📋 Description

• We build infrastructure that delivers massive amounts of web data to the companies training the world’s most powerful AI models.

• We’re the team behind Grass, a bandwidth-sharing network that lets us operate a massive distributed crawler, giving us unique access to high-quality public web data at global scale.

• On top of that, we’ve built pipelines for ingesting, segmenting, and annotating billions of videos, transcripts, and audio files, powering dataset creation for frontier labs.

• We’re lean, technical, and move fast. No red tape, no slow decision-making; just a team of builders pushing to expand what’s possible for open web data and AI.

• We are seeking a Data Engineer with expertise in building and maintaining robust data pipelines and integrating scalable infrastructure. You will join a small, talented team and play a critical role in designing and optimizing our data systems, ensuring seamless data flow and accessibility. Your contributions will directly support our mission to position Grass as a key player in the evolution of data-driven innovation on the internet.

🎯 Requirements

• Bachelor’s degree in Computer Science, Information Systems, Data Engineering, or a related technical field.

• Extensive experience with database systems such as Redshift, Snowflake, or similar cloud-based solutions.

• Advanced proficiency in SQL and experience optimizing complex queries for performance.

• Hands-on experience building and managing data pipelines using tools such as Apache Airflow, AWS Glue, or similar technologies.

• Solid understanding of ETL (Extract, Transform, Load) processes and best practices for data integration.

• Experience with infrastructure automation tools (e.g., Terraform, CloudFormation) for managing data ecosystems.

• Knowledge of programming languages such as Python, Scala, or Java for pipeline orchestration and data manipulation.

• Strong analytical and problem-solving skills, with an ability to troubleshoot and resolve data flow issues.

• Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes) technologies for data infrastructure deployment.

• Collaborative team player with strong communication skills to work with cross-functional teams.

🏖️ Benefits

• Opportunity. We are at the forefront of developing a web-scale crawler and knowledge graph that allows ordinary people to participate in the process and share in the benefits of AI development.

• Culture. We’re a lean team working together toward a very ambitious goal: improving access to public web data and distributing the value of AI to the people. We prioritize low ego and high output.

• Work remotely.

• Compensation. You’ll receive a competitive salary, benefits, and equity package.


Similar Jobs

August 8

Help build, scale, and optimize Rollstack's data infrastructure. Work closely with leadership and cross-functional teams.

Amazon Redshift • BigQuery • ETL • Python • SQL • Tableau

August 8

Join Varsity Brands as a Senior Data Engineer, focusing on efficient data pipeline development with Snowflake.

Cloud • ERP • ETL • Java • Matillion • Python • Scala • SQL • Tableau

August 8

Join Aeries as a Data Engineer to support strategic data projects and complex client needs.

Amazon Redshift • AWS • Azure • Cyber Security • ETL • Hadoop • Informatica • Kafka • Oracle • Spark • SQL • Tableau

August 5

Join GitLab to architect scalable data systems, transforming data management across deployments.

🇺🇸 United States – Remote

💵 $157.9k - $236.9k / year

💰 Secondary Market on 2020-11

⏰ Full Time

🟡 Mid-level

🟠 Senior

🚰 Data Engineer

Airflow • Cloud • Docker • Kubernetes • Open Source • Postgres • Python • Ruby • Ruby on Rails • SDLC • Go
