
Health • Education • Entertainment
Ruby Labs builds cutting-edge consumer products across the health, education, and entertainment sectors. Committed to enhancing lives and making a positive impact, the company maintains a diverse portfolio spanning health and wellness, pharmaceuticals, education, and entertainment. Its self-funded model lets it pursue ambitious ideas independently. Ruby Labs serves over 100 million annual users and has delivered six years of sustained growth, with products adopted by millions.
• Develop and maintain ETL/ELT data pipelines to ingest, transform, and deliver data into the data warehouse.
• Design and implement monitoring and alerting systems to proactively detect pipeline failures, anomalies, and data quality issues (a minimal orchestration sketch with alerting follows this list).
• Establish data quality validation checks and anomaly detection mechanisms to ensure accuracy and trust in data.
• Define and maintain data structures, schemas, and partitioning strategies for efficient and scalable data storage.
• Create and maintain comprehensive documentation of data pipelines, workflows, data models, and data lineage.
• Troubleshoot and resolve issues related to data pipelines, performance, and quality.
• Collaborate with stakeholders to understand data requirements and translate them into reliable engineering solutions.
• Contribute to continuous improvement of the data platform's observability, reliability, and maintainability.
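As a loose illustration of the first two responsibilities (not Ruby Labs' actual code, only the tools the posting names), a minimal Airflow 2.x DAG wiring ingestion, a dbt run, and dbt tests together behind a failure-alerting callback might look like the sketch below; the DAG id, the send_alert helper, and the bash commands are hypothetical.

# Minimal ELT DAG sketch with failure alerting; all names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def send_alert(context):
    # Hypothetical alert hook: a real setup would page Slack/PagerDuty here.
    print(f"Task failed: {context['task_instance'].task_id}")


def extract_to_staging():
    # Placeholder ingestion step (e.g. source API -> staging tables).
    pass


with DAG(
    dag_id="warehouse_elt",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"on_failure_callback": send_alert},
):
    extract = PythonOperator(task_id="extract", python_callable=extract_to_staging)
    transform = BashOperator(task_id="dbt_run", bash_command="dbt run")
    test = BashOperator(task_id="dbt_test", bash_command="dbt test")

    extract >> transform >> test  # dbt tests gate downstream consumers

Running dbt test as its own task means schema and data tests fail loudly through the same alerting path as the pipeline itself.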
• Proficiency in Python for data pipeline development, automation, and tooling.
• Strong SQL skills and experience working with cloud data warehouses (ClickHouse, BigQuery preferred).
• Hands-on experience with DBT (modelling, testing, documentation, and deployment).
• Experience with workflow orchestration tools such as Airflow.
• Familiarity with data quality frameworks (e.g. Great Expectations, DBT tests) and anomaly detection methods (a minimal check is sketched after this list).
• Experience building monitoring and alerting systems for data pipelines and data quality.
• Ability to write clear, maintainable, and actionable technical documentation.
• Strong problem-solving skills and attention to detail.
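To make the anomaly-detection bullet concrete, here is a toy z-score check on daily row counts; it is a generic illustration in plain Python, not a Great Expectations or DBT example, and all numbers are invented.

# Toy z-score anomaly check on daily row counts (illustrative values only).
import statistics


def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag today's count if it is more than z_threshold std devs from history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_threshold


history = [10_000 + (i % 7) * 40 for i in range(30)]  # a month of stable loads
print(is_anomalous(history, 10_120))  # False: within normal variation
print(is_anomalous(history, 4_200))   # True: likely a broken upstream load

In practice a check like this would run as a post-load task and feed the same alerting channel as pipeline failures.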
• Remote Work Environment: Embrace the freedom to work from anywhere, anytime, promoting a healthy work-life balance.
• Unlimited PTO: Enjoy unlimited paid time off to recharge and prioritize your well-being, without counting days.
• Paid National Holidays: Celebrate and relax on national holidays with paid time off to unwind and recharge.
• Company-provided MacBook: Experience seamless productivity with top-notch Apple MacBooks provided to all employees who need them.
• Flexible Independent Contractor Agreement: Unlock the benefits of flexibility, autonomy, and entrepreneurial opportunities. Benefit from tax advantages, networking opportunities, reduced employment obligations, and the freedom to work from anywhere.
November 21
Senior Data Engineer working on Azure & AI at Wavestone, supporting data engineering and analytics for business decision-making. Collaborating with clients and technical teams on cloud solutions.
🗣️🇩🇪 German Required
🗣️🇵🇱 Polish Required
Azure
ETL
SDLC
SQL
November 19
Data Engineer II at Jamf responsible for building and transforming data infrastructure for business intelligence. Collaborating with analysts and data scientists to ensure data reliability and performance.
🇵🇱 Poland – Remote
💰 $300M Post-IPO Secondary on 2021-09
⏰ Full Time
🟢 Junior
🟡 Mid-level
🚰 Data Engineer
Airflow
AWS
Cloud
Docker
EC2
Kubernetes
Python
SQL
Terraform
November 19
Data Engineer specializing in Data Mesh development for a technology company optimizing data processing infrastructure across products. Seeking an experienced professional with strong Apache Spark skills.
Apache
AWS
ETL
Jenkins
Spark
Terraform
November 18
Data Engineer at Provectus, focusing on Data Engineering and Machine Learning with a diverse team. Collaborating on data-driven architectures and innovative data platforms.
Airflow
Apache
AWS
ETL
Flask
Kafka
Python
Spark
SQL
Terraform
November 15
51 - 200 employees
Senior Data Engineer responsible for implementing Microsoft Data Intelligence solutions. Collaborating with clients to analyze large data sets and support business decisions using Azure.
🗣️🇩🇪 German Required
🗣️🇵🇱 Polish Required
Azure
ETL
SDLC
SQL