
Health • Education • Entertainment
Ruby Labs builds cutting-edge consumer products across the health, education, and entertainment sectors. Committed to enhancing lives and making a positive impact, the company maintains a portfolio spanning health and wellness, pharmaceuticals, education, and entertainment. A self-funded model lets Ruby Labs pursue ambitious ideas independently; the company serves over 100 million annual users and has sustained growth over six years, delivering products adopted by millions.
November 21

• Support the development and maintenance of ETL/ELT pipelines (see the orchestration sketch after this list).
• Assist in ingesting, transforming, and loading data into our data warehouse.
• Help implement monitoring and alerting for pipeline health and data quality.
• Contribute to basic data validation checks and anomaly detection processes.
• Support the maintenance of schemas, partitioning, and data structures.
• Write and update documentation for pipelines, workflows, and data models.
• Troubleshoot simple pipeline or data issues with guidance from senior engineers.
• Collaborate with analysts and other stakeholders to understand data needs.
• Participate in improving the reliability and observability of the data platform.
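To ground the pipeline responsibilities above, here is a minimal sketch of a daily ELT job with a row-count quality gate, assuming Airflow 2.4+. The DAG name, task names, and returned count are hypothetical placeholders, since the posting does not specify actual pipelines or sources.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for the real load step: pull raw records from an
    # upstream source and write them to a staging table, returning
    # the number of rows loaded (42 is a stand-in value).
    rows_loaded = 42
    return rows_loaded


def check_row_count(ti):
    # Basic data quality gate: pull the load task's row count from
    # XCom and fail the run if nothing was loaded.
    rows = ti.xcom_pull(task_ids="extract_and_load")
    if not rows:
        raise ValueError("staging load produced zero rows")


with DAG(
    dag_id="events_elt",  # hypothetical pipeline name
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    load = PythonOperator(task_id="extract_and_load",
                          python_callable=extract_and_load)
    validate = PythonOperator(task_id="check_row_count",
                              python_callable=check_row_count)
    load >> validate  # run the quality gate only after the load succeeds

Returning the count from the load task and reading it back via XCom keeps the quality gate decoupled from the load logic, which is one common way to wire together the monitoring and validation duties listed above.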
• Basic Python skills for data processing, scripting, and automation.
• Strong foundational SQL skills and willingness to improve.
• Familiarity with cloud data warehouses (BigQuery or ClickHouse is a plus).
• Introductory experience with dbt (models, tests, documentation).
• Basic understanding of workflow orchestration tools such as Airflow.
• Attention to detail and ability to follow engineering best practices.
• Willingness to learn monitoring, alerting, and data quality frameworks (a standalone validation sketch follows this list).
• Ability to write clear and simple documentation.
• Curiosity, a problem-solving attitude, and the desire to grow into a mid-level/senior Data Engineer.
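As a small illustration of the data validation and warehouse skills listed above, the following standalone check computes the null rate of a column in BigQuery, assuming the google-cloud-bigquery client; the analytics.events table, user_id column, and 1% threshold are hypothetical.

from google.cloud import bigquery

client = bigquery.Client()  # uses default project credentials

# COUNTIF is BigQuery Standard SQL; table and column are hypothetical.
sql = """
SELECT
  COUNTIF(user_id IS NULL) AS null_rows,
  COUNT(*) AS total_rows
FROM `analytics.events`
"""

row = list(client.query(sql).result())[0]
null_rate = row.null_rows / max(row.total_rows, 1)

# The 1% threshold is arbitrary here; in practice it would be tuned
# per table and wired into monitoring/alerting.
if null_rate > 0.01:
    raise SystemExit(f"user_id null rate too high: {null_rate:.2%}")
print(f"user_id null rate OK: {null_rate:.2%}")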
• Remote Work Environment: Embrace the freedom to work from anywhere, anytime, promoting a healthy work-life balance.
• Unlimited PTO: Enjoy unlimited paid time off to recharge and prioritize your well-being, without counting days.
• Paid National Holidays: Celebrate and relax on national holidays with paid time off to unwind and recharge.
• Company-provided MacBook: Experience seamless productivity with top-notch Apple MacBooks provided to all employees who need them.
• Flexible Independent Contractor Agreement: Unlock the benefits of flexibility, autonomy, and entrepreneurial opportunities.
November 19
Data Engineer II at Jamf, responsible for building and transforming data infrastructure for business intelligence and collaborating with analysts and data scientists to ensure data reliability and performance.
🇵🇱 Poland – Remote
💰 $300M Post-IPO Secondary on 2021-09
⏰ Full Time
🟢 Junior
🟡 Mid-level
🚰 Data Engineer
Airflow
AWS
Cloud
Docker
EC2
Kubernetes
Python
SQL
Terraform
October 23
DWH Engineer analyzing and interpreting large datasets in media & advertising, collaborating with a dynamic team and leveraging cutting-edge technologies to deliver insights.
AWS
Cloud
SQL