
Founded in 2011, Hornet Networks is the world’s queer tech platform, providing a digital home for LGBTQ+ people to connect with each other — anytime, anywhere. Amplifying the radical, affirmative power of the queer community with cutting-edge technology, Hornet serves an LGBTQ+ audience with its two distinct, exciting apps: Hornet and Spaces.
51 - 200 employees
💰 $8M Series A on 2016-11
November 13

In this role, you will be the owner of Hornet’s raw data layer — ensuring that downstream Analytics & AI applications can build on a consistent and trustworthy foundation. You will design and maintain the pipelines that bring our user, product, and monetization data into BigQuery, while enforcing standards that guarantee quality, scalability, and reliability.

• Own the ingestion of raw data from Firebase, back-end/front-end systems, and external partners into BigQuery.
• Design, build, and maintain robust ETL/ELT pipelines that are efficient, evolvable, and scalable to TB-scale workloads.
• Implement data contracts, schema enforcement, and quality checks at the ingestion layer to prevent downstream issues.
• Proactively monitor and debug pipelines, ensuring resilient, idempotent ingestion processes.
• Optimize data storage and query performance in BigQuery, balancing cost efficiency and scalability.
• Collaborate with cross-functional teams to integrate new data sources and align pipeline development with analytics needs.
• Contribute to engineering best practices, documenting processes, reviewing code, and mentoring colleagues where appropriate.
• Partner with the AI Engineer on aligning ingestion with broader architectural standards and long-term infrastructure goals.
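To illustrate the kind of ingestion-layer guard the responsibilities above describe, here is a minimal sketch of a data contract with schema enforcement, applied to raw records before they are loaded into the warehouse. All field names and types here are hypothetical examples, not Hornet’s actual schema.

```python
# Minimal sketch of a data contract enforced at the ingestion layer.
# Field names and types are illustrative assumptions only.

CONTRACT = {
    "user_id": {"type": int, "required": True},
    "event":   {"type": str, "required": True},
    "ts":      {"type": str, "required": True},   # ISO-8601 timestamp string
    "revenue": {"type": float, "required": False},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of contract violations for one raw record (empty = valid)."""
    errors = []
    for field, rule in CONTRACT.items():
        if field not in record or record[field] is None:
            if rule["required"]:
                errors.append(f"missing required field: {field}")
            continue
        if not isinstance(record[field], rule["type"]):
            errors.append(
                f"{field}: expected {rule['type'].__name__}, "
                f"got {type(record[field]).__name__}"
            )
    # Reject unexpected fields so schema drift is caught at ingestion,
    # not in downstream analytics.
    for field in record:
        if field not in CONTRACT:
            errors.append(f"unexpected field: {field}")
    return errors

# Example: a conforming record passes, a drifted one is rejected.
ok_record  = {"user_id": 42, "event": "app_open", "ts": "2024-01-01T00:00:00Z"}
bad_record = {"user_id": "42", "event": "app_open"}  # wrong type, missing ts
assert validate_record(ok_record) == []
assert len(validate_record(bad_record)) == 2
```

Rejecting records at this boundary is what keeps the raw layer trustworthy: bad data fails loudly in the pipeline rather than silently in a dashboard.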
• 5+ years of Data Engineering experience with a proven track record of delivering reliable, scalable data pipelines.
• Strong expertise in SQL (BigQuery preferred) and Python for production-ready data workflows.
• Deep knowledge of cloud data warehouses and orchestration frameworks (e.g., Mage/Airflow).
• Experience implementing data contracts, schema enforcement, and data quality frameworks.
• Strong problem-solving skills with a focus on pipeline reliability, scalability, and cost optimization.
• Familiarity with CI/CD practices and version control systems (GitHub).
• Excellent communication skills and ability to collaborate effectively with analytics and engineering peers.
• Bachelor’s degree in Computer Science, Engineering, or related field.
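The SQL-plus-Python skill set above typically shows up in patterns like idempotent loads, where re-running a pipeline does not duplicate rows. Below is a hedged sketch that builds a BigQuery-style MERGE upsert in Python; the table and column names are hypothetical, not Hornet’s actual schema.

```python
# Hedged sketch: generate an idempotent MERGE upsert keyed on a unique id.
# Table and column names are illustrative assumptions only.

def build_merge_sql(target: str, staging: str, key: str, cols: list[str]) -> str:
    """Build a MERGE statement so re-running a load updates rather than duplicates."""
    assignments = ", ".join(f"{c} = S.{c}" for c in cols)
    col_list = ", ".join([key] + cols)
    values = ", ".join(f"S.{c}" for c in [key] + cols)
    return (
        f"MERGE {target} T USING {staging} S ON T.{key} = S.{key} "
        f"WHEN MATCHED THEN UPDATE SET {assignments} "
        f"WHEN NOT MATCHED THEN INSERT ({col_list}) VALUES ({values})"
    )

# Hypothetical usage: upsert a staged batch of events into the raw events table.
sql = build_merge_sql(
    "analytics.events", "staging.events_batch",
    "event_id", ["user_id", "event_type", "ts"],
)
```

Keyed MERGE statements like this are one common way to make ingestion resilient to retries, since replaying the same batch is a no-op for rows already present.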
• Work alongside an innovative and passionate Data-Analytics-AI squad, pushing the boundaries of what’s possible.
• Enjoy a flexible, remote-first work environment with teammates across the world.
• We offer you an open contract with monthly invoicing or employment, depending on circumstances.
• We do not ask for cover letters, but we give bonus points to candidates who can articulate their passion for our mission in a concise (1-2 sentence) statement.