Senior Data Infrastructure Engineer

September 30


Collaborative Robotics


Collaborative Robotics advances the field of collaborative robots, or cobots: robots designed to work alongside humans, assisting with tasks to improve efficiency and safety. The company builds robotic solutions that integrate seamlessly into diverse work environments, enhancing productivity across industries.

11-50 employees

🤖 Artificial Intelligence

🔧 Hardware

⚡ Productivity

💰 $30M Series A in July 2023

📋 Description

• Own the full ingestion path from edge to cloud, ensuring robot telemetry, sensor data, and warehouse events are reliably captured, transported, and made available for downstream systems
• Design, build, and operate scalable pipelines and foundational data layers (streaming and batch) that deliver low-latency, reliable data for analytics, AI/ML, and product features (a minimal ingestion sketch follows this list)
• Implement observability, monitoring, and CI/CD practices to ensure pipeline quality and keep data flows robust, maintainable, and trustworthy
• Scale and optimize multi-tenant infrastructure, balancing performance, reliability, and cost-efficiency
• Collaborate directly with robotics, AI/ML, and product teams to translate product requirements into resilient data systems that unlock features in Vista, Portal, and ScoutMap
• Establish and enforce best practices for data engineering, reliability, and security while enabling analytics engineers to deliver marts, metrics, and dashboards
• Shape how humans manage and interact with robotic fleets by powering Vista (AI insights) and ScoutMap (3D mapping and environment intelligence)
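For illustration only, here is a minimal Python sketch of the edge-to-cloud ingestion idea the description outlines: validate telemetry at the edge, batch it, and hand it to a transport. All names here (`TelemetryRecord`, `ship`, the field names) are hypothetical and not part of Collaborative Robotics' actual stack; a real pipeline would ship batches to something like Kinesis or Kafka rather than printing them.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Iterator

# Hypothetical schema for a single robot telemetry reading; the field
# names are illustrative, not taken from the job posting.
@dataclass
class TelemetryRecord:
    robot_id: str
    ts_epoch_ms: int
    sensor: str
    value: float

def validate(record: dict) -> TelemetryRecord:
    """Reject malformed events at the edge, before they enter the pipeline."""
    if record.get("ts_epoch_ms", 0) <= 0:
        raise ValueError(f"bad timestamp: {record}")
    return TelemetryRecord(**record)

def batch(records: Iterator[TelemetryRecord], max_size: int = 500) -> Iterator[list]:
    """Group validated records into bounded batches for efficient transport."""
    buf: list = []
    for rec in records:
        buf.append(asdict(rec))
        if len(buf) >= max_size:
            yield buf
            buf = []
    if buf:
        yield buf

def ship(payload: list) -> None:
    # Stand-in for the real transport (e.g., a Kinesis put_records or
    # Kafka produce call); printing keeps the sketch self-contained.
    print(f"{time.time():.0f} shipped {len(payload)} records, first: "
          f"{json.dumps(payload[0])}")

if __name__ == "__main__":
    raw = [{"robot_id": "r1", "ts_epoch_ms": 1700000000000,
            "sensor": "lidar_range", "value": 3.2}] * 1200
    for payload in batch(map(validate, raw)):
        ship(payload)
```

Keeping validation and batching as pure functions like this makes them easy to unit-test in CI, which is the kind of pipeline-quality practice the description calls for.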

🎯 Requirements

• 5+ years of professional experience in data engineering or data infrastructure roles
• Strong proficiency in Python and SQL, with the ability to write production-quality, scalable, and well-tested code
• Proven experience designing and operating ingestion pipelines and staging layers (streaming and batch)
• Experience deploying and managing cloud data infrastructure in AWS using infrastructure-as-code (e.g., Terraform, Kubernetes, Docker)
• Hands-on experience with cloud-based data platforms, storage systems, and infrastructure
• Familiarity with data quality practices, testing frameworks, and CI/CD for data pipelines (see the test sketch after this list)
• Highly motivated teammate with excellent oral and written communication skills
• Enjoys working in a fast-paced, collaborative, and dynamic start-up environment
• Willingness to travel occasionally for on-site support or testing, as needed
• Must have and maintain US work authorization
• Preferred: Proven experience as the technical lead or primary owner of a data pipeline or platform project
• Preferred: Experience with Databricks (Delta Live Tables, SQL Warehouse) and familiarity with dbt or similar tools
• Preferred: Strong understanding of multi-tenant architectures and cost/performance/reliability tradeoffs
• Preferred: Background in streaming systems (Kafka, Flink, Kinesis, or Spark Structured Streaming)
• Preferred: Familiarity with data quality and observability tools (e.g., Great Expectations, Monte Carlo)
• Preferred: Exposure to IoT/robotics telemetry or 3D/spatial data processing (e.g., point clouds, LiDAR, time-series)
• Preferred: Experience working in a product-facing data role, collaborating with product, engineering, and AI/ML teams
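As a hedged illustration of the data-quality and CI/CD expectations above, the sketch below unit-tests a hypothetical deduplication transform with pytest. The transform, field names, and rules are assumptions made up for the example; teams often express equivalent checks with Great Expectations suites or dbt tests instead.

```python
# Minimal pytest-style sketch of a CI-friendly data-quality check for a
# pipeline transform. Everything here is hypothetical example code.
from typing import Iterable

def dedupe_latest(events: Iterable[dict]) -> list[dict]:
    """Keep only the newest event per (robot_id, sensor) key."""
    latest: dict[tuple, dict] = {}
    for ev in events:
        key = (ev["robot_id"], ev["sensor"])
        if key not in latest or ev["ts"] > latest[key]["ts"]:
            latest[key] = ev
    return list(latest.values())

def test_dedupe_keeps_newest():
    events = [
        {"robot_id": "r1", "sensor": "pose", "ts": 1, "value": 0.1},
        {"robot_id": "r1", "sensor": "pose", "ts": 5, "value": 0.9},
        {"robot_id": "r2", "sensor": "pose", "ts": 2, "value": 0.4},
    ]
    out = dedupe_latest(events)
    assert len(out) == 2                 # one row per (robot, sensor) key
    r1 = next(e for e in out if e["robot_id"] == "r1")
    assert r1["ts"] == 5                 # duplicates resolved to the newest
```

Because the check is just a function plus a test (run with `pytest`), it can gate merges in the same CI pipeline that deploys the job.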

🏖️ Benefits

• Equity
• Comprehensive benefits
• Option to work remotely within the United States
• Preferred office locations: Santa Clara, CA or Seattle, WA

Apply Now

Similar Jobs

September 29

Lead Infrastructure Engineer designing infrastructure, automation, and observability for Commerce's AI-driven commerce ecosystem. Remote within US time zones or based in Austin/San Francisco.

Ansible • Chef • Docker • Kubernetes • Linux • PHP • Prometheus • Puppet • Ruby • SQL • Unix • Go

September 28

Design and operate scalable cloud and on-prem infrastructure for TwelveLabs' video AI models; build CI/CD, multi-tenant architectures, and security/monitoring frameworks.

Ansible • AWS • Azure • Cloud • Google Cloud Platform • Kubernetes • Python • Terraform • TypeScript • Go

September 26

Design and operate Azure/Databricks/MongoDB/Snowflake data infrastructure for Siemens Healthineers. Implement IaC, CI/CD, security, and disaster recovery.

Azure • Cloud • Docker • Kubernetes • MongoDB • Python • Spark • Terraform • Vault


September 25

Manage and automate Sysdig's cloud and on-prem infrastructure; ensure data-store scalability and security, and handle incident response.

Ansible • Cloud • Kubernetes • Linux • Python • Terraform
