Bringing you a better way to build software.
Open Source • Open Source Governance • Management and Compliance • Repository Management • DevOps
501 - 1000
💰 $80M Private Equity Round on 2018-09
February 3
AWS
Azure
Cassandra
Cloud
DynamoDB
Hadoop
HBase
Java
Kafka
MapReduce
Maven
MongoDB
Open Source
Python
RabbitMQ
Scala
Spark
• Work in one of our data processing teams to create and manage data pipelines
• Work on products that continually ingest, aggregate, and analyze data from open source software components
• Deliver data to customers to drive insights and inform decisions about using open source software components
• Monitor and observe data ingestion pipelines in a production environment
• Collaborate with cross-functional teams to ensure seamless data flow and integration into the data warehouse
• Optimize data processing and transformation algorithms to improve efficiency and performance
• Practice modern engineering approaches such as CI/CD, automated testing, infrastructure as code (IaC), and continuous monitoring
• 3+ years of overall software engineering experience
• 3+ years of backend or data engineering with at least one programming language commonly used in data engineering (e.g., Python, Java, Scala)
• Knowledge of data modeling and database design principles
• Excellent problem-solving skills and attention to detail
• Ability to work effectively both independently and as part of a team
• Strong written and verbal communication skills
• A Bachelor's degree in Computer Science, Engineering, or a related field
• Preferred: Experience with high-volume data ingestion pipelines
• Preferred: Understanding of machine learning concepts and their integration with data pipelines
• Preferred: Previous experience in a data engineering role or related field
• Preferred: Knowledge of and experience with cloud-based data platforms (e.g., AWS, Google Cloud, Azure)
• Preferred: Knowledge of and experience with large-scale data tools and techniques (e.g., Databricks, Spark, Hadoop, Hive, MapReduce)
• Preferred: Knowledge of and experience with non-relational databases (e.g., DynamoDB, HBase, MongoDB, Cassandra)
• Preferred: Knowledge of and experience working with queues and pipelines (e.g., SNS, SQS, RabbitMQ, Kafka)
• Competitive salary package
• Medical/Dental/Vision benefits
• Business casual dress
• Flexible work schedules
• 2019 Best Places to Work (Washington Post and Washingtonian)
• 2019 Wealthfront Top Career Launch Company
• EY Entrepreneur of the Year 2019
• Fast Company Top 50 Companies for Innovators
• Glassdoor rating of 4.9
Apply Now