pathway.com - The smartest way to build Data Products
AI for IoT • SaaS • Cloud-native Application • Things in motion • Risk analysis
11 - 50
💰 $4.5M Pre Seed Round on 2022-12
March 19
• We are searching for a person with a Data Processing or Data Engineering profile, willing to work with live client datasets and to test, benchmark, and showcase our brand-new stream data processing technology.
• The end-users of our product are mostly developers and data engineers working in a corporate environment. Our development framework is one day expected to become part of their preferred development stack for analytics projects at work – their daily bread and butter.
• You will work closely with our CTO, Head of Product, and key developers. You will be expected to:
- Implement the flow of data from its location in clients' warehouses up to Pathway's ingress.
- Set up CDC interfaces for change streams between client data stores and the i/o data processed by Pathway, ensuring data persistence for Pathway outputs.
- Design ETL pipelines within Pathway.
- Contribute to benchmark framework design (throughput / latency / memory footprint; consistency), including in a distributed system setup.
- Contribute to building open-source test frameworks for simulated streaming data scenarios on public datasets.
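To illustrate the kind of CDC change-stream handling the role involves, here is a minimal sketch in plain Python that folds a stream of change events into a keyed table. The event format (`op` / `key` / `row`, loosely modeled on Debezium-style insert/update/delete records) and the function name are hypothetical illustrations, not Pathway's actual API:

```python
# Sketch: replaying a CDC change stream into an in-memory keyed table.
# The event shape {"op": ..., "key": ..., "row": ...} is a hypothetical,
# Debezium-like format chosen for illustration; it is NOT Pathway's API.

def apply_cdc_events(state, events):
    """Fold a sequence of change events into a dict keyed by primary key."""
    for event in events:
        op, key = event["op"], event["key"]
        if op in ("insert", "update"):
            state[key] = event["row"]      # upsert the new row image
        elif op == "delete":
            state.pop(key, None)           # drop the row if present
        else:
            raise ValueError(f"unknown op: {op}")
    return state

# Example: a short change stream for a two-row table
events = [
    {"op": "insert", "key": 1, "row": {"name": "alice", "score": 10}},
    {"op": "insert", "key": 2, "row": {"name": "bob", "score": 7}},
    {"op": "update", "key": 1, "row": {"name": "alice", "score": 12}},
    {"op": "delete", "key": 2},
]
state = apply_cdc_events({}, events)
# state now holds only key 1, with alice's updated row
```

A real deployment would consume such events from a broker (e.g. Kafka) and persist the resulting table, rather than keeping it in memory.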
• Inside-out understanding of at least one major distributed data processing framework (Spark, Dask, Ray, ...).
• 6+ months of experience working with a streaming dataflow framework (e.g. Flink, Kafka Streams or ksqlDB, Spark in streaming mode, Beam/Dataflow).
• Ability to set up distributed dataflows independently.
• Experience with data streams: message queues, message brokers (Kafka), CDC.
• Working familiarity with data schema and schema versioning concepts; Avro, Protobuf, or others.
• Familiarity with Kubernetes.
• Familiarity with deployments in both Azure and AWS clouds.
• Good working knowledge of Python.
• Good working knowledge of SQL.
• Experience working for an innovative tech company (SaaS, IT infrastructure, or similar preferred) with a long-term vision.
• Warmly disposed towards open-source and open-core software, but pragmatic about licensing.
Bonus Points:
- Know the ways of developers in a corporate environment.
- Passionate about trends in data.
- Proficiency in Rust.
- Experience with Machine Learning pipelines or MLOps.
- Familiarity with any modern data transformation workflow tooling (dbt, Airflow, Dagster, Prefect, ...).
- Familiarity with Databricks Data Lakehouse architecture.
- Familiarity with Snowflake's data product vision (2022+).
- Experience in a startup environment.
• Intellectually stimulating work environment. Be a pioneer: you get to work with a new type of stream processing framework.
• Work in one of the hottest data startups in France, with exciting career prospects.
• Responsibilities and the ability to make a significant contribution to the company's success.
• Compensation: annual salary of €60K-€100K + employee stock option plan.
• Inclusive workplace culture.