Software Engineer, Data Engineer

October 29

Fractal River

B2B • Artificial Intelligence • SaaS

Fractal River provides scalable solutions that accelerate growth for startups, focusing on building scalable revenue and customer operations and on enhancing data and AI/analytics capabilities. Its offerings include Conversational Interfaces and Analytics, which leverage AI and large language models to unlock insights from unstructured data and enable self-service business intelligence, as well as Embedded Analytics for customized reporting and customer engagement. The company also optimizes the customer journey with efficient onboarding processes, deploys robust data ingestion pipelines for SaaS offerings, and focuses on customer relationship management to improve customer health and reduce churn. Fractal River partners with startups to deploy the right technology and develop processes that enhance business capabilities.

📋 Description

• Help design and implement data pipelines and analytics infrastructure with the latest technologies.
• Create and deploy AI tools and agentic infrastructure to enhance both client and internal capabilities.
• Build data warehouses using tools like dbt, working with platforms such as Snowflake, Redshift, and BigQuery.
• Create beautiful dashboards and reports, and work with customers to build self-service data exploration capabilities.
• Build data pipelines and integrations to connect systems, automate processes, and ensure data flows seamlessly across platforms.
• Leverage APIs from multiple systems to extract and update data and to trigger and monitor processes.
• Maintain and oversee cloud infrastructure to ensure it runs reliably and performs well.
• Help create data models, establish data governance frameworks, define metrics, and ensure data quality across reporting layers.
• Develop technical documentation and best practices for our customers.
• Drive internal improvement initiatives by identifying efficiency gains and proposing internal projects.
• Contribute to our evolving DevOps and DataOps practices.
• Coordinate projects, activities, and tasks to ensure objectives and key results are met.

🎯 Requirements

• 1-5 years of experience working in fast-paced environments.
• Familiarity with Python and SQL fundamentals.
• Collaborative and communicative; able to work effectively with customers and team members.
• Curious and driven to learn.
• Proactive self-starter.
• Detail-oriented with strong attention to quality.
• Adaptable in environments with constant iteration.
• Process-minded, capable of developing and improving best practices in data and workflows.
• Nice to have: familiarity with data warehouses such as Snowflake, Redshift, or BigQuery.
• Nice to have: experience with BI tools such as Looker, Sigma, or Power BI.
• Nice to have: knowledge of AWS, GCP, or Azure services.
• Nice to have: experience with development tools such as Terraform, Docker, CircleCI, and dbt.

🏖️ Benefits

• Personal development plan with an accelerated career track.
• Access to an extensive reference library of books, case studies, and best practices.
• Unlimited access to AI tools (ChatGPT, Claude, Gemini, etc.).
• Unlimited budget for work-related books.
• Online training (Udemy, nanodegrees, etc.) and English language training.
• Stretch assignments, pair programming, and code reviews with new technologies.
• Yearly performance bonus based on personal performance.
• Yearly success bonus based on company performance.
• Home office setup including fast internet, a large monitor, and your favorite keyboard and mouse.
• After a few months, a gaming chair, espresso maker, standing desk, and speakers (or other items like these).
• Monthly wellness budget covering items such as sodas, tea, snacks, pet care, yoga, or other wellness-related items.
• Company card for the wellness budget and company expenses.
• Three floating holidays and unlimited (but reasonable) PTO starting the first year.
• Fun company off-sites!

Similar Jobs

October 22

Data Engineer responsible for designing and developing data pipelines to support the analytics team, collaborating on data delivery and improving internal processes in a fast-paced environment.

Azure • Cloud • Django • ETL • Java • Jenkins • Python • React • Spark • SQL • Unity

October 22

Data Engineer developing data solutions for social impact at Beam, collaborating with teams and working with data technologies to drive mission success.

Cloud • Postgres • Python • SQL

October 21

Junior Data Engineer contributing to healthcare data solutions at Cylinder, building data pipelines and collaborating with senior team members in a supportive environment.

Airflow • Cloud • Distributed Systems • Python • SQL

October 14

Experienced Data Engineer designing data solutions for US clients, collaborating with teams to build data pipelines and support analytics activities remotely.

AWS • Cloud • NoSQL • Python • Spark • SQL • Tableau • Terraform

October 3

Data Engineer creating impactful data transformation solutions for the U.S. Army's cloud management ecosystem, collaborating with teams and ensuring successful delivery of software.

AWS • Cloud • Python
