Data Engineer


November 13


Fractal River


Fractal River provides solutions that accelerate growth for startups, focusing on building scalable revenue and customer operations and enhancing data and AI/analytics capabilities. Its offerings include Conversational Interfaces and Analytics, which leverage AI and Large Language Models to unlock insights from unstructured data and enable self-service business intelligence, and Embedded Analytics for customized reporting and customer engagements. The company also optimizes the customer journey with efficient onboarding processes, deploys robust data ingestion pipelines for SaaS offerings, and focuses on customer relationship management to improve customer health and reduce churn. Fractal River partners with startups to deploy the right technology and develop processes that enhance business capabilities.

11 - 50 employees

🤝 B2B

🤖 Artificial Intelligence

☁️ SaaS

📋 Description

• Help design and implement data pipelines and analytics infrastructure with the latest technologies.

• Create and deploy AI tools and agentic infrastructure to enhance both client and internal capabilities.

• Build data warehouses using tools like dbt, working with platforms such as Snowflake, Redshift, and BigQuery.

• Create beautiful dashboards and reports, and work with customers to create self-service data exploration capabilities.

• Build data pipelines and integrations to connect systems, automate processes, and ensure data flows seamlessly across platforms.

• Leverage APIs from multiple systems to extract and update data, trigger and monitor processes, and in general help tie our customers' infrastructures into cohesive platforms that power their growth (a minimal sketch of this kind of work follows this list).

• Maintain and oversee cloud infrastructure to ensure it runs with the reliability and performance our customers expect.

• Help create data models, establish data governance frameworks, define metrics, and ensure data quality across reporting layers.

• Develop technical documentation and best practices for our customers.

• Drive internal improvement initiatives by identifying opportunities for efficiency gains, discovering new best practices, and proposing internal projects that enhance our capabilities and processes.

• Contribute to our evolving DevOps and DataOps practices, helping shape how we work as we continuously improve.

• Coordinate projects, activities, and tasks to ensure objectives and key results are met.

• Identify opportunities, generate innovative solutions, and improve existing product features.
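For illustration, here is a minimal extract-and-load sketch of the API-to-warehouse integration work described above. This is not Fractal River's actual stack: the endpoint, response shape, and column names are hypothetical, and SQLite stands in for a warehouse like Snowflake, Redshift, or BigQuery.

```python
# Minimal extract-and-load sketch. The API, response shape, and schema
# are hypothetical; SQLite stands in for a production warehouse.
import sqlite3

import requests

API_URL = "https://api.example.com/v1/customers"  # hypothetical endpoint


def extract(url: str) -> list[dict]:
    """Fetch one page of records from the source API."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    # Assumed shape: {"results": [{"id": ..., "email": ..., "updated_at": ...}]}
    return resp.json()["results"]


def load(rows: list[dict], conn: sqlite3.Connection) -> None:
    """Upsert records into a raw staging table keyed by id."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS stg_customers ("
        "id TEXT PRIMARY KEY, email TEXT, updated_at TEXT)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO stg_customers (id, email, updated_at) "
        "VALUES (:id, :email, :updated_at)",
        rows,
    )


if __name__ == "__main__":
    # The connection context manager commits the transaction on success.
    with sqlite3.connect("warehouse.db") as conn:
        load(extract(API_URL), conn)
```

A production version of this would add pagination, retries, secrets management, and incremental state; a tool like dbt would then transform the staging table into governed reporting layers.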

🎯 Requirements

• Our ideal candidate has 1-5 years of experience in fast-paced environments, a high tolerance for ambiguity, and a passion for constant learning.

• Comfortable with Python and SQL fundamentals; more importantly, you know when and how to leverage AI tools to accelerate your work while maintaining quality and understanding.

• Eager to leverage AI tools (ChatGPT, Claude, Gemini, etc.) to fuel creativity and problem-solving, not to replace your deliverables but to expand your thinking and improve them.

• Collaborative and communicative: able to work effectively with customers and team members.

• Adaptable: thrives in environments with constant iteration and welcomes creative solutions to challenging problems.

• Process-minded: capable of developing and improving best practices in data, DevOps, and AI-assisted workflows.

• Detail-oriented: you have strong attention to detail and care about the quality of your work.

• English language requirements: CEFR level C1 (Advanced), meaning you can:
  • Understand a wide range of demanding, longer texts and recognize implicit meaning
  • Express yourself fluently and spontaneously without much obvious searching for expressions
  • Use language flexibly and effectively for social, academic, and professional purposes
  • Produce clear, well-structured, detailed text on complex subjects

Nice-to-haves (not required):

• Familiarity with data warehouses like Snowflake, Redshift, or BigQuery

• Experience with data modeling, data governance, and creating complex visualizations

• Experience with BI tools such as Looker, Sigma, or Power BI

• Knowledge of AWS, GCP, or Azure services

• Experience with development tools such as Terraform, Docker, CircleCI, and dbt

• Certifications (AWS, Google Professional Data Engineer, etc.)

• Knowledge of Google Analytics, Salesforce, HubSpot, and/or Zendesk

• Comfort working in non-hierarchical, diverse work environments

• Bachelor's degree in computer science or a similar field

🏖️ Benefits

• Personal development plan with an accelerated career track.

• Access to an extensive reference library of books, case studies, and best practices.

• Unlimited access to AI tools (ChatGPT, Claude, Gemini, etc.).

• Unlimited budget for work-related books.

• Online training (Udemy, nanodegrees, etc.) and English language training.

• Stretch assignments, pair programming, and code reviews with new technologies.

• Yearly performance bonus based on personal performance.

• Yearly success bonus based on company performance.

• Home office setup including fast internet, a large monitor, and your favorite keyboard and mouse.

• After a few months, a gaming chair, espresso maker, standing desk, and speakers (or other items like these).

• Monthly wellness budget to cover items such as sodas, tea, snacks, pet care, yoga, or other wellness-related items.

• Company card for the wellness budget and company expenses.

• Three floating holidays and unlimited (but reasonable) PTO starting the first year.

• Fun company off-sites!


Similar Jobs

November 9

The Home Depot

10,000+ employees

🛒 Retail

👥 B2C

Data Engineer creating infrastructure to transform customer data at The Home Depot. Collaborating with teams to build algorithms and maintain data pipelines.

🇺🇸 United States – Remote

💵 $75k - $130k / year

💰 Debt Financing on 2007-07

⏰ Full Time

🟢 Junior

🟡 Mid-level

🚰 Data Engineer

November 4

Penrod

51 - 200 employees

⚕️ Healthcare Insurance

☁️ SaaS

🏢 Enterprise

Salesforce CRM Data Migration Specialist executing complex data migration projects for healthcare. Ensuring secure transfer of sensitive data to the Salesforce platform with data integrity and compliance.

🇺🇸 United States – Remote

💵 $60k - $80k / year

💰 Venture Round on 2022-02

⏰ Full Time

🟢 Junior

🟡 Mid-level

🚰 Data Engineer

🚫👨‍🎓 No degree required

October 29

EEOC

1001 - 5000 employees

🏛️ Government

📋 Compliance

🌍 Social Impact

Design, develop, and maintain data engineering solutions that support Fulton Bank's business. Collaborate on database development from inception through delivery.

🇺🇸 United States – Remote

💵 $79.1k - $131.8k / year

⏰ Full Time

🟢 Junior

🟡 Mid-level

🚰 Data Engineer

October 22

MeridianLink

501 - 1000 employees

💳 Fintech

🏦 Banking

☁️ SaaS

Data Engineer responsible for designing and developing data pipelines for analytics team support. Collaborating on data delivery and improving internal processes in a fast-paced environment.

🇺🇸 United States – Remote

💵 $98.9k - $141.2k / year

💰 $485M Post-IPO Debt on 2021-11

⏰ Full Time

🟢 Junior

🟡 Mid-level

🚰 Data Engineer

🦅 H1B Visa Sponsor

October 22

Beam Impact

11 - 50 employees

🛍️ eCommerce

🤝 B2B

Data Engineer developing data solutions for social impact at Beam. Collaborating with teams and working with data technologies to drive mission success.

🇺🇸 United States – Remote

💵 $140k - $165k / year

⏰ Full Time

🟢 Junior

🟡 Mid-level

🚰 Data Engineer
