Solutions Architect – Specialist

September 29

Apply Now
Databricks

Artificial Intelligence • Enterprise • SaaS

Databricks is a data and AI company that provides a unified platform for data engineering, machine learning, and analytics. It focuses on optimizing big data processing and helps organizations leverage Apache Spark to deliver deeper insights and powerful data-driven applications. Databricks also offers robust tools and seamless integration for machine learning operations.

1001 - 5000 employees

Founded 2013

💰 $1.6B Series H on 2021-08

📋 Description

• Guide customers in building big data solutions on Databricks and support Solution Architects
• Provide technical leadership to guide strategic customers to successful implementations on big data projects, from architectural design to data engineering to model deployment
• Architect production-level data pipelines, including end-to-end pipeline load performance testing and optimization
• Become a technical expert in areas such as data lake technology, big data streaming, or big data ingestion and workflows
• Assist Solution Architects with advanced aspects of the technical sale, including custom proof-of-concept content, workload sizing estimates, and custom architectures
• Provide tutorials and training to improve community adoption (including hackathons and conference presentations)
• Contribute to the Databricks Community
• Report to the Specialist Field Engineering Manager and strengthen technical skills through mentorship and internal training programs

🎯 Requirements

• 5+ years of experience in a technical role, with expertise in at least one of: Software Engineering/Data Engineering or Data Applications Engineering
• Hands-on production experience with Apache Spark
• Expertise in data ingestion and streaming technologies (e.g., Spark Streaming, Kafka)
• Experience building big data pipelines
• Experience maintaining and extending production data systems
• Deep specialty expertise in at least one area: scaling big data workloads, migrating Hadoop to the public cloud, large-scale data ingestion/CDC/streaming, or cloud data lake technologies such as Delta
• Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent work experience
• Production programming experience in SQL and Python, Scala, or Java
• 2 years of professional experience with Big Data technologies (e.g., Spark, Hadoop, Kafka) and architectures
• 2 years of customer-facing experience in a pre-sales or post-sales role
• Ability to meet expectations for technical training and role-specific outcomes within 6 months of hire
• Ability to travel up to 30% when needed

🏖️ Benefits

• Eligibility for annual performance bonus
• Equity (stock) eligibility
• Comprehensive benefits and perks (see https://www.mybenefitsnow.com/databricks)
• Remote work (Central or Eastern US)

Similar Jobs

September 29: Senior Solutions Engineer at Confluent driving data streaming adoption, designing technical solutions, and leading proofs-of-concept with customers and sales. (Cloud, Java, Kubernetes, Python, SQL)

September 28: Research IT Solutions Engineer at Johns Hopkins University designing and supporting cloud, CI/CD, data, and AI/ML research computing solutions for investigators. (AWS, Azure, Cloud, Python, SDLC)

September 28: Lead implementation of the Veeva RIM cloud platform for life sciences. Translate regulatory requirements into enterprise solutions and manage project delivery. (Cloud, Oracle, Vault)

September 28: Solutions Consultant selling BELAY virtual staffing services to generate new business. Focus on inbound leads, prospecting, CRM management, and 25% travel.

September 28: Partner with Sales to architect and demo zero-trust networking solutions, co-own pipeline, close Commercial accounts, and scale SE enablement at Tailscale. (AWS, Azure, DNS, Firewalls, Google Cloud Platform, TCP/IP)
Built by Lior Neu-ner. I'd love to hear your feedback — Get in touch via DM or support@remoterocketship.com