
B2B • SaaS • Artificial Intelligence
Zingtree empowers customer support through AI process automation, helping enterprises streamline and simplify complex support processes. It offers tools to create, manage, and automate support workflows, making it easier for customer service agents and customers to resolve issues efficiently. Zingtree integrates with various CRM systems and supports industries including contact centers, healthcare, insurance, and home services. Through its dynamic workflows, Author Assist AI, and compliance automation, it helps businesses improve customer experience with faster resolution times and stronger compliance controls.
April 22

• Zingtree is the next-gen, intelligent process automation platform that reimagines customer experience operations for top Customer Support leaders.
• With over 500 customers, including global enterprises like Optum, Corpay, Sony, SharkNinja, and Allianz, Zingtree transforms self-service, uncovers and implements automation opportunities, and makes every customer service agent an expert.
• We are seeking a Senior Data Engineer who will design, build, and maintain scalable data processing systems and analytics platforms.
• In this role, you'll lead our efforts to create robust, in-house data infrastructure solutions to support our growing data needs and business intelligence requirements.
• Experience: 5+ years in data engineering, with expertise in large-scale systems and data infrastructure design.
• Technical Skills: Strong background in database technologies and SQL optimization; experience designing data warehousing solutions; proficiency in building and optimizing data pipelines; knowledge of distributed computing platforms (Spark, Flink, or similar); experience with columnar databases and OLAP systems (like Apache Druid).
• Infrastructure Knowledge: Experience with cloud infrastructure, particularly AWS; understanding of container orchestration with Kubernetes; familiarity with infrastructure-as-code practices.
• Specialized Expertise: Experience with stream processing and real-time analytics frameworks; knowledge of Go programming a plus; background with event data processing and analytics; familiarity with Kafka or similar message queuing systems.
• Analytical Mindset: Strong problem-solving abilities; experience troubleshooting complex data systems; capacity to evaluate and recommend appropriate technologies for specific use cases.
• Communication Skills: Ability to explain complex technical concepts to various stakeholders; experience collaborating with cross-functional engineering teams.
• Leadership Qualities: Self-motivated with a drive to continuously learn and improve; proactive approach to identifying and addressing technical challenges; experience mentoring junior engineers on data best practices.
• Organizational Abilities: Skills to manage multiple projects and priorities; adaptability to evolving technical requirements and business needs.
• Competitive Compensation - We offer fair compensation packages
• Equity stock options
• Comprehensive health insurance for employees and dependents
• Provident Fund contributions in compliance with EPF guidelines
• Generous Paid Parental Leave - Paid time off for parents to spend time with their new child
• Unlimited PTO - Take the time you need to recharge and bring your best self to work
• Flexible Remote Work - Work from anywhere
• Home Office Stipend - Receive up to $500 to create a great work environment at home, and $100 a month for Internet, phone, etc.
• Co-working Reimbursement - Expense up to $200 a month on co-working space
April 22
Leads projects for Cummins' analytics platform; collaborates with stakeholders to deliver data solutions.
Cassandra
Cloud
Distributed Systems
DynamoDB
ETL
Hadoop
HBase
IoT
Java
Kafka
MongoDB
Neo4j
Open Source
PySpark
Python
Scala
SDLC
Spark
SQL
SSIS
April 6
Manage the technology and platform for WEX's Data Lake needs, ensuring scalability and reliability.
AWS
Azure
Cloud
Dart
Kubernetes
SDLC
April 5
Senior Staff Engineer for the Data Lake House at WEX, focusing on technology development and platform implementation.
Apache
AWS
Azure
Cloud
Java
Kubernetes
Python
SDLC
Spark
March 25
Consultant/Senior Consultant specializing in data engineering and Azure solutions at Hitachi Solutions.
Ansible
Azure
Cloud
ETL
Jenkins
Puppet
SQL
SSIS
Terraform
March 24
Work as a Data Engineer at Centific, optimizing ETL pipelines and developing Python applications.
Azure
ETL
PySpark
Python
Spark