
Artificial Intelligence • Enterprise • SaaS
Netomi specializes in AI-powered customer experience solutions. Its platform, Agentic OS, is designed for enterprise-scale customer service and integrates seamlessly with existing business systems. Netomi's solutions leverage generative AI and large language models to automate over 80% of customer inquiries, enhance customer satisfaction, and reduce support costs. Netomi is trusted by global brands across industries, offering secure, proactive, and predictive customer care through omnichannel support, including email, chat, messaging, SMS, social media, search, and voice. The company ensures compliance with stringent security standards and data protection regulations.

Responsibilities
• Define and execute the Quality Engineering vision and roadmap aligned with organizational goals.
• Build and lead a multi-disciplinary QA team (Platform QA, Data Quality, Automation, and Performance Engineering).
• Partner with engineering, product, and customer success leaders to ensure quality is embedded across the SDLC.
• Establish quality KPIs and dashboards for visibility into release readiness, product stability, and production health.
• Own functional and regression testing strategy across microservices, SDKs, APIs, and UI layers.
• Drive non-functional testing excellence: performance, scalability, reliability, failover, and compatibility.
• Lead security testing initiatives, collaborating with DevSecOps and compliance teams to ensure adherence to ISO, SOC 2, and GDPR standards.
• Introduce AI-augmented testing frameworks leveraging machine learning for predictive defect detection and data drift analysis.
• Partner with the Data Science org to establish data quality pipelines for training and inference data.
• Define testing standards for AI model accuracy, precision, bias, and drift monitoring.
• Ensure data-driven validation frameworks are integrated into MLOps and deployment cycles.
• Oversee QA processes for custom deployments, integrations, and client-specific configurations.
• Build automation frameworks to validate end-to-end customer experience workflows and integrations (Genesys, Salesforce, Zendesk, etc.).
• Institutionalize CI/CD-driven test automation for faster and safer deployments.
• Evaluate and implement next-gen QA tools for test management, performance benchmarking, and observability.
• Drive a shift-left testing culture and early defect detection through static analysis, contract testing, and chaos testing.
• Foster collaboration between QA, DevOps, and Observability teams on quality-in-production metrics.
Requirements
• 10+ years of experience in software quality engineering, with 3+ years leading QA teams in a SaaS or platform environment.
• Proven experience in AI/ML or data-centric product quality, including validation of data pipelines and ML models.
• Deep understanding of microservices architecture, cloud platforms (AWS/Azure), and CI/CD pipelines.
• Hands-on experience with automation frameworks (Selenium, Cypress, Playwright, TestNG, PyTest, etc.).
• Expertise in performance testing tools (JMeter, Gatling, Locust) and security testing (OWASP, Burp Suite, ZAP).
• Strong grasp of API and SDK testing, ideally for multi-platform environments (web, mobile, voice).
• Exceptional communication, leadership, and stakeholder management skills, with an ability to influence across functions.

Preferred Skills
• Experience in AI/ML testing, data validation, and model governance frameworks.
• Familiarity with observability stacks (ELK, Grafana, Datadog, OpenTelemetry) for production monitoring.
• Understanding of GenAI systems, prompt testing, and retrieval-augmented generation (RAG) validation.
• Exposure to B2B enterprise platforms and multi-tenant SaaS architectures.
• Knowledge of secure SDLC and compliance frameworks (SOC 2, ISO 27001).
What We Offer
• Competitive compensation, fast career progression, and global exposure.
• Work at the frontier of AI-driven customer experience.
• Influence quality strategy across platform, AI, and customer success ecosystems.
• Collaborate with top-tier engineers and data scientists in a high-growth, innovation-driven culture.