Scaling AI-Driven Operations to 40,000+ Daily Requests with a Contextual Data Layer
Stop letting fragmented data and sluggish AI response times stall your incident resolution. Zscaler solved the “latency-at-scale” crisis by transforming static runbooks into a high-velocity contextual data layer. By unifying multi-model data (graph and vector) with Arango’s Agentic AI Suite, they equipped their AI with the deep institutional intelligence required to shatter search bottlenecks. The result? Precise, reliable answers across 40,000 daily requests, with response times slashed to under 15 seconds.
What you will learn:
- Building a Contextual Foundation: How to integrate graph and vector data to provide AI agents with the “tribal knowledge” needed for complex troubleshooting.
- Eliminating the Latency Gap: The architectural secrets Zscaler used to reduce deep-search response times by 50% while handling 100+ concurrent users.
- Future-Proofing Scalability: Proven tactics using Partition IDs to manage growth from 250 to thousands of runbooks without degrading performance, while minimizing LLM costs.
- Deploying Agentic AI at Scale: Lessons learned from moving beyond basic RAG to a robust, production-grade retrieval system that supports 40,000+ daily requests.
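The graph-plus-vector pattern behind these points can be sketched in miniature: scope a vector search to one partition to keep retrieval (and LLM context) bounded, then follow graph edges to pull in related institutional knowledge. Everything below (the runbook chunks, toy embeddings, `RELATED_TO` adjacency map, and partition names) is a hypothetical illustration, not Zscaler’s or Arango’s actual implementation:

```python
import math

# Hypothetical toy corpus: each runbook chunk carries a partition ID,
# a tiny embedding, and graph edges to related chunks.
CHUNKS = {
    "rb1": {"partition": "network", "vec": [0.9, 0.1], "text": "Restart the edge proxy."},
    "rb2": {"partition": "network", "vec": [0.8, 0.2], "text": "Check tunnel health first."},
    "rb3": {"partition": "auth",    "vec": [0.1, 0.9], "text": "Rotate the SAML cert."},
}
RELATED_TO = {"rb1": ["rb2"], "rb2": ["rb1"], "rb3": []}  # graph edges

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, partition, k=1):
    """Partition-scoped vector search, then one graph hop of expansion."""
    # 1) The partition filter bounds how much data each query touches.
    scoped = {cid: c for cid, c in CHUNKS.items() if c["partition"] == partition}
    # 2) Vector similarity picks the top-k seed chunks.
    seeds = sorted(scoped,
                   key=lambda cid: cosine(query_vec, scoped[cid]["vec"]),
                   reverse=True)[:k]
    # 3) A graph hop adds related "tribal knowledge" chunks as context.
    context = list(seeds)
    for cid in seeds:
        for neighbor in RELATED_TO.get(cid, []):
            if neighbor not in context:
                context.append(neighbor)
    return [CHUNKS[cid]["text"] for cid in context]

print(retrieve([1.0, 0.0], "network"))
# → ['Restart the edge proxy.', 'Check tunnel health first.']
```

In production the vector and graph steps would be a single multi-model query against the database rather than in-process Python, but the shape of the retrieval (filter, rank, traverse) is the same.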
Innovate Together: Build the Future of Agentic AI
At Arango, we believe the most complex data challenges are solved through partnership and innovative solutions. Join us to see how we’re helping pioneers like Zscaler push the boundaries of what’s possible with an intelligent contextual data layer.
Speakers
Shekhar Iyer
CEO
Arango
Rajesh Beri
Senior Director, AI/ML Engineering
Zscaler