Abstract
Application and storage caches in modern systems must be manually tuned and sized in response to changing application workloads. A balance must be struck between cost, performance, and the revenue lost to cache-sizing mismatches. However, caches are inherently nonlinear systems, making this exercise equivalent to solving a maze in the dark.
Until now!
The industry’s first self-tuning and auto-scaling solution for application and storage caches leverages breakthrough predictive algorithms. Imagine your cache self-tuning in real-time in response to changing workloads, thereby unlocking huge improvements in cost, efficiency, performance and observability.
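To make the idea concrete, the sketch below shows a minimal, purely reactive form of cache auto-scaling: an LRU cache that grows or shrinks its own capacity based on the hit ratio observed over a window. The class name, thresholds, and resize factors here are illustrative assumptions only, not the predictive algorithms used by the solution described in this session.

```python
from collections import OrderedDict

class AdaptiveLRUCache:
    """Minimal LRU cache that periodically resizes itself from the observed
    hit ratio (a hypothetical sketch, not the product's predictive algorithm)."""

    def __init__(self, capacity=1024, low_water=0.80, high_water=0.95,
                 grow_factor=1.5, shrink_factor=0.8):
        self.capacity = capacity
        self.low_water = low_water      # grow if hit ratio drops below this
        self.high_water = high_water    # shrink if hit ratio stays above this
        self.grow_factor = grow_factor
        self.shrink_factor = shrink_factor
        self.store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)     # mark as most recently used
            self.hits += 1
            return self.store[key]
        self.misses += 1
        return None

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

    def autoscale(self):
        """Resize capacity based on the hit ratio since the last call."""
        total = self.hits + self.misses
        if total == 0:
            return
        hit_ratio = self.hits / total
        if hit_ratio < self.low_water:
            self.capacity = int(self.capacity * self.grow_factor)
        elif hit_ratio > self.high_water:
            self.capacity = max(64, int(self.capacity * self.shrink_factor))
            while len(self.store) > self.capacity:
                self.store.popitem(last=False)
        self.hits = self.misses = 0         # start a fresh observation window

if __name__ == "__main__":
    cache = AdaptiveLRUCache(capacity=4)
    for i in range(100):
        if cache.get(i % 10) is None:       # working set of 10 exceeds capacity of 4
            cache.put(i % 10, f"value-{i % 10}")
    cache.autoscale()                       # low hit ratio -> capacity grows
    print("new capacity:", cache.capacity)
```

A reactive threshold loop like this illustrates the feedback idea; a predictive approach instead anticipates the working-set change and resizes the tiers before the hit ratio degrades.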
In this session, we cover how an auto-scaling cache delivers impressive gains in performance and observability, together with a live demo of a multi-tiered cache scaling as the workload changes:
* Why static caches leave so much performance on the floor
* What auto-scaling for caches is and how the cache automatically adapts to changing application workloads
* The efficiency, QoS, performance SLAs/SLOs and cost tradeoffs that auto-scaling caches enable
* A live demo of an auto-scaling multi-tiered cache as the workload changes
Learning Objectives:
1. Why static caches leave so much performance on the floor
2. Understand what auto-scaling for caches is and how the cache automatically adapts to changing application workloads
3. Understand the efficiency, QoS, performance SLA/SLO, and cost tradeoffs that auto-scaling caches enable
4. See a live demo of an auto-scaling multi-tiered cache as the workload changes