Auto Scale or Auto Waste of Resources?

By Lane Inman | Aug 22, 2019

It’s a major accomplishment for traditional IT shops: moving from siloed, hand-built hardware environments to one that automatically responds to the needs of the business. When alerted on utilization, automation takes over and deploys new kit in the event of workload saturation. This is particularly useful when one has a number of pre-defined blocks of hardware with known, pre-existing performance thresholds.

This approach is severely outdated! Today’s cloud providers offer services whose capacity can fluctuate by orders of magnitude, and the “dream” of automatic deployment rarely accounts for the actual state of the underlying infrastructure. If a machine is over-utilized, you can deploy another of the same size, same operating system — in theory, the exact same machine — but there is no guarantee that the additional machine delivers anything close to the expected performance.

We’ve had a number of clients come to us with performance challenges in environments leveraging both private and public clouds. For both auto and manual scaling, the process is straightforward: when virtual machines run short on resources due to excessive CPU usage, you deploy another machine or grow the existing one. This would work perfectly if virtual machines provided consistent performance; however, they do not.
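The scale-out rule described above can be sketched in a few lines. This is an illustrative simplification (the function name, threshold, and sample window are assumptions, not any provider's API); real autoscalers add cooldown periods, multiple metrics, and provider-specific deployment calls.

```python
def should_scale_out(cpu_samples, threshold=0.80, sustained=3):
    """Naive scale-out rule: trigger when CPU utilization has
    exceeded `threshold` for the last `sustained` samples."""
    recent = cpu_samples[-sustained:]
    return len(recent) == sustained and all(s > threshold for s in recent)

print(should_scale_out([0.55, 0.83, 0.86, 0.91]))  # True: 3 sustained high samples
print(should_scale_out([0.92, 0.40, 0.35, 0.30]))  # False: spike was not sustained
```

Note what the rule never checks: whether the new machine that gets deployed actually performs like the one being relieved — which is precisely the gap discussed below.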

Take, for instance, the case of a client who came to us with fairly significant issues after a migration to a major cloud provider, shown in Table 1.

In February, our client followed the accepted practice of increasing the size of the underlying servers, expecting a much healthier platform. The client went from a 2:4 to a 4:8 machine and was happy with the results.

Leveraging our measures, we were able to share with the client a particularly disturbing trend. Though they now had a larger (more expensive) cloud offering, it was LESS capable than the 2:4 machine they had originally tested and purchased. More importantly, the increase in CPU did little to nothing to address the actual issue: the IO capability of the underlying storage was also lower, resulting in increased contention.

A more effective method for this particular issue would be to identify an offering with a cost profile similar to the original design, verify that its quality of service actually matched by using Krystallize Technologies Measures, and only then deploy the new workloads to it.
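The underlying idea — verify delivered performance rather than trusting the advertised size — can be illustrated with a generic timing loop. Krystallize's actual measurement methodology is proprietary and not shown in this article; the function below is purely a hypothetical stand-in that scores a fixed unit of CPU work, so two instances of the "same" size can be compared before workloads are committed to them.

```python
import time

def cpu_work_score(iterations=2_000_000):
    """Return work units per second for a fixed arithmetic loop.
    A lower score on a 'bigger' instance is the red flag described
    in the case study above."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i  # fixed, deterministic unit of CPU work
    elapsed = time.perf_counter() - start
    return iterations / elapsed

# Run once on the known-good baseline machine and once on the
# candidate; in practice these would execute on different hosts.
baseline = cpu_work_score()
candidate = cpu_work_score()
print(f"candidate delivers {candidate / baseline:.2f}x baseline throughput")
```

A single-threaded loop like this only probes CPU; the case study shows storage IO must be scored the same way, since that was where the real contention appeared.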


Our Performance Pros are ready to help.