By now you’ve certainly heard of – or perhaps been impacted by – Meltdown and Spectre, two newly discovered vulnerabilities that affect nearly every modern processor. If not, you might want to take a moment and visit Meltdownattack.com for a good overview, and Ars Technica for some good examples of how these vulnerabilities affect processors.
As with any highly visible and impactful public vulnerability, companies quickly shift into an all-hands-on-deck operational mode once vendor patches are available. This particular set of patches was advertised as likely to degrade system performance once applied, since the patch code must compensate for a vulnerability in the hardware itself. Both the rush to patch and the performance cost of the patches can lead to a host of negative consequences, including a degraded user experience, which typically translates into lost revenue or other harm to the business. As a result, teams should spend additional time on performance testing to ensure the viability of their systems post-patch.
As a leader of our SaaS team, I have been heavily involved in our own evaluation of the impact of patching Meltdown and Spectre. This post is meant to share our experience and how you might apply it to your own patch process.
Our SaaS platform’s customers run mission-critical applications. It is therefore very important for us to understand the impact these patches might have on our environment, so we can make any operational adjustments needed post-patch and our customers can continue to rely on our software to run their businesses. We have been testing the Meltdown and Spectre patches in-house in pre-production, leveraging our own platform to better gauge potential impacts on both our SaaS infrastructure and a typical on-premises environment.
We use AppDynamics to identify all of the business transactions that touch various components of our infrastructure (databases, message queues, caches, and so on), to see exactly where in the code transactions are negatively impacted, and to identify adjustments that might help mitigate the performance degradation.
A key to identifying these performance degradations is AppDynamics’ dynamic baselining, which learns how business transactions and various system metrics behave throughout the day and week. This means that once we apply a patch, we can compare post-patch performance to what we’ve seen previously – something you cannot do when relying on static thresholds to tell you if there is a problem.
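AppDynamics builds these baselines for you, but the underlying idea is easy to illustrate. Here is a minimal sketch in Python (the data shapes and the three-sigma rule are assumptions for the example, not AppDynamics’ actual algorithm): learn per-hour-of-week behavior from history, then flag post-patch samples that fall well outside it.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical sketch of dynamic baselining: learn what "normal" looks
# like for each hour of the week, then compare post-patch samples to it.

def build_baseline(history):
    """history: list of (hour_of_week, response_time_ms) tuples."""
    buckets = defaultdict(list)
    for hour_of_week, response_ms in history:
        buckets[hour_of_week].append(response_ms)
    # Baseline per bucket: mean and standard deviation of past behavior.
    return {
        hour: (mean(samples), stdev(samples))
        for hour, samples in buckets.items()
        if len(samples) >= 2
    }

def is_degraded(baseline, hour_of_week, response_ms, sigmas=3.0):
    """Flag a post-patch sample that falls outside the learned baseline."""
    if hour_of_week not in baseline:
        return False  # no history for this slot, so we can't judge
    mu, sd = baseline[hour_of_week]
    return response_ms > mu + sigmas * sd

# Example: if Tuesday 10:00 (hour-of-week 34) historically averaged
# ~120ms, a 310ms post-patch reading stands out against the baseline,
# even though a static 500ms threshold would never fire.
```

The point of bucketing by hour of week is that “normal” for Monday 9 a.m. is very different from “normal” for Sunday 3 a.m.; a single static threshold has to be loose enough for the worst hour, which makes it blind to degradations everywhere else.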
We aren’t just applying baselines to our technical performance metrics, either. We’re leveraging our own custom dashboards in Business iQ to track the key transactions and data that drive our business KPIs, which lets us easily identify which patching-related slowdowns should be addressed first. Since we may need to roll out patches before every slowdown is addressed, we want to minimize business impact while doing the right thing, security-wise.
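As a toy illustration of that prioritization (the transaction names, revenue weights, and scoring below are assumptions for the example, not how Business iQ works internally), you can think of it as ranking each slowdown by the business value it puts at risk, not just by the latency change:

```python
# Hypothetical prioritization sketch: rank patch-related slowdowns by
# business impact, not just raw latency change. All figures are made up.

transactions = [
    # (name, baseline_ms, post_patch_ms, revenue_per_hour)
    ("checkout",      250,   420, 50_000),
    ("search",        120,   300,  8_000),
    ("report-export", 900, 2_100,    500),
]

def impact_score(baseline_ms, current_ms, revenue_per_hour):
    slowdown = max(0.0, current_ms / baseline_ms - 1.0)
    return slowdown * revenue_per_hour  # weight latency pain by $ at stake

ranked = sorted(transactions,
                key=lambda t: impact_score(t[1], t[2], t[3]),
                reverse=True)
for name, *_ in ranked:
    print(name)  # checkout ranks first: smaller slowdown, far more revenue
```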
As you might imagine, the environment underlying the AppDynamics platform is extremely complex. We have a private data center with software running on bare metal servers, and we leverage public cloud providers for many of our services. Updating each of these environments carries different cost implications that we consider, and you may be in a similar situation.
In the interest of rolling out patches as soon as possible, a short-term answer to any performance impact may be to change the computing environment itself until you’ve had a chance to update your code. For instance, if the average response time of a critical business transaction doubles due to resource contention during testing, you may decide to increase the size of your cloud instances, or add instances to help carry the load until you’ve redesigned your implementation.
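To make that decision rule concrete, here is a hypothetical sketch; the 1.5x tolerance and the assumption that response time scales roughly with load per instance are illustrative, not recommendations:

```python
import math

# Hypothetical sketch: decide whether to add capacity post-patch based on
# customer-facing response time rather than raw CPU numbers.

def instances_needed(baseline_ms, current_ms, current_instances,
                     degradation_limit=1.5):
    """Suggest an instance count for a degraded business transaction.

    baseline_ms: pre-patch average response time for the transaction
    current_ms:  post-patch average response time
    """
    ratio = current_ms / baseline_ms
    if ratio <= degradation_limit:
        return current_instances  # within tolerance; leave the fleet alone
    # Crude first approximation: add capacity in proportion to the
    # slowdown until the implementation itself can be reworked.
    return math.ceil(current_instances * ratio / degradation_limit)

# e.g. a transaction that doubled from 200ms to 400ms on 4 instances:
# instances_needed(200, 400, 4) -> 6, a stopgap while the code is fixed.
```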
In our private data center, bare metal servers provide a fixed amount of compute; the only way to add capacity when performance degrades is to increase the number of servers allocated to a given task, which has obvious cost implications. In the cloud, our systems run on virtual machines, allowing us to increase instance size or simply allocate more instances. There may be cost trade-offs between larger instances and more instances, as everyone’s situation is unique. We rely on our own software to understand how our servers and instances are being utilized, which feeds into our capacity planning updates.
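A quick back-of-the-envelope comparison is often enough to frame that scale-up vs. scale-out trade-off. All prices, instance types, and counts below are made up purely for illustration:

```python
# Hypothetical cost comparison: scale up (bigger instances) vs. scale out
# (more of the same). Every number here is an illustrative assumption.

HOURLY_RATE = {"large": 0.20, "xlarge": 0.40}  # $/hr, made-up prices

def monthly_cost(instance_type, count, hours=730):
    return HOURLY_RATE[instance_type] * count * hours

# Say patching costs ~30% throughput on a 10-node "large" fleet.
scale_out = monthly_cost("large", 13)   # 3 extra nodes to cover the loss
scale_up  = monthly_cost("xlarge", 7)   # fewer, bigger nodes instead

print(f"scale out: ${scale_out:,.0f}/mo, scale up: ${scale_up:,.0f}/mo")
# Whichever wins on price, utilization data from your APM tooling is what
# confirms the fleet actually needs the extra headroom.
```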
A note on the example above: I used average response time as the decision point for whether to change our environment, not just a technical metric like CPU utilization. It’s important to focus on the actual customer impact of system performance, and then use system metrics to help understand root cause. You don’t want to rush into environment changes for things that may not impact your business.
After extensive testing in our development environments, we are currently deploying canary releases – applying the upgrades to a small percentage of isolated production systems – and monitoring the results to gauge the impact of the patches on application performance. This has proven a useful method for identifying potential performance impacts without affecting the entire production environment.
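To show the kind of check a canary stage can run, here is a hypothetical sketch that compares a high-percentile latency from the patched canary against the unpatched fleet and holds the rollout if the gap exceeds a tolerance; the 95th percentile and the 20% tolerance are assumptions for the example:

```python
# Hypothetical canary check: compare patched canary latency against the
# unpatched control fleet before widening the rollout.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(round(pct / 100 * (len(ordered) - 1))))
    return ordered[idx]

def canary_healthy(control_ms, canary_ms, pct=95, tolerance=1.2):
    """True if the canary's p95 latency is within 20% of the control's."""
    return percentile(canary_ms, pct) <= tolerance * percentile(control_ms, pct)

control = [110, 120, 125, 130, 140, 150, 160, 180, 200, 240]
canary  = [115, 130, 140, 150, 170, 190, 220, 260, 300, 380]

if not canary_healthy(control, canary):
    print("Canary degraded beyond tolerance - hold the rollout")
```

Comparing percentiles rather than averages matters here: patch overhead often shows up first in the slowest requests, and an average can hide a badly degraded tail.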
To recap, a few things you should consider when preparing to apply patches:
– The potential impacts of patching on the user experience.
– Using an APM solution with dynamic baselining to monitor and compare performance before and after patching.
– Understanding the business impact of patching, which may influence the short-term trade-offs you make before addressing issues in the long term.
– Using canary releases to gauge the impact of any patches on application performance, without the risk of a full rollout.
When the inevitable happens and you find yourself preparing to apply patches, remember to keep your users’ experience – and the application performance driving that experience – front of mind. Better yet, start preparing now and experience all the other benefits that AppDynamics has to offer with a 15-day free trial!