Recently Jonah Kowall of Gartner released a research note titled “Use Synthetic Monitoring to Measure Availability and Real-User Monitoring for Performance”. After reading this paper I had some thoughts that I wanted to share based upon my experience as a Monitoring Architect (and certifiable performance geek) working within large enterprise organizations. I highly recommend reading the research note as the information and findings contained within are spot on and highlight important differences between Synthetic and Real-User Monitoring as applied to availability and performance.
My Apps Are Not All 24×7
During my time working at a top 10 investment bank I came across many different applications with varying service level requirements. I say requirements because there were rarely any formal agreements or contracts in place, usually just an organizational understanding of how important each application was to the business and what service level it was expected to meet. Many of the applications in the bank's portfolio were only used during the trading hours of the exchanges they interfaced with. These applications had to be available right as the exchanges opened and had to perform well for the entire duration of trading activity. Before the open there was no real user activity, so the only way to gain any insight into the availability and performance of these applications was with synthetically generated transactions.
Was this an ideal situation? No, but it was all we had to work with in the absence of real user activity. If the synthetic transactions were slow or throwing errors, at least we could attempt to repair the platform before the opening bell. Once the trading day got started, we measured real user activity to see the true picture of performance and made adjustments based upon that information.
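To make this concrete, here is a minimal sketch of what such a pre-market synthetic probe might look like. The health-check URL, latency budget, and console alerting are hypothetical placeholders, not any particular vendor's tooling, and it assumes a runtime with a global fetch (for example, Node 18+).

```typescript
// Minimal sketch of a pre-market synthetic availability probe.
// HEALTH_URL and LATENCY_BUDGET_MS are hypothetical; adjust to your platform.
const HEALTH_URL = "https://trading-app.example.com/health";
const LATENCY_BUDGET_MS = 500;

async function probe(): Promise<void> {
  const start = Date.now();
  try {
    const res = await fetch(HEALTH_URL);
    const elapsed = Date.now() - start;
    if (!res.ok) {
      console.error(`ALERT: health check returned HTTP ${res.status}`);
    } else if (elapsed > LATENCY_BUDGET_MS) {
      console.warn(`WARN: health check took ${elapsed} ms (budget ${LATENCY_BUDGET_MS} ms)`);
    } else {
      console.log(`OK: health check responded in ${elapsed} ms`);
    }
  } catch (err) {
    console.error(`ALERT: health check failed outright: ${err}`);
  }
}

// Run on a schedule (e.g. every minute) in the window before the opening bell.
probe();
```

In practice a scheduler would fire this every minute or so before the open and page someone when it alerts, which is exactly the "repair before the bell" window described above.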
Can’t Script It All
Having to rely upon synthetic transactions as a measure of availability and performance is definitely suboptimal. The problem gets amplified in environments where you shouldn't be testing certain application functionality due to regulatory and other restrictions. Do you really want to be trading securities, derivatives, currencies, etc. with your synthetic transaction monitoring tool? Methinks not!
So now there is a gaping hole in your monitoring strategy if you are relying upon synthetic transactions alone. You can't test all of your business-critical functionality even if you wanted to spend the long hours scripting and testing your synthetics. The scripting and testing investment gets amplified when there are changes to your application code: if those code updates change the application response, you will need to re-script for the new response. It's a vicious cycle that doesn't happen when you use the right kind of real user monitoring.
Real User Monitoring: Accurate and Meaningful
When you monitor real user transactions you will get more accurate and relevant information. Here is a list (what would a good blog post be without a list?) of some of the benefits, followed by a quick sketch of what the browser-side piece can look like:
- Understand exactly how your application is being used.
- See the performance of each application function as the end user does, not just within your data center.
- No scripting required (scripting can take a significant amount of time and resources).
- Ensure full visibility of application usage and performance, not just what was scripted.
- Understand the real geographic distribution of your users and the impact of that distribution on end user experience.
- Track the performance of your most important users (particularly useful in trading environments).
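To make those benefits more concrete, here is a minimal sketch of a browser-side real-user beacon. The /rum/collect endpoint and the currentUser field are hypothetical placeholders; a real RUM agent captures far more detail, but the principle is the same: measure what actual users experience and report it, with no scripting involved.

```typescript
// Minimal sketch of a real-user monitoring beacon.
// "/rum/collect" and window.currentUser are hypothetical placeholders.
window.addEventListener("load", () => {
  // Defer one tick so loadEventEnd has been populated by the browser.
  setTimeout(() => {
    const t = performance.timing;
    const beacon = {
      page: location.pathname,
      // Lets you follow your most important users (e.g. key traders).
      user: (window as any).currentUser ?? "anonymous",
      // The load time the user actually experienced, end to end.
      totalMs: t.loadEventEnd - t.navigationStart,
    };
    // sendBeacon is fire-and-forget and does not block the UI thread.
    navigator.sendBeacon("/rum/collect", JSON.stringify(beacon));
  }, 0);
});
```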
Synthetic transaction monitoring and real user monitoring can definitely co-exist within the same application environment. Every business is different and has its own unique requirements that can impact the type of monitoring you choose to implement. If you've not yet read the Gartner research note, I suggest you go check it out now. It provides a solid analysis of synthetic and real user monitoring tools, companies, and usage scenarios, from an angle completely different from what I have covered here.
Has synthetic or real-user transaction monitoring saved the day for your company? I'd love to hear about it in the comments below.
Round Two – Last time I wrote a blog comparing APM with network-based APM tools, which I still consider NPM at its core regardless of what some critics and competitors claim. Let me make one thing clear, though: NPM is great for equipping IT network administrators to see how fast or slow data is traveling through the pipes of their application. Unfortunately, network-based APM tools simply cannot give App Ops granular visibility into the application runtime when isolating bottlenecks requires going beyond the system level to the transaction's final destination: the end user's browser.
I find several of the blogs and YouTube clips from such NPM vendors quite comical as they try to throw punches at APM companies. Their arguments center primarily on the claim that agent-based approaches are an inadequate APM solution for today's fickle, distributed application architectures. It's not like I haven't heard it before.
The amusing thing about it is that they're completely right! In fact, we couldn't agree more, and that's why Jyoti Bansal founded AppDynamics to address these perennial shortcomings legacy APM vendors have been ignoring. From the smallest businesses to the largest enterprises, organizations have complex applications that have outpaced their App Ops teams' current set of monitoring tools. That's why AppDynamics is reinventing and reigniting the application performance management space by enabling IT operations to monitor complex, modern applications running in the cloud or the data center. So let me respond to the claims they've made.
“Agents have high deployment and ongoing maintenance burden.”
Legacy APM: TRUE
AppDynamics: FALSE. No manual instrumentation required. It’s automatic.
“Agents are invasive which can perturb the systems being monitored.”
Legacy APM: TRUE
AppDynamics: FALSE. Our customers see overhead of 1-2% or less in production.
All AppDynamics. The next-gen of APM.
I drew a parallel in my previous post: using NPM concepts to monitor application performance is like inspecting FedEx packages en route to figure out why operations at a hub came to a screeching halt. Remember, even if the package contents are visible from afar, that doesn't explain why the hub conveyors, which electronically guide packages to their appropriate destination chutes, are broken, nor can it identify why cargo operations have stalled. In other words, good luck trying to gather anything beyond the scope of the application's infrastructure. Using network monitoring tools to collect even the most basic system health metrics such as CPU utilization, memory usage, thread pool consumption, and thrashing? Time to throw in the towel.
And what about End User Monitoring?
What's becoming just as important as monitoring server-side processing and network time is the ability to monitor end user performance. When NPM tools can only see the last packet sent from the server, how does that help you understand the browser's performance? It doesn't, since once again this kind of analysis is only feasible higher up the stack at the Application Layer. And just to clarify: when I say Application Layer, I mean application execution time, not "network process time to application" as defined by OSI Layer 7.
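As a rough illustration of why that browser-side view is only available in the browser itself, here is a small, simplified sketch using the standard Navigation Timing API; no appliance watching the last packet leave the server can compute this split.

```typescript
// Simplified sketch: splitting end-user load time into backend vs. browser time.
// NPM appliances stop seeing anything after the last response packet leaves the
// server; everything below is measured inside the user's browser.
window.addEventListener("load", () => setTimeout(() => {
  const t = performance.timing;
  const backendMs  = t.responseEnd  - t.requestStart;    // network + server processing
  const frontendMs = t.loadEventEnd - t.responseEnd;      // parsing, scripting, rendering
  const totalMs    = t.loadEventEnd - t.navigationStart;  // everything the user waited for
  console.log(`backend: ${backendMs} ms, browser: ${frontendMs} ms, total: ${totalMs} ms`);
}, 0));
```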
On top of that, what about customers running their applications in a public cloud? Are you going to convince your cloud provider to install a network appliance in their infrastructure? I highly doubt it. With AppDynamics, we have partnerships with cloud providers such as Amazon EC2, Azure, RightScale and Opsource, allowing developers and operations to easily deploy AppDynamics with the flick of a switch and monitor their applications in production 24/7.
Once again, next-gen APM triumphs over NPM-based application performance monitoring not just on the server side, but also in the browser. AppDynamics is embracing this and is fully aware of the technical and business significance of monitoring end user performance. We're delighted to offer this kind of end-to-end visibility to our customers, who can now monitor application performance from the end user's browser to the backend application tiers (databases, mainframes), all through a single pane of glass.
On Tuesday, Gartner announced this year’s Magic Quadrant for Application Performance Monitoring (APM). I’ll make a few observations from reading the MQ and then suggest 3 additional criteria that APM buyers should consider to make informed buying decisions.
The research report opens with an analysis of the APM market: 15% year-over-year growth and $2 billion in total market spend. These facts reflect what we see every day – the market for APM is very strong and benefits from the high growth in web-driven commerce. Web apps just can't be slow.
One key APM growth driver is that modern applications have become more difficult to monitor – with more moving parts and a higher rate of change. Gartner summarizes this nicely in their market overview:
“Unfortunately, at just the moment when executives have become keen about imposing an application-centric view of the world on IT operations, applications have become far more difficult to monitor; in general, architectures have become more modular, redundant, distributed and dynamic, often laying down the particular twists and turns that a code execution path could take at the latest possible moment.”