I’ve been looking forward to writing this blog for some time. Over the past two years I’ve worked with many enterprise customers to document the pain they solved using AppDynamics. A common question I always ask is, “What was the actual business impact of that slowdown or outage?” The answer, most of the time, is that customers guesstimate the revenue impact of slow performance and are generally nervous about calculating such a number.
They’re nervous because they might expose to the business how much revenue IT is costing it each year through incidents and outages. That’s certainly one way to look at things. However, if you flip the problem around, IT could actually show the business how much revenue it created as a result of agile releases or initiatives such as SOA, Cloud, and Virtualization.
Imagine a new application feature suddenly causing a 5% increase in revenue. Wouldn’t it be cool for IT to share that fact with the business? With AppDynamics’ new real-time business metrics, IT can do just that. Here’s how it works…
1. Monitoring Business Transactions
A business transaction is a type of user request in your application. AppDynamics can auto-discover these transactions and monitor their response times, which allows IT to see the real end user experience and detect problems the instant they happen. For example, below is a Checkout transaction from one of our customers: it was requested 4,639 times, it had 53 errors, and over 700 requests were classified as slow relative to their normal performance baseline.
2. Extracting Revenue Metrics from Business Transactions
Once you start discovering and monitoring the performance of business transactions, the next step is to define which key business data you want to extract and report on. In AppDynamics you can define “Information Points,” which are essentially custom metrics built by extracting method parameters from application code. For example, in the below screenshot I created an information point called “Checkout Revenue” and specified the application code where AppDynamics should extract the revenue values; in this example, it was the method signature:
I then created a custom metric called Checkout Revenue based on a SUM operation on the getter chain:
AppDynamics will now extract the checkout revenue value from every transaction and make it available as a new metric, “Checkout Revenue,” which can be reported in real time just like any other AppDynamics metric.
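The extraction mechanism is easiest to see against concrete code. The sketch below is purely illustrative: every class, method, and getter name is invented, since a real information point is configured against your own application’s method signatures. It mimics what happens when a getter chain like `getTotal()` is applied to a method parameter and the values are combined with a SUM operation:

```java
// Hypothetical model of what an information point extracts. All names here
// are invented for illustration, not part of any AppDynamics API.
class Order {
    private final double total;
    Order(double total) { this.total = total; }
    public double getTotal() { return total; }
}

public class CheckoutService {
    // Running total, mimicking the SUM operation behind "Checkout Revenue".
    private double checkoutRevenue = 0.0;

    // An information point on this method could apply the getter chain
    // order.getTotal() to pull a revenue value out of every invocation.
    public void processCheckout(Order order) {
        checkoutRevenue += order.getTotal();
    }

    public double getCheckoutRevenue() { return checkoutRevenue; }
}
```

In the real product no code changes are needed; the agent performs this extraction at the configured method without modifying your source.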
3. Correlating Application Response Times with Application Revenue
Now that AppDynamics is monitoring the performance and revenue of your business transactions, it’s possible to correlate and report these metrics over time so IT can understand their relationship. Take the below example, which shows the revenue per minute vs. the response time per minute of the application. As the screenshot shows, it’s pretty clear what the real impact of this slowdown was to the business. Now imagine the reverse: the application gets faster, and that has a positive impact on revenue and transaction throughput. Wouldn’t it be great to track this information over time so you can see the real impact of agile release cycles?
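Once both metrics exist as per-minute series, their relationship can be quantified. The sketch below is not an AppDynamics feature, just the underlying statistics: a strongly negative Pearson correlation between response time and revenue suggests slowdowns are costing money.

```java
// Minimal sketch: Pearson correlation between two per-minute metric series,
// e.g. response time (x) and revenue (y). Assumes equal-length arrays.
public class MetricCorrelation {
    public static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;   // scaled covariance
        double vx = sxx - sx * sx / n;    // scaled variance of x
        double vy = syy - sy * sy / n;    // scaled variance of y
        return cov / Math.sqrt(vx * vy);  // -1 .. +1
    }
}
```

A value near -1 over a week of data would be strong evidence that response time, not some external factor, is driving the revenue dips.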
4. Creating Real-time Business Dashboards
Today nearly every monitoring dashboard is about application response times or the health and resource usage of infrastructure. So when something glows red or flashes on a dashboard, it denotes that something very bad is happening. The reality is that most dashboards glow red every day when performance and resource usage spike. When is a problem really a problem? With real-time business metrics you can now mash and fuse business KPIs with your application and infrastructure metrics, so when something turns red you can see the revenue impact of the issue.
5. Being Proactive with Business Alerts
Looking at monitoring dashboards periodically (like the above) is the first step to being proactive about business impact. However, if you want to be truly proactive you need to automate this entire process and let your monitoring solution do the alerting for you. The great thing about AppDynamics is that it can self-learn the normal value of every metric it collects and create a dynamic baseline (threshold) over time. This allows it to accurately detect deviations caused by abnormal activity. So just as we can detect deviations in application performance, we can now do the same for application revenue or order throughput. For example, one of our customers, Orbitz, put it this way:
“If we’ve sold less than $1,000 in five minutes, there is probably a problem, even if it’s 2 o’clock in the morning. If our sales have flatlined, that’s a critical problem. I don’t know how to be any clearer.”
Geoff Kramer, Manager of Quality Engineering at Orbitz Worldwide
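The self-learned baseline idea can be sketched as a rolling mean and standard-deviation check. This is a deliberately minimal illustration with an invented threshold, not AppDynamics’ actual baselining algorithm:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of dynamic baselining over a rolling window. A sample is
// flagged as abnormal if it deviates more than 3 standard deviations from
// the learned mean. Window size and threshold are illustrative choices.
public class Baseline {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int size;

    public Baseline(int size) { this.size = size; }

    // Returns true once enough history exists AND the sample is abnormal.
    public boolean isAnomaly(double sample) {
        boolean anomaly = false;
        if (window.size() == size) {
            double mean = window.stream().mapToDouble(d -> d).average().orElse(0);
            double var = window.stream()
                    .mapToDouble(d -> (d - mean) * (d - mean)).average().orElse(0);
            anomaly = Math.abs(sample - mean) > 3 * Math.sqrt(var);
            window.removeFirst();
        }
        window.addLast(sample);
        return anomaly;
    }
}
```

A flatlined revenue stream like the one Orbitz describes would trip a check like this immediately: after a window of healthy per-minute sales, a sudden run of near-zero samples falls far outside the learned range.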
The ability to alert on business impact vs. application or infrastructure performance can be a game changer. It helps IT truly align with the priorities and needs of the business, allowing them to speak the same language and manage the bottom line.
You can get started today with real-time business metrics by signing up and taking a free trial of AppDynamics Pro here.
Since we launched our Managed Service Provider program late last year, we’ve signed up many MSPs that were interested in adding Application Performance Management-as-a-Service (APMaaS) to their service catalogs. Wouldn’t you be excited to add a service that’s easy to manage but more importantly easy to sell to your existing customer base?
Service providers like Scicom definitely were (check out the case study), because they are being held responsible for the performance of their customers’ complex, distributed applications, but oftentimes don’t have visibility inside the actual application. That’s like being asked to officiate an NFL game with your eyes closed.
The sad truth is that many MSPs still think that high visibility in app environments equates to high configuration, high cost, and high overhead.
Thankfully this is 2013. People send emails instead of snail mail, play Call of Duty instead of Pac-Man, listen to Pandora instead of cassettes, and can have high visibility in app environments with low configuration, low cost, and low overhead with AppDynamics.
Not only do we have a great APM service to help MSPs increase their Monthly Recurring Revenue (MRR), but we also make it extremely easy for them to deploy this service in their own environments, which, to be candid, is half the battle. MSPs can’t spend countless hours deploying a new service. It takes focus and attention away from their core business, which in turn could endanger the SLAs they have with their customers. Plus, it’s just really annoying.
Introducing: APMaaS in a Box
Here at AppDynamics, we take pride in delivering value quickly. Most of our customers go from nothing to full-fledged production performance monitoring across their entire environment in a matter of hours in both on-premise and SaaS deployments. MSPs are now leveraging that same rapid SaaS deployment model in their own environments with something that we like to call ‘APMaaS in a Box’.
At a high level, APMaaS in a Box is large cardboard box with air holes and a fragile sticker wherein we pack a support engineer, a few management servers, an instruction manual, and a return label…just kidding…sorry, couldn’t resist.
Simply put, APMaaS in a Box is a set of files and scripts that allows MSPs to provision multi-tenant controllers in their own data center or private cloud and provision AppDynamics licenses for customers themselves…basically it’s the ultimate turnkey APMaaS.
By utilizing AppDynamics’ APMaaS in a Box, MSPs across the world are leveraging our quick deployment, self-service license provisioning, and flexibility in the way we do business to differentiate themselves and gain net new revenue.
Within six hours, MSPs like NTT Europe that use our APMaaS in a Box capabilities have all the pieces in place to start monitoring the performance of their customers’ apps. Now that’s some rapid time to value!
Self-Service License Provisioning
MSPs can provision licenses directly through the AppDynamics partner portal. This gives you complete control over who gets licenses and makes it very easy to manage this process across your customer base.
An MSP can get started on a month-to-month basis with no commitment. Only paying for what you sell eliminates the cost of shelfware. MSPs can also sell AppDynamics however they would like to position it, and can float licenses across customers. NTT Europe uses a three-tier service offering so customers can pick and choose the APM services they’d like to pay for. Feel free to get creative when packaging this service for customers!
As more and more MSPs move up the stack from infrastructure management to monitoring the performance of their customers’ distributed applications, choosing an APM partner that understands the Managed Services business is of utmost importance. AppDynamics’ APMaaS in a Box capabilities align well with internal MSP infrastructures, and our pricing model aligns with the business needs of Managed Service Providers; we’re a perfect fit.
MSPs who continue to evolve their service offerings to keep pace with customer demands will be well positioned to reap the benefits and future revenue that come along with staying ahead of the market. To paraphrase The Great One, MSPs need to “skate where the puck is going to be, not where it has been.” I encourage all you MSPs out there to contact us today to see how we can help you skate ahead of the curve and take advantage of the growing APM market with our easy-to-use, easy-to-deploy APMaaS in a Box. If you don’t, your competition will…
Every day we rely on services provided by other people: making a phone call, getting a car fixed, or ordering a pizza. We want those things to happen as quickly as possible, because time often means money. If you take your car to a Mercedes or BMW dealer, you’ll understand this point better than anyone. Our productivity (and often our happiness) is therefore controlled, every day, by different organizations and people. When things slow down or don’t happen, we get upset, frustrated, and sometimes rant on Twitter like these folks:
If your application today follows SOA design principles, is heavily distributed, and relies on 3rd party service providers, then you’ve probably become frustrated at some point when your application slows down or crashes. The problem is this: your end user experience and quality of service (QoS) is only as good as the QoS of your service providers. So, unless you monitor QoS you can’t measure QoS, and if you can’t measure QoS, you can’t manage your service providers or your end user experience. For example, take a look at this customer e-commerce application, which has 7 JVMs, 1 database, and 7 external web service providers:
This customer recently had a slowdown with their e-commerce production application. After a few minutes browsing AppDynamics, they successfully identified that one of their web service providers was having latency issues (AppDynamics automatically baselines performance and flags deviations for each web service provider as shown in the above screenshot). The customer called their service provider, and sure enough the service provider admitted to having issues. A few hours later the service provider called back and said “we fixed the problem, everything should be back to normal”–yet the customer could clearly see latency issues still occurring in AppDynamics. So they sent their service provider a screenshot showing the evidence. The service provider then checked again, and called back a few minutes later saying “Yes, sorry a few customers are still being impacted.” Without this level of visibility, many organizations are simply blind to how external service providers impact their end user experience and business.
Being able to troubleshoot slow performance in minutes is helpful, but what about being able to report the exact service level you receive–say, from each of your service providers over a period of time? Did your service improve over time or did it regress? How many outages or severity 1 incidents did your service providers cause this week for your application?
Take the below screenshot from AppDynamics, which plots the maximum response time for five different web services consumed by an application over the last week. You can see that three of the five web services (denoted by the pink, blue, and turquoise lines) consistently deliver sub-second response times and provide a great service level. However, the other two web services (the red and green lines) show performance spikes with response times between 14 and 22 seconds. The green web service in particular is very inconsistent, showing several performance spikes in two days.
Below is the response time of another web service (PayPal) for a customer application over the last 3 months. Notice the spikes in response time, and look at the deviation between average and maximum response time over the period. What’s impressive is that despite the occasional service blip, the PayPal service has slowly improved by 14%, from 450 milliseconds to around 385 milliseconds. It has also been very stable over the last few weeks, delivering a consistent service (a small deviation between average and maximum response time).
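For reference, the improvement figure above is simple arithmetic over the recorded response times. AppDynamics reports these numbers automatically; the sketch below just shows the math, assuming you had logged per-call response times in milliseconds yourself:

```java
// Hedged sketch of the math behind a weekly QoS report for a third-party
// web service. Not an AppDynamics API; purely illustrative helpers.
public class QosReport {
    // Percentage improvement from a previous average to the current one,
    // e.g. 450 ms down to 385 ms is roughly a 14% improvement.
    public static double improvementPercent(double previousAvgMs, double currentAvgMs) {
        return (previousAvgMs - currentAvgMs) / previousAvgMs * 100.0;
    }

    public static double average(double[] samplesMs) {
        double sum = 0;
        for (double s : samplesMs) sum += s;
        return sum / samplesMs.length;
    }

    // The max exposes spikes that a healthy-looking average can hide.
    public static double max(double[] samplesMs) {
        double m = Double.NEGATIVE_INFINITY;
        for (double s : samplesMs) m = Math.max(m, s);
        return m;
    }
}
```

Comparing average against max week over week is what reveals both the slow improvement trend and the occasional blips.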
If your application relies on one or more 3rd party web services, you should periodically check and report what level of service you are receiving each week. That way, you can truly understand your service provider QoS and its impact on your end user experience and application performance. You can also keep your service providers honest, with complete visibility of whether QoS is improving or degrading over time as service outages occur and are fixed.
The next time you experience a slowdown or outage in your application, you should check external web services before you start to troubleshoot your own. The last thing you want to be doing is debugging your own code when it could be someone else’s service and code causing the issue. Using AppDynamics it’s possible to monitor, measure, and manage the QoS from each of your web service providers. You can get started right now by downloading AppDynamics Lite (our free edition) for a single JVM or IIS web server, or you can request a 30-day trial of AppDynamics Pro (our commercial edition) for Java or .NET applications with multiple JVMs and IIS web servers.
Last week I flew into Las Vegas for #Interop fully suited and booted in my big blue costume (no joke). I’d been invited to speak in a vendor debate on User eXperience (UX): Monitor the Application or the Network? NetScout represented the Network, AppDynamics (and me) represented the Application, and “Compuware dynaTrace Gomez” sat on the fence representing both. Moderating was Jim Frey from EMA, who did a great job introducing the subject, asking the questions and keeping the debate flowing.
At the start, each vendor gave their usual intro and company pitch, followed by their own definition of what User Experience is.
Defining User Experience
So at this point you’d probably expect me to blabber on about how application code and agents are critical for monitoring the UX? Wrong. For me, users experience “Business Transactions”; they don’t experience applications, infrastructure, or networks. When a user complains, they normally say something like “I can’t log in” or “My checkout timed out.” I can honestly say I’ve never heard one say, “The CPU utilization on your machine is too high,” or “I don’t think you have enough memory allocated.”
Now think about that from a monitoring perspective. Do most organizations today monitor business transactions? Or do they monitor application infrastructure and networks? The truth is the latter, normally with several toolsets. So the question “Monitor the Application or the Network?” is really the wrong question for me. Unless you monitor business transactions, you are never going to understand what your end users actually experience.
Monitoring Business Transactions
So how do you monitor business transactions? The reality is that both application and network monitoring tools are capable of it, but most solutions have been designed not to, providing instead a more technical view for application developers and network engineers. This is wrong, very wrong, and a primary reason why IT never sees what the end user sees or complains about. Today, SOA means applications are more complex and distributed, so a single business transaction could traverse multiple applications that potentially share services and infrastructure. If your monitoring solution doesn’t have business transaction context, you’re basically blind to how application infrastructure is impacting your UX.
The debate then switched to how monitoring the UX differs from an application and network perspective. Simply put, application monitoring relies on agents, while network monitoring relies on sniffing network traffic passively. My point here was that you can either monitor user experience with the network or you can manage it with the application. For example, with network monitoring you only see business transactions and the application infrastructure, because you’re monitoring at the network layer. In contrast, with application monitoring you see business transactions, application infrastructure, and the application logic (hence why it’s called application monitoring).
Monitor or Manage the UX?
Both application and network monitoring can identify and isolate UX degradation, because they see how a business transaction executes across the application infrastructure. However, you can only manage UX if you can understand what’s causing the degradation. To do this you need deep visibility into the application run-time and logic (code). Operations telling a development team that their JVM is responsible for a user experience issue is a bit like FedEx telling a customer their package is lost somewhere in Alaska. Identifying and isolating pain is useful, but one could argue it’s pointless without being able to manage and resolve the pain (by finding the root cause).
NetScout made the point that with network monitoring you can identify common bottlenecks in the network that are responsible for degrading the UX. I have no doubt you could, but if you look at the most common reason for UX issues, it’s related to change, and if you look at what changes the most, it’s application logic. Why? Because Development and Operations teams want to be agile so their applications and business remain competitive in the marketplace. Agile release cycles mean application logic (code) constantly changes. It’s therefore not unusual for an application to change several times a week, and that’s before you count hotfixes and patches. So if applications change more often than the network does, one could argue that application monitoring is more effective for monitoring and managing the end user experience.
UX and Web Applications
We then debated which monitoring concept was better for web-based applications. Obviously, network monitoring is able to monitor the UX by sniffing HTTP packets passively, so it’s possible to get granular visibility on QoS in the network and application. However, the recent adoption of Web 2.0 technologies (Ajax, GWT, Dojo) means application logic is now moving from the application server to the user’s browser, so browser processing time becomes a critical part of the UX. Unfortunately, network monitoring solutions can’t monitor browser processing latency (because they monitor the network), unlike application monitoring solutions, which can use techniques like client-side instrumentation or web-page injection to obtain browser latency for the UX.
The C Word
We then got to the Cloud and which approach made more sense for monitoring UX there. Well, network monitoring solutions are normally hardware appliances that plug directly into a network tap or span port. I’ve never asked, but I’d imagine the guys in Seattle (Amazon) and Redmond (Windows Azure) probably wouldn’t let you wheel a network monitoring appliance into their data center. More importantly, why would you need to if you’re already paying someone else to manage your infrastructure and network for you? Moving to the Cloud is about agility, and letting someone else deal with the hardware and pipes so you can focus on making your application and business competitive. It’s actually very easy for application monitoring solutions to monitor UX in the cloud. Agents can piggyback with application code libraries when they’re deployed to the cloud, or cloud providers can embed and provision vendor agents as part of their server builds and provisioning process.
What’s also interesting is that Cloud is highlighting a trend towards DevOps (or NoOps for a few organizations), where Operations becomes more focused on applications vs. infrastructure. As the network and infrastructure become abstracted in the public cloud, the focus naturally shifts to the application and the deployment of code. For private clouds you’ll still have network Ops and Engineering teams that build and support the Cloud platform, but they won’t be the people who care about user experience. Those people will be the Line of Business or application owners whom the UX impacts.
In reality most organizations today already monitor the application infrastructure and network. However, if you want to start monitoring the true UX, you should monitor what your users experience, and that is business transactions. If you can’t see your users’ business transactions, you can’t manage their experience.
What are your thoughts on this?
I did have an hour spare at #Interop after my debate to meet and greet our competitors, before flying back to AppDynamics HQ. It was nice to see many of them meet and greet the APM Caped Crusader.
App Man.