Every production web application should use web analytics. There are many great free tools for web analytics, the most popular of which is Google Analytics. Google Analytics helps you analyze visitor traffic and paint a complete picture of your audience and their needs. Web analytics solutions provide insight into how people discover your site, what content is most popular, and who your users are. Modern web analytics also provide insight into user behavior, social engagement, client-side page speed, and the effectiveness of ad campaigns. Any responsible business owner should be data-driven and leverage web analytics solutions to learn more about their end users.
Web Analytics Landscape
While Google Analytics is the most popular and the de facto standard in the industry, there are quite a few quality web analytics solutions available in the marketplace:
- Google Analytics
- Adobe Digital Analytics (formerly Omniture)
- IBM Digital Analytics (formerly Coremetrics)
The Forrester Wave Report provides a good guide to choosing an analytics solution.
There are also many specialized web analytics solutions that I think are worth mentioning. They are geared either towards mobile applications or towards getting better analytics on your customers’ interactions.
Once you understand your user demographics, it’s great to be able to get additional information about how performance affects your users. Web analytics tells you only one side of the story: the client side. If you are integrating web analytics, check out Segment.io, which provides analytics.js for easy integration of multiple analytics providers.
It’s all good – until it isn’t
Using Google Analytics on its own is fine and dandy – until you have performance problems in production and need visibility into what’s going on. This is where application performance management (APM) solutions come in. APM tools like AppDynamics provide the added benefit of understanding both the server side and the client side. Not only can you understand application performance and user demographics in real time, but when you have problems you can use code-level visibility to understand the root cause of your performance problems. Application performance management is the perfect complement to web analytics: not only do you understand your user demographics, but you also understand how performance affects your customers and business. It’s important to be able to see from a business perspective how well your application is performing in production:
Since AppDynamics is built on an extensible platform, it’s easy to track custom metrics directly from Google Analytics via the machine agent.
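As a rough illustration of that pattern, a machine agent script extension can emit custom metric lines on standard output. The sketch below is a hypothetical example, not AppDynamics documentation: `fetch_ga_sessions` is a placeholder for a real Google Analytics API call, and the exact metric-line format should be verified against the current machine agent extension docs before use.

```python
def fetch_ga_sessions():
    # Hypothetical stand-in for a real Google Analytics API call.
    return 1234

def emit_metric(path, value):
    """Print a metric line in the general name=...,value=... shape that
    machine agent script extensions use (verify against current docs)."""
    line = f"name=Custom Metrics|{path},value={int(value)}"
    print(line)
    return line

# Report a Google Analytics figure as a custom metric:
line = emit_metric("Google Analytics|Sessions", fetch_ga_sessions())
```

The machine agent would pick lines like this up on a schedule and surface them alongside your application metrics.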
The end user experience dashboard in AppDynamics Pro gives you real-time visibility into where your users are suffering the most:
Capturing web analytics is a good start, but it’s not enough to get an end-to-end perspective on the performance of your web and mobile applications. The reality is that understanding user demographics and application experience are two completely separate problems that require two complementary solutions. O’Reilly has a stellar article on why real user monitoring is essential for production applications.
Get started with AppDynamics Pro today for in-depth application performance management.
As always, please feel free to comment if you think I have missed something or if you have a request for content in an upcoming post.
There was a time when I thought more was better. More french fries are better; more ice cream is better; more of everything is always better. I followed this principle as a way of life for many years until one day I woke up and realized that I was obese and slow. I realized that more is not always better, and that moderation and balance were an important key to a happy and healthy life. The same exact rule applies to IT monitoring, and in this blog I’ll explain how to identify high-fat, high-calorie, low-nutritional-content monitoring solutions.
Fat, Dumb, and Happy
The worst offenders in the battle against data center obesity are those companies that claim to be “always-on”. Always gathering mountains of data when there are no problems to solve is equivalent to entering a hotdog eating contest every day of the year. Why would you do that to yourself? Your IT bloating will reach epic proportions in no time and your CIO/CTO will eventually start asking why you are spending so much money on all of that storage space and all of those monitoring servers.
Let’s use an example to explore this scenario. A user of your application logs in, searches for some new running shoes, adds their favorite ones to their cart, checks out and happily disappears into the ether awaiting their shoe delivery so they can get ready for their local charity fun run. This same pattern is repeated for many users of your application.
Scenario 1: “Always on” bloated consumerism – Your monitoring software:
- Tracked the response time of each function the user performed (small amount of data)
- Tracked the execution details of many of the method calls involved in each and every function (lots and lots of data)
- Sent all of this across the network to be compiled and stored
- All of this happens for every single function that every single user executes, regardless of whether there is a problem or not.
Smart and Fit
Scenario 2: The intelligent fitness pro – Your monitoring software:
- Tracked the response time of each function the user performed (small amount of data)
- Periodically tracked the execution details of all the method calls involved in each function so that you have a reference point for “good” transactions (small amount of data)
- Tracked the execution details of all method calls for every bad (or slow) function (business transaction) so that you have the information you need to solve the problem (small to medium amount of data)
- The built-in analytics decide when slow business transactions are impacting your users and automatically collect all the appropriate details.
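The difference between the two scenarios can be sketched in a few lines of Python. This is a toy model of the collection policy, not AppDynamics’ actual implementation; the threshold and sample rate are made-up numbers.

```python
import random

SLOW_THRESHOLD_MS = 1000   # hypothetical SLA for a business transaction
SAMPLE_RATE = 0.01         # periodic "good" snapshots: 1% of transactions

def always_on_monitor(txn):
    """Scenario 1: capture full method-level detail for every transaction."""
    return {"response_ms": txn["response_ms"], "call_graph": txn["call_graph"]}

def intelligent_monitor(txn):
    """Scenario 2: light metrics always; deep detail only when the
    transaction is slow or periodically sampled as a baseline."""
    record = {"response_ms": txn["response_ms"]}   # small amount of data
    if txn["response_ms"] > SLOW_THRESHOLD_MS or random.random() < SAMPLE_RATE:
        record["call_graph"] = txn["call_graph"]   # deep detail, only when useful
    return record
```

Scenario 1 ships the heavy `call_graph` payload for every transaction; Scenario 2 ships it only for the slow transactions you actually need to diagnose, plus a small sample of healthy ones for comparison.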
How often do you look at deep granular monitoring details when there are no application issues to resolve? I was an application monitoring expert at a major investment bank and I never looked at those details when there were no problems. AppDynamics is a new breed of monitoring tool that is based upon intelligent analytics to keep your data center fast and fit. I think John Martin from Edmunds.com said it best in his case study “AppDynamics intelligence just says, ‘Hey something interesting is going on, I’m going to collect more data for you’.”
Smart people choose smart tools. You owe it to yourself to take a free trial of AppDynamics today and make us prove our value to you.
I’ve got many years of performance geekery under my belt and I’ve learned many lessons during that time. One of the most important lessons is paying close attention to the distinction between data and information. Let’s take a look at how the dictionary defines each term:
Data – facts and statistics collected together for reference or analysis.
Information – facts provided or learned about something or someone.
What do these definitions reveal to us about data and information and how does it apply to monitoring tools? Let’s explore that together. I’ll provide specific examples along the way to illustrate my points.
The Problems with Data
Data is fundamental to problem solving, but I don’t want to have to dig through a bunch of data while my business critical, mission critical, revenue generating, etc… applications are down. To me, data is just like this picture…
Welcome back to my series on Deploying APM in the Enterprise. In Part 6: Spread the APM Love we talked about ways to increase user adoption of your APM tool and thereby make your organization more successful.
This week we focus on Dashboards and Reports. I’m a huge believer in unlocking the information and intelligence contained within your monitoring tools, and turning that data into actionable information. Over time your tooling ecosystem will collect a wealth of data that can be used for capacity planning, business intelligence, development planning, and many other business and IT activities that require information about usage patterns and operational statistics. By unlocking this value, and surfacing it to the business and IT organizations in a meaningful way, you take another step up the maturity ladder and solidify your rockstar status within your organization. A huge benefit of providing useful information to the business is that they will fight for your tools and projects if they are ever in question!
Dashboards should be used to provide real-time insight into critical business and IT indicators. A failed server doesn’t always mean that there is impact to the business.
Several of our customers are using AppDynamics dashboards to better understand business activity. For example, we have multiple e-commerce customers that are tracking revenue and order volumes in real time.
Let’s take a look at some different perspectives within the organization and explore what type of dashboard each role requires.
Most managers don’t need to know the sordid details about the infrastructure that supports the business applications. For each application within the manager’s realm of responsibility, they need to understand key business indicators and have an overall indication of application performance and health. Take a look at the dashboard below…
This is the type of dashboard that managers want to keep an eye on throughout the workday. Any impact to the business will be easily identified on this dashboard so that the manager can make sure the impact is being addressed. There should be alerts associated with these metrics so that the operations center is notified of business impact but it’s not always an IT problem when business metrics deviate from normal behavior. It’s possible that sales volume is impacted by a competitor offering lower prices on the same product. This will show up as business impact and there is nothing that the IT staff can do to fix it. This is a business problem that needs to be addressed by the appropriate department.
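One simple way to flag “deviation from normal behavior” for a business metric like order volume is to compare the current value against the mean and standard deviation of recent history. This is a hand-rolled sketch with made-up numbers, not how any particular APM product computes its baselines:

```python
from statistics import mean, stdev

def deviates(history, current, sigmas=3.0):
    """Flag `current` if it falls more than `sigmas` standard deviations
    from the mean of recent history (e.g. orders/minute over the past hour)."""
    if len(history) < 2:
        return False                     # not enough data to build a baseline
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return current != mu
    return abs(current - mu) > sigmas * sd

# A sudden drop in order volume stands out against a steady baseline:
baseline = [100, 98, 103, 101, 99, 102, 100, 97]
print(deviates(baseline, 40))    # True  -- worth an alert
print(deviates(baseline, 101))   # False -- business as usual
```

Note that the alert only says the metric is abnormal; as described above, whether the cause is an IT failure or a competitor’s price cut still takes a human to determine.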
Of course, any IT infrastructure problems that impact the business will also show up on the manager’s dashboard, but only by way of business metrics that have deviated from normal behavior. This fosters communication between IT and the business, which should lead to improved cooperation and coordination over time.
Application support rides the fence between the business and IT. If something goes wrong with their application, the business makes sure that app support hears about it and gets the problem resolved in a timely manner. Application support teams need a view of key business indicators (similar to what the managers are looking at) as well as key IT indicators so they know when there are problems with the application or the infrastructure they depend upon.
The operations teams need to know when infrastructure components are nearing capacity, throwing errors, or failing completely. It is their responsibility to ensure the proper functioning of the infrastructure, and as such they need an infrastructure-centric view of the components they are responsible for. This is a classic enterprise monitoring view that has long been established and still has its place in the modern monitoring world. With that said, it’s beneficial to show the operations teams a few key business indicators so they know how urgent it is to replace that failed piece of hardware.
Given that dashboards are best utilized to provide real-time status information, reports are the go-to solution for information that drives action in the future. Reports can be about the application, infrastructure, business, or anything else that makes sense given the data you have to work with. What I want to focus on in this blog are the insights I have picked up over the years while developing and using reports at my past companies.
- Reports should contain information that is actionable. There is little value in receiving a report that you cannot use to decide if action is required.
- Reports need to be concise. Very few people are interested in reading a 50 page report. Try to keep them to a few pages or at least summarized in the first couple of pages with supporting details in the rest of the report.
- Don’t send an avalanche of reports. People usually don’t have the time or desire to read multiple reports per day. Ideally you should send reports only when there is something in the report that needs attention.
- If you can, include the source of the report and a description of what the report is used for. I’ve received reports in the past where I didn’t know what system sent them or why I needed to see them.
- Know your audience. Make sure the business gets business related information and IT gets technology related information.
- Don’t blast reports to email groups (usually). Most reports need to be seen by only a few people in your organization. Email distribution groups tend to contain way more people than are truly interested in your report. No need to clog up the email system with a 50 page PDF sent to 2000 addresses.
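The “only send a report when something needs attention” rule above can be sketched very simply. The function and check names below are hypothetical placeholders; the point is that a healthy system produces no report at all:

```python
def build_report(checks):
    """Return report text only for checks that are trending toward a problem;
    return None (i.e. send nothing) when everything is healthy."""
    findings = [
        f"- {name}: {detail}"
        for name, (needs_attention, detail) in checks.items()
        if needs_attention
    ]
    if not findings:
        return None   # no report is the best report
    return "Action needed:\n" + "\n".join(findings)

checks = {
    "disk capacity": (True,  "projected full in 3 weeks at current growth"),
    "error rate":    (False, "within normal bounds"),
}
report = build_report(checks)
```

Wiring something like this in front of your email step keeps reports rare, short, and actionable rather than internal spam.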
Reports are important and can help your organization avoid issues down the road but you need to implement them carefully or they become just another piece of internal spam. The best report that I can ever receive is the one that only shows up when there is a problem brewing within the next few weeks, clearly identifies the problem, and I am the right person to make sure it gets addressed. That would be reporting done right!
Dashboards and reports need to be implemented properly to get the most out of your monitoring investments. You can amplify the value of all of your monitoring investments by combining, analyzing, and displaying the data contained within each disparate repository. The most mature organizations will build out their monitoring ecosystem and then invest in analytics to derive business and technology value and to create the best dashboards and reports possible.
Thanks for taking the time to read this series. The final post will be next week and will focus on how to keep up the momentum and stay relevant when it comes to monitoring within your organization.