Fat or Fit – You Choose

There was a time when I thought more was better. More french fries are better; more ice cream is better; more of everything is always better. I followed this principle as a way of life for many years until one day I woke up and realized that I was obese and slow. I realized that more is not always better, and that moderation and balance are important keys to a happy and healthy life. The same rule applies to IT monitoring, and in this post I'll explain how to identify high-fat, high-calorie, low-nutrition monitoring solutions.


Fat, Dumb, and Happy

The worst offenders in the battle against data center obesity are those companies that claim to be “always-on”. Always gathering mountains of data when there are no problems to solve is equivalent to entering a hotdog eating contest every day of the year. Why would you do that to yourself? Your IT bloating will reach epic proportions in no time and your CIO/CTO will eventually start asking why you are spending so much money on all of that storage space and all of those monitoring servers.

Let’s use an example to explore this scenario. A user of your application logs in, searches for some new running shoes, adds their favorite ones to their cart, checks out and happily disappears into the ether awaiting their shoe delivery so they can get ready for their local charity fun run. This same pattern is repeated for many users of your application.

Scenario 1: “Always on” bloated consumerism – Your monitoring software:

  • Tracked the response time of each function the user performed (small amount of data)
  • Tracked the execution details of many of the method calls involved in each and every function (lots and lots of data)
  • Sent all of this across the network to be compiled and stored
  • Did all of this for every single function that every single user executed, regardless of whether there was a problem

Smart and Fit

Scenario 2: The intelligent fitness pro – Your monitoring software:

  • Tracked the response time of each function the user performed (small amount of data)
  • Periodically tracked the execution details of all the method calls involved in each function so that you have a reference point for “good” transactions (small amount of data)
  • Tracked the execution details of all method calls for every bad (or slow) function (business transaction) so that you have the information you need to solve the problem (small to medium amount of data)
  • Let its built-in analytics decide when slow business transactions are impacting your users, and automatically collected all the appropriate details
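The collection policy described above can be sketched in a few lines of code. This is a minimal illustration, not AppDynamics' actual implementation: `AdaptiveMonitor`, `on_transaction`, and the threshold/baseline parameters are all hypothetical names invented for this example. The idea is simply that the cheap response-time metric is recorded for every transaction, while the expensive method-level trace is kept only for the periodic baseline or for slow transactions.

```python
# Hypothetical sketch of "smart" adaptive data collection.
# Cheap metrics are kept for every transaction; expensive deep
# traces are kept only periodically (as a "good" baseline) or
# when a transaction is slow. Not an AppDynamics API.

class AdaptiveMonitor:
    def __init__(self, slow_threshold_ms=500, baseline_every=100):
        self.slow_threshold_ms = slow_threshold_ms  # cutoff for a "bad" transaction
        self.baseline_every = baseline_every        # periodic reference sample rate
        self._count = 0
        self.light_records = []  # small: one response-time number per transaction
        self.deep_traces = []    # large: method-level details, kept rarely

    def on_transaction(self, name, elapsed_ms, trace):
        """Called once per completed business transaction."""
        self._count += 1
        # Always keep the cheap response-time metric.
        self.light_records.append((name, elapsed_ms))
        # Keep deep details only for slow transactions or the periodic baseline.
        if elapsed_ms > self.slow_threshold_ms or self._count % self.baseline_every == 0:
            self.deep_traces.append((name, elapsed_ms, trace))

monitor = AdaptiveMonitor(slow_threshold_ms=500, baseline_every=100)
for i in range(1000):
    elapsed = 900 if i == 42 else 50  # one slow outlier among fast calls
    monitor.on_transaction("checkout", elapsed, trace={"methods": ["..."]})

print(len(monitor.light_records))  # 1000: every transaction tracked cheaply
print(len(monitor.deep_traces))    # 11: 10 periodic baselines + 1 slow transaction
```

Out of 1,000 transactions, only 11 carry the heavy payload, which is the storage and network difference between the two scenarios in a nutshell.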

How often do you look at deep granular monitoring details when there are no application issues to resolve? I was an application monitoring expert at a major investment bank and I never looked at those details when there were no problems. AppDynamics is a new breed of monitoring tool that is based upon intelligent analytics to keep your data center fast and fit. I think John Martin from Edmunds.com said it best in his case study “AppDynamics intelligence just says, ‘Hey something interesting is going on, I’m going to collect more data for you’.”

Smart people choose smart tools. You owe it to yourself to take a free trial of AppDynamics today and make us prove our value to you.