The AppD Approach: How to Monitor .NET Core Apps

For the past few months we’ve been collecting customer feedback on a new agent designed specifically to monitor microservices built with .NET Core. As I discussed a few weeks ago in my post “The Challenges of App Monitoring with .NET Core,” the speed and portability that make .NET Core a popular choice for companies seeking to more fully embrace the world of complex, containerized applications placed new demands on monitoring solutions.

Today we’re announcing the general availability of the AppDynamics .NET Core agent for Windows. Please stay tuned for news about a native C++-based Linux agent we are working on, as well. Our goal is to design agents that address the three biggest challenges of monitoring .NET Core: performance, flexibility, and functionality. As companies modernize monolithic applications and increasingly shift parts of their IT infrastructure to the cloud, these agents will ensure deep visibility into rapidly evolving production, testing, and development environments.

In this blog post, I’d like to share some of the considerations that went into the choices we made in architecting the new agents. It was extremely important to our engineering team to create an agent that is as lightweight and reliable as the microservices and containers we monitor, without compromising functionality. One change we made was removing the Windows Service that required a machine-level install, which increased reliability and freed up CPU and considerable memory (70 MB). In addition, the new .NET Core agents for Windows require just half the disk space of traditional .NET agents and consist of only two DLLs and two configuration files.

Our approach to monitoring .NET Core recognizes that the deployment of .NET Core applications is fundamentally different from those built with the full .NET Framework. In Windows environments, deployment was dependent on both the framework and the machine, and our agent was installed using the traditional Windows installer (via MSI files). In contrast, the advantage of .NET Core is that it runs on a variety of platforms and runtimes.

Last year, our team made the decision to mirror .NET Core’s flexibility in deployment. Unlike some other app monitoring solutions, the AppDynamics .NET Core agents reside next to the application. This architecture means containers can be spun up and spun down or moved around without affecting visibility. Operations engineers can integrate AppDynamics in any way that makes sense, while developers are able to leverage NuGet package-management tools. The pipeline for deploying and installing the agents on each platform is the same as for deploying applications and microservices there. For example, agents can be deployed with Azure Site Extensions for Azure or buildpacks for Pivotal Cloud Foundry (available soon). In the case of Docker, the agents can be embedded in a Docker image with engineers setting a few environment variables for monitoring to then proceed automatically. During our recent beta it was great to see our customers deploying the AppDynamics .NET Core agents to Docker, Azure App Services, Azure Service Fabric, Pivotal Cloud Foundry, and other environments.
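For instance, embedding the agent in a Docker image amounts to copying the agent alongside the published app and setting a few environment variables. The fragment below is an illustrative sketch only: the base image tag, paths, and application name are placeholders, and the variable names, which follow the pattern used by AppDynamics agents, should be verified against the agent documentation for your version.

```dockerfile
# Illustrative only -- paths, image tag, and app name are placeholders.
FROM microsoft/dotnet:2.0-runtime
WORKDIR /app
COPY ./publish .
COPY ./appdynamics ./appdynamics

# The agent reads its controller connection settings from the environment,
# so monitoring proceeds automatically once the container starts.
ENV APPDYNAMICS_CONTROLLER_HOST_NAME=controller.example.com \
    APPDYNAMICS_CONTROLLER_PORT=8090 \
    APPDYNAMICS_AGENT_APPLICATION_NAME=MyMicroservice \
    APPDYNAMICS_AGENT_TIER_NAME=OrderService

ENTRYPOINT ["dotnet", "MyMicroservice.dll"]
```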

How it works

The .NET Core agents deliver all the functionality and automation you expect from AppDynamics. The agents auto-detect apps, which in the case of .NET Core could be running on Kestrel or WebListener. The agents then report to the AppDynamics Controller, providing everything from business and performance-related transaction metrics to errors and health alerts.

Similar to the traditional .NET agent, the new .NET Core agents are particularly suited to monitoring the asynchronous transactions that often characterize microservices. We automatically instrument asynchronous apps and provide deep visibility at the code level with built-in visualizations such as snapshots and full-stack call graphs that include unrestricted views into the ASP.NET Core middleware.

Although certain Windows-environment specific machine metrics like performance counters are not available to the new .NET Core agents due to the new cross-platform architecture, as I previously discussed, AppDynamics continues to provide cross-stack and full-stack visibility by automatically correlating the metrics collected by the .NET Core agents with infrastructure and end-user metrics. This allows transactions to be traced from an end user to an application or microservice through databases such as Azure SQL, SQL Server, and MongoDB, across distributed tiers, and back to the end user, automatically discovering dependencies and identifying anomalies along the way. These unified full-stack and cross-stack topologies are critical to developing and deploying microservices that are responsive to business needs.

Drive business outcomes

AppDynamics’ Business iQ connects application performance with business results using a variety of data collectors to pull detailed, real-time information on everything from users to pricing. With the new .NET Core agents, it is even easier to create contextual information points to collect custom data from microservices. Thanks to run-time reinstrumentation, engineers can make changes in existing information points without restarting the microservice.

Customers have asked whether this functionality will be available in hybrid environments. Yes, this is one of the great advantages of the new .NET Core agents. Customers will have visibility into the performance of their business and their applications across on-premises installations running on the full .NET framework and .NET Core applications running on the Azure cloud or other public clouds. Just as .NET Core seeks to enable microservices to move between platforms, AppD is continually working to provide complete, end-to-end visibility into apps and microservices wherever they are running and regardless of the underlying technologies.

It is worth acknowledging that the first generation of .NET core monitoring tools is shipping with a tradeoff between ease of deployment and performance and reliability. Some vendors, especially those who shipped early, emphasize the simplicity and speed of their agents. Deploying AppD’s agents does involve more than “one” step. However, customers assure us that the reliability of our agents combined with their lack of overhead more than compensates for the small, upfront investment made in deployment. In the meantime, our engineering teams remain hard at work tuning and automating deployment and installation processes.

The AppD approach to monitoring .NET Core apps illustrates the importance of a unified solution for maintaining full-stack and cross-stack visibility. The ultimate goal of monitoring is to improve business performance. Ideally, performance issues—and potential problems— are automatically detected before they affect business goals. Achieving this requires real-time data collection on-premises, on IoT devices, and across clouds. It depends on the continuous monitoring of everything—applications, containers, microservices, machines, and databases—as well as on the continuous improvement of AI and machine learning algorithms. Our new agents represent one more step in this exciting journey. Onward!

The Challenges of App Monitoring with .NET Core

The evolution of software development from monolithic on-premises applications to containerized microservices running in the cloud took a major step forward last summer with the release of .NET Core 2. As I wrote in “Understanding the Momentum Behind .NET Core,” the number of developers using .NET Core recently passed the half million mark. Yet in the rush to adoption many developers have encountered a speed bump. It turns out the changes that make .NET Core so revolutionary create new challenges for application performance monitoring.

Unlike .NET apps that run on top of IIS and are tied to Windows infrastructure, microservices running on .NET Core can be deployed anywhere. The customers I’ve spoken with are particularly interested in deploying microservices on Linux systems, which they believe will deliver the greatest return on investment. But that flexibility comes at a cost.

When operations engineers move .NET applications to .NET Core they are seeking fully functional, performant environments that are designed for a microservice. What they are finding is that the .NET Core environment requirements vary substantially from the environments that the full framework runs on. While .NET Core’s independence from IIS and Windows machines provides flexibility, it also means that some performance tools for system metrics may no longer be relevant.

Engineers who are used to debugging apps in a traditional Windows environment find that valuable tools like Event Tracing for Windows (ETW) and performance counters are not consistently available. For example, an on-premises Windows machine allows you to read performance counters while Azure WebApps on Windows only provides access to application-specific performance counters. Neither ETW nor performance counters are available on Linux, so if you want to deploy an ASP.NET Core microservice on Linux you will need to modify your method of collecting system-level data.
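By way of illustration, basic process-level metrics can still be collected cross-platform through System.Diagnostics.Process, which works on both Windows and Linux under .NET Core, unlike ETW or Windows performance counters. This is only a minimal sketch of the idea, not a full replacement for counter-based system monitoring:

```csharp
using System;
using System.Diagnostics;

class ProcessMetrics
{
    static void Main()
    {
        // Process-level metrics available on Windows and Linux alike.
        var proc = Process.GetCurrentProcess();
        Console.WriteLine($"CPU time:    {proc.TotalProcessorTime.TotalMilliseconds} ms");
        Console.WriteLine($"Working set: {proc.WorkingSet64 / (1024 * 1024)} MB");
        Console.WriteLine($"Threads:     {proc.Threads.Count}");
    }
}
```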

In creating .NET Core and the ASP.NET Core framework, Microsoft made improving performance a top priority. One of the biggest changes was replacing the highly versatile but comparatively slow IIS web server with Kestrel, a stripped-down, cross-platform web server. Unlike IIS, Kestrel does not maintain backwards compatibility with a decade and a half of previous development, and it is specifically suited to the smaller environments that characterize microservices development and deployment. Open-source, event-driven, and asynchronous, Kestrel is built for speed. But the switch from IIS to Kestrel is not without tradeoffs. Tools we relied on before, like IIS Failed Request Tracing, don’t consistently work. The fact is, Kestrel is more of an application server than a web server, and many organizations will want to use a full-fledged web server like IIS, Apache, or Nginx in front of it as a reverse proxy. This means engineers now have to familiarize themselves with the performance tools, logging, and security setup for these technologies.
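As a concrete example, a minimal Nginx configuration that fronts a Kestrel app as a reverse proxy might look like the sketch below. The port assumes Kestrel’s default of 5000; the server name and file path are placeholders.

```nginx
# /etc/nginx/conf.d/myapp.conf -- reverse proxy to a Kestrel app (sketch)
server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass         http://localhost:5000;   # Kestrel's default port
        proxy_http_version 1.1;
        proxy_set_header   Host $host;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}
```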

Beyond monitoring web servers, developers need performance metrics for the entire platform where a microservice is deployed—from Azure and AWS to Google Cloud Platform and Pivotal Cloud Foundry, not to mention additional underlying technologies like Docker. The increase in platforms has a tendency to add up to an unwelcome increase in monitoring tools.

At the same time, the volume, velocity, and variety of data from heterogeneous, multi-cloud, microservices-oriented environments are set to increase at exponential rates. This is prompting companies adopting .NET Core and microservices to take a hard look at their current approach to application monitoring. Most are concluding that the traditional patchwork of multiple tools is not up to the task.

While application performance monitoring has gotten much more complex with .NET Core, the need for it is even more acute. Migrating applications from .NET without appropriate monitoring solutions in place can be particularly risky.

One key concern is that not all .NET Framework functionality is available in .NET Core, including .NET Remoting, Code Access Security, and AppDomains. Equivalents are available in ASP.NET Core, but they require code changes by a developer. Likewise, HTTP handlers and other IIS tools must be integrated into a simplified middleware pipeline in ASP.NET Core to ensure that the logic remains part of an application as it is migrated from .NET to .NET Core.
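To illustrate the shift, logic that once lived in an IIS HTTP handler or module becomes a component in the ASP.NET Core middleware pipeline. The sketch below uses the standard IApplicationBuilder APIs; the header it writes is just a placeholder for whatever the handler used to do:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Runs for every request, like an IIS module once did.
        app.Use(async (context, next) =>
        {
            context.Response.Headers["X-App-Version"] = "1.0"; // placeholder logic
            await next();
        });

        // Terminal middleware, like an IIS handler once was.
        app.Run(async context =>
        {
            await context.Response.WriteAsync("Hello from ASP.NET Core");
        });
    }
}
```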

Not all third-party dependencies have a .NET Core-compatible release. In some cases, developers may be forced to find new libraries to address an application’s needs.

Given all of the above, mistakes in migration are possible. There may be errors in third-party libraries, functionality may be missing, and key API calls may cause errors. Performance tools are critical in helping this migration by providing granular visibility into the application and its dependencies. Problems can thus be identified earlier in the cycle, making the transition smoother.

AppDynamics has been tackling the challenges outlined in this post for more than a year. A beta release of support for .NET Core 2.0 on Windows became available in January, and we’ll have more news going forward.

Please stay tuned for my next blog post about AppDynamics’ approach to app monitoring with .NET Core.

A UNIX Bigot Learns About .NET and Azure Performance – Part 1

This blog post is the beginning of my journey to learn more about .NET and Microsoft Azure as it applies to performance monitoring. I’ve long admitted to being a UNIX bigot but recently I see a lot of good things going on with Microsoft. As a performance monitoring geek I feel compelled to understand more about these technologies at a deep enough level to provide good guidance when asked by peers and acquaintances.

The Importance of .NET

Here are some of the reasons why .NET is so important:

  • In a 2013 Computer World article, C# was listed as the #6 most important programming language to learn, with ASP.NET ranking at #14.
  • In a 2010 Forrester article, .NET was cited as the top development platform used by respondents.
  • .NET is also a very widely used platform in financial services. An article published by WhiteHat Security stated that among financial services companies, “.NET, Java and ASP are the most widely used programming languages at 28.1%, 25% and 16% respectively.”

The Rise of Azure

.NET alone is pretty interesting from a statistical perspective, but the rise of Azure in the cloud computing PaaS and IaaS world is a compounding factor. In a “State of the Cloud” survey conducted by RightScale, Azure was found to be the 3rd most popular public cloud computing platform for enterprises. In a report published by Capgemini, 73% of respondents globally stated that Azure was part of their cloud computing strategy, with strong support across the retail, financial services, energy/utilities, public, and telecommunications/media verticals.

Developer influence

Not to be underestimated in this .NET/Azure world is the influence that developers will have on overall adoption levels of each technology platform. Microsoft has created an integration between Visual Studio (the IDE used to develop on the .NET platform) and Azure that makes it extremely easy to deploy .NET applications onto the Azure cloud. Ease of deployment is one of the key factors in the success of new enterprise technologies, and Microsoft has definitely created a great opportunity for itself by ensuring that .NET apps can be easily deployed to Azure through the interface developers are already familiar with.

The fun part is yet to come

Before I started my research for this blog series I didn’t realize how far Microsoft had come with their .NET and Azure technologies. To me, if you work in IT operations you absolutely must understand these important technologies and embrace the fact that Microsoft has really entrenched itself in the enterprise. I’m looking forward to learning more about the performance considerations of .NET and Azure and sharing that information with you in my follow-up posts. Keep an eye out for my next post as I dive into the relevant IIS and WMI/Perfmon performance counters and details.

How to Run AppDynamics in Microsoft Azure

“Is it possible to run the AppDynamics controller within my own Microsoft Azure IaaS?”

I hear this question fairly regularly and would like to walk you through how to host the controller in your own Azure cloud. First off, the pros of running AppDynamics within Azure:

  • Full control and ownership of the data collected by AppDynamics.
  • Additional security around access to the data (for example, locking it down to a corporate VPN only).
  • Easy integration between AppDynamics and your services, such as Active Directory for authentication or an internal bug-tracking system for alert notifications. These would typically require opening custom ports when you leverage the AppDynamics SaaS environment.

AppDynamics works by placing an agent on your servers which reports to the controller. It’s common to have several agents monitoring your applications. To further the ease of use, we monitor Java, .NET, PHP, Node.js, and now C++, all in one single pane of glass. Your Azure architecture might look something like this:

A unique feature of AppDynamics is flexible deployment. Typically, legacy APM solutions rely on on-premises deployment, whereas newer companies are SaaS-only. With AppDynamics you can run the controller on-premises, leverage the AppDynamics SaaS option, or deploy a hybrid mixture.

To run the controller in your Azure IaaS, you can leverage the security of the on-premises deployment option and install the controller the same way you would in your own datacenter. This allows you to retain full control over your data and be the gatekeeper for access to that data.

Important to note:

  • Properly size the controller — you can estimate the CPU/memory/disk requirements based on the number of agents you are going to deploy. This is covered in the AppDynamics online documentation.
  • Configure the VM for maximum I/O. This second point is very important, because the controller installs a database which requires high I/O throughput. The recommended best practice is to treat the VM the same as if you were running SQL Server on it.

If you forget to do this, you run the risk that the performance of the controller will slow down. This will not slow down your monitored applications, as the agents are implemented to be non-blocking. However, the slowness will cause the controller UI to lag and make it hard to visualize the collected data.

Hope this helps, and that you can choose the option which works best for your organization! Try it out now, for FREE!

How Do You Monitor Your Logging?

Applications typically log additional data, such as exceptions, to different data sources. Windows event logs, local files, and SQL databases are most commonly used in production. Newer applications can take advantage of big data stores instead of individual files or SQL.

One of the most surprising things we notice when we start monitoring applications is how often logging is misconfigured in production environments. There are two types of misconfiguration errors we’ve seen often in the field:

1. The logging configuration was copied from staging settings.

2. Logging wasn’t fully configured when the application was deployed to production, and failed to log any data.

To take a closer look, I have a couple of sample applications to show how these problems can manifest themselves. The sample applications were implemented using MVC5, run in Windows Azure, and use the Microsoft Enterprise Library Exception Handling and Logging blocks to log exceptions to a SQL database. There is no specific preference regarding logging framework or storage; I just wanted to demonstrate problems similar to what we’ve seen with different customers.

Situation #1: Logging configuration was copied from staging to production and points to the staging SQL database

When we installed AppDynamics and it automatically detected the application flowmap, I noticed the application talks to the production UserData database and… a staging database for logging.

The other issue was the extremely slow response time while calling the logging database. The following snapshot explains the slow performance; as you can see, there’s an exception happening while trying to run an ADO.NET query:

The exception details confirm the application was not able to connect to the database, which is expected — the production environment is located in a DMZ and usually can’t reach a staging network.

To restate what we see above: this is a failure while trying to log the original exception, which could be anything from a user not being able to log into the website to a failed checkout.

At the same time the impact is even higher, because the application spends 15 seconds trying to connect to the logging database and timing out, all while the user is waiting.

Situation #2: During deployment the service account wasn’t granted permissions to write to the logging database

This looks similar to the example above, but when we drill into the error we can see an internal exception occurred during processing:

The exception says the service account didn’t have permission to run the stored procedure “WriteLog”, which logs entries to the logging database. From a performance perspective, the overhead of a security failure is lower than the timeouts in the example above, but the result is the same: we won’t be able to see the originating exception.

Not fully documenting or automating the application deployment/configuration process usually causes such problems.

These are one-time issues: once fixed, they won’t recur on that machine. However, the next time you deploy the application to a new server or VM, this will happen again until you fix the deployment process.

Let’s check the EntLibLogging database — it has no rows.

Here’s some analysis to explain why this happened:

1. We found exceptions thrown while the application was logging an error.

2. This means there was an original error, and the application was trying to report it via logging.

3. Logging failed, which means the original error was never reported!

4. And… logging doesn’t record its own failures anywhere, which means from a logging perspective the application has no problems!!

This is logically correct: if you can’t log data to the storage database, you can’t log anything. Typically, loggers are implemented similarly to the following example:
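A representative sketch of that pattern follows; ProcessOrder and _logger are placeholder names standing in for whatever the application and its logging framework provide:

```csharp
public void HandleRequest(Order order)
{
    try
    {
        ProcessOrder(order); // the real work, which may throw
    }
    catch (Exception ex)
    {
        try
        {
            _logger.Error("ProcessOrder failed", ex); // logging is the last resort
        }
        catch
        {
            // If the logger itself fails (unreachable database, missing
            // permissions, full log table), the failure is swallowed here:
            // neither the original error nor the logging error is recorded.
        }
    }
}
```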

Logging is the last resort in this case, and when it fails nothing else happens.

Just to clarify: AppDynamics was able to report these exceptions because the agent instruments common methods like ADO.NET calls, HTTP calls, and other exit calls, as well as error handlers, which helped in identifying the problem.

Going back to our examples: what if the deployment and configuration process is now fixed and fully automated, so there can’t be a manual mistake? Do you still need to worry? Unfortunately, these issues happen more often than you’d expect. Here is another real example.

Situation #3: What happens when the logging database fills up?

Everything is configured correctly, but at some point the logging database fills up. In the screenshot above you can see this happened around 10:15pm. As a result, the response time and error rates have spiked.

Here is one of the snapshots collected at that time:

You can see that in this situation it took over 32 seconds trying to log data. Here are the exception details:

The worst part is that at 10:15pm the application was not able to report its own problems because the database was completely full, which may incorrectly be interpreted to mean the application is healthy, since it is “not failing” and there are no new log entries.

We’ve seen enough times that the logging database isn’t treated as a critical piece of the application, so it gets pushed down the priority list and is often overlooked. Logging is part of your application logic, and it should fall into the same category as the application. It’s essential to document, test, properly deploy, and monitor the logging.

Barring an unexpected surge of traffic due to a sales event, new release, marketing campaign, etc., this problem can be avoided entirely. Other than the rare Slashdotting effect, your database should never reach full capacity and stop logging. Without sufficient room in your database, your application’s performance is in jeopardy and you won’t know it, since your monitoring framework isn’t notifying you. Because these issues are still possible, albeit during a large load surge, it’s important to continuously monitor your logging, as you wouldn’t want an issue to occur during an important event.

Key points:

  • Logging adds a new dependency to the application.

  • Logging can fail to log the data — there could be several reasons why.

  • When this happens, you won’t be notified about the original problem or the logging failure, and the performance issues will compound.

This would never happen to your application, would it?

If you’d like to try AppDynamics, check out our free trial and start monitoring your apps today! Also, be sure to check out my previous post, The Real Cost of Logging.

The Real Cost of Logging

In order to manage today’s highly dynamic application environments, many organizations turn to their logging system for answers – but reliance on these systems may be having an undesired impact on the applications themselves.

The vast majority of organizations use some sort of logging system — it could log errors, traces, information messages or debugging information. Applications can write logs to a file, database, Windows event log, or big data store, and there are many logging frameworks and practices in use.

Logging brings good insight into application behavior, especially failures. However, by being part of the application, logging also participates in the execution chain, which can have its disadvantages. While working with customers, we often see cases where logging alone introduces an adverse impact on the application.

Most of the time the overhead of logging is negligible. It only matters when the application is under significant load — but those are the times when it matters most. Think about Walmart or Best Buy during Black Friday and Cyber Monday. Online sales are particularly crucial for these retail companies during this period, and this is when their applications are under the most stress.

To better explain the logging overhead, I created a lightweight .NET application that:

1. is implemented using ASP.NET
2. performs lightweight processing
3. has an exception built in
4. always handles exceptions within a try…catch statement
5. either logs exceptions using log4net or ignores them, depending on the test

In my example I used log4net, as I recently diagnosed a similar problem with a customer who was using it; however, it could be replaced with any other framework you use.
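A sketch of the test application’s action method, with the toggle between ignoring and logging exceptions, might look like the following; the names and the built-in exception are placeholders:

```csharp
using System;
using System.Web.Mvc;
using log4net;

public class HomeController : Controller
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(HomeController));

    // Flipped between test runs: false for Test #1, true for Test #2.
    private static readonly bool LogExceptions = true;

    public ActionResult Index()
    {
        try
        {
            DoLightweightProcessing(); // contains the built-in exception
        }
        catch (Exception ex)
        {
            if (LogExceptions)
                Log.Error("Processing failed", ex); // FileAppender writes to a local file
        }
        return View();
    }

    private static void DoLightweightProcessing()
    {
        throw new InvalidOperationException("Built-in test exception");
    }
}
```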

Test #1

First, we set up a baseline by running the application with exceptions not being logged from the catch statement.


Test #2

Next, I enabled logging exceptions to a local file and ran the same load test.

As you can see, not only is the average response time significantly higher now, but the throughput of the application is also lower.

The AppDynamics snapshot is collected automatically when there is a performance problem or failure, and includes a full call graph with timings for each executed method.

By investigating the call graph AppDynamics produces, we see that the log4net FileAppender renders error information to a file using a FileStream. On the right you can see the duration of each call; most of the time was spent in WriteFileNative, as it was competing with similar requests trying to append error details to the log file.

Test #3

I often come across attempts to make exception logging asynchronous by using the ThreadPool. Below is how the performance looks in this setup under exactly the same load.
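The pattern amounts to handing the logging call off to a ThreadPool thread inside the catch block, so the request thread is not blocked on file I/O. A sketch of the idea, where Log and DoLightweightProcessing are the placeholders from the test app:

```csharp
// Inside the action method's exception handler:
try
{
    DoLightweightProcessing();
}
catch (Exception ex)
{
    // The file I/O and its contention still happen -- they just
    // move off the request thread onto a ThreadPool thread.
    System.Threading.ThreadPool.QueueUserWorkItem(
        _ => Log.Error("Processing failed", ex));
}
```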

This is a clever concept and works adequately for low-throughput applications, but as you can see the average response time is still in a similar range to the non-asynchronous version, and throughput is slightly lower.

Why is this? Having logging run on separate threads means the resources are still consumed: there are fewer threads available, and the number of context switches is greater.

In my experience, logging to a file is exceptionally common. Other types of storage could deliver better performance, but they always need further testing, and logging to a file is the easier solution.


While logging is important and helps with application support and troubleshooting, it should be treated as part of the application logic. This means logging has to be designed, implemented, tested, monitored and managed. In short, it should become a part of full application lifecycle management.

How well do you understand your logging framework and its performance cost?

Take five minutes to get complete visibility into the performance of your production applications with AppDynamics today.

Diving Into What’s New in Java & .NET Monitoring

In the AppDynamics Spring 2014 release we added quite a few features to our Java and .NET APM solutions. With the addition of service endpoints, an improved JMX console, JVM crash detection and crash reports, additional support for many popular frameworks, and async support, we have the best APM solution in the marketplace for Java and .NET applications.

Added support for frameworks:

  • Typesafe Play/Akka

  • Google Web Toolkit

  • JAX-RS 2.0

  • Apache Synapse

  • Apple WebObjects

Service Endpoints

With the addition of service endpoints, customers with large SOA environments can define specific service points to track metrics and get associated business transaction information. Service endpoints help service owners monitor and troubleshoot their own specific services within a large set of services:

JMX Console

The JMX console has been greatly improved, adding the ability to manage complex attributes, execute MBean methods, and update MBean attributes:

JVM Crash Detector

The JVM crash detector has been improved to provide crash reports with dump files that allow tracing the root cause of JVM crashes:

Async Support

We added improved support for asynchronous calls and added a waterfall timeline for better clarity into where time is spent during requests:

AppDynamics for .NET applications has been greatly improved with better integration and support for Windows Azure, ASP.NET MVC 5, improved Windows Communication Foundation support, and RabbitMQ support:



Take five minutes to get complete visibility into the performance of your production applications with AppDynamics today.

Instrumenting .NET applications with AppDynamics using NuGet

One of the coolest things to come out of the .NET stable at AppD this week was the NuGet package for Azure Cloud Services. NuGet makes it a breeze to deploy our .NET agent along with your web and worker roles from inside Visual Studio. For those unfamiliar with NuGet, more information can be found here.

Our NuGet package ensures that the .NET agent is deployed at the same time the role is published to the cloud. After adding it to the project, you’ll never have to worry about deploying the agent when you swap your hosting environment from staging to production in Azure, or when Azure changes the machine under your instance. For the remainder of the post I’ll use a web role to demonstrate how to quickly install the NuGet package, the changes it makes to your solution, and how to edit the configuration by hand if needed. Even though I’ll use a web role, things work exactly the same way for a worker role.


    So, without further ado, let’s take a look at how to quickly instrument .NET code in Azure using AppD’s NuGet package for Windows Azure Cloud Services. NuGet packages can be added via the command line or the GUI. In order to use the command line, we need to bring up the package manager console in Visual Studio as shown below


    In the console, type ‘Install-Package AppDynamics.WindowsAzure.CloudServices’ to install the package. This will bring up the following UI, where you can enter the information the agent needs to talk to the controller and upload metrics. You should have received this information in your welcome email from AppDynamics.
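    For copy-paste convenience, the Package Manager Console command looks like this (the PM> prompt is shown by Visual Studio, not typed):

    ```powershell
    PM> Install-Package AppDynamics.WindowsAzure.CloudServices
    ```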


    The ‘Application Name’ is the name of the application in the controller under which the metrics reported by this agent will be stored. When ‘Test Connection’ is checked, the information you entered is validated by attempting to connect to the controller; an error message is displayed if the test connection fails. That’s it: enter the information, click Apply, and we’re done. Easy peasy. No more adding files one by one or modifying scripts by hand. Once deployed, instances of this web role will start reporting metrics as soon as they see any traffic. Oh, and by the way, if you prefer a GUI to typing commands in the console, the same thing can be done by right-clicking the solution in Visual Studio and choosing ‘Manage NuGet Packages’.

    Anatomy of the package

    If you look closely at the Solution Explorer, you’ll notice that a new folder called ‘AppDynamics’ has been created. Expanding the folder reveals the following two files:

    • The installer for the latest and greatest .NET agent
    • Startup.cmd
    The startup script ensures the agent is installed as part of the deployment process on Azure. In addition to adding these files, the package modifies the ServiceDefinition.csdef file to add a startup task, as shown below.

    [Screenshot: the startup task added to ServiceDefinition.csdef]

    In case you need to change the controller information you entered in the GUI while installing the package, you can edit the startup section of the csdef file shown above. The application name, controller URL, port, account key, etc. can all be changed. When you redeploy the role to Azure, the new values take effect.
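    To give a sense of what that startup section looks like, here is an illustrative sketch of a ServiceDefinition.csdef containing a startup task. The role name, command line, and variable names here are assumptions for illustration; the names generated by the package may differ by version, so treat this as the general shape rather than a verbatim copy:

    ```xml
    <ServiceDefinition name="MyCloudService"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WebRole name="MyWebRole" vmsize="Small">
        <Startup>
          <!-- Runs the AppDynamics install script when the role instance starts -->
          <Task commandLine="AppDynamics\Startup.cmd" executionContext="elevated" taskType="simple">
            <Environment>
              <!-- Illustrative placeholder values: edit these to change controller settings -->
              <Variable name="CONTROLLER_HOST" value="mycompany.saas.appdynamics.com" />
              <Variable name="CONTROLLER_PORT" value="443" />
              <Variable name="APPLICATION_NAME" value="MyAzureApp" />
              <Variable name="ACCOUNT_NAME" value="mycompany" />
              <Variable name="ACCOUNT_ACCESS_KEY" value="your-account-key" />
            </Environment>
          </Task>
        </Startup>
      </WebRole>
    </ServiceDefinition>
    ```

    Editing the Variable values and redeploying the role is all it takes for the agent to pick up new controller settings.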

    Next Steps

    Microsoft Developer Evangelist Bruno Terkaly blogged about monitoring the performance of multi-tiered Windows Azure-based web applications. Find out more on the Microsoft Developer Network.

    Find out more in our step-by-step guide on instrumenting .NET applications using AppDynamics Pro. Take five minutes to get complete visibility into the performance of your production applications with AppDynamics Pro today.

    As always, please feel free to comment if you think I have missed something or if you have a request for content in an upcoming post.

    AppDynamics Pro on the Windows Azure Store

    Over a year ago, AppDynamics announced a partnership with Microsoft and launched AppDynamics Lite on the Windows Azure Store. With AppDynamics Lite, Windows Azure users were able to easily monitor their applications at the code level, allowing them to identify and diagnose performance bottlenecks in real time. Today we’re happy to announce that AppDynamics Pro is now available as an add-on in the Windows Azure Store, making it easier for developers to get complete visibility into their mission-critical applications running on Windows Azure. Highlights include:

    • Easier/simplified buying experience in Windows Azure Store
    • Tiered pricing based on number of agents and VM size
    • Easy deployment from Visual Studio with NuGet
    • Out-of-the-box support for more Windows Azure services

    “AppDynamics is one of only a handful of application monitoring solutions that works on Windows Azure, and the only one that provides the level of visibility required in our distributed and complex application environments,” said James Graham, project manager at MacMillan Publishers. “The AppDynamics monitoring solution provides insight into how our .NET applications perform at a code level, which is invaluable in the creation of a dynamic, fulfilling user experience for our students.”

    Easy buying experience

    Purchasing the AppDynamics Pro add-on in the Windows Azure Store takes only a couple of minutes. In the Azure portal, click NEW at the bottom left of the screen and then select STORE. Search for AppDynamics, then choose your plan, add-on name, and region.


    Tiered pricing

    AppDynamics Pro for Windows Azure features new tiered pricing based on the size of your VM (extra small, small or medium, large, or extra large) and the number of agents required (1, 5 or 10). This new pricing allows organizations with smaller applications to pay less to store their monitoring data than those with larger, more heavily trafficked apps. The cost is added to your monthly Windows Azure bill, and you can cancel or change your plan at any time.

    [Image: AppDynamics on Windows Azure pricing]

    Deploying with NuGet

    Use the AppDynamics NuGet package to deploy AppDynamics Pro with your solution from Visual Studio. For detailed instructions check out the how-to guide.


    Monitoring with AppDynamics

    With AppDynamics Pro, Windows Azure users can:

    • Monitor the health of Windows Azure applications
    • Troubleshoot performance problems in real time
    • Rapidly diagnose the root cause of performance problems
    • Dynamically scale Windows Azure applications up and down based on performance metrics

    [Screenshot: AppDynamics monitoring a .NET application]

    Additional platform support

    AppDynamics Pro automatically detects and monitors most Azure services out of the box, including web and worker roles, SQL, Azure Blob, Azure Queue, and Windows Azure Service Bus. In addition, AppDynamics Pro now supports MVC 4. Find out more in our getting started guide for Windows Azure.

    Get started monitoring your Windows Azure app by adding the AppDynamics Pro add-on in the Windows Azure Store.

    Covis Software GmbH isolates problematic Web Service in 4 minutes with AppDynamics Pro

    I received another X-Ray from Covis Software GmbH in Germany, a provider of CRM solutions. They’ve been managing their .NET application performance with AppDynamics Pro in production for several months now. In the X-Ray below (as documented by the customer), Covis were able to rapidly identify a poorly performing remote web service call that was impacting their application, business transactions, and their customers’ end-user experience.

    If you would like to get started with AppDynamics you can download AppDynamics Lite (our free version) or you can take a free 30-day trial of AppDynamics Pro.