Top 5 Performance Metrics for Node.js Applications

October 14, 2015

Is your app running properly? Monitor and visualize Node.js application performance with these top five metrics.


The last couple of articles presented an introduction to Application Performance Management (APM) and identified the challenges in effectively implementing an APM strategy. This article builds on those topics by reviewing five of the top performance metrics to capture when assessing the health of your enterprise Node.js application.

Specifically this article reviews the following:

  • Business Transactions
  • External Dependencies
  • The Event Loop
  • Memory Leaks
  • Application Topology

Business Transactions

Business Transactions provide insight into real-user behavior: they capture real-time performance that real users are experiencing as they interact with your application. As mentioned in the previous article, measuring the performance of a business transaction involves capturing the response time of a business transaction holistically as well as measuring the response times of its constituent tiers. These response times can then be compared with the baseline that best meets your business needs to determine normalcy.

If you were to measure only a single aspect of your application I would encourage you to measure the behavior of your business transactions. While container metrics can provide a wealth of information and can help you determine when to auto-scale your environment, your business transactions determine the performance of your application. Instead of asking for the CPU usage of your application server you should be asking whether or not your users are able to complete their business transactions and if those business transactions are behaving normally.

As a little background, business transactions are identified by their entry-point, which is the interaction with your application that starts the business transaction. In the case of a Node.js application, this is usually the HTTP request. There may be some exceptions, such as a WebSocket connection, in which case the business transaction could be an interval in your code that is defined by you. In the case of a Node.js worker server, the business transaction could potentially be the job that the Node.js application executes that it picked up from a queue server. Alternatively, you may choose to define multiple entry-points for the same web request based on a URL parameter or for a service call based on the contents of its body. The point is that the business transaction needs to be related to a function that means something to your business.

Once a business transaction is identified, its performance is measured across your entire application ecosystem. The performance of each individual business transaction is evaluated against its baseline to assess normalcy. For example, we might determine that a business transaction is behaving abnormally if its response time is more than two standard deviations slower than the average response time in its baseline, as shown in figure 1.
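As a quick sketch of this two-standard-deviation check (the function names and the sample baseline below are illustrative, not part of any monitoring product):

```javascript
// Flag a response time as abnormal when it is more than two
// standard deviations slower than the baseline average.
function mean(samples) {
  return samples.reduce((sum, x) => sum + x, 0) / samples.length;
}

function stdDev(samples) {
  const avg = mean(samples);
  return Math.sqrt(mean(samples.map((x) => (x - avg) ** 2)));
}

function isAbnormal(responseTimeMs, baselineSamples) {
  const threshold = mean(baselineSamples) + 2 * stdDev(baselineSamples);
  return responseTimeMs > threshold;
}

// Hypothetical baseline of response times (ms) for this hour of day
const baseline = [120, 130, 125, 118, 122, 128, 135, 121];

console.log(isAbnormal(400, baseline)); // → true: far outside the baseline
console.log(isAbnormal(126, baseline)); // → false: within normal range
```

A real APM product maintains these baselines automatically per transaction and per hour, but the underlying comparison is this simple.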

Figure 1 Evaluating BT Response Time Against its Baseline

The baseline used to evaluate a business transaction is held consistent for the hour in which the business transaction is running, but the baseline itself is refined by each business transaction execution. For example, if you have chosen a baseline that compares business transactions against the average response time for the hour of day and the day of the week, then after the current hour is over, all business transactions executed in that hour will be incorporated into the baseline for next week. Through this mechanism an application can evolve over time without requiring the original baseline to be thrown away and rebuilt; you can think of it as a window moving over time.

In summary, business transactions are the most reflective measurement of the user experience so they are the most important metric to capture.

External Dependencies

External dependencies can come in various forms: dependent web services, legacy systems, or databases; external dependencies are systems with which your application interacts. We do not necessarily have control over the code running inside external dependencies, but we often have control over the configuration of those external dependencies, so it is important to know when they are running well and when they are not. Furthermore, we need to be able to differentiate between problems in our application and problems in dependencies.

From a business transaction perspective, we can identify and measure external dependencies as being in their own tiers. Sometimes we need to configure the monitoring solution to identify methods that really wrap external service calls, but for common protocols, such as HTTP, external dependencies can be automatically detected. Similar to business transactions and their constituent application tiers, external dependency behavior should be baselined and response times evaluated against those baselines.

Business transactions provide you with the best holistic view of the performance of your application and can help you triage performance issues, but external dependencies can significantly affect your applications in unexpected ways unless you are watching them.

Your Node.js application may be utilizing a backend database, a caching layer, or possibly even a queue server as it offloads CPU-intensive tasks onto worker servers to process in the background. Whatever backend your Node.js application interfaces with, the latency to these backend services can potentially affect the performance of your Node.js application, even if you’re interfacing with them asynchronously. The various types of exit calls may include:

  • SQL databases
  • NoSQL servers
  • Internal web-services
  • External third-party web-service APIs

However your Node.js application communicates with third-party applications, internal or external, the latency in waiting for the response can potentially impact the performance of your application and your customer experience. Measuring and optimizing the response time of these communications can help you resolve such bottlenecks.
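As a rough illustration, a call to any of these backends can be wrapped so its latency is recorded per tier. The `timed` wrapper, the tier name, and the simulated query below are all hypothetical; this assumes a modern Node version with Promise support:

```javascript
// Record the latency of any async backend call under a named tier.
const latencies = {};

function timed(tier, call) {
  const start = Date.now();
  return call().finally(() => {
    const elapsed = Date.now() - start;
    (latencies[tier] = latencies[tier] || []).push(elapsed);
  });
}

// Stand-in for a real database query; the 50 ms delay is simulated.
function fakeDbQuery() {
  return new Promise((resolve) => setTimeout(() => resolve('rows'), 50));
}

timed('database', fakeDbQuery).then((rows) => {
  console.log(rows, 'took', latencies.database[0], 'ms');
});
```

Baselining the recorded per-tier latencies then works exactly like baselining business transaction response times.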

The Event Loop

In order to understand what metrics to collect surrounding event loop behavior, it helps to first understand what the event loop actually is and how it can potentially impact your application performance. For illustrative purposes, you may think of the event loop as an infinite loop executing code from a queue. For each iteration of the infinite loop, the event loop executes a block of synchronous code. Node.js – being single-threaded and non-blocking – then picks up the next block of code, or tick, waiting in the queue as it continues to execute more code. Although it is a non-blocking model, various operations that could otherwise be considered blocking include:

  • Accessing a file on disk
  • Querying a database
  • Requesting data from a remote webservice

With JavaScript (the language of Node.js), you can perform all of your I/O operations with callbacks. This gives the execution stream the advantage of moving on to execute other code while your I/O runs in the background. Node.js picks up the code waiting in the event queue, executes it (handing blocking I/O operations off to a thread from the available thread pool), and then moves on to the next code in the queue. When your I/O completes, its callback is queued to execute additional code, eventually completing the entire transaction.

It is important to note that the execution stream of code within the asynchronous nature of Node.js is not per request, as it may be in other languages such as PHP or Python. In other words, imagine that you have two transactions, X and Y, that were requested by an end-user.

As Node.js begins to execute code from transaction X, it is also executing code from transaction Y, and because Node.js is asynchronous, the code from transaction X merges in the queue with code from transaction Y. Code from both transactions is essentially waiting in the queue to be executed by the event loop. Thus, if the event loop is blocked by code from transaction X, the slowdown in execution may impact the performance of transaction Y.

This non-blocking, single-threaded nature of Node.js is the fundamental difference: slow code execution can potentially impact every request within the queue, in a way that it does not in most other languages. Thus, in order to ensure healthy performance, it is critical for a modern Node.js application to monitor the event loop and collect vital metrics on the behavior that may impact the performance of your Node.js application.
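One simple way to observe this effect (a toy sketch rather than a production monitor) is to schedule a zero-delay timer and measure how late it actually fires while synchronous code holds the loop:

```javascript
// Schedule a zero-delay timer and measure how late it actually fires.
// Synchronous code that holds the event loop delays the timer by
// roughly the amount of time it blocked.
function measureLagOnce(report) {
  const scheduled = Date.now();
  setTimeout(() => report(Date.now() - scheduled), 0);
}

measureLagOnce((lag) => console.log(`event loop lag: ~${lag} ms`));

// Simulate "transaction X" blocking the loop with ~200 ms of
// synchronous work; every queued callback, including the timer
// above, must wait for it to finish.
const blockUntil = Date.now() + 200;
while (Date.now() < blockUntil) {
  // busy-wait: nothing else can run on the single thread
}
```

Sampling this lag continuously (an APM agent does essentially this on an interval) gives you the core event loop health metric.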

Memory Leaks

The built-in Garbage Collector (GC) of V8 automatically manages memory so that the developer does not need to. V8 memory management works similarly to that of other programming languages and, likewise, it is susceptible to memory leaks. Memory leaks are caused when application code reserves memory in the heap and fails to free that memory when it is no longer needed. Over time, a failure to free the reserved memory causes memory usage on the machine to rise. If you choose to ignore memory leaks, Node.js will eventually throw an error because the process is out of memory, and it will shut down.

In order to understand how GC works, you must first understand the difference between live and dead regions of memory. Root objects – and any object reachable from them by a chain of pointers – are considered live. Everything else is considered dead and is targeted for cleanup by the GC cycle. The V8 GC engine identifies dead regions of memory and attempts to release them so that they’re available again to the operating system.

Upon each V8 garbage collection cycle, the heap memory held by dead objects should, in theory, be completely flushed out. Unfortunately, some objects persist in memory after the GC cycle and are never cleared. Over time, these objects constitute a “leak” and will continue to grow. Eventually, the memory leak will increase your memory usage and cause your application and server performance to suffer. As you monitor your heap memory usage, you should compare usage before and after each GC cycle. Specifically, you should watch for a full GC cycle and track the heap usage afterward. If heap usage is growing over time, this is a strong indication of a possible memory leak. As a general rule, you should be concerned if heap usage keeps growing across several GC cycles and does not eventually clear up.
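Tracking that heap growth can be sketched with Node's built-in `process.memoryUsage()`; the leak below is deliberately simulated, and the handler name is made up for illustration:

```javascript
// Sample heap usage with Node's built-in process.memoryUsage().
// A heapUsed value that keeps rising across GC cycles suggests a leak.
function heapUsedBytes() {
  return process.memoryUsage().heapUsed;
}

// Simulate a leak: objects retained forever in a module-level array,
// as might happen with an ever-growing cache or listener list.
const leaked = [];
function leakyHandler() {
  leaked.push(new Array(10000).fill('payload')); // reference never released
}

const before = heapUsedBytes();
for (let i = 0; i < 100; i++) leakyHandler();
const after = heapUsedBytes();

console.log(`heap grew by ${((after - before) / 1024 / 1024).toFixed(1)} MB`);
```

Because the `leaked` array keeps every object reachable, no GC cycle can reclaim them, which is exactly the pattern a rising post-GC heap reading reveals.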

Once you’ve identified that a memory leak is occurring, your options are to collect heap snapshots and compare them over time. Specifically, you may be interested in understanding which objects and classes have been steadily growing. Performing a heap dump can be taxing on your application, so once a memory leak has been identified, diagnosing the problem is best performed in a pre-production environment so that the performance of your production applications is not impacted. Diagnosing memory leaks may prove difficult, but with the right tools you can both detect a memory leak and eventually diagnose the problem.

Application Topology

The final performance component to measure in this top-5 list is your application topology. Because of the advent of the cloud, applications can now be elastic in nature: your application environment can grow and shrink to meet your user demand. Therefore, it is important to take an inventory of your application topology to determine whether or not your environment is sized optimally. If you have too many virtual server instances then your cloud-hosting cost is going to go up, but if you do not have enough then your business transactions are going to suffer.

It is important to measure two metrics during this assessment:

  • Business Transaction Load
  • Container Performance

Business transactions should be baselined, and you should know at any given time the number of servers needed to satisfy your baseline. If your business transaction load increases unexpectedly, such as to more than two standard deviations above normal load, then you may want to add additional servers to satisfy those users.

The other metric to measure is the performance of your containers. Specifically, you want to determine whether any tiers of servers are under duress and, if they are, you may want to add additional servers to that tier. It is important to look at the servers across a tier, because an individual server may be under duress due to factors like garbage collection, but if a large percentage of servers in a tier are under duress then it may indicate that the tier cannot support the load it is receiving.
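A simplified version of that tier-level check might look like the following, where `tierNeedsScaling` and both thresholds are made-up illustrations rather than recommended values:

```javascript
// Flag a tier for scaling only when a large share of its servers are
// under duress; a single hot server is treated as noise.
function tierNeedsScaling(servers, cpuThreshold = 0.8, shareThreshold = 0.5) {
  const stressed = servers.filter((s) => s.cpu > cpuThreshold).length;
  return stressed / servers.length >= shareThreshold;
}

// One hot server may just be in a GC pause...
console.log(tierNeedsScaling([{ cpu: 0.95 }, { cpu: 0.3 }, { cpu: 0.4 }])); // → false

// ...but when most of the tier is hot, it likely cannot support its load.
console.log(tierNeedsScaling([{ cpu: 0.95 }, { cpu: 0.9 }, { cpu: 0.85 }])); // → true
```

The point of the share threshold is precisely the one made above: scaling decisions should be driven by the tier as a whole, not by any individual server.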

Because your application components can scale individually, it is important to analyze the performance of each application component and adjust your topology accordingly.

Conclusion

This article presented a top-5 list of metrics that you might want to measure when assessing the health of your application. In summary, those top-5 items were:

  • Business Transactions
  • External Dependencies
  • The Event Loop
  • Memory Leaks
  • Application Topology

In the next article we’re going to pull all of the topics in this series together to present the approach that AppDynamics took to implementing its APM strategy. This is not a marketing article, but rather an explanation of why certain decisions and optimizations were made and how they can provide you with a powerful view of the health of a virtual or cloud-based application.

Interested in monitoring your Node.js application performance? Check out a free trial today!

Omed Habib
Omed Habib is a Director of Product Marketing at AppDynamics. He originally joined AppDynamics as a Principal Product Manager to lead the development of their world-class PHP, Node.js and Python APM agents. An engineer at heart, Omed fell in love with web-scale architecture while directing technology throughout his career. He spends his time exploring new ways to help some of the largest software deployments in the world meet their performance needs.
