Key Server Performance Metrics For Actionable Monitoring

The web server serves as your application's mission control center, and accurately assessing the health of your application is largely dependent on reviewing key performance metrics gathered through an effective server monitoring strategy.

Outages, slow response times, and other performance issues all have a negative impact on user experience, and the root cause of a problem can be difficult to determine due to the abundance of complex data available. Identifying key server metrics through performance monitoring can reduce error rates and improve the overall productivity and profitability of your organization. 

While there are many important metrics available for review, narrowing that list down to the most critical performance indicators can help streamline your efforts for maximum efficiency. The following suggestions focus on key components of the various information available to help you improve users' experience, reduce internal IT frustrations, and optimize overall application performance.

Requests per second (RPS)

A web server's primary function is to receive and process requests, but if your server becomes overloaded with requests, performance can suffer. RPS is a metric that measures the number of requests received during a specified monitoring period, often in the one- to five-minute range. RPS counts every request, regardless of what the request involves.

Evaluating RPS gives you insight into how many requests your server can handle before problems arise, and it is a helpful metric when a web application's performance is slow.
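As an illustration, here is a minimal Python sketch that buckets request timestamps into fixed windows and computes RPS per window. It assumes the timestamps have already been parsed from your access logs, and the 60-second window is a stand-in for whatever monitoring interval you use:

```python
from collections import Counter
from datetime import datetime

def requests_per_second(timestamps, window_seconds=60):
    """Average RPS per fixed window of `window_seconds`.

    timestamps: iterable of datetime objects, one per request
    (assumed already parsed from an access log).
    Returns {window_start_epoch: requests_per_second}.
    """
    buckets = Counter()
    for ts in timestamps:
        epoch = int(ts.timestamp())
        buckets[epoch - epoch % window_seconds] += 1
    return {start: count / window_seconds
            for start, count in buckets.items()}

# 120 requests spread evenly across one minute -> 2.0 RPS for that window.
demo = [datetime(2024, 1, 1, 12, 0, s // 2) for s in range(120)]
```

A real pipeline would stream log lines instead of holding all timestamps in memory, but the bucketing idea is the same.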



Uptime

The server's availability is ultimately the most critical component of your operation. If the server isn't reliable, your application and end users suffer. Uptime measures how long a server has been running: 100 percent is the ideal, and many web hosting packages advertise 99.9 percent or more. A server in production needs attention if its uptime falls below 99 percent.

Uptime monitoring tools are often built into web servers, but third-party services can also provide uptime reports for you.
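To make those percentage thresholds concrete, a small sketch (the 30-day period and the targets are illustrative) converts an uptime target into the downtime it allows:

```python
def uptime_percent(period_seconds, downtime_seconds):
    """Percentage of the period the server was up."""
    return 100.0 * (period_seconds - downtime_seconds) / period_seconds

def allowed_downtime_seconds(target_percent, period_seconds):
    """Maximum downtime that still meets the uptime target."""
    return period_seconds * (100.0 - target_percent) / 100.0

MONTH = 30 * 24 * 60 * 60  # a 30-day month, in seconds

# 99.9 percent uptime allows about 43 minutes of downtime per month;
# 99 percent allows 7.2 hours.
```

Seen this way, the gap between 99 and 99.9 percent is the difference between hours and minutes of monthly downtime.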


Error rates

The error rate is a metric that calculates the percentage of requests that fail or never receive a response. Tracking the number of HTTP server errors gives you greater insight into application malfunctions or potential issues, allowing your DevOps teams to assess and repair errors more efficiently.

Errors are going to happen, particularly when the server is under heavy load. Set up alerting for HTTP 5xx status codes so that you can identify and contain problems before they multiply or impact the overall health of the application.
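A minimal sketch of that calculation; the one-percent alert threshold here is only a placeholder for whatever your team chooses:

```python
def error_rate(status_codes, alert_threshold_percent=1.0):
    """Percentage of responses with a 5xx status in a window.

    status_codes: HTTP status codes observed during the window.
    Returns (rate_percent, should_alert); the threshold is illustrative.
    """
    codes = list(status_codes)
    if not codes:
        return 0.0, False
    server_errors = sum(1 for code in codes if 500 <= code <= 599)
    rate = 100.0 * server_errors / len(codes)
    return rate, rate >= alert_threshold_percent
```

Note that 4xx codes are excluded on purpose: they usually indicate client mistakes rather than server malfunctions.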


Thread count

Thread count tracks the number of requests being handled concurrently at a given moment, which also lets you assess the server load.

Many servers are configured to limit the number of threads per process. Once the thread count surpasses that maximum, new requests are queued until a thread becomes available, which can lead to requests timing out if processing takes too long. Consequently, thread count is an important performance indicator: if your application generates too many threads, you may see an increase in errors.
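A quick way to see the idea from inside a Python process; the maximum here is a made-up configuration value, not a server default:

```python
import threading

def thread_headroom(max_threads):
    """Compare this process's live thread count against a cap.

    max_threads: a hypothetical configured limit; real servers take
    this from settings such as a worker-pool or thread-pool size.
    """
    active = threading.active_count()
    return {
        "active": active,
        "max": max_threads,
        "saturated": active >= max_threads,
        "headroom": max_threads - active,
    }
```

In production you would sample this (or the server's own thread-pool gauge) on an interval and alert as headroom approaches zero.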


System-level performance metrics

If you're experiencing performance degradation related to your server, CPU usage, memory utilization, or disk usage may be to blame. Tracking hardware utilization metrics helps you detect critical issues such as capacity shortfalls, limited hard drive space, insufficient RAM, or resource bottlenecks.

If a physical component of your system is struggling, all related tasks will experience performance issues as well. Having comprehensive access to system-level metrics makes it easier to quickly troubleshoot server performance issues and repair or replace problematic system elements.
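Using only the Python standard library you can already watch two of these signals; the disk path is illustrative, and fuller coverage (per-core CPU, RAM) typically comes from a monitoring agent or a library such as psutil:

```python
import os
import shutil

def system_snapshot(path="/"):
    """Disk usage for `path` plus 1-minute load average per core.

    Load average is Unix-only; memory and detailed CPU stats need an
    agent or an extra library, so they are omitted here.
    """
    disk = shutil.disk_usage(path)
    load1, _, _ = os.getloadavg()
    cores = os.cpu_count() or 1
    return {
        "disk_used_percent": 100.0 * disk.used / disk.total,
        "load_per_core": load1 / cores,
    }
```

A sustained load per core above 1.0, or disk usage creeping toward 100 percent, are the kinds of trends worth alerting on.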


Average response time (ART)

Average response time measures the duration of request/response cycles, so you can assess the average amount of time the server takes to respond to a request. A low average response time generally indicates that the application is performing at sufficient speed to ensure a positive user experience.

Since ART is an average of every request/response cycle over a period of time, the metric can be skewed by unusual circumstances or slow components, which may make system performance seem slower than it actually is.
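That skew is easy to demonstrate: comparing the mean against the median (a sketch with made-up latencies in milliseconds) shows how a single slow request drags the average up:

```python
from statistics import mean, median

def response_time_summary(samples_ms):
    """Mean and median latency; a large gap between them hints
    that outliers are inflating the average."""
    return {"mean_ms": mean(samples_ms), "median_ms": median(samples_ms)}

# Nine fast requests and one 5-second straggler:
# the mean is 590 ms even though the typical request took 100 ms.
skewed = [100] * 9 + [5000]
```

This is why many teams track percentiles or the median alongside ART rather than relying on the average alone.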


Peak response times (PRT)

The most effective technique for getting an accurate understanding of response time is to evaluate both average response time and peak response time metrics. PRT measures the length of request/response cycles to track the longest cycle within the monitoring period. If your ART is under one second but your PRT is significantly higher, one request took much longer than the rest, which may be an anomaly. If ART and PRT are both high, you most likely have a server problem.

PRT helps identify which resources are problematic, as well as the root cause of the issue. ART is a more general view of overall performance.
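The comparison described above can be sketched as a simple classifier; the one-second budget and the spike factor are illustrative thresholds, not standards:

```python
def classify_window(samples_ms, art_budget_ms=1000.0, spike_factor=5.0):
    """Combine ART and PRT into a rough verdict for one window."""
    art = sum(samples_ms) / len(samples_ms)
    prt = max(samples_ms)
    if art > art_budget_ms:
        verdict = "sustained slowness: likely a server problem"
    elif prt > spike_factor * art:
        verdict = "isolated spike: likely an anomaly"
    else:
        verdict = "healthy"
    return art, prt, verdict
```

The point is not the exact numbers but the two-signal logic: a high peak against a low average points at one request, while a high average points at the server.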


Security-related metrics

A security breach or unauthorized access can result in data loss, compliance failure, or malicious changes. Monitoring file modifications, system changes, or access to sensitive resources can facilitate awareness of intrusion or vulnerabilities. 

Servers have so many background tasks running that it's easy to miss signs of a breach. Tracking file-related activity and monitoring logs generated by servers, applications, and security devices allows system administrators to spot patterns, problems, errors, or inconsistencies to help keep infrastructure secure.
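File-modification tracking can be approximated with content hashes. This is a hedged sketch (which paths to watch and how often to snapshot are up to you), not a replacement for a dedicated file-integrity monitoring tool:

```python
import hashlib
from pathlib import Path

def fingerprint(paths):
    """SHA-256 digest per file; take snapshots periodically."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def changed_files(before, after):
    """Paths added, removed, or modified between two snapshots."""
    return sorted(p for p in set(before) | set(after)
                  if before.get(p) != after.get(p))
```

Diffing two snapshots of, say, configuration directories surfaces unexpected modifications that would otherwise be buried in routine activity.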


Server Performance Resources

Here are some additional solutions and resources to support your needs.





Gain insight into your infrastructure with Server Visibility

Enterprises need to manage their applications end-to-end including their dependencies on the underlying server infrastructure to deliver exceptional experience.

3 Common Application Performance Bottlenecks

Optimizing application performance today is an entirely different animal than it was even a decade ago. Here’s how to side-step slow-downs and eliminate the most common application performance bottlenecks.

Why End User Experience Monitoring Is Critical for IT Success

By monitoring the end-to-end user experience, IT professionals gain keen insights into application performance from the customer's perspective.
