Welcome back to my series on migration to the cloud. In my last post we discussed all of the effort you need to put into the planning phase of your migration. In this post we are going to focus on what should happen directly after the migration has been completed.
Regardless of how well you planned or if you just decided to dive right in without any forethought, there are steps that need to be taken after your migration to ensure your application is working properly and performing up to snuff. These steps need to be performed whether you chose to use a public, private or hybrid cloud implementation.
Step 1: Take Your New Cloud Based Application for a Test Drive
Go easy at first and just roll through the functionality as a user would. If it doesn't work well for you, then you know it won't work well when there are a bunch of users hitting it.
Assuming things went well with your functional test it’s time to go bigger. Lay down a load test and see step 2 below.
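If you don't already have a load testing tool in hand, even a rough harness gives you a baseline. Here's a minimal sketch in Python that simulates concurrent users; the `send_request` stub is a placeholder you'd swap for a real HTTP call against your own endpoint:

```python
import concurrent.futures
import time

def send_request():
    """Stand-in for a real HTTP call to your application
    (swap in e.g. urllib.request.urlopen against your endpoint)."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated service time
    return time.perf_counter() - start

def run_load_test(concurrent_users=20, requests_per_user=5):
    """Fire requests from many simulated users at once and
    collect per-request response times."""
    timings = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(send_request)
                   for _ in range(concurrent_users * requests_per_user)]
        for f in concurrent.futures.as_completed(futures):
            timings.append(f.result())
    timings.sort()
    return {
        "requests": len(timings),
        "avg_s": sum(timings) / len(timings),
        "p95_s": timings[int(len(timings) * 0.95)],
    }

if __name__ == "__main__":
    print(run_load_test())
```

Even a toy harness like this surfaces the numbers you care about: how average and 95th-percentile response times move as you crank up the user count.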
Step 2: Monitoring is Not the Job of Your Users
If you're relying on the users of your application to let you know when there are performance or stability issues, you are already a major step behind your competition. If you planned properly, then you have a monitoring system in place. If you're just winging it, put a monitoring system in place now!
Here are the things your monitoring tool should help you understand:
- Architecture and Flow: You design an application architecture to support the type of application you are building. How do you really know if you have deployed the architecture you designed in the first place? How do you know if your application flow changes over time and causes problems? Cloud computing environments are dynamic and can shift at any given time. You need a tool in place that lets you know exactly what happened, when it happened, and whether it caused any impact.
What happens if you don’t have a flow map? Simple, when there’s a problem you waste a bunch of time trying to figure out what components were involved in the problematic transaction so that you can isolate the problem to the right component.
- Response Times: Slow sucks! You moved to the cloud for many potential reasons, but one thing is certain: your users don't want your application(s) to run slowly. It seems obvious to monitor the response time of your applications, but I'm constantly amazed by how many organizations still don't have this type of monitoring in place. There are really only two options in this category: let your users tell you when (notice I didn't say if) your application is slow, or have a monitoring tool alert you right away.
- Resources: You need to keep an eye on the resources you are consuming in the cloud. New instances of your application can quickly add up to a large expense if your code is inefficient. You need to understand how well your application scales under load and fix the resource hogs so that you can drive better value out of your application as usage increases.
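As a rough illustration of the response-time point above, here's a minimal sketch of a rolling-window check in Python; the threshold and window size are illustrative assumptions, not defaults from any particular monitoring product:

```python
from collections import deque

class ResponseTimeMonitor:
    """Rolling-window check: alert when the average response time of
    recent transactions crosses a threshold. Threshold and window
    size are illustrative placeholders."""

    def __init__(self, threshold_s=2.0, window=100):
        self.threshold_s = threshold_s
        self.samples = deque(maxlen=window)  # keeps only the most recent samples

    def record(self, response_time_s):
        """Feed in the measured duration of one transaction."""
        self.samples.append(response_time_s)

    def is_slow(self):
        """True when the recent average exceeds the threshold."""
        if not self.samples:
            return False
        return sum(self.samples) / len(self.samples) > self.threshold_s
```

A real product would do this per business transaction with dynamic baselines rather than a fixed threshold, but the core idea is the same: the tool tells you the app is slow before your users do.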
Step 3: Elasticity
Elasticity is a key benefit of migrating your application to the cloud. Traditional application architectures accounted for periodic spikes in workload by permanently over-allocating resources. Put simply, we used to buy a bunch of servers so that we could handle the monthly or yearly spikes in activity. Most of these servers sat nearly idle the rest of the year and generated heat.
If you’re going to take advantage of the inherent elasticity within your cloud environment you need to understand exactly how your application will respond to being overloaded and how your infrastructure adapts to this condition. Cloud providers have tools to execute the dynamic shift in resources but ultimately you need a tool to detect the trigger conditions and then interface with the dynamic provisioning features of your cloud.
The combination of slow transactions AND resource exhaustion would be a great trigger to spin up new application instances. Each condition on its own does not justify adding a new resource.
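That combined trigger can be sketched in a few lines; the thresholds below are illustrative assumptions, not recommendations from any specific cloud provider:

```python
def should_scale_up(avg_response_s, cpu_pct, mem_pct,
                    response_threshold_s=2.0,
                    resource_threshold_pct=85.0):
    """Scale out only when transactions are slow AND a resource is
    exhausted. Either signal alone doesn't justify a new instance:
    slow code on an idle box needs fixing, and a busy box that still
    serves fast responses is just running efficiently.
    Thresholds are illustrative placeholders."""
    slow = avg_response_s > response_threshold_s
    exhausted = (cpu_pct > resource_threshold_pct
                 or mem_pct > resource_threshold_pct)
    return slow and exhausted
```

A decision function like this would sit between your monitoring tool's alerting output and your cloud provider's provisioning API, so new instances spin up only when both conditions hold.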
The point here is that migrating to the cloud is not a magic bullet. You need to know how to use the features that are available and you need the right tools to help you understand exactly when to use those features. You need to stress your new cloud application to the point of failure and understand how to respond BEFORE you set users free on your application. Your users will certainly break your application and during an event is not the proper time to figure out how to manage your application in the cloud.
Let failure be your guide to success. Fail when it doesn't matter so that you can succeed when the pressure is on. The cloud auto-scaling features shown in this post are part of AppDynamics Pro 3.7. Click here to start your free trial today.
Public cloud, private cloud, hybrid cloud, cloud bursting, cloud storming, elastic compute, IaaS, PaaS, SaaS, the list of terms goes on and on ad nauseam. Like it or not, cloud computing has taken hold as an important design consideration in companies ranging from small startups to large established enterprises. The concepts and technologies behind cloud computing have been around for quite a long time now, so why is it taking so long for so many companies to move their applications and realize the benefits that cloud computing offers?
Getting beyond the ridiculous fear of the unknown, security concerns are a major inhibitor to cloud adoption, but between private cloud options and a slew of security technologies and methods, those concerns should only impact a small portion of applications. The real problem, in my opinion, is that nobody wants to fail and suffer damage to their personal and/or corporate brands. I've seen so many companies make a poor transition to cloud computing, and it impacts their revenue and customer retention.
Companies like Netflix, Orbitz, and Family Search have been tremendously successful with their cloud computing initiatives. Do they have better technologists than other companies? Are their processes better than others? Do they have special tools that nobody else has? Or have they made a commitment that is okay to fail as long as they fail fast and don’t repeat their mistakes? The answer might be a combination of all of the above depending upon which organizations we are talking about.
There is a wealth of information published on the internet about deploying applications to the cloud; there are companies that exist solely to help you move applications to the cloud; there are even companies that exist to help you figure out IF you should move your application(s) to the cloud. I used to work for one of those companies, and what we saw over and over again was that our clients really didn't know how to get started down the path of moving their existing applications to a cloud environment. Even worse were the companies that thought they knew what it took to successfully migrate their application(s) but didn't. All of these companies were missing crucial bits of information that would make the difference between a smooth, painless migration and a rough, frustrating one.
The tools, processes, and information you use in the planning, execution, and ongoing management of your cloud applications will make all of the difference between success and failure.
In this blog series I’ll discuss some of the key considerations related to planning and execution of migrating your applications to the cloud. I’ll cover a few important aspects of deciding IF you should move your applications to the cloud and then focus mostly on what happens after you’ve decided to go for it. Everything I discuss will be directly from my experience moving and monitoring cloud applications within an enterprise and as a consultant.
In my opinion it’s much harder to move an existing application than it is to set up a new application in the cloud. The good news is that there are common considerations for each of these scenarios so next week I’ll discuss the following:
- Should we move or deploy to the cloud?
- What can I monitor to ensure my users are not impacted in a negative way?
In future posts I'll discuss the planning and migration phases, how to take advantage of cloud elasticity, and good ongoing management practices. I might even preview some awesome new features we're cooking up to make management of your applications faster and easier (shhhhh, it'll be our little secret).
At AppDynamics we’re always looking to partner with vendors that can significantly enhance the visibility we provide our customers when it comes to managing application performance in production. One area we see synergies is in synthetic monitoring and load testing, specifically for the next generation of cloud and mobile applications. So when Apica, the performance and load testing company for cloud, web and mobile applications came to us to join forces, we were stoked.
On June 26th, Apica and AppDynamics announced a partnership to provide DevOps with an integrated monitoring solution that delivers 10x visibility into application availability and performance, with the power to identify the root cause of slowdowns and outages in as few as three clicks.
We've seen how development agility can be directly proportional to production fragility. Reproducing complex, distributed production architectures in test is tough, because data volumes, computing resources, and user behavior always differ. This is why many organizations are starting to test application performance in production, so they can stress test out-of-hours and pro-actively identify severity-1 incidents before end users and the business are impacted.
Monitoring visibility across geo-locations, browsers and mobile devices
Giving DevOps the visibility to see how their applications perform in production, under load, across geo-locations, browsers, and mobile devices helps them manage application performance pro-actively. This is where Apica's synthetic monitoring and load testing products help organizations understand how an application is performing from an end user, location, browser, or device perspective.
“Users expect a fast and reliable web, cloud, and mobile experience. Every second delay can cost businesses valuable customers and revenue,” says Sven Hammar, CEO of Apica. “Together with AppDynamics, we’re providing users with best-of-breed solutions to ensure uptime and availability for revenue-critical applications. They’ll have the most complete understanding available of the metrics that are powering or causing problems for their applications so they can take measures to improve performance.”
AppDynamics integrates with the following three Apica products:
Apica WebPerformance – Verifies the performance and availability of your mission-critical applications from over 80 different locations world-wide.
Apica ProxySniffer – Automates creation of load test scripts and scenarios by recording real end-user traffic
Apica LoadTest – A cloud-based load testing tool for your mission-critical applications
You can access the Apica portal via any web browser or mobile device. Through this new partnership, AppDynamics real-time monitoring data is now seamlessly available in the Apica portal, so DevOps can drill down a level further to understand the root cause of slowdowns and availability issues.
Like AppDynamics, Apica shares key synergies with application performance management (APM):
Complete Lifecycle Visibility - We have customers that deploy our solutions in both production and pre-production environments. For Apica, scripts used for load testing through LoadTest can also be applied to WebPerformance to test application availability and performance in production.
Real-time monitoring – understanding the true end user experience 24/7; specifically around business transactions and the application infrastructure, so pro-active alert notifications can be sent to DevOps when service degradation is identified.
Built for the Cloud – Apica LoadTest is cloud-ready so organizations can automate load tests and find the optimal configuration for their cloud deployments.
Below is a short clip of the direct integration of AppDynamics with Apica, allowing DevOps to monitor all of the business transactions and system resources in real-time. There is also contextual business transaction drill-down that takes a user from the Apica portal into the AppDynamics user interface so the root cause of performance issues can rapidly be found.
You can start your free trial with Apica’s performance and load testing solution by clicking here. For more details on monitoring your applications with Apica and AppDynamics, please visit Apica’s AppDynamics partner page.
Many IT organizations are migrating some of their applications to the cloud to become more agile, alleviate operational complexity, and spend less time managing infrastructure and servers. If you haven't already, the next question you may ask yourself is, "How will we monitor these applications, and where should we even begin with so many monitoring tools on the market?"
I’m glad you asked. Here is a list of gotchas you should look out for. If you have your own list, feel free to comment below and share with us.
1. Lack of End User or Business Context - With apps running in the cloud, monitoring infrastructure metrics indicates very little about your end-user experience, or the performance of your apps or business running in the cloud. End users experience business transactions, so make sure your monitoring gives you this visibility.
2. Node Churn - How well does your application monitoring solution deal with node churn – the provisioning and de-provisioning of servers and application nodes? The monitoring solution has to work in dynamic, virtual, and elastic environments where change is constant; otherwise you'll end up with blind spots in your application and monitoring. Many current monitoring solutions are unable to monitor and adapt to dynamic cloud infrastructure changes, requiring manual intervention by operations so new nodes can be registered and monitored.
3. Agent-less is Tough in the Cloud - You may not have any major issues with installing a packet sniffer or network-monitoring appliance in your own private cloud or data center, but you won't be able to place these kinds of devices in PaaS or IaaS environments to monitor your application performance. Monitoring agents, in comparison, can easily be embedded or piggy-backed as part of an application deployment in the cloud. Agent-less may not be an option when trying to monitor many cloud applications.
4. High Network Bandwidth Costs - Cloud providers typically charge per gigabyte of inbound and outbound traffic. If your cloud application has 100 nodes and you're collecting megabytes of performance data every minute, all of that data has to be communicated outside of the cloud to your monitoring solution's management server, which can be on-premise or in another cloud. Monitoring what's relevant in your application versus monitoring everything means you'll avoid exorbitant bandwidth costs for transferring monitoring data.
5. Inflexible Licensing - If you want to monitor specific nodes, will your application monitoring vendor lock each license down to a physical server, hostname, or IP, or can your licenses float to monitor any server/node? This can be a severe limitation, as your agents are then locked down to a specific node indefinitely. Even if you weren't monitoring applications running in the cloud, it's still a nuisance to have a monitoring agent handcuffed to a physical server without the licensing flexibility to move agents around to monitor different servers or nodes. As stated above, with node churn occurring frequently in cloud environments, you need your monitoring solution to be as flexible as possible so you can deploy agents anywhere, at any time.
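To put rough numbers on the bandwidth point above, here's a back-of-the-envelope calculation; the $0.09/GB rate is a placeholder, so check your provider's current egress pricing:

```python
def monthly_monitoring_egress_cost(nodes, mb_per_node_per_min,
                                   usd_per_gb=0.09):
    """Back-of-the-envelope egress cost for shipping monitoring data
    out of the cloud. The $/GB rate is a placeholder; check your
    provider's current data transfer pricing."""
    minutes_per_month = 60 * 24 * 30
    gb_per_month = nodes * mb_per_node_per_min * minutes_per_month / 1024
    return gb_per_month * usd_per_gb

# 100 nodes each sending 1 MB of metrics per minute works out to
# ~4,219 GB/month, or roughly $380/month at $0.09/GB.
```

Trim what each agent sends (relevant business transactions, not every metric on the box) and that number shrinks fast.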