Managed Service Providers have a big task these days. A few years ago, applications were much less complex, and understanding those application architectures and dependencies before on-boarding was much simpler. These days we have applications that combine classic architecture, service-oriented architecture, cloud architecture, big data components, and more. On-boarding a new customer application can be risky, and if it goes wrong it can leave a bad taste in your new client's mouth.
The risk alone is cringeworthy, so to minimize it most MSPs seek out and pay top dollar for experts who know the particular type of application they need to on-board. This is a pretty effective solution from a risk perspective, but from a cost perspective it eats into the MSP's bottom line.
Reduce Risk Without Driving Up Cost
Some of the best MSPs out there have figured out that there is a way to substantially reduce on-boarding risk, time and cost. Here is the secret recipe these MSPs have discovered:
- Automatic discovery and rendering of all application components (Figure 1)
- Automatic discovery and correlation of application dependencies (Figure 1)
- Automatic detection and identification of application problems (Figure 2)
- Automatic discovery and correlation of application load and resource consumption (Figure 3)
The recipe above is all about letting the right tools collect, analyze, correlate, and visualize the important application aspects that every MSP needs to know before they attempt to on-board a customer's application. The alternative is spending a bunch of time and money trying to manually piece together this information from tribal knowledge, log files, shell scripts, trouble tickets, etc…
Verification and Validation
An awesome side effect of using a tool for all of those tasks is that you can use the same tool to prove to your new customers that you successfully moved their application onto your platform. Verifying the architecture, response times, stability, and dependencies is critical to calling any application move a success, so if you use the same tool for your before-and-after analysis you are way ahead of the game.
All of the functionality mentioned in this blog post is available today in AppDynamics Pro. If you plan on moving an application, you need to see for yourself how much AppDynamics Pro can help. Click here to begin your free self-service trial of AppDynamics Pro today.
Your application is fast and scalable, right? How do you know? How often do you run performance or load tests? In this post I will give an overview of the tools of the trade for performance and load testing web applications.
Open-source performance testing tools
These tools allow you to load test your application for free. My preferred tool is Bees with Machine Guns — not just because of the epic name, but primarily because it uses Amazon’s EC2 to generate high levels of concurrency with ease.
- Bees with Machine Guns – A utility for arming (creating) many bees (micro EC2 instances) to attack (load test) targets (web applications).
- MultiMechanize – Multi-Mechanize is an open source framework for performance and load testing. It runs concurrent Python scripts to generate load (synthetic transactions) against a remote site or service. Multi-Mechanize is most commonly used for web performance and scalability testing, but can be used to generate workload against any remote API accessible from Python (see the sketch after this list).
- Siege – Siege is an http load testing and benchmarking utility. It was designed to let web developers measure their code under duress, to see how it will stand up to load on the internet. Siege supports basic authentication, cookies, HTTP and HTTPS protocols. It lets its user hit a web server with a configurable number of simulated web browsers. Those browsers place the server “under siege.”
- httperf – Httperf is a tool for measuring web server performance. It provides a flexible facility for generating various HTTP workloads and for measuring server performance. The focus of httperf is not on implementing one particular benchmark but on providing a robust, high-performance tool that facilitates the construction of both micro- and macro-level benchmarks. The three distinguishing characteristics of httperf are its robustness, which includes the ability to generate and sustain server overload, its support for the HTTP/1.1 and SSL protocols, and its extensibility to new workload generators and performance measurements.
- Apache Bench – AB is a tool for benchmarking your Apache HTTP server. It is designed to give you an impression of how Apache performs.
- JMeter – Apache JMeter may be used to test performance both on static and dynamic resources (files, Servlets, Perl scripts, Java Objects, databases and queries, FTP servers and more). It can be used to simulate a heavy load on a server, network or object to test its strength or to analyze overall performance under different load types. You can use it to make a graphical analysis of performance or to test your server/script/object behavior under heavy concurrent load.
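To make the Multi-Mechanize entry above a little more concrete, here is a minimal sketch of what a virtual-user script might look like. It follows the tool's documented convention of a Transaction class with a run() method; the target URL and the timer name are placeholders made up for illustration.

```python
# v_user.py - a minimal Multi-Mechanize-style virtual user script (sketch)
import time
import urllib.request

class Transaction(object):
    def __init__(self):
        # Multi-Mechanize reads per-step timings out of this dict after each run
        self.custom_timers = {}

    def run(self):
        start = time.time()
        resp = urllib.request.urlopen('http://www.example.com/')  # placeholder target
        resp.read()
        self.custom_timers['Homepage'] = time.time() - start
        assert resp.status == 200, 'unexpected HTTP status: %s' % resp.status

# quick sanity check of the script outside the framework
if __name__ == '__main__':
    t = Transaction()
    t.run()
    print(t.custom_timers)
```

The framework then runs many copies of scripts like this concurrently and aggregates the timings, which is what produces the load.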
Performance testing tools as a service
Through these services you can build, execute, and analyze performance tests.
- Apica Load Test – Cloud-based load testing for web and mobile applications
- Blitz.io – Blitz allows you to continuously monitor your app 24×7 from around the world. You can emulate a single user or hundreds of users all day, every day, and be notified immediately if anything goes wrong.
- Soasta – Build, execute, and analyze performance tests on a single, powerful, intuitive platform.
- Blazemeter – BlazeMeter is a self-service performance and load testing cloud, 100% JMeter-compatible. Easily run tests of 30k, 50k, 80k or more concurrent users, on demand.
Performance testing on the client side
The best place to get started is at Google with the Web Performance best practices.
- Google PageSpeed Insights – PageSpeed Insights analyzes the content of a web page, then generates suggestions to make that page faster. Reducing page load times can reduce bounce rates and increase conversion rates.
Web acceleration services
Through a simple DNS change, your website's traffic is routed through these services and your content is optimized and cached globally for better performance. This is an easy way to improve performance with minimal effort.
- Yottaa – All-in-one web optimization solution delivers speed, scale, security and actionable insight for any website.
- Cloudflare – Offers free and commercial, cloud-based services to help secure and accelerate websites.
- Torbit – Torbit helps you accurately measure your website’s performance and quantify how speed impacts your revenue.
- Incapsula – Incapsula offers state-of-the-art security and performance to websites of all sizes.
As always, please feel free to comment if you think I have missed something or if you have a request for content in an upcoming post.
Welcome back to my series on migration to the cloud. In my last post we discussed all of the effort you need to put into the planning phase of your migration. In this post we are going to focus on what should happen directly after the migration has been completed.
Regardless of how well you planned or if you just decided to dive right in without any forethought, there are steps that need to be taken after your migration to ensure your application is working properly and performing up to snuff. These steps need to be performed whether you chose to use a public, private or hybrid cloud implementation.
Step 1: Take Your New Cloud Based Application for a Test Drive
Go easy at first and just roll through the functionality as a user would. If it doesn't work well for you, then you know it won't work well when there are a bunch of users hitting it.
Assuming things went well with your functional test, it's time to go bigger. Lay down a load test and see step 2 below.
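If you want to script that first walk-through instead of clicking around by hand, something as small as the sketch below will do. The URLs are hypothetical placeholders; the point is simply to hit the handful of pages a real user would touch and confirm they still respond after the migration.

```python
# smoke_test.py - a minimal post-migration smoke test (sketch, placeholder URLs)
import urllib.request

PAGES = [
    'http://myapp.example.com/',          # placeholder: home page
    'http://myapp.example.com/login',     # placeholder: login form
    'http://myapp.example.com/checkout',  # placeholder: critical business flow
]

def smoke_test(pages):
    failures = []
    for url in pages:
        try:
            resp = urllib.request.urlopen(url, timeout=10)
            if resp.status != 200:
                failures.append((url, 'HTTP %s' % resp.status))
        except Exception as exc:
            failures.append((url, str(exc)))
    return failures

if __name__ == '__main__':
    problems = smoke_test(PAGES)
    if problems:
        for url, reason in problems:
            print('FAIL %s -> %s' % (url, reason))
    else:
        print('All pages responded with HTTP 200')
```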
Step 2: Monitoring is Not the Job of Your Users
If you’re relying on the users of your application to let you know if there are performance or stability issues, you are already a major step behind your competition. If you planned properly, then you have a monitoring system in place. If you’re just winging it, put in a monitoring system now!!!
Here are the things your monitoring tool should help you understand:
- Architecture and Flow: You design an application architecture to support the type of application you are building. How do you really know if you have deployed the architecture you designed in the first place? How do you know if your application flow changes over time and causes problems? Cloud computing environments are dynamic and can shift at any given time. You need to have a tool in place that lets you know exactly what happened, when it happened, and whether it caused any impact.
What happens if you don’t have a flow map? Simple: when there’s a problem, you waste a bunch of time trying to figure out which components were involved in the problematic transaction so that you can isolate the problem to the right component.
- Response Times: Slow sucks! You moved to the cloud for many potential reasons, but one thing is certain: your users don’t want your application(s) to run slowly. It seems obvious to monitor the response time of your applications, but I’m constantly amazed by how many organizations still don’t have this type of monitoring in place. There are really only two options in this category: let your users tell you when (notice I didn’t say if) your application is slow, or have a monitoring tool alert you right away (see the sketch after this list).
- Resources: You need to keep an eye on the resources you are consuming in the cloud. New instances of your application can quickly add up to a large expense if your code is inefficient. You need to understand how well your application scales under load and fix the resource hogs so that you can drive better value out of your application as usage increases.
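To illustrate the response-time point in miniature (this is just the principle, not how a real monitoring product is built), the sketch below polls a placeholder URL and complains whenever a request exceeds an assumed baseline. A proper tool baselines automatically and alerts through real channels instead of printing.

```python
# response_time_check.py - toy response-time alerting loop (sketch)
import time
import urllib.request

URL = 'http://myapp.example.com/'   # placeholder application URL
BASELINE_SECONDS = 0.5              # assumed acceptable response time
CHECK_INTERVAL_SECONDS = 60         # how often to poll

def measure(url):
    start = time.time()
    urllib.request.urlopen(url, timeout=30).read()
    return time.time() - start

if __name__ == '__main__':
    while True:
        elapsed = measure(URL)
        if elapsed > BASELINE_SECONDS:
            # a real tool would page someone; here we just print
            print('ALERT: %s took %.2fs (baseline %.2fs)' % (URL, elapsed, BASELINE_SECONDS))
        time.sleep(CHECK_INTERVAL_SECONDS)
```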
Step 3: Elasticity
Elasticity is a key benefit of migrating your application to the cloud. Traditional application architectures accounted for periodic spikes in workload by permanently over-allocating resources. Put simply, we used to buy a bunch of servers so that we could handle the monthly or yearly spikes in activity. Most of these servers sat nearly idle the rest of the year and generated heat.
If you’re going to take advantage of the inherent elasticity within your cloud environment you need to understand exactly how your application will respond to being overloaded and how your infrastructure adapts to this condition. Cloud providers have tools to execute the dynamic shift in resources but ultimately you need a tool to detect the trigger conditions and then interface with the dynamic provisioning features of your cloud.
The combination of slow transactions AND resource exhaustion would be a great trigger to spin up new application instances. Each condition on its own does not justify adding a new resource: slowness without resource exhaustion usually points at the code, and resource exhaustion without slowness isn't hurting your users yet.
The point here is that migrating to the cloud is not a magic bullet. You need to know how to use the features that are available, and you need the right tools to help you understand exactly when to use those features. You need to stress your new cloud application to the point of failure and understand how to respond BEFORE you set users free on your application. Your users will certainly break your application, and the middle of an incident is not the proper time to figure out how to manage your application in the cloud.
Let failure be your guide to success. Fail when it doesn't matter so that you can succeed when the pressure is on. The cloud auto-scaling features shown in this post are part of AppDynamics Pro 3.7. Click here to start your free trial today.
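Expressed as code, that trigger rule might look like the sketch below. The thresholds and the provision_new_instance() hook are hypothetical; in a real setup the provisioning step would call your cloud provider's auto-scaling API rather than printing.

```python
# scale_trigger.py - sketch of the "slow AND resource-exhausted" scale-up rule
SLOW_RESPONSE_SECONDS = 2.0   # assumed: average transactions slower than this are "slow"
CPU_EXHAUSTION_PCT = 85.0     # assumed: CPU above this means resources are exhausted

def should_scale_up(avg_response_seconds, cpu_percent):
    slow = avg_response_seconds > SLOW_RESPONSE_SECONDS
    exhausted = cpu_percent > CPU_EXHAUSTION_PCT
    # both conditions must hold before we add capacity
    return slow and exhausted

def provision_new_instance():
    # placeholder: a real implementation would call the cloud provider's API
    print('scaling up: adding an application instance')

if __name__ == '__main__':
    samples = [(1.2, 60.0), (2.5, 70.0), (2.8, 92.0)]  # (avg response time, CPU %)
    for response_time, cpu in samples:
        if should_scale_up(response_time, cpu):
            provision_new_instance()
```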
Planning to deploy or migrate an application to a cloud environment is a big deal. In my last post we discussed the value of using real business and IT requirements to drive the justification of using a cloud architecture. We also explored the importance of using monitoring information to understand your before and after picture of application performance and overall success.
In this post I am going to dive deeper into the planning phase. You can't expect to throw a half-assed plan in place and just deal with problems as they pop up during an application migration. That will almost certainly result in frustration for the end users, the IT staff, and the business that relies upon the application.
In reality, at least 90% of your total project time should be dedicated to planning and at most 10% to the actual implementation. The big question is: what are the most important aspects of the planning phase? Here's my cloud migration planning top 10 list:
- Application Portfolio Rationalization – Let's face reality for a moment… If you're in a large enterprise, you have multiple apps that perform a very similar business function at some level. Application Portfolio Rationalization is a method of discovering the overlap between your applications and consolidating where it makes sense. It's like spring cleaning for your IT department. You need to get your house in order before you decide to start moving applications, or you will just waste a lot of time and money moving duplicate business functionality across your portfolio.
- Business Justification and Goal Identification – If there is one thing I try to make clear in every blog post, it is that you need to justify your activities with business logic. If there is no business driver for a change, then why make the change? Even very techie activities can be related back to business drivers.
For example… Techie activity: quarterly server patching. Business driver: failure to patch exposes the business to the risk of being hacked, which could cause brand damage and loss of revenue.
I included goal identification with business justification because your goals should align with the business drivers responsible for the change.
- Current State Architecture Assessment (App and Infra) – This task sounds simple but is really difficult for most companies. Current State Architecture Assessment is all about documenting the actual deployed application components, infrastructure components, and application dependencies. Why is this so difficult? Most enterprises have implemented a CMDB to try to document this information, but the CMDB is typically manually populated and updated. What happens in reality is that over time the CMDB is neglected as application and infrastructure changes occur. To solve this problem, some companies have turned to automated discovery and dependency mapping tools. These tools are usually agentless, so they log in to each server at pre-defined intervals, scan for processes, network connections, etc…, and create a very detailed mapping that includes all persistent connections to and from each host, regardless of whether or not they are application related. The periodic scans also miss short-lived service calls between applications unless the scan happens to run at approximately the same time as the transient application call. An agent-based APM tool covers the gaps associated with these other methods.
- Current State Performance Assessment – Traditional monitoring metrics (CPU, memory, disk I/O, network I/O, etc…) will help you size your cloud environment but tell you nothing about the actual application performance. The important performance metrics encompass end user response time, business transaction response time, external service response time, error and exception rates, and transaction throughput, with baselines for each (see the baseline sketch after this list). This is also a good time to make sure there are no glaring performance issues that you are going to promote into your cloud environment. It's better to fix any known issues before you migrate, as the extra complexity of the cloud can amplify your application problems.
- Architectural Change Impact Assessment – Now that you know what your real application and infrastructure components are, you need to assess the impact of the difference between traditional and cloud architectures. Are there components that won't work well (or at all) in a cloud architecture? Are code changes required to take advantage of the dynamic features available in your cloud of choice? You need to have a very good understanding of how your application works today and how you want it to work after migration, and plan accordingly.
- Problem Resolution Planning – Problem resolution planning is about making a commitment to your monitoring tools and strategy as a core layer of your overall application architecture. The number of potential points of failure increases dramatically from traditional to cloud environments due to increased virtualization and dynamic scaling. In highly distributed applications you need monitoring tools that will tell you exactly where problems are occurring or you will spend too much time isolating the problem location. Make monitoring a part of your application deployment and support culture!!!
- Process re-alignment – Just hearing the word “process” makes me cringe and have flashbacks to the giant, bloated, slow-moving enterprise environments that I used to call my workplace. The unfortunate truth is that we really do need solid processes if we want to maintain order and have any chance of managing a large environment in a sustainable fashion. Many of the traditional IT development and operations processes need to be modified when we migrate to the cloud, so you can't overlook this task.
- Application re-development – The fruits of your Architectural Change Impact Assessment will probably precipitate some level of development work within your application. Maybe only minor tweaks are required, maybe significant portions of your code need to change, or maybe this application should never have been selected as a cloud migration candidate in the first place. If you need to change the application code, you need to test it all over again and measure the performance.
- Application Functional and Performance Testing – No surprises here: after the code has been modified to function as intended in your cloud deployment, it needs to be tested. APM tools really speed up the testing process since they show you the root of your performance problems down to the line of code. If you rely only upon the output of your application testing suite, your developers will spend hours trying to figure out what code to change instead of minutes fixing the problematic code.
- Training (New Processes and Technology) – With all of those new and/or modified processes, and the new software required to support your cloud application, training is imperative. Never forget the “people” part of “people, process, technology”.
There's a lot more that really goes into planning a cloud migration, but these major tasks are the big ones in my book. Give these 10 tasks the attention they deserve and the odds will shift in your favor for a successful cloud migration. Next week we'll talk about important work that should happen after your application gets migrated.
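As a small illustration of the baselining mentioned in the Current State Performance Assessment item above, the sketch below turns a list of raw response-time samples into an average, an approximate 95th percentile, and a maximum. The sample numbers are made up; in practice the samples would come from your monitoring tool.

```python
# baseline.py - sketch of computing simple response-time baselines from raw samples

def percentile(sorted_samples, pct):
    # approximate nearest-rank percentile; good enough for a rough baseline
    index = max(0, int(round(pct / 100.0 * len(sorted_samples))) - 1)
    return sorted_samples[index]

def baseline(samples):
    ordered = sorted(samples)
    return {
        'avg': sum(ordered) / len(ordered),
        'p95': percentile(ordered, 95),
        'max': ordered[-1],
    }

if __name__ == '__main__':
    response_times = [0.21, 0.34, 0.29, 0.41, 1.8, 0.33, 0.27, 0.95, 0.31, 0.38]  # made-up samples
    print(baseline(response_times))
```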