DevOps is scary stuff for us pure Ops folks who thought we left coding behind a long, long time ago. Most of us Ops people can hack out some basic (or maybe even advanced) shell scripts in Perl, ksh, bash, csh, etc. But the term DevOps alone makes me cringe and think I might really need to know how to write code for real (which I don’t enjoy; that’s why I’m an ops guy in the first place).
So here’s my plan. I’m going to do a bunch of research, play with relevant tools (what fun is IT without tools?), and document everything I discover here in a series of blog posts. My goal is to educate myself and others so that we operations types can get more comfortable with DevOps. By breaking down this concept and figuring out what it really means, hopefully we can figure out how to transition pure Ops guys into this new IT management paradigm.
What is DevOps?
Here we go, I’m probably about to open up Pandora’s box by trying to define what DevOps means, but to me that is the foundation of everything else I will discuss in this series. I started my research by asking Google “what is devops”. Naturally, Wikipedia was the first result, so that is where we will begin. The first sentence on Wikipedia defines DevOps as “a software development method that stresses communication, collaboration and integration between software developers and information technology (IT) professionals.” Hmmm… This is not a great start for us Ops folks who don’t really want anything to do with programming.
Reading further down the page I see something more interesting to me: “The goal is to automate as much as possible different operational processes.” Now that is an idea I can stand behind. I have always been a fan of automating whatever repetitive processes I can (usually by way of shell scripts).
My next stop on this DevOps train led me to a very interesting blog post by the folks at the agile admin. In it they discuss the definition and history of DevOps. Here are some of the nuggets that were of particular interest to me:
- “Effectively, you can define DevOps as system administrators participating in an agile development process alongside developers and using many of the same agile techniques for their systems work.”
- “It’s a misconception that DevOps is coming from the development side of the house – DevOps, and its antecedents in agile operations, are largely being initiated out of operations teams.”
- “The point is that all the participants in creating a product or system should collaborate from the beginning – business folks of various stripes, developers of various stripes, and operations folks of various stripes, and all this includes security, network, and whoever else.”
Wow, that’s a lot more comforting to my fragile psyche. The idea that DevOps is being largely initiated out of the operations side of the house makes me feel like I misunderstood the whole concept right from the start.
For even more perspective I read a great article on O’Reilly Radar from Mike Loukides. In it he explains the origins of dev and ops and shows how operations has been changing over the years to include much more automation of tasks and configurations. He also explains how there is no expectation of all knowing developer/operations super humans but instead that operations staff needs to work closely or even be in the same group as the development team.
When it comes right down to it there are developers and there are operations staff. The two groups have worked too far apart for far too long. The DevOps movement is an attempt to bring these worlds together so that they can achieve the effectiveness and efficiency that the business deserves. I really do feel a lot better about DevOps now that I have done more research into the basic meaning and I hope this helps some of you who were feeling intimidated like I was. In my next post I plan to break down common operations tasks and talk about the tools that are available to help automate those tasks and their associated processes.
As always, please feel free to comment if you think I have missed something or if you have a request for content in an upcoming post.
We all know performance is important, but performance tuning is too often an afterthought. As a result, taking on a performance tuning project for a slow application can be pretty intimidating – where do you even begin? In this series I’ll tell you about the strategies and technologies that (in my experience) have been the most successful in improving PHP performance. To start off, however, we’ll talk about some of the easy wins in PHP performance tuning. These are the things you can do that’ll get you the most performance bang for your buck, and you should be sure you’ve checked off all of them before you take on any of the more complex stuff.
Why does performance matter?
The simple truth is that application performance has a direct impact on your bottom line.
Follow these simple best practices to start improving PHP performance:
Upgrade PHP
One of the easiest ways to improve both performance and stability is to upgrade your version of PHP. PHP 5.3.x was released in 2009. If you haven’t migrated to PHP 5.4 yet, now is the time! Not only do you benefit from bug fixes and new features, but you will also see faster response times immediately. See PHP.net to get started.
- Installing the latest PHP on Linux
- Installing the latest PHP on OS X
- Installing the latest PHP on Windows
Once you’ve finished upgrading PHP, be sure to disable any unused extensions in production such as xdebug or xhprof.
Use an opcode cache
PHP is an interpreted language, which means that every time a PHP page is requested, the server will interpret the PHP file and compile it into something the machine can understand (opcode). An opcode cache preserves this generated code so that a file only needs to be interpreted and compiled on the first request. If you aren’t using an opcode cache you’re missing out on a very easy performance gain. Pick your flavor: APC, Zend Optimizer, XCache, or eAccelerator. I highly recommend APC, written by the creator of PHP, Rasmus Lerdorf.
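As a rough sketch, enabling APC is mostly a matter of installing the extension and adding a few php.ini directives. The values below are illustrative starting points, not tuned recommendations:

```ini
; Load the APC extension (path varies by install)
extension=apc.so

; Turn the opcode cache on
apc.enabled=1

; Shared memory reserved for cached opcodes
apc.shm_size=64M

; In production, skip re-checking file mtimes on every request
; (remember to clear the cache when you deploy new code)
apc.stat=0
```

Setting `apc.stat=0` gives an extra boost because PHP no longer stats every source file per request, at the cost of needing an explicit cache flush on deploys.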
Use an autoloader
Many developers writing object-oriented applications create one PHP source file per class definition. One of the biggest annoyances in writing PHP is maintaining a long list of includes at the beginning of each script (one for each class it uses). PHP re-evaluates these require/include expressions every time a file containing them is loaded into the runtime. Using an autoloader will enable you to remove all of your require/include statements and benefit from a performance improvement. You can even cache your autoloader’s class map in APC for a small additional gain.
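A minimal autoloader looks something like this. The `lib/` directory and the `Billing_Invoice_Renderer` class are hypothetical, made up for the example; the mapping convention (underscores to directory separators) is the common PSR-0 style:

```php
<?php
// Register a callback that PHP invokes whenever an unknown class is used.
// It maps Foo_Bar_Baz to lib/Foo/Bar/Baz.php and requires that file.
spl_autoload_register(function ($class) {
    $path = 'lib/' . str_replace('_', '/', $class) . '.php';
    if (is_file($path)) {
        require $path;
    }
});

// No include list needed; the class file is loaded on first use.
$renderer = new Billing_Invoice_Renderer();
```

Classes you never touch on a given request are never loaded at all, which is where the performance win comes from.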
Optimize your sessions
While HTTP is stateless, most real-life web applications require a way to manage user data. In PHP, application state is managed via sessions. The default PHP configuration persists session data to disk, which is slow and doesn’t scale beyond a single server. A better solution is to store your session data in a database and front it with an LRU (Least Recently Used) cache like Memcached or Redis. Better still, limit your session data size so it fits in a cookie (4096 bytes) and store it all in a signed or encrypted cookie.
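If you have the pecl memcached extension installed, moving sessions off local disk can be as simple as two php.ini directives (host and port here are examples for a single local Memcached instance):

```ini
; Store sessions in Memcached instead of files on local disk
session.save_handler = memcached
session.save_path = "localhost:11211"
```

With a shared Memcached pool in `session.save_path`, any app server can handle any request, which removes the need for sticky sessions at the load balancer.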
- Encrypting PHP sessions with Suhosin
- Storing sessions in Memcached
- Using a database to persist sessions
Use a distributed data cache
Applications usually require data. Data is usually structured and organized in a database. Depending on the data set and how it is accessed, querying it can be expensive. An easy solution is to cache the result of the first query in a data cache like Memcached or Redis. If the data changes, you invalidate the cache and make another SQL query to get the updated result set from the database.
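This query-then-cache pattern (often called cache-aside) can be sketched in a few lines. The key name, five-minute TTL, and `users` table are illustrative assumptions:

```php
<?php
// Cache-aside sketch: check Memcached first, fall back to the database
// on a miss, then prime the cache for subsequent requests.
function getUser($id, Memcached $cache, PDO $db) {
    $key = "user:$id";
    $user = $cache->get($key);
    if ($user === false) {                        // cache miss
        $stmt = $db->prepare('SELECT * FROM users WHERE id = ?');
        $stmt->execute(array($id));
        $user = $stmt->fetch(PDO::FETCH_ASSOC);
        $cache->set($key, $user, 300);            // keep for 5 minutes
    }
    return $user;
}

// When the row changes, invalidate so the next read re-queries:
// $cache->delete("user:$id");
```

The database stays the source of truth; the cache just absorbs the repeated reads between writes.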
There are many use cases for a distributed data cache from caching web service responses and app configurations to entire rendered pages.
Do blocking work in the background
Web applications often have to run tasks that can take a while to complete, and in most cases there is no good reason to force the end user to wait for the job to finish. The solution is to queue blocking work to run in background jobs: jobs executed outside the main flow of your program, usually handled by a queue or messaging system. Writing long-running work to a queue and processing it separately pays off in both end-user experience and scalability. I am a big fan of php-resque, a simple toolkit for running tasks from queues, and there are a variety of other queuing and messaging systems that work well with PHP.
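Here is roughly what that looks like with php-resque. The job class, queue name, and email payload are invented for the example; a worker process run separately picks jobs off the queue:

```php
<?php
// A job class: php-resque calls perform() in a worker process,
// with the enqueue arguments available as $this->args.
class SendWelcomeEmail
{
    public function perform()
    {
        mail($this->args['email'], 'Welcome!', 'Thanks for signing up.');
    }
}

// In the web request: point Resque at Redis, enqueue, and return
// to the user immediately instead of blocking on the slow work.
Resque::setBackend('localhost:6379');
Resque::enqueue('email', 'SendWelcomeEmail', array('email' => 'user@example.com'));
```

The request finishes in milliseconds while the worker does the slow part on its own schedule.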
I highly recommend Wan Qi Chen’s excellent blog post series about getting started with background jobs and Resque for PHP.
Leverage HTTP caching
HTTP caching is one of the most misunderstood technologies on the Internet. Go read the HTTP caching specification. Don’t worry, I’ll wait. Seriously, go do it! They solved all of these caching design problems a few decades ago. It boils down to expiration or invalidation, and when used properly it can save your app servers a lot of load. Please read the excellent HTTP caching guide from Mark Nottingham. I highly recommend using Varnish as a reverse proxy cache to alleviate load on your app servers.
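Both strategies are easy to emit from PHP. This sketch assumes `$responseBody` was produced earlier by your application; the five-minute lifetime is an example:

```php
<?php
// Expiration: let browsers and proxies like Varnish reuse this
// response for up to 5 minutes without asking again.
header('Cache-Control: public, max-age=300');

// Validation: emit an ETag and short-circuit with 304 when the
// client already has the current version.
$etag = '"' . md5($responseBody) . '"';
header('ETag: ' . $etag);
if (isset($_SERVER['HTTP_IF_NONE_MATCH']) && $_SERVER['HTTP_IF_NONE_MATCH'] === $etag) {
    header('HTTP/1.1 304 Not Modified');
    exit;
}
echo $responseBody;
```

A 304 still hits your app, but a `max-age` hit served from Varnish never does, which is where the big load savings come from.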
Optimize your favorite framework
Deep diving into the specifics of optimizing each framework is outside of the scope of this post, but these principles apply to every framework:
- Stay up-to-date with the latest stable version of your favorite framework
- Disable features you are not using (I18N, Security, etc)
- Enable caching features for view and result set caching
Learn how to profile code for PHP performance
Xdebug is a PHP extension for powerful debugging. It supports stack and function traces, profiling information and memory allocation and script execution analysis. It allows developers to easily profile PHP code.
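Turning on Xdebug’s profiler (Xdebug 2.x directive names; the output directory is an example) produces cachegrind files that the tools below can read:

```ini
; Enable the Xdebug profiler and choose where it writes
; cachegrind.out.* files for analysis
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = /tmp/profiles
```

Remember that the profiler itself adds overhead, so enable it in development or selectively, not permanently in production.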
WebGrind is an Xdebug profiling web frontend written in PHP5. It implements a subset of KCacheGrind’s features, installs in seconds, and works on all platforms. For quick-and-dirty optimizations it does the job.
XHProf is a function-level hierarchical profiler for PHP with a reporting and UI layer. XHProf is capable of reporting function-level inclusive and exclusive wall times, memory usage, CPU times and number of calls for each function. Additionally, it supports the ability to compare two runs (hierarchical DIFF reports) or aggregate results from multiple runs.
AppDynamics is application performance management software designed to help dev and ops troubleshoot problems in complex production apps.
Get started with AppDynamics today and get in-depth analysis of your application’s performance.
PHP application performance is only part of the battle
Now that you have optimized the server side, you can spend time improving the client side! In modern web applications most of the end-user experience time is spent waiting on the client side to render. Google has dedicated many resources to helping developers improve client-side performance.
It’s not an exciting or glamorous subject, but it’s an absolutely critical concept for properly managing your applications and infrastructure. CMDB, CMS, SIS, EIEIO (joking), or anything else you want to call it these days is a concept that has been poorly implemented from the very beginning. The concept itself is sound: have a single source of truth that describes your application and infrastructure environments to enable IT operations efficiency (at least that’s the core concept in my mind).
CMDBs are Awesome, but Not Really
Back in my days working for global financial services institutions I relied heavily on the CMDB as a starting point for many different activities. I say it was only a starting point because invariably the information within the CMDB was wrong. It was either input incorrectly in the first place, not updated regularly enough, or updated with incorrect information. Whatever the real cause, my single source of truth became a partial source of truth that always required extensive verification. Not very efficient!
Getting back to my reliance upon the CMDB… Here are the types of activities that required me to query the CMDB:
- Change controls – Anytime I needed to change anything on a production server I needed to understand what applications had components existing on that server.
- Application upgrades – I needed to know all of the application components at that exact moment to make sure they all got the update.
- Application migrations – There were times when we simply needed to move the application from one data center to another. This required a complete understanding of all application components, flows, and dependencies.
- Performance troubleshooting – When I was asked to get involved with a performance problem, one of the first things I wanted to understand was all of the components that made up the application and any external dependencies.
There are many more uses for the data in the CMDB but those were my top use cases. As I said before, invariably the CMDB was wrong. There were usually components missing from the CMDB, components in the CMDB that were no longer part of the application, incorrect dependencies, and, and, and…
Salvation by Auto Discovery and Dependency Mapping, but Not Really
So what’s a good IT department supposed to do about this problem? Buy a discovery and dependency mapping tool, of course. And that’s exactly what we did. We explored the market and brought in the best (relative) tool for the job. It was one of those agentless tools that makes deployment way faster and easier in a large enterprise like mine. The problem, as I would later realize, is that agentless discovery tools only see what’s going on when they log in and scan the host. Under normal conditions you can scan your environment maybe once or twice a day without completely overwhelming the tool. What that means is that all of those transient (short-lived) service calls into or out of each application are missed by the discovery tool unless they happen to be running at nearly the exact time of the scan.
To add further insult to injury, most organizations don’t want a bunch of scanning activity going on during heavy business hours, so the scans are typically relegated to the middle of the night when there is little or no load on the applications being scanned. This amplifies the transient communication dependency mapping problem. Now, the vendors who sell these solutions will claim that there are ways to deal with this issue if you just use their network sniffer in conjunction with their agentless appliance. I won’t comment much on this, but I will say that it creates another slew of deployment problems from a political and technical perspective, and the thought of ever trying it again makes me wince in pain. (Where did I put my therapist’s phone number again?)
The Application Knows, Really It Does
What better source of understanding application components and dependencies is there than the application itself? Let’s explore this for a moment. If you can live inside of the application and see all of the socket connections opening and closing, then you absolutely know what else the application is communicating with. Imagine if there was a system that could automatically see all of these connections that open and close and draw a picture of the application and all of its dependencies at that exact moment in time, or any point in the past. And imagine if this system had a published API that allowed your other systems to query it for this information. Regardless of transient or persistent connection types, you would have the ability to know all of the components of your application and all of its external dependencies. This is exactly what AppDynamics does out of the box.
I believe that the CMDB of old should be an ecosystem of information points that provide the truth at the moment it is requested. Forrester calls this a SIS (service information system) in their research paper titled “Reinvent The Obsolete But Necessary CMDB”. Click here to read it if you’re interested. The SIS isn’t some vendor tool, instead it’s an architectural construct that should be different for each company based upon their tools and requirements. From my perspective it is incredibly difficult and inefficient to manage a datacenter or group of applications without implementing this type of concept.
If you’ve already got AppDynamics deployed, consider using it as a significant source of truth about your applications. If you’re stuck with an outdated CMDB, consider shifting your architecture and check out how AppDynamics can help with a free trial.
Unicorns, those magical mythical creatures that many have searched for but never actually found. One of our customers recommended AppDynamics to their associates and compared us to “Unicorns … only real.” This analogy is really great, since enterprises have been searching for “software that just works” but up until recently haven’t been able to find it. So now that we’ve found them, let’s talk about two awesome Unicorns, AppDynamics and PagerDuty.
Recently we released a couple of blogs about the AppDynamics and PagerDuty integration. If you haven’t had the chance yet you can check them out here and here. I had some time to sit back and really think about what these two companies and our integration mean to the IT world and I want to share those thoughts with you.
I’m a person who has worked at companies of many sizes, from really small startups (fewer than 20 employees) to really large enterprises (more than 250,000 employees), and a few in between. IT support levels vary greatly within these different-sized organizations. In particular, the ability to detect problems and notify the right people quickly is an issue in the SMB world (at the companies I worked for, anyway).
One of the reasons for this problem lies in the costs associated with traditional monitoring and alerting systems. Beyond the up-front purchase price there are typically ongoing configuration and maintenance costs, which can drive TCO excruciatingly high in no time. For SMBs, taking into account the high purchase price, high setup cost, and high maintenance costs, it’s no wonder very few companies invest in the software they need to monitor and manage their environment properly.
Taking it a step further, it’s a shame that large enterprises have to pay these exorbitant costs and suffer through “Enterprise Class Software” that takes an army of highly paid consultants and/or employees to setup and maintain.
This is why AppDynamics and PagerDuty are a big deal to me: enterprise-quality software that is as easy to use, configure, and maintain as consumer software, without sacrificing functionality. This was unheard of 5 years ago. Thankfully, things are changing rapidly for the better. AppDynamics and PagerDuty allow any company to quickly deploy, configure, manage, identify, isolate, alert, troubleshoot, automate, repair, etc. All of this is done better than the Enterprise Class products of 5 years ago and at a fraction of the TCO.
Specifically, here are a few of the things that are way better when you use AppDynamics and PagerDuty:
- 90% less configuration and management work with better results.
- Isolation of problems down to the node, page, transaction, or line of code level.
- Automatic remediation of known problems.
- Reduced dependency on “The Expert” who actually knows how to set up and use the monitoring tool.
- Ability to interface with modern devices (like sending push notifications to iOS and Android)
- Easy-to-use graphical interface for configuring advanced rules.
- On call scheduling so you don’t have to “pass the pager”. Yep, there are still pagers out there.
- Automated escalation of alerts that have not been responded to yet.
When it comes right down to it, we are in a time when software is being reinvented, and every company from the biggest to the smallest needs to re-evaluate its strategy and take advantage of the amazing tools at its disposal. Here’s your chance to catch a Unicorn; don’t miss out by looking the other way.
Click here to start your free trial of AppDynamics and catch a Unicorn for yourself.