Welcome back to my blog series on deploying applications to the cloud.
What’s the point of deploying an application to the cloud versus just hosting it in your own data center? Is it really a good idea? Will it save you money? Will it work better? Will it cause new deployment and management problems? How do you monitor it?
These are all basic questions you should ask yourself before deciding IF your new or existing application will end up in a cloud environment.
The answers might be different for each application supporting your business. Cloud is really a set of architectural patterns available to help you solve business problems using technology. If you’re considering cloud, you’d better have a business problem that you need to solve.
Here are a few business problems that would make me consider a cloud implementation for my application(s):
- We’re out of space in our data center and most of our applications are used via the internet. Should we build another data center, or move some applications to public cloud providers?
- Our new mobile application will need to scale rapidly as it becomes more popular. We have to be able to scale as needed so our customers have a good user experience.
- We need to accelerate our time to market and make our business more agile. We don’t have time to wait for IT and all of our productivity-sapping processes.
No matter what your business reasons are, you need to come up with quantifiable and measurable success criteria so that you can prove out the benefits (or failure) of your cloud computing initiative. This implies you are already measuring something BEFORE you move to the cloud so you can compare metrics before and after. Here are some example KPIs that might be applicable (see the sketch after this list):
- Time to deliver requested environment to developers
- Number of application impact incidents
- Infrastructure cost per application
- Time to scale / Cost to scale Application
- Transaction throughput
- SLA (yeah, you really should have one of these)
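To make the before-and-after comparison concrete, here’s a minimal sketch in Python, assuming you’ve captured a few samples of each KPI on both sides of the migration (the metric names and numbers below are made up for illustration):

```python
# Minimal sketch: compare KPI samples captured before and after a cloud
# migration. The KPI names and sample values are hypothetical placeholders.
from statistics import mean

baseline = {  # measured BEFORE the migration
    "env_delivery_hours": [72, 96, 80],
    "incidents_per_month": [14, 11, 9],
    "cost_per_app_usd": [4200, 4150, 4300],
}
current = {  # measured AFTER the migration
    "env_delivery_hours": [2, 3, 2],
    "incidents_per_month": [6, 8, 5],
    "cost_per_app_usd": [3100, 2950, 3200],
}

for kpi, before in baseline.items():
    after = current[kpi]
    change = (mean(after) - mean(before)) / mean(before) * 100
    print(f"{kpi}: {mean(before):.1f} -> {mean(after):.1f} ({change:+.1f}%)")
```

The point isn’t the code, it’s the discipline: if you have no baseline, you have no proof.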
So You’ve Decided To Go For It
You’ve got your business justification nailed down and decided you really do need a cloud-based application. Great! If this is a brand new application you can design it from the ground up and just deploy it, right? No!!! Remember, you need to monitor and manage this application if you stand any chance of providing a good user experience over the long haul.
“My cloud provider has all the monitoring and management tools I need.” – Wrong! Your cloud provider has basic monitoring tools that show you infrastructure metrics (CPU usage, memory usage, I/O usage, etc.). These monitoring tools don’t tell you anything about your application. Here’s what you need to know about your cloud application (at a minimum):
- Which application nodes are in use at any given time. (Dynamic scaling, provisioning, and de-provisioning constantly change this picture)
- Application calls to external services with response times and error rates. (External service calls are performance killers for cloud applications and drive up costs, as most providers charge for network traffic leaving their cloud)
- The response time and errors of all of my users’ business transactions. (This applies to any application architecture, but cloud deployments can experience greater variance due to factors outside the application owner’s control: network congestion, regional provider issues, etc.)
- When a problem occurs – full application call stack for code analysis. (Applies to any application architecture)
- Host-level KPIs correlated with all of the application activity. (Really important in the cloud due to host virtualization, shared resources, and the multiple sizing options when you select a host to deploy. Select the wrong size by mistake and you’ve just limited your maximum application performance)
- Historic baselines for everything so you know what normal behavior looks like. (Critical to identifying problems regardless of architecture; see the sketch after this list)
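On that last point, here’s a minimal baseline sketch that flags a metric wandering too far from its history. The three-sigma threshold and ten-sample minimum are illustrative assumptions, not a recommendation:

```python
# Minimal sketch: flag response times that deviate from a learned baseline.
# The threshold and minimum sample window are illustrative assumptions.
from statistics import mean, stdev

def is_anomalous(history, latest, sigmas=3.0):
    """Return True if `latest` is more than `sigmas` standard
    deviations above the historic mean."""
    if len(history) < 10:          # not enough data to baseline yet
        return False
    mu, sd = mean(history), stdev(history)
    return latest > mu + sigmas * sd

response_times_ms = [120, 115, 130, 118, 122, 125, 119, 121, 117, 124]
print(is_anomalous(response_times_ms, 123))   # False: within normal range
print(is_anomalous(response_times_ms, 480))   # True: well outside baseline
```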
If you’re deploying a new application you should have a really good idea of any external application dependencies (like calling a payment gateway to process credit card orders). If you are moving an existing application, there is more work that needs to be done up front. In particular, you need to really understand your existing application dependencies. Is there a service or backend database that your application relies upon that you’re not planning to move with the application? If so, you can really screw up the entire cloud implementation by making a bunch of calls to a component that lives outside of your chosen cloud environment.
If you’re moving an existing application, you’d better deploy a tool that can dynamically detect and show application flow maps. I’m not talking about those agentless tools that scan your hosts every day looking for network connections (those usually miss all of the short-lived service calls). I mean a solution that gives you the entire picture regardless of whether connections are persistent or transient (a toy sketch of the idea follows).
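To show the difference in approach, here’s a toy sketch that records outbound HTTP dependencies at call time instead of scanning for open connections. Real flow-mapping tools do this with bytecode instrumentation; the hypothetical traced_get wrapper below is only a stand-in for the idea:

```python
# Toy sketch: record every outbound HTTP dependency at call time rather
# than scanning for open connections, so short-lived calls aren't missed.
import time
from urllib.parse import urlparse
import requests

dependencies = {}  # host -> list of observed response times (seconds)

def traced_get(url, **kwargs):
    """Hypothetical wrapper around requests.get that logs the dependency."""
    host = urlparse(url).netloc
    start = time.monotonic()
    response = requests.get(url, **kwargs)
    dependencies.setdefault(host, []).append(time.monotonic() - start)
    return response

traced_get("https://httpbin.org/get")  # example external call
for host, timings in dependencies.items():
    avg = sum(timings) / len(timings)
    print(f"{host}: {len(timings)} call(s), avg {avg:.3f}s")
```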
Since you need to monitor your existing environment anyway, you might as well collect performance data and save it so that you have a good point of comparison for your “before and after” application environment. (We’ll discuss this more in a future blog post.)
There are a ton of considerations when you choose to implement your application using cloud computing architecture patterns. In my next post I’ll go into more detail around the planning phase. Having all your ducks in a row before you begin the migration is critical to success.
It seems like in every article, tweet, or blog post I read, someone has a different definition of the same buzzwords – especially in technology. Mentioning cloud or big data on a tech blog is like bringing sand to the beach. That’s one of the reasons why we made The Real DevOps of Silicon Valley: to make fun of the hype. I got to thinking… has anyone taken the time to shed some light on these ambiguous terms? I investigated on Urban Dictionary and this is what I found…
IT According to UrbanDictionary.com
(Not kidding, look it up…)
1. CLOUD COMPUTING
cloud com·put·ing, noun.
“Utilizing the resonance of water molecules in clouds when disturbed by wireless signals to transmit data around the globe from cloud to cloud. ‘I use cloud computing so I don’t have to worry about viruses, I only have to worry about birds flying through my cloud.’”
2. AGILE
ag·ile, adjective.
“Agile is a generalized term for a group of anti-social behaviors used by office workers to avoid doing any work while simultaneously giving the appearance of being insanely busy. Agile methods include visual distraction, subterfuge, camouflage, psycho-babble, buzzwords, deception, disinformation, and ritual humiliation. It has nothing to do with the art and practice of software engineering.”
3. BIG DATA
big da·ta, noun.
“Modern day version of Big Brother. Online searches, store purchases, Facebook posts, Tweets or Foursquare check-ins, cell phone usage, etc. is creating a flood of data that, when organized and categorized and analyzed, reveals trends and habits about ourselves and society at large.”
4. DEVOPS
dev·ops, noun.
“When developers and operations get together to drink beer and color on whiteboards to avoid drama in the War Room. Also a buzzword for recruiters to use to promote overpaid dev or ops jobs.”
5. HARDWARE
hard·ware, noun.
“The parts of a computer that can’t be kicked, but ironically deserve it most.”
6. IT
I.T., noun.
“The word the Knights of Ni cannot hear or say.”
(Monty Python & the Holy Grail reference)
It’s been a week since we hosted AppJam Americas, our first North American user conference in San Francisco. With me as master of ceremonies, and a minor wardrobe malfunction at the start (see video at the end of this post), the entire day was a huge success for us and our customers. One thing that stuck in my mind was that applications today have become far more complex to manage, and strategic monitoring has become key to mastering that complexity. Simply put, SOA+Virtualization+Big Data+Cloud+Agile != Easy.
The day started with Jyoti Bansal, our CEO and founder, outlining his vision to be the world’s #1 solution for managing modern web applications. The simple facts are that applications have become more dynamic, distributed, and virtual. All of these factors have increased their operational complexity, and log files and legacy monitoring solutions are ill-suited to the task.
Jyoti then outlined our core design principles: Business Transaction Monitoring, self-learning intelligence, and the need to keep app management simple. He then suggested what the audience could expect from AppJam: “AppJam is about sharing knowledge, learning best practices, guiding our direction and Jamming.” (We’re pretty sure by “jamming” he meant “partying.”)
With the intro from Jyoti done, it was time for me to nosedive off the stage and introduce our first customer speaker, Ariel Tsetlin from Netflix.
How Netflix Operates & Monitors in the Cloud
With 27 million customers around the world, Netflix’s growth over the past three years has been meteoric. In fact, they found that they couldn’t build data centers fast enough. Hence, they moved to the public cloud on AWS for better agility.
In his session, Ariel talked about Netflix’s architecture in the cloud and how they built their own PaaS in terms of apps and clusters on top of Amazon’s IaaS. One unique thing Netflix does is bake their OS, middleware, apps, and monitoring agents into a single image, rather than using a tool like Chef or Puppet to manage application configuration and deployment separately from the underlying OS, middleware, and tools. Everything is automated and managed at the instance level, with developers given the freedom and responsibility to deploy whenever they want to. That’s pretty cool stuff when you consider that developers now manage their own capacity and auto-scaling within the cloud.
Ariel then talked about the assumption that failure is inevitable in the Cloud, with the need to plan and design around the fact that every part of the application can and will fail at some point. Testing for failure through “monkey theory” and Netflix’s “Simian Army” allows them to simulate failure at every level of the application, from randomly killing instances to taking out entire availability zones in AWS.
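Netflix’s actual Simian Army is far more sophisticated than anything I could show here, but a toy sketch of the core “randomly kill an instance” idea might look like the following (it assumes AWS credentials are configured and that a hypothetical cluster tag identifies the target tier):

```python
# Toy sketch of the "chaos monkey" idea: randomly terminate one instance
# in a target cluster to verify the application survives node failure.
# Netflix's real Simian Army is far more sophisticated; the "cluster"
# tag name and region are assumptions for illustration.
import random
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find running instances belonging to the (hypothetical) target cluster tag.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:cluster", "Values": ["api-tier"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]
instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if instances:
    victim = random.choice(instances)
    print(f"Chaos monkey terminating {victim}")
    ec2.terminate_instances(InstanceIds=[victim])
```

If the application degrades when a single instance disappears, better to learn that on a Tuesday afternoon than during a regional outage.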
From a monitoring perspective, Netflix uses internally developed tools and AppDynamics, which are also baked into their AWS images. Doing so allows developers to live and die by monitoring in production through automated alerts and problem discovery. What’s perhaps different is that Netflix focuses their monitoring at the service level (e.g. app cluster), rather than at the infrastructure level, so they’re really not interested in CPU or memory unless it’s impacting their end users or business transactions.
Finally, Ariel spoke about AppDynamics at Netflix, touching on the fact they monitor over 1 million metrics per minute across 400+ business transactions and 300+ application services, giving them proactive alerts with URL drill-down into business transaction latency and errors from self-learned baselines. Overall, it was a great session for those looking to migrate and operate their application in the Cloud.
When Big Data Meets SOA
Next up was Bob Hartley, development manager at FamilySearch, who gave an excellent talk about managing SOA and Big Data behind the world’s largest genealogy architecture. With almost 3 billion names indexed and 550+ million high-resolution digital images, FamilySearch has over 20 petabytes of data that needs to be managed by their Java and Node.js distributed architecture spanning 5,000 servers. What’s scary is that this architecture and data is growing at a rapid pace, meaning application performance and scalability are fundamental to the success of FamilySearch.
After a brief intro, Bob talked about his Big Data architecture in terms of the technologies they use to manage search queries, images, and people records: clusters of Apache Lucene, Solr, and custom MapReduce jobs, combined with traditional relational database technology such as Oracle, MySQL, and Postgres.
Bob then talked about his team’s mission: to enable business agility through visibility, responsiveness, standardization, and vendor independence. At the top of this list was providing joy for customers and stakeholders by delivering features that matter, faster.
Bob also emphasized the need for repeatable, reliable, and automated processes, as well as the need to monitor everything so his team could manage the performance of their SOA and Big Data application through continuous agile release cycles. FamilySearch has gone from a 3-month release cycle to a continuous delivery model in which changes can be deployed in just 40 minutes. That’s pretty mind-blowing stuff when you consider the size and complexity of their environment!
What’s interesting is that Release != Deploy at FamilySearch; they incrementally roll out new features to different sets of users using flags, allowing them to test and refine features before making them available to everyone. Monitoring is at the heart of their continuous release cycle, with Dev and Ops using baselines and trending to determine the impact of new features on application performance and scalability. (A sketch of one common flagging scheme follows.)
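Bob didn’t go into the mechanics of their flags, but a minimal sketch of one common approach, percentage-based rollout with deterministic user bucketing, might look like this (the flag names and percentages are invented for illustration):

```python
# Minimal sketch of percentage-based feature flags, one common way to
# separate "release" from "deploy". Flag names and rollout percentages
# are hypothetical; FamilySearch's actual mechanism wasn't described.
import hashlib

ROLLOUTS = {"new_search_ui": 10, "pedigree_v2": 50}  # percent of users

def is_enabled(flag, user_id):
    """Deterministically bucket a user into [0, 100) and compare with
    the flag's rollout percentage, so each user's experience is stable."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < ROLLOUTS.get(flag, 0)

print(is_enabled("new_search_ui", "user-42"))  # same answer on every call
print(is_enabled("pedigree_v2", "user-42"))
```

Because the bucketing is deterministic, raising a flag from 10% to 50% only adds users; nobody flip-flops between old and new behavior.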
In terms of the evaluation process, the company looked at 20 different APM vendors over a 6-month period before finally settling on AppDynamics due to our dynamic discovery, baselining, trending, and alerting of business transactions. As Bob said, “AppDynamics gave us valuable performance data in less than one day. The closest competitors took over 2 weeks just to install their tools.”
Today, a single AppDynamics management server is used in production to monitor over 5,000 servers, 40+ application services, and 10 million business transactions a day. Since deployment, FamilySearch has managed to find dozens of problems they’d had for years, and has managed to scale their application by 10x without increasing server resources. They’ve also seen MTTD drop from days to minutes, and MTTR drop from months to hours and minutes.
Bob finished his talk with his lessons learned for managing SOA, Big Data and Agile applications: “Keep Architecture Simple,” “Speed of delivery is essential,” “Systems will eventually fail,” and “Working with SOA, Big Data and Agile is hard.”
How AppDynamics is accelerating DevOps culture at Edmunds.com
After lunch, John Martin, Senior Director of Production Engineering, spoke about DevOps culture at Edmunds.com and how AppDynamics has become central to driving team collaboration. After a brief architecture overview outlining his SOA environment of 30 application services, John outlined what DevOps meant to him and his team – “DevOps is really about Collaboration – the most challenging issues we faced were communication.” Openly honest and deeply passionate throughout his session, John talked about three key challenges his team faced over the years that were responsible for the move to DevOps:
1. Infrastructure Growth
2. Communication Failure
3. Go Faster & Be Efficient
In 2005, Edmunds.com had just 30 servers; by the end of this year that figure will have risen to 2,500. Through release automation using tools such as BladeLogic and Chef, John and his team are now able to perform a release in minutes, versus the 8 hours it took back in 2005.
John gave an example of a communication failure in which development was preparing for a major release at Edmunds.com using a new CMS platform. This release was performance-tested just two weeks prior to go-live. Unfortunately the new platform showed massive scalability limitations, causing Ops to work around the clock to over-provision resources as a tactical fix. Fortunately the release was delivered on time and the business was happy. However, they suffered as a technology organization due to finding architecture flaws so late in the game – “We needed a clear picture of what went wrong and how we were going to prevent such breakdown in future.”
Another mistake with a release in 2010 forced a major rethink between development and operations. It was this occurrence that caused Edmunds.com to get really serious about DevOps. In fact, the technical leads got together and reorganized specialized teams within Dev and Ops to resolve deployment issues and shed preconceptions about who should do what. The result was improved relationships, better tooling, and a clearer perspective on how future projects could work.
John then touched on the tools that were accelerating DevOps culture, specifically Splunk for log files and AppDynamics for application monitoring. “AppDynamics provides a way for Dev and Ops to speak the same language. We’ve saved hundreds of hours in pre-release tests and discovered many new hotspots like the performance of our inventory business transaction which increased by 111%.” In fact, within the first year, AppDynamics generated an ROI of $795,166, with year-2 savings estimated at a further $420k. John laughed, “As you can see, AppDynamics wasn’t a bad investment.”
John ended his session with 5 tips for ensuring that DevOps succeeds in an organization: Be honest, communicate early and often, educate, criticize constructively, and create champions. Overall, a great session on why DevOps is needed in today’s IT teams.
Zero to Production APM in 30 days (while sending half a billion messages per day)
The final customer session of the day came from Kevin Siminski, Director of Infrastructure Operations at ExactTarget, and it was definitely worth waiting for. Kevin kicked off his talk by describing a weekly product tech sync meeting he had with his COO. The meeting was full of stakeholders from development and operations who were discussing a problem they were experiencing in production.
“I literally got my laptop out, brought up the AppDynamics UI and in one minute we’d found the root cause of the problem,” Kevin said. Not a bad way to get across why Application Performance Management (APM) is so valuable in 2012.
Kevin then gave a brief intro to ExactTarget and the challenges of powering some of the world’s top brands like Nike, BestBuy and Priceline.com. ExactTarget’s .NET messaging environment is highly virtualized, with over 5,000 machines that generate north of 500 million messages per day across multiple terabytes of databases.
Kevin then touched on the role of his global operations team and how his team’s responsibility had shifted over the last four years. “My team went from just triaging system alerts to taking a more proactive approach on how we managed emails and our business. Today my team actively collaborates with development, infrastructure and support teams.” All these teams are now focused and aligned on innovation, stability, performance and high availability.
Kevin then outlined his 30-day implementation plan for deploying AppDynamics across his entire environment using a single dedicated systems engineer and an AppDynamics SaaS management server for production:
- Week 1: onboard the IT security team, review configuration management, and test agent deployment to validate network and security paths.
- Week 2: deploy agents to a few of the production IIS pools and validate data collection on the AppDynamics management server.
- Week 3: push agents to every IIS pool with the collection mechanism disabled; the configuration management team then took over and “owned” the deployment process for go-live.
- Week 4: enable all services and AppDynamics agents during a production change window, with all metrics closely monitored throughout the week to ensure no impact or unacceptable overhead.
AppDynamics’ first mission was to monitor the ExactTarget application as it underwent an upgrade of its mission-critical database from SQL Server 2003 to 2008. It was a high-risk migration, as Kevin’s team were unable to assess the full risk due to legacy application components, so with all hands on deck they watched AppDynamics as the migration happened in real time. As the switch was made, application calls per minute and response times remained constant, but application errors began to spike. By drilling down on these errors in AppDynamics, the dev team was quickly able to locate where they were coming from and resolve the application exceptions.
Today, AppDynamics is used for DevOps collaboration and feedback loops so engineers get to see the true impact of their releases in a production environment, a process that was requested by a product VP outside of Kevin’s global operations team. Overall, Kevin relayed an incredible story of how APM can be deployed rapidly across the enterprise to achieve tangible results in just 30 days.
One surprising statistic that I realized later in the evening was that the total number of servers being monitored by AppDynamics across our four customer speakers was well over 20,000 nodes. Having been in the APM market for almost 10 years, I’m struggling to think of another vendor with such successful large-scale production deployments.
Here’s a link to the photo gallery of AppJam 2012 Americas. A big thank you to our customers for attending and we’ll see you all next year!
For those keen to see my stage nosedive here you go:
Appman.
As data has grown exponentially at many sites, companies have been forced to horizontally scale their data. Some have turned to sharding of databases like Postgres or MySQL, while others have switched to newer NoSQL data systems. There have been many debates in the last few years about SQL vs. NoSQL data management systems and which is better. What many have failed to grasp, though, is how similar these systems are and how complex they both are to run in production at high scale.
Both of these systems represent what I call a Data Cloud. This Data Cloud is a logical data set spread across many nodes. While developers have heated debates about which system is better and how to design code around it, those in DevOps usually struggle with very similar issues because the two systems are mostly the same. Both systems:
- Run across many nodes with large amounts of data flowing between them and from/to the application
- Strain both the hardware of all nodes, and the network connecting them
- Maintain duplicate data across nodes for fault tolerance, and must have failover ability
- Must be tuned on a per-node and cluster-wide basis
- Must allow for growth by adding additional nodes (see the sketch after this list).
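On that last point, here’s a minimal sketch of consistent hashing, one common scheme such systems use to spread keys across nodes so that adding a node only remaps a fraction of the data (the node names are placeholders):

```python
# Minimal sketch of consistent hashing: a logical data set spread across
# many nodes, growable by adding nodes without reshuffling most keys.
# Node names and the virtual-node count are illustrative assumptions.
import bisect
import hashlib

def h(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=100):
        # Each node gets `vnodes` points on the ring for smoother balance.
        self.ring = sorted(
            (h(f"{node}#{i}"), node) for node in nodes for i in range(vnodes)
        )
        self.keys = [point for point, _ in self.ring]

    def node_for(self, key):
        # First ring point clockwise from the key's hash owns the key.
        idx = bisect.bisect(self.keys, h(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:1234"))  # adding node-d later moves ~1/4 of keys
```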
Running this Data Cloud in production presents a new set of challenges for DevOps, many of which are not well understood or addressed. One of the main challenges is the management and monitoring of these systems, for which few (if any) tools or products exist at this time.
When systems were smaller and you ran a single database in production, you probably had all the necessary systems in place. With a plethora of products for management, monitoring, data visualization, and backups, it was not hard to be successful and meet your SLAs.
But now all this is much more complex once you move into the world of the Data Cloud. Now you have a large number of nodes, all representing the same system and still needing to meet the same SLAs as the old simple DB from before. Let us look at the challenges for running a production Data Cloud successfully.
Data Cloud Design
Do you know how many nodes you need? How many nodes do you put in each replica set? How much latency and throughput do you need in your network for the nodes to communicate fast enough? What is the ideal hardware to use for each node to balance performance with costs?
Data Cloud Monitoring
How do you monitor dozens, hundreds or even thousands of nodes all at once? How do you get a unified view of your data cloud, and then drill down to the problem nodes? Are there even any off-the-shelf monitoring tools that can help? Your old monitoring tool won’t be very useful anymore unless you are willing to look at every node one by one to see what is going on there.
Data Cloud Alerting
How do you set up a common set of alerts across all nodes? And how do you keep your alert thresholds in sync as you add nodes and remove them? More importantly, even assuming you have alerting in place, once staff receives critical alerts, how will they know where to find the troubled node in the massive cloud, or whether it’s a node-level issue or more global in nature? This must be done quickly during critical outages.
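One workable answer is to never hand-edit per-node alerts at all, and instead generate every node’s rules from a single shared template whenever the node list changes. A minimal sketch, with invented metric names and thresholds:

```python
# Minimal sketch: keep alert thresholds in sync across all nodes by
# rendering per-node rules from one shared template as nodes come and go.
# The node list, metric names, and thresholds are illustrative assumptions.
ALERT_TEMPLATE = {
    "replication_lag_seconds": {"warn": 30, "critical": 120},
    "disk_used_percent": {"warn": 80, "critical": 95},
}

def render_alerts(nodes):
    """Expand the shared template into one rule per (node, metric)."""
    return [
        {"node": node, "metric": metric, **levels}
        for node in nodes
        for metric, levels in ALERT_TEMPLATE.items()
    ]

current_nodes = ["db-01", "db-02", "db-03"]  # discovered, not hand-maintained
for rule in render_alerts(current_nodes):
    print(rule)
```

Change a threshold once in the template, regenerate, and every node is consistent; no node added last month is quietly running last year’s limits.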
Data Cloud Data Access
How does your staff view the data when it is distributed? In case of data inaccuracy, how can they quickly identify the faulty nodes and fix up the data?
Data Cloud Troubleshooting
As performance degrades, how do you troubleshoot and identify the bottlenecks? How do you find which nodes may be the cause of the problem? How do you improve performance across all the nodes?
Data Cloud Management
How do you back up all the data while consistently tracking which nodes were backed up successfully and when? How do you make schema changes across all the nodes in one consistent step without breaking your app? How do you make configuration changes on various nodes or across all nodes? And how do you track the configuration of each node and keep it consistent across your system?
By now you should see that there is a lot to think about before endeavoring to launch a production Data Cloud. Too many companies focus all their energies on deciding which DB or NoSQL system to use and developing their apps for it. But that might turn out to be the lesser of your challenges once you struggle to put the system into production. Be sure you can answer all the questions I have listed above before your launch.
Boris.