The Major Paradigm Shift That’s Changing the IT Landscape

Applications are central to the way companies do business today. The problem businesses face is two-fold: not only is the application essential to the way they compete, but the competitive bar has been raised by the tech giants of the world—Google, Facebook, Netflix, et al.—who’ve evolved as design-led, cloud-native organizations that know how to iterate very quickly. In turn, better consumer experiences have raised expectations for the performance, ease of use, and reliability of business-to-business (B2B) tech.  

The 4th Major Paradigm Shift is Happening Now

These changes are having a major impact on how companies adopt new technologies. Speaking at the first-ever Cisco IMPACT (formerly GSX), AppDynamics General Manager Danny Winokur called these developments the “4th major paradigm shift” in the history of IT—the ongoing evolution from mainframe to client/server to the web—and now onto cloud and microservices technologies that offer very rapid iteration.

Source: Cisco

“The ability to iterate quickly allows you to produce the best end-user experience, extract data and telemetry from that experience, iterate based on those learnings and do it again,” said Winokur. “The faster you iterate, the more likely you’ll be the winner in this application-driven, hyper-competitive environment.” 

The challenge posed by this latest paradigm shift, however, is when companies’ technical environments wind up a chaotic mess. To complicate matters further, many large enterprises still use mainframes for backend data processing. In many cases, their on-prem data centers are central to the way they operate. Hoping to iterate faster, these enterprises put new cloud technology stacks into their data centers, and launch cloud projects that use the latest and greatest public cloud technologies, including serverless offerings such as AWS Lambda.

The result? “When it comes to producing a seamless, well-designed application, things get complicated in a hurry,” said Winokur. “All of these components must work together in a way that’s transparent to the end user and produces an engaging experience with little to no downtime.”

But when things don’t go well, problems quickly turn catastrophic. According to data from Amazon’s recommendation engine, just 100 milliseconds of latency in an application can reduce sales by one percent. 
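
To make that figure concrete, here’s a back-of-envelope sketch in Python. The 1% sales drop per 100 milliseconds comes from the statistic above; the revenue figure and the assumption that the effect scales linearly are illustrative simplifications, not claims from the source.

```python
# Back-of-envelope estimate of revenue lost to latency. The 1% sales
# drop per 100 ms comes from the statistic cited above; treating it as
# linearly scalable is a simplifying assumption for illustration.
def estimated_sales_loss(annual_revenue, added_latency_ms, loss_per_100ms=0.01):
    """Return the estimated annual revenue lost to added latency."""
    return annual_revenue * (added_latency_ms / 100.0) * loss_per_100ms

# Example: a hypothetical $500M/year business adding 200 ms of latency.
loss = estimated_sales_loss(500_000_000, 200)
print(f"Estimated annual loss: ${loss:,.0f}")
```

Even under these rough assumptions, the numbers make clear why latency is a board-level concern.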

“Here at AppDynamics, we see many business environments where the customer is facing multiple downtime incidents per day,” said Winokur. “Think of the impact on their bottom line, not to mention the effect this has on their reputation, particularly when the outage becomes a news headline.” 

When systems go down, each individual incident costs hundreds of thousands of dollars, according to a recent AppDynamics survey of 6,000 global IT leaders. In some instances, an incident can cost millions of dollars and seriously damage brand reputation.

New Paradigm, New Solutions

So why are companies struggling with the 4th paradigm? “Because the way their teams work together isn’t well-suited to the new world of cloud and microservices,” said Winokur. Rather, they have an IT war room adept at finger-pointing, and siloed teams looking at their own piece of the puzzle and saying, “I don’t see a problem, do you see a problem?” On the other side of the hall, the business is saying, “We invested so much money in this technology—why isn’t it working?”

These organizations usually have a traditional structure with siloed development, operations and business teams. They’re not used to working together in a connected way, one absolutely necessary in a world where the application is the business.

This poses a problem in a modern BizDevOps model where teams must work collaboratively to make decisions and drive change. “Too often these teams are challenged by legacy systems that were never built for this type of collaboration in the first place,” Winokur said. “The end result is AppOps, InfraOps, NetOps and SecOps, each doing its own thing without a single source of truth.”

Source: Cisco

At Cisco-AppDynamics, we empathize with the challenges our customers are facing. Our technology lays the foundation for data interoperability, allowing previously siloed teams to work collaboratively in support of the BizDevOps model. We enable enterprises to move data seamlessly from legacy environments to systems where it can be correlated through APIs, so those teams can take intelligent, data-driven actions that support the needs of the business. 

In the coming years, we’ll see AIOps across all layers of the stack—not just within each of the individual layers, which is the first step we’re taking, but across the full stack. “This will allow companies to correlate data across the business, application, infrastructure, network, and security domains—with AI and ML—to bring data together, drive insights, and enable automated actions,” Winokur said.

This is a tremendous opportunity and our vision for the future. Go here to learn more about the AppDynamics and Cisco vision for AIOps.

How to Build Hybrid Cloud Confidence

This blog post is coauthored by Vamsi Chemitiganti, Chief Strategist at Platform9 Systems.

The hybrid cloud is not a new concept. Way back in 2010, AppDynamics founder Jyoti Bansal had an interesting take on hybrid cloud. The issues Jyoti discussed more than eight years ago are just as challenging today, particularly with architectures becoming more distributed and complex. Today’s enterprises must run myriad open source and commercial products. And new projects—some game-changers—keep sprouting up for companies to adopt. Vertical technologies like container orchestrators are going through rapid evolution as well. As they garner momentum, new software platforms are emerging to take advantage of these capabilities, requiring enterprises to double down on container management strategies.

Building Applications: 2008 vs. Now

When you consider how applications were built a decade ago, today is truly an amazing time to be a software engineer. From a commercial Java standpoint, software vendors in 2008 may have released only a few variations of their application—for instance, specific WARs (web archives)—to account for differences in application servers. Fast forward to today: commercial vendors must fully expose how the sausage is made, and provide a litany of formats to deploy and integrate with CI/CD and hybrid cloud infrastructures.

The 2008 point of view: commercial vendors delivering just a WAR.

Enterprises today are doing more due diligence. It’s not uncommon for enterprises to ask for source/binaries, quality/security scans, infrastructure-as-code templates, container orchestrator descriptors, and cloud vendor deployment scripts.

With enterprises needing to run a wider array of technologies, decisions must be made on where to run these workloads. The hybrid cloud allows enterprises to pick and choose which workloads and tools to run internally, and when to leverage a public cloud service.

Current Hybrid Cloud Perspective

The hybrid cloud is composed of six fundamental components, as shown in the illustration below: diverse infrastructure spanning public and private clouds to deliver a mix of VMs, containers and bare-metal deployments; a SaaS-delivered management plane that guarantees 99.999% uptime; a certified catalog of applications for developers, with single-click self-service and the ability to run application architectures composed of various runtimes (e.g., Tomcat, NGINX, Kafka, Istio, Spring Boot and so on); CI/CD toolchains; line-of-business self-service; and application architectures that mix stateless microservices, stateful applications and serverless functions.

The six pillars of the hybrid cloud.


In a cloud-based infrastructure, the key concerns facing enterprises include complex cloud management, overall cost of infrastructure, and the lack of a single pane of glass for metrics. These lead to pain points in Day-1 and Day-2 operations, a lack of choice for developers, slow provisioning times and overly complicated management stacks.

The Key to Hybrid Cloud Success

Success depends on your ability to quickly deliver all of the above components: a lean hybrid cloud in days or weeks, not months. With an effective strategy, an enterprise can bring together all six component technologies, creating a hybrid cloud that “just works.” This includes the ability for both developers and infrastructure admins to deploy applications and workloads on any underlying cloud provider, a single pane of glass across all the underlying clouds, and a robust catalog of application runtimes for developers.

Incremental Success Develops Cloud Confidence

The ability to deliver a lean hybrid cloud in days or weeks is an ambitious goal. When picking workloads to run on the private or public cloud, metrics become a crucial component of your strategy. One of AppDynamics’ core strengths is its ability to compare workloads and platforms for effectiveness. When a new application works effectively in your hybrid cloud infrastructure, you gain confidence in your investment and quickly bring additional workloads into the new model. By using AppDynamics to trace a business transaction such as user conversion, IT can easily justify where and why a workload is running in a particular segment of the hybrid cloud environment.

AppDynamics provides essential KPIs for comparing workloads—a great resource for justifying your cloud migration strategy.

The Hybrid Cloud Revolution Continues

As our cloud topology marches towards cloud infrastructure nirvana, where organizations can reallocate workloads to the most prudent infrastructure provider(s)—either on- or off-prem—confidence in your hybrid cloud is crucial. Look to AppDynamics and Platform9 to help you navigate this exciting new world!

Why Idempotency and Ephemerality Matter in a Cloud-Native World

The first time I heard “idempotent” and “ephemeral,” I had no idea what they meant. Perhaps I should have, though, because idempotent and ephemeral patterns are not new to the computing world.

In mathematics and computer science, idempotence is a property in which no matter how many times you execute some operations, you achieve the same outcome. Ephemerality is a concept of things being short-lived or transitory. In a Cloud Native environment, we expect consistency and portability with the presumption that infrastructure will likely be impermanent, including containers that are transient and disposable. These shifts are influencing providers across multiple layers of the OSI Model.
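
To make the distinction concrete, here’s a small Python sketch; the operations are illustrative, not drawn from any particular system:

```python
# Idempotence: applying an operation once or many times yields the same result.
def make_absolute(x):
    return abs(x)

# abs() is idempotent: abs(abs(x)) == abs(x) for every x.
assert make_absolute(make_absolute(-5)) == make_absolute(-5)

# Appending to a list is NOT idempotent: each application changes the state.
items = []
items.append("a")
items.append("a")
assert items == ["a", "a"]  # two applications, two different outcomes

# An idempotent alternative: adding to a set.
tags = set()
tags.add("a")
tags.add("a")
assert tags == {"a"}  # repeated application, same outcome
```

The set-based version is why idempotent designs tolerate retries and redeliveries so gracefully.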


Again, “idempotent” threw me the first time I heard it. I had not felt so perplexed by the meaning of a word since childhood. I was a staff developer, excited to use JMS for the first time. My team was starting to modernize a financial services client’s system to be event-based, implementing ActiveMQ as the message broker. We were determining transaction boundaries and suffering from duplicate messages that were wreaking havoc on our final calculations. One of the project’s senior developers suggested I make the endpoint “idempotent.” I gave him a blank look as if he were speaking pig Latin.

Someone recommended I get a copy of what soon became one of my favorite books, Enterprise Integration Patterns. The book’s authors, Gregor Hohpe and Bobby Woolf, describe many of the system-to-system design patterns we depend on today. In the case of the financial services client, the design pattern was Idempotent Receiver (Consumer). We deployed into one of the client’s data centers and, barring a severe application infrastructure fault, we were under the impression that the center’s infrastructure would be there indefinitely.
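
A minimal sketch of the Idempotent Receiver pattern in Python follows. The message shape and handler are hypothetical; a real implementation would sit behind a broker such as ActiveMQ and persist its seen-ID set so a restart doesn’t reopen the door to duplicates.

```python
class IdempotentReceiver:
    """Drop duplicate messages by remembering the IDs already processed.

    The seen-ID set is in-memory for illustration; a production version
    would persist it (e.g., in a database) so the guarantee survives a
    service restart.
    """
    def __init__(self, handler):
        self.handler = handler
        self.seen_ids = set()

    def receive(self, message):
        msg_id = message["id"]
        if msg_id in self.seen_ids:
            return False           # duplicate: ignore it
        self.handler(message)      # process exactly once per ID
        self.seen_ids.add(msg_id)
        return True

# Usage: the broker redelivers message 42, but the total is updated only once.
total = []
receiver = IdempotentReceiver(lambda m: total.append(m["amount"]))
receiver.receive({"id": 42, "amount": 100})
receiver.receive({"id": 42, "amount": 100})  # redelivery, silently dropped
assert sum(total) == 100
```

This is exactly the kind of endpoint that keeps duplicate messages from wreaking havoc on downstream calculations.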

By today’s standards, our application infrastructure was fragile. Our duplicate check was stateful, but its state lived with the service: if the service stopped, we were open to duplicates again. In the next iteration of our application, we designed the idempotent service to be more robust—something we would’ve done sooner had we not assumed our infrastructure was stable rather than ephemeral.


Cloud providers are looking to capture more workloads while providing their clients with better ROI. When Google launched its Preemptible Virtual Machine in 2015, it was responding to a demand for lower-cost instances. Due to the short-lived nature of these instances, I had a hard time wrapping my head around the workloads that would be appropriate for a Preemptible Virtual Machine—or even an Amazon Spot Instance, which offers spare compute capacity in the AWS cloud at steep discounts.

The rise of preemptible or spot instances shows the upsurge in ephemeral computing. As enterprises grapple with the nuances of cloud cost, one avenue for lower-cost services is to have the compute live for a shortened, or ephemeral, period.

Prior to preemptible or spot instances, the baseline understanding was that compute capacity, once provisioned, would stay available for as long as you needed it. Because of cloud availability guarantees, some organizations treated traditional instances as effectively indefinite. For a planned hardware upgrade, the cloud vendor would give its customers advance notice to switch workloads over to another instance. For unplanned events like outages, a service designed to be multi-region or multi-zone would suffice.

Today, it feels like we are designing workloads to cope with Chaos Monkey at every level, including infrastructure. There’s a growing understanding that our workload infrastructure likely won’t be there in a predictable format. As a result, we’re building more robust services to cope with this unpredictability. These changes spotlight the importance of keeping workloads portable in case we have to switch to another instance, region, zone or provider.
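
One common way to build that robustness is checkpointing, so a replacement instance can resume where a preempted one left off. Here is a simplified Python sketch; the checkpoint file, its format, and the “work” being done are all illustrative assumptions:

```python
import json
import os
import tempfile

# Checkpoint location and format are illustrative assumptions.
CHECKPOINT = os.path.join(tempfile.gettempdir(), "worker_checkpoint.json")

def load_checkpoint():
    """Resume from the last saved position, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next_index"]
    return 0

def save_checkpoint(next_index):
    with open(CHECKPOINT, "w") as f:
        json.dump({"next_index": next_index}, f)

def process(items, budget):
    """Process up to `budget` items, checkpointing after each one.

    If the instance is preempted mid-run, a replacement simply calls
    process() again and resumes where the last checkpoint left off.
    """
    start = load_checkpoint()
    done = []
    for i in range(start, min(start + budget, len(items))):
        done.append(items[i].upper())  # stand-in for real work
        save_checkpoint(i + 1)
    return done

if os.path.exists(CHECKPOINT):  # start clean for the demo
    os.remove(CHECKPOINT)

items = ["a", "b", "c", "d"]
first = process(items, budget=2)    # the "instance" is preempted here
second = process(items, budget=10)  # a replacement resumes at item 3
assert first + second == ["A", "B", "C", "D"]
os.remove(CHECKPOINT)
```

The same idea generalizes to object storage or a database for the checkpoint, which is what keeps a workload portable across instances, zones, and providers.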

Software is Eating (Feeding) the Cloud Native World

Marc Andreessen’s famous quote—“software is eating the world”—proves equally true in the Cloud Native space. The most prolific push for generic hardware has been led by public cloud vendors. Similar to enterprises making the move to x86, cloud vendors have been pushing to make all parts of their stack as generic as possible. In case of failure or expansion, vendors can swap a generic part in and out with ease.

With the generic hardware approach, a good amount of logic moves to the software stack. The rationale here is that if hardware is ephemeral, reconstituting the compute, storage, and even networking would be both seamless and consistent with software-defined storage and networking. Applying this to the public/hybrid cloud market, a software-driven solution that’s robust, scalable and portable becomes a core component of Cloud Native.

Save Us, Software!

Configuration control and consistency are moving down the stack: from application to application infrastructure, and now down to the infrastructure itself. With advances in software-defined infrastructure (SDI), the trifecta of load balancing, clustering and replication can be applied to multiple parts of the stack.


Medium has a very well-written article on the different layers of software-powered networking, from software-defined networking (SDN) to container networking. With the ever-widening adoption of the container networking interface (CNI), containerized applications can have a more consistent approach to network connectivity. For example, with Cisco Application Centric Infrastructure as a robust SDN platform, coupled with a service mesh, enterprises have a consistent and recreatable way of discovering and participating in services. AppDynamics can provide insight into this increasingly complex networking landscape as well.


Not long ago, storage in the cloud world was viewed as non-ephemeral. But as offerings, practices and architectures have begun to shift for some cloud storage products, there’s now a delineation between ephemeral and non-ephemeral storage. Although one of the pillars of a twelve-factor application is to run stateless processes, some sort of state needs to be written somewhere, and a popular place is disk. Advances in software-defined storage (SDS), with projects such as Ceph and Gluster, provide object and file storage capabilities, respectively. Similar to the delineation of SDN and CNI, there is SDS and the Container Storage Interface (CSI). For example, Portworx, a popular cloud-native storage vendor, coupled with commodity cloud or on-premises storage, allows for greater portability and storage consistency from the infrastructure to the container/application level.

One More ‘y’ Term

A successful Cloud Native implementation requires another key component: observability. Without proper visibility into the stack, it’s nearly impossible to know when and how to react to an ephemeral infrastructure event while maintaining idempotency.

Idempotency + Ephemerality + Observability = Cloud Native

Despite the inherent challenges with observability, insight into the system is crucial. Relating changes in ephemeral infrastructure to overall sentiment and KPIs can be a challenge as well. With AppDynamics, it’s much easier to validate and advance your investment in the software-defined world.

AppDynamics provides insight on KPIs for a cloud migration/infrastructure change.


AppDynamics delivers deep insights into containers running across an enterprise infrastructure.

A Look to the Future

Every month, it seems, a new project is accepted into the Cloud Native Computing Foundation, which is very exciting. As enterprises march toward infrastructure nirvana, where organizations can recreate robust and consistent infrastructure in an ephemeral world, Cloud Native computing will be an important part of the equation. With the power to create cloud computing almost anywhere, it’s important not to lose sight of non-functional requirements such as security. Cisco’s Tetration Platform, which addresses security and operational challenges for a multicloud data center, can protect hybrid cloud workloads holistically.

Look to AppDynamics for help with navigating the Cloud Native world!

Strangler Pattern: Migrate to Microservices from a Monolithic App

In my 20-plus years in the software industry, I’ve worn a lot of hats: developer, DBA, performance engineer and—for the past 10 years prior to joining AppDynamics—software architect. I’ve been coding since sixth grade and have seen some pretty dramatic changes over the years, from punch cards and 8-inch floppies to DevOps and microservices.

This may surprise you, but during my career I’ve spent more time fixing broken software than building new and innovative applications. I’ve encountered pretty much every variety of enterprise software snafu, many requiring time-consuming fixes I had to do manually. There was a silver lining to this pain, however: I learned a lot about what does and doesn’t work in software development and deployment. Below are some observations drawn from my experiences in the field.

Enter the Strangler

You may already be familiar with the “Strangler Pattern,” an architectural framework for updating or modernizing software and enterprise applications. While the concept isn’t new—esteemed author Martin Fowler was discussing the pattern way back in 2004—it’s even more relevant for today’s microservices- and DevOps-focused organizations.

Essentially, the term is a metaphor for modern software development. The strangler fig is the popular name for a variety of tropical and subtropical plant species whose vines sprout from the crown of a host tree and extend their roots into the ground, enveloping and sometimes killing the host, and shrouding the carcass of the original tree under a thick set of vines.

This “strangler” effect is not unlike the experience an organization encounters when transitioning from a monolithic legacy application to microservices—breaking apart pieces of the monolith into smaller, modular components that can be built and deployed faster. While the enterprise version of the strangler fig won’t kill off its host entirely—some legacy functions won’t transfer to microservices and must remain—the strategy is essential for any organization striving for agile development.
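
In code terms, the strangling usually begins with a routing facade placed in front of the monolith. Here is a simplified Python sketch; the paths and backend names are hypothetical:

```python
# A strangler facade: route migrated paths to the new microservices and
# everything else to the legacy monolith. Paths and backend names are
# illustrative, not from any real system.
MIGRATED_PREFIXES = ["/catalog", "/reviews"]   # slices carved out so far

def route(path):
    """Decide which backend serves a request."""
    if any(path.startswith(p) for p in MIGRATED_PREFIXES):
        return "microservice"
    return "legacy-monolith"   # everything not yet strangled

assert route("/catalog/items/7") == "microservice"
assert route("/checkout/cart") == "legacy-monolith"

# Migration proceeds by growing the list, one slice of the monolith at a time.
MIGRATED_PREFIXES.append("/checkout")
assert route("/checkout/cart") == "microservice"
```

The monolith keeps serving everything that hasn’t moved yet, which is what lets the transition proceed piecemeal rather than as a big-bang rewrite.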

A Hybrid Approach

The Strangler Pattern is a representation of agility within the enterprise. If you’re moving in the agile direction or doing a legacy modernization, you’re using the Strangler, whether you realize it or not.

The pattern helps software developers, architects, and the business side align the direction of their legacy transition. Anytime you hear “cloud hybrid,” “hybrid cloud” or “on-prem plus cloud,” it’s a Strangler Pattern, as the organization must maintain connectivity between its legacy application and the microservices it’s pushing to the cloud.

When enterprises start their agile journey, they soon realize there’s a huge cost to reverse-engineering legacy code—much of it COBOL on the mainframe, or older C++, .NET or Java—to make it smaller and more modular. They also discover that the hybrid or strangler approach, while certainly not easy, is easier than trying to rewrite everything.

Agile Enterprise vs. Agile Development

Developers are very quick to adopt agile. This may seem like a good thing, but it’s actually one of the core problems in organizations I’ve worked with over the years: developers are agile-ready, but the organization is not.

For a legacy transition to work, the focus must be on the agile enterprise, not just agile development. Processes must be in place to determine requirements, mock up screens, hash out wireframes, and generally move things along. Some businesses have this down—Google, Amazon and Netflix come to mind—but many companies don’t have these processes in place. Rather, they jump in head first, quickly going to microservices because of the buzz, without really considering the implications of what this will mean to their organizational requirements. The catalyst may be a new CTO coming in and saying, “Let’s move to the cloud.”

But a poorly conceived microservices transition, one where the entire enterprise neither embraces the agile philosophy nor understands what it means to go to a microservices strategy, can have disastrous consequences.

Bad App, Big Bill

Developers, DevOps and infrastructure folks usually understand what it means to go to a microservices strategy, but the business doesn’t always get it.

There are a lot of misconceptions about what microservices are and what they can do. For the Strangler Pattern to work, you need a comprehensive understanding of the potential impacts of a cloud transition.

I’ve seen situations where an application ran great on a developer’s local desktop, where it wasn’t a problem if the app woke every few seconds, checked a database, went back to sleep, and repeated this process over and over. But a month after pushing this app to the cloud, the company got a bill from its cloud provider for thousands of dollars of CPU time. Clearly, no one at the company considered the ramifications of porting this app directly to the cloud, rather than optimizing it for the cloud.
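
The usual fix is to stop waking on a fixed short interval when nothing is happening. This rough Python sketch compares wake-up counts over one idle hour for a fixed-interval poller versus one with exponential backoff; the intervals and cap are illustrative:

```python
# Rough comparison of how often a fixed-interval poller wakes versus one
# with exponential backoff, over one idle hour. Numbers are illustrative.
def fixed_polls(window_s, interval_s):
    """Wake-ups for a poller that checks every `interval_s` seconds."""
    return window_s // interval_s

def backoff_polls(window_s, base_s=2, cap_s=300):
    """Wake-ups when the idle wait doubles each time, up to a cap."""
    t, wait, polls = 0, base_s, 0
    while t + wait <= window_s:
        t += wait
        polls += 1
        wait = min(wait * 2, cap_s)
    return polls

hour = 3600
print(fixed_polls(hour, 2))   # 1800 wake-ups at a fixed 2 s interval
print(backoff_polls(hour))    # far fewer wake-ups while idle
```

On a local desktop the difference is invisible; on metered cloud CPU it is roughly the difference between paying for thousands of wake-ups an hour and paying for a couple dozen.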

The moral? There are different approaches for different cloud models, migrations and microservice strategies. It’s critically important for your organization to understand the pros and cons of each approach, and how you as a developer or architect can work with the organization’s agile enterprise strategy.

Lift-and-Shift: A Fairy Tale

When attempting a “lift and shift”—moving an entire application or operation to the cloud—companies often adopt a methodical approach. They start with static services that can be moved easily, and services that don’t contain sensitive company data or personal customer information.

The ultimate goal of lift-and-shift is to move everything to the cloud, but in my experience that’s a fairy tale. It’s aspirational but not achievable: You’re either building for the cloud from the ground up or lifting some services, usually the ones easiest to shift. Whenever I mention “lift and shift” to developers, architects and customers, they usually laugh because they’ve gone through enough pain to understand it’s not entirely possible, and that their organization will be in a hybrid or transitional state for an extended period of time.

If you lift-and-shift an application that runs great on-prem, it’s likely to suddenly spin up resources and scale unnecessarily, because the code was written to perform in an environment that’s very different from the one it’s now in. The Strangler Pattern again comes into play: understand which elements of your application or service are a natural fit for the cloud—e.g., those with elasticity requirements or unpredictable scale—and move them first. You can then move the remaining pieces more easily into a cloud environment that behaves more predictably.

Putting It All Together

There are plenty of challenges that come with cloud migration. Your enterprise, top to bottom, must be ready for the move. If you haven’t invested in a DevOps strategy along with the people capable of executing it, codifying all the dependencies and deployment options to make an application run efficiently in the cloud, you’ll likely find your team pushing bug fixes all day and troubleshooting problems, rather than being agile and developing code and features that help users.

The ability to monitor your environment is fundamentally important as well. Being able to see the performance of your services, to quickly root-cause a breakdown in communication between microservices and their dependencies, is absolutely critical to an effective strategy.

Without an agile transformation, you’ll never truly achieve a microservices architecture. That’s why the Strangler Pattern is a good approach. You can’t just say one day, “We’re going agile and into the cloud,” and the whole organization replies, “Okay, we’re ready!”

You’ll meet resistance. Processes will have to be rewritten, code and deployment processes will need to change. It’s a huge undertaking.

The Strangler’s piecemeal approach makes a lot more sense. You don’t want to learn that you screwed up your continuous integration and deployment pipeline on 500 new microservices. It’s much wiser to start with two or three services and learn what works and what doesn’t.

I’ve experienced first-hand all of the software migration problems I’ve described above. I’ve repaired them manually, which was time-consuming and painful. The silver lining is that I learned a lot about fixing broken software, and now I’m able to share these hard-earned lessons.

In conclusion, be sure to implement a DevOps strategy with tooling for continuous integration and deployment, along with good monitoring and logging systems to understand your performance.

AppDynamics Cloud Arrives in Europe

Ask people where “the cloud” is, and most will likely respond “it doesn’t matter.” And that is true in the absolute sense. The physical location of cloud services is unknown and in theory should not matter.

Except when it does.

For many organizations today, and in particular those in the EU, data residency (where the bits actually live) and control over data processing have become hot topics, brought into sharp focus by the privacy-law update that is the EU General Data Protection Regulation (GDPR). As announced, AppDynamics customers in Europe are already meeting their data residency needs by taking advantage of a new SaaS offering hosted in the AWS Region in Frankfurt, Germany. In addition to the benefits of guaranteed availability and encryption of data in transit and at rest, all AppDynamics SaaS customers can be assured that AppDynamics and Cisco have been preparing for GDPR readiness to help our customers with this new regulation (for more information about AppDynamics and GDPR, check out

To understand some of the reasons why AppDynamics brought its cloud to Europe, I sat down with Kahnan Patel, Director of SaaS Engineering; Steve Jenner, Vice President of Worldwide Sales Engineering; and Craig Rosen, Chief Information Security Officer at AppDynamics:

Bradley: What kind of things did you think about when coming up with AppDynamics European SaaS Offering?

Kahnan: One thing we continued to hear from our customers was a desire for data residency in the EU. At the same time, we were looking to leverage some of the latest technology Amazon had to offer in AWS, which would allow us to scale our services more easily and address on-demand capacity needs. It made sense to use this opportunity to do both, by building a SaaS capability in the AWS Frankfurt Region.

Bradley: Transitioning our core services to AWS seems like a big effort.  How did you feel about that move?

Kahnan: You know, a lot of our own customers are doing the same thing right now—moving their core services and customer-facing applications to AWS. Such lifting and shifting of cloud services requires product back-end tweaking, as well as endless performance and quality testing, to make sure the migration has no negative impact on service reliability, availability, scalability or performance. By doing this, we’ll be able to stay on the leading edge of technology more easily and offer those improvements in performance and security to our customers.

Bradley: The other day, I heard you mentioning how we drink our own champagne here at AppD. Can you talk a little bit about how we are using our own products to monitor our own applications in AWS?

Kahnan: Sure. One of the ways we provide a high level of service is by using the AppDynamics platform to monitor and manage performance from our customers’ perspective. We use the entire portfolio, and we’re consistently able to identify issues proactively, before our customers notice them. We have always done this in our existing cloud and are continuing to do so with our SaaS offering in the Frankfurt Region, taking advantage of expanded capabilities enabled by our tight integration with AWS.

Bradley: Steve, it seems like this is bound to be a popular SaaS option – what has been the response from our customers in the EU?

Steve: The response has been nothing short of amazing.  Clearly, there has been a pent-up demand for enterprise-grade APM solutions hosted in the EU region as we have started to see some very large organizations jump in early to take advantage of the opportunity.  We have even had some customers in the region, but outside the EU, subscribe to the new service.

Bradley: There is a lot of buzz around GDPR.  How has AppDynamics helped customers address their concerns about GDPR?

Craig: GDPR certainly impacts many of our customers, so the European SaaS offering coinciding with the GDPR enforcement date is an opportunity for us to reinforce our commitment to building security and privacy into our program by design. Regardless of geography, most of our customers are very glad to see all of our SaaS offerings, including our European SaaS offering, as GDPR-ready, because it enables them to meet the new data protection guidelines that GDPR brings front and center.

Bradley: That brings up a great point about our broader efforts around security and privacy. Craig, what are our goals as it relates to that effort?

Craig: In addition to ensuring our customers are able to comply with the latest privacy and security regulations, at AppDynamics we continue to focus on security and privacy as a priority for our customers. There are many goals and efforts underway, too many to list, but our primary goal at AppDynamics has always been to provide assurance to our customers by building a strong security and privacy program and product platform.

Bradley: Can you talk a bit more about how we are doing that?

Craig: Sure. The AppDynamics SaaS platform extends the work we already do to build a secure product with a security and privacy-minded operational footprint. This includes features and capabilities like access controls, data encryption at rest and in-transit, and service isolation, some of which leverage the security services offered to us through AWS. We couple that with a highly-capable team of experts that provide security design-led development, security testing, security monitoring and DevSecOps practice integration to ensure a continuity of security as environments shift and scale to meet customer demands. Our program also maintains a regular cadence of SOC2 attestations that establish an independent assessment of our SaaS based security controls.

Bradley: We have talked a lot about how we watch the market for technology trends and listen to our customers' requests for new capabilities. Given the high demand for SaaS, what kind of additional things can we share about future plans?

Kahnan: There is a lot of investment going into the AppDynamics platform right now. One of the things I am most excited about is taking what we have built in the European SaaS capability and making it available in strategic AWS Regions worldwide. We have looked at quite a number of great ideas from customers and are working on additional features to meet these demands in partnership with Amazon and the advances they are making with AWS. Stay tuned for more details as this comes to market soon.

Bradley: It sounds like there has been a very positive response to AppDynamics SaaS in Europe. Where can customers go to get more information if they have additional questions?

Steve: Customers can reach out to their account managers for more information or contact us at or

For more programmatic details and information about AppDynamics’ commitment to security, see

Introducing AppDynamics for Kubernetes

Today we’re excited to announce AppDynamics for Kubernetes monitoring, which will give enterprises end-to-end, unified visibility into their entire Kubernetes stack and Kubernetes-orchestrated applications for both on-premises and public cloud environments. Enterprises use Kubernetes to fundamentally transform how they deploy and run applications in distributed, multicloud environments. With AppDynamics for Kubernetes, they will have a production-grade monitoring solution to deliver a flawless end-user experience.

Why is Kubernetes so popular? Because it delivers on the promise of doing more with less. By leveraging the portability, isolation, and immutability provided by containers and Kubernetes, development teams can ship more features faster by simplifying application packaging and deployment, all while keeping the application highly available without downtime. And Kubernetes' self-healing properties not only enable operations teams to ensure application reliability and hyper-scalability, but also boost efficiency through increased resource utilization.

According to the latest survey by the Cloud Native Computing Foundation (CNCF), 69% of respondents said Kubernetes was their top choice for container orchestration. And Gartner recently proclaimed that “Kubernetes has emerged as the de facto standard for container orchestration.” The rapid expansion of Kubernetes is also due to the vibrant community. With over 35,000 GitHub stars and some 1,600 unique contributors spanning every timezone, Kubernetes is the most engaged community on GitHub.

Challenges Emerge

However, Kubernetes brings new operational workflows and complexities, many involving application performance management. As enterprises expand their use of Kubernetes beyond dev/test and into production environments, these challenges become even more profound.

The CNCF survey reveals that 38% of respondents identified monitoring as one of their biggest Kubernetes-adoption challenges, a figure that rises to 46% among larger enterprises.

Shortcomings of Current Monitoring Approaches

When experimenting with Kubernetes in dev/test environments, organizations typically either start with the monitoring tools that come with Kubernetes or use those that are developed, maintained and supported by the community. Examples include the Kubernetes dashboard, kube-state-metrics, cAdvisor or Heapster. While these tools provide information about the current health of Kubernetes, they lack data storage capabilities. So either InfluxDB or Prometheus (two popular time-series databases) is added to provide persistence. For data visualization, open-source tools such as Grafana or Kibana are tacked on. The system still lacks log collection, though, so log collectors are added as well. Quickly, organizations realize that monitoring Kubernetes is much more involved than capturing metrics.
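To make the fragmentation concrete, here is a minimal sketch (in Python, with made-up metric samples) of the kind of glue code teams end up writing just to read the Prometheus-style exposition text that tools like kube-state-metrics emit:

```python
# Minimal parser for Prometheus-style exposition text, the format emitted by
# tools like kube-state-metrics. The sample metrics below are illustrative.

def parse_metrics(text):
    """Return {metric_name: [(labels_dict, value), ...]}."""
    metrics = {}
    for line in text.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comments
            continue
        name_part, value = line.rsplit(" ", 1)
        labels = {}
        if "{" in name_part:
            name, label_str = name_part.split("{", 1)
            for pair in label_str.rstrip("}").split(","):
                key, val = pair.split("=", 1)
                labels[key] = val.strip('"')
        else:
            name = name_part
        metrics.setdefault(name, []).append((labels, float(value)))
    return metrics

sample = """
# HELP kube_pod_status_ready Pod readiness
kube_pod_status_ready{namespace="prod",pod="api-7d9f"} 1
kube_pod_status_ready{namespace="prod",pod="web-41ab"} 0
"""

parsed = parse_metrics(sample)
ready = [labels["pod"] for labels, v in parsed["kube_pod_status_ready"] if v == 1.0]
print(ready)  # → ['api-7d9f']
```

Every additional concern (persistence, visualization, log collection, alerting) adds another layer of this kind of plumbing, which is why DIY stacks grow complicated quickly.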

But wait: additional third-party integration may be needed to achieve reliability. By default, monitoring data is stored on local disk, where it is susceptible to loss during node outages. And to secure access to their data, organizations must develop or integrate additional tools for authentication and role-based access control (RBAC). Bottom line: while this approach may work well for small development or DevOps teams, a production-grade solution is needed, especially as enterprises start to adopt Kubernetes for their mission-critical applications.

Unfortunately, traditional APM tools often aren’t up to the task here, as they fail to address the dynamic nature of application provisioning in Kubernetes, as well as the complexities of microservices architecture.

Introducing AppDynamics for Kubernetes

The all-new AppDynamics for Kubernetes will give organizations the deepest visibility into application and business performance. With it, companies will have unparalleled insights into containerized applications, Kubernetes clusters, Docker containers, and underlying infrastructure metrics—all through a single pane of glass.

To effectively monitor the performance of applications deployed in Kubernetes, organizations must reimagine their monitoring strategies. In Kubernetes, containerized applications are deployed in pods, which are created within virtual groupings called namespaces. Since Kubernetes decouples developers and operations from deploying to specific machines, it significantly simplifies day-to-day operations by abstracting the underlying infrastructure. However, this results in limited control over which physical machine the pods are deployed to, as shown in Fig. 1 below:


Fig. 1: Dynamic deployments of applications across a Kubernetes cluster.

To gather performance metrics for any resource, AppDynamics leverages labels, the identifying metadata that serves as the foundation for grouping, searching, filtering, and managing Kubernetes objects. This enables organizations to gather performance insights and set intelligent thresholds and alerts on the performance of Pods, Namespaces, ReplicaSets, Services, Deployments, and other Kubernetes objects.
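As a rough illustration of why labels matter for monitoring, the sketch below groups per-pod metrics by a label key. The pod data and label names are invented for the example and are not AppDynamics' actual data model:

```python
# Hedged sketch: grouping by Kubernetes labels enables per-app or per-env
# rollups of raw pod metrics. All pod names, labels, and values are made up.

from collections import defaultdict

pods = [
    {"name": "checkout-1", "labels": {"app": "checkout", "env": "prod"}, "cpu_millicores": 250},
    {"name": "checkout-2", "labels": {"app": "checkout", "env": "prod"}, "cpu_millicores": 310},
    {"name": "search-1",   "labels": {"app": "search",   "env": "prod"}, "cpu_millicores": 120},
]

def rollup(pods, label_key):
    """Sum a metric across all pods sharing the same value for label_key."""
    totals = defaultdict(int)
    for pod in pods:
        totals[pod["labels"].get(label_key, "<unlabeled>")] += pod["cpu_millicores"]
    return dict(totals)

print(rollup(pods, "app"))  # → {'checkout': 560, 'search': 120}
```

Because labels survive pod churn (a replacement pod carries the same labels), aggregating by label gives stable series even though individual pods come and go.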

With AppDynamics for Kubernetes, enterprises can:

  1. Achieve end-to-end visibility: From end-user touch points such as a browser, mobile app, or IoT device, all the way to the Kubernetes platform, AppDynamics provides line-of-code-level detail for every application deployed (whether a traditional app or a microservice), granular metrics on Docker container resources, infrastructure metrics, log analytics, and the performance of every database query—all correlated within the context of Business Transactions, a logical representation of end-user interactions with applications. AppDynamics for Kubernetes helps enterprises avoid silos and enables them to leverage existing skill sets and processes to monitor Kubernetes and non-Kubernetes applications from a unified monitoring solution across multiple, hybrid clouds.
  2. Expedite root cause analysis: Cascading failures from microservices can cause alert storms. Triaging the root cause via traditional monitoring tools is often time-consuming, and can lead to finger-pointing in war-room scenarios. By leveraging unique machine learning capabilities, AppDynamics makes it simple to identify the root cause of failure.
  3. Correlate Kubernetes performance with business metrics: For deeper visibility into business performance, organizations can create tagged metrics, such as customer conversion rate or end-user experience correlated with the performance of applications on the Kubernetes platform. Health rules and alerts based on business metrics provide intelligent validation so that every code release can drive business outcomes.
  4. Get a seamless, out-of-the-box experience: AppDynamics’ Machine agent is deployed by Kubernetes as a DaemonSet on all the worker nodes, thereby leveraging Kubernetes’ capability to ensure that the AppDynamics agent is always running and reporting performance data.
  5. Accelerate ‘Shift-Left’: AppDynamics is integrated with Cisco CloudCenter, which creates immutable application profiles with built-in AppDynamics agents. Leveraging this capability, customers can dramatically streamline Day 2 operations of application deployment in various Kubernetes environments, such as dev, test and pre-production. And proactive monitoring enables customers to catch performance-related issues before they impact the user experience. Go here to learn more about Cisco CloudCenter.
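For readers unfamiliar with DaemonSets (point 4 above), the sketch below shows the general shape of a manifest that runs a monitoring agent on every worker node, expressed as a Python dict of the form Kubernetes client libraries accept. The names and image are hypothetical, not the actual AppDynamics agent configuration:

```python
# Illustrative DaemonSet manifest for a node-level monitoring agent.
# Namespace, names, and image are hypothetical placeholders.

daemonset = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "machine-agent", "namespace": "monitoring"},
    "spec": {
        # The selector must match the pod template's labels.
        "selector": {"matchLabels": {"name": "machine-agent"}},
        "template": {
            "metadata": {"labels": {"name": "machine-agent"}},
            "spec": {
                "containers": [{
                    "name": "machine-agent",
                    "image": "example/machine-agent:latest",  # hypothetical image
                }],
            },
        },
    },
}
```

Because the DaemonSet controller schedules one copy of the pod template on each node, the agent is restarted automatically if it dies and is added to new nodes as the cluster scales.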

AppDynamics at KubeCon Europe

We are excited to be a sponsor of KubeCon + CloudNativeCon Europe 2018, a premier Kubernetes and cloud-native event. Our team will be there in full force to help you get started with production-grade monitoring of your Kubernetes deployments. And don’t forget to load up on cool new AppD schwag at the event.

Stop by AppD booth S-C36 in the expo hall. Additionally, I will be presenting the following sessions at Cisco Lounge in the expo hall:

  1. Introduction to Application Performance Monitoring—Wed-Fri, May 2-4, 12:30 PM
  2. Enterprise-grade Application Performance Monitoring for Kubernetes—Wed-Thu, May 2-3, 3:30 PM, Friday 3:00 PM

We are looking forward to engaging with all of our fellow Kubernauts. See you in Copenhagen!

Attaining Nirvana: The Four Levels of Cloud Maturity

Cloud adoption is atop every CIO’s priority list, and for good reason. Technology stacks are advancing at lightning speeds. Application architectures of the past decade are aging fast and being replaced with modern, public and private cloud-based ones. But while cloud adoption is inevitable, the vast majority of organizations are still searching for an effective application migration strategy, notes Gartner.

If you feel like you are falling behind the competition in your cloud journey, there’s no need to panic. A structured and comprehensive migration model, combined with a smart investment strategy, will go a long way toward ensuring success. Our cloud maturity model is based on insights we’ve gleaned from hundreds of conversations with CIOs about their cloud adoption strategies, as well as numerous customer migrations we’ve supported successfully. We’ve identified common patterns—or “maturity levels”—in the adoption process. By understanding this maturity model, you may find it easier to develop your own cloud strategy.

Below we describe four levels of cloud maturity. It is important to note that progression does not require adopting every level along the way. Some organizations skip levels, jumping from Level 1 to Level 3, for instance, and bypassing Level 2 altogether. Not all organizations need to end up at Level 4, and most will have different applications in their portfolio at different levels of maturity at the same time. For example, since customer-facing apps that generate revenue need to be the most agile and responsive, it makes sense to migrate them to Level 3 or Level 4, which are optimized for rapid application delivery and scale. On the other hand, older apps in maintenance mode can be kept at Level 1 or Level 2 without incurring too much additional investment. Also, keep in mind that companies are adopting a DevOps operational philosophy to accomplish these transformational tasks (more on this below).

Hybrid apps, where parts of the application continue to run on-premises due to immovable mainframe systems or data gravity—Dave McCrory’s concept where data becomes so large it’s nearly impossible to move—are a reality as well.

Level 1: Traditional Data Center Apps

Traditional data center apps run in classic virtual machines (such as VMware) or on bare metal in a private data center, and are typically monitored by the IT Ops team. There are pros and cons to these architectures and deployments, of course. Advantages include total control over your hardware, software, and data; less reliance on external factors such as internet connectivity; and possibly a lower total cost of ownership (TCO) when deployed at scale. Disadvantages include large upfront costs that often require capital expenditure (CapEx), as well as maintenance responsibilities. They also result in longer implementation times, which begin with hardware and software procurement before any code is written. Given these drawbacks, it’s quite likely that by 2020 a corporate “no cloud” policy will be extremely rare.

Level 2: Lifted and Shifted Apps

Given the cloud’s promise of elasticity and agility, nearly every IT organization has embarked on a cloud adoption journey. Those who haven’t actually started migrating their production workloads are definitely prototyping or experimenting in this area. However, cloud migration seldom runs smoothly. Oftentimes an organization will take the same virtual machines it was running on-premises and “lift and shift” them to the cloud. The target environment is typically a public cloud provider such as Amazon AWS, Microsoft Azure, or Google Cloud, or at times a private data center configured as a private cloud.

A sound migration strategy? Not really. Despite the expected cost savings of this approach, a company often finds it’s far more expensive to run its applications in the cloud the same way it did on-premises. The large VM configurations these applications were architected for are very expensive in the cloud. In other cases, the underlying infrastructure lacks support it had on-premises. For example, some application servers that rely on multicast to communicate with each other no longer work in the cloud when purely lifted and shifted. These shortcomings demand application refactoring.

Lastly—and perhaps most importantly—these traditional data center applications were written with certain hardware reliability assumptions that may no longer be valid in the cloud. On-premises hardware is typically custom built and designed for higher reliability, whereas the cloud is built on a tenet that hardware is cheap and unreliable, while the software layer delivers application resiliency and robustness. This is another reason why application refactoring becomes necessary.

Level 3: Refactored Apps

Once an organization realizes that some of its lifted-and-shifted applications are fundamentally hamstrung in the cloud, it needs to refactor its apps. Several modern platforms that are purpose-built for running apps in cloud environments are a perfect choice for refactored programs. These modern platforms generally include application platform-as-a-service (PaaS) technologies or cloud native services. The typical PaaS technologies include Pivotal Cloud Foundry, Red Hat OpenShift, AWS Elastic Beanstalk, Azure PaaS, and Google App Engine. The cloud native offerings include hundreds of managed services offered by AWS, Azure and Google Cloud, such as database, Kubernetes, and messaging services, to name a few.

In a traditional data center apps environment, the IT organization is responsible for deploying, managing, and scaling the application code. By comparison, these modern platforms automate tasks and abstract out a lot of complexities; applications become elastic, dynamically consuming computing resources to meet varying workloads. The modern approach frees the IT organization from maintaining something that has no intellectual property benefit to its business, allowing it to focus on business problems as opposed to infrastructure issues.

Level 3 is where organizations begin to see signs of the cloud’s true potential. This is also where companies can dabble in container technology and start to build the organizational muscle to run apps in modern architectures. However, not all is perfect in this world; organizations need to use caution when adopting some cloud-native services, which can result in vendor lock-in and poorer application portability, a top priority for some companies.

And while it may appear the cloud journey is now complete, obstacles remain. Refactored apps are often the same code from monolithic apps, only broken down into more manageable components. These smaller components can now scale automatically using PaaS or cloud-native services, but their code was not originally architected for truly modular, stateless deployments that would make them linearly scalable. Yes, a car may run faster after getting a custom performance chip upgrade, but in order to negotiate race-track turns at high speeds, it needs to be built from the ground up for high performance.

Level 4: Microservices—the Nirvana State

Microservices, as the name suggests, is an architecture in which the application is a collection of small, modular and independently deployable services that scale on demand to meet application workload requirements. These services are usually architected either using container and orchestration technologies (Docker and Kubernetes, or AWS, Azure, and Google container services) or serverless computing (AWS Lambda, Azure Functions, or Google Functions).

This level is regarded as the “Nirvana State” because applications built on microservices architectures are ultra-scalable and fault tolerant, ensuring an ultra-responsive and uninterrupted end-user experience. Organizations can distribute the smaller services to nimble DevOps teams, which run independently and enjoy the freedom to innovate faster, bringing new features to market in less time.

It requires a big commitment, though: a company must either rearchitect its applications from the core, or write all new applications with the microservices approach.

While these granular services offer many benefits, and the individual services are simpler to manage, combining them into broader business applications can get complicated, particularly from a monitoring and troubleshooting standpoint.

Where DevOps Fits In

Although this maturity model explores cloud adoption through a technology lens, organizational evolution is just as critical to the success of the adoption journey. Moving from siloed Dev and IT organizations to a DevOps model is essential for achieving the full benefits of the higher levels of maturity.

A DevOps team generally consists of 8 to 10 people, including developers and site reliability engineers (SREs). Each super-agile team is responsible for only a few services, not the entire application. Teams take great pride in the responsiveness of their services, as well as the speed at which they deliver new functionality. The key tenet here is: You build it, you own it, you run it! That’s what makes DevOps different from the traditional Dev and Ops split.

Nirvana Takes Time—and Hard Work

Cloud adoption is a journey in which adopting microservices on cloud platforms (public or private) can lead to greater agility, significant cost savings, and superior elasticity for organizations. The road may seem treacherous at times, but don’t get discouraged; everyone is on the same path. The global IT sector is poised for another year of explosive growth (5.0 percent, CompTIA forecasts) and is embracing fast-paced innovation. Taking a considered approach and adopting DevOps practices is the fastest way to achieve the Nirvana State.

Learn more about how AppDynamics can help you on the path to cloud maturity.

Top 3 Challenges of Adopting Microservices as Part of Your Cloud Migration

IDC estimates 60% of worldwide enterprises are migrating existing applications to the cloud. With the promise of greater flexibility, a reduction in overhead, and the potential for significant cost savings, it’s a logical decision. But instead of performing a “lift and shift” (simply moving an existing application to a cloud platform as-is), many businesses use the migration period as an opportunity to modernize the architecture of their applications.

What’s more, in a survey by NGINX, over 70% of organizations say they’re adopting or exploring microservices for their new architectures – and with good reason. Breaking a monolithic application into manageable microservices allows development teams to rapidly respond to an ever-evolving set of business requirements, choose the right technology stack for each task, and readily provide support for a variety of delivery platforms, including web, mobile, and native apps.

However, adopting microservices as part of your cloud migration isn’t always easy. Below are three common challenges we’ve seen enterprises face, and solutions to help mitigate these risks.

Challenge 1: Identifying what needs to be migrated to microservices

Before you can begin breaking your application into individual microservices, you first need to understand its full scope and architecture. This can prove challenging because the overarching view of the application is often based on “tribal knowledge” or cobbled together from a collection of disparate tools. Sometimes a more holistic view is available, but it’s based on outdated information that no longer reflects the current architecture of the application.

You need to find a solution that will help you discover and map every component, dependency, and third party call of your application. This solution should help you understand the relationship of these pieces and how each impacts your application’s behavior and user experience. Armed with this information you’ll have a clear picture of what needs to be migrated, and will be able to make more informed decisions regarding the architecture of your microservices.

Challenge 2: Ensuring your microservices meet or beat pre-migration performance

To ensure your application runs smoothly post-migration, and that user experience is not negatively impacted, you need a way to compare performance metrics from before and after the migration. This can be extremely difficult because the architectures of the two environments (with changes to hardware and the move to a distributed architecture) can look drastically different. To make things even more difficult, the monitoring tools supplied by individual hosting providers give insight into only a small portion of the entire architecture, and have no way of creating a more holistic set of data.

To combat these issues and establish a consistent baseline by which to measure performance and user experience, capture key user interactions (often referred to as business transactions) before beginning your migration. Business transactions are likely to remain the same through migration, whereas other metrics may change as you take different code paths and deploy on different infrastructure. Armed with baseline data about your business transactions, you can easily compare the performance of your pre- and post-migration environments and ensure there is no impact on your user experience or overall performance.
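As a simple illustration of this baselining idea, the following Python sketch flags business transactions whose median latency regressed after a migration. The transaction names and latency samples are invented for the example:

```python
# Sketch: compare pre- vs. post-migration latency baselines per business
# transaction. All transaction names and sample values are illustrative.

import statistics

pre  = {"checkout": [210, 230, 250, 240], "search": [90, 110, 100, 95]}
post = {"checkout": [205, 215, 220, 210], "search": [150, 160, 170, 155]}

def regressions(pre, post, tolerance=1.10):
    """Flag transactions whose median latency worsened by more than tolerance."""
    flagged = []
    for txn, samples in pre.items():
        before = statistics.median(samples)
        after = statistics.median(post[txn])
        if after > before * tolerance:
            flagged.append((txn, before, after))
    return flagged

print(regressions(pre, post))  # → [('search', 97.5, 157.5)]
```

In practice you would compare richer signals (p95/p99 latency, error rates, throughput), but the principle is the same: compare like-for-like business transactions rather than raw infrastructure metrics, which rarely map cleanly between the two environments.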

Challenge 3: Monitoring your new microservice environment

With large monolithic applications running a single codebase on a few pieces of hardware, two or three tools could once provide complete, straightforward monitoring of application and infrastructure performance. However, with the introduction of microservices, and the potential for each service to choose its own technology stack, database, and hosting provider, a single service may now require more monitoring tools than the entire application once did. And microservice monitoring brings specific challenges: services are often short-lived, which makes monitoring over a longer period more complicated, and there may be more pathways through which a service is reached, potentially exposing issues such as thread contention.

Finally, while development teams previously didn’t require a monitoring solution that took infrastructure into account, the move to DevOps and the reliance on cloud-native technologies means this factor can no longer be ignored.

The goal then becomes finding a unified monitoring platform that supports all of your environments, regardless of language or technology stack. This solution must collate metrics from across your application and infrastructure into a single source of truth, and allow for correlation of those metrics to user experience.

Are you ready?

AppDynamics has been helping customers like Nasdaq, eHarmony, and Telkom with their cloud migration and microservice adoptions. Schedule a demo to see what AppDynamics can do for you.

IDC White Paper: Critical Application And Business KPIs for Successful Cloud Migration

Today, enterprises worldwide are moving towards a cloud-first strategy with the promise of benefits like agility, scalability, and innovation-at-speed.

However, migrating to the cloud can also present issues with security, compliance, performance, and more. As a result, it’s critical for businesses to understand the types of application and infrastructure monitoring, analytics, and performance information needed for successful migration.

To gain more insight into the best practices and KPIs for cloud migration, IDC surveyed 600 global enterprise decision makers about their cloud migration challenges and the information they needed to make informed decisions before, during, and post-migration.

Below are key findings from the IDC white paper, Critical Application and Business KPIs for Successful Cloud Migration (August 2017), sponsored by AppDynamics.

Application Performance Management (APM) is Imperative to Support Effective Migration  

To make smarter migration decisions, surveyed respondents reported a need for insight into KPIs for business and technical metrics, end user experience and business impact analysis, cloud capacity utilization, and cost-per-application evaluation. IDC’s research shows that application performance monitoring and analytics shed light onto these KPIs, making APM increasingly required to support effective planning and validation.

Cost Savings are the Biggest Benefit Expected of Migration to Cloud Infrastructure

Some 60% of respondents indicate that IT and development cost savings are the most important business benefits expected from migration. While the survey suggests that these expectations are often met, it’s important to note that this requires modernization of application architectures, supporting technology, people, and processes. In support of this, IDC highlights that containers are playing an increasing role in successful cloud migration, with almost two-thirds of surveyed respondents either currently containerizing new or existing applications, or planning to implement containers to support existing applications.

iOS and Android Applications Are the Least Likely to Have Been Migrated So Far

Enterprises no longer fear migrating existing applications. Nearly half (45.9%) of the respondents said they’ve already migrated some custom-developed browser-based applications to the cloud, and another 38.7% plan to do so within the next two years. Interestingly, iOS and Android applications are currently the least likely to have been migrated to date, but are important priorities for the next two years.

AppDynamics’ Role in Cloud Migration

The AppDynamics platform gives your enterprise real-time, end-to-end data about your users, transactions, code, and infrastructure to arm you with the information you need to support your application migration to the cloud. AppDynamics’ APM solution plays a central role in any cloud journey by offering:  

  • Breadth of visibility into complex and distributed applications, including every dependency, user experience, and transaction to help accelerate cloud migration evaluation and planning.
  • Pre and post-move business and technical KPI assessments to prove migration success.
  • End user experience and business impact analysis of cloud computing.

For additional data and insights, download the IDC white paper now: Critical Application and Business KPIs for Successful Cloud Migration.

Good Migrations: Five Steps to Successful Cloud Migration

Unless you’ve been living in a cave for the last decade, you’ve seen cloud computing spread like wildfire across every industry. You also probably know that the cloud plays a pivotal role in digital transformation. Whether “the app is the business” is a well-worn subject or not, it doesn’t change the fact that companies are spinning up their apps faster because they can scale, test, and optimize them in real production environments in the cloud. But like any technology, the cloud isn’t perfect. If it isn’t configured specifically to your application and business needs, you can find yourself dealing with performance issues, unhappy users, and one splitting headache.

On the flip side, developing a successful cloud migration strategy can be a rewarding experience that reverberates across your enterprise. That 48 of the Fortune 50 corporations have announced plans to adopt the cloud, or have done so already, shows the cloud isn’t just good for business. It’s a must-have, basic requirement. Like Wi-Fi. And a laptop.

In our new eBook, Good Migrations: Five Steps to Successful Cloud Migration, we focus on the right steps in migration, and also the varying ways enterprises can take those steps. Simply put, no two enterprises — nor their apps — are alike. So there’s no single one-size-fits-all cloud migration plan or solution that works for every enterprise. As for company happy hours? Those seem to work pretty well for everyone.

Here are just a few of the valuable points covered:

Why Migrate at All

Every company has different reasons for migrating to the cloud. So, you do need to ask yourself a few questions about why you want (or need) to move to the cloud. It’s critical that you focus on precisely what it is you need specific to your app and business needs. Define what cloud environment fits your objectives. Determine how you’ll make the move. Plan for every phase: before, during — and yes — after you migrate.

Where Your App Lives Impacts How it Lives

New environments and IT configurations come with new rules. So you’ll scrutinize your app through a different lens after migration. You have to understand every system that connects to and interacts with it. You’re not reinventing the wheel, but you will likely need to modify it. Yes, it’s a time investment, but it’s one that will ultimately save you in the long run.

The Phases of Cloud Migration

Migration is a process that rolls out in phases. Here’s a summary of them:

  1. Choose your providers: Pick one or several, depending on your needs.
  2. Assign responsibilities to the providers: Who does what? Who is in control of the app?
  3. Adjust the internal configuration: Manage IT expectations and apps that aren’t migrated.
  4. Get users on board: If your team isn’t comfortable with the technology and aligned with the change, the migration won’t make a difference.

Rising Currency in the Cloud

Migrating an app or two isn’t going to fetch you the ROI it’s capable of. You need to go big or go home (that is, if you determine that the cloud is appropriate for your enterprise at all). When you implement a cloud migration on a large scale across your enterprise, you can create considerable value. To measure that value you need to set clear goals that can be measured across a variety of operations. In time, you’ll be able to calculate how much time and money your company saves, spends, and earns in the cloud.

Migration is a Journey, Not a Destination

Your apps are never truly complete. You’re always improving them. The same is true with cloud migration. Priorities change. Business inevitably changes. Users change. Like your apps and your business in general, you’ll continue to evaluate your cloud environment, making sure your teams are in alignment, along with tweaking networks and devices.

Learn More

To learn more so you can make informed decisions, be sure to download the eBook Good Migrations: Five Steps to Successful Cloud Migration.