Turning Digital Transformation into Digital Dexterity

The goal of digital dexterity is to build a flexible, agile workplace and workforce invested in the success of the organization. This dexterity allows the enterprise to treat employees like consumers—researching their challenges, goals and desired technologies—and then letting employees exploit existing and emerging technologies for better business outcomes. This post advises line-of-business and product owners, already acting as agents of transformation inside their enterprises, on extending the metamorphosis into dexterity.

The Road vs. The Mountaintop

The journey begins with digital transformation, a road leading to multiple destinations. It is not a singular goal, but rather a way of life. Even digital-first enterprises continue to transform as they experiment with different business models or expand into new markets.

Enterprises executing digital transformations share three common goals:

  • Making analog tasks digital
  • Seeking new ways to solve old problems
  • Making the business better

While all three goals are important, the technical challenges of digital transformation often end up overshadowing the goal of improving the business. Transformation must leave the business not just different, but better. The transformed enterprise needs to be more agile in both application development and business. Digital transformation needs to result in a new company with digital savvy, an understanding of the power of the data being collected, and the flexible and informed mindset required for digital dexterity.

Dexterity vs. Transformation

The real goal of digital transformation is to shorten the time required to transform business processes. How quickly can you spot a new or altered opportunity? Is the business digitally savvy enough to comprehend the possibilities of new technologies like blockchain, internet of things, and edge computing? Is your business now digitally dextrous?

Digitization without exploiting resultant data is a negative technical investment.

The next step in this journey—data extraction and data-driven decision making—mines the real value of going digital. The significant power of digital over analog is the ease of accumulating and assessing data: data on each customer click, each cloud system the business runs on, each line of code, and even each stage in a business process.

Often the first projects in digital transformation take long stretches of elapsed time. IT will need to rebuild itself first, taking many steps to respond faster to evolving needs. IT will also need to upgrade traditional waterfall models into agile development lifecycles with continuous integration and continuous delivery. Departments can restructure to create DevOps teams that reduce the time from coding to deployment.

In the middle of long transformation projects, it is worth stepping back and asking anew: Why are we doing this? Digital transformation has been around so long, it may feel like it’s past its use-by date. Though some enterprises are born digital-first, many are still struggling with basic analog-to-digital transformation. In the rush to deal with the technology of multi-channel digitization, the goal is often missed.

(For more on digital transformation in specific industry verticals, read our AppDynamics blog posts on insurance, retail banking and construction.)

Digital Dexterity

Once your digital processes are generating data, the next step is to ensure you can exploit the wisdom of that data.

Achieving digital dexterity requires a new culture on both the business and technical sides. The technology team not only needs the technical skills to transform, but also the diplomatic skills to boost the organization’s digital dexterity. Amongst the “best coders on the planet” that you hire, you will want to seed the best communicators and evangelists as well. The business team will initially need your support in understanding what can be exploited with technology; the technical team will need to communicate using business terms. Similarly, these teams need to be presented with clear correlations from their application deliverables to business outcomes. Developing a multichannel awareness may be a new thing for your sales force.

The real measure of dexterity is the enterprise’s ability to empower technical staff to make business decisions, and business staff to drive technical choices.

Challenges You Will Meet


Gartner’s 2018 CIO Survey reveals that CIOs believe corporate culture is one of the biggest barriers to digital transformation, followed by resources and talent. Those three elements make up 82% of digital business impediments, the survey says.

Consider expanding DevOps into BizDevOps. For this, you will need a nervous system connected to all parts of your enterprise to define common goals for both the business and technical teams, both of which need a common, shared view of data to allow differently trained participants to discuss and identify solutions.

Build a common vision and strategy across your business and technology leaders. Collaborative learning across team and knowledge structures is an effective way to help employees become dextrous.

Embracing diversity is a key action that adds a variety of viewpoints for spotting new opportunities. Make sure your strategy considers the employee experience (also a good time to guard against bias around gender, disability, and so on). Consider whether the approach makes the employee more business-literate and more empowered to exploit new business processes.

Application owners need to continuously search out ways to improve employee effectiveness. The applications we develop should always listen to, interpret, and learn from their users. Just as smart speakers started out quite limited but improved over time, the enterprise application should consider user activity and create more efficient workflows for the user.

Technical Delay

As part of digital transformation, enterprises build out business intelligence frameworks, creating data lakes and gaining a rearview-mirror view of their business. Executives may even bring on data scientists to create models to predict the coming quarter. Each of these actions has value but excludes one key timeframe: today. Right now.

Why Aim for Dexterity?

Every company today is experiencing disruption. In fact, more companies experience disruption than act as disruptors. Right now, there’s a startup somewhere that will eventually flip to a business model that challenges yours. It might be a small change, or a permanent change in the marketplace. Your job is to prepare your enterprise by making sure your employees are empowered with self-serve, consumer-like technologies, and that they’re aware of the possibilities of change.

A dextrous enterprise can easily respond to market movements and disruptions. New businesses can be created with less struggle once it’s easier to connect departments and businesses. Employees with common awareness of the business—and the technology supporting the business—can readily identify, define and exploit new revenue opportunities. The holy grail alignment of IT and business will come through having all parties look at the same data to enable data-driven decisions.

Remember, the dextrous enterprise provides a consumer-like experience for its employees.

Transformation must leave the business not just improved, but better at surviving disruptions. The transformed enterprise is more agile in both development and business. It is able to rapidly integrate and partner with external businesses when the opportunity or need arises, and connect disparate business processes into a new buyer’s journey when a disruptor changes the marketplace. Digital transformation needs to deliver a new company that understands the power of collected data and the flexibility to harness the latest technology.

Digital dexterity is people using digital technologies to think, act and organize themselves in new and productive ways.

For more use cases supporting digital dexterity, read how our customers are using Business iQ for AppDynamics.

AppDynamics Receives Pivotal Award for Outstanding Solutions and Services

Fall 2018 has gotten off to a great start! At the SpringOne Platform Conference in Washington D.C., Pivotal Software recognized AppDynamics as a Customer Impact Independent Software Vendor of the Year. The award recognized our excellence in building and delivering solutions to the Pivotal Cloud Foundry (PCF) ecosystem and community.

Mark Prichard (left and center), Senior Director of Product Management at AppDynamics, receives the 2018 Pivotal Partner Award at the SpringOne Platform Conference.
This is the first year that Pivotal has recognized specific members of its partner community for their excellence in building and delivering solutions to their customers. AppDynamics and other award winners were selected from more than 170 systems integrators and 70-plus independent software vendors worldwide. The honors were based on partners’ impact on Pivotal customers, market momentum, and the successful implementation of Pivotal’s values and methodologies.

AppDynamics, of course, has long been an active supporter of the Cloud Foundry platform—both Pivotal and open source—and will continue to be one. Many of our largest customers have been successfully running large-scale CF deployments for years. We’re thrilled to receive the Pivotal award, as it’s a validation of our hard work and dedication to the PCF ecosystem.

Spring Forward

The agile revolution is enabling teams to create the next generation of powerful enterprise applications. And few firms in recent years have been as prolific as Pivotal in leading this agile transformation. 

The three-day SpringOne Platform Conference brought together cloud and agile practitioners from around the globe to discuss the latest Cloud Native technologies empowering the next generation of applications. As an event attendee, I came away with a sense of great excitement. SpringOne was packed with vendors and initiatives, as well as an engaged community moving forward at a rapid pace.

In fact, the degree of collaboration and interoperability was staggering. One great example was Microsoft’s push into Java: a heavy investment in the space and a compelling development that garnered a lot of buzz at SpringOne.

With its lineage in SpringSource, the Spring community has long bolstered the broader Java ecosystem with its contributions. And Spring’s reach has been furthered by Pivotal’s portfolio and capabilities.


This has been a very exciting year for us here at AppD, as we continue to help our clients drive innovation and validate the hard work happening every day in the Pivotal ecosystem.

AppDynamics is a stellar enterprise partner no matter where you happen to be in your agile transformation journey. Our capabilities and commitment to the Pivotal platform will only grow stronger, as shown recently by our two distinct Service Broker tiles on the Pivotal Network, which bring significant enhancements to our customers who monitor apps on PCF.

We’re already excited for SpringOne 2019, which will surely be a stellar event with lots of information to share!

The Consumerization of IT and What It Means for Modern DevOps

As professionals in the IT space, we’re constantly introduced to new terms, concepts, technologies and practices. In many cases, we view these terms as IT-specific to help us be more proficient and cutting-edge. Or at least that’s what we strive to achieve. With many companies trying to disrupt the verticals they target, it’s important for us to understand how these new facets of technology impact the bottom line.

Business is under pressure to deliver more groundbreaking ideas than ever. Understanding the impact of new technology will empower you to engage business leaders to be supportive in both principles and budgetary needs. When you look at companies that successfully disrupted a specific space, you’ll see one key element in the mix: the end-user experience. This used to mean how nice your app looks. Today, the user experience is more about speed and ease of access, while also maintaining a level of confidence that the app will do what it’s supposed to. If you fail to deliver this, your end user will simply move on. As a matter of fact, Bloomberg reports that approximately $6 trillion is moving from digital laggards to businesses that provide the best user experience through digital transformation.

The first point we all must realize is that in the past 10 years, the consumerization of IT has taken the industry by storm. What this really means is that consumers of your IT services are not much different from consumers of your public-facing applications. We know that public-facing applications are the front door to your business—this is where customers are won and lost by the adoption of technology. As an IT professional, your internal business leaders are your customers, and it’s up to you to deliver and drive technology solutions that move the business metrics so near and dear to your “customers’” hearts. So make sure the way you articulate changes to your principles reflects how IT will impact your business.

Secondly, a DevOps shift for internal process and procedure is no easy feat, especially when you’re dealing with years of hardened policies and practices. But it’s crucial for building a modern DevOps function. This means you’ll need to coordinate a holistic effort that includes development and operations teams, as well as the line of business. Otherwise, the moves you need to make will become exponentially more difficult as business demands become more severe, particularly with the rising number of disruptors in your market.

Lastly, the proof is in the pudding. When you begin your journey to the DevOps shift, it’s critically important to keep all the key players engaged, thereby enabling them to see the value you’re bringing to the table. This is particularly important when you’re demonstrating how new technology implementations are impacting the business in a positive—or negative—way. In this scenario, what you’ll show is either, “Yes, our technology is on the right path,” or “No, the implementation is giving us a negative response from our customers, so we need to quickly course-correct to minimize the damage and regain a positive direction.”

When all is said and done, we must understand the user experience is the new currency in the hyper-connected world we live in. But what is even more critical is that frequent change is required to stay ahead of the competition. This is where your business leaders come in. It’s in nobody’s best interest to stay stagnant, regardless of your industry. Disruption has hit retail, transportation, finance, healthcare and the list goes on. Making frequent changes to beat the disruptors requires you to build out a DevOps practice to ensure you have the ability and tools to respond to high business demands. Here’s how this impacts the business and helps push your DevOps plan forward:

  1. Business leaders are under tremendous pressure to drive continuous growth. A flat-line approach is a leading indicator that your company is falling behind competitors. Highlighting that you want to build out a practice that enables you to quickly develop, monitor, analyze and respond is exactly what your business wants to hear. But be prepared to knuckle down, as this is a never-ending loop to ensure you’re on the right path. When your leaders understand they’re now part of the process, they’ll become more tightly aligned with your strategy.

  2. Defining the critical metrics with your business leaders will allow you to understand how your technology provides the greatest impact. This can include an array of vital measurements that enable you to correlate your application performance to key business metrics. And not necessarily just monetary metrics either—they can be tied to conversions, promotion success, how frequently users are using (or not using) your application, and overall customer satisfaction. Having these metrics in place will ensure your business leaders’ involvement moving forward, and gain their confidence that your strategy is on target.

  3. Embrace analytics to gain the ability to understand business transactions and the user journey. The key part here is that you’re building out a DevOps strategy to be lean and nimble, but the end goal must be to understand how your end users are reacting to your applications. Leveraging an analytical platform like AppDynamics Business iQ is key to showing how your application ties directly to metrics defined by your business leaders. These leaders will gain immediate value from key data they’re not accustomed to seeing. This effort will also help you set priorities on which items you should develop first.

  4. As with Agile development and DevOps, this is an iterative process and a continuous cycle. Automating the process to remove the human element is key: Leveraging AI to help predict anomalies and stay ahead of the consumer will build the highest degree of confidence in your new DevOps implementation. However, this can’t be done in a silo. Once everyone is involved and engaged, showcasing your strategy to other parts of the business will be as celebratory as a ticker-tape parade for a championship-winning team. Take your success and show how you’re an innovative technology leader—not one who sits in the server room, but rather one who’s engaged with the business. One who proudly bears the title, “Disruptor.”

Smart monitoring and automation help business leaders see issues that concern them most, and should be your first priority when rolling out organizational changes. In addition to pinpointing issues within applications, these tools help predict future issues by identifying trends as they arise. Consumerization of IT has taken our world by storm. Business leaders have lost faith in IT, which needs to reinvent itself as a leader driving business, rather than a team of technicians responding to the crisis of the day. By implementing a valuable, new technological shift—one with all the right tools in place, a keen understanding of the business, and impactful solutions—you’ll be seen as a key partner and leader with innovations that disrupt the competition and make your business a success.

Learn more about how AppDynamics can help you succeed with your business transformation.

The Incredible Extensible Machine Agent

Our users tell us all the time: The AppDynamics platform is amazing right out of the box. But everybody has something special they want to do, whether it’s adding functionality, setting up a unique monitoring scenario, or something else entirely. That’s what makes AppDynamics’ emphasis on open architecture so important and useful. The functionality of the AppDynamics machine agent can be customized and extended to perform specific tasks to meet specific user needs, either through existing extensions from the AppDynamics Exchange or through user customizations.

It helps to understand what the machine agent is and how it works. The machine agent is a stand-alone Java application that can run alongside application agents or separately from them. This means monitoring can be extended to environments outside the realm of the application being monitored. It can be deployed to application servers, database servers, web servers — really anything running Linux, UNIX, Windows, or Mac.


The real elegance of the machine agent is its tremendous extensibility. For non-Windows environments, there are three ways to extend the machine agent: through a script, with Java, or by sending metrics to the agent’s HTTP listener. If you have a .NET environment, you can also add extra hardware metrics over and above these three methods.
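The third option, sending metrics to the HTTP listener, can be sketched in a few lines. The Python snippet below is a hedged illustration rather than official AppDynamics code: the default listener port (8293) and the /machineagent/metrics query format (name, value, type parameters) are assumptions based on common machine agent defaults, so verify them against your agent's configuration before relying on it.

```python
# Hypothetical sketch: report a custom metric through the machine agent's
# HTTP listener. The port and query format below are assumptions; check
# your agent's configuration for the actual endpoint.
from urllib.parse import urlencode
from urllib.request import urlopen

def build_metric_url(host, port, name, value, aggregator="AVERAGE"):
    """Build the GET URL the HTTP listener expects for one metric sample."""
    query = urlencode({"name": name, "value": value, "type": aggregator})
    return f"http://{host}:{port}/machineagent/metrics?{query}"

def report_metric(host, port, name, value):
    """Send the sample; the agent rolls it up into its metric tree."""
    with urlopen(build_metric_url(host, port, name, value)) as resp:
        return resp.status

if __name__ == "__main__":
    # Example: a custom queue-depth metric for a checkout service.
    print(build_metric_url("localhost", 8293,
                           "Custom Metrics|Checkout|Queue Depth", 42))
```

Because the listener accepts plain HTTP, any language or even a cron-driven script can feed custom metrics this way.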

Let’s look at a real-life example. Say I want to create an extension using cURL that reports the HTTP status of certain websites. My first step is to look for one in the AppDynamics Exchange, our library of all the extensions and integrations currently available. It’s also the place where you can request extensions you need or submit extensions you’ve built.

Sure enough, there’s one already available (community.appdynamics.com/t5/AppDynamics-eXchange/idbp/extensions) called Site Monitor, written by Kunal Gupta. I decided to use it, and followed these steps to create my HTTP status collection functionality.

1. Download the extension to the machine agent on a test machine.
2. Edit the Site Monitor configuration file (site-config.xml) to ping the sites that I wanted (in this case www.appdynamics.com). The sites can also be HTTPS sites if needed.
3. Restart the machine agent.

That’s it. It started pulling in the status code right away and, as a bonus, also the response time for requesting the status code of the URL that I wanted.


It’s great that I can now see the status code (200 in this case), but now I can truly use its power. I can quickly build dashboards displaying the information.


You can also hook the status code into custom health rules, which provide alerts when performance becomes unacceptable.
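The health-rule idea can be sketched outside the product, too. The following is a hypothetical, minimal evaluation function, not AppDynamics' actual health-rule API: it just compares a collected status-code metric against an expected value and reports a violation.

```python
# Hypothetical sketch of a health-rule evaluation: compare the collected
# HTTP status code against the expected value and report a violation.
# This mirrors the concept of custom health rules, not a real API.
def evaluate_health_rule(metric_value, expected=200):
    """Return (healthy, message) for a status-code health rule."""
    if metric_value == expected:
        return True, f"OK: status {metric_value}"
    return False, f"ALERT: expected {expected}, got {metric_value}"
```

In the product itself, the rule's violation state is what drives alerting policies; this sketch only shows the comparison at the core of such a rule.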


So there it is. In just a matter of minutes, the extension was up and running, giving me valuable data about the ongoing status of my application. If the extension I wanted didn’t exist, it would have been just as easy to use the cURL command directly (curl -sL -w "%{http_code}\n" www.appdynamics.com -o /dev/null).

Either way, the machine agent can be extended to support your specific needs and solve specific challenges. Check out the AppDynamics Exchange to see what kinds of extensions are already available, and experiment with the machine agent to see how easily you can expand its capabilities.

If you’d like to try AppDynamics check out our free trial and start monitoring your apps today!

The Intelligent Approach to Production Monitoring

We get a lot of questions about our analytics-driven Application Performance Management (APM) collection and analysis technology. Specifically, people want to know how we capture so much detailed information while maintaining such low overhead levels. The short answer is, our agents are intelligent and know when to capture every gory detail (the full call stack) and when to only collect the basics for every transaction. Using an analytics-driven approach, AppDynamics is able to provide the highest level of detail to solve performance issues during peak application traffic times.

AppDynamics, An Efficient Doctor

AppDynamics’ APM solution monitors, baselines and reports on the performance of every single transaction flowing through your application. However, unlike other APM solutions that got their start in development environments, ours was built for production, which requires a more agile approach to capturing transaction details.

I’d like to share a story that illustrates AppDynamics’ analytics-based methodology and compares it with many of our competitors’ “capture as much detail as possible whether there are problems or not” (aka, our agents are too old to have intelligence built in) approach.

You visit Dr. AppDynamics for your regular health checkups. She takes your vital signs, records weight, measures reflexes and compares every metric taken against known good baselines. When your statistics are close to the baselines the doctor sends you home and sees the next patient without delay. When your health vitals deviate too far from the pre-established baselines the smart doctor orders more relevant tests to diagnose your problem. This methodology minimizes the burden on the available resources and efficiently and effectively diagnoses any issues you have.

In contrast, you visit Dr. Legacy for your regular health checkups. She takes your vital signs, records weight, measures reflexes and immediately orders a battery of diagnostic tests even though you are perfectly healthy. She does this for every single patient she sees. The medical system is now overburdened with extra work that was never required in the first place. This burden slows down the entire system, so to keep things moving Dr. Legacy decides to reduce the number of diagnostic tests run on every single patient (even the ones with actual problems). Now the patients who have legitimate problems go undiagnosed in the waiting room at the very time they need the most attention. In addition, due to the large amount of diagnostic testing and data being generated, the cost of care is driven up needlessly and excessively.

Does Dr. Legacy’s methodology make any sense to you when better methods exist?

AppDynamics’ intelligent approach to collecting data and triggering diagnostics makes it easier to spot outliers and, because deep diagnostic data is collected only for the transactions that require this level of detail, there is less impact on system resources and very little monitoring overhead.

Monitoring 100% of Your Business Transactions All the Time

AppDynamics monitors every single business transaction (BT) that flows through your applications. There is no exception to this rule. We automatically learn and develop a dynamic baseline for end-to-end response time as well as the response time of every component along the transaction flow, and also for all critical business metrics within your application.

We score each transaction by comparing the actual response time to the self-learned baseline. When we determine that a BT has deviated too far from normal behavior (using a tunable algorithm), our agent knows to automatically collect full call stack details for your troubleshooting pleasure. This analytics-based methodology allows AppDynamics to detect and alert on problems right from the start so they can be fixed before they cause a major impact.
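To make the scoring idea concrete, here is an illustrative sketch only, not AppDynamics' actual algorithm: it maintains a self-learned baseline per transaction type (a running mean and standard deviation via Welford's online method) and flags any response time that deviates more than k standard deviations from the baseline, which is the moment a full call stack capture would be triggered. The warm-up length and k value are arbitrary choices for the example.

```python
# Illustrative sketch only; not AppDynamics' actual scoring algorithm.
# Maintain a self-learned baseline (running mean and standard deviation,
# via Welford's online method) and flag responses deviating more than
# k standard deviations; a flag would trigger full call stack collection.
import math

class TransactionBaseline:
    def __init__(self, k=3.0, warmup=30):
        self.k = k            # deviation threshold, in standard deviations
        self.warmup = warmup  # samples to observe before scoring begins
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0         # running sum of squared deviations

    def observe(self, response_ms):
        """Score one response time; return True when deep diagnostics
        (a full call stack capture) should be triggered."""
        outlier = False
        if self.n >= self.warmup:
            std = math.sqrt(self.m2 / (self.n - 1))
            outlier = abs(response_ms - self.mean) > self.k * std
        self.n += 1                      # Welford's online update
        delta = response_ms - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (response_ms - self.mean)
        return outlier
```

Feeding it a steady stream of ~100ms responses and then a 10x spike shows the behavior described above: normal traffic passes through cheaply, while the spike alone triggers deep capture.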

Of course, there are times when deep data capture of every transaction is advantageous—such as during development—and the AppDynamics APM solution has another intelligent feature to address this need. We’ve built a simple, one-click button to enable full data recording system-wide. Developer mode is ideal for pre-production environments when engineers are profiling and load testing the application. Developer mode will capture a transaction snapshot for every single request. In production this would be overkill and wasteful. It’s even smart enough to know when you’re done using it and will automatically shut off when it is unintentionally left on, so your system won’t get bogged down if transaction volume increases.

Who Looks at Production Call Stacks When There are No Problems?

One of the worst qualities about legacy APM solutions is the fact that they collect as much data as they can, all the time. Usually this originates from the APM tool starting as a profiling tool for developers that has been molded to work in production. While this methodology is fine for development environments (we support this with dev-mode as described above), it fails miserably in any high volume scenario like load testing and production. Why does it fail? I’m glad you asked 😉

Any halfway decent APM tool has built-in overhead limiters to keep itself from causing harm by introducing too much overhead into a running application. When you collect as much deep-dive data as possible, with no intelligent way of focusing your data collection, you induce the maximum allowed overhead basically all the time (assuming reasonable load). The problem is that high application load is exactly when your problems are most likely to surface—and also when legacy APM overhead is skyrocketing (due to massive amounts of code execution and deep collection being “always on”). So the overhead limiters kick in and reduce the amount of data being collected, or kill off data collection altogether. In plain English, this means legacy APM tools can’t tell good transactions from bad and will provide you with the least data at the time you need the most. Isn’t it funny how marketing and sales teams try to turn this methodology into the best thing ever?

I have personally used many different APM tools in production and I never needed to look at a full call stack when there was no problem. I was too busy getting my job accomplished to poke around in mostly meaningless data just for the fun of it.

Distributed Intelligence for Massive Scalability

All of the intelligent data collection mentioned above requires a very small amount of extra processing to determine when to go deep and what to save. This is a place where the implementation details really make a difference.

At AppDynamics, we put the smarts where they are best suited to be—at the agent level. It’s a simple paradigm shift that distributes the workload across your install base (where it’s not even noticed) rather than concentrating it at a single point. This important architectural design means that as the load on the application goes up, the load on the management server remains low.

Contrast this with legacy APM solutions: restricting whatever intelligence exists to the central monitoring server(s) drives up resource requirements, and therefore demands a monitoring infrastructure with more servers and greater levels of care and feeding.

Collecting, transmitting, storing, and analyzing large amounts of unneeded data comes with a high total cost of ownership (TCO). It takes a lot of people, servers, and storage to properly manage those legacy APM tools in an enterprise environment. Most APM vendors even want to sell you their expensive full-time consultancy services just to manage their complex solutions. Intelligent APM tools ease your burden instead of increasing it like legacy APM tools do.

All software tools go through transition periods where improvements are made and generational gaps are recognized. What was once cutting edge becomes hopelessly outdated unless you invest heavily in modernization. Hopefully this detailed look at APM methodologies helps you cut through the giant pile of sales and marketing propaganda that developers and IT ops folks are constantly exposed to. It’s important to understand what software vendors really do, but it’s most important to understand how they do it, as that will have a major impact on real-life usage.

Understanding Performance of PayPal as a Service (PPaaS)

In a previous post – Agile Performance Testing – Proactively Managing Performance – I discussed some of the challenges faced in managing a successful performance engineering practice in an Agile development model. Let’s continue with a real-world example, highlighting how AppDynamics simplifies the collection and comparison of Key Performance Indicators (KPIs) to give visibility into an Agile development team’s integration with PayPal as a Service (PPaaS).

Our dev team is tasked with building a new shopping cart and checkout capability for an online merchant. They have designed a simple Java Enterprise architecture with a web front-end, built on Apache TomEE, a set of mid-tier services, on JBoss AS 7, and have chosen to integrate with PayPal as the backend payment processor. With PayPal’s Mobile, REST and Classic SDKs, integrating secure payments into their app is a snap and our team knows this is a good choice.

However, the merchant has tight Service Level Agreements (SLAs) and it’s critical the team proactively analyze, and resolve, performance issues in pre-production as part of their Agile process. In order to prepare for meeting these SLAs, they plan to use AppDynamics as part of development and performance testing for end-to-end visibility, and to collect and compare KPIs across sprints.

The dev team is agile and continuously integrates into their QA test and performance environment. During one of the first sprints they created a basic checkout flow.


For this sprint they stubbed several of the service calls to PayPal, but coded the first authentication step: getting an OAuth access token, which is later used to validate payments.
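For context, a minimal sketch of that first authentication step. This is illustrative only, not the team's actual code: the sandbox URL reflects PayPal's REST client-credentials flow as documented at the time, and the client ID and secret are hypothetical placeholders. The function only builds the request pieces; a real integration would send them with an HTTP client.

```python
import base64

# Hypothetical credentials -- a real integration uses the REST app keys
# from the PayPal developer dashboard.
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"


def build_token_request(client_id, client_secret):
    """Build the pieces of a client-credentials OAuth token request.

    Assumes PayPal's sandbox token endpoint and HTTP Basic auth with the
    app's client ID and secret, per the REST API docs of the era.
    """
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return {
        "url": "https://api.sandbox.paypal.com/v1/oauth2/token",
        "headers": {
            "Authorization": f"Basic {creds}",
            "Accept": "application/json",
        },
        "body": "grant_type=client_credentials",
    }


request = build_token_request(CLIENT_ID, CLIENT_SECRET)
```

Posting that body to the URL with those headers would return a JSON payload containing the access token used on subsequent payment calls.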

Enabling AppDynamics on their application was trivial, and the dev team got immediate end-to-end visibility into their application flow, performance timings across all tiers, and the initial call to PayPal. Based on some initial performance testing, everything looked great!


NOTE: in our example AppDynamics is configured to identify backend HTTP Requests (REST Service Invocations) using the first 3 segments of the target URL. This is an easy change and the updated configuration is automatically pushed to the AppDynamics agent without any need to change config files, or restart the application.


In a later sprint, our dev team finished integrating the full payments process flow. They’re using PayPal’s SDK, and while it’s a seamless integration, they’re unclear about exactly which calls to PayPal are happening under the covers.

Because AppDynamics automatically discovers, maps, and scores all incoming transactions end-to-end, our dev team was able to get immediate and full visibility into two new REST invocations, authorization and payment.


The dynamic discovery of AppDynamics is extremely important in Agile, continuous integration, or continuous release models where code is constantly changing. Having to manually configure which methods to monitor is a burden that degrades a team’s efficiency.

Needing to understand performance across the two sprints, the team leverages AppDynamics’ Compare Releases functionality to quickly understand the difference between performance runs across the sprints.


The AppDynamics flow map visualizes the difference in transaction flow between the sprints, highlighting the additional REST calls required to fully process the payment. The KPI comparison also gives the dev team an easy way to quickly measure the differences in performance.


Performance changed, as expected, with the implementation of the full payment processing flow. During a performance test, AppDynamics automatically identifies abnormal transactions and takes diagnostics on them.


Transaction Snapshots capture source line of code call graphs, end-to-end across the Web and Service tiers. Drilling down across the call graphs, the dev team clearly identifies the payment service as the long running call.


AppDynamics provides full context on the REST invocation and highlights that the SDK was configured to talk to PayPal’s sandbox environment, explaining the occasionally high response times.

To recap, our Agile dev team leveraged AppDynamics to get deep end-to-end visibility across their pre-production application environment. AppDynamics’ release comparison provided the means to understand differences in the checkout flows across sprints, and the dynamic discovery, application mapping, and automatic detection allowed the team to quickly understand and quantify their interactions with PayPal. When transactions deviated from normal, AppDynamics automatically identified and captured the slowness to provide end-to-end, source-line-of-code root-cause analysis.

Take five minutes to get complete visibility into the performance of your production applications with AppDynamics today.

Agile Performance Testing – Proactively Managing Performance

Just in case you haven’t heard, Waterfall is out and Agile is in.  For organizations that thrive on innovation, successful Agile development and continuous deployment processes are paramount to reducing go-to-market time, fast-tracking product enhancements, and quickly resolving defects.

Executed successfully, with the right team in place, Agile practices should result in higher functional product quality.  Operating in small, focused teams that work well-defined sprints with clearly groomed stories is ideal for early QA involvement and parallel test planning and execution.

But how do you manage non-functional performance quality in an Agile model?  The reality is that traditional performance engineering and testing is often best performed over longer periods of time; workload characterization, capacity planning, script development, test user creation, test data development, multi-day soak tests, and more are not always easily adaptable to two-week, or shorter, sprints.  And the high velocity of development change often causes continuous, and sometimes large, ripples that disrupt a team’s ability to keep up with these activities; has anyone ever had a data model change break their test dataset?

Before joining AppDynamics I faced this exact scenario as the Lead Performance Engineer for PayPal’s Java Middleware team.  PayPal was undergoing an Agile transformation, and our small team of historically matrix-aligned specialty engineers was challenged to adapt.

Here are my best practices and lessons, learned sometimes the hard way, for adapting performance-engineering practices to an Agile development model:

  1. Fully integrate yourself into the Sprint team, immediately.  My first big success at PayPal was the day I had my desk moved to sit in the middle of the dev team.  I joined the water cooler talk, attended every standup, shot Nerf missiles across the room, and wrote and groomed stories as a core part of the scrum team.  Performance awareness, practices, and results organically increased because performance was a well-represented function within the team rather than an afterthought farmed out to a remote organization.
  2. Build multiple performance and stress test scenarios with distinct goals and execution schedules.  Plan for longer soak and stress tests as part of the release process, but have one or more per-sprint, and even nightly, performance tests that can be continually executed to proactively measure performance, and identify defects as they are introduced.  Consider it your mission to quantify the performance impact of a code change.
  3. Extend your Continuous Integration (CI) pipelines to include performance testing.  At PayPal, I built custom integrations between Jenkins and JMeter to automate test execution and report generation.  Our pipelines triggered automated nightly regressions on development branches within a well-understood platform where QA and development could parameterize a workload, kick off a performance test, and interpret a test report.  Unless you like working 18-hour days, I can’t overstate the importance of building integrations into tools that are already or easily adopted by the broader team.  If you’re using Jenkins, you might take a look at the Jenkins Performance Plugin.
  4. Define Key Performance Indicators (KPIs).  In an Agile model you should expect smaller-scoped tests, executed at a higher frequency.  It’s critical to have a set of KPIs the group understands, and buys into, so you can quickly look at a test and determine whether a) things look good, or b) something funky happened and additional investigation is needed. Some organizations have clearly defined non-functional criteria, or SLAs; many don’t. Be Agile with your KPIs, and refine them over time. Here are some of the KPIs we commonly evaluated:
  • Percentile Response-Time – 90th, 95th, 99th – Summary and Per-Transaction
  • Throughput – Summary and Per-Transaction
  • Garbage Collector (GC) Performance – % non-paused time, number of collections (major and minor), and collection times.
  • Heap Utilization – Young Generation and Tenured Space
  • Resource Pools – Connection Pools and Thread Pools

  5. Invest in best-of-breed tooling.  With higher-velocity code change and release schedules, it’s essential to have deep visibility into your performance environment.  Embrace tooling, but consider these factors impacted by Agile development:

  • Can your toolset automatically and continuously discover, map, and diagnose failures in a distributed system without asking you to configure which methods should be monitored?  In an Agile team the code base is constantly shifting.  If you have to configure method-level monitoring, you’ll spend significant time maintaining tooling rather than solving problems.
  • Can the solution be enabled out of the box under heavy load?  If the overhead of your tooling degrades performance under high load, it’s ineffective in a performance environment.  Don’t let your performance monitoring become your performance problem.

When a vendor recommends you reduce monitoring coverage to support load testing, consider a) the effectiveness of a tool that won’t provide 100% visibility, and b) how much time will be spent repeatedly reconfiguring monitoring for acceptable overhead.
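The KPI evaluation in practice 4 can be sketched in a few lines. This is an illustrative example, not PayPal's actual tooling: it assumes a JMeter JTL result file written in CSV format with JMeter's default `label` and `elapsed` (milliseconds) columns, and computes the per-transaction response-time percentiles listed above using the simple nearest-rank method.

```python
import csv
import io
import math


def percentile(values, pct):
    """Nearest-rank percentile: the value at rank ceil(n * pct / 100)."""
    ordered = sorted(values)
    rank = max(1, math.ceil(len(ordered) * pct / 100))
    return ordered[rank - 1]


def kpis_from_jtl(jtl_text):
    """Summarize per-transaction response-time KPIs from a CSV JTL string.

    Assumes a header row containing at least 'label' and 'elapsed',
    which are JMeter's CSV output defaults.
    """
    samples = {}
    for row in csv.DictReader(io.StringIO(jtl_text)):
        samples.setdefault(row["label"], []).append(int(row["elapsed"]))
    return {
        label: {
            "count": len(times),
            "p90": percentile(times, 90),
            "p95": percentile(times, 95),
            "p99": percentile(times, 99),
        }
        for label, times in samples.items()
    }
```

A nightly Jenkins job could run this over the JTL produced by each test and fail the build when a percentile regresses past an agreed threshold, which is exactly the kind of quick good/funky judgment the KPIs are for.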

Performance testing within an Agile organization challenges us as engineers to adapt to a high velocity of change.  Applying best practices gives us the opportunity to work as part of the development team to proactively identify and diagnose performance defects as code changes are introduced.  Because the fastest way to resolve a defect in production is to fix it before it gets there.


Quantifying the value of DevOps

In my experience, when you work in IT the executive team rarely focuses on your team until you experience a catastrophic failure; once you do, you are the center of attention until services are back to normal. It is easy to ignore the background work that IT teams spend most of their days on just to keep everything running smoothly. In this post I will discuss how to quantify the value of DevOps to organizations. The notion of DevOps is simple: developers working together with operations to get things done faster in an automated and repeatable way. If the process is working, the cycle looks like this:


DevOps consists of tools, processes, and the cultural change needed to apply both across an organization. In my experience, in large companies this is usually driven from the top down, and in smaller companies it comes organically from the bottom up.

When I started in IT I worked as a NOC engineer for a datacenter. Most of my days were spent helping colocation customers install or upgrade their servers. If one of our managed servers failed, it was my responsibility to fix it as fast as possible. Other days were spent as a consultant helping companies manage their applications. This was when most web applications were simple, consisting of just an app server and a database:


As I grew in my career I moved to the engineering side and worked on developing very large web applications. The applications I worked on were much more complex than what I was used to in my datacenter days. It is not just the architecture and code that are more complex; the operational overhead of managing such large infrastructure requires an evolved attitude and better tools.


When I built and deployed applications, we had to build our servers from the ground up. In the age of the cloud, you get to choose which problems you want to spend time solving. If you choose an infrastructure-as-a-service provider, you own not only your application and data but the middleware and operating system as well. If you pick a platform as a service, you only have to support your application and data. The traditional on-premises option, while giving you the most freedom, also carries the responsibility for managing the hardware, network, and power. Pick your battles wisely:


As an application owner on a large team you find out quickly how well a team works together. In the pre-DevOps days, the typical process to resolve an operational issue looked like this:


1) Support creates a ticket and assigns a relative priority
2) Operations begins to investigate and blames developers
3) Developers say it’s not possible since it works in development, and bounce the ticket back to operations
4) Operations escalates the issue to management until operations and developers are working side by side to find the root cause
5) Both argue that the issue isn’t as severe as stated, so they reprioritize
6) Management hears about the ticket and assigns it Severity or Priority 1
7) Operations and developers find the root cause together and fix the issue
8) Support closes the ticket

We often wasted time investigating support tickets that weren’t actual issues. We investigated them because we couldn’t rely on our health checks and monitoring tools to determine whether an issue was valid. Either the ticket couldn’t be reproduced or the problem was with a third party; either way, we had to invest the time to figure it out. Not once did we calculate how much the false positives cost the company in man-hours.
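That calculation is straightforward back-of-the-envelope arithmetic. Every figure below is an assumption chosen purely for illustration, not measured data:

```python
def false_positive_cost(tickets_per_month, false_positive_rate,
                        hours_per_ticket, people_per_ticket, hourly_rate):
    """Rough monthly cost of chasing invalid tickets, in dollars."""
    wasted_hours = (tickets_per_month * false_positive_rate
                    * hours_per_ticket * people_per_ticket)
    return wasted_hours * hourly_rate


# Hypothetical numbers: 40 tickets/month, 30% invalid, 4 hours each,
# 2 engineers per ticket, $75/hour fully loaded cost.
monthly_cost = false_positive_cost(40, 0.30, 4, 2, 75)  # 7200.0
```

Even with these modest assumptions the false positives burn over $7,000 a month, which is the kind of number that gets an executive's attention.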


With better application monitoring tools we were able to reduce the number of false positives and the money the company wasted on them.

How much revenue did the business lose?


Not once was I able to articulate how much money our team saved the company by adding tools and improving processes. In the age of DevOps there are a lot of tools in the DevOps toolchain.

By adopting infrastructure automation with tools like Chef, Puppet, and Ansible you can treat your infrastructure as code so that it is automated, versioned, testable, and most importantly repeatable. The next time a server goes down it takes seconds to spin up an identical instance. How much time have you saved the company by having a consistent way to manage configuration changes?

By adopting deployment automation with tools like Jenkins, Fabric, and Capistrano you can confidently and consistently deploy applications across your environments. How much time have you saved the company by reducing build and deployment issues?

By adopting log automation with tools such as Logstash, Splunk, SumoLogic, and Loggly, you can aggregate and index all of your logs across every service. How much time have you saved the company by retrieving the associated logs in a single click instead of manually hunting down the machine causing the problem?

By adopting application performance management tools like AppDynamics, you can easily get code-level visibility into production problems and understand exactly which nodes are causing them. How much time have you saved the company by adopting APM to decrease the mean time to resolution?

By adopting runbook automation through tools like AppDynamics, you can automate responses to common application problems and auto-scale up and down in the cloud. How much time have you saved the company by automatically fixing common application failures without even clicking a button?
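A minimal sketch of what such runbook automation can look like. This is a generic illustration, not any vendor's actual API: the health check and restart hooks are injected as plain callables, since a real implementation would wire them to a specific monitoring probe and orchestration system.

```python
import time


def auto_remediate(check_health, restart_service, retries=3, delay_seconds=0):
    """Minimal runbook: restart an unhealthy service and re-check.

    `check_health` returns True when the service is healthy;
    `restart_service` performs the remediation. Gives up and escalates
    after `retries` health checks.
    """
    for attempt in range(1, retries + 1):
        if check_health():
            return attempt  # healthy; report how many checks it took
        restart_service()
        time.sleep(delay_seconds)  # give the service a moment to come up
    raise RuntimeError("service still unhealthy; escalate to a human")
```

The design choice worth noting is the escalation path: automation handles the common, well-understood failure, and anything it cannot fix after a few attempts is handed back to people rather than retried forever.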

Understanding the value these tools and processes bring to your organization is straightforward:


DevOps = Automation & Collaboration = Time = Money 

When applying DevOps across your organization, the most valuable advice I can give is to automate everything and always plan for failure. A survey from RebelLabs/ZeroTurnaround shows that:

1)     DevOps teams spend more time improving things and less time fixing things
2)     DevOps teams recover from failures faster
3)     DevOps teams release apps more than twice as fast


How much does an outage cost in your company?

This post was inspired by a tech talk I have given in the past: https://speakerdeck.com/dustinwhittle/devops-pay-raise-devnexus




The New Generation of Enterprise Java: Designing for the Next Big Thing


There’s been a generational shift in how Java enterprise applications are created: they have been broken down from a monolithic architecture into multiple services, and they’re highly interconnected and distributed. How can Java developers and Operations teams adapt to these changes?

This keynote will discuss the 4 Big Things that Java professionals need to design for now:

  • Cloud: Most applications built will have some part of their services in the cloud
  • Big Data: With the advent of NoSQL, Hadoop, and distributed caches, how should we now approach the data layer?
  • Agile Development & Operations: Developers won’t just be responsible for the code, but how it’s deployed. How does that affect the DevOps relationships?
  • Failure is an option: Distributed systems don’t just invite failure but demand it, so how can failure become part of the initial design?

This talk will present recommended strategies and approaches for these new design imperatives.

You can watch the keynote here


Why I Joined The Leading APM Provider AppDynamics

A new year, a new iPhone and a new quarter. What else is new? How about a new company?

Last month I was fortunate enough to join a stellar marketing team at one of the fastest growing enterprise software startups in the Bay Area. The company, you ask? AppDynamics, and did I mention we’re also the leading next-generation Application Performance Management (APM) provider for modern architectures in distributed, cloud, virtualized, and on-premise environments? We exceeded our targets for 2011, achieving an astonishing 400% growth in bookings. Not too shabby for the new kid on the block in a competitive market already inundated with vendors. You have old-school APM tools from megavendors like CA, HP, and Compuware (formerly dynaTrace). Then you have the new-school breed, such as New Relic and AppDynamics. In fact, Gartner’s MQ lists over twenty vendors. So with such a crowded market, why did I even consider such a move?

Well there’s a laundry list of reasons, but here are the top ones that come to mind.

1. Business Innovation. This is another kind of BI, not just Business Intelligence. It’s really a breath of fresh air to be working with an organization that is not only obsessed with pumping out insanely great technology every few quarters or so, but also open to embracing innovative approaches to every discipline of the business, including creative marketing and sales strategies. Oftentimes enterprise software companies unabashedly attempt to cloak themselves in slideware, selling a “vision” or an enterprise solution poles apart from reality. Unfortunately, when it comes down to an actual evaluation, you end up attending a dozen meetings just to see an applicable demo and running a one-week to two-month proof of concept, followed by throwing millions of dollars at consulting and implementation services, which segues into my next point.

2. Ease-of-Use. This simple yet powerful concept has been repeatedly neglected or intentionally ignored by many enterprise software companies. Luckily, the Leaders of the New School such as Apple, Salesforce, Box, etc. (not Busta Rhymes’ group) have changed the way end users value an intuitive user interface and design. At AppDynamics, we’ve adopted a similar mindset. “Easy” is the new world order in this industry because the managers, engineers, and IT operations folks are encountering enough complexity as it is with modern architectures. The last thing they want is another tool that further complicates their lives and causes more frustration on the job. At the end of the day everyone is a consumer, the least common denominator, who wants software that helps demystify our lives and makes us successful at our jobs (unless you’re a sadist).

Software that is easy to install, implement, and use can have a tremendous impact on the bottom line of a business. Suppose you roll out a new system but end up spending a chunk of company change on implementation and training costs. What impact does that have on your productivity and, ultimately, your company’s bottom line? Here’s an example from Avon’s Q3 2011 earnings transcript:

“Despite extensive pre-implementation testing, we had greater than anticipated implementation challenges in the go-live. Significantly higher business complexity in this market contributed to a greater than expected level of disruption, as I said, when we went to the go-live environment.”

Many vendors make enterprise deployments akin to embarking on an IT version of manifest destiny. I’m sure you can think of a few applications in your own IT toolbox that fit the bill, where at some point you ended up asking yourself, “Why can’t this be as easy as [fill in the blank with some consumer app]?”

That was compelling enough for me to join AppDynamics. We truly understand the business significance of why software ought to be easy all around, especially in production. I’m not saying that the work designers and developers must do to achieve this “Easy” goal is easy in itself. I have deep respect for the folks in engineering who possess the talent and perseverance to code these applications, but that doesn’t excuse a vendor from selling you a dream and then leaving you stranded to implement a nightmare, all because there wasn’t enough emphasis on ease-of-use.

3. Application Performance. This one is near and dear to my heart and arguably the main reason I joined AppDynamics. It takes me back to the challenging days and sleepless nights I endured while working on a massive global PDM implementation at LG Electronics jointly with Dassault Systemes. The year was 2008. Skynet hadn’t become self-aware yet. App Man was just A Man in the throes and woes of IT operations, and halfway around the world in Seoul, Korea, I was juggling recurring performance issues on a weekly basis, with our PMO answering to the beck and call of the LGE CIO. The project’s launch date had been delayed due to various complications with the implementation (that’s a whole other story). Any idea what one of those might have entailed? If you guessed “performance”, congratulations! You’ve won! Download your free copy of AppDynamics Lite.

Every week new customizations were being released from R&D back in the States, PS in Korea, and SIs sitting on the other side of the room. You could call it Agile development’s nemesis: frAgile development. The dynamic nature of our Java-based environment only introduced more challenges for the performance team, who were heads-down trying to reverse engineer someone else’s code and refactor it using an APM tool (JenniferSoft) that just didn’t provide the full visibility we needed to comprehensively profile and diagnose application performance issues. In fact, one of the consultants on our team ended up creating his own profiler to expose these blind spots, but what we really needed was a next-generation APM tool that would visually map and connect the dots for us.

Then we ran into another stumbling block after we completed migrating legacy data to a new “production” environment. When the time came to retest the entire set of performance use cases in the new environment, we experienced all kinds of performance regressions. Since everyone had been collaborating so well with each other over the past two years, we all cheerfully marched forward without any finger-pointing about the root cause. OK, so it wasn’t that utopian. Fortunately, because of everyone’s undying commitment and personal sacrifices, the project went live successfully in mid-2010 with over 2,000 users visiting the system per day. In hindsight, we could have easily saved a month’s worth of effort had we used a better tool and eliminated the usual suspects.

From that experience I’ve come to appreciate and understand how business-critical managing application performance is for any company. Now I am on a mission to spread the word about AppDynamics and help companies manage rapidly evolving, distributed environments.

Buckle up 2012, we’re just getting started.