The Gartner Data Center, Infrastructure & Operations Management Conference was held in Las Vegas earlier this month, attracting more than 3,500 attendees and 113 sponsors, including AppDynamics.
We had a lot of users stop by, and I spent at least 8-10 hours at the booth talking with analysts, users, and existing customers. The conversations varied widely, ranging from people who didn’t know what APM was to large, happy AppDynamics customers. There was a lot of discussion about the monitoring market, APM, analytics, and other ways to leverage APM data for business visibility.
The Gartner keynotes reiterated the messages of this year’s Symposium, and the conference chairs, Dave Russell and Mike Chuba, presented the CIO survey data. There was an increased focus on bimodal for infrastructure and operations (I&O). To most, bimodal means agile methodologies and multidisciplinary teams. The implications for people, process, technology, and culture were well discussed, but the talk avoided the mess that bimodal often makes; Gartner does have other good research on the issues with implementing bimodal. Having two organizations that are often warring factions is not healthy for the culture or the business. Ray Paquet also presented specifically on bimodal IT, addressing the digital transformation that is driving the bimodal strategy. Revenue from digital business is gaining momentum, and Gartner predicts this growth will accelerate.
IT, Meet Finance
When I was an end user buying technology, I never really considered the viability of vendors. I was buying software from both small and large companies and never really evaluated them beyond a cursory look. Gary Spivak, an analyst at Gartner, came from being a Wall Street analyst into covering vendors in the ITOM space. He’s done a lot of interesting research, and has taught many analysts (including me) how to better judge the viability of vendors to deliver what end users expect. His session, “What I&O Leaders Need to Know When Key Vendors Face Investor Pressures for Change,” was quite enlightening, especially considering all of the changes we’ve seen in the market — but it’s even more critical given the pending mega-acquisition of EMC by Dell. According to Spivak, the investor’s goal is to make money and profits; the end user’s goal is to buy products from companies that can innovate, provide high-quality support, and deliver a good product. The reality, at least in my time buying software, is that large vendors offering legacy solutions rarely deliver on these expectations. The combination of activist investors and private equity has been a great recipe for making lots of money, but it has not been good for end users. Gary also predicts that “By 2020, at least eight of the Top 12 publicly traded ITOM vendors will respond to activist investors to sell all or parts of their businesses, up from two today.” His advice is to review the ratings of vendors you are buying from, take action if they are acquired by private equity, and do not accept the answer of “business as usual,” because it almost never is.
Dennis Smith presented a great session on “How Containers and Cloud Management Can Be Synergistically Deployed.” He covered each of these technologies and how one could, in theory, define and deploy a layer of API interoperability.
Colin Fletcher presented “Work Smarter, Not Harder by Digitalizing Operations With Analytics,” in which he showed how far analytics have come in a short amount of time. Image recognition has become as good as humans in the last five years. Within I&O, we still do a lot of manual rule writing, especially in monitoring. Some companies have put automated baselining in place, as we have in the core of AppDynamics, but many users still rely on static thresholds. Rule writing is tedious, error prone, and does not scale. Gartner estimates that IT Operations Analytics (ITOA) spend was $1.7B in 2014 and was expected to grow 70 percent in 2015. This is clearly an area of innovation and investment, something we are seeing with AppDynamics analytics. The bulk of this spend, however, goes toward tools used by humans doing the analysis, rather than toward intelligent machine learning technologies that do the analysis themselves. More on that later.
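To illustrate the difference between static thresholds and automated baselining, here is a minimal sketch (not AppDynamics’ actual algorithm, which is proprietary) of one common baselining approach: flag a metric value as anomalous when it falls outside a few standard deviations of a trailing window, so the “threshold” adapts to the data instead of being hand-written.

```python
from collections import deque
from statistics import mean, stdev

def baseline_anomalies(series, window=20, k=3.0):
    """Flag points outside mean +/- k * stddev of a trailing window.

    Unlike a static threshold (e.g. "alert if latency > 200 ms"),
    the baseline here is recomputed from recent history, so no
    manual rule writing is needed. Returns (index, value) pairs.
    """
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(series):
        if len(history) == window:  # wait until the baseline is warm
            mu = mean(history)
            sigma = stdev(history)
            if sigma > 0 and abs(value - mu) > k * sigma:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# Example: steady latency around 100-104 ms with one 500 ms spike.
latencies = [100 + (i % 5) for i in range(40)] + [500] + [100 + (i % 5) for i in range(10)]
print(baseline_anomalies(latencies))  # -> [(40, 500)]
```

A static threshold set at, say, 150 ms would need to be re-tuned for every service; the rolling baseline above adapts automatically, which is the scaling advantage Fletcher was pointing at.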
Application Performance Monitoring
Finally, the sole APM-specific session was presented by analyst Cameron Haight, “Rethinking APM in a Digital Business Era.” Cameron took a unique approach to this presentation, drawing all of the graphics himself. The angle of the presentation was also quite progressive: the new requirements introduced by digital business models, and how APM is changing to meet business demands. Digital businesses leverage new capabilities in technology — for example, the Internet of Things — which in turn creates exponential increases in transaction volume and data velocity. These apps are built upon new architectures, including microservices, running on platforms such as AWS Lambda or other PaaS solutions, where the container is the new infrastructure. The net result for those building and supporting apps is that microservices are the new norm, and many more languages will be introduced into stacks, along with new data platforms. The velocity of changes, and the ability to change components at different speeds, will be a business requirement. The result is that a lot of complexity is being introduced, which will be a major challenge. Cameron stated that APM vendors must rethink the legacy models they currently use to price solutions, so that monitoring costs align with these new architectures. To deal with these changes, user interfaces must not only allow exploration but also provide new ways of visualizing data. Finally, advanced analytics must allow tools to learn your patterns and workflows and predict what you are likely to do next.
Cameron then explained that “the purpose of APM is digital business insight, not numbers,” meaning APM tools must do a better job of providing the analysis, rather than leaving the human to draw a conclusion from the data. APM is not optional and should be used across the lifecycle.
Cameron’s last point was that APM should be provided internally as a service, which requires changes within the organization. He presented organizational alignments similar to those Adrian Cockcroft has put together for platform monitoring teams.
Selecting APM technologies that can handle everything in these new stacks is a challenge. This is a primary reason why legacy APM providers are no longer an option for most organizations, and why newer providers that can support these architectures, such as AppDynamics, are most often selected.
It was interesting to see the crowds at the conference. Clearly, there is a major shift happening where people care less about the hardware and more about the software. The sessions on servers, physical storage, and converged infrastructure were far more lightly attended than those on public cloud, containers, and other software-driven infrastructure technologies. This is precisely the reason why infrastructure monitoring and visibility must evolve and change significantly from the physical world these tools previously managed.