At the end of the twentieth century, in the wake of Deep Blue’s triumph over Garry Kasparov, it was popular for computer scientists to speculate about when human beings would begin to interact with artificial intelligence. It was generally believed that machines with true reasoning capabilities were decades away. In an interview with author Michio Kaku, published in Visions: How Science Will Revolutionize the 21st Century, AI expert and Carnegie Mellon professor Hans Moravec predicted that robots would be able to model the world and anticipate the consequences of different actions sometime between 2020 and 2030.
Twenty years later, we are no longer wondering about how artificial intelligence (AI) will first appear in our lives. It has arrived in the form of virtual assistants like Alexa and self-driving cars. But this can give a misleading impression of what we can expect from AI in the next few years. AI software is not going to evolve human-like reasoning capabilities anytime soon.
Indeed, most of what is described as AI is really machine-learning algorithms that act largely as detectors. These algorithms analyze massive amounts of data and learn to discriminate between normal and anomalous behavior. AI, where it exists, is similar to a decision-support system for reacting to behavior as it changes. But even in these early stages, machine learning and AI are changing the game for IT operations. In the next few years, the impact of machine learning and AI will be profound.
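The detector idea described above can be made concrete with a minimal sketch: a streaming check that flags a metric value as anomalous when it deviates sharply from its own recent history. This is an illustrative rolling z-score detector, not the algorithm any particular vendor uses; the window size, warm-up length, and threshold are invented for the example.

```python
from collections import deque
import math

class ZScoreDetector:
    """Flag values that deviate sharply from a rolling baseline.

    Illustrative only: real monitoring systems use far more sophisticated
    models, but the normal-vs-anomalous discrimination is the same idea.
    """

    def __init__(self, window=60, threshold=3.0):
        self.window = deque(maxlen=window)  # recent history of the metric
        self.threshold = threshold          # how many std-devs counts as anomalous

    def observe(self, value):
        """Return True if `value` is anomalous relative to recent history."""
        anomalous = False
        if len(self.window) >= 10:  # wait for a minimal baseline before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

detector = ZScoreDetector()
flags = [detector.observe(v)
         for v in [100, 101, 99, 100, 102, 98, 100, 101, 99, 100, 500]]
# Only the spike to 500 is flagged as anomalous.
```

In practice a system like this runs one such baseline per metric, which is what makes the scale problem described below so acute.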
The problem enterprises are facing is that computing environments have simply grown too large and too complex for human beings to monitor alone. To effectively monitor enterprise systems, IT must track millions of metrics per second. This is not a challenge that can be met by putting another screen on the wall of the network operations center. There are already too many screens, and just contemplating the number of screens that would be required is overwhelming. Even more daunting is figuring out the five or ten metrics that matter the most out of five or ten million as every new millisecond brings the system to a new dynamic state.
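One way to picture "the five or ten metrics that matter most" is a ranking step: once every metric carries an anomaly score (its deviation from its own baseline), only the top handful need a human's attention. The metric names and scores below are invented for illustration.

```python
import heapq

def top_anomalous_metrics(scores, k=5):
    """Return the k metric names with the highest anomaly scores."""
    return [name for name, _ in
            heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])]

# Hypothetical anomaly scores for a handful of metrics; a real system
# would hold millions of these, refreshed continuously.
scores = {
    "db.disk_free_pct":     9.7,   # far outside its baseline
    "web.latency_p99":      6.2,
    "web.requests_per_sec": 0.4,   # near normal
    "cache.hit_ratio":      0.9,
    "queue.depth":          4.8,
}

top_anomalous_metrics(scores, k=3)
# → ['db.disk_free_pct', 'web.latency_p99', 'queue.depth']
```

The hard part, of course, is not the ranking but producing trustworthy scores for millions of metrics per second while the system keeps moving.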
The company I founded, Perspica, which was acquired last year by AppDynamics/Cisco, solved this problem for our customers by applying machine learning and AI to massive amounts of streaming telemetry data generated by applications and IT infrastructure. What Perspica did was surface all the relevant metrics and then use those metrics to accelerate root cause analysis and reduce the mean time to repair. But Perspica’s ability to grow beyond that was limited by the data that we had access to. In fact, everyone involved in machine learning and AI at that time faced the same limitation. We lacked a source of truth on which to train our algorithms to go beyond what they had already achieved.
But this limitation is rapidly being overcome. Increasingly, data scientists are gaining access to new sets of what we call labelled data—sets of numbers or strings of text that a computer can understand as a true representation of something else. Data scientists who work with IT data, in particular, are finding that enough labelled data exists that we can realistically begin talking about automating large parts of IT in the next two or three years. And that is only the beginning.
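To make "labelled data" concrete: each record pairs a snapshot of metrics with a label that a human operator or ticketing system attached to it, and even a simple learner can extract a decision rule from such pairs. The records, labels, and the one-feature threshold learner below are invented for illustration, not a description of any production pipeline.

```python
# Each record pairs a metric snapshot with a ground-truth label
# (hypothetical values, the kind a resolved incident ticket might yield).
labelled_data = [
    ({"cpu_pct": 35, "disk_free_gb": 120, "error_rate": 0.01}, "healthy"),
    ({"cpu_pct": 38, "disk_free_gb": 115, "error_rate": 0.02}, "healthy"),
    ({"cpu_pct": 92, "disk_free_gb": 2,   "error_rate": 0.31}, "disk_space_incident"),
    ({"cpu_pct": 88, "disk_free_gb": 1,   "error_rate": 0.40}, "disk_space_incident"),
]

def learn_threshold(data, feature):
    """Learn a trivial split point on one feature: midway between the
    mean of the healthy class and the mean of the incident class."""
    healthy = [rec[feature] for rec, label in data if label == "healthy"]
    incident = [rec[feature] for rec, label in data if label != "healthy"]
    return (sum(healthy) / len(healthy) + sum(incident) / len(incident)) / 2

threshold = learn_threshold(labelled_data, "disk_free_gb")
# A new snapshot with disk_free_gb below this value would be
# classified as a likely disk-space incident.
```

Without the labels, the algorithm has no source of truth to learn from; with them, detection can start to become diagnosis.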
In the future, every enterprise is going to have some combination of machine learning and AI to monitor its computing environments and, equally important, to understand how changes to those environments affect business goals. As these systems are deployed, they will become smarter and more sophisticated. Every application, every server, and every port on that server will have its own unique AI model, which means if you have 50 applications running on 10,000 servers you will need to train 500,000 models. This is not something that is going to be created overnight. But once these models are put in place, self-healing systems will become standard.
We’ll see AI playing a role in everyday IT and business events. For example, imagine a large airline that is planning on holding a worldwide promotion. The airline’s IT department rolls out new application code as a canary deployment. But the monitoring system soon reveals the new release is performing worse than the old code. While the business owner and IT staff are realizing that the code push has failed, the airline’s AI system is determining the root cause is a disk space issue and taking steps to address the problem.
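The canary check in the airline scenario reduces to a simple comparison: route a small slice of traffic to the new code, compare its error rate against the old release, and roll back if it is significantly worse. The counts and the 1.5x tolerance below are illustrative assumptions, not a real deployment policy.

```python
def canary_verdict(baseline_errors, baseline_requests,
                   canary_errors, canary_requests, tolerance=1.5):
    """Return 'promote' or 'rollback' from raw error/request counts.

    Rolls back when the canary's error rate exceeds the baseline's
    by more than the given tolerance factor (assumed value).
    """
    baseline_rate = baseline_errors / baseline_requests
    canary_rate = canary_errors / canary_requests
    return "rollback" if canary_rate > tolerance * baseline_rate else "promote"

# Hypothetical numbers: the old code serves most traffic,
# the new release gets a small canary slice.
canary_verdict(baseline_errors=40, baseline_requests=100_000,
               canary_errors=90, canary_requests=10_000)
# → 'rollback'  (0.9% canary error rate vs 0.04% baseline)
```

The interesting part in the scenario above is not this comparison, which is routine today, but the next step: the AI system tracing the regression to a disk-space issue and remediating it on its own.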
For many years, Perspica and others were doing detection. Today, as we broaden the libraries, increase the sets of problems that have solutions, and bundle those solutions together, we’ll be able to start doing remediation. Moravec, it seems, had the timeline right.
What will happen in the next twenty years? The media sometimes promotes “fear of AI.” But I see AI making business more profitable and people more productive. It will improve service quality, reliability, and availability not only in IT but across all industries. And in this way, AI will not only have profound implications for IT. It is also bound to improve the human condition.