A few years ago at the DevNexus developers conference in Atlanta, I was chatting with a few former colleagues over dinner. As an avid fan of the HBO show Silicon Valley, as well as a technologist impressed with the rise of serverless technologies, I jokingly said we should “build the Box” (i.e., a serverless appliance not unlike the show’s Box by Pied Piper). But one ex-colleague said I was really late to the game. I gave him a confused look and he said, “Serverless isn’t new. Mainframes have been around since the 1950s and they’re certainly not a server. You could say they’re serverless, though.”
A debatable point, perhaps, but one thing is certain: Our technology journey has evolved far from the mainframes of the 1970s. One could argue that 2001 marked the introduction of the “modern” mainframe operating system—the IBM z/OS. That was nearly two decades ago, and it wouldn’t be surprising if today’s millennial software engineer knows far more about serverless than big iron.
For most of my academic and professional career, Java deployed on a UNIX-like operating system was the world I knew. But serverless is rapidly becoming the norm for modern applications. What’s interesting is that there are a lot of parallels between workloads that run on the mainframe and those that run on a serverless infrastructure like AWS Lambda.
Kubernetes Is King, So What Is This Mainframe Nonsense?
Big iron may be old school, but it remains a major force in the enterprise. As recently as 2017, approximately $3 trillion in daily commerce was flowing through COBOL systems, Reuters reports. My first interaction with a mainframe came fresh out of college. As part of a consulting team performing integration testing, I would receive the occasional error from a mysterious (to me) CICS gateway. (In case you’re wondering, the Customer Information Control System, or CICS, is the primary way applications interact with an IBM z/OS or z/VSE mainframe.)
The buzz these days, of course, centers on Kubernetes and other Cloud Native Computing Foundation (CNCF) projects. And with the big push into Cloud Native and Hybrid Cloud, a lot of modern infrastructure is designed to scale out. A typical mainframe, by comparison, is built to scale up.
A quick refresher on scale up vs. scale out: To scale up is to add more resources to a node or, in modern parlance, migrate to a larger cloud instance. To scale out is to add more nodes to the pool or, in modern parlance, spread the workload across additional cloud instances. These scaling paradigms certainly represent two different eras.
New Tools, Similar Workloads
The mainframe’s core attributes are reliability and scalability. Big iron measures uptime in years and can scale up by adding computing resources to its sizable frame. Today, we develop workloads to be widely distributed, adding new layers of complexity. The good news is that we have orchestrators and resource managers like Kubernetes, YARN, and Mesos to act as a facade over the underlying nodes, federating the workload out to them. In the heyday of the mainframe, these tools weren’t available, even though mainframe workloads can be quite similar to their serverless counterparts.
Trigger, Work, Output
The three core activities of a serverless implementation are to trigger the function, execute it, and produce output. These actions are similar to mainframe activity involving batch and interactive (OLTP) workloads, as the mainframe is quite good at running code or a function over many iterations.
Similarly, it takes less time to deploy and execute a Lambda function than it does using traditional server-based, virtual machine, or even container-based approaches; as complex as big iron can seem, the specific target runtime for a function on a mainframe is very similar to that of a serverless implementation.
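To make that trigger, work, output cycle concrete, here is a minimal sketch of what such a function looks like in Java. The class name and the plain-string payload are illustrative assumptions; the RequestHandler interface and Context object come from AWS’s standard aws-lambda-java-core library.

```java
// A minimal AWS Lambda handler in Java illustrating the trigger/work/output
// cycle. BatchStepHandler is a hypothetical name; RequestHandler and Context
// are from the aws-lambda-java-core library.
package example;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class BatchStepHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String input, Context context) {
        // Trigger: Lambda invokes this method when an event arrives.
        context.getLogger().log("Triggered with input: " + input);

        // Work: execute the function body, much like one iteration
        // of a batch step processing a record.
        String result = input == null ? "" : input.trim().toUpperCase();

        // Output: the return value is handed back to the caller or next stage.
        return result;
    }
}
```

An equivalent mainframe batch step would perform the same read-process-write cycle, just scheduled through JCL rather than fired by an event source.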
But Big Iron or Serverless Can Be a Black Box
If you’re a developer who submits code to execute externally, whether on a mainframe or with a serverless provider like AWS Lambda, it can feel like you’re working with a black box that masks its inner workings. In both cases you’re billed for duration, which for Lambda users runs from the moment your code begins executing until it returns or terminates. But it’s hard to manage duration if you’re not properly profiling the function.
When designing a traditional Java workload, for example, you can use tools that provide direct access to the Java virtual machine, or even the Java container, to gain deep insight into what’s going on. But that level of insight goes away with a Java Lambda because the serverless instance is typically short-lived. (AWS is addressing this with its X-Ray distributed tracing service, but not all serverless implementations have access to such tools.)
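Absent full tracing support, one lightweight option is to measure duration from inside the handler itself. Below is a minimal sketch assuming a plain Java handler; the class and method names are hypothetical, but getRemainingTimeInMillis() is part of the standard Lambda Context API.

```java
// A rough way to surface duration from inside a Java Lambda when deeper
// profiling isn't available. TimedHandler and doWork are illustrative names.
package example;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class TimedHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String input, Context context) {
        long start = System.nanoTime();
        try {
            return doWork(input); // the billable work happens here
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // Log elapsed time alongside the time budget Lambda still allows,
            // so the logs show how close each invocation runs to its timeout.
            context.getLogger().log("elapsed=" + elapsedMs + "ms remaining="
                    + context.getRemainingTimeInMillis() + "ms");
        }
    }

    private String doWork(String input) {
        // Placeholder for the actual function body.
        return input;
    }
}
```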
And the mainframe? With an IBM z/OS workload, you can monitor a running job and view its output with the System Display and Search Facility (SDSF), which is not unlike validating that a serverless function has run and that its output has met expectations.
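On the serverless side, the closest everyday analogue to checking job output in SDSF is invoking the function and validating its response programmatically. Here’s a minimal sketch using the AWS SDK for Java v2; the function name "my-function" is hypothetical.

```java
// Validate that a Lambda function ran and returned the expected output,
// roughly the serverless analogue of checking job output in SDSF.
// "my-function" is a hypothetical name; LambdaClient is from the
// AWS SDK for Java v2 (software.amazon.awssdk:lambda).
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.InvokeRequest;
import software.amazon.awssdk.services.lambda.model.InvokeResponse;

public class InvocationCheck {
    public static void main(String[] args) {
        try (LambdaClient lambda = LambdaClient.create()) {
            InvokeRequest request = InvokeRequest.builder()
                    .functionName("my-function")
                    .payload(SdkBytes.fromUtf8String("\"hello\""))
                    .build();

            InvokeResponse response = lambda.invoke(request);

            // A 200 status with no functionError means the run completed;
            // the payload carries the function's output for validation.
            boolean ran = response.statusCode() == 200
                    && response.functionError() == null;
            System.out.println("ran=" + ran + " output="
                    + response.payload().asUtf8String());
        }
    }
}
```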
These big iron and serverless scenarios, unfortunately, can lead to inefficient execution and higher costs, due either to the mainframe consuming more logical partitions (LPARs) or to more serverless instances and longer durations being billed.
Modern Workload with a Modern Partner
More enterprises are embracing the strangler pattern for brownfield development, incorporating mainframe, serverless, and a host of other architectures. But even with workloads and skill sets shifting from mainframe to serverless, a lack of insight—the black box—remains an issue. By partnering with an APM vendor like AppDynamics, you can gain insights across your entire end-to-end environment, including mainframe and serverless implementations.