
Cloud Orchestration: 3 Best Practices for Organizing Cloud Automation

Cloud computing offers organizations a reliable, scalable and cost-effective way to run their critical business applications. And given the cloud’s many advantages, it’s no surprise that a recent Gartner study forecast the cloud services industry to grow at nearly three times the rate of overall IT services through 2022.

As many IT teams have learned, the journey to the cloud is never easy, although the right practices can help a lot. Cloud automation is a wide-ranging term for the tools and processes used to reduce the manual effort of provisioning and managing complex multicloud environments. Automation can simplify critical business processes in the cloud and enhance the customer experience at scale.

What is Cloud Orchestration?

An essential component of cloud automation, cloud orchestration involves the organization and coordination of automated tasks to develop a consolidated, efficient workflow. With organizations increasingly using a combination of cloud offerings, including public, private, hybrid, and multicloud services, the resulting ecosystem is enormously complex, with data and applications operating across multiple environments. To control these diverse workloads in an automated, orderly fashion, organizations need cloud orchestration tools to manage everything as a single workflow.

Cloud Orchestration vs. Cloud Automation

Cloud automation and orchestration help management teams improve IT processes by limiting repetitive tasks and human errors. By managing resources that move between public and private cloud environments, they allow admins to streamline processes and reduce tedious manual work, as well as enable faster deployment of applications.

Given this harmonious union, it’s easy to think of cloud automation and orchestration as essentially the same thing. How do they differ? While individual deployment and management tasks can be classified as cloud automation, the arrangement and coordination of these automated processes are core components of cloud orchestration.
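To make the distinction concrete, here’s a minimal Python sketch; the task and application names are invented for illustration. Each function stands in for an individual automated task (cloud automation), while orchestrate() coordinates those tasks into a single workflow (cloud orchestration).

```python
# Hypothetical illustration: each function below is a single automated task;
# orchestrate() arranges and coordinates them into one workflow.

def provision_instance(name: str) -> str:
    """Cloud automation: spin up a compute instance (stubbed out here)."""
    print(f"provisioning instance {name}")
    return name

def configure_network(instance: str) -> None:
    """Cloud automation: attach networking to the instance (stubbed out)."""
    print(f"configuring network for {instance}")

def deploy_application(instance: str, app: str) -> None:
    """Cloud automation: roll the application out to the instance (stubbed out)."""
    print(f"deploying {app} to {instance}")

def orchestrate(app: str) -> None:
    """Cloud orchestration: sequence the automated tasks as one workflow."""
    instance = provision_instance(f"{app}-vm")
    configure_network(instance)
    deploy_application(instance, app)

if __name__ == "__main__":
    orchestrate("billing-service")
```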

Major cloud providers such as Amazon AWS, Google Cloud Platform, IBM Cloud and Microsoft Azure, as well as other third-party vendors, offer cloud automation and cloud orchestration tools, each with specific attributes. These tools often utilize leading-edge technologies such as AIOps to free IT teams from having to manually manage mundane tasks, enabling IT to focus more on complex issues that require human insight. That said, cloud service orchestration and automation tools may still require oversight from automation engineers.

3 Best Practices for Organizing Cloud Automation

1) Focus on Domains

To develop an effective cloud orchestration and automation strategy, you must focus on domains. Virtualization and cloud computing create an interconnected mix of application components and resources, and it’s impossible to track and orchestrate them individually, TechTarget advises. A good solution is to group these resources into domains for better management. For example, every public cloud provider you use could become a separate resource domain; in your data center, there could be a domain for each type of virtualization you use, such as VMware or Docker. However, to make cloud orchestration and automation easier to manage, you’ll want to minimize the number of domains in your environment.
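As a rough illustration of the idea (the inventory below is entirely hypothetical), a domain can be as simple as a grouping key: every resource is tagged with the platform it runs on, and each platform becomes one domain that your orchestration tooling manages as a unit.

```python
# Hypothetical sketch: group a flat inventory of resources into domains,
# one domain per public cloud provider or virtualization type.
from collections import defaultdict

# (resource name, platform it runs on) -- invented examples
INVENTORY = [
    ("web-frontend", "aws"),
    ("image-cache", "aws"),
    ("reporting-db", "azure"),
    ("legacy-erp", "vmware"),
    ("ci-runners", "docker"),
]

def group_into_domains(inventory):
    """Each platform becomes a resource domain managed as a single unit."""
    domains = defaultdict(list)
    for resource, platform in inventory:
        domains[platform].append(resource)
    return dict(domains)

if __name__ == "__main__":
    for domain, resources in sorted(group_into_domains(INVENTORY).items()):
        print(f"{domain}: {', '.join(resources)}")
```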

2) Get to Know Cloud Automation and Orchestration

Choosing the right mix of cloud automation and orchestration tools can be challenging. These tools, also known as cloud management platforms (CMPs), typically fall into two categories: those from a public cloud provider (e.g., AWS) and those from a third-party vendor such as Cloudify or RightScale. According to Cabot Partners Group principal analyst Charlie Burns (formerly with Information Services Group), organizations should focus on core CMP functionality when evaluating cloud automation and orchestration capabilities.

3) Adopt FinOps Practices

A 2017 study by migration analytics firm TSO Logic (now part of Amazon AWS) found that most organizations overpay for cloud services—and that a whopping 35% of an average company’s cloud computing bill is wasted cost. A financial practice known as FinOps (Financial Operations) can help organizations adopt fiscally sound practices when migrating to and maintaining a cloud platform. FinOps tooling, typically delivered as business management software-as-a-service (SaaS), analyzes the costs of public cloud services and helps enterprises forecast, plan and budget future cloud spending. There’s even a non-profit trade association, the FinOps Foundation, that strives to codify and promote best practices and standards for cloud financial management.
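To put that 35% figure in perspective, here’s a back-of-the-envelope calculation. The monthly bill is an invented number; only the waste rate comes from the study cited above.

```python
# Hypothetical example: estimate wasted spend using the 35% figure above.
WASTE_RATE = 0.35              # share of the average cloud bill that is wasted
monthly_bill = 100_000         # hypothetical monthly cloud bill, in dollars

wasted_per_month = monthly_bill * WASTE_RATE
print(f"Wasted per month: ${wasted_per_month:,.0f}")        # $35,000
print(f"Wasted per year:  ${wasted_per_month * 12:,.0f}")   # $420,000
```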

Cloud Orchestration and Automation: Embrace the Journey

It’s not uncommon for businesses to struggle when starting their cloud migration journey, which often involves a steep learning curve around the intricacies of multicloud and hybrid cloud environments.

Cloud components and services are added and removed continuously. As our AppDynamics colleague Subarno Mukherjee writes, unless your business is cloud-based from birth, it’s unlikely your entire application portfolio will rely solely on cloud resources. Rather, cloud adoption and automation will be a gradual process, one where your organization moves some applications to the cloud but continues to run others on-premises for various reasons, such as security or regulatory requirements like GDPR.

A successful cloud strategy requires a monitoring solution that provides robust support for managing applications in dynamic cloud environments where volume, variety and velocity of your data multiply at an exponential rate. Traditional monitoring tools won’t cut it—they’ll leave you stranded in a sea of cloud-centric data without delivering key insights. Learn how AppDynamics can help you manage your cloud journey.


References:

CIO

TechTarget

AWS Insider

TechRadar

Network World

Why Kubelet TLS Bootstrap in Kubernetes 1.12 is a Very Big Deal

Kubelet TLS Bootstrap, an exciting and highly-anticipated feature in Kubernetes 1.12, is graduating to general availability. As you know, the Kubernetes orchestration system provides such key benefits as service discovery, load balancing, rolling restarts, and the ability to maintain container counts by replacing failed containers. And by using Kubernetes-compliant extensions, you can seamlessly enhance system functionality. This is similar to how Istio (with Kubernetes) provides added benefits such as robust tracing/monitoring, traffic management, and so on.

Until now, however, Kubernetes did not provide similar automation features for security best practices, such as mutually-authenticated TLS connections (mutual-TLS or mTLS). These connections enable developers to use simple certificate directives that restrict nodes to communicating only with predetermined services, all without writing a single line of additional code. Even though the use of TLS 1.2 for service-to-service communication is a known best practice, very few companies deploy their systems with mutual-TLS. This lack of adoption is due mostly to the difficulty of creating and managing a public key infrastructure (PKI). This is why the new TLS Bootstrap module in Kubernetes 1.12 is so exciting: It provides features for adding authentication and authorization to each service at the application level.

The Power of mTLS

Mutual-TLS mandates that both the client and the server authenticate themselves by exchanging identities (certificates). In Kubernetes, mTLS is made possible by provisioning a TLS certificate to each kubelet. The client and server use the TLS handshake protocol to negotiate and set up a secure, encrypted channel. As part of this negotiation, each party checks the validity of the other party’s certificate. Optionally, they can add further verification, such as authorization (the principle of least privilege). As a result, mTLS provides added security for your application and data: even if malicious software has taken over a container or host, it cannot connect to any service without providing a valid identity and authorization.
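Outside of Kubernetes, the same idea can be seen with nothing more than Python’s standard ssl module. The sketch below is illustrative only; the certificate and key file paths are placeholders, and it assumes both sides were issued certificates by the same CA.

```python
# Minimal mTLS sketch using Python's standard library (paths are placeholders).
import socket
import ssl

# Server side: present our certificate AND require one from the client.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
server_ctx.load_verify_locations(cafile="ca.crt")
server_ctx.verify_mode = ssl.CERT_REQUIRED   # this requirement is what makes the TLS *mutual*

# Client side: verify the server against the CA AND present our own certificate.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca.crt")
client_ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")

def call_service(host: str, port: int) -> None:
    """Open a connection; the handshake fails unless both identities check out."""
    with socket.create_connection((host, port)) as sock:
        with client_ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("negotiated", tls.version(), "with", tls.getpeercert()["subject"])
```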

In addition, the kubelet certificate rotation feature (currently in beta) provides an automated way to get a signed certificate from the cluster API server. The kubelet process accepts an argument, --rotate-certificates, which controls whether the kubelet will automatically request a new certificate as the current one nears expiration. The kube-controller-manager process accepts the argument --experimental-cluster-signing-duration, which controls the length of time each certificate will be in use.

When a kubelet starts up, it uses its initial certificate to connect to the Kubernetes API and issue a certificate signing request (CSR). Upon approval (which can be automated with a few checks), the controller manager signs a certificate valid for the period specified by the duration parameter, and the signed certificate is attached to the CSR object. The kubelet retrieves the signed certificate with an API call and uses it to connect to the Kubernetes API. As the current certificate nears expiration, the kubelet repeats the same process to obtain a new one.
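If you want to watch this flow on your own cluster, a read-only sketch along the following lines should work with the official Kubernetes Python client (pip install kubernetes). Exact field handling can vary between client versions, so treat it as a starting point rather than a finished tool.

```python
# Rough, read-only sketch: list certificate signing requests (CSRs) and show
# whether a signed certificate has been attached to each one yet.
import base64
from kubernetes import client, config

config.load_kube_config()                  # or load_incluster_config() inside a pod
certs_api = client.CertificatesV1Api()

for csr in certs_api.list_certificate_signing_request().items:
    name = csr.metadata.name
    conditions = [c.type for c in (csr.status.conditions or [])]
    issued = csr.status.certificate        # populated once the CSR is signed
    print(f"{name}: conditions={conditions}, signed={'yes' if issued else 'no'}")
    if issued:
        # The signed certificate is stored base64-encoded on the CSR status.
        pem = base64.b64decode(issued).decode()
        print(" ", pem.splitlines()[0])    # -----BEGIN CERTIFICATE-----
```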

Since this process is fully automated, certificates can be created with a very short expiry time. For example, if the expiration time is one hour, even if a malicious agent gets hold of the certificate, the compromised certificate will still expire in an hour.

Robust Security and the Strength of APM

Mutual-TLS and automated certificate rotation give organizations robust security without having to spend heavily on firewalls or intrusion-detection services. mTLS is also the first step towards eliminating the distinction between trusted and non-trusted connections. In this new paradigm, connections coming from inside the firewall or corporate network are treated exactly the same as those from the internet. Every client must identify itself and receive authorization to access a resource, regardless of the originating host’s location. This approach safeguards resources even if a host inside the corporate firewall is compromised.

AppDynamics fully supports mutually-authenticated TLS connections between its agents and the controller. Our agents running inside a container can communicate with the controller in much the same way as microservices connect to each other. In hybrid environments, where server authentication is available only for some agents and mutual authentication for others, it’s possible to set up and configure multiple HTTP listeners in GlassFish: one for server authentication only, another for both server and client authentication. The agent and controller connections can be configured to use the TLS 1.2 protocol as well.

See how AppDynamics can provide end-to-end, unified Kubernetes monitoring & visibility!


Self-Tuning Applications in the Cloud: It’s About Time!

In my previous blog, I wrote about the hard work needed to successfully migrate applications to the cloud. But why go through all that work to get to the cloud? To take advantage of its dynamic nature: the ability (and agility) to quickly scale applications. Your application’s load probably changes throughout the day, the week, and the year, and in the cloud your application can use more or fewer resources as that load changes. Just ask the cloud for as much computing capacity as you need at any given time; unlike in a traditional data center, the resources are available at the push of a button.

But that only works during the marketing video. Back in the real world, no one can find that magic button to push. Instead scaling in the cloud involves pushing many buttons, running many scripts, configuring various software, and then fixing whatever didn’t quite work. Oh, and of course even that is the easy part, compared to actually knowing when to scale, how much to scale and even what parts of your application to scale. And this repeats all day, every day, at least until everyone gets discouraged.