Deploying AppDynamics Agents to OpenShift Using Init Containers

There are several ways to instrument an application on OpenShift with an AppDynamics application agent. The most straightforward way is to embed the agent into the main application image. (For more on this topic, read my blog Monitoring Kubernetes and OpenShift with AppDynamics.)

Let's consider a Node.js app. All you need to do is add a require reference to the agent libraries and pass the necessary information about the controller. The reference itself becomes part of the app and will be embedded in the image. The list of variables the agent needs to communicate with the controller (e.g., controller host name, app/tier name, license key) can be embedded, though it is best practice to pass them into the app on initialization as configurable environment variables.

In the world of Kubernetes (K8s) and OpenShift, this task is accomplished with config maps and secrets. Config maps are reusable key-value stores that can be made accessible to one or more applications. Secrets are very similar to config maps, with the additional capability of obfuscating key values. When you create a secret, K8s automatically encodes the value of the key as a base64 string. The actual value is no longer visible at a glance, so you are protected from people looking over your shoulder (note that base64 is an encoding, not encryption). When the key is requested by the app, Kubernetes automatically decodes the value. Secrets can be used to store any sensitive data such as license keys, passwords, and so on. In our example below, we use a secret to store the license key.
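Because a secret's value is just base64-encoded, you can reproduce the round trip yourself. A quick sketch with a made-up license key (any real key would be handled the same way):

```shell
# Encode the value, as K8s does when the secret is created
echo -n 'appd-license-123' | base64
# -> YXBwZC1saWNlbnNlLTEyMw==

# Decode it, as K8s does when the app reads the key
echo -n 'YXBwZC1saWNlbnNlLTEyMw==' | base64 -d
# -> appd-license-123
```

With the oc CLI, a command along the lines of oc new secret generic is not needed; `oc create secret generic appd-secret --from-literal=appd-key=<license key>` performs the encoding for you.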

Here is an example of AppD instrumentation where the agent is embedded, and the configurable values are passed as environment variables by means of a configMap, a secret and the pod spec.

var appDobj = {
    controllerHostName: process.env['CONTROLLER_HOST'],
    controllerPort: process.env['CONTROLLER_PORT'],
    controllerSslEnabled: true,
    accountName: process.env['ACCOUNT_NAME'],
    accountAccessKey: process.env['ACCOUNT_ACCESS_KEY'],
    applicationName: process.env['APPLICATION_NAME'],
    tierName: process.env['TIER_NAME'],
    nodeName: 'process'
};
require("appdynamics").profile(appDobj);

Pod Spec

  env:
    - name: TIER_NAME
      value: MyAppTier
    - name: ACCOUNT_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          key: appd-key
          name: appd-secret
  envFrom:
    - configMapRef:
        name: controller-config

A ConfigMap with AppD variables.

AppD license key stored as secret.
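As a sketch, the ConfigMap and secret referenced in these examples might be defined as follows (the object names match the snippets; the values are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: controller-config
data:
  CONTROLLER_HOST: mycontroller.saas.appdynamics.com
  CONTROLLER_PORT: "443"
  ACCOUNT_NAME: my-account-name
  APPLICATION_NAME: MyApp
---
apiVersion: v1
kind: Secret
metadata:
  name: appd-secret
type: Opaque
data:
  appd-key: <base64-encoded license key>
```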

The Init Container Route: Best Practice

The straightforward way is not always the best. Application developers may want to avoid embedding a "foreign object" into the app images for a number of good reasons—for example, image size, granularity of testing, or encapsulation. Being developers ourselves, we respect that and offer an alternative, less intrusive way of instrumentation: the Kubernetes way.

An init container is a design feature in Kubernetes that allows decoupling of app logic from any type of initialization routine, such as monitoring, in our case. While the main app container lives for the entire duration of the pod, the lifespan of the init container is much shorter. The init container does the required prep work before orchestration of the main container begins. Once the initialization is complete, the init container exits and the main container is started. This way the init container does not run in parallel to the main container as, for example, a sidecar container would. However, like a sidecar container, the init container, while still active, has access to the ephemeral storage of the pod.

We use this ability to share storage between the init container and the main container to inject the AppDynamics agent into the app. Our init container image, in its simplest form, can be described with this Dockerfile:

FROM openjdk:8-jdk-alpine
RUN apk add --no-cache bash gawk sed grep bc coreutils
RUN mkdir -p /sharedFiles/AppServerAgent
# The archive name below is illustrative; the original omits it.
# Use the agent archive you downloaded from AppDynamics.
ADD AppServerAgent.zip /sharedFiles/
RUN unzip /sharedFiles/AppServerAgent.zip -d /sharedFiles/AppServerAgent/
CMD ["tail", "-f", "/dev/null"]

The above example assumes you have already downloaded the archive with the AppDynamics app agent binaries locally. When the image is built, the binaries are unzipped into a new directory. To the pod spec, we then add a directive that copies the directory with the agent binaries to a shared volume on the pod:

      initContainers:
        - name: agent-repo
          image: agent-repo:x.x.x
          imagePullPolicy: IfNotPresent
          command: ["cp", "-r", "/sharedFiles/AppServerAgent", "/mountPath/AppServerAgent"]
          volumeMounts:
            - mountPath: /mountPath
              name: shared-files
      volumes:
        - name: shared-files
          emptyDir: {}
      serviceAccountName: my-account

After the init container exits, the AppDynamics agent binaries are waiting on the shared volume of the pod, ready to be picked up by the application.

Let's assume we are deploying a Java app, one normally initialized via a startup script that calls the java command with Java options. The script may look like this:

JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.tierName=$TIER_NAME"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.agent.reuse.nodeName.prefix=$TIER_NAME"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.controller.hostName=$CONTROLLER_HOST -Dappdynamics.controller.port=$CONTROLLER_PORT -Dappdynamics.controller.ssl.enabled=$CONTROLLER_SSL_ENABLED"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.accountName=$ACCOUNT_NAME -Dappdynamics.agent.accountAccessKey=$ACCOUNT_ACCESS_KEY -Dappdynamics.agent.applicationName=$APPLICATION_NAME"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.socket.collection.bci.enable=true"
JAVA_OPTS="$JAVA_OPTS -Xms64m -Xmx512m -XX:MaxPermSize=256m"

java $JAVA_OPTS -jar myapp.jar

The script is embedded into the image and invoked via Docker's ENTRYPOINT directive when the container starts.

FROM openjdk:8-jdk-alpine
# The script name is illustrative; the original omits it
ADD startup.sh /usr/src/startup.sh
RUN chmod +x /usr/src/startup.sh
ADD myapp.jar /usr/src/myapp.jar
ENTRYPOINT ["/bin/sh", "/usr/src/startup.sh"]

To make the consumption of the startup script more flexible and Kubernetes-friendly, we can trim it down to this:

# a more flexible startup script
java $JAVA_OPTS -jar myapp.jar

And declare all the necessary Java options in the spec as a single environmental variable.

        - name: my-app
          image: my-app-image:x.x.x
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          envFrom:
            - configMapRef:
                name: controller-config
          env:
            - name: ACCOUNT_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  key: appd-key
                  name: appd-secret
            - name: JAVA_OPTS
              value: "-javaagent:/sharedFiles/AppServerAgent/javaagent.jar
                -Dappdynamics.controller.hostName=$(CONTROLLER_HOST)
                -Dappdynamics.controller.port=$(CONTROLLER_PORT)
                -Dappdynamics.agent.accountName=$(ACCOUNT_NAME)
                -Dappdynamics.agent.accountAccessKey=$(ACCOUNT_ACCESS_KEY)
                -Dappdynamics.agent.applicationName=$(APPLICATION_NAME)
                -Xms64m -Xmx512m -XX:MaxPermSize=256m"
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /sharedFiles
              name: shared-files

The dynamic values for the Java options are populated from the ConfigMap. First, we reference the entire configMap, where all shared values are defined:

           – configMapRef:
               name: controller-config

We also reference our secret as a separate environmental variable. Then, using the $() notation, we can reference the individual variables in order to concatenate the value of the JAVA_OPTS variable.
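Note that $(VAR) only resolves variables already defined for the container, either earlier in its env list or through envFrom. A minimal sketch, using the variable names from the ConfigMap in these examples:

```yaml
env:
  - name: JAVA_OPTS
    value: "-javaagent:/sharedFiles/AppServerAgent/javaagent.jar
      -Dappdynamics.controller.hostName=$(CONTROLLER_HOST)
      -Dappdynamics.controller.port=$(CONTROLLER_PORT)
      -Dappdynamics.agent.accountAccessKey=$(ACCOUNT_ACCESS_KEY)"
```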

Thanks to these Kubernetes features (init containers, configMaps, secrets), we can add AppDynamics monitoring into an existing app in a noninvasive way, without the need to rebuild the image.

This approach has multiple benefits. The app image remains unchanged in terms of size and encapsulation. From a Kubernetes perspective, no extra processing is added, as the init container exits before the main container starts. There is added flexibility in what can be passed into the application initialization routine without the need to modify the image.

Note that OpenShift does not allow running Docker containers as user root by default. If you must (for whatever good reason), add the service account you use for deployments to the anyuid SCC. Assuming your service account is my-account, as in the provided examples, run this command:

oc adm policy add-scc-to-user anyuid -z my-account

Here’s an example of a complete app spec with AppD instrumentation:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: my-app
    spec:
      initContainers:
        - name: agent-repo
          image: agent-repo:x.x.x
          imagePullPolicy: IfNotPresent
          command: ["cp", "-r", "/sharedFiles/AppServerAgent", "/mountPath/AppServerAgent"]
          volumeMounts:
            - mountPath: /mountPath
              name: shared-files
      volumes:
        - name: shared-files
          emptyDir: {}
      serviceAccountName: my-account
      containers:
        - name: my-app
          image: my-service
          imagePullPolicy: IfNotPresent
          envFrom:
            - configMapRef:
                name: controller-config
          env:
            - name: TIER_NAME
              value: WebTier
            - name: ACCOUNT_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  key: appd-key
                  name: appd-key-secret
            - name: JAVA_OPTS
              value: "-javaagent:/sharedFiles/AppServerAgent/javaagent.jar
                -Xms64m -Xmx512m -XX:MaxPermSize=256m"
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /sharedFiles
              name: shared-files
      restartPolicy: Always

Learn more about how AppDynamics can help monitor your applications on Kubernetes and OpenShift.

Monitoring Kubernetes and OpenShift with AppDynamics

Here at AppDynamics, we build applications for both external and internal consumption. We’re always innovating to make our development and deployment process more efficient. We refactor apps to get the benefits of a microservices architecture, to develop and test faster without stepping on each other, and to fully leverage containerization.

Like many other organizations, we are embracing Kubernetes as a deployment platform. We use both upstream Kubernetes and OpenShift, an enterprise Kubernetes distribution on steroids. The Kubernetes framework is very powerful. It allows massive deployments at scale, simplifies new version rollouts and multi-variant testing, and offers many levers to fine-tune the development and deployment process.

At the same time, this flexibility makes Kubernetes complex in terms of setup, monitoring and maintenance at scale. Each of the Kubernetes core components (api-server, kube-controller-manager, kubelet, kube-scheduler) has quite a few flags that govern how the cluster behaves and performs. The default values may be OK initially for smaller clusters, but as deployments scale up, some adjustments must be made. We have learned to keep these values in mind when monitoring OpenShift clusters—both from our own pain and from published accounts of other community members who have experienced their own hair-pulling discoveries.

It should come as no surprise that we use our own tools to monitor our apps, including those deployed to OpenShift clusters. Kubernetes is just another layer of infrastructure. Along with the server and network visibility data, we are now incorporating Kubernetes and OpenShift metrics into the bigger monitoring picture.

In this blog, we will share what we monitor in OpenShift clusters and give suggestions as to how our strategy might be relevant to your own environments. (For more hands-on advice, read my blog Deploying AppDynamics Agents to OpenShift Using Init Containers.)

OpenShift Cluster Monitoring

For OpenShift cluster monitoring, we use two plug-ins that can be deployed with our standalone machine agent. AppDynamics’ Kubernetes Events Extension, described in our blog on monitoring Kubernetes events, tracks every event in the cluster. Kubernetes Snapshot Extension captures attributes of various cluster resources and publishes them to the AppDynamics Events API. The snapshot extension collects data on all deployments, pods, replica sets, daemon sets and service endpoints. It captures the full extent of the available attributes, including metadata, spec details, metrics and state. Both extensions use the Kubernetes API to retrieve the data, and can be configured to run at desired intervals.

The data these plug-ins provide ends up in our analytics data repository and instantly becomes available for mining, reporting, baselining and visualization. The data retention period is at least 90 days, which offers ample time to go back and perform an exhaustive root cause analysis (RCA). It also allows you to reduce the retention interval of events in the cluster itself. (By default, this is set to one hour.)

We use the collected data to build dynamic baselines, set up health rules and create alerts. The health rules, baselines and aggregate data points can then be displayed on custom dashboards where operators can see the norms and easily spot any deviations.

An example of a customizable Kubernetes dashboard.

What We Monitor and Why

Cluster Nodes

At the foundational level, we want monitoring operators to keep an eye on the health of the nodes where the cluster is deployed. Typically, you would have a cluster of masters, where core Kubernetes components (api-server, controller-manager, kube-scheduler, etc.) are deployed, as well as a highly available etcd cluster and a number of worker nodes for guest applications. To paint a complete picture, we combine infrastructure health metrics with the relevant cluster data gathered by our Kubernetes data collectors.

From an infrastructure point of view, we track CPU, memory and disk utilization on all the nodes, and also zoom into the network traffic on etcd. In order to spot bottlenecks, we look at various aspects of the traffic at a granular level (e.g., reads/writes and throughput). Kubernetes and OpenShift clusters may suffer from memory starvation, disks overfilled with logs, or spikes in consumption of the API server and, consequently, etcd. Ironically, it is often monitoring solutions that are known for bringing clusters down by pulling excessive amounts of information from the Kubernetes APIs. It is always a good idea to establish how much monitoring is enough and dial it up when necessary to diagnose issues further. If a high level of monitoring is warranted, you may need to add more masters and etcd nodes. Another useful technique, especially with large-scale implementations, is to have a separate etcd cluster just for storing Kubernetes events. This way, the spikes in event creation and event retrieval for monitoring purposes won't affect performance of the main etcd instances. This can be accomplished by setting the --etcd-servers-overrides flag of the api-server, for example:

--etcd-servers-overrides=/events#https://etcd2.;https://etcd3.

From the cluster perspective we monitor resource utilization across the nodes that allow pod scheduling. We also keep track of the pod counts and visualize how many pods are deployed to each node and how many of them are bad (failed/evicted).

A dashboard widget with infrastructure and cluster metrics combined.

Why is this important? Kubelet, the component responsible for managing pods on a given node, has a setting, --max-pods, which determines the maximum number of pods that can be orchestrated. In Kubernetes the default is 110; in OpenShift it is 250. The value can be changed up or down depending on need. We like to visualize the remaining headroom on each node, which helps with proactive resource planning and prevents sudden overflows (which could mean an outage). Another data point we add there is the number of evicted pods per node.
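To see how close each node is to its pod limit, one approach is to list every pod's node and aggregate the counts. Below, the kubectl output is simulated with printf so the aggregation step can be shown end to end (the jsonpath query is one valid way to produce the list in a real cluster):

```shell
# In a real cluster:
#   kubectl get pods --all-namespaces \
#     -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}'
# Simulated output below: one line per pod, naming the node it runs on
printf 'node-a\nnode-a\nnode-b\nnode-a\n' | sort | uniq -c | sort -rn
# busiest node first; compare each count against the --max-pods limit
```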

Pod Evictions

Evictions are caused by disk-space or memory starvation. We recently had an issue with the disk space on one of our worker nodes due to a runaway log. As a result, the kubelet produced massive evictions of pods from that node. Evictions are bad for many reasons. They will typically affect the quality of service or may even cause an outage. If the evicted pods have an exclusive affinity with the node experiencing disk pressure, and as a result cannot be re-orchestrated elsewhere in the cluster, the evictions will result in an outage. Evictions of core component pods may lead to the meltdown of the cluster.

Long after the incident where pods were evicted, we saw that the evicted pods were still lingering. Why was that? Garbage collection of evictions is controlled by a setting in kube-controller-manager called --terminated-pod-gc-threshold. The default value is set to 12,500, which means that garbage collection won't occur until you have that many evicted pods. Even in a large implementation it may be a good idea to dial this threshold down to a smaller number.

If you experience a lot of evictions, you may also want to check whether kube-scheduler has a custom --policy-config-file defined with no CheckNodeMemoryPressure or CheckNodeDiskPressure predicates.

Following our recent incident, we set up a new dashboard widget that tracks a metric of any threats that may cause a cluster meltdown (e.g., massive evictions). We also associated a health rule with this metric and set up an alert. Specifically, we’re now looking for warning events that tell us when a node is about to experience memory or disk pressure, or when a pod cannot be reallocated (e.g., NodeHasDiskPressure, NodeHasMemoryPressure, ErrorReconciliationRetryTimeout, ExceededGracePeriod, EvictionThresholdMet).

We also look for daemon pod failures (FailedDaemonPod), as they are often associated with cluster health rather than issues with the daemon set app itself.

Pod Issues

Pod crashes are an obvious target for monitoring, but we are also interested in tracking pod kills. Why would someone be killing a pod? There may be good reasons for it, but it may also signal a problem with the application. For similar reasons, we track deployment scale-downs, which we do by inspecting ScalingReplicaSet events. We also like to visualize the scale-down trend along with the app health state. Scale-downs, for example, may happen by design through auto-scaling when the app load subsides. They may also be issued manually or in error, and can expose the application to an excessive load.

Pending state is supposed to be a relatively short stage in the lifecycle of a pod, but sometimes it isn't. It may be a good idea to track pods with a pending time that exceeds a certain, reasonable threshold—one minute, for example. In AppDynamics, we also have the luxury of baselining any metric and then tracking any configurable deviation from the baseline. If you catch a spike in pending-state duration, the first thing to check is the size of your images and the speed of image download. One big image may clog the pipe and affect other containers. Kubelet has a flag, --serialize-image-pulls, which is set to "true" by default, meaning images are loaded one at a time. Change the flag to "false" if you want to load images in parallel and avoid the potential clogging by a monster-sized image. Keep in mind, however, that you have to use Docker's overlay2 storage driver to make this work (in newer Docker versions this storage driver is the default). In addition to the kubelet setting, you may also need to tweak the max-concurrent-downloads flag of the Docker daemon to ensure the desired parallelism.
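As a sketch, the two knobs might be set like this (the kubelet flag name comes from the text above; the daemon.json value of 6 is an arbitrary illustration, the Docker default being 3):

```
# kubelet startup flag: allow parallel image pulls
--serialize-image-pulls=false

# /etc/docker/daemon.json: raise the number of concurrent layer downloads
{
  "max-concurrent-downloads": 6
}
```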

Large images that take a long time to download may also cause a different type of issue that results in a failed deployment. The kubelet flag --image-pull-progress-deadline determines the point at which the image will be deemed "too long to pull or extract." If you deal with big images, make sure you dial up the value of the flag to fit your needs.

User Errors

Many big issues in the cluster stem from small user errors (human mistakes). A typo in a spec—for example, in the image name—may bring down the entire deployment. Similar effects may occur due to a missing image or insufficient rights to the registry. With that in mind, we track image errors closely and pay attention to excessive image-pulling. Unless it is truly needed, image-pulling is something you want to avoid in order to conserve bandwidth and speed up deployments.

Storage issues also tend to arise due to spec errors, lack of permissions or policy conflicts. We monitor storage issues (e.g., mounting problems) because they may cause crashes. We also pay close attention to resource quota violations because they do not trigger pod failures. They will, however, prevent new deployments from starting and existing deployments from scaling up.

Speaking of quota violations, are you setting resource limits in your deployment specs?

Policing the Cluster

On our OpenShift dashboards, we display a list of potential red flags that are not necessarily a problem yet but may cause serious issues down the road. Among these are pods without resource limits or health probes in the deployment specs.

Resource limits can be enforced by resource quotas across the entire cluster or at a more granular level. Violation of these limits will prevent the deployment. In the absence of a quota, pods can be deployed without defined resource limits. Having no resource limits is bad for multiple reasons. It makes cluster capacity planning challenging. It may also cause an outage. If you create or change a resource quota when there are active pods without limits, any subsequent scale-up or redeployment of these pods will result in failures.
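For reference, a minimal sketch of per-container requests and limits in a deployment spec (the values are illustrative, not recommendations):

```yaml
containers:
  - name: my-app
    image: my-app-image:x.x.x
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi
```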

The health probes, readiness and liveness, are not enforceable, but it is a best practice to have them defined in the specs. They are the primary mechanism for the pods to tell the kubelet whether the application is ready to accept traffic and is still functioning. If the readiness probe is not defined and the pod takes a long time to initialize (based on the kubelet's default timing), the pod will be restarted. This loop may continue for some time, taking up cluster resources for no reason and effectively causing a poor user experience or an outage.

The absence of the liveness probe may cause a similar effect if the application is performing a lengthy operation and the pod appears to Kubelet as unresponsive.
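A sketch of both probes in a container spec (the endpoint paths and timings are illustrative and should match what the application actually exposes):

```yaml
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 15
```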

We provide easy access to the list of pods with incomplete specs, allowing cluster admins to have a targeted conversation with development teams about corrective action.

Routing and Endpoint Tracking

As part of our OpenShift monitoring, we provide visibility into potential routing and service endpoint issues. We track unused services, including those created by someone in error and those without any pods behind them because the pods failed or were removed.

We also monitor bad endpoints pointing at old (deleted) pods, which effectively cause downtime. This issue may occur during rolling updates when the cluster is under increased load and API request-throttling is set lower than it needs to be. To resolve the issue, you may need to increase the --kube-api-burst and --kube-api-qps config values of kube-controller-manager.

Every metric we expose on the dashboard can be viewed and analyzed in the list and further refined with ADQL, the AppDynamics query language. After spotting an anomaly on the dashboard, the operator can drill into the raw data to get to the root cause of the problem.

Application Monitoring

Context plays a significant role in our monitoring philosophy. We always look at application performance through the lens of the end-user experience and desired business outcomes. Unlike specialized cluster-monitoring tools, we are not only interested in cluster health and uptime per se. We’re equally concerned with the impact the cluster may have on application health and, subsequently, on the business objectives of the app.

In addition to having a cluster-level dashboard, we also build specialized dashboards with a more application-centric point of view. There we correlate cluster events and anomalies with application or component availability, end-user experience as reported by real-user monitoring, and business metrics (e.g., conversion of specific user segments).

Leveraging K8s Metadata

Kubernetes makes it super easy to run canary deployments, blue-green deployments, and A/B or multivariate testing. We leverage these conveniences by pulling deployment metadata and using labels to analyze performance of different versions side by side.

Monitoring Kubernetes or OpenShift is just a part of what AppDynamics does for our internal needs and for our clients. AppDynamics covers the entire spectrum of end-to-end monitoring, from the foundational infrastructure to business intelligence. Inherently, AppDynamics is used by many different groups of operators who may have very different skills. For example, we look at the platform as a collaboration tool that helps translate the language of APM to the language of Kubernetes and vice versa.

By bringing these different datasets together under one umbrella, AppDynamics establishes a common ground for diverse groups of operators. On the one hand you have cluster admins, who are experts in Kubernetes but may not know the guest applications in detail. On the other hand, you have DevOps in charge of APM or managers looking at business metrics, both of whom may not be intimately familiar with Kubernetes. These groups can now have a productive monitoring conversation, using terms that are well understood by everyone and a single tool to examine data points on a shared dashboard.

Learn more about how AppDynamics can help you monitor your applications on Kubernetes and OpenShift.

Exploring the AppDynamics Integration with OpenShift Container Platform

This was originally posted on OpenShift’s blog.


Regardless of whether you are developing traditional applications or microservices, you will inevitably have a requirement to monitor the health of your applications, and the components upon which they are built.

Furthermore, users of OpenShift Container Platform will often have their own existing enterprise standard monitoring infrastructure into which they will be looking to integrate applications they build. As part of Red Hat’s Professional Services organisation, one of the monitoring platforms I encounter whilst working with OpenShift in the field is the AppDynamics SaaS offering.

In this post, I will run through how we can take a Source-to-Image (S2I) builder, customise it to add the monitoring agent, and then use it as the basis of a Fuse Integration Services application, written in Apache Camel, and using the Java CDI approach.

Register with AppDynamics

Before getting into the process of modifying the S2I builder image and building the application, the first thing we need to do is register with the AppDynamics platform. If you’re an existing consumer of this service, then this step obviously isn’t required!

Either way, once registered, we need to download the Java agent. In this example, I’ve used the Standalone JVM agent, but there are many more options to choose from, and one of those may better suit your requirements.

Adding the Agent to your Image

There are two primary ways you can go about adding the AppDynamics Java agent to your image.

Firstly, you can use Source-To-Image (S2I) to add the Java agent to the standard fis-java-openshift base image at the same time as pulling in all your other dependencies – mainly source code and libraries.

Secondly, you can extend the fis-java-openshift S2I builder image itself, add your own layer containing the Java agent, and use this new image as the basis for your builds.

Using S2I

When using S2I to create an image, OpenShift can execute a number of scripts as part of this process. The two scripts of interest in this context are assemble and run.

In the fis-java-openshift image, the S2I scripts are located in /usr/local/s2i. We can override the actions of these scripts by adding an .s2i/bin directory into the code repository, and creating our new scripts there.


The assemble script is going to be the script that pulls in the Java agent and unpacks it ready for use. Whilst we need to override it to carry out this task, we also need it to carry on performing the tasks it currently performs in addition to any customisations we might add:


# run original assemble script
/usr/local/s2i/assemble

# install appdynamics agent
# (the download URL was omitted in the original; the archive name below is
# reconstructed from the directory names used later in this script)
curl -H "Accept: application/zip" -o /deployments/fis-java-appdynamics-plugin-master.zip <download URL>

pushd /deployments
    unzip fis-java-appdynamics-plugin-master.zip
    pushd fis-java-appdynamics-plugin-master/
        mv appdynamics/ ../
    popd
    rm -rf fis-java-appdynamics-plugin-master/
    rm -f fis-java-appdynamics-plugin-master.zip
popd

As can be seen above, we actually get this script to execute the original assemble script before we add the AppDynamics agent – this way, if the Maven build fails, we haven’t wasted any time downloading any artifacts we’re not going to use.


The run script is going to be the script that sets up the environment to allow us to use the AppDynamics Java agent, and – you’ve guessed it – run our app! Just as with the assemble script, we still want run to carry on executing our application when our customisations are complete. Therefore, all we do here is get it to check for the presence of an environment variable, and if it’s found, configure the environment to use AppDynamics.


if [ x"$APPDYNAMICS_AGENT_ACCOUNT_NAME" != "x" ]; then
    mkdir /deployments/logs
    export JAVA_OPTIONS="-javaagent:/deployments/appdynamics/javaagent.jar -Dappdynamics.agent.logs.dir=/deployments/logs $JAVA_OPTIONS"
fi

exec /usr/local/s2i/run

In this case, we’re looking for a variable called APPDYNAMICS_AGENT_ACCOUNT_NAME. After all, if we haven’t configured any credentials for the Java agent, then it can’t connect to the AppDynamics SaaS anyway.


Finally, to bring this all together, we can use a Template to pull all of these components together, begin the build process, and deploy our application.

The S2I process is possibly the simpler of the two methods outlined here for adding the AppDynamics Java agent to your application, but it does present some points of which you need to be aware:

  • The Java agent needs to be hosted somewhere accessible to your build process. It also needs to be version controlled separately from the build, which adds extra build management overhead.
  • It will be downloaded every single time you run a build – not the most efficient way of deploying it if you have an effective CI / CD pipeline and are doing multiple builds per hour!
  • Whilst it’s simpler to configure, it can present confusing problems during the build process. For example, if your assemble script creates some directories for your application to use (logging directories, for example), you may need to think about how your build and application are being executed, and who owns what in that process.

Regardless of these minor issues, this is still a powerful (and useful!) mechanism, and as such I have provided a sample repository that allows you to execute an S2I build that should pull in the Java agent and run it alongside an application.

NOTE: If you’re still interested in using the S2I process, and want to know more about how to configure the Java agent with environment variables, skip ahead to ‘Adding the AppDynamics agent to a FIS application’.

Extending the Fuse Integration Services (FIS) Base Image

My preference for using the AppDynamics Java agent with applications built on FIS (and for similar use cases) is to add it into the base image once, so that it is accessible by any application based on that image.


In this example, this is done by creating a new Docker image, based on fis-java-openshift:latest and adding the Java agent into this project as an artifact to be added to that image:


USER root

ADD appdynamics/ /opt/appdynamics/

RUN chgrp -R 0 /opt/appdynamics/
RUN chmod -R g+rw /opt/appdynamics/
RUN find /opt/appdynamics/ -type d -exec chmod g+x {} +

#jboss from FIS
USER 185

In this Dockerfile, we are adding the content of the appdynamics directory in our Git repository to the fis-java-openshift base image, and altering its permissions so that it is owned by the JBoss user in that image.


In order to consume this Dockerfile and turn it into a useable image, we have a number of options. By far the simplest is to execute an oc new-build command against the repository hosting the Docker image – in the case of this image, this would be:

oc new-build  --context-dir=src/main/docker

Note the use of the --context-dir switch pointing to the directory containing the Dockerfile. This informs OpenShift that it needs to look in a sub-directory, not the root of the Git repository, for its build artifacts.

Once we’ve executed the above command, we can tail the OpenShift logs from the CLI (or view them from the Web Console), and see the Dockerfile build taking place. The output will be similar to this:

[vagrant@rhel-cdk ~]$ oc logs -f fis-java-appdynamics-1-build           
I0709 09:03:35.440237       1 source.go:197] Downloading "" ...
Step 1 : FROM
 ---> 771d26abb75d
Step 2 : USER root
 ---> Using cache
 ---> c66c5f1378be
Step 3 : ADD appdynamics/ /opt/appdynamics/
 ---> ef153cb350d8
Removing intermediate container 44c776871f6f
Step 4 : RUN chown -R 185:185 /opt/appdynamics/
 ---> Running in 861f8c27225e
 ---> ee1ac493f88d
Removing intermediate container 861f8c27225e
Step 5 : USER 185
 ---> Running in 1d9fe0a02e6a
 ---> 73f598d8a0e9
Removing intermediate container 1d9fe0a02e6a
Step 6 : ENV "OPENSHIFT_BUILD_NAME" "fis-java-appdynamics-1" "OPENSHIFT_BUILD_NAMESPACE" "dev1" "OPENSHIFT_BUILD_SOURCE" "" "OPENSHIFT_BUILD_COMMIT" "d025f9961896b25fcae479d62779ae455df334d3"
 ---> Running in 510a4b51db5a
 ---> c4e938d189eb
Removing intermediate container 510a4b51db5a
Step 7 : LABEL "" "Updated the FIS build artifacts" "" "" "" "src/main/docker" "" "Benjamin Holmes \\u003e" "" "Sat Jul 9 11:26:11 2016 +0100" "" "d025f9961896b25fcae479d62779ae455df334d3" "" "master"
 ---> Running in 213844392db7
 ---> 44fede9609fd
Removing intermediate container 213844392db7
Successfully built 44fede9609fd
I0709 09:04:06.573966       1 docker.go:118] Pushing image ...
I0709 09:04:10.970516       1 docker.go:122] Push successful

NOTE: As an alternative to a standard Dockerfile build, we can use the Kubernetes Fluent DSL to generate the BuildConfig and ImageStream objects as part of a Template that tells OpenShift to perform a Dockerfile build based on the supplied project content. Using the Kubernetes DSL is optional (you are more than welcome to define the objects manually), but for a Java developer it is a simple process to understand: it allows you to version-control your whole image build process, and it falls nicely into the ‘configuration as code’ discipline so prominent in the DevOps world. An example of how to use the Fluent DSL is supplied in the GitHub repository for the AppDynamics base image.

Whichever process you decide upon (the supplied GitHub repository contains artifacts for both builds), OpenShift will generate a number of Kubernetes objects. What we are interested in here is the ImageStream…

apiVersion: v1
kind: ImageStream
metadata:
  generation: 1
  labels:
    app: fis-java-appdynamics
  name: fis-java-appdynamics
  namespace: dev1
spec: {}
status:
  tags:
  - tag: latest

…and the BuildConfig:

apiVersion: v1
kind: BuildConfig
metadata:
  labels:
    app: fis-java-appdynamics
  name: fis-java-appdynamics
  namespace: dev1
spec:
  output:
    to:
      kind: ImageStreamTag
      name: fis-java-appdynamics:latest
  postCommit: {}
  resources: {}
  source:
    contextDir: src/main/docker
    secrets: []
    type: Git
  strategy:
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: fis-java-openshift:latest
    type: Docker
  triggers:
  - github:
      secret: 9Y66CCaSoOipX2pgeEXs
    type: GitHub
  - generic:
      secret: IrYOFwVX0pZKSkceG4D_
    type: Generic
  - type: ConfigChange
  - imageChange: {}
    type: ImageChange
status:
  lastVersion: 1

Please note that lines have been removed from the above objects for the sake of brevity.

Once the build of the fis-java-appdynamics image has completed successfully, we will have a new base image present in our namespace that contains the AppDynamics agent plugin.


Adding the AppDynamics agent to a FIS application

Given that I have elected to follow the second method of creating a new base image with the AppDynamics Java agent added to it, I now need a way of configuring it.

NOTE: These steps are much the same as those performed if you were to use the S2I builder process. However, you can see the subtle differences, such as the addition of JAVA_OPTIONS being performed by the .s2i/bin/run script (as opposed to the DeploymentConfig in the Template), in the sample repository here.
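For reference, the S2I variant boils down to a run-script override rather than a DeploymentConfig entry. A minimal sketch of what such a .s2i/bin/run might do; the commented-out delegate path on the last line is an assumption and varies by builder image:

```shell
#!/bin/bash
# Prepend the agent switch to whatever JAVA_OPTIONS the image already defines
export JAVA_OPTIONS="-javaagent:/opt/appdynamics/javaagent.jar ${JAVA_OPTIONS}"
echo "JAVA_OPTIONS=${JAVA_OPTIONS}"

# exec /usr/local/s2i/run   # hypothetical path: delegate to the builder image's own run script
```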

Configuring the Agent

The AppDynamics agent follows a similar agent model to many other profiling tools, in that it is added to a JVM using the -javaagent switch. When thinking in terms of immutable containers, we obviously want this whole configuration process to be as loosely coupled from the application image as possible.

With this in mind, the simplest way to configure the AppDynamics Java agent is via environment variables. This is helpful, as the AppDynamics agent prioritises environment variables over any other form of configuration available to it (such as controller-info.xml within the agent distribution). The AppDynamics Agent Configuration guide has further information.
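The variable names the agent resolves are the ones that appear in its registration log later in this post (APPDYNAMICS_AGENT_APPLICATION_NAME and friends), so wiring it up outside of OpenShift amounts to exporting them before the JVM starts. A sketch with illustrative values:

```shell
# Illustrative values only; in OpenShift these come from the Template/DeploymentConfig
export APPDYNAMICS_AGENT_APPLICATION_NAME=greeting-service
export APPDYNAMICS_AGENT_TIER_NAME=dev1
export APPDYNAMICS_AGENT_NODE_NAME="$(hostname)"

env | grep '^APPDYNAMICS_AGENT_' | sort
```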

One option here is to hard-code all of the requisite environment variables into an application DeploymentConfig. However, in this brave new world of immutable containers, short-lived cloud application workloads, and CI/CD pipelines, we can be a bit cleverer than that.

Using a Template for the application, we can still define all of the environment variables required by the AppDynamics agent, but we can also use a mixture of templated parameters and Kubernetes’ Downward API to effectively allow the container to introspect itself at runtime and feed useful information about itself to the agent.

Therefore, we can produce a Template that includes an environment variables component in its DeploymentConfig section which looks a little like this:

      - name: JAVA_OPTIONS
        value: '-javaagent:/opt/appdynamics/javaagent.jar'
      - name: TZ
        value: Europe/London
      - name: APPDYNAMICS_AGENT_APPLICATION_NAME
        value: ${SERVICE_NAME}
      - name: APPDYNAMICS_AGENT_TIER_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace
      - name: APPDYNAMICS_AGENT_NODE_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
Note the use of Downward API references for APPDYNAMICS_AGENT_TIER_NAME and APPDYNAMICS_AGENT_NODE_NAME.

NOTE: If you would like to try this with my sample template, you should execute the following command against your OpenShift environment:

oc create -f

This creates the template within the current OpenShift project. This should be the same project in which you have done the fis-java-appdynamics build, otherwise OpenShift won’t be able to locate the new base image!

When we present this via the OpenShift Web Console, we are shown a much more user-friendly version of the above, allowing you to key in your AppDynamics account details without needing to store them in potentially troublesome static configuration files within the container.


Once all mandatory fields have been completed, click on ‘Create’, and a screen will be presented confirming the successful creation of all the Template’s objects.

Once this Template has been instantiated successfully, OpenShift will start a build against the source code branch, using fis-java-appdynamics as the S2I builder image.

Be aware that this project repository contains a standard Maven settings.xml which can be used to define how Maven resolves the build dependencies. If you experience long build times, this file can be updated to resolve to a local Maven repository, such as Sonatype Nexus, or JFrog Artifactory.

After the build has completed successfully, a new Pod will be started, running the application with its embedded Java agent (parts omitted for brevity):

Executing /deployments/bin/run ...
Launching application in folder: /deployments
Running  java  -javaagent:/opt/appdynamics/javaagent.jar
Install Directory resolved to[/opt/appdynamics]
log4j:WARN No appenders could be found for logger (com.singularity.MissingMethodGenerator).
log4j:WARN Please initialize the log4j system properly.
[Thread-0] Sat Jul 09 05:29:26 BST 2016[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver is running
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [greeting-service]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [dev1]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_NODE_NAME] for node name [greeting-service-1-fbcaa]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using selfService [true]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using selfService [true]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using application name [greeting-service]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using tier name [dev1]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using node name [greeting-service-1-fbcaa]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver finished running
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Agent runtime directory set to [/opt/appdynamics/ver4.1.7.1]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Agent node directory set to [greeting-service-1-fbcaa]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: JavaAgent - Using Java Agent Version [Server Agent v4.1.7.1 GA #9949 ra4a2721d52322207b626e8d4c88855c846741b3d]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: JavaAgent - Running IBM Java Agent [No]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: JavaAgent - Java Agent Directory [/opt/appdynamics/ver4.1.7.1]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: JavaAgent - Java Agent AppAgent directory [/opt/appdynamics/ver4.1.7.1]
Agent Logging Directory [/opt/appdynamics/ver4.1.7.1/logs/greeting-service-1-fbcaa]
Running obfuscated agent
Started AppDynamics Java Agent Successfully.
Registered app server agent with Node ID[8494] Component ID[6859] Application ID [4075]

Verifying Successful Integration

Once the application has started successfully, and the agent has registered itself with AppDynamics, you should be able to see your application on the AppDynamics Dashboard:


Drilling down into the Application in the Dashboard also confirms that the Downward API has done its job, and we’ve automatically pulled in both the container name, and the Kubernetes namespace.


Testing Integration

In order to get something a bit more meaningful out of the AppDynamics platform, I’ve put together a small test harness in SoapUI that simply runs a load test against the Fuse application’s RESTful endpoint:


In OpenShift’s container logs we can see these requests coming into the application, either via the Web Console or via the CLI.

Once the test harness has completed its cycle, going back to the AppDynamics dashboard starts to give us a glimpse of something a bit more useful to us from an application monitoring and operations point of view:


We can even drill down into the Web Service endpoints themselves, and examine the levels of load each is experiencing.


Application Scaling

One of the really nice things about using OpenShift, the Downward API, and AppDynamics in this way is that it even gives us useful information about health, request distribution and throughput when we scale out the application. Here the application has been scaled to 3 nodes:


We can also look at the load and response times being experienced by users of the application service. Whilst this particular view gives an amalgamation of data, it’s a simple operation to drill down into an individual JVM to see how it’s performing.


I have barely scratched the surface of what we can monitor, log, and alert on with OpenShift, Fuse Integration Services, and AppDynamics. Hopefully though, it gives you a glimpse of what is possible using the tools provided by the OpenShift Container Platform, and a template for not only integrating AppDynamics, but also other useful toolsets that follow a similar agent model.


The full source repository for fis-java-appdynamics is here:

The full source repository for the FIS/Camel application based on the fis-java-appdynamics image is here:

The full source repository for the FIS/Camel application with the Java agent added using S2I is here:
Please note the branches and tags in these repositories.

Using AppDynamics with Red Hat OpenShift v3

As customers and partners transition their applications from monoliths to microservices using Platform-as-a-Service (PaaS) providers such as Red Hat OpenShift v3, AppDynamics has made a significant investment in providing first-class integrations with these providers.

OpenShift v3 has been significantly re-architected from its predecessor, OpenShift v2: cartridges have been replaced with Docker images, and gears with Docker containers. The complete set of differences can be found here.

AppDynamics integrates its agents with Red Hat OpenShift v3 using the Source-to-Image (S2I) methodology. S2I is a tool for building reproducible Docker images: it produces ready-to-run images by injecting application source into a builder image and assembling a new image, which incorporates the base (builder) image and the built source, and is ready to use with the docker run command. S2I supports incremental builds, which reuse previously downloaded dependencies, previously built artifacts, and so on.


The overall workflow for using AppDynamics with Red Hat OpenShift v3 is shown below:

Step 1 is already provided by Red Hat.

To perform Steps 2 and 3, we have provided S2I scripts in the following GitHub repository, along with instructions on how to build enhanced builder images for JBoss Wildfly and EAP servers.

Let’s explore this with an actual example using the sample application.



–    Ensure oc is installed

–    Ensure s2i is installed

–    Ensure you have a Docker Hub account

Step 2: Create an AppDynamics builder image

$ git clone

$ cd sti-wildfly

$ make build VERSION=eap6.4

Step 3: Create an Application image

$ s2i build -e "APPDYNAMICS_APPLICATION_NAME=os3-ticketmonster,APPDYNAMICS_TIER_NAME=os3-ticketmonster-tier,APPDYNAMICS_ACCOUNT_NAME=customer1_xxxxxxxxxxxxxxxxxxf,APPDYNAMICS_ACCOUNT_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxx,,APPDYNAMICS_CONTROLLER_PORT=443,APPDYNAMICS_CONTROLLER_SSL_ENABLED=true" appdynamics/sti-wildfly-eap64-centos7:latest pranta/appd-eap-ticketmonster

$ docker tag openshift-ticket-monster pranta/openshift-ticket-monster:latest

$ docker push pranta/openshift-ticket-monster

Step 4: Deploy the Application into OpenShift

$ oc login

$ oc new-project wildfly

$ oc project wildfly

$ oc new-app --docker-image=pranta/appd-eap-ticketmonster:latest --name=ticketmonster-demo

Now you should be able to log in to the controller and see the ticketmonster application on the application dashboard:

See a live demo of Red Hat OpenShift v3 with AppDynamics from AppSphere ’15 below.

If you’re already using Red Hat OpenShift v3 but are new to AppDynamics, here’s where you can sign up for a free trial:

AppDynamics Partners with OpenShift by Red Hat to Empower DevOps

We’re proud to partner with OpenShift by Red Hat to help monitor their open-source platform-as-a-service (PaaS). Together we make it easier to scale into the cloud. The integration helps foster DevOps by increasing the visibility and collaboration between the typically fragmented development and operations teams throughout the product lifecycle. We caught up with Chris Morgan, Technical Director of Partner Ecosystem at Red Hat, to discuss all the ways Agile and rapid-release cycles have changed development and sped up innovation.

Morgan refers to these new DevOps tools as driving innovation and empowering developers by cultivating a constant feedback loop and providing end-to-end visibility while helping to scale applications.


“We have a great partner that’s able to provide [APM] to enhance the platform and make it more desirable to developers and for our customers. Ease of use and deployment is what everyone wants.”

“Using AppDynamics, we can monitor the existing application and understand how best it’s performing and then re-architect it so it can take advantage of the things that platform-as-a-service has to offer and you move to OpenShift.”

AppDynamics is excited to announce we are available in the OpenShift marketplace to make it easier than ever to add application performance monitoring to OpenShift based applications.

AppDynamics in the OpenShift Marketplace


Bootstrapping DropWizard apps with AppDynamics on OpenShift by Red Hat

Getting started with DropWizard, OpenShift, and AppDynamics

In this blog post, I’ll show you how to deploy a DropWizard-based application on OpenShift by Red Hat and monitor it with AppDynamics.

DropWizard is a high-performance Java framework for building RESTful web services. It is built by the smart folks at Yammer and is available as an open-source project on GitHub. The easiest way to get started with DropWizard is with the example application, which, as its name implies, was developed to provide examples of some of the features present in DropWizard.


OpenShift can be used to deploy any kind of application with the DIY (do it yourself) cartridge. To get started, log in to OpenShift and create an application using the DIY cartridge.

With the official OpenShift quick start guide to AppDynamics, getting started with AppDynamics on OpenShift couldn’t be easier.

1) Sign up for an account on OpenShift by Red Hat

2) Set up the Red Hat client tools on your local machine

$ gem install rhc
$ rhc setup

3) Create a Do It Yourself application on OpenShift

$ rhc app create appdynamicsdemo diy-0.1

Getting started is as easy as creating an application from an existing git repository:

DIY Cartridge

% rhc app create appdynamicsdemo diy-0.1 --from-code

Application Options
Domain: appddemo
Cartridges: diy-0.1
Source Code:
Gear Size: default
Scaling: no

Creating application 'appdynamicsdemo' … done
Waiting for your DNS name to be available … done

Cloning into 'appdynamicsdemo'…
Your application 'appdynamicsdemo' is now available.

SSH to:
Git remote: ssh://

Run 'rhc show-app appdynamicsdemo' for more details about your app.

With the OpenShift Do-It-Yourself cartridge you can easily run any application by adding a few action hooks to your repository. In order to make DropWizard work on OpenShift, we need to create three action hooks for building, deploying, and starting the application. Action hooks are simply scripts that are run at different points during deployment. To get started, create a .openshift/action_hooks directory:

mkdir -p .openshift/action_hooks
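Each hook is simply an executable script named after its lifecycle phase, living in that directory. A sketch that scaffolds the three hooks used below (the repository path is hypothetical):

```shell
REPO=/tmp/appdynamicsdemo          # hypothetical local checkout of the app repository
mkdir -p "$REPO/.openshift/action_hooks"
for hook in build deploy start; do
  printf '#!/bin/bash\n' > "$REPO/.openshift/action_hooks/$hook"
  chmod +x "$REPO/.openshift/action_hooks/$hook"   # hooks must be executable to run
done

ls "$REPO/.openshift/action_hooks"
```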

Here is the example for the above sample application:

When checking out the repository, use Maven to download the project dependencies and package the project for production from source code:



mvn -s $OPENSHIFT_REPO_DIR/.openshift/settings.xml -q package

When deploying the code, you need to replace the IP address and port for the DIY cartridge. These properties are made available as environment variables:



sed -i 's/@OPENSHIFT_DIY_IP@/'"$OPENSHIFT_DIY_IP"'/g' example.yml
sed -i 's/@OPENSHIFT_DIY_PORT@/'"$OPENSHIFT_DIY_PORT"'/g' example.yml
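The substitution is easy to sanity-check locally against a stub file (these example.yml contents are illustrative, not the real DropWizard configuration):

```shell
cd /tmp
printf 'server:\n  bindHost: @OPENSHIFT_DIY_IP@\n  port: @OPENSHIFT_DIY_PORT@\n' > example.yml

# Stand-ins for the values OpenShift injects into the gear's environment
OPENSHIFT_DIY_IP=127.0.0.1
OPENSHIFT_DIY_PORT=8080

sed -i 's/@OPENSHIFT_DIY_IP@/'"$OPENSHIFT_DIY_IP"'/g' example.yml
sed -i 's/@OPENSHIFT_DIY_PORT@/'"$OPENSHIFT_DIY_PORT"'/g' example.yml
cat example.yml
```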

Let’s recap some of the smart decisions we have made so far:

  • Leverage OpenShift platform as a service (PaaS) for managing the infrastructure
  • Use DropWizard as a solid foundation for our Java application
  • Monitor the application performance with AppDynamics Pro

With a solid Java foundation we are prepared to build our new application. Next, try adding another machine or dive into the DropWizard documentation.

Combining DropWizard, OpenShift, and AppDynamics

AppDynamics allows you to instrument any Java application by simply adding the AppDynamics agent to the JVM. Sign up for an AppDynamics Pro self-service account, then log in using the account details in the email titled “Welcome to your AppDynamics Pro SaaS Trial”, or the account details you entered during an on-premise installation.

The last step to combine the power of OpenShift and DropWizard is to instrument the app with AppDynamics. Simply update your AppDynamics credentials in the Java agent’s AppServerAgent/conf/controller-info.xml configuration file.
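For reference, the fields involved look roughly like this (a sketch with placeholder values; check the file shipped with your agent version for the authoritative element names):

```xml
<controller-info>
  <controller-host>yourname.saas.appdynamics.com</controller-host>
  <controller-port>443</controller-port>
  <controller-ssl-enabled>true</controller-ssl-enabled>
  <account-name>your-account</account-name>
  <account-access-key>your-access-key</account-access-key>
  <application-name>appdynamicsdemo</application-name>
  <tier-name>web</tier-name>
  <node-name>node1</node-name>
</controller-info>
```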

Finally, to start the application, we need to run any database migrations and add the AppDynamics Java agent to the startup command:



java -jar target/dropwizard-example-0.7.0-SNAPSHOT.jar db migrate example.yml

java -javaagent:${OPENSHIFT_REPO_DIR}AppServerAgent/javaagent.jar \
     -jar ${OPENSHIFT_REPO_DIR}target/dropwizard-example-0.7.0-SNAPSHOT.jar \
     server example.yml > ${OPENSHIFT_DIY_LOG_DIR}/helloworld.log &

OpenShift App

Additional resources on running DropWizard on OpenShift:

Take five minutes to get complete visibility into the performance of your production applications with AppDynamics Pro today.

As always, please feel free to comment if you think I have missed something or if you have a request for content in an upcoming post.

Monitoring Java Applications with AppDynamics on OpenShift by Red Hat

At AppDynamics, we are all about making it easy to monitor complex applications. That is why we are excited to announce our partnership with OpenShift by Red Hat, making it easier than ever before to deploy to the cloud with application performance monitoring built in.

Getting started with OpenShift

OpenShift is Red Hat’s Platform-as-a-Service (PaaS) that allows developers to quickly develop, host, and scale applications in a cloud environment. With OpenShift you have a choice of offerings, including online, on-premise, and open-source project options.

OpenShift Online is Red Hat’s public cloud application development and hosting platform that automates the provisioning, management and scaling of applications so that you can focus on writing the code for your business, startup, or next big idea.

RedHat OpenShift

OpenShift is a Platform-as-a-Service (PaaS) by Red Hat, ideal for deploying large distributed applications. With the official OpenShift quick start guide to AppDynamics, getting started with AppDynamics on OpenShift couldn’t be easier.

1) Sign up for a Red Hat OpenShift account

2) Set up the Red Hat client tools on your local machine

$ gem install rhc
$ rhc setup

3) Create a JBoss application on OpenShift

$ rhc app create appdynamicsdemo jbossews-2.0 --from-code

AppDynamics @ OpenShift

Get started today with the AppDynamics OpenShift getting started guide.

Production monitoring with AppDynamics Pro

Monitor your critical cloud-based applications with AppDynamics Pro for code level visibility into application performance problems.

OpenShift App

Take five minutes to get complete visibility into the performance of your production applications with AppDynamics Pro today.