There are several ways to instrument an application on OpenShift with an AppDynamics application agent. The most straightforward way is to embed the agent into the main application image. (For more on this topic, read my blog Monitoring Kubernetes and OpenShift with AppDynamics.)
Let’s consider a Node.js app. All you need to do is add a require reference to the agent libraries and pass in the necessary controller connection settings. The reference itself becomes part of the app and is embedded in the image. The variables the agent needs to communicate with the controller (e.g., controller host name, app/tier name, license key) can be hardcoded as well, but it is best practice to pass them into the app on initialization as configurable environment variables.
In the world of Kubernetes (K8s) and OpenShift, this task is accomplished with config maps and secrets. Config maps are reusable key-value stores that can be made accessible to one or more applications. Secrets are very similar to config maps, with the additional capability of obfuscating key values. When you create a secret, K8s automatically encodes the value of the key as a base64 string. The actual value is no longer visible at a glance, so you are protected from people looking over your shoulder; keep in mind, though, that base64 is an encoding, not encryption. When the key is requested by the app, Kubernetes automatically decodes the value. Secrets can be used to store any sensitive data such as license keys, passwords, and so on. In our example below, we use a secret to store the license key.
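As a quick illustration of the encoding behavior (the secret name, key name, and license value are placeholders), you can create a secret from the command line and read it back:

oc create secret generic appd-secret --from-literal=appd-key=<your-license-key>
oc get secret appd-secret -o yaml   # the appd-key value comes back as a base64 string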
Here is an example of AppD instrumentation where the agent is embedded, and the configurable values are passed as environment variables by means of a configMap, a secret and the pod spec.
// Agent settings, injected as environment variables via a configMap and a secret
var appDobj = {
  controllerHostName: process.env['CONTROLLER_HOST'],
  controllerPort: process.env['CONTROLLER_PORT'],
  controllerSslEnabled: true,
  accountName: process.env['ACCOUNT_NAME'],
  accountAccessKey: process.env['ACCOUNT_ACCESS_KEY'],
  applicationName: process.env['APPLICATION_NAME'],
  tierName: process.env['TIER_NAME'],
  nodeName: 'process'
};
require("appdynamics").profile(appDobj);
Pod Spec
- env:
  - name: TIER_NAME
    value: MyAppTier
  - name: ACCOUNT_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        key: appd-key
        name: appd-secret
  envFrom:
  - configMapRef:
      name: controller-config
The shared AppD variables are defined in the controller-config configMap, and the license key is stored as a secret.
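As an illustration, the pair might look like this (host, account, and application values are placeholders; using stringData lets Kubernetes handle the base64 encoding for you):

apiVersion: v1
kind: ConfigMap
metadata:
  name: controller-config
data:
  CONTROLLER_HOST: mycompany.saas.appdynamics.com
  CONTROLLER_PORT: "443"
  CONTROLLER_SSL_ENABLED: "true"
  ACCOUNT_NAME: mycompany
  APPLICATION_NAME: my-app
---
apiVersion: v1
kind: Secret
metadata:
  name: appd-secret
type: Opaque
stringData:
  appd-key: <your-license-key>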
The Init Container Route: Best Practice
The straightforward way is not always the best. Application developers may want to avoid embedding a “foreign object” into the app images for a number of good reasons—for example, image size, granularity of testing, or encapsulation. Being developers ourselves, we respect that and offer an alternative, a less intrusive way of instrumentation. The Kubernetes way.
An init container is a design feature in Kubernetes that allows decoupling of app logic from any type of initialization routine, such as monitoring in our case. While the main app container lives for the entire duration of the pod, the lifespan of the init container is much shorter. The init container does the required prep work before orchestration of the main container begins. Once the initialization is complete, the init container exits and the main container is started. This way the init container does not run in parallel with the main container as, for example, a sidecar container would. However, like a sidecar container, the init container, while still active, has access to the ephemeral storage of the pod.
We use this ability to share storage between the init container and the main container to inject the AppDynamics agent into the app. Our init container image, in its simplest form, can be described with this Dockerfile:
FROM openjdk:8-jdk-alpine
# unzip is needed to unpack the agent archive; it is not part of the base image
RUN apk add --no-cache bash gawk sed grep bc coreutils unzip
RUN mkdir -p /sharedFiles/AppServerAgent
ADD AppServerAgent.zip /sharedFiles/
RUN unzip /sharedFiles/AppServerAgent.zip -d /sharedFiles/AppServerAgent/
# placeholder command; the pod spec below overrides it with the actual copy step
CMD ["tail", "-f", "/dev/null"]
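If you build the init image yourself, the standard Docker workflow applies (the registry path and tag are placeholders and should match the image referenced in the pod spec):

docker build -t agent-repo:x.x.x .
docker tag agent-repo:x.x.x <your-registry>/agent-repo:x.x.x
docker push <your-registry>/agent-repo:x.x.x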
The above example assumes you have already downloaded the archive with the AppDynamics app agent binaries locally; the binaries are unpacked into /sharedFiles/AppServerAgent when the image is built. To the pod spec, we then add a directive that copies the directory with the agent binaries to a shared volume on the pod:
spec:
  initContainers:
  - name: agent-repo
    image: agent-repo:x.x.x
    imagePullPolicy: IfNotPresent
    command: ["cp", "-r", "/sharedFiles/AppServerAgent", "/mountPath/AppServerAgent"]
    volumeMounts:
    - mountPath: /mountPath
      name: shared-files
  volumes:
  - name: shared-files
    emptyDir: {}
  serviceAccountName: my-account
After the init container exits, the AppDynamics agent binaries are waiting on the shared volume of the pod, ready to be picked up by the application.
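You can verify the handoff by listing the shared directory from the main container once the pod is running (the pod name below is a placeholder):

oc exec <my-app-pod> -c my-app -- ls /sharedFiles/AppServerAgent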
Let’s assume we are deploying a Java app, one normally initialized via a script that calls the java command with Java options. The script, startup.sh, may look like this:
# startup.sh
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.tierName=$TIER_NAME"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.agent.reuse.nodeName.prefix=$TIER_NAME"
JAVA_OPTS="$JAVA_OPTS -javaagent:/sharedFiles/AppServerAgent/javaagent.jar"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.controller.hostName=$CONTROLLER_HOST -Dappdynamics.controller.port=$CONTROLLER_PORT -Dappdynamics.controller.ssl.enabled=$CONTROLLER_SSL_ENABLED"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.accountName=$ACCOUNT_NAME -Dappdynamics.agent.accountAccessKey=$ACCOUNT_ACCESS_KEY -Dappdynamics.agent.applicationName=$APPLICATION_NAME"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.socket.collection.bci.enable=true"
JAVA_OPTS="$JAVA_OPTS -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true"
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"
java $JAVA_OPTS -jar myapp.jar
It is embedded into the image and invoked via Docker’s ENTRYPOINT directive when the container starts.
FROM openjdk:8-jdk-alpine
# run everything from one directory so startup.sh finds myapp.jar
WORKDIR /usr/src
COPY startup.sh startup.sh
RUN chmod +x startup.sh
ADD myapp.jar myapp.jar
EXPOSE 8080
ENTRYPOINT ["/bin/sh", "startup.sh"]
To make the consumption of startup.sh more flexible and Kubernetes-friendly, we can trim it down to this:
# a more flexible startup.sh
java $JAVA_OPTS -jar myapp.jar
And declare all the necessary Java options in the spec as a single environment variable:
containers:
- name: my-app
  image: my-app-image:x.x.x
  imagePullPolicy: IfNotPresent
  securityContext:
    privileged: true
  envFrom:
  - configMapRef:
      name: controller-config
  env:
  - name: ACCOUNT_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        key: appd-key
        name: appd-secret
  - name: JAVA_OPTS
    value: "-javaagent:/sharedFiles/AppServerAgent/javaagent.jar
      -Dappdynamics.agent.accountName=$(ACCOUNT_NAME)
      -Dappdynamics.agent.accountAccessKey=$(ACCOUNT_ACCESS_KEY)
      -Dappdynamics.controller.hostName=$(CONTROLLER_HOST)
      -Xms64m -Xmx512m -XX:MaxPermSize=256m
      -Djava.net.preferIPv4Stack=true
      …"
  ports:
  - containerPort: 8080
  volumeMounts:
  - mountPath: /sharedFiles
    name: shared-files
The dynamic values for the Java options are populated from the ConfigMap. First, we reference the entire configMap, where all shared values are defined:
envFrom:
- configMapRef:
    name: controller-config
We also reference our secret as a separate environment variable. Then, using the $() notation, we can reference the individual variables to assemble the value of the JAVA_OPTS variable.
Thanks to these Kubernetes features (init containers, configMaps, secrets), we can add AppDynamics monitoring into an existing app in a noninvasive way, without the need to rebuild the image.
This approach has multiple benefits. The app image remains unchanged in terms of size and encapsulation. From a Kubernetes perspective, no extra processing is added, as the init container exits before the main container starts. There is added flexibility in what can be passed into the application initialization routine without the need to modify the image.
Note that by default OpenShift does not allow containers to run as the root user. If you must run as root (for whatever good reason), add the service account you use for deployments to the anyuid SCC. Assuming your service account is my-account, as in the provided examples, run this command:

oc adm policy add-scc-to-user anyuid -z my-account
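To confirm the change, inspect the SCC; the service account appears fully qualified in its user list (the project name is a placeholder):

oc get scc anyuid -o yaml
# users should now include system:serviceaccount:<your-project>:my-account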
Here’s an example of a complete app spec with AppD instrumentation:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-app
  template:
    metadata:
      labels:
        name: my-app
    spec:
      initContainers:
      - name: agent-repo
        image: agent-repo:x.x.x
        imagePullPolicy: IfNotPresent
        command: ["cp", "-r", "/sharedFiles/AppServerAgent", "/mountPath/AppServerAgent"]
        volumeMounts:
        - mountPath: /mountPath
          name: shared-files
      volumes:
      - name: shared-files
        emptyDir: {}
      serviceAccountName: my-account
      containers:
      - name: my-app
        image: my-app-image:x.x.x
        imagePullPolicy: IfNotPresent
        envFrom:
        - configMapRef:
            name: controller-config
        env:
        - name: TIER_NAME
          value: WebTier
        - name: ACCOUNT_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: appd-key
              name: appd-secret
        - name: JAVA_OPTS
          value: "-javaagent:/sharedFiles/AppServerAgent/javaagent.jar
            -Dappdynamics.agent.accountName=$(ACCOUNT_NAME)
            -Dappdynamics.agent.accountAccessKey=$(ACCOUNT_ACCESS_KEY)
            -Dappdynamics.controller.hostName=$(CONTROLLER_HOST)
            -Xms64m -Xmx512m -XX:MaxPermSize=256m
            -Djava.net.preferIPv4Stack=true
            …"
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /sharedFiles
          name: shared-files
      restartPolicy: Always
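Assuming the spec is saved as my-app.yaml (a placeholder file name), it is deployed the usual way:

oc apply -f my-app.yaml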
Learn more about how AppDynamics can help monitor your applications on Kubernetes and OpenShift.