# Lightrun Kubernetes Operator

The Lightrun Kubernetes (K8s) Operator makes it easy to insert Lightrun agents into your K8s workloads without changing your Docker or manifest files. The Lightrun K8s Operator project was initially scaffolded using operator-sdk and the Kubebuilder book, and aims to follow the Kubernetes Operator pattern.
In theory, to add a Lightrun agent to an application running on Kubernetes, you must:
- Install the agent into the Kubernetes pod.
- Notify the running application to start using the installed agent.
The Lightrun K8s Operator performs those steps for you.

Important: read the Limitations section below before deploying to production.

## Prerequisites

- Kubernetes >= 1.19
## Installation

To set up the Lightrun K8s Operator:

- Create namespaces for the operator and for a test deployment:

```shell
kubectl create namespace lightrun-operator
kubectl create namespace lightrun-agent-test
```
Note: the `lightrun-operator` namespace is hardcoded in the example `operator.yaml` due to its Role and RoleBinding objects. If you want to deploy the operator to a different namespace, use the Helm chart instead.
- Deploy the operator to the operator namespace:

```shell
kubectl apply -f https://raw.githubusercontent.com/lightrun-platform/lightrun-k8s-operator/main/examples/operator.yaml -n lightrun-operator
```
- Create a simple deployment for testing (app source code: PrimeMain.java):

```shell
kubectl apply -f https://raw.githubusercontent.com/lightrun-platform/lightrun-k8s-operator/main/examples/deployment.yaml -n lightrun-agent-test
```
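To confirm the test app rolled out, you can wait on the deployment; this assumes the deployment in the example manifest is named `app`, as the development steps later in this README also use:

```shell
kubectl rollout status deployment app -n lightrun-agent-test
```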
- Download the Lightrun agent config:

```shell
curl https://raw.githubusercontent.com/lightrun-platform/lightrun-k8s-operator/main/examples/lightrunjavaagent.yaml > agent.yaml
```
- Update the following config parameters in the `agent.yaml` file:
  - `serverHostname` - for SaaS it is `app.lightrun.com`; for on-prem use your own hostname
  - `lightrun_key` - you can find this value on the setup page, 2nd step
  - `pinned_cert_hash` - you can fetch it from `https://<serverHostname>/api/getPinnedServerCert` (you have to be authenticated)
- Create the agent custom resource:

```shell
kubectl apply -f agent.yaml -n lightrun-agent-test
```
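After applying, the operator should pick up the new resource. Assuming the CRD kind is `LightrunJavaAgent` (as the sample filename `agents_v1beta_lightrunjavaagent.yaml` used later in this README suggests), you can verify it exists with:

```shell
kubectl get lightrunjavaagent -n lightrun-agent-test
```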
## Helm chart

The Helm chart is available in the repository branch `helm-repo`.

- Add the repo to your Helm repository list:

```shell
helm repo add lightrun-k8s-operator https://lightrun-platform.github.io/lightrun-k8s-operator
```
- Install the Helm chart:

Using default values:

```shell
helm install lightrun-k8s-operator lightrun-k8s-operator/lightrun-k8s-operator -n lightrun-operator --create-namespace
```

Using a custom values file:

```shell
helm install lightrun-k8s-operator lightrun-k8s-operator/lightrun-k8s-operator -f <values file> -n lightrun-operator --create-namespace
```
Note: `helm upgrade --install` or `helm install --dry-run` may not work properly due to limitations in how Helm handles CRDs. You can find more info in the Helm documentation.
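If you plan to use a custom values file, you can first dump the chart's default values to see what is configurable (a standard Helm command, using the repo alias added above):

```shell
helm show values lightrun-k8s-operator/lightrun-k8s-operator > values.yaml
```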
- Uninstall the Helm chart:

```shell
helm delete lightrun-k8s-operator
```

Note: CRDs will not be deleted due to Helm's CRD limitations. You can learn more about these limitations in the Helm documentation.
For the sake of simplicity, we keep the convention of using the same version for both the controller image and the Helm chart. This helps ensure that controller actions are aligned with the CRDs, preventing failed resource validation errors.
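Because the chart and the controller image share a version, a standard `helm list` is enough to confirm what is deployed; the `CHART` and `APP VERSION` columns should agree:

```shell
helm list -n lightrun-operator
```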
## Limitations

- The operator can only patch environment variables that are configured as a key/value pair:

```yaml
env:
  - name: JAVA_TOOL_OPTIONS
    value: "some initial value"
```

If the value is mapped from a ConfigMap or Secret using `valueFrom`, the operator will fail to update the deployment with the following error:

```
Deployment.apps "<deployment name>" is invalid: spec.template.spec.containers[0].env[31].valueFrom: Invalid value: "": may not be specified when `value` is not empty
```
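For reference, this is the kind of mapping the operator cannot patch (standard Kubernetes `valueFrom` syntax; the ConfigMap name and key here are hypothetical):

```yaml
env:
  - name: JAVA_TOOL_OPTIONS
    valueFrom:
      configMapKeyRef:
        name: my-config    # hypothetical ConfigMap name
        key: java-options  # hypothetical key
```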
- If an application has JDWP (Java Debug Wire Protocol) enabled, it will conflict with the Lightrun agent installed by the Lightrun K8s Operator.
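JDWP is usually switched on with a JVM flag like the one below (a common pattern, not taken from this repo); if your manifest or startup command contains it, remove it before attaching the Lightrun agent:

```yaml
env:
  - name: JAVA_TOOL_OPTIONS
    value: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005"
```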
- You must install the correct init container for your application's container platform, for example `lightruncom/k8s-operator-init-java-agent-linux:1.7.0-init.0` for Linux. Supported platforms:
  - Linux
  - Alpine
Available init containers:

- K8s type of resources:
  - Deployment
- Application's language:
  - Java
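For orientation, the sketch below shows the general shape of the patch the operator applies to a Deployment: an init container copies the agent into a shared volume, and the application container loads it via the patched `JAVA_TOOL_OPTIONS`. Container names, volume name, mount paths, and the exact JVM flag here are assumptions for illustration, not the operator's exact output:

```yaml
# Conceptual sketch only - names, paths, and the JVM flag are assumptions.
spec:
  template:
    spec:
      initContainers:
        - name: lightrun-installer          # hypothetical name
          image: lightruncom/k8s-operator-init-java-agent-linux:1.7.0-init.0
          volumeMounts:
            - name: lightrun-agent          # shared scratch volume
              mountPath: /lightrun          # hypothetical path
      containers:
        - name: app
          env:
            - name: JAVA_TOOL_OPTIONS       # the env var the operator patches
              value: "-agentpath:/lightrun/agent/lightrun_agent.so"  # hypothetical
          volumeMounts:
            - name: lightrun-agent
              mountPath: /lightrun
      volumes:
        - name: lightrun-agent
          emptyDir: {}
```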
## Contributing

If you have an idea for an improvement or find a bug, do not hesitate to open an issue, or simply fork and create a pull request. Please open an issue first for any big changes.
To add a post-commit hook, run:

```shell
make post-commit-hook
```

It will regenerate the rules and CRD from the code after every commit, so you won't forget to do it. You'll need to commit those changes as well.
## Development

You'll need a Kubernetes cluster to run against. You can use KIND or K3S to get a local cluster for testing, or run against a remote cluster.

Note: when using `make` commands, your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).
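For example, to create a disposable local cluster with KIND and confirm which context `make run` will use (the cluster name is arbitrary):

```shell
kind create cluster --name lightrun-dev   # context becomes kind-lightrun-dev
kubectl config current-context
```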
- Clone the repo:

```shell
git clone git@github.com:lightrun-platform/lightrun-k8s-operator.git
cd lightrun-k8s-operator
```
- Install the CRDs into the cluster:

```shell
make install
```

- Run your controller (this will run in the foreground):

```shell
make run
```
- Open another terminal tab and deploy a simple app to your cluster:

```shell
kubectl apply -f ./examples/deployment.yaml
kubectl get deployments app
```
- Update `lightrun_key`, `pinned_cert_hash` and `serverHostname` in the CR example file.
- Create a LightrunJavaAgent custom resource:

```shell
kubectl apply -f ./config/samples/agents_v1beta_lightrunjavaagent.yaml
```
At this point you will see in the controller logs that it recognized the new resource and started working. If you run the following command, you will see the changes made by the controller (init container, volume, patched ENV var):

```shell
kubectl describe deployments app
```
## License

Copyright 2022 Lightrun
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.