diff --git a/README.md b/README.md
index 4865e04..d0d5b63 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
# Deploy Amazon EKS Cluster
-GitHub action to deploy an EKS cluster, defining VPC's, Secruity Groups, EC2 Instance templates and everything needed, taking minimum imputs from the user.
-Will generate a cluster of EC2 Instances running Amazon EKS Image, with version 1.27 as default.
+GitHub action to deploy an EKS cluster, defining VPCs, Security Groups, EC2 instance templates and everything needed, taking minimal inputs from the user.
+It will generate a cluster of EC2 instances running the Amazon EKS image, with version 1.28 as the default.
## Requirements
@@ -28,9 +28,10 @@ jobs:
- name: Create EKS Cluster
uses: bitovi/github-actions-deploy-eks@v0.1.0
with:
- aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID_SANDBOX}}
- aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY_SANDBOX}}
- aws_default_region: us-east-1
+ aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+ aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+ aws_eks_cluster_admin_role_arn: arn:aws:iam::123456789012:role/AWSReservedSSO_AdministratorAccess_1234567890123456
+ aws_additional_tags: {"key1": "value1", "key2": "value2"}
```
### Advanced example
@@ -48,16 +49,17 @@ jobs:
- name: Create EKS Cluster
uses: bitovi/github-actions-deploy-eks@v0.1.0
with:
- aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID_SANDBOX}}
- aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY_SANDBOX}}
+ aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+ aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws_default_region: us-east-1
+ aws_eks_cluster_admin_role_arn: arn:aws:iam::123456789012:role/AWSReservedSSO_AdministratorAccess_1234567890123456
# tf_stack_destroy: true
tf_state_bucket_destroy: true
aws_eks_environment: qa
aws_eks_stackname: qa-stack
- aws_eks_cluster_version: 1.25
+ aws_eks_cluster_version: 1.29
aws_eks_instance_type: t2.small
aws_eks_max_size: 5
@@ -76,12 +78,18 @@ jobs:
1. [Action Defaults](#action-defaults-inputs)
1. [AWS](#aws-inputs)
1. [EKS](#eks-inputs)
+1. [Extras](#eks-extras) ⚠️
1. [VPC](#vpc-inputs)
+> ⚠️ Using any of the **extras** can lead to the creation of load balancers. Deleting them afterwards requires manual intervention: delete the load balancer and the VPC manually, then run the job again.
+
+
+### Outputs
+1. [Action Outputs](#action-outputs)
+
The following inputs can be used as `step.with` keys
-
#### **Action defaults Inputs**
| Name | Type | Description |
|------------------|---------|------------------------------------|
@@ -112,15 +120,18 @@ The following inputs can be used as `step.with` keys
| Name | Type | Description |
|------------------|---------|------------------------------------|
| `aws_eks_create` | Boolean | Define if an EKS cluster should be created. Defaults to `true`. |
-f| `aws_eks_security_group_name_master` | String | Define the security group name master. Defaults to `SG for ${var.aws_resource_identifier} - EKS Master`. |
-| `aws_eks_security_group_name_worker` | String | Define the security group name worker. Defaults to `SG for ${var.aws_resource_identifier} - EKS Worker`. |
+| `aws_eks_security_group_name_cluster` | String | Define the cluster security group name. Defaults to `SG for ${var.aws_resource_identifier} - EKS Cluster`. |
+| `aws_eks_security_group_name_node` | String | Define the node security group name. Defaults to `SG for ${var.aws_resource_identifier} - EKS Node`. |
| `aws_eks_environment` | String | Specify the eks environment name. Defaults to `env` |
| `aws_eks_management_cidr` | String | Comma separated list of remote public CIDRs blocks to add it to Worker nodes security groups. |
| `aws_eks_allowed_ports` | String | Allow incoming traffic from this port. Accepts comma separated values, matching 1 to 1 with `aws_eks_allowed_ports_cidr`. |
| `aws_eks_allowed_ports_cidr` | String | Allow incoming traffic from this CIDR block. Accepts comma separated values, matching 1 to 1 with `aws_eks_allowed_ports`. If none defined, will allow all incoming traffic. |
| `aws_eks_cluster_name` | String | Specify the k8s cluster name. Defaults to `${var.aws_resource_identifier}-cluster` |
-| `aws_eks_cluster_log_types` | String | Comma separated list of cluster log type. See [this AWS doc](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html). Defaults to `none`. |
-| `aws_eks_cluster_version` | String | Specify the k8s cluster version. Defaults to `1.27` |
+| `aws_eks_cluster_admin_role_arn` | String | Role ARN to grant cluster-admin permissions. |
+| `aws_eks_cluster_log_types` | String | Comma separated list of cluster log types. See [this AWS doc](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html). Defaults to `api,audit,authenticator`. |
+| `aws_eks_cluster_log_retention_days` | String | Days to store logs. Defaults to `7`. |
+| `aws_eks_cluster_log_skip_destroy` | Boolean | Skip deletion of cluster logs if set to `true`. Defaults to `false`. |
+| `aws_eks_cluster_version` | String | Specify the k8s cluster version. Defaults to `1.28` |
| `aws_eks_instance_type` | String | Define the EC2 instance type. See [this list](https://aws.amazon.com/ec2/instance-types/) for reference. Defaults to `t3a.medium`. |
| `aws_eks_instance_ami_id` | String | AWS AMI ID. Will default to the latest Amazon EKS Node image for the cluster version. |
| `aws_eks_instance_user_data_file` | String | Relative path in the repo for a user provided script to be executed with the EC2 Instance creation. See note. |
@@ -130,6 +141,16 @@ f| `aws_eks_security_group_name_master` | String | Define the security group nam
| `aws_eks_max_size` | String | Enter the max_size for the worker nodes. Defaults to `4`. |
| `aws_eks_min_size` | String | Enter the min_size for the worker nodes. Defaults to `2`. |
| `aws_eks_additional_tags` | JSON | Add additional tags to the terraform [default tags](https://www.hashicorp.com/blog/default-tags-in-the-terraform-aws-provider), any tags put here will be added to eks provisioned resources.|
+
+
+
+#### **EKS Extras**
+| Name | Type | Description |
+|------------------|---------|------------------------------------|
+| `prometheus_enable` | Boolean | Set to `true` to deploy Prometheus through its Helm chart. |
+| `grafana_enable` | Boolean | Set to `true` to deploy Grafana through its Helm chart. |
+| `loki_enable` | Boolean | Set to `true` to deploy Loki through its Helm chart. |
+| `nginx_enable` | Boolean | Set to `true` to deploy the NGINX ingress controller through its Helm chart. |
| `input_helm_charts` | String | Relative path to the folder from project containing Helm charts to be installed. Could be uncompressed or compressed (.tgz) files. |
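+
+Your own charts can be installed through `input_helm_charts`. A minimal sketch, assuming the charts live in a `charts/` folder at the repo root (the folder name is only an example):
+
+```yaml
+    - name: Create EKS Cluster with custom charts
+      uses: bitovi/github-actions-deploy-eks@v0.1.0
+      with:
+        aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+        aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+        aws_default_region: us-east-1
+        input_helm_charts: charts
+```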
@@ -146,12 +167,27 @@ f| `aws_eks_security_group_name_master` | String | Define the security group nam
| `aws_vpc_id` | String | **Existing** AWS VPC ID to use. Accepts `vpc-###` values. |
| `aws_vpc_subnet_id` | String | **Existing** AWS VPC Subnet ID. If none provided, will pick one. (Ideal when there's only one). |
| `aws_vpc_enable_nat_gateway` | String | Adds a NAT gateway for each public subnet. Defaults to `true`. |
-| `aws_vpc_single_nat_gateway` | String | Toggles only one NAT gateway for all of the public subnets. Defaults to `false`. |
+| `aws_vpc_single_nat_gateway` | String | Toggles only one NAT gateway for all of the public subnets. Defaults to `true`. |
| `aws_vpc_external_nat_ip_ids` | String | **Existing** comma separated list of IP IDs if reusing. (ElasticIPs). |
| `aws_vpc_additional_tags` | JSON | Add additional tags to the terraform [default tags](https://www.hashicorp.com/blog/default-tags-in-the-terraform-aws-provider), any tags put here will be added to vpc provisioned resources.|
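+
+When reusing existing networking, only the IDs need to be provided. A minimal sketch, assuming a pre-existing VPC and subnet (the `vpc-`/`subnet-` values are placeholders):
+
+```yaml
+    - name: Create EKS Cluster in an existing VPC
+      uses: bitovi/github-actions-deploy-eks@v0.1.0
+      with:
+        aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+        aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+        aws_default_region: us-east-1
+        aws_vpc_create: false
+        aws_vpc_id: vpc-0a1b2c3d4e5f67890
+        aws_vpc_subnet_id: subnet-0a1b2c3d4e5f67890
+```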
+#### **Action Outputs**
+| Name | Description |
+|------------------|------------------------------------|
+| `aws_vpc_id` | The selected VPC ID used. |
+| `eks_cluster_name` | EKS Cluster name. |
+| `eks_cluster_role_arn` | EKS Cluster Role ARN. |
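+
+Outputs can be consumed by later steps through the step `id`. A minimal sketch, assuming the step is given `id: eks` (output names are those defined in `action.yaml`):
+
+```yaml
+    - name: Create EKS Cluster
+      id: eks
+      uses: bitovi/github-actions-deploy-eks@v0.1.0
+      with:
+        aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+        aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+
+    - name: Print cluster name
+      shell: bash
+      run: echo "Cluster name: ${{ steps.eks.outputs.eks_cluster_name }}"
+```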
+
+
+
+
+## Helm charts
+We provide **aws-auth**, **ingress**, **grafana**, **prometheus** and **loki** Helm charts that can be installed into the EKS cluster from the deployment repo. Pass inputs like `grafana_enable`, `loki_enable`, `nginx_enable` and/or `prometheus_enable` along with the AWS access information, and the corresponding charts will be installed as part of the EKS creation.
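+
+For example, a step enabling a couple of the bundled charts could look like this (a sketch; the chart configuration itself is handled by the action):
+
+```yaml
+    - name: Create EKS Cluster with monitoring
+      uses: bitovi/github-actions-deploy-eks@v0.1.0
+      with:
+        aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+        aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+        aws_default_region: us-east-1
+        prometheus_enable: true
+        grafana_enable: true
+```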
+
## Note about resource identifiers
Most resources will contain the tag `${GITHUB_ORG_NAME}-${GITHUB_REPO_NAME}-${GITHUB_BRANCH_NAME}`, some of them, even the resource name after.
@@ -161,9 +197,15 @@ We use the kubernetes style for this. For example, kubernetes -> k(# of characte
For some specific resources, we have a 32 characters limit. If the identifier length exceeds this number after compression, we remove the middle part and replace it for a hash made up from the string itself.
+## Note about tagging
+
+Each grouping module accepts its own set of additional tags, which will be merged into the common tags.
+See the first example for the correct formatting.
+
### S3 buckets naming
-Buckets names can be made of up to 63 characters. If the length allows us to add -tf-state, we will do so. If not, a simple -tf will be added.
+Bucket names can be up to 63 characters long. If the length allows, `-tf-state` will be appended to the name; otherwise, only `-tf` will be added.
+In all cases, the name hashing described above is used to keep the length within the limit.
## EC2 User data
@@ -175,8 +217,14 @@ As a default, if not setting any instance ami_id, we will take care of setting u
[BitOps](https://bitops.sh) allows you to define Infrastructure-as-Code for multiple tools in a central place. This action uses a BitOps [Operations Repository](https://bitops.sh/operations-repo-structure/) to set up the necessary Terraform and Ansible to create infrastructure and deploy to it.
## Contributing
-We would love for you to contribute to [bitovi/github-actions-deploy-docker-to-ec2](https://github.com/bitovi/github-actions-deploy-docker-to-ec2).
-Would you like to see additional features? [Create an issue](https://github.com/bitovi/github-actions-deploy-docker-to-ec2/issues/new) or a [Pull Requests](https://github.com/bitovi/github-actions-deploy-docker-to-ec2/pulls). We love discussing solutions!
+We would love for you to contribute to [bitovi/github-actions-deploy-eks](https://github.com/bitovi/github-actions-deploy-eks).
+Would you like to see additional features? [Create an issue](https://github.com/bitovi/github-actions-deploy-eks/issues/new) or a [Pull Request](https://github.com/bitovi/github-actions-deploy-eks/pulls). We love discussing solutions!
## License
-The scripts and documentation in this project are released under the [MIT License](https://github.com/bitovi/github-actions-deploy-docker-to-ec2/blob/main/LICENSE).
+The scripts and documentation in this project are released under the [MIT License](https://github.com/bitovi/github-actions-deploy-eks/blob/main/LICENSE).
+
+# Provided by Bitovi
+[Bitovi](https://www.bitovi.com/) is a proud supporter of Open Source software.
+
+# We want to hear from you.
+Come chat with us about open source in our Bitovi community [Discord](https://discord.gg/zAHn4JBVcX)!
diff --git a/action.yaml b/action.yaml
index 4f5c99f..1c5fdf0 100644
--- a/action.yaml
+++ b/action.yaml
@@ -1,4 +1,4 @@
-name: 'Deploy ESK to AWS'
+name: 'Deploy EKS to AWS'
description: 'Deploy Kubernetes Service (EKS) to AWS'
branding:
icon: upload-cloud
@@ -57,10 +57,10 @@ inputs:
description: 'Define if an EKS cluster should be created'
required: false
default: true
- aws_eks_security_group_name_master:
+ aws_eks_security_group_name_cluster:
description: "SG for ${var.aws_resource_identifier} - ${var.aws_eks_environment} - EKS Master"
required: false
- aws_eks_security_group_name_worker:
+ aws_eks_security_group_name_node:
description: "SG for ${var.aws_resource_identifier} - ${var.aws_eks_environment} - EKS Worker"
required: false
aws_eks_environment:
@@ -78,8 +78,17 @@ inputs:
aws_eks_cluster_name:
description: "EKS Cluster name. Defaults to eks-cluster"
required: false
+ aws_eks_cluster_admin_role_arn:
+ description: "Role ARN to grant cluster-admin permissions"
+ required: false
aws_eks_cluster_log_types:
- description: "EKS Log types, csv list"
+ description: "EKS Log types, comma separated list. Defaults to api,audit,authenticator"
+ required: false
+ aws_eks_cluster_log_retention_days:
+ description: "Days to store logs. Defaults to 7."
+ required: false
+ aws_eks_cluster_log_skip_destroy:
+ description: "Skip deletion of cluster logs if set to true"
required: false
aws_eks_cluster_version:
description: 'Specify the k8s cluster version'
@@ -112,6 +121,23 @@ inputs:
description: 'A JSON object of additional tags that will be included on created resources. Example: `{"key1": "value1", "key2": "value2"}`'
required: false
+ # EKS Extras
+  prometheus_enable:
+    description: 'Set to true to deploy Prometheus through its Helm chart'
+    required: false
+  grafana_enable:
+    description: 'Set to true to deploy Grafana through its Helm chart'
+    required: false
+  loki_enable:
+    description: 'Set to true to deploy Loki through its Helm chart'
+    required: false
+  nginx_enable:
+    description: 'Set to true to deploy the NGINX ingress controller through its Helm chart'
+    required: false
+ input_helm_charts:
+ description: 'Relative path to the folder from project containing Helm charts to be installed. Could be uncompressed or compressed (.tgz) files.'
+ required: false
+
# AWS VPC Inputs
aws_vpc_create:
description: 'Define if a VPC should be created'
@@ -149,6 +175,7 @@ inputs:
aws_vpc_single_nat_gateway:
description: 'Creates only one NAT gateway'
required: false
+ default: true
aws_vpc_external_nat_ip_ids:
description: 'Comma separated list of IP IDS to reuse in the NAT gateways'
required: false
@@ -156,18 +183,48 @@ inputs:
description: 'A JSON object of additional tags that will be included on created resources. Example: `{"key1": "value1", "key2": "value2"}`'
required: false
- # Helm input
- input_helm_charts:
- description: 'Relative path to the folder from project containing Helm charts to be installed. Could be uncompressed or compressed (.tgz) files.'
- required: false
+outputs:
+ # VPC
+ aws_vpc_id:
+ description: "The selected VPC ID used."
+ value: ${{ steps.deploy.outputs.aws_vpc_id }}
+ eks_cluster_name:
+ description: "EKS Cluster name"
+ value: ${{ steps.deploy.outputs.eks_cluster_name }}
+ eks_cluster_role_arn:
+ description: "EKS Role ARN"
+ value: ${{ steps.deploy.outputs.eks_cluster_role_arn }}
runs:
using: 'composite'
steps:
+ - name: If grafana is enabled
+ if: ${{ inputs.grafana_enable == 'true' }}
+ shell: bash
+ run: |
+ mv $GITHUB_ACTION_PATH/helm-charts/grafana $GITHUB_ACTION_PATH/operations/deployment/helm
+
+ - name: If prometheus is enabled
+ if: ${{ inputs.prometheus_enable == 'true' }}
+ shell: bash
+ run: |
+ mv $GITHUB_ACTION_PATH/helm-charts/prometheus $GITHUB_ACTION_PATH/operations/deployment/helm
+
+ - name: If loki is enabled
+ if: ${{ inputs.loki_enable == 'true' }}
+ shell: bash
+ run: |
+ mv $GITHUB_ACTION_PATH/helm-charts/loki $GITHUB_ACTION_PATH/operations/deployment/helm
+
+ - name: If nginx is enabled
+ if: ${{ inputs.nginx_enable == 'true' }}
+ shell: bash
+ run: |
+ mv $GITHUB_ACTION_PATH/helm-charts/ingress-nginx $GITHUB_ACTION_PATH/operations/deployment/helm
- name: Deploy with BitOps
id: deploy
- uses: bitovi/github-actions-commons@main
+ uses: bitovi/github-actions-commons@v0.0.13
with:
bitops_code_only: ${{ inputs.bitops_code_only }}
bitops_code_store: ${{ inputs.bitops_code_store }}
@@ -192,14 +249,17 @@ runs:
# EKS
aws_eks_create: ${{ inputs.aws_eks_create }}
- aws_eks_security_group_name_master: ${{ inputs.aws_eks_security_group_name_master }}
- aws_eks_security_group_name_worker: ${{ inputs.aws_eks_security_group_name_worker }}
+ aws_eks_security_group_name_cluster: ${{ inputs.aws_eks_security_group_name_cluster }}
+ aws_eks_security_group_name_node: ${{ inputs.aws_eks_security_group_name_node }}
aws_eks_environment: ${{ inputs.aws_eks_environment }}
aws_eks_management_cidr: ${{ inputs.aws_eks_management_cidr }}
aws_eks_allowed_ports: ${{ inputs.aws_eks_allowed_ports }}
aws_eks_allowed_ports_cidr: ${{ inputs.aws_eks_allowed_ports_cidr }}
aws_eks_cluster_name: ${{ inputs.aws_eks_cluster_name }}
+ aws_eks_cluster_admin_role_arn: ${{ inputs.aws_eks_cluster_admin_role_arn }}
aws_eks_cluster_log_types: ${{ inputs.aws_eks_cluster_log_types}}
+ aws_eks_cluster_log_retention_days: ${{ inputs.aws_eks_cluster_log_retention_days }}
+ aws_eks_cluster_log_skip_destroy: ${{ inputs.aws_eks_cluster_log_skip_destroy }}
aws_eks_cluster_version: ${{ inputs.aws_eks_cluster_version }}
aws_eks_instance_type: ${{ inputs.aws_eks_instance_type }}
aws_eks_instance_ami_id: ${{ inputs.aws_eks_instance_ami_id }}
diff --git a/operations/deployment/helm/aws-auth/Chart.yaml b/helm-charts/aws-auth/Chart.yaml
similarity index 100%
rename from operations/deployment/helm/aws-auth/Chart.yaml
rename to helm-charts/aws-auth/Chart.yaml
diff --git a/operations/deployment/helm/aws-auth/bitops.config.yaml b/helm-charts/aws-auth/bitops.config.yaml
similarity index 92%
rename from operations/deployment/helm/aws-auth/bitops.config.yaml
rename to helm-charts/aws-auth/bitops.config.yaml
index 60d7548..6bebb77 100644
--- a/operations/deployment/helm/aws-auth/bitops.config.yaml
+++ b/helm-charts/aws-auth/bitops.config.yaml
@@ -4,7 +4,7 @@ helm:
timeout: 200s
debug: true
atomic: false
- force: false
+ force: true
dry-run: false
options:
release-name: aws-auth
diff --git a/operations/deployment/helm/aws-auth/example.bitops.config.yaml b/helm-charts/aws-auth/example.bitops.config.yaml
similarity index 100%
rename from operations/deployment/helm/aws-auth/example.bitops.config.yaml
rename to helm-charts/aws-auth/example.bitops.config.yaml
diff --git a/operations/deployment/helm/aws-auth/templates/aws-auth.yaml b/helm-charts/aws-auth/templates/aws-auth.yaml
similarity index 75%
rename from operations/deployment/helm/aws-auth/templates/aws-auth.yaml
rename to helm-charts/aws-auth/templates/aws-auth.yaml
index e41dea6..d3c6ffc 100644
--- a/operations/deployment/helm/aws-auth/templates/aws-auth.yaml
+++ b/helm-charts/aws-auth/templates/aws-auth.yaml
@@ -3,6 +3,11 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.configmap.name }}
+ labels:
+ app.kubernetes.io/managed-by: Helm
+ annotations:
+ meta.helm.sh/release-name: aws-auth
+ meta.helm.sh/release-namespace: kube-system
data:
{{- if .Values.data.mapAccounts }}
mapAccounts: |
diff --git a/operations/deployment/helm/aws-auth/values.yaml b/helm-charts/aws-auth/values.yaml
similarity index 100%
rename from operations/deployment/helm/aws-auth/values.yaml
rename to helm-charts/aws-auth/values.yaml
diff --git a/helm-charts/grafana/.helmignore b/helm-charts/grafana/.helmignore
new file mode 100644
index 0000000..8cade13
--- /dev/null
+++ b/helm-charts/grafana/.helmignore
@@ -0,0 +1,23 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.vscode
+.project
+.idea/
+*.tmproj
+OWNERS
diff --git a/helm-charts/grafana/Chart.yaml b/helm-charts/grafana/Chart.yaml
new file mode 100644
index 0000000..d1fdd26
--- /dev/null
+++ b/helm-charts/grafana/Chart.yaml
@@ -0,0 +1,33 @@
+apiVersion: v2
+name: grafana
+version: 7.0.17
+appVersion: 10.2.2
+kubeVersion: "^1.8.0-0"
+description: The leading tool for querying and visualizing time series and metrics.
+home: https://grafana.com
+icon: https://artifacthub.io/image/b4fed1a7-6c8f-4945-b99d-096efa3e4116
+sources:
+ - https://github.com/grafana/grafana
+ - https://github.com/grafana/helm-charts
+annotations:
+ "artifacthub.io/license": AGPL-3.0-only
+ "artifacthub.io/links": |
+ - name: Chart Source
+ url: https://github.com/grafana/helm-charts
+ - name: Upstream Project
+ url: https://github.com/grafana/grafana
+maintainers:
+ - name: zanhsieh
+ email: zanhsieh@gmail.com
+ - name: rtluckie
+ email: rluckie@cisco.com
+ - name: maorfr
+ email: maor.friedman@redhat.com
+ - name: Xtigyro
+ email: miroslav.hadzhiev@gmail.com
+ - name: torstenwalter
+ email: mail@torstenwalter.de
+type: application
+keywords:
+ - monitoring
+ - metric
\ No newline at end of file
diff --git a/helm-charts/grafana/ENV_FILE b/helm-charts/grafana/ENV_FILE
new file mode 100644
index 0000000..e69de29
diff --git a/helm-charts/grafana/README.md b/helm-charts/grafana/README.md
new file mode 100644
index 0000000..77abbed
--- /dev/null
+++ b/helm-charts/grafana/README.md
@@ -0,0 +1,769 @@
+# Grafana Helm Chart
+
+* Installs the web dashboarding system [Grafana](http://grafana.org/)
+
+## Get Repo Info
+
+```console
+helm repo add grafana https://grafana.github.io/helm-charts
+helm repo update
+```
+
+_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```console
+helm install my-release grafana/grafana
+```
+
+## Uninstalling the Chart
+
+To uninstall/delete the my-release deployment:
+
+```console
+helm delete my-release
+```
+
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Upgrading an existing Release to a new major version
+
+A major chart version change (like v1.2.3 -> v2.0.0) indicates that there is an
+incompatible breaking change needing manual actions.
+
+### To 4.0.0 (And 3.12.1)
+
+This version requires Helm >= 2.12.0.
+
+### To 5.0.0
+
+You have to add --force to your helm upgrade command as the labels of the chart have changed.
+
+### To 6.0.0
+
+This version requires Helm >= 3.1.0.
+
+### To 7.0.0
+
+For consistency with other Helm charts, the `global.image.registry` parameter was renamed
+to `global.imageRegistry`. If you were not previously setting `global.image.registry`, no action
+is required on upgrade. If you were previously setting `global.image.registry`, you will
+need to instead set `global.imageRegistry`.
+
+## Configuration
+
+| Parameter | Description | Default |
+|-------------------------------------------|-----------------------------------------------|---------------------------------------------------------|
+| `replicas` | Number of nodes | `1` |
+| `podDisruptionBudget.minAvailable` | Pod disruption minimum available | `nil` |
+| `podDisruptionBudget.maxUnavailable` | Pod disruption maximum unavailable | `nil` |
+| `podDisruptionBudget.apiVersion` | Pod disruption apiVersion | `nil` |
+| `deploymentStrategy` | Deployment strategy | `{ "type": "RollingUpdate" }` |
+| `livenessProbe`                           | Liveness Probe settings                       | `{ "httpGet": { "path": "/api/health", "port": 3000 }, "initialDelaySeconds": 60, "timeoutSeconds": 30, "failureThreshold": 10 }` |
+| `readinessProbe` | Readiness Probe settings | `{ "httpGet": { "path": "/api/health", "port": 3000 } }`|
+| `securityContext` | Deployment securityContext | `{"runAsUser": 472, "runAsGroup": 472, "fsGroup": 472}` |
+| `priorityClassName` | Name of Priority Class to assign pods | `nil` |
+| `image.registry` | Image registry | `docker.io` |
+| `image.repository` | Image repository | `grafana/grafana` |
+| `image.tag` | Overrides the Grafana image tag whose default is the chart appVersion (`Must be >= 5.0.0`) | `` |
+| `image.sha` | Image sha (optional) | `` |
+| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `image.pullSecrets` | Image pull secrets (can be templated) | `[]` |
+| `service.enabled` | Enable grafana service | `true` |
+| `service.type` | Kubernetes service type | `ClusterIP` |
+| `service.port` | Kubernetes port where service is exposed | `80` |
+| `service.portName` | Name of the port on the service | `service` |
+| `service.appProtocol` | Adds the appProtocol field to the service | `` |
+| `service.targetPort`                      | Internal port of the service                  | `3000`                                                   |
+| `service.nodePort` | Kubernetes service nodePort | `nil` |
+| `service.annotations` | Service annotations (can be templated) | `{}` |
+| `service.labels` | Custom labels | `{}` |
+| `service.clusterIP` | internal cluster service IP | `nil` |
+| `service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `nil` |
+| `service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to lb (if supported) | `[]` |
+| `service.externalIPs` | service external IP addresses | `[]` |
+| `service.externalTrafficPolicy` | change the default externalTrafficPolicy | `nil` |
+| `headlessService` | Create a headless service | `false` |
+| `extraExposePorts` | Additional service ports for sidecar containers| `[]` |
+| `hostAliases` | adds rules to the pod's /etc/hosts | `[]` |
+| `ingress.enabled` | Enables Ingress | `false` |
+| `ingress.annotations` | Ingress annotations (values are templated) | `{}` |
+| `ingress.labels` | Custom labels | `{}` |
+| `ingress.path` | Ingress accepted path | `/` |
+| `ingress.pathType` | Ingress type of path | `Prefix` |
+| `ingress.hosts` | Ingress accepted hostnames | `["chart-example.local"]` |
+| `ingress.extraPaths` | Ingress extra paths to prepend to every host configuration. Useful when configuring [custom actions with AWS ALB Ingress Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/guide/ingress/annotations/#actions). Requires `ingress.hosts` to have one or more host entries. | `[]` |
+| `ingress.tls` | Ingress TLS configuration | `[]` |
+| `ingress.ingressClassName` | Ingress Class Name. MAY be required for Kubernetes versions >= 1.18 | `""` |
+| `resources` | CPU/Memory resource requests/limits | `{}` |
+| `nodeSelector` | Node labels for pod assignment | `{}` |
+| `tolerations` | Toleration labels for pod assignment | `[]` |
+| `affinity` | Affinity settings for pod assignment | `{}` |
+| `extraInitContainers` | Init containers to add to the grafana pod | `{}` |
+| `extraContainers` | Sidecar containers to add to the grafana pod | `""` |
+| `extraContainerVolumes` | Volumes that can be mounted in sidecar containers | `[]` |
+| `extraLabels` | Custom labels for all manifests | `{}` |
+| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` |
+| `persistence.enabled` | Use persistent volume to store data | `false` |
+| `persistence.type` | Type of persistence (`pvc` or `statefulset`) | `pvc` |
+| `persistence.size` | Size of persistent volume claim | `10Gi` |
+| `persistence.existingClaim` | Use an existing PVC to persist data (can be templated) | `nil` |
+| `persistence.storageClassName` | Type of persistent volume claim | `nil` |
+| `persistence.accessModes` | Persistence access modes | `[ReadWriteOnce]` |
+| `persistence.annotations` | PersistentVolumeClaim annotations | `{}` |
+| `persistence.finalizers` | PersistentVolumeClaim finalizers | `[ "kubernetes.io/pvc-protection" ]` |
+| `persistence.extraPvcLabels` | Extra labels to apply to a PVC. | `{}` |
+| `persistence.subPath` | Mount a sub dir of the persistent volume (can be templated) | `nil` |
+| `persistence.inMemory.enabled` | If persistence is not enabled, whether to mount the local storage in-memory to improve performance | `false` |
+| `persistence.inMemory.sizeLimit` | SizeLimit for the in-memory local storage | `nil` |
+| `initChownData.enabled` | If false, don't reset data ownership at startup | true |
+| `initChownData.image.registry` | init-chown-data container image registry | `docker.io` |
+| `initChownData.image.repository` | init-chown-data container image repository | `busybox` |
+| `initChownData.image.tag` | init-chown-data container image tag | `1.31.1` |
+| `initChownData.image.sha` | init-chown-data container image sha (optional)| `""` |
+| `initChownData.image.pullPolicy` | init-chown-data container image pull policy | `IfNotPresent` |
+| `initChownData.resources` | init-chown-data pod resource requests & limits | `{}` |
+| `schedulerName` | Alternate scheduler name | `nil` |
+| `env` | Extra environment variables passed to pods | `{}` |
+| `envValueFrom` | Environment variables from alternate sources. See the API docs on [EnvVarSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#envvarsource-v1-core) for format details. Can be templated | `{}` |
+| `envFromSecret` | Name of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `""` |
+| `envFromSecrets` | List of Kubernetes secrets (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `[]` |
+| `envFromConfigMaps` | List of Kubernetes ConfigMaps (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `[]` |
+| `envRenderSecret` | Sensible environment variables passed to pods and stored as secret. (passed through [tpl](https://helm.sh/docs/howto/charts_tips_and_tricks/#using-the-tpl-function)) | `{}` |
+| `enableServiceLinks` | Inject Kubernetes services as environment variables. | `true` |
+| `extraSecretMounts` | Additional grafana server secret mounts | `[]` |
+| `extraVolumeMounts` | Additional grafana server volume mounts | `[]` |
+| `extraVolumes` | Additional Grafana server volumes | `[]` |
+| `createConfigmap` | Enable creating the grafana configmap | `true` |
+| `extraConfigmapMounts` | Additional grafana server configMap volume mounts (values are templated) | `[]` |
+| `extraEmptyDirMounts` | Additional grafana server emptyDir volume mounts | `[]` |
+| `plugins` | Plugins to be loaded along with Grafana | `[]` |
+| `datasources` | Configure grafana datasources (passed through tpl) | `{}` |
+| `alerting` | Configure grafana alerting (passed through tpl) | `{}` |
+| `notifiers` | Configure grafana notifiers | `{}` |
+| `dashboardProviders` | Configure grafana dashboard providers | `{}` |
+| `dashboards` | Dashboards to import | `{}` |
+| `dashboardsConfigMaps` | ConfigMaps reference that contains dashboards | `{}` |
+| `grafana.ini` | Grafana's primary configuration | `{}` |
+| `global.imageRegistry` | Global image pull registry for all images. | `null` |
+| `global.imagePullSecrets` | Global image pull secrets (can be templated). Allows either an array of {name: pullSecret} maps (k8s-style), or an array of strings (more common helm-style). | `[]` |
+| `ldap.enabled` | Enable LDAP authentication | `false` |
+| `ldap.existingSecret` | The name of an existing secret containing the `ldap.toml` file, this must have the key `ldap-toml`. | `""` |
+| `ldap.config` | Grafana's LDAP configuration | `""` |
+| `annotations` | Deployment annotations | `{}` |
+| `labels` | Deployment labels | `{}` |
+| `podAnnotations` | Pod annotations | `{}` |
+| `podLabels` | Pod labels | `{}` |
+| `podPortName` | Name of the grafana port on the pod | `grafana` |
+| `lifecycleHooks` | Lifecycle hooks for podStart and preStop [Example](https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers) | `{}` |
+| `sidecar.image.registry` | Sidecar image registry | `quay.io` |
+| `sidecar.image.repository` | Sidecar image repository | `kiwigrid/k8s-sidecar` |
+| `sidecar.image.tag` | Sidecar image tag | `1.24.6` |
+| `sidecar.image.sha` | Sidecar image sha (optional) | `""` |
+| `sidecar.imagePullPolicy` | Sidecar image pull policy | `IfNotPresent` |
+| `sidecar.resources` | Sidecar resources | `{}` |
+| `sidecar.securityContext` | Sidecar securityContext | `{}` |
+| `sidecar.enableUniqueFilenames` | Sets the kiwigrid/k8s-sidecar UNIQUE_FILENAMES environment variable. If set to `true` the sidecar will create unique filenames where duplicate data keys exist between ConfigMaps and/or Secrets within the same or multiple Namespaces. | `false` |
+| `sidecar.alerts.enabled` | Enables the cluster wide search for alerts and adds/updates/deletes them in grafana | `false` |
+| `sidecar.alerts.label` | Label that config maps with alerts should have to be added | `grafana_alert` |
+| `sidecar.alerts.labelValue` | Label value that config maps with alerts should have to be added | `""` |
+| `sidecar.alerts.searchNamespace` | Namespaces list. If specified, the sidecar will search for alerts config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces. | `nil` |
+| `sidecar.alerts.watchMethod` | Method used to detect ConfigMap changes. With WATCH the sidecar performs a WATCH request; with SLEEP it lists all ConfigMaps, then sleeps for 60 seconds. | `WATCH` |
+| `sidecar.alerts.resource` | Whether the sidecar should look into secrets, configmaps or both. | `both` |
+| `sidecar.alerts.reloadURL` | Full url of datasource configuration reload API endpoint, to invoke after a config-map change | `"http://localhost:3000/api/admin/provisioning/alerting/reload"` |
+| `sidecar.alerts.skipReload` | Enabling this omits defining the REQ_URL and REQ_METHOD environment variables | `false` |
+| `sidecar.alerts.initDatasources` | Set to true to deploy the datasource sidecar as an initContainer. This is needed if skipReload is true, to load any alerts defined at startup time. | `false` |
+| `sidecar.alerts.extraMounts` | Additional alerts sidecar volume mounts. | `[]` |
+| `sidecar.dashboards.enabled` | Enables the cluster wide search for dashboards and adds/updates/deletes them in grafana | `false` |
+| `sidecar.dashboards.SCProvider` | Enables creation of sidecar provider | `true` |
+| `sidecar.dashboards.provider.name` | Unique name of the grafana provider | `sidecarProvider` |
+| `sidecar.dashboards.provider.orgid` | Id of the organisation to which the dashboards should be added | `1` |
+| `sidecar.dashboards.provider.folder` | Logical folder in which grafana groups dashboards | `""` |
+| `sidecar.dashboards.provider.disableDelete` | Activate to avoid the deletion of imported dashboards | `false` |
+| `sidecar.dashboards.provider.allowUiUpdates` | Allow updating provisioned dashboards from the UI | `false` |
+| `sidecar.dashboards.provider.type` | Provider type | `file` |
+| `sidecar.dashboards.provider.foldersFromFilesStructure` | Allow Grafana to replicate dashboard structure from filesystem. | `false` |
+| `sidecar.dashboards.watchMethod` | Method used to detect ConfigMap changes. With WATCH the sidecar performs a WATCH request; with SLEEP it lists all ConfigMaps, then sleeps for 60 seconds. | `WATCH` |
+| `sidecar.skipTlsVerify` | Set to true to skip tls verification for kube api calls | `nil` |
+| `sidecar.dashboards.label` | Label that config maps with dashboards should have to be added | `grafana_dashboard` |
+| `sidecar.dashboards.labelValue` | Label value that config maps with dashboards should have to be added | `""` |
+| `sidecar.dashboards.folder` | Folder in the pod that should hold the collected dashboards (unless `sidecar.dashboards.defaultFolderName` is set). This path will be mounted. | `/tmp/dashboards` |
+| `sidecar.dashboards.folderAnnotation` | The annotation the sidecar will look for in configmaps to override the destination folder for files | `nil` |
+| `sidecar.dashboards.defaultFolderName` | The default folder name, it will create a subfolder under the `sidecar.dashboards.folder` and put dashboards in there instead | `nil` |
+| `sidecar.dashboards.searchNamespace` | Namespaces list. If specified, the sidecar will search for dashboards config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces. | `nil` |
+| `sidecar.dashboards.script` | Absolute path to shell script to execute after a configmap got reloaded. | `nil` |
+| `sidecar.dashboards.reloadURL` | Full url of dashboards configuration reload API endpoint, to invoke after a config-map change | `"http://localhost:3000/api/admin/provisioning/dashboards/reload"` |
+| `sidecar.dashboards.skipReload` | Enabling this omits defining the REQ_USERNAME, REQ_PASSWORD, REQ_URL and REQ_METHOD environment variables | `false` |
+| `sidecar.dashboards.resource` | Whether the sidecar should look into secrets, configmaps or both. | `both` |
+| `sidecar.dashboards.extraMounts` | Additional dashboard sidecar volume mounts. | `[]` |
+| `sidecar.datasources.enabled` | Enables the cluster wide search for datasources and adds/updates/deletes them in grafana | `false` |
+| `sidecar.datasources.label` | Label that config maps with datasources should have to be added | `grafana_datasource` |
+| `sidecar.datasources.labelValue` | Label value that config maps with datasources should have to be added | `""` |
+| `sidecar.datasources.searchNamespace` | Namespaces list. If specified, the sidecar will search for datasources config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces. | `nil` |
+| `sidecar.datasources.watchMethod` | Method used to detect ConfigMap changes. With WATCH the sidecar performs a WATCH request; with SLEEP it lists all ConfigMaps, then sleeps for 60 seconds. | `WATCH` |
+| `sidecar.datasources.resource` | Whether the sidecar should look into secrets, configmaps or both. | `both` |
+| `sidecar.datasources.reloadURL` | Full url of datasource configuration reload API endpoint, to invoke after a config-map change | `"http://localhost:3000/api/admin/provisioning/datasources/reload"` |
+| `sidecar.datasources.skipReload` | Enabling this omits defining the REQ_URL and REQ_METHOD environment variables | `false` |
+| `sidecar.datasources.initDatasources` | Set to true to deploy the datasource sidecar as an initContainer in addition to a container. This is needed if skipReload is true, to load any datasources defined at startup time. | `false` |
+| `sidecar.notifiers.enabled` | Enables the cluster wide search for notifiers and adds/updates/deletes them in grafana | `false` |
+| `sidecar.notifiers.label` | Label that config maps with notifiers should have to be added | `grafana_notifier` |
+| `sidecar.notifiers.labelValue` | Label value that config maps with notifiers should have to be added | `""` |
+| `sidecar.notifiers.searchNamespace` | Namespaces list. If specified, the sidecar will search for notifiers config-maps (or secrets) inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces. | `nil` |
+| `sidecar.notifiers.watchMethod` | Method used to detect ConfigMap changes. With WATCH the sidecar performs a WATCH request; with SLEEP it lists all ConfigMaps, then sleeps for 60 seconds. | `WATCH` |
+| `sidecar.notifiers.resource` | Whether the sidecar should look into secrets, configmaps or both. | `both` |
+| `sidecar.notifiers.reloadURL` | Full url of notifier configuration reload API endpoint, to invoke after a config-map change | `"http://localhost:3000/api/admin/provisioning/notifications/reload"` |
+| `sidecar.notifiers.skipReload` | Enabling this omits defining the REQ_URL and REQ_METHOD environment variables | `false` |
+| `sidecar.notifiers.initNotifiers` | Set to true to deploy the notifier sidecar as an initContainer in addition to a container. This is needed if skipReload is true, to load any notifiers defined at startup time. | `false` |
+| `smtp.existingSecret` | The name of an existing secret containing the SMTP credentials. | `""` |
+| `smtp.userKey` | The key in the existing SMTP secret containing the username. | `"user"` |
+| `smtp.passwordKey` | The key in the existing SMTP secret containing the password. | `"password"` |
+| `admin.existingSecret` | The name of an existing secret containing the admin credentials (can be templated). | `""` |
+| `admin.userKey` | The key in the existing admin secret containing the username. | `"admin-user"` |
+| `admin.passwordKey` | The key in the existing admin secret containing the password. | `"admin-password"` |
+| `serviceAccount.autoMount` | Automount the service account token in the pod | `true` |
+| `serviceAccount.annotations` | ServiceAccount annotations | |
+| `serviceAccount.create` | Create service account | `true` |
+| `serviceAccount.labels` | ServiceAccount labels | `{}` |
+| `serviceAccount.name` | Service account name to use; when empty, set to the created account if `serviceAccount.create` is set, else to `default` | `` |
+| `serviceAccount.nameTest` | Service account name to use for tests; when empty, set to the created account if `serviceAccount.create` is set, else to `default` | `nil` |
+| `rbac.create` | Create and use RBAC resources | `true` |
+| `rbac.namespaced` | Creates a Role and RoleBinding instead of the default ClusterRole and ClusterRoleBinding for the grafana instance | `false` |
+| `rbac.useExistingRole` | Set to a role name to use an existing role - skipping role creation - but still creating the ServiceAccount and RoleBinding to the role name set here. | `nil` |
+| `rbac.pspEnabled` | Create PodSecurityPolicy (with `rbac.create`, grant roles permissions as well) | `false` |
+| `rbac.pspUseAppArmor` | Enforce AppArmor in created PodSecurityPolicy (requires `rbac.pspEnabled`) | `false` |
+| `rbac.extraRoleRules` | Additional rules to add to the Role | `[]` |
+| `rbac.extraClusterRoleRules` | Additional rules to add to the ClusterRole | `[]` |
+| `command` | Define command to be executed by grafana container at startup | `nil` |
+| `args` | Define additional args if command is used | `nil` |
+| `testFramework.enabled` | Whether to create test-related resources | `true` |
+| `testFramework.image.registry` | `test-framework` image registry. | `docker.io` |
+| `testFramework.image.repository` | `test-framework` image repository. | `bats/bats` |
+| `testFramework.image.tag` | `test-framework` image tag. | `v1.4.1` |
+| `testFramework.imagePullPolicy` | `test-framework` image pull policy. | `IfNotPresent` |
+| `testFramework.securityContext` | `test-framework` securityContext | `{}` |
+| `downloadDashboards.env` | Environment variables to be passed to the `download-dashboards` container | `{}` |
+| `downloadDashboards.envFromSecret` | Name of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `""` |
+| `downloadDashboards.resources` | Resources of `download-dashboards` container | `{}` |
+| `downloadDashboardsImage.registry` | Curl docker image registry | `docker.io` |
+| `downloadDashboardsImage.repository` | Curl docker image repository | `curlimages/curl` |
+| `downloadDashboardsImage.tag` | Curl docker image tag | `7.73.0` |
+| `downloadDashboardsImage.sha` | Curl docker image sha (optional) | `""` |
+| `downloadDashboardsImage.pullPolicy` | Curl docker image pull policy | `IfNotPresent` |
+| `namespaceOverride` | Override the deployment namespace | `""` (`Release.Namespace`) |
+| `serviceMonitor.enabled` | Use servicemonitor from prometheus operator | `false` |
+| `serviceMonitor.namespace` | Namespace this servicemonitor is installed in | |
+| `serviceMonitor.interval` | How frequently Prometheus should scrape | `1m` |
+| `serviceMonitor.path` | Path to scrape | `/metrics` |
+| `serviceMonitor.scheme` | Scheme to use for metrics scraping | `http` |
+| `serviceMonitor.tlsConfig` | TLS configuration block for the endpoint | `{}` |
+| `serviceMonitor.labels` | Labels for the servicemonitor passed to Prometheus Operator | `{}` |
+| `serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `30s` |
+| `serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping. | `[]` |
+| `serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples before ingestion. | `[]` |
+| `revisionHistoryLimit` | Number of old ReplicaSets to retain | `10` |
+| `imageRenderer.enabled` | Enable the image-renderer deployment & service | `false` |
+| `imageRenderer.image.registry` | image-renderer Image registry | `docker.io` |
+| `imageRenderer.image.repository` | image-renderer Image repository | `grafana/grafana-image-renderer` |
+| `imageRenderer.image.tag` | image-renderer Image tag | `latest` |
+| `imageRenderer.image.sha` | image-renderer Image sha (optional) | `""` |
+| `imageRenderer.image.pullPolicy` | image-renderer ImagePullPolicy | `Always` |
+| `imageRenderer.env` | extra env-vars for image-renderer | `{}` |
+| `imageRenderer.envValueFrom` | Environment variables for image-renderer from alternate sources. See the API docs on [EnvVarSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#envvarsource-v1-core) for format details. Can be templated | `{}` |
+| `imageRenderer.serviceAccountName` | image-renderer deployment serviceAccountName | `""` |
+| `imageRenderer.securityContext` | image-renderer deployment securityContext | `{}` |
+| `imageRenderer.podAnnotations` | image-renderer pod annotations | `{}` |
+| `imageRenderer.hostAliases` | image-renderer deployment Host Aliases | `[]` |
+| `imageRenderer.priorityClassName` | image-renderer deployment priority class | `''` |
+| `imageRenderer.service.enabled` | Enable the image-renderer service | `true` |
+| `imageRenderer.service.portName` | image-renderer service port name | `http` |
+| `imageRenderer.service.port` | image-renderer port used by deployment | `8081` |
+| `imageRenderer.service.targetPort` | image-renderer service port used by service | `8081` |
+| `imageRenderer.appProtocol` | Adds the appProtocol field to the service | `` |
+| `imageRenderer.grafanaSubPath` | Grafana sub path to use for image renderer callback url | `''` |
+| `imageRenderer.podPortName` | name of the image-renderer port on the pod | `http` |
+| `imageRenderer.revisionHistoryLimit` | number of image-renderer replica sets to keep | `10` |
+| `imageRenderer.networkPolicy.limitIngress` | Enable a NetworkPolicy to limit inbound traffic from only the created grafana pods | `true` |
+| `imageRenderer.networkPolicy.limitEgress` | Enable a NetworkPolicy to limit outbound traffic to only the created grafana pods | `false` |
+| `imageRenderer.resources` | Set resource limits for image-renderer pods | `{}` |
+| `imageRenderer.nodeSelector` | Node labels for pod assignment | `{}` |
+| `imageRenderer.tolerations` | Toleration labels for pod assignment | `[]` |
+| `imageRenderer.affinity` | Affinity settings for pod assignment | `{}` |
+| `networkPolicy.enabled` | Enable creation of NetworkPolicy resources. | `false` |
+| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
+| `networkPolicy.explicitNamespacesSelector` | A Kubernetes LabelSelector to explicitly select namespaces from which traffic could be allowed | `{}` |
+| `networkPolicy.ingress` | Enable the creation of an ingress network policy | `true` |
+| `networkPolicy.egress.enabled` | Enable the creation of an egress network policy | `false` |
+| `networkPolicy.egress.ports` | An array of ports to allow for the egress | `[]` |
+| `enableKubeBackwardCompatibility` | Enable backward compatibility with Kubernetes versions below 1.13, whose pod definitions don't have the enableServiceLinks option | `false` |
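+
+### Example of admin credentials from an existing secret
+
+The `admin.*` values above can point Grafana at an existing secret instead of having the chart generate one. A minimal sketch, assuming a secret named `grafana-admin` with keys `admin-user` and `admin-password` already exists in the release namespace:
+
+```yaml
+admin:
+  existingSecret: grafana-admin
+  userKey: admin-user
+  passwordKey: admin-password
+```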
+
+### Example ingress with path
+
+With grafana 6.3 and above
+
+```yaml
+grafana.ini:
+ server:
+ domain: monitoring.example.com
+ root_url: "%(protocol)s://%(domain)s/grafana"
+ serve_from_sub_path: true
+ingress:
+ enabled: true
+ hosts:
+ - "monitoring.example.com"
+ path: "/grafana"
+```
+
+### Example of extraVolumeMounts and extraVolumes
+
+Configure additional volumes with `extraVolumes` and volume mounts with `extraVolumeMounts`.
+
+Example for `extraVolumeMounts` and corresponding `extraVolumes`:
+
+```yaml
+extraVolumeMounts:
+ - name: plugins
+ mountPath: /var/lib/grafana/plugins
+ subPath: configs/grafana/plugins
+ readOnly: false
+ - name: dashboards
+ mountPath: /var/lib/grafana/dashboards
+ hostPath: /usr/shared/grafana/dashboards
+ readOnly: false
+
+extraVolumes:
+ - name: plugins
+ existingClaim: existing-grafana-claim
+ - name: dashboards
+ hostPath: /usr/shared/grafana/dashboards
+```
+
+Volumes default to `emptyDir`. Set to `persistentVolumeClaim`,
+`hostPath`, `csi`, or `configMap` for other types. For a
+`persistentVolumeClaim`, specify an existing claim name with
+`existingClaim`.
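+
+As a sketch of a non-default volume type, a ConfigMap-backed extra volume might look like the following (assuming a ConfigMap named `grafana-extra-config` exists; exact key names can vary between chart versions):
+
+```yaml
+extraVolumes:
+  - name: extra-config
+    configMap: grafana-extra-config
+
+extraVolumeMounts:
+  - name: extra-config
+    mountPath: /etc/grafana/extra
+    readOnly: true
+```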
+
+## Import dashboards
+
+There are a few methods to import dashboards into Grafana. Below are some examples and explanations of how to use each method:
+
+```yaml
+dashboards:
+ default:
+ some-dashboard:
+ json: |
+ {
+ "annotations":
+
+ ...
+ # Complete json file here
+ ...
+
+ "title": "Some Dashboard",
+ "uid": "abcd1234",
+ "version": 1
+ }
+ custom-dashboard:
+ # This is a path to a file inside the dashboards directory inside the chart directory
+ file: dashboards/custom-dashboard.json
+ prometheus-stats:
+ # Ref: https://grafana.com/dashboards/2
+ gnetId: 2
+ revision: 2
+ datasource: Prometheus
+ loki-dashboard-quick-search:
+ gnetId: 12019
+ revision: 2
+ datasource:
+ - name: DS_PROMETHEUS
+ value: Prometheus
+ - name: DS_LOKI
+ value: Loki
+ local-dashboard:
+ url: https://raw.githubusercontent.com/user/repository/master/dashboards/dashboard.json
+```
+
+## BASE64 dashboards
+
+Dashboards can be stored on a server that does not return JSON directly but instead returns a Base64 encoded file (e.g. Gerrit).
+For this, a `b64content` parameter has been added to the `url` use case: if you set `b64content` to `true` alongside the `url` entry, Base64 decoding is applied before the file is saved to disk.
+If this entry is not set, or is set to `false`, no decoding is applied before saving the file to disk.
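+
+For example, a `url` dashboard entry with Base64 decoding enabled might look like this (the server URL below is hypothetical):
+
+```yaml
+dashboards:
+  default:
+    encoded-dashboard:
+      url: https://example.com/raw/dashboard.json
+      b64content: true
+```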
+
+### Gerrit use case
+
+Gerrit's API for downloading files has the following schema: where `{project-name}` and
+`{file-id}` usually contain `/` in their values, so they MUST be replaced by `%2F`. For example, if project-name is `user/repo`, branch-id is `master` and file-id is `dir1/dir2/dashboard`,
+the url value is
+
+## Sidecar for dashboards
+
+If the parameter `sidecar.dashboards.enabled` is set, a sidecar container is deployed in the grafana
+pod. This container watches all configmaps (or secrets) in the cluster and filters out the ones with
+a label as defined in `sidecar.dashboards.label`. The files defined in those configmaps are written
+to a folder and accessed by grafana. Changes to the configmaps are monitored and the imported
+dashboards are deleted/updated.
+
+It is recommended to use one configmap per dashboard; removing one of several dashboards stored in
+a single configmap is currently not properly mirrored in grafana.
+
+Example dashboard config:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: sample-grafana-dashboard
+ labels:
+ grafana_dashboard: "1"
+data:
+ k8s-dashboard.json: |-
+ [...]
+```
+
+## Sidecar for datasources
+
+If the parameter `sidecar.datasources.enabled` is set, an init container is deployed in the grafana
+pod. This container lists all secrets (or configmaps, though not recommended) in the cluster and
+filters out the ones with a label as defined in `sidecar.datasources.label`. The files defined in
+those secrets are written to a folder and accessed by grafana on startup. Using these yaml files,
+the data sources in grafana can be imported.
+
+To have Grafana reload datasources each time the config changes, set `sidecar.datasources.skipReload: false` and adjust `sidecar.datasources.reloadURL` to `http://..svc.cluster.local/api/admin/provisioning/datasources/reload`.
+
+Secrets are recommended over configmaps for this use case because datasources usually contain private
+data like usernames and passwords. Secrets are the more appropriate cluster resource for managing those.
+
+Example values to add a postgres datasource as a kubernetes secret:
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: grafana-datasources
+ labels:
+ grafana_datasource: 'true' # default value for: sidecar.datasources.label
+stringData:
+ pg-db.yaml: |-
+ apiVersion: 1
+ datasources:
+ - name: My pg db datasource
+ type: postgres
+ url: my-postgresql-db:5432
+ user: db-readonly-user
+ secureJsonData:
+ password: 'SUperSEcretPa$$word'
+ jsonData:
+          database: my_database
+ sslmode: 'disable' # disable/require/verify-ca/verify-full
+ maxOpenConns: 0 # Grafana v5.4+
+ maxIdleConns: 2 # Grafana v5.4+
+ connMaxLifetime: 14400 # Grafana v5.4+
+ postgresVersion: 1000 # 903=9.3, 904=9.4, 905=9.5, 906=9.6, 1000=10
+ timescaledb: false
+ # allow users to edit datasources from the UI.
+ editable: false
+```
+
+Example values to add a datasource adapted from [Grafana](http://docs.grafana.org/administration/provisioning/#example-datasource-config-file):
+
+```yaml
+datasources:
+ datasources.yaml:
+ apiVersion: 1
+ datasources:
+ # name of the datasource. Required
+ - name: Graphite
+ # datasource type. Required
+ type: graphite
+ # access mode. proxy or direct (Server or Browser in the UI). Required
+ access: proxy
+ # org id. will default to orgId 1 if not specified
+ orgId: 1
+ # url
+ url: http://localhost:8080
+ # database password, if used
+ password:
+ # database user, if used
+ user:
+ # database name, if used
+ database:
+ # enable/disable basic auth
+ basicAuth:
+ # basic auth username
+ basicAuthUser:
+ # basic auth password
+ basicAuthPassword:
+ # enable/disable with credentials headers
+ withCredentials:
+ # mark as default datasource. Max one per org
+ isDefault:
+ #