deployment/helm/milvus/.helmignore (new file, 23 lines)
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
# OWNERS file for Kubernetes
OWNERS
deployment/helm/milvus/Chart.yaml (new file, 21 lines)
@@ -0,0 +1,21 @@
apiVersion: v1
appVersion: 2.6.4
description: Milvus is an open-source vector database built to power AI applications
  and vector similarity search.
home: https://milvus.io/
icon: https://raw.githubusercontent.com/milvus-io/docs/master/v1.0.0/assets/milvus_logo.png
keywords:
- milvus
- elastic
- vector
- search
- deploy
kubeVersion: ^1.10.0-0
maintainers:
- email: contact@milvus.io
  name: contact
  url: milvus.io
name: milvus
sources:
- https://github.com/zilliztech/milvus
version: 5.0.6

deployment/helm/milvus/README-TEI.md (new file, 209 lines)
@@ -0,0 +1,209 @@
# Text Embeddings Inference (TEI) Integration Guide

This document describes how to use the Text Embeddings Inference (TEI) service with the Milvus Helm chart, and how to integrate TEI with Milvus. TEI is an open-source project developed by Hugging Face, available at [https://github.com/huggingface/text-embeddings-inference](https://github.com/huggingface/text-embeddings-inference).

## Overview

Text Embeddings Inference (TEI) is a high-performance inference service for text embedding models that converts text into vector representations. Milvus is a vector database that can store and retrieve these vectors. By combining the two, you can build powerful semantic search and retrieval systems.

## Deployment Methods

This guide covers two ways to use TEI:
1. Deploy the TEI service directly through the Milvus Helm chart
2. Use an external TEI service with Milvus integration

## Deploy TEI through the Milvus Helm Chart

### Basic Configuration

Enable the TEI service in `values.yaml`:

```yaml
tei:
  enabled: true
  modelId: "BAAI/bge-large-en-v1.5"  # Specify the model to use
```

This is the simplest configuration: just set `enabled: true` and the desired `modelId`.

### Complete Configuration Options

```yaml
tei:
  enabled: true                      # Enable TEI service
  name: text-embeddings-inference    # Service name
  replicas: 1                        # Number of TEI replicas
  image:
    repository: ghcr.io/huggingface/text-embeddings-inference  # Image repository
    tag: cpu-1.6                     # Image tag (CPU version)
    pullPolicy: IfNotPresent         # Image pull policy
  service:
    type: ClusterIP                  # Service type
    port: 8080                       # Service port
    annotations: {}                  # Service annotations
    labels: {}                       # Service labels
  resources:                         # Resource configuration
    requests:
      cpu: "4"                       # CPU request
      memory: "8Gi"                  # Memory request
    limits:
      cpu: "8"                       # CPU limit
      memory: "16Gi"                 # Memory limit
  persistence:                       # Persistent storage configuration
    enabled: true                    # Enable persistent storage
    mountPath: "/data"               # Mount path
    annotations: {}                  # Storage annotations
    persistentVolumeClaim:           # PVC configuration
      existingClaim: ""              # Use existing PVC
      storageClass:                  # Storage class
      accessModes: ReadWriteOnce     # Access modes
      size: 50Gi                     # Storage size
      subPath: ""                    # Sub path
  modelId: "BAAI/bge-large-en-v1.5"  # Model ID
  extraArgs: []                      # Additional command line arguments for TEI, such as "--max-batch-tokens=16384", "--max-client-batch-size=32", "--max-concurrent-requests=128", etc.
  nodeSelector: {}                   # Node selector
  affinity: {}                       # Affinity configuration
  tolerations: []                    # Tolerations
  topologySpreadConstraints: []      # Topology spread constraints
  extraEnv: []                       # Additional environment variables
```

### Using GPU Acceleration

If you have GPU resources, you can use the GPU version of the TEI image to accelerate inference:

```yaml
tei:
  enabled: true
  modelId: "BAAI/bge-large-en-v1.5"
  image:
    repository: ghcr.io/huggingface/text-embeddings-inference
    tag: 1.6  # GPU version
  resources:
    limits:
      nvidia.com/gpu: 1  # Allocate 1 GPU
```

## Frequently Asked Questions

### How to determine the embedding dimension of a model?

Different models produce embeddings of different dimensions. Here are the dimensions of some commonly used models:
- BAAI/bge-large-en-v1.5: 1024
- BAAI/bge-base-en-v1.5: 768
- nomic-ai/nomic-embed-text-v1: 768
- sentence-transformers/all-mpnet-base-v2: 768

You can find this information in the model's documentation or obtain it through the TEI service's API, as in the sketch below.
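
If your model is not listed above, one way to check the dimension is to call the TEI `/embed` endpoint (the same endpoint used in the test below) and measure the length of the returned vector. The following is a minimal sketch using Python `requests`; the URL `http://tei-service:8080` is an assumed placeholder and should be replaced with your actual TEI service address.

```python
import requests

TEI_ENDPOINT = "http://tei-service:8080"  # assumed TEI service address; adjust to your deployment

# /embed returns one embedding (a list of floats) per input string,
# so the length of the first result is the model's embedding dimension.
response = requests.post(
    f"{TEI_ENDPOINT}/embed",
    json={"inputs": "dimension probe"},
    timeout=30,
)
response.raise_for_status()
embedding = response.json()[0]
print(f"Embedding dimension: {len(embedding)}")
```
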
### How to test if the TEI service is working properly?

After deploying the TEI service, you can use the following commands to test whether it is working properly:

```bash
# Get the TEI service endpoint
export TEI_SERVICE=$(kubectl get svc -l component=text-embeddings-inference -o jsonpath='{.items[0].metadata.name}')

# Test the embedding functionality
kubectl run -it --rm curl --image=curlimages/curl -- curl -X POST "http://${TEI_SERVICE}:8080/embed" \
  -H "Content-Type: application/json" \
  -d '{"inputs":"This is a test text"}'
```

### How to use TEI-generated embeddings in Milvus?

In Milvus, you can use TEI-generated embeddings as follows:

1. When creating a collection, set the vector dimension to match the TEI model's output dimension
2. Before inserting data, use the TEI service to convert text to vectors
3. When searching, likewise use the TEI service to convert the query text to a vector (see the sketch below)
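
The sketch below illustrates these three steps performed manually from the client side, without the built-in embedding function described in the next section. It is a minimal example under stated assumptions: a TEI service reachable at `http://tei-service:8080` serving `BAAI/bge-large-en-v1.5` (1024 dimensions), a Milvus instance at `http://localhost:19530`, and a hypothetical collection name `manual_collection` created with the MilvusClient quick setup (which provides `id` and `vector` fields and a dynamic field for extra keys).

```python
import requests
from pymilvus import MilvusClient

TEI_ENDPOINT = "http://tei-service:8080"        # assumed TEI service address
client = MilvusClient(uri="http://localhost:19530")

def embed(texts):
    # Call TEI to convert text into vectors (one embedding per input string).
    resp = requests.post(f"{TEI_ENDPOINT}/embed", json={"inputs": texts}, timeout=30)
    resp.raise_for_status()
    return resp.json()

# 1. Create a collection whose vector dimension matches the TEI model output (1024 for bge-large-en-v1.5).
client.create_collection(collection_name="manual_collection", dimension=1024)

# 2. Embed documents with TEI before inserting them.
docs = ["Milvus is a vector database.", "TEI serves text embedding models."]
vectors = embed(docs)
client.insert(
    collection_name="manual_collection",
    data=[{"id": i, "vector": v, "text": t} for i, (v, t) in enumerate(zip(vectors, docs))],
)

# 3. Embed the query text with TEI, then search with the resulting vector.
query_vector = embed(["What is Milvus?"])[0]
results = client.search(collection_name="manual_collection", data=[query_vector], limit=2)
print(results)
```
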
## Using the Milvus Text Embedding Function

Milvus provides a text embedding function feature that lets you generate vector embeddings directly within Milvus. You can configure Milvus to use TEI as the backend for this function.

### Using the Text Embedding Function in Milvus

1. Specify the embedding function when creating a collection:

```python
from pymilvus import MilvusClient, DataType, Function, FunctionType

# Connect to Milvus
client = MilvusClient(uri="http://localhost:19530")

# 1. Create schema
schema = client.create_schema()

# 2. Add fields
schema.add_field("id", DataType.INT64, is_primary=True, auto_id=False)  # Primary key
schema.add_field("text", DataType.VARCHAR, max_length=65535)            # Text field
schema.add_field("embedding", DataType.FLOAT_VECTOR, dim=768)           # Vector field; dimension must match the TEI model output

# 3. Define TEI embedding function
tei_embedding_function = Function(
    name="tei_func",                           # Unique identifier for this embedding function
    function_type=FunctionType.TEXTEMBEDDING,  # Indicates a text embedding function
    input_field_names=["text"],                # Scalar field(s) containing text data to embed
    output_field_names=["embedding"],          # Vector field(s) for storing embeddings
    params={                                   # TEI-specific parameters
        "provider": "TEI",                     # Must be set to "TEI"
        "endpoint": "http://tei-service:8080", # TEI service address
        # Optional: "api_key": "your_secure_api_key",
        # Optional: "truncate": "true",
        # Optional: "truncation_direction": "right",
        # Optional: "max_client_batch_size": 64,
        # Optional: "ingestion_prompt": "passage: ",
        # Optional: "search_prompt": "query: "
    }
)
schema.add_function(tei_embedding_function)

# 4. Create collection with schema and index params
index_params = client.prepare_index_params()
index_params.add_index(
    field_name="embedding",
    index_name="embedding_index",
    index_type="HNSW",
    metric_type="COSINE",
)
client.create_collection(
    collection_name="test_collection",
    schema=schema,
    index_params=index_params
)
client.load_collection(
    collection_name="test_collection"
)
```

2. Automatically generate embeddings when inserting data:

```python
# Insert data; Milvus will automatically call the TEI service to generate embedding vectors
client.insert(
    collection_name="test_collection",
    data=[
        {"id": 1, "text": "This is a sample document about artificial intelligence."},
        {"id": 2, "text": "Vector databases are designed to handle embeddings efficiently."}
    ]
)
```

3. Automatically generate query embeddings when searching:

```python
# Search directly using text; Milvus will automatically call the TEI service to generate query vectors
results = client.search(
    collection_name="test_collection",
    data=["Tell me about AI technology"],
    anns_field="embedding",
    search_params={
        "metric_type": "COSINE",
        "params": {}
    },
    limit=3
)
```
deployment/helm/milvus/README.md (new file, 679 lines)
@@ -0,0 +1,679 @@
# Milvus Helm Chart

For more information about installing and using Helm, see the [Helm Docs](https://helm.sh/docs/). For a quick introduction to Charts, see the [Chart Guide](https://helm.sh/docs/topics/charts/).

To install Milvus, refer to [Milvus installation](https://milvus.io/docs/v2.0.x/install_cluster-helm.md).

## Introduction

This chart bootstraps a Milvus deployment on a Kubernetes cluster using the Helm package manager.

## Prerequisites

- Kubernetes >= 1.20.0
- Helm >= 3.14.0

## Compatibility Notice

- **IMPORTANT** As of chart version 5.0.0, significant architectural changes have been introduced in Milvus v2.6.0:
  - Coordinator consolidation: the legacy separate coordinators (dataCoord, queryCoord, indexCoord) have been consolidated into a single mixCoord
  - New components: introduction of the Streaming Node for enhanced data processing
  - Component removal: indexNode has been removed and consolidated
  - Milvus v2.6.0-rc1 is not compatible with v2.6.x; direct upgrades from release candidates are not supported.
  - You must upgrade to v2.5.16 with mixCoordinator enabled before upgrading to v2.6.x.

- **IMPORTANT** For users of Pulsar v2: do not upgrade with chart versions 4.2.21 through 4.2.29, as they have known issues; 4.2.30 or a later version is recommended. Remember to add `--set pulsar.enabled=true,pulsarv3.enabled=false` or set these values in your values file when upgrading.

- As of version 4.2.21, the Milvus Helm chart introduced the pulsar-v3.x chart as a dependency. For backward compatibility, please upgrade your Helm to v3.14 or later, and be sure to add the `--reset-then-reuse-values` option whenever you run `helm upgrade`.

- As of version 4.2.0, the Milvus Helm chart no longer supports Milvus v2.3.x. If you need to deploy Milvus v2.3.x using Helm, please use a Milvus Helm chart version below 4.2.0 (e.g., 4.1.36).

> **IMPORTANT** The master branch is for the development of Milvus v2.x. On March 9th, 2021, we released Milvus v1.0, the first stable version of Milvus with long-term support. To use Milvus v1.x, switch to [branch 1.1](https://github.com/zilliztech/milvus-helm/tree/1.1).
## Install the Chart

1. Add the stable repository
```bash
$ helm repo add zilliztech https://zilliztech.github.io/milvus-helm/
```

2. Update charts repositories
```
$ helm repo update
```

### Deploy Milvus with standalone mode

Assume the release name is `my-release`:

```bash
# Helm v3.x
$ helm upgrade --install my-release --set cluster.enabled=false --set etcd.replicaCount=1 --set pulsarv3.enabled=false --set minio.mode=standalone zilliztech/milvus
```

By default, Milvus standalone uses `woodpecker` as the message queue. You can also use `pulsar` or `kafka` as the message queue:

```bash
# Helm v3.x
# Milvus Standalone with pulsar as message queue
$ helm upgrade --install my-release --set cluster.enabled=false --set standalone.messageQueue=pulsar --set etcd.replicaCount=1 --set pulsarv3.enabled=true --set minio.mode=standalone zilliztech/milvus
```

```bash
# Helm v3.x
# Milvus Standalone with kafka as message queue
$ helm upgrade --install my-release --set cluster.enabled=false --set standalone.messageQueue=kafka --set etcd.replicaCount=1 --set pulsarv3.enabled=false --set kafka.enabled=true --set minio.mode=standalone zilliztech/milvus
```

If you need to use standalone mode with embedded etcd and local storage (without starting MinIO and a separate etcd), use the following steps:

1. Prepare a values file
```
cat > values.yaml <<EOF
---
cluster:
  enabled: false

etcd:
  enabled: false

pulsarv3:
  enabled: false

minio:
  enabled: false
  tls:
    enabled: false

extraConfigFiles:
  user.yaml: |+
    etcd:
      use:
        embed: true
      data:
        dir: /var/lib/milvus/etcd
    common:
      storageType: local
EOF
```

2. Helm install with this values file
```
helm upgrade --install -f values.yaml my-release zilliztech/milvus
```

> **Tip**: To list all releases, use `helm list`.
### Deploy Milvus with cluster mode

Assume the release name is `my-release`:

```bash
# Helm v3.x
$ helm upgrade --install my-release zilliztech/milvus
```

By default, the Milvus cluster uses `pulsar` as the message queue. You can also use `kafka` instead of `pulsar`:

```bash
# Helm v3.x
$ helm upgrade --install my-release zilliztech/milvus --set pulsarv3.enabled=false --set kafka.enabled=true
```

By default, the Milvus cluster uses `mixCoordinator`, which contains all coordinators. This is the recommended deployment approach for Milvus v2.6.x and later versions.

### Upgrading from Milvus v2.5.x to v2.6.x

Upgrading from Milvus 2.5.x to 2.6.x involves significant architectural changes. Please follow these steps carefully:

#### Requirements

**System requirements:**
- Helm version >= 3.14.0
- Kubernetes version >= 1.20.0
- Milvus cluster deployed via Helm chart

**Compatibility requirements:**
- Milvus v2.6.0-rc1 is not compatible with v2.6.x. Direct upgrades from release candidates are not supported.
- If you are currently running v2.6.0-rc1 and need to preserve your data, please refer to community guides for migration assistance.
- You must upgrade to v2.5.16 with mixCoordinator enabled before upgrading to v2.6.x.

#### Upgrade Process

**Step 1: Upgrade Helm Chart**

First, update your Milvus Helm chart repository:

```bash
helm repo add zilliztech https://zilliztech.github.io/milvus-helm
helm repo update zilliztech
```

**Step 2: Upgrade to v2.5.16 with mixCoordinator**

Check whether your cluster currently uses separate coordinators:

```bash
kubectl get pods
```

If you see separate coordinator pods (datacoord, querycoord, indexcoord), upgrade to v2.5.16 and enable mixCoordinator:

```bash
helm upgrade my-release zilliztech/milvus \
  --set image.all.tag="v2.5.16" \
  --set mixCoordinator.enabled=true \
  --set rootCoordinator.enabled=false \
  --set indexCoordinator.enabled=false \
  --set queryCoordinator.enabled=false \
  --set dataCoordinator.enabled=false \
  --reset-then-reuse-values \
  --version=4.2.58
```

If your cluster already uses mixCoordinator, simply upgrade the image:

```bash
helm upgrade my-release zilliztech/milvus \
  --set image.all.tag="v2.5.16" \
  --reset-then-reuse-values \
  --version=4.2.58
```

**Step 3: Upgrade to v2.6.x**

Once v2.5.16 is running successfully with mixCoordinator, upgrade to v2.6.x:

```bash
helm upgrade my-release zilliztech/milvus \
  --set image.all.tag="v2.6.x" \
  --set streaming.enabled=true \
  --set indexNode.enabled=false \
  --reset-then-reuse-values \
  --version=5.0.0
```

### Upgrade an existing Milvus cluster (General)

> **IMPORTANT** If you installed a Milvus cluster with a version below v2.1.x, you need to follow the instructions here: https://github.com/milvus-io/milvus/blob/master/deployments/migrate-meta/README.md. After the meta migration, use `helm upgrade` to update your cluster again.

For example, to scale out the query node from 1 (the default) to 2:

```bash
# Helm v3.14.0+
$ helm upgrade --reset-then-reuse-values --install --set queryNode.replicas=2 my-release zilliztech/milvus
```

### Milvus Mix Coordinator Active Standby

> **IMPORTANT** Milvus supports deploying multiple Mix Coordinator instances for high availability. For example, you can run a Milvus cluster with two mixcoord pods:

```bash
helm upgrade --install my-release zilliztech/milvus --set mixCoordinator.activeStandby.enabled=true --set mixCoordinator.replicas=2
```
### Breaking Changes

> **IMPORTANT** Milvus Helm chart 4.0.0 introduced breaking changes to the Milvus configuration. Previously, you could set the segment size like this: `--set dataCoordinator.segment.maxSize=1024`. All such shortcut config options have been removed. Instead, set the value via `extraConfigFiles` like this:

```yaml
extraConfigFiles:
  user.yaml: |+
    dataCoord:
      segment:
        maxSize: 1024
```

So if you deployed a cluster with a chart version below 4.0.0 and also specified extra config, you need to set those configs under `extraConfigFiles` when running `helm upgrade`.

> **IMPORTANT** Milvus removed MySQL meta store support in v2.3.1, and the Milvus Helm chart removed MySQL as a dependency in chart version 4.1.8.

### Enable log to file

By default, all logs of the Milvus components are written to stdout. If you want to log to files, install Milvus with `--set log.persistence.enabled=true`. Note that you need a StorageClass that supports the `ReadWriteMany` access mode.

```bash
# Install a milvus cluster with file log output
helm install my-release zilliztech/milvus --set log.persistence.enabled=true --set log.persistence.persistentVolumeClaim.storageClass=<read-write-many-storageclass>
```

Logs will be written to the `/milvus/logs/` directory.

### Enable proxy TLS connection

By default, the TLS connection to the proxy service is disabled. To enable TLS with your own certificate and private key, specify them in `extraConfigFiles` like this:

```yaml
extraConfigFiles:
  user.yaml: |+
    # Enable tlsMode and set the tls cert and key
    tls:
      serverPemPath: /etc/milvus/certs/tls.crt
      serverKeyPath: /etc/milvus/certs/tls.key
    common:
      security:
        tlsMode: 1
```

The paths specified above are TLS secret data mounted inside the proxy pod as files. To create a TLS secret, set `proxy.tls.enabled` to `true`, then provide base64-encoded values for your certificate and private key files in values.yaml:

```yaml
proxy:
  enabled: true
  tls:
    enabled: true
    secretName: milvus-tls
    # expecting base64-encoded values here, i.e. $(cat tls.crt | base64 -w 0) and $(cat tls.key | base64 -w 0)
    key: LS0tLS1C....
    crt: LS0tLS1CR...
```

or on the command line using `--set`:

```bash
--set proxy.tls.enabled=true \
--set proxy.tls.key=$(cat /path/to/private_key_file | base64 -w 0) \
--set proxy.tls.crt=$(cat /path/to/certificate_file | base64 -w 0)
```

If you want to use a different `secretName` or mount path inside the pod, modify `proxy.tls.secretName` above and `serverPemPath` and `serverKeyPath` in `extraConfigFiles` accordingly, then update the `volumes` and `volumeMounts` sections in values.yaml:

```yaml
volumes:
- secret:
    secretName: Your-tls-secret-name
  name: milvus-tls
volumeMounts:
- mountPath: /Your/tls/files/path/
  name: milvus-tls
```

## Milvus with External Object Storage

As of https://github.com/minio/minio/releases/tag/RELEASE.2022-10-29T06-21-33Z, the MinIO Gateway and the related filesystem mode code have been removed. It is now recommended to use the `externalS3` configuration for integrating with various object storage services. Milvus supports popular object storage platforms such as AWS S3, GCP GCS, Azure Blob, Aliyun OSS and Tencent COS.

The recommended options for `externalS3.cloudProvider` are `aws`, `gcp`, `azure`, `aliyun`, and `tencent`. Here is an example that uses AWS S3 for Milvus object storage:

```yaml
minio:
  enabled: false
externalS3:
  enabled: true
  cloudProvider: aws
  host: s3.aws.com
  port: 443
  useSSL: true
  bucketName: <bucket-name>
  accessKey: <s3-access-key>
  secretKey: <s3-secret-key>
```

## Uninstall the Chart

```bash
# Helm v3.x
$ helm uninstall my-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

- Completely uninstall Milvus

> **IMPORTANT** Please run this command with care. You may want to keep the etcd data.
```bash
MILVUS_LABELS="app.kubernetes.io/instance=my-release"
kubectl delete pvc $(kubectl get pvc -l "${MILVUS_LABELS}" -o jsonpath='{range .items[*]}{.metadata.name} {end}')
```
## Configuration

### Milvus Service Configuration

The following table lists the configurable parameters of the Milvus Service and their default values.

| Parameter | Description | Default |
|---|---|---|
| `cluster.enabled` | Enable or disable Milvus Cluster mode | `true` |
| `route.enabled` | Enable OpenShift route | `false` |
| `route.hostname` | Hostname for the OpenShift route | `""` |
| `route.path` | Path for the OpenShift route | `""` |
| `route.termination` | TLS termination for the route | `edge` |
| `route.annotations` | Route annotations | `{}` |
| `route.labels` | Route labels | `{}` |
| `image.all.repository` | Image repository | `milvusdb/milvus` |
| `image.all.tag` | Image tag | `v2.6.4` |
| `image.all.pullPolicy` | Image pull policy | `IfNotPresent` |
| `image.all.pullSecrets` | Image pull secrets | `{}` |
| `image.tools.repository` | Config image repository | `milvusdb/milvus-config-tool` |
| `image.tools.tag` | Config image tag | `v0.1.2` |
| `image.tools.pullPolicy` | Config image pull policy | `IfNotPresent` |
| `customConfigMap` | User specified ConfigMap for configuration | |
| `extraConfigFiles` | Extra config to override default milvus.yaml | `user.yaml:` |
| `service.type` | Service type | `ClusterIP` |
| `service.port` | Port where service is exposed | `19530` |
| `service.portName` | Useful for [Istio protocol selection](https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/) | `milvus` |
| `service.nodePort` | Service nodePort | `unset` |
| `service.annotations` | Service annotations | `{}` |
| `service.labels` | Service custom labels | `{}` |
| `service.clusterIP` | Internal cluster service IP | `unset` |
| `service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `unset` |
| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to lb (if supported) | `[]` |
| `service.externalIPs` | Service external IP addresses | `[]` |
| `ingress.enabled` | If true, Ingress will be created | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Ingress labels | `{}` |
| `ingress.rules` | Ingress rules | `[]` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `serviceAccount.create` | Create a custom service account | `false` |
| `serviceAccount.name` | Service Account name | `milvus` |
| `serviceAccount.annotations` | Service Account annotations | `{}` |
| `serviceAccount.labels` | Service Account labels | `{}` |
| `metrics.enabled` | Export Prometheus monitoring metrics | `true` |
| `metrics.serviceMonitor.enabled` | Create ServiceMonitor for Prometheus operator | `false` |
| `metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `unset` |
| `log.level` | Logging level to be used. Valid levels are `debug`, `info`, `warn`, `error`, `fatal` | `info` |
| `log.file.maxSize` | The size limit of the log file (MB) | `300` |
| `log.file.maxAge` | The maximum number of days that the log is retained (days) | `10` |
| `log.file.maxBackups` | The maximum number of retained logs | `20` |
| `log.format` | Format used for the logs. Valid formats are `text` and `json` | `text` |
| `log.persistence.enabled` | Use persistent volume to store Milvus logs data | `false` |
| `log.persistence.mountPath` | Milvus logs data persistence volume mount path | `/milvus/logs` |
| `log.persistence.annotations` | PersistentVolumeClaim annotations | `{}` |
| `log.persistence.persistentVolumeClaim.existingClaim` | Use your own data Persistent Volume existing claim name | `unset` |
| `log.persistence.persistentVolumeClaim.storageClass` | The Milvus logs data Persistent Volume Storage Class | `unset` |
| `log.persistence.persistentVolumeClaim.accessModes` | The Milvus logs data Persistence access modes | `ReadWriteOnce` |
| `log.persistence.persistentVolumeClaim.size` | The size of Milvus logs data Persistent Volume Storage Class | `5Gi` |
| `log.persistence.persistentVolumeClaim.subPath` | SubPath for Milvus logs data mount | `unset` |
| `externalS3.enabled` | Enable or disable external S3 | `false` |
| `externalS3.host` | The host of the external S3 | `unset` |
| `externalS3.port` | The port of the external S3 | `unset` |
| `externalS3.rootPath` | The path prefix of the external S3 | `unset` |
| `externalS3.accessKey` | The Access Key of the external S3 | `unset` |
| `externalS3.secretKey` | The Secret Key of the external S3 | `unset` |
| `externalS3.bucketName` | The Bucket Name of the external S3 | `unset` |
| `externalS3.useSSL` | If true, use SSL to connect to the external S3 | `false` |
| `externalS3.useIAM` | If true, use IAM to connect to the external S3 | `false` |
| `externalS3.cloudProvider` | When `useIAM` is enabled, only "aws" and "gcp" are supported for now | `aws` |
| `externalS3.iamEndpoint` | The IAM endpoint of the external S3 | `` |
| `externalS3.region` | The region of the external S3 | `` |
| `externalS3.useVirtualHost` | If true, the external S3 uses virtual-host-style bucket addressing | `` |
| `externalEtcd.enabled` | Enable or disable external Etcd | `false` |
| `externalEtcd.endpoints` | The endpoints of the external etcd | `{}` |
| `externalPulsar.enabled` | Enable or disable external Pulsar | `false` |
| `externalPulsar.host` | The host of the external Pulsar | `localhost` |
| `externalPulsar.port` | The port of the external Pulsar | `6650` |
| `externalPulsar.tenant` | The tenant of the external Pulsar | `public` |
| `externalPulsar.namespace` | The namespace of the external Pulsar | `default` |
| `externalPulsar.authPlugin` | The authPlugin of the external Pulsar | `""` |
| `externalPulsar.authParams` | The authParams of the external Pulsar | `""` |
| `externalKafka.enabled` | Enable or disable external Kafka | `false` |
| `externalKafka.brokerList` | The brokerList of the external Kafka, separated by commas | `localhost:9092` |
| `externalKafka.securityProtocol` | The securityProtocol used for Kafka authentication | `SASL_SSL` |
| `externalKafka.sasl.mechanisms` | SASL mechanism to use for Kafka authentication | `PLAIN` |
| `externalKafka.sasl.username` | Username for PLAIN or SASL/PLAIN authentication | `` |
| `externalKafka.sasl.password` | Password for PLAIN or SASL/PLAIN authentication | `` |
### Milvus Standalone Deployment Configuration

The following table lists the configurable parameters of the Milvus Standalone component and their default values.

| Parameter | Description | Default |
|---|---|---|
| `standalone.resources` | Resource requests/limits for the Milvus Standalone pods | `{}` |
| `standalone.nodeSelector` | Node labels for Milvus Standalone pods assignment | `{}` |
| `standalone.affinity` | Affinity settings for Milvus Standalone pods assignment | `{}` |
| `standalone.tolerations` | Toleration labels for Milvus Standalone pods assignment | `[]` |
| `standalone.heaptrack.enabled` | Whether to enable heaptrack | `false` |
| `standalone.disk.enabled` | Whether to enable disk | `true` |
| `standalone.profiling.enabled` | Whether to enable live profiling | `false` |
| `standalone.extraEnv` | Additional Milvus Standalone container environment variables | `[]` |
| `standalone.messageQueue` | Message queue for Milvus Standalone: rocksmq, natsmq, pulsar, kafka | `rocksmq` |
| `standalone.persistence.enabled` | Use persistent volume to store Milvus standalone data | `true` |
| `standalone.persistence.mountPath` | Milvus standalone data persistence volume mount path | `/var/lib/milvus` |
| `standalone.persistence.annotations` | PersistentVolumeClaim annotations | `{}` |
| `standalone.persistence.persistentVolumeClaim.existingClaim` | Use your own data Persistent Volume existing claim name | `unset` |
| `standalone.persistence.persistentVolumeClaim.storageClass` | The Milvus standalone data Persistent Volume Storage Class | `unset` |
| `standalone.persistence.persistentVolumeClaim.accessModes` | The Milvus standalone data Persistence access modes | `ReadWriteOnce` |
| `standalone.persistence.persistentVolumeClaim.size` | The size of Milvus standalone data Persistent Volume Storage Class | `5Gi` |
| `standalone.persistence.persistentVolumeClaim.subPath` | SubPath for Milvus standalone data mount | `unset` |

### Milvus Proxy Deployment Configuration

The following table lists the configurable parameters of the Milvus Proxy component and their default values.

| Parameter | Description | Default |
|---|---|---|
| `proxy.enabled` | Enable or disable Milvus Proxy Deployment | `true` |
| `proxy.replicas` | Desired number of Milvus Proxy pods | `1` |
| `proxy.resources` | Resource requests/limits for the Milvus Proxy pods | `{}` |
| `proxy.nodeSelector` | Node labels for Milvus Proxy pods assignment | `{}` |
| `proxy.affinity` | Affinity settings for Milvus Proxy pods assignment | `{}` |
| `proxy.tolerations` | Toleration labels for Milvus Proxy pods assignment | `[]` |
| `proxy.heaptrack.enabled` | Whether to enable heaptrack | `false` |
| `proxy.profiling.enabled` | Whether to enable live profiling | `false` |
| `proxy.extraEnv` | Additional Milvus Proxy container environment variables | `[]` |
| `proxy.http.enabled` | Enable REST API for Milvus Proxy | `true` |
| `proxy.maxUserNum` | Modify the Milvus maximum user limit | `100` |
| `proxy.maxRoleNum` | Modify the Milvus maximum role limit | `10` |
| `proxy.http.debugMode.enabled` | Enable debug mode for REST API | `false` |
| `proxy.tls.enabled` | Enable proxy TLS connection | `false` |
| `proxy.strategy` | Deployment strategy configuration | `RollingUpdate` |
| `proxy.annotations` | Additional pod annotations | `{}` |
| `proxy.hpa` | Enable HPA for proxy node | `false` |
| `proxy.minReplicas` | Specify the minimum number of replicas | `1` |
| `proxy.maxReplicas` | Specify the maximum number of replicas | `5` |
| `proxy.cpuUtilization` | Specify the CPU auto-scaling value | `40` |
### Milvus Query Node Deployment Configuration

The following table lists the configurable parameters of the Milvus Query Node component and their default values.

| Parameter | Description | Default |
|---|---|---|
| `queryNode.enabled` | Enable or disable Milvus Query Node component | `true` |
| `queryNode.replicas` | Desired number of Milvus Query Node pods | `1` |
| `queryNode.resources` | Resource requests/limits for the Milvus Query Node pods | `{}` |
| `queryNode.nodeSelector` | Node labels for Milvus Query Node pods assignment | `{}` |
| `queryNode.affinity` | Affinity settings for Milvus Query Node pods assignment | `{}` |
| `queryNode.tolerations` | Toleration labels for Milvus Query Node pods assignment | `[]` |
| `queryNode.heaptrack.enabled` | Whether to enable heaptrack | `false` |
| `queryNode.disk.enabled` | Whether to enable disk for query | `true` |
| `queryNode.profiling.enabled` | Whether to enable live profiling | `false` |
| `queryNode.extraEnv` | Additional Milvus Query Node container environment variables | `[]` |
| `queryNode.strategy` | Deployment strategy configuration | `RollingUpdate` |
| `queryNode.annotations` | Additional pod annotations | `{}` |
| `queryNode.hpa` | Enable HPA for query node | `false` |
| `queryNode.minReplicas` | Specify the minimum number of replicas | `1` |
| `queryNode.maxReplicas` | Specify the maximum number of replicas | `5` |
| `queryNode.cpuUtilization` | Specify the CPU auto-scaling value | `40` |
| `queryNode.memoryUtilization` | Specify the memory auto-scaling value | `60` |

### Milvus Data Node Deployment Configuration

The following table lists the configurable parameters of the Milvus Data Node component and their default values.

| Parameter | Description | Default |
|---|---|---|
| `dataNode.enabled` | Enable or disable Data Node component | `true` |
| `dataNode.replicas` | Desired number of Data Node pods | `1` |
| `dataNode.resources` | Resource requests/limits for the Milvus Data Node pods | `{}` |
| `dataNode.nodeSelector` | Node labels for Milvus Data Node pods assignment | `{}` |
| `dataNode.affinity` | Affinity settings for Milvus Data Node pods assignment | `{}` |
| `dataNode.tolerations` | Toleration labels for Milvus Data Node pods assignment | `[]` |
| `dataNode.heaptrack.enabled` | Whether to enable heaptrack | `false` |
| `dataNode.profiling.enabled` | Whether to enable live profiling | `false` |
| `dataNode.extraEnv` | Additional Milvus Data Node container environment variables | `[]` |
| `dataNode.strategy` | Deployment strategy configuration | `RollingUpdate` |
| `dataNode.annotations` | Additional pod annotations | `{}` |
| `dataNode.hpa` | Enable HPA for data node | `false` |
| `dataNode.minReplicas` | Specify the minimum number of replicas | `1` |
| `dataNode.maxReplicas` | Specify the maximum number of replicas | `5` |
| `dataNode.cpuUtilization` | Specify the CPU auto-scaling value | `40` |
### Milvus Mix Coordinator Deployment Configuration

The following table lists the configurable parameters of the Milvus Mix Coordinator component and their default values. The Mix Coordinator consolidates all coordinator functions (Root, Query, Index, and Data Coordinators) into a single component for improved efficiency and simplified deployment.

| Parameter | Description | Default |
|---|---|---|
| `mixCoordinator.enabled` | Enable or disable Mix Coordinator component | `true` |
| `mixCoordinator.replicas` | Desired number of Mix Coordinator pods | `1` |
| `mixCoordinator.resources` | Resource requests/limits for the Milvus Mix Coordinator pods | `{}` |
| `mixCoordinator.nodeSelector` | Node labels for Milvus Mix Coordinator pods assignment | `{}` |
| `mixCoordinator.affinity` | Affinity settings for Milvus Mix Coordinator pods assignment | `{}` |
| `mixCoordinator.tolerations` | Toleration labels for Milvus Mix Coordinator pods assignment | `[]` |
| `mixCoordinator.heaptrack.enabled` | Whether to enable heaptrack | `false` |
| `mixCoordinator.profiling.enabled` | Whether to enable live profiling | `false` |
| `mixCoordinator.activeStandby.enabled` | Whether to enable active-standby | `false` |
| `mixCoordinator.extraEnv` | Additional Milvus Mix Coordinator container environment variables | `[]` |
| `mixCoordinator.service.type` | Service type | `ClusterIP` |
| `mixCoordinator.service.annotations` | Service annotations | `{}` |
| `mixCoordinator.service.labels` | Service custom labels | `{}` |
| `mixCoordinator.service.clusterIP` | Internal cluster service IP | `unset` |
| `mixCoordinator.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `unset` |
| `mixCoordinator.service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to lb (if supported) | `[]` |
| `mixCoordinator.service.externalIPs` | Service external IP addresses | `[]` |
| `mixCoordinator.strategy` | Deployment strategy configuration | `RollingUpdate` |
| `mixCoordinator.annotations` | Additional pod annotations | `{}` |

### Milvus Streaming Node Deployment Configuration

The following table lists the configurable parameters of the Milvus Streaming Node component and their default values. The Streaming Node is a new component introduced in Milvus v2.6.x for enhanced data processing and streaming operations.

| Parameter | Description | Default |
|---|---|---|
| `streamingNode.enabled` | Enable or disable Streaming Node component | `true` |
| `streamingNode.replicas` | Desired number of Streaming Node pods | `1` |
| `streamingNode.resources` | Resource requests/limits for the Milvus Streaming Node pods | `{}` |
| `streamingNode.nodeSelector` | Node labels for Milvus Streaming Node pods assignment | `{}` |
| `streamingNode.affinity` | Affinity settings for Milvus Streaming Node pods assignment | `{}` |
| `streamingNode.tolerations` | Toleration labels for Milvus Streaming Node pods assignment | `[]` |
| `streamingNode.securityContext` | Security context for Streaming Node pods | `{}` |
| `streamingNode.containerSecurityContext` | Container security context for Streaming Node | `{}` |
| `streamingNode.heaptrack.enabled` | Whether to enable heaptrack | `false` |
| `streamingNode.profiling.enabled` | Whether to enable live profiling | `false` |
| `streamingNode.extraEnv` | Additional Milvus Streaming Node container environment variables | `[]` |
| `streamingNode.strategy` | Deployment strategy configuration | `{}` |
### TEI Configuration

The following table lists the configurable parameters of the Text Embeddings Inference (TEI) component and their default values.

| Parameter | Description | Default |
|---|---|---|
| `tei.enabled` | Enable or disable TEI deployment | `false` |
| `tei.name` | Name of the TEI component | `text-embeddings-inference` |
| `tei.image.repository` | TEI image repository | `ghcr.io/huggingface/text-embeddings-inference` |
| `tei.image.tag` | TEI image tag | `cpu-1.6` |
| `tei.image.pullPolicy` | TEI image pull policy | `IfNotPresent` |
| `tei.service.type` | TEI service type | `ClusterIP` |
| `tei.service.port` | TEI service port | `8080` |
| `tei.service.annotations` | TEI service annotations | `{}` |
| `tei.service.labels` | TEI service custom labels | `{}` |
| `tei.resources.requests.cpu` | CPU resource requests for TEI pods | `4` |
| `tei.resources.requests.memory` | Memory resource requests for TEI pods | `8Gi` |
| `tei.resources.limits.cpu` | CPU resource limits for TEI pods | `8` |
| `tei.resources.limits.memory` | Memory resource limits for TEI pods | `16Gi` |
| `tei.persistence.enabled` | Enable persistence for TEI | `true` |
| `tei.persistence.mountPath` | Mount path for TEI persistence | `/data` |
| `tei.persistence.annotations` | Annotations for TEI PVC | `{}` |
| `tei.persistence.persistentVolumeClaim.existingClaim` | Existing PVC for TEI | `""` |
| `tei.persistence.persistentVolumeClaim.storageClass` | Storage class for TEI PVC | `""` |
| `tei.persistence.persistentVolumeClaim.accessModes` | Access modes for TEI PVC | `ReadWriteOnce` |
| `tei.persistence.persistentVolumeClaim.size` | Size of TEI PVC | `50Gi` |
| `tei.persistence.persistentVolumeClaim.subPath` | SubPath for TEI PVC | `""` |
| `tei.modelId` | Model ID for TEI | `BAAI/bge-large-en-v1.5` |
| `tei.extraArgs` | Additional arguments for TEI | `[]` |
| `tei.nodeSelector` | Node labels for TEI pods assignment | `{}` |
| `tei.affinity` | Affinity settings for TEI pods assignment | `{}` |
| `tei.tolerations` | Toleration labels for TEI pods assignment | `[]` |
| `tei.topologySpreadConstraints` | Topology spread constraints for TEI pods | `[]` |
| `tei.extraEnv` | Additional TEI container environment variables | `[]` |
| `tei.replicas` | Number of TEI replicas | `1` |
### Pulsar Configuration

This version of the chart includes the dependent Pulsar chart in the charts/ directory.
- `pulsar-v3.3.0` is used for Pulsar v3
- `pulsar-v2.7.8` is used for Pulsar v2

Since Milvus chart version 4.2.21, Pulsar v3 is supported, but Pulsar v2 remained the default until the release of Milvus v2.5.0.

We recommend creating new instances with Pulsar v3 to avoid security vulnerabilities and some known bugs in Pulsar v2. To use Pulsar v3, set `pulsarv3.enabled` to `true` and `pulsar.enabled` to `false`, and set other Pulsar v3 values under the `pulsarv3` field.

You can find more information at:
* [https://pulsar.apache.org/charts](https://pulsar.apache.org/charts)

### Etcd Configuration

This version of the chart includes the dependent Etcd chart in the charts/ directory.

You can find more information at:
* [https://artifacthub.io/packages/helm/bitnami/etcd](https://artifacthub.io/packages/helm/bitnami/etcd)

### Minio Configuration

This version of the chart includes the dependent Minio chart in the charts/ directory.

You can find more information at:
* [https://github.com/minio/charts/blob/master/README.md](https://github.com/minio/charts/blob/master/README.md)

### Kafka Configuration

This version of the chart includes the dependent Kafka chart in the charts/ directory.

You can find more information at:
* [https://artifacthub.io/packages/helm/bitnami/kafka](https://artifacthub.io/packages/helm/bitnami/kafka)

### Node Selector, Affinity and Tolerations Configuration Guide

- [Node Selector Configuration Guide](../../docs/node-selector-configuration-guide.md)
- [Affinity Configuration Guide](../../docs/affinity-configuration-guide.md)
- [Tolerations Configuration Guide](../../docs/tolerations-configuration-guide.md)

#### Important Configuration Considerations

When configuring pod scheduling in Kubernetes, be aware of potential conflicts between different scheduling mechanisms:

1. **Node Selectors vs Node Affinity**
   - Node selectors provide a simple way to constrain pods to nodes with specific labels
   - Node affinity provides more flexible pod scheduling rules
   - When used together, **both** node selector and node affinity rules must be satisfied
   - Consider using only one mechanism unless you have specific requirements

2. **Scheduling Priority**
   - Node Selectors: hard requirement that must be satisfied
   - Required Node Affinity: hard requirement that must be satisfied
   - Preferred Node Affinity: soft preference that Kubernetes will try to satisfy
   - Pod Anti-Affinity: can prevent pods from being scheduled on the same node

3. **Best Practices**
   - Start with simple node selectors for basic constraints
   - Use node/pod affinity for more complex scheduling requirements
   - Avoid combining multiple scheduling constraints unless necessary
   - Test your configuration in a non-production environment first

For detailed examples and configurations, please refer to the documentation guides linked above.

### Milvus Live Profiling

Profiling is an effective way of understanding which parts of your application are consuming the most resources.

Continuous profiling adds a dimension of time that allows you to understand your system's resource usage (e.g. CPU, memory) over time, and gives you the ability to locate, debug, and fix performance issues.

You can enable profiling with Pyroscope; more information is available at:
* [https://pyroscope.io/docs/kubernetes-helm-chart/](https://pyroscope.io/docs/kubernetes-helm-chart/)

## Text Embeddings Inference (TEI) Integration Guide for Milvus

Text Embeddings Inference (TEI) is a high-performance inference service for text embedding models that converts text into vector representations. Milvus is a vector database that can store and retrieve these vectors. By combining the two, you can build powerful semantic search and retrieval systems.

TEI is an open-source project developed by Hugging Face, available at [https://github.com/huggingface/text-embeddings-inference](https://github.com/huggingface/text-embeddings-inference).

This guide covers two ways to use TEI:
1. Deploy the TEI service directly through the Milvus Helm chart
2. Use an external TEI service with Milvus integration

For detailed configuration and usage instructions, please refer to the [README-TEI.md](./README-TEI.md) document.
deployment/helm/milvus/charts/etcd/.helmignore (new file, 21 lines)
@@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

deployment/helm/milvus/charts/etcd/Chart.lock (new file, 6 lines)
@@ -0,0 +1,6 @@
dependencies:
- name: common
  repository: oci://registry-1.docker.io/bitnamicharts
  version: 2.4.0
digest: sha256:8c1a5dc923412d11d4d841420494b499cb707305c8b9f87f45ea1a8bf3172cb3
generated: "2023-05-21T14:12:59.250402885Z"

deployment/helm/milvus/charts/etcd/Chart.yaml (new file, 29 lines)
@@ -0,0 +1,29 @@
annotations:
  category: Database
  licenses: Apache-2.0
apiVersion: v2
appVersion: 3.5.9
dependencies:
- name: common
  repository: oci://registry-1.docker.io/bitnamicharts
  tags:
  - bitnami-common
  version: 2.x.x
description: etcd is a distributed key-value store designed to securely store data
  across a cluster. etcd is widely used in production on account of its reliability,
  fault-tolerance and ease of use.
home: https://bitnami.com
icon: https://bitnami.com/assets/stacks/etcd/img/etcd-stack-220x234.png
keywords:
- etcd
- cluster
- database
- cache
- key-value
maintainers:
- name: VMware, Inc.
  url: https://github.com/bitnami/charts
name: etcd
sources:
- https://github.com/bitnami/charts/tree/main/bitnami/etcd
version: 8.12.0

deployment/helm/milvus/charts/etcd/README.md (new file, 537 lines)
@@ -0,0 +1,537 @@
|
||||
<!--- app-name: Etcd -->
|
||||
|
||||
# Etcd packaged by Bitnami
|
||||
|
||||
etcd is a distributed key-value store designed to securely store data across a cluster. etcd is widely used in production on account of its reliability, fault-tolerance and ease of use.
|
||||
|
||||
[Overview of Etcd](https://etcd.io/)
|
||||
|
||||
Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.
|
||||
|
||||
## TL;DR
|
||||
|
||||
```console
|
||||
helm install my-release oci://registry-1.docker.io/bitnamicharts/etcd
|
||||
```
|
||||
|
||||
## Introduction
|
||||
|
||||
This chart bootstraps a [etcd](https://github.com/bitnami/containers/tree/main/bitnami/etcd) deployment on a [Kubernetes](https://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
|
||||
|
||||
Bitnami charts can be used with [Kubeapps](https://kubeapps.dev/) for deployment and management of Helm Charts in clusters.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Kubernetes 1.19+
|
||||
- Helm 3.2.0+
|
||||
- PV provisioner support in the underlying infrastructure
|
||||
|
||||
## Installing the Chart
|
||||
|
||||
To install the chart with the release name `my-release`:
|
||||
|
||||
```console
|
||||
helm install my-release oci://registry-1.docker.io/bitnamicharts/etcd
|
||||
```
|
||||
|
||||
These commands deploy etcd on the Kubernetes cluster in the default configuration. The [Parameters](#parameters) section lists the parameters that can be configured during installation.
|
||||
|
||||
> **Tip**: List all releases using `helm list`
|
||||
|
||||
## Uninstalling the Chart
|
||||
|
||||
To uninstall/delete the `my-release` deployment:
|
||||
|
||||
```console
|
||||
helm delete my-release
|
||||
```
|
||||
|
||||
The command removes all the Kubernetes components associated with the chart and deletes the release.
|
||||
|
||||
## Parameters
|
||||
|
||||
### Global parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ------------------------- | ----------------------------------------------- | ----- |
|
||||
| `global.imageRegistry` | Global Docker image registry | `""` |
|
||||
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` |
|
||||
| `global.storageClass` | Global StorageClass for Persistent Volume(s) | `""` |
|
||||
|
||||
### Common parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ------------------------ | -------------------------------------------------------------------------------------------- | --------------- |
|
||||
| `kubeVersion` | Force target Kubernetes version (using Helm capabilities if not set) | `""` |
|
||||
| `nameOverride` | String to partially override common.names.fullname template (will maintain the release name) | `""` |
|
||||
| `fullnameOverride` | String to fully override common.names.fullname template | `""` |
|
||||
| `commonLabels` | Labels to add to all deployed objects | `{}` |
|
||||
| `commonAnnotations` | Annotations to add to all deployed objects | `{}` |
|
||||
| `clusterDomain` | Default Kubernetes cluster domain | `cluster.local` |
|
||||
| `extraDeploy` | Array of extra objects to deploy with the release | `[]` |
|
||||
| `diagnosticMode.enabled` | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | `false` |
|
||||
| `diagnosticMode.command` | Command to override all containers in the deployment | `["sleep"]` |
|
||||
| `diagnosticMode.args` | Args to override all containers in the deployment | `["infinity"]` |
|
||||
|
||||
### etcd parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| -------------------------------------- | ----------------------------------------------------------------------------------------------------------- | -------------------- |
|
||||
| `image.registry` | etcd image registry | `docker.io` |
|
||||
| `image.repository` | etcd image name | `bitnami/etcd` |
|
||||
| `image.tag` | etcd image tag | `3.5.9-debian-11-r4` |
|
||||
| `image.digest` | etcd image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
|
||||
| `image.pullPolicy` | etcd image pull policy | `IfNotPresent` |
|
||||
| `image.pullSecrets` | etcd image pull secrets | `[]` |
|
||||
| `image.debug` | Enable image debug mode | `false` |
|
||||
| `auth.rbac.create` | Switch to enable RBAC authentication | `true` |
|
||||
| `auth.rbac.allowNoneAuthentication` | Allow to use etcd without configuring RBAC authentication | `true` |
|
||||
| `auth.rbac.rootPassword` | Root user password. The root user is always `root` | `""` |
|
||||
| `auth.rbac.existingSecret` | Name of the existing secret containing credentials for the root user | `""` |
|
||||
| `auth.rbac.existingSecretPasswordKey` | Name of key containing password to be retrieved from the existing secret | `""` |
|
||||
| `auth.token.enabled` | Enables token authentication | `true` |
|
||||
| `auth.token.type` | Authentication token type. Allowed values: 'simple' or 'jwt' | `jwt` |
|
||||
| `auth.token.privateKey.filename` | Name of the file containing the private key for signing the JWT token | `jwt-token.pem` |
|
||||
| `auth.token.privateKey.existingSecret` | Name of the existing secret containing the private key for signing the JWT token | `""` |
|
||||
| `auth.token.signMethod` | JWT token sign method | `RS256` |
|
||||
| `auth.token.ttl` | JWT token TTL | `10m` |
|
||||
| `auth.client.secureTransport` | Switch to encrypt client-to-server communications using TLS certificates | `false` |
|
||||
| `auth.client.useAutoTLS` | Switch to automatically create the TLS certificates | `false` |
|
||||
| `auth.client.existingSecret` | Name of the existing secret containing the TLS certificates for client-to-server communications | `""` |
|
||||
| `auth.client.enableAuthentication` | Switch to enable host authentication using TLS certificates. Requires existing secret | `false` |
|
||||
| `auth.client.certFilename` | Name of the file containing the client certificate | `cert.pem` |
|
||||
| `auth.client.certKeyFilename` | Name of the file containing the client certificate private key | `key.pem` |
|
||||
| `auth.client.caFilename` | Name of the file containing the client CA certificate | `""` |
|
||||
| `auth.peer.secureTransport` | Switch to encrypt server-to-server communications using TLS certificates | `false` |
|
||||
| `auth.peer.useAutoTLS` | Switch to automatically create the TLS certificates | `false` |
|
||||
| `auth.peer.existingSecret` | Name of the existing secret containing the TLS certificates for server-to-server communications | `""` |
|
||||
| `auth.peer.enableAuthentication` | Switch to enable host authentication using TLS certificates. Requires existing secret | `false` |
|
||||
| `auth.peer.certFilename` | Name of the file containing the peer certificate | `cert.pem` |
|
||||
| `auth.peer.certKeyFilename` | Name of the file containing the peer certificate private key | `key.pem` |
|
||||
| `auth.peer.caFilename` | Name of the file containing the peer CA certificate | `""` |
|
||||
| `autoCompactionMode` | Auto compaction mode, by default periodic. Valid values: "periodic", "revision". | `""` |
|
||||
| `autoCompactionRetention` | Auto compaction retention for mvcc key value store in hour, by default 0, means disabled | `""` |
|
||||
| `initialClusterState` | Initial cluster state. Allowed values: 'new' or 'existing' | `""` |
|
||||
| `logLevel` | Sets the log level for the etcd process. Allowed values: 'debug', 'info', 'warn', 'error', 'panic', 'fatal' | `info` |
|
||||
| `maxProcs`                             | Limits the number of operating system threads that can execute user-level Go code simultaneously                             | `""`                 |
|
||||
| `removeMemberOnContainerTermination` | Use a PreStop hook to remove the etcd members from the etcd cluster on container termination | `true` |
|
||||
| `configuration` | etcd configuration. Specify content for etcd.conf.yml | `""` |
|
||||
| `existingConfigmap` | Existing ConfigMap with etcd configuration | `""` |
|
||||
| `extraEnvVars` | Extra environment variables to be set on etcd container | `[]` |
|
||||
| `extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars | `""` |
|
||||
| `extraEnvVarsSecret` | Name of existing Secret containing extra env vars | `""` |
|
||||
| `command` | Default container command (useful when using custom images) | `[]` |
|
||||
| `args` | Default container args (useful when using custom images) | `[]` |
|
||||
|
||||
### etcd statefulset parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| --------------------------------------------------- | ----------------------------------------------------------------------------------------- | --------------- |
|
||||
| `replicaCount` | Number of etcd replicas to deploy | `1` |
|
||||
| `updateStrategy.type` | Update strategy type, can be set to RollingUpdate or OnDelete. | `RollingUpdate` |
|
||||
| `podManagementPolicy` | Pod management policy for the etcd statefulset | `Parallel` |
|
||||
| `hostAliases` | etcd pod host aliases | `[]` |
|
||||
| `lifecycleHooks` | Override default etcd container hooks | `{}` |
|
||||
| `containerPorts.client` | Client port to expose at container level | `2379` |
|
||||
| `containerPorts.peer` | Peer port to expose at container level | `2380` |
|
||||
| `podSecurityContext.enabled` | Enabled etcd pods' Security Context | `true` |
|
||||
| `podSecurityContext.fsGroup` | Set etcd pod's Security Context fsGroup | `1001` |
|
||||
| `containerSecurityContext.enabled` | Enabled etcd containers' Security Context | `true` |
|
||||
| `containerSecurityContext.runAsUser` | Set etcd container's Security Context runAsUser | `1001` |
|
||||
| `containerSecurityContext.runAsNonRoot` | Set etcd container's Security Context runAsNonRoot | `true` |
|
||||
| `containerSecurityContext.allowPrivilegeEscalation` | Force the child process to be run as nonprivilege | `false` |
|
||||
| `resources.limits` | The resources limits for the etcd container | `{}` |
|
||||
| `resources.requests` | The requested resources for the etcd container | `{}` |
|
||||
| `livenessProbe.enabled` | Enable livenessProbe | `true` |
|
||||
| `livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `60` |
|
||||
| `livenessProbe.periodSeconds` | Period seconds for livenessProbe | `30` |
|
||||
| `livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
|
||||
| `livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `5` |
|
||||
| `livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
|
||||
| `readinessProbe.enabled` | Enable readinessProbe | `true` |
|
||||
| `readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `60` |
|
||||
| `readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
|
||||
| `readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
|
||||
| `readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `5` |
|
||||
| `readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
|
||||
| `startupProbe.enabled` | Enable startupProbe | `false` |
|
||||
| `startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `0` |
|
||||
| `startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
|
||||
| `startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `5` |
|
||||
| `startupProbe.failureThreshold` | Failure threshold for startupProbe | `60` |
|
||||
| `startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
|
||||
| `customLivenessProbe` | Override default liveness probe | `{}` |
|
||||
| `customReadinessProbe` | Override default readiness probe | `{}` |
|
||||
| `customStartupProbe` | Override default startup probe | `{}` |
|
||||
| `extraVolumes` | Optionally specify extra list of additional volumes for etcd pods | `[]` |
|
||||
| `extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for etcd container(s) | `[]` |
|
||||
| `initContainers` | Add additional init containers to the etcd pods | `[]` |
|
||||
| `sidecars` | Add additional sidecar containers to the etcd pods | `[]` |
|
||||
| `podAnnotations` | Annotations for etcd pods | `{}` |
|
||||
| `podLabels` | Extra labels for etcd pods | `{}` |
|
||||
| `podAffinityPreset` | Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
|
||||
| `podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
|
||||
| `nodeAffinityPreset.type` | Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
|
||||
| `nodeAffinityPreset.key` | Node label key to match. Ignored if `affinity` is set. | `""` |
|
||||
| `nodeAffinityPreset.values` | Node label values to match. Ignored if `affinity` is set. | `[]` |
|
||||
| `affinity` | Affinity for pod assignment | `{}` |
|
||||
| `nodeSelector` | Node labels for pod assignment | `{}` |
|
||||
| `tolerations` | Tolerations for pod assignment | `[]` |
|
||||
| `terminationGracePeriodSeconds` | Seconds the pod needs to gracefully terminate | `""` |
|
||||
| `schedulerName` | Name of the k8s scheduler (other than default) | `""` |
|
||||
| `priorityClassName` | Name of the priority class to be used by etcd pods | `""` |
|
||||
| `runtimeClassName` | Name of the runtime class to be used by pod(s) | `""` |
|
||||
| `shareProcessNamespace` | Enable shared process namespace in a pod. | `false` |
|
||||
| `topologySpreadConstraints` | Topology Spread Constraints for pod assignment | `[]` |
|
||||
| `persistentVolumeClaimRetentionPolicy.enabled` | Controls if and how PVCs are deleted during the lifecycle of a StatefulSet | `false` |
|
||||
| `persistentVolumeClaimRetentionPolicy.whenScaled` | Volume retention behavior when the replica count of the StatefulSet is reduced | `Retain` |
|
||||
| `persistentVolumeClaimRetentionPolicy.whenDeleted` | Volume retention behavior that applies when the StatefulSet is deleted | `Retain` |
|
||||
|
||||
### Traffic exposure parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ---------------------------------- | ---------------------------------------------------------------------------------- | ----------- |
|
||||
| `service.type` | Kubernetes Service type | `ClusterIP` |
|
||||
| `service.enabled`                  | Create a second service if set to `true`                                            | `true`      |
|
||||
| `service.clusterIP` | Kubernetes service Cluster IP | `""` |
|
||||
| `service.ports.client` | etcd client port | `2379` |
|
||||
| `service.ports.peer` | etcd peer port | `2380` |
|
||||
| `service.nodePorts.client` | Specify the nodePort client value for the LoadBalancer and NodePort service types. | `""` |
|
||||
| `service.nodePorts.peer` | Specify the nodePort peer value for the LoadBalancer and NodePort service types. | `""` |
|
||||
| `service.clientPortNameOverride` | etcd client port name override | `""` |
|
||||
| `service.peerPortNameOverride` | etcd peer port name override | `""` |
|
||||
| `service.loadBalancerIP` | loadBalancerIP for the etcd service (optional, cloud specific) | `""` |
|
||||
| `service.loadBalancerSourceRanges` | Load Balancer source ranges | `[]` |
|
||||
| `service.externalIPs` | External IPs | `[]` |
|
||||
| `service.externalTrafficPolicy` | %%MAIN_CONTAINER_NAME%% service external traffic policy | `Cluster` |
|
||||
| `service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
|
||||
| `service.annotations` | Additional annotations for the etcd service | `{}` |
|
||||
| `service.sessionAffinity` | Session Affinity for Kubernetes service, can be "None" or "ClientIP" | `None` |
|
||||
| `service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
|
||||
| `service.headless.annotations` | Annotations for the headless service. | `{}` |
|
||||
|
||||
### Persistence parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| -------------------------- | --------------------------------------------------------------- | ------------------- |
|
||||
| `persistence.enabled` | If true, use a Persistent Volume Claim. If false, use emptyDir. | `true` |
|
||||
| `persistence.storageClass` | Persistent Volume Storage Class | `""` |
|
||||
| `persistence.annotations` | Annotations for the PVC | `{}` |
|
||||
| `persistence.labels` | Labels for the PVC | `{}` |
|
||||
| `persistence.accessModes` | Persistent Volume Access Modes | `["ReadWriteOnce"]` |
|
||||
| `persistence.size` | PVC Storage Request for etcd data volume | `8Gi` |
|
||||
| `persistence.selector` | Selector to match an existing Persistent Volume | `{}` |
|
||||
|
||||
### Volume Permissions parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| -------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- | ----------------------- |
|
||||
| `volumePermissions.enabled` | Enable init container that changes the owner and group of the persistent volume(s) mountpoint to `runAsUser:fsGroup` | `false` |
|
||||
| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` |
|
||||
| `volumePermissions.image.repository` | Init container volume-permissions image name | `bitnami/bitnami-shell` |
|
||||
| `volumePermissions.image.tag` | Init container volume-permissions image tag | `11-debian-11-r118` |
|
||||
| `volumePermissions.image.digest` | Init container volume-permissions image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
|
||||
| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `IfNotPresent` |
|
||||
| `volumePermissions.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
|
||||
| `volumePermissions.resources.limits` | Init container volume-permissions resource limits | `{}` |
|
||||
| `volumePermissions.resources.requests` | Init container volume-permissions resource requests | `{}` |
|
||||
|
||||
### Network Policy parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| --------------------------------------- | ---------------------------------------------------------- | ------- |
|
||||
| `networkPolicy.enabled` | Enable creation of NetworkPolicy resources | `false` |
|
||||
| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
|
||||
| `networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
|
||||
| `networkPolicy.extraEgress`             | Add extra egress rules to the NetworkPolicy                 | `[]`    |
|
||||
| `networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces | `{}` |
|
||||
| `networkPolicy.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces | `{}` |
|
||||
|
||||
### Metrics parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ----------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- | ------------ |
|
||||
| `metrics.enabled` | Expose etcd metrics | `false` |
|
||||
| `metrics.podAnnotations` | Annotations for the Prometheus metrics on etcd pods | `{}` |
|
||||
| `metrics.podMonitor.enabled` | Create PodMonitor Resource for scraping metrics using PrometheusOperator | `false` |
|
||||
| `metrics.podMonitor.namespace` | Namespace in which Prometheus is running | `monitoring` |
|
||||
| `metrics.podMonitor.interval` | Specify the interval at which metrics should be scraped | `30s` |
|
||||
| `metrics.podMonitor.scrapeTimeout` | Specify the timeout after which the scrape is ended | `30s` |
|
||||
| `metrics.podMonitor.additionalLabels` | Additional labels that can be used so PodMonitors will be discovered by Prometheus | `{}` |
|
||||
| `metrics.podMonitor.scheme` | Scheme to use for scraping | `http` |
|
||||
| `metrics.podMonitor.tlsConfig` | TLS configuration used for scrape endpoints used by Prometheus | `{}` |
|
||||
| `metrics.podMonitor.relabelings` | Prometheus relabeling rules | `[]` |
|
||||
| `metrics.prometheusRule.enabled` | Create a Prometheus Operator PrometheusRule (also requires `metrics.enabled` to be `true` and `metrics.prometheusRule.rules`) | `false` |
|
||||
| `metrics.prometheusRule.namespace` | Namespace for the PrometheusRule Resource (defaults to the Release Namespace) | `""` |
|
||||
| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so PrometheusRule will be discovered by Prometheus | `{}` |
|
||||
| `metrics.prometheusRule.rules` | Prometheus Rule definitions | `[]` |
|
||||
|
||||
### Snapshotting parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ----------------------------------------------- | ----------------------------------------------------------------------- | -------------- |
|
||||
| `startFromSnapshot.enabled` | Initialize new cluster recovering an existing snapshot | `false` |
|
||||
| `startFromSnapshot.existingClaim` | Existing PVC containing the etcd snapshot | `""` |
|
||||
| `startFromSnapshot.snapshotFilename` | Snapshot filename | `""` |
|
||||
| `disasterRecovery.enabled` | Enable auto disaster recovery by periodically snapshotting the keyspace | `false` |
|
||||
| `disasterRecovery.cronjob.schedule` | Schedule in Cron format to save snapshots | `*/30 * * * *` |
|
||||
| `disasterRecovery.cronjob.historyLimit` | Number of successful finished jobs to retain | `1` |
|
||||
| `disasterRecovery.cronjob.snapshotHistoryLimit` | Number of etcd snapshots to retain, tagged by date | `1` |
|
||||
| `disasterRecovery.cronjob.snapshotsDir` | Directory to store snapshots | `/snapshots` |
|
||||
| `disasterRecovery.cronjob.podAnnotations` | Pod annotations for cronjob pods | `{}` |
|
||||
| `disasterRecovery.cronjob.resources.limits` | Cronjob container resource limits | `{}` |
|
||||
| `disasterRecovery.cronjob.resources.requests` | Cronjob container resource requests | `{}` |
|
||||
| `disasterRecovery.cronjob.nodeSelector` | Node labels for cronjob pods assignment | `{}` |
|
||||
| `disasterRecovery.cronjob.tolerations` | Tolerations for cronjob pods assignment | `[]` |
|
||||
| `disasterRecovery.pvc.existingClaim` | A manually managed Persistent Volume and Claim | `""` |
|
||||
| `disasterRecovery.pvc.size` | PVC Storage Request | `2Gi` |
|
||||
| `disasterRecovery.pvc.storageClassName` | Storage Class for snapshots volume | `nfs` |
|
||||
| `disasterRecovery.pvc.subPath` | Path within the volume from which to mount | `""` |
|
||||
|
||||
### Service account parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| --------------------------------------------- | ------------------------------------------------------------ | ------- |
|
||||
| `serviceAccount.create` | Enable/disable service account creation | `false` |
|
||||
| `serviceAccount.name` | Name of the service account to create or use | `""` |
|
||||
| `serviceAccount.automountServiceAccountToken` | Enable/disable auto mounting of service account token | `true` |
|
||||
| `serviceAccount.annotations` | Additional annotations to be included on the service account | `{}` |
|
||||
| `serviceAccount.labels` | Additional labels to be included on the service account | `{}` |
|
||||
|
||||
### Other parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| -------------------- | -------------------------------------------------------------- | ------ |
|
||||
| `pdb.create` | Enable/disable a Pod Disruption Budget creation | `true` |
|
||||
| `pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `51%` |
|
||||
| `pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable | `""` |
|
||||
|
||||
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

```console
helm install my-release \
  --set auth.rbac.rootPassword=secretpassword oci://registry-1.docker.io/bitnamicharts/etcd
```

The above command sets the etcd `root` account password to `secretpassword`.

> NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

```console
helm install my-release -f values.yaml oci://registry-1.docker.io/bitnamicharts/etcd
```

> **Tip**: You can use the default [values.yaml](values.yaml)
## Configuration and installation details

### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/)

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.
### Cluster configuration

The Bitnami etcd chart can be used to bootstrap an etcd cluster that is easy to scale and ships with features to implement disaster recovery.

Refer to the [chart documentation](https://docs.bitnami.com/kubernetes/infrastructure/etcd/get-started/understand-default-configuration/) for more information about all these details.
### Enable security for etcd

The etcd chart can be configured with Role-based access control and TLS encryption to improve its security.

[Learn more about security in the chart documentation](https://docs.bitnami.com/kubernetes/infrastructure/etcd/administration/enable-security/).
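For reference, here is a minimal `values.yaml` sketch that enables RBAC and client-to-server TLS using the `auth.*` parameters listed in the tables above; the password and the secret name `etcd-client-certs` are placeholders only:

```yaml
auth:
  rbac:
    create: true
    rootPassword: "my-root-password"    # placeholder; auth.rbac.existingSecret can be used instead
  client:
    secureTransport: true               # encrypt client-to-server communications
    enableAuthentication: true          # requires an existing secret with the certificates
    existingSecret: "etcd-client-certs" # placeholder: secret holding cert.pem, key.pem and the CA
```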
### Persistence

The [Bitnami etcd](https://github.com/bitnami/containers/tree/main/bitnami/etcd) image stores the etcd data at the `/bitnami/etcd` path of the container. Persistent Volume Claims are used to keep the data across statefulsets.

The chart mounts a [Persistent Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) at this location. The volume is created using dynamic volume provisioning by default. An existing PersistentVolumeClaim can also be defined for this purpose.

If you encounter errors when working with persistent volumes, refer to our [troubleshooting guide for persistent volumes](https://docs.bitnami.com/kubernetes/faq/troubleshooting/troubleshooting-persistence-volumes/).
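As an illustration, a `values.yaml` sketch using the persistence parameters from the table above; the storage class name is only an example:

```yaml
persistence:
  enabled: true
  storageClass: "standard"   # example; leave empty to use the cluster default storage class
  accessModes:
    - ReadWriteOnce
  size: 8Gi
```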
### Backup and restore the etcd keyspace

The Bitnami etcd chart provides mechanisms to bootstrap the etcd cluster by restoring an existing snapshot before initializing it.

[Learn more about backup/restore features in the chart documentation](https://docs.bitnami.com/kubernetes/infrastructure/etcd/administration/backup-restore/).
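For example, a sketch of the snapshot-related values from the tables above: periodic snapshots via the disaster-recovery cronjob, and bootstrapping a new cluster from a snapshot stored in an existing PVC (the claim and file names are placeholders):

```yaml
disasterRecovery:
  enabled: true
  cronjob:
    schedule: "*/30 * * * *"       # take a snapshot every 30 minutes

startFromSnapshot:
  enabled: true
  existingClaim: "etcd-snapshots"  # placeholder: PVC holding the snapshot
  snapshotFilename: "snapshot.db"  # placeholder: snapshot file inside the claim
```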
### Exposing etcd metrics

The metrics exposed by etcd can be scraped by Prometheus. This can be done by adding the required annotations for Prometheus to discover the metrics endpoints, or by creating a PodMonitor entry if you are using the Prometheus Operator.

[Learn more about exposing metrics in the chart documentation](https://docs.bitnami.com/kubernetes/infrastructure/etcd/administration/enable-metrics/).
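For instance, a minimal `values.yaml` sketch that enables the metrics endpoint and creates a PodMonitor for the Prometheus Operator, assuming Prometheus runs in the `monitoring` namespace:

```yaml
metrics:
  enabled: true
  podMonitor:
    enabled: true
    namespace: monitoring   # namespace where Prometheus is running
    interval: 30s
```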
### Using custom configuration

In order to use custom configuration parameters, two options are available:

- Using environment variables: etcd allows setting environment variables that map to configuration settings. In order to set extra environment variables, you can use the `extraEnvVars` property. Alternatively, you can use a ConfigMap or a Secret with the environment variables using the `extraEnvVarsCM` or the `extraEnvVarsSecret` properties.

```yaml
extraEnvVars:
  - name: ETCD_AUTO_COMPACTION_RETENTION
    value: "0"
  - name: ETCD_HEARTBEAT_INTERVAL
    value: "150"
```

- Using a custom `etcd.conf.yml`: The etcd chart allows mounting a custom `etcd.conf.yml` file as a ConfigMap. In order to do so, you can use the `configuration` property, as sketched below. Alternatively, you can use an existing ConfigMap using the `existingConfigmap` parameter.
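A minimal, hypothetical sketch of the `configuration` value follows; the keys shown use etcd's `etcd.conf.yml` format and are illustrative only:

```yaml
configuration: |-
  # Contents mounted as etcd.conf.yml (illustrative values)
  max-snapshots: 5
  max-wals: 5
  quota-backend-bytes: 4294967296
```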
### Auto Compaction

Since etcd keeps an exact history of its keyspace, this history should be periodically compacted to avoid performance degradation and eventual storage space exhaustion. Compacting the keyspace history drops all information about keys superseded prior to a given keyspace revision. The space used by these keys then becomes available for additional writes to the keyspace.

`autoCompactionMode` sets the auto compaction mode, by default periodic. Valid values: "periodic", "revision".

- 'periodic' for duration based retention, defaulting to hours if no time unit is provided (e.g. "5m").
- 'revision' for revision number based retention.

`autoCompactionRetention` sets the auto compaction retention for the mvcc key value store, in hours; by default 0, which means auto compaction is disabled.

You can enable auto compaction by using the following parameters:

```console
autoCompactionMode=periodic
autoCompactionRetention=10m
```
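Expressed as Helm values (for example in your `values.yaml`), the same settings would look like this sketch:

```yaml
autoCompactionMode: periodic
autoCompactionRetention: "10m"   # keep only the last 10 minutes of history
```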
### Sidecars and Init Containers

If you need additional containers to run within the same pod as the etcd app (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec.

```yaml
sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
```
Similarly, you can add extra init containers using the `initContainers` parameter.

```yaml
initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
```
### Deploying extra resources

There are cases where you may want to deploy extra objects, such as a ConfigMap containing your app's configuration, or an extra Deployment running a microservice used by your app. To cover this case, the chart allows adding the full specification of other objects using the `extraDeploy` parameter.
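For illustration, a sketch of `extraDeploy` shipping an extra ConfigMap with the release; the name and data are placeholders:

```yaml
extraDeploy:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-app-config        # placeholder name
    data:
      app.properties: |
        log.level=info
```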
### Setting Pod's affinity

This chart allows you to set your custom affinity using the `affinity` parameter. Find more information about Pod affinity in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity).

As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the [bitnami/common](https://github.com/bitnami/charts/tree/main/bitnami/common#affinities) chart. To do so, set the `podAffinityPreset`, `podAntiAffinityPreset`, or `nodeAffinityPreset` parameters.
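As a sketch, spreading the etcd replicas across nodes with the preset parameters listed above (the node label is just an example):

```yaml
podAntiAffinityPreset: hard      # never schedule two etcd pods on the same node
nodeAffinityPreset:
  type: soft
  key: "kubernetes.io/arch"      # example node label
  values:
    - amd64
```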
## Troubleshooting

Find more information about how to deal with common errors related to Bitnami's Helm charts in [this troubleshooting guide](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues).
## Upgrading

### To 8.0.0

This version reverts the change introduced in the previous major bump ([7.0.0](https://github.com/bitnami/charts/tree/main/bitnami/etcd#to-700)). The default `etcd` branch is `3.5` again, now that the [etcd developers](https://github.com/etcd-io/etcd/tree/main/CHANGELOG#production-recommendation) have confirmed this version is production-ready after the data corruption issue was solved.

### To 7.0.0

This version changes the default `etcd` branch to `3.4`, as suggested by the [etcd developers](https://github.com/etcd-io/etcd/tree/main/CHANGELOG#production-recommendation). In order to migrate the data, follow the official etcd instructions.
### To 6.0.0

This version introduces several features and performance improvements:

- The statefulset can now be scaled using the `kubectl scale` command. Using `helm upgrade` to recalculate the available endpoints is no longer needed.
- The scripts used for bootstrapping, runtime reconfiguration, and disaster recovery have been refactored and moved to the etcd container with two purposes: removing technical debt and improving stability.
- Several parameters were reorganized to simplify the structure and follow the same standard used on other Bitnami charts:
  - `etcd.initialClusterState` is renamed to `initialClusterState`.
  - `statefulset.replicaCount` is renamed to `replicaCount`.
  - `statefulset.podManagementPolicy` is renamed to `podManagementPolicy`.
  - `statefulset.updateStrategy` and `statefulset.rollingUpdatePartition` are merged into `updateStrategy`.
  - `securityContext.*` is deprecated in favor of `podSecurityContext` and `containerSecurityContext`.
  - `configFileConfigMap` is deprecated in favor of `configuration` and `existingConfigmap`.
  - `envVarsConfigMap` is deprecated in favor of `extraEnvVars`, `extraEnvVarsCM` and `extraEnvVarsSecret`.
  - `allowNoneAuthentication` is renamed to `auth.rbac.allowNoneAuthentication`.
- New parameters/features were added:
  - `extraDeploy` to deploy any extra desired object.
  - `initContainers` and `sidecars` to define custom init containers and sidecars.
  - `extraVolumes` and `extraVolumeMounts` to define custom volumes and mount points.
- Probes can now be customized, and support for startup probes is added.
- Lifecycle hooks can be customized using the `lifecycleHooks` parameter.
- The default command/args can be customized using the `command` and `args` parameters.
- Metrics integration with the Prometheus Operator no longer uses a ServiceMonitor object; it uses a PodMonitor instead.

Consequences:

- Backwards compatibility is not guaranteed unless you adapt your **values.yaml** according to the changes described above.
### To 5.2.0

This version introduces `bitnami/common`, a [library chart](https://helm.sh/docs/topics/library_charts/#helm), as a dependency. More documentation about this new utility can be found [here](https://github.com/bitnami/charts/tree/main/bitnami/common#bitnami-common-library-chart). Please make sure that you have updated the chart dependencies before executing any upgrade.
### To 5.0.0

[On November 13, 2020, Helm v2 support formally ended](https://github.com/helm/charts#status-of-the-project). This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.

[Learn more about this change and related upgrade considerations](https://docs.bitnami.com/kubernetes/infrastructure/etcd/administration/upgrade-helm3/).
### To 4.4.14

In this release we addressed a vulnerability that showed the `ETCD_ROOT_PASSWORD` environment variable in the application logs. Users are advised to update immediately. More information in [this issue](https://github.com/bitnami/charts/issues/1901).
### To 3.0.0

Backwards compatibility is not guaranteed. The following notable changes were included:

- **etcdctl** uses the v3 API.
- Adds support for auto disaster recovery.
- Labels are adapted to follow the Helm charts best practices.

To upgrade from previous chart versions, create a snapshot of the keyspace and restore it in a new etcd cluster. Only v3 API data can be restored.
You can use the command below to upgrade your chart by starting a new cluster using an existing snapshot, available in an existing PVC, to initialize the members:

```console
helm install new-release oci://registry-1.docker.io/bitnamicharts/etcd \
  --set statefulset.replicaCount=3 \
  --set persistence.enabled=true \
  --set persistence.size=8Gi \
  --set startFromSnapshot.enabled=true \
  --set startFromSnapshot.existingClaim=my-claim \
  --set startFromSnapshot.snapshotFilename=my-snapshot.db
```
### To 1.0.0

Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments.
Use the workaround below to upgrade from versions previous to 1.0.0. The following example assumes that the release name is etcd:

```console
kubectl delete statefulset etcd --cascade=false
```
## License

Copyright © 2023 Bitnami

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

<http://www.apache.org/licenses/LICENSE-2.0>

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
22
deployment/helm/milvus/charts/etcd/charts/common/.helmignore
Normal file
22
deployment/helm/milvus/charts/etcd/charts/common/.helmignore
Normal file
@@ -0,0 +1,22 @@
|
||||
# Patterns to ignore when building packages.
|
||||
# This supports shell glob matching, relative path matching, and
|
||||
# negation (prefixed with !). Only one pattern per line.
|
||||
.DS_Store
|
||||
# Common VCS dirs
|
||||
.git/
|
||||
.gitignore
|
||||
.bzr/
|
||||
.bzrignore
|
||||
.hg/
|
||||
.hgignore
|
||||
.svn/
|
||||
# Common backup files
|
||||
*.swp
|
||||
*.bak
|
||||
*.tmp
|
||||
*~
|
||||
# Various IDEs
|
||||
.project
|
||||
.idea/
|
||||
*.tmproj
|
||||
.vscode/
|
||||
23
deployment/helm/milvus/charts/etcd/charts/common/Chart.yaml
Normal file
23
deployment/helm/milvus/charts/etcd/charts/common/Chart.yaml
Normal file
@@ -0,0 +1,23 @@
|
||||
annotations:
|
||||
category: Infrastructure
|
||||
licenses: Apache-2.0
|
||||
apiVersion: v2
|
||||
appVersion: 2.4.0
|
||||
description: A Library Helm Chart for grouping common logic between bitnami charts.
|
||||
This chart is not deployable by itself.
|
||||
home: https://bitnami.com
|
||||
icon: https://bitnami.com/downloads/logos/bitnami-mark.png
|
||||
keywords:
|
||||
- common
|
||||
- helper
|
||||
- template
|
||||
- function
|
||||
- bitnami
|
||||
maintainers:
|
||||
- name: VMware, Inc.
|
||||
url: https://github.com/bitnami/charts
|
||||
name: common
|
||||
sources:
|
||||
- https://github.com/bitnami/charts
|
||||
type: library
|
||||
version: 2.4.0
|
||||
235
deployment/helm/milvus/charts/etcd/charts/common/README.md
Normal file
235
deployment/helm/milvus/charts/etcd/charts/common/README.md
Normal file
@@ -0,0 +1,235 @@
|
||||
# Bitnami Common Library Chart
|
||||
|
||||
A [Helm Library Chart](https://helm.sh/docs/topics/library_charts/#helm) for grouping common logic between Bitnami charts.
|
||||
|
||||
Looking to use our applications in production? Try [VMware Application Catalog](https://bitnami.com/enterprise), the enterprise edition of Bitnami Application Catalog.
|
||||
|
||||
## TL;DR
|
||||
|
||||
```yaml
|
||||
dependencies:
|
||||
- name: common
|
||||
version: 1.x.x
|
||||
repository: oci://registry-1.docker.io/bitnamicharts
|
||||
```
|
||||
|
||||
```console
|
||||
helm dependency update
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: {{ include "common.names.fullname" . }}
|
||||
data:
|
||||
myvalue: "Hello World"
|
||||
```
|
||||
|
||||
## Introduction
|
||||
|
||||
This chart provides common template helpers which can be used to develop new charts using the [Helm](https://helm.sh) package manager.
|
||||
|
||||
Bitnami charts can be used with [Kubeapps](https://kubeapps.dev/) for deployment and management of Helm Charts in clusters.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Kubernetes 1.19+
|
||||
- Helm 3.2.0+
|
||||
|
||||
## Parameters
|
||||
|
||||
## Special input schemas
|
||||
|
||||
### ImageRoot
|
||||
|
||||
```yaml
|
||||
registry:
|
||||
type: string
|
||||
description: Docker registry where the image is located
|
||||
example: docker.io
|
||||
|
||||
repository:
|
||||
type: string
|
||||
description: Repository and image name
|
||||
example: bitnami/nginx
|
||||
|
||||
tag:
|
||||
type: string
|
||||
description: image tag
|
||||
example: 1.16.1-debian-10-r63
|
||||
|
||||
pullPolicy:
|
||||
type: string
|
||||
description: Specify an imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
|
||||
|
||||
pullSecrets:
|
||||
type: array
|
||||
items:
|
||||
type: string
|
||||
description: Optionally specify an array of imagePullSecrets (evaluated as templates).
|
||||
|
||||
debug:
|
||||
type: boolean
|
||||
description: Set to true if you would like to see extra information on logs
|
||||
example: false
|
||||
|
||||
## An instance would be:
|
||||
# registry: docker.io
|
||||
# repository: bitnami/nginx
|
||||
# tag: 1.16.1-debian-10-r63
|
||||
# pullPolicy: IfNotPresent
|
||||
# debug: false
|
||||
```
|
||||
|
||||
### Persistence
|
||||
|
||||
```yaml
|
||||
enabled:
|
||||
type: boolean
|
||||
description: Whether to enable persistence.
|
||||
example: true
|
||||
|
||||
storageClass:
|
||||
type: string
|
||||
description: Ghost data Persistent Volume Storage Class, If set to "-", storageClassName: "" which disables dynamic provisioning.
|
||||
example: "-"
|
||||
|
||||
accessMode:
|
||||
type: string
|
||||
description: Access mode for the Persistent Volume Storage.
|
||||
example: ReadWriteOnce
|
||||
|
||||
size:
|
||||
type: string
|
||||
description: Size the Persistent Volume Storage.
|
||||
example: 8Gi
|
||||
|
||||
path:
|
||||
type: string
|
||||
description: Path to be persisted.
|
||||
example: /bitnami
|
||||
|
||||
## An instance would be:
|
||||
# enabled: true
|
||||
# storageClass: "-"
|
||||
# accessMode: ReadWriteOnce
|
||||
# size: 8Gi
|
||||
# path: /bitnami
|
||||
```
|
||||
|
||||
### ExistingSecret
|
||||
|
||||
```yaml
|
||||
name:
|
||||
type: string
|
||||
description: Name of the existing secret.
|
||||
example: mySecret
|
||||
keyMapping:
|
||||
description: Mapping between the expected key name and the name of the key in the existing secret.
|
||||
type: object
|
||||
|
||||
## An instance would be:
|
||||
# name: mySecret
|
||||
# keyMapping:
|
||||
# password: myPasswordKey
|
||||
```
|
||||
|
||||
#### Example of use
|
||||
|
||||
When we store sensitive data for a deployment in a secret, sometimes we want to give users the possibility of using their existing secrets.
|
||||
|
||||
```yaml
|
||||
# templates/secret.yaml
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: {{ include "common.names.fullname" . }}
|
||||
labels:
|
||||
app: {{ include "common.names.fullname" . }}
|
||||
type: Opaque
|
||||
data:
|
||||
password: {{ .Values.password | b64enc | quote }}
|
||||
|
||||
# templates/dpl.yaml
|
||||
---
|
||||
...
|
||||
env:
|
||||
- name: PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: {{ include "common.secrets.name" (dict "existingSecret" .Values.existingSecret "context" $) }}
|
||||
key: {{ include "common.secrets.key" (dict "existingSecret" .Values.existingSecret "key" "password") }}
|
||||
...
|
||||
|
||||
# values.yaml
|
||||
---
|
||||
name: mySecret
|
||||
keyMapping:
|
||||
password: myPasswordKey
|
||||
```
|
||||
|
||||
### ValidateValue
|
||||
|
||||
#### NOTES.txt
|
||||
|
||||
```console
|
||||
{{- $validateValueConf00 := (dict "valueKey" "path.to.value00" "secret" "secretName" "field" "password-00") -}}
|
||||
{{- $validateValueConf01 := (dict "valueKey" "path.to.value01" "secret" "secretName" "field" "password-01") -}}
|
||||
|
||||
{{ include "common.validations.values.multiple.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }}
|
||||
```
|
||||
|
||||
If we force those values to be empty, we will see errors like the following:
|
||||
|
||||
```console
|
||||
helm install test mychart --set path.to.value00="",path.to.value01=""
|
||||
'path.to.value00' must not be empty, please add '--set path.to.value00=$PASSWORD_00' to the command. To get the current value:
|
||||
|
||||
export PASSWORD_00=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-00}" | base64 -d)
|
||||
|
||||
'path.to.value01' must not be empty, please add '--set path.to.value01=$PASSWORD_01' to the command. To get the current value:
|
||||
|
||||
export PASSWORD_01=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-01}" | base64 -d)
|
||||
```
|
||||
|
||||
## Upgrading
|
||||
|
||||
### To 1.0.0
|
||||
|
||||
[On November 13, 2020, Helm v2 support formally ended](https://github.com/helm/charts#status-of-the-project). This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.
|
||||
|
||||
#### What changes were introduced in this major version?
|
||||
|
||||
- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3), this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field.
|
||||
- Use `type: library`. [Here](https://v3.helm.sh/docs/faq/#library-chart-support) you can find more information.
|
||||
- The different fields present in the *Chart.yaml* file have been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts
|
||||
|
||||
#### Considerations when upgrading to this version
|
||||
|
||||
- If you want to upgrade to this version from a previous one installed with Helm v3, you shouldn't face any issues
- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore
- If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3
|
||||
|
||||
#### Useful links
|
||||
|
||||
- <https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/>
|
||||
- <https://helm.sh/docs/topics/v2_v3_migration/>
|
||||
- <https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/>
|
||||
|
||||
## License
|
||||
|
||||
Copyright © 2023 Bitnami
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
<http://www.apache.org/licenses/LICENSE-2.0>
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
@@ -0,0 +1,106 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
|
||||
{{/*
|
||||
Return a soft nodeAffinity definition
|
||||
{{ include "common.affinities.nodes.soft" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.nodes.soft" -}}
|
||||
preferredDuringSchedulingIgnoredDuringExecution:
|
||||
- preference:
|
||||
matchExpressions:
|
||||
- key: {{ .key }}
|
||||
operator: In
|
||||
values:
|
||||
{{- range .values }}
|
||||
- {{ . | quote }}
|
||||
{{- end }}
|
||||
weight: 1
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a hard nodeAffinity definition
|
||||
{{ include "common.affinities.nodes.hard" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.nodes.hard" -}}
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
nodeSelectorTerms:
|
||||
- matchExpressions:
|
||||
- key: {{ .key }}
|
||||
operator: In
|
||||
values:
|
||||
{{- range .values }}
|
||||
- {{ . | quote }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a nodeAffinity definition
|
||||
{{ include "common.affinities.nodes" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.nodes" -}}
|
||||
{{- if eq .type "soft" }}
|
||||
{{- include "common.affinities.nodes.soft" . -}}
|
||||
{{- else if eq .type "hard" }}
|
||||
{{- include "common.affinities.nodes.hard" . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a topologyKey definition
|
||||
{{ include "common.affinities.topologyKey" (dict "topologyKey" "BAR") -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.topologyKey" -}}
|
||||
{{ .topologyKey | default "kubernetes.io/hostname" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a soft podAffinity/podAntiAffinity definition
|
||||
{{ include "common.affinities.pods.soft" (dict "component" "FOO" "extraMatchLabels" .Values.extraMatchLabels "topologyKey" "BAR" "context" $) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.pods.soft" -}}
|
||||
{{- $component := default "" .component -}}
|
||||
{{- $extraMatchLabels := default (dict) .extraMatchLabels -}}
|
||||
preferredDuringSchedulingIgnoredDuringExecution:
|
||||
- podAffinityTerm:
|
||||
labelSelector:
|
||||
matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 10 }}
|
||||
{{- if not (empty $component) }}
|
||||
{{ printf "app.kubernetes.io/component: %s" $component }}
|
||||
{{- end }}
|
||||
{{- range $key, $value := $extraMatchLabels }}
|
||||
{{ $key }}: {{ $value | quote }}
|
||||
{{- end }}
|
||||
topologyKey: {{ include "common.affinities.topologyKey" (dict "topologyKey" .topologyKey) }}
|
||||
weight: 1
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a hard podAffinity/podAntiAffinity definition
|
||||
{{ include "common.affinities.pods.hard" (dict "component" "FOO" "extraMatchLabels" .Values.extraMatchLabels "topologyKey" "BAR" "context" $) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.pods.hard" -}}
|
||||
{{- $component := default "" .component -}}
|
||||
{{- $extraMatchLabels := default (dict) .extraMatchLabels -}}
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
- labelSelector:
|
||||
matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 8 }}
|
||||
{{- if not (empty $component) }}
|
||||
{{ printf "app.kubernetes.io/component: %s" $component }}
|
||||
{{- end }}
|
||||
{{- range $key, $value := $extraMatchLabels }}
|
||||
{{ $key }}: {{ $value | quote }}
|
||||
{{- end }}
|
||||
topologyKey: {{ include "common.affinities.topologyKey" (dict "topologyKey" .topologyKey) }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a podAffinity/podAntiAffinity definition
|
||||
{{ include "common.affinities.pods" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.pods" -}}
|
||||
{{- if eq .type "soft" }}
|
||||
{{- include "common.affinities.pods.soft" . -}}
|
||||
{{- else if eq .type "hard" }}
|
||||
{{- include "common.affinities.pods.hard" . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,180 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
|
||||
{{/*
|
||||
Return the target Kubernetes version
|
||||
*/}}
|
||||
{{- define "common.capabilities.kubeVersion" -}}
|
||||
{{- if .Values.global }}
|
||||
{{- if .Values.global.kubeVersion }}
|
||||
{{- .Values.global.kubeVersion -}}
|
||||
{{- else }}
|
||||
{{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}}
|
||||
{{- end -}}
|
||||
{{- else }}
|
||||
{{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for poddisruptionbudget.
|
||||
*/}}
|
||||
{{- define "common.capabilities.policy.apiVersion" -}}
|
||||
{{- if semverCompare "<1.21-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "policy/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "policy/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for networkpolicy.
|
||||
*/}}
|
||||
{{- define "common.capabilities.networkPolicy.apiVersion" -}}
|
||||
{{- if semverCompare "<1.7-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "networking.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for cronjob.
|
||||
*/}}
|
||||
{{- define "common.capabilities.cronjob.apiVersion" -}}
|
||||
{{- if semverCompare "<1.21-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "batch/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "batch/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for daemonset.
|
||||
*/}}
|
||||
{{- define "common.capabilities.daemonset.apiVersion" -}}
|
||||
{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "apps/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for deployment.
|
||||
*/}}
|
||||
{{- define "common.capabilities.deployment.apiVersion" -}}
|
||||
{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "apps/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for statefulset.
|
||||
*/}}
|
||||
{{- define "common.capabilities.statefulset.apiVersion" -}}
|
||||
{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "apps/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "apps/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for ingress.
|
||||
*/}}
|
||||
{{- define "common.capabilities.ingress.apiVersion" -}}
|
||||
{{- if .Values.ingress -}}
|
||||
{{- if .Values.ingress.apiVersion -}}
|
||||
{{- .Values.ingress.apiVersion -}}
|
||||
{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "networking.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "networking.k8s.io/v1" -}}
|
||||
{{- end }}
|
||||
{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "networking.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "networking.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for RBAC resources.
|
||||
*/}}
|
||||
{{- define "common.capabilities.rbac.apiVersion" -}}
|
||||
{{- if semverCompare "<1.17-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "rbac.authorization.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "rbac.authorization.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for CRDs.
|
||||
*/}}
|
||||
{{- define "common.capabilities.crd.apiVersion" -}}
|
||||
{{- if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "apiextensions.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "apiextensions.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for APIService.
|
||||
*/}}
|
||||
{{- define "common.capabilities.apiService.apiVersion" -}}
|
||||
{{- if semverCompare "<1.10-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "apiregistration.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "apiregistration.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for Horizontal Pod Autoscaler.
|
||||
*/}}
|
||||
{{- define "common.capabilities.hpa.apiVersion" -}}
|
||||
{{- if semverCompare "<1.23-0" (include "common.capabilities.kubeVersion" .context) -}}
|
||||
{{- if .beta2 -}}
|
||||
{{- print "autoscaling/v2beta2" -}}
|
||||
{{- else -}}
|
||||
{{- print "autoscaling/v2beta1" -}}
|
||||
{{- end -}}
|
||||
{{- else -}}
|
||||
{{- print "autoscaling/v2" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for Vertical Pod Autoscaler.
|
||||
*/}}
|
||||
{{- define "common.capabilities.vpa.apiVersion" -}}
|
||||
{{- if semverCompare "<1.23-0" (include "common.capabilities.kubeVersion" .context) -}}
|
||||
{{- if .beta2 -}}
|
||||
{{- print "autoscaling/v2beta2" -}}
|
||||
{{- else -}}
|
||||
{{- print "autoscaling/v2beta1" -}}
|
||||
{{- end -}}
|
||||
{{- else -}}
|
||||
{{- print "autoscaling/v2" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Returns true if the used Helm version is 3.3+.
|
||||
A way to check the used Helm version was not introduced until version 3.3.0 with .Capabilities.HelmVersion, which contains an additional "{}}" structure.
This check is introduced as a regexMatch instead of {{ if .Capabilities.HelmVersion }} because checking for the key HelmVersion in <3.3 results in an "interface not found" error.
**To be removed when the catalog's minimum Helm version is 3.3**
|
||||
*/}}
|
||||
{{- define "common.capabilities.supportsHelmVersion" -}}
|
||||
{{- if regexMatch "{(v[0-9])*[^}]*}}$" (.Capabilities | toString ) }}
|
||||
{{- true -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,23 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Throw an error when upgrading using empty password values that must not be empty.
|
||||
|
||||
Usage:
|
||||
{{- $validationError00 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password00" "secret" "secretName" "field" "password-00") -}}
|
||||
{{- $validationError01 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password01" "secret" "secretName" "field" "password-01") -}}
|
||||
{{ include "common.errors.upgrade.passwords.empty" (dict "validationErrors" (list $validationError00 $validationError01) "context" $) }}
|
||||
|
||||
Required password params:
|
||||
- validationErrors - String - Required. List of validation strings to be returned; if it is empty it won't throw an error.
|
||||
- context - Context - Required. Parent context.
|
||||
*/}}
|
||||
{{- define "common.errors.upgrade.passwords.empty" -}}
|
||||
{{- $validationErrors := join "" .validationErrors -}}
|
||||
{{- if and $validationErrors .context.Release.IsUpgrade -}}
|
||||
{{- $errorString := "\nPASSWORDS ERROR: You must provide your current passwords when upgrading the release." -}}
|
||||
{{- $errorString = print $errorString "\n Note that even after reinstallation, old credentials may be needed as they may be kept in persistent volume claims." -}}
|
||||
{{- $errorString = print $errorString "\n Further information can be obtained at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases" -}}
|
||||
{{- $errorString = print $errorString "\n%s" -}}
|
||||
{{- printf $errorString $validationErrors | fail -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,80 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Return the proper image name
|
||||
{{ include "common.images.image" ( dict "imageRoot" .Values.path.to.the.image "global" .Values.global ) }}
|
||||
*/}}
|
||||
{{- define "common.images.image" -}}
|
||||
{{- $registryName := .imageRoot.registry -}}
|
||||
{{- $repositoryName := .imageRoot.repository -}}
|
||||
{{- $separator := ":" -}}
|
||||
{{- $termination := .imageRoot.tag | toString -}}
|
||||
{{- if .global }}
|
||||
{{- if .global.imageRegistry }}
|
||||
{{- $registryName = .global.imageRegistry -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- if .imageRoot.digest }}
|
||||
{{- $separator = "@" -}}
|
||||
{{- $termination = .imageRoot.digest | toString -}}
|
||||
{{- end -}}
|
||||
{{- if $registryName }}
|
||||
{{- printf "%s/%s%s%s" $registryName $repositoryName $separator $termination -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s%s%s" $repositoryName $separator $termination -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
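{{/*
Illustrative example for the image-name helper above (a sketch, not part of the upstream file;
the registry/repository/tag values are assumptions): with values such as
  image:
    registry: docker.io
    repository: bitnami/etcd
    tag: 3.5.9
a call like {{ include "common.images.image" (dict "imageRoot" .Values.image "global" .Values.global) }}
renders "docker.io/bitnami/etcd:3.5.9". If "digest" is set, the separator becomes "@" and the digest
replaces the tag, and a non-empty "global.imageRegistry" overrides the per-image registry.
*/}}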
|
||||
|
||||
{{/*
|
||||
Return the proper Docker Image Registry Secret Names (deprecated: use common.images.renderPullSecrets instead)
|
||||
{{ include "common.images.pullSecrets" ( dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global) }}
|
||||
*/}}
|
||||
{{- define "common.images.pullSecrets" -}}
|
||||
{{- $pullSecrets := list }}
|
||||
|
||||
{{- if .global }}
|
||||
{{- range .global.imagePullSecrets -}}
|
||||
{{- $pullSecrets = append $pullSecrets . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- range .images -}}
|
||||
{{- range .pullSecrets -}}
|
||||
{{- $pullSecrets = append $pullSecrets . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if (not (empty $pullSecrets)) }}
|
||||
imagePullSecrets:
|
||||
{{- range $pullSecrets | uniq }}
|
||||
- name: {{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the proper Docker Image Registry Secret Names evaluating values as templates
|
||||
{{ include "common.images.renderPullSecrets" ( dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.images.renderPullSecrets" -}}
|
||||
{{- $pullSecrets := list }}
|
||||
{{- $context := .context }}
|
||||
|
||||
{{- if $context.Values.global }}
|
||||
{{- range $context.Values.global.imagePullSecrets -}}
|
||||
{{- $pullSecrets = append $pullSecrets (include "common.tplvalues.render" (dict "value" . "context" $context)) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- range .images -}}
|
||||
{{- range .pullSecrets -}}
|
||||
{{- $pullSecrets = append $pullSecrets (include "common.tplvalues.render" (dict "value" . "context" $context)) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if (not (empty $pullSecrets)) }}
|
||||
imagePullSecrets:
|
||||
{{- range $pullSecrets | uniq }}
|
||||
- name: {{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end -}}
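{{/*
Illustrative example for the two pull-secret helpers above (a sketch, not part of the upstream file;
the secret names are assumptions): with values such as
  global:
    imagePullSecrets:
      - "{{ .Release.Name }}-registry"
  image:
    pullSecrets:
      - my-registry-secret
the rendered output is an "imagePullSecrets:" block with one "- name:" entry per unique secret;
"common.images.renderPullSecrets" additionally evaluates each entry as a template (so the
release-name reference above is expanded), while the deprecated "common.images.pullSecrets"
uses the entries verbatim.
*/}}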
|
||||
@@ -0,0 +1,68 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
|
||||
{{/*
|
||||
Generate backend entry that is compatible with all Kubernetes API versions.
|
||||
|
||||
Usage:
|
||||
{{ include "common.ingress.backend" (dict "serviceName" "backendName" "servicePort" "backendPort" "context" $) }}
|
||||
|
||||
Params:
|
||||
- serviceName - String. Name of an existing service backend
|
||||
- servicePort - String/Int. Port name (or number) of the service. It will be translated to different YAML depending on whether it is a string or an integer.
|
||||
- context - Dict - Required. The context for the template evaluation.
|
||||
*/}}
|
||||
{{- define "common.ingress.backend" -}}
|
||||
{{- $apiVersion := (include "common.capabilities.ingress.apiVersion" .context) -}}
|
||||
{{- if or (eq $apiVersion "extensions/v1beta1") (eq $apiVersion "networking.k8s.io/v1beta1") -}}
|
||||
serviceName: {{ .serviceName }}
|
||||
servicePort: {{ .servicePort }}
|
||||
{{- else -}}
|
||||
service:
|
||||
name: {{ .serviceName }}
|
||||
port:
|
||||
{{- if typeIs "string" .servicePort }}
|
||||
name: {{ .servicePort }}
|
||||
{{- else if or (typeIs "int" .servicePort) (typeIs "float64" .servicePort) }}
|
||||
number: {{ .servicePort | int }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
{{- end -}}
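{{/*
Illustrative rendering of the backend helper above (a sketch, not part of the upstream file;
the service name and port are assumptions): on clusters using "networking.k8s.io/v1" a call such as
  {{ include "common.ingress.backend" (dict "serviceName" "my-svc" "servicePort" 8080 "context" $) }}
produces
  service:
    name: my-svc
    port:
      number: 8080
whereas on legacy "extensions/v1beta1" or "networking.k8s.io/v1beta1" clusters it produces the flat
"serviceName:" / "servicePort:" form instead.
*/}}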
|
||||
|
||||
{{/*
|
||||
Print "true" if the API pathType field is supported
|
||||
Usage:
|
||||
{{ include "common.ingress.supportsPathType" . }}
|
||||
*/}}
|
||||
{{- define "common.ingress.supportsPathType" -}}
|
||||
{{- if (semverCompare "<1.18-0" (include "common.capabilities.kubeVersion" .)) -}}
|
||||
{{- print "false" -}}
|
||||
{{- else -}}
|
||||
{{- print "true" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Returns true if the ingressClassname field is supported
|
||||
Usage:
|
||||
{{ include "common.ingress.supportsIngressClassname" . }}
|
||||
*/}}
|
||||
{{- define "common.ingress.supportsIngressClassname" -}}
|
||||
{{- if semverCompare "<1.18-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "false" -}}
|
||||
{{- else -}}
|
||||
{{- print "true" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return true if cert-manager required annotations for TLS signed
|
||||
certificates are set in the Ingress annotations
|
||||
Ref: https://cert-manager.io/docs/usage/ingress/#supported-annotations
|
||||
Usage:
|
||||
{{ include "common.ingress.certManagerRequest" ( dict "annotations" .Values.path.to.the.ingress.annotations ) }}
|
||||
*/}}
|
||||
{{- define "common.ingress.certManagerRequest" -}}
|
||||
{{ if or (hasKey .annotations "cert-manager.io/cluster-issuer") (hasKey .annotations "cert-manager.io/issuer") (hasKey .annotations "kubernetes.io/tls-acme") }}
|
||||
{{- true -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
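{{/*
Illustrative example for the cert-manager check above (a sketch, not part of the upstream file;
the issuer name is hypothetical): ingress annotations such as
  cert-manager.io/cluster-issuer: letsencrypt-prod
or the presence of "cert-manager.io/issuer" or "kubernetes.io/tls-acme" make the helper return
"true", which charts typically use to decide whether the TLS secret will be provisioned by
cert-manager instead of being created by the chart itself.
*/}}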
|
||||
@@ -0,0 +1,18 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Kubernetes standard labels
*/}}
{{- define "common.labels.standard" -}}
app.kubernetes.io/name: {{ include "common.names.name" . }}
helm.sh/chart: {{ include "common.names.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}

{{/*
Labels to use on deploy.spec.selector.matchLabels and svc.spec.selector
*/}}
{{- define "common.labels.matchLabels" -}}
app.kubernetes.io/name: {{ include "common.names.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
@@ -0,0 +1,66 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Expand the name of the chart.
|
||||
*/}}
|
||||
{{- define "common.names.name" -}}
|
||||
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create chart name and version as used by the chart label.
|
||||
*/}}
|
||||
{{- define "common.names.chart" -}}
|
||||
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create a default fully qualified app name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
If release name contains chart name it will be used as a full name.
|
||||
*/}}
|
||||
{{- define "common.names.fullname" -}}
|
||||
{{- if .Values.fullnameOverride -}}
|
||||
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- $name := default .Chart.Name .Values.nameOverride -}}
|
||||
{{- if contains $name .Release.Name -}}
|
||||
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
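{{/*
Illustrative example for the fullname helper above (a sketch, not part of the upstream file;
the release and chart names are assumptions): for a release named "my-release" of a chart named
"etcd" the helper renders "my-release-etcd"; if the release were named "etcd-prod" (already
containing the chart name) it renders just "etcd-prod"; and "fullnameOverride" always wins when
set. Results are truncated to 63 characters to satisfy the DNS naming limit.
*/}}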
|
||||
|
||||
{{/*
|
||||
Create a default fully qualified dependency name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
If release name contains chart name it will be used as a full name.
|
||||
Usage:
|
||||
{{ include "common.names.dependency.fullname" (dict "chartName" "dependency-chart-name" "chartValues" .Values.dependency-chart "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.names.dependency.fullname" -}}
|
||||
{{- if .chartValues.fullnameOverride -}}
|
||||
{{- .chartValues.fullnameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- $name := default .chartName .chartValues.nameOverride -}}
|
||||
{{- if contains $name .context.Release.Name -}}
|
||||
{{- .context.Release.Name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-%s" .context.Release.Name $name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Allow the release namespace to be overridden for multi-namespace deployments in combined charts.
|
||||
*/}}
|
||||
{{- define "common.names.namespace" -}}
|
||||
{{- default .Release.Namespace .Values.namespaceOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create a fully qualified app name adding the installation's namespace.
|
||||
*/}}
|
||||
{{- define "common.names.fullname.namespace" -}}
|
||||
{{- printf "%s-%s" (include "common.names.fullname" .) (include "common.names.namespace" .) | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,165 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Generate secret name.
|
||||
|
||||
Usage:
|
||||
{{ include "common.secrets.name" (dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $) }}
|
||||
|
||||
Params:
|
||||
- existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user
|
||||
to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility.
|
||||
+info: https://github.com/bitnami/charts/tree/main/bitnami/common#existingsecret
|
||||
- defaultNameSuffix - String - Optional. It is used only if we have several secrets in the same deployment.
|
||||
- context - Dict - Required. The context for the template evaluation.
|
||||
*/}}
|
||||
{{- define "common.secrets.name" -}}
|
||||
{{- $name := (include "common.names.fullname" .context) -}}
|
||||
|
||||
{{- if .defaultNameSuffix -}}
|
||||
{{- $name = printf "%s-%s" $name .defaultNameSuffix | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- with .existingSecret -}}
|
||||
{{- if not (typeIs "string" .) -}}
|
||||
{{- with .name -}}
|
||||
{{- $name = . -}}
|
||||
{{- end -}}
|
||||
{{- else -}}
|
||||
{{- $name = . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- printf "%s" $name -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Generate secret key.
|
||||
|
||||
Usage:
|
||||
{{ include "common.secrets.key" (dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName") }}
|
||||
|
||||
Params:
|
||||
- existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user
|
||||
to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility.
|
||||
+info: https://github.com/bitnami/charts/tree/main/bitnami/common#existingsecret
|
||||
- key - String - Required. Name of the key in the secret.
|
||||
*/}}
|
||||
{{- define "common.secrets.key" -}}
|
||||
{{- $key := .key -}}
|
||||
|
||||
{{- if .existingSecret -}}
|
||||
{{- if not (typeIs "string" .existingSecret) -}}
|
||||
{{- if .existingSecret.keyMapping -}}
|
||||
{{- $key = index .existingSecret.keyMapping $.key -}}
|
||||
{{- end -}}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{- printf "%s" $key -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Generate secret password or retrieve one if already created.
|
||||
|
||||
Usage:
|
||||
{{ include "common.secrets.passwords.manage" (dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $) }}
|
||||
|
||||
Params:
|
||||
- secret - String - Required - Name of the 'Secret' resource where the password is stored.
|
||||
- key - String - Required - Name of the key in the secret.
|
||||
- providedValues - List<String> - Required - The path to the validating value in the values.yaml, e.g: "mysql.password". Will pick first parameter with a defined value.
|
||||
- length - int - Optional - Length of the generated random password.
|
||||
- strong - Boolean - Optional - Whether to add symbols to the generated random password.
|
||||
- chartName - String - Optional - Name of the chart used when said chart is deployed as a subchart.
|
||||
- context - Context - Required - Parent context.
|
||||
|
||||
The order in which this function returns a secret password:
|
||||
1. Already existing 'Secret' resource
|
||||
(If a 'Secret' resource is found under the name provided to the 'secret' parameter to this function and that 'Secret' resource contains a key with the name passed as the 'key' parameter to this function then the value of this existing secret password will be returned)
|
||||
2. Password provided via the values.yaml
|
||||
(If one of the keys passed to the 'providedValues' parameter to this function is a valid path to a key in the values.yaml and has a value, the value of the first key with a value will be returned)
|
||||
3. Randomly generated secret password
|
||||
(A new random secret password with the length specified in the 'length' parameter will be generated and returned)
|
||||
|
||||
*/}}
|
||||
{{- define "common.secrets.passwords.manage" -}}
|
||||
|
||||
{{- $password := "" }}
|
||||
{{- $subchart := "" }}
|
||||
{{- $chartName := default "" .chartName }}
|
||||
{{- $passwordLength := default 10 .length }}
|
||||
{{- $providedPasswordKey := include "common.utils.getKeyFromList" (dict "keys" .providedValues "context" $.context) }}
|
||||
{{- $providedPasswordValue := include "common.utils.getValueFromKey" (dict "key" $providedPasswordKey "context" $.context) }}
|
||||
{{- $secretData := (lookup "v1" "Secret" (include "common.names.namespace" .context) .secret).data }}
|
||||
{{- if $secretData }}
|
||||
{{- if hasKey $secretData .key }}
|
||||
{{- $password = index $secretData .key | quote }}
|
||||
{{- else }}
|
||||
{{- printf "\nPASSWORDS ERROR: The secret \"%s\" does not contain the key \"%s\"\n" .secret .key | fail -}}
|
||||
{{- end -}}
|
||||
{{- else if $providedPasswordValue }}
|
||||
{{- $password = $providedPasswordValue | toString | b64enc | quote }}
|
||||
{{- else }}
|
||||
|
||||
{{- if .context.Values.enabled }}
|
||||
{{- $subchart = $chartName }}
|
||||
{{- end -}}
|
||||
|
||||
{{- $requiredPassword := dict "valueKey" $providedPasswordKey "secret" .secret "field" .key "subchart" $subchart "context" $.context -}}
|
||||
{{- $requiredPasswordError := include "common.validations.values.single.empty" $requiredPassword -}}
|
||||
{{- $passwordValidationErrors := list $requiredPasswordError -}}
|
||||
{{- include "common.errors.upgrade.passwords.empty" (dict "validationErrors" $passwordValidationErrors "context" $.context) -}}
|
||||
|
||||
{{- if .strong }}
|
||||
{{- $subStr := list (lower (randAlpha 1)) (randNumeric 1) (upper (randAlpha 1)) | join "_" }}
|
||||
{{- $password = randAscii $passwordLength }}
|
||||
{{- $password = regexReplaceAllLiteral "\\W" $password "@" | substr 5 $passwordLength }}
|
||||
{{- $password = printf "%s%s" $subStr $password | toString | shuffle | b64enc | quote }}
|
||||
{{- else }}
|
||||
{{- $password = randAlphaNum $passwordLength | b64enc | quote }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
{{- printf "%s" $password -}}
|
||||
{{- end -}}
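{{/*
Illustrative call for the password-management helper above (a sketch, not part of the upstream file;
the key and values path mirror the etcd chart's conventions and are assumptions here): a Secret
template could set
  etcd-root-password: {{ include "common.secrets.passwords.manage" (dict "secret" (include "common.names.fullname" .) "key" "etcd-root-password" "providedValues" (list "auth.rbac.rootPassword") "length" 16 "context" $) }}
which reuses the value stored in an already existing Secret (e.g. on upgrade), otherwise falls back
to "auth.rbac.rootPassword" from the values, and otherwise generates a random 16-character
password; in every case the returned value is base64-encoded and quoted.
*/}}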
|
||||
|
||||
{{/*
|
||||
Reuses the value from an existing secret, otherwise sets its value to a default value.
|
||||
|
||||
Usage:
|
||||
{{ include "common.secrets.lookup" (dict "secret" "secret-name" "key" "keyName" "defaultValue" .Values.myValue "context" $) }}
|
||||
|
||||
Params:
|
||||
- secret - String - Required - Name of the 'Secret' resource where the password is stored.
|
||||
- key - String - Required - Name of the key in the secret.
|
||||
- defaultValue - String - Required - Default value to return (it will be base64-encoded) when the secret or the key does not exist yet.
|
||||
- context - Context - Required - Parent context.
|
||||
|
||||
*/}}
|
||||
{{- define "common.secrets.lookup" -}}
|
||||
{{- $value := "" -}}
|
||||
{{- $defaultValue := required "\n'common.secrets.lookup': Argument 'defaultValue' missing or empty" .defaultValue -}}
|
||||
{{- $secretData := (lookup "v1" "Secret" (include "common.names.namespace" .context) .secret).data -}}
|
||||
{{- if and $secretData (hasKey $secretData .key) -}}
|
||||
{{- $value = index $secretData .key -}}
|
||||
{{- else -}}
|
||||
{{- $value = $defaultValue | toString | b64enc -}}
|
||||
{{- end -}}
|
||||
{{- printf "%s" $value -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Returns whether a previous generated secret already exists
|
||||
|
||||
Usage:
|
||||
{{ include "common.secrets.exists" (dict "secret" "secret-name" "context" $) }}
|
||||
|
||||
Params:
|
||||
- secret - String - Required - Name of the 'Secret' resource where the password is stored.
|
||||
- context - Context - Required - Parent context.
|
||||
*/}}
|
||||
{{- define "common.secrets.exists" -}}
|
||||
{{- $secret := (lookup "v1" "Secret" (include "common.names.namespace" .context) .secret) }}
|
||||
{{- if $secret }}
|
||||
{{- true -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,23 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Return the proper Storage Class
{{ include "common.storage.class" ( dict "persistence" .Values.path.to.the.persistence "global" $) }}
*/}}
{{- define "common.storage.class" -}}

{{- $storageClass := .persistence.storageClass -}}
{{- if .global -}}
{{- if .global.storageClass -}}
{{- $storageClass = .global.storageClass -}}
{{- end -}}
{{- end -}}

{{- if $storageClass -}}
{{- if (eq "-" $storageClass) -}}
{{- printf "storageClassName: \"\"" -}}
{{- else }}
{{- printf "storageClassName: %s" $storageClass -}}
{{- end -}}
{{- end -}}

{{- end -}}
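{{/*
Illustrative behaviour of the storage-class helper above (a sketch, not part of the upstream file;
the class name "standard" is an assumption): with persistence.storageClass set to "-" it renders
storageClassName: "" (explicitly disabling dynamic provisioning); with "standard" it renders
storageClassName: standard; when neither persistence.storageClass nor global.storageClass is set,
nothing is rendered and the cluster default storage class applies.
*/}}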
|
||||
@@ -0,0 +1,13 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Renders a value that contains template.
Usage:
{{ include "common.tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $) }}
*/}}
{{- define "common.tplvalues.render" -}}
{{- if typeIs "string" .value }}
{{- tpl .value .context }}
{{- else }}
{{- tpl (.value | toYaml) .context }}
{{- end }}
{{- end -}}
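{{/*
Illustrative example for the render helper above (a sketch, not part of the upstream file;
the "podLabels" value is hypothetical): with a value such as
  podLabels:
    release: "{{ .Release.Name }}"
a call like {{ include "common.tplvalues.render" (dict "value" .Values.podLabels "context" $) }}
returns the YAML with the inner template evaluated, e.g. release: my-release; plain strings are
passed through tpl directly, non-string values are converted to YAML first.
*/}}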
|
||||
@@ -0,0 +1,62 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Print instructions to get a secret value.
|
||||
Usage:
|
||||
{{ include "common.utils.secret.getvalue" (dict "secret" "secret-name" "field" "secret-value-field" "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.utils.secret.getvalue" -}}
|
||||
{{- $varname := include "common.utils.fieldToEnvVar" . -}}
|
||||
export {{ $varname }}=$(kubectl get secret --namespace {{ include "common.names.namespace" .context | quote }} {{ .secret }} -o jsonpath="{.data.{{ .field }}}" | base64 -d)
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Build env var name given a field
|
||||
Usage:
|
||||
{{ include "common.utils.fieldToEnvVar" dict "field" "my-password" }}
|
||||
*/}}
|
||||
{{- define "common.utils.fieldToEnvVar" -}}
|
||||
{{- $fieldNameSplit := splitList "-" .field -}}
|
||||
{{- $upperCaseFieldNameSplit := list -}}
|
||||
|
||||
{{- range $fieldNameSplit -}}
|
||||
{{- $upperCaseFieldNameSplit = append $upperCaseFieldNameSplit ( upper . ) -}}
|
||||
{{- end -}}
|
||||
|
||||
{{ join "_" $upperCaseFieldNameSplit }}
|
||||
{{- end -}}
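{{/*
Illustrative example for the helper above (a sketch, not part of the upstream file): a field named
"etcd-root-password" becomes the environment variable name ETCD_ROOT_PASSWORD, which is the name
"common.utils.secret.getvalue" exports in the kubectl snippet it prints.
*/}}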
|
||||
|
||||
{{/*
|
||||
Gets a value from .Values given
|
||||
Usage:
|
||||
{{ include "common.utils.getValueFromKey" (dict "key" "path.to.key" "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.utils.getValueFromKey" -}}
|
||||
{{- $splitKey := splitList "." .key -}}
|
||||
{{- $value := "" -}}
|
||||
{{- $latestObj := $.context.Values -}}
|
||||
{{- range $splitKey -}}
|
||||
{{- if not $latestObj -}}
|
||||
{{- printf "please review the entire path of '%s' exists in values" $.key | fail -}}
|
||||
{{- end -}}
|
||||
{{- $value = ( index $latestObj . ) -}}
|
||||
{{- $latestObj = $value -}}
|
||||
{{- end -}}
|
||||
{{- printf "%v" (default "" $value) -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Returns the first .Values key with a defined value, or the first key in the list if none of them are defined
|
||||
Usage:
|
||||
{{ include "common.utils.getKeyFromList" (dict "keys" (list "path.to.key1" "path.to.key2") "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.utils.getKeyFromList" -}}
|
||||
{{- $key := first .keys -}}
|
||||
{{- $reverseKeys := reverse .keys }}
|
||||
{{- range $reverseKeys }}
|
||||
{{- $value := include "common.utils.getValueFromKey" (dict "key" . "context" $.context ) }}
|
||||
{{- if $value -}}
|
||||
{{- $key = . }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- printf "%s" $key -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,14 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Warning about using rolling tag.
Usage:
{{ include "common.warnings.rollingTag" .Values.path.to.the.imageRoot }}
*/}}
{{- define "common.warnings.rollingTag" -}}

{{- if and (contains "bitnami/" .repository) (not (.tag | toString | regexFind "-r\\d+$|sha256:")) }}
WARNING: Rolling tag detected ({{ .repository }}:{{ .tag }}), please note that it is strongly recommended to avoid using rolling tags in a production environment.
+info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/
{{- end }}

{{- end -}}
@@ -0,0 +1,72 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Validate Cassandra required passwords are not empty.
|
||||
|
||||
Usage:
|
||||
{{ include "common.validations.values.cassandra.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
|
||||
Params:
|
||||
- secret - String - Required. Name of the secret where Cassandra values are stored, e.g: "cassandra-passwords-secret"
|
||||
- subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.validations.values.cassandra.passwords" -}}
|
||||
{{- $existingSecret := include "common.cassandra.values.existingSecret" . -}}
|
||||
{{- $enabled := include "common.cassandra.values.enabled" . -}}
|
||||
{{- $dbUserPrefix := include "common.cassandra.values.key.dbUser" . -}}
|
||||
{{- $valueKeyPassword := printf "%s.password" $dbUserPrefix -}}
|
||||
|
||||
{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") -}}
|
||||
{{- $requiredPasswords := list -}}
|
||||
|
||||
{{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "cassandra-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredPassword -}}
|
||||
|
||||
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
|
||||
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for existingSecret.
|
||||
|
||||
Usage:
|
||||
{{ include "common.cassandra.values.existingSecret" (dict "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.cassandra.values.existingSecret" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- .context.Values.cassandra.dbUser.existingSecret | quote -}}
|
||||
{{- else -}}
|
||||
{{- .context.Values.dbUser.existingSecret | quote -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for enabled cassandra.
|
||||
|
||||
Usage:
|
||||
{{ include "common.cassandra.values.enabled" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.cassandra.values.enabled" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- printf "%v" .context.Values.cassandra.enabled -}}
|
||||
{{- else -}}
|
||||
{{- printf "%v" (not .context.Values.enabled) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for the key dbUser
|
||||
|
||||
Usage:
|
||||
{{ include "common.cassandra.values.key.dbUser" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.cassandra.values.key.dbUser" -}}
|
||||
{{- if .subchart -}}
|
||||
cassandra.dbUser
|
||||
{{- else -}}
|
||||
dbUser
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,103 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Validate MariaDB required passwords are not empty.
|
||||
|
||||
Usage:
|
||||
{{ include "common.validations.values.mariadb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
|
||||
Params:
|
||||
- secret - String - Required. Name of the secret where MariaDB values are stored, e.g: "mysql-passwords-secret"
|
||||
- subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.validations.values.mariadb.passwords" -}}
|
||||
{{- $existingSecret := include "common.mariadb.values.auth.existingSecret" . -}}
|
||||
{{- $enabled := include "common.mariadb.values.enabled" . -}}
|
||||
{{- $architecture := include "common.mariadb.values.architecture" . -}}
|
||||
{{- $authPrefix := include "common.mariadb.values.key.auth" . -}}
|
||||
{{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}}
|
||||
{{- $valueKeyUsername := printf "%s.username" $authPrefix -}}
|
||||
{{- $valueKeyPassword := printf "%s.password" $authPrefix -}}
|
||||
{{- $valueKeyReplicationPassword := printf "%s.replicationPassword" $authPrefix -}}
|
||||
|
||||
{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") -}}
|
||||
{{- $requiredPasswords := list -}}
|
||||
|
||||
{{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mariadb-root-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}}
|
||||
|
||||
{{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }}
|
||||
{{- if not (empty $valueUsername) -}}
|
||||
{{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mariadb-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredPassword -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if (eq $architecture "replication") -}}
|
||||
{{- $requiredReplicationPassword := dict "valueKey" $valueKeyReplicationPassword "secret" .secret "field" "mariadb-replication-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredReplicationPassword -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
|
||||
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for existingSecret.
|
||||
|
||||
Usage:
|
||||
{{ include "common.mariadb.values.auth.existingSecret" (dict "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mariadb.values.auth.existingSecret" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- .context.Values.mariadb.auth.existingSecret | quote -}}
|
||||
{{- else -}}
|
||||
{{- .context.Values.auth.existingSecret | quote -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for enabled mariadb.
|
||||
|
||||
Usage:
|
||||
{{ include "common.mariadb.values.enabled" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.mariadb.values.enabled" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- printf "%v" .context.Values.mariadb.enabled -}}
|
||||
{{- else -}}
|
||||
{{- printf "%v" (not .context.Values.enabled) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for architecture
|
||||
|
||||
Usage:
|
||||
{{ include "common.mariadb.values.architecture" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mariadb.values.architecture" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- .context.Values.mariadb.architecture -}}
|
||||
{{- else -}}
|
||||
{{- .context.Values.architecture -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for the key auth
|
||||
|
||||
Usage:
|
||||
{{ include "common.mariadb.values.key.auth" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mariadb.values.key.auth" -}}
|
||||
{{- if .subchart -}}
|
||||
mariadb.auth
|
||||
{{- else -}}
|
||||
auth
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,108 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Validate MongoDB® required passwords are not empty.
|
||||
|
||||
Usage:
|
||||
{{ include "common.validations.values.mongodb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
|
||||
Params:
|
||||
- secret - String - Required. Name of the secret where MongoDB® values are stored, e.g: "mongodb-passwords-secret"
|
||||
- subchart - Boolean - Optional. Whether MongoDB® is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.validations.values.mongodb.passwords" -}}
|
||||
{{- $existingSecret := include "common.mongodb.values.auth.existingSecret" . -}}
|
||||
{{- $enabled := include "common.mongodb.values.enabled" . -}}
|
||||
{{- $authPrefix := include "common.mongodb.values.key.auth" . -}}
|
||||
{{- $architecture := include "common.mongodb.values.architecture" . -}}
|
||||
{{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}}
|
||||
{{- $valueKeyUsername := printf "%s.username" $authPrefix -}}
|
||||
{{- $valueKeyDatabase := printf "%s.database" $authPrefix -}}
|
||||
{{- $valueKeyPassword := printf "%s.password" $authPrefix -}}
|
||||
{{- $valueKeyReplicaSetKey := printf "%s.replicaSetKey" $authPrefix -}}
|
||||
{{- $valueKeyAuthEnabled := printf "%s.enabled" $authPrefix -}}
|
||||
|
||||
{{- $authEnabled := include "common.utils.getValueFromKey" (dict "key" $valueKeyAuthEnabled "context" .context) -}}
|
||||
|
||||
{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") (eq $authEnabled "true") -}}
|
||||
{{- $requiredPasswords := list -}}
|
||||
|
||||
{{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mongodb-root-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}}
|
||||
|
||||
{{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }}
|
||||
{{- $valueDatabase := include "common.utils.getValueFromKey" (dict "key" $valueKeyDatabase "context" .context) }}
|
||||
{{- if and $valueUsername $valueDatabase -}}
|
||||
{{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mongodb-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredPassword -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if (eq $architecture "replicaset") -}}
|
||||
{{- $requiredReplicaSetKey := dict "valueKey" $valueKeyReplicaSetKey "secret" .secret "field" "mongodb-replica-set-key" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredReplicaSetKey -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
|
||||
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for existingSecret.
|
||||
|
||||
Usage:
|
||||
{{ include "common.mongodb.values.auth.existingSecret" (dict "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MongoDb is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mongodb.values.auth.existingSecret" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- .context.Values.mongodb.auth.existingSecret | quote -}}
|
||||
{{- else -}}
|
||||
{{- .context.Values.auth.existingSecret | quote -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for enabled mongodb.
|
||||
|
||||
Usage:
|
||||
{{ include "common.mongodb.values.enabled" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.mongodb.values.enabled" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- printf "%v" .context.Values.mongodb.enabled -}}
|
||||
{{- else -}}
|
||||
{{- printf "%v" (not .context.Values.enabled) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for the key auth
|
||||
|
||||
Usage:
|
||||
{{ include "common.mongodb.values.key.auth" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MongoDB® is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mongodb.values.key.auth" -}}
|
||||
{{- if .subchart -}}
|
||||
mongodb.auth
|
||||
{{- else -}}
|
||||
auth
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for architecture
|
||||
|
||||
Usage:
|
||||
{{ include "common.mongodb.values.architecture" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MongoDB® is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mongodb.values.architecture" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- .context.Values.mongodb.architecture -}}
|
||||
{{- else -}}
|
||||
{{- .context.Values.architecture -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,103 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Validate MySQL required passwords are not empty.
|
||||
|
||||
Usage:
|
||||
{{ include "common.validations.values.mysql.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
|
||||
Params:
|
||||
- secret - String - Required. Name of the secret where MySQL values are stored, e.g: "mysql-passwords-secret"
|
||||
- subchart - Boolean - Optional. Whether MySQL is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.validations.values.mysql.passwords" -}}
|
||||
{{- $existingSecret := include "common.mysql.values.auth.existingSecret" . -}}
|
||||
{{- $enabled := include "common.mysql.values.enabled" . -}}
|
||||
{{- $architecture := include "common.mysql.values.architecture" . -}}
|
||||
{{- $authPrefix := include "common.mysql.values.key.auth" . -}}
|
||||
{{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}}
|
||||
{{- $valueKeyUsername := printf "%s.username" $authPrefix -}}
|
||||
{{- $valueKeyPassword := printf "%s.password" $authPrefix -}}
|
||||
{{- $valueKeyReplicationPassword := printf "%s.replicationPassword" $authPrefix -}}
|
||||
|
||||
{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") -}}
|
||||
{{- $requiredPasswords := list -}}
|
||||
|
||||
{{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mysql-root-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}}
|
||||
|
||||
{{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }}
|
||||
{{- if not (empty $valueUsername) -}}
|
||||
{{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mysql-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredPassword -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if (eq $architecture "replication") -}}
|
||||
{{- $requiredReplicationPassword := dict "valueKey" $valueKeyReplicationPassword "secret" .secret "field" "mysql-replication-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredReplicationPassword -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
|
||||
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for existingSecret.
|
||||
|
||||
Usage:
|
||||
{{ include "common.mysql.values.auth.existingSecret" (dict "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MySQL is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mysql.values.auth.existingSecret" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- .context.Values.mysql.auth.existingSecret | quote -}}
|
||||
{{- else -}}
|
||||
{{- .context.Values.auth.existingSecret | quote -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for enabled mysql.
|
||||
|
||||
Usage:
|
||||
{{ include "common.mysql.values.enabled" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.mysql.values.enabled" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- printf "%v" .context.Values.mysql.enabled -}}
|
||||
{{- else -}}
|
||||
{{- printf "%v" (not .context.Values.enabled) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for architecture
|
||||
|
||||
Usage:
|
||||
{{ include "common.mysql.values.architecture" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MySQL is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mysql.values.architecture" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- .context.Values.mysql.architecture -}}
|
||||
{{- else -}}
|
||||
{{- .context.Values.architecture -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for the key auth
|
||||
|
||||
Usage:
|
||||
{{ include "common.mysql.values.key.auth" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MySQL is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mysql.values.key.auth" -}}
|
||||
{{- if .subchart -}}
|
||||
mysql.auth
|
||||
{{- else -}}
|
||||
auth
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,129 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Validate PostgreSQL required passwords are not empty.
|
||||
|
||||
Usage:
|
||||
{{ include "common.validations.values.postgresql.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
|
||||
Params:
|
||||
- secret - String - Required. Name of the secret where postgresql values are stored, e.g: "postgresql-passwords-secret"
|
||||
- subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.validations.values.postgresql.passwords" -}}
|
||||
{{- $existingSecret := include "common.postgresql.values.existingSecret" . -}}
|
||||
{{- $enabled := include "common.postgresql.values.enabled" . -}}
|
||||
{{- $valueKeyPostgresqlPassword := include "common.postgresql.values.key.postgressPassword" . -}}
|
||||
{{- $valueKeyPostgresqlReplicationEnabled := include "common.postgresql.values.key.replicationPassword" . -}}
|
||||
{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") -}}
|
||||
{{- $requiredPasswords := list -}}
|
||||
{{- $requiredPostgresqlPassword := dict "valueKey" $valueKeyPostgresqlPassword "secret" .secret "field" "postgresql-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlPassword -}}
|
||||
|
||||
{{- $enabledReplication := include "common.postgresql.values.enabled.replication" . -}}
|
||||
{{- if (eq $enabledReplication "true") -}}
|
||||
{{- $requiredPostgresqlReplicationPassword := dict "valueKey" $valueKeyPostgresqlReplicationEnabled "secret" .secret "field" "postgresql-replication-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlReplicationPassword -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to decide whether evaluate global values.
|
||||
|
||||
Usage:
|
||||
{{ include "common.postgresql.values.use.global" (dict "key" "key-of-global" "context" $) }}
|
||||
Params:
|
||||
- key - String - Required. Field to be evaluated within global, e.g: "existingSecret"
|
||||
*/}}
|
||||
{{- define "common.postgresql.values.use.global" -}}
|
||||
{{- if .context.Values.global -}}
|
||||
{{- if .context.Values.global.postgresql -}}
|
||||
{{- index .context.Values.global.postgresql .key | quote -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for existingSecret.
|
||||
|
||||
Usage:
|
||||
{{ include "common.postgresql.values.existingSecret" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.postgresql.values.existingSecret" -}}
|
||||
{{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "existingSecret" "context" .context) -}}
|
||||
|
||||
{{- if .subchart -}}
|
||||
{{- default (.context.Values.postgresql.existingSecret | quote) $globalValue -}}
|
||||
{{- else -}}
|
||||
{{- default (.context.Values.existingSecret | quote) $globalValue -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for enabled postgresql.
|
||||
|
||||
Usage:
|
||||
{{ include "common.postgresql.values.enabled" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.postgresql.values.enabled" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- printf "%v" .context.Values.postgresql.enabled -}}
|
||||
{{- else -}}
|
||||
{{- printf "%v" (not .context.Values.enabled) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for the key postgressPassword.
|
||||
|
||||
Usage:
|
||||
{{ include "common.postgresql.values.key.postgressPassword" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.postgresql.values.key.postgressPassword" -}}
|
||||
{{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "postgresqlUsername" "context" .context) -}}
|
||||
|
||||
{{- if not $globalValue -}}
|
||||
{{- if .subchart -}}
|
||||
postgresql.postgresqlPassword
|
||||
{{- else -}}
|
||||
postgresqlPassword
|
||||
{{- end -}}
|
||||
{{- else -}}
|
||||
global.postgresql.postgresqlPassword
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for enabled.replication.
|
||||
|
||||
Usage:
|
||||
{{ include "common.postgresql.values.enabled.replication" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.postgresql.values.enabled.replication" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- printf "%v" .context.Values.postgresql.replication.enabled -}}
|
||||
{{- else -}}
|
||||
{{- printf "%v" .context.Values.replication.enabled -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for the key replication.password.
|
||||
|
||||
Usage:
|
||||
{{ include "common.postgresql.values.key.replicationPassword" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.postgresql.values.key.replicationPassword" -}}
|
||||
{{- if .subchart -}}
|
||||
postgresql.replication.password
|
||||
{{- else -}}
|
||||
replication.password
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,76 @@
|
||||
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Validate Redis® required passwords are not empty.
|
||||
|
||||
Usage:
|
||||
{{ include "common.validations.values.redis.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
|
||||
Params:
|
||||
- secret - String - Required. Name of the secret where redis values are stored, e.g: "redis-passwords-secret"
|
||||
- subchart - Boolean - Optional. Whether redis is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.validations.values.redis.passwords" -}}
|
||||
{{- $enabled := include "common.redis.values.enabled" . -}}
|
||||
{{- $valueKeyPrefix := include "common.redis.values.keys.prefix" . -}}
|
||||
{{- $standarizedVersion := include "common.redis.values.standarized.version" . }}
|
||||
|
||||
{{- $existingSecret := ternary (printf "%s%s" $valueKeyPrefix "auth.existingSecret") (printf "%s%s" $valueKeyPrefix "existingSecret") (eq $standarizedVersion "true") }}
|
||||
{{- $existingSecretValue := include "common.utils.getValueFromKey" (dict "key" $existingSecret "context" .context) }}
|
||||
|
||||
{{- $valueKeyRedisPassword := ternary (printf "%s%s" $valueKeyPrefix "auth.password") (printf "%s%s" $valueKeyPrefix "password") (eq $standarizedVersion "true") }}
|
||||
{{- $valueKeyRedisUseAuth := ternary (printf "%s%s" $valueKeyPrefix "auth.enabled") (printf "%s%s" $valueKeyPrefix "usePassword") (eq $standarizedVersion "true") }}
|
||||
|
||||
{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") -}}
|
||||
{{- $requiredPasswords := list -}}
|
||||
|
||||
{{- $useAuth := include "common.utils.getValueFromKey" (dict "key" $valueKeyRedisUseAuth "context" .context) -}}
|
||||
{{- if eq $useAuth "true" -}}
|
||||
{{- $requiredRedisPassword := dict "valueKey" $valueKeyRedisPassword "secret" .secret "field" "redis-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredRedisPassword -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for enabled redis.
|
||||
|
||||
Usage:
|
||||
{{ include "common.redis.values.enabled" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.redis.values.enabled" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- printf "%v" .context.Values.redis.enabled -}}
|
||||
{{- else -}}
|
||||
{{- printf "%v" (not .context.Values.enabled) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right prefix path for the values
|
||||
|
||||
Usage:
|
||||
{{ include "common.redis.values.key.prefix" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether redis is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.redis.values.keys.prefix" -}}
|
||||
{{- if .subchart -}}redis.{{- else -}}{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Checks whether the Redis® chart includes the standardizations introduced in version >= 14
|
||||
|
||||
Usage:
|
||||
{{ include "common.redis.values.standarized.version" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.redis.values.standarized.version" -}}
|
||||
|
||||
{{- $standarizedAuth := printf "%s%s" (include "common.redis.values.keys.prefix" .) "auth" -}}
|
||||
{{- $standarizedAuthValues := include "common.utils.getValueFromKey" (dict "key" $standarizedAuth "context" .context) }}
|
||||
|
||||
{{- if $standarizedAuthValues -}}
|
||||
{{- true -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,46 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Validate values must not be empty.
|
||||
|
||||
Usage:
|
||||
{{- $validateValueConf00 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-00") -}}
|
||||
{{- $validateValueConf01 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-01") -}}
|
||||
{{ include "common.validations.values.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }}
|
||||
|
||||
Validate value params:
|
||||
- valueKey - String - Required. The path to the validating value in the values.yaml, e.g: "mysql.password"
|
||||
- secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret"
|
||||
- field - String - Optional. Name of the field in the secret data, e.g: "mysql-password"
|
||||
*/}}
|
||||
{{- define "common.validations.values.multiple.empty" -}}
|
||||
{{- range .required -}}
|
||||
{{- include "common.validations.values.single.empty" (dict "valueKey" .valueKey "secret" .secret "field" .field "context" $.context) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Validate a value must not be empty.
|
||||
|
||||
Usage:
|
||||
{{ include "common.validations.value.empty" (dict "valueKey" "mariadb.password" "secret" "secretName" "field" "my-password" "subchart" "subchart" "context" $) }}
|
||||
|
||||
Validate value params:
|
||||
- valueKey - String - Required. The path to the validating value in the values.yaml, e.g: "mysql.password"
|
||||
- secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret"
|
||||
- field - String - Optional. Name of the field in the secret data, e.g: "mysql-password"
|
||||
- subchart - String - Optional - Name of the subchart that the validated password is part of.
|
||||
*/}}
|
||||
{{- define "common.validations.values.single.empty" -}}
|
||||
{{- $value := include "common.utils.getValueFromKey" (dict "key" .valueKey "context" .context) }}
|
||||
{{- $subchart := ternary "" (printf "%s." .subchart) (empty .subchart) }}
|
||||
|
||||
{{- if not $value -}}
|
||||
{{- $varname := "my-value" -}}
|
||||
{{- $getCurrentValue := "" -}}
|
||||
{{- if and .secret .field -}}
|
||||
{{- $varname = include "common.utils.fieldToEnvVar" . -}}
|
||||
{{- $getCurrentValue = printf " To get the current value:\n\n %s\n" (include "common.utils.secret.getvalue" .) -}}
|
||||
{{- end -}}
|
||||
{{- printf "\n '%s' must not be empty, please add '--set %s%s=$%s' to the command.%s" .valueKey $subchart .valueKey $varname $getCurrentValue -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
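{{/*
Illustrative output of the validation above (a sketch, not part of the upstream file; the value key
and variable name mirror the etcd chart's conventions and are assumptions here): if
"auth.rbac.rootPassword" is empty during an upgrade, the returned string looks like
  'auth.rbac.rootPassword' must not be empty, please add '--set auth.rbac.rootPassword=$ETCD_ROOT_PASSWORD' to the command.
optionally followed by the kubectl command printed by "common.utils.secret.getvalue" to retrieve
the current value from the existing Secret.
*/}}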
|
||||
@@ -0,0 +1,5 @@
## bitnami/common
## It is required by CI/CD tools and processes.
## @skip exampleValue
##
exampleValue: common-chart
119
deployment/helm/milvus/charts/etcd/templates/NOTES.txt
Normal file
@@ -0,0 +1,119 @@
|
||||
CHART NAME: {{ .Chart.Name }}
|
||||
CHART VERSION: {{ .Chart.Version }}
|
||||
APP VERSION: {{ .Chart.AppVersion }}
|
||||
|
||||
{{- if and (eq .Values.service.type "LoadBalancer") .Values.auth.rbac.allowNoneAuthentication }}
|
||||
-------------------------------------------------------------------------------
|
||||
WARNING
|
||||
|
||||
By specifying "service.type=LoadBalancer", "auth.rbac.enabled=false" and
|
||||
"auth.rbac.allowNoneAuthentication=true" you have most likely exposed the etcd
|
||||
service externally without any authentication mechanism.
|
||||
|
||||
For security reasons, we strongly suggest that you switch to "ClusterIP" or
|
||||
"NodePort". As alternative, you can also switch to "auth.rbac.enabled=true"
|
||||
providing a valid password on "auth.rbac.rootPassword" parameter.
|
||||
|
||||
-------------------------------------------------------------------------------
|
||||
{{- end }}
|
||||
|
||||
** Please be patient while the chart is being deployed **
|
||||
|
||||
{{- if .Values.diagnosticMode.enabled }}
|
||||
The chart has been deployed in diagnostic mode. All probes have been disabled and the command has been overwritten with:
|
||||
|
||||
command: {{- include "common.tplvalues.render" (dict "value" .Values.diagnosticMode.command "context" $) | nindent 4 }}
|
||||
args: {{- include "common.tplvalues.render" (dict "value" .Values.diagnosticMode.args "context" $) | nindent 4 }}
|
||||
|
||||
Get the list of pods by executing:
|
||||
|
||||
kubectl get pods --namespace {{ .Release.Namespace }} -l app.kubernetes.io/instance={{ .Release.Name }}
|
||||
|
||||
Access the pod you want to debug by executing
|
||||
|
||||
kubectl exec --namespace {{ .Release.Namespace }} -ti <NAME OF THE POD> -- bash
|
||||
|
||||
In order to replicate the container startup scripts execute this command:
|
||||
|
||||
/opt/bitnami/scripts/etcd/entrypoint.sh /opt/bitnami/scripts/etcd/run.sh
|
||||
|
||||
{{- else }}
|
||||
|
||||
etcd can be accessed via port {{ coalesce .Values.service.ports.client .Values.service.port }} on the following DNS name from within your cluster:
|
||||
|
||||
{{ template "common.names.fullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}
|
||||
|
||||
To create a pod that you can use as an etcd client, run the following command:
|
||||
|
||||
kubectl run {{ template "common.names.fullname" . }}-client --restart='Never' --image {{ template "etcd.image" . }}{{- if or .Values.auth.rbac.create .Values.auth.rbac.enabled }} --env ROOT_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "common.names.fullname" . }} -o jsonpath="{.data.etcd-root-password}" | base64 -d){{- end }} --env ETCDCTL_ENDPOINTS="{{ template "common.names.fullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}:{{ coalesce .Values.service.ports.client .Values.service.port }}" --namespace {{ .Release.Namespace }} --command -- sleep infinity
|
||||
|
||||
Then, you can set/get a key using the commands below:
|
||||
|
||||
kubectl exec --namespace {{ .Release.Namespace }} -it {{ template "common.names.fullname" . }}-client -- bash
|
||||
{{- $etcdAuthOptions := include "etcd.authOptions" . }}
|
||||
etcdctl {{ $etcdAuthOptions }} put /message Hello
|
||||
etcdctl {{ $etcdAuthOptions }} get /message
|
||||
|
||||
To connect to your etcd server from outside the cluster execute the following commands:
|
||||
|
||||
{{- if contains "NodePort" .Values.service.type }}
|
||||
|
||||
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
|
||||
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "common.names.fullname" . }})
|
||||
echo "etcd URL: http://$NODE_IP:$NODE_PORT/"
|
||||
|
||||
{{- else if contains "LoadBalancer" .Values.service.type }}
|
||||
|
||||
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
|
||||
Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "common.names.fullname" . }}'
|
||||
|
||||
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "common.names.fullname" . }} --template "{{ "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}" }}")
|
||||
echo "etcd URL: http://$SERVICE_IP:{{ coalesce .Values.service.ports.client .Values.service.port }}/"
|
||||
|
||||
{{- else if contains "ClusterIP" .Values.service.type }}
|
||||
|
||||
kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "common.names.fullname" . }} {{ coalesce .Values.service.ports.client .Values.service.port }}:{{ coalesce .Values.service.ports.client .Values.service.port }} &
|
||||
echo "etcd URL: http://127.0.0.1:{{ coalesce .Values.service.ports.client .Values.service.port }}"
|
||||
|
||||
{{- end }}
|
||||
{{- if or .Values.auth.rbac.create .Values.auth.rbac.enabled }}
|
||||
|
||||
* As rbac is enabled you should add the flag `--user root:$ETCD_ROOT_PASSWORD` to the etcdctl commands. Use the command below to export the password:
|
||||
|
||||
export ETCD_ROOT_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "common.names.fullname" . }} -o jsonpath="{.data.etcd-root-password}" | base64 -d)
|
||||
|
||||
{{- end }}
|
||||
{{- if .Values.auth.client.secureTransport }}
|
||||
{{- if .Values.auth.client.useAutoTLS }}
|
||||
|
||||
* As TLS is enabled you should add the flag `--cert-file /bitnami/etcd/data/fixtures/client/cert.pem --key-file /bitnami/etcd/data/fixtures/client/key.pem` to the etcdctl commands.
|
||||
|
||||
{{- else }}
|
||||
|
||||
* As TLS is enabled you should add the flag `--cert-file /opt/bitnami/etcd/certs/client/{{ .Values.auth.client.certFilename }} --key-file /opt/bitnami/etcd/certs/client/{{ .Values.auth.client.certKeyFilename }}` to the etcdctl commands.
|
||||
|
||||
{{- end }}
|
||||
|
||||
* You should also export a proper etcdctl endpoint using the https schema. Eg.
|
||||
|
||||
export ETCDCTL_ENDPOINTS=https://{{ template "common.names.fullname" . }}-0:{{ coalesce .Values.service.ports.client .Values.service.port }}
|
||||
|
||||
{{- end }}
|
||||
{{- if .Values.auth.client.enableAuthentication }}
|
||||
|
||||
* As TLS host authentication is enabled you should add the flag `--ca-file /opt/bitnami/etcd/certs/client/{{ .Values.auth.client.caFilename | default "ca.crt" }}` to the etcdctl commands.
|
||||
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- include "common.warnings.rollingTag" .Values.image }}
|
||||
{{- include "common.warnings.rollingTag" .Values.volumePermissions.image }}
|
||||
{{- include "etcd.validateValues" . }}
|
||||
{{- $requiredPassword := list -}}
|
||||
{{- $secretName := include "etcd.secretName" . -}}
|
||||
{{- if and (or .Values.auth.rbac.create .Values.auth.rbac.enabled) (not .Values.auth.rbac.existingSecret) -}}
|
||||
{{- $requiredEtcdPassword := dict "valueKey" "auth.rbac.rootPassword" "secret" $secretName "field" "etcd-root-password" -}}
|
||||
{{- $requiredPassword = append $requiredPassword $requiredEtcdPassword -}}
|
||||
{{- end -}}
|
||||
{{- $requiredEtcdPasswordErrors := include "common.validations.values.multiple.empty" (dict "required" $requiredPassword "context" $) -}}
|
||||
{{- include "common.errors.upgrade.passwords.empty" (dict "validationErrors" (list $requiredEtcdPasswordErrors) "context" $) -}}
|
||||
205
deployment/helm/milvus/charts/etcd/templates/_helpers.tpl
Normal file
205
deployment/helm/milvus/charts/etcd/templates/_helpers.tpl
Normal file
@@ -0,0 +1,205 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
|
||||
{{/*
|
||||
Return the proper etcd image name
|
||||
*/}}
|
||||
{{- define "etcd.image" -}}
|
||||
{{ include "common.images.image" (dict "imageRoot" .Values.image "global" .Values.global) }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the proper image name (for the init container volume-permissions image)
|
||||
*/}}
|
||||
{{- define "etcd.volumePermissions.image" -}}
|
||||
{{ include "common.images.image" (dict "imageRoot" .Values.volumePermissions.image "global" .Values.global) }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the proper Docker Image Registry Secret Names
|
||||
*/}}
|
||||
{{- define "etcd.imagePullSecrets" -}}
|
||||
{{ include "common.images.pullSecrets" (dict "images" (list .Values.image .Values.volumePermissions.image) "global" .Values.global) }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the proper etcd peer protocol
|
||||
*/}}
|
||||
{{- define "etcd.peerProtocol" -}}
|
||||
{{- if .Values.auth.peer.secureTransport -}}
|
||||
{{- print "https" -}}
|
||||
{{- else -}}
|
||||
{{- print "http" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the proper etcd client protocol
|
||||
*/}}
|
||||
{{- define "etcd.clientProtocol" -}}
|
||||
{{- if .Values.auth.client.secureTransport -}}
|
||||
{{- print "https" -}}
|
||||
{{- else -}}
|
||||
{{- print "http" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the proper etcdctl authentication options
|
||||
*/}}
|
||||
{{- define "etcd.authOptions" -}}
|
||||
{{- $rbacOption := "--user root:$ROOT_PASSWORD" -}}
|
||||
{{- $certsOption := " --cert $ETCD_CERT_FILE --key $ETCD_KEY_FILE" -}}
|
||||
{{- $autoCertsOption := " --cert /bitnami/etcd/data/fixtures/client/cert.pem --key /bitnami/etcd/data/fixtures/client/key.pem" -}}
|
||||
{{- $caOption := " --cacert $ETCD_TRUSTED_CA_FILE" -}}
|
||||
{{- if or .Values.auth.rbac.create .Values.auth.rbac.enabled -}}
|
||||
{{- printf "%s" $rbacOption -}}
|
||||
{{- end -}}
|
||||
{{- if and .Values.auth.client.secureTransport .Values.auth.client.useAutoTLS -}}
|
||||
{{- printf "%s" $autoCertsOption -}}
|
||||
{{- else if and .Values.auth.client.secureTransport (not .Values.auth.client.useAutoTLS) -}}
|
||||
{{- printf "%s" $certsOption -}}
|
||||
{{- if .Values.auth.client.enableAuthentication -}}
|
||||
{{- printf "%s" $caOption -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the etcd configuration configmap
|
||||
*/}}
|
||||
{{- define "etcd.configmapName" -}}
|
||||
{{- if .Values.existingConfigmap -}}
|
||||
{{- printf "%s" (tpl .Values.existingConfigmap $) | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-configuration" (include "common.names.fullname" .) | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return true if a configmap object should be created
|
||||
*/}}
|
||||
{{- define "etcd.createConfigmap" -}}
|
||||
{{- if and .Values.configuration (not .Values.existingConfigmap) }}
|
||||
{{- true -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the secret with etcd credentials
|
||||
*/}}
|
||||
{{- define "etcd.secretName" -}}
|
||||
{{- if .Values.auth.rbac.existingSecret -}}
|
||||
{{- printf "%s" .Values.auth.rbac.existingSecret | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s" (include "common.names.fullname" .) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Get the secret password key to be retrieved from etcd secret.
|
||||
*/}}
|
||||
{{- define "etcd.secretPasswordKey" -}}
|
||||
{{- if and .Values.auth.rbac.existingSecret .Values.auth.rbac.existingSecretPasswordKey -}}
|
||||
{{- printf "%s" .Values.auth.rbac.existingSecretPasswordKey -}}
|
||||
{{- else -}}
|
||||
{{- printf "etcd-root-password" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return true if a secret object should be created for the etcd token private key
|
||||
*/}}
|
||||
{{- define "etcd.token.createSecret" -}}
|
||||
{{- if and (eq .Values.auth.token.enabled true) (eq .Values.auth.token.type "jwt") (empty .Values.auth.token.privateKey.existingSecret) }}
|
||||
{{- true -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the secret with etcd token private key
|
||||
*/}}
|
||||
{{- define "etcd.token.secretName" -}}
|
||||
{{- if .Values.auth.token.privateKey.existingSecret -}}
|
||||
{{- printf "%s" .Values.auth.token.privateKey.existingSecret | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-jwt-token" (include "common.names.fullname" .) | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the proper Disaster Recovery PVC name
|
||||
*/}}
|
||||
{{- define "etcd.disasterRecovery.pvc.name" -}}
|
||||
{{- if .Values.disasterRecovery.pvc.existingClaim -}}
|
||||
{{- printf "%s" (tpl .Values.disasterRecovery.pvc.existingClaim $) | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else if .Values.startFromSnapshot.existingClaim -}}
|
||||
{{- printf "%s" (tpl .Values.startFromSnapshot.existingClaim $) | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-snapshotter" (include "common.names.fullname" .) | trunc 63 | trimSuffix "-" }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create the name of the service account to use
|
||||
*/}}
|
||||
{{- define "etcd.serviceAccountName" -}}
|
||||
{{- if .Values.serviceAccount.create -}}
|
||||
{{ default (include "common.names.fullname" .) .Values.serviceAccount.name | trunc 63 | trimSuffix "-" }}
|
||||
{{- else -}}
|
||||
{{ default "default" .Values.serviceAccount.name | trunc 63 | trimSuffix "-" }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Compile all warnings into a single message, and call fail.
|
||||
*/}}
|
||||
{{- define "etcd.validateValues" -}}
|
||||
{{- $messages := list -}}
|
||||
{{- $messages := append $messages (include "etcd.validateValues.startFromSnapshot.existingClaim" .) -}}
|
||||
{{- $messages := append $messages (include "etcd.validateValues.startFromSnapshot.snapshotFilename" .) -}}
|
||||
{{- $messages := append $messages (include "etcd.validateValues.disasterRecovery" .) -}}
|
||||
{{- $messages := without $messages "" -}}
|
||||
{{- $message := join "\n" $messages -}}
|
||||
|
||||
{{- if $message -}}
|
||||
{{- printf "\nVALUES VALIDATION:\n%s" $message | fail -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/* Validate values of etcd - an existing claim must be provided when startFromSnapshot is enabled */}}
|
||||
{{- define "etcd.validateValues.startFromSnapshot.existingClaim" -}}
|
||||
{{- if and .Values.startFromSnapshot.enabled (not .Values.startFromSnapshot.existingClaim) (not .Values.disasterRecovery.enabled) -}}
|
||||
etcd: startFromSnapshot.existingClaim
|
||||
An existing claim must be provided when startFromSnapshot is enabled and disasterRecovery is disabled!
|
||||
Please provide it (--set startFromSnapshot.existingClaim="xxxx")
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/* Validate values of etcd - the snapshot filename must be provided when startFromSnapshot is enabled */}}
|
||||
{{- define "etcd.validateValues.startFromSnapshot.snapshotFilename" -}}
|
||||
{{- if and .Values.startFromSnapshot.enabled (not .Values.startFromSnapshot.snapshotFilename) (not .Values.disasterRecovery.enabled) -}}
|
||||
etcd: startFromSnapshot.snapshotFilename
|
||||
The snapshot filename must be provided when startFromSnapshot is enabled and disasterRecovery is disabled!!
|
||||
Please provide it (--set startFromSnapshot.snapshotFilename="xxxx")
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/* Validate values of etcd - persistence must be enabled when disasterRecovery is enabled */}}
|
||||
{{- define "etcd.validateValues.disasterRecovery" -}}
|
||||
{{- if and .Values.disasterRecovery.enabled (not .Values.persistence.enabled) -}}
|
||||
etcd: disasterRecovery
|
||||
Persistence must be enabled when disasterRecovery is enabled!!
|
||||
Please enable persistence (--set persistence.enabled=true)
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "etcd.token.jwtToken" -}}
|
||||
{{- if (include "etcd.token.createSecret" .) -}}
|
||||
{{- $jwtToken := lookup "v1" "Secret" .Release.Namespace (printf "%s-jwt-token" (include "common.names.fullname" .) | trunc 63 | trimSuffix "-" ) -}}
|
||||
{{- if $jwtToken -}}
|
||||
{{ index $jwtToken "data" "jwt-token.pem" | b64dec }}
|
||||
{{- else -}}
|
||||
{{ genPrivateKey "rsa" }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
17
deployment/helm/milvus/charts/etcd/templates/configmap.yaml
Normal file
17
deployment/helm/milvus/charts/etcd/templates/configmap.yaml
Normal file
@@ -0,0 +1,17 @@
{{- if (include "etcd.createConfigmap" .) }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ printf "%s-configuration" (include "common.names.fullname" .) | trunc 63 | trimSuffix "-" }}
  namespace: {{ .Release.Namespace | quote }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    {{- if .Values.commonLabels }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
    {{- end }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
data:
  etcd.conf.yml: |-
    {{- include "common.tplvalues.render" ( dict "value" .Values.configuration "context" $ ) | nindent 4 }}
{{- end }}
137
deployment/helm/milvus/charts/etcd/templates/cronjob.yaml
Normal file
137
deployment/helm/milvus/charts/etcd/templates/cronjob.yaml
Normal file
@@ -0,0 +1,137 @@
|
||||
{{- if .Values.disasterRecovery.enabled -}}
|
||||
apiVersion: {{ include "common.capabilities.cronjob.apiVersion" . }}
|
||||
kind: CronJob
|
||||
metadata:
|
||||
name: {{ printf "%s-snapshotter" (include "common.names.fullname" .) | trunc 63 | trimSuffix "-" }}
|
||||
namespace: {{ .Release.Namespace | quote }}
|
||||
labels: {{- include "common.labels.standard" . | nindent 4 }}
|
||||
{{- if .Values.commonLabels }}
|
||||
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- if .Values.commonAnnotations }}
|
||||
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
concurrencyPolicy: Forbid
|
||||
schedule: {{ .Values.disasterRecovery.cronjob.schedule | quote }}
|
||||
successfulJobsHistoryLimit: {{ .Values.disasterRecovery.cronjob.historyLimit }}
|
||||
jobTemplate:
|
||||
spec:
|
||||
template:
|
||||
metadata:
|
||||
labels: {{- include "common.labels.standard" . | nindent 12 }}
|
||||
app.kubernetes.io/component: snapshotter
|
||||
{{- if .Values.disasterRecovery.cronjob.podAnnotations }}
|
||||
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.disasterRecovery.cronjob.podAnnotations "context" $) | nindent 12 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- if .Values.disasterRecovery.cronjob.nodeSelector }}
|
||||
nodeSelector: {{- toYaml .Values.disasterRecovery.cronjob.nodeSelector | nindent 12 }}
|
||||
{{- end }}
|
||||
{{- if .Values.disasterRecovery.cronjob.tolerations }}
|
||||
tolerations: {{- toYaml .Values.disasterRecovery.cronjob.tolerations | nindent 12 }}
|
||||
{{- end }}
|
||||
{{- include "etcd.imagePullSecrets" . | nindent 10 }}
|
||||
restartPolicy: OnFailure
|
||||
{{- if .Values.podSecurityContext.enabled }}
|
||||
securityContext: {{- omit .Values.podSecurityContext "enabled" | toYaml | nindent 12 }}
|
||||
{{- end }}
|
||||
{{- if and .Values.volumePermissions.enabled (or .Values.podSecurityContext.enabled .Values.containerSecurityContext.enabled) }}
|
||||
initContainers:
|
||||
- name: volume-permissions
|
||||
image: {{ include "etcd.volumePermissions.image" . }}
|
||||
imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }}
|
||||
command:
|
||||
- /bin/bash
|
||||
- -ec
|
||||
- |
|
||||
chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.podSecurityContext.fsGroup }} /snapshots
|
||||
securityContext:
|
||||
runAsUser: 0
|
||||
{{- if .Values.volumePermissions.resources }}
|
||||
resources: {{- include "common.tplvalues.render" (dict "value" .Values.volumePermissions.resources "context" $) | nindent 16 }}
|
||||
{{- end }}
|
||||
volumeMounts:
|
||||
- name: snapshot-volume
|
||||
mountPath: /snapshots
|
||||
{{- end }}
|
||||
containers:
|
||||
- name: etcd-snapshotter
|
||||
image: {{ include "etcd.image" . }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
|
||||
{{- if .Values.containerSecurityContext.enabled }}
|
||||
securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 16 }}
|
||||
{{- end }}
|
||||
{{- if .Values.diagnosticMode.enabled }}
|
||||
command: {{- include "common.tplvalues.render" (dict "value" .Values.diagnosticMode.command "context" $) | nindent 16 }}
|
||||
args: {{- include "common.tplvalues.render" (dict "value" .Values.diagnosticMode.args "context" $) | nindent 16 }}
|
||||
{{- else }}
|
||||
command:
|
||||
- /opt/bitnami/scripts/etcd/snapshot.sh
|
||||
{{- end }}
|
||||
env:
|
||||
- name: BITNAMI_DEBUG
|
||||
value: {{ ternary "true" "false" (or .Values.image.debug .Values.diagnosticMode.enabled) | quote }}
|
||||
- name: ETCDCTL_API
|
||||
value: "3"
|
||||
- name: ETCD_ON_K8S
|
||||
value: "yes"
|
||||
- name: MY_STS_NAME
|
||||
value: {{ include "common.names.fullname" . | quote }}
|
||||
{{- $releaseNamespace := .Release.Namespace }}
|
||||
{{- $etcdFullname := include "common.names.fullname" . }}
|
||||
{{- $etcdHeadlessServiceName := (printf "%s-%s" $etcdFullname "headless" | trunc 63 | trimSuffix "-") }}
|
||||
{{- $clusterDomain := .Values.clusterDomain }}
|
||||
- name: ETCD_CLUSTER_DOMAIN
|
||||
value: {{ printf "%s.%s.svc.%s" $etcdHeadlessServiceName $releaseNamespace $clusterDomain | quote }}
|
||||
- name: ETCD_SNAPSHOT_HISTORY_LIMIT
|
||||
value: {{ .Values.disasterRecovery.cronjob.snapshotHistoryLimit | quote }}
|
||||
- name: ETCD_SNAPSHOTS_DIR
|
||||
value: {{ .Values.disasterRecovery.cronjob.snapshotsDir | quote }}
|
||||
{{- if .Values.auth.client.secureTransport }}
|
||||
- name: ETCD_CERT_FILE
|
||||
value: "/opt/bitnami/etcd/certs/client/{{ .Values.auth.client.certFilename }}"
|
||||
- name: ETCD_KEY_FILE
|
||||
value: "/opt/bitnami/etcd/certs/client/{{ .Values.auth.client.certKeyFilename }}"
|
||||
{{- if .Values.auth.client.enableAuthentication }}
|
||||
- name: ETCD_CLIENT_CERT_AUTH
|
||||
value: "true"
|
||||
- name: ETCD_TRUSTED_CA_FILE
|
||||
value: "/opt/bitnami/etcd/certs/client/{{ .Values.auth.client.caFilename | default "ca.crt" }}"
|
||||
{{- else if .Values.auth.client.caFilename }}
|
||||
- name: ETCD_TRUSTED_CA_FILE
|
||||
value: "/opt/bitnami/etcd/certs/client/{{ .Values.auth.client.caFilename | default "ca.crt" }}"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if or .Values.auth.rbac.create .Values.auth.rbac.enabled }}
|
||||
- name: ETCD_ROOT_PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: {{ include "etcd.secretName" . }}
|
||||
key: {{ include "etcd.secretPasswordKey" . }}
|
||||
{{- end }}
|
||||
{{- if .Values.disasterRecovery.cronjob.resources }}
|
||||
resources: {{- toYaml .Values.disasterRecovery.cronjob.resources | nindent 16 }}
|
||||
{{- end }}
|
||||
volumeMounts:
|
||||
- name: snapshot-volume
|
||||
mountPath: /snapshots
|
||||
{{- if .Values.disasterRecovery.pvc.subPath }}
|
||||
subPath: {{ .Values.disasterRecovery.pvc.subPath }}
|
||||
{{- end }}
|
||||
{{- if .Values.auth.client.secureTransport }}
|
||||
- name: certs
|
||||
mountPath: /opt/bitnami/etcd/certs/client
|
||||
readOnly: true
|
||||
{{- end }}
|
||||
volumes:
|
||||
{{- if .Values.auth.client.secureTransport }}
|
||||
- name: certs
|
||||
secret:
|
||||
secretName: {{ required "A secret containing the client certificates is required" (tpl .Values.auth.client.existingSecret .) }}
|
||||
defaultMode: 256
|
||||
{{- end }}
|
||||
- name: snapshot-volume
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ include "etcd.disasterRecovery.pvc.name" . }}
|
||||
{{- end }}
|
||||
@@ -0,0 +1,4 @@
{{- range .Values.extraDeploy }}
---
{{ include "common.tplvalues.render" (dict "value" . "context" $) }}
{{- end }}
@@ -0,0 +1,81 @@
{{- if .Values.networkPolicy.enabled }}
kind: NetworkPolicy
apiVersion: {{ template "common.capabilities.networkPolicy.apiVersion" . }}
metadata:
  name: {{ template "common.names.fullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    {{- if .Values.commonLabels }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
    {{- end }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
spec:
  podSelector:
    matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
      {{- if .Values.podLabels }}
      {{- include "common.tplvalues.render" (dict "value" .Values.podLabels "context" $) | nindent 6 }}
      {{- end }}
  policyTypes:
    - Ingress
    - Egress
  egress:
    # Allow dns resolution
    - ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
    # Allow outbound connections to other cluster pods
    - ports:
        - port: {{ .Values.containerPorts.client }}
        - port: {{ .Values.containerPorts.peer }}
      to:
        - podSelector:
            matchLabels: {{- include "common.labels.standard" . | nindent 14 }}
              {{- if .Values.podLabels }}
              {{- include "common.tplvalues.render" (dict "value" .Values.podLabels "context" $) | nindent 14 }}
              {{- end }}
  {{- if .Values.networkPolicy.extraEgress }}
  {{- include "common.tplvalues.render" ( dict "value" .Values.networkPolicy.extraEgress "context" $ ) | nindent 4 }}
  {{- end }}
  ingress:
    # Allow inbound connections
    - ports:
        - port: {{ .Values.containerPorts.client }}
        - port: {{ .Values.containerPorts.peer }}
      {{- if not .Values.networkPolicy.allowExternal }}
      from:
        - podSelector:
            matchLabels:
              {{ template "common.names.fullname" . }}-client: "true"
        - podSelector:
            matchLabels: {{- include "common.labels.standard" . | nindent 14 }}
              {{- if .Values.podLabels }}
              {{- include "common.tplvalues.render" (dict "value" .Values.podLabels "context" $) | nindent 14 }}
              {{- end }}
        {{- if .Values.networkPolicy.ingressNSMatchLabels }}
        - namespaceSelector:
            matchLabels:
              {{- range $key, $value := .Values.networkPolicy.ingressNSMatchLabels }}
              {{ $key | quote }}: {{ $value | quote }}
              {{- end }}
          {{- if .Values.networkPolicy.ingressNSPodMatchLabels }}
          podSelector:
            matchLabels:
              {{- range $key, $value := .Values.networkPolicy.ingressNSPodMatchLabels }}
              {{ $key | quote }}: {{ $value | quote }}
              {{- end }}
          {{- end }}
        {{- end }}
      {{- end }}
  {{- if .Values.metrics.enabled }}
    # Allow prometheus scrapes for metrics
    - ports:
        - port: 2379
  {{- end }}
  {{- if .Values.networkPolicy.extraIngress }}
  {{- include "common.tplvalues.render" ( dict "value" .Values.networkPolicy.extraIngress "context" $ ) | nindent 4 }}
  {{- end }}
{{- end }}
23
deployment/helm/milvus/charts/etcd/templates/pdb.yaml
Normal file
23
deployment/helm/milvus/charts/etcd/templates/pdb.yaml
Normal file
@@ -0,0 +1,23 @@
{{- if .Values.pdb.create }}
apiVersion: {{ include "common.capabilities.policy.apiVersion" . }}
kind: PodDisruptionBudget
metadata:
  name: {{ include "common.names.fullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    {{- if .Values.commonLabels }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
    {{- end }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.pdb.minAvailable }}
  minAvailable: {{ .Values.pdb.minAvailable }}
  {{- end }}
  {{- if .Values.pdb.maxUnavailable }}
  maxUnavailable: {{ .Values.pdb.maxUnavailable }}
  {{- end }}
  selector:
    matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
{{- end }}
42
deployment/helm/milvus/charts/etcd/templates/podmonitor.yaml
Normal file
42
deployment/helm/milvus/charts/etcd/templates/podmonitor.yaml
Normal file
@@ -0,0 +1,42 @@
{{- if and .Values.metrics.enabled .Values.metrics.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: {{ include "common.names.fullname" . }}
  namespace: {{ ternary .Values.metrics.podMonitor.namespace .Release.Namespace (not (empty .Values.metrics.podMonitor.namespace)) }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    {{- if .Values.commonLabels }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
    {{- end }}
    {{- if .Values.metrics.podMonitor.additionalLabels }}
    {{- include "common.tplvalues.render" (dict "value" .Values.metrics.podMonitor.additionalLabels "context" $) | nindent 4 }}
    {{- end }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
spec:
  podMetricsEndpoints:
    - port: client
      path: /metrics
      {{- if .Values.metrics.podMonitor.interval }}
      interval: {{ .Values.metrics.podMonitor.interval }}
      {{- end }}
      {{- if .Values.metrics.podMonitor.scrapeTimeout }}
      scrapeTimeout: {{ .Values.metrics.podMonitor.scrapeTimeout }}
      {{- end }}
      {{- if .Values.metrics.podMonitor.scheme }}
      scheme: {{ .Values.metrics.podMonitor.scheme }}
      {{- end }}
      {{- if .Values.metrics.podMonitor.tlsConfig }}
      tlsConfig: {{- include "common.tplvalues.render" ( dict "value" .Values.metrics.podMonitor.tlsConfig "context" $ ) | nindent 8 }}
      {{- end }}
      {{- if .Values.metrics.podMonitor.relabelings }}
      relabelings:
        {{- include "common.tplvalues.render" (dict "value" .Values.metrics.podMonitor.relabelings "context" $) | nindent 8 }}
      {{- end }}
  namespaceSelector:
    matchNames:
      - {{ .Release.Namespace }}
  selector:
    matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
{{- end }}
@@ -0,0 +1,26 @@
{{- if and .Values.metrics.enabled .Values.metrics.prometheusRule.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: {{ include "common.names.fullname" . }}
  {{- if .Values.metrics.prometheusRule.namespace }}
  namespace: {{ .Values.metrics.prometheusRule.namespace }}
  {{- else }}
  namespace: {{ .Release.Namespace }}
  {{- end }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    app.kubernetes.io/component: metrics
    {{- if .Values.commonLabels }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
    {{- end }}
    {{- if .Values.metrics.prometheusRule.additionalLabels }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.metrics.prometheusRule.additionalLabels "context" $ ) | nindent 4 }}
    {{- end }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
spec:
  groups:
    - name: {{ include "common.names.fullname" . }}
      rules: {{- include "common.tplvalues.render" ( dict "value" .Values.metrics.prometheusRule.rules "context" $ ) | nindent 6 }}
{{- end }}
21
deployment/helm/milvus/charts/etcd/templates/secrets.yaml
Normal file
21
deployment/helm/milvus/charts/etcd/templates/secrets.yaml
Normal file
@@ -0,0 +1,21 @@
{{- if and (or .Values.auth.rbac.create .Values.auth.rbac.enabled) (not .Values.auth.rbac.existingSecret) -}}
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "common.names.fullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    {{- if .Values.commonLabels }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
    {{- end }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
type: Opaque
data:
  {{- if .Values.auth.rbac.rootPassword }}
  etcd-root-password: {{ .Values.auth.rbac.rootPassword | b64enc | quote }}
  {{- else }}
  etcd-root-password: {{ randAlphaNum 10 | b64enc | quote }}
  {{- end }}
{{- end }}
@@ -0,0 +1,24 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: {{ .Values.serviceAccount.automountServiceAccountToken }}
metadata:
  name: {{ include "etcd.serviceAccountName" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    {{- if .Values.commonLabels }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
    {{- end }}
    {{- if .Values.serviceAccount.labels }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.serviceAccount.labels "context" $ ) | nindent 4 }}
    {{- end }}
  {{- if or .Values.commonAnnotations .Values.serviceAccount.annotations }}
  annotations:
    {{- if .Values.commonAnnotations }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
    {{- end }}
    {{- if .Values.serviceAccount.annotations }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.serviceAccount.annotations "context" $ ) | nindent 4 }}
    {{- end }}
  {{- end }}
{{- end }}
@@ -0,0 +1,21 @@
{{- if and .Values.disasterRecovery.enabled (not .Values.disasterRecovery.pvc.existingClaim) -}}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ printf "%s-snapshotter" (include "common.names.fullname" .) | trunc 63 | trimSuffix "-" }}
  namespace: {{ .Release.Namespace | quote }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    {{- if .Values.commonLabels }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
    {{- end }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: {{ .Values.disasterRecovery.pvc.size | quote }}
  storageClassName: {{ .Values.disasterRecovery.pvc.storageClassName | quote }}
{{- end -}}
427
deployment/helm/milvus/charts/etcd/templates/statefulset.yaml
Normal file
427
deployment/helm/milvus/charts/etcd/templates/statefulset.yaml
Normal file
@@ -0,0 +1,427 @@
|
||||
apiVersion: {{ include "common.capabilities.statefulset.apiVersion" . }}
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: {{ include "common.names.fullname" . }}
|
||||
namespace: {{ .Release.Namespace | quote }}
|
||||
labels: {{- include "common.labels.standard" . | nindent 4 }}
|
||||
{{- if .Values.commonLabels }}
|
||||
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- if .Values.commonAnnotations }}
|
||||
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
replicas: {{ .Values.replicaCount }}
|
||||
selector:
|
||||
matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
|
||||
serviceName: {{ printf "%s-headless" (include "common.names.fullname" .) | trunc 63 | trimSuffix "-" }}
|
||||
podManagementPolicy: {{ .Values.podManagementPolicy }}
|
||||
updateStrategy: {{- include "common.tplvalues.render" (dict "value" .Values.updateStrategy "context" $ ) | nindent 4 }}
|
||||
template:
|
||||
metadata:
|
||||
labels: {{- include "common.labels.standard" . | nindent 8 }}
|
||||
{{- if .Values.podLabels }}
|
||||
{{- include "common.tplvalues.render" (dict "value" .Values.podLabels "context" $) | nindent 8 }}
|
||||
{{- end }}
|
||||
annotations:
|
||||
{{- if .Values.podAnnotations }}
|
||||
{{- include "common.tplvalues.render" ( dict "value" .Values.podAnnotations "context" $) | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if and .Values.metrics.enabled .Values.metrics.podAnnotations }}
|
||||
{{- include "common.tplvalues.render" ( dict "value" .Values.metrics.podAnnotations "context" $) | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if (include "etcd.createConfigmap" .) }}
|
||||
checksum/configuration: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
|
||||
{{- end }}
|
||||
{{- if (include "etcd.token.createSecret" .) }}
|
||||
checksum/token-secret: {{ include (print $.Template.BasePath "/token-secrets.yaml") . | sha256sum }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- include "etcd.imagePullSecrets" . | nindent 6 }}
|
||||
{{- if .Values.hostAliases }}
|
||||
hostAliases: {{- include "common.tplvalues.render" (dict "value" .Values.hostAliases "context" $) | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.affinity }}
|
||||
affinity: {{- include "common.tplvalues.render" (dict "value" .Values.affinity "context" $) | nindent 8 }}
|
||||
{{- else }}
|
||||
affinity:
|
||||
podAffinity: {{- include "common.affinities.pods" (dict "type" .Values.podAffinityPreset "context" $) | nindent 10 }}
|
||||
podAntiAffinity: {{- include "common.affinities.pods" (dict "type" .Values.podAntiAffinityPreset "context" $) | nindent 10 }}
|
||||
nodeAffinity: {{- include "common.affinities.nodes" (dict "type" .Values.nodeAffinityPreset.type "key" .Values.nodeAffinityPreset.key "values" .Values.nodeAffinityPreset.values) | nindent 10 }}
|
||||
{{- end }}
|
||||
{{- if .Values.nodeSelector }}
|
||||
nodeSelector: {{- include "common.tplvalues.render" (dict "value" .Values.nodeSelector "context" $) | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.tolerations }}
|
||||
tolerations: {{- include "common.tplvalues.render" (dict "value" .Values.tolerations "context" $) | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.terminationGracePeriodSeconds }}
|
||||
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
|
||||
{{- end }}
|
||||
{{- if .Values.schedulerName }}
|
||||
schedulerName: {{ .Values.schedulerName }}
|
||||
{{- end }}
|
||||
{{- if .Values.topologySpreadConstraints }}
|
||||
topologySpreadConstraints: {{- include "common.tplvalues.render" (dict "value" .Values.topologySpreadConstraints "context" .) | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.priorityClassName }}
|
||||
priorityClassName: {{ .Values.priorityClassName }}
|
||||
{{- end }}
|
||||
{{- if .Values.runtimeClassName }}
|
||||
runtimeClassName: {{ .Values.runtimeClassName }}
|
||||
{{- end }}
|
||||
{{- if .Values.podSecurityContext.enabled }}
|
||||
securityContext: {{- omit .Values.podSecurityContext "enabled" | toYaml | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.shareProcessNamespace }}
|
||||
shareProcessNamespace: {{ .Values.shareProcessNamespace }}
|
||||
{{- end }}
|
||||
serviceAccountName: {{ include "etcd.serviceAccountName" $ | quote }}
|
||||
{{- if or .Values.initContainers (and .Values.volumePermissions.enabled .Values.persistence.enabled) }}
|
||||
initContainers:
|
||||
{{- if .Values.initContainers }}
|
||||
{{- include "common.tplvalues.render" (dict "value" .Values.initContainers "context" $) | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if and .Values.volumePermissions.enabled .Values.persistence.enabled }}
|
||||
- name: volume-permissions
|
||||
image: {{ include "etcd.volumePermissions.image" . }}
|
||||
imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }}
|
||||
command:
|
||||
- /bin/bash
|
||||
- -ec
|
||||
- |
|
||||
chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.podSecurityContext.fsGroup }} /bitnami/etcd
|
||||
securityContext:
|
||||
runAsUser: 0
|
||||
{{- if .Values.volumePermissions.resources }}
|
||||
resources: {{- include "common.tplvalues.render" (dict "value" .Values.volumePermissions.resources "context" $) | nindent 12 }}
|
||||
{{- end }}
|
||||
volumeMounts:
|
||||
- name: data
|
||||
mountPath: /bitnami/etcd
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
containers:
|
||||
{{- $replicaCount := int .Values.replicaCount }}
|
||||
{{- $peerPort := int .Values.containerPorts.peer }}
|
||||
{{- $etcdFullname := include "common.names.fullname" . }}
|
||||
{{- $releaseNamespace := .Release.Namespace }}
|
||||
{{- $etcdHeadlessServiceName := (printf "%s-%s" $etcdFullname "headless" | trunc 63 | trimSuffix "-") }}
|
||||
{{- $clusterDomain := .Values.clusterDomain }}
|
||||
{{- $etcdPeerProtocol := include "etcd.peerProtocol" . }}
|
||||
{{- $etcdClientProtocol := include "etcd.clientProtocol" . }}
|
||||
- name: etcd
|
||||
image: {{ include "etcd.image" . }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
|
||||
{{- if .Values.containerSecurityContext.enabled }}
|
||||
securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 12 }}
|
||||
{{- end }}
|
||||
{{- if .Values.diagnosticMode.enabled }}
|
||||
command: {{- include "common.tplvalues.render" (dict "value" .Values.diagnosticMode.command "context" $) | nindent 12 }}
|
||||
{{- else if .Values.command }}
|
||||
command: {{- include "common.tplvalues.render" (dict "value" .Values.command "context" $) | nindent 12 }}
|
||||
{{- end }}
|
||||
{{- if .Values.diagnosticMode.enabled }}
|
||||
args: {{- include "common.tplvalues.render" (dict "value" .Values.diagnosticMode.args "context" $) | nindent 12 }}
|
||||
{{- else if .Values.args }}
|
||||
args: {{- include "common.tplvalues.render" (dict "value" .Values.args "context" $) | nindent 12 }}
|
||||
{{- end }}
|
||||
env:
|
||||
- name: BITNAMI_DEBUG
|
||||
value: {{ ternary "true" "false" (or .Values.image.debug .Values.diagnosticMode.enabled) | quote }}
|
||||
- name: MY_POD_IP
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: status.podIP
|
||||
- name: MY_POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: MY_STS_NAME
|
||||
value: {{ include "common.names.fullname" . | quote }}
|
||||
- name: ETCDCTL_API
|
||||
value: "3"
|
||||
- name: ETCD_ON_K8S
|
||||
value: "yes"
|
||||
- name: ETCD_START_FROM_SNAPSHOT
|
||||
value: {{ ternary "yes" "no" .Values.startFromSnapshot.enabled | quote }}
|
||||
- name: ETCD_DISASTER_RECOVERY
|
||||
value: {{ ternary "yes" "no" .Values.disasterRecovery.enabled | quote }}
|
||||
- name: ETCD_NAME
|
||||
value: "$(MY_POD_NAME)"
|
||||
- name: ETCD_DATA_DIR
|
||||
value: "/bitnami/etcd/data"
|
||||
- name: ETCD_LOG_LEVEL
|
||||
value: {{ ternary "debug" .Values.logLevel .Values.image.debug | quote }}
|
||||
- name: ALLOW_NONE_AUTHENTICATION
|
||||
value: {{ ternary "yes" "no" (and (not (or .Values.auth.rbac.create .Values.auth.rbac.enabled)) .Values.auth.rbac.allowNoneAuthentication) | quote }}
|
||||
{{- if or .Values.auth.rbac.create .Values.auth.rbac.enabled }}
|
||||
- name: ETCD_ROOT_PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: {{ include "etcd.secretName" . }}
|
||||
key: {{ include "etcd.secretPasswordKey" . }}
|
||||
{{- end }}
|
||||
{{- if .Values.auth.token.enabled }}
|
||||
- name: ETCD_AUTH_TOKEN
|
||||
{{- if eq .Values.auth.token.type "jwt" }}
|
||||
value: {{ printf "jwt,priv-key=/opt/bitnami/etcd/certs/token/%s,sign-method=%s,ttl=%s" .Values.auth.token.privateKey.filename .Values.auth.token.signMethod .Values.auth.token.ttl | quote }}
|
||||
{{- else if eq .Values.auth.token.type "simple" }}
|
||||
value: "simple"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
- name: ETCD_ADVERTISE_CLIENT_URLS
|
||||
value: "{{ $etcdClientProtocol }}://$(MY_POD_NAME).{{ $etcdHeadlessServiceName }}.{{ .Release.Namespace }}.svc.{{ $clusterDomain }}:{{ .Values.containerPorts.client }},{{ $etcdClientProtocol }}://{{ $etcdFullname }}.{{ .Release.Namespace }}.svc.{{ $clusterDomain }}:{{ coalesce .Values.service.ports.client .Values.service.port }}"
|
||||
- name: ETCD_LISTEN_CLIENT_URLS
|
||||
value: "{{ $etcdClientProtocol }}://0.0.0.0:{{ .Values.containerPorts.client }}"
|
||||
- name: ETCD_INITIAL_ADVERTISE_PEER_URLS
|
||||
value: "{{ $etcdPeerProtocol }}://$(MY_POD_NAME).{{ $etcdHeadlessServiceName }}.{{ .Release.Namespace }}.svc.{{ $clusterDomain }}:{{ .Values.containerPorts.peer }}"
|
||||
- name: ETCD_LISTEN_PEER_URLS
|
||||
value: "{{ $etcdPeerProtocol }}://0.0.0.0:{{ .Values.containerPorts.peer }}"
|
||||
{{- if .Values.autoCompactionMode }}
|
||||
- name: ETCD_AUTO_COMPACTION_MODE
|
||||
value: {{ .Values.autoCompactionMode | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.autoCompactionRetention }}
|
||||
- name: ETCD_AUTO_COMPACTION_RETENTION
|
||||
value: {{ .Values.autoCompactionRetention | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.maxProcs }}
|
||||
- name: GOMAXPROCS
|
||||
value: {{ .Values.maxProcs }}
|
||||
{{- end }}
|
||||
{{- if gt $replicaCount 1 }}
|
||||
- name: ETCD_INITIAL_CLUSTER_TOKEN
|
||||
value: "etcd-cluster-k8s"
|
||||
- name: ETCD_INITIAL_CLUSTER_STATE
|
||||
value: {{ default (ternary "new" "existing" .Release.IsInstall) .Values.initialClusterState | quote }}
|
||||
{{- $initialCluster := list }}
|
||||
{{- range $e, $i := until $replicaCount }}
|
||||
{{- $initialCluster = append $initialCluster (printf "%s-%d=%s://%s-%d.%s.%s.svc.%s:%d" $etcdFullname $i $etcdPeerProtocol $etcdFullname $i $etcdHeadlessServiceName $releaseNamespace $clusterDomain $peerPort) }}
|
||||
{{- end }}
|
||||
- name: ETCD_INITIAL_CLUSTER
|
||||
value: {{ join "," $initialCluster | quote }}
|
||||
{{- end }}
|
||||
- name: ETCD_CLUSTER_DOMAIN
|
||||
value: {{ printf "%s.%s.svc.%s" $etcdHeadlessServiceName $releaseNamespace $clusterDomain | quote }}
|
||||
{{- if and .Values.auth.client.secureTransport .Values.auth.client.useAutoTLS }}
|
||||
- name: ETCD_AUTO_TLS
|
||||
value: "true"
|
||||
{{- else if .Values.auth.client.secureTransport }}
|
||||
- name: ETCD_CERT_FILE
|
||||
value: "/opt/bitnami/etcd/certs/client/{{ .Values.auth.client.certFilename }}"
|
||||
- name: ETCD_KEY_FILE
|
||||
value: "/opt/bitnami/etcd/certs/client/{{ .Values.auth.client.certKeyFilename }}"
|
||||
{{- if .Values.auth.client.enableAuthentication }}
|
||||
- name: ETCD_CLIENT_CERT_AUTH
|
||||
value: "true"
|
||||
- name: ETCD_TRUSTED_CA_FILE
|
||||
value: "/opt/bitnami/etcd/certs/client/{{ .Values.auth.client.caFilename | default "ca.crt" }}"
|
||||
{{- else if .Values.auth.client.caFilename }}
|
||||
- name: ETCD_TRUSTED_CA_FILE
|
||||
value: "/opt/bitnami/etcd/certs/client/{{ .Values.auth.client.caFilename | default "ca.crt" }}"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if and .Values.auth.peer.secureTransport .Values.auth.peer.useAutoTLS }}
|
||||
- name: ETCD_PEER_AUTO_TLS
|
||||
value: "true"
|
||||
{{- else if .Values.auth.peer.secureTransport }}
|
||||
- name: ETCD_PEER_CERT_FILE
|
||||
value: "/opt/bitnami/etcd/certs/peer/{{ .Values.auth.peer.certFilename }}"
|
||||
- name: ETCD_PEER_KEY_FILE
|
||||
value: "/opt/bitnami/etcd/certs/peer/{{ .Values.auth.peer.certKeyFilename }}"
|
||||
{{- if .Values.auth.peer.enableAuthentication }}
|
||||
- name: ETCD_PEER_CLIENT_CERT_AUTH
|
||||
value: "true"
|
||||
- name: ETCD_PEER_TRUSTED_CA_FILE
|
||||
value: "/opt/bitnami/etcd/certs/peer/{{ .Values.auth.peer.caFilename | default "ca.crt" }}"
|
||||
{{- else if .Values.auth.peer.caFilename }}
|
||||
- name: ETCD_PEER_TRUSTED_CA_FILE
|
||||
value: "/opt/bitnami/etcd/certs/peer/{{ .Values.auth.peer.caFilename | default "ca.crt" }}"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.startFromSnapshot.enabled }}
|
||||
- name: ETCD_INIT_SNAPSHOT_FILENAME
|
||||
value: {{ .Values.startFromSnapshot.snapshotFilename | quote }}
|
||||
- name: ETCD_INIT_SNAPSHOTS_DIR
|
||||
value: {{ ternary "/snapshots" "/init-snapshot" (and .Values.disasterRecovery.enabled (not .Values.disasterRecovery.pvc.existingClaim)) | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.extraEnvVars }}
|
||||
{{- include "common.tplvalues.render" (dict "value" .Values.extraEnvVars "context" $) | nindent 12 }}
|
||||
{{- end }}
|
||||
envFrom:
|
||||
{{- if .Values.extraEnvVarsCM }}
|
||||
- configMapRef:
|
||||
name: {{ include "common.tplvalues.render" (dict "value" .Values.extraEnvVarsCM "context" $) }}
|
||||
{{- end }}
|
||||
{{- if .Values.extraEnvVarsSecret }}
|
||||
- secretRef:
|
||||
name: {{ include "common.tplvalues.render" (dict "value" .Values.extraEnvVarsSecret "context" $) }}
|
||||
{{- end }}
|
||||
ports:
|
||||
- name: client
|
||||
containerPort: {{ .Values.containerPorts.client }}
|
||||
protocol: TCP
|
||||
- name: peer
|
||||
containerPort: {{ .Values.containerPorts.peer }}
|
||||
protocol: TCP
|
||||
{{- if not .Values.diagnosticMode.enabled }}
|
||||
{{- if .Values.customLivenessProbe }}
|
||||
livenessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customLivenessProbe "context" $) | nindent 12 }}
|
||||
{{- else if .Values.livenessProbe.enabled }}
|
||||
livenessProbe:
|
||||
exec:
|
||||
command:
|
||||
- /opt/bitnami/scripts/etcd/healthcheck.sh
|
||||
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
|
||||
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
|
||||
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
|
||||
successThreshold: {{ .Values.livenessProbe.successThreshold }}
|
||||
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
|
||||
{{- end }}
|
||||
{{- if .Values.customReadinessProbe }}
|
||||
readinessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customReadinessProbe "context" $) | nindent 12 }}
|
||||
{{- else if .Values.readinessProbe.enabled }}
|
||||
readinessProbe:
|
||||
exec:
|
||||
command:
|
||||
- /opt/bitnami/scripts/etcd/healthcheck.sh
|
||||
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
|
||||
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
|
||||
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
|
||||
successThreshold: {{ .Values.readinessProbe.successThreshold }}
|
||||
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
|
||||
{{- end }}
|
||||
{{- if .Values.customStartupProbe }}
|
||||
startupProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customStartupProbe "context" $) | nindent 12 }}
|
||||
{{- else if .Values.startupProbe.enabled }}
|
||||
startupProbe:
|
||||
exec:
|
||||
command:
|
||||
- /opt/bitnami/scripts/etcd/healthcheck.sh
|
||||
initialDelaySeconds: {{ .Values.startupProbe.initialDelaySeconds }}
|
||||
periodSeconds: {{ .Values.startupProbe.periodSeconds }}
|
||||
timeoutSeconds: {{ .Values.startupProbe.timeoutSeconds }}
|
||||
successThreshold: {{ .Values.startupProbe.successThreshold }}
|
||||
failureThreshold: {{ .Values.startupProbe.failureThreshold }}
|
||||
{{- end }}
|
||||
{{- if .Values.lifecycleHooks }}
|
||||
lifecycle: {{- include "common.tplvalues.render" (dict "value" .Values.lifecycleHooks "context" $) | nindent 12 }}
|
||||
{{- else if and (gt $replicaCount 1) .Values.removeMemberOnContainerTermination }}
|
||||
lifecycle:
|
||||
preStop:
|
||||
exec:
|
||||
command:
|
||||
- /opt/bitnami/scripts/etcd/prestop.sh
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.resources }}
|
||||
resources: {{- include "common.tplvalues.render" (dict "value" .Values.resources "context" $) | nindent 12 }}
|
||||
{{- end }}
|
||||
volumeMounts:
|
||||
- name: data
|
||||
mountPath: /bitnami/etcd
|
||||
{{- if and (eq .Values.auth.token.enabled true) (eq .Values.auth.token.type "jwt") }}
|
||||
- name: etcd-jwt-token
|
||||
mountPath: /opt/bitnami/etcd/certs/token/
|
||||
readOnly: true
|
||||
{{- end }}
|
||||
{{- if or (and .Values.startFromSnapshot.enabled (not .Values.disasterRecovery.enabled)) (and .Values.disasterRecovery.enabled .Values.startFromSnapshot.enabled .Values.disasterRecovery.pvc.existingClaim) }}
|
||||
- name: init-snapshot-volume
|
||||
mountPath: /init-snapshot
|
||||
{{- end }}
|
||||
{{- if or .Values.disasterRecovery.enabled (and .Values.disasterRecovery.enabled .Values.startFromSnapshot.enabled) }}
|
||||
- name: snapshot-volume
|
||||
mountPath: /snapshots
|
||||
{{- if .Values.disasterRecovery.pvc.subPath }}
|
||||
subPath: {{ .Values.disasterRecovery.pvc.subPath }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if or .Values.configuration .Values.existingConfigmap }}
|
||||
- name: etcd-config
|
||||
mountPath: /opt/bitnami/etcd/conf/
|
||||
{{- end }}
|
||||
{{- if or .Values.auth.client.enableAuthentication (and .Values.auth.client.secureTransport (not .Values.auth.client.useAutoTLS )) }}
|
||||
- name: etcd-client-certs
|
||||
mountPath: /opt/bitnami/etcd/certs/client/
|
||||
readOnly: true
|
||||
{{- end }}
|
||||
{{- if or .Values.auth.peer.enableAuthentication (and .Values.auth.peer.secureTransport (not .Values.auth.peer.useAutoTLS )) }}
|
||||
- name: etcd-peer-certs
|
||||
mountPath: /opt/bitnami/etcd/certs/peer/
|
||||
readOnly: true
|
||||
{{- end }}
|
||||
{{- if .Values.extraVolumeMounts }}
|
||||
{{- include "common.tplvalues.render" (dict "value" .Values.extraVolumeMounts "context" $) | nindent 12 }}
|
||||
{{- end }}
|
||||
{{- if .Values.sidecars }}
|
||||
{{- include "common.tplvalues.render" (dict "value" .Values.sidecars "context" $) | nindent 8 }}
|
||||
{{- end }}
|
||||
volumes:
|
||||
{{- if and (eq .Values.auth.token.enabled true) (eq .Values.auth.token.type "jwt") }}
|
||||
- name: etcd-jwt-token
|
||||
secret:
|
||||
secretName: {{ include "etcd.token.secretName" . }}
|
||||
defaultMode: 256
|
||||
{{- end }}
|
||||
{{- if or (and .Values.startFromSnapshot.enabled (not .Values.disasterRecovery.enabled)) (and .Values.disasterRecovery.enabled .Values.startFromSnapshot.enabled .Values.disasterRecovery.pvc.existingClaim) }}
|
||||
- name: init-snapshot-volume
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ .Values.startFromSnapshot.existingClaim }}
|
||||
{{- end }}
|
||||
{{- if or .Values.disasterRecovery.enabled (and .Values.disasterRecovery.enabled .Values.startFromSnapshot.enabled) }}
|
||||
- name: snapshot-volume
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ include "etcd.disasterRecovery.pvc.name" . }}
|
||||
{{- end }}
|
||||
{{- if or .Values.configuration .Values.existingConfigmap }}
|
||||
- name: etcd-config
|
||||
configMap:
|
||||
name: {{ include "etcd.configmapName" . }}
|
||||
{{- end }}
|
||||
{{- if or .Values.auth.client.enableAuthentication (and .Values.auth.client.secureTransport (not .Values.auth.client.useAutoTLS )) }}
|
||||
- name: etcd-client-certs
|
||||
secret:
|
||||
secretName: {{ required "A secret containing the client certificates is required" (tpl .Values.auth.client.existingSecret .) }}
|
||||
defaultMode: 256
|
||||
{{- end }}
|
||||
{{- if or .Values.auth.peer.enableAuthentication (and .Values.auth.peer.secureTransport (not .Values.auth.peer.useAutoTLS )) }}
|
||||
- name: etcd-peer-certs
|
||||
secret:
|
||||
secretName: {{ required "A secret containing the peer certificates is required" (tpl .Values.auth.peer.existingSecret .) }}
|
||||
defaultMode: 256
|
||||
{{- end }}
|
||||
{{- if .Values.extraVolumes }}
|
||||
{{- include "common.tplvalues.render" (dict "value" .Values.extraVolumes "context" $) | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if not .Values.persistence.enabled }}
|
||||
- name: data
|
||||
emptyDir: {}
|
||||
{{- else }}
|
||||
{{- if .Values.persistentVolumeClaimRetentionPolicy.enabled }}
|
||||
persistentVolumeClaimRetentionPolicy:
|
||||
whenDeleted: {{ .Values.persistentVolumeClaimRetentionPolicy.whenDeleted }}
|
||||
whenScaled: {{ .Values.persistentVolumeClaimRetentionPolicy.whenScaled }}
|
||||
{{- end }}
|
||||
volumeClaimTemplates:
|
||||
- metadata:
|
||||
name: data
|
||||
{{- if .Values.persistence.annotations }}
|
||||
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.persistence.annotations "context" $) | nindent 10 }}
|
||||
{{- end }}
|
||||
{{- if .Values.persistence.labels }}
|
||||
labels: {{- include "common.tplvalues.render" ( dict "value" .Values.persistence.labels "context" $) | nindent 10 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
accessModes:
|
||||
{{- range .Values.persistence.accessModes }}
|
||||
- {{ . | quote }}
|
||||
{{- end }}
|
||||
resources:
|
||||
requests:
|
||||
storage: {{ .Values.persistence.size | quote }}
|
||||
{{- if .Values.persistence.selector }}
|
||||
selector: {{- include "common.tplvalues.render" ( dict "value" .Values.persistence.selector "context" $) | nindent 10 }}
|
||||
{{- end }}
|
||||
{{ include "common.storage.class" (dict "persistence" .Values.persistence "global" .Values.global) }}
|
||||
{{- end }}
|
||||
@@ -0,0 +1,45 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ printf "%s-headless" (include "common.names.fullname" .) | trunc 63 | trimSuffix "-" }}
  namespace: {{ .Release.Namespace | quote }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    {{- if .Values.commonLabels }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
    {{- end }}
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
    {{- if .Values.service.headless.annotations }}
    {{- include "common.tplvalues.render" (dict "value" .Values.service.headless.annotations "context" $) | nindent 4 }}
    {{- end }}
    {{- if .Values.commonAnnotations }}
    {{- include "common.tplvalues.render" (dict "value" .Values.commonAnnotations "context" $) | nindent 4 }}
    {{- end }}
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    {{- if .Values.service.clientPortNameOverride }}
    {{- if .Values.auth.client.secureTransport }}
    - name: {{ .Values.service.clientPortNameOverride }}-ssl
    {{- else }}
    - name: {{ .Values.service.clientPortNameOverride }}
    {{- end }}
    {{- else }}
    - name: client
    {{- end }}
      port: {{ .Values.containerPorts.client }}
      targetPort: client
    {{- if .Values.service.peerPortNameOverride }}
    {{- if .Values.auth.peer.secureTransport }}
    - name: {{ .Values.service.peerPortNameOverride }}-ssl
    {{- else }}
    - name: {{ .Values.service.peerPortNameOverride }}
    {{- end }}
    {{- else }}
    - name: peer
    {{- end }}
      port: {{ .Values.containerPorts.peer }}
      targetPort: peer
  selector: {{- include "common.labels.matchLabels" . | nindent 4 }}
62
deployment/helm/milvus/charts/etcd/templates/svc.yaml
Normal file
62
deployment/helm/milvus/charts/etcd/templates/svc.yaml
Normal file
@@ -0,0 +1,62 @@
{{- if .Values.service.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "common.names.fullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    {{- if .Values.commonLabels }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
    {{- end }}
  annotations:
    {{- if .Values.service.annotations }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.service.annotations "context" $) | nindent 4 }}
    {{- end }}
    {{- if .Values.commonAnnotations }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
    {{- end }}
spec:
  type: {{ .Values.service.type }}
  {{- if .Values.service.clusterIP }}
  clusterIP: {{ .Values.service.clusterIP }}
  {{- end }}
  {{- if (or (eq .Values.service.type "LoadBalancer") (eq .Values.service.type "NodePort")) }}
  externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy | quote }}
  {{- end }}
  {{- if and (eq .Values.service.type "LoadBalancer") (not (empty .Values.service.loadBalancerIP)) }}
  loadBalancerIP: {{ .Values.service.loadBalancerIP }}
  {{- end }}
  {{- if and (eq .Values.service.type "LoadBalancer") (not (empty .Values.service.loadBalancerSourceRanges)) }}
  loadBalancerSourceRanges: {{- toYaml .Values.service.loadBalancerSourceRanges | nindent 4 }}
  {{- end }}
  {{- if .Values.service.externalIPs }}
  externalIPs: {{- toYaml .Values.service.externalIPs | nindent 4 }}
  {{- end }}
  {{- if .Values.service.sessionAffinity }}
  sessionAffinity: {{ .Values.service.sessionAffinity }}
  {{- end }}
  {{- if .Values.service.sessionAffinityConfig }}
  sessionAffinityConfig: {{- include "common.tplvalues.render" (dict "value" .Values.service.sessionAffinityConfig "context" $) | nindent 4 }}
  {{- end }}
  ports:
    - name: {{ default "client" .Values.service.clientPortNameOverride | quote }}
      port: {{ coalesce .Values.service.ports.client .Values.service.port }}
      targetPort: client
      {{- if and (or (eq .Values.service.type "NodePort") (eq .Values.service.type "LoadBalancer")) (not (empty (coalesce .Values.service.nodePorts.client .Values.service.nodePorts.clientPort))) }}
      nodePort: {{ coalesce .Values.service.nodePorts.client .Values.service.nodePorts.clientPort }}
      {{- else if eq .Values.service.type "ClusterIP" }}
      nodePort: null
      {{- end }}
    - name: {{ default "peer" .Values.service.peerPortNameOverride | quote }}
      port: {{ coalesce .Values.service.ports.peer .Values.service.peerPort }}
      targetPort: peer
      {{- if and (or (eq .Values.service.type "NodePort") (eq .Values.service.type "LoadBalancer")) (not (empty (coalesce .Values.service.nodePorts.peer .Values.service.nodePorts.peerPort))) }}
      nodePort: {{ coalesce .Values.service.nodePorts.peer .Values.service.nodePorts.peerPort }}
      {{- else if eq .Values.service.type "ClusterIP" }}
      nodePort: null
      {{- end }}
    {{- if .Values.service.extraPorts }}
    {{- include "common.tplvalues.render" (dict "value" .Values.service.extraPorts "context" $) | nindent 4 }}
    {{- end }}
  selector: {{- include "common.labels.matchLabels" . | nindent 4 }}
{{- end }}
@@ -0,0 +1,14 @@
{{- if (include "etcd.token.createSecret" .) }}
apiVersion: v1
kind: Secret
metadata:
name: {{ printf "%s-jwt-token" (include "common.names.fullname" .) | trunc 63 | trimSuffix "-" }}
namespace: {{ .Release.Namespace | quote }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
type: Opaque
data:
jwt-token.pem: {{ include "etcd.token.jwtToken" . | b64enc | quote }}
{{- end }}
906
deployment/helm/milvus/charts/etcd/values.yaml
Normal file
@@ -0,0 +1,906 @@
|
||||
## @section Global parameters
|
||||
## Global Docker image parameters
|
||||
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
|
||||
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
|
||||
##
|
||||
|
||||
## @param global.imageRegistry Global Docker image registry
|
||||
## @param global.imagePullSecrets [array] Global Docker registry secret names as an array
|
||||
## @param global.storageClass Global StorageClass for Persistent Volume(s)
|
||||
##
|
||||
global:
|
||||
imageRegistry: ""
|
||||
## E.g.
|
||||
## imagePullSecrets:
|
||||
## - myRegistryKeySecretName
|
||||
##
|
||||
imagePullSecrets: []
|
||||
storageClass: ""
|
||||
|
||||
## @section Common parameters
|
||||
##
|
||||
|
||||
## @param kubeVersion Force target Kubernetes version (using Helm capabilities if not set)
|
||||
##
|
||||
kubeVersion: ""
|
||||
## @param nameOverride String to partially override common.names.fullname template (will maintain the release name)
|
||||
##
|
||||
nameOverride: ""
|
||||
## @param fullnameOverride String to fully override common.names.fullname template
|
||||
##
|
||||
fullnameOverride: ""
|
||||
## @param commonLabels [object] Labels to add to all deployed objects
|
||||
##
|
||||
commonLabels: {}
|
||||
## @param commonAnnotations [object] Annotations to add to all deployed objects
|
||||
##
|
||||
commonAnnotations: {}
|
||||
## @param clusterDomain Default Kubernetes cluster domain
|
||||
##
|
||||
clusterDomain: cluster.local
|
||||
## @param extraDeploy [array] Array of extra objects to deploy with the release
|
||||
##
|
||||
extraDeploy: []
|
||||
|
||||
## Enable diagnostic mode in the deployment
|
||||
##
|
||||
diagnosticMode:
|
||||
## @param diagnosticMode.enabled Enable diagnostic mode (all probes will be disabled and the command will be overridden)
|
||||
##
|
||||
enabled: false
|
||||
## @param diagnosticMode.command Command to override all containers in the deployment
|
||||
##
|
||||
command:
|
||||
- sleep
|
||||
## @param diagnosticMode.args Args to override all containers in the deployment
|
||||
##
|
||||
args:
|
||||
- infinity
|
||||
|
||||
## @section etcd parameters
|
||||
##
|
||||
|
||||
## Bitnami etcd image version
|
||||
## ref: https://hub.docker.com/r/bitnami/etcd/tags/
|
||||
## @param image.registry etcd image registry
|
||||
## @param image.repository etcd image name
|
||||
## @param image.tag etcd image tag
|
||||
## @param image.digest etcd image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
|
||||
##
|
||||
image:
|
||||
registry: docker.io
|
||||
repository: bitnami/etcd
|
||||
tag: 3.5.9-debian-11-r4
|
||||
digest: ""
|
||||
## @param image.pullPolicy etcd image pull policy
|
||||
## Specify an imagePullPolicy
|
||||
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
|
||||
## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
|
||||
##
|
||||
pullPolicy: IfNotPresent
|
||||
## @param image.pullSecrets [array] etcd image pull secrets
|
||||
## Optionally specify an array of imagePullSecrets.
|
||||
## Secrets must be manually created in the namespace.
|
||||
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
|
||||
## e.g:
|
||||
## pullSecrets:
|
||||
## - myRegistryKeySecretName
|
||||
##
|
||||
pullSecrets: []
|
||||
## @param image.debug Enable image debug mode
|
||||
## Set to true if you would like to see extra information on logs
|
||||
##
|
||||
debug: false
|
||||
## Authentication parameters
|
||||
##
|
||||
auth:
|
||||
## Role-based access control parameters
|
||||
## ref: https://etcd.io/docs/current/op-guide/authentication/
|
||||
##
|
||||
rbac:
|
||||
## @param auth.rbac.create Switch to enable RBAC authentication
|
||||
##
|
||||
create: true
|
||||
## @param auth.rbac.allowNoneAuthentication Allow to use etcd without configuring RBAC authentication
|
||||
##
|
||||
allowNoneAuthentication: true
|
||||
## @param auth.rbac.rootPassword Root user password. The root user is always `root`
|
||||
##
|
||||
rootPassword: ""
|
||||
## @param auth.rbac.existingSecret Name of the existing secret containing credentials for the root user
|
||||
##
|
||||
existingSecret: ""
|
||||
## @param auth.rbac.existingSecretPasswordKey Name of key containing password to be retrieved from the existing secret
|
||||
##
|
||||
existingSecretPasswordKey: ""
|
||||
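## Illustrative sketch (not a chart default): enable RBAC backed by a pre-created
## Secret; the Secret name and password key below are assumptions for the example.
## auth:
##   rbac:
##     create: true
##     allowNoneAuthentication: false
##     existingSecret: "my-etcd-root-secret"
##     existingSecretPasswordKey: "etcd-root-password"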
## Authentication token
|
||||
## ref: https://etcd.io/docs/latest/learning/design-auth-v3/#two-types-of-tokens-simple-and-jwt
|
||||
##
|
||||
token:
|
||||
## @param auth.token.enabled Enables token authentication
|
||||
##
|
||||
enabled: true
|
||||
## @param auth.token.type Authentication token type. Allowed values: 'simple' or 'jwt'
|
||||
## ref: https://etcd.io/docs/latest/op-guide/configuration/#--auth-token
|
||||
##
|
||||
type: jwt
|
||||
## @param auth.token.privateKey.filename Name of the file containing the private key for signing the JWT token
|
||||
## @param auth.token.privateKey.existingSecret Name of the existing secret containing the private key for signing the JWT token
|
||||
## NOTE: Ignored if auth.token.type=simple
|
||||
## NOTE: A secret containing a private key will be auto-generated if an existing one is not provided.
|
||||
##
|
||||
privateKey:
|
||||
filename: jwt-token.pem
|
||||
existingSecret: ""
|
||||
## @param auth.token.signMethod JWT token sign method
|
||||
## NOTE: Ignored if auth.token.type=simple
|
||||
##
|
||||
signMethod: RS256
|
||||
## @param auth.token.ttl JWT token TTL
|
||||
## NOTE: Ignored if auth.token.type=simple
|
||||
##
|
||||
ttl: 10m
|
||||
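## Illustrative sketch (not a chart default): keep JWT token authentication but sign
## tokens with a key from an existing Secret; the Secret name below is an assumption.
## auth:
##   token:
##     type: jwt
##     privateKey:
##       filename: jwt-token.pem
##       existingSecret: "my-etcd-jwt-key"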
## TLS authentication for client-to-server communications
|
||||
## ref: https://etcd.io/docs/current/op-guide/security/
|
||||
##
|
||||
client:
|
||||
## @param auth.client.secureTransport Switch to encrypt client-to-server communications using TLS certificates
|
||||
##
|
||||
secureTransport: false
|
||||
## @param auth.client.useAutoTLS Switch to automatically create the TLS certificates
|
||||
##
|
||||
useAutoTLS: false
|
||||
## @param auth.client.existingSecret Name of the existing secret containing the TLS certificates for client-to-server communications
|
||||
##
|
||||
existingSecret: ""
|
||||
## @param auth.client.enableAuthentication Switch to enable host authentication using TLS certificates. Requires existing secret
|
||||
##
|
||||
enableAuthentication: false
|
||||
## @param auth.client.certFilename Name of the file containing the client certificate
|
||||
##
|
||||
certFilename: cert.pem
|
||||
## @param auth.client.certKeyFilename Name of the file containing the client certificate private key
|
||||
##
|
||||
certKeyFilename: key.pem
|
||||
## @param auth.client.caFilename Name of the file containing the client CA certificate
|
||||
## If not specified and `auth.client.enableAuthentication=true` or `auth.rbac.enabled=true`, the default is `ca.crt`
|
||||
##
|
||||
caFilename: ""
|
||||
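## Illustrative sketch (not a chart default): encrypt and authenticate client traffic
## using a pre-created Secret; the Secret name is an assumption and the Secret must
## contain the certificate, key and CA files named by the parameters above.
## auth:
##   client:
##     secureTransport: true
##     enableAuthentication: true
##     existingSecret: "etcd-client-certs"
##     caFilename: ca.crt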
## TLS authentication for server-to-server communications
|
||||
## ref: https://etcd.io/docs/current/op-guide/security/
|
||||
##
|
||||
peer:
|
||||
## @param auth.peer.secureTransport Switch to encrypt server-to-server communications using TLS certificates
|
||||
##
|
||||
secureTransport: false
|
||||
## @param auth.peer.useAutoTLS Switch to automatically create the TLS certificates
|
||||
##
|
||||
useAutoTLS: false
|
||||
## @param auth.peer.existingSecret Name of the existing secret containing the TLS certificates for server-to-server communications
|
||||
##
|
||||
existingSecret: ""
|
||||
## @param auth.peer.enableAuthentication Switch to enable host authentication using TLS certificates. Requires existing secret
|
||||
##
|
||||
enableAuthentication: false
|
||||
## @param auth.peer.certFilename Name of the file containing the peer certificate
|
||||
##
|
||||
certFilename: cert.pem
|
||||
## @param auth.peer.certKeyFilename Name of the file containing the peer certificate private key
|
||||
##
|
||||
certKeyFilename: key.pem
|
||||
## @param auth.peer.caFilename Name of the file containing the peer CA certificate
|
||||
## If not specified and `auth.peer.enableAuthentication=true` or `rbac.enabled=true`, the default is `ca.crt`
|
||||
##
|
||||
caFilename: ""
|
||||
## @param autoCompactionMode Auto compaction mode, by default periodic. Valid values: "periodic", "revision".
|
||||
## - 'periodic' for duration based retention, defaulting to hours if no time unit is provided (e.g. 5m).
|
||||
## - 'revision' for revision number based retention.
|
||||
##
|
||||
autoCompactionMode: ""
|
||||
## @param autoCompactionRetention Auto compaction retention for mvcc key value store in hour, by default 0, means disabled
|
||||
##
|
||||
autoCompactionRetention: ""
|
||||
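## Illustrative sketch (not a chart default): keep roughly one hour of history using
## duration-based compaction.
## autoCompactionMode: periodic
## autoCompactionRetention: "1h"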
## @param initialClusterState Initial cluster state. Allowed values: 'new' or 'existing'
|
||||
## If this value is not set, the defaults below are used:
|
||||
## - 'new': when installing the chart ('helm install ...')
|
||||
## - 'existing': when upgrading the chart ('helm upgrade ...')
|
||||
##
|
||||
initialClusterState: ""
|
||||
## @param logLevel Sets the log level for the etcd process. Allowed values: 'debug', 'info', 'warn', 'error', 'panic', 'fatal'
|
||||
##
|
||||
logLevel: "info"
|
||||
## @param maxProcs Limits the number of operating system threads that can execute user-level
|
||||
## Go code simultaneously by setting GOMAXPROCS environment variable
|
||||
## ref: https://golang.org/pkg/runtime
|
||||
##
|
||||
maxProcs: ""
|
||||
## @param removeMemberOnContainerTermination Use a PreStop hook to remove the etcd members from the etcd cluster on container termination
|
||||
## Set to 'false' if an error appears indicating the member ID wasn't properly stored.
|
||||
## NOTE: Ignored if lifecycleHooks is set or replicaCount=1
|
||||
##
|
||||
removeMemberOnContainerTermination: true
|
||||
## @param configuration etcd configuration. Specify content for etcd.conf.yml
|
||||
## e.g:
|
||||
## configuration: |-
|
||||
## foo: bar
|
||||
## baz:
|
||||
##
|
||||
configuration: ""
|
||||
## @param existingConfigmap Existing ConfigMap with etcd configuration
|
||||
## NOTE: When it's set the configuration parameter is ignored
|
||||
##
|
||||
existingConfigmap: ""
|
||||
## @param extraEnvVars [array] Extra environment variables to be set on etcd container
|
||||
## e.g:
|
||||
## extraEnvVars:
|
||||
## - name: FOO
|
||||
## value: "bar"
|
||||
##
|
||||
extraEnvVars: []
|
||||
## @param extraEnvVarsCM Name of existing ConfigMap containing extra env vars
|
||||
##
|
||||
extraEnvVarsCM: ""
|
||||
## @param extraEnvVarsSecret Name of existing Secret containing extra env vars
|
||||
##
|
||||
extraEnvVarsSecret: ""
|
||||
## @param command [array] Default container command (useful when using custom images)
|
||||
##
|
||||
command: []
|
||||
## @param args [array] Default container args (useful when using custom images)
|
||||
##
|
||||
args: []
|
||||
|
||||
## @section etcd statefulset parameters
|
||||
##
|
||||
|
||||
|
||||
## @param replicaCount Number of etcd replicas to deploy
|
||||
##
|
||||
replicaCount: 1
|
||||
## Update strategy
|
||||
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
|
||||
## @param updateStrategy.type Update strategy type, can be set to RollingUpdate or OnDelete.
|
||||
##
|
||||
updateStrategy:
|
||||
type: RollingUpdate
|
||||
## @param podManagementPolicy Pod management policy for the etcd statefulset
|
||||
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
|
||||
##
|
||||
podManagementPolicy: Parallel
|
||||
## @param hostAliases [array] etcd pod host aliases
|
||||
## ref: https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
|
||||
##
|
||||
hostAliases: []
|
||||
## @param lifecycleHooks [object] Override default etcd container hooks
|
||||
##
|
||||
lifecycleHooks: {}
|
||||
## etcd container ports to open
|
||||
## @param containerPorts.client Client port to expose at container level
|
||||
## @param containerPorts.peer Peer port to expose at container level
|
||||
##
|
||||
containerPorts:
|
||||
client: 2379
|
||||
peer: 2380
|
||||
## etcd pods' Security Context
|
||||
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
|
||||
## @param podSecurityContext.enabled Enabled etcd pods' Security Context
|
||||
## @param podSecurityContext.fsGroup Set etcd pod's Security Context fsGroup
|
||||
##
|
||||
podSecurityContext:
|
||||
enabled: true
|
||||
fsGroup: 1001
|
||||
## etcd containers' SecurityContext
|
||||
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
|
||||
## @param containerSecurityContext.enabled Enabled etcd containers' Security Context
|
||||
## @param containerSecurityContext.runAsUser Set etcd container's Security Context runAsUser
|
||||
## @param containerSecurityContext.runAsNonRoot Set etcd container's Security Context runAsNonRoot
|
||||
## @param containerSecurityContext.allowPrivilegeEscalation Force the child process to be run as non-privileged
|
||||
##
|
||||
containerSecurityContext:
|
||||
enabled: true
|
||||
runAsUser: 1001
|
||||
runAsNonRoot: true
|
||||
allowPrivilegeEscalation: false
|
||||
## etcd containers' resource requests and limits
|
||||
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
|
||||
## We usually recommend not to specify default resources and to leave this as a conscious
|
||||
## choice for the user. This also increases chances charts run on environments with little
|
||||
## resources, such as Minikube. If you do want to specify resources, uncomment the following
|
||||
## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
|
||||
## @param resources.limits [object] The resources limits for the etcd container
|
||||
## @param resources.requests [object] The requested resources for the etcd container
|
||||
##
|
||||
resources:
|
||||
## Example:
|
||||
## limits:
|
||||
## cpu: 500m
|
||||
## memory: 1Gi
|
||||
##
|
||||
limits: {}
|
||||
requests: {}
|
||||
## Configure extra options for liveness probe
|
||||
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
|
||||
## @param livenessProbe.enabled Enable livenessProbe
|
||||
## @param livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
|
||||
## @param livenessProbe.periodSeconds Period seconds for livenessProbe
|
||||
## @param livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
|
||||
## @param livenessProbe.failureThreshold Failure threshold for livenessProbe
|
||||
## @param livenessProbe.successThreshold Success threshold for livenessProbe
|
||||
##
|
||||
livenessProbe:
|
||||
enabled: true
|
||||
initialDelaySeconds: 60
|
||||
periodSeconds: 30
|
||||
timeoutSeconds: 5
|
||||
successThreshold: 1
|
||||
failureThreshold: 5
|
||||
## Configure extra options for readiness probe
|
||||
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
|
||||
## @param readinessProbe.enabled Enable readinessProbe
|
||||
## @param readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
|
||||
## @param readinessProbe.periodSeconds Period seconds for readinessProbe
|
||||
## @param readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
|
||||
## @param readinessProbe.failureThreshold Failure threshold for readinessProbe
|
||||
## @param readinessProbe.successThreshold Success threshold for readinessProbe
|
||||
##
|
||||
readinessProbe:
|
||||
enabled: true
|
||||
initialDelaySeconds: 60
|
||||
periodSeconds: 10
|
||||
timeoutSeconds: 5
|
||||
successThreshold: 1
|
||||
failureThreshold: 5
|
||||
## Configure extra options for liveness probe
|
||||
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
|
||||
## @param startupProbe.enabled Enable startupProbe
|
||||
## @param startupProbe.initialDelaySeconds Initial delay seconds for startupProbe
|
||||
## @param startupProbe.periodSeconds Period seconds for startupProbe
|
||||
## @param startupProbe.timeoutSeconds Timeout seconds for startupProbe
|
||||
## @param startupProbe.failureThreshold Failure threshold for startupProbe
|
||||
## @param startupProbe.successThreshold Success threshold for startupProbe
|
||||
##
|
||||
startupProbe:
|
||||
enabled: false
|
||||
initialDelaySeconds: 0
|
||||
periodSeconds: 10
|
||||
timeoutSeconds: 5
|
||||
successThreshold: 1
|
||||
failureThreshold: 60
|
||||
## @param customLivenessProbe [object] Override default liveness probe
|
||||
##
|
||||
customLivenessProbe: {}
|
||||
## @param customReadinessProbe [object] Override default readiness probe
|
||||
##
|
||||
customReadinessProbe: {}
|
||||
## @param customStartupProbe [object] Override default startup probe
|
||||
##
|
||||
customStartupProbe: {}
|
||||
## @param extraVolumes [array] Optionally specify extra list of additional volumes for etcd pods
|
||||
##
|
||||
extraVolumes: []
|
||||
## @param extraVolumeMounts [array] Optionally specify extra list of additional volumeMounts for etcd container(s)
|
||||
##
|
||||
extraVolumeMounts: []
|
||||
## @param initContainers [array] Add additional init containers to the etcd pods
|
||||
## e.g:
|
||||
## initContainers:
|
||||
## - name: your-image-name
|
||||
## image: your-image
|
||||
## imagePullPolicy: Always
|
||||
## ports:
|
||||
## - name: portname
|
||||
## containerPort: 1234
|
||||
##
|
||||
initContainers: []
|
||||
## @param sidecars [array] Add additional sidecar containers to the etcd pods
|
||||
## e.g:
|
||||
## sidecars:
|
||||
## - name: your-image-name
|
||||
## image: your-image
|
||||
## imagePullPolicy: Always
|
||||
## ports:
|
||||
## - name: portname
|
||||
## containerPort: 1234
|
||||
##
|
||||
sidecars: []
|
||||
## @param podAnnotations [object] Annotations for etcd pods
|
||||
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
|
||||
##
|
||||
podAnnotations: {}
|
||||
## @param podLabels [object] Extra labels for etcd pods
|
||||
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
|
||||
##
|
||||
podLabels: {}
|
||||
## @param podAffinityPreset Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
|
||||
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
|
||||
##
|
||||
podAffinityPreset: ""
|
||||
## @param podAntiAffinityPreset Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
|
||||
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
|
||||
##
|
||||
podAntiAffinityPreset: soft
|
||||
## Node affinity preset
|
||||
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
|
||||
## @param nodeAffinityPreset.type Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
|
||||
## @param nodeAffinityPreset.key Node label key to match. Ignored if `affinity` is set.
|
||||
## @param nodeAffinityPreset.values [array] Node label values to match. Ignored if `affinity` is set.
|
||||
##
|
||||
nodeAffinityPreset:
|
||||
type: ""
|
||||
## e.g:
|
||||
## key: "kubernetes.io/e2e-az-name"
|
||||
##
|
||||
key: ""
|
||||
## e.g:
|
||||
## values:
|
||||
## - e2e-az1
|
||||
## - e2e-az2
|
||||
##
|
||||
values: []
|
||||
## @param affinity [object] Affinity for pod assignment
|
||||
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
|
||||
## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
|
||||
##
|
||||
affinity: {}
|
||||
## @param nodeSelector [object] Node labels for pod assignment
|
||||
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
|
||||
##
|
||||
nodeSelector: {}
|
||||
## @param tolerations [array] Tolerations for pod assignment
|
||||
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
|
||||
##
|
||||
tolerations: []
|
||||
## @param terminationGracePeriodSeconds Seconds the pod needs to gracefully terminate
|
||||
## ref: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-execution
|
||||
##
|
||||
terminationGracePeriodSeconds: ""
|
||||
## @param schedulerName Name of the k8s scheduler (other than default)
|
||||
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
|
||||
##
|
||||
schedulerName: ""
|
||||
## @param priorityClassName Name of the priority class to be used by etcd pods
|
||||
## Priority class needs to be created beforehand
|
||||
## Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
|
||||
##
|
||||
priorityClassName: ""
|
||||
## @param runtimeClassName Name of the runtime class to be used by pod(s)
|
||||
## ref: https://kubernetes.io/docs/concepts/containers/runtime-class/
|
||||
##
|
||||
runtimeClassName: ""
|
||||
## @param shareProcessNamespace Enable shared process namespace in a pod.
|
||||
## If set to false (default), each container will run in a separate namespace and etcd will have PID=1.
|
||||
## If set to true, the /pause will run as init process and will reap any zombie PIDs,
|
||||
## for example, generated by a custom exec probe running longer than a probe timeoutSeconds.
|
||||
## Enable this only if customLivenessProbe or customReadinessProbe is used and zombie PIDs are accumulating.
|
||||
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/
|
||||
##
|
||||
shareProcessNamespace: false
|
||||
## @param topologySpreadConstraints Topology Spread Constraints for pod assignment
|
||||
## https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
|
||||
## The value is evaluated as a template
|
||||
##
|
||||
topologySpreadConstraints: []
|
||||
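## Illustrative sketch (not a chart default): spread etcd pods across zones; the
## label selector below assumes the standard app.kubernetes.io/name label set by the chart.
## topologySpreadConstraints:
##   - maxSkew: 1
##     topologyKey: topology.kubernetes.io/zone
##     whenUnsatisfiable: ScheduleAnyway
##     labelSelector:
##       matchLabels:
##         app.kubernetes.io/name: etcd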
## persistentVolumeClaimRetentionPolicy
|
||||
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention
|
||||
## @param persistentVolumeClaimRetentionPolicy.enabled Controls if and how PVCs are deleted during the lifecycle of a StatefulSet
|
||||
## @param persistentVolumeClaimRetentionPolicy.whenScaled Volume retention behavior when the replica count of the StatefulSet is reduced
|
||||
## @param persistentVolumeClaimRetentionPolicy.whenDeleted Volume retention behavior that applies when the StatefulSet is deleted
|
||||
persistentVolumeClaimRetentionPolicy:
|
||||
enabled: false
|
||||
whenScaled: Retain
|
||||
whenDeleted: Retain
|
||||
## @section Traffic exposure parameters
|
||||
##
|
||||
|
||||
service:
|
||||
## @param service.type Kubernetes Service type
|
||||
##
|
||||
type: ClusterIP
|
||||
## @param service.enabled Create a second, non-headless service when set to true
|
||||
##
|
||||
enabled: true
|
||||
## @param service.clusterIP Kubernetes service Cluster IP
|
||||
## e.g.:
|
||||
## clusterIP: None
|
||||
##
|
||||
clusterIP: ""
|
||||
## @param service.ports.client etcd client port
|
||||
## @param service.ports.peer etcd peer port
|
||||
##
|
||||
ports:
|
||||
client: 2379
|
||||
peer: 2380
|
||||
## @param service.nodePorts.client Specify the nodePort client value for the LoadBalancer and NodePort service types.
|
||||
## @param service.nodePorts.peer Specify the nodePort peer value for the LoadBalancer and NodePort service types.
|
||||
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
|
||||
##
|
||||
nodePorts:
|
||||
client: ""
|
||||
peer: ""
|
||||
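## Illustrative sketch (not a chart default): expose the client port on a fixed
## NodePort; the port number is an arbitrary example from the NodePort range.
## service:
##   type: NodePort
##   nodePorts:
##     client: "30379"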
## @param service.clientPortNameOverride etcd client port name override
|
||||
##
|
||||
clientPortNameOverride: ""
|
||||
## @param service.peerPortNameOverride etcd peer port name override
|
||||
##
|
||||
peerPortNameOverride: ""
|
||||
## @param service.loadBalancerIP loadBalancerIP for the etcd service (optional, cloud specific)
|
||||
## ref: https://kubernetes.io/docs/user-guide/services/#type-loadbalancer
|
||||
##
|
||||
loadBalancerIP: ""
|
||||
## @param service.loadBalancerSourceRanges [array] Load Balancer source ranges
|
||||
## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
|
||||
## e.g:
|
||||
## loadBalancerSourceRanges:
|
||||
## - 10.10.10.0/24
|
||||
##
|
||||
loadBalancerSourceRanges: []
|
||||
## @param service.externalIPs [array] External IPs
|
||||
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
|
||||
##
|
||||
externalIPs: []
|
||||
## @param service.externalTrafficPolicy etcd service external traffic policy
|
||||
## ref http://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
|
||||
##
|
||||
externalTrafficPolicy: Cluster
|
||||
## @param service.extraPorts Extra ports to expose (normally used with the `sidecar` value)
|
||||
##
|
||||
extraPorts: []
|
||||
## @param service.annotations [object] Additional annotations for the etcd service
|
||||
##
|
||||
annotations: {}
|
||||
## @param service.sessionAffinity Session Affinity for Kubernetes service, can be "None" or "ClientIP"
|
||||
## If "ClientIP", consecutive client requests will be directed to the same Pod
|
||||
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
|
||||
##
|
||||
sessionAffinity: None
|
||||
## @param service.sessionAffinityConfig Additional settings for the sessionAffinity
|
||||
## sessionAffinityConfig:
|
||||
## clientIP:
|
||||
## timeoutSeconds: 300
|
||||
##
|
||||
sessionAffinityConfig: {}
|
||||
## Headless service properties
|
||||
##
|
||||
headless:
|
||||
## @param service.headless.annotations Annotations for the headless service.
|
||||
##
|
||||
annotations: {}
|
||||
|
||||
## @section Persistence parameters
|
||||
##
|
||||
|
||||
## Enable persistence using Persistent Volume Claims
|
||||
## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
|
||||
##
|
||||
persistence:
|
||||
## @param persistence.enabled If true, use a Persistent Volume Claim. If false, use emptyDir.
|
||||
##
|
||||
enabled: true
|
||||
## @param persistence.storageClass Persistent Volume Storage Class
|
||||
## If defined, storageClassName: <storageClass>
|
||||
## If set to "-", storageClassName: "", which disables dynamic provisioning
|
||||
## If undefined (the default) or set to null, no storageClassName spec is
|
||||
## set, choosing the default provisioner. (gp2 on AWS, standard on
|
||||
## GKE, AWS & OpenStack)
|
||||
##
|
||||
storageClass: ""
|
||||
##
|
||||
## @param persistence.annotations [object] Annotations for the PVC
|
||||
##
|
||||
annotations: {}
|
||||
## @param persistence.labels [object] Labels for the PVC
|
||||
##
|
||||
labels: {}
|
||||
## @param persistence.accessModes Persistent Volume Access Modes
|
||||
##
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
## @param persistence.size PVC Storage Request for etcd data volume
|
||||
##
|
||||
size: 8Gi
|
||||
## @param persistence.selector [object] Selector to match an existing Persistent Volume
|
||||
## ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#selector
|
||||
##
|
||||
selector: {}
|
||||
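## Illustrative sketch (not a chart default): request a larger volume from a specific
## StorageClass; the class name "fast-ssd" is an assumption for the example.
## persistence:
##   enabled: true
##   storageClass: "fast-ssd"
##   size: 20Gi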
|
||||
## @section Volume Permissions parameters
|
||||
##
|
||||
|
||||
## Init containers parameters:
|
||||
## volumePermissions: Change the owner and group of the persistent volume mountpoint to runAsUser:fsGroup values from the securityContext section.
|
||||
##
|
||||
volumePermissions:
|
||||
## @param volumePermissions.enabled Enable init container that changes the owner and group of the persistent volume(s) mountpoint to `runAsUser:fsGroup`
|
||||
##
|
||||
enabled: false
|
||||
## @param volumePermissions.image.registry Init container volume-permissions image registry
|
||||
## @param volumePermissions.image.repository Init container volume-permissions image name
|
||||
## @param volumePermissions.image.tag Init container volume-permissions image tag
|
||||
## @param volumePermissions.image.digest Init container volume-permissions image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
|
||||
##
|
||||
image:
|
||||
registry: docker.io
|
||||
repository: bitnami/bitnami-shell
|
||||
tag: 11-debian-11-r118
|
||||
digest: ""
|
||||
## @param volumePermissions.image.pullPolicy Init container volume-permissions image pull policy
|
||||
##
|
||||
pullPolicy: IfNotPresent
|
||||
## @param volumePermissions.image.pullSecrets [array] Specify docker-registry secret names as an array
|
||||
## Optionally specify an array of imagePullSecrets.
|
||||
## Secrets must be manually created in the namespace.
|
||||
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
|
||||
## e.g:
|
||||
## pullSecrets:
|
||||
## - myRegistryKeySecretName
|
||||
##
|
||||
pullSecrets: []
|
||||
## Init containers' resource requests and limits
|
||||
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
|
||||
## We usually recommend not to specify default resources and to leave this as a conscious
|
||||
## choice for the user. This also increases chances charts run on environments with little
|
||||
## resources, such as Minikube. If you do want to specify resources, uncomment the following
|
||||
## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
|
||||
## @param volumePermissions.resources.limits [object] Init container volume-permissions resource limits
|
||||
## @param volumePermissions.resources.requests [object] Init container volume-permissions resource requests
|
||||
##
|
||||
resources:
|
||||
## Example:
|
||||
## limits:
|
||||
## cpu: 500m
|
||||
## memory: 1Gi
|
||||
##
|
||||
limits: {}
|
||||
requests: {}
|
||||
|
||||
## @section Network Policy parameters
|
||||
## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
|
||||
##
|
||||
networkPolicy:
|
||||
## @param networkPolicy.enabled Enable creation of NetworkPolicy resources
|
||||
##
|
||||
enabled: false
|
||||
## @param networkPolicy.allowExternal Don't require client label for connections
|
||||
## When set to false, only pods with the correct client label will have network access to the ports
|
||||
## etcd is listening on. When true, etcd will accept connections from any source
|
||||
## (with the correct destination port).
|
||||
##
|
||||
allowExternal: true
|
||||
## @param networkPolicy.extraIngress [array] Add extra ingress rules to the NetworkPolicy
|
||||
## e.g:
|
||||
## extraIngress:
|
||||
## - ports:
|
||||
## - port: 1234
|
||||
## from:
|
||||
## - podSelector:
|
||||
## - matchLabels:
|
||||
## - role: frontend
|
||||
## - podSelector:
|
||||
## - matchExpressions:
|
||||
## - key: role
|
||||
## operator: In
|
||||
## values:
|
||||
## - frontend
|
||||
##
|
||||
extraIngress: []
|
||||
## @param networkPolicy.extraEgress [array] Add extra egress rules to the NetworkPolicy
|
||||
## e.g:
|
||||
## extraEgress:
|
||||
## - ports:
|
||||
## - port: 1234
|
||||
## to:
|
||||
## - podSelector:
|
||||
## - matchLabels:
|
||||
## - role: frontend
|
||||
## - podSelector:
|
||||
## - matchExpressions:
|
||||
## - key: role
|
||||
## operator: In
|
||||
## values:
|
||||
## - frontend
|
||||
##
|
||||
extraEgress: []
|
||||
## @param networkPolicy.ingressNSMatchLabels [object] Labels to match to allow traffic from other namespaces
|
||||
## @param networkPolicy.ingressNSPodMatchLabels [object] Pod labels to match to allow traffic from other namespaces
|
||||
##
|
||||
ingressNSMatchLabels: {}
|
||||
ingressNSPodMatchLabels: {}
|
||||
|
||||
## @section Metrics parameters
|
||||
##
|
||||
|
||||
metrics:
|
||||
## @param metrics.enabled Expose etcd metrics
|
||||
##
|
||||
enabled: false
|
||||
## @param metrics.podAnnotations [object] Annotations for the Prometheus metrics on etcd pods
|
||||
##
|
||||
podAnnotations:
|
||||
prometheus.io/scrape: "true"
|
||||
prometheus.io/port: "{{ .Values.containerPorts.client }}"
|
||||
## Prometheus Service Monitor
|
||||
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
|
||||
##
|
||||
podMonitor:
|
||||
## @param metrics.podMonitor.enabled Create PodMonitor Resource for scraping metrics using PrometheusOperator
|
||||
##
|
||||
enabled: false
|
||||
## @param metrics.podMonitor.namespace Namespace in which Prometheus is running
|
||||
##
|
||||
namespace: monitoring
|
||||
## @param metrics.podMonitor.interval Specify the interval at which metrics should be scraped
|
||||
##
|
||||
interval: 30s
|
||||
## @param metrics.podMonitor.scrapeTimeout Specify the timeout after which the scrape is ended
|
||||
##
|
||||
scrapeTimeout: 30s
|
||||
## @param metrics.podMonitor.additionalLabels [object] Additional labels that can be used so PodMonitors will be discovered by Prometheus
|
||||
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
|
||||
##
|
||||
additionalLabels: {}
|
||||
## @param metrics.podMonitor.scheme Scheme to use for scraping
|
||||
##
|
||||
scheme: http
|
||||
## @param metrics.podMonitor.tlsConfig [object] TLS configuration used for scrape endpoints used by Prometheus
|
||||
## ref: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#tlsconfig
|
||||
## e.g:
|
||||
## tlsConfig:
|
||||
## ca:
|
||||
## secret:
|
||||
## name: existingSecretName
|
||||
##
|
||||
tlsConfig: {}
|
||||
## @param metrics.podMonitor.relabelings [array] Prometheus relabeling rules
|
||||
##
|
||||
relabelings: []
|
||||
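## Illustrative sketch (not a chart default): expose metrics and let a Prometheus
## Operator instance discover the PodMonitor; the "release" label value is an assumption.
## metrics:
##   enabled: true
##   podMonitor:
##     enabled: true
##     namespace: monitoring
##     additionalLabels:
##       release: kube-prometheus-stack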
|
||||
## Prometheus Operator PrometheusRule configuration
|
||||
##
|
||||
prometheusRule:
|
||||
## @param metrics.prometheusRule.enabled Create a Prometheus Operator PrometheusRule (also requires `metrics.enabled` to be `true` and `metrics.prometheusRule.rules`)
|
||||
##
|
||||
enabled: false
|
||||
## @param metrics.prometheusRule.namespace Namespace for the PrometheusRule Resource (defaults to the Release Namespace)
|
||||
##
|
||||
namespace: ""
|
||||
## @param metrics.prometheusRule.additionalLabels Additional labels that can be used so PrometheusRule will be discovered by Prometheus
|
||||
##
|
||||
additionalLabels: {}
|
||||
## @param metrics.prometheusRule.rules Prometheus Rule definitions
|
||||
# - alert: ETCD has no leader
|
||||
# annotations:
|
||||
# summary: "ETCD has no leader"
|
||||
# description: "pod {{`{{`}} $labels.pod {{`}}`}} state error, can't connect leader"
|
||||
# for: 1m
|
||||
# expr: etcd_server_has_leader == 0
|
||||
# labels:
|
||||
# severity: critical
|
||||
# group: PaaS
|
||||
##
|
||||
rules: []
|
||||
|
||||
|
||||
## @section Snapshotting parameters
|
||||
##
|
||||
|
||||
## Start a new etcd cluster recovering the data from an existing snapshot before bootstrapping
|
||||
##
|
||||
startFromSnapshot:
|
||||
## @param startFromSnapshot.enabled Initialize new cluster recovering an existing snapshot
|
||||
##
|
||||
enabled: false
|
||||
## @param startFromSnapshot.existingClaim Existing PVC containing the etcd snapshot
|
||||
##
|
||||
existingClaim: ""
|
||||
## @param startFromSnapshot.snapshotFilename Snapshot filename
|
||||
##
|
||||
snapshotFilename: ""
|
||||
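## Illustrative sketch (not a chart default): bootstrap the cluster from a snapshot
## stored on an existing PVC; the claim and file names are assumptions.
## startFromSnapshot:
##   enabled: true
##   existingClaim: "etcd-snapshots"
##   snapshotFilename: "db-2024-01-01.snapshot"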
## Enable auto disaster recovery by periodically snapshotting the keyspace:
|
||||
## - It creates a cronjob that periodically snapshots the keyspace
|
||||
## - It also creates a ReadWriteMany PVC to store the snapshots
|
||||
## If the cluster permanently loses more than (N-1)/2 members, it tries to
|
||||
## recover itself from the last available snapshot.
|
||||
##
|
||||
disasterRecovery:
|
||||
## @param disasterRecovery.enabled Enable auto disaster recovery by periodically snapshotting the keyspace
|
||||
##
|
||||
enabled: false
|
||||
cronjob:
|
||||
## @param disasterRecovery.cronjob.schedule Schedule in Cron format to save snapshots
|
||||
## See https://en.wikipedia.org/wiki/Cron
|
||||
##
|
||||
schedule: "*/30 * * * *"
|
||||
## @param disasterRecovery.cronjob.historyLimit Number of successful finished jobs to retain
|
||||
##
|
||||
historyLimit: 1
|
||||
## @param disasterRecovery.cronjob.snapshotHistoryLimit Number of etcd snapshots to retain, tagged by date
|
||||
##
|
||||
snapshotHistoryLimit: 1
|
||||
## @param disasterRecovery.cronjob.snapshotsDir Directory to store snapshots
|
||||
##
|
||||
snapshotsDir: "/snapshots"
|
||||
## @param disasterRecovery.cronjob.podAnnotations [object] Pod annotations for cronjob pods
|
||||
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
|
||||
##
|
||||
podAnnotations: {}
|
||||
## Configure resource requests and limits for snapshotter containers
|
||||
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
|
||||
## We usually recommend not to specify default resources and to leave this as a conscious
|
||||
## choice for the user. This also increases chances charts run on environments with little
|
||||
## resources, such as Minikube. If you do want to specify resources, uncomment the following
|
||||
## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
|
||||
## @param disasterRecovery.cronjob.resources.limits [object] Cronjob container resource limits
|
||||
## @param disasterRecovery.cronjob.resources.requests [object] Cronjob container resource requests
|
||||
##
|
||||
resources:
|
||||
## Example:
|
||||
## limits:
|
||||
## cpu: 500m
|
||||
## memory: 1Gi
|
||||
##
|
||||
limits: {}
|
||||
requests: {}
|
||||
|
||||
## @param disasterRecovery.cronjob.nodeSelector Node labels for cronjob pods assignment
|
||||
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
|
||||
##
|
||||
nodeSelector: {}
|
||||
## @param disasterRecovery.cronjob.tolerations Tolerations for cronjob pods assignment
|
||||
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
|
||||
##
|
||||
tolerations: []
|
||||
|
||||
pvc:
|
||||
## @param disasterRecovery.pvc.existingClaim A manually managed Persistent Volume and Claim
|
||||
## If defined, PVC must be created manually before volume will be bound
|
||||
## The value is evaluated as a template, so, for example, the name can depend on .Release or .Chart
|
||||
##
|
||||
existingClaim: ""
|
||||
## @param disasterRecovery.pvc.size PVC Storage Request
|
||||
##
|
||||
size: 2Gi
|
||||
## @param disasterRecovery.pvc.storageClassName Storage Class for snapshots volume
|
||||
##
|
||||
storageClassName: nfs
|
||||
## @param disasterRecovery.pvc.subPath Path within the volume from which to mount
|
||||
## Useful if snapshots should only be stored in a subdirectory of the volume
|
||||
##
|
||||
subPath: ""
|
||||
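## Illustrative sketch (not a chart default): hourly snapshots kept on a ReadWriteMany
## StorageClass; the schedule and class name are assumptions for the example.
## disasterRecovery:
##   enabled: true
##   cronjob:
##     schedule: "0 * * * *"
##     snapshotHistoryLimit: 3
##   pvc:
##     size: 5Gi
##     storageClassName: "nfs-client"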
|
||||
## @section Service account parameters
|
||||
##
|
||||
|
||||
serviceAccount:
|
||||
## @param serviceAccount.create Enable/disable service account creation
|
||||
##
|
||||
create: false
|
||||
## @param serviceAccount.name Name of the service account to create or use
|
||||
##
|
||||
name: ""
|
||||
## @param serviceAccount.automountServiceAccountToken Enable/disable auto mounting of service account token
|
||||
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server
|
||||
##
|
||||
automountServiceAccountToken: true
|
||||
## @param serviceAccount.annotations [object] Additional annotations to be included on the service account
|
||||
##
|
||||
annotations: {}
|
||||
## @param serviceAccount.labels [object] Additional labels to be included on the service account
|
||||
##
|
||||
labels: {}
|
||||
|
||||
## @section Other parameters
|
||||
##
|
||||
|
||||
## etcd Pod Disruption Budget configuration
|
||||
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
|
||||
##
|
||||
pdb:
|
||||
## @param pdb.create Enable/disable a Pod Disruption Budget creation
|
||||
##
|
||||
create: true
|
||||
## @param pdb.minAvailable Minimum number/percentage of pods that should remain scheduled
|
||||
##
|
||||
minAvailable: 51%
|
||||
## @param pdb.maxUnavailable Maximum number/percentage of pods that may be made unavailable
|
||||
##
|
||||
maxUnavailable: ""
|
||||
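## Illustrative sketch (not a chart default): for a 3-replica cluster, allow at most
## one pod to be unavailable during voluntary disruptions.
## pdb:
##   create: true
##   minAvailable: ""
##   maxUnavailable: 1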
21
deployment/helm/milvus/charts/kafka/.helmignore
Normal file
@@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
9
deployment/helm/milvus/charts/kafka/Chart.lock
Normal file
@@ -0,0 +1,9 @@
dependencies:
- name: zookeeper
repository: https://charts.bitnami.com/bitnami
version: 8.1.2
- name: common
repository: https://charts.bitnami.com/bitnami
version: 1.12.0
digest: sha256:903762070537232f45dacf1d3ab09c43a9fec56656c2c66c15fbb35b64b6678c
generated: "2022-03-17T20:28:29.202310166Z"
33
deployment/helm/milvus/charts/kafka/Chart.yaml
Normal file
@@ -0,0 +1,33 @@
annotations:
category: Infrastructure
apiVersion: v2
appVersion: 3.1.0
dependencies:
- condition: zookeeper.enabled
name: zookeeper
repository: https://charts.bitnami.com/bitnami
version: 8.x.x
- name: common
repository: https://charts.bitnami.com/bitnami
tags:
- bitnami-common
version: 1.x.x
description: Apache Kafka is a distributed streaming platform designed to build real-time
pipelines and can be used as a message broker or as a replacement for a log aggregation
solution for big data applications.
home: https://github.com/bitnami/charts/tree/master/bitnami/kafka
icon: https://bitnami.com/assets/stacks/kafka/img/kafka-stack-220x234.png
keywords:
- kafka
- zookeeper
- streaming
- producer
- consumer
maintainers:
- email: containers@bitnami.com
name: Bitnami
name: kafka
sources:
- https://github.com/bitnami/bitnami-docker-kafka
- https://kafka.apache.org/
version: 15.5.1
965
deployment/helm/milvus/charts/kafka/README.md
Normal file
@@ -0,0 +1,965 @@
<!--- app-name: Apache Kafka -->

# Apache Kafka packaged by Bitnami

Apache Kafka is a distributed streaming platform designed to build real-time pipelines and can be used as a message broker or as a replacement for a log aggregation solution for big data applications.

[Overview of Apache Kafka](http://kafka.apache.org/)

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

## TL;DR

```console
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/kafka
```

## Introduction

This chart bootstraps a [Kafka](https://github.com/bitnami/bitnami-docker-kafka) deployment on a [Kubernetes](https://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.

Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of [Bitnami Kubernetes Production Runtime](https://kubeprod.io/) (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications.

## Prerequisites

- Kubernetes 1.19+
- Helm 3.2.0+
- PV provisioner support in the underlying infrastructure

## Installing the Chart

To install the chart with the release name `my-release`:

```console
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/kafka
```

These commands deploy Kafka on the Kubernetes cluster in the default configuration. The [Parameters](#parameters) section lists the parameters that can be configured during installation.

> **Tip**: List all releases using `helm list`

## Uninstalling the Chart

To uninstall/delete the `my-release` deployment:

```console
helm delete my-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Parameters
|
||||
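As a minimal sketch of how these parameters are typically applied (the file name and the chosen values are only an example), a custom values file can be passed at install time with `helm install my-release bitnami/kafka -f custom-values.yaml`:

```yaml
# custom-values.yaml (illustrative only)
replicaCount: 3
deleteTopicEnable: true
logRetentionHours: 72
```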
### Global parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ------------------------- | ----------------------------------------------- | ----- |
|
||||
| `global.imageRegistry` | Global Docker image registry | `""` |
|
||||
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` |
|
||||
| `global.storageClass` | Global StorageClass for Persistent Volume(s) | `""` |
|
||||
|
||||
|
||||
### Common parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ------------------------ | --------------------------------------------------------------------------------------- | --------------- |
|
||||
| `kubeVersion` | Override Kubernetes version | `""` |
|
||||
| `nameOverride` | String to partially override common.names.fullname | `""` |
|
||||
| `fullnameOverride` | String to fully override common.names.fullname | `""` |
|
||||
| `clusterDomain` | Default Kubernetes cluster domain | `cluster.local` |
|
||||
| `commonLabels` | Labels to add to all deployed objects | `{}` |
|
||||
| `commonAnnotations` | Annotations to add to all deployed objects | `{}` |
|
||||
| `extraDeploy` | Array of extra objects to deploy with the release | `[]` |
|
||||
| `diagnosticMode.enabled` | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | `false` |
|
||||
| `diagnosticMode.command` | Command to override all containers in the statefulset | `["sleep"]` |
|
||||
| `diagnosticMode.args` | Args to override all containers in the statefulset | `["infinity"]` |
|
||||
|
||||
|
||||
### Kafka parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ------------------------------------------ |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------|
|
||||
| `image.registry` | Kafka image registry | `docker.io` |
|
||||
| `image.repository` | Kafka image repository | `bitnami/kafka` |
|
||||
| `image.tag` | Kafka image tag (immutable tags are recommended) | `3.1.0-debian-10-r31` |
|
||||
| `image.pullPolicy` | Kafka image pull policy | `IfNotPresent` |
|
||||
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
|
||||
| `image.debug` | Specify if debug values should be set | `false` |
|
||||
| `config` | Configuration file for Kafka. Auto-generated based on other parameters when not specified | `""` |
|
||||
| `existingConfigmap` | ConfigMap with Kafka Configuration | `""` |
|
||||
| `log4j` | An optional log4j.properties file to overwrite the default of the Kafka brokers | `""` |
|
||||
| `existingLog4jConfigMap` | The name of an existing ConfigMap containing a log4j.properties file | `""` |
|
||||
| `heapOpts` | Kafka Java Heap size | `-Xmx1024m -Xms1024m` |
|
||||
| `deleteTopicEnable` | Switch to enable topic deletion or not | `false` |
|
||||
| `autoCreateTopicsEnable` | Switch to enable auto creation of topics. Enabling auto creation of topics not recommended for production or similar environments | `true` |
|
||||
| `logFlushIntervalMessages` | The number of messages to accept before forcing a flush of data to disk | `_10000` |
|
||||
| `logFlushIntervalMs` | The maximum amount of time a message can sit in a log before we force a flush | `1000` |
|
||||
| `logRetentionBytes` | A size-based retention policy for logs | `_1073741824` |
|
||||
| `logRetentionCheckIntervalMs` | The interval at which log segments are checked to see if they can be deleted | `300000` |
|
||||
| `logRetentionHours` | The minimum age of a log file to be eligible for deletion due to age | `168` |
|
||||
| `logSegmentBytes` | The maximum size of a log segment file. When this size is reached a new log segment will be created | `_1073741824` |
|
||||
| `logsDirs` | A comma separated list of directories under which to store log files | `/bitnami/kafka/data` |
|
||||
| `maxMessageBytes` | The largest record batch size allowed by Kafka | `_1000012` |
|
||||
| `defaultReplicationFactor` | Default replication factors for automatically created topics | `1` |
|
||||
| `offsetsTopicReplicationFactor` | The replication factor for the offsets topic | `1` |
|
||||
| `transactionStateLogReplicationFactor` | The replication factor for the transaction topic | `1` |
|
||||
| `transactionStateLogMinIsr` | Overridden min.insync.replicas config for the transaction topic | `1` |
|
||||
| `numIoThreads` | The number of threads doing disk I/O | `8` |
|
||||
| `numNetworkThreads` | The number of threads handling network requests | `3` |
|
||||
| `numPartitions` | The default number of log partitions per topic | `1` |
|
||||
| `numRecoveryThreadsPerDataDir` | The number of threads per data directory to be used for log recovery at startup and flushing at shutdown | `1` |
|
||||
| `socketReceiveBufferBytes` | The receive buffer (SO_RCVBUF) used by the socket server | `102400` |
|
||||
| `socketRequestMaxBytes` | The maximum size of a request that the socket server will accept (protection against OOM) | `_104857600` |
|
||||
| `socketSendBufferBytes` | The send buffer (SO_SNDBUF) used by the socket server | `102400` |
|
||||
| `zookeeperConnectionTimeoutMs` | Timeout in ms for connecting to ZooKeeper | `6000` |
|
||||
| `zookeeperChrootPath` | Path which puts data under some path in the global ZooKeeper namespace | `""` |
| `authorizerClassName` | The Authorizer is configured by setting authorizer.class.name=kafka.security.authorizer.AclAuthorizer in server.properties | `""` |
| `allowEveryoneIfNoAclFound` | By default, if a resource has no associated ACLs, then no one is allowed to access that resource except super users | `true` |
| `superUsers` | You can add super users in server.properties | `User:admin` |
| `auth.clientProtocol` | Authentication protocol for communications with clients. Allowed protocols: `plaintext`, `tls`, `mtls`, `sasl` and `sasl_tls` | `plaintext` |
| `auth.externalClientProtocol` | Authentication protocol for communications with external clients. Defaults to value of `auth.clientProtocol`. Allowed protocols: `plaintext`, `tls`, `mtls`, `sasl` and `sasl_tls` | `""` |
| `auth.interBrokerProtocol` | Authentication protocol for inter-broker communications. Allowed protocols: `plaintext`, `tls`, `mtls`, `sasl` and `sasl_tls` | `plaintext` |
| `auth.sasl.mechanisms` | SASL mechanisms when either `auth.interBrokerProtocol`, `auth.clientProtocol` or `auth.externalClientProtocol` are `sasl`. Allowed types: `plain`, `scram-sha-256`, `scram-sha-512` | `plain,scram-sha-256,scram-sha-512` |
| `auth.sasl.interBrokerMechanism` | SASL mechanism for inter broker communication. | `plain` |
| `auth.sasl.jaas.clientUsers` | Kafka client user list | `["user"]` |
| `auth.sasl.jaas.clientPasswords` | Kafka client passwords. This is mandatory if more than one user is specified in clientUsers | `[]` |
| `auth.sasl.jaas.interBrokerUser` | Kafka inter broker communication user for SASL authentication | `admin` |
| `auth.sasl.jaas.interBrokerPassword` | Kafka inter broker communication password for SASL authentication | `""` |
| `auth.sasl.jaas.zookeeperUser` | Kafka ZooKeeper user for SASL authentication | `""` |
| `auth.sasl.jaas.zookeeperPassword` | Kafka ZooKeeper password for SASL authentication | `""` |
| `auth.sasl.jaas.existingSecret` | Name of the existing secret containing credentials for clientUsers, interBrokerUser and zookeeperUser | `""` |
| `auth.tls.type` | Format to use for TLS certificates. Allowed types: `jks` and `pem` | `jks` |
| `auth.tls.pemChainIncluded` | Flag to denote whether the TLS authority chain is included inside the PEM | `false` |
| `auth.tls.existingSecrets` | Array of existing secrets containing the TLS certificates for the Kafka brokers | `[]` |
| `auth.tls.autoGenerated` | Generate automatically self-signed TLS certificates for Kafka brokers. Currently only supported if `auth.tls.type` is `pem` | `false` |
| `auth.tls.password` | Password to access the JKS files or PEM key when they are password-protected. | `""` |
| `auth.tls.existingSecret` | Name of the secret containing the password to access the JKS files or PEM key when they are password-protected. (`key`: `password`) | `""` |
| `auth.tls.jksTruststoreSecret` | Name of the existing secret containing your truststore if truststore not existing or different from the ones in the `auth.tls.existingSecrets` | `""` |
| `auth.tls.jksKeystoreSAN` | The secret key from the `auth.tls.existingSecrets` containing the keystore with a SAN certificate | `""` |
| `auth.tls.jksTruststore` | The secret key from the `auth.tls.existingSecrets` or `auth.tls.jksTruststoreSecret` containing the truststore | `""` |
| `auth.tls.endpointIdentificationAlgorithm` | The endpoint identification algorithm to validate server hostname using server certificate | `https` |
| `listeners` | The address(es) the socket server listens on. Auto-calculated when set to an empty array | `[]` |
| `advertisedListeners` | The address(es) (hostname:port) the broker will advertise to producers and consumers. Auto-calculated when set to an empty array | `[]` |
| `listenerSecurityProtocolMap` | The protocol->listener mapping. Auto-calculated when set to nil | `""` |
| `allowPlaintextListener` | Allow to use the PLAINTEXT listener | `true` |
| `interBrokerListenerName` | The listener that the brokers should communicate on | `INTERNAL` |
| `command` | Override Kafka container command | `["/scripts/setup.sh"]` |
| `args` | Override Kafka container arguments | `[]` |
| `extraEnvVars` | Extra environment variables to add to Kafka pods | `[]` |
| `extraEnvVarsCM` | ConfigMap with extra environment variables | `""` |
| `extraEnvVarsSecret` | Secret with extra environment variables | `""` |

### Statefulset parameters

| Name | Description | Value |
| --- | --- | --- |
| `replicaCount` | Number of Kafka nodes | `1` |
| `minBrokerId` | Minimal broker.id value, nodes increment their `broker.id` respectively | `0` |
| `containerPorts.client` | Kafka client container port | `9092` |
| `containerPorts.internal` | Kafka inter-broker container port | `9093` |
| `containerPorts.external` | Kafka external container port | `9094` |
| `livenessProbe.enabled` | Enable livenessProbe on Kafka containers | `true` |
| `livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `10` |
| `livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `3` |
| `livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `readinessProbe.enabled` | Enable readinessProbe on Kafka containers | `true` |
| `readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `5` |
| `readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `startupProbe.enabled` | Enable startupProbe on Kafka containers | `false` |
| `startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `30` |
| `startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `lifecycleHooks` | lifecycleHooks for the Kafka container to automate configuration before or after startup | `{}` |
| `resources.limits` | The resources limits for the container | `{}` |
| `resources.requests` | The requested resources for the container | `{}` |
| `podSecurityContext.enabled` | Enable security context for the pods | `true` |
| `podSecurityContext.fsGroup` | Set Kafka pod's Security Context fsGroup | `1001` |
| `containerSecurityContext.enabled` | Enable Kafka containers' Security Context | `true` |
| `containerSecurityContext.runAsUser` | Set Kafka containers' Security Context runAsUser | `1001` |
| `containerSecurityContext.runAsNonRoot` | Set Kafka containers' Security Context runAsNonRoot | `true` |
| `hostAliases` | Kafka pods host aliases | `[]` |
| `hostNetwork` | Specify if host network should be enabled for Kafka pods | `false` |
| `hostIPC` | Specify if host IPC should be enabled for Kafka pods | `false` |
| `podLabels` | Extra labels for Kafka pods | `{}` |
| `podAnnotations` | Extra annotations for Kafka pods | `{}` |
| `podAffinityPreset` | Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `nodeAffinityPreset.type` | Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `nodeAffinityPreset.key` | Node label key to match. Ignored if `affinity` is set. | `""` |
| `nodeAffinityPreset.values` | Node label values to match. Ignored if `affinity` is set. | `[]` |
| `affinity` | Affinity for pod assignment | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Tolerations for pod assignment | `[]` |
| `topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `{}` |
| `terminationGracePeriodSeconds` | Seconds the pod needs to gracefully terminate | `""` |
| `podManagementPolicy` | The StatefulSet controller supports relaxing its ordering guarantees while preserving its uniqueness and identity guarantees. There are two valid pod management policies: OrderedReady and Parallel | `Parallel` |
| `priorityClassName` | Name of the existing priority class to be used by kafka pods | `""` |
| `schedulerName` | Name of the k8s scheduler (other than default) | `""` |
| `updateStrategy.type` | Kafka statefulset strategy type | `RollingUpdate` |
| `updateStrategy.rollingUpdate` | Kafka statefulset rolling update configuration parameters | `{}` |
| `extraVolumes` | Optionally specify extra list of additional volumes for the Kafka pod(s) | `[]` |
| `extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Kafka container(s) | `[]` |
| `sidecars` | Add additional sidecar containers to the Kafka pod(s) | `[]` |
| `initContainers` | Add additional init containers to the Kafka pod(s) | `[]` |
| `pdb.create` | Deploy a pdb object for the Kafka pod | `false` |
| `pdb.minAvailable` | Minimum number/percentage of available Kafka replicas | `""` |
| `pdb.maxUnavailable` | Maximum number/percentage of unavailable Kafka replicas | `1` |

### Traffic Exposure parameters

| Name | Description | Value |
| --- | --- | --- |
| `service.type` | Kubernetes Service type | `ClusterIP` |
| `service.ports.client` | Kafka svc port for client connections | `9092` |
| `service.ports.internal` | Kafka svc port for inter-broker connections | `9093` |
| `service.ports.external` | Kafka svc port for external connections | `9094` |
| `service.nodePorts.client` | Node port for the Kafka client connections | `""` |
| `service.nodePorts.external` | Node port for the Kafka external connections | `""` |
| `service.sessionAffinity` | Control where client requests go, to the same pod or round-robin | `None` |
| `service.clusterIP` | Kafka service Cluster IP | `""` |
| `service.loadBalancerIP` | Kafka service Load Balancer IP | `""` |
| `service.loadBalancerSourceRanges` | Kafka service Load Balancer sources | `[]` |
| `service.externalTrafficPolicy` | Kafka service external traffic policy | `Cluster` |
| `service.annotations` | Additional custom annotations for Kafka service | `{}` |
| `service.extraPorts` | Extra ports to expose in the Kafka service (normally used with the `sidecar` value) | `[]` |
| `externalAccess.enabled` | Enable Kubernetes external cluster access to Kafka brokers | `false` |
| `externalAccess.autoDiscovery.enabled` | Enable using an init container to auto-detect external IPs/ports by querying the K8s API | `false` |
| `externalAccess.autoDiscovery.image.registry` | Init container auto-discovery image registry | `docker.io` |
| `externalAccess.autoDiscovery.image.repository` | Init container auto-discovery image repository | `bitnami/kubectl` |
| `externalAccess.autoDiscovery.image.tag` | Init container auto-discovery image tag (immutable tags are recommended) | `1.23.4-debian-10-r9` |
| `externalAccess.autoDiscovery.image.pullPolicy` | Init container auto-discovery image pull policy | `IfNotPresent` |
| `externalAccess.autoDiscovery.image.pullSecrets` | Init container auto-discovery image pull secrets | `[]` |
| `externalAccess.autoDiscovery.resources.limits` | The resources limits for the auto-discovery init container | `{}` |
| `externalAccess.autoDiscovery.resources.requests` | The requested resources for the auto-discovery init container | `{}` |
| `externalAccess.service.type` | Kubernetes Service type for external access. It can be NodePort or LoadBalancer | `LoadBalancer` |
| `externalAccess.service.ports.external` | Kafka port used for external access when service type is LoadBalancer | `9094` |
| `externalAccess.service.loadBalancerIPs` | Array of load balancer IPs for each Kafka broker. Length must be the same as replicaCount | `[]` |
| `externalAccess.service.loadBalancerNames` | Array of load balancer Names for each Kafka broker. Length must be the same as replicaCount | `[]` |
| `externalAccess.service.loadBalancerAnnotations` | Array of load balancer annotations for each Kafka broker. Length must be the same as replicaCount | `[]` |
| `externalAccess.service.loadBalancerSourceRanges` | Address(es) that are allowed when service is LoadBalancer | `[]` |
| `externalAccess.service.nodePorts` | Array of node ports used for each Kafka broker. Length must be the same as replicaCount | `[]` |
| `externalAccess.service.useHostIPs` | Use service host IPs to configure Kafka external listener when service type is NodePort | `false` |
| `externalAccess.service.usePodIPs` | Use the MY_POD_IP address for external access | `false` |
| `externalAccess.service.domain` | Domain or external IP used to configure Kafka external listener when service type is NodePort | `""` |
| `externalAccess.service.annotations` | Service annotations for external access | `{}` |
| `externalAccess.service.extraPorts` | Extra ports to expose in the Kafka external service | `[]` |
| `networkPolicy.enabled` | Specifies whether a NetworkPolicy should be created | `false` |
| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
| `networkPolicy.explicitNamespacesSelector` | A Kubernetes LabelSelector to explicitly select namespaces from which traffic could be allowed | `{}` |
| `networkPolicy.externalAccess.from` | Customize the from section for External Access on tcp-external port | `[]` |
| `networkPolicy.egressRules.customRules` | Custom network policy rule | `{}` |

### Persistence parameters

| Name | Description | Value |
| --- | --- | --- |
| `persistence.enabled` | Enable Kafka data persistence using PVC, note that ZooKeeper persistence is unaffected | `true` |
| `persistence.existingClaim` | A manually managed Persistent Volume and Claim | `""` |
| `persistence.storageClass` | PVC Storage Class for Kafka data volume | `""` |
| `persistence.accessModes` | Persistent Volume Access Modes | `["ReadWriteOnce"]` |
| `persistence.size` | PVC Storage Request for Kafka data volume | `8Gi` |
| `persistence.annotations` | Annotations for the PVC | `{}` |
| `persistence.selector` | Selector to match an existing Persistent Volume for Kafka data PVC. If set, the PVC can't have a PV dynamically provisioned for it | `{}` |
| `persistence.mountPath` | Mount path of the Kafka data volume | `/bitnami/kafka` |
| `logPersistence.enabled` | Enable Kafka logs persistence using PVC, note that ZooKeeper persistence is unaffected | `false` |
| `logPersistence.existingClaim` | A manually managed Persistent Volume and Claim | `""` |
| `logPersistence.storageClass` | PVC Storage Class for Kafka logs volume | `""` |
| `logPersistence.accessModes` | Persistent Volume Access Modes | `["ReadWriteOnce"]` |
| `logPersistence.size` | PVC Storage Request for Kafka logs volume | `8Gi` |
| `logPersistence.annotations` | Annotations for the PVC | `{}` |
| `logPersistence.selector` | Selector to match an existing Persistent Volume for Kafka log data PVC. If set, the PVC can't have a PV dynamically provisioned for it | `{}` |
| `logPersistence.mountPath` | Mount path of the Kafka logs volume | `/opt/bitnami/kafka/logs` |

### Volume Permissions parameters

| Name | Description | Value |
| --- | --- | --- |
| `volumePermissions.enabled` | Enable init container that changes the owner and group of the persistent volume | `false` |
| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` |
| `volumePermissions.image.repository` | Init container volume-permissions image repository | `bitnami/bitnami-shell` |
| `volumePermissions.image.tag` | Init container volume-permissions image tag (immutable tags are recommended) | `10-debian-10-r351` |
| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `IfNotPresent` |
| `volumePermissions.image.pullSecrets` | Init container volume-permissions image pull secrets | `[]` |
| `volumePermissions.resources.limits` | Init container volume-permissions resource limits | `{}` |
| `volumePermissions.resources.requests` | Init container volume-permissions resource requests | `{}` |
| `volumePermissions.containerSecurityContext.runAsUser` | User ID for the init container | `0` |

### Other Parameters

| Name | Description | Value |
| --- | --- | --- |
| `serviceAccount.create` | Enable creation of ServiceAccount for Kafka pods | `true` |
| `serviceAccount.name` | The name of the service account to use. If not set and `create` is `true`, a name is generated | `""` |
| `serviceAccount.automountServiceAccountToken` | Allows auto mount of ServiceAccountToken on the serviceAccount created | `true` |
| `serviceAccount.annotations` | Additional custom annotations for the ServiceAccount | `{}` |
| `rbac.create` | Whether to create & use RBAC resources or not | `false` |

### Metrics parameters

| Name | Description | Value |
| --- | --- | --- |
| `metrics.kafka.enabled` | Whether or not to create a standalone Kafka exporter to expose Kafka metrics | `false` |
| `metrics.kafka.image.registry` | Kafka exporter image registry | `docker.io` |
| `metrics.kafka.image.repository` | Kafka exporter image repository | `bitnami/kafka-exporter` |
| `metrics.kafka.image.tag` | Kafka exporter image tag (immutable tags are recommended) | `1.4.2-debian-10-r160` |
| `metrics.kafka.image.pullPolicy` | Kafka exporter image pull policy | `IfNotPresent` |
| `metrics.kafka.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `metrics.kafka.certificatesSecret` | Name of the existing secret containing the optional certificate and key files | `""` |
| `metrics.kafka.tlsCert` | The secret key from the certificatesSecret if 'client-cert' key different from the default (cert-file) | `cert-file` |
| `metrics.kafka.tlsKey` | The secret key from the certificatesSecret if 'client-key' key different from the default (key-file) | `key-file` |
| `metrics.kafka.tlsCaSecret` | Name of the existing secret containing the optional ca certificate for Kafka exporter client authentication | `""` |
| `metrics.kafka.tlsCaCert` | The secret key from the certificatesSecret or tlsCaSecret if 'ca-cert' key different from the default (ca-file) | `ca-file` |
| `metrics.kafka.extraFlags` | Extra flags to be passed to Kafka exporter | `{}` |
| `metrics.kafka.command` | Override Kafka exporter container command | `[]` |
| `metrics.kafka.args` | Override Kafka exporter container arguments | `[]` |
| `metrics.kafka.containerPorts.metrics` | Kafka exporter metrics container port | `9308` |
| `metrics.kafka.resources.limits` | The resources limits for the container | `{}` |
| `metrics.kafka.resources.requests` | The requested resources for the container | `{}` |
| `metrics.kafka.podSecurityContext.enabled` | Enable security context for the pods | `true` |
| `metrics.kafka.podSecurityContext.fsGroup` | Set Kafka exporter pod's Security Context fsGroup | `1001` |
| `metrics.kafka.containerSecurityContext.enabled` | Enable Kafka exporter containers' Security Context | `true` |
| `metrics.kafka.containerSecurityContext.runAsUser` | Set Kafka exporter containers' Security Context runAsUser | `1001` |
| `metrics.kafka.containerSecurityContext.runAsNonRoot` | Set Kafka exporter containers' Security Context runAsNonRoot | `true` |
| `metrics.kafka.hostAliases` | Kafka exporter pods host aliases | `[]` |
| `metrics.kafka.podLabels` | Extra labels for Kafka exporter pods | `{}` |
| `metrics.kafka.podAnnotations` | Extra annotations for Kafka exporter pods | `{}` |
| `metrics.kafka.podAffinityPreset` | Pod affinity preset. Ignored if `metrics.kafka.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `metrics.kafka.podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if `metrics.kafka.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `metrics.kafka.nodeAffinityPreset.type` | Node affinity preset type. Ignored if `metrics.kafka.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `metrics.kafka.nodeAffinityPreset.key` | Node label key to match. Ignored if `metrics.kafka.affinity` is set. | `""` |
| `metrics.kafka.nodeAffinityPreset.values` | Node label values to match. Ignored if `metrics.kafka.affinity` is set. | `[]` |
| `metrics.kafka.affinity` | Affinity for pod assignment | `{}` |
| `metrics.kafka.nodeSelector` | Node labels for pod assignment | `{}` |
| `metrics.kafka.tolerations` | Tolerations for pod assignment | `[]` |
| `metrics.kafka.schedulerName` | Name of the k8s scheduler (other than default) for Kafka exporter | `""` |
| `metrics.kafka.extraVolumes` | Optionally specify extra list of additional volumes for the Kafka exporter pod(s) | `[]` |
| `metrics.kafka.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Kafka exporter container(s) | `[]` |
| `metrics.kafka.sidecars` | Add additional sidecar containers to the Kafka exporter pod(s) | `[]` |
| `metrics.kafka.initContainers` | Add init containers to the Kafka exporter pods | `[]` |
| `metrics.kafka.service.ports.metrics` | Kafka exporter metrics service port | `9308` |
| `metrics.kafka.service.clusterIP` | Static clusterIP or None for headless services | `""` |
| `metrics.kafka.service.sessionAffinity` | Control where client requests go, to the same pod or round-robin | `None` |
| `metrics.kafka.service.annotations` | Annotations for the Kafka exporter service | `{}` |
| `metrics.kafka.serviceAccount.create` | Enable creation of ServiceAccount for Kafka exporter pods | `true` |
| `metrics.kafka.serviceAccount.name` | The name of the service account to use. If not set and `create` is `true`, a name is generated | `""` |
| `metrics.kafka.serviceAccount.automountServiceAccountToken` | Allows auto mount of ServiceAccountToken on the serviceAccount created | `true` |
| `metrics.jmx.enabled` | Whether or not to expose JMX metrics to Prometheus | `false` |
| `metrics.jmx.image.registry` | JMX exporter image registry | `docker.io` |
| `metrics.jmx.image.repository` | JMX exporter image repository | `bitnami/jmx-exporter` |
| `metrics.jmx.image.tag` | JMX exporter image tag (immutable tags are recommended) | `0.16.1-debian-10-r222` |
| `metrics.jmx.image.pullPolicy` | JMX exporter image pull policy | `IfNotPresent` |
| `metrics.jmx.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `metrics.jmx.containerSecurityContext.enabled` | Enable Prometheus JMX exporter containers' Security Context | `true` |
| `metrics.jmx.containerSecurityContext.runAsUser` | Set Prometheus JMX exporter containers' Security Context runAsUser | `1001` |
| `metrics.jmx.containerSecurityContext.runAsNonRoot` | Set Prometheus JMX exporter containers' Security Context runAsNonRoot | `true` |
| `metrics.jmx.containerPorts.metrics` | Prometheus JMX exporter metrics container port | `5556` |
| `metrics.jmx.resources.limits` | The resources limits for the JMX exporter container | `{}` |
| `metrics.jmx.resources.requests` | The requested resources for the JMX exporter container | `{}` |
| `metrics.jmx.service.ports.metrics` | Prometheus JMX exporter metrics service port | `5556` |
| `metrics.jmx.service.clusterIP` | Static clusterIP or None for headless services | `""` |
| `metrics.jmx.service.sessionAffinity` | Control where client requests go, to the same pod or round-robin | `None` |
| `metrics.jmx.service.annotations` | Annotations for the Prometheus JMX exporter service | `{}` |
| `metrics.jmx.whitelistObjectNames` | Allows setting which JMX objects you want to expose via JMX stats to the JMX exporter | `["kafka.controller:*","kafka.server:*","java.lang:*","kafka.network:*","kafka.log:*"]` |
| `metrics.jmx.config` | Configuration file for JMX exporter | `""` |
| `metrics.jmx.existingConfigmap` | Name of existing ConfigMap with JMX exporter configuration | `""` |
| `metrics.serviceMonitor.enabled` | If `true`, creates a Prometheus Operator ServiceMonitor (requires `metrics.kafka.enabled` or `metrics.jmx.enabled` to be `true`) | `false` |
| `metrics.serviceMonitor.namespace` | Namespace in which Prometheus is running | `""` |
| `metrics.serviceMonitor.interval` | Interval at which metrics should be scraped | `""` |
| `metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `""` |
| `metrics.serviceMonitor.labels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` |
| `metrics.serviceMonitor.selector` | Prometheus instance selector labels | `{}` |
| `metrics.serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping | `[]` |
| `metrics.serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples before ingestion | `[]` |
| `metrics.serviceMonitor.honorLabels` | Specify honorLabels parameter to add the scrape endpoint | `false` |
| `metrics.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in Prometheus | `""` |

### Kafka provisioning parameters

| Name | Description | Value |
| --- | --- | --- |
| `provisioning.enabled` | Enable kafka provisioning Job | `false` |
| `provisioning.auth.tls.type` | Format to use for TLS certificates. Allowed types: `jks` and `pem`. | `jks` |
| `provisioning.auth.tls.certificatesSecret` | Existing secret containing the TLS certificates for the Kafka provisioning Job. | `""` |
| `provisioning.auth.tls.cert` | The secret key from the certificatesSecret if 'cert' key different from the default (tls.crt) | `tls.crt` |
| `provisioning.auth.tls.key` | The secret key from the certificatesSecret if 'key' key different from the default (tls.key) | `tls.key` |
| `provisioning.auth.tls.caCert` | The secret key from the certificatesSecret if 'caCert' key different from the default (ca.crt) | `ca.crt` |
| `provisioning.auth.tls.keystore` | The secret key from the certificatesSecret if 'keystore' key different from the default (keystore.jks) | `keystore.jks` |
| `provisioning.auth.tls.truststore` | The secret key from the certificatesSecret if 'truststore' key different from the default (truststore.jks) | `truststore.jks` |
| `provisioning.auth.tls.passwordsSecret` | Name of the secret containing passwords to access the JKS files or PEM key when they are password-protected. | `""` |
| `provisioning.auth.tls.keyPasswordSecretKey` | The secret key from the passwordsSecret if 'keyPasswordSecretKey' key different from the default (key-password) | `key-password` |
| `provisioning.auth.tls.keystorePasswordSecretKey` | The secret key from the passwordsSecret if 'keystorePasswordSecretKey' key different from the default (keystore-password) | `keystore-password` |
| `provisioning.auth.tls.truststorePasswordSecretKey` | The secret key from the passwordsSecret if 'truststorePasswordSecretKey' key different from the default (truststore-password) | `truststore-password` |
| `provisioning.auth.tls.keyPassword` | Password to access the password-protected PEM key if necessary. Ignored if 'passwordsSecret' is provided. | `""` |
| `provisioning.auth.tls.keystorePassword` | Password to access the JKS keystore. Ignored if 'passwordsSecret' is provided. | `""` |
| `provisioning.auth.tls.truststorePassword` | Password to access the JKS truststore. Ignored if 'passwordsSecret' is provided. | `""` |
| `provisioning.numPartitions` | Default number of partitions for topics when unspecified | `1` |
| `provisioning.replicationFactor` | Default replication factor for topics when unspecified | `1` |
| `provisioning.topics` | Kafka provisioning topics | `[]` |
| `provisioning.command` | Override provisioning container command | `[]` |
| `provisioning.args` | Override provisioning container arguments | `[]` |
| `provisioning.podAnnotations` | Extra annotations for Kafka provisioning pods | `{}` |
| `provisioning.podLabels` | Extra labels for Kafka provisioning pods | `{}` |
| `provisioning.resources.limits` | The resources limits for the Kafka provisioning container | `{}` |
| `provisioning.resources.requests` | The requested resources for the Kafka provisioning container | `{}` |
| `provisioning.podSecurityContext.enabled` | Enable security context for the pods | `true` |
| `provisioning.podSecurityContext.fsGroup` | Set Kafka provisioning pod's Security Context fsGroup | `1001` |
| `provisioning.containerSecurityContext.enabled` | Enable Kafka provisioning containers' Security Context | `true` |
| `provisioning.containerSecurityContext.runAsUser` | Set Kafka provisioning containers' Security Context runAsUser | `1001` |
| `provisioning.containerSecurityContext.runAsNonRoot` | Set Kafka provisioning containers' Security Context runAsNonRoot | `true` |
| `provisioning.schedulerName` | Name of the k8s scheduler (other than default) for kafka provisioning | `""` |
| `provisioning.extraVolumes` | Optionally specify extra list of additional volumes for the Kafka provisioning pod(s) | `[]` |
| `provisioning.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Kafka provisioning container(s) | `[]` |
| `provisioning.sidecars` | Add additional sidecar containers to the Kafka provisioning pod(s) | `[]` |
| `provisioning.initContainers` | Add additional init containers to the Kafka provisioning pod(s) | `[]` |

### ZooKeeper chart parameters

| Name | Description | Value |
| --- | --- | --- |
| `zookeeper.enabled` | Switch to enable or disable the ZooKeeper helm chart | `true` |
| `zookeeper.replicaCount` | Number of ZooKeeper nodes | `1` |
| `zookeeper.auth.enabled` | Enable ZooKeeper auth | `false` |
| `zookeeper.auth.clientUser` | User that ZooKeeper clients will use to auth | `""` |
| `zookeeper.auth.clientPassword` | Password that ZooKeeper clients will use to auth | `""` |
| `zookeeper.auth.serverUsers` | Comma, semicolon or whitespace separated list of users to be created. Specify them as a string, for example: "user1,user2,admin" | `""` |
| `zookeeper.auth.serverPasswords` | Comma, semicolon or whitespace separated list of passwords to assign to users when created. Specify them as a string, for example: "pass4user1, pass4user2, pass4admin" | `""` |
| `zookeeper.persistence.enabled` | Enable persistence on ZooKeeper using PVC(s) | `true` |
| `zookeeper.persistence.storageClass` | Persistent Volume storage class | `""` |
| `zookeeper.persistence.accessModes` | Persistent Volume access modes | `["ReadWriteOnce"]` |
| `zookeeper.persistence.size` | Persistent Volume size | `8Gi` |
| `externalZookeeper.servers` | List of external zookeeper servers to use | `[]` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

```console
helm install my-release \
  --set replicaCount=3 \
  bitnami/kafka
```

The above command deploys Kafka with 3 brokers (replicas).

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

```console
helm install my-release -f values.yaml bitnami/kafka
```

> **Tip**: You can use the default [values.yaml](values.yaml)

## Configuration and installation details

### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/)

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.

### Setting custom parameters

Any environment variable beginning with `KAFKA_CFG_` will be mapped to its corresponding Kafka key. For example, use `KAFKA_CFG_BACKGROUND_THREADS` in order to set `background.threads`. To pass custom environment variables, use the `extraEnvVars` property.

Using `extraEnvVars` with `KAFKA_CFG_` is the preferred and simplest way to add custom Kafka parameters not otherwise specified in this chart. Alternatively, you can provide a *full* Kafka configuration using `config` or `existingConfigmap`. Setting either `config` or `existingConfigmap` will cause the chart to disregard `KAFKA_CFG_` settings, which are used by many other Kafka-related chart values described above, as well as dynamically generated parameters such as `zookeeper.connect`. This can cause unexpected behavior. A minimal `extraEnvVars` example is shown below.

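As a sketch of this mechanism (the `background.threads` mapping is the example mentioned above; `log.retention.hours` is simply another standard Kafka setting shown for illustration):

```yaml
extraEnvVars:
  # KAFKA_CFG_BACKGROUND_THREADS maps to the background.threads Kafka setting
  - name: KAFKA_CFG_BACKGROUND_THREADS
    value: "10"
  # Any other KAFKA_CFG_* variable follows the same pattern
  - name: KAFKA_CFG_LOG_RETENTION_HOURS
    value: "168"
```
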
### Listeners configuration

This chart allows you to automatically configure Kafka with 3 listeners:

- One for inter-broker communications.
- A second one for communications with clients within the K8s cluster.
- (optional) A third listener for communications with clients outside the K8s cluster. Check [this section](#accessing-kafka-brokers-from-outside-the-cluster) for more information.

For more complex configurations, set the `listeners`, `advertisedListeners` and `listenerSecurityProtocolMap` parameters as needed; a sketch follows.

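As a rough, hypothetical sketch (listener names, ports and the namespace are illustrative and not the chart's defaults, so adapt them to your deployment), a custom listener layout could look like this:

```yaml
# Illustrative custom listener layout; INTERNAL/CLIENT names and ports are placeholders
listeners:
  - INTERNAL://:9093
  - CLIENT://:9092
advertisedListeners:
  - INTERNAL://$(MY_POD_NAME).kafka-headless.default.svc.cluster.local:9093
  - CLIENT://$(MY_POD_NAME).kafka-headless.default.svc.cluster.local:9092
listenerSecurityProtocolMap: INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT
interBrokerListenerName: INTERNAL
```
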
### Enable security for Kafka and Zookeeper

You can configure different authentication protocols for each listener you configure in Kafka. For instance, you can use `sasl_tls` authentication for client communications, while using `tls` for inter-broker communications. This table shows the available protocols and the security they provide:

| Method | Authentication | Encryption via TLS |
| --- | --- | --- |
| plaintext | None | No |
| tls | None | Yes |
| mtls | Yes (two-way authentication) | Yes |
| sasl | Yes (via SASL) | No |
| sasl_tls | Yes (via SASL) | Yes |

Learn more about how to configure Kafka to use the different authentication protocols in the [chart documentation](https://docs.bitnami.com/kubernetes/infrastructure/kafka/administration/enable-security/).

If you enabled SASL authentication on any listener, you can set the SASL credentials using the parameters below:

- `auth.sasl.jaas.clientUsers`/`auth.sasl.jaas.clientPasswords`: when enabling SASL authentication for communications with clients.
- `auth.sasl.jaas.interBrokerUser`/`auth.sasl.jaas.interBrokerPassword`: when enabling SASL authentication for inter-broker communications.
- `auth.sasl.jaas.zookeeperUser`/`auth.sasl.jaas.zookeeperPassword`: in case the ZooKeeper chart is deployed with SASL authentication enabled.

In order to configure TLS authentication/encryption, you **can** create a secret per Kafka broker you have in the cluster containing the Java Key Stores (JKS) files: the truststore (`kafka.truststore.jks`) and the keystore (`kafka.keystore.jks`). Then, you need to pass the secret names with the `auth.tls.existingSecrets` parameter when deploying the chart.

> **Note**: If the JKS files are password protected (recommended), you will need to provide the password to get access to the keystores. To do so, use the `auth.tls.password` parameter to provide your password.

For instance, to configure TLS authentication on a Kafka cluster with 2 Kafka brokers, use the commands below to create the secrets:

```console
kubectl create secret generic kafka-jks-0 --from-file=kafka.truststore.jks=./kafka.truststore.jks --from-file=kafka.keystore.jks=./kafka-0.keystore.jks
kubectl create secret generic kafka-jks-1 --from-file=kafka.truststore.jks=./kafka.truststore.jks --from-file=kafka.keystore.jks=./kafka-1.keystore.jks
```

> **Note**: the command above assumes you already created the truststore and keystore files. This [script](https://raw.githubusercontent.com/confluentinc/confluent-platform-security-tools/master/kafka-generate-ssl.sh) can help you with the JKS files generation.

If, for some reason (like using Cert-Manager), you can not use the default JKS secret scheme, you can use the additional parameters:

- `auth.tls.jksTruststoreSecret` to define an additional secret in which the `kafka.truststore.jks` is kept. The truststore password **must** be the same as in `auth.tls.password`.
- `auth.tls.jksTruststore` to overwrite the default value of the truststore key (`kafka.truststore.jks`).
- `auth.tls.jksKeystoreSAN` if you want to use a SAN certificate for your brokers. Setting this parameter means that the chart expects an existing key in the `auth.tls.jksTruststoreSecret` named after the `auth.tls.jksKeystoreSAN` value, and will use it as the keystore for **all** brokers.

> **Note**: If you are using cert-manager, particularly when an ACME issuer is used, the `ca.crt` field is not put in the `Secret` that cert-manager creates. To handle this, the `auth.tls.pemChainIncluded` property can be set to `true` and the initContainer created by this Chart will attempt to extract the intermediate certs from the `tls.crt` field of the secret (which is a PEM chain).

> **Note**: The truststore/keystore from above **must** be protected with the same password as in `auth.tls.password`.

You can deploy the chart with authentication using the following parameters:

```console
replicaCount=2
auth.clientProtocol=sasl
auth.interBrokerProtocol=tls
auth.tls.existingSecrets[0]=kafka-jks-0
auth.tls.existingSecrets[1]=kafka-jks-1
auth.tls.password=jksPassword
auth.sasl.jaas.clientUsers[0]=brokerUser
auth.sasl.jaas.clientPasswords[0]=brokerPassword
auth.sasl.jaas.zookeeperUser=zookeeperUser
auth.sasl.jaas.zookeeperPassword=zookeeperPassword
zookeeper.auth.enabled=true
zookeeper.auth.serverUsers=zookeeperUser
zookeeper.auth.serverPasswords=zookeeperPassword
zookeeper.auth.clientUser=zookeeperUser
zookeeper.auth.clientPassword=zookeeperPassword
```

You can deploy the chart with AclAuthorizer using the following parameters:

```console
replicaCount=2
auth.clientProtocol=sasl
auth.interBrokerProtocol=sasl_tls
auth.tls.existingSecrets[0]=kafka-jks-0
auth.tls.existingSecrets[1]=kafka-jks-1
auth.tls.password=jksPassword
auth.sasl.jaas.clientUsers[0]=brokerUser
auth.sasl.jaas.clientPasswords[0]=brokerPassword
auth.sasl.jaas.zookeeperUser=zookeeperUser
auth.sasl.jaas.zookeeperPassword=zookeeperPassword
zookeeper.auth.enabled=true
zookeeper.auth.serverUsers=zookeeperUser
zookeeper.auth.serverPasswords=zookeeperPassword
zookeeper.auth.clientUser=zookeeperUser
zookeeper.auth.clientPassword=zookeeperPassword
authorizerClassName=kafka.security.authorizer.AclAuthorizer
allowEveryoneIfNoAclFound=false
superUsers=User:admin
```

If you also enable exposing metrics using the Kafka exporter, and you are using `sasl_tls`, `tls`, or `mtls` authentication protocols, you need to mount the CA certificate used to sign the brokers' certificates in the exporter so it can validate the Kafka brokers. To do so, create a secret containing the CA, and set the `metrics.kafka.certificatesSecret` parameter. As an alternative, you can skip TLS validation using extra flags:

```console
metrics.kafka.extraFlags={tls.insecure-skip-tls-verify: ""}
```

### Accessing Kafka brokers from outside the cluster

In order to access Kafka brokers from outside the cluster, an additional listener and advertised listener must be configured. Additionally, a specific service per Kafka pod will be created.

There are two ways of configuring external access: using LoadBalancer services or using NodePort services.

#### Using LoadBalancer services

You have two alternatives to use LoadBalancer services:

- Option A) Use random load balancer IPs using an **initContainer** that waits for the IPs to be ready and discovers them automatically.

```console
externalAccess.enabled=true
externalAccess.service.type=LoadBalancer
externalAccess.service.ports.external=9094
externalAccess.autoDiscovery.enabled=true
serviceAccount.create=true
rbac.create=true
```

Note: This option requires creating RBAC rules on clusters where RBAC policies are enabled.

- Option B) Manually specify the load balancer IPs:

```console
externalAccess.enabled=true
externalAccess.service.type=LoadBalancer
externalAccess.service.ports.external=9094
externalAccess.service.loadBalancerIPs[0]='external-ip-1'
externalAccess.service.loadBalancerIPs[1]='external-ip-2'
```

Note: You need to know the load balancer IPs in advance so that each Kafka broker's advertised listener can be configured with them.

Following the aforementioned steps will also allow you to connect to the brokers from outside the cluster using the cluster's default service (when `service.type` is `LoadBalancer` or `NodePort`). Use the `service.ports.external` property to specify the port used for external connections.

#### Using NodePort services

You have two alternatives to use NodePort services:

- Option A) Use random node ports using an **initContainer** that discovers them automatically.

```console
externalAccess.enabled=true
externalAccess.service.type=NodePort
externalAccess.autoDiscovery.enabled=true
serviceAccount.create=true
rbac.create=true
```

Note: This option requires creating RBAC rules on clusters where RBAC policies are enabled.

- Option B) Manually specify the node ports:

```console
externalAccess.enabled=true
externalAccess.service.type=NodePort
externalAccess.service.nodePorts[0]='node-port-1'
externalAccess.service.nodePorts[1]='node-port-2'
```

Note: You need to know in advance the node ports that will be exposed so that each Kafka broker's advertised listener can be configured with them.

The pod will try to get the external IP of the node using `curl -s https://ipinfo.io/ip` unless `externalAccess.service.domain` or `externalAccess.service.useHostIPs` is provided.

#### Name resolution with External-DNS

You can use the following values to generate External-DNS annotations which automatically create DNS records for each broker pod:

```yaml
externalAccess:
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: "{{ .targetPod }}.example.com"
```

### Sidecars

If you have a need for additional containers to run within the same pod as Kafka (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec.

```yaml
sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
```

### Setting Pod's affinity

This chart allows you to set your custom affinity using the `affinity` parameter. Find more information about Pod affinity in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity).

As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the [bitnami/common](https://github.com/bitnami/charts/tree/master/bitnami/common#affinities) chart. To do so, set the `podAffinityPreset`, `podAntiAffinityPreset`, or `nodeAffinityPreset` parameters, as in the sketch below.

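For example, a values sketch (using only the preset parameters documented above; the node label key and value are hypothetical) that forces brokers onto different nodes and prefers a given node pool could look like:

```yaml
# Require brokers to be scheduled on different nodes
podAntiAffinityPreset: hard
# Prefer (soft) nodes carrying a hypothetical node-pool label
nodeAffinityPreset:
  type: soft
  key: node.kubernetes.io/pool
  values:
    - kafka
```
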
### Deploying extra resources

There are cases where you may want to deploy extra objects, such as Kafka Connect. For covering this case, the chart allows adding the full specification of other objects using the `extraDeploy` parameter. The following example would create a deployment including a Kafka Connect deployment so you can connect Kafka with MongoDB®:

```yaml
## Extra objects to deploy (value evaluated as a template)
##
extraDeploy:
  - |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ include "kafka.fullname" . }}-connect
      labels: {{- include "common.labels.standard" . | nindent 4 }}
        app.kubernetes.io/component: connector
    spec:
      replicas: 1
      selector:
        matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
          app.kubernetes.io/component: connector
      template:
        metadata:
          labels: {{- include "common.labels.standard" . | nindent 8 }}
            app.kubernetes.io/component: connector
        spec:
          containers:
            - name: connect
              image: KAFKA-CONNECT-IMAGE
              imagePullPolicy: IfNotPresent
              ports:
                - name: connector
                  containerPort: 8083
              volumeMounts:
                - name: configuration
                  mountPath: /bitnami/kafka/config
          volumes:
            - name: configuration
              configMap:
                name: {{ include "kafka.fullname" . }}-connect
  - |
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ include "kafka.fullname" . }}-connect
      labels: {{- include "common.labels.standard" . | nindent 4 }}
        app.kubernetes.io/component: connector
    data:
      connect-standalone.properties: |-
        bootstrap.servers = {{ include "kafka.fullname" . }}-0.{{ include "kafka.fullname" . }}-headless.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}:{{ .Values.service.port }}
        ...
      mongodb.properties: |-
        connection.uri=mongodb://root:password@mongodb-hostname:27017
        ...
  - |
    apiVersion: v1
    kind: Service
    metadata:
      name: {{ include "kafka.fullname" . }}-connect
      labels: {{- include "common.labels.standard" . | nindent 4 }}
        app.kubernetes.io/component: connector
    spec:
      ports:
        - protocol: TCP
          port: 8083
          targetPort: connector
      selector: {{- include "common.labels.matchLabels" . | nindent 4 }}
        app.kubernetes.io/component: connector
```

You can create the Kafka Connect image using the Dockerfile below:

```Dockerfile
FROM bitnami/kafka:latest
# Download MongoDB® Connector for Apache Kafka https://www.confluent.io/hub/mongodb/kafka-connect-mongodb
RUN mkdir -p /opt/bitnami/kafka/plugins && \
    cd /opt/bitnami/kafka/plugins && \
    curl --remote-name --location --silent https://search.maven.org/remotecontent?filepath=org/mongodb/kafka/mongo-kafka-connect/1.2.0/mongo-kafka-connect-1.2.0-all.jar
CMD /opt/bitnami/kafka/bin/connect-standalone.sh /opt/bitnami/kafka/config/connect-standalone.properties /opt/bitnami/kafka/config/mongo.properties
```

## Persistence

The [Bitnami Kafka](https://github.com/bitnami/bitnami-docker-kafka) image stores the Kafka data at the `/bitnami/kafka` path of the container.

Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. See the [Parameters](#persistence-parameters) section to configure the PVC or to disable persistence; a brief example follows.

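As a sketch built only from the persistence parameters listed above (the storage class name is a placeholder; use one available in your cluster), a values snippet that enlarges the data volume could look like:

```yaml
persistence:
  enabled: true
  # Placeholder storage class; replace with one that exists in your cluster
  storageClass: standard
  size: 20Gi
  accessModes:
    - ReadWriteOnce
```
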
### Adjust permissions of persistent volume mountpoint

As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data into it.

By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.

You can enable this initContainer by setting `volumePermissions.enabled` to `true`, as shown below.

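A minimal way to enable it at install time (the release name `my-release` is illustrative):

```console
helm install my-release bitnami/kafka --set volumePermissions.enabled=true
```
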
## Troubleshooting

Find more information about how to deal with common errors related to Bitnami's Helm charts in [this troubleshooting guide](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues).

## Upgrading

### To 15.0.0

This major release bumps the Kafka major version to the `3.x` series. It also renames several values in this chart and adds missing features, in order to be in line with the rest of the assets in the Bitnami charts repository. Some affected values are:

- `service.port`, `service.internalPort` and `service.externalPort` have been regrouped under the `service.ports` map.
- `metrics.kafka.service.port` has been regrouped under the `metrics.kafka.service.ports` map.
- `metrics.jmx.service.port` has been regrouped under the `metrics.jmx.service.ports` map.
- `updateStrategy` (string) and `rollingUpdatePartition` are regrouped under the `updateStrategy` map.
- Several parameters marked as deprecated in `14.x.x` are no longer supported.

Additionally, this version updates the ZooKeeper subchart to its newest major, `8.0.0`, which contains similar changes.

### To 14.0.0

In this version, the `image` block is defined once and is used in the different templates, while in the previous version, the `image` block was duplicated for the main container and the provisioning one:

```yaml
image:
  registry: docker.io
  repository: bitnami/kafka
  tag: 2.8.0
```

VS

```yaml
image:
  registry: docker.io
  repository: bitnami/kafka
  tag: 2.8.0
...
provisioning:
  image:
    registry: docker.io
    repository: bitnami/kafka
    tag: 2.8.0
```

See [PR#7114](https://github.com/bitnami/charts/pull/7114) for more info about the implemented changes.

### To 13.0.0

This major version updates the ZooKeeper subchart to its newest major, 7.0.0, which renames all TLS-related settings. For more information on this subchart's major version, please refer to the [ZooKeeper upgrade notes](https://github.com/bitnami/charts/tree/master/bitnami/zookeeper#to-700).

### To 12.2.0

This version also introduces `bitnami/common`, a [library chart](https://helm.sh/docs/topics/library_charts/#helm), as a dependency. More documentation about this new utility can be found [here](https://github.com/bitnami/charts/tree/master/bitnami/common#bitnami-common-library-chart). Please make sure that you have updated the chart dependencies before executing any upgrade.

### To 12.0.0

[On November 13, 2020, Helm v2 support formally ended](https://github.com/helm/charts#status-of-the-project). This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.

**What changes were introduced in this major version?**

- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3); this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field.
- Dependency information moved from the *requirements.yaml* to the *Chart.yaml*.
- After running `helm dependency update`, a *Chart.lock* file is generated containing the same structure used in the previous *requirements.lock*.
- The different fields present in the *Chart.yaml* file have been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts.

**Considerations when upgrading to this version**

- If you want to upgrade to this version from a previous one installed with Helm v3, you shouldn't face any issues.
- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore.
- If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3.

**Useful links**

- https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/
- https://helm.sh/docs/topics/v2_v3_migration/
- https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/

### To 11.8.0

External access to brokers can now be achieved through the cluster's Kafka service.

- `service.nodePort` -> deprecated in favor of `service.nodePorts.client` and `service.nodePorts.external`

### To 11.7.0

The way to configure users and passwords changed. It is now possible to create multiple users during the installation by providing the lists of users and passwords, as in the sketch below.

- `auth.jaas.clientUser` (string) -> deprecated in favor of `auth.jaas.clientUsers` (array).
- `auth.jaas.clientPassword` (string) -> deprecated in favor of `auth.jaas.clientPasswords` (array).

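In the current chart structure these lists live under `auth.sasl.jaas` (see the parameter tables above); a sketch with placeholder credentials:

```yaml
auth:
  sasl:
    jaas:
      # Placeholder users/passwords; the two arrays must have matching lengths
      clientUsers:
        - alice
        - bob
      clientPasswords:
        - alicePassword
        - bobPassword
```
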
### To 11.0.0

The way to configure listeners and authentication on Kafka has been totally refactored, allowing users to configure different authentication protocols on different listeners. Please check the [Listeners Configuration](#listeners-configuration) section for more information.

Backwards compatibility is not guaranteed unless you adapt your values.yaml to the new format. Here you can find some parameters that were renamed or disappeared in favor of new ones in this major version:

- `auth.enabled` -> deprecated in favor of `auth.clientProtocol` and `auth.interBrokerProtocol` parameters.
- `auth.ssl` -> deprecated in favor of `auth.clientProtocol` and `auth.interBrokerProtocol` parameters.
- `auth.certificatesSecret` -> renamed to `auth.jksSecret`.
- `auth.certificatesPassword` -> renamed to `auth.jksPassword`.
- `sslEndpointIdentificationAlgorithm` -> renamed to `auth.tlsEndpointIdentificationAlgorithm`.
- `auth.interBrokerUser` -> renamed to `auth.jaas.interBrokerUser`
- `auth.interBrokerPassword` -> renamed to `auth.jaas.interBrokerPassword`
- `auth.zookeeperUser` -> renamed to `auth.jaas.zookeeperUser`
- `auth.zookeeperPassword` -> renamed to `auth.jaas.zookeeperPassword`
- `auth.existingSecret` -> renamed to `auth.jaas.existingSecret`
- `service.sslPort` -> deprecated in favor of `service.internalPort`
- `service.nodePorts.kafka` and `service.nodePorts.ssl` -> deprecated in favor of `service.nodePort`
- `metrics.kafka.extraFlag` -> new parameter
- `metrics.kafka.certificatesSecret` -> new parameter

### To 10.0.0

If you are setting the `config` or `log4j` parameter, backwards compatibility is not guaranteed, because the `KAFKA_MOUNTED_CONFDIR` has moved from `/opt/bitnami/kafka/conf` to `/bitnami/kafka/config`. In order to continue using these parameters, you must also upgrade your image to `docker.io/bitnami/kafka:2.4.1-debian-10-r38` or later.

### To 9.0.0
|
||||
|
||||
Backwards compatibility is not guaranteed you adapt your values.yaml to the new format. Here you can find some parameters that were renamed on this major version:
|
||||
|
||||
```diff
|
||||
- securityContext.enabled
|
||||
- securityContext.fsGroup
|
||||
- securityContext.fsGroup
|
||||
+ podSecurityContext
|
||||
- externalAccess.service.loadBalancerIP
|
||||
+ externalAccess.service.loadBalancerIPs
|
||||
- externalAccess.service.nodePort
|
||||
+ externalAccess.service.nodePorts
|
||||
- metrics.jmx.configMap.enabled
|
||||
- metrics.jmx.configMap.overrideConfig
|
||||
+ metrics.jmx.config
|
||||
- metrics.jmx.configMap.overrideName
|
||||
+ metrics.jmx.existingConfigmap
|
||||
```
|
||||
|
||||
Ports names were prefixed with the protocol to comply with Istio (see https://istio.io/docs/ops/deployment/requirements/).
|
||||
|
||||
### To 8.0.0
|
||||
|
||||
There is not backwards compatibility since the brokerID changes to the POD_NAME. For more information see [this PR](https://github.com/bitnami/charts/pull/2028).
|
||||
|
||||
### To 7.0.0
|
||||
|
||||
Backwards compatibility is not guaranteed when Kafka metrics are enabled, unless you modify the labels used on the exporter deployments.
|
||||
Use the workaround below to upgrade from versions previous to 7.0.0. The following example assumes that the release name is kafka:
|
||||
|
||||
```console
|
||||
helm upgrade kafka bitnami/kafka --version 6.1.8 --set metrics.kafka.enabled=false
|
||||
helm upgrade kafka bitnami/kafka --version 7.0.0 --set metrics.kafka.enabled=true
|
||||
```
|
||||
|
||||
### To 2.0.0
|
||||
|
||||
Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments.
|
||||
Use the workaround below to upgrade from versions previous to 2.0.0. The following example assumes that the release name is kafka:
|
||||
|
||||
```console
|
||||
kubectl delete statefulset kafka-kafka --cascade=false
|
||||
kubectl delete statefulset kafka-zookeeper --cascade=false
|
||||
```
|
||||
|
||||
### To 1.0.0
|
||||
|
||||
Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments.
|
||||
Use the workaround below to upgrade from versions previous to 1.0.0. The following example assumes that the release name is kafka:
|
||||
|
||||
```console
|
||||
kubectl delete statefulset kafka-kafka --cascade=false
|
||||
kubectl delete statefulset kafka-zookeeper --cascade=false
|
||||
```
|
||||
|
||||
## License
|
||||
|
||||
Copyright © 2022 Bitnami
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
@@ -0,0 +1,22 @@
|
||||
# Patterns to ignore when building packages.
|
||||
# This supports shell glob matching, relative path matching, and
|
||||
# negation (prefixed with !). Only one pattern per line.
|
||||
.DS_Store
|
||||
# Common VCS dirs
|
||||
.git/
|
||||
.gitignore
|
||||
.bzr/
|
||||
.bzrignore
|
||||
.hg/
|
||||
.hgignore
|
||||
.svn/
|
||||
# Common backup files
|
||||
*.swp
|
||||
*.bak
|
||||
*.tmp
|
||||
*~
|
||||
# Various IDEs
|
||||
.project
|
||||
.idea/
|
||||
*.tmproj
|
||||
.vscode/
|
||||
23
deployment/helm/milvus/charts/kafka/charts/common/Chart.yaml
Normal file
23
deployment/helm/milvus/charts/kafka/charts/common/Chart.yaml
Normal file
@@ -0,0 +1,23 @@
|
||||
annotations:
|
||||
category: Infrastructure
|
||||
apiVersion: v2
|
||||
appVersion: 1.12.0
|
||||
description: A Library Helm Chart for grouping common logic between bitnami charts.
|
||||
This chart is not deployable by itself.
|
||||
home: https://github.com/bitnami/charts/tree/master/bitnami/common
|
||||
icon: https://bitnami.com/downloads/logos/bitnami-mark.png
|
||||
keywords:
|
||||
- common
|
||||
- helper
|
||||
- template
|
||||
- function
|
||||
- bitnami
|
||||
maintainers:
|
||||
- email: containers@bitnami.com
|
||||
name: Bitnami
|
||||
name: common
|
||||
sources:
|
||||
- https://github.com/bitnami/charts
|
||||
- https://www.bitnami.com/
|
||||
type: library
|
||||
version: 1.12.0
|
||||
346
deployment/helm/milvus/charts/kafka/charts/common/README.md
Normal file
346
deployment/helm/milvus/charts/kafka/charts/common/README.md
Normal file
@@ -0,0 +1,346 @@
|
||||
# Bitnami Common Library Chart
|
||||
|
||||
A [Helm Library Chart](https://helm.sh/docs/topics/library_charts/#helm) for grouping common logic between bitnami charts.
|
||||
|
||||
## TL;DR
|
||||
|
||||
```yaml
|
||||
dependencies:
|
||||
- name: common
|
||||
version: 1.x.x
|
||||
repository: https://charts.bitnami.com/bitnami
|
||||
```
|
||||
|
||||
```bash
|
||||
$ helm dependency update
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: {{ include "common.names.fullname" . }}
|
||||
data:
|
||||
myvalue: "Hello World"
|
||||
```
|
||||
|
||||
## Introduction
|
||||
|
||||
This chart provides a common template helpers which can be used to develop new charts using [Helm](https://helm.sh) package manager.
|
||||
|
||||
Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of [Bitnami Kubernetes Production Runtime](https://kubeprod.io/) (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Kubernetes 1.19+
|
||||
- Helm 3.2.0+
|
||||
|
||||
## Parameters
|
||||
|
||||
The following table lists the helpers available in the library which are scoped in different sections.
|
||||
|
||||
### Affinities
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|-------------------------------|------------------------------------------------------|------------------------------------------------|
|
||||
| `common.affinities.node.soft` | Return a soft nodeAffinity definition | `dict "key" "FOO" "values" (list "BAR" "BAZ")` |
|
||||
| `common.affinities.node.hard` | Return a hard nodeAffinity definition | `dict "key" "FOO" "values" (list "BAR" "BAZ")` |
|
||||
| `common.affinities.pod.soft` | Return a soft podAffinity/podAntiAffinity definition | `dict "component" "FOO" "context" $` |
|
||||
| `common.affinities.pod.hard` | Return a hard podAffinity/podAntiAffinity definition | `dict "component" "FOO" "context" $` |
|
||||
|
||||
### Capabilities
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|------------------------------------------------|------------------------------------------------------------------------------------------------|-------------------|
|
||||
| `common.capabilities.kubeVersion` | Return the target Kubernetes version (using client default if .Values.kubeVersion is not set). | `.` Chart context |
|
||||
| `common.capabilities.cronjob.apiVersion` | Return the appropriate apiVersion for cronjob. | `.` Chart context |
|
||||
| `common.capabilities.deployment.apiVersion` | Return the appropriate apiVersion for deployment. | `.` Chart context |
|
||||
| `common.capabilities.statefulset.apiVersion` | Return the appropriate apiVersion for statefulset. | `.` Chart context |
|
||||
| `common.capabilities.ingress.apiVersion` | Return the appropriate apiVersion for ingress. | `.` Chart context |
|
||||
| `common.capabilities.rbac.apiVersion` | Return the appropriate apiVersion for RBAC resources. | `.` Chart context |
|
||||
| `common.capabilities.crd.apiVersion` | Return the appropriate apiVersion for CRDs. | `.` Chart context |
|
||||
| `common.capabilities.policy.apiVersion` | Return the appropriate apiVersion for podsecuritypolicy. | `.` Chart context |
|
||||
| `common.capabilities.networkPolicy.apiVersion` | Return the appropriate apiVersion for networkpolicy. | `.` Chart context |
|
||||
| `common.capabilities.supportsHelmVersion` | Returns true if the used Helm version is 3.3+ | `.` Chart context |
|
||||
|
||||
### Errors
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|-----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|
|
||||
| `common.errors.upgrade.passwords.empty` | It will ensure required passwords are given when we are upgrading a chart. If `validationErrors` is not empty it will throw an error and will stop the upgrade action. | `dict "validationErrors" (list $validationError00 $validationError01) "context" $` |
|
||||
|
||||
### Images
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|-----------------------------|------------------------------------------------------|---------------------------------------------------------------------------------------------------------|
|
||||
| `common.images.image` | Return the proper and full image name | `dict "imageRoot" .Values.path.to.the.image "global" $`, see [ImageRoot](#imageroot) for the structure. |
|
||||
| `common.images.pullSecrets` | Return the proper Docker Image Registry Secret Names (deprecated: use common.images.renderPullSecrets instead) | `dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global` |
|
||||
| `common.images.renderPullSecrets` | Return the proper Docker Image Registry Secret Names (evaluates values as templates) | `dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "context" $` |
|
||||
|
||||
### Ingress
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|-------------------------------------------|-------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `common.ingress.backend` | Generate a proper Ingress backend entry depending on the API version | `dict "serviceName" "foo" "servicePort" "bar"`, see the [Ingress deprecation notice](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/) for the syntax differences |
|
||||
| `common.ingress.supportsPathType` | Prints "true" if the pathType field is supported | `.` Chart context |
|
||||
| `common.ingress.supportsIngressClassname` | Prints "true" if the ingressClassname field is supported | `.` Chart context |
|
||||
| `common.ingress.certManagerRequest` | Prints "true" if required cert-manager annotations for TLS signed certificates are set in the Ingress annotations | `dict "annotations" .Values.path.to.the.ingress.annotations` |
|
||||
|
||||
### Labels
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|-----------------------------|-----------------------------------------------------------------------------|-------------------|
|
||||
| `common.labels.standard` | Return Kubernetes standard labels | `.` Chart context |
|
||||
| `common.labels.matchLabels` | Labels to use on `deploy.spec.selector.matchLabels` and `svc.spec.selector` | `.` Chart context |
|
||||
|
||||
### Names
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|--------------------------|------------------------------------------------------------|-------------------|
|
||||
| `common.names.name` | Expand the name of the chart or use `.Values.nameOverride` | `.` Chart context |
|
||||
| `common.names.fullname` | Create a default fully qualified app name. | `.` Chart context |
|
||||
| `common.names.namespace` | Allow the release namespace to be overridden | `.` Chart context |
|
||||
| `common.names.chart` | Chart name plus version | `.` Chart context |
|
||||
|
||||
### Secrets
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|---------------------------|--------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `common.secrets.name` | Generate the name of the secret. | `dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $` see [ExistingSecret](#existingsecret) for the structure. |
|
||||
| `common.secrets.key` | Generate secret key. | `dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName"` see [ExistingSecret](#existingsecret) for the structure. |
|
||||
| `common.passwords.manage` | Generate secret password or retrieve one if already created. | `dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $`, length, strong and chartNAme fields are optional. |
|
||||
| `common.secrets.exists` | Returns whether a previous generated secret already exists. | `dict "secret" "secret-name" "context" $` |
|
||||
|
||||
### Storage
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|-------------------------------|---------------------------------------|---------------------------------------------------------------------------------------------------------------------|
|
||||
| `common.storage.class` | Return the proper Storage Class | `dict "persistence" .Values.path.to.the.persistence "global" $`, see [Persistence](#persistence) for the structure. |
|
||||
|
||||
### TplValues
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|---------------------------|----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `common.tplvalues.render` | Renders a value that contains template | `dict "value" .Values.path.to.the.Value "context" $`, value is the value should rendered as template, context frequently is the chart context `$` or `.` |
|
||||
|
||||
### Utils
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|--------------------------------|------------------------------------------------------------------------------------------|------------------------------------------------------------------------|
|
||||
| `common.utils.fieldToEnvVar` | Build environment variable name given a field. | `dict "field" "my-password"` |
|
||||
| `common.utils.secret.getvalue` | Print instructions to get a secret value. | `dict "secret" "secret-name" "field" "secret-value-field" "context" $` |
|
||||
| `common.utils.getValueFromKey` | Gets a value from `.Values` object given its key path | `dict "key" "path.to.key" "context" $` |
|
||||
| `common.utils.getKeyFromList` | Returns first `.Values` key with a defined value or first of the list if all non-defined | `dict "keys" (list "path.to.key1" "path.to.key2") "context" $` |
|
||||
|
||||
### Validations
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|--------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `common.validations.values.single.empty` | Validate a value must not be empty. | `dict "valueKey" "path.to.value" "secret" "secret.name" "field" "my-password" "subchart" "subchart" "context" $` secret, field and subchart are optional. In case they are given, the helper will generate a how to get instruction. See [ValidateValue](#validatevalue) |
|
||||
| `common.validations.values.multiple.empty` | Validate a multiple values must not be empty. It returns a shared error for all the values. | `dict "required" (list $validateValueConf00 $validateValueConf01) "context" $`. See [ValidateValue](#validatevalue) |
|
||||
| `common.validations.values.mariadb.passwords` | This helper will ensure required password for MariaDB are not empty. It returns a shared error for all the values. | `dict "secret" "mariadb-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use mariadb chart and the helper. |
|
||||
| `common.validations.values.postgresql.passwords` | This helper will ensure required password for PostgreSQL are not empty. It returns a shared error for all the values. | `dict "secret" "postgresql-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use postgresql chart and the helper. |
|
||||
| `common.validations.values.redis.passwords` | This helper will ensure required password for Redis™ are not empty. It returns a shared error for all the values. | `dict "secret" "redis-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use redis chart and the helper. |
|
||||
| `common.validations.values.cassandra.passwords` | This helper will ensure required password for Cassandra are not empty. It returns a shared error for all the values. | `dict "secret" "cassandra-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use cassandra chart and the helper. |
|
||||
| `common.validations.values.mongodb.passwords` | This helper will ensure required password for MongoDB® are not empty. It returns a shared error for all the values. | `dict "secret" "mongodb-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use mongodb chart and the helper. |
|
||||
|
||||
### Warnings
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|------------------------------|----------------------------------|------------------------------------------------------------|
|
||||
| `common.warnings.rollingTag` | Warning about using rolling tag. | `ImageRoot` see [ImageRoot](#imageroot) for the structure. |
|
||||
|
||||
## Special input schemas
|
||||
|
||||
### ImageRoot
|
||||
|
||||
```yaml
|
||||
registry:
|
||||
type: string
|
||||
description: Docker registry where the image is located
|
||||
example: docker.io
|
||||
|
||||
repository:
|
||||
type: string
|
||||
description: Repository and image name
|
||||
example: bitnami/nginx
|
||||
|
||||
tag:
|
||||
type: string
|
||||
description: image tag
|
||||
example: 1.16.1-debian-10-r63
|
||||
|
||||
pullPolicy:
|
||||
type: string
|
||||
description: Specify a imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
|
||||
|
||||
pullSecrets:
|
||||
type: array
|
||||
items:
|
||||
type: string
|
||||
description: Optionally specify an array of imagePullSecrets (evaluated as templates).
|
||||
|
||||
debug:
|
||||
type: boolean
|
||||
description: Set to true if you would like to see extra information on logs
|
||||
example: false
|
||||
|
||||
## An instance would be:
|
||||
# registry: docker.io
|
||||
# repository: bitnami/nginx
|
||||
# tag: 1.16.1-debian-10-r63
|
||||
# pullPolicy: IfNotPresent
|
||||
# debug: false
|
||||
```
|
||||
|
||||
### Persistence
|
||||
|
||||
```yaml
|
||||
enabled:
|
||||
type: boolean
|
||||
description: Whether enable persistence.
|
||||
example: true
|
||||
|
||||
storageClass:
|
||||
type: string
|
||||
description: Ghost data Persistent Volume Storage Class, If set to "-", storageClassName: "" which disables dynamic provisioning.
|
||||
example: "-"
|
||||
|
||||
accessMode:
|
||||
type: string
|
||||
description: Access mode for the Persistent Volume Storage.
|
||||
example: ReadWriteOnce
|
||||
|
||||
size:
|
||||
type: string
|
||||
description: Size the Persistent Volume Storage.
|
||||
example: 8Gi
|
||||
|
||||
path:
|
||||
type: string
|
||||
description: Path to be persisted.
|
||||
example: /bitnami
|
||||
|
||||
## An instance would be:
|
||||
# enabled: true
|
||||
# storageClass: "-"
|
||||
# accessMode: ReadWriteOnce
|
||||
# size: 8Gi
|
||||
# path: /bitnami
|
||||
```
|
||||
|
||||
### ExistingSecret
|
||||
|
||||
```yaml
|
||||
name:
|
||||
type: string
|
||||
description: Name of the existing secret.
|
||||
example: mySecret
|
||||
keyMapping:
|
||||
description: Mapping between the expected key name and the name of the key in the existing secret.
|
||||
type: object
|
||||
|
||||
## An instance would be:
|
||||
# name: mySecret
|
||||
# keyMapping:
|
||||
# password: myPasswordKey
|
||||
```
|
||||
|
||||
#### Example of use
|
||||
|
||||
When we store sensitive data for a deployment in a secret, some times we want to give to users the possibility of using theirs existing secrets.
|
||||
|
||||
```yaml
|
||||
# templates/secret.yaml
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: {{ include "common.names.fullname" . }}
|
||||
labels:
|
||||
app: {{ include "common.names.fullname" . }}
|
||||
type: Opaque
|
||||
data:
|
||||
password: {{ .Values.password | b64enc | quote }}
|
||||
|
||||
# templates/dpl.yaml
|
||||
---
|
||||
...
|
||||
env:
|
||||
- name: PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: {{ include "common.secrets.name" (dict "existingSecret" .Values.existingSecret "context" $) }}
|
||||
key: {{ include "common.secrets.key" (dict "existingSecret" .Values.existingSecret "key" "password") }}
|
||||
...
|
||||
|
||||
# values.yaml
|
||||
---
|
||||
name: mySecret
|
||||
keyMapping:
|
||||
password: myPasswordKey
|
||||
```
|
||||
|
||||
### ValidateValue
|
||||
|
||||
#### NOTES.txt
|
||||
|
||||
```console
|
||||
{{- $validateValueConf00 := (dict "valueKey" "path.to.value00" "secret" "secretName" "field" "password-00") -}}
|
||||
{{- $validateValueConf01 := (dict "valueKey" "path.to.value01" "secret" "secretName" "field" "password-01") -}}
|
||||
|
||||
{{ include "common.validations.values.multiple.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }}
|
||||
```
|
||||
|
||||
If we force those values to be empty we will see some alerts
|
||||
|
||||
```console
|
||||
$ helm install test mychart --set path.to.value00="",path.to.value01=""
|
||||
'path.to.value00' must not be empty, please add '--set path.to.value00=$PASSWORD_00' to the command. To get the current value:
|
||||
|
||||
export PASSWORD_00=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-00}" | base64 --decode)
|
||||
|
||||
'path.to.value01' must not be empty, please add '--set path.to.value01=$PASSWORD_01' to the command. To get the current value:
|
||||
|
||||
export PASSWORD_01=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-01}" | base64 --decode)
|
||||
```
|
||||
|
||||
## Upgrading
|
||||
|
||||
### To 1.0.0
|
||||
|
||||
[On November 13, 2020, Helm v2 support was formally finished](https://github.com/helm/charts#status-of-the-project), this major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.
|
||||
|
||||
**What changes were introduced in this major version?**
|
||||
|
||||
- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3), this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field.
|
||||
- Use `type: library`. [Here](https://v3.helm.sh/docs/faq/#library-chart-support) you can find more information.
|
||||
- The different fields present in the *Chart.yaml* file has been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts
|
||||
|
||||
**Considerations when upgrading to this version**
|
||||
|
||||
- If you want to upgrade to this version from a previous one installed with Helm v3, you shouldn't face any issues
|
||||
- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore
|
||||
- If you installed the previous version with Helm v2 and wants to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3
|
||||
|
||||
**Useful links**
|
||||
|
||||
- https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/
|
||||
- https://helm.sh/docs/topics/v2_v3_migration/
|
||||
- https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/
|
||||
|
||||
## License
|
||||
|
||||
Copyright © 2022 Bitnami
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
@@ -0,0 +1,102 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
|
||||
{{/*
|
||||
Return a soft nodeAffinity definition
|
||||
{{ include "common.affinities.nodes.soft" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.nodes.soft" -}}
|
||||
preferredDuringSchedulingIgnoredDuringExecution:
|
||||
- preference:
|
||||
matchExpressions:
|
||||
- key: {{ .key }}
|
||||
operator: In
|
||||
values:
|
||||
{{- range .values }}
|
||||
- {{ . | quote }}
|
||||
{{- end }}
|
||||
weight: 1
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a hard nodeAffinity definition
|
||||
{{ include "common.affinities.nodes.hard" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.nodes.hard" -}}
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
nodeSelectorTerms:
|
||||
- matchExpressions:
|
||||
- key: {{ .key }}
|
||||
operator: In
|
||||
values:
|
||||
{{- range .values }}
|
||||
- {{ . | quote }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a nodeAffinity definition
|
||||
{{ include "common.affinities.nodes" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.nodes" -}}
|
||||
{{- if eq .type "soft" }}
|
||||
{{- include "common.affinities.nodes.soft" . -}}
|
||||
{{- else if eq .type "hard" }}
|
||||
{{- include "common.affinities.nodes.hard" . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a soft podAffinity/podAntiAffinity definition
|
||||
{{ include "common.affinities.pods.soft" (dict "component" "FOO" "extraMatchLabels" .Values.extraMatchLabels "context" $) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.pods.soft" -}}
|
||||
{{- $component := default "" .component -}}
|
||||
{{- $extraMatchLabels := default (dict) .extraMatchLabels -}}
|
||||
preferredDuringSchedulingIgnoredDuringExecution:
|
||||
- podAffinityTerm:
|
||||
labelSelector:
|
||||
matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 10 }}
|
||||
{{- if not (empty $component) }}
|
||||
{{ printf "app.kubernetes.io/component: %s" $component }}
|
||||
{{- end }}
|
||||
{{- range $key, $value := $extraMatchLabels }}
|
||||
{{ $key }}: {{ $value | quote }}
|
||||
{{- end }}
|
||||
namespaces:
|
||||
- {{ .context.Release.Namespace | quote }}
|
||||
topologyKey: kubernetes.io/hostname
|
||||
weight: 1
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a hard podAffinity/podAntiAffinity definition
|
||||
{{ include "common.affinities.pods.hard" (dict "component" "FOO" "extraMatchLabels" .Values.extraMatchLabels "context" $) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.pods.hard" -}}
|
||||
{{- $component := default "" .component -}}
|
||||
{{- $extraMatchLabels := default (dict) .extraMatchLabels -}}
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
- labelSelector:
|
||||
matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 8 }}
|
||||
{{- if not (empty $component) }}
|
||||
{{ printf "app.kubernetes.io/component: %s" $component }}
|
||||
{{- end }}
|
||||
{{- range $key, $value := $extraMatchLabels }}
|
||||
{{ $key }}: {{ $value | quote }}
|
||||
{{- end }}
|
||||
namespaces:
|
||||
- {{ .context.Release.Namespace | quote }}
|
||||
topologyKey: kubernetes.io/hostname
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a podAffinity/podAntiAffinity definition
|
||||
{{ include "common.affinities.pods" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.pods" -}}
|
||||
{{- if eq .type "soft" }}
|
||||
{{- include "common.affinities.pods.soft" . -}}
|
||||
{{- else if eq .type "hard" }}
|
||||
{{- include "common.affinities.pods.hard" . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,128 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
|
||||
{{/*
|
||||
Return the target Kubernetes version
|
||||
*/}}
|
||||
{{- define "common.capabilities.kubeVersion" -}}
|
||||
{{- if .Values.global }}
|
||||
{{- if .Values.global.kubeVersion }}
|
||||
{{- .Values.global.kubeVersion -}}
|
||||
{{- else }}
|
||||
{{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}}
|
||||
{{- end -}}
|
||||
{{- else }}
|
||||
{{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for poddisruptionbudget.
|
||||
*/}}
|
||||
{{- define "common.capabilities.policy.apiVersion" -}}
|
||||
{{- if semverCompare "<1.21-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "policy/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "policy/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for networkpolicy.
|
||||
*/}}
|
||||
{{- define "common.capabilities.networkPolicy.apiVersion" -}}
|
||||
{{- if semverCompare "<1.7-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "networking.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for cronjob.
|
||||
*/}}
|
||||
{{- define "common.capabilities.cronjob.apiVersion" -}}
|
||||
{{- if semverCompare "<1.21-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "batch/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "batch/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for deployment.
|
||||
*/}}
|
||||
{{- define "common.capabilities.deployment.apiVersion" -}}
|
||||
{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "apps/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for statefulset.
|
||||
*/}}
|
||||
{{- define "common.capabilities.statefulset.apiVersion" -}}
|
||||
{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "apps/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "apps/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for ingress.
|
||||
*/}}
|
||||
{{- define "common.capabilities.ingress.apiVersion" -}}
|
||||
{{- if .Values.ingress -}}
|
||||
{{- if .Values.ingress.apiVersion -}}
|
||||
{{- .Values.ingress.apiVersion -}}
|
||||
{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "networking.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "networking.k8s.io/v1" -}}
|
||||
{{- end }}
|
||||
{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "networking.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "networking.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for RBAC resources.
|
||||
*/}}
|
||||
{{- define "common.capabilities.rbac.apiVersion" -}}
|
||||
{{- if semverCompare "<1.17-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "rbac.authorization.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "rbac.authorization.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for CRDs.
|
||||
*/}}
|
||||
{{- define "common.capabilities.crd.apiVersion" -}}
|
||||
{{- if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "apiextensions.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "apiextensions.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Returns true if the used Helm version is 3.3+.
|
||||
A way to check the used Helm version was not introduced until version 3.3.0 with .Capabilities.HelmVersion, which contains an additional "{}}" structure.
|
||||
This check is introduced as a regexMatch instead of {{ if .Capabilities.HelmVersion }} because checking for the key HelmVersion in <3.3 results in a "interface not found" error.
|
||||
**To be removed when the catalog's minimun Helm version is 3.3**
|
||||
*/}}
|
||||
{{- define "common.capabilities.supportsHelmVersion" -}}
|
||||
{{- if regexMatch "{(v[0-9])*[^}]*}}$" (.Capabilities | toString ) }}
|
||||
{{- true -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,23 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Through error when upgrading using empty passwords values that must not be empty.
|
||||
|
||||
Usage:
|
||||
{{- $validationError00 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password00" "secret" "secretName" "field" "password-00") -}}
|
||||
{{- $validationError01 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password01" "secret" "secretName" "field" "password-01") -}}
|
||||
{{ include "common.errors.upgrade.passwords.empty" (dict "validationErrors" (list $validationError00 $validationError01) "context" $) }}
|
||||
|
||||
Required password params:
|
||||
- validationErrors - String - Required. List of validation strings to be return, if it is empty it won't throw error.
|
||||
- context - Context - Required. Parent context.
|
||||
*/}}
|
||||
{{- define "common.errors.upgrade.passwords.empty" -}}
|
||||
{{- $validationErrors := join "" .validationErrors -}}
|
||||
{{- if and $validationErrors .context.Release.IsUpgrade -}}
|
||||
{{- $errorString := "\nPASSWORDS ERROR: You must provide your current passwords when upgrading the release." -}}
|
||||
{{- $errorString = print $errorString "\n Note that even after reinstallation, old credentials may be needed as they may be kept in persistent volume claims." -}}
|
||||
{{- $errorString = print $errorString "\n Further information can be obtained at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases" -}}
|
||||
{{- $errorString = print $errorString "\n%s" -}}
|
||||
{{- printf $errorString $validationErrors | fail -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,75 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Return the proper image name
|
||||
{{ include "common.images.image" ( dict "imageRoot" .Values.path.to.the.image "global" $) }}
|
||||
*/}}
|
||||
{{- define "common.images.image" -}}
|
||||
{{- $registryName := .imageRoot.registry -}}
|
||||
{{- $repositoryName := .imageRoot.repository -}}
|
||||
{{- $tag := .imageRoot.tag | toString -}}
|
||||
{{- if .global }}
|
||||
{{- if .global.imageRegistry }}
|
||||
{{- $registryName = .global.imageRegistry -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- if $registryName }}
|
||||
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s:%s" $repositoryName $tag -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the proper Docker Image Registry Secret Names (deprecated: use common.images.renderPullSecrets instead)
|
||||
{{ include "common.images.pullSecrets" ( dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global) }}
|
||||
*/}}
|
||||
{{- define "common.images.pullSecrets" -}}
|
||||
{{- $pullSecrets := list }}
|
||||
|
||||
{{- if .global }}
|
||||
{{- range .global.imagePullSecrets -}}
|
||||
{{- $pullSecrets = append $pullSecrets . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- range .images -}}
|
||||
{{- range .pullSecrets -}}
|
||||
{{- $pullSecrets = append $pullSecrets . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if (not (empty $pullSecrets)) }}
|
||||
imagePullSecrets:
|
||||
{{- range $pullSecrets }}
|
||||
- name: {{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the proper Docker Image Registry Secret Names evaluating values as templates
|
||||
{{ include "common.images.renderPullSecrets" ( dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.images.renderPullSecrets" -}}
|
||||
{{- $pullSecrets := list }}
|
||||
{{- $context := .context }}
|
||||
|
||||
{{- if $context.Values.global }}
|
||||
{{- range $context.Values.global.imagePullSecrets -}}
|
||||
{{- $pullSecrets = append $pullSecrets (include "common.tplvalues.render" (dict "value" . "context" $context)) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- range .images -}}
|
||||
{{- range .pullSecrets -}}
|
||||
{{- $pullSecrets = append $pullSecrets (include "common.tplvalues.render" (dict "value" . "context" $context)) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if (not (empty $pullSecrets)) }}
|
||||
imagePullSecrets:
|
||||
{{- range $pullSecrets }}
|
||||
- name: {{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,68 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
|
||||
{{/*
|
||||
Generate backend entry that is compatible with all Kubernetes API versions.
|
||||
|
||||
Usage:
|
||||
{{ include "common.ingress.backend" (dict "serviceName" "backendName" "servicePort" "backendPort" "context" $) }}
|
||||
|
||||
Params:
|
||||
- serviceName - String. Name of an existing service backend
|
||||
- servicePort - String/Int. Port name (or number) of the service. It will be translated to different yaml depending if it is a string or an integer.
|
||||
- context - Dict - Required. The context for the template evaluation.
|
||||
*/}}
|
||||
{{- define "common.ingress.backend" -}}
|
||||
{{- $apiVersion := (include "common.capabilities.ingress.apiVersion" .context) -}}
|
||||
{{- if or (eq $apiVersion "extensions/v1beta1") (eq $apiVersion "networking.k8s.io/v1beta1") -}}
|
||||
serviceName: {{ .serviceName }}
|
||||
servicePort: {{ .servicePort }}
|
||||
{{- else -}}
|
||||
service:
|
||||
name: {{ .serviceName }}
|
||||
port:
|
||||
{{- if typeIs "string" .servicePort }}
|
||||
name: {{ .servicePort }}
|
||||
{{- else if or (typeIs "int" .servicePort) (typeIs "float64" .servicePort) }}
|
||||
number: {{ .servicePort | int }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Print "true" if the API pathType field is supported
|
||||
Usage:
|
||||
{{ include "common.ingress.supportsPathType" . }}
|
||||
*/}}
|
||||
{{- define "common.ingress.supportsPathType" -}}
|
||||
{{- if (semverCompare "<1.18-0" (include "common.capabilities.kubeVersion" .)) -}}
|
||||
{{- print "false" -}}
|
||||
{{- else -}}
|
||||
{{- print "true" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Returns true if the ingressClassname field is supported
|
||||
Usage:
|
||||
{{ include "common.ingress.supportsIngressClassname" . }}
|
||||
*/}}
|
||||
{{- define "common.ingress.supportsIngressClassname" -}}
|
||||
{{- if semverCompare "<1.18-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "false" -}}
|
||||
{{- else -}}
|
||||
{{- print "true" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return true if cert-manager required annotations for TLS signed
|
||||
certificates are set in the Ingress annotations
|
||||
Ref: https://cert-manager.io/docs/usage/ingress/#supported-annotations
|
||||
Usage:
|
||||
{{ include "common.ingress.certManagerRequest" ( dict "annotations" .Values.path.to.the.ingress.annotations ) }}
|
||||
*/}}
|
||||
{{- define "common.ingress.certManagerRequest" -}}
|
||||
{{ if or (hasKey .annotations "cert-manager.io/cluster-issuer") (hasKey .annotations "cert-manager.io/issuer") }}
|
||||
{{- true -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,18 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Kubernetes standard labels
|
||||
*/}}
|
||||
{{- define "common.labels.standard" -}}
|
||||
app.kubernetes.io/name: {{ include "common.names.name" . }}
|
||||
helm.sh/chart: {{ include "common.names.chart" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Labels to use on deploy.spec.selector.matchLabels and svc.spec.selector
|
||||
*/}}
|
||||
{{- define "common.labels.matchLabels" -}}
|
||||
app.kubernetes.io/name: {{ include "common.names.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,63 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Expand the name of the chart.
|
||||
*/}}
|
||||
{{- define "common.names.name" -}}
|
||||
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create chart name and version as used by the chart label.
|
||||
*/}}
|
||||
{{- define "common.names.chart" -}}
|
||||
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create a default fully qualified app name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
If release name contains chart name it will be used as a full name.
|
||||
*/}}
|
||||
{{- define "common.names.fullname" -}}
|
||||
{{- if .Values.fullnameOverride -}}
|
||||
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- $name := default .Chart.Name .Values.nameOverride -}}
|
||||
{{- if contains $name .Release.Name -}}
|
||||
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create a default fully qualified dependency name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
If release name contains chart name it will be used as a full name.
|
||||
Usage:
|
||||
{{ include "common.names.dependency.fullname" (dict "chartName" "dependency-chart-name" "chartValues" .Values.dependency-chart "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.names.dependency.fullname" -}}
|
||||
{{- if .chartValues.fullnameOverride -}}
|
||||
{{- .chartValues.fullnameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- $name := default .chartName .chartValues.nameOverride -}}
|
||||
{{- if contains $name .context.Release.Name -}}
|
||||
{{- .context.Release.Name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-%s" .context.Release.Name $name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Allow the release namespace to be overridden for multi-namespace deployments in combined charts.
|
||||
*/}}
|
||||
{{- define "common.names.namespace" -}}
|
||||
{{- if .Values.namespaceOverride -}}
|
||||
{{- .Values.namespaceOverride -}}
|
||||
{{- else -}}
|
||||
{{- .Release.Namespace -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,140 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Generate secret name.
|
||||
|
||||
Usage:
|
||||
{{ include "common.secrets.name" (dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $) }}
|
||||
|
||||
Params:
|
||||
- existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user
|
||||
to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility.
|
||||
+info: https://github.com/bitnami/charts/tree/master/bitnami/common#existingsecret
|
||||
- defaultNameSuffix - String - Optional. It is used only if we have several secrets in the same deployment.
|
||||
- context - Dict - Required. The context for the template evaluation.
|
||||
*/}}
|
||||
{{- define "common.secrets.name" -}}
|
||||
{{- $name := (include "common.names.fullname" .context) -}}
|
||||
|
||||
{{- if .defaultNameSuffix -}}
|
||||
{{- $name = printf "%s-%s" $name .defaultNameSuffix | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- with .existingSecret -}}
|
||||
{{- if not (typeIs "string" .) -}}
|
||||
{{- with .name -}}
|
||||
{{- $name = . -}}
|
||||
{{- end -}}
|
||||
{{- else -}}
|
||||
{{- $name = . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- printf "%s" $name -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Generate secret key.
|
||||
|
||||
Usage:
|
||||
{{ include "common.secrets.key" (dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName") }}
|
||||
|
||||
Params:
|
||||
- existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user
|
||||
to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility.
|
||||
+info: https://github.com/bitnami/charts/tree/master/bitnami/common#existingsecret
|
||||
- key - String - Required. Name of the key in the secret.
|
||||
*/}}
|
||||
{{- define "common.secrets.key" -}}
|
||||
{{- $key := .key -}}
|
||||
|
||||
{{- if .existingSecret -}}
|
||||
{{- if not (typeIs "string" .existingSecret) -}}
|
||||
{{- if .existingSecret.keyMapping -}}
|
||||
{{- $key = index .existingSecret.keyMapping $.key -}}
|
||||
{{- end -}}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{- printf "%s" $key -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Generate secret password or retrieve one if already created.
|
||||
|
||||
Usage:
|
||||
{{ include "common.secrets.passwords.manage" (dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $) }}
|
||||
|
||||
Params:
|
||||
- secret - String - Required - Name of the 'Secret' resource where the password is stored.
|
||||
- key - String - Required - Name of the key in the secret.
|
||||
- providedValues - List<String> - Required - The path to the validating value in the values.yaml, e.g: "mysql.password". Will pick first parameter with a defined value.
|
||||
- length - int - Optional - Length of the generated random password.
|
||||
- strong - Boolean - Optional - Whether to add symbols to the generated random password.
|
||||
- chartName - String - Optional - Name of the chart used when said chart is deployed as a subchart.
|
||||
- context - Context - Required - Parent context.
|
||||
|
||||
The order in which this function returns a secret password:
|
||||
1. Already existing 'Secret' resource
|
||||
(If a 'Secret' resource is found under the name provided to the 'secret' parameter to this function and that 'Secret' resource contains a key with the name passed as the 'key' parameter to this function then the value of this existing secret password will be returned)
|
||||
2. Password provided via the values.yaml
|
||||
(If one of the keys passed to the 'providedValues' parameter to this function is a valid path to a key in the values.yaml and has a value, the value of the first key with a value will be returned)
|
||||
3. Randomly generated secret password
|
||||
(A new random secret password with the length specified in the 'length' parameter will be generated and returned)
|
||||
|
||||
*/}}
|
||||
{{- define "common.secrets.passwords.manage" -}}
|
||||
|
||||
{{- $password := "" }}
|
||||
{{- $subchart := "" }}
|
||||
{{- $chartName := default "" .chartName }}
|
||||
{{- $passwordLength := default 10 .length }}
|
||||
{{- $providedPasswordKey := include "common.utils.getKeyFromList" (dict "keys" .providedValues "context" $.context) }}
|
||||
{{- $providedPasswordValue := include "common.utils.getValueFromKey" (dict "key" $providedPasswordKey "context" $.context) }}
|
||||
{{- $secretData := (lookup "v1" "Secret" $.context.Release.Namespace .secret).data }}
|
||||
{{- if $secretData }}
|
||||
{{- if hasKey $secretData .key }}
|
||||
{{- $password = index $secretData .key }}
|
||||
{{- else }}
|
||||
{{- printf "\nPASSWORDS ERROR: The secret \"%s\" does not contain the key \"%s\"\n" .secret .key | fail -}}
|
||||
{{- end -}}
|
||||
{{- else if $providedPasswordValue }}
|
||||
{{- $password = $providedPasswordValue | toString | b64enc | quote }}
|
||||
{{- else }}
|
||||
|
||||
{{- if .context.Values.enabled }}
|
||||
{{- $subchart = $chartName }}
|
||||
{{- end -}}
|
||||
|
||||
{{- $requiredPassword := dict "valueKey" $providedPasswordKey "secret" .secret "field" .key "subchart" $subchart "context" $.context -}}
|
||||
{{- $requiredPasswordError := include "common.validations.values.single.empty" $requiredPassword -}}
|
||||
{{- $passwordValidationErrors := list $requiredPasswordError -}}
|
||||
{{- include "common.errors.upgrade.passwords.empty" (dict "validationErrors" $passwordValidationErrors "context" $.context) -}}
|
||||
|
||||
{{- if .strong }}
|
||||
{{- $subStr := list (lower (randAlpha 1)) (randNumeric 1) (upper (randAlpha 1)) | join "_" }}
|
||||
{{- $password = randAscii $passwordLength }}
|
||||
{{- $password = regexReplaceAllLiteral "\\W" $password "@" | substr 5 $passwordLength }}
|
||||
{{- $password = printf "%s%s" $subStr $password | toString | shuffle | b64enc | quote }}
|
||||
{{- else }}
|
||||
{{- $password = randAlphaNum $passwordLength | b64enc | quote }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
{{- printf "%s" $password -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Returns whether a previous generated secret already exists
|
||||
|
||||
Usage:
|
||||
{{ include "common.secrets.exists" (dict "secret" "secret-name" "context" $) }}
|
||||
|
||||
Params:
|
||||
- secret - String - Required - Name of the 'Secret' resource where the password is stored.
|
||||
- context - Context - Required - Parent context.
|
||||
*/}}
|
||||
{{- define "common.secrets.exists" -}}
|
||||
{{- $secret := (lookup "v1" "Secret" $.context.Release.Namespace .secret) }}
|
||||
{{- if $secret }}
|
||||
{{- true -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,23 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Return the proper Storage Class
|
||||
{{ include "common.storage.class" ( dict "persistence" .Values.path.to.the.persistence "global" $) }}
|
||||
*/}}
|
||||
{{- define "common.storage.class" -}}
|
||||
|
||||
{{- $storageClass := .persistence.storageClass -}}
|
||||
{{- if .global -}}
|
||||
{{- if .global.storageClass -}}
|
||||
{{- $storageClass = .global.storageClass -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if $storageClass -}}
|
||||
{{- if (eq "-" $storageClass) -}}
|
||||
{{- printf "storageClassName: \"\"" -}}
|
||||
{{- else }}
|
||||
{{- printf "storageClassName: %s" $storageClass -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,13 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Renders a value that contains template.
|
||||
Usage:
|
||||
{{ include "common.tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.tplvalues.render" -}}
|
||||
{{- if typeIs "string" .value }}
|
||||
{{- tpl .value .context }}
|
||||
{{- else }}
|
||||
{{- tpl (.value | toYaml) .context }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,62 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Print instructions to get a secret value.
|
||||
Usage:
|
||||
{{ include "common.utils.secret.getvalue" (dict "secret" "secret-name" "field" "secret-value-field" "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.utils.secret.getvalue" -}}
|
||||
{{- $varname := include "common.utils.fieldToEnvVar" . -}}
|
||||
export {{ $varname }}=$(kubectl get secret --namespace {{ .context.Release.Namespace | quote }} {{ .secret }} -o jsonpath="{.data.{{ .field }}}" | base64 --decode)
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Build env var name given a field
|
||||
Usage:
|
||||
{{ include "common.utils.fieldToEnvVar" dict "field" "my-password" }}
|
||||
*/}}
|
||||
{{- define "common.utils.fieldToEnvVar" -}}
|
||||
{{- $fieldNameSplit := splitList "-" .field -}}
|
||||
{{- $upperCaseFieldNameSplit := list -}}
|
||||
|
||||
{{- range $fieldNameSplit -}}
|
||||
{{- $upperCaseFieldNameSplit = append $upperCaseFieldNameSplit ( upper . ) -}}
|
||||
{{- end -}}
|
||||
|
||||
{{ join "_" $upperCaseFieldNameSplit }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Gets a value from .Values given
|
||||
Usage:
|
||||
{{ include "common.utils.getValueFromKey" (dict "key" "path.to.key" "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.utils.getValueFromKey" -}}
|
||||
{{- $splitKey := splitList "." .key -}}
|
||||
{{- $value := "" -}}
|
||||
{{- $latestObj := $.context.Values -}}
|
||||
{{- range $splitKey -}}
|
||||
{{- if not $latestObj -}}
|
||||
{{- printf "please review the entire path of '%s' exists in values" $.key | fail -}}
|
||||
{{- end -}}
|
||||
{{- $value = ( index $latestObj . ) -}}
|
||||
{{- $latestObj = $value -}}
|
||||
{{- end -}}
|
||||
{{- printf "%v" (default "" $value) -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Returns first .Values key with a defined value or first of the list if all non-defined
|
||||
Usage:
|
||||
{{ include "common.utils.getKeyFromList" (dict "keys" (list "path.to.key1" "path.to.key2") "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.utils.getKeyFromList" -}}
|
||||
{{- $key := first .keys -}}
|
||||
{{- $reverseKeys := reverse .keys }}
|
||||
{{- range $reverseKeys }}
|
||||
{{- $value := include "common.utils.getValueFromKey" (dict "key" . "context" $.context ) }}
|
||||
{{- if $value -}}
|
||||
{{- $key = . }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- printf "%s" $key -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,14 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Warning about using rolling tag.
Usage:
{{ include "common.warnings.rollingTag" .Values.path.to.the.imageRoot }}
*/}}
{{- define "common.warnings.rollingTag" -}}

{{- if and (contains "bitnami/" .repository) (not (.tag | toString | regexFind "-r\\d+$|sha256:")) }}
WARNING: Rolling tag detected ({{ .repository }}:{{ .tag }}), please note that it is strongly recommended to avoid using rolling tags in a production environment.
+info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/
{{- end }}

{{- end -}}
@@ -0,0 +1,72 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Validate Cassandra required passwords are not empty.
|
||||
|
||||
Usage:
|
||||
{{ include "common.validations.values.cassandra.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
|
||||
Params:
|
||||
- secret - String - Required. Name of the secret where Cassandra values are stored, e.g: "cassandra-passwords-secret"
|
||||
- subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.validations.values.cassandra.passwords" -}}
|
||||
{{- $existingSecret := include "common.cassandra.values.existingSecret" . -}}
|
||||
{{- $enabled := include "common.cassandra.values.enabled" . -}}
|
||||
{{- $dbUserPrefix := include "common.cassandra.values.key.dbUser" . -}}
|
||||
{{- $valueKeyPassword := printf "%s.password" $dbUserPrefix -}}
|
||||
|
||||
{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") -}}
|
||||
{{- $requiredPasswords := list -}}
|
||||
|
||||
{{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "cassandra-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredPassword -}}
|
||||
|
||||
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
|
||||
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for existingSecret.
|
||||
|
||||
Usage:
|
||||
{{ include "common.cassandra.values.existingSecret" (dict "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.cassandra.values.existingSecret" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- .context.Values.cassandra.dbUser.existingSecret | quote -}}
|
||||
{{- else -}}
|
||||
{{- .context.Values.dbUser.existingSecret | quote -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for enabled cassandra.
|
||||
|
||||
Usage:
|
||||
{{ include "common.cassandra.values.enabled" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.cassandra.values.enabled" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- printf "%v" .context.Values.cassandra.enabled -}}
|
||||
{{- else -}}
|
||||
{{- printf "%v" (not .context.Values.enabled) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for the key dbUser
|
||||
|
||||
Usage:
|
||||
{{ include "common.cassandra.values.key.dbUser" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.cassandra.values.key.dbUser" -}}
|
||||
{{- if .subchart -}}
|
||||
cassandra.dbUser
|
||||
{{- else -}}
|
||||
dbUser
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,103 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Validate MariaDB required passwords are not empty.
|
||||
|
||||
Usage:
|
||||
{{ include "common.validations.values.mariadb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
|
||||
Params:
|
||||
- secret - String - Required. Name of the secret where MariaDB values are stored, e.g: "mysql-passwords-secret"
|
||||
- subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.validations.values.mariadb.passwords" -}}
|
||||
{{- $existingSecret := include "common.mariadb.values.auth.existingSecret" . -}}
|
||||
{{- $enabled := include "common.mariadb.values.enabled" . -}}
|
||||
{{- $architecture := include "common.mariadb.values.architecture" . -}}
|
||||
{{- $authPrefix := include "common.mariadb.values.key.auth" . -}}
|
||||
{{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}}
|
||||
{{- $valueKeyUsername := printf "%s.username" $authPrefix -}}
|
||||
{{- $valueKeyPassword := printf "%s.password" $authPrefix -}}
|
||||
{{- $valueKeyReplicationPassword := printf "%s.replicationPassword" $authPrefix -}}
|
||||
|
||||
{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") -}}
|
||||
{{- $requiredPasswords := list -}}
|
||||
|
||||
{{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mariadb-root-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}}
|
||||
|
||||
{{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }}
|
||||
{{- if not (empty $valueUsername) -}}
|
||||
{{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mariadb-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredPassword -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if (eq $architecture "replication") -}}
|
||||
{{- $requiredReplicationPassword := dict "valueKey" $valueKeyReplicationPassword "secret" .secret "field" "mariadb-replication-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredReplicationPassword -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
|
||||
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for existingSecret.
|
||||
|
||||
Usage:
|
||||
{{ include "common.mariadb.values.auth.existingSecret" (dict "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mariadb.values.auth.existingSecret" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- .context.Values.mariadb.auth.existingSecret | quote -}}
|
||||
{{- else -}}
|
||||
{{- .context.Values.auth.existingSecret | quote -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for enabled mariadb.
|
||||
|
||||
Usage:
|
||||
{{ include "common.mariadb.values.enabled" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.mariadb.values.enabled" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- printf "%v" .context.Values.mariadb.enabled -}}
|
||||
{{- else -}}
|
||||
{{- printf "%v" (not .context.Values.enabled) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for architecture
|
||||
|
||||
Usage:
|
||||
{{ include "common.mariadb.values.architecture" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mariadb.values.architecture" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- .context.Values.mariadb.architecture -}}
|
||||
{{- else -}}
|
||||
{{- .context.Values.architecture -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for the key auth
|
||||
|
||||
Usage:
|
||||
{{ include "common.mariadb.values.key.auth" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mariadb.values.key.auth" -}}
|
||||
{{- if .subchart -}}
|
||||
mariadb.auth
|
||||
{{- else -}}
|
||||
auth
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,108 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Validate MongoDB® required passwords are not empty.
|
||||
|
||||
Usage:
|
||||
{{ include "common.validations.values.mongodb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
|
||||
Params:
|
||||
- secret - String - Required. Name of the secret where MongoDB® values are stored, e.g: "mongodb-passwords-secret"
|
||||
- subchart - Boolean - Optional. Whether MongoDB® is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.validations.values.mongodb.passwords" -}}
|
||||
{{- $existingSecret := include "common.mongodb.values.auth.existingSecret" . -}}
|
||||
{{- $enabled := include "common.mongodb.values.enabled" . -}}
|
||||
{{- $authPrefix := include "common.mongodb.values.key.auth" . -}}
|
||||
{{- $architecture := include "common.mongodb.values.architecture" . -}}
|
||||
{{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}}
|
||||
{{- $valueKeyUsername := printf "%s.username" $authPrefix -}}
|
||||
{{- $valueKeyDatabase := printf "%s.database" $authPrefix -}}
|
||||
{{- $valueKeyPassword := printf "%s.password" $authPrefix -}}
|
||||
{{- $valueKeyReplicaSetKey := printf "%s.replicaSetKey" $authPrefix -}}
|
||||
{{- $valueKeyAuthEnabled := printf "%s.enabled" $authPrefix -}}
|
||||
|
||||
{{- $authEnabled := include "common.utils.getValueFromKey" (dict "key" $valueKeyAuthEnabled "context" .context) -}}
|
||||
|
||||
{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") (eq $authEnabled "true") -}}
|
||||
{{- $requiredPasswords := list -}}
|
||||
|
||||
{{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mongodb-root-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}}
|
||||
|
||||
{{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }}
|
||||
{{- $valueDatabase := include "common.utils.getValueFromKey" (dict "key" $valueKeyDatabase "context" .context) }}
|
||||
{{- if and $valueUsername $valueDatabase -}}
|
||||
{{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mongodb-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredPassword -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if (eq $architecture "replicaset") -}}
|
||||
{{- $requiredReplicaSetKey := dict "valueKey" $valueKeyReplicaSetKey "secret" .secret "field" "mongodb-replica-set-key" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredReplicaSetKey -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
|
||||
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for existingSecret.
|
||||
|
||||
Usage:
|
||||
{{ include "common.mongodb.values.auth.existingSecret" (dict "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MongoDb is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mongodb.values.auth.existingSecret" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- .context.Values.mongodb.auth.existingSecret | quote -}}
|
||||
{{- else -}}
|
||||
{{- .context.Values.auth.existingSecret | quote -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for enabled mongodb.
|
||||
|
||||
Usage:
|
||||
{{ include "common.mongodb.values.enabled" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.mongodb.values.enabled" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- printf "%v" .context.Values.mongodb.enabled -}}
|
||||
{{- else -}}
|
||||
{{- printf "%v" (not .context.Values.enabled) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for the key auth
|
||||
|
||||
Usage:
|
||||
{{ include "common.mongodb.values.key.auth" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MongoDB® is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mongodb.values.key.auth" -}}
|
||||
{{- if .subchart -}}
|
||||
mongodb.auth
|
||||
{{- else -}}
|
||||
auth
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for architecture
|
||||
|
||||
Usage:
|
||||
{{ include "common.mongodb.values.architecture" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether MongoDB® is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.mongodb.values.architecture" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- .context.Values.mongodb.architecture -}}
|
||||
{{- else -}}
|
||||
{{- .context.Values.architecture -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,129 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Validate PostgreSQL required passwords are not empty.
|
||||
|
||||
Usage:
|
||||
{{ include "common.validations.values.postgresql.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
|
||||
Params:
|
||||
- secret - String - Required. Name of the secret where postgresql values are stored, e.g: "postgresql-passwords-secret"
|
||||
- subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.validations.values.postgresql.passwords" -}}
|
||||
{{- $existingSecret := include "common.postgresql.values.existingSecret" . -}}
|
||||
{{- $enabled := include "common.postgresql.values.enabled" . -}}
|
||||
{{- $valueKeyPostgresqlPassword := include "common.postgresql.values.key.postgressPassword" . -}}
|
||||
{{- $valueKeyPostgresqlReplicationEnabled := include "common.postgresql.values.key.replicationPassword" . -}}
|
||||
{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") -}}
|
||||
{{- $requiredPasswords := list -}}
|
||||
{{- $requiredPostgresqlPassword := dict "valueKey" $valueKeyPostgresqlPassword "secret" .secret "field" "postgresql-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlPassword -}}
|
||||
|
||||
{{- $enabledReplication := include "common.postgresql.values.enabled.replication" . -}}
|
||||
{{- if (eq $enabledReplication "true") -}}
|
||||
{{- $requiredPostgresqlReplicationPassword := dict "valueKey" $valueKeyPostgresqlReplicationEnabled "secret" .secret "field" "postgresql-replication-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlReplicationPassword -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to decide whether to evaluate global values.
|
||||
|
||||
Usage:
|
||||
{{ include "common.postgresql.values.use.global" (dict "key" "key-of-global" "context" $) }}
|
||||
Params:
|
||||
- key - String - Required. Field to be evaluated within global, e.g: "existingSecret"
|
||||
*/}}
|
||||
{{- define "common.postgresql.values.use.global" -}}
|
||||
{{- if .context.Values.global -}}
|
||||
{{- if .context.Values.global.postgresql -}}
|
||||
{{- index .context.Values.global.postgresql .key | quote -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for existingSecret.
|
||||
|
||||
Usage:
|
||||
{{ include "common.postgresql.values.existingSecret" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.postgresql.values.existingSecret" -}}
|
||||
{{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "existingSecret" "context" .context) -}}
|
||||
|
||||
{{- if .subchart -}}
|
||||
{{- default (.context.Values.postgresql.existingSecret | quote) $globalValue -}}
|
||||
{{- else -}}
|
||||
{{- default (.context.Values.existingSecret | quote) $globalValue -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for enabled postgresql.
|
||||
|
||||
Usage:
|
||||
{{ include "common.postgresql.values.enabled" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.postgresql.values.enabled" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- printf "%v" .context.Values.postgresql.enabled -}}
|
||||
{{- else -}}
|
||||
{{- printf "%v" (not .context.Values.enabled) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for the key postgressPassword.
|
||||
|
||||
Usage:
|
||||
{{ include "common.postgresql.values.key.postgressPassword" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.postgresql.values.key.postgressPassword" -}}
|
||||
{{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "postgresqlUsername" "context" .context) -}}
|
||||
|
||||
{{- if not $globalValue -}}
|
||||
{{- if .subchart -}}
|
||||
postgresql.postgresqlPassword
|
||||
{{- else -}}
|
||||
postgresqlPassword
|
||||
{{- end -}}
|
||||
{{- else -}}
|
||||
global.postgresql.postgresqlPassword
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for enabled.replication.
|
||||
|
||||
Usage:
|
||||
{{ include "common.postgresql.values.enabled.replication" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.postgresql.values.enabled.replication" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- printf "%v" .context.Values.postgresql.replication.enabled -}}
|
||||
{{- else -}}
|
||||
{{- printf "%v" .context.Values.replication.enabled -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for the key replication.password.
|
||||
|
||||
Usage:
|
||||
{{ include "common.postgresql.values.key.replicationPassword" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.postgresql.values.key.replicationPassword" -}}
|
||||
{{- if .subchart -}}
|
||||
postgresql.replication.password
|
||||
{{- else -}}
|
||||
replication.password
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,76 @@
|
||||
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Validate Redis™ required passwords are not empty.
|
||||
|
||||
Usage:
|
||||
{{ include "common.validations.values.redis.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
|
||||
Params:
|
||||
- secret - String - Required. Name of the secret where redis values are stored, e.g: "redis-passwords-secret"
|
||||
- subchart - Boolean - Optional. Whether redis is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.validations.values.redis.passwords" -}}
|
||||
{{- $enabled := include "common.redis.values.enabled" . -}}
|
||||
{{- $valueKeyPrefix := include "common.redis.values.keys.prefix" . -}}
|
||||
{{- $standarizedVersion := include "common.redis.values.standarized.version" . }}
|
||||
|
||||
{{- $existingSecret := ternary (printf "%s%s" $valueKeyPrefix "auth.existingSecret") (printf "%s%s" $valueKeyPrefix "existingSecret") (eq $standarizedVersion "true") }}
|
||||
{{- $existingSecretValue := include "common.utils.getValueFromKey" (dict "key" $existingSecret "context" .context) }}
|
||||
|
||||
{{- $valueKeyRedisPassword := ternary (printf "%s%s" $valueKeyPrefix "auth.password") (printf "%s%s" $valueKeyPrefix "password") (eq $standarizedVersion "true") }}
|
||||
{{- $valueKeyRedisUseAuth := ternary (printf "%s%s" $valueKeyPrefix "auth.enabled") (printf "%s%s" $valueKeyPrefix "usePassword") (eq $standarizedVersion "true") }}
|
||||
|
||||
{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") -}}
|
||||
{{- $requiredPasswords := list -}}
|
||||
|
||||
{{- $useAuth := include "common.utils.getValueFromKey" (dict "key" $valueKeyRedisUseAuth "context" .context) -}}
|
||||
{{- if eq $useAuth "true" -}}
|
||||
{{- $requiredRedisPassword := dict "valueKey" $valueKeyRedisPassword "secret" .secret "field" "redis-password" -}}
|
||||
{{- $requiredPasswords = append $requiredPasswords $requiredRedisPassword -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right value for enabled redis.
|
||||
|
||||
Usage:
|
||||
{{ include "common.redis.values.enabled" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.redis.values.enabled" -}}
|
||||
{{- if .subchart -}}
|
||||
{{- printf "%v" .context.Values.redis.enabled -}}
|
||||
{{- else -}}
|
||||
{{- printf "%v" (not .context.Values.enabled) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Auxiliary function to get the right prefix path for the values
|
||||
|
||||
Usage:
|
||||
{{ include "common.redis.values.key.prefix" (dict "subchart" "true" "context" $) }}
|
||||
Params:
|
||||
- subchart - Boolean - Optional. Whether redis is used as subchart or not. Default: false
|
||||
*/}}
|
||||
{{- define "common.redis.values.keys.prefix" -}}
|
||||
{{- if .subchart -}}redis.{{- else -}}{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Checks whether the Redis chart includes the standarizations (version >= 14)
|
||||
|
||||
Usage:
|
||||
{{ include "common.redis.values.standarized.version" (dict "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.redis.values.standarized.version" -}}
|
||||
|
||||
{{- $standarizedAuth := printf "%s%s" (include "common.redis.values.keys.prefix" .) "auth" -}}
|
||||
{{- $standarizedAuthValues := include "common.utils.getValueFromKey" (dict "key" $standarizedAuth "context" .context) }}
|
||||
|
||||
{{- if $standarizedAuthValues -}}
|
||||
{{- true -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,46 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Validate values must not be empty.
|
||||
|
||||
Usage:
|
||||
{{- $validateValueConf00 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-00") -}}
|
||||
{{- $validateValueConf01 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-01") -}}
|
||||
{{ include "common.validations.values.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }}
|
||||
|
||||
Validate value params:
|
||||
- valueKey - String - Required. The path to the validating value in the values.yaml, e.g: "mysql.password"
|
||||
- secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret"
|
||||
- field - String - Optional. Name of the field in the secret data, e.g: "mysql-password"
|
||||
*/}}
|
||||
{{- define "common.validations.values.multiple.empty" -}}
|
||||
{{- range .required -}}
|
||||
{{- include "common.validations.values.single.empty" (dict "valueKey" .valueKey "secret" .secret "field" .field "context" $.context) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Validate a value must not be empty.
|
||||
|
||||
Usage:
|
||||
{{ include "common.validations.value.empty" (dict "valueKey" "mariadb.password" "secret" "secretName" "field" "my-password" "subchart" "subchart" "context" $) }}
|
||||
|
||||
Validate value params:
|
||||
- valueKey - String - Required. The path to the validating value in the values.yaml, e.g: "mysql.password"
|
||||
- secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret"
|
||||
- field - String - Optional. Name of the field in the secret data, e.g: "mysql-password"
|
||||
- subchart - String - Optional - Name of the subchart that the validated password is part of.
|
||||
*/}}
|
||||
{{- define "common.validations.values.single.empty" -}}
|
||||
{{- $value := include "common.utils.getValueFromKey" (dict "key" .valueKey "context" .context) }}
|
||||
{{- $subchart := ternary "" (printf "%s." .subchart) (empty .subchart) }}
|
||||
|
||||
{{- if not $value -}}
|
||||
{{- $varname := "my-value" -}}
|
||||
{{- $getCurrentValue := "" -}}
|
||||
{{- if and .secret .field -}}
|
||||
{{- $varname = include "common.utils.fieldToEnvVar" . -}}
|
||||
{{- $getCurrentValue = printf " To get the current value:\n\n %s\n" (include "common.utils.secret.getvalue" .) -}}
|
||||
{{- end -}}
|
||||
{{- printf "\n '%s' must not be empty, please add '--set %s%s=$%s' to the command.%s" .valueKey $subchart .valueKey $varname $getCurrentValue -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,5 @@
## bitnami/common
## It is required by CI/CD tools and processes.
## @skip exampleValue
##
exampleValue: common-chart
@@ -0,0 +1,21 @@
|
||||
# Patterns to ignore when building packages.
|
||||
# This supports shell glob matching, relative path matching, and
|
||||
# negation (prefixed with !). Only one pattern per line.
|
||||
.DS_Store
|
||||
# Common VCS dirs
|
||||
.git/
|
||||
.gitignore
|
||||
.bzr/
|
||||
.bzrignore
|
||||
.hg/
|
||||
.hgignore
|
||||
.svn/
|
||||
# Common backup files
|
||||
*.swp
|
||||
*.bak
|
||||
*.tmp
|
||||
*~
|
||||
# Various IDEs
|
||||
.project
|
||||
.idea/
|
||||
*.tmproj
|
||||
@@ -0,0 +1,6 @@
dependencies:
- name: common
  repository: https://charts.bitnami.com/bitnami
  version: 1.12.0
digest: sha256:7e484480451778c273e7a165dbfaa5594ec1c9a63a114ce9d458626cadd28893
generated: "2022-03-16T15:37:48.808974487Z"
@@ -0,0 +1,24 @@
annotations:
  category: Infrastructure
apiVersion: v2
appVersion: 3.7.0
dependencies:
- name: common
  repository: https://charts.bitnami.com/bitnami
  tags:
  - bitnami-common
  version: 1.x.x
description: Apache ZooKeeper provides a reliable, centralized register of configuration
  data and services for distributed applications.
home: https://github.com/bitnami/charts/tree/master/bitnami/zookeeper
icon: https://bitnami.com/assets/stacks/zookeeper/img/zookeeper-stack-220x234.png
keywords:
- zookeeper
maintainers:
- email: containers@bitnami.com
  name: Bitnami
name: zookeeper
sources:
- https://github.com/bitnami/bitnami-docker-zookeeper
- https://zookeeper.apache.org/
version: 8.1.2
497
deployment/helm/milvus/charts/kafka/charts/zookeeper/README.md
Normal file
497
deployment/helm/milvus/charts/kafka/charts/zookeeper/README.md
Normal file
@@ -0,0 +1,497 @@
|
||||
<!--- app-name: Apache ZooKeeper -->
|
||||
|
||||
# Apache ZooKeeper packaged by Bitnami
|
||||
|
||||
Apache ZooKeeper provides a reliable, centralized register of configuration data and services for distributed applications.
|
||||
|
||||
[Overview of Apache ZooKeeper](https://zookeeper.apache.org)
|
||||
|
||||
Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.
|
||||
|
||||
## TL;DR
|
||||
|
||||
```console
|
||||
$ helm repo add bitnami https://charts.bitnami.com/bitnami
|
||||
$ helm install my-release bitnami/zookeeper
|
||||
```
|
||||
|
||||
## Introduction
|
||||
|
||||
This chart bootstraps a [ZooKeeper](https://github.com/bitnami/bitnami-docker-zookeeper) deployment on a [Kubernetes](https://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
|
||||
|
||||
Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of [Bitnami Kubernetes Production Runtime](https://kubeprod.io/) (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Kubernetes 1.19+
|
||||
- Helm 3.2.0+
|
||||
- PV provisioner support in the underlying infrastructure
|
||||
|
||||
## Installing the Chart
|
||||
|
||||
To install the chart with the release name `my-release`:
|
||||
|
||||
```console
|
||||
$ helm repo add bitnami https://charts.bitnami.com/bitnami
|
||||
$ helm install my-release bitnami/zookeeper
|
||||
```
|
||||
|
||||
These commands deploy ZooKeeper on the Kubernetes cluster in the default configuration. The [Parameters](#parameters) section lists the parameters that can be configured during installation.
|
||||
|
||||
> **Tip**: List all releases using `helm list`
|
||||
|
||||
## Uninstalling the Chart
|
||||
|
||||
To uninstall/delete the `my-release` deployment:
|
||||
|
||||
```console
|
||||
$ helm delete my-release
|
||||
```
|
||||
|
||||
The command removes all the Kubernetes components associated with the chart and deletes the release.
|
||||
|
||||
## Parameters
|
||||
|
||||
### Global parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ------------------------- | ----------------------------------------------- | ----- |
|
||||
| `global.imageRegistry` | Global Docker image registry | `""` |
|
||||
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` |
|
||||
| `global.storageClass` | Global StorageClass for Persistent Volume(s) | `""` |
|
||||
|
||||
|
||||
### Common parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ------------------------ | -------------------------------------------------------------------------------------------- | --------------- |
|
||||
| `kubeVersion` | Override Kubernetes version | `""` |
|
||||
| `nameOverride` | String to partially override common.names.fullname template (will maintain the release name) | `""` |
|
||||
| `fullnameOverride` | String to fully override common.names.fullname template | `""` |
|
||||
| `clusterDomain` | Kubernetes Cluster Domain | `cluster.local` |
|
||||
| `extraDeploy` | Extra objects to deploy (evaluated as a template) | `[]` |
|
||||
| `commonLabels` | Add labels to all the deployed resources | `{}` |
|
||||
| `commonAnnotations` | Add annotations to all the deployed resources | `{}` |
|
||||
| `namespaceOverride` | Override namespace for ZooKeeper resources | `""` |
|
||||
| `diagnosticMode.enabled` | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | `false` |
|
||||
| `diagnosticMode.command` | Command to override all containers in the statefulset | `["sleep"]` |
|
||||
| `diagnosticMode.args` | Args to override all containers in the statefulset | `["infinity"]` |
|
||||
|
||||
|
||||
### ZooKeeper chart parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| --------------------------- | -------------------------------------------------------------------------------------------------------------------------- | ----------------------- |
|
||||
| `image.registry` | ZooKeeper image registry | `docker.io` |
|
||||
| `image.repository` | ZooKeeper image repository | `bitnami/zookeeper` |
|
||||
| `image.tag` | ZooKeeper image tag (immutable tags are recommended) | `3.7.0-debian-10-r265` |
|
||||
| `image.pullPolicy` | ZooKeeper image pull policy | `IfNotPresent` |
|
||||
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
|
||||
| `image.debug` | Specify if debug values should be set | `false` |
|
||||
| `auth.enabled` | Enable ZooKeeper auth. It uses SASL/Digest-MD5 | `false` |
|
||||
| `auth.clientUser`           | User that ZooKeeper clients will use to authenticate                                                                        | `""`                    |
|
||||
| `auth.clientPassword`       | Password that ZooKeeper clients will use to authenticate                                                                    | `""`                    |
|
||||
| `auth.serverUsers`          | Comma, semicolon or whitespace separated list of users to be created                                                        | `""`                    |
|
||||
| `auth.serverPasswords` | Comma, semicolon or whitespace separated list of passwords to assign to users when created | `""` |
|
||||
| `auth.existingSecret` | Use existing secret (ignores previous passwords) | `""` |
|
||||
| `tickTime` | Basic time unit (in milliseconds) used by ZooKeeper for heartbeats | `2000` |
|
||||
| `initLimit`                 | Limit on the length of time the ZooKeeper servers in quorum have to connect to a leader                                     | `10`                    |
|
||||
| `syncLimit` | How far out of date a server can be from a leader | `5` |
|
||||
| `preAllocSize` | Block size for transaction log file | `65536` |
|
||||
| `snapCount` | The number of transactions recorded in the transaction log before a snapshot can be taken (and the transaction log rolled) | `100000` |
|
||||
| `maxClientCnxns` | Limits the number of concurrent connections that a single client may make to a single member of the ZooKeeper ensemble | `60` |
|
||||
| `maxSessionTimeout` | Maximum session timeout (in milliseconds) that the server will allow the client to negotiate | `40000` |
|
||||
| `heapSize` | Size (in MB) for the Java Heap options (Xmx and Xms) | `1024` |
|
||||
| `fourlwCommandsWhitelist` | A list of comma separated Four Letter Words commands that can be executed | `srvr, mntr, ruok` |
|
||||
| `minServerId` | Minimal SERVER_ID value, nodes increment their IDs respectively | `1` |
|
||||
| `listenOnAllIPs` | Allow ZooKeeper to listen for connections from its peers on all available IP addresses | `false` |
|
||||
| `autopurge.snapRetainCount` | The most recent snapshots amount (and corresponding transaction logs) to retain | `3` |
|
||||
| `autopurge.purgeInterval` | The time interval (in hours) for which the purge task has to be triggered | `0` |
|
||||
| `logLevel` | Log level for the ZooKeeper server. ERROR by default | `ERROR` |
|
||||
| `jvmFlags` | Default JVM flags for the ZooKeeper process | `""` |
|
||||
| `dataLogDir` | Dedicated data log directory | `""` |
|
||||
| `configuration` | Configure ZooKeeper with a custom zoo.cfg file | `""` |
|
||||
| `existingConfigmap` | The name of an existing ConfigMap with your custom configuration for ZooKeeper | `""` |
|
||||
| `extraEnvVars` | Array with extra environment variables to add to ZooKeeper nodes | `[]` |
|
||||
| `extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for ZooKeeper nodes | `""` |
|
||||
| `extraEnvVarsSecret` | Name of existing Secret containing extra env vars for ZooKeeper nodes | `""` |
|
||||
| `command` | Override default container command (useful when using custom images) | `["/scripts/setup.sh"]` |
|
||||
| `args` | Override default container args (useful when using custom images) | `[]` |
|
||||
|
||||
|
||||
### Statefulset parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- |
|
||||
| `replicaCount` | Number of ZooKeeper nodes | `1` |
|
||||
| `containerPorts.client` | ZooKeeper client container port | `2181` |
|
||||
| `containerPorts.tls` | ZooKeeper TLS container port | `3181` |
|
||||
| `containerPorts.follower` | ZooKeeper follower container port | `2888` |
|
||||
| `containerPorts.election` | ZooKeeper election container port | `3888` |
|
||||
| `livenessProbe.enabled` | Enable livenessProbe on ZooKeeper containers | `true` |
|
||||
| `livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `30` |
|
||||
| `livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
|
||||
| `livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
|
||||
| `livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
|
||||
| `livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
|
||||
| `livenessProbe.probeCommandTimeout` | Probe command timeout for livenessProbe | `2` |
|
||||
| `readinessProbe.enabled` | Enable readinessProbe on ZooKeeper containers | `true` |
|
||||
| `readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `5` |
|
||||
| `readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
|
||||
| `readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
|
||||
| `readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
|
||||
| `readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
|
||||
| `readinessProbe.probeCommandTimeout` | Probe command timeout for readinessProbe | `2` |
|
||||
| `startupProbe.enabled` | Enable startupProbe on ZooKeeper containers | `false` |
|
||||
| `startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `30` |
|
||||
| `startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
|
||||
| `startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
|
||||
| `startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
|
||||
| `startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
|
||||
| `customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
|
||||
| `customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
|
||||
| `customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
|
||||
| `lifecycleHooks`                        | Lifecycle hooks for the ZooKeeper container(s) to automate configuration before or after startup                                                                                                  | `{}`            |
|
||||
| `resources.limits` | The resources limits for the ZooKeeper containers | `{}` |
|
||||
| `resources.requests.memory` | The requested memory for the ZooKeeper containers | `256Mi` |
|
||||
| `resources.requests.cpu` | The requested cpu for the ZooKeeper containers | `250m` |
|
||||
| `podSecurityContext.enabled` | Enabled ZooKeeper pods' Security Context | `true` |
|
||||
| `podSecurityContext.fsGroup` | Set ZooKeeper pod's Security Context fsGroup | `1001` |
|
||||
| `containerSecurityContext.enabled` | Enabled ZooKeeper containers' Security Context | `true` |
|
||||
| `containerSecurityContext.runAsUser` | Set ZooKeeper containers' Security Context runAsUser | `1001` |
|
||||
| `containerSecurityContext.runAsNonRoot` | Set ZooKeeper containers' Security Context runAsNonRoot | `true` |
|
||||
| `hostAliases` | ZooKeeper pods host aliases | `[]` |
|
||||
| `podLabels` | Extra labels for ZooKeeper pods | `{}` |
|
||||
| `podAnnotations` | Annotations for ZooKeeper pods | `{}` |
|
||||
| `podAffinityPreset` | Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
|
||||
| `podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
|
||||
| `nodeAffinityPreset.type` | Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
|
||||
| `nodeAffinityPreset.key`                | Node label key to match. Ignored if `affinity` is set.                                                                                                                                            | `""`            |
|
||||
| `nodeAffinityPreset.values` | Node label values to match. Ignored if `affinity` is set. | `[]` |
|
||||
| `affinity` | Affinity for pod assignment | `{}` |
|
||||
| `nodeSelector` | Node labels for pod assignment | `{}` |
|
||||
| `tolerations` | Tolerations for pod assignment | `[]` |
|
||||
| `topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `{}` |
|
||||
| `podManagementPolicy`                   | The StatefulSet controller supports relaxing its ordering guarantees while preserving its uniqueness and identity guarantees. There are two valid pod management policies: `OrderedReady` and `Parallel` | `Parallel`      |
|
||||
| `priorityClassName` | Name of the existing priority class to be used by ZooKeeper pods, priority class needs to be created beforehand | `""` |
|
||||
| `schedulerName` | Kubernetes pod scheduler registry | `""` |
|
||||
| `updateStrategy.type` | ZooKeeper statefulset strategy type | `RollingUpdate` |
|
||||
| `updateStrategy.rollingUpdate` | ZooKeeper statefulset rolling update configuration parameters | `{}` |
|
||||
| `extraVolumes` | Optionally specify extra list of additional volumes for the ZooKeeper pod(s) | `[]` |
|
||||
| `extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the ZooKeeper container(s) | `[]` |
|
||||
| `sidecars` | Add additional sidecar containers to the ZooKeeper pod(s) | `[]` |
|
||||
| `initContainers` | Add additional init containers to the ZooKeeper pod(s) | `[]` |
|
||||
| `pdb.create` | Deploy a pdb object for the ZooKeeper pod | `false` |
|
||||
| `pdb.minAvailable` | Minimum available ZooKeeper replicas | `""` |
|
||||
| `pdb.maxUnavailable` | Maximum unavailable ZooKeeper replicas | `1` |
|
||||
|
||||
|
||||
### Traffic Exposure parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ------------------------------------------- | --------------------------------------------------------------------------------------- | ----------- |
|
||||
| `service.type` | Kubernetes Service type | `ClusterIP` |
|
||||
| `service.ports.client` | ZooKeeper client service port | `2181` |
|
||||
| `service.ports.tls` | ZooKeeper TLS service port | `3181` |
|
||||
| `service.ports.follower` | ZooKeeper follower service port | `2888` |
|
||||
| `service.ports.election` | ZooKeeper election service port | `3888` |
|
||||
| `service.nodePorts.client` | Node port for clients | `""` |
|
||||
| `service.nodePorts.tls` | Node port for TLS | `""` |
|
||||
| `service.disableBaseClientPort` | Remove client port from service definitions. | `false` |
|
||||
| `service.sessionAffinity` | Control where client requests go, to the same pod or round-robin | `None` |
|
||||
| `service.clusterIP` | ZooKeeper service Cluster IP | `""` |
|
||||
| `service.loadBalancerIP` | ZooKeeper service Load Balancer IP | `""` |
|
||||
| `service.loadBalancerSourceRanges` | ZooKeeper service Load Balancer sources | `[]` |
|
||||
| `service.externalTrafficPolicy` | ZooKeeper service external traffic policy | `Cluster` |
|
||||
| `service.annotations` | Additional custom annotations for ZooKeeper service | `{}` |
|
||||
| `service.extraPorts` | Extra ports to expose in the ZooKeeper service (normally used with the `sidecar` value) | `[]` |
|
||||
| `service.headless.annotations` | Annotations for the Headless Service | `{}` |
|
||||
| `service.headless.publishNotReadyAddresses` | If the ZooKeeper headless service should publish DNS records for not ready pods | `true` |
|
||||
| `networkPolicy.enabled` | Specifies whether a NetworkPolicy should be created | `false` |
|
||||
| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
|
||||
|
||||
|
||||
### Other Parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| --------------------------------------------- | ---------------------------------------------------------------------- | ------- |
|
||||
| `serviceAccount.create` | Enable creation of ServiceAccount for ZooKeeper pod | `false` |
|
||||
| `serviceAccount.name` | The name of the ServiceAccount to use. | `""` |
|
||||
| `serviceAccount.automountServiceAccountToken` | Allows auto mount of ServiceAccountToken on the serviceAccount created | `true` |
|
||||
| `serviceAccount.annotations` | Additional custom annotations for the ServiceAccount | `{}` |
|
||||
|
||||
|
||||
### Persistence parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| -------------------------------------- | ------------------------------------------------------------------------------ | ------------------- |
|
||||
| `persistence.enabled` | Enable ZooKeeper data persistence using PVC. If false, use emptyDir | `true` |
|
||||
| `persistence.existingClaim` | Name of an existing PVC to use (only when deploying a single replica) | `""` |
|
||||
| `persistence.storageClass` | PVC Storage Class for ZooKeeper data volume | `""` |
|
||||
| `persistence.accessModes` | PVC Access modes | `["ReadWriteOnce"]` |
|
||||
| `persistence.size` | PVC Storage Request for ZooKeeper data volume | `8Gi` |
|
||||
| `persistence.annotations` | Annotations for the PVC | `{}` |
|
||||
| `persistence.selector` | Selector to match an existing Persistent Volume for ZooKeeper's data PVC | `{}` |
|
||||
| `persistence.dataLogDir.size` | PVC Storage Request for ZooKeeper's dedicated data log directory | `8Gi` |
|
||||
| `persistence.dataLogDir.existingClaim` | Provide an existing `PersistentVolumeClaim` for ZooKeeper's data log directory | `""` |
|
||||
| `persistence.dataLogDir.selector` | Selector to match an existing Persistent Volume for ZooKeeper's data log PVC | `{}` |
|
||||
|
||||
|
||||
### Volume Permissions parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ------------------------------------------------------ | ------------------------------------------------------------------------------- | ----------------------- |
|
||||
| `volumePermissions.enabled` | Enable init container that changes the owner and group of the persistent volume | `false` |
|
||||
| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` |
|
||||
| `volumePermissions.image.repository` | Init container volume-permissions image repository | `bitnami/bitnami-shell` |
|
||||
| `volumePermissions.image.tag` | Init container volume-permissions image tag (immutable tags are recommended) | `10-debian-10-r312` |
|
||||
| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `IfNotPresent` |
|
||||
| `volumePermissions.image.pullSecrets` | Init container volume-permissions image pull secrets | `[]` |
|
||||
| `volumePermissions.resources.limits` | Init container volume-permissions resource limits | `{}` |
|
||||
| `volumePermissions.resources.requests` | Init container volume-permissions resource requests | `{}` |
|
||||
| `volumePermissions.containerSecurityContext.runAsUser` | User ID for the init container | `0` |
|
||||
|
||||
|
||||
### Metrics parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ------------------------------------------ | ------------------------------------------------------------------------------------- | ----------- |
|
||||
| `metrics.enabled` | Enable Prometheus to access ZooKeeper metrics endpoint | `false` |
|
||||
| `metrics.containerPort` | ZooKeeper Prometheus Exporter container port | `9141` |
|
||||
| `metrics.service.type` | ZooKeeper Prometheus Exporter service type | `ClusterIP` |
|
||||
| `metrics.service.port` | ZooKeeper Prometheus Exporter service port | `9141` |
|
||||
| `metrics.service.annotations` | Annotations for Prometheus to auto-discover the metrics endpoint | `{}` |
|
||||
| `metrics.serviceMonitor.enabled` | Create ServiceMonitor Resource for scraping metrics using Prometheus Operator | `false` |
|
||||
| `metrics.serviceMonitor.namespace` | Namespace for the ServiceMonitor Resource (defaults to the Release Namespace) | `""` |
|
||||
| `metrics.serviceMonitor.interval` | Interval at which metrics should be scraped. | `""` |
|
||||
| `metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `""` |
|
||||
| `metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` |
|
||||
| `metrics.serviceMonitor.selector` | Prometheus instance selector labels | `{}` |
|
||||
| `metrics.serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping | `[]` |
|
||||
| `metrics.serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples before ingestion | `[]` |
|
||||
| `metrics.serviceMonitor.honorLabels` | Specify honorLabels parameter to add the scrape endpoint | `false` |
|
||||
| `metrics.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in prometheus. | `""` |
|
||||
| `metrics.prometheusRule.enabled` | Create a PrometheusRule for Prometheus Operator | `false` |
|
||||
| `metrics.prometheusRule.namespace` | Namespace for the PrometheusRule Resource (defaults to the Release Namespace) | `""` |
|
||||
| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so PrometheusRule will be discovered by Prometheus | `{}` |
|
||||
| `metrics.prometheusRule.rules` | PrometheusRule definitions | `[]` |
|
||||
|
||||
|
||||
### TLS/SSL parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| -------------------------------- | ----------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |
|
||||
| `tls.client.enabled` | Enable TLS for client connections | `false` |
|
||||
| `tls.client.autoGenerated` | Generate automatically self-signed TLS certificates for ZooKeeper client communications | `false` |
|
||||
| `tls.client.existingSecret` | Name of the existing secret containing the TLS certificates for ZooKeeper client communications | `""` |
|
||||
| `tls.client.keystorePath` | Location of the KeyStore file used for Client connections | `/opt/bitnami/zookeeper/config/certs/client/zookeeper.keystore.jks` |
|
||||
| `tls.client.truststorePath` | Location of the TrustStore file used for Client connections | `/opt/bitnami/zookeeper/config/certs/client/zookeeper.truststore.jks` |
|
||||
| `tls.client.passwordsSecretName` | Existing secret containing Keystore and truststore passwords | `""` |
|
||||
| `tls.client.keystorePassword` | Password to access KeyStore if needed | `""` |
|
||||
| `tls.client.truststorePassword` | Password to access TrustStore if needed | `""` |
|
||||
| `tls.quorum.enabled` | Enable TLS for quorum protocol | `false` |
|
||||
| `tls.quorum.autoGenerated` | Create self-signed TLS certificates. Currently only supports PEM certificates. | `false` |
|
||||
| `tls.quorum.existingSecret` | Name of the existing secret containing the TLS certificates for ZooKeeper quorum protocol | `""` |
|
||||
| `tls.quorum.keystorePath` | Location of the KeyStore file used for Quorum protocol | `/opt/bitnami/zookeeper/config/certs/quorum/zookeeper.keystore.jks` |
|
||||
| `tls.quorum.truststorePath` | Location of the TrustStore file used for Quorum protocol | `/opt/bitnami/zookeeper/config/certs/quorum/zookeeper.truststore.jks` |
|
||||
| `tls.quorum.passwordsSecretName` | Existing secret containing Keystore and truststore passwords | `""` |
|
||||
| `tls.quorum.keystorePassword` | Password to access KeyStore if needed | `""` |
|
||||
| `tls.quorum.truststorePassword` | Password to access TrustStore if needed | `""` |
|
||||
| `tls.resources.limits` | The resources limits for the TLS init container | `{}` |
|
||||
| `tls.resources.requests` | The requested resources for the TLS init container | `{}` |
|
||||
|
||||
|
||||
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
|
||||
|
||||
```console
|
||||
$ helm install my-release \
|
||||
--set auth.clientUser=newUser \
|
||||
bitnami/zookeeper
|
||||
```
|
||||
|
||||
The above command sets the ZooKeeper user to `newUser`.
|
||||
|
||||
> NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available.
|
||||
|
||||
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
|
||||
|
||||
```console
|
||||
$ helm install my-release -f values.yaml bitnami/zookeeper
|
||||
```
|
||||
|
||||
> **Tip**: You can use the default [values.yaml](values.yaml)
|
||||
|
||||
## Configuration and installation details
|
||||
|
||||
### [Rolling vs Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/)
|
||||
|
||||
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
|
||||
|
||||
Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.
|
||||
|
||||
### Configure log level
|
||||
|
||||
You can configure the ZooKeeper log level using the `ZOO_LOG_LEVEL` environment variable or the parameter `logLevel`. By default, it is set to `ERROR` because each use of the liveness probe and the readiness probe produces an `INFO` message on connection and a `WARN` message on disconnection, generating a high volume of noise in your logs.
|
||||
|
||||
In order to remove that log noise so levels can be set to 'INFO', two changes must be made.
|
||||
|
||||
First, ensure that you are not getting metrics via the deprecated pattern of polling 'mntr' on the ZooKeeper client port. The preferred method of polling for Apache ZooKeeper metrics is the ZooKeeper metrics server. This is supported in this chart when setting `metrics.enabled` to `true`.
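A minimal values sketch for this first change (keys come from the parameters tables above; dropping `mntr` from `fourlwCommandsWhitelist` is optional and assumes you scrape metrics only through the metrics server):

```yaml
logLevel: INFO
metrics:
  enabled: true                        # expose the ZooKeeper metrics server for Prometheus
fourlwCommandsWhitelist: srvr, ruok    # optional: stop whitelisting the deprecated mntr command
```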
|
||||
|
||||
Second, to avoid the connection/disconnection messages from the probes, you can set custom values for these checks which direct them to the ZooKeeper Admin Server instead of the client port. By default, an Admin Server will be started that listens on `localhost` at port `8080`. The following is an example of this use of the Admin Server for probes:
|
||||
|
||||
```
|
||||
livenessProbe:
|
||||
enabled: false
|
||||
readinessProbe:
|
||||
enabled: false
|
||||
customLivenessProbe:
|
||||
exec:
|
||||
command: ['/bin/bash', '-c', 'curl -s -m 2 http://localhost:8080/commands/ruok | grep ruok']
|
||||
initialDelaySeconds: 30
|
||||
periodSeconds: 10
|
||||
timeoutSeconds: 5
|
||||
successThreshold: 1
|
||||
failureThreshold: 6
|
||||
customReadinessProbe:
|
||||
exec:
|
||||
command: ['/bin/bash', '-c', 'curl -s -m 2 http://localhost:8080/commands/ruok | grep error | grep null']
|
||||
initialDelaySeconds: 5
|
||||
periodSeconds: 10
|
||||
timeoutSeconds: 5
|
||||
successThreshold: 1
|
||||
failureThreshold: 6
|
||||
```
|
||||
|
||||
You can also set the log4j logging level and which log appenders are turned on by using `ZOO_LOG4J_PROP`, which maps to `zookeeper.root.logger` inside `conf/log4j.properties` and defaults to
|
||||
|
||||
```console
|
||||
zookeeper.root.logger=INFO, CONSOLE
|
||||
```
|
||||
The available appenders are:
|
||||
|
||||
- CONSOLE
|
||||
- ROLLINGFILE
|
||||
- RFAAUDIT
|
||||
- TRACEFILE
|
||||
|
||||
## Persistence

The [Bitnami ZooKeeper](https://github.com/bitnami/bitnami-docker-zookeeper) image stores the ZooKeeper data and configurations at the `/bitnami/zookeeper` path of the container.

Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. See the [Parameters](#parameters) section to configure the PVC or to disable persistence.
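For reference, a minimal values sketch for the data PVC (keys taken from the Persistence parameters table; the size and storage class shown are only illustrative):

```yaml
persistence:
  enabled: true
  storageClass: ""       # empty string falls back to the cluster default StorageClass
  accessModes:
    - ReadWriteOnce
  size: 8Gi
```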
|
||||
|
||||
If you encounter errors when working with persistent volumes, refer to our [troubleshooting guide for persistent volumes](https://docs.bitnami.com/kubernetes/faq/troubleshooting/troubleshooting-persistence-volumes/).
|
||||
|
||||
### Adjust permissions of persistent volume mountpoint

As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data into it.

By default, the chart is configured to use a Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions.
As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.

You can enable this initContainer by setting `volumePermissions.enabled` to `true`.
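A minimal values sketch enabling that initContainer (only `volumePermissions.enabled` is strictly required; the `runAsUser` override below matches the chart default and is shown only for clarity):

```yaml
volumePermissions:
  enabled: true
  containerSecurityContext:
    runAsUser: 0   # the init container needs root to change ownership of the volume
```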
|
||||
|
||||
### Configure the data log directory

You can use a dedicated device for logs (instead of using the data directory) to help avoid contention between logging and snapshots. To do so, set the `dataLogDir` parameter with the path to be used for writing transaction logs. Alternatively, set this parameter to an empty string, which results in the log being written to the data directory (ZooKeeper's default behavior).

When using a dedicated device for logs, you can use a PVC to persist the logs. To do so, set `persistence.enabled` to `true`. See the [Persistence Parameters](#persistence-parameters) section for more information.
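For example, a hedged values sketch with a dedicated transaction log volume (the path shown is illustrative):

```yaml
dataLogDir: /bitnami/zookeeper/dataLog   # illustrative path for transaction logs
persistence:
  enabled: true
  dataLogDir:
    size: 8Gi
```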
|
||||
|
||||
### Set pod affinity

This chart allows you to set custom pod affinity using the `affinity` parameter. Find more information about pod affinity in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity).

As an alternative, you can use any of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the [bitnami/common](https://github.com/bitnami/charts/tree/master/bitnami/common#affinities) chart. To do so, set the `podAffinityPreset`, `podAntiAffinityPreset`, or `nodeAffinityPreset` parameters, as in the sketch below.
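A minimal sketch using the preset parameters (the node label used for node affinity is illustrative):

```yaml
# Illustrative values.yaml excerpt: spread replicas across nodes
podAntiAffinityPreset: soft            # use "hard" to enforce one replica per node
nodeAffinityPreset:
  type: soft
  key: "topology.kubernetes.io/zone"   # hypothetical node label
  values:
    - zone-a
    - zone-b
```
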
## Troubleshooting

Find more information about how to deal with common errors related to Bitnami's Helm charts in [this troubleshooting guide](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues).

## Upgrading

### To 8.0.0

This major release renames several values in this chart and adds missing features, in order to be in line with the rest of the assets in the Bitnami charts repository.

Affected values:

- `allowAnonymousLogin` is deprecated.
- `containerPort`, `tlsContainerPort`, `followerContainerPort` and `electionContainerPort` have been regrouped under the `containerPorts` map.
- `service.port`, `service.tlsClientPort`, `service.followerPort`, and `service.electionPort` have been regrouped under the `service.ports` map.
- `updateStrategy` (string) and `rollingUpdatePartition` are regrouped under the `updateStrategy` map.
- `podDisruptionBudget.*` parameters are renamed to `pdb.*`.

### To 7.0.0

This new version renames the parameters used to configure TLS for both client and quorum:

- `service.tls.disable_base_client_port` is renamed to `service.disableBaseClientPort`
- `service.tls.client_port` is renamed to `service.tlsClientPort`
- `service.tls.client_enable` is renamed to `tls.client.enabled`
- `service.tls.client_keystore_path` is renamed to `tls.client.keystorePath`
- `service.tls.client_truststore_path` is renamed to `tls.client.truststorePath`
- `service.tls.client_keystore_password` is renamed to `tls.client.keystorePassword`
- `service.tls.client_truststore_password` is renamed to `tls.client.truststorePassword`
- `service.tls.quorum_enable` is renamed to `tls.quorum.enabled`
- `service.tls.quorum_keystore_path` is renamed to `tls.quorum.keystorePath`
- `service.tls.quorum_truststore_path` is renamed to `tls.quorum.truststorePath`
- `service.tls.quorum_keystore_password` is renamed to `tls.quorum.keystorePassword`
- `service.tls.quorum_truststore_password` is renamed to `tls.quorum.truststorePassword`

### To 6.1.0

This version introduces `bitnami/common`, a [library chart](https://helm.sh/docs/topics/library_charts/#helm), as a dependency. More documentation about this new utility can be found [here](https://github.com/bitnami/charts/tree/master/bitnami/common#bitnami-common-library-chart). Please make sure that you have updated the chart dependencies before executing any upgrade.

### To 6.0.0

[On November 13, 2020, Helm v2 support formally ended](https://github.com/helm/charts#status-of-the-project). This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.

[Learn more about this change and related upgrade considerations](https://docs.bitnami.com/kubernetes/infrastructure/zookeeper/administration/upgrade-helm3/).

### To 5.21.0

A couple of parameters related to ZooKeeper metrics were renamed or removed in favor of new ones:

- `metrics.port` is renamed to `metrics.containerPort`.
- `metrics.annotations` is deprecated in favor of `metrics.service.annotations`.

### To 3.0.0

This new version of the chart includes the new ZooKeeper major version 3.5.5. Note that to perform an automatic upgrade of the application, each node will need to have at least one snapshot file created in the data directory. If not, the new version of the application won't be able to start the service. Please refer to [ZOOKEEPER-3056](https://issues.apache.org/jira/browse/ZOOKEEPER-3056) to find ways to work around this issue in case you are facing it.

### To 2.0.0

Backwards compatibility is not guaranteed unless you modify the labels used on the chart's statefulsets.
Use the workaround below to upgrade from versions previous to 2.0.0. The following example assumes that the release name is `zookeeper`:

```console
$ kubectl delete statefulset zookeeper-zookeeper --cascade=false
```

### To 1.0.0

Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments.
Use the workaround below to upgrade from versions previous to 1.0.0. The following example assumes that the release name is `zookeeper`:

```console
$ kubectl delete statefulset zookeeper-zookeeper --cascade=false
```

## License

Copyright © 2022 Bitnami

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -0,0 +1,22 @@

# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

@@ -0,0 +1,23 @@

annotations:
  category: Infrastructure
apiVersion: v2
appVersion: 1.12.0
description: A Library Helm Chart for grouping common logic between bitnami charts.
  This chart is not deployable by itself.
home: https://github.com/bitnami/charts/tree/master/bitnami/common
icon: https://bitnami.com/downloads/logos/bitnami-mark.png
keywords:
- common
- helper
- template
- function
- bitnami
maintainers:
- email: containers@bitnami.com
  name: Bitnami
name: common
sources:
- https://github.com/bitnami/charts
- https://www.bitnami.com/
type: library
version: 1.12.0

@@ -0,0 +1,346 @@
|
||||
# Bitnami Common Library Chart
|
||||
|
||||
A [Helm Library Chart](https://helm.sh/docs/topics/library_charts/#helm) for grouping common logic between bitnami charts.
|
||||
|
||||
## TL;DR
|
||||
|
||||
```yaml
|
||||
dependencies:
|
||||
- name: common
|
||||
version: 1.x.x
|
||||
repository: https://charts.bitnami.com/bitnami
|
||||
```
|
||||
|
||||
```bash
|
||||
$ helm dependency update
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: {{ include "common.names.fullname" . }}
|
||||
data:
|
||||
myvalue: "Hello World"
|
||||
```
|
||||
|
||||
## Introduction

This chart provides common template helpers which can be used to develop new charts using the [Helm](https://helm.sh) package manager.

Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of [Bitnami Kubernetes Production Runtime](https://kubeprod.io/) (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications.

## Prerequisites

- Kubernetes 1.19+
- Helm 3.2.0+

## Parameters

The following tables list the helpers available in the library, grouped by section.

### Affinities

| Helper identifier              | Description                                          | Expected Input                                 |
|--------------------------------|------------------------------------------------------|------------------------------------------------|
| `common.affinities.nodes.soft` | Return a soft nodeAffinity definition                | `dict "key" "FOO" "values" (list "BAR" "BAZ")` |
| `common.affinities.nodes.hard` | Return a hard nodeAffinity definition                | `dict "key" "FOO" "values" (list "BAR" "BAZ")` |
| `common.affinities.pods.soft`  | Return a soft podAffinity/podAntiAffinity definition | `dict "component" "FOO" "context" $`           |
| `common.affinities.pods.hard`  | Return a hard podAffinity/podAntiAffinity definition | `dict "component" "FOO" "context" $`           |

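For example, a consuming chart could render an anti-affinity block like this (a minimal sketch; the surrounding manifest fields are illustrative):

```yaml
# templates/statefulset.yaml excerpt (illustrative)
spec:
  template:
    spec:
      affinity:
        podAntiAffinity: {{- include "common.affinities.pods" (dict "type" "soft" "context" $) | nindent 10 }}
```
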
### Capabilities
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|------------------------------------------------|------------------------------------------------------------------------------------------------|-------------------|
|
||||
| `common.capabilities.kubeVersion` | Return the target Kubernetes version (using client default if .Values.kubeVersion is not set). | `.` Chart context |
|
||||
| `common.capabilities.cronjob.apiVersion` | Return the appropriate apiVersion for cronjob. | `.` Chart context |
|
||||
| `common.capabilities.deployment.apiVersion` | Return the appropriate apiVersion for deployment. | `.` Chart context |
|
||||
| `common.capabilities.statefulset.apiVersion` | Return the appropriate apiVersion for statefulset. | `.` Chart context |
|
||||
| `common.capabilities.ingress.apiVersion` | Return the appropriate apiVersion for ingress. | `.` Chart context |
|
||||
| `common.capabilities.rbac.apiVersion` | Return the appropriate apiVersion for RBAC resources. | `.` Chart context |
|
||||
| `common.capabilities.crd.apiVersion` | Return the appropriate apiVersion for CRDs. | `.` Chart context |
|
||||
| `common.capabilities.policy.apiVersion` | Return the appropriate apiVersion for podsecuritypolicy. | `.` Chart context |
|
||||
| `common.capabilities.networkPolicy.apiVersion` | Return the appropriate apiVersion for networkpolicy. | `.` Chart context |
|
||||
| `common.capabilities.supportsHelmVersion` | Returns true if the used Helm version is 3.3+ | `.` Chart context |
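For example, a template can pick the right `apiVersion` for the running cluster (a minimal sketch):

```yaml
# templates/deployment.yaml excerpt (illustrative)
apiVersion: {{ include "common.capabilities.deployment.apiVersion" . }}
kind: Deployment
metadata:
  name: {{ include "common.names.fullname" . }}
```
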
### Errors
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|-----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|
|
||||
| `common.errors.upgrade.passwords.empty` | It will ensure required passwords are given when we are upgrading a chart. If `validationErrors` is not empty it will throw an error and will stop the upgrade action. | `dict "validationErrors" (list $validationError00 $validationError01) "context" $` |
|
||||
|
||||
### Images
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|-----------------------------|------------------------------------------------------|---------------------------------------------------------------------------------------------------------|
|
||||
| `common.images.image` | Return the proper and full image name | `dict "imageRoot" .Values.path.to.the.image "global" $`, see [ImageRoot](#imageroot) for the structure. |
|
||||
| `common.images.pullSecrets` | Return the proper Docker Image Registry Secret Names (deprecated: use common.images.renderPullSecrets instead) | `dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global` |
|
||||
| `common.images.renderPullSecrets` | Return the proper Docker Image Registry Secret Names (evaluates values as templates) | `dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "context" $` |
|
||||
|
||||
### Ingress
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|-------------------------------------------|-------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `common.ingress.backend` | Generate a proper Ingress backend entry depending on the API version | `dict "serviceName" "foo" "servicePort" "bar"`, see the [Ingress deprecation notice](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/) for the syntax differences |
|
||||
| `common.ingress.supportsPathType` | Prints "true" if the pathType field is supported | `.` Chart context |
|
||||
| `common.ingress.supportsIngressClassname` | Prints "true" if the ingressClassname field is supported | `.` Chart context |
|
||||
| `common.ingress.certManagerRequest` | Prints "true" if required cert-manager annotations for TLS signed certificates are set in the Ingress annotations | `dict "annotations" .Values.path.to.the.ingress.annotations` |
|
||||
|
||||
### Labels
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|-----------------------------|-----------------------------------------------------------------------------|-------------------|
|
||||
| `common.labels.standard` | Return Kubernetes standard labels | `.` Chart context |
|
||||
| `common.labels.matchLabels` | Labels to use on `deploy.spec.selector.matchLabels` and `svc.spec.selector` | `.` Chart context |
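For example, a minimal sketch of a Service that labels itself and selects the chart's pods:

```yaml
# templates/svc.yaml excerpt (illustrative)
metadata:
  labels: {{- include "common.labels.standard" . | nindent 4 }}
spec:
  selector: {{- include "common.labels.matchLabels" . | nindent 4 }}
```
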
### Names
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|--------------------------|------------------------------------------------------------|-------------------|
|
||||
| `common.names.name` | Expand the name of the chart or use `.Values.nameOverride` | `.` Chart context |
|
||||
| `common.names.fullname` | Create a default fully qualified app name. | `.` Chart context |
|
||||
| `common.names.namespace` | Allow the release namespace to be overridden | `.` Chart context |
|
||||
| `common.names.chart` | Chart name plus version | `.` Chart context |
|
||||
|
||||
### Secrets
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|---------------------------|--------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `common.secrets.name` | Generate the name of the secret. | `dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $` see [ExistingSecret](#existingsecret) for the structure. |
|
||||
| `common.secrets.key` | Generate secret key. | `dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName"` see [ExistingSecret](#existingsecret) for the structure. |
|
||||
| `common.secrets.passwords.manage` | Generate secret password or retrieve one if already created. | `dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $`, length, strong and chartName fields are optional. |
|
||||
| `common.secrets.exists` | Returns whether a previous generated secret already exists. | `dict "secret" "secret-name" "context" $` |
|
||||
|
||||
### Storage
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|-------------------------------|---------------------------------------|---------------------------------------------------------------------------------------------------------------------|
|
||||
| `common.storage.class` | Return the proper Storage Class | `dict "persistence" .Values.path.to.the.persistence "global" $`, see [Persistence](#persistence) for the structure. |
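For example, inside a PVC definition (a minimal sketch; the values paths are illustrative):

```yaml
# templates/pvc.yaml excerpt (illustrative)
spec:
  accessModes:
    - {{ .Values.persistence.accessMode | quote }}
  resources:
    requests:
      storage: {{ .Values.persistence.size | quote }}
  {{ include "common.storage.class" (dict "persistence" .Values.persistence "global" .Values.global) }}
```
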
### TplValues
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|---------------------------|----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `common.tplvalues.render` | Renders a value that contains template | `dict "value" .Values.path.to.the.Value "context" $`, value is the value to be rendered as a template; context is frequently the chart context `$` or `.` |
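For example, to render user-supplied annotations that may themselves contain templates (a minimal sketch; the values path is illustrative):

```yaml
# templates/svc.yaml excerpt (illustrative)
metadata:
  annotations: {{- include "common.tplvalues.render" (dict "value" .Values.service.annotations "context" $) | nindent 4 }}
```
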
### Utils
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|--------------------------------|------------------------------------------------------------------------------------------|------------------------------------------------------------------------|
|
||||
| `common.utils.fieldToEnvVar` | Build environment variable name given a field. | `dict "field" "my-password"` |
|
||||
| `common.utils.secret.getvalue` | Print instructions to get a secret value. | `dict "secret" "secret-name" "field" "secret-value-field" "context" $` |
|
||||
| `common.utils.getValueFromKey` | Gets a value from `.Values` object given its key path | `dict "key" "path.to.key" "context" $` |
|
||||
| `common.utils.getKeyFromList` | Returns first `.Values` key with a defined value or first of the list if all non-defined | `dict "keys" (list "path.to.key1" "path.to.key2") "context" $` |
|
||||
|
||||
### Validations
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|--------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `common.validations.values.single.empty` | Validate a value must not be empty. | `dict "valueKey" "path.to.value" "secret" "secret.name" "field" "my-password" "subchart" "subchart" "context" $` secret, field and subchart are optional. In case they are given, the helper will generate a how to get instruction. See [ValidateValue](#validatevalue) |
|
||||
| `common.validations.values.multiple.empty` | Validate a multiple values must not be empty. It returns a shared error for all the values. | `dict "required" (list $validateValueConf00 $validateValueConf01) "context" $`. See [ValidateValue](#validatevalue) |
|
||||
| `common.validations.values.mariadb.passwords` | This helper will ensure required password for MariaDB are not empty. It returns a shared error for all the values. | `dict "secret" "mariadb-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use mariadb chart and the helper. |
|
||||
| `common.validations.values.postgresql.passwords` | This helper will ensure required password for PostgreSQL are not empty. It returns a shared error for all the values. | `dict "secret" "postgresql-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use postgresql chart and the helper. |
|
||||
| `common.validations.values.redis.passwords` | This helper will ensure required password for Redis™ are not empty. It returns a shared error for all the values. | `dict "secret" "redis-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use redis chart and the helper. |
|
||||
| `common.validations.values.cassandra.passwords` | This helper will ensure required password for Cassandra are not empty. It returns a shared error for all the values. | `dict "secret" "cassandra-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use cassandra chart and the helper. |
|
||||
| `common.validations.values.mongodb.passwords` | This helper will ensure required password for MongoDB® are not empty. It returns a shared error for all the values. | `dict "secret" "mongodb-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use mongodb chart and the helper. |
|
||||
|
||||
### Warnings
|
||||
|
||||
| Helper identifier | Description | Expected Input |
|
||||
|------------------------------|----------------------------------|------------------------------------------------------------|
|
||||
| `common.warnings.rollingTag` | Warning about using rolling tag. | `ImageRoot` see [ImageRoot](#imageroot) for the structure. |
|
||||
|
||||
## Special input schemas
|
||||
|
||||
### ImageRoot
|
||||
|
||||
```yaml
|
||||
registry:
|
||||
type: string
|
||||
description: Docker registry where the image is located
|
||||
example: docker.io
|
||||
|
||||
repository:
|
||||
type: string
|
||||
description: Repository and image name
|
||||
example: bitnami/nginx
|
||||
|
||||
tag:
|
||||
type: string
|
||||
description: image tag
|
||||
example: 1.16.1-debian-10-r63
|
||||
|
||||
pullPolicy:
|
||||
type: string
|
||||
description: Specify a imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
|
||||
|
||||
pullSecrets:
|
||||
type: array
|
||||
items:
|
||||
type: string
|
||||
description: Optionally specify an array of imagePullSecrets (evaluated as templates).
|
||||
|
||||
debug:
|
||||
type: boolean
|
||||
description: Set to true if you would like to see extra information on logs
|
||||
example: false
|
||||
|
||||
## An instance would be:
|
||||
# registry: docker.io
|
||||
# repository: bitnami/nginx
|
||||
# tag: 1.16.1-debian-10-r63
|
||||
# pullPolicy: IfNotPresent
|
||||
# debug: false
|
||||
```
|
||||
|
||||
### Persistence
|
||||
|
||||
```yaml
|
||||
enabled:
|
||||
type: boolean
|
||||
description: Whether to enable persistence.
|
||||
example: true
|
||||
|
||||
storageClass:
|
||||
type: string
|
||||
description: Persistent Volume Storage Class. If set to "-", storageClassName: "" is used, which disables dynamic provisioning.
|
||||
example: "-"
|
||||
|
||||
accessMode:
|
||||
type: string
|
||||
description: Access mode for the Persistent Volume Storage.
|
||||
example: ReadWriteOnce
|
||||
|
||||
size:
|
||||
type: string
|
||||
description: Size of the Persistent Volume Storage.
|
||||
example: 8Gi
|
||||
|
||||
path:
|
||||
type: string
|
||||
description: Path to be persisted.
|
||||
example: /bitnami
|
||||
|
||||
## An instance would be:
|
||||
# enabled: true
|
||||
# storageClass: "-"
|
||||
# accessMode: ReadWriteOnce
|
||||
# size: 8Gi
|
||||
# path: /bitnami
|
||||
```
|
||||
|
||||
### ExistingSecret
|
||||
|
||||
```yaml
|
||||
name:
|
||||
type: string
|
||||
description: Name of the existing secret.
|
||||
example: mySecret
|
||||
keyMapping:
|
||||
description: Mapping between the expected key name and the name of the key in the existing secret.
|
||||
type: object
|
||||
|
||||
## An instance would be:
|
||||
# name: mySecret
|
||||
# keyMapping:
|
||||
# password: myPasswordKey
|
||||
```
|
||||
|
||||
#### Example of use
|
||||
|
||||
When we store sensitive data for a deployment in a secret, we sometimes want to give users the possibility of using their existing secrets.
|
||||
|
||||
```yaml
|
||||
# templates/secret.yaml
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: {{ include "common.names.fullname" . }}
|
||||
labels:
|
||||
app: {{ include "common.names.fullname" . }}
|
||||
type: Opaque
|
||||
data:
|
||||
password: {{ .Values.password | b64enc | quote }}
|
||||
|
||||
# templates/dpl.yaml
|
||||
---
|
||||
...
|
||||
env:
|
||||
- name: PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: {{ include "common.secrets.name" (dict "existingSecret" .Values.existingSecret "context" $) }}
|
||||
key: {{ include "common.secrets.key" (dict "existingSecret" .Values.existingSecret "key" "password") }}
|
||||
...
|
||||
|
||||
# values.yaml
|
||||
---
|
||||
name: mySecret
|
||||
keyMapping:
|
||||
password: myPasswordKey
|
||||
```
|
||||
|
||||
### ValidateValue
|
||||
|
||||
#### NOTES.txt
|
||||
|
||||
```console
|
||||
{{- $validateValueConf00 := (dict "valueKey" "path.to.value00" "secret" "secretName" "field" "password-00") -}}
|
||||
{{- $validateValueConf01 := (dict "valueKey" "path.to.value01" "secret" "secretName" "field" "password-01") -}}
|
||||
|
||||
{{ include "common.validations.values.multiple.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }}
|
||||
```
|
||||
|
||||
If we force those values to be empty, we will see some alerts:
|
||||
|
||||
```console
|
||||
$ helm install test mychart --set path.to.value00="",path.to.value01=""
|
||||
'path.to.value00' must not be empty, please add '--set path.to.value00=$PASSWORD_00' to the command. To get the current value:
|
||||
|
||||
export PASSWORD_00=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-00}" | base64 --decode)
|
||||
|
||||
'path.to.value01' must not be empty, please add '--set path.to.value01=$PASSWORD_01' to the command. To get the current value:
|
||||
|
||||
export PASSWORD_01=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-01}" | base64 --decode)
|
||||
```
|
||||
|
||||
## Upgrading
|
||||
|
||||
### To 1.0.0
|
||||
|
||||
[On November 13, 2020, Helm v2 support formally ended](https://github.com/helm/charts#status-of-the-project). This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.
|
||||
|
||||
**What changes were introduced in this major version?**
|
||||
|
||||
- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3), this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field.
|
||||
- Use `type: library`. [Here](https://v3.helm.sh/docs/faq/#library-chart-support) you can find more information.
|
||||
- The different fields present in the *Chart.yaml* file have been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts
|
||||
|
||||
**Considerations when upgrading to this version**
|
||||
|
||||
- If you want to upgrade to this version from a previous one installed with Helm v3, you shouldn't face any issues
|
||||
- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore
|
||||
- If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3
|
||||
|
||||
**Useful links**
|
||||
|
||||
- https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/
|
||||
- https://helm.sh/docs/topics/v2_v3_migration/
|
||||
- https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/
|
||||
|
||||
## License
|
||||
|
||||
Copyright © 2022 Bitnami
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
@@ -0,0 +1,102 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
|
||||
{{/*
|
||||
Return a soft nodeAffinity definition
|
||||
{{ include "common.affinities.nodes.soft" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.nodes.soft" -}}
|
||||
preferredDuringSchedulingIgnoredDuringExecution:
|
||||
- preference:
|
||||
matchExpressions:
|
||||
- key: {{ .key }}
|
||||
operator: In
|
||||
values:
|
||||
{{- range .values }}
|
||||
- {{ . | quote }}
|
||||
{{- end }}
|
||||
weight: 1
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a hard nodeAffinity definition
|
||||
{{ include "common.affinities.nodes.hard" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.nodes.hard" -}}
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
nodeSelectorTerms:
|
||||
- matchExpressions:
|
||||
- key: {{ .key }}
|
||||
operator: In
|
||||
values:
|
||||
{{- range .values }}
|
||||
- {{ . | quote }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a nodeAffinity definition
|
||||
{{ include "common.affinities.nodes" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.nodes" -}}
|
||||
{{- if eq .type "soft" }}
|
||||
{{- include "common.affinities.nodes.soft" . -}}
|
||||
{{- else if eq .type "hard" }}
|
||||
{{- include "common.affinities.nodes.hard" . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a soft podAffinity/podAntiAffinity definition
|
||||
{{ include "common.affinities.pods.soft" (dict "component" "FOO" "extraMatchLabels" .Values.extraMatchLabels "context" $) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.pods.soft" -}}
|
||||
{{- $component := default "" .component -}}
|
||||
{{- $extraMatchLabels := default (dict) .extraMatchLabels -}}
|
||||
preferredDuringSchedulingIgnoredDuringExecution:
|
||||
- podAffinityTerm:
|
||||
labelSelector:
|
||||
matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 10 }}
|
||||
{{- if not (empty $component) }}
|
||||
{{ printf "app.kubernetes.io/component: %s" $component }}
|
||||
{{- end }}
|
||||
{{- range $key, $value := $extraMatchLabels }}
|
||||
{{ $key }}: {{ $value | quote }}
|
||||
{{- end }}
|
||||
namespaces:
|
||||
- {{ .context.Release.Namespace | quote }}
|
||||
topologyKey: kubernetes.io/hostname
|
||||
weight: 1
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a hard podAffinity/podAntiAffinity definition
|
||||
{{ include "common.affinities.pods.hard" (dict "component" "FOO" "extraMatchLabels" .Values.extraMatchLabels "context" $) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.pods.hard" -}}
|
||||
{{- $component := default "" .component -}}
|
||||
{{- $extraMatchLabels := default (dict) .extraMatchLabels -}}
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
- labelSelector:
|
||||
matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 8 }}
|
||||
{{- if not (empty $component) }}
|
||||
{{ printf "app.kubernetes.io/component: %s" $component }}
|
||||
{{- end }}
|
||||
{{- range $key, $value := $extraMatchLabels }}
|
||||
{{ $key }}: {{ $value | quote }}
|
||||
{{- end }}
|
||||
namespaces:
|
||||
- {{ .context.Release.Namespace | quote }}
|
||||
topologyKey: kubernetes.io/hostname
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return a podAffinity/podAntiAffinity definition
|
||||
{{ include "common.affinities.pods" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}}
|
||||
*/}}
|
||||
{{- define "common.affinities.pods" -}}
|
||||
{{- if eq .type "soft" }}
|
||||
{{- include "common.affinities.pods.soft" . -}}
|
||||
{{- else if eq .type "hard" }}
|
||||
{{- include "common.affinities.pods.hard" . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,128 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
|
||||
{{/*
|
||||
Return the target Kubernetes version
|
||||
*/}}
|
||||
{{- define "common.capabilities.kubeVersion" -}}
|
||||
{{- if .Values.global }}
|
||||
{{- if .Values.global.kubeVersion }}
|
||||
{{- .Values.global.kubeVersion -}}
|
||||
{{- else }}
|
||||
{{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}}
|
||||
{{- end -}}
|
||||
{{- else }}
|
||||
{{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for poddisruptionbudget.
|
||||
*/}}
|
||||
{{- define "common.capabilities.policy.apiVersion" -}}
|
||||
{{- if semverCompare "<1.21-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "policy/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "policy/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for networkpolicy.
|
||||
*/}}
|
||||
{{- define "common.capabilities.networkPolicy.apiVersion" -}}
|
||||
{{- if semverCompare "<1.7-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "networking.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for cronjob.
|
||||
*/}}
|
||||
{{- define "common.capabilities.cronjob.apiVersion" -}}
|
||||
{{- if semverCompare "<1.21-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "batch/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "batch/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for deployment.
|
||||
*/}}
|
||||
{{- define "common.capabilities.deployment.apiVersion" -}}
|
||||
{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "apps/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for statefulset.
|
||||
*/}}
|
||||
{{- define "common.capabilities.statefulset.apiVersion" -}}
|
||||
{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "apps/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "apps/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for ingress.
|
||||
*/}}
|
||||
{{- define "common.capabilities.ingress.apiVersion" -}}
|
||||
{{- if .Values.ingress -}}
|
||||
{{- if .Values.ingress.apiVersion -}}
|
||||
{{- .Values.ingress.apiVersion -}}
|
||||
{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "networking.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "networking.k8s.io/v1" -}}
|
||||
{{- end }}
|
||||
{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "networking.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "networking.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for RBAC resources.
|
||||
*/}}
|
||||
{{- define "common.capabilities.rbac.apiVersion" -}}
|
||||
{{- if semverCompare "<1.17-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "rbac.authorization.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "rbac.authorization.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for CRDs.
|
||||
*/}}
|
||||
{{- define "common.capabilities.crd.apiVersion" -}}
|
||||
{{- if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "apiextensions.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "apiextensions.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Returns true if the used Helm version is 3.3+.
|
||||
A way to check the used Helm version was not introduced until version 3.3.0 with .Capabilities.HelmVersion, which contains an additional "{}}" structure.
|
||||
This check is introduced as a regexMatch instead of {{ if .Capabilities.HelmVersion }} because checking for the key HelmVersion in <3.3 results in a "interface not found" error.
|
||||
**To be removed when the catalog's minimum Helm version is 3.3**
|
||||
*/}}
|
||||
{{- define "common.capabilities.supportsHelmVersion" -}}
|
||||
{{- if regexMatch "{(v[0-9])*[^}]*}}$" (.Capabilities | toString ) }}
|
||||
{{- true -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,23 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Throw an error when upgrading using empty password values that must not be empty.
|
||||
|
||||
Usage:
|
||||
{{- $validationError00 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password00" "secret" "secretName" "field" "password-00") -}}
|
||||
{{- $validationError01 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password01" "secret" "secretName" "field" "password-01") -}}
|
||||
{{ include "common.errors.upgrade.passwords.empty" (dict "validationErrors" (list $validationError00 $validationError01) "context" $) }}
|
||||
|
||||
Required password params:
|
||||
- validationErrors - String - Required. List of validation strings to be returned; if it is empty it won't throw an error.
|
||||
- context - Context - Required. Parent context.
|
||||
*/}}
|
||||
{{- define "common.errors.upgrade.passwords.empty" -}}
|
||||
{{- $validationErrors := join "" .validationErrors -}}
|
||||
{{- if and $validationErrors .context.Release.IsUpgrade -}}
|
||||
{{- $errorString := "\nPASSWORDS ERROR: You must provide your current passwords when upgrading the release." -}}
|
||||
{{- $errorString = print $errorString "\n Note that even after reinstallation, old credentials may be needed as they may be kept in persistent volume claims." -}}
|
||||
{{- $errorString = print $errorString "\n Further information can be obtained at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases" -}}
|
||||
{{- $errorString = print $errorString "\n%s" -}}
|
||||
{{- printf $errorString $validationErrors | fail -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,75 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Return the proper image name
|
||||
{{ include "common.images.image" ( dict "imageRoot" .Values.path.to.the.image "global" $) }}
|
||||
*/}}
|
||||
{{- define "common.images.image" -}}
|
||||
{{- $registryName := .imageRoot.registry -}}
|
||||
{{- $repositoryName := .imageRoot.repository -}}
|
||||
{{- $tag := .imageRoot.tag | toString -}}
|
||||
{{- if .global }}
|
||||
{{- if .global.imageRegistry }}
|
||||
{{- $registryName = .global.imageRegistry -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- if $registryName }}
|
||||
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s:%s" $repositoryName $tag -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the proper Docker Image Registry Secret Names (deprecated: use common.images.renderPullSecrets instead)
|
||||
{{ include "common.images.pullSecrets" ( dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global) }}
|
||||
*/}}
|
||||
{{- define "common.images.pullSecrets" -}}
|
||||
{{- $pullSecrets := list }}
|
||||
|
||||
{{- if .global }}
|
||||
{{- range .global.imagePullSecrets -}}
|
||||
{{- $pullSecrets = append $pullSecrets . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- range .images -}}
|
||||
{{- range .pullSecrets -}}
|
||||
{{- $pullSecrets = append $pullSecrets . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if (not (empty $pullSecrets)) }}
|
||||
imagePullSecrets:
|
||||
{{- range $pullSecrets }}
|
||||
- name: {{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the proper Docker Image Registry Secret Names evaluating values as templates
|
||||
{{ include "common.images.renderPullSecrets" ( dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.images.renderPullSecrets" -}}
|
||||
{{- $pullSecrets := list }}
|
||||
{{- $context := .context }}
|
||||
|
||||
{{- if $context.Values.global }}
|
||||
{{- range $context.Values.global.imagePullSecrets -}}
|
||||
{{- $pullSecrets = append $pullSecrets (include "common.tplvalues.render" (dict "value" . "context" $context)) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- range .images -}}
|
||||
{{- range .pullSecrets -}}
|
||||
{{- $pullSecrets = append $pullSecrets (include "common.tplvalues.render" (dict "value" . "context" $context)) -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if (not (empty $pullSecrets)) }}
|
||||
imagePullSecrets:
|
||||
{{- range $pullSecrets }}
|
||||
- name: {{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,68 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
|
||||
{{/*
|
||||
Generate backend entry that is compatible with all Kubernetes API versions.
|
||||
|
||||
Usage:
|
||||
{{ include "common.ingress.backend" (dict "serviceName" "backendName" "servicePort" "backendPort" "context" $) }}
|
||||
|
||||
Params:
|
||||
- serviceName - String. Name of an existing service backend
|
||||
- servicePort - String/Int. Port name (or number) of the service. It will be translated to different yaml depending if it is a string or an integer.
|
||||
- context - Dict - Required. The context for the template evaluation.
|
||||
*/}}
|
||||
{{- define "common.ingress.backend" -}}
|
||||
{{- $apiVersion := (include "common.capabilities.ingress.apiVersion" .context) -}}
|
||||
{{- if or (eq $apiVersion "extensions/v1beta1") (eq $apiVersion "networking.k8s.io/v1beta1") -}}
|
||||
serviceName: {{ .serviceName }}
|
||||
servicePort: {{ .servicePort }}
|
||||
{{- else -}}
|
||||
service:
|
||||
name: {{ .serviceName }}
|
||||
port:
|
||||
{{- if typeIs "string" .servicePort }}
|
||||
name: {{ .servicePort }}
|
||||
{{- else if or (typeIs "int" .servicePort) (typeIs "float64" .servicePort) }}
|
||||
number: {{ .servicePort | int }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Print "true" if the API pathType field is supported
|
||||
Usage:
|
||||
{{ include "common.ingress.supportsPathType" . }}
|
||||
*/}}
|
||||
{{- define "common.ingress.supportsPathType" -}}
|
||||
{{- if (semverCompare "<1.18-0" (include "common.capabilities.kubeVersion" .)) -}}
|
||||
{{- print "false" -}}
|
||||
{{- else -}}
|
||||
{{- print "true" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Returns true if the ingressClassname field is supported
|
||||
Usage:
|
||||
{{ include "common.ingress.supportsIngressClassname" . }}
|
||||
*/}}
|
||||
{{- define "common.ingress.supportsIngressClassname" -}}
|
||||
{{- if semverCompare "<1.18-0" (include "common.capabilities.kubeVersion" .) -}}
|
||||
{{- print "false" -}}
|
||||
{{- else -}}
|
||||
{{- print "true" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return true if cert-manager required annotations for TLS signed
|
||||
certificates are set in the Ingress annotations
|
||||
Ref: https://cert-manager.io/docs/usage/ingress/#supported-annotations
|
||||
Usage:
|
||||
{{ include "common.ingress.certManagerRequest" ( dict "annotations" .Values.path.to.the.ingress.annotations ) }}
|
||||
*/}}
|
||||
{{- define "common.ingress.certManagerRequest" -}}
|
||||
{{ if or (hasKey .annotations "cert-manager.io/cluster-issuer") (hasKey .annotations "cert-manager.io/issuer") }}
|
||||
{{- true -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,18 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Kubernetes standard labels
|
||||
*/}}
|
||||
{{- define "common.labels.standard" -}}
|
||||
app.kubernetes.io/name: {{ include "common.names.name" . }}
|
||||
helm.sh/chart: {{ include "common.names.chart" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Labels to use on deploy.spec.selector.matchLabels and svc.spec.selector
|
||||
*/}}
|
||||
{{- define "common.labels.matchLabels" -}}
|
||||
app.kubernetes.io/name: {{ include "common.names.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,63 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Expand the name of the chart.
|
||||
*/}}
|
||||
{{- define "common.names.name" -}}
|
||||
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create chart name and version as used by the chart label.
|
||||
*/}}
|
||||
{{- define "common.names.chart" -}}
|
||||
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create a default fully qualified app name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
If release name contains chart name it will be used as a full name.
|
||||
*/}}
|
||||
{{- define "common.names.fullname" -}}
|
||||
{{- if .Values.fullnameOverride -}}
|
||||
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- $name := default .Chart.Name .Values.nameOverride -}}
|
||||
{{- if contains $name .Release.Name -}}
|
||||
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create a default fully qualified dependency name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
If release name contains chart name it will be used as a full name.
|
||||
Usage:
|
||||
{{ include "common.names.dependency.fullname" (dict "chartName" "dependency-chart-name" "chartValues" .Values.dependency-chart "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.names.dependency.fullname" -}}
|
||||
{{- if .chartValues.fullnameOverride -}}
|
||||
{{- .chartValues.fullnameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- $name := default .chartName .chartValues.nameOverride -}}
|
||||
{{- if contains $name .context.Release.Name -}}
|
||||
{{- .context.Release.Name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-%s" .context.Release.Name $name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Allow the release namespace to be overridden for multi-namespace deployments in combined charts.
|
||||
*/}}
|
||||
{{- define "common.names.namespace" -}}
|
||||
{{- if .Values.namespaceOverride -}}
|
||||
{{- .Values.namespaceOverride -}}
|
||||
{{- else -}}
|
||||
{{- .Release.Namespace -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,140 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Generate secret name.
|
||||
|
||||
Usage:
|
||||
{{ include "common.secrets.name" (dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $) }}
|
||||
|
||||
Params:
|
||||
- existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user
|
||||
to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility.
|
||||
+info: https://github.com/bitnami/charts/tree/master/bitnami/common#existingsecret
|
||||
- defaultNameSuffix - String - Optional. It is used only if we have several secrets in the same deployment.
|
||||
- context - Dict - Required. The context for the template evaluation.
|
||||
*/}}
|
||||
{{- define "common.secrets.name" -}}
|
||||
{{- $name := (include "common.names.fullname" .context) -}}
|
||||
|
||||
{{- if .defaultNameSuffix -}}
|
||||
{{- $name = printf "%s-%s" $name .defaultNameSuffix | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- with .existingSecret -}}
|
||||
{{- if not (typeIs "string" .) -}}
|
||||
{{- with .name -}}
|
||||
{{- $name = . -}}
|
||||
{{- end -}}
|
||||
{{- else -}}
|
||||
{{- $name = . -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- printf "%s" $name -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Generate secret key.
|
||||
|
||||
Usage:
|
||||
{{ include "common.secrets.key" (dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName") }}
|
||||
|
||||
Params:
|
||||
- existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user
|
||||
to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility.
|
||||
+info: https://github.com/bitnami/charts/tree/master/bitnami/common#existingsecret
|
||||
- key - String - Required. Name of the key in the secret.
|
||||
*/}}
|
||||
{{- define "common.secrets.key" -}}
|
||||
{{- $key := .key -}}
|
||||
|
||||
{{- if .existingSecret -}}
|
||||
{{- if not (typeIs "string" .existingSecret) -}}
|
||||
{{- if .existingSecret.keyMapping -}}
|
||||
{{- $key = index .existingSecret.keyMapping $.key -}}
|
||||
{{- end -}}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{- printf "%s" $key -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Generate secret password or retrieve one if already created.
|
||||
|
||||
Usage:
|
||||
{{ include "common.secrets.passwords.manage" (dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $) }}
|
||||
|
||||
Params:
|
||||
- secret - String - Required - Name of the 'Secret' resource where the password is stored.
|
||||
- key - String - Required - Name of the key in the secret.
|
||||
- providedValues - List<String> - Required - The path to the validating value in the values.yaml, e.g: "mysql.password". Will pick first parameter with a defined value.
|
||||
- length - int - Optional - Length of the generated random password.
|
||||
- strong - Boolean - Optional - Whether to add symbols to the generated random password.
|
||||
- chartName - String - Optional - Name of the chart used when said chart is deployed as a subchart.
|
||||
- context - Context - Required - Parent context.
|
||||
|
||||
The order in which this function returns a secret password:
|
||||
1. Already existing 'Secret' resource
|
||||
(If a 'Secret' resource is found under the name provided to the 'secret' parameter to this function and that 'Secret' resource contains a key with the name passed as the 'key' parameter to this function then the value of this existing secret password will be returned)
|
||||
2. Password provided via the values.yaml
|
||||
(If one of the keys passed to the 'providedValues' parameter to this function is a valid path to a key in the values.yaml and has a value, the value of the first key with a value will be returned)
|
||||
3. Randomly generated secret password
|
||||
(A new random secret password with the length specified in the 'length' parameter will be generated and returned)
|
||||
|
||||
*/}}
|
||||
{{- define "common.secrets.passwords.manage" -}}
|
||||
|
||||
{{- $password := "" }}
|
||||
{{- $subchart := "" }}
|
||||
{{- $chartName := default "" .chartName }}
|
||||
{{- $passwordLength := default 10 .length }}
|
||||
{{- $providedPasswordKey := include "common.utils.getKeyFromList" (dict "keys" .providedValues "context" $.context) }}
|
||||
{{- $providedPasswordValue := include "common.utils.getValueFromKey" (dict "key" $providedPasswordKey "context" $.context) }}
|
||||
{{- $secretData := (lookup "v1" "Secret" $.context.Release.Namespace .secret).data }}
|
||||
{{- if $secretData }}
|
||||
{{- if hasKey $secretData .key }}
|
||||
{{- $password = index $secretData .key }}
|
||||
{{- else }}
|
||||
{{- printf "\nPASSWORDS ERROR: The secret \"%s\" does not contain the key \"%s\"\n" .secret .key | fail -}}
|
||||
{{- end -}}
|
||||
{{- else if $providedPasswordValue }}
|
||||
{{- $password = $providedPasswordValue | toString | b64enc | quote }}
|
||||
{{- else }}
|
||||
|
||||
{{- if .context.Values.enabled }}
|
||||
{{- $subchart = $chartName }}
|
||||
{{- end -}}
|
||||
|
||||
{{- $requiredPassword := dict "valueKey" $providedPasswordKey "secret" .secret "field" .key "subchart" $subchart "context" $.context -}}
|
||||
{{- $requiredPasswordError := include "common.validations.values.single.empty" $requiredPassword -}}
|
||||
{{- $passwordValidationErrors := list $requiredPasswordError -}}
|
||||
{{- include "common.errors.upgrade.passwords.empty" (dict "validationErrors" $passwordValidationErrors "context" $.context) -}}
|
||||
|
||||
{{- if .strong }}
|
||||
{{- $subStr := list (lower (randAlpha 1)) (randNumeric 1) (upper (randAlpha 1)) | join "_" }}
|
||||
{{- $password = randAscii $passwordLength }}
|
||||
{{- $password = regexReplaceAllLiteral "\\W" $password "@" | substr 5 $passwordLength }}
|
||||
{{- $password = printf "%s%s" $subStr $password | toString | shuffle | b64enc | quote }}
|
||||
{{- else }}
|
||||
{{- $password = randAlphaNum $passwordLength | b64enc | quote }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
{{- printf "%s" $password -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Returns whether a previous generated secret already exists
|
||||
|
||||
Usage:
|
||||
{{ include "common.secrets.exists" (dict "secret" "secret-name" "context" $) }}
|
||||
|
||||
Params:
|
||||
- secret - String - Required - Name of the 'Secret' resource where the password is stored.
|
||||
- context - Context - Required - Parent context.
|
||||
*/}}
|
||||
{{- define "common.secrets.exists" -}}
|
||||
{{- $secret := (lookup "v1" "Secret" $.context.Release.Namespace .secret) }}
|
||||
{{- if $secret }}
|
||||
{{- true -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,23 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Return the proper Storage Class
|
||||
{{ include "common.storage.class" ( dict "persistence" .Values.path.to.the.persistence "global" $) }}
|
||||
*/}}
|
||||
{{- define "common.storage.class" -}}
|
||||
|
||||
{{- $storageClass := .persistence.storageClass -}}
|
||||
{{- if .global -}}
|
||||
{{- if .global.storageClass -}}
|
||||
{{- $storageClass = .global.storageClass -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if $storageClass -}}
|
||||
{{- if (eq "-" $storageClass) -}}
|
||||
{{- printf "storageClassName: \"\"" -}}
|
||||
{{- else }}
|
||||
{{- printf "storageClassName: %s" $storageClass -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,13 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Renders a value that contains template.
|
||||
Usage:
|
||||
{{ include "common.tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $) }}
|
||||
*/}}
|
||||
{{- define "common.tplvalues.render" -}}
|
||||
{{- if typeIs "string" .value }}
|
||||
{{- tpl .value .context }}
|
||||
{{- else }}
|
||||
{{- tpl (.value | toYaml) .context }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
@@ -0,0 +1,62 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Print instructions to get a secret value.
Usage:
{{ include "common.utils.secret.getvalue" (dict "secret" "secret-name" "field" "secret-value-field" "context" $) }}
*/}}
{{- define "common.utils.secret.getvalue" -}}
{{- $varname := include "common.utils.fieldToEnvVar" . -}}
export {{ $varname }}=$(kubectl get secret --namespace {{ .context.Release.Namespace | quote }} {{ .secret }} -o jsonpath="{.data.{{ .field }}}" | base64 --decode)
{{- end -}}

{{/*
Build env var name given a field
Usage:
{{ include "common.utils.fieldToEnvVar" (dict "field" "my-password") }}
*/}}
{{- define "common.utils.fieldToEnvVar" -}}
{{- $fieldNameSplit := splitList "-" .field -}}
{{- $upperCaseFieldNameSplit := list -}}

{{- range $fieldNameSplit -}}
{{- $upperCaseFieldNameSplit = append $upperCaseFieldNameSplit ( upper . ) -}}
{{- end -}}

{{ join "_" $upperCaseFieldNameSplit }}
{{- end -}}

{{/*
Gets a value from .Values given a key path
Usage:
{{ include "common.utils.getValueFromKey" (dict "key" "path.to.key" "context" $) }}
*/}}
{{- define "common.utils.getValueFromKey" -}}
{{- $splitKey := splitList "." .key -}}
{{- $value := "" -}}
{{- $latestObj := $.context.Values -}}
{{- range $splitKey -}}
{{- if not $latestObj -}}
{{- printf "please review the entire path of '%s' exists in values" $.key | fail -}}
{{- end -}}
{{- $value = ( index $latestObj . ) -}}
{{- $latestObj = $value -}}
{{- end -}}
{{- printf "%v" (default "" $value) -}}
{{- end -}}

{{/*
Returns the first .Values key with a defined value, or the first key in the list if none are defined
Usage:
{{ include "common.utils.getKeyFromList" (dict "keys" (list "path.to.key1" "path.to.key2") "context" $) }}
*/}}
{{- define "common.utils.getKeyFromList" -}}
{{- $key := first .keys -}}
{{- $reverseKeys := reverse .keys }}
{{- range $reverseKeys }}
{{- $value := include "common.utils.getValueFromKey" (dict "key" . "context" $.context ) }}
{{- if $value -}}
{{- $key = . }}
{{- end -}}
{{- end -}}
{{- printf "%s" $key -}}
{{- end -}}
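
{{/*
Illustrative sketch (not part of the upstream file): behaviour of the helpers above under hypothetical
inputs. "common.utils.fieldToEnvVar" turns the field "mariadb-root-password" into MARIADB_ROOT_PASSWORD,
and "common.utils.secret.getvalue" then prints a ready-to-run hint such as:

export MARIADB_ROOT_PASSWORD=$(kubectl get secret --namespace "default" my-release-mariadb -o jsonpath="{.data.mariadb-root-password}" | base64 --decode)

"common.utils.getValueFromKey" with key "auth.password" walks .Values.auth.password and fails the
render if an intermediate key is missing; the secret name "my-release-mariadb" is a placeholder.
*/}}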
@@ -0,0 +1,14 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Warning about using rolling tag.
Usage:
{{ include "common.warnings.rollingTag" .Values.path.to.the.imageRoot }}
*/}}
{{- define "common.warnings.rollingTag" -}}

{{- if and (contains "bitnami/" .repository) (not (.tag | toString | regexFind "-r\\d+$|sha256:")) }}
WARNING: Rolling tag detected ({{ .repository }}:{{ .tag }}), please note that it is strongly recommended to avoid using rolling tags in a production environment.
+info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/
{{- end }}

{{- end -}}
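
{{/*
Illustrative sketch (not part of the upstream file): "common.warnings.rollingTag" is typically included
from NOTES.txt with an image root, e.g. a hypothetical .Values.image. For
{ repository: "bitnami/postgresql", tag: "latest" } the condition above matches and the warning is
printed; for an immutable tag such as "11.11.0-debian-10-r0" (which ends in "-r0") nothing is printed.

{{- include "common.warnings.rollingTag" .Values.image }}
*/}}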
@@ -0,0 +1,72 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Validate Cassandra required passwords are not empty.

Usage:
{{ include "common.validations.values.cassandra.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
Params:
  - secret - String - Required. Name of the secret where Cassandra values are stored, e.g: "cassandra-passwords-secret"
  - subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false
*/}}
{{- define "common.validations.values.cassandra.passwords" -}}
{{- $existingSecret := include "common.cassandra.values.existingSecret" . -}}
{{- $enabled := include "common.cassandra.values.enabled" . -}}
{{- $dbUserPrefix := include "common.cassandra.values.key.dbUser" . -}}
{{- $valueKeyPassword := printf "%s.password" $dbUserPrefix -}}

{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") -}}
{{- $requiredPasswords := list -}}

{{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "cassandra-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredPassword -}}

{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}

{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for existingSecret.

Usage:
{{ include "common.cassandra.values.existingSecret" (dict "context" $) }}
Params:
  - subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false
*/}}
{{- define "common.cassandra.values.existingSecret" -}}
{{- if .subchart -}}
{{- .context.Values.cassandra.dbUser.existingSecret | quote -}}
{{- else -}}
{{- .context.Values.dbUser.existingSecret | quote -}}
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for enabled cassandra.

Usage:
{{ include "common.cassandra.values.enabled" (dict "context" $) }}
*/}}
{{- define "common.cassandra.values.enabled" -}}
{{- if .subchart -}}
{{- printf "%v" .context.Values.cassandra.enabled -}}
{{- else -}}
{{- printf "%v" (not .context.Values.enabled) -}}
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for the key dbUser

Usage:
{{ include "common.cassandra.values.key.dbUser" (dict "subchart" "true" "context" $) }}
Params:
  - subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false
*/}}
{{- define "common.cassandra.values.key.dbUser" -}}
{{- if .subchart -}}
cassandra.dbUser
{{- else -}}
dbUser
{{- end -}}
{{- end -}}
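
{{/*
Illustrative sketch (not part of the upstream file): a parent chart bundling Cassandra as a subchart
could invoke the validation like this (the secret name "my-release-cassandra" is a placeholder):

{{- include "common.validations.values.cassandra.passwords" (dict "secret" "my-release-cassandra" "subchart" true "context" $) }}

With cassandra.enabled=true, no existingSecret and an empty cassandra.dbUser.password, the helper emits
a message asking to add '--set cassandra.dbUser.password=$CASSANDRA_PASSWORD'.
*/}}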
@@ -0,0 +1,103 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Validate MariaDB required passwords are not empty.

Usage:
{{ include "common.validations.values.mariadb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
Params:
  - secret - String - Required. Name of the secret where MariaDB values are stored, e.g: "mysql-passwords-secret"
  - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
*/}}
{{- define "common.validations.values.mariadb.passwords" -}}
{{- $existingSecret := include "common.mariadb.values.auth.existingSecret" . -}}
{{- $enabled := include "common.mariadb.values.enabled" . -}}
{{- $architecture := include "common.mariadb.values.architecture" . -}}
{{- $authPrefix := include "common.mariadb.values.key.auth" . -}}
{{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}}
{{- $valueKeyUsername := printf "%s.username" $authPrefix -}}
{{- $valueKeyPassword := printf "%s.password" $authPrefix -}}
{{- $valueKeyReplicationPassword := printf "%s.replicationPassword" $authPrefix -}}

{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") -}}
{{- $requiredPasswords := list -}}

{{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mariadb-root-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}}

{{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }}
{{- if not (empty $valueUsername) -}}
{{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mariadb-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredPassword -}}
{{- end -}}

{{- if (eq $architecture "replication") -}}
{{- $requiredReplicationPassword := dict "valueKey" $valueKeyReplicationPassword "secret" .secret "field" "mariadb-replication-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredReplicationPassword -}}
{{- end -}}

{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}

{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for existingSecret.

Usage:
{{ include "common.mariadb.values.auth.existingSecret" (dict "context" $) }}
Params:
  - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
*/}}
{{- define "common.mariadb.values.auth.existingSecret" -}}
{{- if .subchart -}}
{{- .context.Values.mariadb.auth.existingSecret | quote -}}
{{- else -}}
{{- .context.Values.auth.existingSecret | quote -}}
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for enabled mariadb.

Usage:
{{ include "common.mariadb.values.enabled" (dict "context" $) }}
*/}}
{{- define "common.mariadb.values.enabled" -}}
{{- if .subchart -}}
{{- printf "%v" .context.Values.mariadb.enabled -}}
{{- else -}}
{{- printf "%v" (not .context.Values.enabled) -}}
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for architecture

Usage:
{{ include "common.mariadb.values.architecture" (dict "subchart" "true" "context" $) }}
Params:
  - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
*/}}
{{- define "common.mariadb.values.architecture" -}}
{{- if .subchart -}}
{{- .context.Values.mariadb.architecture -}}
{{- else -}}
{{- .context.Values.architecture -}}
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for the key auth

Usage:
{{ include "common.mariadb.values.key.auth" (dict "subchart" "true" "context" $) }}
Params:
  - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
*/}}
{{- define "common.mariadb.values.key.auth" -}}
{{- if .subchart -}}
mariadb.auth
{{- else -}}
auth
{{- end -}}
{{- end -}}
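
{{/*
Illustrative sketch (not part of the upstream file): the MariaDB variant follows the same pattern. With
subchart=true, architecture "replication", a non-empty mariadb.auth.username and no existing secret, the
call below checks mariadb.auth.rootPassword, mariadb.auth.password and mariadb.auth.replicationPassword
against the fields mariadb-root-password, mariadb-password and mariadb-replication-password of the given
secret (the name "my-release-mariadb" is a placeholder).

{{- include "common.validations.values.mariadb.passwords" (dict "secret" "my-release-mariadb" "subchart" true "context" $) }}
*/}}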
@@ -0,0 +1,108 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Validate MongoDB® required passwords are not empty.

Usage:
{{ include "common.validations.values.mongodb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
Params:
  - secret - String - Required. Name of the secret where MongoDB® values are stored, e.g: "mongodb-passwords-secret"
  - subchart - Boolean - Optional. Whether MongoDB® is used as subchart or not. Default: false
*/}}
{{- define "common.validations.values.mongodb.passwords" -}}
{{- $existingSecret := include "common.mongodb.values.auth.existingSecret" . -}}
{{- $enabled := include "common.mongodb.values.enabled" . -}}
{{- $authPrefix := include "common.mongodb.values.key.auth" . -}}
{{- $architecture := include "common.mongodb.values.architecture" . -}}
{{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}}
{{- $valueKeyUsername := printf "%s.username" $authPrefix -}}
{{- $valueKeyDatabase := printf "%s.database" $authPrefix -}}
{{- $valueKeyPassword := printf "%s.password" $authPrefix -}}
{{- $valueKeyReplicaSetKey := printf "%s.replicaSetKey" $authPrefix -}}
{{- $valueKeyAuthEnabled := printf "%s.enabled" $authPrefix -}}

{{- $authEnabled := include "common.utils.getValueFromKey" (dict "key" $valueKeyAuthEnabled "context" .context) -}}

{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") (eq $authEnabled "true") -}}
{{- $requiredPasswords := list -}}

{{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mongodb-root-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}}

{{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }}
{{- $valueDatabase := include "common.utils.getValueFromKey" (dict "key" $valueKeyDatabase "context" .context) }}
{{- if and $valueUsername $valueDatabase -}}
{{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mongodb-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredPassword -}}
{{- end -}}

{{- if (eq $architecture "replicaset") -}}
{{- $requiredReplicaSetKey := dict "valueKey" $valueKeyReplicaSetKey "secret" .secret "field" "mongodb-replica-set-key" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredReplicaSetKey -}}
{{- end -}}

{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}

{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for existingSecret.

Usage:
{{ include "common.mongodb.values.auth.existingSecret" (dict "context" $) }}
Params:
  - subchart - Boolean - Optional. Whether MongoDB® is used as subchart or not. Default: false
*/}}
{{- define "common.mongodb.values.auth.existingSecret" -}}
{{- if .subchart -}}
{{- .context.Values.mongodb.auth.existingSecret | quote -}}
{{- else -}}
{{- .context.Values.auth.existingSecret | quote -}}
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for enabled mongodb.

Usage:
{{ include "common.mongodb.values.enabled" (dict "context" $) }}
*/}}
{{- define "common.mongodb.values.enabled" -}}
{{- if .subchart -}}
{{- printf "%v" .context.Values.mongodb.enabled -}}
{{- else -}}
{{- printf "%v" (not .context.Values.enabled) -}}
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for the key auth

Usage:
{{ include "common.mongodb.values.key.auth" (dict "subchart" "true" "context" $) }}
Params:
  - subchart - Boolean - Optional. Whether MongoDB® is used as subchart or not. Default: false
*/}}
{{- define "common.mongodb.values.key.auth" -}}
{{- if .subchart -}}
mongodb.auth
{{- else -}}
auth
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for architecture

Usage:
{{ include "common.mongodb.values.architecture" (dict "subchart" "true" "context" $) }}
Params:
  - subchart - Boolean - Optional. Whether MongoDB® is used as subchart or not. Default: false
*/}}
{{- define "common.mongodb.values.architecture" -}}
{{- if .subchart -}}
{{- .context.Values.mongodb.architecture -}}
{{- else -}}
{{- .context.Values.architecture -}}
{{- end -}}
{{- end -}}
@@ -0,0 +1,129 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Validate PostgreSQL required passwords are not empty.

Usage:
{{ include "common.validations.values.postgresql.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
Params:
  - secret - String - Required. Name of the secret where postgresql values are stored, e.g: "postgresql-passwords-secret"
  - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
*/}}
{{- define "common.validations.values.postgresql.passwords" -}}
{{- $existingSecret := include "common.postgresql.values.existingSecret" . -}}
{{- $enabled := include "common.postgresql.values.enabled" . -}}
{{- $valueKeyPostgresqlPassword := include "common.postgresql.values.key.postgressPassword" . -}}
{{- $valueKeyPostgresqlReplicationEnabled := include "common.postgresql.values.key.replicationPassword" . -}}
{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") -}}
{{- $requiredPasswords := list -}}
{{- $requiredPostgresqlPassword := dict "valueKey" $valueKeyPostgresqlPassword "secret" .secret "field" "postgresql-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlPassword -}}

{{- $enabledReplication := include "common.postgresql.values.enabled.replication" . -}}
{{- if (eq $enabledReplication "true") -}}
{{- $requiredPostgresqlReplicationPassword := dict "valueKey" $valueKeyPostgresqlReplicationEnabled "secret" .secret "field" "postgresql-replication-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlReplicationPassword -}}
{{- end -}}

{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to decide whether to evaluate global values.

Usage:
{{ include "common.postgresql.values.use.global" (dict "key" "key-of-global" "context" $) }}
Params:
  - key - String - Required. Field to be evaluated within global, e.g: "existingSecret"
*/}}
{{- define "common.postgresql.values.use.global" -}}
{{- if .context.Values.global -}}
{{- if .context.Values.global.postgresql -}}
{{- index .context.Values.global.postgresql .key | quote -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for existingSecret.

Usage:
{{ include "common.postgresql.values.existingSecret" (dict "context" $) }}
*/}}
{{- define "common.postgresql.values.existingSecret" -}}
{{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "existingSecret" "context" .context) -}}

{{- if .subchart -}}
{{- default (.context.Values.postgresql.existingSecret | quote) $globalValue -}}
{{- else -}}
{{- default (.context.Values.existingSecret | quote) $globalValue -}}
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for enabled postgresql.

Usage:
{{ include "common.postgresql.values.enabled" (dict "context" $) }}
*/}}
{{- define "common.postgresql.values.enabled" -}}
{{- if .subchart -}}
{{- printf "%v" .context.Values.postgresql.enabled -}}
{{- else -}}
{{- printf "%v" (not .context.Values.enabled) -}}
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for the key postgressPassword.

Usage:
{{ include "common.postgresql.values.key.postgressPassword" (dict "subchart" "true" "context" $) }}
Params:
  - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
*/}}
{{- define "common.postgresql.values.key.postgressPassword" -}}
{{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "postgresqlUsername" "context" .context) -}}

{{- if not $globalValue -}}
{{- if .subchart -}}
postgresql.postgresqlPassword
{{- else -}}
postgresqlPassword
{{- end -}}
{{- else -}}
global.postgresql.postgresqlPassword
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for enabled.replication.

Usage:
{{ include "common.postgresql.values.enabled.replication" (dict "subchart" "true" "context" $) }}
Params:
  - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
*/}}
{{- define "common.postgresql.values.enabled.replication" -}}
{{- if .subchart -}}
{{- printf "%v" .context.Values.postgresql.replication.enabled -}}
{{- else -}}
{{- printf "%v" .context.Values.replication.enabled -}}
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for the key replication.password.

Usage:
{{ include "common.postgresql.values.key.replicationPassword" (dict "subchart" "true" "context" $) }}
Params:
  - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
*/}}
{{- define "common.postgresql.values.key.replicationPassword" -}}
{{- if .subchart -}}
postgresql.replication.password
{{- else -}}
replication.password
{{- end -}}
{{- end -}}
@@ -0,0 +1,76 @@

{{/* vim: set filetype=mustache: */}}
{{/*
Validate Redis™ required passwords are not empty.

Usage:
{{ include "common.validations.values.redis.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
Params:
  - secret - String - Required. Name of the secret where redis values are stored, e.g: "redis-passwords-secret"
  - subchart - Boolean - Optional. Whether redis is used as subchart or not. Default: false
*/}}
{{- define "common.validations.values.redis.passwords" -}}
{{- $enabled := include "common.redis.values.enabled" . -}}
{{- $valueKeyPrefix := include "common.redis.values.keys.prefix" . -}}
{{- $standarizedVersion := include "common.redis.values.standarized.version" . }}

{{- $existingSecret := ternary (printf "%s%s" $valueKeyPrefix "auth.existingSecret") (printf "%s%s" $valueKeyPrefix "existingSecret") (eq $standarizedVersion "true") }}
{{- $existingSecretValue := include "common.utils.getValueFromKey" (dict "key" $existingSecret "context" .context) }}

{{- $valueKeyRedisPassword := ternary (printf "%s%s" $valueKeyPrefix "auth.password") (printf "%s%s" $valueKeyPrefix "password") (eq $standarizedVersion "true") }}
{{- $valueKeyRedisUseAuth := ternary (printf "%s%s" $valueKeyPrefix "auth.enabled") (printf "%s%s" $valueKeyPrefix "usePassword") (eq $standarizedVersion "true") }}

{{- if and (or (not $existingSecret) (eq $existingSecret "\"\"")) (eq $enabled "true") -}}
{{- $requiredPasswords := list -}}

{{- $useAuth := include "common.utils.getValueFromKey" (dict "key" $valueKeyRedisUseAuth "context" .context) -}}
{{- if eq $useAuth "true" -}}
{{- $requiredRedisPassword := dict "valueKey" $valueKeyRedisPassword "secret" .secret "field" "redis-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredRedisPassword -}}
{{- end -}}

{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right value for enabled redis.

Usage:
{{ include "common.redis.values.enabled" (dict "context" $) }}
*/}}
{{- define "common.redis.values.enabled" -}}
{{- if .subchart -}}
{{- printf "%v" .context.Values.redis.enabled -}}
{{- else -}}
{{- printf "%v" (not .context.Values.enabled) -}}
{{- end -}}
{{- end -}}

{{/*
Auxiliary function to get the right prefix path for the values

Usage:
{{ include "common.redis.values.keys.prefix" (dict "subchart" "true" "context" $) }}
Params:
  - subchart - Boolean - Optional. Whether redis is used as subchart or not. Default: false
*/}}
{{- define "common.redis.values.keys.prefix" -}}
{{- if .subchart -}}redis.{{- else -}}{{- end -}}
{{- end -}}

{{/*
Checks whether the redis chart includes the standardizations (version >= 14)

Usage:
{{ include "common.redis.values.standarized.version" (dict "context" $) }}
*/}}
{{- define "common.redis.values.standarized.version" -}}

{{- $standarizedAuth := printf "%s%s" (include "common.redis.values.keys.prefix" .) "auth" -}}
{{- $standarizedAuthValues := include "common.utils.getValueFromKey" (dict "key" $standarizedAuth "context" .context) }}

{{- if $standarizedAuthValues -}}
{{- true -}}
{{- end -}}
{{- end -}}
@@ -0,0 +1,46 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Validate values must not be empty.

Usage:
{{- $validateValueConf00 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-00") -}}
{{- $validateValueConf01 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-01") -}}
{{ include "common.validations.values.multiple.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }}

Validate value params:
  - valueKey - String - Required. The path to the validating value in the values.yaml, e.g: "mysql.password"
  - secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret"
  - field - String - Optional. Name of the field in the secret data, e.g: "mysql-password"
*/}}
{{- define "common.validations.values.multiple.empty" -}}
{{- range .required -}}
{{- include "common.validations.values.single.empty" (dict "valueKey" .valueKey "secret" .secret "field" .field "context" $.context) -}}
{{- end -}}
{{- end -}}

{{/*
Validate a value must not be empty.

Usage:
{{ include "common.validations.values.single.empty" (dict "valueKey" "mariadb.password" "secret" "secretName" "field" "my-password" "subchart" "subchart" "context" $) }}

Validate value params:
  - valueKey - String - Required. The path to the validating value in the values.yaml, e.g: "mysql.password"
  - secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret"
  - field - String - Optional. Name of the field in the secret data, e.g: "mysql-password"
  - subchart - String - Optional - Name of the subchart that the validated password is part of.
*/}}
{{- define "common.validations.values.single.empty" -}}
{{- $value := include "common.utils.getValueFromKey" (dict "key" .valueKey "context" .context) }}
{{- $subchart := ternary "" (printf "%s." .subchart) (empty .subchart) }}

{{- if not $value -}}
{{- $varname := "my-value" -}}
{{- $getCurrentValue := "" -}}
{{- if and .secret .field -}}
{{- $varname = include "common.utils.fieldToEnvVar" . -}}
{{- $getCurrentValue = printf " To get the current value:\n\n %s\n" (include "common.utils.secret.getvalue" .) -}}
{{- end -}}
{{- printf "\n '%s' must not be empty, please add '--set %s%s=$%s' to the command.%s" .valueKey $subchart .valueKey $varname $getCurrentValue -}}
{{- end -}}
{{- end -}}
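
{{/*
Illustrative sketch (not part of the upstream file): when "common.validations.values.single.empty" is
called with valueKey "auth.password", secret "my-release-db" and field "db-password" and the value is
empty, it emits roughly:

 'auth.password' must not be empty, please add '--set auth.password=$DB_PASSWORD' to the command. To get the current value:

 export DB_PASSWORD=$(kubectl get secret --namespace "default" my-release-db -o jsonpath="{.data.db-password}" | base64 --decode)

All names above are placeholders.
*/}}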
@@ -0,0 +1,5 @@
## bitnami/common
## It is required by CI/CD tools and processes.
## @skip exampleValue
##
exampleValue: common-chart