- Steps to map v2 to v3 state
- Update current v3 implementation details
- Add steps to map release to another namespace
Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
- Add a schema to values, which enables value validation at runtime (install/upgrade, etc.)
- Helm install/set-up is simplified:
- Helm client (helm) only (no tiller)
- No Helm initialization and no Tiller installation
- Run-as-is paradigm
- Commands removed/replaced/added:
- chart: command consists of multiple subcommands to interact with charts and registries. Subcommands as follows:
- save: save a chart directory
- delete --> uninstall : removes all release history by default (previously needed `--purge`)
- fetch --> pull
- init (removed?)
- install: requires release name or `--generate-name` argument
- inspect --> show
- registry: login to or logout from a registry
The migration use cases are as follows:
1. Running Helm v2 and v3 concurrently on the same cluster:
- v2 and v3 history/state are independent of each other. v2 uses "ConfigMaps" under the Tiller namespace with `TILLER` ownership. v3 uses "Secrets" in the user namespace with `helm` ownership. There should be no conflicts. Releases are incremental in both v2 and v3.
- The only issue could be if Kubernetes cluster-scoped resources (e.g. `clusterroles.rbac`) are defined in a chart. The v3 deployment would then fail even if unique in the namespace, as the resources would clash.
- Make sure not to override the v2 client binary (`helm`). Rename it or use a separate directory for one of the releases.
- v3 has configuration and when initialized will override the v2 configuration. To avoid this use a separate `HELM_HOME`, for example, `export HELM_HOME=$HOME/.helm3`.
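A minimal sketch of keeping the two clients apart, assuming the v3 binary has already been downloaded under a different name (all names and paths here are illustrative):

```shell
# Illustrative only: give the v3 client its own config directory so it
# cannot clobber the v2 configuration under ~/.helm.
HELM3_HOME="$HOME/.helm3"       # assumed location; any empty directory works
mkdir -p "$HELM3_HOME"
export HELM_HOME="$HELM3_HOME"  # v3 now reads/writes its config here
echo "HELM_HOME=$HELM_HOME"
```

The same separation applies to the binaries themselves, e.g. installing the v3 client as `helm3` alongside the v2 `helm`.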
2. Deploying a new chart:
- Use v3 to deploy
3. Move an existing release from v2 to v3, choices as follows:
- Lose the release history: Deploy as new using v3 and delete the existing release using v2.
- Maintain the release history:
- Retrieve the v2 release states by getting the ConfigMaps from the `kube-system` namespace with the `TILLER` owner label
```console
$ kubectl get configmap -n kube-system -l "OWNER=TILLER"
NAME DATA AGE
mychart.v1 1 27h
easy-chrt.v1 1 26h
```
- For each release and version (e.g. `mychart.v1`) you want to move:
- Extract the data from the ConfigMap: `kubectl get configmap ${RELEASE_NAME} -n kube-system -o json > ${RELEASE_NAME}-cm.json`
- Map it to a v3 Secret as follows:
- Set owner to `helm`
- Set namespace to namespace of release in v2. Check with command: `helm ls`
```console
$ helm ls
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
```
*** Note: The release data in the ConfigMap is a base-64 encoded, gzipped archive of the entire release record. TODO: This is currently failing to be loaded by v3. ***
- Create the Secret resource in the namespace of the release:
```console
# Deploy the ${RELEASE_NAME} secret into the ${NAMESPACE} namespace
kubens ${NAMESPACE}
kubectl create -f ${RELEASE_NAME}-secret.json
```
- Check the release now exists in v3 (`helm ls`) and has state stored as a Secret (`kubectl get secret --all-namespaces -l "owner=helm"`):
```console
$ helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART
mychart default 1 2019-06-12 10:43:19.949644311 +0100 IST deployed mychart-0.1.0
easy-chrt default 1 2019-06-12 10:09:20.903353326 +0100 IST deployed easy-chrt-0.1.0
demo default 1 2019-06-12 14:31:52.264875915 +0100 IST deployed demo-0.1.0
$ kubectl get secret --all-namespaces -l "owner=helm"
```
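The ConfigMap-to-Secret mapping above could be sketched with `jq`. The Secret field layout and label names are assumptions based on the listings earlier in this section, not the exact format v3 expects; the heredoc stands in for the real `kubectl get configmap ... -o json` dump:

```shell
# Sketch only: turn a dumped v2 release ConfigMap into a v3-style release Secret.
RELEASE_NAME=mychart.v1   # example release/version
NAMESPACE=default         # namespace of the v2 release (check with `helm ls`)

# Stand-in for: kubectl get configmap ${RELEASE_NAME} -n kube-system -o json
cat > "${RELEASE_NAME}-cm.json" <<'EOF'
{"metadata":{"name":"mychart.v1","labels":{"NAME":"mychart","OWNER":"TILLER"}},
 "data":{"release":"H4sIAAAA..."}}
EOF

jq --arg ns "$NAMESPACE" '{
  apiVersion: "v1",
  kind: "Secret",
  type: "helm.sh/release",
  metadata: {
    name: .metadata.name,
    namespace: $ns,
    labels: { owner: "helm", name: (.metadata.labels.NAME // .metadata.name) }
  },
  # ConfigMap data is a plain string; Secret data must be base64-encoded again.
  data: { release: (.data.release | @base64) }
}' "${RELEASE_NAME}-cm.json" > "${RELEASE_NAME}-secret.json"

cat "${RELEASE_NAME}-secret.json"
```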
4. Move a Helm release and its Kubernetes resources from its default v2 namespace (only the current release version is applicable, with namespace-scoped resources):
- Get all resources from the current release: `helm get <release>`
- For each resource:
- Create the resource in the new namespace: `kubectl get <resource_type> <resource_name> -o json --namespace <ns_old> | jq '.metadata.namespace = "<ns_new>"' | kubectl create -f -`
- Delete the resource in the old namespace: `kubectl delete <resource_type> <resource_name> --namespace <ns_old>`
- Update the release Secret resource to the new namespace: `kubectl edit secret <release_name> -n <ns_old>`
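The per-resource namespace rewrite can be tried locally on a sample manifest, no cluster required (the sample Service and namespace names are made up). Dropping `resourceVersion`/`uid` is generally needed before `kubectl create` will accept the object:

```shell
# Sketch: the namespace rewrite from the steps above, applied to a sample resource.
# Stand-in for: kubectl get service myapp -o json --namespace ns-old
cat > sample-svc.json <<'EOF'
{"kind":"Service","apiVersion":"v1",
 "metadata":{"name":"myapp","namespace":"ns-old","resourceVersion":"123","uid":"abc"}}
EOF

jq '.metadata.namespace = "ns-new"
    | del(.metadata.resourceVersion, .metadata.uid)' sample-svc.json > sample-svc-moved.json

cat sample-svc-moved.json
```

The rewritten JSON is what would be piped to `kubectl create -f -` in the real flow.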