From d12d74381e2c08e9ca542610541bccd969662246 Mon Sep 17 00:00:00 2001
From: Martin Hickey
Date: Thu, 13 Jun 2019 14:59:28 +0100
Subject: [PATCH] Add more details to the migration doc

- Steps to map v2 to v3 state
- Update current v3 implementation details
- Add steps to map release to another namespace

Signed-off-by: Martin Hickey
---
 docs/v2_v3_migration.md | 123 +++++++++++++++++++++++++++-------------
 1 file changed, 84 insertions(+), 39 deletions(-)

diff --git a/docs/v2_v3_migration.md b/docs/v2_v3_migration.md
index d33c81b10..f164ed3b1 100644
--- a/docs/v2_v3_migration.md
+++ b/docs/v2_v3_migration.md
@@ -24,7 +24,6 @@ The changes are as follows:
   - Add schema to values which enables value validation at runtime (install/upgrade etc.)
 - Helm install/set-up is simplified:
   - Helm client (helm) only (no tiller)
-  - No Helm initialization and no Tiller installation
   - Run-as-is paradigm
 - Commands removed/replaced/added:
   - chart: command consists of multiple subcommands to interact with charts and registries. Subcommands as follows:
@@ -36,7 +35,6 @@ The changes are as follows:
     - save: save a chart directory
   - delete --> uninstall : removes all release history by default (previously needed `--purge`)
   - fetch --> pull
-  - init (removed?)
   - install: requires release name or `--generate-name` argument
   - inspect --> show
   - registry: login to or logout from a registry
@@ -58,46 +56,93 @@ The changes are as follows:
 The migration use cases are as follows:
 1. Running Helm v2 and v3 concurrently on the same cluster:
-   - v2 and v3 history/state are independent of each other. v2 uses "ConfigMaps" under Tiller namespace and `TILLER`ownership. v3 uses "Secrets" in user namespace and `helm` ownership. There should be no conflicts.
-   - The only issue could be if Kubernetes resources are not named with unique capability like with release in its name and you depoy the chart again for v3. This would happen for v2 anyway when not an upgrade. Can be avoided by naming resources uniquely.
-   - Make sure to not override the v2 client binary (`helm`). Rename or separate directory.
-   - v3 has no configuration and therefore doesn't need to be initialized. Run as is.
+   - v2 and v3 history/state are independent of each other. v2 uses "ConfigMaps" under the Tiller namespace and `TILLER` ownership. v3 uses "Secrets" in the user namespace and `helm` ownership. There should be no conflicts. Releases are incremental in both v2 and v3.
+   - The only issue could be if Kubernetes cluster-scoped resources (e.g. `clusterroles.rbac`) are defined in a chart. The v3 deployment would then fail even if unique in the namespace, as the resources would clash.
+   - Make sure not to overwrite the v2 client binary (`helm`). Rename it or use a separate directory for one of the clients.
+   - v3 has its own configuration and, when initialized, will override the v2 configuration. To avoid this use a separate `HELM_HOME`, for example, `export HELM_HOME=$HOME/.helm3`.
 2. Deploying a new chart:
    - Use v3 to deploy
-3. Upgrading an existing release from v2 to v3:
-   - The choices are as follows:
-     - Lose the release history: Deploy as new using v3 and delete the existing release using v2
-     - Maintain the release history: TBD: Detail the steps involved in migration of v2 history to v3. This should describe the steps manually and then any tools provided for those manual steps. Are the steps involved:
-       - Retrieve ConfigMaps for Tiller owner
-       - Find ConfigMaps for a release
-       - For each release:
-         - Extract the data from the ConfigMap and map it to a Secret:
-           - Ownere is `helm` and namespace is set
-           - Releases are incremental in both v2 and v3:
-
-   ```console
-   $ kubectl get configmap -n kube-system -l "OWNER=TILLER"
-   NAME                      DATA   AGE
-   cautious-hummingbird.v1   1      5d2h
-   chrt-5586.v1              1      6d21h
-   chrt-5586.v2              1      6d21h
-
-   $ kubectl get secret -n default -l "owner=helm"
-   NAME                 TYPE              DATA   AGE
-   foo-chrt.v1          helm.sh/release   1      7d20h
-   fuzzy-bear.v1        helm.sh/release   1      7d
-   moo-chrt.v1          helm.sh/release   1      6d4h
-   moo-chrt.v2          helm.sh/release   1      5s
-   tst-lib-chart-1.v1   helm.sh/release   1      7d
-   tst-lib-chart.v1     helm.sh/release   1      7d4h
-   ```
-
-   - The release data in the config map is a base-64 encoded, gzipped archive of the entire release record.
-   - Create secret under a user provided or a default user if not provided.
-   - Delete the release ConfigMaps
+3. Move an existing release from v2 to v3. The choices are as follows:
+   - Lose the release history: Deploy as new using v3 and delete the existing release using v2.
+   - Maintain the release history:
+     - Retrieve v2 release states by getting the ConfigMaps from the `kube-system` namespace for `Tiller` owner
+
+       ```console
+       $ kubectl get configmap -n kube-system -l "OWNER=TILLER"
+       NAME           DATA   AGE
+       mychart.v1     1      27h
+       easy-chrt.v1   1      26h
+       ```
+
+     - For each release and version (e.g. `mychart.v1`) you want to move:
+       - Extract the data from the ConfigMap: `kubectl get configmap ${RELEASE_NAME} -n kube-system -o json > ${RELEASE_NAME}-cm.json`
+       - Map it to a v3 Secret as follows:
+         - Set owner to `helm`
+         - Set namespace to the namespace of the release in v2. Check with command: `helm ls`
+
+           ```console
+           $ helm ls
+           NAME        REVISION   UPDATED                    STATUS     CHART            APP VERSION   NAMESPACE
+           mychart     1          Wed Jun 12 10:58:22 2019   DEPLOYED   mychart-0.1.0    1.0           default
+           easy-chrt   1          Wed Jun 12 12:16:56 2019   DEPLOYED   new-chrt-0.1.0   1.0           default
+           ```
+
+         - Set `kind` to `Secret`
+         - Add type `helm.sh/release`
+         - Update some keys from upper case to camel case and lower case
+
+           ```
+           # Make a copy of the ConfigMap output
+           cp ${RELEASE_NAME}-cm.json ${RELEASE_NAME}-secret.json
+
+           # Update fields and values to correspond to the v3 state Secret object
+           sed -i -e 's/ConfigMap/Secret/g' ./${RELEASE_NAME}-secret.json
+           sed -i -e 's/MODIFIED_AT/modifiedAt/g' ./${RELEASE_NAME}-secret.json
+           sed -i -e 's/NAME/name/g' ./${RELEASE_NAME}-secret.json
+           sed -i -e 's/OWNER/owner/g' ./${RELEASE_NAME}-secret.json
+           sed -i -e 's/STATUS/status/g' ./${RELEASE_NAME}-secret.json
+           sed -i -e 's/VERSION/version/g' ./${RELEASE_NAME}-secret.json
+           sed -i -e 's/configmaps/secrets/g' ./${RELEASE_NAME}-secret.json
+           sed -i -e "s/kube-system/${NAMESPACE}/g" ./${RELEASE_NAME}-secret.json
+           sed -i -e 's/TILLER/helm/g' ./${RELEASE_NAME}-secret.json
+           STATUS=`jq '.metadata.labels.status' ${RELEASE_NAME}-secret.json | tr '[:upper:]' '[:lower:]'`
+           jq ".metadata.labels.status=${STATUS}" ${RELEASE_NAME}-secret.json > ${RELEASE_NAME}-secret.tmp && mv ${RELEASE_NAME}-secret.tmp ${RELEASE_NAME}-secret.json
+           ```
+
+           *** Note: The release data in the ConfigMap is a base-64 encoded, gzipped archive of the entire release record. TODO: This is currently failing to be loaded by v3. ***
+
+     - Create the Secret resource in the namespace of the release
+
+       ```
+       # Deploy the ${RELEASE_NAME} secret into the ${NAMESPACE} namespace
+       kubens ${NAMESPACE}
+       kubectl create -f ${RELEASE_NAME}-secret.json
+       ```
+
+     - Check the release now exists in v3 (`helm ls`) and has state stored as a Secret (`kubectl get secret --all-namespaces -l "owner=helm"`):
+
+       ```console
+       $ helm ls
+       NAME        NAMESPACE   REVISION   UPDATED                                   STATUS     CHART
+       mychart     default     1          2019-06-12 10:43:19.949644311 +0100 IST   deployed   mychart-0.1.0
+       easy-chrt   default     1          2019-06-12 10:09:20.903353326 +0100 IST   deployed   easy-chrt-0.1.0
+       demo        default     1          2019-06-12 14:31:52.264875915 +0100 IST   deployed   demo-0.1.0
+
+       $ kubectl get secret --all-namespaces -l "owner=helm"
+       NAMESPACE   NAME           TYPE              DATA   AGE
+       default     demo.v1        helm.sh/release   1      23h
+       default     easy-chrt.v1   helm.sh/release   1      28h
+       default     mychart.v1     helm.sh/release   1      27h
+       ```
+
+     - Delete the release ConfigMap: `kubectl delete configmap ${RELEASE_NAME} -n kube-system`
-4. Move release Kubernetes resources to user namespace
-   - Get all resources from the release state, update the namespace and then update the resource. Suggestion here: https://gist.github.com/simonswine/6bf3b665e4117f42b550c3ea12dd171a
+4. Move a Helm release and its Kubernetes resources from its default v2 namespace (applies only to the current release version and namespace-scoped resources):
+   - Get all resources from the current release: `helm get <release_name>`
+   - For each resource:
+     - Create the resource in the new namespace: `kubectl get <resource> -o json --namespace <old_namespace> | jq '.items[].metadata.namespace = "<new_namespace>"' | kubectl create -f -`
+     - Delete the resource in the old namespace: `kubectl delete <resource> --namespace <old_namespace>`
+   - Update the release Secret resource to the new namespace: `kubectl edit secret <release_secret_name> -n <namespace>`
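The per-resource namespace rewrite in use case 4 can be sketched in isolation, using a canned resource list in place of live `kubectl get` output. The target namespace `team-a` and the Service object below are made up for illustration:

```shell
# Sketch of the jq namespace rewrite from use case 4, run against a canned
# kubectl-style List rather than a live cluster. "team-a" and the Service
# below are illustrative assumptions, not values from the migration doc.
NS_NEW=team-a
resources='{"kind":"List","items":[{"kind":"Service","metadata":{"name":"mychart-svc","namespace":"default"}}]}'
printf '%s' "$resources" \
  | jq --arg ns "$NS_NEW" '.items[].metadata.namespace = $ns'
# Against a live cluster the same filter sits between the kubectl commands:
#   kubectl get <resource> -o json --namespace <old_namespace> \
#     | jq --arg ns "$NS_NEW" '.items[].metadata.namespace = $ns' \
#     | kubectl create -f -
```

Passing the namespace with `--arg` keeps the jq filter quoting simple and avoids shell-escaping the JSON string literal.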
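The note in use case 3 says the release data in the ConfigMap is a base-64 encoded, gzipped archive of the release record. A round-trip with a stand-in payload shows the encoding; on a live cluster the encoded value would come from the ConfigMap data (e.g. `kubectl get configmap ${RELEASE_NAME} -n kube-system -o jsonpath='{.data.release}'` — the `release` key name is an assumption based on Tiller's ConfigMap storage driver), and a real record is a protobuf archive rather than the JSON used here:

```shell
# Round-trip of the v2 release-record encoding using a stand-in payload
# (illustrative JSON; a real Tiller record is a protobuf archive).
payload='{"name":"mychart","version":1}'
encoded=$(printf '%s' "$payload" | gzip -c | base64 -w0)
printf '%s' "$encoded" | base64 -d | gzip -d   # prints the original payload
```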