Add more details to the migration doc

- Steps to map v2 to v3 state
- Update current v3 implementation details
- Add steps to map release to another namespace

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>

The changes are as follows:
- Add schema to values which enables value validation at runtime (install/upgrade etc.)
- Helm install/set-up is simplified:
  - Helm client (helm) only (no tiller)
  - Run-as-is paradigm
- Commands removed/replaced/added:
  - chart: command consists of multiple subcommands to interact with charts and registries. Subcommands as follows:
    - save: save a chart directory
  - delete --> uninstall: removes all release history by default (previously needed `--purge`)
  - fetch --> pull
  - install: requires release name or `--generate-name` argument (see the example below)
  - inspect --> show
  - registry: login to or logout from a registry
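
  For example, a minimal sketch of the changed install invocation (the release and chart names are illustrative):

  ```console
  # v3: the release name is now a required positional argument
  $ helm install myrelease ./mychart
  # ...or let Helm generate a name instead
  $ helm install ./mychart --generate-name
  ```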
The migration use cases are as follows:
1. Running Helm v2 and v3 concurrently on the same cluster:
   - v2 and v3 history/state are independent of each other. v2 uses "ConfigMaps" under the Tiller namespace with `TILLER` ownership. v3 uses "Secrets" in the user namespace with `helm` ownership. There should be no conflicts. Releases are incremental in both v2 and v3.
   - The only issue could be if Kubernetes cluster-scoped resources (e.g. `clusterroles.rbac`) are defined in a chart. The v3 deployment would then fail even if unique in the namespace, as the resources would clash.
   - Make sure not to override the v2 client binary (`helm`). Rename it or use a separate directory for one of the client versions.
   - v3 has configuration and when initialized will override the v2 configuration. To avoid this, use a separate `HELM_HOME`, for example, `export HELM_HOME=$HOME/.helm3` (see the sketch below).
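
   A minimal sketch of keeping the two clients separate (the `helm3` binary name, install path and `.helm3` directory are illustrative choices, not required names):

   ```console
   # Keep the v3 client under its own name so it cannot shadow the v2 binary
   $ mv ./helm /usr/local/bin/helm3
   # Point the v3 client at its own home directory before initializing it
   $ export HELM_HOME=$HOME/.helm3
   ```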
2. Deploying a new chart:
   - Use v3 to deploy
3. Move an existing release from v2 to v3. The choices are as follows:
   - Lose the release history: Deploy as new using v3 and delete the existing release using v2.
   - Maintain the release history:
     - Retrieve v2 release states by getting the ConfigMaps from the `kube-system` namespace for the `TILLER` owner:

       ```console
       $ kubectl get configmap -n kube-system -l "OWNER=TILLER"
       NAME           DATA   AGE
       mychart.v1     1      27h
       easy-chrt.v1   1      26h
       ```
     - For each release and version (e.g. `mychart.v1`) you want to move:
       - Extract the data from the ConfigMap: `kubectl get configmap ${RELEASE_NAME} -n kube-system -o json > ${RELEASE_NAME}-cm.json`
       - Map it to a v3 Secret as follows:
         - Set owner to `helm`
         - Set namespace to the namespace of the release in v2. Check with the command: `helm ls`:

           ```console
           $ helm ls
           NAME       REVISION  UPDATED                   STATUS    CHART           APP VERSION  NAMESPACE
           mychart    1         Wed Jun 12 10:58:22 2019  DEPLOYED  mychart-0.1.0   1.0          default
           easy-chrt  1         Wed Jun 12 12:16:56 2019  DEPLOYED  new-chrt-0.1.0  1.0          default
           ```

         - Set 'kind' to `Secret`
         - Add type `helm.sh/release`
         - Update some keys from capital case to camel case and lower case:

           ```
           # Make copy of the ConfigMap output
           cp ${RELEASE_NAME}-cm.json ${RELEASE_NAME}-secret.json

           # Update fields and values to correspond to v3 state secret object
           sed -i -e 's/ConfigMap/Secret/g' ./${RELEASE_NAME}-secret.json
           sed -i -e 's/MODIFIED_AT/modifiedAt/g' ./${RELEASE_NAME}-secret.json
           sed -i -e 's/NAME/name/g' ./${RELEASE_NAME}-secret.json
           sed -i -e 's/OWNER/owner/g' ./${RELEASE_NAME}-secret.json
           sed -i -e 's/STATUS/status/g' ./${RELEASE_NAME}-secret.json
           sed -i -e 's/VERSION/version/g' ./${RELEASE_NAME}-secret.json
           sed -i -e 's/configmaps/secrets/g' ./${RELEASE_NAME}-secret.json
           sed -i -e "s/kube-system/${NAMESPACE}/g" ./${RELEASE_NAME}-secret.json
           sed -i -e 's/TILLER/helm/g' ./${RELEASE_NAME}-secret.json

           # Lower-case the status label value (e.g. DEPLOYED -> deployed)
           STATUS=`jq '.metadata.labels.status' ${RELEASE_NAME}-secret.json | tr '[:upper:]' '[:lower:]'`
           jq ".metadata.labels.status=${STATUS}" ${RELEASE_NAME}-secret.json > ${RELEASE_NAME}-secret.tmp && mv ${RELEASE_NAME}-secret.tmp ${RELEASE_NAME}-secret.json
           ```
           *** Note: The release data in the ConfigMap is a base-64 encoded, gzipped archive of the entire release record. TODO: This is currently failing to be loaded by v3. ***
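
           To inspect that record, a minimal sketch of decoding it from the original ConfigMap (the output file name is illustrative, and `release` as the data key is an assumption about the v2 storage format):

           ```console
           # Decode the base-64 encoded, gzipped release record for inspection
           $ kubectl get configmap ${RELEASE_NAME} -n kube-system -o jsonpath='{.data.release}' \
               | base64 --decode | gzip -d > ${RELEASE_NAME}-release.bin
           ```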
       - Create the Secret resource in the namespace of the release:

         ```
         # Deploy the ${RELEASE_NAME} secret into the ${NAMESPACE} namespace
         # kubens (from the kubectx project) switches the current namespace
         kubens ${NAMESPACE}
         kubectl create -f ${RELEASE_NAME}-secret.json
         ```
       - Check the release now exists in v3 (`helm ls`) and has its state stored as a Secret (`kubectl get secret --all-namespaces -l "owner=helm"`):

         ```console
         $ helm ls
         NAME       NAMESPACE  REVISION  UPDATED                                  STATUS    CHART
         mychart    default    1         2019-06-12 10:43:19.949644311 +0100 IST  deployed  mychart-0.1.0
         easy-chrt  default    1         2019-06-12 10:09:20.903353326 +0100 IST  deployed  easy-chrt-0.1.0
         demo       default    1         2019-06-12 14:31:52.264875915 +0100 IST  deployed  demo-0.1.0
         $ kubectl get secret --all-namespaces -l "owner=helm"
         NAMESPACE   NAME          TYPE             DATA   AGE
         default     demo.v1       helm.sh/release  1      23h
         default     easy-chrt.v1  helm.sh/release  1      28h
         default     mychart.v1    helm.sh/release  1      27h
         ```
       - Delete the release ConfigMap: `kubectl delete configmap ${RELEASE_NAME} -n kube-system`
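
         If a release has several versions, a sketch for removing them all at once, assuming the v2 `NAME` label holds the bare release name (without the `.v1` suffix):

         ```console
         # Delete every stored version of the release by its Tiller labels
         $ kubectl delete configmap -n kube-system -l "OWNER=TILLER,NAME=mychart"
         ```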
4. Move a Helm release and its Kubernetes resources from its default v2 namespace (only the current release version is applicable, and only for namespace-scoped resources):
   - Get all resources from the current release: `helm get <release>`
   - For each resource (see the sketch after this list):
     - Create the resource in the new namespace: `kubectl get <resource_type> <resource_name> -o json --namespace <ns_old> | jq '.metadata.namespace = "<ns_new>"' | kubectl create -f -`
     - Delete the resource in the old namespace: `kubectl delete <resource_type> <resource_name> --namespace <ns_old>`
   - Update the release Secret resource to the new namespace: `kubectl edit secret <release_name> -n <ns_old>`
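
   A minimal sketch of the per-resource move, assuming shell variables `RESOURCE_TYPE`, `RESOURCE_NAME`, `NS_OLD` and `NS_NEW` (the `del(...)` step strips server-assigned metadata so that `kubectl create` accepts the object, which is an assumption beyond the steps above):

   ```console
   # Recreate the resource in the new namespace, then remove the original
   $ kubectl get ${RESOURCE_TYPE} ${RESOURCE_NAME} -o json --namespace ${NS_OLD} \
       | jq --arg ns "${NS_NEW}" '.metadata.namespace = $ns | del(.metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp, .metadata.selfLink)' \
       | kubectl create -f -
   $ kubectl delete ${RESOURCE_TYPE} ${RESOURCE_NAME} --namespace ${NS_OLD}
   ```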
