* test(pkg/storage/secrets): make MockSecretsInterface.List follow ListOptions
Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
* test(pkg/storage/secrets): add unit test for Secrets.Query
Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
* test(pkg/storage/cfgmaps): make MockConfigMapsInterface.List follow ListOptions
Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
* test(pkg/storage/cfgmaps): add unit test for ConfigMaps.Query
Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
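For context, a minimal sketch of the kind of filtering the updated mocks perform, assuming a hypothetical mock backed by a plain slice of Secrets (the names and signatures here are illustrative, not the actual test doubles):

```go
package mock

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// MockSecretsInterface is a hypothetical stand-in for the test double:
// List should honor opts.LabelSelector instead of returning everything.
type MockSecretsInterface struct {
	objects []corev1.Secret
}

// List returns only the Secrets whose labels match the selector in opts,
// mirroring what a real client would do.
func (m *MockSecretsInterface) List(opts metav1.ListOptions) (*corev1.SecretList, error) {
	selector, err := labels.Parse(opts.LabelSelector)
	if err != nil {
		return nil, err
	}
	list := &corev1.SecretList{}
	for _, s := range m.objects {
		if selector.Matches(labels.Set(s.Labels)) {
			list.Items = append(list.Items, s)
		}
	}
	return list, nil
}
```

A ConfigMaps mock would apply the same selector logic to its own backing slice.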
Upgrade Kubernetes libraries to v0.18.0
Add a new lazily loaded KubernetesClientSet to avoid the missing kubeconfig error
In Kubernetes v1.18, kubeconfig validation was added. Minikube and Kind
both remove the kubeconfig when stopping clusters. This causes an error
when running any helm command, because we initialize the client before
executing the command.
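A minimal sketch of the lazy-loading idea, assuming a hypothetical wrapper that defers building the clientset until a command actually needs it (names are illustrative, not Helm's actual types):

```go
package kube

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// lazyClient defers reading and validating kubeconfig until the first call
// that actually needs a clientset, so commands that never touch the cluster
// do not fail on a missing or invalid kubeconfig.
type lazyClient struct {
	kubeconfigPath string
	clientset      kubernetes.Interface
}

// ClientSet builds the clientset on first use and caches it afterwards.
func (c *lazyClient) ClientSet() (kubernetes.Interface, error) {
	if c.clientset != nil {
		return c.clientset, nil
	}
	config, err := clientcmd.BuildConfigFromFlags("", c.kubeconfigPath)
	if err != nil {
		return nil, err
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		return nil, err
	}
	c.clientset = cs
	return c.clientset, nil
}
```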
Signed-off-by: Adam Reese <adam@reese.io>
Remove references to protobuf and update the description of the release
object's stored representation for Helm v3.
Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
Helm does not yet properly handle concurrent executions (see #7322),
and invoking Helm concurrently on the same release can lead to corrupted storage.
Specifically, several Releases may be marked as DEPLOYED. This patch improves
the handling of such situations by taking the latest DEPLOYED Release.
Eventually, the storage will clean itself up, after
the corrupted Releases are deleted due to `--history-max`.
This is a port to Helm v3 of #7319.
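A minimal sketch of the selection logic, shown here with Helm v3's release types; the helper name is illustrative, not the actual patch:

```go
package storage

import "helm.sh/helm/v3/pkg/release"

// lastDeployed picks the DEPLOYED release with the highest version number.
// If corrupted storage left several revisions marked DEPLOYED, the newest
// one wins; older revisions are eventually pruned by --history-max.
func lastDeployed(history []*release.Release) *release.Release {
	var latest *release.Release
	for _, r := range history {
		if r.Info == nil || r.Info.Status != release.StatusDeployed {
			continue
		}
		if latest == nil || r.Version > latest.Version {
			latest = r
		}
	}
	return latest
}
```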
Signed-off-by: Cristian Klein <cristian.klein@elastisys.com>
The error returned from `DeployedAll` will never contain "not found".
The error returned at the end of `Deployed` is already known to be nil,
and we never want to return `ls[0]` together with a non-nil error anyway.
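With those observations, the simplified lookup might read roughly as follows (a sketch with stand-in stubs, not the exact patch):

```go
package storage

import (
	"github.com/pkg/errors"

	rspb "helm.sh/helm/v3/pkg/release"
)

// Storage and DeployedAll stand in for Helm's release storage; only the
// simplified Deployed lookup is sketched here.
type Storage struct{ /* backing driver elided */ }

func (s *Storage) DeployedAll(name string) ([]*rspb.Release, error) {
	// In Helm this queries the driver for releases with status=deployed.
	return nil, errors.Errorf("%q has no deployed releases", name)
}

// Deployed returns the most recent DEPLOYED release. DeployedAll already
// reports an error when nothing matches, so there is no need to inspect the
// error text for "not found", and ls[0] is never returned alongside a
// non-nil error.
func (s *Storage) Deployed(name string) (*rspb.Release, error) {
	ls, err := s.DeployedAll(name)
	if err != nil {
		return nil, err
	}
	if len(ls) == 0 {
		return nil, errors.Errorf("%q has no deployed releases", name)
	}
	return ls[0], nil
}
```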
Signed-off-by: Simon Alling <alling.simon@gmail.com>
* Change release storage name to prefix helm storage type
Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
* Add comments about the Kubernetes storage object type field content
Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
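For illustration, the storage-name change described above amounts to prefixing the per-revision object name with the Helm storage type; the exact prefix below is an assumption, shown only to convey the shape of the key:

```go
package driver

import "fmt"

// makeKey sketches a storage name prefixed with the Helm storage type,
// e.g. "sh.helm.release.v1.myapp.v3". The prefix shown here is an
// assumption for illustration; the key is what the backing Kubernetes
// object (Secret or ConfigMap) is named.
func makeKey(rlsname string, version int) string {
	return fmt.Sprintf("sh.helm.release.v1.%s.v%d", rlsname, version)
}
```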
To match the convention of `helm install`, its inverse is now `helm uninstall`.
Other tangential changes in this PR:
- StatusDeleting has been changed to StatusUninstalling
- StatusDeleted has been changed to StatusUninstalled
- `helm list --deleted` has been changed to `helm list --uninstalled`
- `helm list --deleting` has been changed to `helm list --uninstalling`
- `helm.DeleteOption` and all delete options have been renamed to `helm.UninstallOption`
I have not made any changes to the "helm.sh/hook-delete-policy", "pre-delete", and "post-delete" hook annotations because
1. it would be a major breaking change to existing Helm charts, which we've committed to NOT break in Helm 3
2. there is no "helm.sh/hook-install-policy" to pair with a "helm.sh/hook-uninstall-policy", so "delete" still makes sense here
`helm delete` and `helm del` have been added as aliases for `helm uninstall`, so `helm delete` and `helm del` still work as before.
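For illustration, a sketch of how the renamed status values might be checked in code, using Helm v3's release package (the describe helper is hypothetical):

```go
package example

import (
	"fmt"

	"helm.sh/helm/v3/pkg/release"
)

// describe reports a release's state using the renamed status constants.
func describe(rel *release.Release) string {
	switch rel.Info.Status {
	case release.StatusUninstalling: // formerly StatusDeleting
		return fmt.Sprintf("%s is being uninstalled", rel.Name)
	case release.StatusUninstalled: // formerly StatusDeleted
		return fmt.Sprintf("%s has been uninstalled", rel.Name)
	default:
		return fmt.Sprintf("%s is %s", rel.Name, rel.Info.Status)
	}
}
```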
When using `helm upgrade --install`, if the first release fails, Helm will respond with an error saying that it cannot upgrade from an unknown state.
With this feature, `helm upgrade --install --force` automates the same process as `helm delete && helm install --replace`. It will mark the previous release as DELETED, delete any existing resources inside Kubernetes, then replace it as if it were a fresh install. It will then mark the FAILED release as SUPERSEDED.
* Add test for rolling back from a FAILED deployment
* Update naming of release variables
Use same naming as the rest of the file.
* Update rollback test
- Add logging
- Verify other release names not changed
* fix(tiller): Supersede multiple deployments
There are cases where multiple revisions of a release have been
marked with DEPLOYED status. This makes sure any previous deployment
will be set to SUPERSEDED when doing rollbacks.
Closes #2941 #3513 #3275
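A minimal sketch of the supersede pass described above, shown with Helm v3's release types for brevity (the original fix lives in Tiller; the helper name is illustrative):

```go
package storage

import "helm.sh/helm/v3/pkg/release"

// supersedePrevious marks every revision still flagged DEPLOYED as
// SUPERSEDED, except the one being rolled back to. This keeps history
// consistent even if earlier bugs left several revisions DEPLOYED.
func supersedePrevious(history []*release.Release, target *release.Release) {
	for _, r := range history {
		if r == target || r.Info == nil {
			continue
		}
		if r.Info.Status == release.StatusDeployed {
			r.Info.Status = release.StatusSuperseded
		}
	}
}
```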