Changes to the Kubernetes API server and kubectl libraries caused
the status to no longer display when `helm status` was run for a
release. This change restores the status display.
Generation of the tables for display was moved server side. A
request is made for the data as a table, and a kubectl printer for
tables displays it. kubectl uses this setup, and the structure here
closely resembles kubectl's. kubectl can still display objects as
tables when the server predates server-side printing, but it only
prints limited information for them.
Note: an extra request is made because table responses cannot be
easily transformed into the Go objects for Kubernetes types to work
with. One request fetches the resources for display in a table, and
a second fetches the resources used to look up the related pods. The
related pods are now requested as a table as well, for display
purposes.
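A minimal sketch of how the table request can be made, following the
kubectl approach (the function name and the builder wiring around it
are assumptions; the Accept header is the meta.k8s.io Table content
type):

```go
import (
	"strings"

	"k8s.io/client-go/rest"
)

// transformRequests asks the API server to render resources as
// meta.k8s.io Tables, falling back to plain JSON for servers that
// predate server-side printing.
func transformRequests(req *rest.Request) {
	req.SetHeader("Accept", strings.Join([]string{
		"application/json;as=Table;v=v1;g=meta.k8s.io",
		"application/json;as=Table;v=v1beta1;g=meta.k8s.io",
		"application/json",
	}, ","))
}
```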
This is likely part of the larger trend of moving features like this
server side so that more libraries in more languages can access
them.
Closes #6896
Signed-off-by: Matt Farina <matt@mattfarina.com>
This happened to be a bug we identified in Helm 3; we did not check
whether it existed in Helm 2. The improved logic for job waiting
used an automatic retry. However, when we created the watcher, we
were listing everything of that same API version and kind. So if you
had more than one hook and the first was successful, it would think
everything was successful. I have validated that this now fails as
intended if a job is failing.
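A hedged sketch of the fix: scope the watch to the single hook
resource by name rather than listing everything with the same GVK
(`info` stands in for a `resource.Info`; the function name is an
assumption):

```go
import (
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/cli-runtime/pkg/resource"
	cachetools "k8s.io/client-go/tools/cache"
)

// hookListWatch watches only the object with the matching name, not
// everything of the same apiVersion/kind in the namespace.
func hookListWatch(info *resource.Info) *cachetools.ListWatch {
	selector := fields.OneTermEqualSelector("metadata.name", info.Name)
	return cachetools.NewListWatchFromClient(
		info.Client, info.Mapping.Resource.Resource, info.Namespace, selector)
}
```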
Closes #6767
Signed-off-by: Taylor Thomas <taylor.thomas@microsoft.com>
In several of the job checks and other conversions we were using
legacyscheme. I don't know why it was working before, but I am
guessing something changed between k8s 1.15 and 1.16. To fix it, I
changed the references to use the default scheme in client-go.
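A hedged sketch of the change, converting through client-go's
default scheme instead of legacyscheme (the batch/v1 target and the
function name are illustrative):

```go
import (
	batchv1 "k8s.io/api/batch/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/scheme"
)

// toBatchV1 converts using client-go's default scheme rather than
// k8s.io/kubernetes/pkg/api/legacyscheme.
func toBatchV1(obj runtime.Object) (runtime.Object, error) {
	return scheme.Scheme.ConvertToVersion(obj, batchv1.SchemeGroupVersion)
}
```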
Signed-off-by: Taylor Thomas <taylor.thomas@microsoft.com>
Closes #6751
After doing some more digging, I found out that updating the status
of an `Ingress` object is completely optional. Because of this, Helm
cannot support ingresses with the `--wait` flag: there is no
standard way to identify that they are ready.
Signed-off-by: Taylor Thomas <taylor.thomas@microsoft.com>
`.Get()` calls `perform()` on a list of infos, populating two shared
maps. `perform()` now calls the `ResourceActorFunc` concurrently
based on GVK, causing a data race in `.Get()`.
This fixes the race by locking the function so that these calls run
serially in Helm 2. Helm 3 has since been optimized, so this is no
longer an issue there.
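A minimal sketch of the serialization, with hypothetical names: a
mutex wraps the actor function so the shared maps are never written
concurrently (the `resource.Info` import path shown is the
cli-runtime location):

```go
import (
	"sync"

	"k8s.io/cli-runtime/pkg/resource"
)

var getLock sync.Mutex

// withLock wraps a ResourceActorFunc so the calls run serially even
// when perform() fans out concurrently by GVK, protecting the two
// shared maps that .Get() populates.
func withLock(fn func(*resource.Info) error) func(*resource.Info) error {
	return func(info *resource.Info) error {
		getLock.Lock()
		defer getLock.Unlock()
		return fn(info)
	}
}
```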
Signed-off-by: Matthew Fisher <matt.fisher@microsoft.com>
When waiting for resources, use `ListWatchUntil` instead of
`UntilWithoutRetry` so that if the connection drops between Tiller
and the API server while waiting, the operation can still succeed.
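A hedged sketch of the call; client-go's signatures for these
helpers have varied across versions, so this assumes the
context-based form in `k8s.io/client-go/tools/watch`:

```go
import (
	"context"
	"time"

	"k8s.io/client-go/tools/cache"
	watchtools "k8s.io/client-go/tools/watch"
)

// waitForResource keeps listing and re-watching until the condition
// is met; unlike UntilWithoutRetry, a dropped connection triggers a
// re-list and a new watch instead of failing the wait.
func waitForResource(lw cache.ListerWatcher, timeout time.Duration,
	ready watchtools.ConditionFunc) error {
	ctx, cancel := watchtools.ContextWithOptionalTimeout(context.Background(), timeout)
	defer cancel()
	_, err := watchtools.ListWatchUntil(ctx, lw, ready)
	return err
}
```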
Signed-off-by: Richard Connon <richard.connon@oracle.com>
Probably since K8s 1.13.x, `converter.ConvertToVersion(info.Object,
groupVersioner)`, which is the body of `asVersioned`, no longer
returns an error or an "unstructured" object for CRDs, but a typed
`apiextensions/v1beta1.CustomResourceDefinition`.
The result was that `helm upgrade` with any changes to a CRD
consistently failed.
This fixes that by adding an additional case for the conversion
result being a `v1beta1.CustomResourceDefinition`.
This is a backward-compatible change, as it doesn't remove the
existing switch cases for older K8s versions.
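A hedged sketch of what the extra case looks like; this is an
approximation of the shape of the switch, not a verbatim excerpt
from Helm:

```go
import (
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
)

// isCRD handles both behaviors: newer K8s conversions hand back the
// typed CRD, while older ones left it as an unstructured object.
func isCRD(obj runtime.Object) bool {
	switch o := obj.(type) {
	case *apiextv1beta1.CustomResourceDefinition:
		return true // typed result from newer K8s conversions
	case *unstructured.Unstructured:
		return o.GetKind() == "CustomResourceDefinition"
	default:
		return false
	}
}
```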
Fixes #5853
Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>
Manifest validation is done by the builder, but it requires that the
schema is set before the `Stream` function is called. Otherwise the
`StreamVisitor` is created without a schema and no validation is
done.
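A minimal sketch of the required ordering with the
`resource.Builder` from k8s.io/cli-runtime (the builder, schema, and
reader arguments are assumptions):

```go
import (
	"io"

	"k8s.io/cli-runtime/pkg/resource"
)

// buildResult wires the builder so validation actually happens: the
// schema must be set before Stream, or the StreamVisitor is created
// without one and skips validation entirely.
func buildResult(b *resource.Builder, schema resource.ContentValidator, r io.Reader) *resource.Result {
	return b.
		Unstructured().
		Schema(schema). // must precede Stream
		ContinueOnError().
		Stream(r, "manifest").
		Flatten().
		Do()
}
```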
Signed-off-by: Morten Torkildsen <mortent@google.com>
Makes sure CRDs installed through the `crd-install` hook reach the
`established` state before the hook is considered complete.
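A hedged sketch of the readiness check using the apiextensions
v1beta1 types (the polling loop around it is omitted):

```go
import (
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
)

// crdEstablished reports whether the CRD has reached the Established
// condition, i.e. the API server is serving the new resource.
func crdEstablished(crd *apiextv1beta1.CustomResourceDefinition) bool {
	for _, cond := range crd.Status.Conditions {
		if cond.Type == apiextv1beta1.Established &&
			cond.Status == apiextv1beta1.ConditionTrue {
			return true
		}
	}
	return false
}
```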
Signed-off-by: Morten Torkildsen <mortent@google.com>
This is the fix for only one particular, but important, case: a new
resource has been added to the chart, and there is an error in the
chart which leads to release failure. In this case, after the first
failed release upgrade, the new resource will have been created in
the cluster. On the next release upgrade there will be the error:
`no RESOURCE with the name NAME found` for this newly created
resource from the previous release upgrade.
The root of this problem is a side effect of the first release
process. The release invariant says: if a resource exists in the
Kubernetes cluster, then it should exist in the release storage. But
this invariant has been broken by helm itself, because helm created
new resources as a side effect and did not adopt them into the
release storage.
To maintain the release invariant during a release upgrade
operation, all newly *successfully* created resources will be
deleted if an error occurs while updating the subsequent resources
(see the sketch below).
This behaviour is enabled only when the `--cleanup-on-fail` option
is used with `helm upgrade` or `helm rollback`.
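An illustrative sketch only, not Helm's actual implementation;
`createFn` and `deleteFn` stand in for the kube client's create and
delete calls:

```go
import "k8s.io/cli-runtime/pkg/resource"

// createWithCleanup creates each new resource; if a creation fails
// and cleanupOnFail is set, it deletes what was already created so
// the cluster matches release storage again.
func createWithCleanup(infos []*resource.Info, cleanupOnFail bool,
	createFn, deleteFn func(*resource.Info) error) error {
	var created []*resource.Info
	for _, info := range infos {
		if err := createFn(info); err != nil {
			if cleanupOnFail {
				for _, c := range created {
					_ = deleteFn(c) // best-effort cleanup
				}
			}
			return err
		}
		created = append(created, info)
	}
	return nil
}
```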
Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
Don't delete a resource on upgrade if it is annotated with
`helm.sh/resource-policy=keep`. Ignoring the annotation can cause
data loss for users (e.g. for a PVC).
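A minimal sketch of the check, assuming the metav1 accessor
interface (the constant and helper names are assumptions):

```go
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// resourcePolicyAnno is the annotation Helm checks before deleting.
const resourcePolicyAnno = "helm.sh/resource-policy"

// shouldKeep reports whether the object opted out of deletion on
// upgrade via the keep resource policy.
func shouldKeep(obj metav1.Object) bool {
	return obj.GetAnnotations()[resourcePolicyAnno] == "keep"
}
```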
Closes #3673
Signed-off-by: James Ravn <james@r-vn.org>