The cache needed to be cleared due to an issue that appeared when the
source for one dependency moved from one location to another. To
accomplish this we add a counter to the cache key. This is similar to the
recommendation in the CircleCI documentation on caching.
Closes #8184
Signed-off-by: Matt Farina <matt@mattfarina.com>
The sorting previously used the selfref, which ended with the resource
name in all cases except pods. For pods, the selfref pointed to pods
in general rather than the specific pod. This caused an issue where
multiple pods shared the same selfref used as the key for sorting.
The objects being sorted are tables that each have one row. In the
new setup the key is the first cell's value from the first and only
row. This is the name of the resource.
Note, the Get function now requests a table. The tests have been
updated to return a Table type for the objects.
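As a rough illustration of the new key (the function name here is hypothetical,
not the code in this change), assuming the objects arrive as single-row
metav1.Table values:

```go
import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Key each single-row table by the value of its first cell, which is the
// resource name, rather than by the selfref.
func sortKey(t *metav1.Table) (string, error) {
	if len(t.Rows) == 0 || len(t.Rows[0].Cells) == 0 {
		return "", fmt.Errorf("table has no rows or cells")
	}
	// The first cell of the first (and only) row holds the resource name.
	return fmt.Sprintf("%v", t.Rows[0].Cells[0]), nil
}
```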
Closes #7924
Signed-off-by: Matt Farina <matt@mattfarina.com>
Changes to the Kubernetes API server and kubectl libraries caused
the status to no longer display when helm status was run for a
release. This change restores the status display.
Generation of the tables for display was moved server
side. A request is made for the data as a table, and a kubectl
printer for tables can display this data. kubectl uses this same setup,
and the structure here closely resembles kubectl's. kubectl is still
able to display objects as tables from servers that predate server-side
printing, but it only prints limited information.
Note, an extra request is made because table responses cannot be
easily transformed into Go objects for Kubernetes types to work
with. There is one request to get the resources for display in
a table and a second request to get the resources to look up the
related pods. The related pods are now requested as a table as
well for display purposes.
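For reference, the table request can be made by setting the server-side
printing Accept header on each request; a minimal sketch with client-go
(illustrative, not necessarily the exact code in this change):

```go
import (
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// Ask the API server to return the list as a Table and include the full
// object in each row so the resource can still be inspected if needed.
func transformRequests(req *rest.Request) {
	tableParam := strings.Join([]string{
		fmt.Sprintf("application/json;as=Table;v=%s;g=%s", metav1.SchemeGroupVersion.Version, metav1.GroupName),
		"application/json",
	}, ",")
	req.SetHeader("Accept", tableParam)
	req.Param("includeObject", "Object")
}
```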
This is likely part of the larger trend to move features like
this server side so that more libraries in more languages can
make use of the feature.
Closes #6896
Signed-off-by: Matt Farina <matt@mattfarina.com>
* align both formats' behaviors; now they differ only in how they discover their paths
* add coverage for the exports format and fix the expected assertions for the parent-child format to match the logic that child values always win (see the sketch below)
* partially revert dda8497, so that parent values can be overridden when coalescing
* after getting better coverage we were able to refactor both formats' behaviors by merging their propagation logic into a single code path.
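The precedence rule can be pictured with a small sketch (illustrative only; the
function name is hypothetical and this is not the actual coalescing code):

```go
// Merge two values tables so that the child's values always win over the
// parent's when both define the same key.
func mergeChildWins(parent, child map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(parent))
	for k, v := range parent {
		out[k] = v
	}
	for k, cv := range child {
		if pm, ok := out[k].(map[string]interface{}); ok {
			if cm, ok := cv.(map[string]interface{}); ok {
				out[k] = mergeChildWins(pm, cm) // recurse into nested tables
				continue
			}
		}
		out[k] = cv // child value takes precedence
	}
	return out
}
```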
If two `helm upgrade`s are executed at the exact same time, then one of
the invocations will fail with "already exists".
If one `helm upgrade` is executed and a second one is started while the
first is in `pending-upgrade`, then the second invocation will create a
new release. Effectively, two helm invocations will simultaneously
change the state of Kubernetes resources -- which is scary -- and then two
releases will be in `deployed` state -- which can cause other issues.
This commit fixes the corrupted storage problem by introducing a poor
person's lock: if the last release is in a pending state, then helm will
abort. If the last release is stuck in a pending state due to a previously
killed helm, then the user is expected to run `helm rollback`.
This is a port to Helm v2 of #7322.
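Roughly, the lock looks like the following sketch (names approximate, not
verbatim from the patch):

```go
import (
	"errors"

	"k8s.io/helm/pkg/proto/hapi/release"
	"k8s.io/helm/pkg/storage"
)

// Before upgrading, look at the most recent release for this name and abort
// if it is still in a pending state; this acts as the poor person's lock.
func checkPending(store *storage.Storage, releaseName string) error {
	last, err := store.Last(releaseName)
	if err != nil {
		return err
	}
	switch last.Info.Status.Code {
	case release.Status_PENDING_INSTALL, release.Status_PENDING_UPGRADE, release.Status_PENDING_ROLLBACK:
		return errors.New("another operation (install/upgrade/rollback) is in progress")
	}
	return nil
}
```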
Signed-off-by: Cristian Klein <cristian.klein@elastisys.com>
For some reason, many users experience corrupted storage with the
ConfigMaps storage backend. Specifically, several Releases are marked as
DEPLOYED. This patch improves handling of such situations by taking the latest
DEPLOYED Release. Eventually, the storage will clean itself out, after
the corrupted Releases are deleted due to --history-max.
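The selection can be pictured as follows (a sketch, not the literal patch):

```go
import "k8s.io/helm/pkg/proto/hapi/release"

// When more than one release is (incorrectly) marked as DEPLOYED, pick the
// one with the highest version number as the effective deployed release.
func latestDeployed(releases []*release.Release) *release.Release {
	var latest *release.Release
	for _, r := range releases {
		if r.Info.Status.Code != release.Status_DEPLOYED {
			continue
		}
		if latest == nil || r.Version > latest.Version {
			latest = r
		}
	}
	return latest
}
```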
Closes #6031
Signed-off-by: Cristian Klein <cristian.klein@elastisys.com>
This happened to be a bug we identified in Helm 3 and had not checked whether
it existed in Helm 2. The improved logic for job waiting used an automatic
retry. However, when we were creating the watcher, we were listing everything
of that same API version and kind. So if you had more than one hook and the first
was successful, it would think everything was successful. I have validated that
this now fails as intended if a job is failing.
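The fix scopes the watch to the specific hook resource; roughly (names
illustrative, not the exact code in this change):

```go
import (
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// Watch only the job created by this hook, rather than listing everything of
// the same API version and kind in the namespace.
func watchForJob(client kubernetes.Interface, namespace, name string) *cache.ListWatch {
	selector := fields.OneTermEqualSelector("metadata.name", name)
	return cache.NewListWatchFromClient(client.BatchV1().RESTClient(), "jobs", namespace, selector)
}
```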
Closes #6767
Signed-off-by: Taylor Thomas <taylor.thomas@microsoft.com>
In several of the job checks and other conversions we were using legacyscheme.
I don't know why it was working before, but I am guessing something changed
between k8s 1.15 and 1.16. To fix this, I changed the references to use the default
scheme in client-go.
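For example, a conversion along these lines (an illustrative sketch, not the
exact diff):

```go
import (
	batchv1 "k8s.io/api/batch/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/scheme"
)

// Convert a runtime object into a typed batch/v1 Job using client-go's
// default scheme rather than legacyscheme.
func asJob(obj runtime.Object) (*batchv1.Job, error) {
	job := &batchv1.Job{}
	if err := scheme.Scheme.Convert(obj, job, nil); err != nil {
		return nil, err
	}
	return job, nil
}
```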
Signed-off-by: Taylor Thomas <taylor.thomas@microsoft.com>