Since Helm is going through breaking changes with Helm v4, the versioned
import path for Helm needs to be updated.
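For context, a minimal sketch of what the path change looks like for SDK consumers (this assumes the v3 package layout carries over and is illustrative only):

    package main

    import (
        // Previously consumers imported "helm.sh/helm/v3/pkg/cli"; the new major
        // version requires the /v4 suffix in the module path.
        "helm.sh/helm/v4/pkg/cli"
    )

    func main() {
        settings := cli.New() // same call, new major-version import path
        _ = settings
    }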
Signed-off-by: Matt Farina <matt.farina@suse.com>
Multiple changes were made to pass linting. Some Go built-in names
are used for variables (e.g., min). This happens in the Go
source itself, including the Go standard library, and is not always
bad practice.
To allow some built-in names to be used, the linter config
is updated so that specific names can pass via opt-in. This allows us
to still check for re-use of Go built-in names while opting in to any
new uses.
There were also several cases where a value was checked for nil
before checking its length, even though this is already handled by len()
or the type's zero value. These were cleaned up.
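For illustration, the kind of simplification involved (a generic sketch, not a specific Helm snippet):

    // vals could be a slice or a map; len() of a nil slice or map is 0, so the
    // extra nil check adds nothing.
    func useValsIfAny(vals []string) {
        // Before:
        // if vals != nil && len(vals) > 0 { use(vals) }

        // After: the type's zero value already behaves correctly.
        if len(vals) > 0 {
            use(vals)
        }
    }

    func use(vals []string) { /* ... */ }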
The license validation was updated because it was checking everything
in the .git directory, including all remote content that had been fetched locally.
The previous vendor directory was from a time prior to Go modules
when Helm handled dependencies differently. It was no longer needed.
Signed-off-by: Matt Farina <matt.farina@suse.com>
Kubernetes might at any time return 409 Conflict error codes. Clients
are supposed to retry when this happens. As an example, see
kubernetes/issues/67761, where such an issue can occur when the
cluster manipulates a project's ResourceQuotas.
Catch such Conflict Errors on createResource and deleteResource and
retry before giving up. Due to the more complex logic and focus on
kubernetes/issues/67761, this patch purposefully omits possibly
needed changes to updateResource and instead defers them to another
patch if required in the future.
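A minimal sketch of the retry pattern, using client-go's retry helper (the wrapper name is illustrative, not the actual Helm internals):

    import "k8s.io/client-go/util/retry"

    // createWithRetry re-invokes the create callback with a short backoff whenever
    // the API server answers 409 Conflict, and passes any other error straight through.
    func createWithRetry(create func() error) error {
        return retry.RetryOnConflict(retry.DefaultRetry, create)
    }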
Closes issue #9710
Signed-off-by: Andreas Karis <ak.karis@gmail.com>
Notes:
1. This moves golangci scanning to a GitHub action. This will
enable inline pointers to issues in the PR where linting fails.
2. Go 1.21 is specified in the go.mod because Kubernetes libs
require it.
3. The lint issues were resolved. Some were fixed, while others
were handled by skipping linting or using _ as an argument.
Many of these can be refactored later for better cleanup.
Signed-off-by: Matt Farina <matt.farina@suse.com>
This covers both the property and the minimal copy of the Factory
interface. It also notes that this interface is not covered by Helm's
backwards compatibility guarantee, and why.
Signed-off-by: Matt Farina <matt.farina@suse.com>
Signed-off-by: Joe Julian <me@joejulian.name>
Add --cascade=<background|foreground|orphan> option to helm uninstall
Current behaviour is hardcoded to background
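Under the hood the flag maps onto the Kubernetes deletion propagation policy; a rough sketch (the helper name is illustrative, and a Deployment client is used only to keep it concrete):

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteWithCascade shows where the --cascade value plugs in: it becomes a
    // DeletionPropagation (Background, Foreground, or Orphan) on the delete call.
    func deleteWithCascade(ctx context.Context, cs kubernetes.Interface, ns, name string, policy metav1.DeletionPropagation) error {
        return cs.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{PropagationPolicy: &policy})
    }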
Addresses issue: https://github.com/helm/helm/issues/10586
Signed-off-by: MichaelMorris <michael.morris@est.tech>
Fixes #11712
A change was made so that, when validation was turned off, the Kubernetes
packages built objects as a Table type. This was done for display purposes,
so that details about the objects could be printed as part of #10912.
This broke rollback, and possibly other functionality, as a Table
type was returned in some cases that needed the regular object.
This caused things to break silently.
The fix involved adding a new function (and interface) to
query for tables instead of the objects themselves. There was no
clean way to add this to the existing function in a way that covered all
cases.
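A purely hypothetical sketch of the shape of that split (names and signatures are illustrative, not Helm's actual API): display code asks for Table output explicitly, while everything else keeps receiving the typed objects.

    import "k8s.io/apimachinery/pkg/runtime"

    // resourceGetter illustrates the split: Get keeps returning the real objects
    // (what rollback and similar logic need), while GetTable returns server-rendered
    // Table objects that are only suitable for human-readable display.
    type resourceGetter interface {
        Get(manifest []byte) ([]runtime.Object, error)
        GetTable(manifest []byte) ([]runtime.Object, error)
    }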
A second problem was noticed along the way. When data was output
via status as YAML or JSON, it was in the form of a table rather
than the objects themselves. This did not reflect expectations
and did not match the functionality in kubectl. The code was
updated to return a table when table output is requested and the objects
when they are being output as YAML or JSON. The API also supports
this handling so SDK users can replicate this functionality.
API changes made here were never released. The functions were
developed for this release of Helm and only ever appeared in an
RC. In this case, they can be changed.
Signed-off-by: Matt Farina <matt.farina@suse.com>
Creating a new PR based on this existing stale PR https://github.com/helm/helm/pull/7728
Signed-off-by: Soujanya Mangipudi <somangip@microsoft.com>
Unfortunately errors from the API server do not always (do they ever?) contain
the name of the resource in question.
Deletions for multiple resources are processed concurrently, so in the resulting
log, a preceding "Starting delete" line might be for a different object.
Signed-off-by: Marcin Owsiany <porridge@redhat.com>
This required modifying the `kube.Factory` interface to conform to
changes in k8s' `cmdutil.Factory` interface:
fe3772890f
Signed-off-by: Andrew Seigner <andrew@sig.gy>
If set, the 'uninstall' command will wait until all the resources are deleted before returning.
It will wait for as long as --timeout allows.
Closes #2378
Signed-off-by: Mike Ng <ming@redhat.com>
managedFields were a change that landed in Kubernetes 1.18. This is an
array under metadata named managedFields. The Kubernetes client packages
that Helm uses automatically add them.
This change added a manager for the managedFields. The flow for
deciding on the name to use is (see the sketch below):
1. An explicit name if one is chosen
2. The base name of the first os.Arg (the binary name) if no name is
explicitly set
3. "unknown" if no name is set and the name cannot be detected
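A minimal sketch of that fallback order (illustrative; assumes a package-level variable for the explicit name):

    import (
        "os"
        "path/filepath"
    )

    // ManagedFieldsManager holds the explicitly configured name, if any; it is a
    // package-level variable as described above (names here are illustrative).
    var ManagedFieldsManager string

    // managerName walks the fallback order: explicit name, then the binary name
    // from os.Args[0], then "unknown".
    func managerName() string {
        if ManagedFieldsManager != "" {
            return ManagedFieldsManager
        }
        if len(os.Args) > 0 && os.Args[0] != "" {
            return filepath.Base(os.Args[0])
        }
        return "unknown"
    }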
The name is at the package level as there is no other place to easily
set it for Helm v3. Since the name is for the binary or app, it should
be OK to set it app-wide.
Signed-off-by: Matt Farina <matt.farina@suse.com>
Since Tiller is no longer part of Helm v3, internal documentation
language about Tiller can be removed
Signed-off-by: Matt Farina <matt@mattfarina.com>
* Continue deleting objects when one fails to minimize the risk of an
upgrade ending in an unrecoverable state
* Exclude failed deleted object from the returned result set
Signed-off-by: Adam Reese <adam@reese.io>
Upgrade Kubernetes libraries to v0.18.0
Add new lazy load KubernetesClientSet to avoid missing kubeconfig error
In Kubernetes v1.18, kubeconfig validation was added. Minikube and Kind
both remove the kubeconfig when stopping clusters. This causes an error
when running any helm command because we initialize the client before
executing the command.
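A small sketch of the lazy-initialization idea (type and method names are illustrative):

    import (
        "sync"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // lazyClientProvider defers building the clientset (and therefore kubeconfig
    // loading/validation) until a command actually needs to talk to the cluster.
    type lazyClientProvider struct {
        config *rest.Config
        once   sync.Once
        cs     kubernetes.Interface
        err    error
    }

    func (l *lazyClientProvider) clientset() (kubernetes.Interface, error) {
        l.once.Do(func() {
            l.cs, l.err = kubernetes.NewForConfig(l.config)
        })
        return l.cs, l.err
    }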
Signed-off-by: Adam Reese <adam@reese.io>
Don't delete a resource on upgrade if it is annotated with
helm.sh/resource-policy=keep. Ignoring the annotation can cause data loss
for users (e.g. for a PVC).
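A minimal sketch of the skip logic, assuming the standard annotation key and an object exposing its metadata (the helper name is illustrative):

    import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    // shouldKeep reports whether an object opted out of deletion via the
    // resource-policy annotation; deletion logic skips such objects.
    func shouldKeep(obj metav1.Object) bool {
        return obj.GetAnnotations()["helm.sh/resource-policy"] == "keep"
    }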
Closes #7677
Signed-off-by: Dong Gang <dong.gang@daocloud.io>
Fixes issue #7279.
Prevent the deletion of CRDs that were defined in the `templates/`
directory. This makes CRD deletion behaviour consistent with Helm
documentation:
> CRDs are never deleted. Deleting a CRD automatically deletes all of the
> CRD’s contents across all namespaces in the cluster. Consequently, Helm
> will not delete CRDs.
Previously, the documentation only applied to CRDs that were defined in the
`crds/` directory. It did not consider that charts could have CRDs in the
`templates/` directory (for example, charts that were written before the
`crds/` directory feature existed, or where the chart author needed templated CRDs).
Signed-off-by: Phil Grayson <phil@philgrayson.com>
* Port watcher with retries to wait for resources
Port of Helm 2 PR #6014 to Helm 3
Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
* Add fix from PR #6907
Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
Currently, when using the --atomic flag or deleting a release that failed due to an already existing
resource, Helm will delete those resources that aren't managed by it. This PR fixes the issue
by checking for pre-existing resources during install and upgrade. This is done as a validation
step, so the release will not even be started if the resources already exist. This PR is inspired by
@xchapter7x's work in #3477.
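A rough sketch of the validation idea, with hypothetical helper names (not Helm's actual implementation): before applying anything, look up each rendered resource and fail if it already exists.

    import (
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/cli-runtime/pkg/resource"

        "helm.sh/helm/v3/pkg/kube"
    )

    // existingResourceConflict fails if any resource the release would create is
    // already present in the cluster, so the release is rejected before it starts.
    func existingResourceConflict(resources kube.ResourceList) error {
        return resources.Visit(func(info *resource.Info, err error) error {
            if err != nil {
                return err
            }
            helper := resource.NewHelper(info.Client, info.Mapping)
            if _, err := helper.Get(info.Namespace, info.Name); err != nil {
                if apierrors.IsNotFound(err) {
                    return nil // not present yet; safe to create
                }
                return err
            }
            return fmt.Errorf("%s %q already exists and is not managed by this release",
                info.Mapping.GroupVersionKind.Kind, info.Name)
        })
    }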
This also fixes a small bug in upgrade where deletes failed if the resource was already deleted.
Fixes #6407
Signed-off-by: Taylor Thomas <taylor.thomas@microsoft.com>
This ports the functionality of cleanup on fail to v3 as introduced in #4871. This has been tested manually
and would be a good candidate for a new acceptance test.
Signed-off-by: Taylor Thomas <taylor.thomas@microsoft.com>
* fix: clear the discovery cache after CRDs are installed
This fixes an issue in which a chart could not contain both a CRD and an instance of that CRD. It works around a stale cache by forcing cache invalidation whenever a CRD is added.
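A rough sketch of the invalidation step (the getter is the standard cli-runtime RESTClientGetter; the function name is illustrative):

    import "k8s.io/cli-runtime/pkg/genericclioptions"

    // invalidateDiscovery forces the cached discovery data to be refetched so the
    // freshly installed CRD's API resources become visible to later lookups.
    func invalidateDiscovery(getter genericclioptions.RESTClientGetter) error {
        dc, err := getter.ToDiscoveryClient()
        if err != nil {
            return err
        }
        dc.Invalidate()
        return nil
    }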
Closes #6316
Signed-off-by: Matt Butcher <matt.butcher@microsoft.com>
* fix: wait for CRD to register before allowing CRDs to be installed
This fixes an issue with the previous version of this patch in which the CRD would not be available quickly enough.
Signed-off-by: Matt Butcher <matt.butcher@microsoft.com>
* feat: use Wait() to wait for CRDs to be ready
This forward-ports the CRD wait logic to Helm 3, and then uses that to wait for CRDs to be registered.
Signed-off-by: Matt Butcher <matt.butcher@microsoft.com>
* ref: moved the scheme modification to an appropriate place.
Signed-off-by: Matt Butcher <matt.butcher@microsoft.com>
* fix: turned warnings into fatal errors, fixed spelling, clear cache once
Signed-off-by: Matt Butcher <matt.butcher@microsoft.com>