Flags can sometimes be used with an = sign, such as --kube-context=prod.
In this case, the variable ${flagname} retains the = sign as part of the
flag name. However, in zsh completion, an = sign cannot be part of an
index of the associative array 'flaghash' or else it causes an error.
This commit strips the = sign out when using ${flagname} as an index.
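A minimal sketch of the approach, using shell parameter expansion (the exact
variable handling in the generated completion script may differ, and
`flagvalue` is illustrative):
```
# Strip everything from the first "=" onward before using the flag name as a
# key; zsh rejects "=" inside associative-array keys.
flagname=${flagname%%=*}    # "--kube-context=prod" -> "--kube-context"
flaghash[${flagname}]=${flagvalue}
```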
Note that this is not a big deal since flaghash is not actually used
anywhere in Helm completion. I believe it is made available by the
Cobra framework in case some completions choose to use it.
Signed-off-by: Marc Khouzam <marc.khouzam@ville.montreal.qc.ca>
This adds the `--probe=[true|false]` flag to `tiller`, so that you can selectively disable the following probing HTTP endpoints:
- `/readiness`
- `/liveness`
- `/metrics`
One of the expected use cases of this feature would be to avoid consuming an extra port per `tiller`, which becomes more problematic in the [tillerless](https://github.com/rimusz/helm-tiller) setup.
The default is `--probe=true`, which starts the probing endpoints as before.
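With the default in effect, the endpoints can be exercised locally; an
illustrative check, assuming tiller is running with the default probe port
44135:
```
$ curl -s http://localhost:44135/readiness
$ curl -s http://localhost:44135/metrics
```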
Implementation-wise, I intentionally made it so that the number of changed lines is as small as possible.
That is, I opted not to factor out the probes server starting logic into its own function, like `startProbesServer`.
Instead, I just added conditionals to the logging part and the server starting part.
As it isn't easily E2E-testable, I've verified that it works by running the following commands manually.
With probing enabled (default):
```
$ ./tiller
[main] 2019/04/06 09:20:15 Starting Tiller v2.12+unreleased (tls=false)
[main] 2019/04/06 09:20:15 GRPC listening on :44134
[main] 2019/04/06 09:20:15 Probes listening on :44135
[main] 2019/04/06 09:20:15 Storage driver is ConfigMap
[main] 2019/04/06 09:20:15 Max history per release is 0
```
With probing disabled, you'll see that tiller is no longer listening on 44135:
```
$ ./tiller --probe=false
[main] 2019/04/06 09:20:07 Starting Tiller v2.12+unreleased (tls=false)
[main] 2019/04/06 09:20:07 GRPC listening on :44134
[main] 2019/04/06 09:20:07 Storage driver is ConfigMap
[main] 2019/04/06 09:20:07 Max history per release is 0
```
To verify that tiller can disable the probing endpoints, I ran multiple tillers at once, with and without `--probe=false`.
The first test runs three tillers without `--probe=false`.
As expected, two of the tillers fail due to the conflicting port, as you can see in the message `Probes server died: listen tcp :44135: bind: address already in use`.
```
$ bash -c 'for i in {0..2}; do (./tiller --listen=:$((44136+$i)) 2>&1 | sed "s/^/tiller $i: /" )& done; sleep 3 ; pkill tiller'
tiller 1: [main] 2019/04/06 09:57:49 Starting Tiller v2.12+unreleased (tls=false)
tiller 1: [main] 2019/04/06 09:57:49 GRPC listening on :44137
tiller 1: [main] 2019/04/06 09:57:49 Probes listening on :44135
tiller 1: [main] 2019/04/06 09:57:49 Storage driver is ConfigMap
tiller 1: [main] 2019/04/06 09:57:49 Max history per release is 0
tiller 0: [main] 2019/04/06 09:57:49 Starting Tiller v2.12+unreleased (tls=false)
tiller 0: [main] 2019/04/06 09:57:49 GRPC listening on :44136
tiller 0: [main] 2019/04/06 09:57:49 Probes listening on :44135
tiller 0: [main] 2019/04/06 09:57:49 Storage driver is ConfigMap
tiller 0: [main] 2019/04/06 09:57:49 Max history per release is 0
tiller 0: [main] 2019/04/06 09:57:49 Probes server died: listen tcp :44135: bind: address already in use
tiller 2: [main] 2019/04/06 09:57:49 Starting Tiller v2.12+unreleased (tls=false)
tiller 2: [main] 2019/04/06 09:57:49 GRPC listening on :44138
tiller 2: [main] 2019/04/06 09:57:49 Probes listening on :44135
tiller 2: [main] 2019/04/06 09:57:49 Storage driver is ConfigMap
tiller 2: [main] 2019/04/06 09:57:49 Max history per release is 0
tiller 2: [main] 2019/04/06 09:57:49 Probes server died: listen tcp :44135: bind: address already in use
```
The second test runs three tillers with `--probe=false`.
All tillers run without errors, which indicates this feature is working as expected:
```
$ bash -c 'for i in {0..2}; do (./tiller --listen=:$((44136+$i)) --probe=false 2>&1 | sed "s/^/tiller $i: /" )& done; sleep 3 ; pkill tiller'
tiller 1: [main] 2019/04/06 09:58:18 Starting Tiller v2.12+unreleased (tls=false)
tiller 1: [main] 2019/04/06 09:58:18 GRPC listening on :44137
tiller 1: [main] 2019/04/06 09:58:18 Storage driver is ConfigMap
tiller 1: [main] 2019/04/06 09:58:18 Max history per release is 0
tiller 2: [main] 2019/04/06 09:58:18 Starting Tiller v2.12+unreleased (tls=false)
tiller 2: [main] 2019/04/06 09:58:18 GRPC listening on :44138
tiller 2: [main] 2019/04/06 09:58:18 Storage driver is ConfigMap
tiller 2: [main] 2019/04/06 09:58:18 Max history per release is 0
tiller 0: [main] 2019/04/06 09:58:18 Starting Tiller v2.12+unreleased (tls=false)
tiller 0: [main] 2019/04/06 09:58:18 GRPC listening on :44136
tiller 0: [main] 2019/04/06 09:58:18 Storage driver is ConfigMap
tiller 0: [main] 2019/04/06 09:58:18 Max history per release is 0
```
Resolves #3159
Signed-off-by: Yusuke KUOKA <ykuoka@gmail.com>
As many people have requested and discussed in #3159.
The variable names are kept the same as before. The corresponding command-line flag is named, and its description written, after the existing flag for gRPC.
The scope of this change is intentionally limited to the minimum. That is, I have not yet added `--probe=false`, because it shouldn't be a blocker if we can change the port number.
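An illustrative invocation, assuming the new flag ended up named
`--probe-listen` to mirror the existing `--listen` flag for gRPC (the ports
are placeholders):
```
$ ./tiller --listen=:44136 --probe-listen=:44137
```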
Signed-off-by: Yusuke KUOKA <ykuoka@gmail.com>
Makes sure CRDs installed through the crd_install hook reach the `established` state before the hook is considered complete.
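The `established` condition is the same one that can be inspected manually
with kubectl; for example (illustrative, with a placeholder CRD name):
```
$ kubectl wait --for condition=established --timeout=60s crd/foos.example.com
```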
Signed-off-by: Morten Torkildsen <mortent@google.com>
This is the fix for only one particular, but important, case: the case when
a new resource has been added to the chart and there is an error in the
chart, which leads to a release failure.
In this case, after the first failed release upgrade, the new resource will
have been created in the cluster. On the next release upgrade there will be
the error `no RESOURCE with the name NAME found` for this newly created
resource from the previous release upgrade.
The root of this problem is a side effect of the first release process.
The release invariant says: if a resource exists in the Kubernetes cluster,
then it should exist in the release storage. But this invariant has been
broken by helm itself, because helm created new resources as a side effect
and did not adopt them into the release storage.
To maintain the release invariant for such a case, during the release
upgrade operation all newly *successfully* created resources will be
deleted if an error occurs while updating the subsequent resources.
This behaviour is enabled only when the `--cleanup-on-fail` option is used
with `helm upgrade` or `helm rollback`.
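For example (illustrative invocations; release and chart names are placeholders):
```
$ helm upgrade my-release ./mychart --cleanup-on-fail
$ helm rollback my-release 1 --cleanup-on-fail
```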
Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
When checking the version and the desired version is not set, we follow the
redirected URL of the GitHub latest release to get the latest tag, instead
of trying to extract the tag value from the HTML content.
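A minimal shell sketch of the idea: follow the redirect and take the last
path segment of the final URL as the tag (the actual implementation may
differ):
```
$ curl -sL -o /dev/null -w '%{url_effective}\n' https://github.com/helm/helm/releases/latest | awk -F/ '{print $NF}'
```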
Closes #5480
Signed-off-by: Arief Hidayat <mr.arief.hidayat@gmail.com>
GitHub recently changed the output of the releases page.
Grepping for the exact <a> tag fixes the issue where the wrong tag was being filtered.
Signed-off-by: Matthew Fisher <matt.fisher@microsoft.com>
There was a typo in a tiller error message ("released named"); I've changed it to "a release named". Also fixed the corresponding unit test.
Signed-off-by: Mikhail Kirpichev <mkirpic@gmail.com>