Merge branch 'master' into feat/app-version

Signed-off-by: Kevin Labesse <kevin@labesse.me>
pull/4961/head
Labesse Kévin (committed by GitHub)
commit 5183b6425b

@ -84,12 +84,12 @@ your PR will be rejected by the automated DCO check.
Whether you are a user or contributor, official support channels include:
- GitHub [issues](https://github.com/helm/helm/issues/new)
- Slack [Kubernetes Slack](http://slack.kubernetes.io/):
- User: #helm-users
- Contributor: #helm-dev
- [Issues](https://github.com/helm/helm/issues)
- Slack:
- User: [#helm-users](https://kubernetes.slack.com/messages/C0NH30761/details/)
- Contributor: [#helm-dev](https://kubernetes.slack.com/messages/C51E88VDG/)
Before opening a new issue or submitting a new pull request, it's helpful to search the project - it's likely that another user has already reported the issue you're facing, or it's a known issue that we're already aware of.
Before opening a new issue or submitting a new pull request, it's helpful to search the project - it's likely that another user has already reported the issue you're facing, or it's a known issue that we're already aware of. It is also worth asking on the Slack channels.
## Milestones
@ -180,33 +180,33 @@ contributing to Helm. All issue types follow the same general lifecycle. Differe
Coding conventions and standards are explained in the official developer docs:
[Developers Guide](docs/developers.md)
The next section contains more information on the workflow followed for PRs
The next section contains more information on the workflow followed for Pull Requests.
## Pull Requests
Like any good open source project, we use Pull Requests to track code changes
Like any good open source project, we use Pull Requests (PRs) to track code changes.
### PR Lifecycle
1. PR creation
- PRs are usually created to fix or else be a subset of other PRs that fix a particular issue.
- We more than welcome PRs that are currently in progress. They are a great way to keep track of
important work that is in-flight, but useful for others to see. If a PR is a work in progress,
it **must** be prefaced with "WIP: [title]". Once the PR is ready for review, remove "WIP" from
the title.
- It is preferred, but not required, to have a PR tied to a specific issue.
- It is preferred, but not required, to have a PR tied to a specific issue. There can be
circumstances where if it is a quick fix then an issue might be overkill. The details provided
in the PR description would suffice in this case.
2. Triage
- The maintainer in charge of triaging will apply the proper labels for the issue. This should
include at least a size label, `bug` or `feature`, and `awaiting review` once all labels are applied.
See the [Labels section](#labels) for full details on the definitions of labels
See the [Labels section](#labels) for full details on the definitions of labels.
- Add the PR to the correct milestone. This should be the same as the issue the PR closes.
3. Assigning reviews
- Once a review has the `awaiting review` label, maintainers will review them as schedule permits.
The maintainer who takes the issue should self-request a review.
- Reviews from others in the community, especially those who have encountered a bug or have
requested a feature, are highly encouraged, but not required. Maintainer reviews **are** required
before any merge
- Any PR with the `size/large` label requires 2 review approvals from maintainers before it can be
merged. Those with `size/medium` are per the judgement of the maintainers
merged. Those with `size/medium` or `size/small` are per the judgement of the maintainers.
4. Reviewing/Discussion
- Once a maintainer begins reviewing a PR, they will remove the `awaiting review` label and add
the `in progress` label so the person submitting knows that it is being worked on. This is
@ -214,17 +214,24 @@ Like any good open source project, we use Pull Requests to track code changes
- All reviews will be completed using Github review tool.
- A "Comment" review should be used when there are questions about the code that should be
answered, but that don't involve code changes. This type of review does not count as approval.
- A "Changes Requested" review indicates that changes to the code need to be made before they will be merged.
- Reviewers should update labels as needed (such as `needs rebase`)
5. Address comments by answering questions or changing code
- A "Changes Requested" review indicates that changes to the code need to be made before they will be
merged.
- Reviewers (maintainers) should update labels as needed (such as `needs rebase`).
- Reviews are also welcome from others in the community, especially those who have encountered a bug or
have requested a feature. In the code review, a message can be added, as well as `LGTM` if the PR is
good to merge. It's also possible to add comments to specific lines in a file to give context
to the comment.
5. PR owner should try to be responsive to comments by answering questions or changing code. If the
owner is unsure of any comment, reach out to the person who added the comment in
[#helm-dev](https://kubernetes.slack.com/messages/C51E88VDG/). Once all comments have been addressed,
the PR is ready to be merged.
6. Merge or close
- PRs should stay open until merged or if they have not been active for more than 30 days.
This will help keep the PR queue to a manageable size and reduce noise. Should the PR need
to stay open (like in the case of a WIP), the `keep open` label can be added.
- If the owner of the PR is listed in `OWNERS`, that user **must** merge their own PRs
or explicitly request another OWNER do that for them.
- If the owner of a PR is _not_ listed in `OWNERS`, any core committer may
merge the PR once it is approved.
- If the owner of the PR is listed in `OWNERS`, that user **must** merge their own PRs or explicitly
request another OWNER do that for them.
- If the owner of a PR is _not_ listed in `OWNERS`, any maintainer may merge the PR once it is approved.
#### Documentation PRs

@ -41,6 +41,7 @@ If you want to use a package manager:
- [Homebrew](https://brew.sh/) users can use `brew install kubernetes-helm`.
- [Chocolatey](https://chocolatey.org/) users can use `choco install kubernetes-helm`.
- [Scoop](https://scoop.sh/) users can use `scoop install helm`.
- [GoFish](https://gofi.sh/) users can use `gofish install helm`.
To rapidly get Helm up and running, start with the [Quick Start Guide](https://docs.helm.sh/using_helm/#quickstart-guide).

@ -212,6 +212,7 @@ __helm_convert_bash_to_zsh() {
-e "s/${LWORD}compopt${RWORD}/__helm_compopt/g" \
-e "s/${LWORD}declare${RWORD}/__helm_declare/g" \
-e "s/\\\$(type${RWORD}/\$(__helm_type/g" \
-e 's/aliashash\["\(\w\+\)"\]/aliashash[\1]/g' \
<<'BASH_COMPLETION_EOF'
`
out.Write([]byte(zshInitialization))

@ -132,6 +132,7 @@ type installCmd struct {
appVersion string
timeout int64
wait bool
atomic bool
repoURL string
username string
password string
@ -191,6 +192,8 @@ func newInstallCmd(c helm.Interface, out io.Writer) *cobra.Command {
}
inst.chartPath = cp
inst.client = ensureHelmClient(inst.client)
inst.wait = inst.wait || inst.atomic
return inst.run()
},
}
@ -214,6 +217,7 @@ func newInstallCmd(c helm.Interface, out io.Writer) *cobra.Command {
f.StringVar(&inst.appVersion, "app-version", "", "specify an app version for the release")
f.Int64Var(&inst.timeout, "timeout", 300, "time in seconds to wait for any individual Kubernetes operation (like Jobs for hooks)")
f.BoolVar(&inst.wait, "wait", false, "if set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment are in a ready state before marking the release as successful. It will wait for as long as --timeout")
f.BoolVar(&inst.atomic, "atomic", false, "if set, installation process purges chart on fail, also sets --wait flag")
f.StringVar(&inst.repoURL, "repo", "", "chart repository url where to locate the requested chart")
f.StringVar(&inst.username, "username", "", "chart repository username where to locate the requested chart")
f.StringVar(&inst.password, "password", "", "chart repository password where to locate the requested chart")
@ -253,8 +257,8 @@ func (i *installCmd) run() error {
fmt.Printf("FINAL NAME: %s\n", i.name)
}
if msgs := validation.IsDNS1123Label(i.name); i.name != "" && len(msgs) > 0 {
return fmt.Errorf("release name %s is not a valid DNS label: %s", i.name, strings.Join(msgs, ";"))
if msgs := validation.IsDNS1123Subdomain(i.name); i.name != "" && len(msgs) > 0 {
return fmt.Errorf("release name %s is invalid: %s", i.name, strings.Join(msgs, ";"))
}
// Check chart requirements to make sure all dependencies are present in /charts
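A minimal console sketch of what this relaxed validation permits (the chart path is hypothetical): a dotted release name is a valid DNS-1123 subdomain, so it is now accepted where the stricter label rules used to reject it.

```console
$ helm install ./mychart --name foo.bar
```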
@ -313,6 +317,23 @@ func (i *installCmd) run() error {
helm.InstallWait(i.wait),
helm.InstallDescription(i.description))
if err != nil {
if i.atomic {
fmt.Fprintf(os.Stdout, "INSTALL FAILED\nPURGING CHART\nError: %v\n", prettyError(err))
deleteSideEffects := &deleteCmd{
name: i.name,
disableHooks: i.disableHooks,
purge: true,
timeout: i.timeout,
description: "",
dryRun: i.dryRun,
out: i.out,
client: i.client,
}
if err := deleteSideEffects.run(); err != nil {
return err
}
fmt.Fprintf(os.Stdout, "Successfully purged a chart!\n")
}
return prettyError(err)
}
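A hedged usage sketch of the purge-on-failure path above (chart path and release name are hypothetical): with `--atomic`, a failed install is purged so the release name can be reused, and `--wait` is implied.

```console
$ helm install ./mychart --name apollo --atomic --timeout 120
```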

@ -113,6 +113,14 @@ func TestInstall(t *testing.T) {
expected: "apollo",
resp: helm.ReleaseMock(&helm.MockReleaseOptions{Name: "apollo"}),
},
// Install, with atomic
{
name: "install with a atomic",
args: []string{"testdata/testcharts/alpine"},
flags: strings.Split("--name apollo", " "),
expected: "apollo",
resp: helm.ReleaseMock(&helm.MockReleaseOptions{Name: "apollo"}),
},
// Install, using the name-template
{
name: "install with name-template",
@ -169,7 +177,6 @@ func TestInstall(t *testing.T) {
name: "install chart with release name using periods",
args: []string{"testdata/testcharts/alpine"},
flags: []string{"--name", "foo.bar"},
err: true,
},
{
name: "install chart with release name using underscores",

@ -183,7 +183,7 @@ func generateLabels(labels map[string]string) map[string]string {
return labels
}
// parseNodeSelectors parses a comma delimited list of key=values pairs into a map.
// parseNodeSelectorsInto parses a comma delimited list of key=values pairs into a map.
func parseNodeSelectorsInto(labels string, m map[string]string) error {
kv := strings.Split(labels, ",")
for _, v := range kv {

@ -50,7 +50,7 @@ type Options struct {
// AutoMountServiceAccountToken determines whether or not the service account should be added to Tiller.
AutoMountServiceAccountToken bool
// Force allows to force upgrading tiller if deployed version is greater than current version
// ForceUpgrade allows forcing an upgrade of Tiller if the deployed version is greater than the current version
ForceUpgrade bool
// ImageSpec identifies the image Tiller will use when deployed.

@ -47,10 +47,11 @@ func deleteService(client corev1.ServicesGetter, namespace string) error {
}
// deleteDeployment deletes the Tiller Deployment resource
// We need to use the reaper instead of the kube API because GC for deployment dependents
// is not yet supported at the k8s server level (<= 1.5)
func deleteDeployment(client kubernetes.Interface, namespace string) error {
err := client.Extensions().Deployments(namespace).Delete(deploymentName, &metav1.DeleteOptions{})
policy := metav1.DeletePropagationBackground
err := client.AppsV1().Deployments(namespace).Delete(deploymentName, &metav1.DeleteOptions{
PropagationPolicy: &policy,
})
return ingoreNotFound(err)
}

@ -31,7 +31,8 @@ This command rolls back a release to a previous revision.
The first argument of the rollback command is the name of a release, and the
second is a revision (version) number. To see revision numbers, run
'helm history RELEASE'.
'helm history RELEASE'. If you'd like to roll back to the previous release, use
'helm rollback [RELEASE] 0'.
`
type rollbackCmd struct {

@ -147,8 +147,8 @@ func (t *templateCmd) run(cmd *cobra.Command, args []string) error {
}
}
if msgs := validation.IsDNS1123Label(t.releaseName); t.releaseName != "" && len(msgs) > 0 {
return fmt.Errorf("release name %s is not a valid DNS label: %s", t.releaseName, strings.Join(msgs, ";"))
if msgs := validation.IsDNS1123Subdomain(t.releaseName); t.releaseName != "" && len(msgs) > 0 {
return fmt.Errorf("release name %s is invalid: %s", t.releaseName, strings.Join(msgs, ";"))
}
// Check chart requirements to make sure all dependencies are present in /charts

@ -112,21 +112,21 @@ func TestTemplateCmd(t *testing.T) {
desc: "verify the release name using capitals is invalid",
args: []string{subchart1ChartPath, "--name", "FOO"},
expectKey: "subchart1/templates/service.yaml",
expectError: "is not a valid DNS label",
expectError: "is invalid",
},
{
name: "check_invalid_name_uppercase",
desc: "verify the release name using periods is invalid",
args: []string{subchart1ChartPath, "--name", "foo.bar"},
expectKey: "subchart1/templates/service.yaml",
expectError: "is not a valid DNS label",
expectValue: "release-name: \"foo.bar\"",
},
{
name: "check_invalid_name_uppercase",
desc: "verify the release name using underscores is invalid",
args: []string{subchart1ChartPath, "--name", "foo_bar"},
expectKey: "subchart1/templates/service.yaml",
expectError: "is not a valid DNS label",
expectError: "is invalid",
},
{
name: "check_release_is_install",
@ -160,7 +160,7 @@ func TestTemplateCmd(t *testing.T) {
name: "check_invalid_name_template",
desc: "verify the relase name generate by template is invalid",
args: []string{subchart1ChartPath, "--name-template", "foobar-{{ b64enc \"abc\" }}-baz"},
expectError: "is not a valid DNS label",
expectError: "is invalid",
},
{
name: "check_name_template",

@ -106,6 +106,7 @@ type upgradeCmd struct {
resetValues bool
reuseValues bool
wait bool
atomic bool
repoURL string
username string
password string
@ -143,6 +144,7 @@ func newUpgradeCmd(client helm.Interface, out io.Writer) *cobra.Command {
upgrade.release = args[0]
upgrade.chart = args[1]
upgrade.client = ensureHelmClient(upgrade.client)
upgrade.wait = upgrade.wait || upgrade.atomic
return upgrade.run()
},
@ -169,6 +171,7 @@ func newUpgradeCmd(client helm.Interface, out io.Writer) *cobra.Command {
f.BoolVar(&upgrade.resetValues, "reset-values", false, "when upgrading, reset the values to the ones built into the chart")
f.BoolVar(&upgrade.reuseValues, "reuse-values", false, "when upgrading, reuse the last release's values and merge in any overrides from the command line via --set and -f. If '--reset-values' is specified, this is ignored.")
f.BoolVar(&upgrade.wait, "wait", false, "if set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment are in a ready state before marking the release as successful. It will wait for as long as --timeout")
f.BoolVar(&upgrade.atomic, "atomic", false, "if set, upgrade process rolls back changes made in case of failed upgrade, also sets --wait flag")
f.StringVar(&upgrade.repoURL, "repo", "", "chart repository url where to locate the requested chart")
f.StringVar(&upgrade.username, "username", "", "chart repository username where to locate the requested chart")
f.StringVar(&upgrade.password, "password", "", "chart repository password where to locate the requested chart")
@ -193,6 +196,8 @@ func (u *upgradeCmd) run() error {
return err
}
releaseHistory, err := u.client.ReleaseHistory(u.release, helm.WithMaxHistory(1))
if u.install {
// If a release does not exist, install it. If another error occurs during
// the check, ignore the error and continue with the upgrade.
@ -200,7 +205,6 @@ func (u *upgradeCmd) run() error {
// The returned error is a grpc.rpcError that wraps the message from the original error.
// So we're stuck doing string matching against the wrapped error, which is nested somewhere
// inside of the grpc.rpcError message.
releaseHistory, err := u.client.ReleaseHistory(u.release, helm.WithMaxHistory(1))
if err == nil {
if u.namespace == "" {
@ -235,6 +239,7 @@ func (u *upgradeCmd) run() error {
timeout: u.timeout,
wait: u.wait,
description: u.description,
atomic: u.atomic,
}
return ic.run()
}
@ -279,6 +284,25 @@ func (u *upgradeCmd) run() error {
helm.UpgradeWait(u.wait),
helm.UpgradeDescription(u.description))
if err != nil {
fmt.Fprintf(u.out, "UPGRADE FAILED\nROLLING BACK\nError: %v\n", prettyError(err))
if u.atomic {
rollback := &rollbackCmd{
out: u.out,
client: u.client,
name: u.release,
dryRun: u.dryRun,
recreate: u.recreate,
force: u.force,
timeout: u.timeout,
wait: u.wait,
description: "",
revision: releaseHistory.Releases[0].Version,
disableHooks: u.disableHooks,
}
if err := rollback.run(); err != nil {
return err
}
}
return fmt.Errorf("UPGRADE FAILED: %v", prettyError(err))
}
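A hedged usage sketch of the rollback-on-failure path above (release and chart names are hypothetical): with `--atomic`, a failed upgrade is rolled back to the revision recorded just before the upgrade, and `--wait` is implied.

```console
$ helm upgrade funny-bunny ./mychart --atomic
```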

@ -123,6 +123,14 @@ func TestUpgradeCmd(t *testing.T) {
expected: "Release \"funny-bunny\" has been upgraded. Happy Helming!\n",
rels: []*release.Release{helm.ReleaseMock(&helm.MockReleaseOptions{Name: "funny-bunny", Version: 5, Chart: ch2})},
},
{
name: "install a release with 'upgrade --atomic'",
args: []string{"funny-bunny", chartPath},
flags: []string{"--atomic"},
resp: helm.ReleaseMock(&helm.MockReleaseOptions{Name: "funny-bunny", Version: 6, Chart: ch}),
expected: "Release \"funny-bunny\" has been upgraded. Happy Helming!\n",
rels: []*release.Release{helm.ReleaseMock(&helm.MockReleaseOptions{Name: "funny-bunny", Version: 6, Chart: ch})},
},
{
name: "install a release with 'upgrade --install'",
args: []string{"zany-bunny", chartPath},

@ -63,7 +63,7 @@ data:
dessert: cake
```
## Overriding Values from a Parent Chart
## Overriding Values of a Child Chart
Our original chart, `mychart` is now the _parent_ chart of `mysubchart`. This relationship is based entirely on the fact that `mysubchart` is within `mychart/charts`.
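A minimal sketch of what that relationship allows, reusing the guide's `dessert: cake` default above (the override value is illustrative): the parent chart's `values.yaml` overrides a child's values by nesting them under the subchart's name.

```yaml
# mychart/values.yaml (illustrative)
mysubchart:
  dessert: ice cream   # overrides the child's own `dessert: cake`
```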

@ -98,10 +98,7 @@ data:
Variables are normally not "global". They are scoped to the block in which they are declared. Earlier, we assigned `$relname` in the top level of the template. That variable will be in scope for the entire template. But in our last example, `$key` and `$val` will only be in scope inside of the `{{range...}}{{end}}` block.
However, there is one variable that is always global - `$` - this
variable will always point to the root context. This can be very
useful when you are looping in a range need to know the chart's release
name.
However, there is one variable that is always global - `$` - this variable will always point to the root context. This can be very useful when you are looping in a range and need to know the chart's release name.
An example illustrating this:
```yaml

@ -36,6 +36,12 @@ is required, and will print an error message when that entry is missing:
value: {{ required "A valid .Values.who entry required!" .Values.who }}
```
When using the `include` function, you can pass it a custom object tree built from the current context by using the `dict` function:
```yaml
{{- include "mytpl" (dict "key1" .Values.originalKey1 "key2" .Values.originalKey2) }}
```
## Quote Strings, Don't Quote Integers
When you are working with string data, you are always safer quoting the
@ -255,9 +261,9 @@ embed each of the components.
Two strong design patterns are illustrated by these projects:
**SAP's [OpenStack chart](https://github.com/sapcc/openstack-helm):** This chart
installs a full OpenStack IaaS on Kubernetes. All of the charts are collected
together in one GitHub repository.
**SAP's [Converged charts](https://github.com/sapcc/helm-charts):** These charts
install SAP Converged Cloud, a full OpenStack IaaS, on Kubernetes. All of the charts are collected
together in one GitHub repository, except for a few submodules.
**Deis's [Workflow](https://github.com/deis/workflow/tree/master/charts/workflow):**
This chart exposes the entire Deis PaaS system with one chart. But it's different

@ -1,6 +1,6 @@
image:
repository: alpine
tag: 3.3
tag: latest
pullPolicy: IfNotPresent
restartPolicy: Never

@ -32,6 +32,6 @@ spec:
restartPolicy: {{ .Values.restartPolicy }}
containers:
- name: post-install-job
image: "alpine:3.3"
image: "alpine:latest"
# All we're going to do is sleep for a while, then exit.
command: ["/bin/sleep", "{{ .Values.sleepyTime }}"]

@ -14,7 +14,7 @@ index: >-
image:
repository: nginx
tag: 1.11.0
tag: alpine
pullPolicy: IfNotPresent
service:

@ -79,6 +79,7 @@ helm install [CHART] [flags]
```
--app-version string specify an app version for the release
--atomic if set, installation process purges chart on fail, also sets --wait flag
--ca-file string verify certificates of HTTPS-enabled servers using this CA bundle
--cert-file string identify HTTPS client using this SSL certificate file
--dep-up run helm dependency update before installing the chart
@ -130,4 +131,4 @@ helm install [CHART] [flags]
* [helm](helm.md) - The Helm package manager for Kubernetes.
###### Auto generated by spf13/cobra on 22-Nov-2018
###### Auto generated by spf13/cobra on 28-Jan-2019
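For illustration only (chart name, release name, and version are hypothetical), the new flag slots in alongside the existing install options:

```console
$ helm install stable/mariadb --name apollo --app-version 1.2.3
```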

@ -9,7 +9,8 @@ This command rolls back a release to a previous revision.
The first argument of the rollback command is the name of a release, and the
second is a revision (version) number. To see revision numbers, run
'helm history RELEASE'.
'helm history RELEASE'. If you'd like to roll back to the previous release, use
'helm rollback [RELEASE] 0'.
```
@ -51,4 +52,4 @@ helm rollback [flags] [RELEASE] [REVISION]
* [helm](helm.md) - The Helm package manager for Kubernetes.
###### Auto generated by spf13/cobra on 10-Aug-2018
###### Auto generated by spf13/cobra on 29-Jan-2019
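A short console sketch of the behavior documented above (the release name is hypothetical): revision `0` means the previous revision, so it does not need to be looked up first.

```console
$ helm history wintering-rodent      # list revision numbers
$ helm rollback wintering-rodent 0   # roll back to the previous revision
```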

@ -66,6 +66,7 @@ helm upgrade [RELEASE] [CHART] [flags]
```
--app-version string specify the app version to use for the upgrade
--atomic if set, upgrade process rolls back changes made in case of failed upgrade, also sets --wait flag
--ca-file string verify certificates of HTTPS-enabled servers using this CA bundle
--cert-file string identify HTTPS client using this SSL certificate file
--description string specify the description to use for the upgrade, rather than the default
@ -117,4 +118,4 @@ helm upgrade [RELEASE] [CHART] [flags]
* [helm](helm.md) - The Helm package manager for Kubernetes.
###### Auto generated by spf13/cobra on 22-Nov-2018
###### Auto generated by spf13/cobra on 28-Jan-2019

@ -45,7 +45,7 @@ brew install kubernetes-helm
(Note: There is also a formula for emacs-helm, which is a different
project.)
### From Chocolatey (Windows)
### From Chocolatey or scoop (Windows)
Members of the Kubernetes community have contributed a [Helm package](https://chocolatey.org/packages/kubernetes-helm) build to
[Chocolatey](https://chocolatey.org/). This package is generally up to date.
@ -54,6 +54,12 @@ Members of the Kubernetes community have contributed a [Helm package](https://ch
choco install kubernetes-helm
```
The binary can also be installed via [`scoop`](https://scoop.sh) command-line installer.
```
scoop install helm
```
## From Script
Helm now has an installer script that will automatically grab the latest version

@ -53,6 +53,7 @@ or [pull request](https://github.com/helm/helm/pulls).
- [helm-stop](https://github.com/IBM/helm-stop) - Plugin for stopping a release pods
- [helm-template](https://github.com/technosophos/helm-template) - Debug/render templates client-side
- [helm-tiller](https://github.com/adamreese/helm-tiller) - Additional commands to work with Tiller
- [helm-tiller-info](https://github.com/maorfr/helm-tiller-info) - Plugin which prints information about Tiller
- [helm-unittest](https://github.com/lrills/helm-unittest) - Plugin for unit testing chart locally with YAML
- [Tillerless Helm v2](https://github.com/rimusz/helm-tiller) - Helm plugin for using Tiller locally and in CI/CD pipelines
@ -88,7 +89,6 @@ Tools layered on top of Helm or Tiller.
Platforms, distributions, and services that include Helm support.
- [Cabin](http://www.skippbox.com/cabin/) - Mobile App for Managing Kubernetes
- [Fabric8](https://fabric8.io) - Integrated development platform for Kubernetes
- [Jenkins X](http://jenkins-x.io/) - open source automated CI/CD for Kubernetes which uses Helm for [promoting](http://jenkins-x.io/about/features/#promotion) applications through [environments via GitOps](http://jenkins-x.io/about/features/#environments)
- [Kubernetic](https://kubernetic.com/) - Kubernetes Desktop Client

@ -69,9 +69,10 @@ When Helm clients are connecting from outside of the cluster, the security betwe
Contrary to the previous [Enabling TLS](#enabling-tls) section, this section does not involve running a tiller server pod in your cluster (for what it's worth, that lines up with the current [helm v3 proposal](https://github.com/helm/community/blob/master/helm-v3/000-helm-v3.md)), thus there is no gRPC endpoint (and thus there's no need to create & manage TLS certificates to secure each gRPC endpoint).
Steps:
* Fetch the latest helm release tarball from the [GitHub release page](https://github.com/helm/helm/releases), and extract and move `helm` and `tiller` somewhere on your `$PATH`.
* "Server": Run `tiller --storage=secret`. (Note that `tiller` has a default value of ":44134" for the `--listen` argument.)
* Client: In another terminal (and on the same host that the aforementioned `tiller` command was run for the previous bullet): Run `export HELM_HOST=:44134`, and then run `helm` commands as usual.
- Fetch the latest helm release tarball from the [GitHub release page](https://github.com/helm/helm/releases), and extract and move `helm` and `tiller` somewhere on your `$PATH`.
- "Server": Run `tiller --storage=secret`. (Note that `tiller` has a default value of ":44134" for the `--listen` argument.)
- Client: In another terminal (and on the same host that the aforementioned `tiller` command was run for the previous bullet): Run `export HELM_HOST=:44134`, and then run `helm` commands as usual.
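A condensed console sketch of the steps above (two terminals on the same host; the storage backend and port are the ones named in the bullets):

```console
$ tiller --storage=secret      # terminal 1: Tiller listens on :44134 by default
$ export HELM_HOST=:44134      # terminal 2: point the client at the local Tiller
$ helm version                 # any helm command now talks to the local Tiller
```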
### Tiller's Release Information

@ -1,4 +1,4 @@
# Using Helm
# Using Helm
This guide explains the basics of using Helm (and Tiller) to manage
packages on your Kubernetes cluster. It assumes that you have already
@ -215,7 +215,10 @@ You can then override any of these settings in a YAML formatted file,
and then pass that file during installation.
```console
$ echo '{mariadbUser: user0, mariadbDatabase: user0db}' > config.yaml
$ cat << EOF > config.yaml
mariadbUser: user0
mariadbDatabase: user0db
EOF
$ helm install -f config.yaml stable/mariadb
```

@ -93,6 +93,12 @@ func loadArchiveFiles(in io.Reader) ([]*BufferedFile, error) {
continue
}
switch hd.Typeflag {
// We don't want to process these extension header files.
case tar.TypeXGlobalHeader, tar.TypeXHeader:
continue
}
// Archive could contain \ if generated on Windows
delimiter := "/"
if strings.ContainsRune(hd.Name, '\\') {

@ -302,7 +302,7 @@ func (h *Client) RunReleaseTest(rlsName string, opts ...ReleaseTestOption) (<-ch
return h.test(ctx, req)
}
// PingTiller pings the Tiller pod and ensure's that it is up and running
// PingTiller pings the Tiller pod and ensures that it is up and running
func (h *Client) PingTiller() error {
ctx := NewContext()
return h.ping(ctx)

@ -257,7 +257,7 @@ func (c *FakeClient) RunReleaseTest(rlsName string, opts ...ReleaseTestOption) (
return results, errc
}
// PingTiller pings the Tiller pod and ensure's that it is up and running
// PingTiller pings the Tiller pod and ensures that it is up and running
func (c *FakeClient) PingTiller() error {
return nil
}

@ -474,7 +474,7 @@ type VersionOption func(*options)
// the defaults used when running the `helm upgrade` command.
type UpdateOption func(*options)
// RollbackOption allows specififying various settings configurable
// RollbackOption allows specifying various settings configurable
// by the helm client user for overriding the defaults used when
// running the `helm rollback` command.
type RollbackOption func(*options)

@ -23,11 +23,12 @@ import (
goerrors "errors"
"fmt"
"io"
"k8s.io/apimachinery/pkg/api/meta"
"log"
"strings"
"time"
jsonpatch "github.com/evanphx/json-patch"
"github.com/evanphx/json-patch"
appsv1 "k8s.io/api/apps/v1"
appsv1beta1 "k8s.io/api/apps/v1beta1"
appsv1beta2 "k8s.io/api/apps/v1beta2"
@ -60,6 +61,8 @@ const MissingGetHeader = "==> MISSING\nKIND\t\tNAME\n"
// ErrNoObjectsVisited indicates that during a visit operation, no matching objects were found.
var ErrNoObjectsVisited = goerrors.New("no objects visited")
var metadataAccessor = meta.NewAccessor()
// Client represents a client capable of communicating with the Kubernetes API.
type Client struct {
cmdutil.Factory
@ -308,6 +311,19 @@ func (c *Client) Update(namespace string, originalReader, targetReader io.Reader
for _, info := range original.Difference(target) {
c.Log("Deleting %q in %s...", info.Name, info.Namespace)
if err := info.Get(); err != nil {
c.Log("Unable to get obj %q, err: %s", info.Name, err)
}
annotations, err := metadataAccessor.Annotations(info.Object)
if err != nil {
c.Log("Unable to get annotations on %q, err: %s", info.Name, err)
}
if annotations != nil && annotations[ResourcePolicyAnno] == KeepPolicy {
c.Log("Skipping delete of %q due to annotation [%s=%s]", info.Name, ResourcePolicyAnno, KeepPolicy)
continue
}
if err := deleteResource(info); err != nil {
c.Log("Failed to delete %q, err: %s", info.Name, err)
}

@ -151,6 +151,8 @@ func TestUpdate(t *testing.T) {
return newResponse(200, &listB.Items[1])
case p == "/namespaces/default/pods/squid" && m == "DELETE":
return newResponse(200, &listB.Items[1])
case p == "/namespaces/default/pods/squid" && m == "GET":
return newResponse(200, &listA.Items[2])
default:
t.Fatalf("unexpected request: %s %s", req.Method, req.URL.Path)
return nil, nil
@ -183,6 +185,7 @@ func TestUpdate(t *testing.T) {
"/namespaces/default/pods/otter:GET",
"/namespaces/default/pods/dolphin:GET",
"/namespaces/default/pods:POST",
"/namespaces/default/pods/squid:GET",
"/namespaces/default/pods/squid:DELETE",
}
if len(expectedActions) != len(actions) {
@ -194,6 +197,18 @@ func TestUpdate(t *testing.T) {
t.Errorf("expected %s request got %s", v, actions[k])
}
}
// Test resource policy is respected
actions = nil
listA.Items[2].ObjectMeta.Annotations = map[string]string{ResourcePolicyAnno: KeepPolicy}
if err := c.Update(v1.NamespaceDefault, objBody(&listA), objBody(&listB), false, false, 0, false); err != nil {
t.Fatal(err)
}
for _, v := range actions {
if v == "/namespaces/default/pods/squid:DELETE" {
t.Errorf("should not have deleted squid - it has helm.sh/resource-policy=keep")
}
}
}
func TestBuild(t *testing.T) {

@ -0,0 +1,26 @@
/*
Copyright The Helm Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kube
// ResourcePolicyAnno is the annotation name for a resource policy
const ResourcePolicyAnno = "helm.sh/resource-policy"
// KeepPolicy is the resource policy type for keep
//
// This resource policy type allows resources to skip being deleted
// during an uninstallRelease action.
const KeepPolicy = "keep"
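A hedged sketch of how the annotation defined above is used in a chart (resource kind and name are hypothetical): a resource carrying it is skipped when Helm would otherwise delete it.

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: keep-me                  # hypothetical
  annotations:
    "helm.sh/resource-policy": keep
```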

@ -24,15 +24,6 @@ import (
"k8s.io/helm/pkg/tiller/environment"
)
// resourcePolicyAnno is the annotation name for a resource policy
const resourcePolicyAnno = "helm.sh/resource-policy"
// keepPolicy is the resource policy type for keep
//
// This resource policy type allows resources to skip being deleted
// during an uninstallRelease action.
const keepPolicy = "keep"
func filterManifestsToKeep(manifests []Manifest) ([]Manifest, []Manifest) {
remaining := []Manifest{}
keep := []Manifest{}
@ -43,14 +34,14 @@ func filterManifestsToKeep(manifests []Manifest) ([]Manifest, []Manifest) {
continue
}
resourcePolicyType, ok := m.Head.Metadata.Annotations[resourcePolicyAnno]
resourcePolicyType, ok := m.Head.Metadata.Annotations[kube.ResourcePolicyAnno]
if !ok {
remaining = append(remaining, m)
continue
}
resourcePolicyType = strings.ToLower(strings.TrimSpace(resourcePolicyType))
if resourcePolicyType == keepPolicy {
if resourcePolicyType == kube.KeepPolicy {
keep = append(keep, m)
}

@ -22,6 +22,6 @@ COPY helm /helm
COPY tiller /tiller
EXPOSE 44134
USER nobody
USER 65534
ENTRYPOINT ["/tiller"]

@ -21,6 +21,6 @@ ENV HOME /tmp
COPY tiller /tiller
EXPOSE 44134
USER nobody
USER 65534
ENTRYPOINT ["/tiller", "--experimental-release"]
