doc(helm): remove Tiller reference from the docs (#4788)

* Remove Tiller reference from the docs

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>

* Update comments after review

- https://github.com/helm/helm/pull/4788#discussion_r226037034
- https://github.com/helm/helm/pull/4788#discussion_r226037064
- https://github.com/helm/helm/pull/4788#discussion_r226037806
- https://github.com/helm/helm/pull/4788#discussion_r226038492
- https://github.com/helm/helm/pull/4788#discussion_r226039202
- https://github.com/helm/helm/pull/4788#discussion_r226039894

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
Martin Hickey 7 years ago committed by Adam Reese
parent c40905ff6d
commit 82c154e2ae

@@ -20,9 +20,8 @@ Use Helm to:
 Helm is a tool that streamlines installing and managing Kubernetes applications.
 Think of it like apt/yum/homebrew for Kubernetes.

-- Helm has two parts: a client (`helm`) and a server (`tiller`)
-- Tiller runs inside of your Kubernetes cluster, and manages releases (installations)
-  of your charts.
+- Helm has two parts: a client (`helm`) and a library
+- The library renders your templates and communicates with the Kubernetes API
 - Helm runs on your laptop, CI/CD, or wherever you want it to run.
 - Charts are Helm packages that contain at least two things:
   - A description of the package (`Chart.yaml`)

@@ -24,41 +24,33 @@ For Helm, there are three important concepts:
 ## Components

-Helm has two major components:
+Helm is an executable which is implemented in two distinct parts:

 **The Helm Client** is a command-line client for end users. The client
-is responsible for the following domains:
+is responsible for the following:

 - Local chart development
 - Managing repositories
-- Interacting with the Tiller server
+- Managing releases
+- Interfacing with the Helm library
   - Sending charts to be installed
-  - Asking for information about releases
   - Requesting upgrading or uninstalling of existing releases

-**The Tiller Server** is an in-cluster server that interacts with the
-Helm client, and interfaces with the Kubernetes API server. The server
-is responsible for the following:
+**The Helm Library** provides the logic for executing all Helm operations.
+It interfaces with the Kubernetes API server and provides the following capabilities:

-- Listening for incoming requests from the Helm client
 - Combining a chart and configuration to build a release
-- Installing charts into Kubernetes, and then tracking the subsequent
-release
+- Installing charts into Kubernetes, and providing the subsequent release object
 - Upgrading and uninstalling charts by interacting with Kubernetes

-In a nutshell, the client is responsible for managing charts, and the
-server is responsible for managing releases.
+The standalone Helm library encapsulates the Helm logic so that it can be
+leveraged by different clients.

 ## Implementation

-The Helm client is written in the Go programming language, and uses the
-gRPC protocol suite to interact with the Tiller server.
-
-The Tiller server is also written in Go. It provides a gRPC server to
-connect with the client, and it uses the Kubernetes client library to
-communicate with Kubernetes. Currently, that library uses REST+JSON.
-
-The Tiller server stores information in ConfigMaps located inside of
-Kubernetes. It does not need its own database.
+The Helm client and library are written in the Go programming language.
+
+The library uses the Kubernetes client library to communicate with Kubernetes. Currently,
+that library uses REST+JSON. It stores information in Secrets located inside of Kubernetes.
+It does not need its own database.

 Configuration files are, when possible, written in YAML.

@@ -28,19 +28,17 @@ When SemVer versions are stored in Kubernetes labels, we conventionally alter th
 YAML files should be indented using _two spaces_ (and never tabs).

-## Usage of the Words Helm, Tiller, and Chart
+## Usage of the Words Helm and Chart

-There are a few small conventions followed for using the words Helm, helm, Tiller, and tiller.
+There are a few small conventions followed for using the words Helm and helm.

 - Helm refers to the project, and is often used as an umbrella term
 - `helm` refers to the client-side command
-- Tiller is the proper name of the backend
-- `tiller` is the name of the binary run on the backend
 - The term 'chart' does not need to be capitalized, as it is not a proper noun.

 When in doubt, use _Helm_ (with an uppercase 'H').

-## Restricting Tiller by Version
+## Restricting Helm by Version

 A `Chart.yaml` file can specify a `helmVersion` SemVer constraint:
@@ -55,5 +53,5 @@ supported in older versions of Helm. While this parameter will accept sophistica
 SemVer rules, the best practice is to default to the form `>=2.4.0`, where `2.4.0`
 is the version that introduced the new feature used in the chart.

-This feature was introduced in Helm 2.4.0, so any version of Tiller older than
+This feature was introduced in Helm 2.4.0, so any version of Helm older than
 2.4.0 will simply ignore this field.
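
For illustration, such a constraint might look like this in a chart's metadata (a sketch; only `helmVersion` is the field under discussion, the other fields are hypothetical):

```yaml
# Chart.yaml (sketch)
name: mychart
version: 0.1.0
helmVersion: ">=2.4.0"
```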

@@ -25,7 +25,6 @@ are recommended, and _should_ be placed onto a chart for global consistency. Tho
 Name|Status|Description
 -----|------|----------
-heritage | REC | This should always be set to `{{ .Release.Service }}`. It is for finding all things managed by Tiller.
 release | REC | This should be the `{{ .Release.Name }}`.
 chart | REC | This should be the chart name and version: `{{ .Chart.Name }}-{{ .Chart.Version \| replace "+" "_" }}`.
 app | REC | This should be the app name, reflecting the entire app. Usually `{{ template "name" . }}` is used for this. This is used by many Kubernetes manifests, and is not Helm-specific.
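
As a sketch of how these recommended labels are typically applied in a template (the chart and template names are illustrative):

```yaml
# Sketch: recommended labels on a resource
metadata:
  labels:
    release: {{ .Release.Name }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    app: {{ template "name" . }}
```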

@@ -4,7 +4,7 @@ In the previous section we looked at several ways to create and access named tem
 Helm provides access to files through the `.Files` object. Before we get going with the template examples, though, there are a few things to note about how this works:

-- It is okay to add extra files to your Helm chart. These files will be bundled and sent to Tiller. Be careful, though. Charts must be smaller than 1M because of the storage limitations of Kubernetes objects.
+- It is okay to add extra files to your Helm chart. These files will be bundled. Be careful, though. Charts must be smaller than 1M because of the storage limitations of Kubernetes objects.
 - Some files cannot be accessed through the `.Files` object, usually for security reasons.
   - Files in `templates/` cannot be accessed.
   - Files excluded using `.helmignore` cannot be accessed.
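
As a quick illustration of the `.Files` object, a template might embed a bundled file like this (a sketch; `config1.toml` is a hypothetical file at the chart root):

```yaml
# templates/configmap.yaml (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-files
data:
  config1.toml: |-
{{ .Files.Get "config1.toml" | indent 4 }}
```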

@@ -8,7 +8,6 @@ In the previous section, we use `{{.Release.Name}}` to insert the name of a rele
 - `Release`: This object describes the release itself. It has several objects inside of it:
   - `Release.Name`: The release name
-  - `Release.Service`: The name of the releasing service (always `Tiller`).
   - `Release.IsUpgrade`: This is set to `true` if the current operation is an upgrade or rollback.
   - `Release.IsInstall`: This is set to `true` if the current operation is an install.
 - `Values`: Values passed into the template from the `values.yaml` file and from user-supplied files. By default, `Values` is empty.
@@ -21,7 +20,7 @@ In the previous section, we use `{{.Release.Name}}` to insert the name of a rele
   - `Capabilities.APIVersions` is a set of versions.
   - `Capabilities.APIVersions.Has $version` indicates whether a version (`batch/v1`) is enabled on the cluster.
   - `Capabilities.KubeVersion` provides a way to look up the Kubernetes version. It has the following values: `Major`, `Minor`, `GitVersion`, `GitCommit`, `GitTreeState`, `BuildDate`, `GoVersion`, `Compiler`, and `Platform`.
-  - `Capabilities.helmVersion` provides a way to look up the Tiller version. It has the following values: `SemVer`, `GitCommit`, and `GitTreeState`.
+  - `Capabilities.HelmVersion` provides a way to look up the Helm version. It has the following values: `SemVer`, `GitCommit`, and `GitTreeState`.
 - `Template`: Contains information about the current template that is being executed
   - `Name`: A namespaced filepath to the current template (e.g. `mychart/templates/mytemplate.yaml`)
   - `BasePath`: The namespaced path to the templates directory of the current chart (e.g. `mychart/templates`).
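
For illustration, a template could surface a few of these built-ins like this (a sketch; the ConfigMap data keys are hypothetical):

```yaml
# Sketch: reading Capabilities values in a template
data:
  kubeVersion: {{ .Capabilities.KubeVersion.GitVersion | quote }}
  helmVersion: {{ .Capabilities.HelmVersion.SemVer | quote }}
  hasBatchV1: {{ .Capabilities.APIVersions.Has "batch/v1" | quote }}
```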

@@ -1,6 +1,6 @@
 # Debugging Templates

-Debugging templates can be tricky simply because the templates are rendered on the Tiller server, not the Helm client. And then the rendered templates are sent to the Kubernetes API server, which may reject the YAML files for reasons other than formatting.
+Debugging templates can be tricky because the rendered templates are sent to the Kubernetes API server, which may reject the YAML files for reasons other than formatting.

 There are a few commands that can help you debug.
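
For example, commands of this kind include (a sketch; `./mychart` is a placeholder chart path):

```console
$ helm lint ./mychart                       # static checks for common chart problems
$ helm install --debug --dry-run ./mychart  # render templates without installing
```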

@@ -18,9 +18,9 @@ mychart/
   ...
 ```

-The `templates/` directory is for template files. When Tiller evaluates a chart,
+The `templates/` directory is for template files. When Helm evaluates a chart,
 it will send all of the files in the `templates/` directory through the
-template rendering engine. Tiller then collects the results of those templates
+template rendering engine. It then collects the results of those templates
 and sends them on to Kubernetes.

 The `values.yaml` file is also important to templates. This file contains the
@@ -90,7 +90,7 @@ In virtue of the fact that this file is in the `templates/` directory, it will
 be sent through the template engine.

 It is just fine to put a plain YAML file like this in the `templates/` directory.
-When Tiller reads this template, it will simply send it to Kubernetes as-is.
+When Helm reads this template, it will simply send it to Kubernetes as-is.

 With this simple template, we now have an installable chart. And we can install
 it like this:
@@ -165,7 +165,7 @@ The template directive `{{ .Release.Name }}` injects the release name into the t
 The leading dot before `Release` indicates that we start with the top-most namespace for this scope (we'll talk about scope in a bit). So we could read `.Release.Name` as "start at the top namespace, find the `Release` object, then look inside of it for an object called `Name`".

-The `Release` object is one of the built-in objects for Helm, and we'll cover it in more depth later. But for now, it is sufficient to say that this will display the release name that Tiller assigns to our release.
+The `Release` object is one of the built-in objects for Helm, and we'll cover it in more depth later. But for now, it is sufficient to say that this will display the release name that the library assigns to our release.

 Now when we install our resource, we'll immediately see the result of using this template directive:
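
For context, the template under discussion looks roughly like this (a sketch of the tutorial's ConfigMap):

```yaml
# mychart/templates/configmap.yaml (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
```
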
@@ -187,7 +187,7 @@ instead of `mychart-configmap`.
 You can run `helm get manifest clunky-serval` to see the entire generated YAML.

-At this point, we've seen templates at their most basic: YAML files that have template directives embedded in `{{` and `}}`. In the next part, we'll take a deeper look into templates. But before moving on, there's one quick trick that can make building templates faster: When you want to test the template rendering, but not actually install anything, you can use `helm install --debug --dry-run ./mychart`. This will send the chart to the Tiller server, which will render the templates. But instead of installing the chart, it will return the rendered template to you so you can see the output:
+At this point, we've seen templates at their most basic: YAML files that have template directives embedded in `{{` and `}}`. In the next part, we'll take a deeper look into templates. But before moving on, there's one quick trick that can make building templates faster: When you want to test the template rendering, but not actually install anything, you can use `helm install --debug --dry-run ./mychart`. This will render the templates. But instead of installing the chart, it will return the rendered template to you so you can see the output:

 ```console
 $ helm install --debug --dry-run ./mychart

@@ -58,7 +58,6 @@ engine: gotpl # The name of the template engine (optional, defaults to gotpl)
 icon: A URL to an SVG or PNG image to be used as an icon (optional).
 appVersion: The version of the app that this contains (optional). This needn't be SemVer.
 deprecated: Whether this chart is deprecated (optional, boolean)
-helmVersion: The version of Tiller that this chart requires. This should be expressed as a SemVer range: ">2.0.0" (optional)
 ```

 If you are familiar with the `Chart.yaml` file format for Helm Classic, you will
@@ -91,7 +90,7 @@ rely upon or require GitHub or even Git. Consequently, it does not use
 Git SHAs for versioning at all.

 The `version` field inside of the `Chart.yaml` is used by many of the
-Helm tools, including the CLI and the Tiller server. When generating a
+Helm tools, including the CLI. When generating a
 package, the `helm package` command will use the version that it finds
 in the `Chart.yaml` as a token in the package name. The system assumes
 that the version number in the chart package name matches the version number in
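
For example (a sketch, assuming the chart's `Chart.yaml` declares `version: 0.1.0`):

```console
$ helm package ./mychart   # produces mychart-0.1.0.tgz
```
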
@@ -488,7 +487,7 @@ the Kubernetes objects from the charts and all its dependencies are
 Hence a single release is created with all the objects for the chart and its dependencies.

 The install order of Kubernetes types is given by the enumeration InstallOrder in kind_sorter.go
-(see [the Helm source file](https://github.com/kubernetes/helm/blob/master/pkg/tiller/kind_sorter.go#L26)).
+(see [the Helm source file](https://github.com/helm/helm/blob/dev-v3/pkg/tiller/kind_sorter.go#L26)).

 ## Templates and Values
@@ -574,8 +573,7 @@ cannot be overridden. As with all values, the names are _case
 sensitive_.

 - `Release.Name`: The name of the release (not the chart)
-- `Release.Service`: The service that conducted the release. Usually
-  this is `Tiller`.
+- `Release.Service`: The service that conducted the release.
 - `Release.IsUpgrade`: This is set to true if the current operation is an upgrade or rollback.
 - `Release.IsInstall`: This is set to true if the current operation is an
   install.
@@ -589,9 +587,9 @@ sensitive_.
   `{{.Files.GetString name}}` functions. You can also access the contents of the file
   as `[]byte` using `{{.Files.GetBytes}}`
 - `Capabilities`: A map-like object that contains information about the versions
-  of Kubernetes (`{{.Capabilities.KubeVersion}}`, Tiller
-  (`{{.Capabilities.HelmVersion}}`, and the supported Kubernetes API versions
-  (`{{.Capabilities.APIVersions.Has "batch/v1"`)
+  of Kubernetes (`{{.Capabilities.KubeVersion}}`), Helm
+  (`{{.Capabilities.HelmVersion}}`), and the supported Kubernetes
+  API versions (`{{.Capabilities.APIVersions.Has "batch/v1"}}`)

 **NOTE:** Any unknown Chart.yaml fields will be dropped. They will not
 be accessible inside of the `Chart` object. Thus, Chart.yaml cannot be

@@ -45,10 +45,10 @@ consider the lifecycle for a `helm install`. By default, the lifecycle
 looks like this:

 1. User runs `helm install foo`
-2. Chart is loaded into Tiller
-3. After some verification, Tiller renders the `foo` templates
-4. Tiller loads the resulting resources into Kubernetes
-5. Tiller returns the release name (and other data) to the client
+2. The Helm library install API is called
+3. After some verification, the library renders the `foo` templates
+4. The library loads the resulting resources into Kubernetes
+5. The library returns the release object (and other data) to the client
 6. The client exits

 Helm defines two hooks for the `install` lifecycle: `pre-install` and
@@ -56,24 +56,24 @@ Helm defines two hooks for the `install` lifecycle: `pre-install` and
 hooks, the lifecycle is altered like this:

 1. User runs `helm install foo`
-2. Chart is loaded into Tiller
-3. After some verification, Tiller renders the `foo` templates
-4. Tiller prepares to execute the `pre-install` hooks (loading hook resources into
+2. The Helm library install API is called
+3. After some verification, the library renders the `foo` templates
+4. The library prepares to execute the `pre-install` hooks (loading hook resources into
 Kubernetes)
-5. Tiller sorts hooks by weight (assigning a weight of 0 by default) and by name for those hooks with the same weight in ascending order.
-6. Tiller then loads the hook with the lowest weight first (negative to positive)
-7. Tiller waits until the hook is "Ready"
-8. Tiller loads the resulting resources into Kubernetes. Note that if the `--wait`
-flag is set, Tiller will wait until all resources are in a ready state
+5. The library sorts hooks by weight (assigning a weight of 0 by default) and by name for those hooks with the same weight in ascending order.
+6. The library then loads the hook with the lowest weight first (negative to positive)
+7. The library waits until the hook is "Ready" (except for CRDs)
+8. The library loads the resulting resources into Kubernetes. Note that if the `--wait`
+flag is set, the library will wait until all resources are in a ready state
 and will not run the `post-install` hook until they are ready.
-9. Tiller executes the `post-install` hook (loading hook resources)
-10. Tiller waits until the hook is "Ready"
-11. Tiller returns the release name (and other data) to the client
+9. The library executes the `post-install` hook (loading hook resources)
+10. The library waits until the hook is "Ready"
+11. The library returns the release object (and other data) to the client
 12. The client exits

 What does it mean to wait until a hook is ready? This depends on the
-resource declared in the hook. If the resources is a `Job` kind, Tiller
+resource declared in the hook. If the resource is a `Job` kind, the library
 will wait until the job successfully runs to completion. And if the job
 fails, the release will fail. This is a _blocking operation_, so the
 Helm client will pause while the Job is run.
@@ -90,7 +90,7 @@ to `0` if weight is not important.
 ### Hook resources are not managed with corresponding releases

 The resources that a hook creates are not tracked or managed as part of the
-release. Once Tiller verifies that the hook has reached its ready state, it
+release. Once Helm verifies that the hook has reached its ready state, it
 will leave the hook resource alone.

 Practically speaking, this means that if you create resources in a hook, you
@@ -170,7 +170,7 @@ deterministic executing order. Weights are defined using the following annotatio
 ```

 Hook weights can be positive or negative numbers but must be represented as
-strings. When Tiller starts the execution cycle of hooks of a particular Kind it
+strings. When Helm starts the execution cycle of hooks of a particular Kind it
 will sort those hooks in ascending order.

 It is also possible to define policies that determine when to delete corresponding hook resources. Hook deletion policies are defined using the following annotation:
@@ -181,9 +181,9 @@ It is also possible to define policies that determine when to delete correspondi
 ```

 You can choose one or more defined annotation values:

-* `"hook-succeeded"` specifies Tiller should delete the hook after the hook is successfully executed.
-* `"hook-failed"` specifies Tiller should delete the hook if the hook failed during execution.
-* `"before-hook-creation"` specifies Tiller should delete the previous hook before the new hook is launched.
+* `"hook-succeeded"` specifies Helm should delete the hook after the hook is successfully executed.
+* `"hook-failed"` specifies Helm should delete the hook if the hook failed during execution.
+* `"before-hook-creation"` specifies Helm should delete the previous hook before the new hook is launched.
 ### Automatically uninstall hook from previous release
@@ -195,4 +195,4 @@ One might choose `"helm.sh/hook-delete-policy": "before-hook-creation"` over `"h
 * It may be necessary to keep succeeded hook resource in kubernetes for some reason.
 * At the same time it is not desirable to do manual resource deletion before helm release upgrade.

-`"helm.sh/hook-delete-policy": "before-hook-creation"` annotation on hook causes tiller to remove the hook from previous release if there is one before the new hook is launched and can be used with another policy.
+The `"helm.sh/hook-delete-policy": "before-hook-creation"` annotation on a hook causes Helm to remove the hook from the previous release, if there is one, before the new hook is launched, and can be used with another policy.

@@ -9,10 +9,8 @@ Helm uses [Go templates](https://godoc.org/text/template) for templating
 your resource files. While Go ships several built-in functions, we have
 added many others.

-First, we added almost all of the functions in the
-[Sprig library](https://godoc.org/github.com/Masterminds/sprig). We removed two
-for security reasons: `env` and `expandenv` (which would have given chart authors
-access to Tiller's environment).
+First, we added all of the functions in the
+[Sprig library](https://godoc.org/github.com/Masterminds/sprig).

 We also added two special template functions: `include` and `required`. The `include`
 function allows you to bring in another template, and then pass the results to other
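
A quick sketch of both functions in use (the named template and the value are hypothetical):

```yaml
# Sketch: piping include into another function, and requiring a value
metadata:
  name: {{ include "mychart.fullname" . | quote }}
  labels:
    env: {{ required "A value for .Values.env is required" .Values.env }}
```
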
@@ -160,7 +158,7 @@ spec:
 See also the `helm upgrade --recreate-pods` flag for a slightly
 different way of addressing this issue.

-## Tell Tiller Not To Uninstall a Resource
+## Tell Helm Not To Uninstall a Resource

 Sometimes there are resources that should not be uninstalled when Helm runs a
 `helm uninstall`. Chart developers can add an annotation to a resource to prevent
@@ -176,7 +174,7 @@ metadata:
 (Quotation marks are required)

-The annotation `"helm.sh/resource-policy": keep` instructs Tiller to skip this
+The annotation `"helm.sh/resource-policy": keep` instructs Helm to skip this
 resource during a `helm uninstall` operation. _However_, this resource becomes
 orphaned. Helm will no longer manage it in any way. This can lead to problems
 if using `helm install --replace` on a release that has already been uninstalled, but

@@ -1,17 +1,16 @@
 # Developers Guide

 This guide explains how to set up your environment for developing on
-Helm and Tiller.
+Helm.

 ## Prerequisites

 - The latest version of Go
 - The latest version of Dep
 - A Kubernetes cluster w/ kubectl (optional)
-- The gRPC toolchain
 - Git

-## Building Helm/Tiller
+## Building Helm

 We use Make to build our programs. The simplest way to get started is:
@@ -23,18 +22,15 @@ NOTE: This will fail if not running from the path `$GOPATH/src/k8s.io/helm`. The
 directory `k8s.io` should not be a symlink or `build` will not find the relevant
 packages.

-This will build both Helm and Tiller. `make bootstrap` will attempt to
+This will build both Helm and the Helm library. `make bootstrap` will attempt to
 install certain tools if they are missing.

 To run all the tests (without running the tests for `vendor/`), run
 `make test`.

-To run Helm and Tiller locally, you can run `bin/helm` or `bin/tiller`.
+To run Helm locally, you can run `bin/helm`.

-- Helm and Tiller are known to run on macOS and most Linuxes, including
-Alpine.
-- Tiller must have access to a Kubernetes cluster. It learns about the
-cluster by examining the Kube config files that `kubectl` uses.
+- Helm is known to run on macOS and most Linuxes, including Alpine.
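
A sketch of the typical loop, using the make targets named above:

```console
$ make bootstrap build   # fetch tools if missing, then compile
$ make test              # run the test suite
$ bin/helm version       # exercise the freshly built client
```
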
 ### Man pages
@@ -49,30 +45,6 @@ $ export MANPATH=$GOPATH/src/k8s.io/helm/docs/man:$MANPATH
 $ man helm
 ```

-## gRPC and Protobuf
-
-Helm and Tiller communicate using gRPC. To get started with gRPC, you will need to...
-
-- Install `protoc` for compiling protobuf files. Releases are
-[here](https://github.com/google/protobuf/releases)
-- Run Helm's `make bootstrap` to generate the `protoc-gen-go` plugin and
-place it in `bin/`.
-
-Note that you need to be on protobuf 3.2.0 (`protoc --version`). The
-version of `protoc-gen-go` is tied to the version of gRPC used in
-Kubernetes. So the plugin is maintained locally.
-
-While the gRPC and ProtoBuf specs remain silent on indentation, we
-require that the indentation style matches the Go format specification.
-Namely, protocol buffers should use tab-based indentation and rpc
-declarations should follow the style of Go function declarations.
-
-### The Helm API (HAPI)
-
-We use gRPC as an API layer. See `pkg/proto/hapi` for the generated Go code,
-and `_proto` for the protocol buffer definitions.
-
-To regenerate the Go files from the protobuf source, `make protoc`.
-
 ## Docker Images
@@ -85,41 +57,7 @@ GCR registry.
 For development, we highly recommend using the
 [Kubernetes Minikube](https://github.com/kubernetes/minikube)
-developer-oriented distribution. Once this is installed, you can use
-`helm init` to install into the cluster. Note that version of tiller you're using for
-development may not be available in Google Cloud Container Registry. If you're getting
-image pull errors, you can override the version of Tiller. Example:
-
-```console
-helm init --tiller-image=gcr.io/kubernetes-helm/tiller:2.7.2
-```
-
-Or use the latest version:
-
-```console
-helm init --canary-image
-```
-
-For developing on Tiller, it is sometimes more expedient to run Tiller locally
-instead of packaging it into an image and running it in-cluster. You can do
-this by telling the Helm client to us a local instance.
-
-```console
-$ make build
-$ bin/tiller
-```
-
-And to configure the Helm client, use the `--host` flag or export the `HELM_HOST`
-environment variable:
-
-```console
-$ export HELM_HOST=localhost:44134
-$ helm install foo
-```
-
-(Note that you do not need to use `helm init` when you are running Tiller directly)
-
-Tiller should run on any >= 1.3 Kubernetes cluster.
+developer-oriented distribution.

 ## Contribution Guidelines
@@ -191,8 +129,6 @@ Common commit types:
 Common scopes:

 - helm: The Helm CLI
-- tiller: The Tiller server
-- proto: Protobuf definitions
 - pkg/lint: The lint package. Follow a similar convention for any
 package
 - `*`: two or more scopes

@@ -94,7 +94,7 @@ chart repository server or any other HTTP server.
 ## Release

-When a chart is installed, Tiller (the Helm server) creates a _release_
+When a chart is installed, the Helm library creates a _release_
 to track that installation.

 A single chart may be installed many times into the same cluster, and
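
For example (a sketch; each install is tracked separately):

```console
$ helm install ./mychart   # tracked as one release
$ helm install ./mychart   # a second, independently tracked release
```
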
@@ -130,12 +130,10 @@ rollback 1| release 4 (but running the same config as release 1)
 The above table illustrates how release numbers increment across
 install, upgrade, and rollback.

-## Tiller
+## Helm Library

-Tiller is the in-cluster component of Helm. It interacts directly with
-the Kubernetes API server to install, upgrade, query, and remove
-Kubernetes resources. It also stores the objects that represent
-releases.
+The Helm Library interacts directly with the Kubernetes API server to install,
+upgrade, query, and remove Kubernetes resources.

 ## Repository (Repo, Chart Repository)

@@ -7,8 +7,7 @@ is now part of the CNCF. Many companies now contribute regularly to Helm.
 Differences from Helm Classic:

-- Helm now has both a client (`helm`) and a server (`tiller`). The
-server runs inside of Kubernetes, and manages your resources.
+- Helm now has both a client (`helm`) and a library. In version 2 it had a
+server (`tiller`), but that capability is now contained within the library.
 - Helm's chart format has changed for the better:
   - Dependencies are immutable and stored inside of a chart's `charts/`
   directory.

@@ -1,13 +1,12 @@
 # Helm Documentation

 - [Quick Start](quickstart.md) - Read me first!
-- [Installing Helm](install.md) - Install Helm and Tiller
+- [Installing Helm](install.md) - Install Helm
 - [Kubernetes Distribution Notes](kubernetes_distros.md)
 - [Frequently Asked Questions](install_faq.md)
 - [Using Helm](using_helm.md) - Learn the Helm tools
 - [Plugins](plugins.md)
 - [Role-based Access Control](rbac.md)
-- [TLS/SSL for Helm and Tiller](tiller_ssl.md) - Use Helm-to-Tiller encryption
 - [Developing Charts](charts.md) - An introduction to chart development
 - [Chart Lifecycle Hooks](charts_hooks.md)
 - [Chart Tips and Tricks](charts_tips_and_tricks.md)
@@ -31,7 +30,7 @@
 - [Appendix A: YAML Techniques](chart_template_guide/yaml_techniques.md)
 - [Appendix B: Go Data Types](chart_template_guide/data_types.md)
 - [Related Projects](related.md) - More Helm tools, articles, and plugins
-- [Architecture](architecture.md) - Overview of the Helm/Tiller design
+- [Architecture](architecture.md) - Overview of the Helm design
 - [Developers](developers.md) - About the developers
 - [History](history.md) - A brief history of the project
 - [Glossary](glossary.md) - Decode the Helm vocabulary

@@ -1,15 +1,12 @@
 # Installing Helm

 There are two parts to Helm: The Helm client (`helm`) and the Helm
-server (Tiller). This guide shows how to install the client, and then
-proceeds to show two ways to install the server.
+library. This guide shows how to install both together.

-**IMPORTANT**: If you are responsible for ensuring your cluster is a controlled environment, especially when resources are shared, it is strongly recommended installing Tiller using a secured configuration. For guidance, see [Securing your Helm Installation](securing_installation.md).
-
-## Installing the Helm Client
+## Installing Helm

-The Helm client can be installed either from source, or from pre-built binary
-releases.
+Helm can be installed either from source, or from pre-built binary releases.

 ### From the Binary Releases
@@ -48,7 +45,7 @@ choco install kubernetes-helm
 ## From Script

 Helm now has an installer script that will automatically grab the latest version
-of the Helm client and [install it locally](https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get).
+of Helm and [install it locally](https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get).

 You can fetch that script, and then execute it locally. It's well documented so
 that you can read through it and understand what it is doing before you run it.
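
For example (a sketch using the script URL above):

```console
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get
$ chmod 700 get_helm.sh
$ ./get_helm.sh
```
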
@@ -96,255 +93,7 @@ The `bootstrap` target will attempt to install dependencies, rebuild the
 `vendor/` tree, and validate configuration.

 The `build` target will compile `helm` and place it in `bin/helm`.
-Tiller is also compiled, and is placed in `bin/tiller`.
-
-## Installing Tiller
-
-Tiller, the server portion of Helm, typically runs inside of your
-Kubernetes cluster. But for development, it can also be run locally, and
-configured to talk to a remote Kubernetes cluster.
-
-### Easy In-Cluster Installation
-
-The easiest way to install `tiller` into the cluster is simply to run
-`helm init`. This will validate that `helm`'s local environment is set
-up correctly (and set it up if necessary). Then it will connect to
-whatever cluster `kubectl` connects to by default (`kubectl config
-view`). Once it connects, it will install `tiller` into the
-`kube-system` namespace.
-
-After `helm init`, you should be able to run `kubectl get pods --namespace
-kube-system` and see Tiller running.
-
-You can explicitly tell `helm init` to...
-
-- Install the canary build with the `--canary-image` flag
-- Install a particular image (version) with `--tiller-image`
-- Install to a particular cluster with `--kube-context`
-- Install into a particular namespace with `--tiller-namespace`
-
-Once Tiller is installed, running `helm version` should show you both
-the client and server version. (If it shows only the client version,
-`helm` cannot yet connect to the server. Use `kubectl` to see if any
-`tiller` pods are running.)
-
-Helm will look for Tiller in the `kube-system` namespace unless
-`--tiller-namespace` or `TILLER_NAMESPACE` is set.
-
-### Installing Tiller Canary Builds
-
-Canary images are built from the `master` branch. They may not be
-stable, but they offer you the chance to test out the latest features.
-
-The easiest way to install a canary image is to use `helm init` with the
-`--canary-image` flag:
-
-```console
-$ helm init --canary-image
-```
-
-This will use the most recently built container image. You can always
-uninstall Tiller by deleting the Tiller deployment from the
-`kube-system` namespace using `kubectl`.
-
-### Running Tiller Locally
-
-For development, it is sometimes easier to work on Tiller locally, and
-configure it to connect to a remote Kubernetes cluster.
-
-The process of building Tiller is explained above.
-
-Once `tiller` has been built, simply start it:
-
-```console
-$ bin/tiller
-Tiller running on :44134
-```
-
-When Tiller is running locally, it will attempt to connect to the
-Kubernetes cluster that is configured by `kubectl`. (Run `kubectl config
-view` to see which cluster that is.)
-
-You must tell `helm` to connect to this new local Tiller host instead of
-connecting to the one in-cluster. There are two ways to do this. The
-first is to specify the `--host` option on the command line. The second
-is to set the `$HELM_HOST` environment variable.
-
-```console
-$ export HELM_HOST=localhost:44134
-$ helm version # Should connect to localhost.
-Client: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"db...", GitTreeState:"dirty"}
-Server: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"a5...", GitTreeState:"dirty"}
-```
-
-Importantly, even when running locally, Tiller will store release
-configuration in ConfigMaps inside of Kubernetes.
-
-## Upgrading Tiller
-
-As of Helm 2.2.0, Tiller can be upgraded using `helm init --upgrade`.
-
-For older versions of Helm, or for manual upgrades, you can use `kubectl` to modify
-the Tiller image:
-
-```console
-$ export TILLER_TAG=v2.0.0-beta.1 # Or whatever version you want
-$ kubectl --namespace=kube-system set image deployments/tiller-deploy tiller=gcr.io/kubernetes-helm/tiller:$TILLER_TAG
-deployment "tiller-deploy" image updated
-```
-
-Setting `TILLER_TAG=canary` will get the latest snapshot of master.
-
-## Deleting or Reinstalling Tiller
-
-Because Tiller stores its data in Kubernetes ConfigMaps, you can safely
-delete and re-install Tiller without worrying about losing any data. The
-recommended way of deleting Tiller is with `kubectl delete deployment
-tiller-deploy --namespace kube-system`, or more concisely `helm reset`.
-
-Tiller can then be re-installed from the client with:
-
-```console
-$ helm init
-```
-
-## Advanced Usage
-
-`helm init` provides additional flags for modifying Tiller's deployment
-manifest before it is installed.
-
-### Using `--node-selectors`
-
-The `--node-selectors` flag allows us to specify the node labels required
-for scheduling the Tiller pod.
-
-The example below will create the specified label under the nodeSelector
-property.
-
-```
-helm init --node-selectors "beta.kubernetes.io/os"="linux"
-```
-
-The installed deployment manifest will contain our node selector label.
-
-```
-...
-spec:
-  template:
-    spec:
-      nodeSelector:
-        beta.kubernetes.io/os: linux
-...
-```
-
-### Using `--override`
-
-`--override` allows you to specify properties of Tiller's
-deployment manifest. Unlike the `--set` command used elsewhere in Helm,
-`helm init --override` manipulates the specified properties of the final
-manifest (there is no "values" file). Therefore you may specify any valid
-value for any valid property in the deployment manifest.
-
-#### Override annotation
-
-In the example below we use `--override` to add the revision property and set
-its value to 1.
-
-```
-helm init --override metadata.annotations."deployment\.kubernetes\.io/revision"="1"
-```
-
-Output:
-
-```
-apiVersion: extensions/v1beta1
-kind: Deployment
-metadata:
-  annotations:
-    deployment.kubernetes.io/revision: "1"
-...
-```
-
-#### Override affinity
-
-In the example below we set properties for node affinity. Multiple
-`--override` commands may be combined to modify different properties of the
-same list item.
-
-```
-helm init --override "spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight"="1" --override "spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].preference.matchExpressions[0].key"="e2e-az-name"
-```
-
-The specified properties are combined into the
-"preferredDuringSchedulingIgnoredDuringExecution" property's first
-list item.
-
-```
-...
-spec:
-  strategy: {}
-  template:
-    ...
-    spec:
-      affinity:
-        nodeAffinity:
-          preferredDuringSchedulingIgnoredDuringExecution:
-          - preference:
-              matchExpressions:
-              - key: e2e-az-name
-                operator: ""
-            weight: 1
-...
-```
-
-### Using `--output`
-
-The `--output` flag allows us skip the installation of Tiller's deployment
-manifest and simply output the deployment manifest to stdout in either
-JSON or YAML format. The output may then be modified with tools like `jq`
-and installed manually with `kubectl`.
-
-In the example below we execute `helm init` with the `--output json` flag.
-
-```
-helm init --output json
-```
-
-The Tiller installation is skipped and the manifest is output to stdout
-in JSON format.
-
-```
-"apiVersion": "extensions/v1beta1",
-"kind": "Deployment",
-"metadata": {
-    "creationTimestamp": null,
-    "labels": {
-        "app": "helm",
-        "name": "tiller"
-    },
-    "name": "tiller-deploy",
-    "namespace": "kube-system"
-},
-...
-```
-
-### Storage backends
-
-By default, `tiller` stores release information in `ConfigMaps` in the namespace
-where it is running. As of Helm 2.7.0, there is now a beta storage backend that
-uses `Secrets` for storing release information. This was added for additional
-security in protecting charts in conjunction with the release of `Secret`
-encryption in Kubernetes.
-
-To enable the secrets backend, you'll need to init Tiller with the following
-options:
-
-```shell
-helm init --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'
-```
-
-Currently, if you want to switch from the default backend to the secrets
-backend, you'll have to do the migration for this on your own. When this backend
-graduates from beta, there will be a more official path of migration

 ## Conclusion
@@ -352,5 +101,5 @@ In most cases, installation is as simple as getting a pre-built `helm` binary
 and running `helm init`. This document covers additional cases for those
 who want to do more sophisticated things with Helm.

-Once you have the Helm Client and Tiller successfully installed, you can
+Once you have the Helm Client successfully installed, you can
 move on to using Helm to manage charts.

@@ -35,7 +35,7 @@ Helm.
 ## Installing

-I'm trying to install Helm/Tiller, but something is not right.
+I'm trying to install Helm, but something is not right.

 **Q: How do I put the Helm client files somewhere other than ~/.helm?**
@@ -49,53 +49,14 @@ helm init --client-only
 Note that if you have existing repositories, you will need to re-add them
 with `helm repo add...`.

-**Q: How do I configure Helm, but not install Tiller?**
+**Q: How do I configure Helm?**

-A: By default, `helm init` will ensure that the local `$HELM_HOME` is configured,
-and then install Tiller on your cluster. To locally configure, but not install
-Tiller, use `helm init --client-only`.
+A: By default, `helm init` will ensure that the local `$HELM_HOME` is configured.

-**Q: How do I manually install Tiller on the cluster?**
-
-A: Tiller is installed as a Kubernetes `deployment`. You can get the manifest
-by running `helm init --dry-run --debug`, and then manually install it with
-`kubectl`. It is suggested that you do not remove or change the labels on that
-deployment, as they are sometimes used by supporting scripts and tools.
-
-**Q: Why do I get `Error response from daemon: target is unknown` during Tiller install?**
-
-A: Users have reported being unable to install Tiller on Kubernetes instances that
-are using Docker 1.13.0. The root cause of this was a bug in Docker that made
-that one version incompatible with images pushed to the Docker registry by
-earlier versions of Docker.
-
-This [issue](https://github.com/docker/docker/issues/30083) was fixed shortly
-after the release, and is available in Docker 1.13.1-RC1 and later.
-
 ## Getting Started

-I successfully installed Helm/Tiller but I can't use it.
+I successfully installed Helm but I can't use it.

-**Q: Trying to use Helm, I get the error "client transport was broken"**
-
-```
-E1014 02:26:32.885226 16143 portforward.go:329] an error occurred forwarding 37008 -> 44134: error forwarding port 44134 to pod tiller-deploy-2117266891-e4lev_kube-system, uid : unable to do port forwarding: socat not found.
-2016/10/14 02:26:32 transport: http2Client.notifyError got notified that the client transport was broken EOF.
-Error: transport is closing
-```
-
-A: This is usually a good indication that Kubernetes is not set up to allow port forwarding.
-
-Typically, the missing piece is `socat`. If you are running CoreOS, we have been
-told that it may have been misconfigured on installation. The CoreOS team
-recommends reading this:
-
-- https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html
-
-Here are a few resolved issues that may help you get started:
-
-- https://github.com/kubernetes/helm/issues/1371
-- https://github.com/kubernetes/helm/issues/966
-
 **Q: Trying to use Helm, I get the error "lookup XXXXX on 8.8.8.8:53: no such host"**
@@ -136,96 +97,11 @@ certificates and certificate authorities. These need to be stored in a Kubernete
 config file (Default: `~/.kube/config`) so that `kubectl` and `helm` can access
 them.

-**Q: When I run a Helm command, I get an error about the tunnel or proxy**
-
-A: Helm uses the Kubernetes proxy service to connect to the Tiller server.
-If the command `kubectl proxy` does not work for you, neither will Helm.
-
-Typically, the error is related to a missing `socat` service.
-
-**Q: Tiller crashes with a panic**
-
-When I run a command on Helm, Tiller crashes with an error like this:
-
-```
-Tiller is listening on :44134
-Probes server is listening on :44135
-Storage driver is ConfigMap
-Cannot initialize Kubernetes connection: the server has asked for the client to provide credentials 2016-12-20 15:18:40.545739 I | storage.go:37: Getting release "bailing-chinchilla" (v1) from storage
-panic: runtime error: invalid memory address or nil pointer dereference
-[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x8053d5]
-
-goroutine 77 [running]:
-panic(0x1abbfc0, 0xc42000a040)
-        /usr/local/go/src/runtime/panic.go:500 +0x1a1
-k8s.io/helm/vendor/k8s.io/kubernetes/pkg/client/unversioned.(*ConfigMaps).Get(0xc4200c6200, 0xc420536100, 0x15, 0x1ca7431, 0x6, 0xc42016b6a0)
-        /home/ubuntu/.go_workspace/src/k8s.io/helm/vendor/k8s.io/kubernetes/pkg/client/unversioned/configmap.go:58 +0x75
-k8s.io/helm/pkg/storage/driver.(*ConfigMaps).Get(0xc4201d6190, 0xc420536100, 0x15, 0xc420536100, 0x15, 0xc4205360c0)
-        /home/ubuntu/.go_workspace/src/k8s.io/helm/pkg/storage/driver/cfgmaps.go:69 +0x62
-k8s.io/helm/pkg/storage.(*Storage).Get(0xc4201d61a0, 0xc4205360c0, 0x12, 0xc400000001, 0x12, 0x0, 0xc420200070)
-        /home/ubuntu/.go_workspace/src/k8s.io/helm/pkg/storage/storage.go:38 +0x160
-k8s.io/helm/pkg/tiller.(*ReleaseServer).uniqName(0xc42002a000, 0x0, 0x0, 0xc42016b800, 0xd66a13, 0xc42055a040, 0xc420558050, 0xc420122001)
-        /home/ubuntu/.go_workspace/src/k8s.io/helm/pkg/tiller/release_server.go:577 +0xd7
-k8s.io/helm/pkg/tiller.(*ReleaseServer).prepareRelease(0xc42002a000, 0xc42027c1e0, 0xc42002a001, 0xc42016bad0, 0xc42016ba08)
-        /home/ubuntu/.go_workspace/src/k8s.io/helm/pkg/tiller/release_server.go:630 +0x71
-k8s.io/helm/pkg/tiller.(*ReleaseServer).InstallRelease(0xc42002a000, 0x7f284c434068, 0xc420250c00, 0xc42027c1e0, 0x0, 0x31a9, 0x31a9)
-        /home/ubuntu/.go_workspace/src/k8s.io/helm/pkg/tiller/release_server.go:604 +0x78
-k8s.io/helm/pkg/proto/hapi/services._ReleaseService_InstallRelease_Handler(0x1c51f80, 0xc42002a000, 0x7f284c434068, 0xc420250c00, 0xc42027c190, 0x0, 0x0, 0x0, 0x0, 0x0)
-        /home/ubuntu/.go_workspace/src/k8s.io/helm/pkg/proto/hapi/services/tiller.pb.go:747 +0x27d
-k8s.io/helm/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc4202f3ea0, 0x28610a0, 0xc420078000, 0xc420264690, 0xc420166150, 0x288cbe8, 0xc420250bd0, 0x0, 0x0)
-        /home/ubuntu/.go_workspace/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:608 +0xc50
-k8s.io/helm/vendor/google.golang.org/grpc.(*Server).handleStream(0xc4202f3ea0, 0x28610a0, 0xc420078000, 0xc420264690, 0xc420250bd0)
-        /home/ubuntu/.go_workspace/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:766 +0x6b0
-k8s.io/helm/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc420124710, 0xc4202f3ea0, 0x28610a0, 0xc420078000, 0xc420264690)
-        /home/ubuntu/.go_workspace/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:419 +0xab
-created by k8s.io/helm/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
-        /home/ubuntu/.go_workspace/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:420 +0xa3
-```
-
-A: Check your security settings for Kubernetes.
-
-A panic in Tiller is almost always the result of a failure to negotiate with the
-Kubernetes API server (at which point Tiller can no longer do anything useful, so
-it panics and exits).
-
-Often, this is a result of authentication failing because the Pod in which Tiller
-is running does not have the right token.
-
-To fix this, you will need to change your Kubernetes configuration. Make sure
-that `--service-account-private-key-file` from `controller-manager` and
-`--service-account-key-file` from apiserver point to the _same_ x509 RSA key.
-
-## Upgrading
-
-My Helm used to work, then I upgrade. Now it is broken.
-
-**Q: After upgrade, I get the error "Client version is incompatible". What's wrong?**
-
-Tiller and Helm have to negotiate a common version to make sure that they can safely
-communicate without breaking API assumptions. That error means that the version
-difference is too great to safely continue. Typically, you need to upgrade
-Tiller manually for this.
-
-The [Installation Guide](install.md) has definitive information about safely
-upgrading Helm and Tiller.
-
-The rules for version numbers are as follows:
-
-- Pre-release versions are incompatible with everything else. `Alpha.1` is incompatible with `Alpha.2`.
-- Patch revisions _are compatible_: 1.2.3 is compatible with 1.2.4
-- Minor revisions _are not compatible_: 1.2.0 is not compatible with 1.3.0,
-though we may relax this constraint in the future.
-- Major revisions _are not compatible_: 1.0.0 is not compatible with 2.0.0.
-
 ## Uninstalling

 I am trying to remove stuff.

-**Q: When I delete the Tiller deployment, how come all the releases are still there?**
-
-Releases are stored in ConfigMaps inside of the `kube-system` namespace. You will
-have to manually delete them to get rid of the record, or use `helm uninstall --purge`.
-
 **Q: I want to delete my local Helm. Where are all its files?**

 Along with the `helm` binary, Helm stores some files in `$HELM_HOME`, which is

@ -32,20 +32,15 @@ distributions:
Some versions of Helm (v2.0.0-beta2) require you to `export KUBECONFIG=/etc/kubernetes/admin.conf`
or create a `~/.kube/config`.
## Container Linux by CoreOS
Helm requires that the kubelet have access to a copy of the `socat` program to proxy connections to the Tiller API. On Container Linux the kubelet runs inside of a [hyperkube](https://github.com/kubernetes/kubernetes/tree/master/cluster/images/hyperkube) container image that has socat. So, even though Container Linux doesn't ship `socat`, the container filesystem running the kubelet does have it. To learn more, read the [Kubelet Wrapper](https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html) docs.
## OpenShift
Helm works straightforwardly on OpenShift Online, OpenShift Dedicated, OpenShift Container Platform (version >= 3.6) or OpenShift Origin (version >= 3.6). To learn more, read [this blog post](https://blog.openshift.com/getting-started-helm-openshift/).
## Platform9
Helm is pre-installed with [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/?utm_source=helm_distro_notes). Platform9 provides access to all official Helm charts through the App Catalog UI and native Kubernetes CLI. Additional repositories can be manually added. Further details are available in this [Platform9 App Catalog article](https://platform9.com/support/deploying-kubernetes-apps-platform9-managed-kubernetes/?utm_source=helm_distro_notes).
## DC/OS
Helm has been tested and is working on Mesosphere's DC/OS 1.11 Kubernetes platform, and requires no additional configuration.

@ -76,8 +76,8 @@ $ helm install --verify mychart-0.1.0.tgz
If the keyring (containing the public key associated with the signed chart) is not in the default location, you may need to point to the
keyring with `--keyring PATH` as in the `helm package` example.
If verification fails, the install will be aborted before the chart is even rendered.
### Using Keybase.io credentials

@ -8,7 +8,7 @@ The following prerequisites are required for a successful and properly secured u
1. A Kubernetes cluster
2. Deciding what security configurations to apply to your installation, if any
3. Installing and configuring Helm.
### Install Kubernetes or have access to a cluster
@ -17,23 +17,12 @@ The following prerequisites are required for a successful and properly secured u
NOTE: Kubernetes versions prior to 1.6 have limited or no support for role-based access controls (RBAC).
Helm will figure out where to install Tiller by reading your Kubernetes
configuration file (usually `$HOME/.kube/config`). This is the same file
that `kubectl` uses.
To find out which cluster Tiller would install to, you can run
`kubectl config current-context` or `kubectl cluster-info`.
```console
$ kubectl config current-context
my-cluster
```
### Understand your Security Context
As with all powerful tools, ensure you are installing it correctly for your scenario.
If you're using Helm on a cluster that you completely control, like minikube or a cluster on a private network in which sharing is not a concern, the default installation -- which applies no security configuration -- is fine, and it's definitely the easiest. To install Helm without additional security steps, [install Helm](#Install-Helm) and then [initialize Helm](#initialize-helm).
However, if your cluster is exposed to a larger network or if you share your cluster with others -- production clusters fall into this category -- you must take extra steps to secure your installation to prevent careless or malicious actors from damaging the cluster or its data. To apply configurations that secure Helm for use in production environments and other multi-tenant scenarios, see [Securing a Helm installation](securing_installation.md).
@ -48,26 +37,14 @@ Download a binary release of the Helm client. You can use tools like
For more details, or for other options, see [the installation
guide](install.md).
## Initialize Helm
Once you have Helm ready, you can initialize the local CLI:
```console
$ helm init
```
This will install Tiller into the Kubernetes cluster you saw with
`kubectl config current-context`.
**TIP:** Want to install into a different cluster? Use the
`--kube-context` flag.
**TIP:** When you want to upgrade Tiller, just run `helm init --upgrade`.
By default, when Tiller is installed, it does not have authentication enabled.
To learn more about configuring strong TLS authentication for Tiller, consult
[the Tiller TLS guide](tiller_ssl.md).
## Install an Example Chart

@ -1,281 +0,0 @@
# Role-based Access Control
In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified. Read more about service account permissions [in the official Kubernetes docs](https://kubernetes.io/docs/admin/authorization/rbac/#service-account-permissions).
Bitnami also has a fantastic guide for [configuring RBAC in your cluster](https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/) that takes you through RBAC basics.
This guide is for users who want to restrict Tiller's capabilities to install resources to certain namespaces, or to grant a Helm client running access to a Tiller instance.
## Tiller and Role-based Access Control
You can add a service account to Tiller using the `--service-account <NAME>` flag while you're configuring Helm. As a prerequisite, you'll have to create a role binding which specifies a [role](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) and a [service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) name that have been set up in advance.
Once you have satisfied the prerequisite and have a service account with the correct permissions, you'll run a command like this: `helm init --service-account <NAME>`
### Example: Service account with cluster-admin role
```console
$ kubectl create serviceaccount tiller --namespace kube-system
serviceaccount "tiller" created
```
In `rbac-config.yaml`:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```
_Note: The cluster-admin role is created by default in a Kubernetes cluster, so you don't have to define it explicitly._
```console
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller
```
### Example: Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
In the example above, we gave Tiller admin access to the entire cluster. You are not at all required to give Tiller cluster-admin access for it to work. Instead of specifying a ClusterRole or a ClusterRoleBinding, you can specify a Role and RoleBinding to limit Tiller's scope to a particular namespace.
```console
$ kubectl create namespace tiller-world
namespace "tiller-world" created
$ kubectl create serviceaccount tiller --namespace tiller-world
serviceaccount "tiller" created
```
Define a Role that allows Tiller to manage all resources in `tiller-world` like in `role-tiller.yaml`:
```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
```
```console
$ kubectl create -f role-tiller.yaml
role "tiller-manager" created
```
In `rolebinding-tiller.yaml`,
```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-binding
  namespace: tiller-world
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```
```console
$ kubectl create -f rolebinding-tiller.yaml
rolebinding "tiller-binding" created
```
Afterwards you can run `helm init` to install Tiller in the `tiller-world` namespace.
```console
$ helm init --service-account tiller --tiller-namespace tiller-world
$HELM_HOME has been configured at /Users/awesome-user/.helm.
Tiller (the Helm server side component) has been installed into your Kubernetes Cluster.
Happy Helming!
$ helm install nginx --tiller-namespace tiller-world --namespace tiller-world
NAME: wayfaring-yak
LAST DEPLOYED: Mon Aug 7 16:00:16 2017
NAMESPACE: tiller-world
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod
NAME                   READY   STATUS              RESTARTS   AGE
wayfaring-yak-alpine   0/1     ContainerCreating   0          0s
```
### Example: Deploy Tiller in a namespace, restricted to deploying resources in another namespace
In the example above, we gave Tiller admin access to the namespace it was deployed inside. Now, let's limit Tiller's scope to deploy resources in a different namespace!
For example, let's install Tiller in the namespace `myorg-system` and allow Tiller to deploy resources in the namespace `myorg-users`.
```console
$ kubectl create namespace myorg-system
namespace "myorg-system" created
$ kubectl create serviceaccount tiller --namespace myorg-system
serviceaccount "tiller" created
```
Define a Role that allows Tiller to manage all resources in `myorg-users` like in `role-tiller.yaml`:
```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-manager
  namespace: myorg-users
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
```
```console
$ kubectl create -f role-tiller.yaml
role "tiller-manager" created
```
Bind the service account to that role. In `rolebinding-tiller.yaml`,
```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-binding
  namespace: myorg-users
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: myorg-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```
```console
$ kubectl create -f rolebinding-tiller.yaml
rolebinding "tiller-binding" created
```
We'll also need to grant Tiller access to read configmaps in `myorg-system` so it can store release information. In `role-tiller-myorg-system.yaml`:
```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: myorg-system
  name: tiller-manager
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["configmaps"]
  verbs: ["*"]
```
```console
$ kubectl create -f role-tiller-myorg-system.yaml
role "tiller-manager" created
```
And the respective role binding. In `rolebinding-tiller-myorg-system.yaml`:
```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-binding
  namespace: myorg-system
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: myorg-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```
```console
$ kubectl create -f rolebinding-tiller-myorg-system.yaml
rolebinding "tiller-binding" created
```
## Helm and Role-based Access Control
When running the Helm client in a pod, it needs certain privileges granted in order to talk to a Tiller instance. Specifically, the Helm client needs to be able to create pods, forward ports, and list pods in the namespace where Tiller is running (so it can find Tiller).
### Example: Deploy Helm in a namespace, talking to Tiller in another namespace
In this example, we will assume Tiller is running in a namespace called `tiller-world` and that the Helm client is running in a namespace called `helm-world`. By default, Tiller is running in the `kube-system` namespace.
In `helm-user.yaml`:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm
  namespace: helm-world
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: tiller-user
  namespace: tiller-world
rules:
- apiGroups:
  - ""
  resources:
  - pods/portforward
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: tiller-user-binding
  namespace: tiller-world
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-user
subjects:
- kind: ServiceAccount
  name: helm
  namespace: helm-world
```
```console
$ kubectl create -f helm-user.yaml
serviceaccount "helm" created
role "tiller-user" created
rolebinding "tiller-user-binding" created
```
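With those objects created, a Pod that runs the Helm client under the `helm` service account might look like this sketch (the Pod name, image, and tag are illustrative assumptions, not part of these docs):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: helm-client            # hypothetical name
  namespace: helm-world
spec:
  serviceAccountName: helm     # the service account created above
  containers:
  - name: helm
    image: alpine/helm:2.9.1   # hypothetical client image and tag
    command: ["helm", "ls", "--tiller-namespace", "tiller-world"]
```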

@ -25,7 +25,6 @@ or [pull request](https://github.com/kubernetes/helm/pulls).
## Helm Plugins
- [helm-tiller](https://github.com/adamreese/helm-tiller) - Additional commands to work with Tiller
- [Technosophos's Helm Plugins](https://github.com/technosophos/helm-plugins) - Plugins for GitHub, Keybase, and GPG
- [helm-template](https://github.com/technosophos/helm-template) - Debug/render templates client-side
- [Helm Value Store](https://github.com/skuid/helm-value-store) - Plugin for working with Helm deployment values
@ -33,7 +32,6 @@ or [pull request](https://github.com/kubernetes/helm/pulls).
- [helm-env](https://github.com/adamreese/helm-env) - Plugin to show current environment
- [helm-last](https://github.com/adamreese/helm-last) - Plugin to show the latest release
- [helm-nuke](https://github.com/adamreese/helm-nuke) - Plugin to destroy all releases
- [helm-local](https://github.com/adamreese/helm-local) - Plugin to run Tiller as a local daemon
- [App Registry](https://github.com/app-registry/helm-plugin) - Plugin to manage charts via the [App Registry specification](https://github.com/app-registry/spec)
- [helm-secrets](https://github.com/futuresimple/helm-secrets) - Plugin to manage and store secrets safely
- [helm-edit](https://github.com/mstrzele/helm-edit) - Plugin for editing release's values
@ -49,14 +47,12 @@ tag on their plugin repositories.
## Additional Tools
Tools layered on top of Helm.
- [AppsCode Swift](https://github.com/appscode/swift) - Ajax friendly Helm Tiller Proxy using [grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway)
- [Quay App Registry](https://coreos.com/blog/quay-application-registry-for-kubernetes.html) - Open Kubernetes application registry, including a Helm access client
- [Chartify](https://github.com/appscode/chartify) - Generate Helm charts from existing Kubernetes resources.
- [VIM-Kubernetes](https://github.com/andrewstuart/vim-kubernetes) - VIM plugin for Kubernetes and Helm
- [Landscaper](https://github.com/Eneco/landscaper/) - "Landscaper takes a set of Helm Chart references with values (a desired state), and realizes this in a Kubernetes cluster."
- [Rudder](https://github.com/AcalephStorage/rudder) - RESTful (JSON) proxy for Tiller's API
- [Helmfile](https://github.com/roboll/helmfile) - Helmfile is a declarative spec for deploying helm charts
- [Autohelm](https://github.com/reactiveops/autohelm) - Autohelm is _another_ simple declarative spec for deploying helm charts. Written in python and supports git urls as a source for helm charts.
- [Helmsman](https://github.com/Praqma/helmsman) - Helmsman is a helm-charts-as-code tool which enables installing/upgrading/protecting/moving/deleting releases from version controlled desired state files (described in a simple TOML format).
@ -67,7 +63,6 @@ Tools layered on top of Helm or Tiller.
- [Helm Chart Publisher](https://github.com/luizbafilho/helm-chart-publisher) - HTTP API for publishing Helm Charts in an easy way
- [Armada](https://github.com/att-comdev/armada) - Manage prefixed releases throughout various Kubernetes namespaces, and removes completed jobs for complex deployments. Used by the [Openstack-Helm](https://github.com/openstack/openstack-helm) team.
- [ChartMuseum](https://github.com/chartmuseum/chartmuseum) - Helm Chart Repository with support for Amazon S3 and Google Cloud Storage
- [Helm.NET](https://github.com/qmfrederik/helm) - A .NET client for Tiller's API
- [Codefresh](https://codefresh.io) - Kubernetes native CI/CD and management platform with UI dashboards for managing Helm charts and releases
## Helm Included

@ -234,8 +234,6 @@ Download Helm X.Y. The common platform binaries are here:
- [Linux](https://storage.googleapis.com/kubernetes-helm/helm-vX.Y.Z-linux-amd64.tar.gz)
- [Windows](https://storage.googleapis.com/kubernetes-helm/helm-vX.Y.Z-windows-amd64.tar.gz)
Once you have the client installed, upgrade Tiller with `helm init --upgrade`.
The [Quickstart Guide](https://docs.helm.sh/using_helm/#quickstart-guide) will get you going from there. For **upgrade instructions** or detailed installation notes, check the [install guide](https://docs.helm.sh/using_helm/#installing-helm). You can also use a [script to install](https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get) on any system with `bash`.
## What's Next

@ -1,111 +0,0 @@
# Securing your Helm Installation
Helm is a powerful and flexible package-management and operations tool for Kubernetes. Installing it using the default installation command -- `helm init` -- quickly and easily installs **Tiller**, the server-side component with which Helm corresponds.
This default installation applies **_no security configurations_**, however. It's completely appropriate to use this type of installation when you are working against a cluster with no or very few security concerns, such as local development with Minikube or with a cluster that is well-secured in a private network with no data-sharing or no other users or teams. If this is the case, then the default installation is fine, but remember: With great power comes great responsibility. Always use due diligence when deciding to use the default installation.
## Who Needs Security Configurations?
For the following types of clusters we strongly recommend that you apply the proper security configurations to Helm and Tiller to ensure the safety of the cluster, the data in it, and the network to which it is connected.
- Clusters that are exposed to uncontrolled network environments: either untrusted network actors can access the cluster, or untrusted applications that can access the network environment.
- Clusters that are for many people to use -- _multitenant_ clusters -- as a shared environment
- Clusters that have access to or use high-value data or networks of any type
Often, environments like these are referred to as _production grade_ or _production quality_ because the damage done to any company by misuse of the cluster can be profound for either customers, the company itself, or both. Once the risk of damage becomes high enough, you need to ensure the integrity of your cluster no matter what the actual risk.
To configure your installation properly for your environment, you must:
- Understand the security context of your cluster
- Choose the Best Practices you should apply to your helm installation
The following assumes you have a Kubernetes configuration file (a _kubeconfig_ file) or one was given to you to access a cluster.
## Understanding the Security Context of your Cluster
`helm init` installs Tiller into the cluster in the `kube-system` namespace and without any RBAC rules applied. This is appropriate for local development and other private scenarios because it enables you to be productive immediately. It also enables you to continue running Helm with existing Kubernetes clusters that do not have role-based access control (RBAC) support until you can move your workloads to a more recent Kubernetes version.
There are four main areas to consider when securing a Tiller installation:
1. Role-based access control, or RBAC
2. Tiller's gRPC endpoint and its usage by Helm
3. Tiller release information
4. Helm charts
### RBAC
Recent versions of Kubernetes employ a [role-based access control (or RBAC)](https://en.wikipedia.org/wiki/Role-based_access_control) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist. Even where an identity is hijacked, the identity has only so many permissions to a controlled space. This effectively adds a layer of security to limit the scope of any attack with that identity.
Helm and Tiller are designed to install, remove, and modify logical applications that can contain many services interacting together. As a result, often its usefulness involves cluster-wide operations, which in a multitenant cluster means that great care must be taken with access to a cluster-wide Tiller installation to prevent improper activity.
Specific users and teams -- developers, operators, system and network administrators -- will need their own portion of the cluster in which they can use Helm and Tiller without risking other portions of the cluster. This means using a Kubernetes cluster with RBAC enabled and Tiller configured to enforce them. For more information about using RBAC in Kubernetes, see [Using RBAC Authorization](rbac.md).
#### Tiller and User Permissions
Tiller in its current form does not provide a way to map user credentials to specific permissions within Kubernetes. When Tiller is running inside of the cluster, it operates with the permissions of its service account. If no service account name is supplied to Tiller, it runs with the default service account for that namespace. This means that all Tiller operations on that server are executed using the Tiller pod's credentials and permissions.
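To check which service account a given Tiller instance is actually using, one option is to query the deployment (a sketch assuming the default `tiller-deploy` name in `kube-system`):
```console
$ kubectl get deployment tiller-deploy --namespace kube-system \
    -o jsonpath='{.spec.template.spec.serviceAccountName}'
```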
To properly limit what Tiller itself can do, the standard Kubernetes RBAC mechanisms must be attached to Tiller, including Roles and RoleBindings that place explicit limits on what things a Tiller instance can install, and where.
This situation may change in the future. While the community has several methods that might address this, at the moment performing actions using the rights of the client, instead of the rights of Tiller, is contingent upon the outcome of the Pod Identity Working Group, which has taken on the task of solving the problem in a general way.
### The Tiller gRPC Endpoint and TLS
In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied. Without applying authentication, any process in the cluster can use the gRPC endpoint to perform operations inside the cluster. In a local or secured private cluster, this enables rapid usage and is normal. (When running outside the cluster, Helm authenticates through the Kubernetes API server to reach Tiller, leveraging existing Kubernetes authentication support.)
Shared and production clusters -- for the most part -- should use Helm 2.7.2 at a minimum and configure TLS for each Tiller gRPC endpoint to ensure that within the cluster usage of gRPC endpoints is only for the properly authenticated identity for that endpoint. Doing so enables any number of Tiller instances to be deployed in any number of namespaces and yet no unauthenticated usage of any gRPC endpoint is possible. Finally, use `helm init` with the `--tiller-tls-verify` option to install Tiller with TLS enabled and to verify remote certificates, and all other Helm commands should use the `--tls` option.
For more information about the proper steps to configure Tiller and use Helm properly with TLS configured, see [Using SSL between Helm and Tiller](tiller_ssl.md).
When Helm clients are connecting from outside of the cluster, the security between the Helm client and the API server is managed by Kubernetes itself. You may want to ensure that this link is secure. Note that if you are using the TLS configuration recommended above, not even the Kubernetes API server has access to the unencrypted messages between the client and Tiller.
### Tiller's Release Information
For historical reasons, Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
Secrets are the Kubernetes accepted mechanism for saving configuration data that is considered sensitive. While secrets don't themselves offer many protections, Kubernetes cluster management software often treats them differently than other objects. Thus, we suggest using secrets to store releases.
Enabling this feature currently requires setting the `--storage=secret` flag in the tiller-deploy deployment. This entails directly modifying the deployment or using `helm init --override=...`, as no `helm init` flag is currently available to do this for you. For more information, see [Using --override](install.md#using---override).
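A hedged sketch of the override approach (check the exact container command against your Tiller version before applying):
```console
$ helm init --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'
```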
### Thinking about Charts
Because of the relative longevity of Helm, the Helm chart ecosystem evolved without the immediate concern for cluster-wide control, and especially in the developer space this makes complete sense. However, charts are a kind of package that not only installs containers you may or may not have validated yourself, but it may also install into more than one namespace.
As with all shared software, in a controlled or shared environment you must validate all software you install yourself _before_ you install it. If you have secured Tiller with TLS and have installed it with permissions to only one or a subset of namespaces, some charts may fail to install -- but in these environments, that is exactly what you want. If you need to use the chart, you may have to work with the creator or modify it yourself in order to use it securely in a multitenant cluster with proper RBAC rules applied. The `helm template` command renders the chart locally and displays the output.
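For example, using the chart archive name from earlier in these docs:
```console
$ helm template mychart-0.1.0.tgz
```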
Once vetted, you can use Helm's provenance tools to [ensure the provenance and integrity of charts](provenance.md) that you use.
### gRPC Tools and Secured Tiller Configurations
Many very useful tools use the gRPC interface directly, and having been built against the default installation -- which provides cluster-wide access -- may fail once security configurations have been applied. RBAC policies are controlled by you or by the cluster operator, and either can be adjusted for the tool, or the tool can be configured to work properly within the constraints of specific RBAC policies applied to Tiller. The same may need to be done if the gRPC endpoint is secured: the tools need their own secure TLS configuration in order to use a specific Tiller instance. The combination of RBAC policies and a secured gRPC endpoint configured in conjunction with gRPC tools enables you to control your cluster environment as you should.
## Best Practices for Securing Helm and Tiller
The following guidelines reiterate the Best Practices for securing Helm and Tiller and using them correctly.
1. Create a cluster with RBAC enabled
2. Configure each Tiller gRPC endpoint to use a separate TLS certificate
3. Release information should be a Kubernetes Secret
4. Install one Tiller per user, team, or other organizational entity with the `--service-account` flag, Roles, and RoleBindings
5. Use the `--tiller-tls-verify` option with `helm init` and the `--tls` flag with other Helm commands to enforce verification
If these steps are followed, an example `helm init` command might look something like this:
```bash
$ helm init \
--tiller-tls \
--tiller-tls-verify \
--tiller-tls-ca-cert=ca.pem \
--tiller-tls-cert=cert.pem \
--tiller-tls-key=key.pem \
--service-account=accountname
```
This command will start Tiller with both strong authentication over gRPC, and a service account to which RBAC policies have been applied.

@ -1,291 +0,0 @@
# Using SSL Between Helm and Tiller
This document explains how to create strong SSL/TLS connections between Helm and
Tiller. The emphasis here is on creating an internal CA, and using both the
cryptographic and identity functions of SSL.
> Support for TLS-based auth was introduced in Helm 2.3.0
Configuring SSL is considered an advanced topic, and knowledge of Helm and Tiller
is assumed.
## Overview
The Tiller authentication model uses client-side SSL certificates. Tiller itself
verifies these certificates using a certificate authority. Likewise, the client
also verifies Tiller's identity by certificate authority.
There are numerous possible configurations for setting up certificates and authorities,
but the method we cover here will work for most situations.
> As of Helm 2.7.2, Tiller _requires_ that the client certificate be validated
> by its CA. In prior versions, Tiller used a weaker validation strategy that
> allowed self-signed certificates.
In this guide, we will show how to:
- Create a private CA that is used to issue certificates for Tiller clients and
servers.
- Create a certificate for Tiller
- Create a certificate for the Helm client
- Create a Tiller instance that uses the certificate
- Configure the Helm client to use the CA and client-side certificate
By the end of this guide, you should have a Tiller instance running that will
only accept connections from clients who can be authenticated by SSL certificate.
## Generating Certificate Authorities and Certificates
One way to generate SSL CAs is via the `openssl` command line tool. There are many
guides and best practices documents available online. This explanation is focused
on getting ready within a small amount of time. For production configurations,
we urge readers to read [the official documentation](https://www.openssl.org) and
consult other resources.
### Generate a Certificate Authority
The simplest way to generate a certificate authority is to run two commands:
```console
$ openssl genrsa -out ./ca.key.pem 4096
$ openssl req -key ca.key.pem -new -x509 -days 7300 -sha256 -out ca.cert.pem -extensions v3_ca
Enter pass phrase for ca.key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:CO
Locality Name (eg, city) []:Boulder
Organization Name (eg, company) [Internet Widgits Pty Ltd]:tiller
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:tiller
Email Address []:tiller@example.com
```
Note that the data input above is _sample data_. You should customize to your own
specifications.
The above will generate both a secret key and a CA. Note that these two files are
very important. The key in particular should be handled with particular care.
Often, you will want to generate an intermediate signing key. For the sake of brevity,
we will be signing keys with our root CA.
### Generating Certificates
We will be generating two certificates, each representing a type of certificate:
- One certificate is for Tiller. You will want one of these _per tiller host_ that
you run.
- One certificate is for the user. You will want one of these _per helm user_.
Since the commands to generate these are the same, we'll be creating both at the
same time. The names will indicate their target.
First, the Tiller key:
```console
$ openssl genrsa -out ./tiller.key.pem 4096
Generating RSA private key, 4096 bit long modulus
..........................................................................................................................................................................................................................................................................................................................++
............................................................................++
e is 65537 (0x10001)
Enter pass phrase for ./tiller.key.pem:
Verifying - Enter pass phrase for ./tiller.key.pem:
```
Next, generate the Helm client's key:
```console
$ openssl genrsa -out ./helm.key.pem 4096
Generating RSA private key, 4096 bit long modulus
.....++
......................................................................................................................................................................................++
e is 65537 (0x10001)
Enter pass phrase for ./helm.key.pem:
Verifying - Enter pass phrase for ./helm.key.pem:
```
Again, for production use you will generate one client certificate for each user.
Next we need to create certificates from these keys. For each certificate, this is
a two-step process of creating a CSR, and then creating the certificate.
```console
$ openssl req -key tiller.key.pem -new -sha256 -out tiller.csr.pem
Enter pass phrase for tiller.key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:CO
Locality Name (eg, city) []:Boulder
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Tiller Server
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:tiller-server
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
```
And we repeat this step for the Helm client certificate:
```console
$ openssl req -key helm.key.pem -new -sha256 -out helm.csr.pem
# Answer the questions with your client user's info
```
(In rare cases, we've had to add the `-nodes` flag when generating the request.)
Now we sign each of these CSRs with the CA certificate we created:
```console
$ openssl x509 -req -CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial -in tiller.csr.pem -out tiller.cert.pem
Signature ok
subject=/C=US/ST=CO/L=Boulder/O=Tiller Server/CN=tiller-server
Getting CA Private Key
Enter pass phrase for ca.key.pem:
```
And again for the client certificate:
```console
$ openssl x509 -req -CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial -in helm.csr.pem -out helm.cert.pem
```
At this point, the important files for us are these:
```
# The CA. Make sure the key is kept secret.
ca.cert.pem
ca.key.pem
# The Helm client files
helm.cert.pem
helm.key.pem
# The Tiller server files.
tiller.cert.pem
tiller.key.pem
```
Now we're ready to move on to the next steps.
## Creating a Custom Tiller Installation
Helm includes full support for creating a deployment configured for SSL. By specifying
a few flags, the `helm init` command can create a new Tiller installation complete
with all of our SSL configuration.
To take a look at what this will generate, run this command:
```console
$ helm init --dry-run --debug --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem
```
The output will show you a Deployment, a Secret, and a Service. Your SSL information
will be preloaded into the Secret, which the Deployment will mount to pods as they
start up.
If you want to customize the manifest, you can save that output to a file and then
use `kubectl create` to load it into your cluster.
> We strongly recommend enabling RBAC on your cluster and adding [service accounts](rbac.md)
> with RBAC.
Otherwise, you can remove the `--dry-run` and `--debug` flags. We also recommend
putting Tiller in a non-system namespace (`--tiller-namespace=something`) and enable
a service account (`--service-account=somename`). But for this example we will stay
with the basics:
```console
$ helm init --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem
```
In a minute or two it should be ready. We can check Tiller like this:
```console
$ kubectl -n kube-system get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
... other stuff
tiller-deploy 1 1 1 1 2m
```
If there is a problem, you may want to use `kubectl get pods -n kube-system` to
find out what went wrong. With the SSL/TLS support, the most common problems all
have to do with improperly generated TLS certificates or accidentally swapping the
cert and the key.
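One quick way to detect a swapped or mismatched pair is to compare the public moduli of the certificate and the key; the two digests must be identical (standard `openssl` usage, not from the original text):
```console
$ openssl x509 -noout -modulus -in tiller.cert.pem | openssl md5
$ openssl rsa -noout -modulus -in tiller.key.pem | openssl md5
```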
At this point, you should get a _failure_ when you run basic Helm commands:
```console
$ helm ls
Error: transport is closing
```
This is because your Helm client does not have the correct certificate to authenticate
to Tiller.
## Configuring the Helm Client
The Tiller server is now running with TLS protection. It's time to configure the
Helm client to also perform TLS operations.
For a quick test, we can specify our configuration manually. We'll run a normal
Helm command (`helm ls`), but with SSL/TLS enabled.
```console
helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem
```
This configuration sends our client-side certificate to establish identity, uses
the client key for encryption, and uses the CA certificate to validate the remote
Tiller's identity.
Typing out that line every time is cumbersome, though. The shortcut is to move the key,
cert, and CA into `$HELM_HOME`:
```console
$ cp ca.cert.pem $(helm home)/ca.pem
$ cp helm.cert.pem $(helm home)/cert.pem
$ cp helm.key.pem $(helm home)/key.pem
```
With this, you can simply run `helm ls --tls` to enable TLS.
### Troubleshooting
*Running a command, I get `Error: transport is closing`*
This is almost always due to a configuration error in which the client is missing
a certificate (`--tls-cert`) or the certificate is bad.
*I'm using a certificate, but get `Error: remote error: tls: bad certificate`*
This means that Tiller's CA cannot verify your certificate. In the examples above,
we used a single CA to generate both the client and server certificates. In these
examples, the CA has _signed_ the client's certificate. We then load that CA
up to Tiller. So when the client certificate is sent to the server, Tiller
checks the client certificate against the CA.
*If I use `--tls-verify` on the client, I get `Error: x509: certificate is valid for tiller-server, not localhost`*
If you plan to use `--tls-verify` on the client, you will need to make sure that
the host name that Helm connects to matches the host name on the certificate. In
some cases this is awkward, since Helm will connect over localhost, or the FQDN is
not available for public resolution.
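To see exactly which name the certificate was issued for, inspect its subject (standard `openssl` usage; the output mirrors the subject shown when we signed the CSR above):
```console
$ openssl x509 -noout -subject -in tiller.cert.pem
subject=/C=US/ST=CO/L=Boulder/O=Tiller Server/CN=tiller-server
```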
## References
https://github.com/denji/golang-tls
https://www.openssl.org/docs/
https://jamielinux.com/docs/openssl-certificate-authority/sign-server-and-client-certificates.html

@ -1,8 +1,8 @@
# Using Helm
This guide explains the basics of using Helm to manage
packages on your Kubernetes cluster. It assumes that you have already
[installed](install.md) the Helm client and library (typically by `helm
init`).
If you are simply interested in running a few quick commands, you may
@ -493,15 +493,6 @@ Note: The `stable` repository is managed on the [Kubernetes Charts
GitHub repository](https://github.com/kubernetes/charts). That project
accepts chart source code, and (after audit) packages those for you.
## Tiller, Namespaces and RBAC
In some cases you may wish to scope Tiller or deploy multiple Tillers to a single cluster. Here are some best practices when operating in those circumstances.
1. Tiller can be [installed](install.md) into any namespace. By default, it is installed into kube-system. You can run multiple Tillers provided they each run in their own namespace.
2. Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) roles and rolebindings. You can add a service account to Tiller when configuring Helm via `helm init --service-account <NAME>`. You can find more information about that [here](rbac.md).
3. Release names are unique PER TILLER INSTANCE.
4. Charts should only contain resources that exist in a single namespace.
5. It is not recommended to have multiple Tillers configured to manage resources in the same namespace.
## Conclusion
This chapter has covered the basic usage patterns of the `helm` client,
