mirror of https://github.com/helm/helm

doc(helm): remove Tiller reference from the docs (#4788)

* Remove Tiller reference from the docs

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>

* Update comments after review

- https://github.com/helm/helm/pull/4788#discussion_r226037034
- https://github.com/helm/helm/pull/4788#discussion_r226037064
- https://github.com/helm/helm/pull/4788#discussion_r226037806
- https://github.com/helm/helm/pull/4788#discussion_r226038492
- https://github.com/helm/helm/pull/4788#discussion_r226039202
- https://github.com/helm/helm/pull/4788#discussion_r226039894

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
# Role-based Access Control

In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified. Read more about service account permissions [in the official Kubernetes docs](https://kubernetes.io/docs/admin/authorization/rbac/#service-account-permissions).

Bitnami also has a fantastic guide for [configuring RBAC in your cluster](https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/) that takes you through RBAC basics.

This guide is for users who want to restrict Tiller's capabilities to install resources to certain namespaces, or to grant a Helm client running in a pod access to a Tiller instance.

## Tiller and Role-based Access Control

You can add a service account to Tiller using the `--service-account <NAME>` flag while you're configuring Helm. As a prerequisite, you'll have to create a role binding which specifies a [role](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) and a [service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) name that have been set up in advance.

Once you have satisfied the prerequisite and have a service account with the correct permissions, you'll run a command like this: `helm init --service-account <NAME>`

### Example: Service account with cluster-admin role

In `rbac-config.yaml`:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```

_Note: The cluster-admin role is created by default in a Kubernetes cluster, so you don't have to define it explicitly._

```console
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller
```
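To double-check that the binding behaves as expected, you can ask the API server what the new service account may do. This is a sketch using `kubectl auth can-i` with impersonation; it assumes your own user is permitted to impersonate service accounts:

```console
$ kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:tiller
yes
```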
### Example: Deploy Tiller in a namespace, restricted to deploying resources only in that namespace

In the example above, we gave Tiller admin access to the entire cluster. You are not at all required to give Tiller cluster-admin access for it to work. Instead of specifying a ClusterRole or a ClusterRoleBinding, you can specify a Role and RoleBinding to limit Tiller's scope to a particular namespace.

```console
$ kubectl create namespace tiller-world
namespace "tiller-world" created
$ kubectl create serviceaccount tiller --namespace tiller-world
serviceaccount "tiller" created
```

Define a Role that allows Tiller to manage all resources in `tiller-world`, as in `role-tiller.yaml`:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
```

```console
$ kubectl create -f role-tiller.yaml
role "tiller-manager" created
```

Bind the service account to that role. In `rolebinding-tiller.yaml`:

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-binding
  namespace: tiller-world
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```

```console
$ kubectl create -f rolebinding-tiller.yaml
rolebinding "tiller-binding" created
```

Afterwards you can run `helm init` to install Tiller in the `tiller-world` namespace.

```console
$ helm init --service-account tiller --tiller-namespace tiller-world
$HELM_HOME has been configured at /Users/awesome-user/.helm.

Tiller (the Helm server side component) has been installed into your Kubernetes Cluster.
Happy Helming!

$ helm install nginx --tiller-namespace tiller-world --namespace tiller-world
NAME:   wayfaring-yak
LAST DEPLOYED: Mon Aug  7 16:00:16 2017
NAMESPACE: tiller-world
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod
NAME                  READY  STATUS             RESTARTS  AGE
wayfaring-yak-alpine  0/1    ContainerCreating  0         0s
```
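Because the role binding above only covers `tiller-world`, asking this Tiller to create resources in any other namespace should now be rejected by the API server. A sketch of what that looks like (the exact error text depends on your Kubernetes version):

```console
$ helm install nginx --tiller-namespace tiller-world --namespace default
Error: namespaces "default" is forbidden: ...
```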
### Example: Deploy Tiller in a namespace, restricted to deploying resources in another namespace

In the example above, we gave Tiller admin access to the namespace it was deployed inside. Now, let's limit Tiller's scope to deploy resources in a different namespace!

For example, let's install Tiller in the namespace `myorg-system` and allow Tiller to deploy resources in the namespace `myorg-users`.

```console
$ kubectl create namespace myorg-system
namespace "myorg-system" created
$ kubectl create serviceaccount tiller --namespace myorg-system
serviceaccount "tiller" created
```

Define a Role that allows Tiller to manage all resources in `myorg-users`, as in `role-tiller.yaml`:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-manager
  namespace: myorg-users
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
```

```console
$ kubectl create -f role-tiller.yaml
role "tiller-manager" created
```

Bind the service account to that role. In `rolebinding-tiller.yaml`:

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-binding
  namespace: myorg-users
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: myorg-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```

```console
$ kubectl create -f rolebinding-tiller.yaml
rolebinding "tiller-binding" created
```

We'll also need to grant Tiller access to the ConfigMaps in `myorg-system` so it can store release information there. In `role-tiller-myorg-system.yaml`:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: myorg-system
  name: tiller-manager
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["configmaps"]
  verbs: ["*"]
```

```console
$ kubectl create -f role-tiller-myorg-system.yaml
role "tiller-manager" created
```

And the respective role binding. In `rolebinding-tiller-myorg-system.yaml`:

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-binding
  namespace: myorg-system
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: myorg-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```

```console
$ kubectl create -f rolebinding-tiller-myorg-system.yaml
rolebinding "tiller-binding" created
```
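With the roles and bindings in place, the remaining step -- by analogy with the previous example -- is to install Tiller into `myorg-system` with this service account and then deploy charts into `myorg-users`. A sketch:

```console
$ helm init --service-account tiller --tiller-namespace myorg-system
$ helm install <chart> --tiller-namespace myorg-system --namespace myorg-users
```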
## Helm and Role-based Access Control

When running a Helm client in a pod, the client needs certain privileges granted in order to talk to a Tiller instance. Specifically, the Helm client needs to be able to create port-forwards to pods and to list pods in the namespace where Tiller is running (so it can find Tiller).

### Example: Deploy Helm in a namespace, talking to Tiller in another namespace

In this example, we will assume Tiller is running in a namespace called `tiller-world` and that the Helm client is running in a namespace called `helm-world`. (By default, Tiller runs in the `kube-system` namespace.)

In `helm-user.yaml`:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm
  namespace: helm-world
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: tiller-user
  namespace: tiller-world
rules:
- apiGroups:
  - ""
  resources:
  - pods/portforward
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: tiller-user-binding
  namespace: tiller-world
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-user
subjects:
- kind: ServiceAccount
  name: helm
  namespace: helm-world
```

```console
$ kubectl create -f helm-user.yaml
serviceaccount "helm" created
role "tiller-user" created
rolebinding "tiller-user-binding" created
```
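A pod that runs the Helm client against this Tiller could then look something like the following sketch. The image and command are illustrative placeholders, not part of the original example; the important pieces are the `serviceAccountName` and the `--tiller-namespace` flag:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: helm-client            # hypothetical pod name
  namespace: helm-world
spec:
  serviceAccountName: helm     # the service account bound to the tiller-user role above
  containers:
  - name: helm
    image: example/helm-client:latest   # hypothetical image containing the helm binary
    command: ["helm", "ls", "--tiller-namespace", "tiller-world"]
```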
# Securing your Helm Installation

Helm is a powerful and flexible package-management and operations tool for Kubernetes. Installing it using the default installation command -- `helm init` -- quickly and easily installs **Tiller**, the server-side component with which Helm communicates.

This default installation applies **_no security configurations_**, however. It's completely appropriate to use this type of installation when you are working against a cluster with no or very few security concerns, such as local development with Minikube, or a cluster that is well-secured in a private network with no data sharing and no other users or teams. If this is the case, then the default installation is fine, but remember: with great power comes great responsibility. Always use due diligence when deciding to use the default installation.

## Who Needs Security Configurations?

For the following types of clusters we strongly recommend that you apply the proper security configurations to Helm and Tiller to ensure the safety of the cluster, the data in it, and the network to which it is connected.

- Clusters that are exposed to uncontrolled network environments: either untrusted network actors can access the cluster, or untrusted applications running in the cluster can access the network environment.
- Clusters that are for many people to use -- _multitenant_ clusters -- as a shared environment
- Clusters that have access to or use high-value data or networks of any type

Often, environments like these are referred to as _production grade_ or _production quality_ because the damage done to any company by misuse of the cluster can be profound for either customers, the company itself, or both. Once the risk of damage becomes high enough, you need to ensure the integrity of your cluster no matter how likely an attack actually is.

To configure your installation properly for your environment, you must:

- Understand the security context of your cluster
- Choose the Best Practices you should apply to your Helm installation

The following assumes you have a Kubernetes configuration file (a _kubeconfig_ file), or that one was given to you, to access a cluster.

## Understanding the Security Context of your Cluster

`helm init` installs Tiller into the cluster in the `kube-system` namespace and without any RBAC rules applied. This is appropriate for local development and other private scenarios because it enables you to be productive immediately. It also enables you to continue running Helm with existing Kubernetes clusters that do not have role-based access control (RBAC) support until you can move your workloads to a more recent Kubernetes version.

There are four main areas to consider when securing a Tiller installation:

1. Role-based access control, or RBAC
2. Tiller's gRPC endpoint and its usage by Helm
3. Tiller release information
4. Helm charts

### RBAC

Recent versions of Kubernetes employ a [role-based access control (or RBAC)](https://en.wikipedia.org/wiki/Role-based_access_control) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist. Even where an identity is hijacked, the identity has only so many permissions to a controlled space. This effectively adds a layer of security to limit the scope of any attack with that identity.

Helm and Tiller are designed to install, remove, and modify logical applications that can contain many services interacting together. As a result, their usefulness often involves cluster-wide operations, which in a multitenant cluster means that great care must be taken with access to a cluster-wide Tiller installation to prevent improper activity.

Specific users and teams -- developers, operators, system and network administrators -- will need their own portion of the cluster in which they can use Helm and Tiller without risking other portions of the cluster. This means using a Kubernetes cluster with RBAC enabled, with Tiller configured to enforce those rules. For more information about using RBAC in Kubernetes, see [Using RBAC Authorization](rbac.md).

#### Tiller and User Permissions

Tiller in its current form does not provide a way to map user credentials to specific permissions within Kubernetes. When Tiller is running inside of the cluster, it operates with the permissions of its service account. If no service account name is supplied to Tiller, it runs with the default service account for that namespace. This means that all Tiller operations on that server are executed using the Tiller pod's credentials and permissions.

To properly limit what Tiller itself can do, the standard Kubernetes RBAC mechanisms must be attached to Tiller, including Roles and RoleBindings that place explicit limits on what things a Tiller instance can install, and where.

This situation may change in the future. The community has several methods that might address this, but at the moment performing actions using the rights of the client, instead of the rights of Tiller, is contingent upon the outcome of the Pod Identity Working Group, which has taken on the task of solving the problem in a general way.

### The Tiller gRPC Endpoint and TLS

In the default installation, the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied. Without authentication, any process in the cluster can use the gRPC endpoint to perform operations inside the cluster. In a local or secured private cluster, this enables rapid usage and is normal. (When running outside the cluster, Helm authenticates through the Kubernetes API server to reach Tiller, leveraging existing Kubernetes authentication support.)

Shared and production clusters -- for the most part -- should use Helm 2.7.2 at a minimum and configure TLS for each Tiller gRPC endpoint, so that within the cluster each gRPC endpoint can only be used by a properly authenticated identity. Doing so enables any number of Tiller instances to be deployed in any number of namespaces without allowing unauthenticated usage of any gRPC endpoint. Finally, use `helm init` with the `--tiller-tls-verify` option to install Tiller with TLS enabled and to verify remote certificates, and use the `--tls` option with all other Helm commands.

For more information about the proper steps to configure Tiller and use Helm properly with TLS configured, see [Using SSL between Helm and Tiller](tiller_ssl.md).

When Helm clients are connecting from outside of the cluster, the security between the Helm client and the API server is managed by Kubernetes itself. You may want to ensure that this link is secure. Note that if you are using the TLS configuration recommended above, not even the Kubernetes API server has access to the unencrypted messages between the client and Tiller.

### Tiller's Release Information

For historical reasons, Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.

Secrets are the Kubernetes accepted mechanism for saving configuration data that is considered sensitive. While Secrets don't themselves offer many protections, Kubernetes cluster management software often treats them differently than other objects. Thus, we suggest using Secrets to store releases.

Enabling this feature currently requires setting the `--storage=secret` flag in the tiller-deploy deployment. This entails directly modifying the deployment or using `helm init --override=...`, as no `helm init` flag is currently available to do this for you. For more information, see [Using --override](install.md#using---override).
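A sketch of the `--override` approach, which replaces the Tiller container's command so that it starts with the secret storage backend (check [Using --override](install.md#using---override) for the exact syntax supported by your Helm version):

```console
$ helm init --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'
```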
### Thinking about Charts

Because of the relative longevity of Helm, the Helm chart ecosystem evolved without the immediate concern for cluster-wide control, and especially in the developer space this makes complete sense. However, charts are a kind of package that not only installs containers you may or may not have validated yourself, but may also install into more than one namespace.

As with all shared software, in a controlled or shared environment you must validate all software you install yourself _before_ you install it. If you have secured Tiller with TLS and have installed it with permissions to only one or a subset of namespaces, some charts may fail to install -- but in these environments, that is exactly what you want. If you need to use the chart, you may have to work with the creator or modify it yourself in order to use it securely in a multitenant cluster with proper RBAC rules applied. The `helm template` command renders the chart locally and displays the output, so you can inspect exactly what it would install.
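For example, to render a chart locally for inspection before installing it (the chart path here is a placeholder):

```console
$ helm template ./mychart
```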
Once vetted, you can use Helm's provenance tools to [ensure the provenance and integrity of charts](provenance.md) that you use.

### gRPC Tools and Secured Tiller Configurations

Many very useful tools use the gRPC interface directly. Because they were built against the default installation -- which provides cluster-wide access -- they may fail once security configurations have been applied. RBAC policies are controlled by you or by the cluster operator; either the policies can be adjusted for the tool, or the tool can be configured to work within the constraints of the specific RBAC policies applied to Tiller. The same may need to be done if the gRPC endpoint is secured: the tools need their own secure TLS configuration in order to use a specific Tiller instance. The combination of RBAC policies and a secured gRPC endpoint, configured in conjunction with gRPC tools, enables you to control your cluster environment as you should.

## Best Practices for Securing Helm and Tiller

The following guidelines reiterate the Best Practices for securing Helm and Tiller and using them correctly:

1. Create a cluster with RBAC enabled
2. Configure each Tiller gRPC endpoint to use a separate TLS certificate
3. Store release information in Kubernetes Secrets
4. Install one Tiller per user, team, or other organizational entity with the `--service-account` flag, Roles, and RoleBindings
5. Use the `--tiller-tls-verify` option with `helm init` and the `--tls` flag with other Helm commands to enforce verification

If these steps are followed, an example `helm init` command might look something like this:

```bash
$ helm init \
    --tiller-tls \
    --tiller-tls-verify \
    --tiller-tls-ca-cert=ca.pem \
    --tiller-tls-cert=cert.pem \
    --tiller-tls-key=key.pem \
    --service-account=accountname
```

This command will start Tiller with both strong authentication over gRPC and a service account to which RBAC policies have been applied.
# Using SSL Between Helm and Tiller

This document explains how to create strong SSL/TLS connections between Helm and
Tiller. The emphasis here is on creating an internal CA, and using both the
cryptographic and identity functions of SSL.

> Support for TLS-based auth was introduced in Helm 2.3.0

Configuring SSL is considered an advanced topic, and knowledge of Helm and Tiller
is assumed.

## Overview

The Tiller authentication model uses client-side SSL certificates. Tiller itself
verifies these certificates using a certificate authority. Likewise, the client
also verifies Tiller's identity by certificate authority.

There are numerous possible configurations for setting up certificates and authorities,
but the method we cover here will work for most situations.

> As of Helm 2.7.2, Tiller _requires_ that the client certificate be validated
> by its CA. In prior versions, Tiller used a weaker validation strategy that
> allowed self-signed certificates.

In this guide, we will show how to:

- Create a private CA that is used to issue certificates for Tiller clients and
  servers.
- Create a certificate for Tiller
- Create a certificate for the Helm client
- Create a Tiller instance that uses the certificate
- Configure the Helm client to use the CA and client-side certificate

By the end of this guide, you should have a Tiller instance running that will
only accept connections from clients who can be authenticated by SSL certificate.
## Generating Certificate Authorities and Certificates

One way to generate SSL CAs is via the `openssl` command line tool. There are many
guides and best practices documents available online. This explanation is focused
on getting ready within a small amount of time. For production configurations,
we urge readers to read [the official documentation](https://www.openssl.org) and
consult other resources.
### Generate a Certificate Authority

The simplest way to generate a certificate authority is to run two commands:

```console
$ openssl genrsa -out ./ca.key.pem 4096
$ openssl req -key ca.key.pem -new -x509 -days 7300 -sha256 -out ca.cert.pem -extensions v3_ca
Enter pass phrase for ca.key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:CO
Locality Name (eg, city) []:Boulder
Organization Name (eg, company) [Internet Widgits Pty Ltd]:tiller
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:tiller
Email Address []:tiller@example.com
```

Note that the data input above is _sample data_. You should customize it to your own
specifications.

The above will generate both a secret key and a CA. Note that these two files are
very important. The key in particular should be handled with special care.

Often, you will want to generate an intermediate signing key. For the sake of brevity,
we will be signing keys with our root CA.
### Generating Certificates

We will be generating two certificates, each representing a type of certificate:

- One certificate is for Tiller. You will want one of these _per tiller host_ that
  you run.
- One certificate is for the user. You will want one of these _per helm user_.

Since the commands to generate these are the same, we'll be creating both at the
same time. The names will indicate their target.

First, the Tiller key:

```console
$ openssl genrsa -out ./tiller.key.pem 4096
Generating RSA private key, 4096 bit long modulus
......................................................................................++
............................................................................++
e is 65537 (0x10001)
Enter pass phrase for ./tiller.key.pem:
Verifying - Enter pass phrase for ./tiller.key.pem:
```

Next, generate the Helm client's key:

```console
$ openssl genrsa -out ./helm.key.pem 4096
Generating RSA private key, 4096 bit long modulus
.....++
......................................................................++
e is 65537 (0x10001)
Enter pass phrase for ./helm.key.pem:
Verifying - Enter pass phrase for ./helm.key.pem:
```

Again, for production use you will generate one client certificate for each user.

Next we need to create certificates from these keys. For each certificate, this is
a two-step process of creating a CSR, and then creating the certificate.

```console
$ openssl req -key tiller.key.pem -new -sha256 -out tiller.csr.pem
Enter pass phrase for tiller.key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:CO
Locality Name (eg, city) []:Boulder
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Tiller Server
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:tiller-server
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
```

And we repeat this step for the Helm client certificate:

```console
$ openssl req -key helm.key.pem -new -sha256 -out helm.csr.pem
# Answer the questions with your client user's info
```

(In rare cases, we've had to add the `-nodes` flag when generating the request.)
Now we sign each of these CSRs with the CA certificate we created:

```console
$ openssl x509 -req -CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial -in tiller.csr.pem -out tiller.cert.pem
Signature ok
subject=/C=US/ST=CO/L=Boulder/O=Tiller Server/CN=tiller-server
Getting CA Private Key
Enter pass phrase for ca.key.pem:
```

And again for the client certificate:

```console
$ openssl x509 -req -CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial -in helm.csr.pem -out helm.cert.pem
```

At this point, the important files for us are these:

```
# The CA. Make sure the key is kept secret.
ca.cert.pem
ca.key.pem
# The Helm client files
helm.cert.pem
helm.key.pem
# The Tiller server files.
tiller.cert.pem
tiller.key.pem
```

Now we're ready to move on to the next steps.
## Creating a Custom Tiller Installation

Helm includes full support for creating a deployment configured for SSL. By specifying
a few flags, the `helm init` command can create a new Tiller installation complete
with all of our SSL configuration.

To take a look at what this will generate, run this command:

```console
$ helm init --dry-run --debug --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem
```

The output will show you a Deployment, a Secret, and a Service. Your SSL information
will be preloaded into the Secret, which the Deployment will mount to pods as they
start up.

If you want to customize the manifest, you can save that output to a file and then
use `kubectl create` to load it into your cluster.
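For example (the file name is illustrative, and you may need to trim any non-manifest debug lines from the saved output):

```console
$ helm init --dry-run --debug --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem > tiller-manifest.yaml
$ # edit tiller-manifest.yaml as needed, then:
$ kubectl create -f tiller-manifest.yaml
```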
> We strongly recommend enabling RBAC on your cluster and adding [service accounts](rbac.md)
> with RBAC.

Otherwise, you can remove the `--dry-run` and `--debug` flags. We also recommend
putting Tiller in a non-system namespace (`--tiller-namespace=something`) and enabling
a service account (`--service-account=somename`). But for this example we will stay
with the basics:

```console
$ helm init --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem
```

In a minute or two it should be ready. We can check Tiller like this:

```console
$ kubectl -n kube-system get deployment
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
... other stuff
tiller-deploy   1         1         1            1           2m
```

If there is a problem, you may want to use `kubectl get pods -n kube-system` to
find out what went wrong. With the SSL/TLS support, the most common problems all
have to do with improperly generated TLS certificates or accidentally swapping the
cert and the key.
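One quick way to detect a swapped or mismatched pair is to compare the modulus of the certificate and the key; for an RSA pair like the one generated above, the two digests should be identical:

```console
$ openssl x509 -noout -modulus -in tiller.cert.pem | openssl md5
$ openssl rsa -noout -modulus -in tiller.key.pem | openssl md5
```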
At this point, you should get a _failure_ when you run basic Helm commands:

```console
$ helm ls
Error: transport is closing
```

This is because your Helm client does not have the correct certificate to authenticate
to Tiller.

## Configuring the Helm Client

The Tiller server is now running with TLS protection. It's time to configure the
Helm client to also perform TLS operations.

For a quick test, we can specify our configuration manually. We'll run a normal
Helm command (`helm ls`), but with SSL/TLS enabled.

```console
$ helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem
```

This configuration sends our client-side certificate to establish identity, uses
the client key for encryption, and uses the CA certificate to validate the remote
Tiller's identity.

Typing all of that out each time is cumbersome, though. The shortcut is to move the key,
cert, and CA into `$HELM_HOME`:

```console
$ cp ca.cert.pem $(helm home)/ca.pem
$ cp helm.cert.pem $(helm home)/cert.pem
$ cp helm.key.pem $(helm home)/key.pem
```

With this, you can simply run `helm ls --tls` to enable TLS.
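If you'd rather not type `--tls` at all, recent Helm 2 releases can also read the TLS settings from environment variables; this is a sketch -- check `helm help` in your version for the exact variable names:

```console
$ export HELM_TLS_ENABLE=true
$ helm ls
```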
### Troubleshooting

*Running a command, I get `Error: transport is closing`*

This is almost always due to a configuration error in which the client is missing
a certificate (`--tls-cert`) or the certificate is bad.

*I'm using a certificate, but get `Error: remote error: tls: bad certificate`*

This means that Tiller's CA cannot verify your certificate. In the examples above,
we used a single CA to generate both the client and server certificates. In these
examples, the CA has _signed_ the client's certificate. We then load that CA
up to Tiller. So when the client certificate is sent to the server, Tiller
checks the client certificate against the CA.

*If I use `--tls-verify` on the client, I get `Error: x509: certificate is valid for tiller-server, not localhost`*

If you plan to use `--tls-verify` on the client, you will need to make sure that
the host name that Helm connects to matches the host name on the certificate. In
some cases this is awkward, since Helm will connect over localhost, or the FQDN is
not available for public resolution.
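One workaround is to issue the Tiller certificate with subject alternative names for every host name the client might use, including `localhost`. This is a sketch that adds an `-extfile` option to the signing step shown earlier (bash syntax; adjust the names to your environment):

```console
$ openssl x509 -req -CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial \
    -in tiller.csr.pem -out tiller.cert.pem \
    -extfile <(printf "subjectAltName=DNS:tiller-server,DNS:localhost,IP:127.0.0.1")
```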
## References

- https://github.com/denji/golang-tls
- https://www.openssl.org/docs/
- https://jamielinux.com/docs/openssl-certificate-authority/sign-server-and-client-certificates.html