check for pods health, add docs

Signed-off-by: Josh Dolitsky <jdolitsky@gmail.com>
pull/6000/head
Josh Dolitsky 6 years ago
parent 9bc18491fb
commit 3154e73e99

@@ -1,3 +1,85 @@
# Helm Acceptance Tests

This directory contains the source for Helm acceptance tests.
The tests are written using [Robot Framework](https://robotframework.org/).
## System requirements
The following tools/commands are expected to be present on the base system
prior to running the tests:
- [kind](https://kind.sigs.k8s.io/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [python3](https://www.python.org/downloads/)
- [pip](https://pip.pypa.io/en/stable/installing/)
- [virtualenv](https://virtualenv.pypa.io/en/latest/installation/)
## Running the tests
From the root of this repo, run the following:
```
make acceptance
```
## Viewing the results
Robot creates an HTML test report describing test successes/failures.
To view the report, run the following:
```
open .acceptance/report.html
```
Note: by default, the tests will output to the `.acceptance/` directory.
To modify this location, set the `ROBOT_OUTPUT_DIR` environment variable.
## Kubernetes integration
When testing Helm against multiple Kubernetes versions,
new test clusters are created on the fly (using `kind`),
with names in the following format:
```
helm-acceptance-test-<timestamp>-<kube_version>
```
If you wish to use an existing `kind` cluster for one
or more versions, you can set an environment variable for
a given version.
Here is an example of using an existing `kind` cluster
for Kubernetes version `1.15.0`:
```
export KIND_CLUSTER_1_15_0="helm-ac-keepalive-1.15.0"
```
A `kind` cluster can be created manually like so:
```
kind create cluster \
--name=helm-ac-keepalive-1.15.0 \
--image=kindest/node:v1.15.0
```
## Adding a new test case etc.
All files in this directory with the `.robot` extension will be executed.
Add a new file describing your test, or, alternatively, add to an existing one.
Robot tests themselves are written in (mostly) plain English, but the Python
programming language can be used in order to add custom keywords etc.
Notice the [lib/](./lib/) directory - this contains Python libraries that
enable us to work with system tools such as `kind`. The file [common.py](./lib/common.py)
contains a base class called `CommandRunner` that you will likely want to
leverage when adding support for a new external tool.
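
As a rough illustration (not part of this change), a new library for an additional
external tool could follow the same pattern as `Kind.py` and `Kubectl.py`: subclass
`CommandRunner` and expose methods, which Robot then surfaces as keywords. The file
name `lib/Docker.py`, the class name, and the `docker version` command below are
purely hypothetical; only `run_command` is taken from the existing libraries:

```
# lib/Docker.py (hypothetical example)
import common

class Docker(common.CommandRunner):
    def get_docker_version(self):
        # run_command is provided by the CommandRunner base class; the exit
        # code can then be asserted from a .robot file using the shared
        # keywords (e.g. "Docker.Return code should be 0")
        self.run_command('docker version')
```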
The test run is wrapped by [acceptance.sh](./../scripts/acceptance.sh) -
in this file the environment is validated (i.e. checking that the required tools
are present) and a Python virtualenv is created in which the required Python
libraries are installed (including Robot Framework itself). If any additional
Python libraries are required for a new library, they can be appended to `ROBOT_PY_REQUIRES`.

@@ -1,5 +1,14 @@
*** Settings ***
Documentation     Verify Helm functionality on multiple Kubernetes versions.
...
... Fresh new kind-based clusters will be created for each
... of the Kubernetes versions being tested. An existing
... kind cluster can be used by specifying it in an env var
... representing the version, for example:
...
... export KIND_CLUSTER_1_14_3="helm-ac-keepalive-1.14.3"
... export KIND_CLUSTER_1_15_0="helm-ac-keepalive-1.15.0"
...
Library           String
Library           lib/Kind.py
Library           lib/Kubectl.py
@@ -8,8 +17,8 @@
Suite Setup       Suite Setup
Suite Teardown    Suite Teardown

*** Test Cases ***
#Helm works with Kubernetes 1.14.3
#    Test Helm on Kubernetes version    1.14.3

Helm works with Kubernetes 1.15.0
    Test Helm on Kubernetes version    1.15.0
@@ -18,7 +27,10 @@ Helm works with Kubernetes 1.15.0
Test Helm on Kubernetes version
    [Arguments]    ${kube_version}
    Create test cluster with kube version    ${kube_version}

    # Add new test cases here
    Verify --wait flag works as expected

    Kind.Delete test cluster

Create test cluster with kube version
@@ -26,36 +38,75 @@ Create test cluster with kube version
    Kind.Create test cluster with Kubernetes version    ${kube_version}
    Kind.Wait for cluster
    Kubectl.Get nodes
    Kubectl.Return code should be 0
    Kubectl.Get pods    kube-system
    Kubectl.Return code should be 0

Verify --wait flag works as expected
    # Install nginx chart in a good state, using --wait flag
    Helm.Delete release    wait-flag-good
    Helm.Install test chart    wait-flag-good    nginx    --wait --timeout=60s
    Helm.Return code should be 0

    # Make sure everything is up-and-running
    Kubectl.Get pods    default
    Kubectl.Get services    default
    Kubectl.Get persistent volume claims    default

    Kubectl.Service has IP    default    wait-flag-good-nginx
    Kubectl.Return code should be 0

    Kubectl.Persistent volume claim is bound    default    wait-flag-good-nginx
    Kubectl.Return code should be 0

    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-ext-    3
    Kubectl.Return code should be 0
    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-fluentd-es-    1
    Kubectl.Return code should be 0
    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-v1-    3
    Kubectl.Return code should be 0
    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-v1beta1-    3
    Kubectl.Return code should be 0
    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-v1beta2-    3
    Kubectl.Return code should be 0
    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-web-    3
    Kubectl.Return code should be 0

    # Delete good release
    Helm.Delete release    wait-flag-good
    Helm.Return code should be 0

    # Install nginx chart in a bad state, using --wait flag
    Helm.Delete release    wait-flag-bad
    Helm.Install test chart    wait-flag-bad    nginx    --wait --timeout=60s --set breakme=true

    # Install should return non-zero, as things fail to come up
    Helm.Return code should not be 0

    # Make sure things are NOT up-and-running
    Kubectl.Get pods    default
    Kubectl.Get services    default
    Kubectl.Get persistent volume claims    default

    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-ext-    3
    Kubectl.Return code should not be 0
    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-fluentd-es-    1
    Kubectl.Return code should not be 0
    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-v1-    3
    Kubectl.Return code should not be 0
    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-v1beta1-    3
    Kubectl.Return code should not be 0
    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-v1beta2-    3
    Kubectl.Return code should not be 0
    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-web-    3
    Kubectl.Return code should not be 0

    # Delete bad release
    Helm.Delete release    wait-flag-bad
    Helm.Return code should be 0

Suite Setup
    Kind.Cleanup all test clusters

Suite Teardown
    Kind.Cleanup all test clusters

@@ -1,5 +1,6 @@
import common
import time
import os
DOCKER_HUB_REPO='kindest/node'
CLUSTER_PREFIX = 'helm-acceptance-test'
@@ -14,6 +15,7 @@ KIND_POD_INTERVAL_SECONDS = 2
KIND_POD_EXPECTED_NUMBER = 8
LAST_CLUSTER_NAME = 'UNSET'
LAST_CLUSTER_EXISTING = False
def kind_auth_wrap(cmd):
    c = 'export KUBECONFIG="$(kind get kubeconfig-path'
@@ -22,17 +24,28 @@ def kind_auth_wrap(cmd):
class Kind(common.CommandRunner):
    def create_test_cluster_with_kubernetes_version(self, kube_version):
        global LAST_CLUSTER_NAME, LAST_CLUSTER_EXISTING
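        # Reuse an existing cluster if the matching env var is set,
        # e.g. KIND_CLUSTER_1_15_0 when kube_version is "1.15.0"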
        existing_cluster_name = os.getenv('KIND_CLUSTER_'+kube_version.replace('.', '_'))
        if existing_cluster_name:
            print('Using existing kind cluster for '+kube_version+', "'+existing_cluster_name+'"')
            LAST_CLUSTER_NAME = existing_cluster_name
            LAST_CLUSTER_EXISTING = True
        else:
            new_cluster_name = CLUSTER_PREFIX+'-'+common.NOW+'-'+kube_version
            print('Creating new kind cluster for '+kube_version+', "'+new_cluster_name+'"')
            LAST_CLUSTER_NAME = new_cluster_name
            cmd = 'kind create cluster --loglevel='+LOG_LEVEL
            cmd += ' --name='+new_cluster_name
            cmd += ' --image='+DOCKER_HUB_REPO+':v'+kube_version
            self.run_command(cmd)
    def delete_test_cluster(self):
        if LAST_CLUSTER_EXISTING:
            print('Not deleting cluster (cluster existed prior to test run)')
        else:
            cmd = 'kind delete cluster --loglevel='+LOG_LEVEL
            cmd += ' --name='+LAST_CLUSTER_NAME
            self.run_command(cmd)
    def cleanup_all_test_clusters(self):
        cmd = 'for i in `kind get clusters| grep ^'+CLUSTER_PREFIX+'-'+common.NOW+'`;'

@@ -9,3 +9,29 @@ class Kubectl(common.CommandRunner):
    def get_pods(self, namespace):
        cmd = 'kubectl get pods --namespace='+namespace
        self.run_command(kind_auth_wrap(cmd))
    def get_services(self, namespace):
        cmd = 'kubectl get services --namespace='+namespace
        self.run_command(kind_auth_wrap(cmd))

    def get_persistent_volume_claims(self, namespace):
        cmd = 'kubectl get pvc --namespace='+namespace
        self.run_command(kind_auth_wrap(cmd))

    def service_has_ip(self, namespace, service_name):
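        # CLUSTER-IP is the third column of `kubectl get services` output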
        cmd = 'kubectl get services --namespace='+namespace
        cmd += ' | grep '+service_name
        cmd += ' | awk \'{print $3}\' | grep \'\(.\).*\\1\''
        self.run_command(kind_auth_wrap(cmd))

    def persistent_volume_claim_is_bound(self, namespace, pvc_name):
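        # STATUS is the second column of `kubectl get pvc` output; expect "Bound"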
        cmd = 'kubectl get pvc --namespace='+namespace
        cmd += ' | grep '+pvc_name
        cmd += ' | awk \'{print $2}\' | grep ^Bound'
        self.run_command(kind_auth_wrap(cmd))

    def pods_with_prefix_are_running(self, namespace, pod_prefix, num_expected):
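        # Join the READY and STATUS columns of `kubectl get pods` (e.g. "3/3--Running"),
        # then check that the number of fully-ready Running pods equals num_expected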
        cmd = '[ `kubectl get pods --namespace='+namespace
        cmd += ' | grep ^'+pod_prefix+' | awk \'{print $2 "--" $3}\''
        cmd += ' | grep -E "^([1-9][0-9]*)/\\1--Running" | wc -l` == '+num_expected+' ]'
        self.run_command(kind_auth_wrap(cmd))

@@ -1,7 +1,7 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: {{ template "nginx.fullname" . }}-fluentd-es
  labels:
    k8s-app: fluentd-logging
spec:

@@ -1,7 +1,7 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ template "nginx.fullname" . }}
spec:
  accessModes:
    - ReadWriteOnce

@@ -1,7 +1,7 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ template "nginx.fullname" . }}-web
spec:
  selector:
    matchLabels:

@@ -4,8 +4,8 @@ REQUIRED_SYSTEM_COMMANDS=(
  "kind"
  "kubectl"
  "python3"
  "pip"
  "virtualenv"
)

set +x
