diff --git a/acceptance_tests/README.md b/acceptance_tests/README.md
index 13154d5ae..313d5c6ae 100644
--- a/acceptance_tests/README.md
+++ b/acceptance_tests/README.md
@@ -1,3 +1,85 @@
 # Helm Acceptance Tests
 
-TODO
\ No newline at end of file
+This directory contains the source for Helm acceptance tests.
+
+The tests are written using [Robot Framework](https://robotframework.org/).
+
+## System requirements
+
+The following tools/commands are expected to be present on the base system
+prior to running the tests:
+
+- [kind](https://kind.sigs.k8s.io/)
+- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
+- [python3](https://www.python.org/downloads/)
+- [pip](https://pip.pypa.io/en/stable/installing/)
+- [virtualenv](https://virtualenv.pypa.io/en/latest/installation/)
+
+## Running the tests
+
+From the root of this repo, run the following:
+
+```
+make acceptance
+```
+
+## Viewing the results
+
+Robot creates an HTML test report describing test successes/failures.
+
+To view the report, run the following:
+
+```
+open .acceptance/report.html
+```
+
+Note: by default, the tests will output to the `.acceptance/` directory.
+To modify this location, set the `ROBOT_OUTPUT_DIR` environment variable.
+
+## Kubernetes integration
+
+When testing Helm against multiple Kubernetes versions,
+new test clusters are created on the fly (using `kind`),
+with names in the following format:
+
+```
+helm-acceptance-test-<timestamp>-<kubernetes_version>
+```
+
+If you wish to use an existing `kind` cluster for one
+or more versions, you can set an environment variable for
+a given version.
+
+Here is an example of using an existing `kind` cluster
+for Kubernetes version `1.15.0`:
+
+```
+export KIND_CLUSTER_1_15_0="helm-ac-keepalive-1.15.0"
+```
+
+A `kind` cluster can be created manually like so:
+
+```
+kind create cluster \
+  --name=helm-ac-keepalive-1.15.0 \
+  --image=kindest/node:v1.15.0
+```
+
+## Adding a new test case
+
+All files with a `.robot` extension in this directory will be executed.
+Add a new file describing your test, or, alternatively, add to an existing one.
+
+Robot tests themselves are written in (mostly) plain English, but the Python
+programming language can be used to add custom keywords.
+
+Notice the [lib/](./lib/) directory - this contains Python libraries that
+enable us to work with system tools such as `kind`. The file [common.py](./lib/common.py)
+contains a base class called `CommandRunner` that you will likely want to
+leverage when adding support for a new external tool.
+
+The test run is wrapped by [acceptance.sh](./../scripts/acceptance.sh) -
+in this file the environment is validated (i.e. it checks that the required tools are
+present) and a Python virtualenv is set up in which the required Python libraries are
+installed (including Robot Framework itself). If any additional Python libraries
+are required for a new library, they can be appended to `ROBOT_PY_REQUIRES`.
diff --git a/acceptance_tests/kubernetes_versions.robot b/acceptance_tests/kubernetes_versions.robot
index c5f197057..f14e9b1f8 100644
--- a/acceptance_tests/kubernetes_versions.robot
+++ b/acceptance_tests/kubernetes_versions.robot
@@ -1,5 +1,14 @@
 *** Settings ***
-Documentation     Verify Helm functionality on multiple Kubernetes versions
+Documentation     Verify Helm functionality on multiple Kubernetes versions.
+...
+...               Fresh new kind-based clusters will be created for each
+...               of the Kubernetes versions being tested. An existing
+...               kind cluster can be used by specifying it in an env var
+...               representing the version, for example:
+...
+...               export KIND_CLUSTER_1_14_3="helm-ac-keepalive-1.14.3"
+...               export KIND_CLUSTER_1_15_0="helm-ac-keepalive-1.15.0"
+...
 Library           String
 Library           lib/Kind.py
 Library           lib/Kubectl.py
@@ -8,8 +17,8 @@ Suite Setup       Suite Setup
 Suite Teardown    Suite Teardown
 
 *** Test Cases ***
-Helm works with Kubernetes 1.14.3
-    Test Helm on Kubernetes version    1.14.3
+#Helm works with Kubernetes 1.14.3
+#    Test Helm on Kubernetes version    1.14.3
 
 Helm works with Kubernetes 1.15.0
     Test Helm on Kubernetes version    1.15.0
@@ -18,7 +27,10 @@ Helm works with Kubernetes 1.15.0
 Test Helm on Kubernetes version
     [Arguments]    ${kube_version}
     Create test cluster with kube version    ${kube_version}
-    Verify wait flag works
+
+    # Add new test cases here
+    Verify --wait flag works as expected
+
     Kind.Delete test cluster
 
 Create test cluster with kube version
@@ -26,36 +38,75 @@ Create test cluster with kube version
     Kind.Create test cluster with Kubernetes version    ${kube_version}
     Kind.Wait for cluster
     Kubectl.Get nodes
-    Kubectl.return code should be    0
+    Kubectl.Return code should be    0
     Kubectl.Get pods    kube-system
-    Kubectl.return code should be    0
+    Kubectl.Return code should be    0
 
-Verify wait flag works
+Verify --wait flag works as expected
     # Install nginx chart in a good state, using --wait flag
     Helm.Delete release    wait-flag-good
     Helm.Install test chart    wait-flag-good    nginx    --wait --timeout=60s
-    Helm.return code should be    0
+    Helm.Return code should be    0
 
     # Make sure everything is up-and-running
-    # TODO
+    Kubectl.Get pods    default
+    Kubectl.Get services    default
+    Kubectl.Get persistent volume claims    default
+
+    Kubectl.Service has IP    default    wait-flag-good-nginx
+    Kubectl.Return code should be    0
+
+    Kubectl.Persistent volume claim is bound    default    wait-flag-good-nginx
+    Kubectl.Return code should be    0
+
+    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-ext-    3
+    Kubectl.Return code should be    0
+    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-fluentd-es-    1
+    Kubectl.Return code should be    0
+    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-v1-    3
+    Kubectl.Return code should be    0
+    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-v1beta1-    3
+    Kubectl.Return code should be    0
+    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-v1beta2-    3
+    Kubectl.Return code should be    0
+    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-web-    3
+    Kubectl.Return code should be    0
 
     # Delete good release
     Helm.Delete release    wait-flag-good
-    Helm.return code should be    0
+    Helm.Return code should be    0
 
     # Install nginx chart in a bad state, using --wait flag
     Helm.Delete release    wait-flag-bad
     Helm.Install test chart    wait-flag-bad    nginx    --wait --timeout=60s --set breakme=true
 
     # Install should return non-zero, as things fail to come up
-    Helm.return code should not be    0
+    Helm.Return code should not be    0
+
+    # Make sure things are NOT up-and-running
+    Kubectl.Get pods    default
+    Kubectl.Get services    default
+    Kubectl.Get persistent volume claims    default
+
+    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-ext-    3
+    Kubectl.Return code should not be    0
+    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-fluentd-es-    1
+    Kubectl.Return code should not be    0
+    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-v1-    3
+    Kubectl.Return code should not be    0
+    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-v1beta1-    3
+    Kubectl.Return code should not be    0
+    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-v1beta2-    3
+    Kubectl.Return code should not be    0
+    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-web-    3
+    Kubectl.Return code should not be    0
 
     # Delete bad release
     Helm.Delete release    wait-flag-bad
-    Helm.return code should be    0
+    Helm.Return code should be    0
 
 Suite Setup
-    Kind.cleanup all test clusters
+    Kind.Cleanup all test clusters
 
 Suite Teardown
-    Kind.cleanup all test clusters
+    Kind.Cleanup all test clusters
diff --git a/acceptance_tests/lib/Kind.py b/acceptance_tests/lib/Kind.py
index a133b29e7..8129f3b2e 100644
--- a/acceptance_tests/lib/Kind.py
+++ b/acceptance_tests/lib/Kind.py
@@ -1,5 +1,6 @@
 import common
 import time
+import os
 
 DOCKER_HUB_REPO='kindest/node'
 CLUSTER_PREFIX = 'helm-acceptance-test'
@@ -14,6 +15,7 @@ KIND_POD_INTERVAL_SECONDS = 2
 KIND_POD_EXPECTED_NUMBER = 8
 
 LAST_CLUSTER_NAME = 'UNSET'
+LAST_CLUSTER_EXISTING = False
 
 def kind_auth_wrap(cmd):
     c = 'export KUBECONFIG="$(kind get kubeconfig-path'
@@ -22,17 +24,28 @@ def kind_auth_wrap(cmd):
 
 class Kind(common.CommandRunner):
     def create_test_cluster_with_kubernetes_version(self, kube_version):
-        global LAST_CLUSTER_NAME
-        LAST_CLUSTER_NAME = CLUSTER_PREFIX+'-'+common.NOW+'-'+kube_version
-        cmd = 'kind create cluster --loglevel='+LOG_LEVEL
-        cmd += ' --name='+LAST_CLUSTER_NAME
-        cmd += ' --image='+DOCKER_HUB_REPO+':v'+kube_version
-        self.run_command(cmd)
+        global LAST_CLUSTER_NAME, LAST_CLUSTER_EXISTING
+        existing_cluster_name = os.getenv('KIND_CLUSTER_'+kube_version.replace('.', '_'))
+        if existing_cluster_name:
+            print('Using existing kind cluster for '+kube_version+', "'+existing_cluster_name+'"')
+            LAST_CLUSTER_NAME = existing_cluster_name
+            LAST_CLUSTER_EXISTING = True
+        else:
+            new_cluster_name = CLUSTER_PREFIX+'-'+common.NOW+'-'+kube_version
+            print('Creating new kind cluster for '+kube_version+', "'+new_cluster_name+'"')
+            LAST_CLUSTER_NAME = new_cluster_name
+            cmd = 'kind create cluster --loglevel='+LOG_LEVEL
+            cmd += ' --name='+new_cluster_name
+            cmd += ' --image='+DOCKER_HUB_REPO+':v'+kube_version
+            self.run_command(cmd)
 
     def delete_test_cluster(self):
-        cmd = 'kind delete cluster --loglevel='+LOG_LEVEL
-        cmd += ' --name='+LAST_CLUSTER_NAME
-        self.run_command(cmd)
+        if LAST_CLUSTER_EXISTING:
+            print('Not deleting cluster (cluster existed prior to test run)')
+        else:
+            cmd = 'kind delete cluster --loglevel='+LOG_LEVEL
+            cmd += ' --name='+LAST_CLUSTER_NAME
+            self.run_command(cmd)
 
     def cleanup_all_test_clusters(self):
         cmd = 'for i in `kind get clusters| grep ^'+CLUSTER_PREFIX+'-'+common.NOW+'`;'
diff --git a/acceptance_tests/lib/Kubectl.py b/acceptance_tests/lib/Kubectl.py
index 5e79016c5..8bb7c0ea8 100644
--- a/acceptance_tests/lib/Kubectl.py
+++ b/acceptance_tests/lib/Kubectl.py
@@ -9,3 +9,29 @@ class Kubectl(common.CommandRunner):
     def get_pods(self, namespace):
         cmd = 'kubectl get pods --namespace='+namespace
         self.run_command(kind_auth_wrap(cmd))
+
+    def get_services(self, namespace):
+        cmd = 'kubectl get services --namespace='+namespace
+        self.run_command(kind_auth_wrap(cmd))
+
+    def get_persistent_volume_claims(self, namespace):
+        cmd = 'kubectl get pvc --namespace='+namespace
+        self.run_command(kind_auth_wrap(cmd))
+
+    def service_has_ip(self, namespace, service_name):
+        cmd = 'kubectl get services --namespace='+namespace
+        cmd += ' | grep '+service_name
+        cmd += ' | awk \'{print $3}\' | grep \'\(.\).*\\1\''
+        self.run_command(kind_auth_wrap(cmd))
+
+    def persistent_volume_claim_is_bound(self, namespace, pvc_name):
+        cmd = 'kubectl get pvc --namespace='+namespace
+        cmd += ' | grep '+pvc_name
+        cmd += ' | awk \'{print $2}\' | grep ^Bound'
+        self.run_command(kind_auth_wrap(cmd))
+
+    def pods_with_prefix_are_running(self, namespace, pod_prefix, num_expected):
+        cmd = '[ `kubectl get pods --namespace='+namespace
+        cmd += ' | grep ^'+pod_prefix+' | awk \'{print $2 "--" $3}\''
+        cmd += ' | grep -E "^([1-9][0-9]*)/\\1--Running" | wc -l` == '+num_expected+' ]'
+        self.run_command(kind_auth_wrap(cmd))
\ No newline at end of file
diff --git a/acceptance_tests/testdata/charts/nginx/templates/ds.yaml b/acceptance_tests/testdata/charts/nginx/templates/ds.yaml
index 03a513b3c..678878520 100755
--- a/acceptance_tests/testdata/charts/nginx/templates/ds.yaml
+++ b/acceptance_tests/testdata/charts/nginx/templates/ds.yaml
@@ -1,7 +1,7 @@
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
-  name: fluentd-elasticsearch
+  name: {{ template "nginx.fullname" . }}-fluentd-es
   labels:
     k8s-app: fluentd-logging
 spec:
diff --git a/acceptance_tests/testdata/charts/nginx/templates/pvc.yaml b/acceptance_tests/testdata/charts/nginx/templates/pvc.yaml
index 4a021a979..193d7a830 100755
--- a/acceptance_tests/testdata/charts/nginx/templates/pvc.yaml
+++ b/acceptance_tests/testdata/charts/nginx/templates/pvc.yaml
@@ -1,7 +1,7 @@
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
-  name: myclaim
+  name: {{ template "nginx.fullname" . }}
 spec:
   accessModes:
     - ReadWriteOnce
diff --git a/acceptance_tests/testdata/charts/nginx/templates/statefulset.yaml b/acceptance_tests/testdata/charts/nginx/templates/statefulset.yaml
index 588673b09..2f027c5a4 100755
--- a/acceptance_tests/testdata/charts/nginx/templates/statefulset.yaml
+++ b/acceptance_tests/testdata/charts/nginx/templates/statefulset.yaml
@@ -1,7 +1,7 @@
 apiVersion: apps/v1
 kind: StatefulSet
 metadata:
-  name: web
+  name: {{ template "nginx.fullname" . }}-web
 spec:
   selector:
     matchLabels:
diff --git a/scripts/acceptance.sh b/scripts/acceptance.sh
index cdbc9e47f..4c6af4aa9 100755
--- a/scripts/acceptance.sh
+++ b/scripts/acceptance.sh
@@ -4,8 +4,8 @@ REQUIRED_SYSTEM_COMMANDS=(
   "kind"
   "kubectl"
   "python3"
-  "virtualenv"
   "pip"
+  "virtualenv"
 )
 
 set +x
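
To make the "Adding a new test case" guidance in the README above concrete, here is a minimal sketch of what a new external-tool library under `acceptance_tests/lib/` might look like. The `Docker` class, the `docker version` command, and the resulting keyword names are hypothetical examples; only `common.CommandRunner` and its `run_command` method appear in the diff, and `Return code should be` is assumed to be inherited from `CommandRunner`, as it is for the `Kind`, `Kubectl`, and `Helm` libraries.

```python
# Hypothetical acceptance_tests/lib/Docker.py - illustrative sketch only.
import common


class Docker(common.CommandRunner):
    # Robot Framework exposes this method as the keyword "Get docker version".
    def get_docker_version(self):
        cmd = 'docker version'
        self.run_command(cmd)
```

A test could then load the library with `Library    lib/Docker.py` and call `Docker.Get docker version` followed by `Docker.Return code should be    0`, mirroring how the `Kubectl` and `Helm` keywords are used in `kubernetes_versions.robot`.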