Unverified Commit 76c42b4d authored by Max Gautier, committed by GitHub

CI: cleanup '-scale' tests infra (#11535)

There is actually no test using this since ad6fecef,
so there is no reason to keep that infra in our test scripts.
parent b3b00775
# Node Layouts
-There are six node layout types: `default`, `separate`, `ha`, `scale`, `all-in-one`, and `node-etcd-client`.
+There are five node layout types: `default`, `separate`, `ha`, `all-in-one`, and `node-etcd-client`.
`default` is a non-HA two-node setup with one separate `kube_node`
and the `etcd` group merged with the `kube_control_plane`.
@@ -11,11 +11,6 @@ and the `etcd` group merged with the `kube_control_plane`.
`ha` layout consists of two etcd nodes, two control planes and a single worker node,
with role intersection.
-`scale` layout can be combined with above layouts (`ha-scale`, `separate-scale`). It includes 200 fake hosts
-in the Ansible inventory. This helps test TLS certificate generation at scale
-to prevent regressions and profile certain long-running tasks. These nodes are
-never actually deployed, but certificates are generated for them.
`all-in-one` layout uses a single node, with `kube_control_plane`, `etcd` and `kube_node` merged.
`node-etcd-client` layout consists of a 4-node cluster, all of them in `kube_node`, the first 3 in `etcd` and only one in `kube_control_plane`.
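For quick reference, the layouts that remain after this cleanup can be summarized as a mapping from layout name to node count. This is an illustration derived from the descriptions above and from `_vm_count_dict` elsewhere in this commit, not code shipped in the repo:

```python
# Illustrative summary (not part of the repo) of the five remaining layouts.
# Counts for separate/ha/all-in-one match _vm_count_dict in this commit;
# counts for default and node-etcd-client come from the prose above.
node_counts = {
    "default": 2,           # one kube_control_plane (etcd merged) + one kube_node
    "separate": 3,          # control plane, etcd and worker on distinct nodes
    "ha": 3,                # two etcd, two control planes, one worker; roles intersect
    "all-in-one": 1,        # everything merged on a single node
    "node-etcd-client": 4,  # all 4 in kube_node, first 3 in etcd, 1 control plane
}

print(len(node_counts))  # -> 5
```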
@@ -3,7 +3,7 @@
instance-{{ loop.index }} ansible_host={{instance.stdout}}
{% endfor %}
-{% if mode is defined and mode in ["separate", "separate-scale"] %}
+{% if mode == "separate" %}
[kube_control_plane]
instance-1
@@ -12,7 +12,7 @@ instance-2
[etcd]
instance-3
-{% elif mode is defined and mode in ["ha", "ha-scale"] %}
+{% elif mode == "ha" %}
[kube_control_plane]
instance-1
instance-2
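The simplified conditions above drop the `mode is defined` guard. That is safe because in Jinja2 an undefined variable simply compares unequal to any string, so the template still falls through when no mode is set. A minimal Python sketch of the same behaviour (the function and group values mirror the template's branches but are illustrative):

```python
# Sketch: an undefined `mode` (modeled here as None) matches neither branch,
# mirroring Jinja2, where Undefined == "separate" is simply false.
def select_groups(mode=None):
    if mode == "separate":
        return {"kube_control_plane": ["instance-1"], "etcd": ["instance-3"]}
    elif mode == "ha":
        return {"kube_control_plane": ["instance-1", "instance-2"]}
    return {}

print(select_groups())      # -> {}  (no mode set: falls through, as before)
print(select_groups("ha"))  # -> {'kube_control_plane': ['instance-1', 'instance-2']}
```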
@@ -103,5 +103,3 @@ kube_control_plane
calico_rr
[calico_rr]
-[fake_hosts]
---
_vm_count_dict:
separate: 3
-separate-scale: 3
ha: 3
-ha-scale: 3
ha-recover: 3
ha-recover-noquorum: 3
all-in-one: 1
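With the `-scale` entries gone, a CI helper resolving the VM count for a layout only needs the remaining keys. A hypothetical lookup in Python (the `vm_count` helper and its fallback of 2 for the two-node `default` layout are assumptions, not repo code):

```python
# Mirrors _vm_count_dict after this commit; helper name and fallback are assumed.
VM_COUNT = {
    "separate": 3,
    "ha": 3,
    "ha-recover": 3,
    "ha-recover-noquorum": 3,
    "all-in-one": 1,
}

def vm_count(layout):
    # Fall back to 2 for layouts without a dedicated entry (e.g. `default`).
    return VM_COUNT.get(layout, 2)

print(vm_count("ha"))       # -> 3
print(vm_count("default"))  # -> 2
```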
@@ -54,7 +54,7 @@ run_playbook () {
playbook=$1
shift
# We can set --limit here and still pass it as supplemental args because `--limit` is a 'last one wins' option
-ansible-playbook --limit "all:!fake_hosts" \
+ansible-playbook \
$ANSIBLE_LOG_LEVEL \
-e @${CI_TEST_SETTING} \
-e @${CI_TEST_REGISTRY_MIRROR} \
@@ -85,8 +85,8 @@ fi
# Test control plane recovery
if [ "${RECOVER_CONTROL_PLANE_TEST}" != "false" ]; then
-run_playbook reset.yml --limit "${RECOVER_CONTROL_PLANE_TEST_GROUPS}:!fake_hosts" -e reset_confirmation=yes
-run_playbook recover-control-plane.yml -e etcd_retries=10 --limit "etcd:kube_control_plane:!fake_hosts"
+run_playbook reset.yml --limit "${RECOVER_CONTROL_PLANE_TEST_GROUPS}" -e reset_confirmation=yes
+run_playbook recover-control-plane.yml -e etcd_retries=10 --limit "etcd:kube_control_plane"
fi
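The comment kept in `run_playbook` explains that `--limit` is a "last one wins" option, which is what made the removed baseline `--limit "all:!fake_hosts"` safe to combine with caller-supplied limits. A self-contained sketch of that option behaviour (`parse_limit` is an illustrative stand-in, not ansible code):

```python
# Illustrative stand-in for a last-one-wins option: a later --limit on the
# command line overrides any earlier one.
def parse_limit(argv):
    limit = None
    args = iter(argv)
    for arg in args:
        if arg == "--limit":
            limit = next(args, None)  # keep only the most recent value
    return limit

# Wrapper injects a baseline limit first; the caller's limit comes last and wins.
argv = ["--limit", "all:!fake_hosts", "reset.yml", "--limit", "etcd:kube_control_plane"]
print(parse_limit(argv))  # -> etcd:kube_control_plane
```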
# Test collection build and install by installing our collection, emptying our repository, adding
-ansible_default_ipv4:
-  address: 255.255.255.255
-ansible_hostname: "{{ '{{' }}inventory_hostname }}"