diff --git a/contrib/network-storage/glusterfs/README.md b/contrib/network-storage/glusterfs/README.md
deleted file mode 100644
index bfe0a4d6e5c88788a7c706acff265303810e1eca..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/README.md
+++ /dev/null
@@ -1,92 +0,0 @@
-# Deploying a Kubespray Kubernetes Cluster with GlusterFS
-
-You can either deploy using Ansible on its own, supplying your own inventory file, or use Terraform to create the VMs and then provide a dynamic inventory to Ansible. The following two sections are self-contained; you don't need to go through one to use the other. So, if you want to provision with Terraform, you can skip the **Using an Ansible inventory** section, and if you want to provision with a pre-built Ansible inventory, you can skip the **Using Terraform and Ansible** section.
-
-## Using an Ansible inventory
-
-In the same directory as this README file you should find a file named `inventory.example`, which contains an example setup. Please note that, in addition to the Kubernetes nodes/masters, we define a set of machines for GlusterFS and add them to the group `[gfs-cluster]`, which in turn is added to the larger `[network-storage]` group as a child group, as shown below.
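-
-The relevant groups from `inventory.example`:
-
-```ini
-[gfs-cluster]
-gfs_node1
-gfs_node2
-gfs_node3
-
-[network-storage:children]
-gfs-cluster
-```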
-
-Change that file to reflect your local setup (adding or removing machines and setting the appropriate IP addresses), and save it as `inventory/sample/k8s_gfs_inventory`. Make sure that the settings in `inventory/sample/group_vars/all.yml` match your deployment. Then change to the kubespray root folder and execute (assuming the machines all run Ubuntu):
-
-```shell
-ansible-playbook -b --become-user=root -i inventory/sample/k8s_gfs_inventory --user=ubuntu ./cluster.yml
-```
-
-This will provision your Kubernetes cluster. Then, to provision and configure the GlusterFS cluster, from the same directory execute:
-
-```shell
-ansible-playbook -b --become-user=root -i inventory/sample/k8s_gfs_inventory --user=ubuntu ./contrib/network-storage/glusterfs/glusterfs.yml
-```
-
-If your machines are not running Ubuntu, change `--user=ubuntu` to the correct user. Alternatively, if your Kubernetes machines use one OS and your GlusterFS machines another, you can instead set the `ansible_ssh_user=<correct-user>` variable in the inventory file that you just created, for each machine/VM:
-
-```shell
-k8s-master-1 ansible_ssh_host=192.168.0.147 ip=192.168.0.147 ansible_ssh_user=core
-k8s-master-node-1 ansible_ssh_host=192.168.0.148 ip=192.168.0.148 ansible_ssh_user=core
-k8s-master-node-2 ansible_ssh_host=192.168.0.146 ip=192.168.0.146 ansible_ssh_user=core
-```
-
-## Using Terraform and Ansible
-
-The first step is to fill in a `my-kubespray-gluster-cluster.tfvars` file with the desired specification for your cluster. An example with all required variables looks like this:
-
-```ini
-cluster_name = "cluster1"
-number_of_k8s_masters = "1"
-number_of_k8s_masters_no_floating_ip = "2"
-number_of_k8s_nodes_no_floating_ip = "0"
-number_of_k8s_nodes = "0"
-public_key_path = "~/.ssh/my-desired-key.pub"
-image = "Ubuntu 16.04"
-ssh_user = "ubuntu"
-flavor_k8s_node = "node-flavor-id-in-your-openstack"
-flavor_k8s_master = "master-flavor-id-in-your-openstack"
-network_name = "k8s-network"
-floatingip_pool = "net_external"
-
-# GlusterFS variables
-flavor_gfs_node = "gluster-flavor-id-in-your-openstack"
-image_gfs = "Ubuntu 16.04"
-number_of_gfs_nodes_no_floating_ip = "3"
-gfs_volume_size_in_gb = "50"
-ssh_user_gfs = "ubuntu"
-```
-
-As explained in the general terraform/openstack guide, you need to source your OpenStack credentials file, add your ssh key to the ssh-agent, and set up environment variables for Terraform:
-
-```shell
-$ source ~/.stackrc
-$ eval $(ssh-agent -s)
-$ ssh-add ~/.ssh/my-desired-key
-$ echo Setting up Terraform creds && \
-  export TF_VAR_username=${OS_USERNAME} && \
-  export TF_VAR_password=${OS_PASSWORD} && \
-  export TF_VAR_tenant=${OS_TENANT_NAME} && \
-  export TF_VAR_auth_url=${OS_AUTH_URL}
-```
-
-Then, from the kubespray directory (the root of the Git checkout), issue the following Terraform command to create the VMs for the cluster:
-
-```shell
-terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kubespray-gluster-cluster.tfvars contrib/terraform/openstack
-```
-
-This will create both your Kubernetes and Gluster VMs. Make sure that the Ansible file `contrib/terraform/openstack/group_vars/all.yml` includes any Ansible variables that you want to set (for instance, the type of machine for bootstrapping). A hypothetical override might look like:
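-
-```yaml
-# Hypothetical override; use whatever value your images require.
-bootstrap_os: ubuntu
-```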
-
-Then, provision your Kubernetes (kubespray) cluster with the following Ansible call:
-
-```shell
-ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./cluster.yml
-```
-
-Finally, provision the GlusterFS nodes and add the Persistent Volume setup for GlusterFS in Kubernetes with the following Ansible call:
-
-```shell
-ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./contrib/network-storage/glusterfs/glusterfs.yml
-```
-
-If you need to destroy the cluster, you can run:
-
-```shell
-terraform destroy -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kubespray-gluster-cluster.tfvars contrib/terraform/openstack
-```
diff --git a/contrib/network-storage/glusterfs/glusterfs.yml b/contrib/network-storage/glusterfs/glusterfs.yml
deleted file mode 100644
index d5ade945b7c6896e81eece30629301159321e41f..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/glusterfs.yml
+++ /dev/null
@@ -1,29 +0,0 @@
----
-- name: Bootstrap hosts
-  hosts: gfs-cluster
-  gather_facts: false
-  vars:
-    ansible_ssh_pipelining: false
-  roles:
-    - { role: bootstrap-os, tags: bootstrap-os}
-
-- name: Gather facts
-  hosts: all
-  gather_facts: true
-
-- name: Install glusterfs server
-  hosts: gfs-cluster
-  vars:
-    ansible_ssh_pipelining: true
-  roles:
-    - { role: glusterfs/server }
-
-- name: Install glusterfs clients
-  hosts: k8s_cluster
-  roles:
-    - { role: glusterfs/client }
-
-- name: Configure Kubernetes to use glusterfs
-  hosts: kube_control_plane[0]
-  roles:
-    - { role: kubernetes-pv }
diff --git a/contrib/network-storage/glusterfs/group_vars b/contrib/network-storage/glusterfs/group_vars
deleted file mode 120000
index 6a3f85e47c9ef4922829af8ad65e3c37cf478fb3..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/group_vars
+++ /dev/null
@@ -1 +0,0 @@
-../../../inventory/local/group_vars
\ No newline at end of file
diff --git a/contrib/network-storage/glusterfs/inventory.example b/contrib/network-storage/glusterfs/inventory.example
deleted file mode 100644
index 985647e8cd07e0da02f852b99557227c07cfa1f1..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/inventory.example
+++ /dev/null
@@ -1,43 +0,0 @@
-# ## Configure 'ip' variable to bind kubernetes services on a
-# ## different ip than the default iface
-# node1 ansible_ssh_host=95.54.0.12  # ip=10.3.0.1
-# node2 ansible_ssh_host=95.54.0.13  # ip=10.3.0.2
-# node3 ansible_ssh_host=95.54.0.14  # ip=10.3.0.3
-# node4 ansible_ssh_host=95.54.0.15  # ip=10.3.0.4
-# node5 ansible_ssh_host=95.54.0.16  # ip=10.3.0.5
-# node6 ansible_ssh_host=95.54.0.17  # ip=10.3.0.6
-#
-# ## GlusterFS nodes
-# ## Set disk_volume_device_1 to the desired device for the gluster brick, if different from /dev/vdb (the default).
-# ## As in the previous case, you can set ip for direct communication over internal IPs
-# gfs_node1 ansible_ssh_host=95.54.0.18 # disk_volume_device_1=/dev/vdc  ip=10.3.0.7
-# gfs_node2 ansible_ssh_host=95.54.0.19 # disk_volume_device_1=/dev/vdc  ip=10.3.0.8
-# gfs_node3 ansible_ssh_host=95.54.0.20 # disk_volume_device_1=/dev/vdc  ip=10.3.0.9
-
-# [kube_control_plane]
-# node1
-# node2
-
-# [etcd]
-# node1
-# node2
-# node3
-
-# [kube_node]
-# node2
-# node3
-# node4
-# node5
-# node6
-
-# [k8s_cluster:children]
-# kube_node
-# kube_control_plane
-
-# [gfs-cluster]
-# gfs_node1
-# gfs_node2
-# gfs_node3
-
-# [network-storage:children]
-# gfs-cluster
diff --git a/contrib/network-storage/glusterfs/roles/bootstrap-os b/contrib/network-storage/glusterfs/roles/bootstrap-os
deleted file mode 120000
index 44dbbe482c9318166b93534f1949f754dfeda94e..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/bootstrap-os
+++ /dev/null
@@ -1 +0,0 @@
-../../../../roles/bootstrap-os
\ No newline at end of file
diff --git a/contrib/network-storage/glusterfs/roles/glusterfs/README.md b/contrib/network-storage/glusterfs/roles/glusterfs/README.md
deleted file mode 100644
index 9e5bf5dcfbc682aafcca4b78ab40bf9c9bec9cde..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/glusterfs/README.md
+++ /dev/null
@@ -1,50 +0,0 @@
-# Ansible Role: GlusterFS
-
-[![Build Status](https://travis-ci.org/geerlingguy/ansible-role-glusterfs.svg?branch=master)](https://travis-ci.org/geerlingguy/ansible-role-glusterfs)
-
-Installs and configures GlusterFS on Linux.
-
-## Requirements
-
-For GlusterFS to connect between servers, TCP ports `24007`, `24008`, and `24009`+ or `49152`+ (that port, plus one additional incremented port for each additional server in the cluster; the `49152`+ range applies to GlusterFS 3.4+), and TCP/UDP port `111` must be open. You can open these using whatever firewall you wish (this can easily be configured using the `geerlingguy.firewall` role).
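-
-For example, a sketch using the `geerlingguy.firewall` role's variables (adjust the brick port range to the number of servers in your cluster):
-
-```yaml
-firewall_allowed_tcp_ports:
-  - "111"
-  - "24007"
-  - "24008"
-  - "49152"
-  - "49153"
-  - "49154"
-firewall_allowed_udp_ports:
-  - "111"
-```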
-
-This role performs basic installation and setup of Gluster, but it does not configure or mount bricks (volumes), since that step is easier to do in a series of plays in your own playbook. Ansible 1.9+ includes the [`gluster_volume`](https://docs.ansible.com/ansible/latest/collections/gluster/gluster/gluster_volume_module.html) module to ease the management of Gluster volumes.
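-
-A minimal sketch of such a play task (hypothetical host names and brick path):
-
-```yaml
-- name: Create a replicated Gluster volume
-  gluster.gluster.gluster_volume:
-    state: present
-    name: example
-    brick: /srv/gluster/brick
-    replicas: 2
-    cluster: gluster1,gluster2
-    host: "{{ inventory_hostname }}"
-  run_once: true
-```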
-
-## Role Variables
-
-Available variables are listed below, along with default values (see `defaults/main.yml`):
-
-```yaml
-glusterfs_default_release: ""
-```
-
-You can specify a `default_release` for apt on Debian/Ubuntu by overriding this variable. This is helpful if you need a different package or version for the main GlusterFS packages (e.g. GlusterFS 3.5.x instead of 3.2.x with the `wheezy-backports` default release on Debian Wheezy).
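-
-For example (illustrative value):
-
-```yaml
-glusterfs_default_release: "wheezy-backports"
-```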
-
-```yaml
-glusterfs_ppa_use: true
-glusterfs_ppa_version: "3.5"
-```
-
-For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/) for more info.
-
-## Dependencies
-
-None.
-
-## Example Playbook
-
-```yaml
-    - hosts: server
-      roles:
-        - geerlingguy.glusterfs
-```
-
-For a real-world use example, read through [Simple GlusterFS Setup with Ansible](http://www.jeffgeerling.com/blog/simple-glusterfs-setup-ansible), a blog post by this role's author, which is included in Chapter 8 of [Ansible for DevOps](https://www.ansiblefordevops.com/).
-
-## License
-
-MIT / BSD
-
-## Author Information
-
-This role was created in 2015 by [Jeff Geerling](http://www.jeffgeerling.com/), author of [Ansible for DevOps](https://www.ansiblefordevops.com/).
diff --git a/contrib/network-storage/glusterfs/roles/glusterfs/client/defaults/main.yml b/contrib/network-storage/glusterfs/roles/glusterfs/client/defaults/main.yml
deleted file mode 100644
index c3fff2e63248ce6f4473b47e297d82d9a85c1551..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/glusterfs/client/defaults/main.yml
+++ /dev/null
@@ -1,11 +0,0 @@
----
-# For Ubuntu.
-glusterfs_default_release: ""
-glusterfs_ppa_use: true
-glusterfs_ppa_version: "4.1"
-
-# Gluster configuration.
-gluster_mount_dir: /mnt/gluster
-gluster_volume_node_mount_dir: /mnt/xfs-drive-gluster
-gluster_brick_dir: "{{ gluster_volume_node_mount_dir }}/brick"
-gluster_brick_name: gluster
diff --git a/contrib/network-storage/glusterfs/roles/glusterfs/client/meta/main.yml b/contrib/network-storage/glusterfs/roles/glusterfs/client/meta/main.yml
deleted file mode 100644
index b7fe4962e0432fa2368cd1cebad117f8c75e869d..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/glusterfs/client/meta/main.yml
+++ /dev/null
@@ -1,30 +0,0 @@
----
-dependencies: []
-
-galaxy_info:
-  author: geerlingguy
-  description: GlusterFS installation for Linux.
-  company: "Midwestern Mac, LLC"
-  license: "license (BSD, MIT)"
-  min_ansible_version: "2.0"
-  platforms:
-  - name: EL
-    versions:
-    - "6"
-    - "7"
-  - name: Ubuntu
-    versions:
-    - precise
-    - trusty
-    - xenial
-  - name: Debian
-    versions:
-    - wheezy
-    - jessie
-  galaxy_tags:
-  - system
-  - networking
-  - cloud
-  - clustering
-  - files
-  - sharing
diff --git a/contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/main.yml b/contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/main.yml
deleted file mode 100644
index 947cf8aa2317064a784c8b60c0d89f965a5a8269..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/main.yml
+++ /dev/null
@@ -1,21 +0,0 @@
----
-# This is meant for Ubuntu and RedHat installations, where the glusterfs-client apparently cannot be used from inside
-# hyperkube and needs to be installed on the host system.
-
-# Setup/install tasks.
-- name: Setup RedHat distros for glusterfs
-  include_tasks: setup-RedHat.yml
-  when: ansible_os_family == 'RedHat' and groups['gfs-cluster'] is defined
-
-- name: Setup Debian distros for glusterfs
-  include_tasks: setup-Debian.yml
-  when: ansible_os_family == 'Debian' and groups['gfs-cluster'] is defined
-
-- name: Ensure Gluster mount directories exist.
-  file:
-    path: "{{ item }}"
-    state: directory
-    mode: "0775"
-  with_items:
-    - "{{ gluster_mount_dir }}"
-  when: ansible_os_family in ["Debian","RedHat"] and groups['gfs-cluster'] is defined
diff --git a/contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/setup-Debian.yml b/contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/setup-Debian.yml
deleted file mode 100644
index 0d7cc18747ac39b0bdabec3a146ba613836395a3..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/setup-Debian.yml
+++ /dev/null
@@ -1,24 +0,0 @@
----
-- name: Add PPA for GlusterFS.
-  apt_repository:
-    repo: 'ppa:gluster/glusterfs-{{ glusterfs_ppa_version }}'
-    state: present
-    update_cache: true
-  register: glusterfs_ppa_added
-  when: glusterfs_ppa_use
-
-- name: Ensure GlusterFS client will reinstall if the PPA was just added.  # noqa no-handler
-  apt:
-    name: "{{ item }}"
-    state: absent
-  with_items:
-    - glusterfs-client
-  when: glusterfs_ppa_added.changed
-
-- name: Ensure GlusterFS client is installed.
-  apt:
-    name: "{{ item }}"
-    state: present
-    default_release: "{{ glusterfs_default_release }}"
-  with_items:
-    - glusterfs-client
diff --git a/contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/setup-RedHat.yml b/contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/setup-RedHat.yml
deleted file mode 100644
index d2ee36aa7cc9e29100545b70c04f556d5e861de8..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/setup-RedHat.yml
+++ /dev/null
@@ -1,14 +0,0 @@
----
-- name: Install Prerequisites
-  package:
-    name: "{{ item }}"
-    state: present
-  with_items:
-    - "centos-release-gluster{{ glusterfs_default_release }}"
-
-- name: Install Packages
-  package:
-    name: "{{ item }}"
-    state: present
-  with_items:
-    - glusterfs-client
diff --git a/contrib/network-storage/glusterfs/roles/glusterfs/server/defaults/main.yml b/contrib/network-storage/glusterfs/roles/glusterfs/server/defaults/main.yml
deleted file mode 100644
index 7d6e1025b1f0513c4901937a29f2fa7d56552111..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/glusterfs/server/defaults/main.yml
+++ /dev/null
@@ -1,13 +0,0 @@
----
-# For Ubuntu.
-glusterfs_default_release: ""
-glusterfs_ppa_use: true
-glusterfs_ppa_version: "3.12"
-
-# Gluster configuration.
-gluster_mount_dir: /mnt/gluster
-gluster_volume_node_mount_dir: /mnt/xfs-drive-gluster
-gluster_brick_dir: "{{ gluster_volume_node_mount_dir }}/brick"
-gluster_brick_name: gluster
-# Default device to mount for xfs formatting, terraform overrides this by setting the variable in the inventory.
-disk_volume_device_1: /dev/vdb
diff --git a/contrib/network-storage/glusterfs/roles/glusterfs/server/meta/main.yml b/contrib/network-storage/glusterfs/roles/glusterfs/server/meta/main.yml
deleted file mode 100644
index b7fe4962e0432fa2368cd1cebad117f8c75e869d..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/glusterfs/server/meta/main.yml
+++ /dev/null
@@ -1,30 +0,0 @@
----
-dependencies: []
-
-galaxy_info:
-  author: geerlingguy
-  description: GlusterFS installation for Linux.
-  company: "Midwestern Mac, LLC"
-  license: "license (BSD, MIT)"
-  min_ansible_version: "2.0"
-  platforms:
-  - name: EL
-    versions:
-    - "6"
-    - "7"
-  - name: Ubuntu
-    versions:
-    - precise
-    - trusty
-    - xenial
-  - name: Debian
-    versions:
-    - wheezy
-    - jessie
-  galaxy_tags:
-  - system
-  - networking
-  - cloud
-  - clustering
-  - files
-  - sharing
diff --git a/contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/main.yml b/contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/main.yml
deleted file mode 100644
index a9f7698a37e829878b9755c0811fac948c6ffc3a..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/main.yml
+++ /dev/null
@@ -1,113 +0,0 @@
----
-# Include variables and define needed variables.
-- name: Include OS-specific variables.
-  include_vars: "{{ ansible_os_family }}.yml"
-
-# Install xfs package
-- name: Install xfs Debian
-  apt:
-    name: xfsprogs
-    state: present
-  when: ansible_os_family == "Debian"
-
-- name: Install xfs RedHat
-  package:
-    name: xfsprogs
-    state: present
-  when: ansible_os_family == "RedHat"
-
-# Format external volumes in xfs
-- name: Format volumes in xfs
-  community.general.filesystem:
-    fstype: xfs
-    dev: "{{ disk_volume_device_1 }}"
-
-# Mount external volumes
-- name: Mounting new xfs filesystem
-  ansible.posix.mount:
-    name: "{{ gluster_volume_node_mount_dir }}"
-    src: "{{ disk_volume_device_1 }}"
-    fstype: xfs
-    state: mounted
-
-# Setup/install tasks.
-- name: Setup RedHat distros for glusterfs
-  include_tasks: setup-RedHat.yml
-  when: ansible_os_family == 'RedHat'
-
-- name: Setup Debian distros for glusterfs
-  include_tasks: setup-Debian.yml
-  when: ansible_os_family == 'Debian'
-
-- name: Ensure GlusterFS is started and enabled at boot.
-  service:
-    name: "{{ glusterfs_daemon }}"
-    state: started
-    enabled: true
-
-- name: Ensure Gluster brick and mount directories exist.
-  file:
-    path: "{{ item }}"
-    state: directory
-    mode: "0775"
-  with_items:
-    - "{{ gluster_brick_dir }}"
-    - "{{ gluster_mount_dir }}"
-
-- name: Configure Gluster volume with replicas
-  gluster.gluster.gluster_volume:
-    state: present
-    name: "{{ gluster_brick_name }}"
-    brick: "{{ gluster_brick_dir }}"
-    replicas: "{{ groups['gfs-cluster'] | length }}"
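-    # Build a comma-separated list of gfs-cluster node addresses: the inventory 'ip' var if set, else the default IPv4.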
-    cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
-    host: "{{ inventory_hostname }}"
-    force: true
-  run_once: true
-  when: groups['gfs-cluster'] | length > 1
-
-- name: Configure Gluster volume without replicas
-  gluster.gluster.gluster_volume:
-    state: present
-    name: "{{ gluster_brick_name }}"
-    brick: "{{ gluster_brick_dir }}"
-    cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
-    host: "{{ inventory_hostname }}"
-    force: true
-  run_once: true
-  when: groups['gfs-cluster'] | length <= 1
-
-- name: Mount glusterfs to retrieve disk size
-  ansible.posix.mount:
-    name: "{{ gluster_mount_dir }}"
-    src: "{{ ip | default(ansible_default_ipv4['address']) }}:/gluster"
-    fstype: glusterfs
-    opts: "defaults,_netdev"
-    state: mounted
-  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
-
-- name: Get Gluster disk size
-  setup:
-    filter: ansible_mounts
-  register: mounts_data
-  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
-
-- name: Set Gluster disk size to variable
-  set_fact:
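-    # Convert the mount's size_total (bytes) into whole gibibytes for the PV capacity.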
-    gluster_disk_size_gb: "{{ (mounts_data.ansible_facts.ansible_mounts | selectattr('mount', 'equalto', gluster_mount_dir) | map(attribute='size_total') | first | int / (1024 * 1024 * 1024)) | int }}"
-  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
-
-- name: Create file on GlusterFS
-  template:
-    dest: "{{ gluster_mount_dir }}/.test-file.txt"
-    src: test-file.txt
-    mode: "0644"
-  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
-
-- name: Unmount glusterfs
-  ansible.posix.mount:
-    name: "{{ gluster_mount_dir }}"
-    fstype: glusterfs
-    src: "{{ ip | default(ansible_default_ipv4['address']) }}:/gluster"
-    state: unmounted
-  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
diff --git a/contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/setup-Debian.yml b/contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/setup-Debian.yml
deleted file mode 100644
index 4d4b1b4b80d5997a890a164ca7854431c03ab491..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/setup-Debian.yml
+++ /dev/null
@@ -1,26 +0,0 @@
----
-- name: Add PPA for GlusterFS.
-  apt_repository:
-    repo: 'ppa:gluster/glusterfs-{{ glusterfs_ppa_version }}'
-    state: present
-    update_cache: true
-  register: glusterfs_ppa_added
-  when: glusterfs_ppa_use
-
-- name: Ensure GlusterFS will reinstall if the PPA was just added.  # noqa no-handler
-  apt:
-    name: "{{ item }}"
-    state: absent
-  with_items:
-    - glusterfs-server
-    - glusterfs-client
-  when: glusterfs_ppa_added.changed
-
-- name: Ensure GlusterFS is installed.
-  apt:
-    name: "{{ item }}"
-    state: present
-    default_release: "{{ glusterfs_default_release }}"
-  with_items:
-    - glusterfs-server
-    - glusterfs-client
diff --git a/contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/setup-RedHat.yml b/contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/setup-RedHat.yml
deleted file mode 100644
index 5a4e09ef36dfe75e659ce73bf6464e3619bed3dc..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/setup-RedHat.yml
+++ /dev/null
@@ -1,15 +0,0 @@
----
-- name: Install Prerequisites
-  package:
-    name: "{{ item }}"
-    state: present
-  with_items:
-    - "centos-release-gluster{{ glusterfs_default_release }}"
-
-- name: Install Packages
-  package:
-    name: "{{ item }}"
-    state: present
-  with_items:
-    - glusterfs-server
-    - glusterfs-client
diff --git a/contrib/network-storage/glusterfs/roles/glusterfs/server/templates/test-file.txt b/contrib/network-storage/glusterfs/roles/glusterfs/server/templates/test-file.txt
deleted file mode 100644
index 16b14f5da9e2fcd6f3f38cc9e584cef2f3c90ebe..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/glusterfs/server/templates/test-file.txt
+++ /dev/null
@@ -1 +0,0 @@
-test file
diff --git a/contrib/network-storage/glusterfs/roles/glusterfs/server/vars/Debian.yml b/contrib/network-storage/glusterfs/roles/glusterfs/server/vars/Debian.yml
deleted file mode 100644
index e931068ae374edad94150dab65ca58da1b258d65..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/glusterfs/server/vars/Debian.yml
+++ /dev/null
@@ -1,2 +0,0 @@
----
-glusterfs_daemon: glusterd
diff --git a/contrib/network-storage/glusterfs/roles/glusterfs/server/vars/RedHat.yml b/contrib/network-storage/glusterfs/roles/glusterfs/server/vars/RedHat.yml
deleted file mode 100644
index e931068ae374edad94150dab65ca58da1b258d65..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/glusterfs/server/vars/RedHat.yml
+++ /dev/null
@@ -1,2 +0,0 @@
----
-glusterfs_daemon: glusterd
diff --git a/contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/tasks/main.yaml b/contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/tasks/main.yaml
deleted file mode 100644
index cf2bd0ee5cba28efc526875eca8bc30bb7598449..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/tasks/main.yaml
+++ /dev/null
@@ -1,23 +0,0 @@
----
-- name: Kubernetes Apps | Lay Down k8s GlusterFS Endpoint and PV
-  template:
-    src: "{{ item.file }}"
-    dest: "{{ kube_config_dir }}/{{ item.dest }}"
-    mode: "0644"
-  with_items:
-    - { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
-    - { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}
-    - { file: glusterfs-kubernetes-endpoint-svc.json.j2, type: svc, dest: glusterfs-kubernetes-endpoint-svc.json}
-  register: gluster_pv
-  when: inventory_hostname == groups['kube_control_plane'][0] and groups['gfs-cluster'] is defined and hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb is defined
-
-- name: Kubernetes Apps | Set GlusterFS endpoint and PV
-  kube:
-    name: glusterfs
-    namespace: default
-    kubectl: "{{ bin_dir }}/kubectl"
-    resource: "{{ item.item.type }}"
-    filename: "{{ kube_config_dir }}/{{ item.item.dest }}"
-    state: "{{ item.changed | ternary('latest', 'present') }}"
-  with_items: "{{ gluster_pv.results }}"
-  when: inventory_hostname == groups['kube_control_plane'][0] and groups['gfs-cluster'] is defined
diff --git a/contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-endpoint-svc.json.j2 b/contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-endpoint-svc.json.j2
deleted file mode 100644
index 3cb511875d50240b4663b5112f0a10faebf43673..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-endpoint-svc.json.j2
+++ /dev/null
@@ -1,12 +0,0 @@
-{
-  "kind": "Service",
-  "apiVersion": "v1",
-  "metadata": {
-    "name": "glusterfs"
-  },
-  "spec": {
-    "ports": [
-      {"port": 1}
-    ]
-  }
-}
diff --git a/contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-endpoint.json.j2 b/contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-endpoint.json.j2
deleted file mode 100644
index 36cc1cccab04a2229c5c88fa8f04642480aa72cf..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-endpoint.json.j2
+++ /dev/null
@@ -1,23 +0,0 @@
-{
-  "kind": "Endpoints",
-  "apiVersion": "v1",
-  "metadata": {
-    "name": "glusterfs"
-  },
-  "subsets": [
-    {% for host in groups['gfs-cluster'] %}
-    {
-      "addresses": [
-        {
-          "ip": "{{hostvars[host]['ip']|default(hostvars[host].ansible_default_ipv4['address'])}}"
-        }
-      ],
-      "ports": [
-        {
-          "port": 1
-        }
-      ]
-    }{%- if not loop.last %}, {% endif -%}
-    {% endfor %}
-  ]
-}
diff --git a/contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-pv.yml.j2 b/contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-pv.yml.j2
deleted file mode 100644
index f6ba4358efc3fcb900d48aa7494f8dec6c9bd158..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-pv.yml.j2
+++ /dev/null
@@ -1,14 +0,0 @@
-apiVersion: v1
-kind: PersistentVolume
-metadata:
-  name: glusterfs
-spec:
-  capacity:
-      storage: "{{ hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb }}Gi"
-  accessModes:
-    - ReadWriteMany
-  glusterfs:
-    endpoints: glusterfs
-    path: gluster
-    readOnly: false
-  persistentVolumeReclaimPolicy: Retain
diff --git a/contrib/network-storage/glusterfs/roles/kubernetes-pv/meta/main.yaml b/contrib/network-storage/glusterfs/roles/kubernetes-pv/meta/main.yaml
deleted file mode 100644
index a4ab33f5bbf2170452f62948f538a0cb8edbf484..0000000000000000000000000000000000000000
--- a/contrib/network-storage/glusterfs/roles/kubernetes-pv/meta/main.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
----
-dependencies:
-  - {role: kubernetes-pv/ansible, tags: apps}
diff --git a/contrib/network-storage/heketi/README.md b/contrib/network-storage/heketi/README.md
deleted file mode 100644
index d5491d33909ba67714788bfeee9a9f7f595572d4..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/README.md
+++ /dev/null
@@ -1,27 +0,0 @@
-# Deploy Heketi/GlusterFS into Kubespray/Kubernetes
-
-This playbook aims to automate [this](https://github.com/heketi/heketi/blob/master/docs/admin/install-kubernetes.md) tutorial. It deploys Heketi/GlusterFS into Kubernetes and sets up a StorageClass.
-
-## Important notice
-
-> Due to resource limits on the current project maintainers and general lack of contributions we are considering placing Heketi into a [near-maintenance mode](https://github.com/heketi/heketi#important-notice)
-
-## Client Setup
-
-Heketi provides a CLI that lets users administer the deployment and configuration of GlusterFS in Kubernetes. [Download and install the heketi-cli](https://github.com/heketi/heketi/releases) on your client machine.
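-
-A sketch (the version and asset name are illustrative; check the releases page for the current build):
-
-```shell
-VERSION=v10.4.0
-curl -LO "https://github.com/heketi/heketi/releases/download/${VERSION}/heketi-client-${VERSION}.linux.amd64.tar.gz"
-tar -xzf "heketi-client-${VERSION}.linux.amd64.tar.gz"
-sudo install heketi-client/bin/heketi-cli /usr/local/bin/
-heketi-cli --version
-```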
-
-## Install
-
-Copy `inventory.yml.sample` over to `inventory/sample/k8s_heketi_inventory.yml` and change it according to your setup.
-
-```shell
-ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/network-storage/heketi/heketi.yml
-```
-
-## Tear down
-
-```shell
-ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/network-storage/heketi/heketi-tear-down.yml
-```
-
-Add `--extra-vars "heketi_remove_lvm=true"` to the command above to remove LVM packages from the system:
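-
-```shell
-ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml \
-  --extra-vars "heketi_remove_lvm=true" contrib/network-storage/heketi/heketi-tear-down.yml
-```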
diff --git a/contrib/network-storage/heketi/heketi-tear-down.yml b/contrib/network-storage/heketi/heketi-tear-down.yml
deleted file mode 100644
index 8c9d1c3a000b1aa74374d2ced32d7944ecec78e9..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/heketi-tear-down.yml
+++ /dev/null
@@ -1,11 +0,0 @@
----
-- name: Tear down heketi
-  hosts: kube_control_plane[0]
-  roles:
-    - { role: tear-down }
-
-- name: Teardown disks in heketi
-  hosts: heketi-node
-  become: true
-  roles:
-    - { role: tear-down-disks }
diff --git a/contrib/network-storage/heketi/heketi.yml b/contrib/network-storage/heketi/heketi.yml
deleted file mode 100644
index bc0c4d0fbbd4d716d324520e4c2750b5f20b5cc5..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/heketi.yml
+++ /dev/null
@@ -1,12 +0,0 @@
----
-- name: Prepare heketi install
-  hosts: heketi-node
-  roles:
-    - { role: prepare }
-
-- name: Provision heketi
-  hosts: kube_control_plane[0]
-  tags:
-    - "provision"
-  roles:
-    - { role: provision }
diff --git a/contrib/network-storage/heketi/inventory.yml.sample b/contrib/network-storage/heketi/inventory.yml.sample
deleted file mode 100644
index 467788ac3ece2c024b8ea3a95961018985629ed6..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/inventory.yml.sample
+++ /dev/null
@@ -1,33 +0,0 @@
-all:
-    vars:
-        heketi_admin_key: "11elfeinhundertundelf"
-        heketi_user_key: "!!einseinseins"
-        glusterfs_daemonset:
-            readiness_probe:
-                timeout_seconds: 3
-                initial_delay_seconds: 3
-            liveness_probe:
-                timeout_seconds: 3
-                initial_delay_seconds: 10
-    children:
-        k8s_cluster:
-            vars:
-                kubelet_fail_swap_on: false
-            children:
-                kube_control_plane:
-                    hosts:
-                        node1:
-                etcd:
-                    hosts:
-                        node2:
-                kube_node:
-                    hosts: &kube_nodes
-                        node1:
-                        node2:
-                        node3:
-                        node4:
-                heketi-node:
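-                    # reuse the kube_node host list via the &kube_nodes anchor above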
-                    vars:
-                        disk_volume_device_1: "/dev/vdb"
-                    hosts:
-                        <<: *kube_nodes
diff --git a/contrib/network-storage/heketi/requirements.txt b/contrib/network-storage/heketi/requirements.txt
deleted file mode 100644
index 45c1e038e5f2e9e76dff9d61e971f61deea7e0a1..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/requirements.txt
+++ /dev/null
@@ -1 +0,0 @@
-jmespath
diff --git a/contrib/network-storage/heketi/roles/prepare/tasks/main.yml b/contrib/network-storage/heketi/roles/prepare/tasks/main.yml
deleted file mode 100644
index 20012b120dafedfe2af9b83e5bdaa505f0fbdadd..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/prepare/tasks/main.yml
+++ /dev/null
@@ -1,24 +0,0 @@
----
-- name: "Load lvm kernel modules"
-  become: true
-  with_items:
-    - "dm_snapshot"
-    - "dm_mirror"
-    - "dm_thin_pool"
-  community.general.modprobe:
-    name: "{{ item }}"
-    state: "present"
-
-- name: "Install glusterfs mount utils (RedHat)"
-  become: true
-  package:
-    name: "glusterfs-fuse"
-    state: "present"
-  when: "ansible_os_family == 'RedHat'"
-
-- name: "Install glusterfs mount utils (Debian)"
-  become: true
-  apt:
-    name: "glusterfs-client"
-    state: "present"
-  when: "ansible_os_family == 'Debian'"
diff --git a/contrib/network-storage/heketi/roles/provision/defaults/main.yml b/contrib/network-storage/heketi/roles/provision/defaults/main.yml
deleted file mode 100644
index ed97d539c095cf1413af30cc23dea272095b97dd..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/defaults/main.yml
+++ /dev/null
@@ -1 +0,0 @@
----
diff --git a/contrib/network-storage/heketi/roles/provision/handlers/main.yml b/contrib/network-storage/heketi/roles/provision/handlers/main.yml
deleted file mode 100644
index 4e768addaf2dac2b526acdafac6d6b76c597a77e..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/handlers/main.yml
+++ /dev/null
@@ -1,3 +0,0 @@
----
-- name: "Stop port forwarding"
-  command: "killall "
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/bootstrap.yml b/contrib/network-storage/heketi/roles/provision/tasks/bootstrap.yml
deleted file mode 100644
index 7b4330038834bc375e2f48a50ffc1f60a5cd0e6a..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/bootstrap.yml
+++ /dev/null
@@ -1,64 +0,0 @@
----
-# Bootstrap heketi
-- name: "Get state of heketi service, deployment and pods."
-  register: "initial_heketi_state"
-  changed_when: false
-  command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
-
-- name: "Bootstrap heketi."
-  when:
-    - "(initial_heketi_state.stdout | from_json | json_query(\"items[?kind=='Service']\")) | length == 0"
-    - "(initial_heketi_state.stdout | from_json | json_query(\"items[?kind=='Deployment']\")) | length == 0"
-    - "(initial_heketi_state.stdout | from_json | json_query(\"items[?kind=='Pod']\")) | length == 0"
-  include_tasks: "bootstrap/deploy.yml"
-
-# Prepare heketi topology
-- name: "Get heketi initial pod state."
-  register: "initial_heketi_pod"
-  command: "{{ bin_dir }}/kubectl get pods --selector=deploy-heketi=pod,glusterfs=heketi-pod,name=deploy-heketi --output=json"
-  changed_when: false
-
-- name: "Ensure heketi bootstrap pod is up."
-  assert:
-    that: "(initial_heketi_pod.stdout | from_json | json_query('items[*]')) | length == 1"
-
-- name: Store the initial heketi pod name
-  set_fact:
-    initial_heketi_pod_name: "{{ initial_heketi_pod.stdout | from_json | json_query(\"items[*].metadata.name | [0]\") }}"
-
-- name: "Test heketi topology."
-  changed_when: false
-  register: "heketi_topology"
-  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
-
-- name: "Load heketi topology."
-  when: "heketi_topology.stdout | from_json | json_query(\"clusters[*].nodes[*]\") | flatten | length == 0"
-  include_tasks: "bootstrap/topology.yml"
-
-# Provision heketi database volume
-- name: "Prepare heketi volumes."
-  include_tasks: "bootstrap/volumes.yml"
-
-# Remove bootstrap heketi
-- name: "Tear down bootstrap."
-  include_tasks: "bootstrap/tear-down.yml"
-
-# Prepare heketi storage
-- name: "Test heketi storage."
-  command: "{{ bin_dir }}/kubectl get secrets,endpoints,services,jobs --output=json"
-  changed_when: false
-  register: "heketi_storage_state"
-
-# ensure endpoints actually exist before trying to move database data to it
-- name: "Create heketi storage."
-  include_tasks: "bootstrap/storage.yml"
-  vars:
-    secret_query: "items[?metadata.name=='heketi-storage-secret' && kind=='Secret']"
-    endpoints_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Endpoints']"
-    service_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Service']"
-    job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job']"
-  when:
-    - "heketi_storage_state.stdout | from_json | json_query(secret_query) | length == 0"
-    - "heketi_storage_state.stdout | from_json | json_query(endpoints_query) | length == 0"
-    - "heketi_storage_state.stdout | from_json | json_query(service_query) | length == 0"
-    - "heketi_storage_state.stdout | from_json | json_query(job_query) | length == 0"
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/deploy.yml b/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/deploy.yml
deleted file mode 100644
index 94d440150717dc97e2634181159397d8197dcf36..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/deploy.yml
+++ /dev/null
@@ -1,27 +0,0 @@
----
-- name: "Kubernetes Apps | Lay Down Heketi Bootstrap"
-  become: true
-  template:
-    src: "heketi-bootstrap.json.j2"
-    dest: "{{ kube_config_dir }}/heketi-bootstrap.json"
-    mode: "0640"
-  register: "rendering"
-- name: "Kubernetes Apps | Install and configure Heketi Bootstrap"
-  kube:
-    name: "GlusterFS"
-    kubectl: "{{ bin_dir }}/kubectl"
-    filename: "{{ kube_config_dir }}/heketi-bootstrap.json"
-    state: "{{ rendering.changed | ternary('latest', 'present') }}"
-- name: "Wait for heketi bootstrap to complete."
-  changed_when: false
-  register: "initial_heketi_state"
-  vars:
-    initial_heketi_state: { stdout: "{}" }
-    pods_query: "items[?kind=='Pod'].status.conditions | [0][?type=='Ready'].status | [0]"
-    deployments_query: "items[?kind=='Deployment'].status.conditions | [0][?type=='Available'].status | [0]"
-  command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
-  until:
-    - "initial_heketi_state.stdout | from_json | json_query(pods_query) == 'True'"
-    - "initial_heketi_state.stdout | from_json | json_query(deployments_query) == 'True'"
-  retries: 60
-  delay: 5
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/storage.yml b/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/storage.yml
deleted file mode 100644
index 650c12d12eec489b0869c2fb3f56d22746f7d0ae..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/storage.yml
+++ /dev/null
@@ -1,33 +0,0 @@
----
-- name: "Test heketi storage."
-  command: "{{ bin_dir }}/kubectl get secrets,endpoints,services,jobs --output=json"
-  changed_when: false
-  register: "heketi_storage_state"
-- name: "Create heketi storage."
-  kube:
-    name: "GlusterFS"
-    kubectl: "{{ bin_dir }}/kubectl"
-    filename: "{{ kube_config_dir }}/heketi-storage-bootstrap.json"
-    state: "present"
-  vars:
-    secret_query: "items[?metadata.name=='heketi-storage-secret' && kind=='Secret']"
-    endpoints_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Endpoints']"
-    service_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Service']"
-    job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job']"
-  when:
-    - "heketi_storage_state.stdout | from_json | json_query(secret_query) | length == 0"
-    - "heketi_storage_state.stdout | from_json | json_query(endpoints_query) | length == 0"
-    - "heketi_storage_state.stdout | from_json | json_query(service_query) | length == 0"
-    - "heketi_storage_state.stdout | from_json | json_query(job_query) | length == 0"
-  register: "heketi_storage_result"
-- name: "Get state of heketi database copy job."
-  command: "{{ bin_dir }}/kubectl get jobs --output=json"
-  changed_when: false
-  register: "heketi_storage_state"
-  vars:
-    heketi_storage_state: { stdout: "{}" }
-    job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job' && status.succeeded==1]"
-  until:
-    - "heketi_storage_state.stdout | from_json | json_query(job_query) | length == 1"
-  retries: 60
-  delay: 5
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/tear-down.yml b/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/tear-down.yml
deleted file mode 100644
index ad48882b6c8f9cebbadaa918e42d3a74bb8a83db..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/tear-down.yml
+++ /dev/null
@@ -1,14 +0,0 @@
----
-- name: "Get existing Heketi deploy resources."
-  command: "{{ bin_dir }}/kubectl get all --selector=\"deploy-heketi\" -o=json"
-  register: "heketi_resources"
-  changed_when: false
-- name: "Delete bootstrap Heketi."
-  command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"deploy-heketi\""
-  when: "heketi_resources.stdout | from_json | json_query('items[*]') | length > 0"
-- name: "Ensure there is nothing left over."
-  command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"deploy-heketi\" -o=json"
-  register: "heketi_result"
-  until: "heketi_result.stdout | from_json | json_query('items[*]') | length == 0"
-  retries: 60
-  delay: 5
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/topology.yml b/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/topology.yml
deleted file mode 100644
index b011c024b684d07bb5be5da6e9169a05adf59f68..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/topology.yml
+++ /dev/null
@@ -1,27 +0,0 @@
----
-- name: "Get heketi topology."
-  changed_when: false
-  register: "heketi_topology"
-  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
-- name: "Render heketi topology template."
-  become: true
-  vars: { nodes: "{{ groups['heketi-node'] }}" }
-  register: "render"
-  template:
-    src: "topology.json.j2"
-    dest: "{{ kube_config_dir }}/topology.json"
-    mode: "0644"
-- name: "Copy topology configuration into container."
-  changed_when: false
-  command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ initial_heketi_pod_name }}:/tmp/topology.json"
-- name: "Load heketi topology."  # noqa no-handler
-  when: "render.changed"
-  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
-  register: "load_heketi"
-- name: "Get heketi topology."
-  changed_when: false
-  register: "heketi_topology"
-  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
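-  # Retry until the number of online devices in the topology matches the number of heketi nodes.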
-  until: "heketi_topology.stdout | from_json | json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\") | flatten | length == groups['heketi-node'] | length"
-  retries: 60
-  delay: 5
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/volumes.yml b/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/volumes.yml
deleted file mode 100644
index 6d26dfc9a61e89d74478ed447df1f05709e3600b..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/bootstrap/volumes.yml
+++ /dev/null
@@ -1,41 +0,0 @@
----
-- name: "Get heketi volume ids."
-  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume list --json"
-  changed_when: false
-  register: "heketi_volumes"
-- name: "Get heketi volumes."
-  changed_when: false
-  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json"
-  with_items: "{{ heketi_volumes.stdout | from_json | json_query(\"volumes[*]\") }}"
-  loop_control: { loop_var: "volume_id" }
-  register: "volumes_information"
-- name: "Test heketi database volume."
-  set_fact: { heketi_database_volume_exists: true }
-  with_items: "{{ volumes_information.results }}"
-  loop_control: { loop_var: "volume_information" }
-  vars: { volume: "{{ volume_information.stdout | from_json }}" }
-  when: "volume.name == 'heketidbstorage'"
-- name: "Provision database volume."
-  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} setup-openshift-heketi-storage"
-  when: "heketi_database_volume_exists is undefined"
-- name: "Copy configuration from pod."
-  become: true
-  command: "{{ bin_dir }}/kubectl cp {{ initial_heketi_pod_name }}:/heketi-storage.json {{ kube_config_dir }}/heketi-storage-bootstrap.json"
-- name: "Get heketi volume ids."
-  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume list --json"
-  changed_when: false
-  register: "heketi_volumes"
-- name: "Get heketi volumes."
-  changed_when: false
-  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json"
-  with_items: "{{ heketi_volumes.stdout | from_json | json_query(\"volumes[*]\") }}"
-  loop_control: { loop_var: "volume_id" }
-  register: "volumes_information"
-- name: "Test heketi database volume."
-  set_fact: { heketi_database_volume_created: true }
-  with_items: "{{ volumes_information.results }}"
-  loop_control: { loop_var: "volume_information" }
-  vars: { volume: "{{ volume_information.stdout | from_json }}" }
-  when: "volume.name == 'heketidbstorage'"
-- name: "Ensure heketi database volume exists."
-  assert: { that: "heketi_database_volume_created is defined", msg: "Heketi database volume does not exist." }
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/cleanup.yml b/contrib/network-storage/heketi/roles/provision/tasks/cleanup.yml
deleted file mode 100644
index 238f29bc214711f108230a1eb72b6ab527aa1ed3..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/cleanup.yml
+++ /dev/null
@@ -1,4 +0,0 @@
----
-- name: "Clean up left over jobs."
-  command: "{{ bin_dir }}/kubectl delete jobs,pods --selector=\"deploy-heketi\""
-  changed_when: false
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/glusterfs.yml b/contrib/network-storage/heketi/roles/provision/tasks/glusterfs.yml
deleted file mode 100644
index 239e780d88a08974c5bf0e312f11cf18d65e7c29..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/glusterfs.yml
+++ /dev/null
@@ -1,44 +0,0 @@
----
-- name: "Kubernetes Apps | Lay Down GlusterFS Daemonset"
-  template:
-    src: "glusterfs-daemonset.json.j2"
-    dest: "{{ kube_config_dir }}/glusterfs-daemonset.json"
-    mode: "0644"
-  become: true
-  register: "rendering"
-- name: "Kubernetes Apps | Install and configure GlusterFS daemonset"
-  kube:
-    name: "GlusterFS"
-    kubectl: "{{ bin_dir }}/kubectl"
-    filename: "{{ kube_config_dir }}/glusterfs-daemonset.json"
-    state: "{{ rendering.changed | ternary('latest', 'present') }}"
-- name: "Kubernetes Apps | Label GlusterFS nodes"
-  include_tasks: "glusterfs/label.yml"
-  with_items: "{{ groups['heketi-node'] }}"
-  loop_control:
-    loop_var: "node"
-- name: "Kubernetes Apps | Wait for daemonset to become available."
-  register: "daemonset_state"
-  command: "{{ bin_dir }}/kubectl get daemonset glusterfs --output=json --ignore-not-found=true"
-  changed_when: false
-  vars:
-    daemonset_state: { stdout: "{}" }
-    ready: "{{ daemonset_state.stdout | from_json | json_query(\"status.numberReady\") }}"
-    desired: "{{ daemonset_state.stdout | from_json | json_query(\"status.desiredNumberScheduled\") }}"
-  until: "ready | int >= 3"
-  retries: 60
-  delay: 5
-
-- name: "Kubernetes Apps | Lay Down Heketi Service Account"
-  template:
-    src: "heketi-service-account.json.j2"
-    dest: "{{ kube_config_dir }}/heketi-service-account.json"
-    mode: "0644"
-  become: true
-  register: "rendering"
-- name: "Kubernetes Apps | Install and configure Heketi Service Account"
-  kube:
-    name: "GlusterFS"
-    kubectl: "{{ bin_dir }}/kubectl"
-    filename: "{{ kube_config_dir }}/heketi-service-account.json"
-    state: "{{ rendering.changed | ternary('latest', 'present') }}"
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/glusterfs/label.yml b/contrib/network-storage/heketi/roles/provision/tasks/glusterfs/label.yml
deleted file mode 100644
index 4cefd47ac156534841ec81bb0119e172533eebd9..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/glusterfs/label.yml
+++ /dev/null
@@ -1,19 +0,0 @@
----
-- name: Get storage nodes
-  register: "label_present"
-  command: "{{ bin_dir }}/kubectl get node --selector=storagenode=glusterfs,kubernetes.io/hostname={{ node }} --ignore-not-found=true"
-  changed_when: false
-
-- name: "Assign storage label"
-  when: "label_present.stdout_lines | length == 0"
-  command: "{{ bin_dir }}/kubectl label node {{ node }} storagenode=glusterfs"
-
-- name: Get storage nodes again
-  register: "label_present"
-  command: "{{ bin_dir }}/kubectl get node --selector=storagenode=glusterfs,kubernetes.io/hostname={{ node }} --ignore-not-found=true"
-  changed_when: false
-
-- name: Ensure the label has been set
-  assert:
-    that: "label_present | length > 0"
-    msg: "Node {{ node }} has not been assigned with label storagenode=glusterfs."
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/heketi.yml b/contrib/network-storage/heketi/roles/provision/tasks/heketi.yml
deleted file mode 100644
index 30c68c2bc53525226af3a35f2371cdc23481b25a..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/heketi.yml
+++ /dev/null
@@ -1,34 +0,0 @@
----
-- name: "Kubernetes Apps | Lay Down Heketi"
-  become: true
-  template:
-    src: "heketi-deployment.json.j2"
-    dest: "{{ kube_config_dir }}/heketi-deployment.json"
-    mode: "0644"
-  register: "rendering"
-
-- name: "Kubernetes Apps | Install and configure Heketi"
-  kube:
-    name: "GlusterFS"
-    kubectl: "{{ bin_dir }}/kubectl"
-    filename: "{{ kube_config_dir }}/heketi-deployment.json"
-    state: "{{ rendering.changed | ternary('latest', 'present') }}"
-
-- name: "Ensure heketi is up and running."
-  changed_when: false
-  register: "heketi_state"
-  vars:
-    heketi_state:
-      stdout: "{}"
-    pods_query: "items[?kind=='Pod'].status.conditions|[0][?type=='Ready'].status|[0]"
-    deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
-  command: "{{ bin_dir }}/kubectl get deployments,pods --selector=glusterfs --output=json"
-  until:
-    - "heketi_state.stdout | from_json | json_query(pods_query) == 'True'"
-    - "heketi_state.stdout | from_json | json_query(deployments_query) == 'True'"
-  retries: 60
-  delay: 5
-
-- name: Set the Heketi pod name
-  set_fact:
-    heketi_pod_name: "{{ heketi_state.stdout | from_json | json_query(\"items[?kind=='Pod'].metadata.name|[0]\") }}"
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/main.yml b/contrib/network-storage/heketi/roles/provision/tasks/main.yml
deleted file mode 100644
index 1feb27d7b5de3b5b1243b3a59de0fe606ef0724c..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/main.yml
+++ /dev/null
@@ -1,30 +0,0 @@
----
-- name: "Kubernetes Apps | GlusterFS"
-  include_tasks: "glusterfs.yml"
-
-- name: "Kubernetes Apps | Heketi Secrets"
-  include_tasks: "secret.yml"
-
-- name: "Kubernetes Apps | Test Heketi"
-  register: "heketi_service_state"
-  command: "{{ bin_dir }}/kubectl get service heketi-storage-endpoints -o=name --ignore-not-found=true"
-  changed_when: false
-
-- name: "Kubernetes Apps | Bootstrap Heketi"
-  when: "heketi_service_state.stdout == \"\""
-  include_tasks: "bootstrap.yml"
-
-- name: "Kubernetes Apps | Heketi"
-  include_tasks: "heketi.yml"
-
-- name: "Kubernetes Apps | Heketi Topology"
-  include_tasks: "topology.yml"
-
-- name: "Kubernetes Apps | Heketi Storage"
-  include_tasks: "storage.yml"
-
-- name: "Kubernetes Apps | Storage Class"
-  include_tasks: "storageclass.yml"
-
-- name: "Clean up"
-  include_tasks: "cleanup.yml"
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/secret.yml b/contrib/network-storage/heketi/roles/provision/tasks/secret.yml
deleted file mode 100644
index 816bb156c27b4f90ecf6a31c34f85516ef63fc53..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/secret.yml
+++ /dev/null
@@ -1,45 +0,0 @@
----
-- name: Get clusterrolebindings
-  register: "clusterrolebinding_state"
-  command: "{{ bin_dir }}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
-  changed_when: false
-
-- name: "Kubernetes Apps | Deploy cluster role binding."
-  when: "clusterrolebinding_state.stdout | length == 0"
-  command: "{{ bin_dir }}/kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account"
-
-- name: Get clusterrolebindings again
-  register: "clusterrolebinding_state"
-  command: "{{ bin_dir }}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
-  changed_when: false
-
-- name: Make sure that clusterrolebindings are present now
-  assert:
-    that: "clusterrolebinding_state.stdout | length > 0"
-    msg: "Cluster role binding is not present."
-
-- name: Get the heketi-config-secret secret
-  register: "secret_state"
-  command: "{{ bin_dir }}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
-  changed_when: false
-
-- name: "Render Heketi secret configuration."
-  become: true
-  template:
-    src: "heketi.json.j2"
-    dest: "{{ kube_config_dir }}/heketi.json"
-    mode: "0644"
-
-- name: "Deploy Heketi config secret"
-  when: "secret_state.stdout | length == 0"
-  command: "{{ bin_dir }}/kubectl create secret generic heketi-config-secret --from-file={{ kube_config_dir }}/heketi.json"
-
-- name: Get the heketi-config-secret secret again
-  register: "secret_state"
-  command: "{{ bin_dir }}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
-  changed_when: false
-
-- name: Make sure the heketi-config-secret secret exists now
-  assert:
-    that: "secret_state.stdout | length > 0"
-    msg: "Heketi config secret is not present."
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/storage.yml b/contrib/network-storage/heketi/roles/provision/tasks/storage.yml
deleted file mode 100644
index c3f8ebf2e585dbbc7754049d0b20a750511a9dc6..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/storage.yml
+++ /dev/null
@@ -1,15 +0,0 @@
----
-- name: "Kubernetes Apps | Lay Down Heketi Storage"
-  become: true
-  vars: { nodes: "{{ groups['heketi-node'] }}" }
-  template:
-    src: "heketi-storage.json.j2"
-    dest: "{{ kube_config_dir }}/heketi-storage.json"
-    mode: "0644"
-  register: "rendering"
-- name: "Kubernetes Apps | Install and configure Heketi Storage"
-  kube:
-    name: "GlusterFS"
-    kubectl: "{{ bin_dir }}/kubectl"
-    filename: "{{ kube_config_dir }}/heketi-storage.json"
-    state: "{{ rendering.changed | ternary('latest', 'present') }}"
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/storageclass.yml b/contrib/network-storage/heketi/roles/provision/tasks/storageclass.yml
deleted file mode 100644
index fc57302bcbdbaaee0ccf68fe809c44a180520961..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/storageclass.yml
+++ /dev/null
@@ -1,26 +0,0 @@
----
-- name: "Test storage class."
-  command: "{{ bin_dir }}/kubectl get storageclass gluster --ignore-not-found=true --output=json"
-  register: "storageclass"
-  changed_when: false
-- name: "Test heketi service."
-  command: "{{ bin_dir }}/kubectl get service heketi --ignore-not-found=true --output=json"
-  register: "heketi_service"
-  changed_when: false
-- name: "Ensure heketi service is available."
-  assert: { that: "heketi_service.stdout != \"\"" }
-- name: "Render storage class configuration."
-  become: true
-  vars:
-    endpoint_address: "{{ (heketi_service.stdout | from_json).spec.clusterIP }}"
-  template:
-    src: "storageclass.yml.j2"
-    dest: "{{ kube_config_dir }}/storageclass.yml"
-    mode: "0644"
-  register: "rendering"
-- name: "Kubernetes Apps | Install and configure Storace Class"
-  kube:
-    name: "GlusterFS"
-    kubectl: "{{ bin_dir }}/kubectl"
-    filename: "{{ kube_config_dir }}/storageclass.yml"
-    state: "{{ rendering.changed | ternary('latest', 'present') }}"
diff --git a/contrib/network-storage/heketi/roles/provision/tasks/topology.yml b/contrib/network-storage/heketi/roles/provision/tasks/topology.yml
deleted file mode 100644
index edd5bd9e88f1ac706e9d922e048d4348f5918af5..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/tasks/topology.yml
+++ /dev/null
@@ -1,26 +0,0 @@
----
-- name: "Get heketi topology."
-  register: "heketi_topology"
-  changed_when: false
-  command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
-- name: "Render heketi topology template."
-  become: true
-  vars: { nodes: "{{ groups['heketi-node'] }}" }
-  register: "rendering"
-  template:
-    src: "topology.json.j2"
-    dest: "{{ kube_config_dir }}/topology.json"
-    mode: "0644"
-- name: "Copy topology configuration into container."  # noqa no-handler
-  when: "rendering.changed"
-  command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ heketi_pod_name }}:/tmp/topology.json"
-- name: "Load heketi topology."  # noqa no-handler
-  when: "rendering.changed"
-  command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
-- name: "Get heketi topology."
-  register: "heketi_topology"
-  changed_when: false
-  command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
-  until: "heketi_topology.stdout | from_json | json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\") | flatten | length == groups['heketi-node'] | length"
-  retries: 60
-  delay: 5
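
The final task polls until every `heketi-node` host shows an online device in the topology, up to 60 tries five seconds apart. A shell approximation, assuming `jq` is available, `HEKETI_POD` and `HEKETI_ADMIN_KEY` are set, and there are three heketi nodes (the count is purely illustrative):

```shell
# Poll the topology until the number of online devices matches the expected
# node count (3 here, purely illustrative).
until kubectl exec "$HEKETI_POD" -- \
      heketi-cli --user admin --secret "$HEKETI_ADMIN_KEY" topology info --json \
    | jq '[.clusters[].nodes[].devices[] | select(.state == "online")] | length' \
    | grep -qx '3'; do
  sleep 5
done
```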
diff --git a/contrib/network-storage/heketi/roles/provision/templates/glusterfs-daemonset.json.j2 b/contrib/network-storage/heketi/roles/provision/templates/glusterfs-daemonset.json.j2
deleted file mode 100644
index a14b31cc9576672465151b46e19efa80705f9a72..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/templates/glusterfs-daemonset.json.j2
+++ /dev/null
@@ -1,149 +0,0 @@
-{
-    "kind": "DaemonSet",
-    "apiVersion": "apps/v1",
-    "metadata": {
-        "name": "glusterfs",
-        "labels": {
-            "glusterfs": "deployment"
-        },
-        "annotations": {
-            "description": "GlusterFS Daemon Set",
-            "tags": "glusterfs"
-        }
-    },
-    "spec": {
-        "selector": {
-            "matchLabels": {
-                "glusterfs-node": "daemonset"
-            }
-        },
-        "template": {
-            "metadata": {
-                "name": "glusterfs",
-                "labels": {
-                    "glusterfs-node": "daemonset"
-                }
-            },
-            "spec": {
-                "nodeSelector": {
-                    "storagenode" : "glusterfs"
-                },
-                "hostNetwork": true,
-                "containers": [
-                    {
-                        "image": "gluster/gluster-centos:gluster4u0_centos7",
-                        "imagePullPolicy": "IfNotPresent",
-                        "name": "glusterfs",
-                        "volumeMounts": [
-                            {
-                                "name": "glusterfs-heketi",
-                                "mountPath": "/var/lib/heketi"
-                            },
-                            {
-                                "name": "glusterfs-run",
-                                "mountPath": "/run"
-                            },
-                            {
-                                "name": "glusterfs-lvm",
-                                "mountPath": "/run/lvm"
-                            },
-                            {
-                                "name": "glusterfs-etc",
-                                "mountPath": "/etc/glusterfs"
-                            },
-                            {
-                                "name": "glusterfs-logs",
-                                "mountPath": "/var/log/glusterfs"
-                            },
-                            {
-                                "name": "glusterfs-config",
-                                "mountPath": "/var/lib/glusterd"
-                            },
-                            {
-                                "name": "glusterfs-dev",
-                                "mountPath": "/dev"
-                            },
-                            {
-                                "name": "glusterfs-cgroup",
-                                "mountPath": "/sys/fs/cgroup"
-                            }
-                        ],
-                        "securityContext": {
-                            "capabilities": {},
-                            "privileged": true
-                        },
-                        "readinessProbe": {
-                            "timeoutSeconds": {{ glusterfs_daemonset.readiness_probe.timeout_seconds }},
-                            "initialDelaySeconds": {{ glusterfs_daemonset.readiness_probe.initial_delay_seconds }},
-                            "exec": {
-                                "command": [
-                                    "/bin/bash",
-                                    "-c",
-                                    "systemctl status glusterd.service"
-                                ]
-                            }
-                        },
-                        "livenessProbe": {
-                            "timeoutSeconds": {{ glusterfs_daemonset.liveness_probe.timeout_seconds }},
-                            "initialDelaySeconds": {{ glusterfs_daemonset.liveness_probe.initial_delay_seconds }},
-                            "exec": {
-                                "command": [
-                                    "/bin/bash",
-                                    "-c",
-                                    "systemctl status glusterd.service"
-                                ]
-                            }
-                        }
-                    }
-                ],
-                "volumes": [
-                    {
-                        "name": "glusterfs-heketi",
-                        "hostPath": {
-                            "path": "/var/lib/heketi"
-                        }
-                    },
-                    {
-                        "name": "glusterfs-run"
-                    },
-                    {
-                        "name": "glusterfs-lvm",
-                        "hostPath": {
-                            "path": "/run/lvm"
-                        }
-                    },
-                    {
-                        "name": "glusterfs-etc",
-                        "hostPath": {
-                            "path": "/etc/glusterfs"
-                        }
-                    },
-                    {
-                        "name": "glusterfs-logs",
-                        "hostPath": {
-                            "path": "/var/log/glusterfs"
-                        }
-                    },
-                    {
-                        "name": "glusterfs-config",
-                        "hostPath": {
-                            "path": "/var/lib/glusterd"
-                        }
-                    },
-                    {
-                        "name": "glusterfs-dev",
-                        "hostPath": {
-                            "path": "/dev"
-                        }
-                    },
-                    {
-                        "name": "glusterfs-cgroup",
-                        "hostPath": {
-                            "path": "/sys/fs/cgroup"
-                        }
-                    }
-                ]
-            }
-        }
-    }
-}
diff --git a/contrib/network-storage/heketi/roles/provision/templates/heketi-bootstrap.json.j2 b/contrib/network-storage/heketi/roles/provision/templates/heketi-bootstrap.json.j2
deleted file mode 100644
index 7a932d0494b7e673a8ab20d34f7bf59da0e0b37c..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/templates/heketi-bootstrap.json.j2
+++ /dev/null
@@ -1,138 +0,0 @@
-{
-  "kind": "List",
-  "apiVersion": "v1",
-  "items": [
-    {
-      "kind": "Service",
-      "apiVersion": "v1",
-      "metadata": {
-        "name": "deploy-heketi",
-        "labels": {
-          "glusterfs": "heketi-service",
-          "deploy-heketi": "support"
-        },
-        "annotations": {
-          "description": "Exposes Heketi Service"
-        }
-      },
-      "spec": {
-        "selector": {
-          "name": "deploy-heketi"
-        },
-        "ports": [
-          {
-            "name": "deploy-heketi",
-            "port": 8080,
-            "targetPort": 8080
-          }
-        ]
-      }
-    },
-    {
-      "kind": "Deployment",
-      "apiVersion": "apps/v1",
-      "metadata": {
-        "name": "deploy-heketi",
-        "labels": {
-          "glusterfs": "heketi-deployment",
-          "deploy-heketi": "deployment"
-        },
-        "annotations": {
-          "description": "Defines how to deploy Heketi"
-        }
-      },
-      "spec": {
-        "selector": {
-          "matchLabels": {
-            "name": "deploy-heketi"
-          }
-        },
-        "replicas": 1,
-        "template": {
-          "metadata": {
-            "name": "deploy-heketi",
-            "labels": {
-              "name": "deploy-heketi",
-              "glusterfs": "heketi-pod",
-              "deploy-heketi": "pod"
-            }
-          },
-          "spec": {
-            "serviceAccountName": "heketi-service-account",
-            "containers": [
-              {
-                "image": "heketi/heketi:9",
-                "imagePullPolicy": "Always",
-                "name": "deploy-heketi",
-                "env": [
-                  {
-                    "name": "HEKETI_EXECUTOR",
-                    "value": "kubernetes"
-                  },
-                  {
-                    "name": "HEKETI_DB_PATH",
-                    "value": "/var/lib/heketi/heketi.db"
-                  },
-                  {
-                    "name": "HEKETI_FSTAB",
-                    "value": "/var/lib/heketi/fstab"
-                  },
-                  {
-                    "name": "HEKETI_SNAPSHOT_LIMIT",
-                    "value": "14"
-                  },
-                  {
-                    "name": "HEKETI_KUBE_GLUSTER_DAEMONSET",
-                    "value": "y"
-                  }
-                ],
-                "ports": [
-                  {
-                    "containerPort": 8080
-                  }
-                ],
-                "volumeMounts": [
-                  {
-                    "name": "db",
-                    "mountPath": "/var/lib/heketi"
-                  },
-                  {
-                    "name": "config",
-                    "mountPath": "/etc/heketi"
-                  }
-                ],
-                "readinessProbe": {
-                  "timeoutSeconds": 3,
-                  "initialDelaySeconds": 3,
-                  "httpGet": {
-                    "path": "/hello",
-                    "port": 8080
-                  }
-                },
-                "livenessProbe": {
-                  "timeoutSeconds": 3,
-                  "initialDelaySeconds": 10,
-                  "httpGet": {
-                    "path": "/hello",
-                    "port": 8080
-                  }
-                }
-              }
-            ],
-            "volumes": [
-              {
-                "name": "db"
-              },
-              {
-                "name": "config",
-                "secret": {
-                  "secretName": "heketi-config-secret"
-                }
-              }
-            ]
-          }
-        }
-      }
-    }
-  ]
-}
diff --git a/contrib/network-storage/heketi/roles/provision/templates/heketi-deployment.json.j2 b/contrib/network-storage/heketi/roles/provision/templates/heketi-deployment.json.j2
deleted file mode 100644
index 8e09ce855303f9719e24080da49468b55f0d118f..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/templates/heketi-deployment.json.j2
+++ /dev/null
@@ -1,164 +0,0 @@
-{
-  "kind": "List",
-  "apiVersion": "v1",
-  "items": [
-    {
-      "kind": "Secret",
-      "apiVersion": "v1",
-      "metadata": {
-        "name": "heketi-db-backup",
-        "labels": {
-          "glusterfs": "heketi-db",
-          "heketi": "db"
-        }
-      },
-      "data": {
-      },
-      "type": "Opaque"
-    },
-    {
-      "kind": "Service",
-      "apiVersion": "v1",
-      "metadata": {
-        "name": "heketi",
-        "labels": {
-          "glusterfs": "heketi-service",
-          "deploy-heketi": "support"
-        },
-        "annotations": {
-          "description": "Exposes Heketi Service"
-        }
-      },
-      "spec": {
-        "selector": {
-          "name": "heketi"
-        },
-        "ports": [
-          {
-            "name": "heketi",
-            "port": 8080,
-            "targetPort": 8080
-          }
-        ]
-      }
-    },
-    {
-      "kind": "Deployment",
-      "apiVersion": "apps/v1",
-      "metadata": {
-        "name": "heketi",
-        "labels": {
-          "glusterfs": "heketi-deployment"
-        },
-        "annotations": {
-          "description": "Defines how to deploy Heketi"
-        }
-      },
-      "spec": {
-        "selector": {
-          "matchLabels": {
-            "name": "heketi"
-          }
-        },
-        "replicas": 1,
-        "template": {
-          "metadata": {
-            "name": "heketi",
-            "labels": {
-              "name": "heketi",
-              "glusterfs": "heketi-pod"
-            }
-          },
-          "spec": {
-            "serviceAccountName": "heketi-service-account",
-            "containers": [
-              {
-                "image": "heketi/heketi:9",
-                "imagePullPolicy": "Always",
-                "name": "heketi",
-                "env": [
-                  {
-                    "name": "HEKETI_EXECUTOR",
-                    "value": "kubernetes"
-                  },
-                  {
-                    "name": "HEKETI_DB_PATH",
-                    "value": "/var/lib/heketi/heketi.db"
-                  },
-                  {
-                    "name": "HEKETI_FSTAB",
-                    "value": "/var/lib/heketi/fstab"
-                  },
-                  {
-                    "name": "HEKETI_SNAPSHOT_LIMIT",
-                    "value": "14"
-                  },
-                  {
-                    "name": "HEKETI_KUBE_GLUSTER_DAEMONSET",
-                    "value": "y"
-                  }
-                ],
-                "ports": [
-                  {
-                    "containerPort": 8080
-                  }
-                ],
-                "volumeMounts": [
-                  {
-                    "mountPath": "/backupdb",
-                    "name": "heketi-db-secret"
-                  },
-                  {
-                    "name": "db",
-                    "mountPath": "/var/lib/heketi"
-                  },
-                  {
-                    "name": "config",
-                    "mountPath": "/etc/heketi"
-                  }
-                ],
-                "readinessProbe": {
-                  "timeoutSeconds": 3,
-                  "initialDelaySeconds": 3,
-                  "httpGet": {
-                    "path": "/hello",
-                    "port": 8080
-                  }
-                },
-                "livenessProbe": {
-                  "timeoutSeconds": 3,
-                  "initialDelaySeconds": 10,
-                  "httpGet": {
-                    "path": "/hello",
-                    "port": 8080
-                  }
-                }
-              }
-            ],
-            "volumes": [
-              {
-                "name": "db",
-                "glusterfs": {
-                  "endpoints": "heketi-storage-endpoints",
-                  "path": "heketidbstorage"
-                }
-              },
-              {
-                "name": "heketi-db-secret",
-                "secret": {
-                  "secretName": "heketi-db-backup"
-                }
-              },
-              {
-                "name": "config",
-                "secret": {
-                  "secretName": "heketi-config-secret"
-                }
-              }
-            ]
-          }
-        }
-      }
-    }
-  ]
-}
diff --git a/contrib/network-storage/heketi/roles/provision/templates/heketi-service-account.json.j2 b/contrib/network-storage/heketi/roles/provision/templates/heketi-service-account.json.j2
deleted file mode 100644
index 1dbcb9e962c785ef9d1e0ad28ccc7f999ca31d49..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/templates/heketi-service-account.json.j2
+++ /dev/null
@@ -1,7 +0,0 @@
-{
-  "apiVersion": "v1",
-  "kind": "ServiceAccount",
-  "metadata": {
-    "name": "heketi-service-account"
-  }
-}
diff --git a/contrib/network-storage/heketi/roles/provision/templates/heketi-storage.json.j2 b/contrib/network-storage/heketi/roles/provision/templates/heketi-storage.json.j2
deleted file mode 100644
index e985d255ff94c9d15b66614e1e8a2082fd66d15c..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/templates/heketi-storage.json.j2
+++ /dev/null
@@ -1,54 +0,0 @@
-{
-  "apiVersion": "v1",
-  "kind": "List",
-  "items": [
-    {
-      "kind": "Endpoints",
-      "apiVersion": "v1",
-      "metadata": {
-        "name": "heketi-storage-endpoints",
-        "creationTimestamp": null
-      },
-      "subsets": [
-{% set nodeblocks = [] %}
-{% for node in nodes %}
-{% set nodeblock %}
-        {
-          "addresses": [
-            {
-              "ip": "{{ hostvars[node].ip }}"
-            }
-          ],
-          "ports": [
-            {
-              "port": 1
-            }
-          ]
-        }
-{% endset %}
-{% if nodeblocks.append(nodeblock) %}{% endif %}
-{% endfor %}
-{{ nodeblocks|join(',') }}
-      ]
-    },
-    {
-      "kind": "Service",
-      "apiVersion": "v1",
-      "metadata": {
-        "name": "heketi-storage-endpoints",
-        "creationTimestamp": null
-      },
-      "spec": {
-        "ports": [
-          {
-            "port": 1,
-            "targetPort": 0
-          }
-        ]
-      },
-      "status": {
-        "loadBalancer": {}
-      }
-    }
-  ]
-}
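
The `nodeblocks`/`join(',')` construction is how the template emits a valid JSON array: rendering one block per node and joining afterwards avoids the trailing comma a plain loop would leave. The same idea in shell, over illustrative node IPs:

```shell
# Build the comma-separated subsets list without a trailing comma by joining
# after the loop instead of appending "," inside it.
nodes="10.0.0.1 10.0.0.2 10.0.0.3"   # illustrative IPs
blocks=""
for ip in $nodes; do
  block="{\"addresses\":[{\"ip\":\"$ip\"}],\"ports\":[{\"port\":1}]}"
  blocks="${blocks:+$blocks,}$block"
done
echo "[$blocks]" | jq .   # validate the assembled array
```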
diff --git a/contrib/network-storage/heketi/roles/provision/templates/heketi.json.j2 b/contrib/network-storage/heketi/roles/provision/templates/heketi.json.j2
deleted file mode 100644
index 5861b684b43976e6010a18dea9dc6057c71395ad..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/templates/heketi.json.j2
+++ /dev/null
@@ -1,44 +0,0 @@
-{
-  "_port_comment": "Heketi Server Port Number",
-  "port": "8080",
-
-  "_use_auth": "Enable JWT authorization. Please enable for deployment",
-  "use_auth": true,
-
-  "_jwt": "Private keys for access",
-  "jwt": {
-    "_admin": "Admin has access to all APIs",
-    "admin": {
-      "key": "{{ heketi_admin_key }}"
-    },
-    "_user": "User only has access to /volumes endpoint",
-    "user": {
-      "key": "{{ heketi_user_key }}"
-    }
-  },
-
-  "_glusterfs_comment": "GlusterFS Configuration",
-  "glusterfs": {
-    "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
-    "executor": "kubernetes",
-
-    "_db_comment": "Database file name",
-    "db": "/var/lib/heketi/heketi.db",
-
-    "kubeexec": {
-      "rebalance_on_expansion": true
-    },
-
-    "sshexec": {
-      "rebalance_on_expansion": true,
-      "keyfile": "/etc/heketi/private_key",
-      "fstab": "/etc/fstab",
-      "port": "22",
-      "user": "root",
-      "sudo": false
-    }
-  },
-
-  "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
-  "backup_db_to_kube_secret": false
-}
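
Because `use_auth` is enabled, every API call must carry the admin or user JWT key. A quick way to verify the rendered credentials work, with the pod name as a placeholder:

```shell
# heketi-cli signs its requests with the --secret key; "cluster list" is a
# cheap authenticated round trip. <heketi-pod> is a placeholder.
kubectl exec <heketi-pod> -- \
  heketi-cli --user admin --secret "$HEKETI_ADMIN_KEY" cluster list
```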
diff --git a/contrib/network-storage/heketi/roles/provision/templates/storageclass.yml.j2 b/contrib/network-storage/heketi/roles/provision/templates/storageclass.yml.j2
deleted file mode 100644
index c2b64cf6942a7a053dbfd204eb27e263c61a61b6..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/templates/storageclass.yml.j2
+++ /dev/null
@@ -1,12 +0,0 @@
----
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: gluster
-  annotations:
-    storageclass.beta.kubernetes.io/is-default-class: "true"
-provisioner: kubernetes.io/glusterfs
-parameters:
-  resturl: "http://{{ endpoint_address }}:8080"
-  restuser: "admin"
-  restuserkey: "{{ heketi_admin_key }}"
diff --git a/contrib/network-storage/heketi/roles/provision/templates/topology.json.j2 b/contrib/network-storage/heketi/roles/provision/templates/topology.json.j2
deleted file mode 100644
index c19ce32866019bf3d8bdd0c4b05845a58dcaf435..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/provision/templates/topology.json.j2
+++ /dev/null
@@ -1,34 +0,0 @@
-{
-  "clusters": [
-    {
-      "nodes": [
-{% set nodeblocks = [] %}
-{% for node in nodes %}
-{% set nodeblock %}
-        {
-          "node": {
-            "hostnames": {
-              "manage": [
-                "{{ node }}"
-              ],
-              "storage": [
-                "{{ hostvars[node].ip }}"
-              ]
-            },
-            "zone": 1
-          },
-          "devices": [
-            {
-              "name": "{{ hostvars[node]['disk_volume_device_1'] }}",
-              "destroydata": false
-            }
-          ]
-        }
-{% endset %}
-{% if nodeblocks.append(nodeblock) %}{% endif %}
-{% endfor %}
-{{ nodeblocks|join(',') }}
-      ]
-    }
-  ]
-}
diff --git a/contrib/network-storage/heketi/roles/tear-down-disks/defaults/main.yml b/contrib/network-storage/heketi/roles/tear-down-disks/defaults/main.yml
deleted file mode 100644
index c07ba2d2387e394f5a4d21b564dce99c40e58594..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/tear-down-disks/defaults/main.yml
+++ /dev/null
@@ -1,2 +0,0 @@
----
-heketi_remove_lvm: false
diff --git a/contrib/network-storage/heketi/roles/tear-down-disks/tasks/main.yml b/contrib/network-storage/heketi/roles/tear-down-disks/tasks/main.yml
deleted file mode 100644
index f3ca033200a42f3d2e3fe741a06d3a5e41441610..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/tear-down-disks/tasks/main.yml
+++ /dev/null
@@ -1,52 +0,0 @@
----
-- name: "Install lvm utils (RedHat)"
-  become: true
-  package:
-    name: "lvm2"
-    state: "present"
-  when: "ansible_os_family == 'RedHat'"
-
-- name: "Install lvm utils (Debian)"
-  become: true
-  apt:
-    name: "lvm2"
-    state: "present"
-  when: "ansible_os_family == 'Debian'"
-
-- name: "Get volume group information."
-  environment:
-    PATH: "{{ ansible_env.PATH }}:/sbin"  # Work around RH / CentOS conservative PATH management
-  become: true
-  shell: "pvs {{ disk_volume_device_1 }} --option vg_name | tail -n+2"
-  register: "volume_groups"
-  ignore_errors: true   # noqa ignore-errors
-  changed_when: false
-
-- name: "Remove volume groups."
-  environment:
-    PATH: "{{ ansible_env.PATH }}:/sbin"  # Work around RH / CentOS conservative PATH management
-  become: true
-  command: "vgremove {{ volume_group }} --yes"
-  with_items: "{{ volume_groups.stdout_lines }}"
-  loop_control: { loop_var: "volume_group" }
-
-- name: "Remove physical volume from cluster disks."
-  environment:
-    PATH: "{{ ansible_env.PATH }}:/sbin"  # Work around RH / CentOS conservative PATH management
-  become: true
-  command: "pvremove {{ disk_volume_device_1 }} --yes"
-  ignore_errors: true   # noqa ignore-errors
-
-- name: "Remove lvm utils (RedHat)"
-  become: true
-  package:
-    name: "lvm2"
-    state: "absent"
-  when: "ansible_os_family == 'RedHat' and heketi_remove_lvm"
-
-- name: "Remove lvm utils (Debian)"
-  become: true
-  apt:
-    name: "lvm2"
-    state: "absent"
-  when: "ansible_os_family == 'Debian' and heketi_remove_lvm"
diff --git a/contrib/network-storage/heketi/roles/tear-down/tasks/main.yml b/contrib/network-storage/heketi/roles/tear-down/tasks/main.yml
deleted file mode 100644
index 5c271e794d7357ebb7bf58bde4beadbb85708d86..0000000000000000000000000000000000000000
--- a/contrib/network-storage/heketi/roles/tear-down/tasks/main.yml
+++ /dev/null
@@ -1,51 +0,0 @@
----
-- name: Remove storage class.
-  command: "{{ bin_dir }}/kubectl delete storageclass gluster"
-  ignore_errors: true  # noqa ignore-errors
-- name: Tear down heketi pod resources.
-  command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\""
-  ignore_errors: true  # noqa ignore-errors
-- name: Tear down heketi deployment resources.
-  command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\""
-  ignore_errors: true  # noqa ignore-errors
-- name: Tear down bootstrap.
-  include_tasks: "../../provision/tasks/bootstrap/tear-down.yml"
-- name: Ensure no heketi pod resources are left over.
-  command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\" -o=json"
-  register: "heketi_result"
-  until: "heketi_result.stdout | from_json | json_query('items[*]') | length == 0"
-  retries: 60
-  delay: 5
-- name: Ensure no heketi deployment resources are left over.
-  command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\" -o=json"
-  register: "heketi_result"
-  until: "heketi_result.stdout | from_json | json_query('items[*]') | length == 0"
-  retries: 60
-  delay: 5
-- name: Tear down glusterfs.
-  command: "{{ bin_dir }}/kubectl delete daemonset.extensions/glusterfs"
-  ignore_errors: true  # noqa ignore-errors
-- name: Remove heketi storage service.
-  command: "{{ bin_dir }}/kubectl delete service heketi-storage-endpoints"
-  ignore_errors: true  # noqa ignore-errors
-- name: Remove heketi gluster role binding
-  command: "{{ bin_dir }}/kubectl delete clusterrolebinding heketi-gluster-admin"
-  ignore_errors: true  # noqa ignore-errors
-- name: Remove heketi config secret
-  command: "{{ bin_dir }}/kubectl delete secret heketi-config-secret"
-  ignore_errors: true  # noqa ignore-errors
-- name: Remove heketi db backup
-  command: "{{ bin_dir }}/kubectl delete secret heketi-db-backup"
-  ignore_errors: true  # noqa ignore-errors
-- name: Remove heketi service account
-  command: "{{ bin_dir }}/kubectl delete serviceaccount heketi-service-account"
-  ignore_errors: true  # noqa ignore-errors
-- name: Get secrets
-  command: "{{ bin_dir }}/kubectl get secrets --output=\"json\""
-  register: "secrets"
-  changed_when: false
-- name: Remove heketi storage secret
-  vars: { storage_query: "items[?metadata.annotations.\"kubernetes.io/service-account.name\"=='heketi-service-account'].metadata.name|[0]" }
-  command: "{{ bin_dir }}/kubectl delete secret {{ secrets.stdout | from_json | json_query(storage_query) }}"
-  when: "storage_query is defined"
-  ignore_errors: true  # noqa ignore-errors
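
The JMESPath `storage_query` digs out the token secret Kubernetes generated for `heketi-service-account`; the same lookup rendered with jq:

```shell
# Find and delete the service-account token secret left behind by
# heketi-service-account; head -n1 mirrors the "|[0]" in the JMESPath query.
name=$(kubectl get secrets -o json | jq -r '
  .items[]
  | select(.metadata.annotations."kubernetes.io/service-account.name"
           == "heketi-service-account")
  | .metadata.name' | head -n1)
[ -n "$name" ] && kubectl delete secret "$name"
```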