diff --git a/README.md b/README.md
index ef4a24e302eaf31387f5ccb5cb644881f0db906e..8ec1020efd7f57c12ed47cbf5d8959c079019f9e 100644
--- a/README.md
+++ b/README.md
@@ -199,7 +199,7 @@ Note: Upstart/SysV init based OS types are not supported.
 - If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
 - The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
     in order to avoid any issue during deployment you should disable your firewall.
-- If kubespray is ran from non-root user account, correct privilege escalation method
+- If kubespray is run from a non-root user account, the correct privilege escalation method
     should be configured in the target servers. Then the `ansible_become` flag
     or command parameters `--become or -b` should be specified.
 
diff --git a/RELEASE.md b/RELEASE.md
index 05ea6c0bd013dc3f43d885a15baa6cef7ead4b28..296040de1e24c069b6795c548a0ddb88e10e4dc6 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -60,7 +60,7 @@ release-notes --start-sha <The start commit-id> --end-sha <The end commit-id> --
 ```
 
 If the release note file(/tmp/kubespray-release-note) contains "### Uncategorized" pull requests, those pull requests don't have a valid kind label(`kind/feature`, etc.).
-It is necessary to put a valid label on each pull request and run the above release-notes command again to get a better release note)
+It is necessary to put a valid label on each pull request and run the above release-notes command again to get a better release note.
 
 ## Container image creation
 
diff --git a/docs/azure-csi.md b/docs/azure-csi.md
index 1cc3a68485ca29a6aa62c9222a591cb4e203b287..6aa16c2bc59e7d4f5eb87e6c08b3d63069e9ee40 100644
--- a/docs/azure-csi.md
+++ b/docs/azure-csi.md
@@ -14,7 +14,7 @@ If you want to deploy the Azure Disk storage class to provision volumes dynamica
 
 Before creating the instances you must first set the `azure_csi_` variables in the `group_vars/all.yml` file.
 
-All of the values can be retrieved using the azure cli tool which can be downloaded here: <https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest>
+All values can be retrieved using the azure cli tool which can be downloaded here: <https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest>
 
 After installation you have to run `az login` to get access to your account.
 
@@ -34,7 +34,7 @@ The name of the resource group your instances are in, a list of your resource gr
 
 Or you can do `az vm list | grep resourceGroup` and get the resource group corresponding to the VMs of your cluster.
 
-The resource group name is not case sensitive.
+The resource group name is not case-sensitive.
 
 ### azure\_csi\_vnet\_name
 
diff --git a/docs/azure.md b/docs/azure.md
index a58ca4576d5a8aa36703dc4314b5808c03b2e4c9..a164ea757011818ef488d569f0d1cd39ebe06f8d 100644
--- a/docs/azure.md
+++ b/docs/azure.md
@@ -10,7 +10,7 @@ Not all features are supported yet though, for a list of the current status have
 
 Before creating the instances you must first set the `azure_` variables in the `group_vars/all/all.yml` file.
 
-All of the values can be retrieved using the Azure CLI tool which can be downloaded here: <https://docs.microsoft.com/en-gb/cli/azure/install-azure-cli>
+All values can be retrieved using the Azure CLI tool which can be downloaded here: <https://docs.microsoft.com/en-gb/cli/azure/install-azure-cli>
 After installation you have to run `az login` to get access to your account.
 
 ### azure_cloud
diff --git a/docs/cloud.md b/docs/cloud.md
index ccd30fb612ba406ec17afb43a475a2dcd2b4fe1b..d7fcfef7fdc9b1c1032b583953e0493bba64002c 100644
--- a/docs/cloud.md
+++ b/docs/cloud.md
@@ -2,7 +2,7 @@
 
 ## Provisioning
 
-You can deploy instances in your cloud environment in several different ways. Examples include Terraform, Ansible (ec2 and gce modules), and manual creation.
+You can deploy instances in your cloud environment in several ways. Examples include Terraform, Ansible (ec2 and gce modules), and manual creation.
 
 ## Deploy kubernetes
 
diff --git a/docs/equinix-metal.md b/docs/equinix-metal.md
index 61260f061f17c6b3806e8285ea05d4c73f67533e..ccdabaed29e4a2a220bd3c0b94d669e319e1cd90 100644
--- a/docs/equinix-metal.md
+++ b/docs/equinix-metal.md
@@ -10,7 +10,7 @@ dynamically from the Terraform state file.
 ## Local Host Configuration
 
 To perform this installation, you will need a localhost to run Terraform/Ansible (laptop, VM, etc) and an account with Equinix Metal.
-In this example, we're using an m1.large CentOS 7 OpenStack VM as the localhost to kickoff the Kubernetes installation.
+In this example, we are provisioning an m1.large CentOS 7 OpenStack VM as the localhost for the Kubernetes installation.
 You'll need Ansible, Git, and PIP.
 
 ```bash
diff --git a/docs/etcd.md b/docs/etcd.md
index 17aa291f5ed1f2e6673bb44e781f6a37dfb534b0..574cc31d269901e22bbd99228decf84952fb4df2 100644
--- a/docs/etcd.md
+++ b/docs/etcd.md
@@ -25,7 +25,7 @@ etcd_metrics_port: 2381
 ```
 
 To create a service `etcd-metrics` and associated endpoints in the `kube-system` namespace,
-define it's labels in the inventory with:
+define its labels in the inventory with:
 
 ```yaml
 etcd_metrics_service_labels:
diff --git a/docs/ha-mode.md b/docs/ha-mode.md
index de80199de375a81831956ded2ae2bb687c36da73..1bbfd35486a498bda3fee3b27c4e61d79eaa0c1d 100644
--- a/docs/ha-mode.md
+++ b/docs/ha-mode.md
@@ -54,7 +54,7 @@ listen kubernetes-apiserver-https
   balance roundrobin
 ```
 
-  Note: That's an example config managed elsewhere outside of Kubespray.
+  Note: That's an example config managed elsewhere outside Kubespray.
 
 And the corresponding example global vars for such a "cluster-aware"
 external LB with the cluster API access modes configured in Kubespray:
@@ -85,7 +85,7 @@ for it.
 
   Note: TLS/SSL termination for externally accessed API endpoints will **not**
   be covered by Kubespray for that case. Make sure your external LB provides it.
-  Alternatively you may specify an externally load balanced VIPs in the
+  Alternatively you may specify externally load balanced VIPs in the
   `supplementary_addresses_in_ssl_keys` list. Then, kubespray will add them into
   the generated cluster certificates as well.
 
diff --git a/docs/integration.md b/docs/integration.md
index 962a5f4590c02a6326b64789da256c788cf20b8d..1060fbc6c4d14afd6c2321132de371d870a168a5 100644
--- a/docs/integration.md
+++ b/docs/integration.md
@@ -95,7 +95,7 @@
       ansible.builtin.import_playbook: 3d/kubespray/cluster.yml
     ```
 
-    Or your could copy separate tasks from cluster.yml into your ansible repository.
+    Or you could copy separate tasks from cluster.yml into your ansible repository.
 
 11. Commit changes to your ansible repo. Keep in mind, that submodule folder is just a link to the git commit hash of your forked repo.
 
@@ -170,7 +170,7 @@ If you made useful changes or fixed a bug in existent kubespray repo, use this f
    git push
    ```
 
-   If your branch doesn't exists on github, git will propose you to use something like
+   If your branch doesn't exist on GitHub, git will suggest using something like
 
    ```ShellSession
    git push --set-upstream origin fixes-name-date-index
diff --git a/docs/kube-ovn.md b/docs/kube-ovn.md
index 3ddc270da7abefcf02c0d3885bf9bf9c796829af..26d7cd93d9aca1f41b73725745c334a3c84cd430 100644
--- a/docs/kube-ovn.md
+++ b/docs/kube-ovn.md
@@ -4,7 +4,7 @@ Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It off
 
 For more information please check [Kube-OVN documentation](https://github.com/alauda/kube-ovn)
 
-**Warning:** Kernel version (`cat /proc/version`) needs to be different than `3.10.0-862` or kube-ovn won't start and will print this message:
+**Warning:** Kernel version (`cat /proc/version`) needs to be different from `3.10.0-862` or kube-ovn won't start and will print this message:
 
 ```bash
 kernel version 3.10.0-862 has a nat related bug that will affect ovs function, please update to a version greater than 3.10.0-898
diff --git a/docs/kubernetes-reliability.md b/docs/kubernetes-reliability.md
index 149ec845cee98cc9504c58b6a513dc89fee60342..116b4e173f7b996cde07968fa1069d15195ffbd0 100644
--- a/docs/kubernetes-reliability.md
+++ b/docs/kubernetes-reliability.md
@@ -4,7 +4,7 @@ Distributed system such as Kubernetes are designed to be resilient to the
 failures.  More details about Kubernetes High-Availability (HA) may be found at
 [Building High-Availability Clusters](https://kubernetes.io/docs/admin/high-availability/)
 
-To have a simple view the most of parts of HA will be skipped to describe
+For simplicity, most parts of HA will be skipped to describe
 Kubelet<->Controller Manager communication only.
 
 By default the normal behavior looks like:
diff --git a/docs/nodes.md b/docs/nodes.md
index 2cd9e9a3c2463e8e80e053cb9e75c00d6d1026c3..703cdfb6157903cc38d8ef001770dde7cc0d9641 100644
--- a/docs/nodes.md
+++ b/docs/nodes.md
@@ -138,7 +138,7 @@ Run `cluster.yml` with `--limit=kube_control_plane`
 
 ## Adding an etcd node
 
-You need to make sure there are always an odd number of etcd nodes in the cluster. In such a way, this is always a replace or scale up operation. Either add two new nodes or remove an old one.
+You need to make sure there is always an odd number of etcd nodes in the cluster. As such, this is always a replacement or scale-up operation. Either add two new nodes or remove an old one.
 
 ### 1) Add the new node running cluster.yml
 
diff --git a/docs/offline-environment.md b/docs/offline-environment.md
index 05acc97b1ba35e685f8f27f3a26ab85d214ad534..79200a931a8a99f8554a5462cb5d001718929fb1 100644
--- a/docs/offline-environment.md
+++ b/docs/offline-environment.md
@@ -13,7 +13,7 @@ following artifacts in advance from another environment where has access to the
 
 Then you need to setup the following services on your offline environment:
 
-* a HTTP reverse proxy/cache/mirror to serve some static files (zips and binaries)
+* an HTTP reverse proxy/cache/mirror to serve some static files (zips and binaries)
 * an internal Yum/Deb repository for OS packages
 * an internal container image registry that needs to be populated with all container images used by Kubespray
 * [Optional] an internal PyPi server for python packages used by Kubespray
@@ -97,7 +97,7 @@ If you use the settings like the one above, you'll need to define in your invent
 * `files_repo`: HTTP webserver or reverse proxy that is able to serve the files listed above. Path is not important, you
   can store them anywhere as long as it's accessible by kubespray. It's recommended to use `*_version` in the path so
   that you don't need to modify this setting every time kubespray upgrades one of these components.
-* `yum_repo`/`debian_repo`/`ubuntu_repo`: OS package repository depending of your OS, should point to your internal
+* `yum_repo`/`debian_repo`/`ubuntu_repo`: OS package repository depending on your OS, should point to your internal
   repository. Adjust the path accordingly.
 
 ## Install Kubespray Python Packages
@@ -114,7 +114,7 @@ Look at the `requirements.txt` file and check if your OS provides all packages o
 manager). For those missing, you need to either use a proxy that has Internet access (typically from a DMZ) or setup a
 PyPi server in your network that will host these packages.
 
-If you're using a HTTP(S) proxy to download your python packages:
+If you're using an HTTP(S) proxy to download your python packages:
 
 ```bash
 sudo pip install --proxy=https://[username:password@]proxyserver:port -r requirements.txt
diff --git a/docs/setting-up-your-first-cluster.md b/docs/setting-up-your-first-cluster.md
index 03622da6c1085432ed7a6adf8b2b95dcb864aee4..a8200a3ca7a5175f5331909e4b558c9dd62883da 100644
--- a/docs/setting-up-your-first-cluster.md
+++ b/docs/setting-up-your-first-cluster.md
@@ -272,7 +272,7 @@ scp $USERNAME@$IP_CONTROLLER_0:/etc/kubernetes/admin.conf kubespray-do.conf
 
 This kubeconfig file uses the internal IP address of the controller node to
 access the API server. This kubeconfig file will thus not work from
-outside of the VPC network. We will need to change the API server IP address
+outside the VPC network. We will need to change the API server IP address
 to the controller node's external IP address. The external IP address will be
 accepted in the
 TLS negotiation as we added the controllers external IP addresses in the SSL
@@ -482,7 +482,7 @@ nginx version: nginx/1.19.1
 
 ### Kubernetes services
 
-#### Expose outside of the cluster
+#### Expose outside the cluster
 
 In this section you will verify the ability to expose applications using a [Service](https://kubernetes.io/docs/concepts/services-networking/service/).
 
diff --git a/docs/upgrades.md b/docs/upgrades.md
index 22d81d591729bd1add96a214e0b7dd16bc31c62f..bca7057ccca7bb1adf11baa41e0053454118ccdb 100644
--- a/docs/upgrades.md
+++ b/docs/upgrades.md
@@ -263,7 +263,7 @@ Previous HEAD position was 6f97687d Release 2.8 robust san handling (#4478)
 HEAD is now at a4e65c7c Upgrade to Ansible >2.7.0 (#4471)
 ```
 
-:warning: IMPORTANT: Some of the variable formats changed in the k8s_cluster.yml between 2.8.5 and 2.9.0 :warning:
+:warning: IMPORTANT: Some variable formats changed in the k8s_cluster.yml between 2.8.5 and 2.9.0 :warning:
 
 If you do not keep your inventory copy up to date, **your upgrade will fail** and your first master will be left non-functional until fixed and re-run.
 
diff --git a/docs/vars.md b/docs/vars.md
index 7680ab2b5179881e52cd9d60f01924a17305d79d..877c78f7e2762d49b6839a1cf20dff5985bb68a6 100644
--- a/docs/vars.md
+++ b/docs/vars.md
@@ -81,7 +81,7 @@ following default cluster parameters:
   bits in kube_pods_subnet dictates how many kube_nodes can be in cluster. Setting this > 25 will
   raise an assertion in playbooks if the `kubelet_max_pods` var also isn't adjusted accordingly
   (assertion not applicable to calico which doesn't use this as a hard limit, see
-  [Calico IP block sizes](https://docs.projectcalico.org/reference/resources/ippool#block-sizes).
+  [Calico IP block sizes](https://docs.projectcalico.org/reference/resources/ippool#block-sizes)).
 
 * *enable_dual_stack_networks* - Setting this to true will provision both IPv4 and IPv6 networking for pods and services.
 
@@ -209,7 +209,7 @@ Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.m
 
 * *kubelet_systemd_hardening* - If `true`, provides kubelet systemd service with security features for isolation.
 
-  **N.B.** To enable this feature, ensure you are using the **`cgroup v2`** on your system. Check it out with command: `sudo ls -l /sys/fs/cgroup/*.slice`. If directory does not exists, enable this with the following guide: [enable cgroup v2](https://rootlesscontaine.rs/getting-started/common/cgroup2/#enabling-cgroup-v2).
+  **N.B.** To enable this feature, ensure your system is using **`cgroup v2`**. Check with the command: `sudo ls -l /sys/fs/cgroup/*.slice`. If the directory does not exist, enable cgroup v2 by following this guide: [enable cgroup v2](https://rootlesscontaine.rs/getting-started/common/cgroup2/#enabling-cgroup-v2).
 
   * *kubelet_secure_addresses* - By default *kubelet_systemd_hardening* set the **control plane** `ansible_host` IPs as the `kubelet_secure_addresses`. In case you have multiple interfaces in your control plane nodes and the `kube-apiserver` is not bound to the default interface, you can override them with this variable.
     Example: