diff --git a/docs/kubernetes-reliability.md b/docs/kubernetes-reliability.md
index e338272d3bc50d9b172fb93ce0b065d0282fc8fd..3e812d8f629aac95f2000f65c7e0755813b2a31a 100644
--- a/docs/kubernetes-reliability.md
+++ b/docs/kubernetes-reliability.md
@@ -69,7 +69,7 @@ minute which may require large etcd containers or even dedicated nodes for etcd.
 
 > If we calculate the number of tries, the division will give 5, but in reality
-> it will be from 3 to 5 with `nodeStatusUpdateRetry` attempts of each try. The
+> it will be from 3 to 5, with `nodeStatusUpdateRetry` attempts in each try. The
-> total number of attemtps will vary from 15 to 25 due to latency of all
+> total number of attempts will vary from 15 to 25 due to latency of all
 > components.
 
 ## Medium Update and Average Reaction
@@ -92,7 +92,7 @@ etcd updates per minute.
-Let's set `-–node-status-update-frequency` to **1m**.
+Let's set `--node-status-update-frequency` to **1m**.
-`--node-monitor-grace-period` will set to **5m** and `--pod-eviction-timeout`
+`--node-monitor-grace-period` will be set to **5m** and `--pod-eviction-timeout`
 to **1m**. In this scenario, every kubelet will try to update the status every
-minute. There will be 5 * 5 = 25 attempts before unhealty status. After 5m,
+minute. There will be 5 * 5 = 25 attempts before unhealthy status. After 5m,
-Kubernetes controller manager will set unhealthy status. This means that pods
-will be evicted after 1m after being marked unhealthy. (6m in total).
+the Kubernetes controller manager will set the unhealthy status. This means that
+pods will be evicted 1m after being marked unhealthy (6m in total).