Problem Description
By default,
- Containers run with unbounded compute resources on a Kubernetes cluster.
- If a pod or container goes rogue and monopolizes resources, it can starve the entire cluster.
Solutions:
Use Kubernetes features like ResourceQuota, LimitRange, and per-container requests/limits to restrict this behavior (a minimal requests/limits stanza is sketched after the list below for reference). In this trashcan article, I am going through the official documentation to spike the LimitRange feature.
The official page mentions four kinds of constraints:
- Limiting Container compute resources
- Limiting Pod compute resources
- Limiting Storage resources
- Limits/Requests Ratio
I am going to try all 4.
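For reference, the per-container requests/limits stanza that all of these constraints act on looks roughly like this (illustrative values only, not taken from the official example):

resources:
  requests:        # what the scheduler reserves for the container
    cpu: "100m"
    memory: "128Mi"
  limits:          # hard cap enforced at runtime
    cpu: "500m"
    memory: "256Mi"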
Limiting container compute resources
- Create a namespace limitrange-demo using the following kubectl command:
kubectl create namespace limitrange-demo
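(Optional) You can confirm the namespace was created before moving on:
kubectl get namespace limitrange-demo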
- Create a configuration file (limitrange.yaml) for the LimitRange object:
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-mem-cpu-per-container
spec:
  limits:
  - max:
      cpu: "800m"
      memory: "1Gi"
    min:
      cpu: "100m"
      memory: "99Mi"
    default:
      cpu: "700m"
      memory: "900Mi"
    defaultRequest:
      cpu: "110m"
      memory: "111Mi"
    type: Container
- Now apply this LimitRange configuration file:
kubectl apply -f limitrange.yaml -n limitrange-demo
Output:
limitrange/limit-mem-cpu-per-container created
You can validate that the object was created properly using the following command:
kubectl describe -f limitrange.yaml -n limitrange-demo
Output:
Name:       limit-mem-cpu-per-container
Namespace:  limitrange-demo
Type        Resource  Min   Max   Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---   ---   ---------------  -------------  -----------------------
Container   memory    99Mi  1Gi   111Mi            900Mi          -
Container   cpu       100m  800m  110m             700m           -
# Or using the following:
kubectl describe limitrange/limit-mem-cpu-per-container -n limitrange-demo
Output:
Name:       limit-mem-cpu-per-container
Namespace:  limitrange-demo
Type        Resource  Min   Max   Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---   ---   ---------------  -------------  -----------------------
Container   cpu       100m  800m  110m             700m           -
Container   memory    99Mi  1Gi   111Mi            900Mi          -
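Note that min/max are not only used for defaulting; they are enforced at admission time. As a quick sketch (my own hypothetical manifest, not part of the official walkthrough), a container whose limit exceeds the max should be rejected by the LimitRanger admission plugin. Create busybox-overlimit.yaml:

# Expected to be rejected: a CPU limit of 1 (1000m) exceeds the max of 800m
apiVersion: v1
kind: Pod
metadata:
  name: busybox-overlimit
spec:
  containers:
  - name: over
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    resources:
      limits:
        cpu: "1"
        memory: "200Mi"

kubectl apply -f busybox-overlimit.yaml -n limitrange-demo

Applying it should fail with a Forbidden error along the lines of "maximum cpu usage per Container is 800m, but limit is 1".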
- Example manifest to test this feature.
Create a manifest file busybox1.yaml with the following content; the four containers deliberately vary which resources they specify (both, requests only, limits only, neither).
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
spec:
  containers:
  - name: busybox-cnt01
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"]
    resources:
      requests:
        memory: "100Mi"
        cpu: "100m"
      limits:
        memory: "200Mi"
        cpu: "500m"
  - name: busybox-cnt02
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"]
    resources:
      requests:
        memory: "100Mi"
        cpu: "100m"
  - name: busybox-cnt03
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt03; sleep 10;done"]
    resources:
      limits:
        memory: "200Mi"
        cpu: "500m"
  - name: busybox-cnt04
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt04; sleep 10;done"]
Now send this manifest to the Kubernetes API:
kubectl apply -f busybox1.yaml -n limitrange-demo
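Before describing the pod, you can quickly check that it is up and running (exact output will vary):
kubectl get pod busybox1 -n limitrange-demo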
Let's validate:
kubectl describe -f busybox1.yaml -n limitrange-demo
Name:         busybox1
Namespace:    limitrange-demo
Priority:     0
Node:         ip-69-76-24-106.us-west-2.compute.internal/69.76.24.106
Start Time:   Tue, 25 Feb 2020 14:38:54 -0700
Labels:       <none>
Annotations:  cni.projectcalico.org/podIP: 100.96.5.82/32
              kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"busybox1","namespace":"limitrange-demo"},"spec":{"containers":[{"args...
              kubernetes.io/limit-ranger:
                LimitRanger plugin set: cpu, memory limit for container busybox-cnt02; cpu, memory request for container busybox-cnt04; cpu, memory limit ...
Status:       Running
IP:           100.96.5.82
IPs:          <none>
Containers:
  busybox-cnt01:
    Container ID:   docker://afcd9d2ec8e5568fde8d9d9c155266bd1d685a36d8bc79ba6677a119ba6f09b5
    Image:          busybox
    Image ID:       docker-pullable://busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a
    Port:           <none>
    Host Port:      <none>
    Command:
      /bin/sh
    Args:
      -c
      while true; do echo hello from cnt01; sleep 10;done
    State:          Running
      Started:      Tue, 25 Feb 2020 14:38:57 -0700
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  200Mi
    Requests:
      cpu:        100m
      memory:     100Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mnfxb (ro)
  busybox-cnt02:
    Container ID:   docker://a9bacf09475944c447a021f299ac384087d5af46f139b70ce14ee595098bbcf8
    Image:          busybox
    Image ID:       docker-pullable://busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a
    Port:           <none>
    Host Port:      <none>
    Command:
      /bin/sh
    Args:
      -c
      while true; do echo hello from cnt02; sleep 10;done
    State:          Running
      Started:      Tue, 25 Feb 2020 14:38:59 -0700
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  900Mi
    Requests:
      cpu:        100m
      memory:     100Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mnfxb (ro)
  busybox-cnt03:
    Container ID:   docker://c8a2520d632e672a5b343f2dc680c178f2612515e70531cc1be0125e9a9c1fd3
    Image:          busybox
    Image ID:       docker-pullable://busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a
    Port:           <none>
    Host Port:      <none>
    Command:
      /bin/sh
    Args:
      -c
      while true; do echo hello from cnt03; sleep 10;done
    State:          Running
      Started:      Tue, 25 Feb 2020 14:39:00 -0700
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  200Mi
    Requests:
      cpu:        500m
      memory:     200Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mnfxb (ro)
  busybox-cnt04:
    Container ID:   docker://c70901422ba7739e1ef1820d917ebf6471d8efb998de904287e7afdb24e9c909
    Image:          busybox
    Image ID:       docker-pullable://busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a
    Port:           <none>
    Host Port:      <none>
    Command:
      /bin/sh
    Args:
      -c
      while true; do echo hello from cnt04; sleep 10;done
    State:          Running
      Started:      Tue, 25 Feb 2020 14:39:02 -0700
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  900Mi
    Requests:
      cpu:        110m
      memory:     111Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mnfxb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-mnfxb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mnfxb
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                                                 Message
  ----    ------     ----  ----                                                 -------
  Normal  Pulling    117s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  pulling image "busybox"
  Normal  Pulled     114s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  Successfully pulled image "busybox"
  Normal  Created    114s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  Created container
  Normal  Started    114s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  Started container
  Normal  Pulling    114s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  pulling image "busybox"
  Normal  Pulled     113s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  Successfully pulled image "busybox"
  Normal  Created    113s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  Created container
  Normal  Started    112s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  Started container
  Normal  Pulling    112s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  pulling image "busybox"
  Normal  Pulled     111s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  Successfully pulled image "busybox"
  Normal  Created    111s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  Created container
  Normal  Started    111s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  Started container
  Normal  Pulling    111s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  pulling image "busybox"
  Normal  Pulled     109s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  Successfully pulled image "busybox"
  Normal  Created    109s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  Created container
  Normal  Started    109s  kubelet, ip-69-76-24-106.us-west-2.compute.internal  Started container
  Normal  Scheduled  40s   default-scheduler                                    Successfully assigned limitrange-demo/busybox1 to ip-69-76-24-106.us-west-2.compute.internal
If you look at the above output, every container that omits requests or limits in its manifest has values filled in from the LimitRange object:
- busybox-cnt01 specified both requests and limits, so it is left untouched (its values already respect the min/max bounds).
- busybox-cnt02 specified only requests, so the default limits (700m CPU, 900Mi memory) were applied.
- busybox-cnt03 specified only limits, so its requests were set equal to its limits.
- busybox-cnt04 specified neither, so it received both the defaultRequest (110m/111Mi) and the default limits (700m/900Mi).
As long as a manifest specifies everything and stays within the hard min/max bounds, the LimitRange does nothing.
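To compare all four containers at once, you can also drop the index from the jq filter; the per-container queries below show the same data one container at a time:
kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[].resources"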
Container 1
kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[0].resources"
Output:
{
  "limits": {
    "cpu": "500m",
    "memory": "200Mi"
  },
  "requests": {
    "cpu": "100m",
    "memory": "100Mi"
  }
}
Container 2
kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[1].resources"
Output:
{
  "limits": {
    "cpu": "700m",
    "memory": "900Mi"
  },
  "requests": {
    "cpu": "100m",
    "memory": "100Mi"
  }
}
Container 3
kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[2].resources"
Output:
{
  "limits": {
    "cpu": "500m",
    "memory": "200Mi"
  },
  "requests": {
    "cpu": "500m",
    "memory": "200Mi"
  }
}
Container 4
kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[3].resources"
Output:
{
  "limits": {
    "cpu": "700m",
    "memory": "900Mi"
  },
  "requests": {
    "cpu": "110m",
    "memory": "111Mi"
  }
}
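When you are done experimenting, deleting the namespace cleans up the pod and the LimitRange in one go:
kubectl delete namespace limitrange-demo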