DevOps Asked by again on February 17, 2021
Currently using AWS's Auto Scaling policy to start EKS worker nodes (min/desired 10, max 15) at a fixed time every day. There are 120 pods across 30 deployments.
Obviously the nodes come up one at a time: first one node, then the second, and so on until the last one is ready. Because of that, Kubernetes schedules almost all of the pods onto the first nodes.
Is there any way to get the pods evenly distributed across the nodes in this scenario?
Take a look at the Descheduler. This project runs as a Kubernetes Job and evicts pods when it thinks the cluster is unbalanced.
The LowNodeUtilization strategy seems to fit your case:
This strategy finds nodes that are under utilized and evicts pods, if possible, from other nodes in the hope that recreation of evicted pods will be scheduled on these underutilized nodes.
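For illustration, a minimal Descheduler policy enabling this strategy might look like the sketch below. This assumes the v1alpha1 policy format, and the threshold values are placeholders you would tune for your cluster:

apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        # Nodes below all of these values are considered underutilized.
        thresholds:
          "cpu": 20
          "memory": 20
          "pods": 20
        # Pods are evicted from nodes that exceed any of these values.
        targetThresholds:
          "cpu": 50
          "memory": 50
          "pods": 50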
Another option is to apply a little chaos engineering manually by forcing a rolling update on your deployments; hopefully the scheduler will fix the balance problem when the pods are recreated.
You can use kubectl rollout restart deployment/my-deployment. It's way better than simply deleting the pods with kubectl delete pod, as the rollout will ensure availability during the "rebalancing" (although deleting the pods altogether increases your chances of a better rebalance).
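If you want to restart all 30 deployments at once, a small loop like the following should do it (the namespace name is a placeholder for illustration):

# Restart every Deployment in the namespace so the scheduler
# re-places the pods across the full set of nodes.
for d in $(kubectl get deployments -n my-namespace -o name); do
  kubectl rollout restart -n my-namespace "$d"
done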
Correct answer by Eduardo Baitello on February 17, 2021
You should look into affinity and anti-affinity, as they allow you to control which pods go on which node. With pod anti-affinity you can ensure that only one pod of each deployment lands on a node. That is a bit overkill and wouldn't work for you exactly, though IIRC you can use the soft ("preferred") form so that a node can still take multiple pods when needed; see the sketch after the link below.
https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/
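As a rough sketch (the deployment name, labels, and image are hypothetical), the "preferred" form of pod anti-affinity nudges the scheduler to spread replicas across nodes without blocking scheduling when no empty node is available:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment            # placeholder name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # Soft anti-affinity: the scheduler prefers nodes that do not
          # already run a pod with the same "app" label, but will still
          # co-locate pods if it has to. Use
          # requiredDuringSchedulingIgnoredDuringExecution for a hard
          # one-pod-per-node rule instead.
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: my-app
                topologyKey: kubernetes.io/hostname
      containers:
        - name: my-app
          image: nginx:1.21      # placeholder image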
Answered by joshk132 on February 17, 2021