VPA: Implement in-place updates support #7673
base: master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: maxcao13
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Hi @maxcao13. Thanks for your PR.
I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This PR may require API review. If so, when the changes are ready, complete the pre-review checklist and request an API review. Status of requested reviews is tracked in the API Review project.
/ok-to-test
This is just my first review after going through all the changes. I will go over it multiple times, but these are my initial comments for now.
package annotations

// TODO(maxcao13): This annotation currently doesn't do anything. Do we want an annotation to show vpa inplace resized only for cosmetic reasons?
I'm not a fan of embedding logic into annotations. So, +1 for making this change purely for cosmetic reasons.
vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go (resolved, outdated comment)
vertical-pod-autoscaler/pkg/updater/eviction/pods_eviction_restriction.go (resolved, outdated comment)
"evicted", singleGroupStats.evicted, | ||
"updating", singleGroupStats.inPlaceUpdating) | ||
|
||
if singleGroupStats.running-(singleGroupStats.evicted+(singleGroupStats.inPlaceUpdating-1)) > shouldBeAlive { |
Can you explain this logic? Why do we have singleGroupStats.inPlaceUpdating-1 and not just singleGroupStats.inPlaceUpdating? Is it because the pod is already in the inPlaceUpdating phase (because of this line: if IsInPlaceUpdating(pod))?
Yeah, I believe that's the case.
If we have 3 pods in a deployment and the eviction/disruption tolerance is 1 (we can tolerate at most 1 pod being in-place resized/evicted at a time), then we need to check that at least 2 pods are "alive".
If the pod we are currently checking with canEvict is itself IsInPlaceUpdating, then we need to check whether it's actually stuck in deferred or infeasible so that we can fall back to eviction. That's what running - (evicted + (in-place-updating - 1)) > shouldBeAlive is doing: we have to ignore the current pod, since it's the same pod we are potentially trying to evict (hence the subtraction).
In this example it would be:
- shouldBeAlive = 2 (configured - eviction/disruption tolerance)
- running = 3 (replicas - pending pods)
- evicted = 0
- in-place-updating = 1 (singleGroupStats is initialized during NewPodsEvictionRestriction)
3 - (0 + (1 - 1)) > 2
3 > 2
so we would be able to fall back to eviction.
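For reference, a minimal self-contained sketch of how that check reads; the struct fields and function name below are illustrative stand-ins, not the PR's exact code:

```go
package eviction

// singleGroupStatsSketch mirrors the counters discussed above (illustrative only).
type singleGroupStatsSketch struct {
	configured        int     // desired replicas
	running           int     // replicas actually running
	evicted           int     // replicas already evicted in this pass
	inPlaceUpdating   int     // replicas with an in-place resize in flight
	evictionTolerance float64 // fraction of replicas we may disrupt at once
}

// canFallBackToEviction reports whether evicting the candidate pod still leaves
// enough replicas alive. When the candidate is itself mid in-place update, it is
// excluded from the "currently disrupted" count (that is the -1 discussed above).
func (s singleGroupStatsSketch) canFallBackToEviction(candidateIsInPlaceUpdating bool) bool {
	shouldBeAlive := s.configured - int(float64(s.configured)*s.evictionTolerance)
	updating := s.inPlaceUpdating
	if candidateIsInPlaceUpdating {
		updating-- // don't count the pod we may evict twice
	}
	return s.running-(s.evicted+updating) > shouldBeAlive
}
```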
vertical-pod-autoscaler/pkg/updater/priority/update_priority_calculator.go (two resolved, outdated comments)
//result := []*apiv1.Pod{}
result := []*PrioritizedPod{}
for num, podPrio := range calc.pods {
	if admission.Admit(podPrio.pod, podPrio.recommendation) {
Can you please explain what we check here? I thought the purpose of this function was to sort pods based on priority.
The purpose of the function is to return the pods sorted by priority as a list of PrioritizedPods instead of regular apiv1.Pods. They do the same thing, but this one returns the list of the wrapped type. We need it because we elected to put the disruptionless check within the PrioritizedPod struct, which may or may not be a scope problem, as John pointed out:
// TODO(jkyros): scope issues, maybe not the best place to put Disruptionless
func (p PrioritizedPod) IsDisruptionless() bool {
	return p.priority.Disruptionless
}
EDIT: If people are fine with including the disruptionless check in the PrioritizedPod struct, then I can remove the old GetSortedPods function (because I think it's not used anymore except in tests). If not, I will need to find a better way to check whether a pod should be disruptionless (maybe another level of wrapping or something).
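For illustration, a self-contained sketch of how a caller could consume the wrapped type; the planUpdates helper and the trimmed-down structs are hypothetical, and only IsDisruptionless mirrors the snippet above:

```go
package priority

import apiv1 "k8s.io/api/core/v1"

// Trimmed-down stand-ins for the real types, just enough for the sketch.
type PodPriority struct{ Disruptionless bool }

type PrioritizedPod struct {
	pod      *apiv1.Pod
	priority PodPriority
}

func (p PrioritizedPod) IsDisruptionless() bool {
	return p.priority.Disruptionless
}

// planUpdates is a hypothetical caller: it splits the already-sorted pods into
// those we would try to resize in place first and those we would evict.
func planUpdates(sorted []*PrioritizedPod) (inPlace, evict []*apiv1.Pod) {
	for _, pp := range sorted {
		if pp.IsDisruptionless() {
			inPlace = append(inPlace, pp.pod)
		} else {
			evict = append(evict, pp.pod)
		}
	}
	return inPlace, evict
}
```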
So, just to confirm my understanding: first, we sort the pods by PodPriority. Then, for each pod, we check whether it can be evicted based on the given recommendation. If it can be evicted, we add it to the result slice; otherwise, we log a message indicating that the pod cannot be evicted.
I think it would be more logical to first filter the pods and then sort the list afterwards, but because of this line:
type byPriorityDesc []PrioritizedPod
and the fact that we are returning []*PrioritizedPod, it might be an issue.
I can switch it around and loop through them to create a slice of pointers. That will probably be a bit better for complexity.
But regardless, I think I need to refactor this area of code a bit more. In particular, John wrote a TODO that I believe should make the new updater steps a bit clearer and more efficient:
// TODO(jkyros): I need to know the priority details here so I can use them to determine what we want to do to the pod
// previously it was just "evict" but now we have to make decisions, so we need to know
I am going to try to calculate the CanEvict and CanInPlaceUpdate checks before attempting them, so that we know exactly what to try and what not to try for each pod. Commits after 2fa0144 will be on this.
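Roughly, the "decide first, then act" idea could look like the sketch below; all names are illustrative and the eventual refactor may differ:

```go
package updater

// podAction is an illustrative enum for what the updater decides to do with a pod.
type podAction int

const (
	actionNone podAction = iota
	actionInPlaceResize
	actionEvict
)

// decideAction picks an action from pre-computed capability checks instead of
// attempting an update and reacting to its failure.
func decideAction(canInPlaceUpdate, canEvict, resizeStuck bool) podAction {
	switch {
	case canInPlaceUpdate && !resizeStuck:
		return actionInPlaceResize
	case canEvict:
		return actionEvict // in-place not possible (or stuck): fall back to eviction
	default:
		return actionNone
	}
}
```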
vertical-pod-autoscaler/pkg/updater/priority/update_priority_calculator.go (two resolved comments)
This just adds the UpdateModeInPlaceOrRecreate mode to the types so we can use it. I did not add InPlaceOnly, as that seemed contentious and it didn't seem like we had a good use case for it yet.
Signed-off-by: Max Cao <[email protected]>
So because of InPlacePodVerticalScaling, we can have a pod object whose resource spec is correct but whose status is not, because that pod may have been updated in-place after the original admission. This would have been ignored until now because "the spec looks correct", but we need to take the status into account as well if a resize is in progress. This commit:
- takes status resources into account for pods/containers that are being in-place resized
- makes sure that any pods that are "stuck" in-place updating (i.e. the node doesn't have enough resources, either temporarily or permanently) will still show up in the list as having "wrong" resources, so they can still get queued for eviction and be re-assigned to nodes that do have enough resources
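A hedged sketch of the spec-vs-status comparison described in this commit, using the ContainerStatus.Resources field populated when InPlacePodVerticalScaling is enabled; the PR's actual recommendation-matching logic is more involved:

```go
package resources

import apiv1 "k8s.io/api/core/v1"

// specMatchesStatus reports whether every container's requested CPU and memory
// in the spec are already reflected in its status. If a resize is still in
// flight (or stuck), the status lags behind the spec and this returns false.
func specMatchesStatus(pod *apiv1.Pod) bool {
	statusByName := map[string]apiv1.ContainerStatus{}
	for _, cs := range pod.Status.ContainerStatuses {
		statusByName[cs.Name] = cs
	}
	for _, c := range pod.Spec.Containers {
		cs, ok := statusByName[c.Name]
		if !ok || cs.Resources == nil {
			continue // no status resources reported; nothing to compare
		}
		if c.Resources.Requests.Cpu().Cmp(*cs.Resources.Requests.Cpu()) != 0 ||
			c.Resources.Requests.Memory().Cmp(*cs.Resources.Requests.Memory()) != 0 {
			return false
		}
	}
	return true
}
```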
This commit makes the eviction restrictor in-place update aware. While this possibly could be a separate restrictor or refactored into a shared "disruption restrictor", I chose not to do that at this time. I don't think eviction/in-place update can be completely separate as they can both cause disruption (albeit in-place less so) -- they both need to factor in the total disruption -- so I just hacked the in-place update functions into the existing evictor and added some additional counters for disruption tracking. While we have the pod lifecycle to look at to figure out "where we are" in eviction, we don't have that luxury with in-place, so that's why we need the additional "IsInPlaceUpdating" helper.
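One plausible shape for that helper, assuming the Pod.Status.Resize field exposed by the InPlacePodVerticalScaling alpha (newer Kubernetes releases move this into pod conditions); not necessarily the exact code in this PR:

```go
package eviction

import apiv1 "k8s.io/api/core/v1"

// IsInPlaceUpdating reports whether a resize has been requested for the pod
// and has not finished yet; unlike eviction, there is no separate lifecycle
// object to watch, only the pod's own status.
func IsInPlaceUpdating(pod *apiv1.Pod) bool {
	return pod.Status.Resize != ""
}

// isResizeStuck reports whether the kubelet has marked the resize as deferred
// or infeasible, which is when falling back to eviction makes sense.
func isResizeStuck(pod *apiv1.Pod) bool {
	return pod.Status.Resize == apiv1.PodResizeStatusDeferred ||
		pod.Status.Resize == apiv1.PodResizeStatusInfeasible
}
```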
The updater logic wasn't in-place aware, so I tried to make it so. The thought here is that we try to update in place if we can; if we can't, or if the resize gets stuck or can't satisfy the recommendation, we fall back to eviction. I tried to keep the "blast radius" small by stuffing the in-place logic into its own function and then falling back to eviction if it's not possible. It would be nice if we had some sort of "can the node support an in-place resize with the current recommendation" check, but that seemed like a whole other can of worms and math.
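A compressed sketch of that in-place-first, evict-on-failure flow, with the actuation steps passed in as callbacks so the sketch stays self-contained; the names are placeholders, not the PR's API:

```go
package updater

import apiv1 "k8s.io/api/core/v1"

// actuateRecommendation tries a disruption-free in-place resize first and only
// falls back to eviction when in-place is not allowed or the attempt fails.
func actuateRecommendation(
	pod *apiv1.Pod,
	canInPlace func(*apiv1.Pod) bool,
	inPlaceResize func(*apiv1.Pod) error,
	evict func(*apiv1.Pod) error,
) error {
	if canInPlace(pod) {
		if err := inPlaceResize(pod); err == nil {
			return nil
		}
		// The resize request failed or cannot be satisfied; fall back below.
	}
	return evict(pod)
}
```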
We might want to add a few more that are combined disruption counters, e.g. in-place + eviction totals, but for now just add some separate counters to keep track of what in-place updates are doing.
For now, this just updates the mock with the new functions I added to the eviction interface. We need some in-place test cases.
TODO(jkyros): come back here and look at this after you get it working
So far this is just:
- Make sure it scales when it can
But we still need a bunch of other ones, like:
- Test fallback to eviction
- Test timeout/eviction when it gets stuck, etc.
In the event that we can't perform the whole update, this calculates a set of updates that should be disruptionless and only queues that partial set, omitting the parts that would cause disruption.
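For context, whether a given container resource can be resized without a restart is declared through the upstream ContainerResizePolicy API; a hedged sketch of such a check (the helper name is made up):

```go
package updater

import apiv1 "k8s.io/api/core/v1"

// resizeIsRestartFree reports whether resizing the given resource on this
// container is declared restart-free by its resize policy. Resources without
// an explicit policy entry default to NotRequired upstream.
func resizeIsRestartFree(container apiv1.Container, resource apiv1.ResourceName) bool {
	for _, p := range container.ResizePolicy {
		if p.ResourceName == resource {
			return p.RestartPolicy == apiv1.NotRequired
		}
	}
	return true
}
```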
Signed-off-by: Max Cao <[email protected]>
Force-pushed from 6ea7337 to 53d4006 (compare)
Appreciate the review! I will respond to the other comments tomorrow; just wanted to get the easy stuff out of the way and keep things less cluttered.
Sure, take your time.
		klog.V(4).InfoS("Patched pod annotations", "pod", klog.KObj(res), "patches", string(patch))
	}
} else {
	err := fmt.Errorf("no patches to apply to %s", podToUpdate.Name)
Can we combine them into one? Maybe something like this:
klog.ErrorS("Failed to patch pod. no patches to apply", "pod", klog.KObj(podToUpdate))
I'm not sure that would work; I still need to return an err. Maybe this is better? WDYT?
err := fmt.Errorf("no resource patches were calculated to apply")
klog.ErrorS(err, "Failed to patch pod", "pod", klog.KObj(podToUpdate))
return err
Why do we need to log the error inside the function if the function is already returning an error that gets logged here:
klog.ErrorS(evictErr, "updating pod failed", "pod", klog.KObj(pod))
Maybe those logs are redundant?
Yeah, you're probably right. I'll just remove the log here, thanks!
EDIT: And I'll remove the logs in other places in the function where the errors are also getting returned.
Signed-off-by: Max Cao <[email protected]>
This is an important note/question. It may just be me being confused about the AEP details, but AFAIK there is no way for a user to guarantee that a resize request will be disruptionless. We can only guarantee that a resize will be disruptive (by configuring the container resize restart policy). So how can we dictate the resizing type based on the conditions in the AEP? e.g. in the AEP it states
Since the conditions are different, how can we actually make sure the first set of actions will actually be disruption free? I think this problem requires a third resizePolicy.
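For reference, the only user-facing knob today is the per-container resize policy: a container can declare that resizing a resource requires a restart, but nothing lets it demand the opposite. A minimal illustration using the upstream types:

```go
package example

import apiv1 "k8s.io/api/core/v1"

// A container that tolerates CPU resizes in place but must be restarted for
// memory changes. Nothing here can force a resize to be disruption-free.
var container = apiv1.Container{
	Name: "app",
	ResizePolicy: []apiv1.ContainerResizePolicy{
		{ResourceName: apiv1.ResourceCPU, RestartPolicy: apiv1.NotRequired},
		{ResourceName: apiv1.ResourceMemory, RestartPolicy: apiv1.RestartContainer},
	},
}
```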
What type of PR is this?
/kind feature
/kind api-change
What this PR does / why we need it:
This PR is an attempt to implement VPA in-place vertical scaling according to AEP-4016. It uses the VPA updater to actuate recommendations by sending resize patch requests to pods, which allows in-place resizes as enabled by the InPlacePodVerticalScaling feature flag in k8s 1.27.0 alpha and above (or by eventual graduation). It includes some e2e tests currently according to the AEP, but I am sure we will probably need more.
This PR is a continuation of #6652 started by @jkyros with a cleaner git commit history.
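For readers unfamiliar with the mechanism, a resize request is ultimately a patch of the pod's container resources; a rough client-go sketch follows (the container name and patch shape are illustrative, and newer Kubernetes versions route this through the "resize" subresource):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// requestCPUResize patches the named pod so that container "app" requests the
// given amount of CPU; with InPlacePodVerticalScaling enabled the kubelet can
// apply this without recreating the pod.
func requestCPUResize(ctx context.Context, c kubernetes.Interface, namespace, pod string, cpuMilli int64) error {
	patch := []byte(fmt.Sprintf(
		`{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"%dm"}}}]}}`,
		cpuMilli))
	_, err := c.CoreV1().Pods(namespace).Patch(ctx, pod,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```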
Which issue(s) this PR fixes:
Fixes #4016
Special notes for your reviewer:
Notable general areas of concern:
- a containerResize policy that currently does not exist in the k8s API
- PodDisruptionBudgets
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: