Is your feature request related to a problem?
We have a project that creates and deletes many different types of external resources via custom resources in Kubernetes. One of those resources is AWS RDS instances. For many of our external resources we have custom operators, and they all behave the same way: we keep the finalizer in place on the Kubernetes resource until the external resource is completely destroyed. This has the nice benefit of preventing orphans: if for some reason the deletion fails to complete, the resource is still visible in Kubernetes and we'll see it. A rough sketch of that pattern is below.
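For context, this is roughly the pattern our operators follow, as a minimal controller-runtime sketch (not our actual code; the finalizer name and the `externalResourceGone`/`deleteExternal` helpers are hypothetical placeholders for real cloud API calls):

```go
package main

import (
	"context"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

const finalizerName = "example.com/external-cleanup" // hypothetical finalizer

func reconcileDelete(ctx context.Context, c client.Client, obj client.Object) (ctrl.Result, error) {
	if !controllerutil.ContainsFinalizer(obj, finalizerName) {
		return ctrl.Result{}, nil // nothing left to clean up
	}
	gone, err := externalResourceGone(ctx, obj) // hypothetical cloud API check
	if err != nil {
		return ctrl.Result{}, err
	}
	if !gone {
		// Trigger deletion (idempotent) and requeue. The finalizer is NOT
		// removed yet, so the Kubernetes object stays visible until the
		// external resource is confirmed destroyed.
		if err := deleteExternal(ctx, obj); err != nil {
			return ctrl.Result{}, err
		}
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	}
	// External resource fully destroyed: now it is safe to drop the finalizer.
	controllerutil.RemoveFinalizer(obj, finalizerName)
	return ctrl.Result{}, c.Update(ctx, obj)
}

// Hypothetical helpers standing in for real cloud API calls.
func externalResourceGone(ctx context.Context, obj client.Object) (bool, error) { return false, nil }
func deleteExternal(ctx context.Context, obj client.Object) error               { return nil }
```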
It seems that the ACK project doesn't work that way: the finalizer and the DBInstance resource are removed from etcd while the RDS instance is still being terminated. That occasionally causes issues: if a user recreates a DBInstance with the same name, creation fails because the older instance is still being cleaned up. I don't know whether there is a clear Kubernetes guideline on finalizer behavior for this case or not.
Describe the solution you'd like
I see there is a --deletion-policy flag that can be passed to the controller. One idea is to add another possible value for that flag that waits for the RDS instance to be fully removed before removing the finalizer and deleting the resource. Something like wait-for-delete?
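To make the idea concrete, here is a hedged sketch of what such a policy could do during deletion (this is not ACK's actual code, just an illustration using the AWS SDK for Go v2): keep requeuing until DescribeDBInstances reports the instance gone, and only then allow the finalizer to be removed.

```go
package main

import (
	"context"
	"errors"
	"time"

	"github.com/aws/aws-sdk-go-v2/service/rds"
	"github.com/aws/aws-sdk-go-v2/service/rds/types"
	ctrl "sigs.k8s.io/controller-runtime"
)

// waitForRDSDeletion requeues until the RDS instance no longer exists.
// An empty Result with no error would signal "safe to remove the finalizer".
func waitForRDSDeletion(ctx context.Context, api *rds.Client, id string) (ctrl.Result, error) {
	_, err := api.DescribeDBInstances(ctx, &rds.DescribeDBInstancesInput{
		DBInstanceIdentifier: &id,
	})
	var notFound *types.DBInstanceNotFoundFault
	if errors.As(err, &notFound) {
		// Instance fully gone; the finalizer can be removed now.
		return ctrl.Result{}, nil
	}
	if err != nil {
		return ctrl.Result{}, err
	}
	// Instance still exists (e.g. in "deleting" status); check again later.
	return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
}
```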
Describe alternatives you've considered
As a workaround for now, we'll probably introduce some randomness into DBInstance names to force them to be unique.
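Something along these lines, for illustration (a minimal sketch; the base name is made up), so a recreated DBInstance never collides with one still terminating in AWS:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// uniqueDBInstanceName appends a short random suffix to a base name.
func uniqueDBInstanceName(base string) string {
	b := make([]byte, 4)
	rand.Read(b) // error ignored for brevity in this sketch
	return fmt.Sprintf("%s-%s", base, hex.EncodeToString(b))
}

func main() {
	fmt.Println(uniqueDBInstanceName("orders-db")) // e.g. orders-db-a1b2c3d4
}
```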