Is your feature request related to a problem? Please describe.
I've configured Descheduler in a test cluster, and I find it a little worrying to have it running: while the policy format is very descriptive, it is not very discriminating about which workloads it touches.
Describe the solution you'd like
What I'd like to do is set up different policies, potentially even with different evictors, for different use cases within the same cluster. The best way I can think of to handle this is a CRD that I could apply in each namespace where I want the descheduler to apply a policy (see the sketch below). The next option I could think of is to keep using ConfigMaps, but read them from the Kubernetes API instead of from a volume mounted in the pod.
In any case, I am running only a single instance of Descheduler; this is NOT a request to allow me to run multiple Descheduler instances in the same cluster. I just want to run one instance of Descheduler and have it apply policies per namespace, like a built-in controller would.
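To make the idea concrete, here is a rough sketch of what such a namespaced policy object might look like. Everything here is hypothetical: the API group `descheduler.example.io/v1alpha1` and the kind `NamespacedDeschedulerPolicy` are invented for illustration, and the spec just loosely mirrors the existing `DeschedulerPolicy` file format:

```yaml
# Hypothetical CRD instance -- this group/version and kind do not exist today.
apiVersion: descheduler.example.io/v1alpha1
kind: NamespacedDeschedulerPolicy
metadata:
  name: team-a-policy
  namespace: team-a          # the policy would apply only to pods in this namespace
spec:
  # Field layout loosely mirrors the existing v1alpha2 DeschedulerPolicy format.
  profiles:
    - name: team-a-defaults
      plugins:
        deschedule:
          enabled:
            - RemovePodsViolatingNodeTaints
        balance:
          enabled:
            - RemoveDuplicates
```

The single descheduler instance would watch these objects cluster-wide and, in my imagined design, merge each namespace's policy over a cluster-level baseline, which is what would let a platform team enforce a minimum standard while app teams tune the rest.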
Describe alternatives you've considered
What version of descheduler are you using?
descheduler version: latest
Additional context
I'm a platform admin, and while I can use labels and namespace selectors to target pods, the monolithic policy is not as flexible as I need it to be. Our clusters will run a very diverse set of apps. Each app team has its own namespace in the cluster, and we want to give the app teams the final say in whether an eviction policy is applied to their workloads. They cannot edit the ConfigMap currently used by Descheduler, and we cannot accommodate the flexibility they need without making the config file overly complex. Having the descheduler policy defined in a single monolithic ConfigMap prevents our app teams from adjusting the policy to suit their workloads, and prevents our team from enforcing the organization's minimum standard in a way that is flexible and fair to all of our app teams.
Hello, thank you for mentioning this. I feel that in my use case it isn't going to allow me to achieve the goal that I detailed in the "additional context" field of the issue.