
Pod Annotator only sets the first container in the list and ignores init containers. #25

Open
kannon92 opened this issue Aug 24, 2024 · 4 comments

Comments

@kannon92
Contributor

The code below patches only the first container of the pod. There are two bugs and one question I see here:

  1. An init container can also need access to a GPU, but init containers are never patched, so one may not be able to get the MIG slice for the init container.

  2. A pod can have a list of containers, and this code only edits the first container in the list.

Question: Ephemeral containers are also ignored, but I am not sure whether they need access to the MIG slice.

func (a *PodAnnotator) Handle(ctx context.Context, req admission.Request) admission.Response {
	...
	pod.Spec.Containers[0].EnvFrom = append(pod.Spec.Containers[0].EnvFrom, v1.EnvFromSource{
		ConfigMapRef: &v1.ConfigMapEnvSource{
			LocalObjectReference: v1.LocalObjectReference{Name: configMapName},
		},
	})

	// Add extended resource to resource limits
	if pod.Spec.Containers[0].Resources.Limits == nil {
		pod.Spec.Containers[0].Resources.Limits = make(v1.ResourceList)
	}
	pod.Spec.Containers[0].Resources.Limits[v1.ResourceName(extendedResourceName)] = resource.MustParse("1")

	// Marshal the updated pod object back to JSON
	marshaledPod, err := json.Marshal(pod)
	if err != nil {
		return admission.Errored(500, fmt.Errorf("could not marshal pod: %v", err))
	}

	// Return the patch response
	return admission.PatchResponseFromRaw(req.Object.Raw, marshaledPod)
}
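
For illustration, here is a minimal sketch of how the handler could apply the same patch to every container and init container instead of only Containers[0]. The helper name injectMIGResources is hypothetical, and it reuses the configMapName and extendedResourceName values from the snippet above; this is only a sketch of the suggested fix, not the project's actual code:

	// injectMIGResources applies the same ConfigMap env source and
	// extended-resource limit that the handler currently sets only on Containers[0].
	func injectMIGResources(c *v1.Container, configMapName, extendedResourceName string) {
		c.EnvFrom = append(c.EnvFrom, v1.EnvFromSource{
			ConfigMapRef: &v1.ConfigMapEnvSource{
				LocalObjectReference: v1.LocalObjectReference{Name: configMapName},
			},
		})
		if c.Resources.Limits == nil {
			c.Resources.Limits = make(v1.ResourceList)
		}
		c.Resources.Limits[v1.ResourceName(extendedResourceName)] = resource.MustParse("1")
	}

	// Inside Handle, loop over both regular and init containers so none are skipped.
	// Ephemeral containers live in pod.Spec.EphemeralContainers and use a different
	// type (v1.EphemeralContainer), so they would need separate handling if required.
	for i := range pod.Spec.Containers {
		injectMIGResources(&pod.Spec.Containers[i], configMapName, extendedResourceName)
	}
	for i := range pod.Spec.InitContainers {
		injectMIGResources(&pod.Spec.InitContainers[i], configMapName, extendedResourceName)
	}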

@asm582
Contributor

asm582 commented Aug 26, 2024

Thanks for bringing this up; this is by design.

  • Can the init container use the same slice as the actual "workload" container?
  • Do pods with multiple containers share the same slice, or do they need different slices?

@asm582
Contributor

asm582 commented Oct 8, 2024

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale label on Jan 7, 2025
@kannon92
Contributor Author

kannon92 commented Jan 7, 2025

/remove-lifecycle stale

@openshift-ci openshift-ci bot removed the lifecycle/stale label on Jan 7, 2025