
Please, support inline (ephemeral) volumes or suggest an alternative approach for externally provisioned volumes #783

Closed
jan-hudec opened this issue May 20, 2024 · 6 comments · Fixed by #905
Labels: kind/feature

Comments

@jan-hudec

Is your feature request related to a problem?/Why is this needed

I have a Kubernetes application that needs to access an SMB3 share exposed by a non-Kubernetes server.

In the past I used the flexVolume driver juliohm/cifs, specified directly in the pod spec of the deployment. But that driver has disappeared and can no longer be easily installed, so I am looking to replace it with a CSI one.

Unfortunately, inline volumes are not supported, and creating a PV for this feels wrong, because the PV is not managed by the cluster. Instead, it is a specific set of mount parameters to be used by that app and that app only.

Describe the solution you'd like in detail

spec:
  volumes:
    - name: legacyappshare
      csi:
        driver: smb.csi.k8s.io
        volumeAttributes:
          source: //legacyapp.intranet.local/share
        nodePublishSecretRef:
          name: legacyapp-credentials

in the pod spec.
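
For context, Kubernetes already has a generic mechanism for this: CSI ephemeral inline volumes, which a driver opts into by advertising the Ephemeral volume lifecycle mode on its CSIDriver object. A minimal sketch of what enabling that could look like for this driver (adding Ephemeral is the assumption here; Persistent is what the existing PV/PVC flow uses):

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: smb.csi.k8s.io
spec:
  # Ephemeral lets the csi: block appear directly in a pod spec, as in the
  # example above; Persistent keeps the existing PV/PVC flow working.
  volumeLifecycleModes:
    - Persistent
    - Ephemeral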

Describe alternatives you've considered

  1. I can create a PV and PVC for that volume, but

    • I have to be careful not to damage the content with the lifecycle hooks. I do not want it cleaned up on either provisioning or release, since it is provisioned externally.
    • It feels wrong to have a non-namespaced PV for something that is inherently specific to the application.
      (A statically provisioned PV that mitigates the first point is sketched after this list.)
  2. I can make the pod privileged and simply run a

     mount -t cifs //legacyapp.intranet.local/share /mnt/legacyapp -odomain=d,username=$USER,password=$PASSWORD
    

    inside.

  3. I could even modify the application to use a userland CIFS/SMB3 library.
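
Regarding alternative 1, the lifecycle worry can be reduced with a statically provisioned PV: persistentVolumeReclaimPolicy: Retain keeps the cluster from touching the externally managed content on release, and an empty storageClassName keeps any provisioner out of the loop. A rough sketch, assuming the driver's static-provisioning convention of passing credentials via nodeStageSecretRef (the volumeHandle and secret namespace below are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: legacyapp-share
spec:
  capacity:
    storage: 1Gi                          # required by the API; not enforced for SMB
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # never clean up the external share
  storageClassName: ""
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: legacyapp.intranet.local/share   # any cluster-unique ID
    volumeAttributes:
      source: //legacyapp.intranet.local/share
    nodeStageSecretRef:
      name: legacyapp-credentials
      namespace: default                  # placeholder: wherever the secret lives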

Additional context

I've seen some talk about this not being implemented for NFS for security reasons, but because of the last option above I don't believe there actually are security reasons here. The application operator can access the share anyway if they have the credentials, and mounting it doesn't consume more resources than accessing it in some other way.

@andyzhangx added the kind/feature label on May 22, 2024
@somejfn

somejfn commented May 24, 2024

Also looking for this, please. There's a lot of value in not having to grant PV creation to non-admin users, as the equivalent Azure CSI driver allows.

@pcking999

Just want to add that this feature would be extremely useful for me.

@ninlil

ninlil commented Oct 23, 2024

We have one namespace per development team on our clusters, and having a cluster-wide declared PV is a huge security issue for us. We need a way to ensure that one team can't mount an SMB share using a PV with credentials from another team.
If this is already possible, then please explain how this is achieved.

@yrro

yrro commented Oct 23, 2024

Create the PVC first. When you create the PV, ensure you name it so that the PVC attaches to it. Once the PVC is attached, the PV can't be used by any other PVC.

If your cluster can enforce that containers run with a given UID, make use of that in the PV's mount options, so that even if the wrong namespace attaches to the PV, the kernel will prevent processes inside the pod from accessing any files.
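
For completeness, the reservation can be made explicit instead of relying on creation order: a PV's claimRef names the one PVC allowed to bind it, and the PVC's volumeName points back at the PV. A minimal sketch of the pairing (team names, share path, and secret are placeholders), with uid/gid mount options along the lines described above:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: team-a-smb
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  claimRef:                        # reserve this PV for one specific PVC
    namespace: team-a
    name: smb-share
  mountOptions:
    - uid=1000                     # match the UID the team's pods run as
    - gid=1000
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: team-a-smb       # any cluster-unique ID
    volumeAttributes:
      source: //server.example/team-a
    nodeStageSecretRef:
      name: team-a-smb-creds
      namespace: team-a
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: smb-share
  namespace: team-a
spec:
  volumeName: team-a-smb           # bind to exactly this PV
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi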

@ninlil

ninlil commented Oct 25, 2024

Yes, I can do that, but the teams have no access to cluster-wide resources, so they can't do it themselves... and I don't want to be a blocker to the teams using SMB shares because I have to create a PV whenever they ask for one (and have the time).
By supporting the 'inline' method, this driver would enable a secure self-service scenario that I think a lot of organizations would like.

But until inline is supported, this is the best we can do, even if it means the teams wait in line. (Because we are not relaxing RBAC to allow any team to create/edit a PV... then all h_ll can break loose.)

@Kevinkevin189

This feature would help me a lot.
In my scenario, an old SMB share point, which fronts a remote piece of equipment's data collection and export, shares its measurement data via an SMB server. The volume is provisioned externally and I just want to use it cluster-wide, so I created it as a PV with no storage class, but it always reports a 'username is not specified' parameter error. This seems to be a bug.
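
If the error is literally about a missing username (my assumption, not a confirmed diagnosis), it is worth checking that the static PV carries a secret reference such as nodeStageSecretRef, as in the sketches above, and that the referenced secret has the username and password keys the driver expects, e.g.:

apiVersion: v1
kind: Secret
metadata:
  name: smbcreds            # placeholder; referenced from the PV's nodeStageSecretRef
  namespace: default        # placeholder; must match the secret namespace in the PV
type: Opaque
stringData:
  username: measurement-reader   # placeholder account
  password: changeme             # placeholder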
