Containers running in AKS can't write to Lustre if the user is not root. #194
@pinduzera Thanks for bringing this up. I was able to see similar behavior in some circumstances, so I will explain what is going on with the driver and what can be done to improve this behavior.

The ownership/permissions exposed to a pod running as a non-root user are, by default, the same as they would be if you mounted the Lustre cluster manually with the client. So, if the permissions on the files in the Lustre cluster are correct for your user, it will work as expected. However, it does appear that an AMLFS (Azure Managed Lustre Filesystem) cluster is first created upon deployment with an empty filesystem whose root directory is owned by root, so a non-root user cannot write until its ownership or permissions are changed.

That works for cases where the user is already manually creating the AMLFS cluster and performing other management tasks with it, but I understand that this may not be as user-friendly for more automated approaches or for multiple clusters.

So, one option using the AKS/Kubernetes built-in functionality is to set `fsGroup` in the pod's `securityContext`:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: lustre-echo-date
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
    fsGroupChangePolicy: "Always" # Or "OnRootMismatch"
  nodeSelector:
    ...
```

This would allow AKS to change the gid of all the files/directories within that Lustre cluster mount to the specified `fsGroup` (in this case gid 1000).

I realized when testing this that our driver does need an additional change for this to work, and I will be submitting a PR to improve this. Until then, you can apply this `CSIDriver` object manually:

```yaml
---
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: azurelustre.csi.azure.com
spec:
  fsGroupPolicy: File
  attachRequired: false
  podInfoOnMount: true
```

This changes `fsGroupPolicy` from its default (`ReadWriteOnceWithFSType`, which only applies the ownership change for certain volume types) to `File`, so Kubernetes will always attempt the `fsGroup` ownership change on mount.

Again, I will be making changes for the next release so the driver will 'just work' for the end user with these options, but I hope that these suggestions can allow you to make progress until it is officially supported in the driver.
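To see roughly what these settings do on the volume, here is a local sketch (on a temporary directory, not a real Lustre mount) of the ownership change kubelet applies when `fsGroup` is set: it changes the group of the volume's files to the `fsGroup` gid and sets the setgid bit on directories so new files inherit that group.

```shell
# Illustration only: the current user's gid stands in for fsGroup 1000,
# and the chgrp/chmod below mimic what kubelet does, not driver code.
dir=$(mktemp -d)
chgrp "$(id -g)" "$dir"   # kubelet: chgrp <fsGroup> on each file/dir
chmod 2770 "$dir"         # kubelet: group rwx plus the setgid bit on dirs
stat -c '%a' "$dir"       # prints 2770
```

With `fsGroupPolicy: File` in place, kubelet performs this recursively across the mount according to `fsGroupChangePolicy` (`Always` on every mount, `OnRootMismatch` only when the volume root's ownership differs).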
Is your feature request related to a problem?/Why is this needed
Can't write to the Lustre filesystem in AKS from containers running as any user other than root.
Describe the solution you'd like in detail
Allow write access for containers running as non-root users.
Describe alternatives you've considered
Tried all the squash permission options with distinct UID/GID and IP considerations; nothing has worked. The containers were able to read, but not write.
Additional context
The documentation has an example, but it assumes the container is running as the root user, which some enterprises can't allow under their security policies. The documentation is also not clear on whether there is a way to handle permissions for the filesystem.