Interactive access to GPUs
=== NOTE ===
We have now re-deployed our GPU server as a UI, so all users with an account on our local LDAP can access it. The hostname is '''ui00.lhep.unibe.ch'''.
The procedure described below is therefore obsolete.
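Since the machine is now an ordinary UI, access is a plain SSH login with your LDAP credentials; a minimal sketch (the username is a placeholder, substitute your own LDAP account):

```shell
# Log in to the re-deployed GPU UI; "your_ldap_user" is a placeholder.
ssh your_ldap_user@ui00.lhep.unibe.ch
```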
{{Strikethroughdiv|=== NOTE ===
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it if higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. 'srun'. Work in progress ...
You also have the option to use the local disk on the node as TMPDIR for Singularity, by setting the following before invoking singularity:
export SINGULARITY_TMPDIR=/state/partition1}}
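For reference, the struck-through procedure above had users drive the GPUs through Slurm client commands such as 'srun'. A hedged sketch of an interactive GPU request follows; the partition name and resource flags are assumptions, so check what `sinfo` reports before using them:

```shell
# Request one GPU and an interactive shell via Slurm.
# "--partition=gpu" and "gpu:1" are assumptions; adjust per 'sinfo'.
srun --partition=gpu --gres=gpu:1 --pty bash -i
```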
Revision as of 15:27, 15 April 2025