<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.lhep.unibe.ch/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lhep</id>
	<title>LHEP Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.lhep.unibe.ch/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lhep"/>
	<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php/Special:Contributions/Lhep"/>
	<updated>2026-05-14T03:11:11Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.8</generator>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Job_submission_and_management_with_NorduGrid_ARC&amp;diff=953</id>
		<title>Job submission and management with NorduGrid ARC</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Job_submission_and_management_with_NorduGrid_ARC&amp;diff=953"/>
		<updated>2026-05-04T09:51:14Z</updated>

		<summary type="html">&lt;p&gt;Lhep: /* arcinfo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Prerequisites ===&lt;br /&gt;
Job submission to our cluster occurs via [https://www.nordugrid.org/arc/arc6/ Nordugrid ARC middleware]. You need:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Grid user certificate&#039;&#039;&#039;&lt;br /&gt;
This is best obtained from CERN: [https://ca.cern.ch/ca/ CERN CA]. Being registered with CERN HR and having a CERN computer account are the pre-requisites. Users with FNAL accounts can obtain a certificate from the relevant FERMILAB CA. From those not eligible to either solution above, there is the possibility of requesting a certificate via the [https://www.scits.unibe.ch/services/escience_certificates/ Swiss CA]&lt;br /&gt;
&lt;br /&gt;
In order to carry out the following steps, your user certificate and private key must be installed in the .globus subdirectory of your home directory on the client machine. You can find instructions on how to do that in [https://atlassoftwaredocs.web.cern.ch/ASWTutorial/basicSetup/grid_vo/#using-the-grid-certificate here]. While this focuses on ATLAS, the steps to extract and install the certificate are  independent from the specific experiment.&lt;br /&gt;
&lt;br /&gt;
Have the certificate registered with a Virtual Organisation (VO), e.g. for ATLAS, browse to the [https://lcg-voms2.cern.ch:8443/voms/atlas/user/home.action ATLAS VOMS service]. The prerequisite for this step is having installed your personal certificate in your browser&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;ARC client&#039;&#039;&#039;&lt;br /&gt;
Users operating from the UIs (recommended) can setup the client from CVMFS:&lt;br /&gt;
&lt;br /&gt;
 # export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase&lt;br /&gt;
 # source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh&lt;br /&gt;
 # lsetup emi&lt;br /&gt;
&lt;br /&gt;
Otherwise installing a local client on any linux machine is possible by following the instructions for your specific operating system: [http://www.nordugrid.org/documents/arc-client-install.html Install ARC client]&lt;br /&gt;
&lt;br /&gt;
=== arcinfo ===&lt;br /&gt;
First, verify that the user interface is installed and you can see the submission endpoints. We have 3 CEs:&lt;br /&gt;
&lt;br /&gt;
 # arcinfo --version&lt;br /&gt;
 # arcinfo -c ce00.lhep.unibe.ch:&#039;&#039;&#039;8443&#039;&#039;&#039;&lt;br /&gt;
 # arcinfo -c ce02.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039;&lt;br /&gt;
 # arcinfo -c ce03.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== arcproxy / voms-proxy-init/info ===&lt;br /&gt;
Generates proxy with your credentials with one of these two commands:&lt;br /&gt;
&lt;br /&gt;
 # arcproxy --voms &amp;lt;yourVO&amp;gt;  (e.g. atlas)&lt;br /&gt;
 # voms-proxy-int --voms &amp;lt;yourVO)&lt;br /&gt;
&lt;br /&gt;
View the generated proxy information with one of these two commands&lt;br /&gt;
&lt;br /&gt;
 # arcproxy -I&lt;br /&gt;
 # voms-proxy-info&lt;br /&gt;
&lt;br /&gt;
=== Job submission ===&lt;br /&gt;
Check to see if you can submit a trivial job test.xrsl:&lt;br /&gt;
&lt;br /&gt;
 # vi test.xrsl&lt;br /&gt;
 &amp;amp;&lt;br /&gt;
 (executable = /usr/bin/env)&lt;br /&gt;
 (jobname = &amp;quot;test&amp;quot;)&lt;br /&gt;
 (stdout=test.log)&lt;br /&gt;
 (* join stdout and stderr *)&lt;br /&gt;
 (join=yes)&lt;br /&gt;
 (gmlog=log)&lt;br /&gt;
 (* by default in minutes *)&lt;br /&gt;
 (wallTime=&amp;quot;100&amp;quot;)&lt;br /&gt;
 (* by default in MB *)&lt;br /&gt;
 (memory=1000)&lt;br /&gt;
 (queue=&amp;quot;CLUSTER&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce00.lhep.unibe.ch:&#039;&#039;&#039;8443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
 ...&lt;br /&gt;
 Job submitted with jobid: &amp;lt;job ID&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce02.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce03.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
&lt;br /&gt;
In the job description file, one can also add a list of input files to upload to the cluster at job submission and a list of output files to retrieve once execution is FINISHED.&lt;br /&gt;
&lt;br /&gt;
Additional resources on job submission and job description:&lt;br /&gt;
&lt;br /&gt;
[https://www.nordugrid.org/arc/arc7/users/submit_job.html arc7 submit job]&lt;br /&gt;
&lt;br /&gt;
[http://www.nordugrid.org/documents/xrsl.pdf xrls job specification reference manual]&lt;br /&gt;
&lt;br /&gt;
=== arcls ===&lt;br /&gt;
Lists the job directory on the cluster&lt;br /&gt;
&lt;br /&gt;
 # arcls &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arcstat ===&lt;br /&gt;
Prints job status and some additional information from cluster, such as jobid, name, and status:&lt;br /&gt;
&lt;br /&gt;
 # arcstat -c ce00.lhep.unibe.ch&lt;br /&gt;
 # arcstat &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arccat ===&lt;br /&gt;
Prints the job stdout/stderr while the job is running on the cluster&lt;br /&gt;
&lt;br /&gt;
 # arccat &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arckill === &lt;br /&gt;
Kill one job, a list of jobs in filename or all jobs&lt;br /&gt;
&lt;br /&gt;
 # arckill &amp;lt;job id&amp;gt;&lt;br /&gt;
 # arckill filename&lt;br /&gt;
 # arckill -a&lt;br /&gt;
&lt;br /&gt;
=== arcget ===&lt;br /&gt;
When the status of the task is FINISHED, you can download the results of one specific task or multiple tasks. These and up in directories created by the download client and named after the job id&#039;s. In the example above, only the stdout/stderr file test.log will be downloaded as there are no further output files specified:&lt;br /&gt;
&lt;br /&gt;
 # arcget &amp;lt;job id&amp;gt;&lt;br /&gt;
 Results stored at: 4fQLDmY3BxjnmmR0Xox1SiGmABFKDmABFKDmvxHKDmABFKDmiPhU9m&lt;br /&gt;
 Jobs processed: 1, successfully retrieved: 1, successfully cleaned: 1&lt;br /&gt;
&lt;br /&gt;
or results of all completed tasks:&lt;br /&gt;
&lt;br /&gt;
 # arcget -a&lt;br /&gt;
&lt;br /&gt;
or all tasks in the list in the joblist.xml file:&lt;br /&gt;
&lt;br /&gt;
 # arcget -i joblist.xml&lt;br /&gt;
&lt;br /&gt;
=== Advanced xrls using DTR and singularity ===&lt;br /&gt;
&lt;br /&gt;
In this example we will use the DTR (Data TRansfer) feature of ARC to specify input files to copy to the cluster and retrieve the outputs. In addition, the job is setup to run in a container which is the default way for ATLAS.&lt;br /&gt;
&lt;br /&gt;
 # cat arc_test-lhep-8core.xrls&lt;br /&gt;
 &amp;amp;(executable=5333700233-8core.sh)&lt;br /&gt;
 (inputfiles=(container_script-8core.sh container_script-8core.sh)(my_release_testsetup.sh   my_release_testsetup.sh)(HITS.25720417._000948.pool.root.1 HITS.25720417._000948.pool.root.1)(RDO.26819764._019417.pool.root.1 RDO.26819764._019417.pool.root.1)(RDO.26819764._016665.pool.root.1 RDO.26819764._016665.pool.root.1)(RDO.26811885._009174.pool.root.1 RDO.26811885._009174.pool.root.1)(RDO.26819766._027499.pool.root.1 RDO.26819766._027499.pool.root.1)(RDO.26811885._004931.pool.root.1 RDO.26811885._004931.pool.root.1))&lt;br /&gt;
 (arguments=&amp;quot;&amp;quot;)&lt;br /&gt;
 (gmlog=&amp;quot;gridlog&amp;quot;)&lt;br /&gt;
 (stdout=mc20_lhep_8core.log)(join=yes)&lt;br /&gt;
 (memory=&amp;quot;1800&amp;quot;)&lt;br /&gt;
 (wallTime=&amp;quot;2870&amp;quot;)&lt;br /&gt;
 (count = &amp;quot;8&amp;quot;)&lt;br /&gt;
 (countpernode = &amp;quot;8&amp;quot;)&lt;br /&gt;
 (queue= &amp;quot;CLUSTER&amp;quot; )&lt;br /&gt;
 (jobname=mc20_lhep_8core)&lt;br /&gt;
 (outputfiles=(&amp;quot;/&amp;quot; &amp;quot;&amp;quot;))&lt;br /&gt;
&lt;br /&gt;
This specific job is a clone of Panda JobID 5333700233 to run on 8 cores within singularity&lt;br /&gt;
&lt;br /&gt;
NOTE: in this example the input files are uploaded from the local user directory, meaning you have to download them with rucio first. You can find the URI of the files to download with:&lt;br /&gt;
&lt;br /&gt;
 # lsetup rucio&lt;br /&gt;
 # rucio list-file-replicas mc16_13TeV:HITS.25720417._000948.pool.root.1&lt;br /&gt;
&lt;br /&gt;
In alternative one can specify to download the file to the cluster directly from the source on the remote SE. In such case the input file definition in the xlrs is:&lt;br /&gt;
&lt;br /&gt;
 (... HITS.25720417._000948.pool.root.1 rucio://rucio-lb-prod.cern.ch/replicas/HITS.25720417._000948.pool.root.1 ...)&lt;br /&gt;
&lt;br /&gt;
The executable script:&lt;br /&gt;
&lt;br /&gt;
 # cat 5333700233-8core.sh &lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
 chmod +x container_script-8core.sh&lt;br /&gt;
 &lt;br /&gt;
 if [ -z &amp;quot;$ATLAS_LOCAL_ROOT_BASE&amp;quot; ]; then export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase; fi;&lt;br /&gt;
 export thePlatform=&amp;quot;x86_64-centos7-gcc8-opt&amp;quot;&lt;br /&gt;
 source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh -c $thePlatform -s /srv/my_release_testsetup.sh -r /srv/container_script-8core.sh -e &amp;quot;-i -p -B /grid/lustre&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
&lt;br /&gt;
The container script which is launched by the executable script with singularity:&lt;br /&gt;
&lt;br /&gt;
 # cat container_script-8core.sh&lt;br /&gt;
 export TMPDIR=/srv&lt;br /&gt;
 export GFORTRAN_TMPDIR=/srv&lt;br /&gt;
 export ATHENA_PROC_NUMBER=8&lt;br /&gt;
 export ATHENA_CORE_NUMBER=8&lt;br /&gt;
 /usr/bin/time -f &amp;quot;%P %M&amp;quot;  Reco_tf.py --inputHITSFile=&amp;quot;HITS.25720417._000948.pool.root.1&amp;quot; --asetup=&amp;quot;RDOtoRDOTrigger:Athena,21.0.20.12&amp;quot; --maxEvents=&amp;quot;10000&amp;quot; --multithreaded=&amp;quot;True&amp;quot; --postInclude &amp;quot;default:PyJobTransforms/UseFrontier.py&amp;quot; &amp;quot;all:PyJobTransforms/DisableFileSizeLimit.py&amp;quot; --preInclude &amp;quot;all:Campaigns/MC20a.py&amp;quot; --skipEvents=&amp;quot;0&amp;quot; --autoConfiguration=&amp;quot;everything&amp;quot; --conditionsTag &amp;quot;default:OFLCOND-MC16-SDR-RUN2-09&amp;quot; &amp;quot;RDOtoRDOTrigger:OFLCOND-MC16-SDR-RUN2-08-02a&amp;quot; --geometryVersion=&amp;quot;default:ATLAS-R2-2016-01-00-01&amp;quot; --runNumber=&amp;quot;364681&amp;quot; --digiSeedOffset1=&amp;quot;313&amp;quot; --digiSeedOffset2=&amp;quot;313&amp;quot; --inputRDO_BKGFile=&amp;quot;RDO.26811885._004931.pool.root.1,RDO.26811885._009174.pool.root.1,RDO.26819764._016665.pool.root.1,RDO.26819764._019417.pool.root.1,RDO.26819766._027499.pool.root.1&amp;quot; --AMITag=&amp;quot;r13167&amp;quot; --steering &amp;quot;doOverlay&amp;quot; &amp;quot;doRDO_TRIG&amp;quot; --outputAODFile=&amp;quot;AOD.27857976._000879.pool.root.1&amp;quot; --jobNumber=&amp;quot;313&amp;quot; --triggerConfig=&amp;quot;RDOtoRDOTrigger=MCRECO:DBF:TRIGGERDBMC:2283,35,327&amp;quot;&lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
&lt;br /&gt;
The script to setup the ATLAS version for the job, which is also called by the executable script:&lt;br /&gt;
&lt;br /&gt;
 # cat my_release_testsetup.sh &lt;br /&gt;
 source $AtlasSetup/scripts/asetup.sh Athena,22.0.41.8,notest --platform x86_64-centos7-gcc8-opt --makeflags=&#039;$MAKEFLAGS&#039;&lt;br /&gt;
&lt;br /&gt;
NOTE: the last line of the xrls instructs ARC to download all the contents of the job directory once execution has FINISHED on the cluster. One can in alternative, specify a list of files of interest with a syntax for &amp;quot;local files&amp;quot; as used for the input files in the example xrls above.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Job_submission_and_management_with_NorduGrid_ARC&amp;diff=952</id>
		<title>Job submission and management with NorduGrid ARC</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Job_submission_and_management_with_NorduGrid_ARC&amp;diff=952"/>
		<updated>2026-05-04T06:15:28Z</updated>

		<summary type="html">&lt;p&gt;Lhep: /* Job submission */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Prerequisites ===&lt;br /&gt;
Job submission to our cluster occurs via [https://www.nordugrid.org/arc/arc6/ Nordugrid ARC middleware]. You need:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Grid user certificate&#039;&#039;&#039;&lt;br /&gt;
This is best obtained from CERN: [https://ca.cern.ch/ca/ CERN CA]. Being registered with CERN HR and having a CERN computer account are the pre-requisites. Users with FNAL accounts can obtain a certificate from the relevant FERMILAB CA. From those not eligible to either solution above, there is the possibility of requesting a certificate via the [https://www.scits.unibe.ch/services/escience_certificates/ Swiss CA]&lt;br /&gt;
&lt;br /&gt;
In order to carry out the following steps, your user certificate and private key must be installed in the .globus subdirectory of your home directory on the client machine. You can find instructions on how to do that in [https://atlassoftwaredocs.web.cern.ch/ASWTutorial/basicSetup/grid_vo/#using-the-grid-certificate here]. While this focuses on ATLAS, the steps to extract and install the certificate are  independent from the specific experiment.&lt;br /&gt;
&lt;br /&gt;
Have the certificate registered with a Virtual Organisation (VO), e.g. for ATLAS, browse to the [https://lcg-voms2.cern.ch:8443/voms/atlas/user/home.action ATLAS VOMS service]. The prerequisite for this step is having installed your personal certificate in your browser&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;ARC client&#039;&#039;&#039;&lt;br /&gt;
Users operating from the UIs (recommended) can setup the client from CVMFS:&lt;br /&gt;
&lt;br /&gt;
 # export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase&lt;br /&gt;
 # source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh&lt;br /&gt;
 # lsetup emi&lt;br /&gt;
&lt;br /&gt;
Otherwise installing a local client on any linux machine is possible by following the instructions for your specific operating system: [http://www.nordugrid.org/documents/arc-client-install.html Install ARC client]&lt;br /&gt;
&lt;br /&gt;
=== arcinfo ===&lt;br /&gt;
First, verify that the user interface is installed and you can see the submission endpoints. We have 3 CEs:&lt;br /&gt;
&lt;br /&gt;
 # arcinfo --version&lt;br /&gt;
 # arcinfo -c ce00.lhep.unibe.ch:&#039;&#039;&#039;8443&#039;&#039;&#039;&lt;br /&gt;
 # arcinfo -c ce00.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039;&lt;br /&gt;
 # arcinfo -c ce03.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== arcproxy / voms-proxy-init/info ===&lt;br /&gt;
Generates proxy with your credentials with one of these two commands:&lt;br /&gt;
&lt;br /&gt;
 # arcproxy --voms &amp;lt;yourVO&amp;gt;  (e.g. atlas)&lt;br /&gt;
 # voms-proxy-int --voms &amp;lt;yourVO)&lt;br /&gt;
&lt;br /&gt;
View the generated proxy information with one of these two commands&lt;br /&gt;
&lt;br /&gt;
 # arcproxy -I&lt;br /&gt;
 # voms-proxy-info&lt;br /&gt;
&lt;br /&gt;
=== Job submission ===&lt;br /&gt;
Check to see if you can submit a trivial job test.xrsl:&lt;br /&gt;
&lt;br /&gt;
 # vi test.xrsl&lt;br /&gt;
 &amp;amp;&lt;br /&gt;
 (executable = /usr/bin/env)&lt;br /&gt;
 (jobname = &amp;quot;test&amp;quot;)&lt;br /&gt;
 (stdout=test.log)&lt;br /&gt;
 (* join stdout and stderr *)&lt;br /&gt;
 (join=yes)&lt;br /&gt;
 (gmlog=log)&lt;br /&gt;
 (* by default in minutes *)&lt;br /&gt;
 (wallTime=&amp;quot;100&amp;quot;)&lt;br /&gt;
 (* by default in MB *)&lt;br /&gt;
 (memory=1000)&lt;br /&gt;
 (queue=&amp;quot;CLUSTER&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce00.lhep.unibe.ch:&#039;&#039;&#039;8443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
 ...&lt;br /&gt;
 Job submitted with jobid: &amp;lt;job ID&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce02.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce03.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
&lt;br /&gt;
In the job description file, one can also add a list of input files to upload to the cluster at job submission and a list of output files to retrieve once execution is FINISHED.&lt;br /&gt;
&lt;br /&gt;
Additional resources on job submission and job description:&lt;br /&gt;
&lt;br /&gt;
[https://www.nordugrid.org/arc/arc7/users/submit_job.html arc7 submit job]&lt;br /&gt;
&lt;br /&gt;
[http://www.nordugrid.org/documents/xrsl.pdf xrls job specification reference manual]&lt;br /&gt;
&lt;br /&gt;
=== arcls ===&lt;br /&gt;
Lists the job directory on the cluster&lt;br /&gt;
&lt;br /&gt;
 # arcls &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arcstat ===&lt;br /&gt;
Prints job status and some additional information from cluster, such as jobid, name, and status:&lt;br /&gt;
&lt;br /&gt;
 # arcstat -c ce00.lhep.unibe.ch&lt;br /&gt;
 # arcstat &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arccat ===&lt;br /&gt;
Prints the job stdout/stderr while the job is running on the cluster&lt;br /&gt;
&lt;br /&gt;
 # arccat &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arckill === &lt;br /&gt;
Kill one job, a list of jobs in filename or all jobs&lt;br /&gt;
&lt;br /&gt;
 # arckill &amp;lt;job id&amp;gt;&lt;br /&gt;
 # arckill filename&lt;br /&gt;
 # arckill -a&lt;br /&gt;
&lt;br /&gt;
=== arcget ===&lt;br /&gt;
When the status of the task is FINISHED, you can download the results of one specific task or multiple tasks. These and up in directories created by the download client and named after the job id&#039;s. In the example above, only the stdout/stderr file test.log will be downloaded as there are no further output files specified:&lt;br /&gt;
&lt;br /&gt;
 # arcget &amp;lt;job id&amp;gt;&lt;br /&gt;
 Results stored at: 4fQLDmY3BxjnmmR0Xox1SiGmABFKDmABFKDmvxHKDmABFKDmiPhU9m&lt;br /&gt;
 Jobs processed: 1, successfully retrieved: 1, successfully cleaned: 1&lt;br /&gt;
&lt;br /&gt;
or results of all completed tasks:&lt;br /&gt;
&lt;br /&gt;
 # arcget -a&lt;br /&gt;
&lt;br /&gt;
or all tasks in the list in the joblist.xml file:&lt;br /&gt;
&lt;br /&gt;
 # arcget -i joblist.xml&lt;br /&gt;
&lt;br /&gt;
=== Advanced xrls using DTR and singularity ===&lt;br /&gt;
&lt;br /&gt;
In this example we will use the DTR (Data TRansfer) feature of ARC to specify input files to copy to the cluster and retrieve the outputs. In addition, the job is setup to run in a container which is the default way for ATLAS.&lt;br /&gt;
&lt;br /&gt;
 # cat arc_test-lhep-8core.xrls&lt;br /&gt;
 &amp;amp;(executable=5333700233-8core.sh)&lt;br /&gt;
 (inputfiles=(container_script-8core.sh container_script-8core.sh)(my_release_testsetup.sh   my_release_testsetup.sh)(HITS.25720417._000948.pool.root.1 HITS.25720417._000948.pool.root.1)(RDO.26819764._019417.pool.root.1 RDO.26819764._019417.pool.root.1)(RDO.26819764._016665.pool.root.1 RDO.26819764._016665.pool.root.1)(RDO.26811885._009174.pool.root.1 RDO.26811885._009174.pool.root.1)(RDO.26819766._027499.pool.root.1 RDO.26819766._027499.pool.root.1)(RDO.26811885._004931.pool.root.1 RDO.26811885._004931.pool.root.1))&lt;br /&gt;
 (arguments=&amp;quot;&amp;quot;)&lt;br /&gt;
 (gmlog=&amp;quot;gridlog&amp;quot;)&lt;br /&gt;
 (stdout=mc20_lhep_8core.log)(join=yes)&lt;br /&gt;
 (memory=&amp;quot;1800&amp;quot;)&lt;br /&gt;
 (wallTime=&amp;quot;2870&amp;quot;)&lt;br /&gt;
 (count = &amp;quot;8&amp;quot;)&lt;br /&gt;
 (countpernode = &amp;quot;8&amp;quot;)&lt;br /&gt;
 (queue= &amp;quot;CLUSTER&amp;quot; )&lt;br /&gt;
 (jobname=mc20_lhep_8core)&lt;br /&gt;
 (outputfiles=(&amp;quot;/&amp;quot; &amp;quot;&amp;quot;))&lt;br /&gt;
&lt;br /&gt;
This specific job is a clone of Panda JobID 5333700233 to run on 8 cores within singularity&lt;br /&gt;
&lt;br /&gt;
NOTE: in this example the input files are uploaded from the local user directory, meaning you have to download them with rucio first. You can find the URI of the files to download with:&lt;br /&gt;
&lt;br /&gt;
 # lsetup rucio&lt;br /&gt;
 # rucio list-file-replicas mc16_13TeV:HITS.25720417._000948.pool.root.1&lt;br /&gt;
&lt;br /&gt;
In alternative one can specify to download the file to the cluster directly from the source on the remote SE. In such case the input file definition in the xlrs is:&lt;br /&gt;
&lt;br /&gt;
 (... HITS.25720417._000948.pool.root.1 rucio://rucio-lb-prod.cern.ch/replicas/HITS.25720417._000948.pool.root.1 ...)&lt;br /&gt;
&lt;br /&gt;
The executable script:&lt;br /&gt;
&lt;br /&gt;
 # cat 5333700233-8core.sh &lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
 chmod +x container_script-8core.sh&lt;br /&gt;
 &lt;br /&gt;
 if [ -z &amp;quot;$ATLAS_LOCAL_ROOT_BASE&amp;quot; ]; then export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase; fi;&lt;br /&gt;
 export thePlatform=&amp;quot;x86_64-centos7-gcc8-opt&amp;quot;&lt;br /&gt;
 source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh -c $thePlatform -s /srv/my_release_testsetup.sh -r /srv/container_script-8core.sh -e &amp;quot;-i -p -B /grid/lustre&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
&lt;br /&gt;
The container script which is launched by the executable script with singularity:&lt;br /&gt;
&lt;br /&gt;
 # cat container_script-8core.sh&lt;br /&gt;
 export TMPDIR=/srv&lt;br /&gt;
 export GFORTRAN_TMPDIR=/srv&lt;br /&gt;
 export ATHENA_PROC_NUMBER=8&lt;br /&gt;
 export ATHENA_CORE_NUMBER=8&lt;br /&gt;
 /usr/bin/time -f &amp;quot;%P %M&amp;quot;  Reco_tf.py --inputHITSFile=&amp;quot;HITS.25720417._000948.pool.root.1&amp;quot; --asetup=&amp;quot;RDOtoRDOTrigger:Athena,21.0.20.12&amp;quot; --maxEvents=&amp;quot;10000&amp;quot; --multithreaded=&amp;quot;True&amp;quot; --postInclude &amp;quot;default:PyJobTransforms/UseFrontier.py&amp;quot; &amp;quot;all:PyJobTransforms/DisableFileSizeLimit.py&amp;quot; --preInclude &amp;quot;all:Campaigns/MC20a.py&amp;quot; --skipEvents=&amp;quot;0&amp;quot; --autoConfiguration=&amp;quot;everything&amp;quot; --conditionsTag &amp;quot;default:OFLCOND-MC16-SDR-RUN2-09&amp;quot; &amp;quot;RDOtoRDOTrigger:OFLCOND-MC16-SDR-RUN2-08-02a&amp;quot; --geometryVersion=&amp;quot;default:ATLAS-R2-2016-01-00-01&amp;quot; --runNumber=&amp;quot;364681&amp;quot; --digiSeedOffset1=&amp;quot;313&amp;quot; --digiSeedOffset2=&amp;quot;313&amp;quot; --inputRDO_BKGFile=&amp;quot;RDO.26811885._004931.pool.root.1,RDO.26811885._009174.pool.root.1,RDO.26819764._016665.pool.root.1,RDO.26819764._019417.pool.root.1,RDO.26819766._027499.pool.root.1&amp;quot; --AMITag=&amp;quot;r13167&amp;quot; --steering &amp;quot;doOverlay&amp;quot; &amp;quot;doRDO_TRIG&amp;quot; --outputAODFile=&amp;quot;AOD.27857976._000879.pool.root.1&amp;quot; --jobNumber=&amp;quot;313&amp;quot; --triggerConfig=&amp;quot;RDOtoRDOTrigger=MCRECO:DBF:TRIGGERDBMC:2283,35,327&amp;quot;&lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
&lt;br /&gt;
The script to setup the ATLAS version for the job, which is also called by the executable script:&lt;br /&gt;
&lt;br /&gt;
 # cat my_release_testsetup.sh &lt;br /&gt;
 source $AtlasSetup/scripts/asetup.sh Athena,22.0.41.8,notest --platform x86_64-centos7-gcc8-opt --makeflags=&#039;$MAKEFLAGS&#039;&lt;br /&gt;
&lt;br /&gt;
NOTE: the last line of the xrls instructs ARC to download all the contents of the job directory once execution has FINISHED on the cluster. One can in alternative, specify a list of files of interest with a syntax for &amp;quot;local files&amp;quot; as used for the input files in the example xrls above.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Job_submission_and_management_with_NorduGrid_ARC&amp;diff=951</id>
		<title>Job submission and management with NorduGrid ARC</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Job_submission_and_management_with_NorduGrid_ARC&amp;diff=951"/>
		<updated>2026-05-04T06:12:54Z</updated>

		<summary type="html">&lt;p&gt;Lhep: /* arcinfo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Prerequisites ===&lt;br /&gt;
Job submission to our cluster occurs via [https://www.nordugrid.org/arc/arc6/ Nordugrid ARC middleware]. You need:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Grid user certificate&#039;&#039;&#039;&lt;br /&gt;
This is best obtained from CERN: [https://ca.cern.ch/ca/ CERN CA]. Being registered with CERN HR and having a CERN computer account are the pre-requisites. Users with FNAL accounts can obtain a certificate from the relevant FERMILAB CA. From those not eligible to either solution above, there is the possibility of requesting a certificate via the [https://www.scits.unibe.ch/services/escience_certificates/ Swiss CA]&lt;br /&gt;
&lt;br /&gt;
In order to carry out the following steps, your user certificate and private key must be installed in the .globus subdirectory of your home directory on the client machine. You can find instructions on how to do that in [https://atlassoftwaredocs.web.cern.ch/ASWTutorial/basicSetup/grid_vo/#using-the-grid-certificate here]. While this focuses on ATLAS, the steps to extract and install the certificate are  independent from the specific experiment.&lt;br /&gt;
&lt;br /&gt;
Have the certificate registered with a Virtual Organisation (VO), e.g. for ATLAS, browse to the [https://lcg-voms2.cern.ch:8443/voms/atlas/user/home.action ATLAS VOMS service]. The prerequisite for this step is having installed your personal certificate in your browser&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;ARC client&#039;&#039;&#039;&lt;br /&gt;
Users operating from the UIs (recommended) can setup the client from CVMFS:&lt;br /&gt;
&lt;br /&gt;
 # export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase&lt;br /&gt;
 # source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh&lt;br /&gt;
 # lsetup emi&lt;br /&gt;
&lt;br /&gt;
Otherwise installing a local client on any linux machine is possible by following the instructions for your specific operating system: [http://www.nordugrid.org/documents/arc-client-install.html Install ARC client]&lt;br /&gt;
&lt;br /&gt;
=== arcinfo ===&lt;br /&gt;
First, verify that the user interface is installed and you can see the submission endpoints. We have 3 CEs:&lt;br /&gt;
&lt;br /&gt;
 # arcinfo --version&lt;br /&gt;
 # arcinfo -c ce00.lhep.unibe.ch:&#039;&#039;&#039;8443&#039;&#039;&#039;&lt;br /&gt;
 # arcinfo -c ce00.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039;&lt;br /&gt;
 # arcinfo -c ce03.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== arcproxy / voms-proxy-init/info ===&lt;br /&gt;
Generates proxy with your credentials with one of these two commands:&lt;br /&gt;
&lt;br /&gt;
 # arcproxy --voms &amp;lt;yourVO&amp;gt;  (e.g. atlas)&lt;br /&gt;
 # voms-proxy-int --voms &amp;lt;yourVO)&lt;br /&gt;
&lt;br /&gt;
View the generated proxy information with one of these two commands&lt;br /&gt;
&lt;br /&gt;
 # arcproxy -I&lt;br /&gt;
 # voms-proxy-info&lt;br /&gt;
&lt;br /&gt;
=== Job submission ===&lt;br /&gt;
Check to see if you can submit a trivial job test.xrsl:&lt;br /&gt;
&lt;br /&gt;
 # vi test.xrsl&lt;br /&gt;
 &amp;amp;&lt;br /&gt;
 (executable = /usr/bin/env)&lt;br /&gt;
 (jobname = &amp;quot;test&amp;quot;)&lt;br /&gt;
 (stdout=test.log)&lt;br /&gt;
 (* join stdout and stderr *)&lt;br /&gt;
 (join=yes)&lt;br /&gt;
 (gmlog=log)&lt;br /&gt;
 (* by default in minutes *)&lt;br /&gt;
 (wallTime=&amp;quot;100&amp;quot;)&lt;br /&gt;
 (* by default in MB *)&lt;br /&gt;
 (memory=1000)&lt;br /&gt;
 (queue=&amp;quot;CLUSTER&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce00.lhep.unibe.ch:&#039;&#039;&#039;8443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
 ...&lt;br /&gt;
 Job submitted with jobid: &amp;lt;job ID&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce02.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce03.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
&lt;br /&gt;
In the job description file, one can also add a list of input files to upload to the cluster at job submission and a list of output files to retrieve once execution is FINISHED.&lt;br /&gt;
&lt;br /&gt;
Additional resources on job submission and job description:&lt;br /&gt;
&lt;br /&gt;
[https://www.nordugrid.org/arc/arc6/users/submit_job.html?highlight=job%20description arc6 submit job]&lt;br /&gt;
&lt;br /&gt;
[http://www.nordugrid.org/documents/xrsl.pdf xrls job specification reference manual]&lt;br /&gt;
&lt;br /&gt;
=== arcls ===&lt;br /&gt;
Lists the job directory on the cluster&lt;br /&gt;
&lt;br /&gt;
 # arcls &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arcstat ===&lt;br /&gt;
Prints job status and some additional information from cluster, such as jobid, name, and status:&lt;br /&gt;
&lt;br /&gt;
 # arcstat -c ce00.lhep.unibe.ch&lt;br /&gt;
 # arcstat &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arccat ===&lt;br /&gt;
Prints the job stdout/stderr while the job is running on the cluster&lt;br /&gt;
&lt;br /&gt;
 # arccat &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arckill === &lt;br /&gt;
Kill one job, a list of jobs in filename or all jobs&lt;br /&gt;
&lt;br /&gt;
 # arckill &amp;lt;job id&amp;gt;&lt;br /&gt;
 # arckill filename&lt;br /&gt;
 # arckill -a&lt;br /&gt;
&lt;br /&gt;
=== arcget ===&lt;br /&gt;
When the status of the task is FINISHED, you can download the results of one specific task or multiple tasks. These and up in directories created by the download client and named after the job id&#039;s. In the example above, only the stdout/stderr file test.log will be downloaded as there are no further output files specified:&lt;br /&gt;
&lt;br /&gt;
 # arcget &amp;lt;job id&amp;gt;&lt;br /&gt;
 Results stored at: 4fQLDmY3BxjnmmR0Xox1SiGmABFKDmABFKDmvxHKDmABFKDmiPhU9m&lt;br /&gt;
 Jobs processed: 1, successfully retrieved: 1, successfully cleaned: 1&lt;br /&gt;
&lt;br /&gt;
or results of all completed tasks:&lt;br /&gt;
&lt;br /&gt;
 # arcget -a&lt;br /&gt;
&lt;br /&gt;
or all tasks in the list in the joblist.xml file:&lt;br /&gt;
&lt;br /&gt;
 # arcget -i joblist.xml&lt;br /&gt;
&lt;br /&gt;
=== Advanced xrls using DTR and singularity ===&lt;br /&gt;
&lt;br /&gt;
In this example we will use the DTR (Data TRansfer) feature of ARC to specify input files to copy to the cluster and retrieve the outputs. In addition, the job is setup to run in a container which is the default way for ATLAS.&lt;br /&gt;
&lt;br /&gt;
 # cat arc_test-lhep-8core.xrls&lt;br /&gt;
 &amp;amp;(executable=5333700233-8core.sh)&lt;br /&gt;
 (inputfiles=(container_script-8core.sh container_script-8core.sh)(my_release_testsetup.sh   my_release_testsetup.sh)(HITS.25720417._000948.pool.root.1 HITS.25720417._000948.pool.root.1)(RDO.26819764._019417.pool.root.1 RDO.26819764._019417.pool.root.1)(RDO.26819764._016665.pool.root.1 RDO.26819764._016665.pool.root.1)(RDO.26811885._009174.pool.root.1 RDO.26811885._009174.pool.root.1)(RDO.26819766._027499.pool.root.1 RDO.26819766._027499.pool.root.1)(RDO.26811885._004931.pool.root.1 RDO.26811885._004931.pool.root.1))&lt;br /&gt;
 (arguments=&amp;quot;&amp;quot;)&lt;br /&gt;
 (gmlog=&amp;quot;gridlog&amp;quot;)&lt;br /&gt;
 (stdout=mc20_lhep_8core.log)(join=yes)&lt;br /&gt;
 (memory=&amp;quot;1800&amp;quot;)&lt;br /&gt;
 (wallTime=&amp;quot;2870&amp;quot;)&lt;br /&gt;
 (count = &amp;quot;8&amp;quot;)&lt;br /&gt;
 (countpernode = &amp;quot;8&amp;quot;)&lt;br /&gt;
 (queue= &amp;quot;CLUSTER&amp;quot; )&lt;br /&gt;
 (jobname=mc20_lhep_8core)&lt;br /&gt;
 (outputfiles=(&amp;quot;/&amp;quot; &amp;quot;&amp;quot;))&lt;br /&gt;
&lt;br /&gt;
This specific job is a clone of Panda JobID 5333700233 to run on 8 cores within singularity&lt;br /&gt;
&lt;br /&gt;
NOTE: in this example the input files are uploaded from the local user directory, meaning you have to download them with rucio first. You can find the URI of the files to download with:&lt;br /&gt;
&lt;br /&gt;
 # lsetup rucio&lt;br /&gt;
 # rucio list-file-replicas mc16_13TeV:HITS.25720417._000948.pool.root.1&lt;br /&gt;
&lt;br /&gt;
In alternative one can specify to download the file to the cluster directly from the source on the remote SE. In such case the input file definition in the xlrs is:&lt;br /&gt;
&lt;br /&gt;
 (... HITS.25720417._000948.pool.root.1 rucio://rucio-lb-prod.cern.ch/replicas/HITS.25720417._000948.pool.root.1 ...)&lt;br /&gt;
&lt;br /&gt;
The executable script:&lt;br /&gt;
&lt;br /&gt;
 # cat 5333700233-8core.sh &lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
 chmod +x container_script-8core.sh&lt;br /&gt;
 &lt;br /&gt;
 if [ -z &amp;quot;$ATLAS_LOCAL_ROOT_BASE&amp;quot; ]; then export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase; fi;&lt;br /&gt;
 export thePlatform=&amp;quot;x86_64-centos7-gcc8-opt&amp;quot;&lt;br /&gt;
 source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh -c $thePlatform -s /srv/my_release_testsetup.sh -r /srv/container_script-8core.sh -e &amp;quot;-i -p -B /grid/lustre&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
&lt;br /&gt;
The container script which is launched by the executable script with singularity:&lt;br /&gt;
&lt;br /&gt;
 # cat container_script-8core.sh&lt;br /&gt;
 export TMPDIR=/srv&lt;br /&gt;
 export GFORTRAN_TMPDIR=/srv&lt;br /&gt;
 export ATHENA_PROC_NUMBER=8&lt;br /&gt;
 export ATHENA_CORE_NUMBER=8&lt;br /&gt;
 /usr/bin/time -f &amp;quot;%P %M&amp;quot;  Reco_tf.py --inputHITSFile=&amp;quot;HITS.25720417._000948.pool.root.1&amp;quot; --asetup=&amp;quot;RDOtoRDOTrigger:Athena,21.0.20.12&amp;quot; --maxEvents=&amp;quot;10000&amp;quot; --multithreaded=&amp;quot;True&amp;quot; --postInclude &amp;quot;default:PyJobTransforms/UseFrontier.py&amp;quot; &amp;quot;all:PyJobTransforms/DisableFileSizeLimit.py&amp;quot; --preInclude &amp;quot;all:Campaigns/MC20a.py&amp;quot; --skipEvents=&amp;quot;0&amp;quot; --autoConfiguration=&amp;quot;everything&amp;quot; --conditionsTag &amp;quot;default:OFLCOND-MC16-SDR-RUN2-09&amp;quot; &amp;quot;RDOtoRDOTrigger:OFLCOND-MC16-SDR-RUN2-08-02a&amp;quot; --geometryVersion=&amp;quot;default:ATLAS-R2-2016-01-00-01&amp;quot; --runNumber=&amp;quot;364681&amp;quot; --digiSeedOffset1=&amp;quot;313&amp;quot; --digiSeedOffset2=&amp;quot;313&amp;quot; --inputRDO_BKGFile=&amp;quot;RDO.26811885._004931.pool.root.1,RDO.26811885._009174.pool.root.1,RDO.26819764._016665.pool.root.1,RDO.26819764._019417.pool.root.1,RDO.26819766._027499.pool.root.1&amp;quot; --AMITag=&amp;quot;r13167&amp;quot; --steering &amp;quot;doOverlay&amp;quot; &amp;quot;doRDO_TRIG&amp;quot; --outputAODFile=&amp;quot;AOD.27857976._000879.pool.root.1&amp;quot; --jobNumber=&amp;quot;313&amp;quot; --triggerConfig=&amp;quot;RDOtoRDOTrigger=MCRECO:DBF:TRIGGERDBMC:2283,35,327&amp;quot;&lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
&lt;br /&gt;
The script to setup the ATLAS version for the job, which is also called by the executable script:&lt;br /&gt;
&lt;br /&gt;
 # cat my_release_testsetup.sh &lt;br /&gt;
 source $AtlasSetup/scripts/asetup.sh Athena,22.0.41.8,notest --platform x86_64-centos7-gcc8-opt --makeflags=&#039;$MAKEFLAGS&#039;&lt;br /&gt;
&lt;br /&gt;
NOTE: the last line of the xrls instructs ARC to download all the contents of the job directory once execution has FINISHED on the cluster. One can in alternative, specify a list of files of interest with a syntax for &amp;quot;local files&amp;quot; as used for the input files in the example xrls above.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Job_submission_and_management_with_NorduGrid_ARC&amp;diff=950</id>
		<title>Job submission and management with NorduGrid ARC</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Job_submission_and_management_with_NorduGrid_ARC&amp;diff=950"/>
		<updated>2026-05-04T06:12:30Z</updated>

		<summary type="html">&lt;p&gt;Lhep: /* Job submission */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Prerequisites ===&lt;br /&gt;
Job submission to our cluster occurs via [https://www.nordugrid.org/arc/arc6/ Nordugrid ARC middleware]. You need:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Grid user certificate&#039;&#039;&#039;&lt;br /&gt;
This is best obtained from CERN: [https://ca.cern.ch/ca/ CERN CA]. Being registered with CERN HR and having a CERN computer account are the pre-requisites. Users with FNAL accounts can obtain a certificate from the relevant FERMILAB CA. From those not eligible to either solution above, there is the possibility of requesting a certificate via the [https://www.scits.unibe.ch/services/escience_certificates/ Swiss CA]&lt;br /&gt;
&lt;br /&gt;
In order to carry out the following steps, your user certificate and private key must be installed in the .globus subdirectory of your home directory on the client machine. You can find instructions on how to do that in [https://atlassoftwaredocs.web.cern.ch/ASWTutorial/basicSetup/grid_vo/#using-the-grid-certificate here]. While this focuses on ATLAS, the steps to extract and install the certificate are  independent from the specific experiment.&lt;br /&gt;
&lt;br /&gt;
Have the certificate registered with a Virtual Organisation (VO), e.g. for ATLAS, browse to the [https://lcg-voms2.cern.ch:8443/voms/atlas/user/home.action ATLAS VOMS service]. The prerequisite for this step is having installed your personal certificate in your browser&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;ARC client&#039;&#039;&#039;&lt;br /&gt;
Users operating from the UIs (recommended) can setup the client from CVMFS:&lt;br /&gt;
&lt;br /&gt;
 # export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase&lt;br /&gt;
 # source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh&lt;br /&gt;
 # lsetup emi&lt;br /&gt;
&lt;br /&gt;
Otherwise installing a local client on any linux machine is possible by following the instructions for your specific operating system: [http://www.nordugrid.org/documents/arc-client-install.html Install ARC client]&lt;br /&gt;
&lt;br /&gt;
=== arcinfo ===&lt;br /&gt;
First, verify that the user interface is installed and you can see the submission endpoints. We have 3 CEs:&lt;br /&gt;
&lt;br /&gt;
 # arcinfo --version&lt;br /&gt;
 # arcinfo -c ce00.lhep.unibe.ch:8443&lt;br /&gt;
 # arcinfo -c ce00.lhep.unibe.ch:443&lt;br /&gt;
 # arcinfo -c ce03.lhep.unibe.ch:443&lt;br /&gt;
&lt;br /&gt;
=== arcproxy / voms-proxy-init/info ===&lt;br /&gt;
Generates proxy with your credentials with one of these two commands:&lt;br /&gt;
&lt;br /&gt;
 # arcproxy --voms &amp;lt;yourVO&amp;gt;  (e.g. atlas)&lt;br /&gt;
 # voms-proxy-int --voms &amp;lt;yourVO)&lt;br /&gt;
&lt;br /&gt;
View the generated proxy information with one of these two commands&lt;br /&gt;
&lt;br /&gt;
 # arcproxy -I&lt;br /&gt;
 # voms-proxy-info&lt;br /&gt;
&lt;br /&gt;
=== Job submission ===&lt;br /&gt;
Check to see if you can submit a trivial job test.xrsl:&lt;br /&gt;
&lt;br /&gt;
 # vi test.xrsl&lt;br /&gt;
 &amp;amp;&lt;br /&gt;
 (executable = /usr/bin/env)&lt;br /&gt;
 (jobname = &amp;quot;test&amp;quot;)&lt;br /&gt;
 (stdout=test.log)&lt;br /&gt;
 (* join stdout and stderr *)&lt;br /&gt;
 (join=yes)&lt;br /&gt;
 (gmlog=log)&lt;br /&gt;
 (* by default in minutes *)&lt;br /&gt;
 (wallTime=&amp;quot;100&amp;quot;)&lt;br /&gt;
 (* by default in MB *)&lt;br /&gt;
 (memory=1000)&lt;br /&gt;
 (queue=&amp;quot;CLUSTER&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce00.lhep.unibe.ch:&#039;&#039;&#039;8443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
 ...&lt;br /&gt;
 Job submitted with jobid: &amp;lt;job ID&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce02.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce03.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
&lt;br /&gt;
In the job description file, one can also add a list of input files to upload to the cluster at job submission and a list of output files to retrieve once execution is FINISHED.&lt;br /&gt;
&lt;br /&gt;
Additional resources on job submission and job description:&lt;br /&gt;
&lt;br /&gt;
[https://www.nordugrid.org/arc/arc6/users/submit_job.html?highlight=job%20description arc6 submit job]&lt;br /&gt;
&lt;br /&gt;
[http://www.nordugrid.org/documents/xrsl.pdf xrls job specification reference manual]&lt;br /&gt;
&lt;br /&gt;
=== arcls ===&lt;br /&gt;
Lists the job directory on the cluster&lt;br /&gt;
&lt;br /&gt;
 # arcls &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arcstat ===&lt;br /&gt;
Prints job status and some additional information from cluster, such as jobid, name, and status:&lt;br /&gt;
&lt;br /&gt;
 # arcstat -c ce00.lhep.unibe.ch&lt;br /&gt;
 # arcstat &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arccat ===&lt;br /&gt;
Prints the job stdout/stderr while the job is running on the cluster&lt;br /&gt;
&lt;br /&gt;
 # arccat &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arckill === &lt;br /&gt;
Kill one job, a list of jobs in filename or all jobs&lt;br /&gt;
&lt;br /&gt;
 # arckill &amp;lt;job id&amp;gt;&lt;br /&gt;
 # arckill filename&lt;br /&gt;
 # arckill -a&lt;br /&gt;
&lt;br /&gt;
=== arcget ===&lt;br /&gt;
When the status of the task is FINISHED, you can download the results of one specific task or multiple tasks. These and up in directories created by the download client and named after the job id&#039;s. In the example above, only the stdout/stderr file test.log will be downloaded as there are no further output files specified:&lt;br /&gt;
&lt;br /&gt;
 # arcget &amp;lt;job id&amp;gt;&lt;br /&gt;
 Results stored at: 4fQLDmY3BxjnmmR0Xox1SiGmABFKDmABFKDmvxHKDmABFKDmiPhU9m&lt;br /&gt;
 Jobs processed: 1, successfully retrieved: 1, successfully cleaned: 1&lt;br /&gt;
&lt;br /&gt;
or results of all completed tasks:&lt;br /&gt;
&lt;br /&gt;
 # arcget -a&lt;br /&gt;
&lt;br /&gt;
or all tasks in the list in the joblist.xml file:&lt;br /&gt;
&lt;br /&gt;
 # arcget -i joblist.xml&lt;br /&gt;
&lt;br /&gt;
=== Advanced xrls using DTR and singularity ===&lt;br /&gt;
&lt;br /&gt;
In this example we will use the DTR (Data TRansfer) feature of ARC to specify input files to copy to the cluster and retrieve the outputs. In addition, the job is setup to run in a container which is the default way for ATLAS.&lt;br /&gt;
&lt;br /&gt;
 # cat arc_test-lhep-8core.xrls&lt;br /&gt;
 &amp;amp;(executable=5333700233-8core.sh)&lt;br /&gt;
 (inputfiles=(container_script-8core.sh container_script-8core.sh)(my_release_testsetup.sh   my_release_testsetup.sh)(HITS.25720417._000948.pool.root.1 HITS.25720417._000948.pool.root.1)(RDO.26819764._019417.pool.root.1 RDO.26819764._019417.pool.root.1)(RDO.26819764._016665.pool.root.1 RDO.26819764._016665.pool.root.1)(RDO.26811885._009174.pool.root.1 RDO.26811885._009174.pool.root.1)(RDO.26819766._027499.pool.root.1 RDO.26819766._027499.pool.root.1)(RDO.26811885._004931.pool.root.1 RDO.26811885._004931.pool.root.1))&lt;br /&gt;
 (arguments=&amp;quot;&amp;quot;)&lt;br /&gt;
 (gmlog=&amp;quot;gridlog&amp;quot;)&lt;br /&gt;
 (stdout=mc20_lhep_8core.log)(join=yes)&lt;br /&gt;
 (memory=&amp;quot;1800&amp;quot;)&lt;br /&gt;
 (wallTime=&amp;quot;2870&amp;quot;)&lt;br /&gt;
 (count = &amp;quot;8&amp;quot;)&lt;br /&gt;
 (countpernode = &amp;quot;8&amp;quot;)&lt;br /&gt;
 (queue= &amp;quot;CLUSTER&amp;quot; )&lt;br /&gt;
 (jobname=mc20_lhep_8core)&lt;br /&gt;
 (outputfiles=(&amp;quot;/&amp;quot; &amp;quot;&amp;quot;))&lt;br /&gt;
&lt;br /&gt;
This specific job is a clone of Panda JobID 5333700233 to run on 8 cores within singularity&lt;br /&gt;
&lt;br /&gt;
NOTE: in this example the input files are uploaded from the local user directory, meaning you have to download them with rucio first. You can find the URI of the files to download with:&lt;br /&gt;
&lt;br /&gt;
 # lsetup rucio&lt;br /&gt;
 # rucio list-file-replicas mc16_13TeV:HITS.25720417._000948.pool.root.1&lt;br /&gt;
&lt;br /&gt;
In alternative one can specify to download the file to the cluster directly from the source on the remote SE. In such case the input file definition in the xlrs is:&lt;br /&gt;
&lt;br /&gt;
 (... HITS.25720417._000948.pool.root.1 rucio://rucio-lb-prod.cern.ch/replicas/HITS.25720417._000948.pool.root.1 ...)&lt;br /&gt;
&lt;br /&gt;
The executable script:&lt;br /&gt;
&lt;br /&gt;
 # cat 5333700233-8core.sh &lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
 chmod +x container_script-8core.sh&lt;br /&gt;
 &lt;br /&gt;
 if [ -z &amp;quot;$ATLAS_LOCAL_ROOT_BASE&amp;quot; ]; then export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase; fi;&lt;br /&gt;
 export thePlatform=&amp;quot;x86_64-centos7-gcc8-opt&amp;quot;&lt;br /&gt;
 source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh -c $thePlatform -s /srv/my_release_testsetup.sh -r /srv/container_script-8core.sh -e &amp;quot;-i -p -B /grid/lustre&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
&lt;br /&gt;
The container script which is launched by the executable script with singularity:&lt;br /&gt;
&lt;br /&gt;
 # cat container_script-8core.sh&lt;br /&gt;
 export TMPDIR=/srv&lt;br /&gt;
 export GFORTRAN_TMPDIR=/srv&lt;br /&gt;
 export ATHENA_PROC_NUMBER=8&lt;br /&gt;
 export ATHENA_CORE_NUMBER=8&lt;br /&gt;
 /usr/bin/time -f &amp;quot;%P %M&amp;quot;  Reco_tf.py --inputHITSFile=&amp;quot;HITS.25720417._000948.pool.root.1&amp;quot; --asetup=&amp;quot;RDOtoRDOTrigger:Athena,21.0.20.12&amp;quot; --maxEvents=&amp;quot;10000&amp;quot; --multithreaded=&amp;quot;True&amp;quot; --postInclude &amp;quot;default:PyJobTransforms/UseFrontier.py&amp;quot; &amp;quot;all:PyJobTransforms/DisableFileSizeLimit.py&amp;quot; --preInclude &amp;quot;all:Campaigns/MC20a.py&amp;quot; --skipEvents=&amp;quot;0&amp;quot; --autoConfiguration=&amp;quot;everything&amp;quot; --conditionsTag &amp;quot;default:OFLCOND-MC16-SDR-RUN2-09&amp;quot; &amp;quot;RDOtoRDOTrigger:OFLCOND-MC16-SDR-RUN2-08-02a&amp;quot; --geometryVersion=&amp;quot;default:ATLAS-R2-2016-01-00-01&amp;quot; --runNumber=&amp;quot;364681&amp;quot; --digiSeedOffset1=&amp;quot;313&amp;quot; --digiSeedOffset2=&amp;quot;313&amp;quot; --inputRDO_BKGFile=&amp;quot;RDO.26811885._004931.pool.root.1,RDO.26811885._009174.pool.root.1,RDO.26819764._016665.pool.root.1,RDO.26819764._019417.pool.root.1,RDO.26819766._027499.pool.root.1&amp;quot; --AMITag=&amp;quot;r13167&amp;quot; --steering &amp;quot;doOverlay&amp;quot; &amp;quot;doRDO_TRIG&amp;quot; --outputAODFile=&amp;quot;AOD.27857976._000879.pool.root.1&amp;quot; --jobNumber=&amp;quot;313&amp;quot; --triggerConfig=&amp;quot;RDOtoRDOTrigger=MCRECO:DBF:TRIGGERDBMC:2283,35,327&amp;quot;&lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
&lt;br /&gt;
The script to setup the ATLAS version for the job, which is also called by the executable script:&lt;br /&gt;
&lt;br /&gt;
 # cat my_release_testsetup.sh &lt;br /&gt;
 source $AtlasSetup/scripts/asetup.sh Athena,22.0.41.8,notest --platform x86_64-centos7-gcc8-opt --makeflags=&#039;$MAKEFLAGS&#039;&lt;br /&gt;
&lt;br /&gt;
NOTE: the last line of the xrls instructs ARC to download all the contents of the job directory once execution has FINISHED on the cluster. One can in alternative, specify a list of files of interest with a syntax for &amp;quot;local files&amp;quot; as used for the input files in the example xrls above.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Job_submission_and_management_with_NorduGrid_ARC&amp;diff=949</id>
		<title>Job submission and management with NorduGrid ARC</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Job_submission_and_management_with_NorduGrid_ARC&amp;diff=949"/>
		<updated>2026-05-04T06:12:04Z</updated>

		<summary type="html">&lt;p&gt;Lhep: /* arcinfo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Prerequisites ===&lt;br /&gt;
Job submission to our cluster occurs via [https://www.nordugrid.org/arc/arc6/ Nordugrid ARC middleware]. You need:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Grid user certificate&#039;&#039;&#039;&lt;br /&gt;
This is best obtained from CERN: [https://ca.cern.ch/ca/ CERN CA]. Being registered with CERN HR and having a CERN computer account are the pre-requisites. Users with FNAL accounts can obtain a certificate from the relevant FERMILAB CA. From those not eligible to either solution above, there is the possibility of requesting a certificate via the [https://www.scits.unibe.ch/services/escience_certificates/ Swiss CA]&lt;br /&gt;
&lt;br /&gt;
In order to carry out the following steps, your user certificate and private key must be installed in the .globus subdirectory of your home directory on the client machine. You can find instructions on how to do that in [https://atlassoftwaredocs.web.cern.ch/ASWTutorial/basicSetup/grid_vo/#using-the-grid-certificate here]. While this focuses on ATLAS, the steps to extract and install the certificate are  independent from the specific experiment.&lt;br /&gt;
&lt;br /&gt;
Have the certificate registered with a Virtual Organisation (VO), e.g. for ATLAS, browse to the [https://lcg-voms2.cern.ch:8443/voms/atlas/user/home.action ATLAS VOMS service]. The prerequisite for this step is having installed your personal certificate in your browser&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;ARC client&#039;&#039;&#039;&lt;br /&gt;
Users operating from the UIs (recommended) can setup the client from CVMFS:&lt;br /&gt;
&lt;br /&gt;
 # export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase&lt;br /&gt;
 # source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh&lt;br /&gt;
 # lsetup emi&lt;br /&gt;
&lt;br /&gt;
Otherwise installing a local client on any linux machine is possible by following the instructions for your specific operating system: [http://www.nordugrid.org/documents/arc-client-install.html Install ARC client]&lt;br /&gt;
&lt;br /&gt;
=== arcinfo ===&lt;br /&gt;
First, verify that the user interface is installed and you can see the submission endpoints. We have 3 CEs:&lt;br /&gt;
&lt;br /&gt;
 # arcinfo --version&lt;br /&gt;
 # arcinfo -c ce00.lhep.unibe.ch:8443&lt;br /&gt;
 # arcinfo -c ce00.lhep.unibe.ch:443&lt;br /&gt;
 # arcinfo -c ce03.lhep.unibe.ch:443&lt;br /&gt;
&lt;br /&gt;
=== arcproxy / voms-proxy-init/info ===&lt;br /&gt;
Generates proxy with your credentials with one of these two commands:&lt;br /&gt;
&lt;br /&gt;
 # arcproxy --voms &amp;lt;yourVO&amp;gt;  (e.g. atlas)&lt;br /&gt;
 # voms-proxy-int --voms &amp;lt;yourVO)&lt;br /&gt;
&lt;br /&gt;
View the generated proxy information with one of these two commands&lt;br /&gt;
&lt;br /&gt;
 # arcproxy -I&lt;br /&gt;
 # voms-proxy-info&lt;br /&gt;
&lt;br /&gt;
=== Job submission ===&lt;br /&gt;
Check to see if you can submit a trivial job test.xrsl:&lt;br /&gt;
&lt;br /&gt;
 # vi test.xrsl&lt;br /&gt;
 &amp;amp;&lt;br /&gt;
 (executable = /usr/bin/env)&lt;br /&gt;
 (jobname = &amp;quot;test&amp;quot;)&lt;br /&gt;
 (stdout=test.log)&lt;br /&gt;
 (* join stdout and stderr *)&lt;br /&gt;
 (join=yes)&lt;br /&gt;
 (gmlog=log)&lt;br /&gt;
 (* by default in minutes *)&lt;br /&gt;
 (wallTime=&amp;quot;100&amp;quot;)&lt;br /&gt;
 (* by default in MB *)&lt;br /&gt;
 (memory=1000)&lt;br /&gt;
 (queue=&amp;quot;CLUSTER&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce00.lhep.unibe.ch:&#039;&#039;&#039;8443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
 ...&lt;br /&gt;
 Job submitted with jobid: &amp;lt;job ID&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce03.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
&lt;br /&gt;
In the job description file, one can also add a list of input files to upload to the cluster at job submission and a list of output files to retrieve once execution is FINISHED.&lt;br /&gt;
&lt;br /&gt;
Additional resources on job submission and job description:&lt;br /&gt;
&lt;br /&gt;
[https://www.nordugrid.org/arc/arc6/users/submit_job.html?highlight=job%20description arc6 submit job]&lt;br /&gt;
&lt;br /&gt;
[http://www.nordugrid.org/documents/xrsl.pdf xrls job specification reference manual]&lt;br /&gt;
&lt;br /&gt;
=== arcls ===&lt;br /&gt;
Lists the job directory on the cluster&lt;br /&gt;
&lt;br /&gt;
 # arcls &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arcstat ===&lt;br /&gt;
Prints job status and some additional information from cluster, such as jobid, name, and status:&lt;br /&gt;
&lt;br /&gt;
 # arcstat -c ce00.lhep.unibe.ch&lt;br /&gt;
 # arcstat &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arccat ===&lt;br /&gt;
Prints the job stdout/stderr while the job is running on the cluster&lt;br /&gt;
&lt;br /&gt;
 # arccat &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arckill === &lt;br /&gt;
Kill one job, a list of jobs in filename or all jobs&lt;br /&gt;
&lt;br /&gt;
 # arckill &amp;lt;job id&amp;gt;&lt;br /&gt;
 # arckill filename&lt;br /&gt;
 # arckill -a&lt;br /&gt;
&lt;br /&gt;
=== arcget ===&lt;br /&gt;
When the status of the task is FINISHED, you can download the results of one specific task or multiple tasks. These and up in directories created by the download client and named after the job id&#039;s. In the example above, only the stdout/stderr file test.log will be downloaded as there are no further output files specified:&lt;br /&gt;
&lt;br /&gt;
 # arcget &amp;lt;job id&amp;gt;&lt;br /&gt;
 Results stored at: 4fQLDmY3BxjnmmR0Xox1SiGmABFKDmABFKDmvxHKDmABFKDmiPhU9m&lt;br /&gt;
 Jobs processed: 1, successfully retrieved: 1, successfully cleaned: 1&lt;br /&gt;
&lt;br /&gt;
or results of all completed tasks:&lt;br /&gt;
&lt;br /&gt;
 # arcget -a&lt;br /&gt;
&lt;br /&gt;
or all tasks in the list in the joblist.xml file:&lt;br /&gt;
&lt;br /&gt;
 # arcget -i joblist.xml&lt;br /&gt;
&lt;br /&gt;
=== Advanced xrls using DTR and singularity ===&lt;br /&gt;
&lt;br /&gt;
In this example we will use the DTR (Data TRansfer) feature of ARC to specify input files to copy to the cluster and retrieve the outputs. In addition, the job is setup to run in a container which is the default way for ATLAS.&lt;br /&gt;
&lt;br /&gt;
 # cat arc_test-lhep-8core.xrls&lt;br /&gt;
 &amp;amp;(executable=5333700233-8core.sh)&lt;br /&gt;
 (inputfiles=(container_script-8core.sh container_script-8core.sh)(my_release_testsetup.sh   my_release_testsetup.sh)(HITS.25720417._000948.pool.root.1 HITS.25720417._000948.pool.root.1)(RDO.26819764._019417.pool.root.1 RDO.26819764._019417.pool.root.1)(RDO.26819764._016665.pool.root.1 RDO.26819764._016665.pool.root.1)(RDO.26811885._009174.pool.root.1 RDO.26811885._009174.pool.root.1)(RDO.26819766._027499.pool.root.1 RDO.26819766._027499.pool.root.1)(RDO.26811885._004931.pool.root.1 RDO.26811885._004931.pool.root.1))&lt;br /&gt;
 (arguments=&amp;quot;&amp;quot;)&lt;br /&gt;
 (gmlog=&amp;quot;gridlog&amp;quot;)&lt;br /&gt;
 (stdout=mc20_lhep_8core.log)(join=yes)&lt;br /&gt;
 (memory=&amp;quot;1800&amp;quot;)&lt;br /&gt;
 (wallTime=&amp;quot;2870&amp;quot;)&lt;br /&gt;
 (count = &amp;quot;8&amp;quot;)&lt;br /&gt;
 (countpernode = &amp;quot;8&amp;quot;)&lt;br /&gt;
 (queue= &amp;quot;CLUSTER&amp;quot; )&lt;br /&gt;
 (jobname=mc20_lhep_8core)&lt;br /&gt;
 (outputfiles=(&amp;quot;/&amp;quot; &amp;quot;&amp;quot;))&lt;br /&gt;
&lt;br /&gt;
This specific job is a clone of Panda JobID 5333700233 to run on 8 cores within singularity&lt;br /&gt;
&lt;br /&gt;
NOTE: in this example the input files are uploaded from the local user directory, meaning you have to download them with rucio first. You can find the URI of the files to download with:&lt;br /&gt;
&lt;br /&gt;
 # lsetup rucio&lt;br /&gt;
 # rucio list-file-replicas mc16_13TeV:HITS.25720417._000948.pool.root.1&lt;br /&gt;
&lt;br /&gt;
In alternative one can specify to download the file to the cluster directly from the source on the remote SE. In such case the input file definition in the xlrs is:&lt;br /&gt;
&lt;br /&gt;
 (... HITS.25720417._000948.pool.root.1 rucio://rucio-lb-prod.cern.ch/replicas/HITS.25720417._000948.pool.root.1 ...)&lt;br /&gt;
&lt;br /&gt;
The executable script:&lt;br /&gt;
&lt;br /&gt;
 # cat 5333700233-8core.sh &lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
 chmod +x container_script-8core.sh&lt;br /&gt;
 &lt;br /&gt;
 if [ -z &amp;quot;$ATLAS_LOCAL_ROOT_BASE&amp;quot; ]; then export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase; fi;&lt;br /&gt;
 export thePlatform=&amp;quot;x86_64-centos7-gcc8-opt&amp;quot;&lt;br /&gt;
 source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh -c $thePlatform -s /srv/my_release_testsetup.sh -r /srv/container_script-8core.sh -e &amp;quot;-i -p -B /grid/lustre&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
&lt;br /&gt;
The container script which is launched by the executable script with singularity:&lt;br /&gt;
&lt;br /&gt;
 # cat container_script-8core.sh&lt;br /&gt;
 export TMPDIR=/srv&lt;br /&gt;
 export GFORTRAN_TMPDIR=/srv&lt;br /&gt;
 export ATHENA_PROC_NUMBER=8&lt;br /&gt;
 export ATHENA_CORE_NUMBER=8&lt;br /&gt;
 /usr/bin/time -f &amp;quot;%P %M&amp;quot;  Reco_tf.py --inputHITSFile=&amp;quot;HITS.25720417._000948.pool.root.1&amp;quot; --asetup=&amp;quot;RDOtoRDOTrigger:Athena,21.0.20.12&amp;quot; --maxEvents=&amp;quot;10000&amp;quot; --multithreaded=&amp;quot;True&amp;quot; --postInclude &amp;quot;default:PyJobTransforms/UseFrontier.py&amp;quot; &amp;quot;all:PyJobTransforms/DisableFileSizeLimit.py&amp;quot; --preInclude &amp;quot;all:Campaigns/MC20a.py&amp;quot; --skipEvents=&amp;quot;0&amp;quot; --autoConfiguration=&amp;quot;everything&amp;quot; --conditionsTag &amp;quot;default:OFLCOND-MC16-SDR-RUN2-09&amp;quot; &amp;quot;RDOtoRDOTrigger:OFLCOND-MC16-SDR-RUN2-08-02a&amp;quot; --geometryVersion=&amp;quot;default:ATLAS-R2-2016-01-00-01&amp;quot; --runNumber=&amp;quot;364681&amp;quot; --digiSeedOffset1=&amp;quot;313&amp;quot; --digiSeedOffset2=&amp;quot;313&amp;quot; --inputRDO_BKGFile=&amp;quot;RDO.26811885._004931.pool.root.1,RDO.26811885._009174.pool.root.1,RDO.26819764._016665.pool.root.1,RDO.26819764._019417.pool.root.1,RDO.26819766._027499.pool.root.1&amp;quot; --AMITag=&amp;quot;r13167&amp;quot; --steering &amp;quot;doOverlay&amp;quot; &amp;quot;doRDO_TRIG&amp;quot; --outputAODFile=&amp;quot;AOD.27857976._000879.pool.root.1&amp;quot; --jobNumber=&amp;quot;313&amp;quot; --triggerConfig=&amp;quot;RDOtoRDOTrigger=MCRECO:DBF:TRIGGERDBMC:2283,35,327&amp;quot;&lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
&lt;br /&gt;
The script to setup the ATLAS version for the job, which is also called by the executable script:&lt;br /&gt;
&lt;br /&gt;
 # cat my_release_testsetup.sh &lt;br /&gt;
 source $AtlasSetup/scripts/asetup.sh Athena,22.0.41.8,notest --platform x86_64-centos7-gcc8-opt --makeflags=&#039;$MAKEFLAGS&#039;&lt;br /&gt;
&lt;br /&gt;
NOTE: the last line of the xrls instructs ARC to download all the contents of the job directory once execution has FINISHED on the cluster. One can in alternative, specify a list of files of interest with a syntax for &amp;quot;local files&amp;quot; as used for the input files in the example xrls above.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Job_submission_and_management_with_NorduGrid_ARC&amp;diff=904</id>
		<title>Job submission and management with NorduGrid ARC</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Job_submission_and_management_with_NorduGrid_ARC&amp;diff=904"/>
		<updated>2025-07-28T10:30:14Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Prerequisites ===&lt;br /&gt;
Job submission to our cluster occurs via [https://www.nordugrid.org/arc/arc6/ Nordugrid ARC middleware]. You need:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Grid user certificate&#039;&#039;&#039;&lt;br /&gt;
This is best obtained from CERN: [https://ca.cern.ch/ca/ CERN CA]. Being registered with CERN HR and having a CERN computer account are the pre-requisites. Users with FNAL accounts can obtain a certificate from the relevant FERMILAB CA. From those not eligible to either solution above, there is the possibility of requesting a certificate via the [https://www.scits.unibe.ch/services/escience_certificates/ Swiss CA]&lt;br /&gt;
&lt;br /&gt;
In order to carry out the following steps, your user certificate and private key must be installed in the .globus subdirectory of your home directory on the client machine. You can find instructions on how to do that in [https://atlassoftwaredocs.web.cern.ch/ASWTutorial/basicSetup/grid_vo/#using-the-grid-certificate here]. While this focuses on ATLAS, the steps to extract and install the certificate are  independent from the specific experiment.&lt;br /&gt;
&lt;br /&gt;
Have the certificate registered with a Virtual Organisation (VO), e.g. for ATLAS, browse to the [https://lcg-voms2.cern.ch:8443/voms/atlas/user/home.action ATLAS VOMS service]. The prerequisite for this step is having installed your personal certificate in your browser&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;ARC client&#039;&#039;&#039;&lt;br /&gt;
Users operating from the UIs (recommended) can setup the client from CVMFS:&lt;br /&gt;
&lt;br /&gt;
 # export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase&lt;br /&gt;
 # source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh&lt;br /&gt;
 # lsetup emi&lt;br /&gt;
&lt;br /&gt;
Otherwise installing a local client on any linux machine is possible by following the instructions for your specific operating system: [http://www.nordugrid.org/documents/arc-client-install.html Install ARC client]&lt;br /&gt;
&lt;br /&gt;
=== arcinfo ===&lt;br /&gt;
First, verify that the user interface is installed and you can see the submission endpoints&lt;br /&gt;
&lt;br /&gt;
 # arcinfo --version&lt;br /&gt;
 # arcinfo -c ce00.lhep.unibe.ch&lt;br /&gt;
 # arcinfo -c ce03.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
=== arcproxy / voms-proxy-init/info ===&lt;br /&gt;
Generates proxy with your credentials with one of these two commands:&lt;br /&gt;
&lt;br /&gt;
 # arcproxy --voms &amp;lt;yourVO&amp;gt;  (e.g. atlas)&lt;br /&gt;
 # voms-proxy-int --voms &amp;lt;yourVO)&lt;br /&gt;
&lt;br /&gt;
View the generated proxy information with one of these two commands&lt;br /&gt;
&lt;br /&gt;
 # arcproxy -I&lt;br /&gt;
 # voms-proxy-info&lt;br /&gt;
&lt;br /&gt;
=== Job submission ===&lt;br /&gt;
Check to see if you can submit a trivial job test.xrsl:&lt;br /&gt;
&lt;br /&gt;
 # vi test.xrsl&lt;br /&gt;
 &amp;amp;&lt;br /&gt;
 (executable = /usr/bin/env)&lt;br /&gt;
 (jobname = &amp;quot;test&amp;quot;)&lt;br /&gt;
 (stdout=test.log)&lt;br /&gt;
 (* join stdout and stderr *)&lt;br /&gt;
 (join=yes)&lt;br /&gt;
 (gmlog=log)&lt;br /&gt;
 (* by default in minutes *)&lt;br /&gt;
 (wallTime=&amp;quot;100&amp;quot;)&lt;br /&gt;
 (* by default in MB *)&lt;br /&gt;
 (memory=1000)&lt;br /&gt;
 (queue=&amp;quot;CLUSTER&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce00.lhep.unibe.ch:&#039;&#039;&#039;8443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
 ...&lt;br /&gt;
 Job submitted with jobid: &amp;lt;job ID&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 # arcsub -d info -T arcrest --computing-element ce03.lhep.unibe.ch:&#039;&#039;&#039;443&#039;&#039;&#039; -o joblist.xml test.xrsl&lt;br /&gt;
&lt;br /&gt;
In the job description file, one can also add a list of input files to upload to the cluster at job submission and a list of output files to retrieve once execution is FINISHED.&lt;br /&gt;
&lt;br /&gt;
Additional resources on job submission and job description:&lt;br /&gt;
&lt;br /&gt;
[https://www.nordugrid.org/arc/arc6/users/submit_job.html?highlight=job%20description arc6 submit job]&lt;br /&gt;
&lt;br /&gt;
[http://www.nordugrid.org/documents/xrsl.pdf xrls job specification reference manual]&lt;br /&gt;
&lt;br /&gt;
=== arcls ===&lt;br /&gt;
Lists the job directory on the cluster&lt;br /&gt;
&lt;br /&gt;
 # arcls &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arcstat ===&lt;br /&gt;
Prints job status and some additional information from cluster, such as jobid, name, and status:&lt;br /&gt;
&lt;br /&gt;
 # arcstat -c ce00.lhep.unibe.ch&lt;br /&gt;
 # arcstat &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arccat ===&lt;br /&gt;
Prints the job stdout/stderr while the job is running on the cluster&lt;br /&gt;
&lt;br /&gt;
 # arccat &amp;lt;job id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== arckill === &lt;br /&gt;
Kill one job, a list of jobs in filename or all jobs&lt;br /&gt;
&lt;br /&gt;
 # arckill &amp;lt;job id&amp;gt;&lt;br /&gt;
 # arckill filename&lt;br /&gt;
 # arckill -a&lt;br /&gt;
&lt;br /&gt;
=== arcget ===&lt;br /&gt;
When the status of the task is FINISHED, you can download the results of one specific task or multiple tasks. These and up in directories created by the download client and named after the job id&#039;s. In the example above, only the stdout/stderr file test.log will be downloaded as there are no further output files specified:&lt;br /&gt;
&lt;br /&gt;
 # arcget &amp;lt;job id&amp;gt;&lt;br /&gt;
 Results stored at: 4fQLDmY3BxjnmmR0Xox1SiGmABFKDmABFKDmvxHKDmABFKDmiPhU9m&lt;br /&gt;
 Jobs processed: 1, successfully retrieved: 1, successfully cleaned: 1&lt;br /&gt;
&lt;br /&gt;
or results of all completed tasks:&lt;br /&gt;
&lt;br /&gt;
 # arcget -a&lt;br /&gt;
&lt;br /&gt;
or all tasks in the list in the joblist.xml file:&lt;br /&gt;
&lt;br /&gt;
 # arcget -i joblist.xml&lt;br /&gt;
&lt;br /&gt;
=== Advanced xrls using DTR and singularity ===&lt;br /&gt;
&lt;br /&gt;
In this example we will use the DTR (Data TRansfer) feature of ARC to specify input files to copy to the cluster and retrieve the outputs. In addition, the job is setup to run in a container which is the default way for ATLAS.&lt;br /&gt;
&lt;br /&gt;
 # cat arc_test-lhep-8core.xrls&lt;br /&gt;
 &amp;amp;(executable=5333700233-8core.sh)&lt;br /&gt;
 (inputfiles=(container_script-8core.sh container_script-8core.sh)(my_release_testsetup.sh   my_release_testsetup.sh)(HITS.25720417._000948.pool.root.1 HITS.25720417._000948.pool.root.1)(RDO.26819764._019417.pool.root.1 RDO.26819764._019417.pool.root.1)(RDO.26819764._016665.pool.root.1 RDO.26819764._016665.pool.root.1)(RDO.26811885._009174.pool.root.1 RDO.26811885._009174.pool.root.1)(RDO.26819766._027499.pool.root.1 RDO.26819766._027499.pool.root.1)(RDO.26811885._004931.pool.root.1 RDO.26811885._004931.pool.root.1))&lt;br /&gt;
 (arguments=&amp;quot;&amp;quot;)&lt;br /&gt;
 (gmlog=&amp;quot;gridlog&amp;quot;)&lt;br /&gt;
 (stdout=mc20_lhep_8core.log)(join=yes)&lt;br /&gt;
 (memory=&amp;quot;1800&amp;quot;)&lt;br /&gt;
 (wallTime=&amp;quot;2870&amp;quot;)&lt;br /&gt;
 (count = &amp;quot;8&amp;quot;)&lt;br /&gt;
 (countpernode = &amp;quot;8&amp;quot;)&lt;br /&gt;
 (queue= &amp;quot;CLUSTER&amp;quot; )&lt;br /&gt;
 (jobname=mc20_lhep_8core)&lt;br /&gt;
 (outputfiles=(&amp;quot;/&amp;quot; &amp;quot;&amp;quot;))&lt;br /&gt;
&lt;br /&gt;
This specific job is a clone of Panda JobID 5333700233 to run on 8 cores within singularity&lt;br /&gt;
&lt;br /&gt;
NOTE: in this example the input files are uploaded from the local user directory, meaning you have to download them with rucio first. You can find the URI of the files to download with:&lt;br /&gt;
&lt;br /&gt;
 # lsetup rucio&lt;br /&gt;
 # rucio list-file-replicas mc16_13TeV:HITS.25720417._000948.pool.root.1&lt;br /&gt;
&lt;br /&gt;
In alternative one can specify to download the file to the cluster directly from the source on the remote SE. In such case the input file definition in the xlrs is:&lt;br /&gt;
&lt;br /&gt;
 (... HITS.25720417._000948.pool.root.1 rucio://rucio-lb-prod.cern.ch/replicas/HITS.25720417._000948.pool.root.1 ...)&lt;br /&gt;
&lt;br /&gt;
The executable script:&lt;br /&gt;
&lt;br /&gt;
 # cat 5333700233-8core.sh &lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
 chmod +x container_script-8core.sh&lt;br /&gt;
 &lt;br /&gt;
 if [ -z &amp;quot;$ATLAS_LOCAL_ROOT_BASE&amp;quot; ]; then export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase; fi;&lt;br /&gt;
 export thePlatform=&amp;quot;x86_64-centos7-gcc8-opt&amp;quot;&lt;br /&gt;
 source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh -c $thePlatform -s /srv/my_release_testsetup.sh -r /srv/container_script-8core.sh -e &amp;quot;-i -p -B /grid/lustre&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
&lt;br /&gt;
The container script which is launched by the executable script with singularity:&lt;br /&gt;
&lt;br /&gt;
 # cat container_script-8core.sh&lt;br /&gt;
 export TMPDIR=/srv&lt;br /&gt;
 export GFORTRAN_TMPDIR=/srv&lt;br /&gt;
 export ATHENA_PROC_NUMBER=8&lt;br /&gt;
 export ATHENA_CORE_NUMBER=8&lt;br /&gt;
 /usr/bin/time -f &amp;quot;%P %M&amp;quot;  Reco_tf.py --inputHITSFile=&amp;quot;HITS.25720417._000948.pool.root.1&amp;quot; --asetup=&amp;quot;RDOtoRDOTrigger:Athena,21.0.20.12&amp;quot; --maxEvents=&amp;quot;10000&amp;quot; --multithreaded=&amp;quot;True&amp;quot; --postInclude &amp;quot;default:PyJobTransforms/UseFrontier.py&amp;quot; &amp;quot;all:PyJobTransforms/DisableFileSizeLimit.py&amp;quot; --preInclude &amp;quot;all:Campaigns/MC20a.py&amp;quot; --skipEvents=&amp;quot;0&amp;quot; --autoConfiguration=&amp;quot;everything&amp;quot; --conditionsTag &amp;quot;default:OFLCOND-MC16-SDR-RUN2-09&amp;quot; &amp;quot;RDOtoRDOTrigger:OFLCOND-MC16-SDR-RUN2-08-02a&amp;quot; --geometryVersion=&amp;quot;default:ATLAS-R2-2016-01-00-01&amp;quot; --runNumber=&amp;quot;364681&amp;quot; --digiSeedOffset1=&amp;quot;313&amp;quot; --digiSeedOffset2=&amp;quot;313&amp;quot; --inputRDO_BKGFile=&amp;quot;RDO.26811885._004931.pool.root.1,RDO.26811885._009174.pool.root.1,RDO.26819764._016665.pool.root.1,RDO.26819764._019417.pool.root.1,RDO.26819766._027499.pool.root.1&amp;quot; --AMITag=&amp;quot;r13167&amp;quot; --steering &amp;quot;doOverlay&amp;quot; &amp;quot;doRDO_TRIG&amp;quot; --outputAODFile=&amp;quot;AOD.27857976._000879.pool.root.1&amp;quot; --jobNumber=&amp;quot;313&amp;quot; --triggerConfig=&amp;quot;RDOtoRDOTrigger=MCRECO:DBF:TRIGGERDBMC:2283,35,327&amp;quot;&lt;br /&gt;
 echo &amp;quot;*** Time is: ***&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
&lt;br /&gt;
The script to setup the ATLAS version for the job, which is also called by the executable script:&lt;br /&gt;
&lt;br /&gt;
 # cat my_release_testsetup.sh &lt;br /&gt;
 source $AtlasSetup/scripts/asetup.sh Athena,22.0.41.8,notest --platform x86_64-centos7-gcc8-opt --makeflags=&#039;$MAKEFLAGS&#039;&lt;br /&gt;
&lt;br /&gt;
NOTE: the last line of the xrls instructs ARC to download all the contents of the job directory once execution has FINISHED on the cluster. One can in alternative, specify a list of files of interest with a syntax for &amp;quot;local files&amp;quot; as used for the input files in the example xrls above.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=MediaWiki:Sidebar&amp;diff=888</id>
		<title>MediaWiki:Sidebar</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=MediaWiki:Sidebar&amp;diff=888"/>
		<updated>2025-06-12T08:23:13Z</updated>

		<summary type="html">&lt;p&gt;Lhep: Created page with &amp;quot; * navigation ** mainpage|mainpage-description ** recentchanges-url|recentchanges ** randompage-url|randompage ** Neutrino_group | Neutrino Group ** helppage|help-mediawiki * SEARCH * TOOLBOX * LANGUAGES&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
* navigation&lt;br /&gt;
** mainpage|mainpage-description&lt;br /&gt;
** recentchanges-url|recentchanges&lt;br /&gt;
** randompage-url|randompage&lt;br /&gt;
** Neutrino_group | Neutrino Group&lt;br /&gt;
** helppage|help-mediawiki&lt;br /&gt;
* SEARCH&lt;br /&gt;
* TOOLBOX&lt;br /&gt;
* LANGUAGES&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=806</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=806"/>
		<updated>2025-04-15T14:21:10Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We have now re-deployed our GPU server as a UI, so all users with an account on our local LDAP can access it. The hostname is &#039;&#039;&#039;ui00.lhep.unibe.ch&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The procedure described below is therefore obsolete.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;s&amp;gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key &#039;&#039;&#039;_MUST_&#039;&#039;&#039; be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039; and open a bash shell on it. This is the special wn we have that is equipped with GPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred GBs on the local disk, which is where the user lands on. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environment should be very much like any UI, but relevant mountpoints much be made available to the container environment, adding the &#039;&#039;&#039;-B mountpoint&#039;&#039;&#039; flag. For example &#039;&#039;&#039;-B /cvmfs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You also have the option to use the local disk on the node as TMPDIR for singularity, bu setting the following before invoking singularity:&lt;br /&gt;
 export SINGULARITY_TMPDIR=/state/partition1&amp;lt;/s&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=805</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=805"/>
		<updated>2025-04-15T14:13:50Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We have now re-deployed our GPU server as a UI, so all users with an account on our local LDAP can access it. The hostname is &#039;&#039;&#039;ui00.lhep.unibe.ch&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The procedure described below is therefore obsolete.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;s&amp;gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key &#039;&#039;&#039;_MUST_&#039;&#039;&#039; be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039; and open a bash shell on it. This is the special wn we have that is equipped with GPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred GBs on the local disk, which is where the user lands on. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environment should be very much like any UI, but relevant mountpoints much be made available to the container environment, adding the &#039;&#039;&#039;-B mountpoint&#039;&#039;&#039; flag. For example &#039;&#039;&#039;-B /cvmfs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You also have the option to use the local disk on the node as TMPDIR for singularity, bu setting the following before invoking singularity:&lt;br /&gt;
 export SINGULARITY_TMPDIR=/state/partition1&amp;lt;/s&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=804</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=804"/>
		<updated>2025-04-15T14:11:53Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We have now re-deployed our GPU server as a UI, so all users with an account on our local LDAP can access it. The hostname is &#039;&#039;&#039;ui00.lhep.unibe.ch&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The procedure described below is therefore obsolete.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {{Strikethroughdiv| === NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key &#039;&#039;&#039;_MUST_&#039;&#039;&#039; be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039; and open a bash shell on it. This is the special wn we have that is equipped with GPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred GBs on the local disk, which is where the user lands on. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environment should be very much like any UI, but relevant mountpoints much be made available to the container environment, adding the &#039;&#039;&#039;-B mountpoint&#039;&#039;&#039; flag. For example &#039;&#039;&#039;-B /cvmfs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You also have the option to use the local disk on the node as TMPDIR for singularity, bu setting the following before invoking singularity:&lt;br /&gt;
 export SINGULARITY_TMPDIR=/state/partition1}}&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=803</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=803"/>
		<updated>2025-04-15T14:11:36Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We have now re-deployed our GPU server as a UI, so all users with an account on our local LDAP can access it. The hostname is &#039;&#039;&#039;ui00.lhep.unibe.ch&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The procedure described below is therefore obsolete.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {{Strikethroughdiv === NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key &#039;&#039;&#039;_MUST_&#039;&#039;&#039; be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039; and open a bash shell on it. This is the special wn we have that is equipped with GPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred GBs on the local disk, which is where the user lands on. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environment should be very much like any UI, but relevant mountpoints much be made available to the container environment, adding the &#039;&#039;&#039;-B mountpoint&#039;&#039;&#039; flag. For example &#039;&#039;&#039;-B /cvmfs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You also have the option to use the local disk on the node as TMPDIR for singularity, bu setting the following before invoking singularity:&lt;br /&gt;
 export SINGULARITY_TMPDIR=/state/partition1}}&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=799</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=799"/>
		<updated>2025-04-15T13:27:52Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We have now re-deployed our GPU server as a UI, so all users with an account on our local LDAP can access it. The hostname is &#039;&#039;&#039;ui00.lhep.unibe.ch&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The procedure described below is therefore obsolete.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {{Strikethroughdiv|=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key &#039;&#039;&#039;_MUST_&#039;&#039;&#039; be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039; and open a bash shell on it. This is the special wn we have that is equipped with GPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred GBs on the local disk, which is where the user lands on. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environment should be very much like any UI, but relevant mountpoints much be made available to the container environment, adding the &#039;&#039;&#039;-B mountpoint&#039;&#039;&#039; flag. For example &#039;&#039;&#039;-B /cvmfs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You also have the option to use the local disk on the node as TMPDIR for singularity, bu setting the following before invoking singularity:&lt;br /&gt;
 export SINGULARITY_TMPDIR=/state/partition1}}&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Network_access&amp;diff=798</id>
		<title>Network access</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Network_access&amp;diff=798"/>
		<updated>2025-04-15T13:20:47Z</updated>

		<summary type="html">&lt;p&gt;Lhep: /* Ethernet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Wireless / WiFi ===&lt;br /&gt;
&lt;br /&gt;
The Wireless network at the University of Bern is &#039;&#039;&#039;eduroam&#039;&#039;&#039;. It is accessible from almost everywhere on the University premises and also a lot of other higher education locations around Switzerland. To connect to it, use your campus account (&#039;&#039;ab12c345@unibe.ch&#039;&#039;). If your device doesn&#039;t manage to configure the settings automatically, you can also set them manually as such:&lt;br /&gt;
&lt;br /&gt;
* Network name: eduroam&lt;br /&gt;
* Security: WPA2 Enterprise&lt;br /&gt;
* EAP type: PEAP&lt;br /&gt;
* CA certificate: none / ignore&lt;br /&gt;
* Internal / Phase 2 authentication: MSCHAPv2&lt;br /&gt;
* Identity / User name: campusaccount@unibe.ch (as mentioned above, @campus.unibe.ch and @unibe.ch are both valid)&lt;br /&gt;
* External / Anonymous identity: anonymous@campus.unibe.ch&lt;br /&gt;
&lt;br /&gt;
=== Ethernet ===&lt;br /&gt;
&lt;br /&gt;
If you require a cabled ethernet connection, please contact the LHEP IT team and send them the MAC address for the device that requires the connection, location of the device, responsible person, and a short description of that device. They can also provide you with an ethernet cable if needed.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=LHEP_Printers&amp;diff=797</id>
		<title>LHEP Printers</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=LHEP_Printers&amp;diff=797"/>
		<updated>2024-10-29T11:44:11Z</updated>

		<summary type="html">&lt;p&gt;Lhep: /* Office Printers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;If none of the below fit your needs, check out UniPrint here:&lt;br /&gt;
&lt;br /&gt;
https://www.unibe.ch/university/campus_and_infrastructure/rund_um_computer/drucken_kopieren_scannen/uniprint/index_eng.html&lt;br /&gt;
&lt;br /&gt;
UniPrint is a service provided by the IT services department, so they&#039;re handling support and maintenance for those devices.&lt;br /&gt;
&lt;br /&gt;
=== General Purpose Printers ===&lt;br /&gt;
&lt;br /&gt;
These are the printers available for general purposes or specifically for larger scale jobs.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Printer name!! Model !! Location !! Toner !! Replacement Toner Location&lt;br /&gt;
|-&lt;br /&gt;
| printerb4color.aec.unibe.ch || HP M553 || ExWi B4 || 508A/X || ExWi B4 below the desk that the printer is on&lt;br /&gt;
|-&lt;br /&gt;
| printerb4black.aec.unibe.ch || HP M606 || ExWi B4 || 81A/X || ExWi B4 below the desk that the printer is on&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Office Printers ===&lt;br /&gt;
&lt;br /&gt;
These printers are located in the offices they are listed, and are generally reserved for the inhabitants of the offices they&#039;re in.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Printer name!! Model !! Location !! Toner !! Replacement Toner Location&lt;br /&gt;
|-&lt;br /&gt;
| printer101.aec.unibe.ch || HP M452dn || ExWi 101 || 410A/X || ExWi 101&lt;br /&gt;
|-&lt;br /&gt;
| printer133.aec.unibe.ch || HP M452dn || ExWi 133 || 410A/X || ExWi 101&lt;br /&gt;
|-&lt;br /&gt;
| printer136color.aec.unibe.ch || HP M454dn || ExWi 136 || 415A/X || ExWi 136&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;s&amp;gt;printer136black.aec.unibe.ch || Brother MFC-3760N&amp;lt;/s&amp;gt; || Retired/Galerie || TN2220 || ExWi 136&lt;br /&gt;
|-&lt;br /&gt;
| printer138.aec.unibe.ch || HP M452dn || ExWi 138 || 410A/X || ExWi 101&lt;br /&gt;
|-&lt;br /&gt;
| printer139color.aec.unibe.ch || HP M252dn || ExWi 139 || 201A/X || ExWi 101&lt;br /&gt;
|-&lt;br /&gt;
| printer139black.aec.unibe.ch || HP P3015 || ExWi 139 || 55A/X || ExWi 101&lt;br /&gt;
|-&lt;br /&gt;
| printera70.aec.unibe.ch || HP M452dn || ExWi A70 || 410A/X || ExWi 101&lt;br /&gt;
|-&lt;br /&gt;
| printera71.aec.unibe.ch || HP M452dn || ExWi A71 || 410A/X || ExWi 101&lt;br /&gt;
|-&lt;br /&gt;
| printerbhallway.aec.unibe.ch || HP CP2025 || ExWi ATLAS Hallway outside B87 door || 304A/X || ExWi 101&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Empty Toner Cartridges ===&lt;br /&gt;
&lt;br /&gt;
For HP cartridges, there&#039;s a collection box in ExWi B4, but they can always be dropped off in the IT office ExWi 101 and they will be properly recycled.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Network_access&amp;diff=796</id>
		<title>Network access</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Network_access&amp;diff=796"/>
		<updated>2023-09-26T16:17:04Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Wireless / WiFi ===&lt;br /&gt;
&lt;br /&gt;
The Wireless network at the University of Bern is &#039;&#039;&#039;eduroam&#039;&#039;&#039;. It is accessible from almost everywhere on the University premises and also a lot of other higher education locations around Switzerland. To connect to it, use your campus account (&#039;&#039;ab12c345@unibe.ch&#039;&#039;). If your device doesn&#039;t manage to configure the settings automatically, you can also set them manually as such:&lt;br /&gt;
&lt;br /&gt;
* Network name: eduroam&lt;br /&gt;
* Security: WPA2 Enterprise&lt;br /&gt;
* EAP type: PEAP&lt;br /&gt;
* CA certificate: none / ignore&lt;br /&gt;
* Internal / Phase 2 authentication: MSCHAPv2&lt;br /&gt;
* Identity / User name: campusaccount@unibe.ch (as mentioned above, @campus.unibe.ch and @unibe.ch are both valid)&lt;br /&gt;
* External / Anonymous identity: anonymous@campus.unibe.ch&lt;br /&gt;
&lt;br /&gt;
=== Ethernet ===&lt;br /&gt;
&lt;br /&gt;
If you require a cabled ethernet connection, please contact the LHEP IT team and send them the MAC address for the device that requires the connection with a short description of that device. They can also provide you with an ethernet cable if needed.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Network_access&amp;diff=795</id>
		<title>Network access</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Network_access&amp;diff=795"/>
		<updated>2023-09-26T16:16:23Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Wireless / WiFi ===&lt;br /&gt;
&lt;br /&gt;
The Wireless network at the University of Bern is &#039;&#039;&#039;eduroam&#039;&#039;&#039;. It should be accessible from almost everywhere on the University premises. To connect to it, use your campus account (&#039;&#039;ab12c345@unibe.ch&#039;&#039;). If your device doesn&#039;t manage to configure the settings automatically, you can also set them manually as such:&lt;br /&gt;
&lt;br /&gt;
* Network name: eduroam&lt;br /&gt;
* Security: WPA2 Enterprise&lt;br /&gt;
* EAP type: PEAP&lt;br /&gt;
* CA certificate: none / ignore&lt;br /&gt;
* Internal / Phase 2 authentication: MSCHAPv2&lt;br /&gt;
* Identity / User name: campusaccount@unibe.ch (as mentioned above, @campus.unibe.ch and @unibe.ch are both valid)&lt;br /&gt;
* External / Anonymous identity: anonymous@campus.unibe.ch&lt;br /&gt;
&lt;br /&gt;
=== Ethernet ===&lt;br /&gt;
&lt;br /&gt;
If you require a cabled ethernet connection, please contact the LHEP IT team and send them the MAC address for the device that requires the connection with a short description of that device. They can also provide you with an ethernet cable if needed.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Network_access&amp;diff=794</id>
		<title>Network access</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Network_access&amp;diff=794"/>
		<updated>2023-09-25T11:03:59Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Wireless / WiFi ===&lt;br /&gt;
&lt;br /&gt;
The Wireless network at the University of Bern is eduroam. It should be accessible from almost everywhere on the University premises. To connect to it, use your campus account (&#039;&#039;ab12c345@unibe.ch&#039;&#039;). If your device doesn&#039;t manage to configure the settings automatically, you can also set them manually as such:&lt;br /&gt;
&lt;br /&gt;
* Network name: eduroam&lt;br /&gt;
* Security: WPA2 Enterprise&lt;br /&gt;
* EAP type: PEAP&lt;br /&gt;
* CA certificate: none / ignore&lt;br /&gt;
* Internal / Phase 2 authentication: MSCHAPv2&lt;br /&gt;
* Identity / User name: campusaccount@unibe.ch (as mentioned above, @campus.unibe.ch and @unibe.ch are both valid)&lt;br /&gt;
* External / Anonymous identity: anonymous@campus.unibe.ch&lt;br /&gt;
&lt;br /&gt;
=== Ethernet ===&lt;br /&gt;
&lt;br /&gt;
If you require a cabled ethernet connection, please contact the LHEP IT team and send them the MAC address for the device that requires the connection with a short description of that device. They can also provide you with an ethernet cable if needed.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Network_access&amp;diff=793</id>
		<title>Network access</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Network_access&amp;diff=793"/>
		<updated>2023-09-25T11:03:43Z</updated>

		<summary type="html">&lt;p&gt;Lhep: Created page with &amp;quot;=== Wireless / WiFi ===  The Wireless network at the University of Bern is eduroam. It should be accessible from almost everywhere on the University premises. To connect to it, use your campus account (&amp;#039;&amp;#039;ab12c345@unibe.ch&amp;#039;&amp;#039;). If your device doesn&amp;#039;t manage to configure the settings automatically, you can also set them manually as such:  Network name: eduroam Security: WPA2 Enterprise EAP type: PEAP CA certificate: none / ignore Internal / Phase 2 authentication: MSCHAPv2...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Wireless / WiFi ===&lt;br /&gt;
&lt;br /&gt;
The Wireless network at the University of Bern is eduroam. It should be accessible from almost everywhere on the University premises. To connect to it, use your campus account (&#039;&#039;ab12c345@unibe.ch&#039;&#039;). If your device doesn&#039;t manage to configure the settings automatically, you can also set them manually as such:&lt;br /&gt;
&lt;br /&gt;
Network name: eduroam&lt;br /&gt;
Security: WPA2 Enterprise&lt;br /&gt;
EAP type: PEAP&lt;br /&gt;
CA certificate: none / ignore&lt;br /&gt;
Internal / Phase 2 authentication: MSCHAPv2&lt;br /&gt;
Identity / User name: campusaccount@unibe.ch (as mentioned above, @campus.unibe.ch and @unibe.ch are both valid)&lt;br /&gt;
External / Anonymous identity: anonymous@campus.unibe.ch&lt;br /&gt;
&lt;br /&gt;
=== Ethernet ===&lt;br /&gt;
&lt;br /&gt;
If you require a cabled ethernet connection, please contact the LHEP IT team and send them the MAC address for the device that requires the connection with a short description of that device. They can also provide you with an ethernet cable if needed.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=792</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=792"/>
		<updated>2023-09-25T10:53:27Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
== Wiki portal for the Albert Einstein Institute for Fundamental Physics / Laboratory for High Energy Physics ==&lt;br /&gt;
&lt;br /&gt;
=== University network ===&lt;br /&gt;
&lt;br /&gt;
Information regarding [[Network access]] at LHEP&lt;br /&gt;
&lt;br /&gt;
=== Printer information ===&lt;br /&gt;
&lt;br /&gt;
[[LHEP Printers|Information about the printers and their location at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== Software===&lt;br /&gt;
&lt;br /&gt;
You can find [[Software LHEP|Information about Software usage at LHEP]] here for Microsoft 365, Adobe, LabView and COMSOL.&lt;br /&gt;
&lt;br /&gt;
=== How to access the computing cluster ===&lt;br /&gt;
&lt;br /&gt;
[[Job submission and management with NorduGrid ARC]]&lt;br /&gt;
&lt;br /&gt;
[[Interactive access to GPUs]]&lt;br /&gt;
&lt;br /&gt;
=== Group Wikis ===&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.lhep.unibe.ch/neutron/ Neutron Wiki]&lt;br /&gt;
&lt;br /&gt;
=== Archive ===&lt;br /&gt;
&lt;br /&gt;
Old wiki [[Old Wiki Archive|archive]]&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=791</id>
		<title>Software LHEP</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=791"/>
		<updated>2023-09-25T10:42:30Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Microsoft 365 ===&lt;br /&gt;
&lt;br /&gt;
Any application in the Microsoft 365 suite can either be run in browser or locally installed.&lt;br /&gt;
&lt;br /&gt;
Go to http://portal.office.com/ and log in with your campus account (&#039;&#039;first.last@unibe.ch&#039;&#039;). Then either choose the app that you want to use in the left sidebar, or click &#039;&#039;Install Apps&#039;&#039; in the top right, which will start a download.&lt;br /&gt;
&lt;br /&gt;
=== Adobe ===&lt;br /&gt;
&lt;br /&gt;
The whole Adobe Creative suite is free to use for anyone from the University. Go to https://creativecloud.adobe.com/apps/download/creative-cloud to download the Creative Cloud App. When asked for an account, use your your campus account (&#039;&#039;first.last@unibe.ch&#039;&#039;). Then you can install any app you wish from the suite.&lt;br /&gt;
&lt;br /&gt;
=== LabVIEW ===&lt;br /&gt;
LabVIEW is free for anyone at the University. &amp;lt;br/&amp;gt;&lt;br /&gt;
Download the current version directly from National Instruments: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/myni/ &amp;lt;br/&amp;gt;&lt;br /&gt;
Create a new account if needed.&lt;br /&gt;
&lt;br /&gt;
Direct link to download: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/en-us/support/downloads/software-products/download.labview.html &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installation, when asked for a license, choose &#039;&#039;&amp;quot;connect to volume license server&amp;quot;&#039;&#039; and use &amp;lt;code&amp;gt;id-lizenz-svr1.campus.unibe.ch:28000&amp;lt;/code&amp;gt; as the server.&lt;br /&gt;
Then, fill out the user and client info, which needs to be identical, and after that, select the group &#039;&#039;&amp;quot;Labview&amp;quot;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
This will send a license request to the software group at IT services. You&#039;ll receive an email once it has been approved.&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
If you require access to COMSOL, please contact the LHEP IT team.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=790</id>
		<title>Software LHEP</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=790"/>
		<updated>2023-09-25T10:21:26Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Microsoft 365 ===&lt;br /&gt;
&lt;br /&gt;
Any application in the Microsoft 365 suite can either be run in browser or locally installed.&lt;br /&gt;
&lt;br /&gt;
Go to http://portal.office.com/ and log in with your campus account (&#039;&#039;first.last@unibe.ch&#039;&#039;). Then either choose the app that you want to use in the left sidebar, or click &#039;&#039;Install Apps&#039;&#039; in the top right, which will start a download.&lt;br /&gt;
&lt;br /&gt;
=== Adobe ===&lt;br /&gt;
&lt;br /&gt;
The whole Adobe Creative suite is free to use for anyone from the University. Go to https://creativecloud.adobe.com/apps/download/creative-cloud to download the Creative Cloud App. When asked for an account, use your email as username and the password associated with your campus account. Then you can install any app you wish from the suite.&lt;br /&gt;
&lt;br /&gt;
=== LabVIEW ===&lt;br /&gt;
LabVIEW is free for anyone at the University. &amp;lt;br/&amp;gt;&lt;br /&gt;
Download the current version directly from National Instruments: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/myni/ &amp;lt;br/&amp;gt;&lt;br /&gt;
Create a new account if needed.&lt;br /&gt;
&lt;br /&gt;
Direct link to download: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/en-us/support/downloads/software-products/download.labview.html &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installation, when asked for a license, choose &#039;&#039;&amp;quot;connect to volume license server&amp;quot;&#039;&#039; and use &amp;lt;code&amp;gt;id-lizenz-svr1.campus.unibe.ch:28000&amp;lt;/code&amp;gt; as the server.&lt;br /&gt;
Then, fill out the user and client info, which needs to be identical, and after that, select the group &#039;&#039;&amp;quot;Labview&amp;quot;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
This will send a license request to the software group at IT services. You&#039;ll receive an email once it has been approved.&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
If you require access to COMSOL, please contact the LHEP IT team.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=789</id>
		<title>Software LHEP</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=789"/>
		<updated>2023-09-25T10:21:12Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Microsoft 365 ===&lt;br /&gt;
&lt;br /&gt;
Any application in the Microsoft 365 suite can either be run in browser or locally installed.&lt;br /&gt;
&lt;br /&gt;
Go to http://portal.office.com/ and log in with your campus account (first.last@unibe.ch). Then either choose the app that you want to use in the left sidebar, or click &#039;&#039;Install Apps&#039;&#039; in the top right, which will start a download.&lt;br /&gt;
&lt;br /&gt;
=== Adobe ===&lt;br /&gt;
&lt;br /&gt;
The whole Adobe Creative suite is free to use for anyone from the University. Go to https://creativecloud.adobe.com/apps/download/creative-cloud to download the Creative Cloud App. When asked for an account, use your email as username and the password associated with your campus account. Then you can install any app you wish from the suite.&lt;br /&gt;
&lt;br /&gt;
=== LabVIEW ===&lt;br /&gt;
LabVIEW is free for anyone at the University. &amp;lt;br/&amp;gt;&lt;br /&gt;
Download the current version directly from National Instruments: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/myni/ &amp;lt;br/&amp;gt;&lt;br /&gt;
Create a new account if needed.&lt;br /&gt;
&lt;br /&gt;
Direct link to download: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/en-us/support/downloads/software-products/download.labview.html &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installation, when asked for a license, choose &#039;&#039;&amp;quot;connect to volume license server&amp;quot;&#039;&#039; and use &amp;lt;code&amp;gt;id-lizenz-svr1.campus.unibe.ch:28000&amp;lt;/code&amp;gt; as the server.&lt;br /&gt;
Then, fill out the user and client info, which needs to be identical, and after that, select the group &#039;&#039;&amp;quot;Labview&amp;quot;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
This will send a license request to the software group at IT services. You&#039;ll receive an email once it has been approved.&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
If you require access to COMSOL, please contact the LHEP IT team.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=788</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=788"/>
		<updated>2023-09-25T10:18:28Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
== Wiki portal for the Albert Einstein Institute for Fundamental Physics / Laboratory for High Energy Physics ==&lt;br /&gt;
&lt;br /&gt;
=== Printer information ===&lt;br /&gt;
&lt;br /&gt;
[[LHEP Printers|Information about the printers and their location at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== Software===&lt;br /&gt;
&lt;br /&gt;
You can find [[Software LHEP|Information about Software usage at LHEP]] here for Microsoft 365, Adobe, LabView and COMSOL.&lt;br /&gt;
&lt;br /&gt;
=== How to access the computing cluster ===&lt;br /&gt;
&lt;br /&gt;
[[Job submission and management with NorduGrid ARC]]&lt;br /&gt;
&lt;br /&gt;
[[Interactive access to GPUs]]&lt;br /&gt;
&lt;br /&gt;
=== Group Wikis ===&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.lhep.unibe.ch/neutron/ Neutron Wiki]&lt;br /&gt;
&lt;br /&gt;
=== Archive ===&lt;br /&gt;
&lt;br /&gt;
Old wiki [[Old Wiki Archive|archive]]&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=787</id>
		<title>Software LHEP</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=787"/>
		<updated>2023-09-25T10:15:32Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Microsoft 365 ===&lt;br /&gt;
&lt;br /&gt;
Any application in the Microsoft 365 suite can either be run in browser or locally installed.&lt;br /&gt;
&lt;br /&gt;
Go to http://portal.office.com/ and log in with your campus account. Then either choose the app that you want to use in the left sidebar, or click &#039;&#039;Install Apps&#039;&#039; in the top right, which will start a download.&lt;br /&gt;
&lt;br /&gt;
=== Adobe ===&lt;br /&gt;
&lt;br /&gt;
The whole Adobe Creative suite is free to use for anyone from the University. Go to https://creativecloud.adobe.com/apps/download/creative-cloud to download the Creative Cloud App. When asked for an account, use your email as username and the password associated with your campus account. Then you can install any app you wish from the suite.&lt;br /&gt;
&lt;br /&gt;
=== LabVIEW ===&lt;br /&gt;
LabVIEW is free for anyone at the University. &amp;lt;br/&amp;gt;&lt;br /&gt;
Download the current version directly from National Instruments: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/myni/ &amp;lt;br/&amp;gt;&lt;br /&gt;
Create a new account if needed.&lt;br /&gt;
&lt;br /&gt;
Direct link to download: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/en-us/support/downloads/software-products/download.labview.html &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installation, when asked for a license, choose &#039;&#039;&amp;quot;connect to volume license server&amp;quot;&#039;&#039; and use &amp;lt;code&amp;gt;id-lizenz-svr1.campus.unibe.ch:28000&amp;lt;/code&amp;gt; as the server.&lt;br /&gt;
Then, fill out the user and client info, which needs to be identical, and after that, select the group &#039;&#039;&amp;quot;Labview&amp;quot;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
This will send a license request to the software group at IT services. You&#039;ll receive an email once it has been approved.&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
If you require access to COMSOL, please contact the LHEP IT team.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=786</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=786"/>
		<updated>2023-09-25T10:08:40Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
== Wiki portal for the Albert Einstein Institute for Fundamental Physics / Laboratory for High Energy Physics ==&lt;br /&gt;
&lt;br /&gt;
=== Printer information ===&lt;br /&gt;
&lt;br /&gt;
[[LHEP Printers|Information about the printers and their location at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== Software===&lt;br /&gt;
&lt;br /&gt;
[[Software LHEP|Information about Software usage at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== How to access the computing cluster ===&lt;br /&gt;
&lt;br /&gt;
[[Job submission and management with NorduGrid ARC]]&lt;br /&gt;
&lt;br /&gt;
[[Interactive access to GPUs]]&lt;br /&gt;
&lt;br /&gt;
=== Group Wikis ===&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.lhep.unibe.ch/neutron/ Neutron Wiki]&lt;br /&gt;
&lt;br /&gt;
=== Archive ===&lt;br /&gt;
&lt;br /&gt;
Old wiki [[Old Wiki Archive|archive]]&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=785</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=785"/>
		<updated>2023-09-25T10:07:59Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
== Wiki portal for the Albert Einstein Institute for Fundamental Physics / Laboratory for High Energy Physics ==&lt;br /&gt;
&lt;br /&gt;
=== LHEP Printer information ===&lt;br /&gt;
&lt;br /&gt;
[[LHEP Printers|Information about the printers and their location at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== Software at LHEP ===&lt;br /&gt;
&lt;br /&gt;
[[Software LHEP|Information about Software usage at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== How to access the computing cluster ===&lt;br /&gt;
&lt;br /&gt;
[[Job submission and management with NorduGrid ARC]]&lt;br /&gt;
&lt;br /&gt;
[[Interactive access to GPUs]]&lt;br /&gt;
&lt;br /&gt;
=== Group Wikis ===&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.lhep.unibe.ch/neutron/ Neutron Wiki]&lt;br /&gt;
&lt;br /&gt;
=== Archive ===&lt;br /&gt;
&lt;br /&gt;
Old wiki [[Old Wiki Archive|archive]]&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=784</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=784"/>
		<updated>2023-09-25T10:07:44Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
== Wiki portal for the Albert Einstein Institute for Fundamental Physics / Laboratory for High Energy Physics ==&lt;br /&gt;
&lt;br /&gt;
=== LHEP Printer information ===&lt;br /&gt;
&lt;br /&gt;
[[LHEP Printers|Information about the printers and their location at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== Software at LHEP ===&lt;br /&gt;
&lt;br /&gt;
[[Software LHEP|Information about Software usage at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== How to access the computing cluster ===&lt;br /&gt;
&lt;br /&gt;
[[Job submission and management with NorduGrid ARC]]&lt;br /&gt;
&lt;br /&gt;
[[Interactive access to GPUs]]&lt;br /&gt;
&lt;br /&gt;
=== Group sub-wikis ===&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.lhep.unibe.ch/neutron/ Neutron Wiki]&lt;br /&gt;
&lt;br /&gt;
=== Archive ===&lt;br /&gt;
&lt;br /&gt;
Old wiki [[Old Wiki Archive|archive]]&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=783</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=783"/>
		<updated>2023-09-25T10:07:26Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
== Wiki portal for the Albert Einstein Institute for Fundamental Physics / Laboratory for High Energy Physics ==&lt;br /&gt;
&lt;br /&gt;
=== LHEP Printer information ===&lt;br /&gt;
&lt;br /&gt;
[[LHEP Printers|Information about the printers and their location at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== Software at LHEP ===&lt;br /&gt;
&lt;br /&gt;
[[Software LHEP|Information about Software usage at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== How to access the computing cluster ===&lt;br /&gt;
&lt;br /&gt;
[[Job submission and management with NorduGrid ARC]]&lt;br /&gt;
&lt;br /&gt;
[[Interactive access to GPUs]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Group sub-wikis ===&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.lhep.unibe.ch/neutron/ Neutron Wiki]&lt;br /&gt;
&lt;br /&gt;
=== Archive ===&lt;br /&gt;
&lt;br /&gt;
Old wiki [[Old Wiki Archive|archive]]&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=782</id>
		<title>Software LHEP</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=782"/>
		<updated>2023-09-05T16:24:21Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Microsoft 365 ===&lt;br /&gt;
&lt;br /&gt;
Any application in the Microsoft 365 suite can either be run in browser or locally installed.&lt;br /&gt;
&lt;br /&gt;
Go to http://portal.office.com/ and log in with your campus account. Then either choose the app that you want to use in the left sidebar, or click &#039;&#039;Install Apps&#039;&#039; in the top right, which will start a download.&lt;br /&gt;
&lt;br /&gt;
=== Adobe ===&lt;br /&gt;
&lt;br /&gt;
The whole Adobe Creative suite is free to use for anyone from the University. Go to https://creativecloud.adobe.com/apps/download/creative-cloud to download the Creative Cloud App. When asked for an account, use your email as username and the password associated with your campus account. Then you can install any app you wish from the suite.&lt;br /&gt;
&lt;br /&gt;
=== LabVIEW ===&lt;br /&gt;
LabVIEW is free for anyone at the University. &amp;lt;br/&amp;gt;&lt;br /&gt;
Download the current version directly from National Instruments: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/myni/ &amp;lt;br/&amp;gt;&lt;br /&gt;
Create a new account if needed.&lt;br /&gt;
&lt;br /&gt;
Direct link to download: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/en-us/support/downloads/software-products/download.labview.html &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installation, when asked for a license, choose &#039;&#039;&amp;quot;connect to volume license server&amp;quot;&#039;&#039; and use &amp;lt;code&amp;gt;id-lizenz-svr1.campus.unibe.ch:28000&amp;lt;/code&amp;gt; as the server.&lt;br /&gt;
Then, fill out the user and client info, which needs to be identical, and after that, select the group &#039;&#039;&amp;quot;Labview&amp;quot;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
This will send a license request to the software group at IT services. You&#039;ll receive an email once it has been approved.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=781</id>
		<title>Software LHEP</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=781"/>
		<updated>2023-09-05T16:23:35Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Microsoft 365 ===&lt;br /&gt;
&lt;br /&gt;
Any application in the Microsoft 365 suite can either be run in browser or locally installed.&lt;br /&gt;
&lt;br /&gt;
Go to http://portal.office.com/ and log in with your campus account. Then either choose the app that you want to use in the left sidebar, or click &#039;&#039;Install Apps&#039;&#039; in the top right, which will start a download.&lt;br /&gt;
&lt;br /&gt;
=== Adobe ===&lt;br /&gt;
&lt;br /&gt;
The whole Adobe Creative suite is free to use for anyone from the University. Go to https://creativecloud.adobe.com/apps/download/creative-cloud to download the Creative Cloud App. When asked for an account, use your email as username, and campusaccount password. Then you can install any app you wish from the suite.&lt;br /&gt;
&lt;br /&gt;
=== LabVIEW ===&lt;br /&gt;
LabVIEW is free for anyone at the University. &amp;lt;br/&amp;gt;&lt;br /&gt;
Download the current version directly from National Instruments: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/myni/ &amp;lt;br/&amp;gt;&lt;br /&gt;
Create a new account if needed.&lt;br /&gt;
&lt;br /&gt;
Direct link to download: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/en-us/support/downloads/software-products/download.labview.html &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installation, when asked for a license, choose &#039;&#039;&amp;quot;connect to volume license server&amp;quot;&#039;&#039; and use &amp;lt;code&amp;gt;id-lizenz-svr1.campus.unibe.ch:28000&amp;lt;/code&amp;gt; as the server.&lt;br /&gt;
Then, fill out the user and client info, which needs to be identical, and after that, select the group &#039;&#039;&amp;quot;Labview&amp;quot;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
This will send a license request to the software group at IT services. You&#039;ll receive an email once it has been approved.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=780</id>
		<title>Software LHEP</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=780"/>
		<updated>2023-09-05T16:19:18Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Microsoft 365 ===&lt;br /&gt;
&lt;br /&gt;
Any application in the Microsoft 365 suite can either be run in browser or locally installed.&lt;br /&gt;
&lt;br /&gt;
Go to http://portal.office.com/ and log in with your campus account. Then either choose the app that you want to use in the left sidebar, or click &#039;&#039;Install Apps&#039;&#039; in the top right, which will start a download.&lt;br /&gt;
&lt;br /&gt;
=== LabVIEW ===&lt;br /&gt;
LabVIEW is free for anyone at the University. &amp;lt;br/&amp;gt;&lt;br /&gt;
Download the current version directly from National Instruments: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/myni/ &amp;lt;br/&amp;gt;&lt;br /&gt;
Create a new account if needed.&lt;br /&gt;
&lt;br /&gt;
Direct link to download: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/en-us/support/downloads/software-products/download.labview.html &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installation, when asked for a license, choose &#039;&#039;&amp;quot;connect to volume license server&amp;quot;&#039;&#039; and use &amp;lt;code&amp;gt;id-lizenz-svr1.campus.unibe.ch:28000&amp;lt;/code&amp;gt; as the server.&lt;br /&gt;
Then, fill out the user and client info, which needs to be identical, and after that, select the group &#039;&#039;&amp;quot;Labview&amp;quot;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
This will send a license request to the software group at IT services. You&#039;ll receive an email once it has been approved.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=779</id>
		<title>Software LHEP</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Software_LHEP&amp;diff=779"/>
		<updated>2023-09-05T16:15:48Z</updated>

		<summary type="html">&lt;p&gt;Lhep: Created page with &amp;quot;=== LabVIEW === LabVIEW is free for anyone at the University. &amp;lt;br/&amp;gt; Download the current version directly from National Instruments: &amp;lt;br/&amp;gt; https://www.ni.com/myni/ &amp;lt;br/&amp;gt; Create a new account if needed.  Direct link to download: &amp;lt;br/&amp;gt; https://www.ni.com/en-us/support/downloads/software-products/download.labview.html &amp;lt;br/&amp;gt;  After installation, when asked for a license, choose &amp;#039;&amp;#039;&amp;quot;connect to volume license server&amp;quot;&amp;#039;&amp;#039; and use &amp;lt;code&amp;gt;id-lizenz-svr1.campus.unibe.ch:28000&amp;lt;/code&amp;gt; a...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== LabVIEW ===&lt;br /&gt;
LabVIEW is free for anyone at the University. &amp;lt;br/&amp;gt;&lt;br /&gt;
Download the current version directly from National Instruments: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/myni/ &amp;lt;br/&amp;gt;&lt;br /&gt;
Create a new account if needed.&lt;br /&gt;
&lt;br /&gt;
Direct link to download: &amp;lt;br/&amp;gt;&lt;br /&gt;
https://www.ni.com/en-us/support/downloads/software-products/download.labview.html &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installation, when asked for a license, choose &#039;&#039;&amp;quot;connect to volume license server&amp;quot;&#039;&#039; and use &amp;lt;code&amp;gt;id-lizenz-svr1.campus.unibe.ch:28000&amp;lt;/code&amp;gt; as the server.&lt;br /&gt;
Then, fill out the user and client info, which needs to be identical, and after that, select the group &#039;&#039;&amp;quot;Labview&amp;quot;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
This will send a license request to the software group at IT services. You&#039;ll receive an email once it has been approved.&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=778</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=778"/>
		<updated>2023-09-05T16:14:01Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
== Wiki portal for the Albert Einstein Institute for Fundamental Physics / Laboratory for High Energy Physics ==&lt;br /&gt;
&lt;br /&gt;
=== How to access the computing cluster ===&lt;br /&gt;
&lt;br /&gt;
[[Job submission and management with NorduGrid ARC]]&lt;br /&gt;
&lt;br /&gt;
[[Interactive access to GPUs]]&lt;br /&gt;
&lt;br /&gt;
=== LHEP Printer information ===&lt;br /&gt;
&lt;br /&gt;
[[LHEP Printers|Information about the printers and their location at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== Software at LHEP ===&lt;br /&gt;
&lt;br /&gt;
[[Software LHEP|Information about Software usage at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== Group sub-wikis ===&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.lhep.unibe.ch/neutron/ Neutron Wiki]&lt;br /&gt;
&lt;br /&gt;
=== Archive ===&lt;br /&gt;
&lt;br /&gt;
Old wiki [[Old Wiki Archive|archive]]&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=777</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=777"/>
		<updated>2023-06-09T14:46:39Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key &#039;&#039;&#039;_MUST_&#039;&#039;&#039; be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039; and open a bash shell on it. This is the special wn we have that is equipped with GPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred GBs on the local disk, which is where the user lands on. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environment should be very much like any UI, but relevant mountpoints much be made available to the container environment, adding the &#039;&#039;&#039;-B mountpoint&#039;&#039;&#039; flag. For example &#039;&#039;&#039;-B /cvmfs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You also have the option to use the local disk on the node as TMPDIR for singularity, bu setting the following before invoking singularity:&lt;br /&gt;
 export SINGULARITY_TMPDIR=/state/partition1&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=776</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=776"/>
		<updated>2023-06-09T14:45:05Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key &#039;&#039;&#039;_MUST_&#039;&#039;&#039; be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039; and open a bash shell on it. This is the special wn we have that is equipped with GPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred GBs on the local disk, which is where the user lands on. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environment should be very much like any UI, but relevant mountpoints much be made available to the container environment, adding the &#039;&#039;&#039;-B mountpoint&#039;&#039;&#039; flag. For example &#039;&#039;&#039;-B /cvmfs&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=775</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=775"/>
		<updated>2023-06-02T11:16:06Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key &#039;&#039;&#039;_MUST_&#039;&#039;&#039; be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039; and open a bash shell on it. This is the special wn we have that is equipped with GPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred GBs on the local disk, which is where the user lands on. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environment should be very much like any UI (includibf &#039;&#039;&#039;CVMFS&#039;&#039;&#039;, and additionally, the user should be able to make use of one or more GPUs&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=774</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=774"/>
		<updated>2023-06-02T11:15:07Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key &#039;&#039;&#039;_MUST_&#039;&#039;&#039; be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039;, which is the special wn we have that is equipped with CPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred GBs on the local disk, which is where the user lands on. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environment should be very much like any UI (includibf &#039;&#039;&#039;CVMFS&#039;&#039;&#039;, and additionally, the user should be able to make use of one or more GPUs&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=773</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=773"/>
		<updated>2023-06-02T11:14:50Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key &#039;&#039;&#039;_MUST_&#039;&#039;&#039;) be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039;, which is the special wn we have that is equipped with CPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred GBs on the local disk, which is where the user lands on. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environment should be very much like any UI (includibf &#039;&#039;&#039;CVMFS&#039;&#039;&#039;, and additionally, the user should be able to make use of one or more GPUs&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=772</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=772"/>
		<updated>2023-06-02T11:14:26Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key _&#039;&#039;&#039;MUST&#039;&#039;&#039;) be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039;, which is the special wn we have that is equipped with CPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred GBs on the local disk, which is where the user lands on. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environment should be very much like any UI (includibf &#039;&#039;&#039;CVMFS&#039;&#039;&#039;, and additionally, the user should be able to make use of one or more GPUs&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=771</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=771"/>
		<updated>2023-06-02T11:13:54Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key MUST be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039;, which is the special wn we have that is equipped with CPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred GBs on the local disk, which is where the user lands on. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environment should be very much like any UI (includibf &#039;&#039;&#039;CVMFS&#039;&#039;&#039;, and additionally, the user should be able to make use of one or more GPUs&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=770</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=770"/>
		<updated>2023-06-02T11:11:46Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key MUST be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039;, which is the special wn we have that is equipped with CPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred gygabytes on the local disk, which is evere the user lands on. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environmens should be very much like any UI, and should be able to make use of one or more GPUs&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=769</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=769"/>
		<updated>2023-06-02T11:10:31Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key MUST be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039;, which is the special wn we have that is equipped with CPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html [https://slurm.schedmd.com/srun.html]]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred gygabytes. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environmens should be very much like any UI, and should be able to make use of one or more GPUs&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=768</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=768"/>
		<updated>2023-06-02T11:09:57Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key MUST be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039;, which is the special wn we have that is equipped with CPUs. The &#039;&#039;&#039;-t&#039;&#039;&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;&#039;&#039;--mem-per-cpu=&#039;&#039;&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred gygabytes. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environmens should be very much like any UI, and should be able to make use of one or more GPUs&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=767</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=767"/>
		<updated>2023-06-02T11:09:00Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key MUST be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;&#039;&#039;wn-1-1&#039;&#039;&#039;, which is the special wn we have that is equipped with CPUs. The &#039;-t&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;--mem-per-cpu=&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred gygabytes. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environmens should be very much like any UI, and should be able to make use of one or more GPUs&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=766</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=766"/>
		<updated>2023-06-02T11:08:33Z</updated>

		<summary type="html">&lt;p&gt;Lhep: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. The underlying resources are managed by Slurm, so the user will interact via Slurm client commands, e.g. &#039;srun&#039;. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key MUST be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;wn-1-1&#039;, which is the special wn we have that is equipped with CPUs. The &#039;-t&#039; flag reserves a runtime of 100 minutes, 2GB or RAM are allocated by default. In order to tweak your user request for resources (e.g. &#039;--mem-per-cpu=&#039;), please read the &#039;srun&#039; docs: [https://slurm.schedmd.com/srun.html]&lt;br /&gt;
&lt;br /&gt;
There is a local storage area of several hundred gygabytes. For interactive work, inputs, code, containers, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code/container interactively, the environmens should be very much like any UI, and should be able to make use of one or more GPUs&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=765</id>
		<title>Interactive access to GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Interactive_access_to_GPUs&amp;diff=765"/>
		<updated>2023-06-02T10:57:19Z</updated>

		<summary type="html">&lt;p&gt;Lhep: Created page with &amp;quot;=== NOTE === Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. Work in progress ...  === Prerequisites === Users wishing to use special resources like GPUs should follow they following steps&amp;quot;  * &amp;#039;&amp;#039;&amp;#039;Provide their ssh public key&amp;#039;&amp;#039;&amp;#039;...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== NOTE ===&lt;br /&gt;
Due to lack of demand, this procedure has been established for ad-hoc users for limited time frames. As such, it is not very polished and might need adjustments. We will improve it in case higher demand arises. For the time being, only one user on the cluster is mapped to incoming ssh requests. Work in progress ...&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
Users wishing to use special resources like GPUs should follow they following steps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Provide their ssh public key&#039;&#039;&#039;&lt;br /&gt;
NOTE: the key MUST be protected by a passphrase. We will proactively remove any key that is not.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;System access&#039;&#039;&#039;&lt;br /&gt;
   ssh atlasch020@ce01.lhep.unibe.ch&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Start an interactive shell&#039;&#039;&#039;&lt;br /&gt;
   srun --partition=CLUSTER-GPU --gres=gpu:1 -t 100 --pty bash&lt;br /&gt;
&lt;br /&gt;
This will land the user on the worker node &#039;wn-1-1&#039;, which is the special wn we have that is equipped with CPUs. There is a local storage area of several hundred gygabytes. For interactive work, inputs, code, etc should be copied over to this area first. We encourage to copy outputs out after every run to free up space on the local area. More complex data management schemes are possible, should be discussed according to the user(s) needs&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Run your code&#039;&#039;&#039;&lt;br /&gt;
At this stage, you can run your code interactively and should be able to make use of one or more GPUs&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=764</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=764"/>
		<updated>2023-06-02T10:37:10Z</updated>

		<summary type="html">&lt;p&gt;Lhep: /* How to access the computing cluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
== Wiki portal for the Albert Einstein Institute for Fundamental Physics / Laboratory for High Energy Physics ==&lt;br /&gt;
&lt;br /&gt;
=== How to access the computing cluster ===&lt;br /&gt;
&lt;br /&gt;
[[Job submission and management with NorduGrid ARC]]&lt;br /&gt;
&lt;br /&gt;
[[Interactive access to GPUs]]&lt;br /&gt;
&lt;br /&gt;
=== LHEP Printer information ===&lt;br /&gt;
&lt;br /&gt;
[[LHEP Printers|Information about the printers and their location at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== Group sub-wikis ===&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.lhep.unibe.ch/neutron/ Neutron Wiki]&lt;br /&gt;
&lt;br /&gt;
=== Archive ===&lt;br /&gt;
&lt;br /&gt;
Old wiki [[Old Wiki Archive|archive]]&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=763</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=763"/>
		<updated>2023-06-02T10:36:55Z</updated>

		<summary type="html">&lt;p&gt;Lhep: /* How to access the computing cluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
== Wiki portal for the Albert Einstein Institute for Fundamental Physics / Laboratory for High Energy Physics ==&lt;br /&gt;
&lt;br /&gt;
=== How to access the computing cluster ===&lt;br /&gt;
&lt;br /&gt;
[[Job submission and management with NorduGrid ARC]]&lt;br /&gt;
[[Interactive access to GPUs]]&lt;br /&gt;
&lt;br /&gt;
=== LHEP Printer information ===&lt;br /&gt;
&lt;br /&gt;
[[LHEP Printers|Information about the printers and their location at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== Group sub-wikis ===&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.lhep.unibe.ch/neutron/ Neutron Wiki]&lt;br /&gt;
&lt;br /&gt;
=== Archive ===&lt;br /&gt;
&lt;br /&gt;
Old wiki [[Old Wiki Archive|archive]]&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=762</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=762"/>
		<updated>2023-06-02T10:36:37Z</updated>

		<summary type="html">&lt;p&gt;Lhep: /* How to access GPUs on the computing cluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
== Wiki portal for the Albert Einstein Institute for Fundamental Physics / Laboratory for High Energy Physics ==&lt;br /&gt;
&lt;br /&gt;
=== How to access the computing cluster ===&lt;br /&gt;
&lt;br /&gt;
[[Job submission and management with NorduGrid ARC]]&lt;br /&gt;
&lt;br /&gt;
=== LHEP Printer information ===&lt;br /&gt;
&lt;br /&gt;
[[LHEP Printers|Information about the printers and their location at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== Group sub-wikis ===&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.lhep.unibe.ch/neutron/ Neutron Wiki]&lt;br /&gt;
&lt;br /&gt;
=== Archive ===&lt;br /&gt;
&lt;br /&gt;
Old wiki [[Old Wiki Archive|archive]]&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
	<entry>
		<id>https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=761</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.lhep.unibe.ch/index.php?title=Main_Page&amp;diff=761"/>
		<updated>2023-06-02T10:35:39Z</updated>

		<summary type="html">&lt;p&gt;Lhep: /* Wiki portal for the Albert Einstein Institute for Fundamental Physics / Laboratory for High Energy Physics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
== Wiki portal for the Albert Einstein Institute for Fundamental Physics / Laboratory for High Energy Physics ==&lt;br /&gt;
&lt;br /&gt;
=== How to access the computing cluster ===&lt;br /&gt;
&lt;br /&gt;
[[Job submission and management with NorduGrid ARC]]&lt;br /&gt;
&lt;br /&gt;
=== How to access GPUs on the computing cluster ===&lt;br /&gt;
[[Interactive access to GPUs]]&lt;br /&gt;
&lt;br /&gt;
=== LHEP Printer information ===&lt;br /&gt;
&lt;br /&gt;
[[LHEP Printers|Information about the printers and their location at LHEP]]&lt;br /&gt;
&lt;br /&gt;
=== Group sub-wikis ===&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.lhep.unibe.ch/neutron/ Neutron Wiki]&lt;br /&gt;
&lt;br /&gt;
=== Archive ===&lt;br /&gt;
&lt;br /&gt;
Old wiki [[Old Wiki Archive|archive]]&lt;/div&gt;</summary>
		<author><name>Lhep</name></author>
	</entry>
</feed>