Running Gaussian on checkers.westgrid.ca

Accessing Checkers, a U. of Alberta supercomputer:

From another unix/linux computer:

ssh checkers.westgrid.ca

From Windows: use the Putty software and type checkers.westgrid.ca into the hostname bar (you can save this hostname in Putty as well so you don’t have to type the name every time).
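
If your WestGrid username differs from your local one, give it explicitly on the command line (replace myusername with your own account name):

ssh myusername@checkers.westgrid.ca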

Running gaussian on Checkers:

  1. Prepare a normal Gaussian input file (myfile.com or myfile.gjf)
  2. Prepare a TORQUE submission file (myfile.pbs), declaring the number of processors, the amount of memory, and the walltime limit you want
  3. Type “qsub myfile.pbs” to run
  4. Monitor progress with the showq or qstat commands (e.g. qstat -u [username]); see the example session below
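
For example, a typical submit-and-monitor session might look like this (the job ID shown is made up; in qstat output, Q means queued and R means running):

qsub myfile.pbs            # prints a job ID such as 123456.checkers
qstat -u $USER             # list your own jobs and their states
showq | grep $USER         # alternative view of the queue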

The TORQUE *.pbs file is needed for queuing purposes; runs demanding lots of resources will sit longer in queue. Use this example file, myfile.pbs:

#!/bin/bash

#PBS -S /bin/bash

#PBS -l nodes=1:ppn=2,mem=2gb,walltime=01:23:45

# walltime is hours:mins:secs

module load gaussian

cd $PBS_O_WORKDIR    # run in the directory the job was submitted from

echo "Current working directory is `pwd`"

echo "Running on `hostname`"

echo "Starting G09 run at: `date`"

g09 < myfile.com > myfile.log    # edit the input/output file names for each run

echo "Program G09 finished with exit code $? at: `date`"

You need to edit the 3rd line (the #PBS -l line) and the 2nd-last line (the g09 command) each run. In the 3rd line:

  • nodes (number of compute nodes): keep nodes=1 for Gaussian to work.
  • ppn (processors per node): anything from 1 to 8, but generally use 2 or 4. Anything over 6 is wasteful (Gaussian's parallel efficiency drops) and may leave you stuck in the queue for a while. It is not certain that this overrides %nproc in the input file, so it is safest to set %nproc to the same value (see the sketch after this list).
  • mem (total memory for the job): 2gb or 4gb is fine here. As with ppn, it is safest to set %mem in the input file to a value that fits within this request rather than assuming it will be overridden.
  • walltime (how long the run is allowed to go before the scheduler kills it): longer walltimes mean longer queue waits, so it pays to get good at estimating them.
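
As a rough sketch (the numbers are only illustrative), a 4-processor, 4 GB, 12-hour run would use a 3rd line like

#PBS -l nodes=1:ppn=4,mem=4gb,walltime=12:00:00

with matching Link 0 lines at the top of myfile.com, keeping %mem a little below the PBS request to leave room for Gaussian's own overhead:

%nproc=4
%mem=3500mb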

Running VASP on orcinus.westgrid.ca

Accessing Orcinus, a UBC supercomputer:

From another unix/linux computer:

ssh orcinus.westgrid.ca

From Windows: use the Putty software and type orcinus.westgrid.ca into the hostname bar (you can save this hostname in Putty as well so you don’t have to type the name every time).

Running VASP on Orcinus:

  1. Prepare a normal VASP subdirectory (run1.dir) including INCAR, POSCAR, POTCAR and KPOINTS files, just like on cadmium or dextrose.
  2. Prepare empty CHG, CHGCAR and WAVECAR files (a workaround for a bug on Orcinus); see the example after this list.
  3. Prepare a TORQUE submission file (run1.pbs), declaring the number of processors, the amount of memory, and the walltime limit you want.
  4. Place the *.pbs file within the *.dir directory, and cd into that directory.
  5. Type “qsub run1.pbs” to run
  6. Monitor progress with the showq or qstat commands (e.g. qstat -u [username])
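
For example, setting up and submitting a run (using the directory and file names from the steps above) might look like:

cd run1.dir
touch CHG CHGCAR WAVECAR     # create the empty files from step 2
qsub run1.pbs                # submit from inside the directory
qstat -u $USER               # check on the job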

The TORQUE *.pbs file is needed for queuing purposes; runs demanding lots of resources will sit longer in queue. Use this example file, run1.pbs:

#!/bin/bash

#PBS -S /bin/bash

#PBS -l procs=16,walltime=24:00:00,pmem=4gb,qos=parallel

# or: #PBS -l nodes=4:ppn=4,walltime=24:00:00,pmem=4gb,qos=parallel

# walltime is hours:mins:secs

module load intel

VASP="/global/software/VASP/vasp.4.6/vasp"

NC=`/bin/awk 'END {print NR}' $PBS_NODEFILE`    # number of processors allocated (one line per processor in the node file)

cd $PBS_O_WORKDIR

echo "Current working directory is `pwd`"

echo "executing job ${JOBID} on the hosts:"

cat $PBS_NODEFILE

echo "Starting VASP at: `date`"

mpiexec -np $NC ${VASP}

echo "Program VASP finished with exit code $? at: `date`"

You need to edit the 3rd line only each run (see the tips under Running Gaussian, above). The 4th line is an alternative to the 3rd for a more specific request, but the procs= form in the 3rd line gives the scheduler more flexibility in finding open processors.
