RACC nodes

RACC nodes and partitions (queues)

Service node VMs:
- cluster-head: batch scheduler (Slurm), cluster head
- racc-login: login node load balancing and gate keeping (Gridengine)
- racc-cron: user cron jobs

Shared login nodes for interactive work:
- 5 login nodes: racc-login-0-[1-5]
- 3 blades in an FX chassis (chassis shared with NX nodes)
- 2 servers in a DSS7500 chassis + 2 x 150 TB of scratch storage
- 16+ decent cores, 256 GB RAM

Free partition 'cluster':
- Logical rack 0: compute-0-[0-10], 11 Dell PowerEdge C6220 nodes (3 x 4-way chassis, one blade faulty), 2 x Xeon E5-2650L 1.80GHz, 16 cores, 96 GB RAM
- Logical rack 1: compute-1-[0-12], 13 1U X8 Supermicro nodes, 2 x Xeon E5620 2.40GHz, 8 cores, 48 GB RAM
- Logical rack 2: vm-2-[0-23], Nutanix VM nodes (no longer used)
- Logical rack 3: compute-3-[0-5], 6 1U X9 Supermicro nodes, 2 x Xeon E5-2620 2.00GHz, 12 cores, 96 GB RAM
- Logical rack 4: compute-4-[0-4], 5 1U X9 Supermicro nodes, 2 x Xeon E5-2630 2.30GHz, 12 cores, 96 GB RAM
- Logical rack 5: compute-5-[0-6], 7 1U X8 Supermicro nodes, 2 x Xeon X5690 3.47GHz, 12 cores, 96+ GB RAM
- Logical rack 6: compute-6-2, 1 1U X9 Supermicro node, new node, less than 3...
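For orientation, the batch-side nodes and partitions can be inspected from a login node with the standard Slurm tools; a minimal sketch, assuming the free partition keeps the name 'cluster' and using a node name from the listing above:

    # List the nodes in the free partition and their current state.
    sinfo --partition=cluster --Node --long

    # Show the cores and memory Slurm has recorded for one rack-0 node.
    scontrol show node compute-0-0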

Free Cluster available for testing

The Free Cluster is available for testing. A draft user guide can be found here (the document will be updated as testing progresses). The Free Cluster has a number of login nodes, which can be used both to submit batch jobs and for interactive computing, and over 200 CPU cores for batch jobs. Sessions on the login nodes are load balanced and scheduled automatically when you connect to the cluster (no qrsh needed). The login nodes can be used for interactive computing on similar terms to the interactive nodes in met-cluster, and they now have more memory than the met-cluster interactive nodes. Some popular applications, including Matlab, IDL, Python, and compilers, are already installed and available via the module command. Further applications and libraries can be installed as required. In batch mode, user jobs are isolated and allocation boundaries are strictly enforced. This prevents oversubscribed resources and runaway processes, and minimizes the negative impact that incorrectly submitted jobs can...
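As a rough sketch of the batch workflow described above (the partition name comes from the node listing; the module name, script name, and resource limits are illustrative assumptions, not a tested recipe for the Free Cluster):

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --partition=cluster      # free partition from the node listing
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=8G                 # allocation boundaries are strictly enforced
    #SBATCH --time=02:00:00

    module load python               # applications are provided via the module command
    python my_analysis.py            # hypothetical user script

The script would be submitted with sbatch from a login node; interactive work can stay on the load-balanced login nodes themselves, which is why no qrsh step is needed.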

New Home Directories

On Tuesday the 5th of June the Academic Computing team will be migrating all Unix home directories to a new location to offer greater performance and scalability. As standard, all users, including staff and students, will have a default quota of 10GB; this can be increased with a simple request via the IT Service Desk. To help with the transition, the new home directories will be mounted under /home/users/%username%. On the 5th of June we will need to take all Unix systems offline to mount the new home directories and complete some migration tasks. Services affected include:
- Met-Cluster
- Maths-Cluster
- Free-Cluster
- NX-Managed Desktop
- Linux VMs hosted on the Research Cloud...
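After the migration you should be able to confirm the new location and check usage against the 10GB default quota with standard commands; a minimal sketch (the exact quota reporting tool depends on how the new storage is presented):

    echo "$HOME"        # expected to resolve to /home/users/<username> after the move
    quota -s            # human-readable quota summary, if quota reporting is enabled
    du -sh "$HOME"      # fall back to measuring total home directory usage directly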

Short network outage in Meteorology on 20th March 2018

Essential work needs to be performed on a core networking device. As a result, users in the Meteorology building and in IT/Maths will see an outage at roughly 18:05 on 20th March, lasting up to 10 minutes. Please log out by 6pm on Tuesday evening to avoid data loss, as home directories will not be available during the network downtime. Please contact IT through the usual channels in case of any problems.