
Cyberinfrastructure & Research Services

Ganymede

CIRC’s primary resource is Ganymede, a 7000-core high performance computing cluster with 25TB of memory, running CentOS 7.6 and OpenHPC 1.x. It has a 10 Gigabit Ethernet network and an FDR (56 Gbps) InfiniBand interconnect configured in a semi-fat-tree topology. It has two distributed file systems: the home directories are served via NFS over the 10 Gigabit Ethernet network, and scratch is a 200TB high-performance parallel file system (WekaFS) accessible over the InfiniBand network. The WekaFS file system uses Dell storage enclosures directly attached to the Ganymede InfiniBand interconnect. Compute nodes are all dual-processor with a variety of Intel architectures, including Sandy/Ivy Bridge, Haswell/Broadwell, and Sky/Cascade Lake (see the sketch after the node list below). The freely available queues have the following resources:

110 Dell C8220 compute blades, each with:

  • 2x Intel Xeon E5-2680 (Sandy Bridge) 8-core 2.7GHz/20M cache processors
  • 32GB (22GB usable) ECC DDR3 Memory
  • 56Gb/s FDR InfiniBand

In addition to the large number of “standard” compute nodes, Ganymede also has a number of “big memory” nodes for jobs that require larger amounts of memory:

8 “high capacity” Dell C8220 compute blades, each with:

  • 2x Intel Xeon E5-2680 (Sandy Bridge) 8-core 2.7GHz/20M cache processors
  • 128GB (115GB usable) ECC DDR3 Memory
  • 56Gb/s FDR InfiniBand

16 Dell M620 compute blades, each with:

  • 2x Intel Xeon E5-2660v2 (Ivy Bridge) 10-core 3.0GHz/25M cache processors
  • 256GB (240GB usable) ECC DDR3 Memory
  • 40Gb/s FDR InfiniBand

1 Dell R630 server, with:

  • 2x Intel Xeon E5-2630v3 (Haswell) 8-core 3.2GHz/20M cache processors
  • 256GB (240GB usable) ECC DDR4 Memory
  • 56Gb/s FDR InfiniBand
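
Because Ganymede mixes several Intel generations (Sandy/Ivy Bridge through Sky/Cascade Lake), a binary built with aggressive architecture-specific flags on a newer node can fail with illegal-instruction errors on an older one. The short sketch below is illustrative only and not part of CIRC’s tooling; it assumes a standard Linux /proc/cpuinfo layout and suggests a conservative GCC -march value based on the SIMD extensions the current node reports.

```python
"""Report which x86 SIMD extensions the current node supports.

A minimal sketch for a heterogeneous cluster such as Ganymede, where
Sandy/Ivy Bridge nodes stop at AVX while Sky/Cascade Lake nodes add
AVX-512. Paths and flag names follow standard Linux /proc/cpuinfo
conventions; this is not CIRC-provided tooling.
"""

def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported by the kernel."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

def suggested_gcc_flag(flags):
    """Pick a conservative -march value based on available extensions."""
    if "avx512f" in flags:
        return "-march=skylake-avx512"   # Sky/Cascade Lake nodes
    if "avx2" in flags:
        return "-march=haswell"          # Haswell/Broadwell nodes
    if "avx" in flags:
        return "-march=sandybridge"      # Sandy/Ivy Bridge nodes
    return "-march=x86-64"               # safe fallback

if __name__ == "__main__":
    f = cpu_flags()
    print("AVX:", "avx" in f, "| AVX2:", "avx2" in f, "| AVX-512:", "avx512f" in f)
    print("suggested flag:", suggested_gcc_flag(f))
```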

Ganymede2

Ganymede2, first put into service in late 2022, is CIRC’s newest campus computing resource. Like its predecessor, Ganymede2 is a condo-model high performance computing (HPC) cluster, built on Rocky Linux 8 and OpenHPC 2.x. Currently, Ganymede2 consists primarily of privately owned hardware and as such has very limited public access, but more resources are coming online soon. Ganymede2 comprises 5100 CPU cores (10200 threads with Hyperthreading/SMT) and 33TB of memory. Node configurations are varied, with architectures ranging from Intel Cascade Lake to AMD Milan CPUs and core counts ranging from 16 to 96 cores per node. Ganymede2 also has GPU support, with installed GPUs ranging from RTX 3090 to A100 SXM4 models.
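
Because Ganymede2 nodes carry different GPU models (RTX 3090 vs. A100 SXM4), it can be useful for a job to confirm which hardware it actually landed on. The snippet below is a generic sketch rather than CIRC-provided tooling; it assumes NVIDIA’s standard nvidia-smi utility is available on the node.

```python
"""List the GPUs visible on the current node.

Illustrative only: assumes the standard NVIDIA `nvidia-smi` utility is
installed, as it would be on GPU nodes with RTX 3090 or A100 cards.
"""
import subprocess

def list_gpus():
    """Return (name, total_memory) pairs reported by nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [
        tuple(part.strip() for part in line.split(","))
        for line in out.splitlines() if line.strip()
    ]

if __name__ == "__main__":
    for name, mem in list_gpus():
        print(f"{name}: {mem}")
```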

Like Ganymede1, Ganymede2 has multiple network fabrics for optimal node-to-node and node-to-storage communication: a 25Gb/s Ethernet link for connections to /home, /opt, and CIRC’s MooseFS-based group storage, and a 200Gb/s HDR InfiniBand link to the parallel WekaFS scratch filesystem. The WekaFS scratch space is currently limited to a 20TB quota on Ganymede2, but future expansion could allow the filesystem to grow to multiple petabytes.
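
Since the WekaFS scratch space on Ganymede2 currently carries a 20TB quota, it is worth checking available space before writing large outputs. The sketch below is generic; the /scratch mount point is an assumed example path, not a documented CIRC location.

```python
"""Check free space on a scratch filesystem before writing large output.

A generic sketch: the "/scratch" mount point below is an assumed example
path, not a documented CIRC location -- substitute the real scratch
directory for your cluster.
"""
import shutil

def report_usage(mount="/scratch"):
    """Print total/used/free space for the given mount point in TB."""
    usage = shutil.disk_usage(mount)
    tb = 1024 ** 4
    print(f"{mount}: total {usage.total / tb:.1f} TB, "
          f"used {usage.used / tb:.1f} TB, free {usage.free / tb:.1f} TB")

if __name__ == "__main__":
    report_usage()
```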

We are currently offering buy-ins to Ganymede2! Please contact the CIRC team for hardware inquiries, providing your budget, software, and computational needs. We will facilitate specification, purchase, installation, and operation of your compute nodes. Rough pricing* of nodes** is as follows, but more in-depth configurations can be tailored by a CIRC team member to fit your needs and budget:

Standard-config – $10,000

  • Intel/AMD dual-socket 20-core ~3.0GHz
  • 256GB DDR5 memory
  • 480GB SSD
  • 25GbE network card
  • 200Gb/s HDR-IB card
  • 5-year warranty

High-memory config – $12,000

  • Intel/AMD dual-socket 24-core ~3.0GHz
  • 512GB DDR5 memory
  • 480GB SSD
  • 25GbE network card
  • 200Gb/s HDR-IB card
  • 5-year warranty

NVIDIA RTX Config – $51,000

  • Intel Dual-socket 32-core 2.0GHz
  • 512GB DDR4 memory
  • 6x NVIDIA RTX A6000 GPUs, each with 48GB GDDR6 memory
  • 480GB SSD
  • 25GbE network card
  • 200Gb/s HDR-IB card
  • 5-year warranty

NVIDIA A100 Config – $92,000

  • Intel dual-socket 32-core 2.0GHz
  • 1024GB DDR4 memory
  • 4x NVIDIA A100 SXM4 GPUs, each with 80GB HBM2e memory
  • 480GB SSD
  • 25GbE network card
  • 200Gb/s HDR-IB card
  • 5-year warranty

*Pricing is subject to change based on market conditions and vendor pricing. A CIRC team member will procure a quote for each request.

**CPU nodes are cheaper if purchased in groups of 4 due to the shared-chassis form factor.

TACC Systems

NSF’s Office of Advanced Cyberinfrastructure (OAC) funds four national HPC centers: the San Diego Supercomputer Center (SDSC), the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign, the Pittsburgh Supercomputing Center (PSC) at Carnegie Mellon, and the Texas Advanced Computing Center (TACC) at UT Austin.

The Texas Advanced Computing Center (TACC) is an NSF center, funded through XSEDE (the Extreme Science and Engineering Discovery Environment), that provides computational resources for scientists and researchers nationwide. It also provides dedicated resources for University of Texas System schools. Through partnerships with the XSEDE program and with TACC specifically, CIRC is able to help campus researchers obtain access and compute time on TACC systems. A good place to start is Lonestar6 (LS6), which can be used to demonstrate the need for the Stampede2, Stampede3, or Frontera systems. Stampede3, which will soon replace Stampede2, is an NSF-funded system featuring Intel Skylake nodes (48 cores/node), Intel Ice Lake nodes (80 cores/node), and Intel Sapphire Rapids High Bandwidth Memory (HBM) nodes (56 cores/node).

Note that in order to run large jobs on any TACC system, users are usually required to submit a scaling study as part of the allocation request.
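
To illustrate what a scaling study typically reports, the sketch below computes strong-scaling speedup and parallel efficiency from wall-clock timings; the timing values are placeholders for illustration, not measurements from any TACC system.

```python
"""Compute strong-scaling speedup and parallel efficiency from timings.

The timings below are placeholder values for illustration only, not
measurements from any TACC system. Speedup S(n) = T(base)/T(n) and
efficiency E(n) = S(n) / (n / n_base) are the quantities typically
reported in a scaling study attached to an allocation request.
"""

# core count -> wall-clock time in seconds (hypothetical example data)
timings = {1: 3600.0, 16: 240.0, 64: 70.0, 256: 22.0}

def scaling_table(timings):
    """Print speedup and efficiency relative to the smallest run."""
    n_base = min(timings)
    t_base = timings[n_base]
    print(f"{'cores':>6} {'time (s)':>10} {'speedup':>9} {'efficiency':>11}")
    for n in sorted(timings):
        speedup = t_base / timings[n]
        efficiency = speedup / (n / n_base)
        print(f"{n:>6} {timings[n]:>10.1f} {speedup:>9.2f} {efficiency:>11.1%}")

if __name__ == "__main__":
    scaling_table(timings)
```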

To get access to any TACC resources, contact circ-assist@utdallas.edu to discuss your research and computing needs.

Additional Information

Introduction to HPC

Visit the Introduction to HPC page for more information about the various software available for use on CIRC systems.

CIRC Systems

The Cyberinfrastructure Research Computing (CIRC) team at UT Dallas manages three main clusters:

  • Ganymede, a condo cluster with several free general-use queues.
  • Europa, a cluster built from compute nodes of the retired Stampede1 supercomputer.
  • Titan, a GPU cluster.

CIRC Storage

Visit the storage page for more information about the various storage options available for use on CIRC systems.