SCS Lab has transitioned into Gnosis Research Center (GRC). This website is archived. Please visit the new website at https://grc.iit.edu.


Resources

Our members enjoy access to several resources:

Testbed Systems
Related Work
Conferences
Journals

TESTBED SYSTEMS

Overview

The Scalable Computing Software (SCS) Lab manages several cluster computers to support the group's research. Cluster resources within the lab are controlled by a batch queuing system that coordinates all jobs running on the clusters. The nodes should not be accessed directly, because the scheduler allocates resources such as CPU, memory, and storage exclusively to each job.
Once you have access to the cluster, you can submit, monitor, and cancel jobs from the head nodes, ares.cs.iit.edu and hec.cs.iit.edu. These two nodes should not be used for any compute-intensive work; instead, you can get a shell on a compute node simply by starting an interactive job. You can use the cluster by starting either batch jobs or interactive jobs. Interactive jobs give you access to a shell on one of the compute nodes, from which you can execute commands by hand, whereas batch jobs run a given shell script in the background and terminate automatically when finished (see the sketch below).
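This page does not name the batch system, so the sketch below assumes a SLURM-style scheduler; the script name, resource limits, and program path are placeholders, and if the cluster runs a different queuing system (e.g., PBS/Torque) the equivalent commands will differ.

    #!/bin/bash
    #SBATCH --job-name=hello          # name shown in the queue
    #SBATCH --nodes=1                 # number of compute nodes
    #SBATCH --ntasks=8                # total tasks (cores) to allocate
    #SBATCH --time=00:10:00           # wall-clock limit (HH:MM:SS)
    #SBATCH --output=hello.%j.out     # output file (%j = job ID)

    hostname
    srun ./my_program                 # placeholder for your own executable

    # From a head node (ares.cs.iit.edu or hec.cs.iit.edu):
    #   sbatch hello.sbatch           # submit the batch job
    #   squeue -u $USER               # monitor your jobs
    #   scancel <jobid>               # cancel a job
    # Interactive shell on a compute node:
    #   srun --nodes=1 --ntasks=1 --time=01:00:00 --pty bash

Batch jobs release their allocation automatically when the script exits; interactive jobs hold the allocation until you log out or the time limit expires.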
If you encounter any problems using the cluster, please send us a request and be as specific as you can when describing your issue.
Regular members of the SCS Lab enjoy access to these resources. If you wish to gain access to the cluster and you do not belong to the core team, please submit a request stating the following:

(i) your CS login ID
(ii) the name of the professor you are working with (and CC them on the request)
(iii) the reason for requesting access (i.e., a description of your research project)
(iv) the projected time period for which you need access
(v) any shared resources your work may significantly affect for other users (e.g., the global file system or network)
(vi) any commands that you need to run with root privileges
If we have any trouble with your job, we will try to get in touch with you, but we reserve the right to kill your jobs at any time.
If you have questions about the cluster, please send us a request.
The SCS Lab manages two cluster computers, Ares and HEC, each serving a different research scope. Our flagship cluster is Ares, with 1,576 cores and a peak performance of 30 TFLOPS. HEC is a smaller 128-core machine that specializes in network research. All HEC nodes are connected by an InfiniBand network powered by Mellanox InfiniHost III Ex adapters. You can find the detailed hardware configurations below.
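To confirm that an HEC node's InfiniBand adapter is up before running network experiments, the standard OFED diagnostic tools can be used from inside an interactive job; this is a minimal sketch and assumes the infiniband-diags and libibverbs utilities are installed on the compute nodes.

    # Run inside an interactive job on an HEC compute node:
    ibstat                    # adapter model, port state, and link rate
    ibv_devinfo               # verbs-level device and port information
    ibhosts                   # hosts visible on the InfiniBand fabric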
We also have access to the Chameleon Cloud platform. It consists of two clusters, located at the Texas Advanced Computing Center (TACC) and at the University of Chicago. It has 338 compute nodes connected by a 10 Gbps Ethernet network; 41 of these nodes are also connected via InfiniBand. Each compute node has two 12-core (24-thread) Intel Xeon E5-2670 v3 "Haswell" processors and 128 GiB of RAM. There are also 24 storage nodes, each with 16 2 TB hard drives, and 20 GPU nodes. In total, the Chameleon Cloud platform has 13,056 cores, 66 TiB of RAM, and 1.5 PB of configurable storage.
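Whichever system you land on, you can sanity-check the advertised per-node hardware yourself; the commands below are standard Linux tools and make no assumptions about the site configuration.

    lscpu | grep -E 'Model name|Socket|Core|Thread|^CPU\(s\)'   # processor model and core/thread counts
    free -h                                                     # installed memory
    lsblk -d -o NAME,SIZE,ROTA                                  # local disks (ROTA=1 means spinning disk)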
RELATED WORK

Related sites to the Dynamic Virtual Machine Project:
Related sites to the Grid Harvest Service (GHS) Project:
Related sites to the Server-Push Data Access Architecture Project:
Related sites to the Workflow Research Project: