- The cluster can be reached via ssh at habanero.rcs.columbia.edu (an example login command follows this list).
- The user documentation is located at: https://wikis.cuit.columbia.edu/confluence/display/rcs/Habanero+HPC+Cluster+User+Documentation
- Mailing list: Haba-pi at lists.columbia.edu
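- For example, logging in from a terminal (the <UNI> placeholder for your Columbia UNI is an assumption; these notes do not show the exact username form):
- $ ssh <UNI>@habanero.rcs.columbia.edu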
yetipsych Dec 11 2013 afni
yetipsych Dec 11 2013 art
yetipsych Dec 20 2013 config
yetipsych Dec 7 2013 freesurfer
yetipsych Dec 6 2013 fsl
yetipsych Dec 11 2013 install
yetipsych Dec 3 2013 matlab
yetipsych Dec 2 2013 qt
yetipsych Dec 6 2013 R
yetipsych Dec 6 2013 rstudio
yetipsych Dec 13 2013 src
yetipsych Dec 6 2013 utils
- Launch date for the new service will be Tuesday, November 15.
- Introduction to Habanero workshops have been scheduled for December 7 and January 31.
- The workshops will run from 1 to 3 pm in the Science & Engineering Library, Northwest Corner Building.
- “The plan is to keep the group names the same by default, so in your group’s case that would be “habapsych”, a shorter version of “habaneropsych”.
- If you wish to change the name of the group please let us know.
- We will also reach out and ask you if you wish to give access to the same people on “habapsych” or if you wish to modify the list.”
- $ ssh-keygen -R yetisubmit.cc.columbia.edu
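- The command above removes the cached host key for yetisubmit.cc.columbia.edu from ~/.ssh/known_hosts, presumably to clear a stale entry; the reason is not stated in these notes. To confirm nothing remains cached:
- $ ssh-keygen -F yetisubmit.cc.columbia.edu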
- April 8 – May 2: CUIT publishes price estimates and takes orders
- May 2: orders must be in
- September 2016: cluster goes live
- Standard Node. 24 cores, 128 GB memory.
- High Memory Node. 24 cores, 512 GB memory.
- GPU Node. 24 cores, 128 GB memory, and 1 (possibly 2) NVIDIA K80 GPUs (a sample job request for such a node is sketched after this list).
- Scratch Storage. Each order will be expected to purchase at least 1 terabyte.
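- A sketch of a batch job request sized for the GPU node above, assuming the new cluster uses a Slurm scheduler and that accounts follow the group names mentioned earlier (e.g. habapsych); both the scheduler and the account name are assumptions, not confirmed in these notes:

    #!/bin/sh
    #SBATCH --account=habapsych        # assumed account name, matching the group name above
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=24       # all 24 cores of a standard or GPU node
    #SBATCH --mem=120G                 # leave a little headroom below the 128 GB total
    #SBATCH --gres=gpu:1               # one K80 on a GPU node; omit this line for a standard node
    #SBATCH --time=1:00:00
    ./my_analysis                      # hypothetical program

- The script would be submitted with sbatch; for a high memory node the --mem request could rise toward the 512 GB total and the --gres line would be dropped.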
Q: If I have a share of Yeti, will I be able to use the new cluster?
A: No, as it will be a separate cluster with its own queuing system. SRCPAC is endeavoring to follow the best practice of other universities' shared HPC centers, which recommend creating new systems every few years to take full advantage of the inevitable improvements in system components. Yeti began in 2013, so it is time to start afresh.