Since most of the analyses we run need more resources than our laptops have, we typically use a high performance computing environment, in particular JHPCE. One of the first things you'll do is get a JHPCE account. Once you do, you'll want to get familiar with many parts of JHPCE. You can browse the archive of questions people have asked through bithelp here.
A few things you'll want to get set up with:

- R, which you can load with `module load conda_R` (see the example session after this list).
- `sgejobs::job_single()`, which helps you build SGE job submission scripts.
- `colorout`, which nowadays lives at jalvesaq/colorout and which you can install with `remotes::install_github('jalvesaq/colorout')`.
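To make the first item concrete, here is roughly what starting an interactive R session at JHPCE looks like. The resource requests below are placeholders (adjust them to your analysis), so treat this as a sketch rather than the canonical recipe:

```bash
## Request an interactive session on a compute node
## (memory values are illustrative, not recommendations).
qrsh -l mem_free=5G,h_vmem=5G

## Load the conda-based R module and start R.
module load conda_R
R
```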
There are a lot of miscellaneous things about using the cluster that can save you some confusion if you know them now:
- Your home directory is `/users/[yourusername]/`, which will likely fill up very quickly. Most of us have directories under the `/dcl01` and `/dcl02` filesystems, where there is far more space.
- Use the `-cwd` flag in your bash scripts. By default, output files get dumped in your home directory, regardless of where you submit the script from (see the example script header after this list).
- Keep an eye on the `h_fsize` flag for bash scripts, which defaults to 10G. See https://jhpce.jhu.edu/knowledge-base/how-to/ (also shown in the example below).
- `rmate`, which now also exists as a LIBD module that you can load with `module load rmate` at JHPCE.
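Putting the `-cwd` and `h_fsize` points together, a JHPCE job script header tends to look something like the sketch below. This is roughly the kind of header that `sgejobs::job_single()` writes for you; the job name, memory values, log paths, and R script name here are made-up placeholders:

```bash
#!/bin/bash
#$ -cwd                                    ## run from (and log relative to) the submission directory
#$ -l mem_free=5G,h_vmem=5G,h_fsize=100G   ## raise h_fsize above its 10G default
#$ -N example_job                          ## job name (placeholder)
#$ -o logs/example_job.txt                 ## stdout log (placeholder path)
#$ -e logs/example_job.txt                 ## stderr log (placeholder path)

module load conda_R
Rscript my_analysis_script.R               ## placeholder R script
```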
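And for `rmate`, the basic workflow, assuming you have already set up the SSH port forwarding and an rmate-compatible plugin in your local editor, is something like:

```bash
## From a JHPCE session with the reverse tunnel back to your laptop in place:
module load rmate
rmate path/to/your_script.R   ## the file opens in your local editor
```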