LDAS @ LIGO-Livingston¶
Key information | |
---|---|
Home page | |
Account sign-up | See Requesting an account on the LDG page |
Support | Open a Help Desk ticket |
SSH CA Cert | @cert-authority * ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP68UP/uHcsHPUgZmy2WFK6fyhus937RqGQQ7lZuXwkCiu7g25hL3drV9CYEcy+Vcze3tIDyvPUR6FLSsdVUbNs= LLO_SSH_CERT_AUTHORITY |
This service is available for all @LIGO.ORG registered collaboration members with accounts and access coordinated as part of the LIGO Data Grid.
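The SSH CA line above is already in OpenSSH `known_hosts` format. If you want your SSH client to verify host certificates signed by this authority, one option (a sketch, not an official requirement) is to append that line to your `~/.ssh/known_hosts`; you may prefer to narrow the `*` host pattern to `*.ligo-la.caltech.edu`:

```
# Append the published CA entry so ssh can verify LLO host certificates.
# The '*' pattern trusts this CA for any host; narrow it if you prefer.
cat >> ~/.ssh/known_hosts <<'EOF'
@cert-authority * ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP68UP/uHcsHPUgZmy2WFK6fyhus937RqGQQ7lZuXwkCiu7g25hL3drV9CYEcy+Vcze3tIDyvPUR6FLSsdVUbNs= LLO_SSH_CERT_AUTHORITY
EOF
```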
Login hosts¶
Hostname | Description | Memory | CPU model | # cores | GPU |
---|---|---|---|---|---|
ldas-grid.ligo-la.caltech.edu | Production submit | 64GB | 2 x 2.4GHz Xeon E5-2630v3 | 16 | |
ldas-pcdev1.ligo-la.caltech.edu | Large mem/post-processing | 128GB | 2 x 2.4GHz Xeon E5-2630v3 | 16 | |
ldas-pcdev2.ligo-la.caltech.edu | Large mem/post-processing | 512GB | 4 x 2.7GHz Xeon E5-4650 | 32 | 4 x Tesla K10 + 2 x GTX750 |
ldas-pcdev4.ligo-la.caltech.edu | | 16GB | 2 x AMD Opteron 2376 | 8 | |
ldas-pcdev5.ligo-la.caltech.edu | | 512GB | 2 x 2.2GHz Xeon E5-2698v4 | 40 | |
ldas-pcdev6.ligo-la.caltech.edu | | 1.5TB | 4 x 3.0GHz Xeon Gold 6154 | 72 | |
detchar.ligo-la.caltech.edu | Dedicated DetChar | 128GB | 2 x 2.4GHz Xeon E5-2630v3 | 16 | |
dgx1.ligo-la.caltech.edu | | 512GB | 2 x 2.2GHz Xeon E5-2698v4 | 40 | 8 x Tesla V100-SXM2-16GB |
For details on how to connect to these machines, please see Access to the LIGO Data Grid.
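As a quick illustration (the username below is the placeholder used throughout these docs; authentication details such as Kerberos or SSH keys are covered on that page), logging in to the production submit host looks like:

```
# Replace albert.einstein with your own LIGO.ORG username
ssh albert.einstein@ldas-grid.ligo-la.caltech.edu
```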
Additional services¶
Service | URL |
---|---|
JupyterLab | https://jupyter.ligo-la.caltech.edu |
User webspace | https://ldas-jobs.ligo-la.caltech.edu/~USER/ |
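As an illustrative sketch only: user webspace on LDAS clusters is conventionally served from a `public_html` directory in your home directory. That directory name is an assumption here, so check with the Help Desk if files do not appear at the URL above:

```
# Assumed layout: files in ~/public_html appear at
# https://ldas-jobs.ligo-la.caltech.edu/~USER/
mkdir -p ~/public_html
echo "hello from LLO" > ~/public_html/index.html
chmod -R a+rX ~/public_html   # make it readable by the web server
```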
Configuring your user environment on LDAS¶
This page describes the default user environments on LDAS, and how to customise the availability and versions of the following software distributions:

- Intel oneAPI
- Conda
- MATLAB
Intel oneAPI¶
The Intel oneAPI Base Toolkit is available by default on LDAS, with the exception of the `intelpython` and `mpi` modules.
Disabling all Intel modules¶
To disable loading of all Intel oneAPI modules, create an empty file in your home directory called `~/.nointel`:

```
touch ~/.nointel
```
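Changes like this normally take effect at your next login. If you want a rough check of whether the oneAPI environment is active in a new shell, something like the following may help; the specific variable names and compiler commands are assumptions about how oneAPI is typically set up, not documented LDAS behaviour:

```
# Rough check (assumption: the oneAPI setup exports variables such as
# ONEAPI_ROOT and puts compilers like icx on the PATH).
env | grep -i oneapi
command -v icx || echo "oneAPI compilers not found on PATH"
```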
Customising the included oneAPI modules¶
To take full control over which modules to include or exclude (including pinning specific versions), create a file called `~/.oneapi_config.txt`; this takes precedence over the default `/opt/intel/oneapi/oneapi_config.txt`.
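The exact contents of `~/.oneapi_config.txt` depend on the oneAPI module layout on the cluster; one reasonable starting point (a sketch, not an official recipe) is to copy the site default and then edit the copy to add, remove, or pin modules:

```
# Start from the site-wide default and customise it
cp /opt/intel/oneapi/oneapi_config.txt ~/.oneapi_config.txt
${EDITOR:-vi} ~/.oneapi_config.txt
```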
Conda environment selection¶
The `igwn` conda environment is activated for all users by default when logging into a Rocky Linux 8 headnode. This can be customised by each user in a few different ways (a quick way to verify the result is shown after the list):
-   To prevent any conda pre-setup from occurring (which will also stop you from running `conda activate` from your shell and prevent any conda environment activation), create an empty `~/.noconda` file in your home directory:

    ```
    touch ~/.noconda
    ```

-   To allow the conda pre-setup to occur, but prevent any conda environment from activating on login, create a file called `~/.noigwn` in your home directory:

    ```
    touch ~/.noigwn
    ```

-   To change the conda environment that gets activated when you log in from the default `igwn` to something else, create a file called `~/.conda_version` in your home directory. This file should contain a single line with the name of the environment you want to activate:

    ```
    echo "igwn-py39-20220827" > ~/.conda_version
    ```
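These selections are read at login, so log out and back in for them to take effect. To confirm which environment (if any) ends up active, standard conda commands are enough:

```
# Name of the currently active conda environment (unset if none is active)
echo "${CONDA_DEFAULT_ENV:-no conda environment active}"

# List all known environments; the active one is marked with '*'
conda env list
```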
Corner cases:
-   If your selected conda environment doesn't exist, then no conda environment will be activated and a message will be printed to the screen. You will still be able to log in, but you will not be in an igwn conda environment. At this point you should remove or rename your `~/.conda_version` custom environment selection file.

-   If you have multiple lines in `~/.conda_version`, only the first line will be read.

-   If conda is broken or unavailable to the point that it is not allowing you to log in at all, you can SSH into a cluster headnode using port 2222 to bypass any conda setup, regardless of the presence of a `~/.noigwn` or `~/.conda_version` file. This has the same effect as creating a `~/.noconda` file in your home directory:

    ```
    ssh -p 2222 albert.einstein@ldas-pcdev1.ligo.caltech.edu
    ```
MATLAB¶
Enabling MATLAB¶
MATLAB is available on the command path by default, and can be discovered using `which`:

```
$ which matlab
/ldcg/matlab_r2015a/bin/matlab
```
Note
The default `matlab` version will be updated from time to time, following approval from the Software Change Control Board.
Enabling a specific version of MATLAB
To select a specific version of MATLAB, create a file in your `${HOME}` directory named `.usematlab_{release}`, where `{release}` is the release number of the MATLAB version you want, e.g.:

```
touch ~/.usematlab_r2019a
```
Listing available MATLAB releases
To list the available MATLAB releases, just run this:

```
ls /ldcg/ | grep matlab
```
Disabling MATLAB¶
To opt out of all MATLAB releases, create a file in your `${HOME}` directory named `.nomatlab`:

```
touch ~/.nomatlab
```
Warning
`~/.nomatlab` takes precedence over any `~/.usematlab_*` files, so if you want to opt back in after previously opting out, make sure to remove the old `~/.nomatlab` file.
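For example, to opt back in and pin a specific release (the release name here is only illustrative), you could run:

```
# Remove the global opt-out, then request a specific MATLAB release
rm ~/.nomatlab
touch ~/.usematlab_r2019a
```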
Restoring accidentally deleted/modified files at CIT¶
Home directories on the CIT cluster use the ZFS filesystem, which supports periodic snapshots. This lets you recover accidentally deleted or modified files, as long as the file you want to recover existed when a snapshot was taken.
Here's an example. Let's assume you're a user working in your home directory on the CIT cluster:
```
$ pwd
/home/albert.einstein/temp
$ ls -l test.file
-rw------- 1 albert.einstein albert.einstein 8416 Feb 18 2020 test.file
$ rm test.file
```
Oops! You didn't mean to delete `test.file`. To find out whether you can recover it, first check which snapshots are available by looking in the `.zfs/snapshot` directory inside your home directory:
```
$ ls /home/albert.einstein/.zfs/snapshot
autosnap_2021-10-01_00:00:01_monthly/ autosnap_2021-12-07_00:42:36_weekly/ autosnap_2021-12-07_15:25:30_hourly/ autosnap_2021-12-08_05:56:38_hourly/
autosnap_2021-11-01_19:27:51_monthly/ autosnap_2021-12-07_02:43:53_hourly/ autosnap_2021-12-07_16:16:33_hourly/ autosnap_2021-12-08_07:03:20_hourly/
autosnap_2021-11-15_23:30:55_weekly/ autosnap_2021-12-07_03:32:00_hourly/ autosnap_2021-12-07_17:35:32_hourly/ autosnap_2021-12-08_08:34:30_hourly/
autosnap_2021-11-16_16:17:41_monthly/ autosnap_2021-12-07_04:18:34_hourly/ autosnap_2021-12-07_18:42:37_hourly/ autosnap_2021-12-08_09:22:19_hourly/
autosnap_2021-11-16_16:17:41_weekly/ autosnap_2021-12-07_05:33:12_hourly/ autosnap_2021-12-07_19:06:09_hourly/ autosnap_2021-12-08_10:32:40_hourly/
autosnap_2021-11-22_23:48:55_weekly/ autosnap_2021-12-07_06:06:00_hourly/ autosnap_2021-12-07_20:38:54_hourly/ autosnap_2021-12-08_11:52:12_hourly/
autosnap_2021-11-29_23:43:30_weekly/ autosnap_2021-12-07_07:42:33_hourly/ autosnap_2021-12-07_21:05:52_hourly/ autosnap_2021-12-08_13:02:57_hourly/
autosnap_2021-12-01_00:24:05_monthly/ autosnap_2021-12-07_08:07:01_hourly/ autosnap_2021-12-07_22:31:05_hourly/ autosnap_2021-12-08_14:48:35_hourly/
autosnap_2021-12-02_00:17:00_daily/ autosnap_2021-12-07_09:45:35_hourly/ autosnap_2021-12-07_23:47:03_hourly/ autosnap_2021-12-08_16:08:23_hourly/
autosnap_2021-12-03_00:11:47_daily/ autosnap_2021-12-07_10:15:41_hourly/ autosnap_2021-12-08_00:21:21_daily/ autosnap_2021-12-08_17:28:41_hourly/
autosnap_2021-12-04_00:04:44_daily/ autosnap_2021-12-07_11:47:58_hourly/ autosnap_2021-12-08_00:21:21_hourly/ autosnap_2021-12-08_18:28:45_hourly/
autosnap_2021-12-05_00:06:23_daily/ autosnap_2021-12-07_12:25:30_hourly/ autosnap_2021-12-08_02:14:08_hourly/ autosnap_2021-12-08_19:39:26_hourly/
autosnap_2021-12-06_00:12:35_daily/ autosnap_2021-12-07_13:11:19_hourly/ autosnap_2021-12-08_03:54:22_hourly/
autosnap_2021-12-07_00:42:36_daily/ autosnap_2021-12-07_14:44:16_hourly/ autosnap_2021-12-08_04:39:22_hourly/
```
As you can see, the snapshots are labelled with the day and time they were taken (in `yyyy-mm-dd_hh:mm:ss` format). To see whether a particular file exists in a snapshot, you can simply `ls` the snapshot for it. Note, however, that `<homedir>/.zfs/snapshot/<snapshot>` is the root of that snapshot, i.e. a picture of what `<homedir>` looked like at the time the snapshot was taken, so you need to look down the path where the file of interest lived (in this example, `/temp`):
```
$ ls -l /home/albert.einstein/.zfs/snapshot/autosnap_2021-12-08_11:52:12_hourly/temp/test.file
-rw------- 1 albert.einstein albert.einstein 8416 Feb 18 2020 /home/albert.einstein/.zfs/snapshot/autosnap_2021-12-08_11:52:12_hourly/temp/test.file
```
You can view/open any of the files in a snapshot just as you would with the original file, so you can check that the file is the version you want. However, please note that the snapshots are read only, so you cannot modify the file inside the snapshot.
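If you are not sure which snapshot holds the version you want, you can let the shell glob across all of them; the paths below follow the example above:

```
# List every snapshot copy of the file, with timestamps, to pick a version.
ls -l /home/albert.einstein/.zfs/snapshot/*/temp/test.file

# For a modified (rather than deleted) file, diff a snapshot copy against
# the current one to see what changed.
diff /home/albert.einstein/.zfs/snapshot/autosnap_2021-12-08_11:52:12_hourly/temp/test.file \
     /home/albert.einstein/temp/test.file
```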
Once you've found the version of the file you want to restore, copy it back into your home directory with `cp`:
```
$ cp -ip /home/albert.einstein/.zfs/snapshot/autosnap_2021-12-08_11:52:12_hourly/temp/test.file /home/albert.einstein/temp
$ ls -l test.file
-rw------- 1 albert.einstein albert.einstein 8416 Feb 18 2020 test.file
```
The `-p` option preserves the ownership and timestamps of the file (if that's what you want). If you want to restore an entire directory tree, that is also possible; just use something like:
```
$ cp -ipr /home/albert.einstein/.zfs/snapshot/autosnap_2021-12-08_11:52:12_hourly/temp /home/albert.einstein
```
to restore the entire `temp` directory, where the `-r` option performs a recursive copy of the whole tree.
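If the live directory still contains newer work you don't want to overwrite, one alternative (a suggestion, not part of the original instructions) is `rsync`, which can skip files that already exist:

```
# Sketch: copy back only files missing from the live directory,
# leaving anything that already exists untouched.
rsync -a --ignore-existing \
    /home/albert.einstein/.zfs/snapshot/autosnap_2021-12-08_11:52:12_hourly/temp/ \
    /home/albert.einstein/temp/
```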