diff --git a/.vscode/settings.json b/.vscode/settings.json
new file mode 100644
index 00000000..d99f2f30
--- /dev/null
+++ b/.vscode/settings.json
@@ -0,0 +1,6 @@
+{
+    "[python]": {
+        "editor.defaultFormatter": "ms-python.black-formatter"
+    },
+    "python.formatting.provider": "none"
+}
\ No newline at end of file
diff --git a/_episodes/02-filesystem.md b/_episodes/02-filesystem.md
index 4dd10f59..26d70f67 100644
--- a/_episodes/02-filesystem.md
+++ b/_episodes/02-filesystem.md
@@ -70,7 +70,8 @@ As well as disk space, 'inodes' are also tracked, this is the *number* of files.
 Notice that the project space for this user is over quota and has been locked, meaning no more data can be added. When your space is locked you will need to move or remove data. Also note that none of the nobackup space is being used. Likely data from project can be moved to nobackup.
 `nn_storage_quota` uses cached data, and so will not immediately show changes to storage use.
 
-For more details on our persistent and nobackup storage systems, including data retention and the nobackup autodelete schedule, please see our [Filesystem and Quota](https://support.nesi.org.nz/hc/en-gb/articles/360000177256-NeSI-File-Systems-and-Quotas) documentation.
+For more details on our persistent and nobackup storage systems, including data retention and the nobackup autodelete schedule,
+please see our [Filesystem and Quota](https://docs.nesi.org.nz/Storage/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas/) documentation.
 
 ### Working Directory
diff --git a/_episodes/05-scheduler.md b/_episodes/05-scheduler.md
index f4d303df..2a5305b8 100644
--- a/_episodes/05-scheduler.md
+++ b/_episodes/05-scheduler.md
@@ -99,7 +99,7 @@ You will get the output printed to your terminal as if you had just run those co
 >
 > You can kill a currently running task by pressing the keys ctrl + c.
 > If you just want your terminal back, but want the task to continue running, you can 'background' it by pressing ctrl + z and then running `bg`.
-> Note, a backgrounded task is still attached to your terminal session, and will be killed when you close the terminal (if you need to keep running a task after you log out, have a look at [tmux](https://support.nesi.org.nz/hc/en-gb/articles/4563511601679-tmux-Reference-sheet)).
+> Note, a backgrounded task is still attached to your terminal session, and will be killed when you close the terminal (if you need to keep running a task after you log out, have a look at [tmux](https://docs.nesi.org.nz/Getting_Started/Cheat_Sheets/tmux-Reference_sheet/)).
 {: .callout}
 
 ## Scheduled Batch Job
diff --git a/_episodes/064-parallel.md b/_episodes/064-parallel.md
index af565e05..c98ebe8d 100644
--- a/_episodes/064-parallel.md
+++ b/_episodes/064-parallel.md
@@ -197,7 +197,7 @@ GPUs compute large number of simple operation in parallel, making them well suit
 On NeSI, GPUs are specialised pieces of hardware that you request in addition to your CPUs and memory.
 
-You can find an up-to-date(ish) list of GPUs available on NeSI in our [Support Documentation](https://support.nesi.org.nz/hc/en-gb/articles/4963040656783-Available-GPUs-on-NeSI)
+You can find an up-to-date(ish) list of GPUs available on NeSI in our [Support Documentation](https://docs.nesi.org.nz/Scientific_Computing/The_NeSI_High_Performance_Computers/Available_GPUs_on_NeSI/)
 
 GPUs can be requested using `--gpus-per-node=<gpu type>:<gpu number>`
diff --git a/_episodes/07-resources.md b/_episodes/07-resources.md
index 087f562a..0e185a13 100644
--- a/_episodes/07-resources.md
+++ b/_episodes/07-resources.md
@@ -217,8 +217,8 @@ SMT is why you are provided 2 CPUs instead of 1 as we do not allow 2 different
 jobs to share a core. This also explains why you will sometimes see CPU
 efficiency above 100%, since CPU efficiency is based on core and not thread.
 
-For more details please see our documentation:
-
+For more details please see our [documentation on Hyperthreading
+](https://docs.nesi.org.nz/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading/)
 
 ## Measuring the System Load From Currently Running Tasks
diff --git a/_includes/snippets_library/NeSI_Mahuika_slurm/_config_options.yml b/_includes/snippets_library/NeSI_Mahuika_slurm/_config_options.yml
index 5254c129..432dad1c 100644
--- a/_includes/snippets_library/NeSI_Mahuika_slurm/_config_options.yml
+++ b/_includes/snippets_library/NeSI_Mahuika_slurm/_config_options.yml
@@ -126,7 +126,7 @@ training_site: "https://carpentries.github.io/instructor-training"
 workshop_repo: "https://github.com/carpentries/workshop-template"
 workshop_site: "https://carpentries.github.io/workshop-template"
 cc_by_human: "https://creativecommons.org/licenses/by/4.0/"
-support_docs: "https://support.nesi.org.nz/hc/en-gb"
+support_docs: "https://docs.nesi.org.nz"
 exercise: "https://docs.google.com/spreadsheets/d/1D5PnhE6iJOB3ZKkDCiBHnk5CNZlhmj_gS-IXKGkkVoI/edit?usp=sharing"
diff --git a/_includes/snippets_library/NeSI_Mahuika_slurm/scheduler/option-flags-list.snip b/_includes/snippets_library/NeSI_Mahuika_slurm/scheduler/option-flags-list.snip
index ee893031..6668e3ce 100644
--- a/_includes/snippets_library/NeSI_Mahuika_slurm/scheduler/option-flags-list.snip
+++ b/_includes/snippets_library/NeSI_Mahuika_slurm/scheduler/option-flags-list.snip
@@ -34,7 +34,7 @@
 #SBATCH --cpus-per-task=10

 Will request 10 logical CPUs per task.
 
-See Hyperthreading.
+See Hyperthreading.
 
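To illustrate how the `#SBATCH` options touched by the snippet above are used in practice, here is a minimal sketch of a Slurm batch script. It is not part of the change set: the job name, project code, and GPU type are placeholder assumptions, and the GPU line only applies if your project has GPU access.

```bash
#!/bin/bash -e

# Placeholder job name and project code -- replace with your own.
#SBATCH --job-name=example_job
#SBATCH --account=nesi12345
#SBATCH --time=00:10:00

# Ask for 10 logical CPUs per task, matching the snippet above.
#SBATCH --cpus-per-task=10

# A GPU could be added with e.g. --gpus-per-node=A100:1 (<gpu type>:<gpu number>),
# assuming the project has been granted GPU access.

# Let multithreaded (e.g. OpenMP) programs know how many logical CPUs they were given.
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK}"

echo "Running with ${SLURM_CPUS_PER_TASK} logical CPUs per task"
```

Submitted with `sbatch`, this requests the same `--cpus-per-task=10` shown in the snippet, with hyperthreading meaning those 10 logical CPUs map to 5 physical cores.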
diff --git a/setup.md b/setup.md
index e1123f8d..38444206 100644
--- a/setup.md
+++ b/setup.md
@@ -41,7 +41,7 @@ In a web browser, navigate to [https://jupyter.nesi.org.nz](https://jupyter.nesi
 ### SSH and SBATCH
 
 ![Terminal](/fig/UsingJupyterHub1.svg)
 
-From your local computer, using an SSH client to connect to a shell session (interactive), running on the NeSI login Node. Jobs scripts are submitted using the `sbatch` command (non-interactive). Instructions for SSH and command-line setup can be found in our documentation: [Accessing the HPCs](https://support.nesi.org.nz/hc/en-gb/sections/360000034315)
+From your local computer, use an SSH client to connect to a shell session (interactive) running on a NeSI login node. Job scripts are submitted using the `sbatch` command (non-interactive). Instructions for SSH and command-line setup can be found in our documentation: [Accessing the HPCs](https://docs.nesi.org.nz/Getting_Started/Accessing_the_HPCs/Setting_Up_and_Resetting_Your_Password/)
 
 **Best For:** Users familiar with command line, Linux/Mac users.
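As a companion to the setup instructions changed above, here is a hedged sketch of the SSH-and-`sbatch` workflow they describe. The username, login hostname, and script name are illustrative placeholders; the actual host and SSH configuration come from the "Accessing the HPCs" documentation linked in the diff.

```bash
# Open an interactive shell session on a NeSI login node.
# Replace the placeholders with your own username and the login host
# given in the "Accessing the HPCs" documentation.
ssh <username>@<nesi-login-host>

# Submit a batch script non-interactively, then check on it in the queue.
sbatch my_job.sl
squeue -u "$USER"
```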