Neurocontainers
1 - CVMFS
Install the CernVM File System (CVMFS)
To begin, install CVMFS. Follow the official instructions here: https://cvmfs.readthedocs.io/en/stable/cpt-quickstart.html#getting-the-software
An example installation for Ubuntu in Windows Subsystem for Linux (WSL) would look like this:
sudo apt-get install lsb-release
wget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb
sudo dpkg -i cvmfs-release-latest_all.deb
rm -f cvmfs-release-latest_all.deb
sudo apt-get update
sudo apt-get install build-essential
sudo apt-get install cvmfs
Ubuntu 24.04 might have an issue with this, so try installing the dependencies manually:
sudo apt install libattr1=1:2.5.2-1build1 libuuid1=2.39.3-9ubuntu6
Configure CVMFS
Once installed, create the keys and configure the servers used:
sudo mkdir -p /etc/cvmfs/keys/ardc.edu.au/
echo "-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwUPEmxDp217SAtZxaBep
Bi2TQcLoh5AJ//HSIz68ypjOGFjwExGlHb95Frhu1SpcH5OASbV+jJ60oEBLi3sD
qA6rGYt9kVi90lWvEjQnhBkPb0uWcp1gNqQAUocybCzHvoiG3fUzAe259CrK09qR
pX8sZhgK3eHlfx4ycyMiIQeg66AHlgVCJ2fKa6fl1vnh6adJEPULmn6vZnevvUke
I6U1VcYTKm5dPMrOlY/fGimKlyWvivzVv1laa5TAR2Dt4CfdQncOz+rkXmWjLjkD
87WMiTgtKybsmMLb2yCGSgLSArlSWhbMA0MaZSzAwE9PJKCCMvTANo5644zc8jBe
NQIDAQAB
-----END PUBLIC KEY-----" | sudo tee /etc/cvmfs/keys/ardc.edu.au/neurodesk.ardc.edu.au.pub
echo "CVMFS_USE_GEOAPI=yes" | sudo tee /etc/cvmfs/config.d/neurodesk.ardc.edu.au.conf
echo 'CVMFS_SERVER_URL="http://cvmfs-geoproximity.neurodesk.org/cvmfs/@fqrn@;http://cvmfs.neurodesk.org/cvmfs/@fqrn@;http://s1osggoc-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1fnal-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1sampa-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1brisbane-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1nikhef-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1bnl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1perth-cvmfs.openhtc.io/cvmfs/@fqrn@"' | sudo tee -a /etc/cvmfs/config.d/neurodesk.ardc.edu.au.conf
echo 'CVMFS_KEYS_DIR="/etc/cvmfs/keys/ardc.edu.au/"' | sudo tee -a /etc/cvmfs/config.d/neurodesk.ardc.edu.au.conf
echo "CVMFS_HTTP_PROXY=DIRECT" | sudo tee /etc/cvmfs/default.local
echo "CVMFS_QUOTA_LIMIT=5000" | sudo tee -a /etc/cvmfs/default.local
sudo cvmfs_config setup
You can use the list above, but you can also pick a subset of servers that are close to you or fit your use case better; see the example at the end of the server overview below. To better understand what to choose, here is the CVMFS server setup we use:
These CVMFS Stratum 1 servers are hosted by the Open Science Grid and every server has a Cloudflare CDN alias that is correctly located through the Maxmind GEOAPI service in the CVMFS client:
- Illinois, USA: s1fnal-cvmfs.openhtc.io:8080 -> cvmfs-s1fnal.opensciencegrid.org:8000
- Sao Paulo, Brazil: s1sampa-cvmfs.openhtc.io:8080 -> sampacs01.if.usp.br:8000
- Nebraska, USA: s1osggoc-cvmfs.openhtc.io:8080 -> cvmfs-s1goc.opensciencegrid.org:8000
- New York, US: s1bnl-cvmfs.openhtc.io:8080 -> cvmfs-s1bnl.opensciencegrid.org:8000
- Oxford, UK: s1ral-cvmfs.openhtc.io:8080 -> cvmfs-egi.gridpp.rl.ac.uk:8000
This server is currently down:
- Netherlands, Europe: s1nikhef-cvmfs.openhtc.io:8080 -> cvmfs01.nikhef.nl:8000
These CVMFS Stratum 1 servers are hosted by the ARDC Nectar Cloud and also have a Cloudflare CDN alias:
- Brisbane, Queensland, Australia: s1brisbane-cvmfs.openhtc.io -> cvmfs-brisbane.neurodesk.org
- Sydney, New South Wales, Australia: s1sydney-cvmfs.openhtc.io -> cvmfs-sydney.neurodesk.org
- Melbourne, Victoria, Australia: s1melbourne-cvmfs.openhtc.io -> cvmfs-melbourne.neurodesk.org
This CVMFS Stratum 1 server is hosted by Pawsey's Nimbus Cloud and also has a Cloudflare CDN alias:
- Perth, Western Australia, Australia: s1perth-cvmfs.openhtc.io -> cvmfs-perth.neurodesk.org
This CVMFS Stratum 1 server is hosted by AWS:
- Frankfurt, Germany: cvmfs-frankfurt.neurodesk.org -> ec2-3-72-92-91.eu-central-1.compute.amazonaws.com
This CVMFS Stratum 1 server is hosted by Jetstream:
- Indiana, US: cvmfs-jetstream.neurodesk.org -> 149.165.172.188
Then we have one geolocation-steered domain, cvmfs-geoproximity.neurodesk.org, which routes clients by location (coordinates given as longitude, latitude):
- 153.02, -27.46 -> cvmfs-brisbane.neurodesk.org
- 151.2073, -33.8678 -> cvmfs-sydney.neurodesk.org
- 115.86, -31.95 -> cvmfs-perth.neurodesk.org
- -88.30, 41.84 -> cvmfs-s1fnal.opensciencegrid.org
- -96.66, 40.83 -> cvmfs-s1goc.opensciencegrid.org
- -1.26, 51.75 -> cvmfs-egi.gridpp.rl.ac.uk
- 4.90, 52.37 -> cvmfs01.nikhef.nl
- 8.68, 50.11 -> ec2-3-72-92-91.eu-central-1.compute.amazonaws.com
- -46.63, -23.54 -> sampacs01.if.usp.br
- -86.45, 39.22 -> cvmfs-jetstream.neurodesk.org
- 145.13, -37.92 -> cvmfs-melbourne.neurodesk.org
Every location has a health check attached to it, and traffic is not forwarded to a destination that is down.
Then we have 3 direct URLs without CDNs as well that are geolocation-steered:
cvmfs1.neurodesk.org:
- South America -> sampacs01.if.usp.br
- North America -> cvmfs-s1fnal.opensciencegrid.org
- Europe -> ec2-3-72-92-91.eu-central-1.compute.amazonaws.com
- Asia -> cvmfs-perth.neurodesk.org
- Default -> cvmfs-brisbane.neurodesk.org
cvmfs2.neurodesk.org:
- North America -> cvmfs-s1goc.opensciencegrid.org
- Europe -> cvmfs01.nikhef.nl
- Default -> cvmfs-s1goc.opensciencegrid.org
cvmfs3.neurodesk.org:
- North America -> cvmfs-s1bnl.opensciencegrid.org
- Asia -> cvmfs-brisbane.neurodesk.org
- Oceania -> cvmfs-perth.neurodesk.org
- Default -> cvmfs-s1bnl.opensciencegrid.org
These servers are currently NOT working and are NOT YET mirroring our repository (we are waiting for RAL to come back online; the others will then mirror it):
- Swinburne, Australia: s1swinburne-cvmfs.openhtc.io:8080 -> cvmfs-s1.hpc.swin.edu.au:8000
- China: s1ihep-cvmfs.openhtc.io:8080 -> cvmfs-stratum-one.ihep.ac.cn:8000
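For example, a client in Europe might keep only nearby mirrors plus the geolocation-steered default. A minimal sketch (substitute any hosts from the lists above that fit your location):
# edit the CVMFS_SERVER_URL line in /etc/cvmfs/config.d/neurodesk.ardc.edu.au.conf, e.g.:
CVMFS_SERVER_URL="http://cvmfs-frankfurt.neurodesk.org/cvmfs/@fqrn@;http://s1ral-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://cvmfs-geoproximity.neurodesk.org/cvmfs/@fqrn@"
# then apply the new configuration
sudo cvmfs_config reload neurodesk.ardc.edu.au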
For WSL users
You will need to run this for each new WSL session:
sudo cvmfs_config wsl2_start
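If you do not want to type this for every session, here is a sketch you could append to your ~/.bashrc (assumes your user can run sudo):
# start CVMFS automatically if the repository is not mounted yet
if ! mountpoint -q /cvmfs/neurodesk.ardc.edu.au 2>/dev/null; then
    sudo cvmfs_config wsl2_start
fi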
Test if the connection works:
sudo cvmfs_config chksetup
ls /cvmfs/neurodesk.ardc.edu.au
sudo cvmfs_talk -i neurodesk.ardc.edu.au host info
cvmfs_config stat -v neurodesk.ardc.edu.au
For Ubuntu 22.04 users
If configuring CVMFS returns the following error:
Error: failed to load cvmfs library, tried: './libcvmfs_fuse3_stub.so' '/usr/lib/libcvmfs_fuse3_stub.so' '/usr/lib64/libcvmfs_fuse3_stub.so' './libcvmfs_fuse_stub.so' '/usr/lib/libcvmfs_fuse_stub.so' '/usr/lib64/libcvmfs_fuse_stub.so'
./libcvmfs_fuse3_stub.so: cannot open shared object file: No such file or directory
/usr/lib/libcvmfs_fuse3_stub.so: cannot open shared object file: No such file or directory
/usr/lib64/libcvmfs_fuse3_stub.so: cannot open shared object file: No such file or directory
./libcvmfs_fuse_stub.so: cannot open shared object file: No such file or directory
libcrypto.so.1.1: cannot open shared object file: No such file or directory
/usr/lib64/libcvmfs_fuse_stub.so: cannot open shared object file: No such file or directory
Failed to read CernVM-FS configuration
A temporary workaround is:
wget https://mirror.umd.edu/ubuntu/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2.15_amd64.deb
sudo dpkg -i libssl1.1_1.1.1f-1ubuntu2.15_amd64.deb
Install singularity/apptainer
e.g. for Ubuntu/Debian, install apptainer:
sudo apt-get install -y software-properties-common
sudo add-apt-repository -y ppa:apptainer/ppa
sudo apt-get update
sudo apt-get install -y apptainer
sudo apt-get install -y apptainer-suid
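You can verify the installation with:
apptainer --version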
e.g. for Ubuntu/Debian, install singularity (building from source requires Go):
export VERSION=1.18.3 OS=linux ARCH=amd64 && \
wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz && \
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz && \
rm go$VERSION.$OS-$ARCH.tar.gz
echo 'export GOPATH=${HOME}/go' >> ~/.bashrc && \
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' >> ~/.bashrc && \
source ~/.bashrc
Then fetch the singularity source, either via git:
go get -d github.com/sylabs/singularity
# pick a tag or branch; omit the checkout to build the latest bleeding-edge code from master
export VERSION=v3.10.0 && \
cd $GOPATH/src/github.com/sylabs/singularity && \
git fetch && \
git checkout $VERSION
or from a release tarball:
# adjust the version as necessary
export VERSION=3.10.0 && \
mkdir -p $GOPATH/src/github.com/sylabs && \
cd $GOPATH/src/github.com/sylabs && \
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-ce-${VERSION}.tar.gz && \
tar -xzf singularity-ce-${VERSION}.tar.gz && \
cd ./singularity-ce-${VERSION}
Then build and install:
./mconfig --without-seccomp --without-conmon --prefix=/usr/local/singularity && \
make -C ./builddir && \
sudo make -C ./builddir install
export PATH="/usr/local/singularity/bin:${PATH}"
Use of Neurodesk CVMFS containers
The containers are now available in /cvmfs/neurodesk.ardc.edu.au/containers/ and can be started with:
singularity shell /cvmfs/neurodesk.ardc.edu.au/containers/itksnap_3.8.0_20201208/itksnap_3.8.0_20201208.simg
Make sure that SINGULARITY_BINDPATH includes the directories you want to work with:
export SINGULARITY_BINDPATH='/cvmfs,/mnt,/home'
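With the bind paths set, you can also run a single command non-interactively instead of opening an interactive shell:
singularity exec /cvmfs/neurodesk.ardc.edu.au/containers/itksnap_3.8.0_20201208/itksnap_3.8.0_20201208.simg ls /cvmfs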
For WSL users
The home directory might not be supported. Avoid mounting it with:
singularity shell --no-home /cvmfs/neurodesk.ardc.edu.au/containers/itksnap_3.8.0_20201208/itksnap_3.8.0_20201208.simg
or configure permanently:
sudo vi /etc/singularity/singularity.conf
set
mount home = no
Install module system
sudo yum install lmod
or
sudo apt install lmod
Use of containers in the module system
Configuration for module system
Create the new file /usr/share/module.sh
with the following content (NOTE: replace YOURLMODVERSION_HERE with your Lmod version, e.g. 6.6 on Ubuntu 20.04/22.04 or 8.6.19 on Ubuntu 24.04):
# system-wide profile.modules #
# Initialize modules for all sh-derivative shells #
#----------------------------------------------------------------------#
trap "" 1 2 3
case "$0" in
-bash|bash|*/bash) . /usr/share/lmod/YOURLMODVERSION_HERE/init/bash ;;
-ksh|ksh|*/ksh) . /usr/share/lmod/YOURLMODVERSION_HERE/init/ksh ;;
-zsh|zsh|*/zsh) . /usr/share/lmod/YOURLMODVERSION_HERE/init/zsh ;;
-sh|sh|*/sh) . /usr/share/lmod/YOURLMODVERSION_HERE/init/sh ;;
*) . /usr/share/lmod/YOURLMODVERSION_HERE/init/sh ;; # default for scripts
esac
trap - 1 2 3
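If you prefer not to look up the Lmod version by hand, here is a sketch that substitutes it automatically (assumes a single versioned directory under /usr/share/lmod):
# detect the installed Lmod version (e.g. 6.6 or 8.6.19) and patch module.sh
LMOD_VERSION=$(basename "$(ls -d /usr/share/lmod/[0-9]* | head -n 1)")
sudo sed -i "s|YOURLMODVERSION_HERE|${LMOD_VERSION}|g" /usr/share/module.sh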
Make the module system usable in the shell
Add the following lines to your ~/.bashrc file or to /etc/bash.bashrc for a global install:
if [ -f '/usr/share/module.sh' ]; then source /usr/share/module.sh; fi
if [ -d /cvmfs/neurodesk.ardc.edu.au/neurodesk-modules ]; then
# export MODULEPATH="/cvmfs/neurodesk.ardc.edu.au/neurodesk-modules"
module use /cvmfs/neurodesk.ardc.edu.au/neurodesk-modules/*
else
export MODULEPATH="/neurodesktop-storage/containers/modules"
module use $MODULEPATH
export CVMFS_DISABLE=true
fi
if [ -f '/usr/share/module.sh' ]; then
echo 'Run "ml av" to see which tools are available - use "ml <tool>" to use them in this shell.'
if [ -v "$CVMFS_DISABLE" ]; then
if [ ! -d $MODULEPATH ]; then
echo 'Neurodesk tools not yet downloaded. Choose tools to install from the Application menu.'
fi
fi
fi
Restart the current shell or run:
source ~/.bashrc
Use of containers in the module system
export SINGULARITY_BINDPATH='/cvmfs,/mnt,/home'
module use /cvmfs/neurodesk.ardc.edu.au/neurodesk-modules/*
ml fsl
fslmaths
Troubleshooting and diagnostics
# Check servers
sudo cvmfs_talk -i neurodesk.ardc.edu.au host probe
sudo cvmfs_talk -i neurodesk.ardc.edu.au host info
# Change settings
sudo touch /var/log/cvmfs_debug.log.cachemgr
sudo chown cvmfs /var/log/cvmfs_debug.log.cachemgr
sudo touch /var/log/cvmfs_debug.log
sudo chown cvmfs /var/log/cvmfs_debug.log
sudo vi /etc/cvmfs/config.d/neurodesk.ardc.edu.au.conf
echo -e "\nCVMFS_DEBUGLOG=/var/log/cvmfs_debug.log" | sudo tee -a /etc/cvmfs/default.local
cat /etc/cvmfs/default.local
sudo cvmfs_config umount
sudo service autofs stop
sudo mount -t cvmfs neurodesk.ardc.edu.au /cvmfs/neurodesk.ardc.edu.au
# check if new settings are applied correctly:
cvmfs_config showconfig neurodesk.ardc.edu.au
cat /var/log/cvmfs_debug.log
cat /var/log/cvmfs_debug.log.cachemgr
2 - DataLad
Using Neurodesk Containers with DataLad
This page explains how to use DataLad and the ReproNim containers with Neurodesk tools.
Install DataLad, datalad-container, and the ReproNim containers repository
conda install -c conda-forge datalad
pip install datalad_container
datalad install https://github.com/ReproNim/containers.git
cd containers
List all default available containers
datalad containers-list
Download and run the latest container version
datalad containers-run -n neurodesk-romeo
Change version of container
You can change which version of a container is used in two ways:
Option 1: change version in .datalad/config
vi .datalad/config
# now change the version of the container you like
# all available containers can be seen via `ls images/neurodesk`
datalad save -m 'downgraded version of romeo to x.x.x'
datalad containers-run -n neurodesk-romeo
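For orientation, the container entry in .datalad/config looks roughly like this (a sketch; the exact image path and version depend on what ships under images/neurodesk):
[datalad "containers.neurodesk-romeo"]
    image = images/neurodesk/neurodesk-romeo--3.2.4.sing
    cmdexec = {img_dspath}/scripts/singularity_cmd run {img} {cmd}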
Option 2: change version using freeze_versions script
# all available containers can be seen via `ls images/neurodesk`
scripts/freeze_versions neurodesk-romeo=3.2.4
datalad save -m 'downgraded version of romeo to 3.2.4'
datalad containers-run -n neurodesk-romeo
3 - Docker
Our containers are automatically built in https://github.com/NeuroDesk/neurocontainers/ and hosted on Docker Hub and on GitHub.
Pull Docker containers
e.g. for a julia container:
docker pull vnmd/julia_1.6.1
You can also build singularity images from Docker Hub:
singularity build julia_1.6.1.simg docker://vnmd/julia_1.6.1
Replace julia_1.6.1 with your selected application. You can find the available containers here: https://neurodesk.org/applications/
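The pulled image can then be used directly, e.g. (a sketch; assumes the julia executable is on the container's PATH):
# start julia interactively, mounting the current directory into the container
docker run -it --rm -v "$(pwd)":/data vnmd/julia_1.6.1 julia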
4 - Open Recon
These instructions were tested on GitHub Codespaces, and we recommend this as a starting point.
For a local setup you need Docker (https://www.docker.com/) and Python 3, and you need to install neurodocker and add it to your path:
python -m pip install neurodocker
#the path depends on your local setup
export PATH=$PATH:~/.local/lib/python3.12/site-packages/bin
export PATH=$PATH:~/.local/bin
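Verify that neurodocker is found:
neurodocker --version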
1) Add the installation of the Python MRD server to any recipe in https://github.com/neurodesk/neurocontainers
Make sure to adjust invertcontrast.py to your pipeline's needs (or replace/rename other files from the Python MRD server):
- include: macros/openrecon/neurodocker.yaml
here is an example: https://github.com/NeuroDesk/neurocontainers/tree/main/recipes/openreconexample
Then build the recipe:
sf-login openreconexample --architecture x86_64
2) Test the tool inside the container on its own first, and then test it through the MRD server.
Convert DICOM data to MRD test data:
cd /opt/code/python-ismrmrd-server
python3 dicom2mrd.py -o input_data.h5 PATH_TO_YOUR_DICOM_FILES
Start the server and client and test the application:
python3 /opt/code/python-ismrmrd-server/main.py -v -r -H=0.0.0.0 -p=9002 -s -S=/tmp/share/saved_data &
# wait until you see Serving ... and then press ENTER
python3 /opt/code/python-ismrmrd-server/client.py -G dataset -o openrecon_output.h5 input_data.h5
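To inspect what the server returned, you can list the datasets inside the output file (a sketch; assumes h5py is installed):
python3 -c "import h5py; h5py.File('openrecon_output.h5', 'r').visit(print)"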
3) submit the container-recipe to the https://github.com/NeuroDesk/neurocontainers/ repository
here is an example: https://github.com/NeuroDesk/neurocontainers/tree/main/recipes/openreconexample
Then the container gets built automatically.
4) submit the container to the https://github.com/NeuroDesk/openrecon/ repository
here is an example: https://github.com/NeuroDesk/openrecon/tree/main/recipes/openreconexample
More detailed instructions for building all of this on GitHub
contributed by Kerrin Pine
Prerequisites
You have a GitHub account (public github.com required, so that a container can be submitted to the public NeuroDesk OpenRecon repository and built)
Process
- Fork neurodesk/neurocontainers to your personal GitHub account (go to https://github.com/neurodesk/neurocontainers, click "Fork" in the upper right and, if prompted, fork to your personal GitHub account).
- After forking, go to your forked repo, e.g.: github.com/YOUR_GITHUB_USERNAME/neurocontainers
- Create a new codespace (In your forked repo, click the green <> Code button, Select “Create codespace on master”)
- In the terminal created, run neurodocker --version and you should see something like 2.0.0, good!
- Still in the terminal, cd recipes, mkdir projectname (replace with your project name), copy files from recipes/openreconexample to this new directory to give us something to start from
- In build.yaml and test.yaml, change all occurrences of openreconexample to your own project name and change the openreconexample.py to projectname.py
- Follow the instructions in build.yaml to build
- Building drops you into the container itself. Follow the instructions in test.yaml to import your own test DICOM data (can be dragged from another window into the folder in Codespaces) into a h5 file for testing.
- Continue following the instructions in test.yaml to start the server and send demo data to it (e.g. python3 /opt/code/python-ismrmrd-server/client.py -G dataset -o /buildhostdirectory/output.h5 /buildhostdirectory/b0map.h5 -c cbsopenreconexample). You should see an appropriate number of images being sent from the client to the server and an appropriate number returned. The output in e.g. output.h5 can be viewed with the built-in H5web.
- Pressing Ctrl+Shift+X (or clicking Extensions on the left) and installing "niivue" gives you a NIfTI image viewer in Codespaces if you need to check intermediate outputs for troubleshooting.
- Once the container has been thoroughly tested and you are happy with it, your new files will need to be committed (and pushed if you were not working on github.com); don't include your demo data!
- To build a container ready for the scanner, the first step is to open a pull request (e.g. "Add new openreconexample container for MRD server. This PR adds a new example container for OpenRecon using the MRD server. It includes: neurodocker.yaml for building the container, customized MRD Python scripts, and testing inside GitHub Codespaces").
- The second step is to write a recipe for https://github.com/neurodesk/openrecon. Again, since it's not our repository, we need to fork it, navigate to recipes, create a folder for our project, and add an OpenReconLabel.json (defines how the container description and UI options appear on the scanner) and a params.sh with the version number. Then open a pull request. Updating the version number will trigger the container to be re-built; instructions for downloading and installing the container then appear as an "Issue" on this repository.
5 - Singularity
Our docker containers are converted to singularity containers and stored on object storage.
Download Singularity Containers
First get an overview of which containers are available as Singularity containers: https://github.com/NeuroDesk/neurocommand/blob/main/cvmfs/log.txt
curl -s https://raw.githubusercontent.com/NeuroDesk/neurocommand/main/cvmfs/log.txt
Assign the container name to a variable:
export container=itksnap_3.8.0_20201208
Then download the containers. One way is to use curl:
curl -X GET https://neurocontainers.neurodesk.org/$container.simg -O
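Alternatively, with wget:
wget https://neurocontainers.neurodesk.org/${container}.simg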
Singularity Containers and GPUs
Some of our containers contain GPU-accelerated applications. Here is an example that tests the GPU accelerated program eddy in FSL:
curl -X GET https://neurocontainers.neurodesk.org/fsl_6.0.5.1_20221016.simg -O
git clone https://github.com/neurolabusc/gpu_test.git
singularity shell --nv fsl_6.0.5.1_20221016.simg
cd gpu_test/etest/
bash runme_gpu.sh
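If the GPU is passed through correctly, nvidia-smi should also work inside the container:
singularity exec --nv fsl_6.0.5.1_20221016.simg nvidia-smi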
Transparent Singularity
The singularity containers can also be used in combination with our Transparent Singularity Tool, which wraps the executables inside a container to make them easily available for pipelines. More information can be found in the Transparent Singularity repository: https://github.com/NeuroDesk/transparent-singularity
One example to do this is:
curl -s https://raw.githubusercontent.com/NeuroDesk/neurocommand/main/cvmfs/log.txt
export container=itksnap_3.8.0_20201208
git clone https://github.com/NeuroDesk/transparent-singularity ${container}
cd ${container}
./run_transparent_singularity.sh ${container}
6 - Windows 11 and Windows Subsystem for Linux
1. Install WSL
Follow the instructions to enable Windows Subsystem for Linux 2 in Windows 11: https://docs.microsoft.com/en-us/windows/wsl/install
2. Configure CVMFS, Singularity and LMOD (only needs to be done once)
Install build tools
sudo apt update
sudo apt install make gcc
Install singularity
export SINGULARITY_VERSION=3.9.3 VERSION=1.17.2 OS=linux ARCH=amd64
wget -q https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz
sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz
rm go$VERSION.$OS-$ARCH.tar.gz
export GOPATH=${HOME}/go
export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin
mkdir -p $GOPATH/src/github.com/sylabs
cd $GOPATH/src/github.com/sylabs
wget -q https://github.com/sylabs/singularity/releases/download/v${SINGULARITY_VERSION}/singularity-ce-${SINGULARITY_VERSION}.tar.gz
tar -xzvf singularity-ce-${SINGULARITY_VERSION}.tar.gz
cd singularity-ce-${SINGULARITY_VERSION}
./mconfig --prefix=/usr/local/singularity
make -C builddir
sudo make -C builddir install
cd ..
sudo rm -rf singularity-ce-${SINGULARITY_VERSION}
sudo rm -rf /usr/local/go $GOPATH
Set up bind paths for Singularity (e.g. in .bashrc):
export PATH="/usr/local/singularity/bin:${PATH}"
export SINGULARITY_BINDPATH='/cvmfs,/mnt,/home'
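Verify that the build worked:
singularity --version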
CVMFS
Follow the instructions here: https://neurodesk.org/docs/getting-started/neurocontainers/cvmfs/
LMOD
sudo apt install lmod
3. Use Neurodesk containers
When restarting WSL the cvmfs service has to be started manually:
sudo cvmfs_config wsl2_start
Initialize the neurodesk modules:
module use /cvmfs/neurodesk.ardc.edu.au/neurodesk-modules/*
Example usage of fsleyes:
ml fsl
fsleyes
List the available programs:
ml av