Welcome to BrainSpace’s documentation!¶
BrainSpace is a lightweight cross-platform toolbox primarily intended for macroscale gradient mapping and analysis of neuroimaging and connectome level data. The current version of BrainSpace is available in Python and MATLAB, programming languages widely used by the neuroimaging and network neuroscience communities. The toolbox also contains several maps that allow for exploratory analysis of gradient correspondence with other brain-derived features, together with tools to generate spatial null models.
Installation Guide¶
BrainSpace is available in Python and MATLAB.
Python installation¶
BrainSpace works on Python 3.5+. It may also work with older versions of Python 3, although this has not been tested.
Dependencies¶
To use BrainSpace, the following Python packages are required:
Nibabel is required for reading/writing GIFTI surfaces. Matplotlib is only used for colormaps, and this dependency may be removed in future releases.
Additional dependencies¶
To enable interactivity, some plotting functionality in IPython notebooks makes use of the panel package. PyQt is another dependency, used for background plotting; see PyVista for more on background plotting. Support for background rendering, however, is still experimental.
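If you want to try the interactive and background plotting features, the optional packages can be installed with pip as well. The line below is only a sketch; the exact package names (e.g. pyqt5 for PyQt) may vary with your Python distribution:
pip install panel pyqt5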
Installation¶
BrainSpace can be installed using pip:
pip install brainspace
Alternatively, you can install the package from Github as follows:
git clone https://github.com/MICA-MNI/BrainSpace.git
cd BrainSpace
python setup.py install
MATLAB installation¶
This toolbox has been tested with MATLAB R2018b, although we expect it to work with R2018a and newer. Operating systems used during testing were macOS Mojave (10.14.6) and Ubuntu Xenial Xerus (16.04.6).
BrainSpace can be installed by downloading and unzipping the code from Github and running the following in MATLAB:
addpath(genpath('/path/to/BrainSpace/matlab/'))
If you want to load BrainSpace every time you start MATLAB, type edit startup and append the above line to the end of this file.
You can move the MATLAB directory to another location. However, the example data loader functions used in our tutorials require the MATLAB and shared directories to be located in the same parent directory.
If you wish to open GIFTI files (necessary for the tutorials), you will also need to install the gifti library.
Getting Started¶
Introduction to Gradients¶
Classically, many neuroimaging studies have attempted to parcellate the human brain into distinct areas based on anatomical or functional features. Whilst effective, the strict boundaries assumed by these methods are often unrealistic, and they impose no clear ordering between parcels. More recently, “gradient” approaches, which describe the brain as a set of continuous scores along manifold axes, have gained popularity (see also References). Core to these techniques is the computation of an affinity matrix that captures inter-area similarity of a given feature, followed by the application of dimensionality reduction techniques to identify a gradual ordering of the input matrix in a lower-dimensional manifold space.
BrainSpace is a compact and flexible toolbox that implements a wide variety of approaches to build macroscale gradients from neuroimaging and connectome data. In this overview, we will show the basic steps needed for the identification of gradients and their visualization. The steps below will help you get started and build your first gradients. If you haven’t done so yet, we recommend you install the package (Installation Guide) so you can follow along with the examples.
Basic Gradient Construction¶
The package comes with the conte69 surfaces, and several cortical features and parcellations. Let’s start by loading the conte69 surfaces:
>>> from brainspace.datasets import load_conte69
>>> # Load left and right hemispheres
>>> surf_lh, surf_rh = load_conte69()
>>> surf_lh.n_points
32492
>>> surf_rh.n_points
32492
addpath(genpath('/path/to/micasoft/BrainSpace/matlab'));
% Load left and right hemispheres
[surf_lh, surf_rh] = load_conte69();
To load your own surfaces, you can use the read_surface function.
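For example, loading a pair of your own surfaces might look like the following sketch, where the file names are placeholders for any surface format supported by BrainSpace:
>>> from brainspace.mesh.mesh_io import read_surface
>>> # hypothetical file names; replace with your own surfaces
>>> my_lh = read_surface('my_surface_lh.gii')
>>> my_rh = read_surface('my_surface_rh.gii')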
BrainSpace also provides surface plotting functionality. We can plot the
conte69 surfaces as follows:
>>> from brainspace.plotting import plot_hemispheres
>>> plot_hemispheres(surf_lh, surf_rh, size=(800, 200))
plot_hemispheres(ones(64984,1),{surf_lh,surf_rh});

Let’s also load the mean connectivity matrix built from a subset of the Human Connectome Project (HCP). The package comes with several example matrices, downsampled using the Schaefer parcellations (Schaefer et al., 2017). Let’s load one of them.
>>> from brainspace.datasets import load_group_fc, load_parcellation
>>> labeling = load_parcellation('schaefer', scale=400, join=True)
>>> m = load_group_fc('schaefer', scale=400)
>>> m.shape
(400, 400)
labeling = load_parcellation('schaefer',400);
conn_matrices = load_group_fc('schaefer',400);
m = conn_matrices.schaefer_400;
To compute the gradients of our connectivity matrix m we create the GradientMaps object and fit the model to our data:
>>> from brainspace.gradient import GradientMaps
>>> # Build gradients using diffusion maps and normalized angle
>>> gm = GradientMaps(n_components=2, approach='dm', kernel='normalized_angle')
>>> # and fit to the data
>>> gm.fit(m)
GradientMaps(alignment=None, approach='dm', kernel='normalized_angle',
n_components=2, random_state=None)
>>> # The gradients are stored in gm.gradients_
>>> gm.gradients_.shape
(400, 2)
% Build gradients using diffusion maps and normalized angle
gm = GradientMaps('kernel','na','approach','dm','n_components',2);
% and fit to the data
gm = gm.fit(m);
Now we can visually inspect the gradients. Let’s plot the first gradient:
>>> import numpy as np
>>> from brainspace.utils.parcellation import map_to_labels
>>> # map to original size
>>> grad = map_to_labels(gm.gradients_[:, 0], labeling, mask=labeling != 0,
... fill=np.nan)
>>> # Plot first gradient on the cortical surface.
>>> plot_hemispheres(surf_lh, surf_rh, array_name=grad, size=(800, 200))
% Plot the first gradient on the cortical surface.
plot_hemispheres(gm.gradients{1}(:,1), {surf_lh,surf_rh}, ...
'parcellation',labeling.schaefer_400);

As we can see, this gradient corresponds to those observed previously in the literature, i.e., running from default-mode to sensory areas.
That concludes this getting started section. For the full documentation and tutorials, please see the MATLAB Package and/or Python Package pages.
Python Package¶
This page contains links to all documents related to the BrainSpace Python package. The tutorials are a good starting point to get familiar with the main functionality provided by BrainSpace: creation and alignment of gradients, and null models. With the tutorials you can also learn about the data that comes with the package, plotting, and other functions. For more information, please refer to the API.
In the Python package, surface functionality is built on top of the
Visualization Toolkit (VTK). BrainSpace
offers a high-level interface to work with VTK.
In VTK wrapping, we introduce the wrapping scheme used in BrainSpace and some
basic usage examples. Note, however, that this part is not a requirement to
start using BrainSpace. In the Mesh module you can find most
of the functionality you need to work with surfaces. A surface in BrainSpace
is represented as a BSPolyData object. Reading/writing of several
formats is supported through the read_surface
and
write_surface
functions.
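As a hedged example (the file names below are placeholders, and we assume the output format is inferred from the file extension), a surface can be read in one format and written out in another:
>>> from brainspace.mesh.mesh_io import read_surface, write_surface
>>> # read a GIFTI surface and save it as a VTK PolyData file
>>> surf = read_surface('lh.pial.gii')
>>> write_surface(surf, 'lh.pial.vtk')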
Tutorials¶

Tutorial 0: Preparing your data for gradient analysis¶
In this example, we will introduce how to preprocess raw MRI data and how to prepare it for subsequent gradient analysis in the next tutorials.
Requirements¶
For this tutorial, you will need to install the Python package nilearn, version 0.9.0 or above. You can do it using pip:
pip install "nilearn>=0.9.0"
Preprocessing¶
Begin with an MRI dataset that is organized in BIDS format. We recommend preprocessing your data using fmriprep, as described below, but any preprocessing pipeline will work.
Following is example code to run fmriprep using docker from the command line:
docker run -ti --rm \
-v <local_BIDS_data_dir>:/data:ro \
-v <local_output_dir>:/out poldracklab/fmriprep:latest \
--output-spaces fsaverage5 \
--fs-license-file license.txt \
/data /out participant
Note
For this tutorial, it is crucial to output the data onto a cortical surface template space.
Import the dataset as timeseries¶
The timeseries should be a numpy array with the dimensions: nodes x timepoints
Following is an example for reading in data:
import nibabel as nib
import numpy as np
filename = 'filename.{}.mgz' # where {} will be replaced with 'lh' and 'rh'
timeseries = [None] * 2
for i, h in enumerate(['lh', 'rh']):
    timeseries[i] = nib.load(filename.format(h)).get_fdata().squeeze()
timeseries = np.vstack(timeseries)
As a working example, simply fetch timeseries:
from brainspace.datasets import fetch_timeseries_preprocessing
timeseries = fetch_timeseries_preprocessing()
Confound regression¶
To remove confound regressors from the output of the fmriprep pipeline, first extract the confound columns. For example:
from nilearn.interfaces.fmriprep import load_confounds_strategy
confounds_out = load_confounds_strategy("path/to/fmriprep/output/sub-<subject>_task-<task>_space-<space>_desc-preproc_bold.nii.gz",
denoise_strategy='simple')
As a working example, simply read in confounds
from brainspace.datasets import load_confounds_preprocessing
confounds_out = load_confounds_preprocessing()
Do the confound regression
from nilearn import signal
clean_ts = signal.clean(timeseries.T, confounds=confounds_out).T
And extract the cleaned timeseries onto a set of labels
import numpy as np
from nilearn import datasets
from brainspace.utils.parcellation import reduce_by_labels
# Fetch surface atlas
atlas = datasets.fetch_atlas_surf_destrieux()
# Remove non-cortex regions
regions = atlas['labels'].copy()
masked_regions = [b'Medial_wall', b'Unknown']
masked_labels = [regions.index(r) for r in masked_regions]
for r in masked_regions:
    regions.remove(r)
# Build Destrieux parcellation and mask
labeling = np.concatenate([atlas['map_left'], atlas['map_right']])
mask = ~np.isin(labeling, masked_labels)
# Distinct labels for left and right hemispheres
lab_lh = atlas['map_left']
labeling[lab_lh.size:] += lab_lh.max() + 1
# extract mean timeseries for each label
seed_ts = reduce_by_labels(clean_ts[mask], labeling[mask], axis=1, red_op='mean')
Calculate functional connectivity matrix¶
The following example uses nilearn:
from nilearn.connectome import ConnectivityMeasure
correlation_measure = ConnectivityMeasure(kind='correlation')
correlation_matrix = correlation_measure.fit_transform([seed_ts.T])[0]
Plot the correlation matrix:
from nilearn import plotting
# Reduce matrix size, only for visualization purposes
mat_mask = np.where(np.std(correlation_matrix, axis=1) > 0.2)[0]
c = correlation_matrix[mat_mask][:, mat_mask]
# Create corresponding region names
regions_list = ['%s_%s' % (h, r.decode()) for h in ['L', 'R'] for r in regions]
masked_regions = [regions_list[i] for i in mat_mask]
corr_plot = plotting.plot_matrix(c, figure=(15, 15), labels=masked_regions,
vmax=0.8, vmin=-0.8, reorder=True)

Run gradient analysis and visualize¶
Run gradient analysis
from brainspace.gradient import GradientMaps
gm = GradientMaps(n_components=2, random_state=0)
gm.fit(correlation_matrix)
Out:
GradientMaps(n_components=2, random_state=0)
Visualize results
from brainspace.datasets import load_fsa5
from brainspace.plotting import plot_hemispheres
from brainspace.utils.parcellation import map_to_labels
# Map gradients to original parcels
grad = [None] * 2
for i, g in enumerate(gm.gradients_.T):
    grad[i] = map_to_labels(g, labeling, mask=mask, fill=np.nan)
# Load fsaverage5 surfaces
surf_lh, surf_rh = load_fsa5()
plot_hemispheres(surf_lh, surf_rh, array_name=grad, size=(1200, 400), cmap='viridis_r',
color_bar=True, label_text=['Grad1', 'Grad2'], zoom=1.5)

This concludes the setup tutorial. The following tutorials can be run using either the output generated here or the example data.
Tutorial 1: Building your first gradient¶
In this example, we will derive a gradient and do some basic inspections to determine which gradients may be of interest and what the multidimensional organization of the gradients looks like.
We’ll first start by loading some sample data. Note that we’re using parcellated data for computational efficiency.
from brainspace.datasets import load_group_fc, load_parcellation, load_conte69
# First load mean connectivity matrix and Schaefer parcellation
conn_matrix = load_group_fc('schaefer', scale=400)
labeling = load_parcellation('schaefer', scale=400, join=True)
# and load the conte69 surfaces
surf_lh, surf_rh = load_conte69()
Let’s first look at the parcellation scheme we’re using.
from brainspace.plotting import plot_hemispheres
plot_hemispheres(surf_lh, surf_rh, array_name=labeling, size=(1200, 200),
cmap='tab20', zoom=1.85)

and let’s construct our gradients.
from brainspace.gradient import GradientMaps
# Ask for 10 gradients (default)
gm = GradientMaps(n_components=10, random_state=0)
gm.fit(conn_matrix)
Out:
GradientMaps(random_state=0)
Note that the default parameters are diffusion embedding approach, 10 components, and no kernel (use raw data). Once you have your gradients, a good first step is to simply inspect what they look like. Let’s have a look at the first two gradients.
import numpy as np
from brainspace.utils.parcellation import map_to_labels
mask = labeling != 0
grad = [None] * 2
for i in range(2):
    # map the gradient to the parcels
    grad[i] = map_to_labels(gm.gradients_[:, i], labeling, mask=mask, fill=np.nan)
plot_hemispheres(surf_lh, surf_rh, array_name=grad, size=(1200, 400), cmap='viridis_r',
color_bar=True, label_text=['Grad1', 'Grad2'], zoom=1.55)

But which gradients should you keep for your analysis? In some cases you may have an a priori interest in some previously defined set of gradients. When you do not have a pre-defined set, you can instead look at the lambdas (eigenvalues) of each component in a scree plot. Higher eigenvalues (or lower in Laplacian eigenmaps) are more important, so one can choose a cut-off based on a scree plot.
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, figsize=(5, 4))
ax.scatter(range(gm.lambdas_.size), gm.lambdas_)
ax.set_xlabel('Component Nb')
ax.set_ylabel('Eigenvalue')
plt.show()

This concludes the first tutorial. In the next tutorial we will have a look at how to customize the methods of gradient estimation, as well as gradient alignments.
Tutorial 2: Customizing and aligning gradients¶
In this tutorial you’ll learn about the methods available within the GradientMaps class. The flexible usage of this class allows for the customization of gradient computation with different kernels and dimensionality reductions, as well as aligning gradients from different datasets. This tutorial will only show you how to apply these techniques.
Customizing gradient computation¶
As before, we’ll start by loading the sample data.
from brainspace.datasets import load_group_fc, load_parcellation, load_conte69
# First load mean connectivity matrix and Schaefer parcellation
conn_matrix = load_group_fc('schaefer', scale=400)
labeling = load_parcellation('schaefer', scale=400, join=True)
mask = labeling != 0
# and load the conte69 hemisphere surfaces
surf_lh, surf_rh = load_conte69()
The GradientMaps object allows for many different kernels and dimensionality reduction techniques. Let’s have a look at three different kernels.
import numpy as np
from brainspace.gradient import GradientMaps
from brainspace.plotting import plot_hemispheres
from brainspace.utils.parcellation import map_to_labels
kernels = ['pearson', 'spearman', 'normalized_angle']
gradients_kernel = [None] * len(kernels)
for i, k in enumerate(kernels):
    gm = GradientMaps(kernel=k, approach='dm', random_state=0)
    gm.fit(conn_matrix)
    gradients_kernel[i] = map_to_labels(gm.gradients_[:, i], labeling, mask=mask,
                                        fill=np.nan)
label_text = ['Pearson', 'Spearman', 'Normalized\nAngle']
plot_hemispheres(surf_lh, surf_rh, array_name=gradients_kernel, size=(1200, 600),
cmap='viridis_r', color_bar=True, label_text=label_text, zoom=1.45)

It seems the gradients provided by these kernels are quite similar although their scaling is quite different. Do note that the gradients are in arbitrary units, so the smaller/larger axes across kernels do not imply anything. Similar to using different kernels, we can also use different dimensionality reduction techniques.
# PCA, Laplacian eigenmaps and diffusion mapping
embeddings = ['pca', 'le', 'dm']
gradients_embedding = [None] * len(embeddings)
for i, emb in enumerate(embeddings):
    gm = GradientMaps(kernel='normalized_angle', approach=emb, random_state=0)
    gm.fit(conn_matrix)
    gradients_embedding[i] = map_to_labels(gm.gradients_[:, 0], labeling, mask=mask,
                                           fill=np.nan)
label_text = ['PCA', 'LE', 'DM']
plot_hemispheres(surf_lh, surf_rh, array_name=gradients_embedding, size=(1200, 600),
cmap='viridis_r', color_bar=True, label_text=label_text, zoom=1.45)

Gradient alignment¶
A more principled way of increasing comparability across gradients is to use alignment techniques. BrainSpace provides two alignment techniques: Procrustes analysis and joint alignment. For this example we will load functional connectivity data of a second subject group and align it with the first group.
conn_matrix2 = load_group_fc('schaefer', scale=400, group='holdout')
gp = GradientMaps(kernel='normalized_angle', alignment='procrustes')
gj = GradientMaps(kernel='normalized_angle', alignment='joint')
gp.fit([conn_matrix, conn_matrix2])
gj.fit([conn_matrix, conn_matrix2])
Out:
GradientMaps(alignment='joint', kernel='normalized_angle')
Here, gp contains the Procrustes aligned data and gj contains the joint aligned data. Let’s plot them, but in separate figures to keep things organized.
# First gradient from original and holdout data, without alignment
gradients_unaligned = [None] * 2
for i in range(2):
    gradients_unaligned[i] = map_to_labels(gp.gradients_[i][:, 0], labeling,
                                           mask=mask, fill=np.nan)
label_text = ['Unaligned\nGroup 1', 'Unaligned\nGroup 2']
plot_hemispheres(surf_lh, surf_rh, array_name=gradients_unaligned, size=(1200, 400),
cmap='viridis_r', color_bar=True, label_text=label_text, zoom=1.5)

# With procrustes alignment
gradients_procrustes = [None] * 2
for i in range(2):
    gradients_procrustes[i] = map_to_labels(gp.aligned_[i][:, 0], labeling, mask=mask,
                                            fill=np.nan)
label_text = ['Procrustes\nGroup 1', 'Procrustes\nGroup 2']
plot_hemispheres(surf_lh, surf_rh, array_name=gradients_procrustes, size=(1200, 400),
cmap='viridis_r', color_bar=True, label_text=label_text, zoom=1.5)

# With joint alignment
gradients_joint = [None] * 2
for i in range(2):
    gradients_joint[i] = map_to_labels(gj.aligned_[i][:, 0], labeling, mask=mask,
                                       fill=np.nan)
label_text = ['Joint\nGroup 1', 'Joint\nGroup 2']
plot_hemispheres(surf_lh, surf_rh, array_name=gradients_joint, size=(1200, 400),
cmap='viridis_r', color_bar=True, label_text=label_text, zoom=1.5)

Although in this example we don’t see any big differences, if the input data were less similar, alignment may also resolve changes in the order of the gradients. However, you should always inspect the output of an alignment; if the input data are sufficiently dissimilar, the alignment may produce odd results.
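One simple way to inspect an alignment, sketched below using the objects defined above (this check is not part of the original tutorial), is to correlate the unaligned and aligned gradients of the same group; diagonal values far from ±1 would indicate that components were reordered or mixed:
# Correlate the 10 unaligned gradients of group 1 with their Procrustes-aligned versions
import numpy as np
corr = np.corrcoef(gp.gradients_[0].T, gp.aligned_[0].T)[:10, 10:]
print(np.round(np.diag(corr), 2))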
In some instances, you may want to align gradients to an out-of-sample gradient, for example when aligning individuals to a hold-out group gradient. When performing a Procrustes alignment, a ‘reference’ can be specified. The first alignment iteration will then be to the reference. For the purposes of this example, we will use the gradient of the hold-out group as the reference.
gref = GradientMaps(kernel='normalized_angle', approach='le')
gref.fit(conn_matrix2)
galign = GradientMaps(kernel='normalized_angle', approach='le', alignment='procrustes')
galign.fit(conn_matrix, reference=gref.gradients_)
Out:
GradientMaps(alignment='procrustes', approach='le', kernel='normalized_angle')
The gradients in galign.aligned_ are now aligned to the reference gradients.
Gradient fusion¶
We can also fuse data across multiple modalities and build multi-modal gradients. In this case we only look at one set of output gradients, rather than one per modality.
First, let’s load the example data of microstructural profile covariance (Paquola et al., 2019) and functional connectivity.
from brainspace.datasets import load_group_mpc
# First load mean connectivity matrix and parcellation
fc = load_group_fc('vosdewael', scale=200)
mpc = load_group_mpc('vosdewael', scale=200)
labeling = load_parcellation('vosdewael', scale=200, join=True)
mask = labeling != 0
seeds = [None] * 2
seeds[0] = map_to_labels(fc[0], labeling, mask=mask, fill=np.nan)
seeds[1] = map_to_labels(mpc[0], labeling, mask=mask, fill=np.nan)
# visualise the features from a seed region (seed 0)
plot_hemispheres(surf_lh, surf_rh, array_name=seeds, label_text=['FC', 'MPC'],
size=(1200, 400), color_bar=True, cmap='viridis', zoom=1.45)

In order to fuse the matrices, we simply pass the matrices to the fusion command which will rescale and horizontally concatenate the matrices.
# Negative numbers are not allowed in fusion.
fc[fc < 0] = 0
def fusion(*args):
    from scipy.stats import rankdata
    from sklearn.preprocessing import minmax_scale

    max_rk = [None] * len(args)
    masks = [None] * len(args)
    for j, a in enumerate(args):
        m = masks[j] = a != 0
        a[m] = rankdata(a[m])
        max_rk[j] = a[m].max()

    max_rk = min(max_rk)
    for j, a in enumerate(args):
        m = masks[j]
        a[m] = minmax_scale(a[m], feature_range=(1, max_rk))

    return np.hstack(args)
# fuse the matrices
fused_matrix = fusion(fc, mpc)
We then use this output in the fit function. This will convert the long horizontal array into a square affinity matrix, and then perform embedding.
gm = GradientMaps(n_components=2, kernel='normalized_angle')
gm.fit(fused_matrix)
gradients_fused = [None] * 2
for i in range(2):
    gradients_fused[i] = map_to_labels(gm.gradients_[:, i], labeling, mask=mask,
                                       fill=np.nan)
plot_hemispheres(surf_lh, surf_rh, array_name=gradients_fused,
label_text=['Gradient 1', 'Gradient 2'], size=(1200, 400),
color_bar=True, cmap='viridis', zoom=1.45)

Note
The mpc matrix presented here matches the subject cohort of (Paquola et al., 2019). Other matrices in this package match the subject groups used by (Vos de Wael et al., 2018). We make direct comparisons in our tutorial for didactic purposes only.
That concludes the second tutorial. In the third tutorial we will consider null hypothesis testing of comparisons between gradients and other markers.
Tutorial 3: Null models for gradient significance¶
In this tutorial we assess the significance of correlations between the first canonical gradient and data from other modalities (curvature, cortical thickness and T1w/T2w image intensity). A normal test of the significance of the correlation cannot be used, because the spatial auto-correlation in MRI data may bias the test statistic. In this tutorial we will show three approaches for null hypothesis testing: spin permutations, Moran spectral randomization, and autocorrelation-preserving surrogates based on variogram matching.
Note
When using any of these approaches to compare gradients to non-gradient markers, we recommend randomizing the non-gradient markers, as these randomizations need not maintain the statistical independence between gradients.
Spin Permutations¶
Here, we use the spin permutations approach previously proposed in (Alexander-Bloch et al., 2018), which preserves the auto-correlation of the permuted feature(s) by rotating the feature data on the spherical domain. We will start by loading the conte69 surfaces for left and right hemispheres, their corresponding spheres, midline mask, and t1w/t2w intensity as well as cortical thickness data, and a template functional gradient.
import numpy as np
from brainspace.datasets import load_gradient, load_marker, load_conte69
# load the conte69 hemisphere surfaces and spheres
surf_lh, surf_rh = load_conte69()
sphere_lh, sphere_rh = load_conte69(as_sphere=True)
# Load the data
t1wt2w_lh, t1wt2w_rh = load_marker('t1wt2w')
t1wt2w = np.concatenate([t1wt2w_lh, t1wt2w_rh])
thickness_lh, thickness_rh = load_marker('thickness')
thickness = np.concatenate([thickness_lh, thickness_rh])
# Template functional gradient
embedding = load_gradient('fc', idx=0, join=True)
Let’s first generate some null data using spin permutations.
from brainspace.null_models import SpinPermutations
from brainspace.plotting import plot_hemispheres
# Let's create some rotations
n_rand = 1000
sp = SpinPermutations(n_rep=n_rand, random_state=0)
sp.fit(sphere_lh, points_rh=sphere_rh)
t1wt2w_rotated = np.hstack(sp.randomize(t1wt2w_lh, t1wt2w_rh))
thickness_rotated = np.hstack(sp.randomize(thickness_lh, thickness_rh))
As an illustration of the rotation, let’s plot the original t1w/t2w data
# Plot original data
plot_hemispheres(surf_lh, surf_rh, array_name=t1wt2w, size=(1200, 200), cmap='viridis',
nan_color=(0.5, 0.5, 0.5, 1), color_bar=True, zoom=1.65)

as well as a few rotated versions.
# Plot some rotations
plot_hemispheres(surf_lh, surf_rh, array_name=t1wt2w_rotated[:3], size=(1200, 600),
cmap='viridis', nan_color=(0.5, 0.5, 0.5, 1), color_bar=True,
zoom=1.55, label_text=['Rot0', 'Rot1', 'Rot2'])

Warning
With spin permutations, midline vertices (i.e., NaNs) from both the original and rotated data are discarded. Depending on the overlap of midlines in the original and rotated data, statistical comparisons may be based on different numbers of features. This can bias your test statistics. Therefore, if a large portion of the sphere is not used, we recommend using Moran spectral randomization instead.
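As a rough check of this issue (a sketch based on the arrays defined above, not part of the original tutorial), you can count how many vertices actually enter each comparison after discarding the midline in both the original and rotated maps:
# Number of vertices shared between the original and each rotated t1w/t2w map
n_valid = np.sum(~np.isnan(t1wt2w_rotated) & ~np.isnan(t1wt2w), axis=1)
print(n_valid.min(), n_valid.max())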
Now we simply compute the correlations between the first gradient and the original data, as well as all rotated data.
from matplotlib import pyplot as plt
from scipy.stats import spearmanr
fig, axs = plt.subplots(1, 2, figsize=(9, 3.5))
feats = {'t1wt2w': t1wt2w, 'thickness': thickness}
rotated = {'t1wt2w': t1wt2w_rotated, 'thickness': thickness_rotated}
r_spin = np.empty(n_rand)
mask = ~np.isnan(thickness)
for k, (fn, feat) in enumerate(feats.items()):
    r_obs, pv_obs = spearmanr(feat[mask], embedding[mask])

    # Compute perm pval
    for i, perm in enumerate(rotated[fn]):
        mask_rot = mask & ~np.isnan(perm)  # Remove midline
        r_spin[i] = spearmanr(perm[mask_rot], embedding[mask_rot])[0]
    pv_spin = np.mean(np.abs(r_spin) >= np.abs(r_obs))

    # Plot null dist
    axs[k].hist(r_spin, bins=25, density=True, alpha=0.5, color=(.8, .8, .8))
    axs[k].axvline(r_obs, lw=2, ls='--', color='k')
    axs[k].set_xlabel(f'Correlation with {fn}')
    if k == 0:
        axs[k].set_ylabel('Density')

    print(f'{fn.capitalize()}:\n Obs : {pv_obs:.5e}\n Spin: {pv_spin:.5e}\n')
fig.tight_layout()
plt.show()

Out:
T1wt2w:
Obs : 0.00000e+00
Spin: 1.00000e-03
Thickness:
Obs : 0.00000e+00
Spin: 1.37000e-01
It is interesting to see that both p-values increase when taking into consideration the auto-correlation present in the surfaces. Also, we can see that the correlation with thickness is no longer statistically significant after spin permutations.
Moran Spectral Randomization¶
Moran Spectral Randomization (MSR) computes Moran’s I, a metric for spatial auto-correlation, and generates normally distributed data with similar auto-correlation. MSR relies on a weight matrix denoting the spatial proximity of features to one another. Within neuroimaging, one straightforward example of this is inverse geodesic distance, i.e. distance along the cortical surface.
In this example we will show how to use MSR to assess the statistical significance of associations between cortical markers (here curvature and cortical t1w/t2w intensity) and the first functional connectivity gradient. We will start by loading the left temporal lobe mask and restricting the t1w/t2w intensity, curvature, and template functional gradient data to this mask.
from brainspace.datasets import load_mask
n_pts_lh = surf_lh.n_points
mask_tl, _ = load_mask(name='temporal')
# Keep only the temporal lobe.
embedding_tl = embedding[:n_pts_lh][mask_tl]
t1wt2w_tl = t1wt2w_lh[mask_tl]
curv_tl = load_marker('curvature')[0][mask_tl]
We will now compute the Moran eigenvectors. This can be done either by providing a weight matrix of spatial proximity between each vertex, or by providing a cortical surface. Here we’ll use a cortical surface.
from brainspace.null_models import MoranRandomization
from brainspace.mesh import mesh_elements as me
# compute spatial weight matrix
w = me.get_ring_distance(surf_lh, n_ring=1, mask=mask_tl)
w.data **= -1
msr = MoranRandomization(n_rep=n_rand, procedure='singleton', tol=1e-6,
random_state=0)
msr.fit(w)
Out:
MoranRandomization(n_rep=1000, random_state=0, tol=1e-06)
Using the Moran eigenvectors we can now compute the randomized data.
curv_rand = msr.randomize(curv_tl)
t1wt2w_rand = msr.randomize(t1wt2w_tl)
Now that we have the randomized data, we can compute correlations between the gradient and the real/randomized data and generate the non-parametric p-values.
fig, axs = plt.subplots(1, 2, figsize=(9, 3.5))
feats = {'t1wt2w': t1wt2w_tl, 'curvature': curv_tl}
rand = {'t1wt2w': t1wt2w_rand, 'curvature': curv_rand}
for k, (fn, data) in enumerate(rand.items()):
    r_obs, pv_obs = spearmanr(feats[fn], embedding_tl, nan_policy='omit')

    # Compute perm pval
    r_rand = np.asarray([spearmanr(embedding_tl, d)[0] for d in data])
    pv_rand = np.mean(np.abs(r_rand) >= np.abs(r_obs))

    # Plot null dist
    axs[k].hist(r_rand, bins=25, density=True, alpha=0.5, color=(.8, .8, .8))
    axs[k].axvline(r_obs, lw=2, ls='--', color='k')
    axs[k].set_xlabel(f'Correlation with {fn}')
    if k == 0:
        axs[k].set_ylabel('Density')

    print(f'{fn.capitalize()}:\n Obs : {pv_obs:.5e}\n Moran: {pv_rand:.5e}\n')
fig.tight_layout()
plt.show()

Out:
T1wt2w:
Obs : 0.00000e+00
Moran: 0.00000e+00
Curvature:
Obs : 6.63802e-05
Moran: 3.50000e-01
Variogram Matching¶
Here, we will repeat the same analysis using the variogram matching approach presented in (Burt et al., 2020), which generates novel brain maps with similar spatial autocorrelation to the input data.
We will need a distance matrix that tells us the spatial distance between our datapoints. For this example, we will use geodesic distance.
from brainspace.mesh.mesh_elements import get_immediate_distance
from scipy.sparse.csgraph import dijkstra
# Compute geodesic distance
gd = get_immediate_distance(surf_lh, mask=mask_tl)
gd = dijkstra(gd, directed=False)
idx_sorted = np.argsort(gd, axis=1)
Now we’ve got everything we need to generate our surrogate datasets. By default, BrainSpace will use all available data to generate surrogate maps. However, this process is extremely computationally and memory intensive. When using this method with more than a few hundred regions, we recommend subsampling the data. This can be done using SampledSurrogateMaps instead of SurrogateMaps.
from brainspace.null_models import SampledSurrogateMaps
n_surrogate_datasets = 1000
# Note: number samples must be greater than number neighbors
num_samples = 100
num_neighbors = 50
ssm = SampledSurrogateMaps(ns=num_samples, knn=num_neighbors, random_state=0)
ssm.fit(gd, idx_sorted)
t1wt2w_surrogates = ssm.randomize(t1wt2w_tl, n_rep=n_surrogate_datasets)
curv_surrogates = ssm.randomize(curv_tl, n_rep=n_surrogate_datasets)
Similar to the previous case, we can now plot the results:
import matplotlib.pyplot as plt
from scipy.stats import spearmanr
fig, axs = plt.subplots(1, 2, figsize=(9, 3.5))
feats = {'t1wt2w': t1wt2w_tl, 'curvature': curv_tl}
rand = {'t1wt2w': t1wt2w_surrogates, 'curvature': curv_surrogates}
for k, (fn, data) in enumerate(rand.items()):
    r_obs, pv_obs = spearmanr(feats[fn], embedding_tl, nan_policy='omit')

    # Compute perm pval
    r_rand = np.asarray([spearmanr(embedding_tl, d)[0] for d in data])
    pv_rand = np.mean(np.abs(r_rand) >= np.abs(r_obs))

    # Plot null dist
    axs[k].hist(r_rand, bins=25, density=True, alpha=0.5, color=(.8, .8, .8))
    axs[k].axvline(r_obs, lw=2, ls='--', color='k')
    axs[k].set_xlabel(f'Correlation with {fn}')
    if k == 0:
        axs[k].set_ylabel('Density')

    print(f'{fn.capitalize()}:\n Obs : {pv_obs:.5e}\n '
          f'Variogram: {pv_rand:.5e}\n')
fig.tight_layout()
plt.show()

Out:
T1wt2w:
Obs : 0.00000e+00
Variogram: 0.00000e+00
Curvature:
Obs : 6.63802e-05
Variogram: 3.26000e-01
API Reference¶
Gradient Maps¶
Gradients¶
GradientMaps ([n_components, approach, …]) |
Gradient maps. |
Embedding¶
Embedding ([n_components]) |
Base class for embedding approaches. |
DiffusionMaps ([n_components, alpha, …]) |
Diffusion maps. |
LaplacianEigenmaps ([n_components, …]) |
Laplacian eigenmaps. |
PCAMaps ([n_components, random_state]) |
Principal component analysis. |
diffusion_mapping (adj[, n_components, …]) |
Compute diffusion map of affinity matrix. |
laplacian_eigenmaps (adj[, n_components, …]) |
Compute embedding using Laplacian eigenmaps. |
Alignment¶
ProcrustesAlignment ([n_iter, tol, verbose]) |
Iterative alignment using generalized procrustes analysis. |
procrustes_alignment (data[, reference, …]) |
Iterative alignment using generalized procrustes analysis. |
procrustes (source, target[, center, scale]) |
Align source to target using procrustes analysis. |
Kernels¶
compute_affinity (x[, kernel, sparsity, …]) |
Compute affinity matrix. |
Utility functions¶
dominant_set (s, k[, is_thresh, norm, copy, …]) |
Keep the largest elements for each row. |
is_symmetric (x[, tol]) |
Check if input is symmetric. |
make_symmetric (x[, check, tol, copy, …]) |
Make array symmetric. |
Null models¶
Spin permutations¶
SpinPermutations ([unique, n_rep, …]) |
Spin permutations. |
spin_permutations (spheres, data[, unique, …]) |
Generate null data using spin permutations. |
Moran spectral randomization¶
MoranRandomization ([procedure, spectrum, …]) |
Moran spectral randomization. |
compute_mem (w[, n_ring, spectrum, tol]) |
Compute Moran eigenvectors map. |
moran_randomization (x, mem[, n_rep, …]) |
Generate random samples from x based on Moran spectral randomization. |
Surrogate maps¶
SurrogateMaps ([deltas, kernel, pv, nh, …]) |
Spatial autocorrelation-preserving surrogate brain maps. |
SampledSurrogateMaps ([ns, pv, nh, knn, b, …]) |
Spatial autocorrelation-preserving surrogate brain maps with sampling. |
Plotting functionality¶
Plotting of brain surfaces¶
plot_hemispheres (surf_lh, surf_rh[, …]) |
Plot left and right hemispheres in lateral and medial views. |
plot_surf (surfs, layout[, array_name, view, …]) |
Plot surfaces arranged according to the layout. |
build_plotter (surfs, layout[, array_name, …]) |
Build plotter arranged according to the layout. |
Generic plotting¶
Plotter ([nrow, ncol, offscreen, …]) |
|
GridPlotter ([nrow, ncol, try_qt, offscreen]) |
Datasets¶
Surfaces¶
load_conte69 ([as_sphere, with_normals, join]) |
Load conte69 surfaces. |
Cortical features¶
load_mask ([name, join]) |
Load mask for conte69. |
load_parcellation (name[, scale, join]) |
Load parcellation for conte69. |
load_marker (name[, join]) |
Load cortical data for conte69. |
load_gradient (name[, idx, join]) |
Load gradient for conte69. |
Connectivity matrices¶
load_group_fc (parcellation[, scale, group]) |
Load group level connectivity matrix for a given parcellation. |
load_group_mpc (parcellation[, scale]) |
Load group level connectivity matrix for a given parcellation. |
Mesh¶
BrainSpace provides basic functionality for working with surface meshes. This functionality is built on top of the Visualization Toolkit (VTK).
- Read/Write functionality
- Surface creation
- Elements
- Connectivity
- Operations on meshes
- Operations on mesh data
- Mesh clustering
Read/Write functionality¶
read_surface (ipth[, itype, return_data, update]) |
Read surface data. |
write_surface (ifilter, opth[, oformat, otype]) |
Write surface data. |
Surface creation¶
build_polydata (points[, cells]) |
Build surface (PolyData) from points and cells. |
to_lines (surf) |
Convert all cells in PolyData to lines. |
to_vertex (surf) |
Convert all cells in PolyData to vertex cells. |
Elements¶
get_cells (surf) |
Get surface cells. |
get_points (surf[, mask]) |
Get surface points. |
get_edges (surf[, mask]) |
Get surface edges. |
Connectivity¶
get_cell2point_connectivity (surf[, mask, dtype]) |
Get cell to point connectivity. |
get_point2cell_connectivity (surf[, mask, dtype]) |
Get point to cell connectivity. |
get_cell_neighbors (surf[, include_self, …]) |
Get cell connectivity based on shared edges. |
get_immediate_adjacency (surf[, …]) |
Get immediate adjacency matrix. |
get_ring_adjacency (surf[, n_ring, …]) |
Get adjacency in the neighborhood of each point. |
get_immediate_distance (surf[, metric, mask, …]) |
Get immediate distance matrix. |
get_ring_distance (surf[, n_ring, metric, …]) |
Get distance matrix in the neighborhood of each point. |
Operations on meshes¶
drop_cells (surf, array[, low, upp]) |
Remove surface cells whose values fall within the threshold. |
mask_cells (surf, mask) |
Mask surface cells. |
select_cells (surf, array[, low, upp]) |
Select surface cells whose values fall within the threshold. |
drop_points (surf, array[, low, upp]) |
Remove surface points whose values fall within the threshold. |
mask_points (surf, mask) |
Mask surface points. |
select_points (surf, array[, low, upp]) |
Select surface points whose values fall within the threshold. |
get_connected_components (surf[, labeling, …]) |
Get connected components. |
downsample_with_parcellation (surf, labeling) |
Downsample surface according to labeling. |
Operations on mesh data¶
compute_cell_area (surf[, append, key]) |
Compute cell area. |
compute_cell_center (surf[, append, key]) |
Compute center of cells (parametric center). |
get_n_adjacent_cells (surf[, append, key]) |
Compute number of adjacent cells for each point. |
map_celldata_to_pointdata (surf, cell_data[, …]) |
Map cell data to point data. |
map_pointdata_to_celldata (surf, point_data) |
Map point data to cell data. |
compute_point_area (surf[, cell_area, …]) |
Compute point area from its adjacent cells. |
get_labeling_border (surf, labeling[, …]) |
Get labeling borders. |
get_parcellation_centroids (surf, labeling[, …]) |
Compute parcels centroids. |
propagate_labeling (surf, labeling[, …]) |
Propagate labeling on surface points. |
Mesh clustering¶
Clustering and sampling of surface vertices.
cluster_points (surf[, n_clusters, is_size, …]) |
Clustering of surface points. |
sample_points_clustering (surf[, keep, mask, …]) |
Sample equidistant points from surface based on clustering. |
VTK interface¶
Surface mesh functionality provided in BrainSpace is built on top of the Visualization Toolkit (VTK). BrainSpace provides several wrappers for most data objects and some filters in VTK. Here we present a subset of this functionality. Please also refer to VTK wrapping document for an introduction to the wrapping interface.
Basic wrapping¶
wrap_vtk (obj, **kwargs) |
Wrap input object to BSVTKObjectWrapper or one of its subclasses. |
is_vtk (obj) |
Check if obj is a vtk object. |
is_wrapper (obj) |
Check if obj is a wrapper. |
BSVTKObjectWrapper (vtkobject, **kwargs) |
Base class for all classes that wrap VTK objects. |
Pipeline functionality¶
serial_connect (*filters[, as_data, update, port]) |
Connect filters serially. |
get_output (ftr[, as_data, update, port]) |
Get output from filter. |
to_data (ftr[, port]) |
Extract data from filter. |
connect (ftr0, ftr1[, port0, port1, add_conn]) |
Connection of two filters. |
VTK wrappers¶
Data objects¶
BSDataObject ([vtkobject]) |
Wrapper for vtkDataObject. |
BSTable ([vtkobject]) |
Wrapper for vtkTable. |
BSCompositeDataSet ([vtkobject]) |
Wrapper for vtkCompositeDataSet. |
BSDataSet ([vtkobject]) |
Wrapper for vtkDataSet. |
BSPointSet ([vtkobject]) |
Wrapper for vtkPointSet. |
BSPolyData ([vtkobject]) |
Wrapper for vtkPolyData. |
BSUnstructuredGrid ([vtkobject]) |
Wrapper for vtkUnstructuredGrid. |
Algorithms¶
BSAlgorithm ([vtkobject]) |
Wrapper for vtkAlgorithm. |
BSPolyDataAlgorithm ([vtkobject]) |
Wrapper for vtkPolyDataAlgorithm. |
BSWindowToImageFilter ([vtkobject]) |
Wrapper for vtkWindowToImageFilter. |
BSImageWriter ([vtkobject]) |
Wrapper for vtkImageWriter. |
BSBMPWriter ([vtkobject]) |
Wrapper for vtkBMPWriter. |
BSJPEGWriter ([vtkobject]) |
Wrapper for vtkJPEGWriter. |
BSPNGWriter ([vtkobject]) |
Wrapper for vtkPNGWriter. |
BSPostScriptWriter ([vtkobject]) |
Wrapper for vtkPostScriptWriter. |
BSTIFFWriter ([vtkobject]) |
Wrapper for vtkTIFFWriter. |
Mappers¶
BSDataSetMapper ([vtkobject]) |
Wrapper for vtkDataSetMapper. |
BSPolyDataMapper ([vtkobject]) |
Wrapper for vtkPolyDataMapper. |
BSLabeledContourMapper ([vtkobject]) |
Wrapper for vtkLabeledContourMapper. |
BSLabeledDataMapper ([vtkobject]) |
Wrapper for vtkLabeledDataMapper. |
BSLabelPlacementMapper ([vtkobject]) |
Wrapper for vtkLabelPlacementMapper. |
BSPolyDataMapper2D ([vtkobject]) |
Wrapper for vtkPolyDataMapper2D. |
BSTextMapper2D ([vtkobject]) |
Wrapper for vtkPolyDataMapper2D. |
Actors¶
BSActor2D ([vtkobject]) |
Wrapper for vtkActor2D. |
BSScalarBarActor ([vtkobject]) |
Wrapper for vtkScalarBarActor. |
BSTexturedActor2D ([vtkobject]) |
Wrapper for vtkTexturedActor2D. |
BSTextActor ([vtkobject]) |
Wrapper for vtkTextActor. |
BSActor ([vtkobject]) |
Wrapper for vtkActor. |
Lookup tables¶
BSScalarsToColors ([vtkobject]) |
Wrapper for vtkScalarsToColors. |
BSLookupTable ([vtkobject]) |
Wrapper for vtkLookupTable. |
BSLookupTableWithEnabling ([vtkobject]) |
Wrapper for vtkLookupTableWithEnabling. |
BSWindowLevelLookupTable ([vtkobject]) |
Wrapper for vtkWindowLevelLookupTable. |
BSColorTransferFunction ([vtkobject]) |
Wrapper for vtkColorTransferFunction. |
BSDiscretizableColorTransferFunction ([vtkobject]) |
Wrapper for vtkDiscretizableColorTransferFunction. |
Rendering¶
BSRenderer ([vtkobject]) |
Wrapper for vtkRenderer. |
BSRenderWindow ([vtkobject]) |
Wrapper for vtkRenderWindow. |
BSRenderWindowInteractor ([vtkobject]) |
Wrapper for vtkRenderWindowInteractor. |
BSGenericRenderWindowInteractor ([vtkobject]) |
Wrapper for vtkGenericRenderWindowInteractor. |
BSInteractorStyle ([vtkobject]) |
Wrapper for vtkInteractorStyle. |
BSInteractorStyleJoystickCamera ([vtkobject]) |
Wrapper for vtkInteractorStyleJoystickCamera. |
BSInteractorStyleJoystickActor ([vtkobject]) |
Wrapper for vtkInteractorStyleJoystickActor. |
BSInteractorStyleTerrain ([vtkobject]) |
Wrapper for vtkInteractorStyleTerrain. |
BSInteractorStyleRubberBandZoom ([vtkobject]) |
Wrapper for vtkInteractorStyleRubberBandZoom. |
BSInteractorStyleTrackballActor ([vtkobject]) |
Wrapper for vtkInteractorStyleTrackballActor. |
BSInteractorStyleTrackballCamera ([vtkobject]) |
Wrapper for vtkInteractorStyleTrackballCamera. |
BSInteractorStyleImage ([vtkobject]) |
Wrapper for vtkInteractorStyleImage. |
BSInteractorStyleRubberBandPick ([vtkobject]) |
Wrapper for vtkInteractorStyleRubberBandPick. |
BSInteractorStyleSwitchBase ([vtkobject]) |
Wrapper for vtkInteractorStyleSwitchBase. |
BSInteractorStyleSwitch ([vtkobject]) |
Wrapper for vtkInteractorStyleSwitch. |
BSCamera ([vtkobject]) |
Wrapper for vtkCamera. |
Properties¶
BSProperty ([vtkobject]) |
Wrapper for vtkProperty. |
BSProperty2D ([vtkobject]) |
Wrapper for vtkProperty2D. |
BSTextProperty ([vtkobject]) |
Wrapper for vtkTextProperty. |
Miscellanea¶
BSCellArray ([vtkobject]) |
Wrapper for vtkCellArray. |
BSGL2PSExporter ([vtkobject]) |
Wrapper for vtkGL2PSExporter. |
BSCollection ([vtkobject]) |
Wrapper for vtkCollection. |
BSPropCollection ([vtkobject]) |
Wrapper for vtkPropCollection. |
BSActor2DCollection ([vtkobject]) |
Wrapper for vtkActor2DCollection. |
BSActorCollection ([vtkobject]) |
Wrapper for vtkActorCollection. |
BSProp3DCollection ([vtkobject]) |
Wrapper for vtkProp3DCollection. |
BSMapperCollection ([vtkobject]) |
Wrapper for vtkMapperCollection. |
BSRendererCollection ([vtkobject]) |
Wrapper for vtkRendererCollection. |
BSPolyDataCollection ([vtkobject]) |
Wrapper for vtkPolyDataCollection. |
BSTextPropertyCollection ([vtkobject]) |
Wrapper for vtkTextPropertyCollection. |
BSCoordinate ([vtkobject]) |
Wrapper for vtkCoordinate. |
Decorators¶
wrap_input (*xargs[, skip]) |
Decorator to wrap the arguments of a function. |
wrap_output (func) |
Decorator to wrap output of function. |
unwrap_input (*xargs[, vtype, skip]) |
Decorator to unwrap input arguments of function. |
unwrap_output ([vtype]) |
Decorator to wrap both arguments and output of a function. |
append_vtk ([to]) |
Decorator to append data to surface. |
Utils¶
Parcellation¶
relabel (lab[, new_labels]) |
Relabel array. |
relabel_consecutive (lab[, start_from]) |
Relabel array with consecutive values. |
relabel_by_overlap (lab, ref_lab) |
Relabel according to overlap with reference. |
map_to_mask (values, mask[, fill, axis]) |
Assign data to mask. |
map_to_labels (source_val, target_lab[, …]) |
Map data in source to target according to their labels. |
reduce_by_labels (values, labels[, weights, …]) |
Summarize data in values according to labels. |
VTK wrapping¶
Surface mesh functionality provided in BrainSpace is built on top of the Visualization Toolkit (VTK). This document introduces the wrapping interface used by BrainSpace to work with VTK objects.
Wrapping interface¶
BSVTKObjectWrapper is the base class for all wrappers implemented in BrainSpace. Wrapping a VTK object is done with the wrap_vtk function. When wrapping a VTK object, if the corresponding wrapper does not exist, it falls back to BSVTKObjectWrapper. Check the API for the complete list of wrappers implemented in the current version of BrainSpace.
The BSVTKObjectWrapper is a wrapper that extends the Python object wrapper in VTK, VTKObjectWrapper, to provide easier access to VTK setter and getter class methods. Since it is a subclass of VTKObjectWrapper, the wrapper holds a reference to the VTK object in the BSVTKObjectWrapper.VTKObject attribute, and further includes the following functionality:
setVTK and getVTK invoke several setter/getter methods on the VTK object:
>>> import vtk
>>> # Let's create a sphere with VTK
>>> s = vtk.vtkSphere()
>>> s
(vtkCommonDataModelPython.vtkSphere)0x7f610d222f48
>>> # And check the default values
>>> s.GetRadius()
0.5
>>> s.GetCenter()
(0.0, 0.0, 0.0)
>>> # We are going to wrap the sphere
>>> from brainspace.vtk_interface import wrap_vtk
>>> ws = wrap_vtk(s)
>>> # ws is an instance of BSVTKObjectWrapper
>>> ws
<brainspace.vtk_interface.base.BSVTKObjectWrapper at 0x7f60cd7d6f60>
>>> # and holds a reference to the VTK sphere
>>> ws.VTKObject
(vtkCommonDataModelPython.vtkSphere)0x7f610d222f48
>>> # Now we can invoke getter methods as follows:
>>> ws.getVTK('radius', center=None)
{'radius': 0.5, 'center': (0.0, 0.0, 0.0)}
>>> # and set different values
>>> ws.setVTK(radius=2, center=(1.5, 1.5, 1.5))
<brainspace.vtk_interface.base.BSVTKObjectWrapper at 0x7f60cd7d6f60>
>>> # To check that everything works as expected
>>> s.GetRadius()
2.0
>>> s.GetCenter()
(1.5, 1.5, 1.5)
>>> # these methods can be invoked on the wrapper, which forwards
>>> # them to the VTK object
>>> ws.GetRadius()
2.0
VTK setters and getters can also be accessed as attributes, by overloading __setattr__ and __getattr__:
>>> # we can access the radius and center
>>> ws.radius
2.0
>>> ws.center
(1.5, 1.5, 1.5)
>>> # and even set new values
>>> ws.radius = 8
>>> ws.center = (10, 10, 10)
>>> # check that everything's ok
>>> s.GetRadius()
8.0
>>> s.GetCenter()
(10.0, 10.0, 10.0)
This functionality is available for all methods that start with Get/Set. To see all the methods available, BSVTKObjectWrapper holds a vtk_map dictionary for each wrapped class:
>>> ws.vtk_map.keys()
dict_keys(['set', 'get'])
>>> # for getter methods
>>> ws.vtk_map['get']
{'addressasstring': 'GetAddressAsString',
'center': 'GetCenter',
'classname': 'GetClassName',
'command': 'GetCommand',
'debug': 'GetDebug',
'globalwarningdisplay': 'GetGlobalWarningDisplay',
'mtime': 'GetMTime',
'radius': 'GetRadius',
'referencecount': 'GetReferenceCount',
'transform': 'GetTransform'}
Note that this approach is case-insensitive. However, we recommend using camel case, at least for methods with more than one word:
>>> # To access the reference count, for example, these are the same
>>> ws.referencecount
1
>>> ws.ReferenceCount
1
wrap_vtk
provides a nice way to wrap and simultaneously set the values for a VTK object:
>>> ws2 = wrap_vtk(vtk.vtkSphere(), radius=10, center=(5, 5, 0))
>>> ws2.getVTK('radius', 'center')
{'radius': 10.0, 'center': (5.0, 5.0, 0.0)}
>>> ws2.VTKObject.GetRadius()
10.0
>>> ws2.VTKObject.GetCenter()
(5.0, 5.0, 0.0)
In VTK, among setter methods, we have state methods with the form SetSomethingToValue. Using the previous functionality, these methods can be called as follows:
>>> # Let's create a mapper
>>> m = vtk.vtkPolyDataMapper()
>>> # This class has several state methods to set the color mode
>>> [m for m in dir(m) if m.startswith('SetColorModeTo')]
['SetColorModeToDefault',
'SetColorModeToDirectScalars',
'SetColorModeToMapScalars']
>>> # The default value is
>>> m.GetColorModeAsString()
'Default'
>>> # Now we are going to wrap the VTK object
>>> wm = wrap_vtk(m)
>>> wm
<brainspace.vtk_interface.wrappers.BSPolyDataMapper at 0x7f60ada07828>
>>> # and change the default value as we know so far
>>> # Note that we use None because the method accepts no arguments
>>> wm.colorModeToMapScalars = None # same as wm.SetColorModeToMapScalars()
>>> wm.GetColorModeAsString()
'MapScalars'
>>> # state methods can also be used as follows
>>> wm.colorMode = 'DirectScalars' # which is the default
>>> wm.GetColorModeAsString()
'Default'
>>> # This can be used when wrapping
>>> m2 = wrap_vtk(vtk.vtkPolyDataMapper(), colorMode='mapScalars', scalarVisibility=False)
>>> m2.getVTK('colorModeAsString', 'scalarVisibility')
{'colorModeAsString': 'MapScalars', 'scalarVisibility': 0}
In the example above, the wrapper class of our mapper is no longer BSVTKObjectWrapper but BSPolyDataMapper. This is because wrap_vtk looks for a convenient wrapper by searching the hierarchy of wrappers in a bottom-up fashion, and BSPolyDataMapper is a wrapper that is already implemented in the current version of BrainSpace. We can also see this with a VTK actor:
>>> wa = wrap_vtk(vtk.vtkActor())
>>> wa
<brainspace.vtk_interface.wrappers.BSActor at 0x7f60cd749e80>
>>> # When a wrapper exists, the VTK object can be created directly
>>> from brainspace.vtk_interface.wrappers import BSActor
>>> wa2 = BSActor()
<brainspace.vtk_interface.wrappers.BSActor at 0x7f60cce8fac8>
>>> # and can be created with arguments
>>> # for example, setting the previous mapper
>>> wa3 = BSActor(mapper=wm)
>>> wa3.mapper.VTKObject is wm.VTKObject
True
>>> wa3.mapper.VTKObject is m
True
BSActor
is a special wrapper, because calls to setter and getter methods can
be forwarded also to its property (i.e., GetProperty()). Methods are first
forwarded to the VTK object of the actor, but if they do not exist, they are
then forwarded to the property. As of the current version, this is only implemented
for BSActor
:
>>> # To see the opacity using the VTK object
>>> wa3.VTKObject.GetProperty().GetOpacity()
1.0
>>> # note that wa3.VTKObject.GetOpacity() raises an exception
>>> # Now, using the wrapper
>>> wa3.GetOpacity()
1.0
>>> # or
>>> wa3.opacity
1.0
>>> # and we can set the opacity
>>> wa3.opacity = 0.25
>>> wa3.VTKObject.GetProperty().GetOpacity()
0.25
The advantage of this approach over existing packages that build on top of VTK is that we do not need to learn a whole new API. If the user is familiar with VTK, then using this approach is straightforward: we can invoke the setter and getter methods by simply stripping the Get/Set prefixes.
Pipeline liaisons¶
VTK workflow is based on connecting (a source to) several filters (and to a sink). This often makes the code very cumbersome. Let’s see a dummy example:
>>> # Generate point cloud
>>> point_source = vtk.vtkPointSource()
>>> point_source.SetNumberOfPoints(25)
>>> # Build convex hull from point cloud
>>> delauny = vtk.vtkDelaunay2D()
>>> delauny.SetInputConnection(point_source.GetOutputPort())
>>> delauny.SetTolerance(0.01)
>>> # Smooth convex hull
>>> smooth_filter = vtk.vtkWindowedSincPolyDataFilter()
>>> smooth_filter.SetInputConnection(delauny.GetOutputPort())
>>> smooth_filter.SetNumberOfIterations(20)
>>> smooth_filter.FeatureEdgeSmoothingOn()
>>> smooth_filter.NonManifoldSmoothingOn()
>>> # Compute normals
>>> normals_filter = vtk.vtkPolyDataNormals()
>>> normals_filter.SetInputConnection(smooth_filter.GetOutputPort())
>>> normals_filter.SplittingOff()
>>> normals_filter.ConsistencyOn()
>>> normals_filter.AutoOrientNormalsOn()
>>> normals_filter.ComputePointNormalsOn()
>>> # Execute pipeline
>>> normals_filter.Update()
>>> # Get the output
>>> output1 = normals_filter.GetOutput()
>>> output1
(vtkCommonDataModelPython.vtkPolyData)0x7f60cceabb28
>>> output1.GetNumberOfPoints()
25
For these scenarios, BrainSpace provides the serial_connect function
to serially connect several filters and skip the boilerplate of connecting
filters. The previous example can be rewritten as follows:
>>> from brainspace.vtk_interface.pipeline import serial_connect
>>> # Generate point cloud
>>> point_source = wrap_vtk(vtk.vtkPointSource, numberOfPoints=25)
>>> # Build convex hull from point cloud
>>> delauny = wrap_vtk(vtk.vtkDelaunay2D, tolerance=0.01)
>>> # Smooth convex hull
>>> smooth_filter = wrap_vtk(vtk.vtkWindowedSincPolyDataFilter,
... numberOfIterations=20, featureEdgeSmoothing=True,
... nonManifoldSmoothing=True)
>>> # Compute normals
>>> normals_filter = wrap_vtk(vtk.vtkPolyDataNormals, splitting=False,
... consistency=True, autoOrientNormals=True,
... computePointNormals=True)
>>> # Execute and get the output
>>> output2 = serial_connect(point_source, delauny, smooth_filter,
... normals_filter, as_data=True)
>>> output2
<brainspace.vtk_interface.wrappers.BSPolyData at 0x7f60a3f5fa20>
>>> output2.GetNumberOfPoints()
25
First, note that we can simply provide the VTK class instead of the object
to wrap_vtk
. Furthermore, the output object of the previous pipeline
is a polydata. This brings us to one of the most important wrappers in
BrainSpace, BSPolyData
, a wrapper for VTK polydata objects.
Data object wrappers¶
BrainSpace is intended primarily to work with triangular surface meshes of the brain.
Hence, the importance of BSPolyData
. VTK already provides very good
wrappers for VTK data objects in the dataset_adapter
module. In BrainSpace, these wrappers are simply extended to incorporate the
aforementioned functionality of the BSVTKObjectWrapper
and some
additional features:
>>> # Using the output from previous example
>>> output2.numberOfPoints
25
>>> output2.n_points
25
>>> # Check if polydata has only triangles
>>> output2.has_only_triangle
True
>>> # Since a polydata can hold vertices, lines, triangles, and their
>>> # poly versions, polygons are returned as a 1D array
>>> output2.Polygons.shape
(144,)
>>> # we further provide an additional method to recover the polygons:
>>> output2.GetCells2D().shape
(36, 3)
>>> # this method raises an exception if the polydata holds different
>>> # cell or polygon types, this can be checked with
>>> output2.has_unique_cell_type
True
>>> # to get the cell types
>>> output2.cell_types
array([5])
>>> vtk.VTK_TRIANGLE == 5
True
>>> # To get the point data
>>> output2.PointData.keys()
['Normals']
>>> output2.point_keys
['Normals']
>>> output2.PointData['Normals'].shape
(25, 3)
>>> # or
>>> output2.get_array(name='Normals', at='point').shape
(25, 3)
>>> # we do not have to specify the attribute type
>>> output2.get_array(name='Normals').shape
(25, 3)
>>> # raises exception if name is in more than one attribute (e.g., point
>>> # and cell data)
>>> dummy_cell_normals = np.zeros((output2.n_cells, 3))
>>> output2.append_array(dummy_cell_normals, name='Normals', at='cell')
>>> output2.get_array(name='Normals') # Raise exception!
Most properties and methods of BSPolyData
are inherited
from BSDataSet
. Check out their documentation for more information.
Plotting¶
BrainSpace offers two high-level plotting functions: plot_surf
and
plot_hemispheres
. These functions are based on the wrappers of the corresponding
VTK objects. We have already seen above the BSPolyDataMapper
and
BSActor
class wrappers. Here we will show how rendering is performed
using these wrappers. The base class for all plotters is BasePlotter
, with
subclasses Plotter
and GridPlotter
.
These classes forward unknown set/get requests to BSRenderWindow
, which
is a wrapper of vtkRenderWindow
. Let’s see a simple example:
>>> ipth = 'path_to_surface'
>>> from brainspace.mesh import mesh_io as mio
>>> from brainspace.plotting.base import Plotter
>>> # from brainspace.vtk_interface.wrappers import BSLookupTable
>>> surf = mio.read_surface(ipth, return_data=True)
>>> yeo7_colors = np.array([[0, 0, 0, 255],
... [0, 118, 14, 255],
... [230, 148, 34, 255],
... [205, 62, 78, 255],
... [120, 18, 134, 255],
... [220, 248, 164, 255],
... [70, 130, 180, 255],
... [196, 58, 250, 255]], dtype=np.uint8)
>>> # create a plotter with 2 rows and 2 columns
>>> # other arguments are forwarded to BSRenderWindow (i.e., vtkRenderWindow)
>>> p = Plotter(n_rows=2, n_cols=2, try_qt=False, size=(400, 400))
>>> # Add first renderer, span all columns of first row
>>> ren1 = p.AddRenderer(row=0, col=None, background=(1,1,1))
>>> # Add actor (actor created if not provided)
>>> ac1 = ren1.AddActor()
>>> # Set mapper (mapper created if not provided)
>>> m1 = ac1.SetMapper(inputDataObject=surf, colorMode='mapScalars',
... scalarMode='usePointFieldData',
... interpolateScalarsBeforeMapping=False,
... arrayName='yeo7', useLookupTableScalarRange=True)
>>> # Set mapper lookup table (created if not provided)
>>> lut1 = m1.SetLookupTable(numberOfTableValues=8, Range=(0, 7),
... table=yeo7_colors)
>>> # Add second renderer in first column of second row
>>> ren2 = p.AddRenderer(row=1, col=0, background=(1,1,1))
>>> ac2 = ren2.AddActor(opacity=1, edgeVisibility=0)
>>> m2 = ac2.SetMapper(inputData=surf)
>>> # plot in notebook
>>> p.show(interactive=False, embed_nb=True)
Note that AddRenderer
creates the renderer when one is not provided as an argument. The same holds for
the AddActor
, SetMapper
and SetLookupTable
methods of the BSRenderer
, BSActor
and BSPolyDataMapper
wrapper classes, respectively.
The only difference between Plotter
and GridPlotter
is that
in the latter each renderer is restricted to a single grid entry and cannot span
multiple entries.
MATLAB Package¶
This page contains links to descriptions of all MATLAB code available in this toolbox as well as the tutorials. The code is divided into five groups: the “main object”, which performs the computations; “surface handling” functions for reading, writing, and parcellating surfaces; “visualization” functions, which plot data; “data loaders”, which load sample data for our tutorials; and “support functions”, which are requisites for the other functions but are not intended for direct use by the user.
Tutorials¶
Tutorial 1: Building your first gradient¶
In this example, we will derive a gradient and do some basic inspections to determine which gradients may be of interest and what the multidimensional organization of the gradients looks like.
We’ll first start by loading some sample data. Note that we’re using parcellated data for computational efficiency.
% First load mean connectivity matrix and Schaefer parcellation
conn_matrix = load_group_fc('schaefer',400);
labeling = load_parcellation('schaefer',400);
% The loader functions output data in a struct array for when you
% load multiple parcellations. Let's just bring them to numeric arrays.
conn_matrix = conn_matrix.schaefer_400;
labeling = labeling.schaefer_400;
% and load the conte69 hemisphere surfaces
[surf_lh, surf_rh] = load_conte69();
Let’s first look at the parcellation scheme we’re using.
h = plot_hemispheres(labeling, {surf_lh,surf_rh});
colormap(h.handles.figure,lines(401))

and let’s construct our gradients.
% Construct the gradients
gm = GradientMaps();
gm = gm.fit(conn_matrix);
Note that the default parameters (normalized angle kernel, diffusion embedding approach, 10 components) will be reported in the console. Once you have your gradients a good first step is to simply inspect what they look like. Let’s have a look at the first two gradients.
plot_hemispheres(gm.gradients{1}(:,1:2),{surf_lh,surf_rh}, ...
'parcellation', labeling, ...
'labeltext',{'Gradient 1','Gradient 2'});

But which gradients should you keep for your analysis? In some cases you may have an a priori interest in a previously defined set of gradients. When you do not have a pre-defined set, you can instead look at the lambdas (eigenvalues) of each component in a scree plot. Higher eigenvalues (or lower ones in Laplacian eigenmaps) are more important, so one can choose a cut-off based on the scree plot.
scree_plot(gm.lambda{1});

A reasonable choice based on this scree plot would be to focus on the first two gradients as they provide a good trade-off between keeping few components whilst retaining a large amount of variance.
Inspecting gradients together can be quite informative. BrainSpace provides tools for plotting a set of gradients in 2D or 3D space, and assigning them colors based on their position. This color can then be propagated to the surface to get an idea of the multidimensional interaction between the gradients. You do this as follows:
gradient_in_euclidean(gm.gradients{1}(:,1:2));

We can see that the values of each region are relatively clustered along three lines, colored here in red, green, and blue. If we want to put these colors on the cortical surface, we simply provide the same function with the surface (and parcellation if using parcellated data).
gradient_in_euclidean(gm.gradients{1}(:,1:2),{surf_lh,surf_rh},labeling);

It now becomes quite evident that the three lines we see in the scatter plot correspond to the somatomotor (red), default mode (green) and visual (blue) networks.
This concludes the first tutorial. In the next tutorial we will have a look at how to customize the methods of gradient estimation, as well as gradient alignments.
Tutorial 2: Customizing and aligning gradients¶
In this tutorial you’ll learn about the methods available within the
GradientMaps
class. The flexible usage of this class allows for the
customization of gradient computation with different kernels and dimensionality
reductions, as well as aligning gradients from different datasets. This tutorial
will only show you how to apply these techniques, for in-depth descriptions we
recommend you read the main manuscript.
Customizing Gradient Computation¶
As before, we’ll start by loading the sample data.
% First load mean connectivity matrix and Schaefer parcellation
conn_matrix = load_group_fc('schaefer',400);
labeling = load_parcellation('schaefer',400);
% The loader functions output data in a struct array for when you
% load multiple parcellations. Let's just bring them to numeric arrays.
conn_matrix = conn_matrix.schaefer_400;
labeling = labeling.schaefer_400;
% and load the conte69 hemisphere surfaces
[surf_lh, surf_rh] = load_conte69();
The GradientMaps
object allows for many different kernels and dimensionality
reduction techniques (for a full list see GradientMaps). Let’s have a look
at three different kernels.
kernels = {'pearson', ...
'spearman', ...
'normalizedAngle'}; % Case-insensitive
for ii = 1:numel(kernels)
gm_k{ii} = GradientMaps('kernel',kernels{ii},'approach','dm');
gm_k{ii} = gm_k{ii}.fit(conn_matrix);
end
label_text = {'Pearson','Spearman',{'Normalized','Angle'}};
plot_hemispheres([gm_k{1}.gradients{1}(:,1), ...
gm_k{2}.gradients{1}(:,1), ...
gm_k{3}.gradients{1}(:,1)], ...
{surf_lh,surf_rh}, ...
'parcellation', labeling, ...
'labeltext',label_text);

The gradients provided by these kernels are quite similar, although their scaling differs. Do note that gradients are in arbitrary units, so smaller or larger axes across kernels do not imply anything by themselves. Similar to using different kernels, we can also use different dimensionality reduction techniques.
% Compute gradients for every approach.
embeddings = {'principalComponentAnalysis', ...
'laplacianEigenmap', ...
'diffusionEmbedding'}; % Case-insensitive
for ii = 1:numel(embeddings)
gm_m{ii} = GradientMaps('kernel','na','approach',embeddings{ii});
gm_m{ii} = gm_m{ii}.fit(conn_matrix);
end
% Plot all to one figure.
label_text = {{'Principal','Component Analysis'}, ...
{'Laplacian','Eigenmap'}, {'Diffusion','Map Embedding'}};
plot_hemispheres([gm_m{1}.gradients{1}(:,1), ...
gm_m{2}.gradients{1}(:,1), ...
gm_m{3}.gradients{1}(:,1)], ...
{surf_lh,surf_rh}, ...
'parcellation', labeling, ...
'labeltext', label_text);

Here we do see some substantial differences: PCA appears to find a slightly different axis, with the somatomotor network in the middle between the default mode and visual networks, whereas LE and DM both find the canonical first gradient but their signs are flipped! Fortunately, the sign of gradients is arbitrary, so we could simply multiply either the LE or the DM gradient by -1 to make them more comparable.
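For instance, a minimal sketch of such a sign flip, reusing the gm_m cell array from the code above (its second entry holds the Laplacian eigenmap solution and its third the diffusion map solution):
% Flip the sign of the first LE gradient so it matches the DM gradient.
% The sign of gradients is arbitrary, so this changes nothing substantive.
le_g1_flipped = -gm_m{2}.gradients{1}(:,1);
plot_hemispheres([le_g1_flipped, gm_m{3}.gradients{1}(:,1)], ...
    {surf_lh,surf_rh}, 'parcellation', labeling, ...
    'labeltext', {{'Flipped','Laplacian Eigenmap'}, {'Diffusion','Map Embedding'}});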
Gradient Alignment¶
A more principled way of increasing comparability across gradients is to use alignment techniques. BrainSpace provides two alignment techniques: Procrustes analysis and joint alignment. For this example we will load functional connectivity data of a second subject group and align it with the first group using a normalized angle kernel and Laplacian eigenmap approach.
conn_matrix2 = load_group_fc('schaefer',400,'holdout');
conn_matrix2 = conn_matrix2.schaefer_400;
Gp = GradientMaps('kernel','na','approach','le','alignment','pa');
Gj = GradientMaps('kernel','na','approach','le','alignment','ja');
Gp = Gp.fit({conn_matrix2,conn_matrix});
Gj = Gj.fit({conn_matrix2,conn_matrix});
Here, Gp
contains the Procrustes aligned data and Gj
contains the joint
aligned data. Let’s plot them, but in separate figures to keep things organized.
Note that the gradients in the first element of the gradients/aligned property
correspond to the first data matrix provided in fit() and the gradients in the
second element correspond to the second data matrix.
plot_hemispheres([Gp.gradients{1}(:,1),Gp.gradients{2}(:,1)], ...
{surf_lh,surf_rh}, 'parcellation', labeling, ...
'labeltext',{'Unaligned Group 1','Unaligned Group 2'});

plot_hemispheres([Gp.aligned{1}(:,1),Gp.aligned{2}(:,1)], ...
{surf_lh,surf_rh},'parcellation',labeling, ...
'labeltext',{'Procrustes Group 1','Procrustes Group 2'});

plot_hemispheres([Gj.aligned{1}(:,1),Gj.aligned{2}(:,1)], ...
{surf_lh,surf_rh},'parcellation',labeling, ...
'labeltext',{'Joint Group 1','Joint Group 2'});

Before gradient alignment, the first gradient is reversed, but both alignments resolve this issue. If the input data were less similar, alignments may also resolve changes in the order of the gradients. However, you should always inspect the output of an alignment; if the input data are sufficiently dissimilar then the alignment may produce odd results.
In some instances, you may want to align gradients to an out-of-sample gradient, for example when aligning individuals to a hold-out group gradient. When performing a Procrustes alignment, a ‘reference’ can be specified. The first alignment iteration will then be to the reference. For the purposes of this example, we will use the gradient of the hold-out group as the reference.
Gref = GradientMaps('kernel','na','approach','le');
Gref = Gref.fit(conn_matrix2);
Galign = GradientMaps('kernel','na','approach','le','alignment','pa');
Galign = Galign.fit(conn_matrix,'reference',Gref.gradients{1});
The gradients in Galign.aligned
are now aligned to the reference gradients.
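As a quick optional sanity check (a sketch, not part of the original tutorial), one can correlate an aligned gradient with its reference counterpart; after Procrustes alignment to the reference this correlation should be strongly positive.
% Correlate the first aligned gradient with the first reference gradient.
r_check = corr(Galign.aligned{1}(:,1), Gref.gradients{1}(:,1));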
Gradient Fusion¶
We can also fuse data across multiple modalities and build multimodal gradients, as was done in (Paquola et al., 2019). In this case we only look at one set of output gradients, rather than one per modality.
First, let’s load the example data of microstructural profile covariance and functional connectivity.
% First load mean connectivity matrix and parcellation
mpc = load_group_mpc('vosdewael',200);
fc = load_group_fc('vosdewael',200);
labeling = load_parcellation('vosdewael',200);
% The loader functions output data in a struct array for when you
% load multiple parcellations. Let's just bring them to numeric arrays.
mpc = mpc.vosdewael_200;
fc = fc.vosdewael_200;
labeling = labeling.vosdewael_200;
% and load the conte69 hemisphere surfaces
[surf_lh, surf_rh] = load_conte69();
% visualise the features from a seed region
h = plot_hemispheres([fc(:,1),mpc(:,1)], ...
{surf_lh,surf_rh}, ...
'parcellation', labeling, ...
'labeltext',{'FC','MPC'});

In order to fuse the matrices, we simply pass them to the fusion function, which rank-orders, rescales, and horizontally concatenates them. This fusion function is not included with the BrainSpace package, so it is provided below.
function fused_matrix = fusion(M)
% FUSION Fuses matrices from multiple modalities for deep embedding.
%
%   fused_matrix = fusion(M) rank orders the input matrices and scales
%   them to the maximum of the sparsest matrix. M must be a cell array
%   of matrices with the same dimensionality.

% Check the input for nans, infs, and negatives.
if any(cellfun(@(x) any(x(:)<0) || any(isnan(x(:))) || any(isinf(x(:))), M))
    error('Negative numbers, nans, and infs are not allowed in the input matrices.');
end

% Check matrix sizes.
sz = cellfun(@size, M, 'Uniform', false);
if any(cellfun(@numel, sz) > 2)
    error('Input matrices may not have more than two dimensions.')
end
if ~all(cellfun(@(x) all(x == sz{1}), sz))
    error('Input matrices must have equal dimensions.')
end

% Reshape data to be a vector per input matrix.
vectorized = cellfun(@(x) x(:), M, 'Uniform', false);
data_matrix = cat(2, vectorized{:});
data_matrix(data_matrix == 0) = nan;

% Get rank order and scale to 1 through the maximum of the smallest rank.
ranking = tiedrank(data_matrix);
max_val = min(max(ranking));
rank_scaled = rescale(ranking, 1, max_val, 'InputMin', min(ranking), 'InputMax', max(ranking));

% Reshape to output format and remove nans.
fused_matrix = reshape(rank_scaled, size(M{1},1), []);
fused_matrix(isnan(fused_matrix)) = 0;
end
With this function in your path, you can now run the following.
% Negative numbers are not allowed in fusion.
fc(fc<0) = 0;
% fuse the matrices
fused_matrix = fusion({fc,mpc});
We then use this output in the fit function. This will convert the long horizontal array into a square affinity matrix, and then perform embedding.
% resolve the gradients
gm = GradientMaps('kernel','na');
gm = gm.fit(fused_matrix, 'Sparsity', 0);
h = plot_hemispheres(gm.gradients{1}(:,1:2), ...
{surf_lh,surf_rh}, ...
'parcellation', labeling, ...
'labeltext',{'eigenvector1','eigenvector2'});

Note
The mpc matrix presented here matches the subject cohort of (Paquola et al., 2019). Other matrices in this package match the subject groups used by (Vos de Wael et al., 2018). We make direct comparisons in our tutorial for didactic purposes only.
That concludes the second tutorial. In the third tutorial we will consider null hypothesis testing of comparisons between gradients and other markers.
Tutorial 3: Null models for gradient significance¶
In this tutorial we assess the significance of correlations between the first canonical gradient and data from other modalities (curvature, cortical thickness and T1w/T2w image intensity). A normal test of the significance of the correlation cannot be used, because the spatial auto-correlation in MRI data may bias the test statistic. In this tutorial we will show two approaches for null hypothesis testing: spin permutations and Moran spectral randomization.
Note
When using these approaches to compare gradients to non-gradient markers, we recommend randomizing the non-gradient markers as these randomizations need not maintain the statistical independence between gradients.
Spin Permutations¶
Here, we use the spin permutations approach previously proposed in (Alexander-Bloch et al., 2018), which preserves the auto-correlation of the permuted feature(s) by rotating the feature data on the spherical domain. We will start by loading the conte69 surfaces for left and right hemispheres, their corresponding spheres, midline mask, and t1w/t2w intensity as well as cortical thickness data, and a template functional gradient.
% load the conte69 hemisphere surfaces and spheres
[surf_lh, surf_rh] = load_conte69;
[sphere_lh, sphere_rh] = load_conte69('spheres');
% Load the data
[t1wt2w_lh,t1wt2w_rh] = load_marker('t1wt2w');
[thickness_lh,thickness_rh] = load_marker('thickness');
% Template functional gradient
embedding = load_gradient('fc',1);
Let’s first generate some null data using spin permutations.
% Let's create some rotations
n_permutations = 1000;
y_rand = spin_permutations({[t1wt2w_lh,thickness_lh],[t1wt2w_rh,thickness_rh]}, ...
{sphere_lh,sphere_rh}, ...
n_permutations,'random_state',0);
% Merge the rotated data into single vectors
t1wt2w_rotated = squeeze([y_rand{1}(:,1,:); y_rand{2}(:,1,:)]);
thickness_rotated = squeeze([y_rand{1}(:,2,:); y_rand{2}(:,2,:)]);
As an illustration of the rotation, let’s plot the original t1w/t2w data
% Plot original data
h1 = plot_hemispheres([t1wt2w_lh;t1wt2w_rh],{surf_lh,surf_rh});

as well as a few rotated versions.
% Plot a few of the rotations
h2 = plot_hemispheres(t1wt2w_rotated(:,1:3),{surf_lh,surf_rh});

Warning
Depending on the overlap of midlines (i.e. NaNs) in the original data and in the rotation, statistical comparisons between them may compare different numbers of features. This can bias your test statistics. Therefore, if a large portion of the sphere is not used, we recommend using Moran spectral randomization instead.
Now we simply compute the correlations between the first gradient and the original data, as well as all rotated data.
% Find correlation between FC-G1 with thickness and T1w/T2w
[r_original_thick, pval_thick_spin] = corr(embedding,[thickness_lh;thickness_rh], ...
'rows','pairwise','type','spearman');
% pval_thick_spin = 0
[r_original_t1wt2w, pval_t1wt2w_spin] = corr(embedding,[t1wt2w_lh;t1wt2w_rh], ...
'rows','pairwise','type','spearman');
% pval_t1wt2w_spin = 0
r_rand_thick = corr(embedding,thickness_rotated, ...
'rows','pairwise','type','spearman');
r_rand_t1wt2w = corr(embedding,t1wt2w_rotated, ...
'rows','pairwise','type','spearman');
To find a p-value, we simply compute the percentile rank of the true correlation in the distribution of random correlations. Assuming a threshold of p<0.05 for statistical significance and disregarding multiple comparison corrections, we consider the correlation to be significant if it is lower or higher than the 2.5th/97.5th percentile, respectively.
% Compute percentile rank.
prctile_rank_thick = mean(r_original_thick > r_rand_thick);
% prctile_rank_thick = 0.9410
prctile_rank_t1wt2w = mean(r_original_t1wt2w > r_rand_t1wt2w);
% prctile_rank_t1wt2w = 0
significant_thick = prctile_rank_thick < 0.025 || prctile_rank_thick >= 0.975;
significant_t1wt2w = prctile_rank_t1wt2w < 0.025 || prctile_rank_t1wt2w >= 0.975;
If significant is true, then we have found a statistically significant correlation. Alternatively, one could also test the one-tailed hypothesis whether the percentile rank is lower or higher than the 5th/95th percentile, respectively.
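A minimal sketch of such a one-tailed test, reusing the percentile ranks computed above (here testing for a positive association only; flip the comparison for a negative one):
% One-tailed alternative: only percentile ranks at or above the 95th
% percentile are considered significant.
significant_thick_onetailed = prctile_rank_thick >= 0.95;
significant_t1wt2w_onetailed = prctile_rank_t1wt2w >= 0.95;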
Moran Spectral Randomization¶
Moran Spectral Randomization (MSR) computes Moran’s I, a metric for spatial auto-correlation, and generates normally distributed data with similar auto-correlation. MSR relies on a weight matrix denoting the spatial proximity of features to one another. Within neuroimaging, one straightforward example of such a weight is inverse geodesic distance, i.e. the inverse of the distance along the cortical surface.
In this example we will show how to use MSR to assess the statistical significance of correlations between cortical markers (here curvature and t1w/t2w intensity) and the first functional connectivity gradient. We will start by loading the conte69 surfaces for the left and right hemispheres, a left temporal lobe mask, t1w/t2w intensity and curvature data, and a template functional gradient.
% load the conte69 hemisphere surfaces and spheres
[surf_lh, surf_rh] = load_conte69();
% Load the data
t1wt2w_lh = load_marker('t1wt2w');
curv_lh = load_marker('curvature');
% Load mask
temporal_mask_tmp = load_mask('temporal');
% There's a one vertex overlap between the HCP midline mask (i.e. nans) and
% our temporal mask.
temporal_mask_lh = temporal_mask_tmp & ~isnan(t1wt2w_lh);
% Load the embedding
embedding = load_gradient('fc',1);
embedding_lh = embedding(1:end/2);
% Keep only the temporal lobe.
embedding_tl = embedding_lh(temporal_mask_lh);
t1wt2w_tl = t1wt2w_lh(temporal_mask_lh);
curv_tl = curv_lh(temporal_mask_lh);
We will now compute the Moran eigenvectors. This can be done either by providing a weight matrix of spatial proximity between each vertex, or by providing a cortical surface (see also: compute_mem). Here we’ll use a cortical surface.
n_ring = 1;
MEM = compute_mem(surf_lh,'n_ring',n_ring,'mask',~temporal_mask_lh);
Using the Moran eigenvectors we can now compute the randomized data. As the computationally intensive portion of MSR is mostly in compute_mem, we can push the number of permutations a bit further.
n_rand = 10000;
y_rand = moran_randomization([curv_tl,t1wt2w_tl],MEM,n_rand, ...
'procedure','singleton','joint',true,'random_state',0);
curv_rand = squeeze(y_rand(:,1,:));
t1wt2w_rand = squeeze(y_rand(:,2,:));
Now that we have the randomized data, we can compute correlations between the gradient and the real/randomized data.
[r_original_curv,pval_curv_moran] = corr(embedding_tl,curv_tl,'type','spearman');
% pval_curv_moran = 6.6380e-05
[r_original_t1wt2w,pval_t1wt2w_moran] = corr(embedding_tl,t1wt2w_tl,'type','spearman');
% pval_t1wt2w_moran = 0
r_rand_curv = corr(embedding_tl,curv_rand,'type','spearman');
r_rand_t1wt2w = corr(embedding_tl,t1wt2w_rand,'type','spearman');
To find a p-value, we simply compute the percentile rank of the true correlation in the distribution of random correlations. Assuming a threshold of p<0.05 for statistical significance and disregarding multiple comparison corrections, we consider the correlation to be significant if it is lower or higher than the 2.5th/97.5th percentile, respectively.
prctile_rank_curv = mean(r_original_curv > r_rand_curv);
% prctile_rank_curv = 0.8249
prctile_rank_t1wt2w = mean(r_original_t1wt2w > r_rand_t1wt2w);
% prctile_rank_t1wt2w = 0
significant_curv = prctile_rank_curv < 0.025 || prctile_rank_curv >= 0.975;
significant_t1wt2w = prctile_rank_t1wt2w < 0.025 || prctile_rank_t1wt2w >= 0.975;
If significant is true, then we have found a statistically significant correlation. Alternatively, one could also test the one-tailed hypothesis whether the percentile rank is lower or higher than the 5th/95th percentile, respectively.
There are some scenarios where MSR results do not follow a normal distribution. It is relatively simple to check whether this occurs in our data by visualizing the null distributions. Check this interesting paper for more information (Burt et al., 2020).
% Compute the correlations between real and random data.
upper_triangle = triu(ones(size(curv_rand,2),'logical'),1);
r_real_rand_curv = corr(curv_tl,curv_rand);
r_real_rand_t1wt2w = corr(t1wt2w_tl,t1wt2w_rand);
r_rand_rand_curv = corr(curv_rand);
r_rand_rand_t1wt2w = corr(t1wt2w_rand);
r_rand_rand_curv = r_rand_rand_curv(upper_triangle);
r_rand_rand_t1wt2w = r_rand_rand_t1wt2w(upper_triangle);
% Plot histograms
figure('Color','w');
subplot(2,2,1);
hist(r_real_rand_curv,100);
title('Correlation curvature real and random');
subplot(2,2,2);
hist(r_real_rand_t1wt2w,100);
title('Correlation t1w/t2w real and random');
subplot(2,2,3);
hist(r_rand_rand_curv,100);
title('Correlation curvature random and random');
subplot(2,2,4);
hist(r_rand_rand_t1wt2w,100);
title('Correlation t1w/t2w random and random');

Indeed, our histograms appear to be normally distributed. This concludes the third and last tutorial. You should now be familiar with all the functionality of the BrainSpace toolbox. For more details on any specific function, please see MATLAB Package.
Variogram Matching¶
Here, we use the variogram matching approach presented in (Burt et al., 2020), which generates novel brain maps with spatial autocorrelation similar to that of the input data. We will start by loading the conte69 surfaces for the left and right hemispheres, a left temporal lobe mask, and t1w/t2w intensity and curvature data.
% load the conte69 hemisphere surfaces and spheres
[surf_lh, surf_rh] = load_conte69();
% Load the data
t1wt2w_lh = load_marker('t1wt2w');
curv_lh = load_marker('curvature');
% Load mask
temporal_mask_tmp = load_mask('temporal');
% There's a one vertex overlap between the HCP midline mask (i.e. nans) and
% our temporal mask.
temporal_mask_lh = temporal_mask_tmp & ~isnan(t1wt2w_lh);
% Load the embedding
embedding = load_gradient('fc',1);
embedding_lh = embedding(1:end/2);
% Keep only the temporal lobe.
t1wt2w_tl = t1wt2w_lh(temporal_mask_lh);
curv_tl = curv_lh(temporal_mask_lh);
Next, we will need a distance matrix that tells us what the spatial distance between our datapoints is. For this example, we will use geodesic distance.
G = surface_to_graph(surf_lh, 'geodesic', ~temporal_mask_lh, true);
geodesic_distance = distances(G);
Now we’ve got everything we need to generate our surrogate datasets. By default, BrainSpace will use all available data to generate surrogate maps. However, this process is extremely computationally and memory intensive. When using this method with more than a few hundred regions, we recommend subsampling the data. This can be done using the ‘ns’ and ‘knn’ name-value pairs. ‘ns’ determines how many data points to sample for the generation of the variogram; ‘knn’ determines how many neighbors to use in the smoothing step.
random_initialization = 0;
n_surrogate_datasets = 10;
% Note: num_samples must be greater than num_neighbors
num_samples = 200;
num_neighbors = 100;
obj_subsample = variogram(geodesic_distance, 'ns', num_samples, ...
'knn', num_neighbors, 'random_state', random_initialization);
surrogates_subsample = obj_subsample.fit(t1wt2w_tl, n_surrogate_datasets);
Note
The variogram class also supports parallel processing with the ‘num_workers’ Name-Value pair. This functionality requires the Parallel Computing Toolbox.
The correlation between the real data, surrogate data, and the marker of interest can then be compared as follows. Note that, for this example, we only generated 10 surrogate datasets. For any practical usage we recommend generating at least 1000.
r_real = corr(t1wt2w_tl, curv_tl);
r_surrogate = corr(surrogates_subsample, curv_tl);
prctile_rank = mean(r_real > r_surrogate);
Main Object¶
GradientMaps¶
Synopsis¶
The core object of the MATLAB BrainSpace package (source code).
Usage¶
gm = GradientMaps(varargin_1);
gm = gm.fit({data_matrix_1,...,data_matrix_n},varargin_2);
- gm: the GradientMaps object.
- data_matrix_n: the input feature matrix
- varargin_1: a set of name-value pairs (see below).
- varargin_2: a set of name-value pairs (see below).
Description¶
Properties¶
The method property is a structure array which itself consists of five fields. Each of these is set at the initialization of the object (see below) and cannot be modified afterwards. The fields are: “kernel”, “approach”, “alignment”, “n_components”, and “verbose”.
The gradients property is a cell array containing the (unaligned) gradients of each input matrix. Each cell is an n-by-m matrix where n is the number of datapoints and m the number of components. In joint embedding the gradients of all data sets are computed simultaneously, and thus no unaligned gradients are stored.
The aligned property is a cell array of identical dimensions to the gradients property. If an alignment was requested, then the aligned data are stored here.
The lambda property stores the variance explained (for PCA) or the eigenvalues (for LE and DM). Note that all computed lambdas are provided even if this is more than the number of requested components.
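As a short illustration of these properties, the sketch below fits a GradientMaps object and inspects each of them; it assumes two connectivity matrices, conn_matrix and conn_matrix2, are already in memory, as in the tutorials.
% Fit two data sets with Procrustes alignment, then inspect the properties.
gm = GradientMaps('kernel','na','approach','dm','alignment','pa');
gm = gm.fit({conn_matrix, conn_matrix2});
gm.method             % kernel, approach, alignment, n_components, verbose
g1 = gm.gradients{1}; % unaligned gradients of the first input matrix
a1 = gm.aligned{1};   % Procrustes-aligned gradients of the first input matrix
ev = gm.lambda{1};    % eigenvalues / variance explained for the first input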
Initialization¶
A basic GradientMaps object can be initialized by simply running the constructor without
arguments i.e. gm = GradientMaps();
. However, several name-value pairs can
be provided to alter its behavior.
- ‘kernel’
- ‘none’, ‘’
- ‘pearson’, ‘p’
- ‘spearman’, ‘sm’
- ‘gaussian’, ‘g’
- ‘cosine similarity’, ‘cs’, ‘cossim’, ‘cosine_similarity’
- ‘normalized angle’, ‘na’, ‘normangle’, ‘normalized_angle’ (default)
- a function handle (the function will be applied to the data matrix)
- ‘approach’
- ‘principal component analysis’, ‘pca’
- ‘laplacian eigenmap’, ‘le’
- ‘diffusion embedding’, ‘dm’ (default)
- a function handle (the function will be applied to the post-kernel data matrix)
- ‘alignment’
- ‘none’, ‘’ (default)
- ‘procrustes analysis’, ‘pa’, ‘procrustes’
- ‘joint alignment’, ‘ja’, ‘joint’
- a function handle (the function will be applied to the post-manifold data matrix)
- ‘random_state’
- Any input accepted by MATLAB’s rng function, or nan to use the current random state. (default: nan)
- ‘n_components’
- Any natural number in double format. (default: 10)
- ‘verbose’
- Determines whether non-warning/error messages are displayed (default: false).
Putting it all together, an example initialization could be: gm =
GradientMaps('kernel','g','approach','pca','alignment','','random_state',10,'verbose',true);
Public Methods¶
Public methods are accessible to the end-user. Disregarding the constructor (see the initialization section), the GradientMaps class contains one public method.
- fit
- Uses the settings stored in the method property to compute the gradients of all provided data matrices.
varargin
can be used to provide name-value pairs that modify the behavior of the fitting process. The following name-value pairs are allowed:
- sparsity (default: 90): Sets the sparsity at which the data matrix is thresholded.
- tolerance (default: 1e-6): Floating point errors may cause the kernel to output asymmetric matrices. This number denotes the amount of asymmetry that is allowed before an error is thrown.
- gamma (default: 1 / number_of_data_points): The gamma parameter used in the Gaussian kernel.
- alpha (default: 0.5): The alpha parameter used in diffusion embedding.
- diffusion_time (default: 0): The diffusion time used in diffusion embedding. Leave at 0 for automatic estimation.
- niterations (default: 10): The number of iterations in Procrustes analysis.
- reference (default: gradients of the first data matrix): The target for alignment in the first iteration of Procrustes analysis.
Example usage:
fit({data_matrix_1,data_matrix_2,...,data_matrix_n},'sparsity',75)
Private Methods¶
Private methods are not accessible to the user, but are called by other methods, i.e. GradientMaps initialization and GradientMaps.fit. The GradientMaps class contains three private methods. As these methods are not intended for user interaction, we only provide a basic explanation here.
- set(obj,varargin): used for setting properties of the GradientMaps class.
- kernels(obj,data,varargin): performs kernel computations.
- approaches(obj,data,varargin): performs dimensionality reduction.
Surface Handling¶
read_surface¶
Synopsis¶
Reads surfaces (source code).
Description¶
Reads surfaces from the disk. Accepted formats are gifti files (provided the gifti library is installed), freesurfer files, .mat files and .obj files.
When provided with a .mat file, the file must contain variables corresponding to a MATLAB surface, i.e. ‘vertices’ and ‘faces’, or a SurfStat surface, i.e. ‘coord’ and ‘tri’. ‘vertices’ is an n-by-3 list of vertex coordinates, ‘faces’ is an m-by-3 matrix of triangles, ‘coord’ is a 3-by-n list of vertex coordinates, and ‘tri’ is identical to faces in the MATLAB format.
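As an illustrative sketch only (the call signature is not listed above, so the single-argument form and the file name are assumptions), reading a GIFTI surface could look like:
% Hypothetical file name; .gii files require the gifti library.
surf = read_surface('lh.pial.gii');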
write_surface¶
Synopsis¶
Writes surfaces (source code).
Description¶
Writes surfaces to the designated output path. Accepted formats are .gii files (provided the gifti library is installed), freesurfer files, .mat files and .obj files. File type is determined by the extension; if the filetype is not recognized the default is freesurfer format.
When saving a .mat file, the fields of the surface structure are saved as variables inside the file.
combine_surfaces¶
Synopsis¶
Combines two surfaces into a single surface (source code).
Usage¶
SC = combine_surfaces(S1,S2);
SC = combine_surfaces(S1,S2,format);
- S1, S2: two surfaces
- format: the output format; either ‘SurfStat’ (default) or ‘MATLAB’.
- SC: Combined surface.
Description¶
This can be used to merge left and right hemisphere surfaces into a single surface. The input surfaces can be any file readable by read_surface or a surface loaded into memory.
A MATLAB structure consists of two fields: vertices, which is a n-by-3 list of vertex coordinates, and faces, which is a m-by-3 matrix of triangles. A SurfStat format consists of two fields: coord, which is a 3-by-n list of vertex coordinates, and tri, which is identical to faces.
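For example, a minimal sketch that merges the sample conte69 hemispheres (loaded with the data loader documented below) into one SurfStat-format surface:
% Load the sample hemispheres and combine them into a single surface.
[surf_lh, surf_rh] = load_conte69();
surf_combined = combine_surfaces(surf_lh, surf_rh, 'SurfStat');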
split_surfaces¶
Synopsis¶
Splits surfaces in a single variable into separate variables. (source code).
Usage¶
varargout = split_surfaces(surface);
- surface: surface loaded into MATLAB.
- varargout: Variables to load the surfaces into.
Description¶
If you have a variable containing multiple disconnected surfaces, this function will split them up and return each in a separate output variable. Accepts all surfaces readable by convert_surface.
full2parcel¶
Synopsis¶
Converts data from a full data matrix to parcellated data (source code).
Usage¶
parcellated_data = full2parcel(data,parcellation);
- data: an n-by-m data matrix.
- parcellation: a 1-by-m parcel vector containing natural numbers.
- parcellated_data: n-by-max(parcellation) matrix containing the mean column of each parcel.
Description¶
A useful tool for quickly moving data between vertex and parcel level, especially in combination with parcel2full. For more flexible usage, see also labelmean.
parcel2full¶
Synopsis¶
Converts data from a parcellated data matrix to full data matrix by assigning each vertex the value of its parcel (source code).
Usage¶
full_data = parcel2full(parcellated_data,parcellation);
- parcellated_data: an n-by-max(parcellation) data matrix.
- parcellation: a 1-by-m parcel vector containing natural numbers.
- full_data: n-by-m matrix containing the data at the vertex level.
Description¶
A useful tool for quickly moving data between parcel and vertex level, especially in combination with full2parcel. This is mostly used for plotting parcellated data on the surface.
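A minimal synthetic sketch of the round trip between vertex and parcel level, following the input conventions listed above:
data = [1 2 3 4 5 6 7 8 9 10];           % 1-by-10 vertexwise data (n = 1, m = 10)
parcellation = [1 1 1 1 1 2 2 2 2 2];    % 1-by-10 parcel vector with two parcels
parcel_means = full2parcel(data, parcellation);        % mean per parcel: [3 8]
vertex_data = parcel2full(parcel_means, parcellation); % each vertex gets its parcel mean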
Null Hypothesis Testing¶
spin_permutations¶
Synopsis¶
Performs a spin test to generate null data for hypothesis testing (source code).
Usage¶
null_data = spin_permutations(data,spheres,n_rep,varargin);
- data: An n-by-m matrix of data to be randomised (single sphere) or cell array of n-by-m matrices (two spheres).
- spheres: Cell array containing spheres.
- n_rep: The number of permutations to perform.
- null_data: The randomised data.
- varargin: See name-value pairs below.
Description¶
Spin test as described by (Alexander-Bloch et al., 2018). Data is either an n-by-m matrix, where n is the number of vertices and/or parcels, or, when spinning on two spheres, a two-element cell array each containing an n-by-m matrix. Spheres is either a file in a format readable by read_surface, a sphere loaded into memory, or a two-element cell array containing files/spheres corresponding to the respective elements in the data cell array.
Name-Value pairs¶
- ‘parcellation’: an n-by-1 vector containing the parcellation scheme. If you are performing vertexwise analysis, do not provide this pair.
- ‘surface_algorithm’: program used to generate the spheres. Either ‘FreeSurfer’ (default), or ‘CIVET’. If Freesurfer, rotations are flipped along the x-axis for the second sphere.
compute_mem¶
Synopsis¶
Computes the moran eigenvectors required for Moran spectral randomization (source code).
Usage¶
MEM = compute_mem(W,varargin);
- W: A matrix denoting distance between features or a cortical surface.
- varargin: Name-value pairs (see below).
Description¶
The Moran eigenvectors hold information on the spatial autocorrelation of the data. Their computation is kept separate from the randomization as these eigenvectors can be used for any feature on the same surface. These eigenvectors are used for Moran spectral randomization by moran_randomization. See also (Wagner and Dray, 2015).
If W is provided as a cortical surface, then this can either be a surface in memory or a file readable by read_surface.
Name-Value pairs¶
- n_ring: Only used if W is a surface. Vertices that are within n_ring steps of each other have their distance computed (Default: 1).
- mask: Only used if W is a surface. An n-by-1 logical denoting a mask, where n is the number of vertices. Vertices corresponding to True values are discarded when computing the eigenvectors. You can also provide an empty logical array to discard nothing (Default: []).
- spectrum: Determines the behavior for discarding eigenvectors with eigenvalue=0. Set to ‘all’ for discarding only one and reorthogonalizing the remainder, or ‘NonZero’ for discarding all zero eigenvalues (Default: ‘all’).
moran_randomization¶
Synopsis¶
Generates null data using Moran spectral randomization (source code).
Usage¶
Y_rand = moran_randomization(Y,MEM,n_rep,varargin);
- Y: An n-by-m data matrix to randomize, where n is the number of datapoints and m the number of modalities.
- MEM: Moran eigenvectors as returned by compute_mem.
- n_rep: Number of randomizations to perform.
- varargin: See name-value pairs below.
- Y_rand: Randomized data.
Description¶
Implementation of Moran spectral randomization as presented by (Wagner and Dray, 2015). This function uses the eigenvectors computed by compute_mem to generate null model data with similar spatial autocorrelation. The implemented procedures are ‘singleton’ and ‘pair’. Singleton matches the input data’s autocorrelation more closely at the cost of fewer possible randomizations (max: 2^n). In most use-cases this allows for ample randomizations. In cases where the maximum number of randomizations becomes restrictive, we recommend using the pair procedure instead.
Name-Value Pairs¶
- procedure: Randomization procedure; either ‘singleton’ or ‘pair’.
- joint: If true, randomizes different modalities identically.
- random_state: Initialization of the random state. Accepts any argument accepted by rng() or nan for no initialization.
Visualization¶
gradient_in_euclidean¶
Synopsis¶
Plots gradient data in 3D Euclidean space (source code).
Usage¶
handles = gradient_in_euclidean(gradients)
handles = gradient_in_euclidean(gradients,surface)
handles = gradient_in_euclidean(gradients,surface,parcellation)
- gradients: an n-by-3 matrix of gradients (usually the first three).
- surface: a surface readable by convert_surface or a two-element cell array containing left and right hemispheric surfaces in that order.
- parcellation: an m-by-1 vector containing the parcellation scheme, where m is the number of vertices.
- handles: a structure containing the handles of the graphics objects.
Description¶
Produces a 3D scatter plot of gradient data in Euclidean space with each datapoint color coded by their location. If provided a surface (and parcellation), also produces surface plots with the colors projected back to the surface.
BrainSpace only provides basic figure building functionality. For more information on how to use MATLAB to create publication-ready figures we recommend delving into graphics object properties (e.g. figure, axes, surface).
plot_hemispheres¶
Synopsis¶
Plots data on the cortical surface (source code).
Usage¶
obj = plot_hemispheres(data,surface,varargin);
- data: an n-by-m matrix of data to plot, where n is the number of vertices or parcels and m the number of markers to plot.
- surface: a surface readable by convert_surface or a two-element cell array containing left and right hemispheric surfaces in that order.
- varargin: Name-Value Pairs (see below).
- obj: an object allowing for further modification of the figure (see below).
Description¶
Plots any data vector onto cortical surfaces. Data is always provided as a single vector; if two surfaces are provided then the n vertices of the first surface will be assigned datapoints 1:n and the second surface is assigned the remainder. If a parcellation scheme is provided, data should have as many datapoints as there are parcels.
BrainSpace only provides basic figure building functionality. For more information on how to use MATLAB to create publication-ready figures we recommend delving into graphics object properties (e.g. figure, axes, surface). Also see the source code for basic graphic object property modifications.
Name-Value Pairs¶
- parcellation: a k-by-1 vector containing the parcellation scheme, where k is the number of vertices. Default: [].
- labeltext: A cell array of m elements containing labels for each column of data. These will be printed next to the hemispheres. Default: [].
- views: A character vector containing the requested view angles. Options are: l(ateral), m(edial), i(nferior), s(uperior), a(nterior), and p(osterior). Default: ‘lm’.
Public methods¶
Public methods can be used with obj.(method) e.g. obj.colorlimits to use the colorlimits method.
- colorlimits(limits): Sets the color limits of each row. Limits must be a 2-element numeric vector or n-by-2 matrix where n is the number of columns in obj.data. The first column of limits denotes the minimum color limit and the second the maximum. When limits is a 2-element vector, then the limits are applied to all axes, with limits(1) as minimum and limits(2) as maximum.
- colorMaps(maps): Maps must either be an n-by-3 color map, or a cell array with the same number of elements as columns in obj.data, each containing n-by-3 colormaps.
- labels(varargin): Modifies the properties of the labeltext. Varargin are valid name-value pairs for MATLAB’s text function. e.g. obj.labels(‘FontSize’,25)
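A short sketch of these methods in action, assuming obj was returned by a plot_hemispheres call with two data columns (as in the tutorials):
obj.colorlimits([-0.1 0.1]);   % same color limits applied to all rows
obj.colorMaps(parula(256));    % one 256-by-3 colormap applied to all columns
obj.labels('FontSize', 25);    % name-value pairs forwarded to MATLAB's text()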
scree_plot¶
Synopsis¶
Produces a scree plot of the lambdas (source code).
Usage¶
handles = scree_plot(lambdas)
- lambdas: a vector of lambdas; can be taken from the GradientMaps object.
- handles: a structure containing the handles of the graphics objects.
Description¶
scree_plot plots the lambdas scaled to a sum of 1. It is a useful tool for identifying the difference in the importance of each eigenvector (i.e. gradient) with higher lambdas being more important in principal component analysis and diffusion embedding, and lower ones more important in Laplacian eigenmaps.
BrainSpace only provides basic figure building functionality. For more information on how to use MATLAB to create publication-ready figures we recommend delving into graphics object properties (e.g. figure, axes, surface).
Data Loaders¶
load_conte69¶
Synopsis¶
Loads conte69 surfaces (source code).
Usage¶
[surf_lh,surf_rh] = load_conte69(name)
- name: The type of surface. Either ‘surfaces’ for neocortex, ‘spheres’ for cortical spheres, or ‘5k_surfaces’ for downsampled neocortex.
- surf_lh: Left hemispheric surface.
- surf_rh: Right hemispheric surface.
load_group_fc¶
Synopsis¶
Loads group level functional connectivity matrices (source code).
Usage¶
conn_matrices = load_group_fc(parcellation,scale,group)
- parcellation: Name of the parcellation, either ‘schaefer’ for Schaefer (functional) parcellations or ‘vosdewael’ for a subparcellation of the Desikan-Killiany atlas. Both may be provided as a cell or string array.
- scale: Scale of the parcellation. Either 100, 200, 300, or 400. Multiple may be provided as a vector.
- group: Loads data from the main group if set to ‘main’ (default) or the holdout group if set to ‘holdout’.
- conn_matrices: Structure of all requested data.
Note
Data matrices were derived from the discovery (main) and validation (holdout) groups of (Vos de Wael et al., 2018).
load_group_mpc¶
Synopsis¶
Loads group level microstructural profile covariance matrices (source code).
Usage¶
conn_matrices = load_group_mpc(parcellation,scale,group)
- parcellation: Name of the parcellation, either ‘schaefer’ for Schaefer (functional) parcellations or ‘vosdewael’ for a subparcellation of the Desikan-Killiany atlas. Both may be provided as a cell or string array.
- scale: Scale of the parcellation. Either 100, 200, 300, or 400. Multiple may be provided as a vector.
- group: Loads data from the main group if set to ‘main’ (default) or the holdout group if set to ‘holdout’.
- conn_matrices: Structure of all requested data.
Note
The mpc matrix presented here matches the subject cohort of (Paquola et al., 2019). Other matrices in this package match the subject groups used by (Vos de Wael et al., 2018). We make direct comparisons in our tutorial for didactic purposes only.
load_mask¶
Synopsis¶
Loads cortical masks (source code).
Usage¶
[mask_lh,mask_rh] = load_mask(name)
- name: Type of mask: either ‘midline’ for the midline or ‘temporal’ for the temporal lobe.
- mask_lh: Left hemispheric mask.
- mask_rh: Right hemispheric mask.
load_marker¶
Synopsis¶
Loads metric data (source code).
Usage¶
[metric_lh,metric_rh] = load_marker(name)
- name: The type of marker. Either ‘thickness’ for cortical thickness, ‘t1wt2w’ for t1w/t2w intensity, or ‘curvature’ for curvature.
- metric_lh: Data on left hemisphere.
- metric_rh: Data on right hemisphere.
Note
Data matrices were derived from the discovery (main) group of (Vos de Wael et al., 2018).
load_parcellation¶
Synopsis¶
Loads parcellation schemes (source code).
Usage¶
labels = load_parcellation(parcellation,scale)
- parcellation: Name of the parcellation, either ‘schaefer’ for Schaefer (functional) parcellations or ‘vosdewael’ for a subparcellation of the Desikan-Killiany atlas. Both may be provided as a cell or string array.
- scale: Scale of the parcellation. Either 100, 200, 300, or 400. Multiple may be provided as a vector.
- labels: Structure containing the parcellation schemes.
load_gradient¶
Synopsis¶
Loads template gradients (source code).
Usage¶
data = load_gradient(name,number)
- name: The type of gradient, either ‘fc’ for functional connectivity or ‘mpc’ for microstructural profile covariance.
- number: The rank of the gradient, either 1 or 2.
- data: Output data.
Description¶
Loads normative functional connectivity and microstructural profile covariance gradients, both computed from the HCP dataset. Gradients were computed with a cosine similarity kernel and diffusion mapping on a downsampled 5K cortical mesh. Resulting gradients were upsampled to the 32K mesh.
Support Functions¶
SurfStatReadSurf1¶
Synopsis¶
SurfStat’s surface loader, used here for loading Freesurfer and .obj files. For surface loading in BrainSpace, please use read_surface instead (source code).
SurfStatWriteSurf1¶
Synopsis¶
SurfStat’s surface writer, used here for writing Freesurfer and .obj files. For surface writing in BrainSpace, please use write_surface instead (source code).
convert_surface¶
Synopsis¶
Loads, converts, and writes surfaces (source code).
Usage¶
S = convert_surface(file);
S = convert_surface(surface_struct,varargin);
- file: path to a gifti, freesurfer, or .obj file.
- surface_struct: a surface stored either in MATLAB format or SurfStat format.
- varargin: see Name-Value pairs below.
- S: Output surface.
Description¶
This function is BrainSpace’s surface loader/writer. When provided with the path to a surface file it will load gifti files (provided the gifti library is installed), freesurfer files, .mat files and .obj files. It can also be provided with a structure variable containing a surface in either MATLAB format or SurfStat format. When provided with a ‘path’ name-value pair, the input surface will also be written to the disk.
A MATLAB surface is a structure consisting of two fields: ‘vertices’, which is an n-by-3 list of vertex coordinates, and ‘faces’, which is an m-by-3 matrix of triangles. A SurfStat format consists of two fields: ‘coord’, which is a 3-by-n list of vertex coordinates, and ‘tri’, which is identical to faces in the MATLAB format.
Name-Value pairs¶
- format: the format to convert to; either SurfStat (default) or MATLAB
- path: a path to write the surface to (Default: ‘’). If used, only one surface can be provided in S.
diffusion_mapping¶
Synopsis¶
Performs the diffusion mapping computations (source code).
Usage¶
[gradients,lambdas] = diffusion_mapping(data, n_components, alpha, diffusion_time, random_state);
- data: the data matrix to perform the diffusion mapping on.
- n_components: the number of components to return.
- alpha: the alpha parameter.
- diffusion_time: the diffusion_time parameter; set to 0 for automatic estimation.
- random_state: Input passed to the rng() function for randomization initialization (default: no initialization).
- gradients: the output gradients.
- lambdas: the output eigenvalues.
Description¶
Performs the diffusion mapping procedure. This implementation is based on the mapalign package by Satrajit Ghosh.
labelmean¶
Synopsis¶
Takes the mean of columns with the same label (source code).
Usage¶
label_data = labelmean(data,label,varargin);
- data: an n-by-m data matrix.
- label: a 1-by-m label vector containing natural numbers.
- label_data: n-by-max(label) matrix containing the mean column of each label.
- varargin: See below.
Description¶
A fast way of computing the mean for each label. Accepts the following additional arguments in varargin:
- sortByLabelVector: Sorts the columns of label_data by the order of appearance in the label vector, rather than ascending order.
- ignoreWarning: Does not display the order of column sorting or warnings due to 0s in the label vector.
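A minimal synthetic sketch of labelmean, matching the input conventions listed above:
data = [1 2 3 4 5 6];   % 1-by-6 data matrix (n = 1, m = 6)
label = [1 1 1 2 2 2];  % 1-by-6 label vector with two labels
label_data = labelmean(data, label);   % mean per label: [2 5]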
laplacian_eigenmap¶
Synopsis¶
Performs the laplacian eigenmap computations (source code).
Usage¶
[gradients, lambdas] = laplacian_eigenmap(data, n_components)
- data: the data matrix to perform the laplacian eigenmapping on.
- n_components: the number of components to return.
- gradients: the output gradients.
- lambdas: the output eigenvalues.
Description¶
Performs the Laplacian eigenmap procedure. Originally implemented in the MATLAB Toolbox for Dimensionality Reduction by Laurens van der Maaten. Modified for use in the BrainSpace toolbox.
procrustes_alignment¶
Synopsis¶
Performs gradient alignment through iterative singular value decomposition (source code).
Usage¶
aligned = procrustes_alignment(gradients,varargin)
- gradients: cell array of gradients
- varargin: Name-value pairs (see below).
- aligned: Aligned gradients.
Description¶
Gradient alignment through Procrustes analysis [see (Langs et al., 2015)]. On the first iteration all gradients are aligned to either the first provided gradient set or to a template (see Name-Value pairs) using singular value decomposition. On every subsequent iteration alignment is to the mean.
Name-Value pairs¶
- nIterations: Number of iterations to run (default: 100).
- reference: Template to align to on the first iteration (default: first provided gradient set)
surface_to_graph¶
Synopsis¶
Converts a surface to a graph (source code).
Usage¶
G = surface_to_graph(S,distance)
G = surface_to_graph(S,distance,mask)
G = surface_to_graph(S,distance,mask,removeDegreeZero)
- G: Output graph
- S: a surface.
- distance: either ‘geodesic’ for returning a weighted graph, or ‘mesh’ for unweighted.
- mask: a logical mask where True denotes vertices to remove (Default: []).
- removeDegreeZero: a logical, if true then vertices with degree 0 are removed from the output graph.
Description¶
Converts surfaces readable by convert_surface into a graph. Masks can be used to remove portions of the surfaces (e.g. the midline).
References¶
When using BrainSpace, please cite the following manuscript:
The following references used methodology similar, or identical, to those described here:
Functional gradients of the adult connectome.
Functional gradients of the neonatal connectome.
Functional gradients in autism.
Functional gradients in aging.
Functional gradients of the nonhuman primate connectome.
Meta-analytic co-activation and functional gradients in the hippocampus.
Gradients of microstructural markers.
- Paquola C, Vos de Wael R, Wagstyl K, Bethlehem R, Seidlitz J, Hong S, Bullmore ET, Evans AC, Misic B, Margulies DS, Smallwood J, Bernhardt BC (2019) Dissociations between microstructural and functional hierarchies within regions of transmodal cortex. PLoS Biology, in press.
- Paquola C, Bethlehem RAI, Seidlitz J, Wagstyl K, Romero-Garcia, Whitaker KJ, Vos de Wael R, Williams GB, NSPN Consortium, Vertes PE, Bernhardt BC, Bullmore ET (2019). A moment of change: shifts in myeloarchitecture profiles characterize adolescent development of cortical gradients. Preprint: https://www.biorxiv.org/content/10.1101/706341v1
Please send us your gradient papers, so that we can list them here as well!
Funding¶
Our research is kindly supported by:
- Canadian Institutes of Health Research (CIHR)
- National Science and Engineering Research Council of Canada (NSERC)
- Azrieli Center for Autism Research
- The Montreal Neurological Institute
- SickKids Foundation
We would also like to thank these funders for training/salary support:
- Savoy Foundation for Epilepsy (to RvdW)
- Healthy Brain and Healthy Lives (to OB)
- Fonds de la Recherche du Quebec - Sante (to BB)
Authors¶
- Reinder Vos de Wael, MICA Lab - Montreal Neurological Institute
- Oualid Benkarim, MICA Lab - Montreal Neurological Institute
- Boris Bernhardt, MICA Lab - Montreal Neurological Institute
License¶
The BrainSpace source code is available under the BSD (3-Clause) license.