Tools for building full brain network models from standard structural MR scans.
These notes are out of date.
- Scripts to help set up an environment with dependencies
- Extensive visual diagnostics for intermediate & final results
- Follow FS' organization in per-subject topic folders (adds dwi & seeg)
- Functionality for rapidly reparcellating and obtaining new brain model
- friendly cli with docopt / argparse
- calibrate standard 10-20 layout against aparc for MEG & EEG
- EEG should move sensors to be a few mm outside the head surface (lh.seghead)
- break up into separate workflows
- fs recon all
- resample surface
- coreg t1 dwi
- dmr estimates
- tractography
- em fwd for cortical surface
- coreg t1 ct
- label electrodes
then for every connectivity, generate
- connectome matrices
- region mapping
- reduced lead fields
- ROI vol map for fMRI
- FS & tractography can run in parallel
- FS has a new -parallel flag which makes things faster
rm -rf $SUBJECTS_DIR/tvb
time recon-all -i ../t1_raw.nii.gz -s tvb -all -parallel
The dwi workflow can start in parallel, as well as tractography, up to the connectome generation step, which requires the parcellation volumes in diffusion space.
Most steps of the EM forwards can also be done in parallel.
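The parallel structure can be sketched with stand-in functions (`sleep`/`echo` here in place of the real recon-all and dwi preprocessing commands):

```shell
# Stand-ins for the real pipelines; substitute recon-all and the
# mrconvert/dwi2mask/... steps in practice.
recon_branch() { sleep 1; echo "recon done"; }
dwi_branch()   { sleep 1; echo "dwi done"; }

recon_branch > recon.log &
dwi_branch   > dwi.log   &
wait  # join both branches before the connectome step, which needs both
cat recon.log dwi.log
```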
Most steps are parameter free, but a few to keep in mind
- subparcellation
- number of tracks (though 10M sifted to 5M seems a safe default; 100M->10M is a luxury)
SIFT is always on; topup & ACT depend on the data.
The main difficulty with preprocessing a reversed acquisition is formatting the data, because mrconvert asks interactively which series to convert. We can batch this with a Bash function like
function mrchoose() {
    choice=$1
    shift
    "$@" << EOF
$choice
EOF
}
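The function can be sanity-checked with `cat` standing in for `mrconvert` (the definition is repeated so the snippet is self-contained):

```shell
# mrchoose answers the wrapped command's interactive prompt via stdin.
function mrchoose() {
    choice=$1
    shift
    "$@" << EOF
$choice
EOF
}

# cat simply echoes its stdin, showing the choice the wrapped command
# would receive; real use: mrchoose 2 mrconvert dicom/ dwi.mif
mrchoose 2 cat   # prints 2
```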
- new subdivided parcellations can be made on a sphere
- save to new annot files label/{r,l}h.foo.annot
- mri_aparc2aseg --s tvb --aseg aseg.presurf.hypos --annot foo
- results in mri/foo+aseg.mgz
- usable for tck2connectome
In the following sections we step through the workflow used for the default dataset. Note that the dMR data were already corrected for eddy distortion with FSL's topup, but a script is provided to handle that if you need it.
This workflow produces
- Cortical surface usable for simulation
- ROI diffusion-based connectome, usable for simulation
- Cortical parcellation for mapping cortical regions in (2) to vertices in (1)
- Volumetric parcellation for cortical & subcortical regions in (2)
- BEM surfaces, including a scalp/head surface
- Forward solutions from activity on ROIs in (2) to sEEG, EEG & MEG, w/ sensor positions & orientations
- BOLD forward solution via (2)
- Cortical surface geodesic distance matrix
The dependencies are, roughly,
- T1
- DMR
- CT
- recon-all -> 1
- Low-res surface & annot -> 4
Producing these data requires many intermediate data that will be organized in the subject's folder, following FreeSurfer conventions where possible.
Finally, in the following examples, the subject name is always tvb, so by convention we set this in an environment variable before starting:
export SUBJECT=tvb
recon-all -s ${SUBJECT} -i t1_raw.nii.gz -all -parallel
This takes 5+ hours, though the time required can vary significantly across datasets.
Loading the T1, aparc+aseg, and the lh.white surface with the lh.aparc annot (Desikan-Killiany) should look like
The default parcellation can be subdivided into regions of roughly 100 mm^2
SUBJECT=tvb python utils.py lh aparc aparc100 100
SUBJECT=tvb python utils.py rh aparc aparc100 100
This takes a few seconds. Check the subdivision visually,
freeview -f ${SUBJECTS_DIR}/${SUBJECT}/surf/lh.pial:annot=aparc100 -viewport 3d
For each such parcellation, the corresponding volume should be generated
mri_aparc2aseg --s ${SUBJECT} --aseg aseg --annot aparc100
This takes about 130 seconds and generates mri/aparc100+aseg.mgz.
freeview -v ${SUBJECTS_DIR}/${SUBJECT}/mri/aparc100+aseg.mgz -viewport 3d
TODO Subdivision of the subcortical structures.
The cortical surfaces generated by FreeSurfer are too large for simulation (>100k
vertices), so we resample the anatomy through a lower resolution subject, with the
resamp-anat.sh
script:
$ mris_info $SUBJECTS_DIR/$SUBJECT/surf/lh.pial | grep nvertices
nvertices 123270
$ SUBJECT=tvb ./resamp-anat.sh
...
$ mris_info $SUBJECTS_DIR/$SUBJECT/surf/lh.pial.fsaverage5 | grep nvertices
nvertices 10242
This produces ~10k-vertex surfaces, resampled annotations, and a screenshot of the results on the subject's T1, under the name resamp-$SUBJECT-$TRGSUBJECT.png in the working directory. By default, the fsaverage5 subject is used, but this can be set via the TRGSUBJECT env var.
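Presumably the script honors the env var with Bash parameter-default expansion; the pattern itself (an assumption about resamp-anat.sh's internals) behaves like this:

```shell
# ${VAR:-default} falls back when TRGSUBJECT is unset or empty.
unset TRGSUBJECT
echo "target: ${TRGSUBJECT:-fsaverage5}"   # prints "target: fsaverage5"
TRGSUBJECT=fsaverage6
echo "target: ${TRGSUBJECT:-fsaverage5}"   # prints "target: fsaverage6"
```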
This requires the development version of FreeSurfer, pending release of v6.
The geodesic matrices of vertex-vertex distances along the cortical surfaces are computed as
SUBJECT=tvb python utils.py gdist pial.fsaverage5
This requires 150 seconds and produces per-hemisphere mat files containing the sparse matrices.
This workflow has moved to seeg-ct.sh.
This is moving to a dedicated workflow in em-forward.sh.
WIP requires update
Preparing a diffusion MR sequence for tractography requires a few preprocessing steps. We'll use a new subject subfolder, and change into that folder for this section of the workflow
mkdir -p ${SUBJECTS_DIR}/${SUBJECT}/dmr
pushd ${SUBJECTS_DIR}/${SUBJECT}/dmr
Copy the diffusion data into this folder as dwi.nii.gz, along with the bvecs and bvals files, and convert the data to MRtrix format, extracting the brain mask and low-b image.
mrconvert dwi.nii.gz dwi.mif -fslgrad bvecs bvals
dwi2mask dwi.mif mask.mif
dwiextract dwi.mif lowb.mif -bzero
mrconvert lowb.mif lowb.nii.gz
These steps take about 90 seconds depending on file system and hard disk speed, and can require several GB of RAM depending on the dataset.
In order to apply the anatomical labeling obtained in previous steps to the diffusion images and tractography, we need to convert the relevant images and obtain a transform from the diffusion image coordinate system to that of the T1:
for image in T1 aparc+aseg aparc250+aseg
do
mri_convert ../mri/$image.mgz $image.nii.gz --out_orientation RAS -rt nearest
done
for parc in aparc aparc250
do
fslreorient2std $parc+aseg.nii.gz $parc+aseg-reo.nii.gz
mv $parc+aseg-reo.nii.gz $parc+aseg.nii.gz
done
regopt="-dof 6 -searchrx -180 180 -searchry -180 180 -searchrz -180 180 -cost mutualinfo"
flirt -in lowb.nii.gz -ref T1.nii.gz -omat d2t.mat -out lowb-in-t1.nii.gz $regopt
This takes about 90 seconds. Next, we apply the inverse transform to the T1 and the volumetric parcellation:
convert_xfm -omat t2d.mat -inverse d2t.mat
for parc in aparc aparc250
do
flirt -applyxfm -in $parc+aseg.nii.gz -ref lowb.nii.gz \
-out $parc+aseg-in-d.nii.gz -init t2d.mat -interp nearestneighbour
done
flirt -applyxfm -in T1.nii.gz -ref lowb.nii.gz \
-out t1-in-d.nii.gz -init t2d.mat -interp nearestneighbour
This takes a few seconds. Visualize the result:
freeview -v t1-in-d.nii.gz lowb.nii.gz:colormap=heat aparc+aseg-in-d.nii.gz:colormap=jet
TODO Redo this section, following the new syntax for the mrtrix3 commands dwi2response and dwi2fod.
dwi2response dwi.mif response.txt -mask mask.mif -sf sf-mask.mif
dwi2fod dwi.mif response.txt csd.mif -mask mask.mif
This takes about 40 minutes. TODO images for verification
MRtrix uses the grey matter-white matter interface to improve track seeding, and so-called ACT (anatomically constrained tractography), with several criteria on tissue type, to improve track termination, so generate the necessary volumes:
act_anat_prepare_fsl t1-in-d.nii.gz act.mif
5tt2gmwmi act.mif gmwmi.mif
These steps take 9 minutes.
The previous section works with volumetric diffusion weighted images. Obtaining connectomes from such images requires generating tracks and applying volumetric parcellations to tracks.
opt="-seed_gmwmi gmwmi.mif -act act.mif -unidirectional -maxlength 250 -step 0.5"
tckgen $opt -num 10000000 csd.mif brain.tck
About 200 tracks per second are selected, so a 10M track tractogram requires about 13 hours. Again, this can be accelerated by allocating more CPUs to your VM and changing the MRtrix configuration as described above.
This is one step which is significantly slower in a VM: natively, on a 4-core Haswell Xeon, running tckgen with 8 threads selects ~4.2k tracks per second, so 10M takes less than an hour.
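The quoted rates can be sanity-checked with shell arithmetic (integer division, so values are floors):

```shell
# 10M tracks at ~200 tracks/s in a VM vs ~4200 tracks/s natively
echo "VM:     $(( 10000000 / 200 / 3600 )) h"    # prints 13 h
echo "native: $(( 10000000 / 4200 / 60 )) min"   # prints 39 min, under an hour
```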
The speed at this step is highly dependent on the dataset and tractography
options. The simplest way to speed this up is to distribute across many
nodes in a cluster and concatenate into a single tractogram with e.g. tckedit parts*.tck all.tck.
SIFT filters tracks to improve the fit with the diffusion image,
tcksift brain.tck csd.mif brain-sift.tck -act act.mif -term_number 2000000
This requires X hours. Note that SIFT requires 1 GB of RAM per 4-8M tracks, so for the default 10M tracks above, you may need to allocate 3 GB of RAM to your VM. This results in a 2M track tractogram.
FreeSurfer's strategy for generating parcellations assigns unique ids to different regions, but the unique list of ids for all regions is not contiguous, so we generate a secondary parcellation volume with reassigned, contiguous sequence of ids. More details and example images are provided on the relevant MRtrix example wiki page.
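The remapping itself is simple; a toy illustration of turning sparse FreeSurfer-style ids into a contiguous sequence (sample ids, not a real LUT):

```shell
# sample non-contiguous FreeSurfer-style region ids
printf '10\n11\n49\n50\n' > fs_ids.txt
# assign contiguous connectome ids 1..N in order of appearance
awk '{ print $1, NR }' fs_ids.txt > remap.txt
cat remap.txt   # prints "10 1", "11 2", "49 3", "50 4"
```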
The file mapping FreeSurfer names & indices to connectome indices is the connectome configuration file.
For the default parcellation, MRtrix ships with such a file, so we can simply
conf=/usr/local/mrtrix3/src/connectome/config/fs_default.txt
labelconfig aparc+aseg-in-d.nii.gz $conf aparc+aseg-in-d.mif \
-lut_freesurfer ${FREESURFER_HOME}/FreeSurferColorLUT.txt
For the example subdivided parcellations, we need to generate a new connectome configuration.
_TODO Python script for this_. Then, run `labelconfig` for each custom parcellation,
for parc in aparc250
do
    for hemi in lh rh
    do
        python annot_to_lut_conn_conf.py $hemi $parc $parc-lut.txt $parc-conf.txt
    done
    labelconfig $parc+aseg-in-d.nii.gz $parc-conf.txt $parc+aseg-in-d.mif \
        -lut_freesurfer ${FREESURFER_HOME}/FreeSurferColorLUT.txt
done
Once the connectome-configured parcellation volumes are prepared, we can apply them to the tractogram generated above, obtaining matrices of track counts and mean track lengths for each parcellation volume:
assignment="-assignment_radial_search 2"
for parc in aparc aparc250
do
for metric in count meanlength
do
tck2connectome brain-sift.tck $parc+aseg-in-d.mif \
$parc-$metric.csv $assignment -zero_diagonal -metric $metric
done
done
One curiosity of tractography is that regions may have quite wide track-length distributions, which can affect simulations where conduction velocity turns track length into delay. These distributions can be obtained by dumping lengths & node assignments,
tckstats brain.tck -dump lengths.txt
tck2connectome brain-sift.tck aparc+aseg-in-d.mif weights.txt -out_assignments assignments.txt
and analyzing with a custom script
python analyze_track_lengths.py
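Assuming lengths.txt holds one track length in mm per line (as tckstats -dump produces), a quick summary is also possible with awk; the sample data here stand in for real output:

```shell
# sample lengths standing in for tckstats' dump
printf '10.5\n20.0\n30.1\n' > lengths.txt
# mean, max, and count of track lengths
awk '{ s += $1; if ($1 > m) m = $1; n++ }
     END { printf "mean %.2f max %.2f n %d\n", s / n, m, n }' lengths.txt
```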