I want to get help or discuss issues about MRtrix

Please subscribe to the MRtrix mailing list and post your questions there. You can also browse through the archives to see if your question has already been addressed. Another advantage of joining the mailing list is that you get notified of any new releases.

What data do I need to perform constrained spherical deconvolution?

The input data for CSD is a high angular resolution diffusion-weighted imaging (HARDI) data set. There are three main aspects of the acquisition that impact on the quality of the CSD results. In general, there will be trade-offs between the parameters concerned, meaning that there is no simple answer to this question. However, we do provide our own recommendations as part of this discussion.

b-value:
higher b-values produce stronger angular contrast in the DW signal, providing improved discrimination between the different fibre orientations. Note that although the raw DW images will look much noisier at higher b-values, it is the vastly improved contrast-to-noise ratio in the angular domain that is critical for CSD.
We would recommend a b-value of approximately 3000s/mm².
number of DW directions:
a larger number of DW directions will produce a better characterisation of the DW signal. In addition to an overall increase in SNR, it provides a more precise definition of the features of the DW signal in the angular domain, which is critical for CSD. Furthermore, since the angular contrast of the DW signal increases with higher b-values, a larger number of DW directions becomes even more important at high b-values. Please note that in the context of this discussion, it is the number of unique directions that matters: a 3 × 12 directions acquisition still only contains 12 directions (although with improved SNR).
We would recommend a minimum of 60 DW directions.
SNR:
higher SNR obviously produces better results. Larger voxels will provide higher SNR, but at the expense of spatial localisation. However, CSD will produce poor quality results if the SNR in the b=0 image is too low. We would recommend adjusting the voxel size until the SNR exceeds 20. This should not require a huge sacrifice in terms of imaging resolution: for example, using 2.5mm rather than 2mm isotropic voxels almost doubles the SNR (the voxel volume increases by a factor of close to 2), at the expense of a relatively small reduction in spatial resolution.
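The voxel-size claim above is easy to sanity-check: SNR scales approximately with voxel volume, and the volume ratio between 2.5mm and 2mm isotropic voxels is close to 2. A quick check using plain shell arithmetic (no MRtrix commands involved):

```shell
# Ratio of voxel volumes between 2.5mm and 2mm isotropic voxels.
# SNR scales roughly with voxel volume, so this approximates the SNR gain.
awk 'BEGIN { printf "%.2f\n", (2.5 ^ 3) / (2 ^ 3) }'
```

This prints 1.95, i.e. very close to the factor of 2 quoted above.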

In the same way, there is also no simple answer to what the minimum requirements are. It is possible to get reasonable-looking results using b=1000s/mm² and 30 DW directions, but the quality may then be questionable. In general, we would urge you to follow the recommendations given here if you intend to use CSD.

How do I perform the super-resolved version of CSD?

Super-CSD is actually performed in the same way as 'normal' CSD, but using a higher harmonic order (lmax) than would otherwise be possible given the data. For example, 60 directions provides enough data to perform a spherical harmonic fit up to harmonic order 8 (which requires 45 parameters), but not enough for harmonic order 10 (which requires 66 parameters) – see the table in the response function coefficient section. This means that a 60 direction data set will be analysed using straight CSD if csdeconv is performed with lmax=8 or lower, and super-resolved CSD if csdeconv is performed with lmax=10 or higher.
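The coefficient counts quoted above follow from the size of the even-order spherical harmonic basis: a fit up to order lmax requires (lmax+1)(lmax+2)/2 parameters. A quick way to tabulate this from the shell (no MRtrix commands involved):

```shell
# Number of spherical harmonic coefficients required for each (even) lmax:
for lmax in 2 4 6 8 10 12; do
  echo "lmax=$lmax: $(( (lmax + 1) * (lmax + 2) / 2 )) coefficients"
done
```

This gives 45 coefficients for lmax=8 and 66 for lmax=10, matching the figures above: with 60 directions, the former is an ordinary fit, while the latter requires super-resolution.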

I keep getting 'failed to converge' messages with csdeconv

csdeconv will produce one such message per voxel where the CSD fails to converge. It is not unusual to get a dozen or so such messages per data set when performing a super-resolved reconstruction (see above). The voxels affected are typically not in white matter, so these failures usually won't affect any subsequent tractography.

However, if you are getting a large number of these messages, you should check whether the CSD results are usable. They can be loaded directly into MRView: the voxels that failed to converge will appear black. If these messages do signal a real problem, try performing the CSD using a lower value of lmax. In particular, you will usually not get any such messages when performing non-super-resolved CSD (see above).

I want to coregister my anatomical images with the DWI/tracks

MRtrix now includes support for the NIfTI image format, allowing straightforward interaction with SPM and FSL (amongst others). Both of these packages provide robust functionality for coregistration. Some simple instructions for coregistration using these packages are given below.

An important point to bear in mind is that the orientation of the DWI data, and of any images derived from them (including the CSD results), should not be modified. Doing so may alter the orientation of the DW gradients with respect to the data, and hence the orientation of the estimated fibres relative to the data, which would invalidate any subsequent tractography results. In practice, this means that the anatomical images should be coregistered to the DWI data, leaving the DWI data unmodified.
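To see why this matters, consider what happens if the image is rotated but the stored gradient directions are not: each direction then no longer points along the same anatomical axis. A toy illustration (plain awk; the 90° rotation about z and the direction (1,1,0) are arbitrary choices for this sketch):

```shell
# Rotate a gradient direction by 90 degrees about the z axis.
# If the image data were rotated like this without applying the same
# rotation to the gradient table, the stored direction (1,1,0) would
# now point along the wrong anatomical axis.
awk 'BEGIN {
  x = 1; y = 1; z = 0                      # original gradient direction
  printf "rotated: %d %d %d\n", -y, x, z   # (x,y,z) -> (-y,x,z)
}'
```

This prints `rotated: -1 1 0`: the same stored numbers would describe a different physical direction after the rotation, which is exactly the mismatch that invalidates tractography.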

The first step in the coregistration procedure is to convert the images of interest to NIfTI format. The FA map appears to provide adequate contrast for coregistration with the anatomical images, so we convert both the anatomical image and the FA map:

> mrconvert anat.mif anat_coreg.nii
mrconvert: copying data... 100%
> mrconvert fa.mif fa_coreg.nii
mrconvert: copying data... 100%

The subsequent steps depend on the software package to be used.

SPM

The procedure with SPM is straightforward: set the FA map as the reference image, the anatomical as the source image, and coregister (estimate only) using the normalised mutual information cost function. Once the processing is done, the anat_coreg.nii image will have been re-oriented to match the FA map (only the header's orientation field will have been modified). The anat_coreg.nii image can then be loaded into MRView instead of the original anat.mif.

anat_coreg.nii should now be coregistered with the DWI data and any tracks generated from them.

FSL

The procedure with FSL is slightly more complex, as the FLIRT command does not produce good results if the FA map is specified as the reference. The steps required are therefore to register the FA map to the anatomical image (with the anatomical as the reference), producing a 4×4 affine transform matrix; as with SPM, the normalised mutual information cost function produces the best results. The inverse of this transform is then applied to the anatomical images using the mrtransform command included as part of MRtrix.

> flirt -ref anat_coreg.nii -in fa_coreg.nii -cost normmi -searchcost normmi -dof 6 -omat transform.txt
> mrtransform anat_coreg.nii -transform transform.txt -reference fa_coreg.nii -inverse -flipx anat_coreg.mif
mrtransform: copying image data... 100%

anat_coreg.mif should now be coregistered with the DWI data and any tracks generated from them.

How do I produce an image of the track count through each voxel?

The tracks2prob command can be used to generate an image where each voxel contains the number of tracks that pass through that voxel. For example:

> tracks2prob tracks.tck -template anat.mif track_image.mif
tracks2prob: generating track count image...  - ok

This will produce an image of the number of tracks through each voxel based on the anat.mif template image.

What are these 'mrtrix-azdj28.mif' files that keep appearing in my folder?

MRtrix will produce temporary files when data pipes are used. If one of the programs in the pipeline crashes, these files will not be deleted (see here for details). If you find one or more of these files amongst your data, you can safely delete them – assuming of course that there are no currently running MRtrix programs that may be accessing the file!
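As a hedged example, leftover temporaries matching this naming pattern can be removed from the shell (the pattern below is illustrative; double-check that no MRtrix programs are still running first):

```shell
# Remove leftover MRtrix temporary files.
# Only do this once you are sure no running MRtrix program is using them!
for f in mrtrix-*.mif; do
  [ -e "$f" ] && rm -- "$f"
done
```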

Why am I getting strange alignment issues with my Analyse format data?

If you use the Analyse image format to store your data, you may find that your FOD orientations or tractography results are not correctly aligned with the image. This is due to a limitation of the Analyse image format: it is not capable of storing the image transformation matrix. If the data were acquired in a non-axial orientation, and the DW gradient orientations were not reoriented to match the image axes, this will cause an orientation mismatch. This problem can also occur if the processing is performed in a different format that does support storing of the transform (e.g. MRtrix or NIfTI), and the results subsequently converted to Analyse format.

Another issue with the Analyse image format is the lack of a clear convention for left-right ordering. In particular, certain versions of SPM used the convention that the data were stored left to right, which is the opposite of the official Analyse application. In other versions of SPM, the convention to use for this ordering can be set by editing a configuration file. An image generated using one convention will be flipped if read assuming the opposite convention. Consequently, it is impossible to guarantee that images stored in Analyse format are correctly oriented.

For these reasons, the use of Analyse images is strongly discouraged.

Can I use the same gradient direction tables with MRtrix as I use with FSL?

In general, no. With MRtrix, the gradient directions need to be specified with respect to real (scanner) coordinates. This is different from FSL, which expects the gradient directions to be specified with respect to the image axes. Therefore, if there is any difference between these two coordinate systems, the gradient directions will be wrong, and so will the orientations inferred by dwi2tensor and csdeconv.

The only exception to this rule is when the images were acquired in a pure axial orientation: in this case the two coordinate systems will be equivalent. You can check this using mrinfo, by looking at the 'transform' entry. For example:

> mrinfo dwi.mif
************************************************
Image:               "dwi.mif"
************************************************
  Format:            MRTrix
  Dimensions:        112 x 112 x 37 x 68
  Voxel size:        2.09821 x 2.09821 x 3 x ?
  Dimension labels:  0. left->right (mm)
                     1. posterior->anterior (mm)
                     2. inferior->superior (mm)
                     3. undefined (?)
  Data type:         unsigned 16 bit integer (little endian)
  Data layout:       [ -0 -1 +2 +3 ]
  Data scaling:      offset = 0, multiplier = 1
  Comments:          anonymous
  Transform:                    1           0          -0      -115.4
                                0           1          -0      -106.6
                               -0          -0           1      -7.898
                                0           0           0           1

The top-left 3×3 part of the transform matrix specifies the rotation component of the transform. If this part is the identity matrix (as it is above), then the acquisition is pure axial, and FSL gradient tables can be used with MRtrix.
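In this pure axial case, converting an FSL-style bvecs/bvals pair into the four-column (x y z b) gradient table expected by MRtrix amounts to a transposition. A minimal sketch (the filenames are placeholders, and this assumes the identity rotation discussed above; it does not handle any sign flips a non-identity transform would require):

```shell
# Combine FSL bvecs (3 rows of N values) and bvals (1 row of N values)
# into an MRtrix gradient table: N rows of "x y z b".
# ONLY valid for a pure axial acquisition (identity rotation in mrinfo).
awk 'NR == FNR { for (i = 1; i <= NF; i++) v[FNR, i] = $i; next }
     { for (i = 1; i <= NF; i++) print v[1, i], v[2, i], v[3, i], $i }' \
    bvecs bvals > grad.txt
```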

Why are my MRtrix-generated NIfTI images displayed in a different orientation from my original images in FSLView?

When accessing any image, MRtrix will always ensure that the image axes correspond as closely as possible to the MRtrix convention. To do this, it will often be necessary to modify the transformation matrix and the associated data layout. When writing out NIfTI images, MRtrix always uses this new transformation matrix, and writes out the data in a near-axial orientation. For example, images acquired in the sagittal plane and converted to NIfTI using mriconvert are typically written out as a stack of sagittal slices, with the transformation matrix containing the appropriate rotation. When converted using MRtrix, the voxels for the same image will be written out as a stack of axial slices, along with the corresponding (but different) transformation matrix.

FSLView always displays images assuming they are stored as a stack of axial slices, but labels the axes according to the transformation matrix. This causes problems when displaying images processed with MRtrix alongside otherwise equivalent images, since their data layouts now differ. Note that these images will be displayed correctly in MRView, and interpreted correctly by any MRtrix application.

The simplest way around this is to also run your original NIfTI images through mrconvert, writing them out as NIfTI again. While it may seem odd to convert NIfTI to NIfTI, doing so rewrites the data in MRtrix's near-axial layout, so that both sets of images are then stored consistently.

How do I spatially normalise my tracks into template space?

You may want to warp tracks generated in each subject's native space into a common template space, using warps estimated using other applications (e.g. SPM or ANTS). Since normalisation packages store the warp information in their own format, it is not sensible for MRtrix to attempt to support reading the warp information directly. Instead, the idea is to generate a 'no-warp' image in template space, apply the relevant normalisation command to warp it into native space, and use the final warped image to warp the tracks into template space. This is achieved as follows:

First, a 'no-warp' image is created in template space, with each voxel containing its own real-space coordinates in template space:

> gen_unit_warp template_image.nii nowarp-[].nii

Note the use of the square brackets to instruct the application to produce a set of image volumes, rather than a single 4D image (see here for details). Also, the image format is specified as NIfTI, since this format is supported by most normalisation packages.

The set of images produced is then warped into each subject's native space, using the appropriate warp field and the package originally used to estimate it. Details on performing this step are dependent on the exact package used, and will not be discussed further here. The most important consideration is to ensure that the target image in subject space (i.e. the image whose dimensions, voxel size, etc. will be used as a template when creating the warped images) covers the entire extent of all the tracks to be warped.

Once the 'no-warp' images have been warped into subject space, each voxel contains the coordinates of its equivalent location in template space. It is then trivial to warp the tracks into template space. Assuming the warped images have been stored under the filenames warp-0.nii, warp-1.nii, warp-2.nii, this is achieved as follows:

> normalise_tracks my_tracks.tck warp-[].nii my_warped_tracks.tck