diff --git a/content/images/figure_3.png b/content/images/figure_3.png new file mode 100644 index 0000000..a9d2d4e Binary files /dev/null and b/content/images/figure_3.png differ diff --git a/content/index.ipynb b/content/index.ipynb index 79065bc..060470e 100644 --- a/content/index.ipynb +++ b/content/index.ipynb @@ -275,16 +275,12 @@ ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "# 3     |     RESULTS\n", - "\n", - "## 3.1     |     Dashboard\n", - "\n", - "To disseminate the challenge results, a web-based dashboard was developed (Figure 2, https://rrsg2020.dashboards.neurolibre.org). The landing page (Figure 2-a) showcases the relationship between the phantom and brain datasets acquired at different sites/vendors. Navigating to the phantom section leads to the information about the submitted datasets, such as the mean/std/median/CoV for each sphere, % difference from the reference values, number of scans, and temperature (Figure 2-b, left). Other options allow users to limit the results by specific versions of the phantom or the MRI manufacturer. Selecting either “By Sphere” (Figure 2-b, right) or “By Site” tabs will display whisker plots for the selected options, enabling further exploration of the datasets.\n", + "## 2.7     |     Dashboard\n", "\n", - "Returning to the home page and selecting the brain section allows exploration of information on the brain datasets (Figure 2-c, left), such as mean T1 and STD for different ROI regions, as well as selection of specific MRI manufacturers. Choosing the “By Regions” tab provides whisker plots of the datasets for the selected ROI (Figure 2-c, right), similar to the plots for the phantom." + "To disseminate the challenge results widely, a web-based dashboard was developed (Figure 2, https://rrsg2020.dashboards.neurolibre.org). 
The landing page (Figure 2-a) showcases the relationship between the phantom and brain datasets acquired at different sites/vendors. Selecting the Phantom or In Vivo icons and then clicking an ROI will display whisker plots for that region. Additional sections of the dashboard display summary statistics for both datasets, a magnitude versus complex data-fitting comparison, and hierarchical shift function analyses.\n" ] }, { @@ -299,7 +297,7 @@ "metadata": {}, "source": [ "<center>

\n", - "Figure 2 Dashboard. a) welcome page listing all the sites, the types of subject, and scanner, and the relationship between the three. Row b) shows two of the phantom dashboard tabs, and row c) shows two of the human data dashboard tabs Link: https://rrsg2020.dashboards.neurolibre.org\n", + "Figure 2 Dashboard. a) Welcome page listing all the sites, the types of subject, and scanner, and the relationship between the three. b) The phantom tab for a selected ROI, and c) The in vivo tab for a selected ROI. Link: https://rrsg2020.dashboards.neurolibre.org\n", "

" ] }, { @@ -307,24 +305,41 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## 3.2     |     Submissions\n", - "\n", + "# 3     |     RESULTS\n", "\n", - "Eighteen submissions were included in the analysis, which resulted in 38 T1 maps of the NIST/system phantom, and 56 brain T1 maps. Figure 3 illustrates all the submissions that acquired phantom data (Figure 3-a) and human data (Figure 3-b), the number of scanners used for each submission, and the number of T1 mapping datasets. It should be noted that these numbers include a subset of measurements where both complex and magnitude-only data from the same acquisition were used to fit T1 maps, thus the total number of unique acquisitions is lower than the numbers reported above (27 for phantom data and 44 for human data). The datasets were collected on three MRI manufacturers (Siemens, GE, Philips) and were acquired at 3T [^three-t], except for one dataset acquired at 0.35T (the ViewRay MRidian MR-linac) . To showcase the heterogeneity of the actual T1 map data from the independently-implemented submissions, Figure 4 displays six T1 maps of the phantoms submitted to the challenge.\n", + "Figure 3 presents a comprehensive overview of the challenge results as violin plots depicting the inter- and intra-submission comparisons in both the phantom (a) and human (b) datasets. Inter-submission coefficients of variation (CoV) were computed by selecting a single T1 map submitted by each challenge participant and calculating the CoV across those maps. For the phantom (Figure 3-a), the average inter-submission CoV for the first five spheres, which represent the expected T1 value range in the human brain (approximately 500 to 2000 ms), was 6.1%. After excluding outliers from two sites with known issues for sphere 4 (its signal null falls near one of the TIs), the mean inter-submission CoV was reduced to 4.1%. 
One participant (submission 6, Figure 1) measured T1 maps using a consistent protocol at 7 different sites, and the mean intra-submission CoV across the first five spheres for this submission was 2.9%.\n", "\n", "\n", - "```{image} images/figure_3_full.png\n", + "For the human datasets, the inter-submission CoVs for independently-implemented imaging protocols were 5.9% for the genu, 10.6% for the splenium, 16% for cortical GM, and 22% for deep GM. One participant (submission 18, Figure 1) acquired a large dataset (13 individuals) on three scanners from two vendors, and the intra-submission CoVs for this submission were 3.2% for the genu, 3.1% for the splenium, 6.9% for cortical GM, and 7.1% for deep GM.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{image} images/figure_3.png\n", "---\n", "width: 900px\n", "name: fig3\n", "align: center\n", "---\n", "```\n", + "\n", "<center>

\n", - "Figure 3 Complete list of the datasets submitted to the challenge. Submissions that included phantom data are shown in a), and those that included human brain data are shown in b). Submissions were assigned numbers to keep track of which submissions included both phantom and human data. Some submissions included datasets acquired on multiple scanners. For the phantom (panel a), each submission acquired all their data using a single phantom, however some researchers shared the same physical phantom with each other (same color). Some additional details about the datasets are included in the T1 maps column, if relevant. Note that for complex datasets in the magnitude/phase format, T1 maps were calculated both using magnitude-only data and complex-data, but these were from the same measurement (branching off arrow). \n", - "

\n", + "Figure 3 Summary of results of the challenge as violin plots displaying the inter- and intra- submission dataset comparisons for phantoms (a) and human brains (b). Interactive figure available at: https://preprint.neurolibre.org/10.55458/neurolibre.00014/.\n", + "

\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", - "Of these datasets, several submissions went beyond the minimum acquisition and acquired additional datasets using the ISMRM/NIST phantom, such as a traveling phantom (7 scanners/sites), scan-rescan, same-day rescans on two MRIs, short TR vs long TR, and 4 point TI vs 14 point TI. For humans, one site acquired 13 subjects on two scanners (different manufacturers), one site acquired 6 subjects, and one site acquired a subject using two different head coils (20 channels vs. 64 channels)." + "A scatterplot of the T1 data for all submissions and their ROIs is shown in Figure 4 (phantom: a-c; human brains: d-f). The NIST T1 data are shown in Figure 4 a-c, and the same ROI T1 values are presented in each plot with different axis types (linear, log, and error) to better visualize the results. Figure 4-a shows good agreement for this dataset in comparison with the temperature-corrected reference T1 values. However, this trend did not persist for low T1 values (T1 < 100-200 ms), as seen in the log-log plot (Figure 4-b), which was expected because the imaging protocol is optimized for human water-based T1 values (T1 > 500 ms). Higher variability is seen at long T1 values (T1 ~ 2000 ms) in Figure 4-a. Errors exceeding 10% are observed in the phantom spheres with T1 values below 300 ms (Figure 4-c), and 3-4 measurements with outlier values exceeding 10% error were observed in the human water-based tissue range (~500-2000 ms).\n", "\n", "\n", + "Figure 4 d-f displays the scatterplot data for the human datasets submitted to this challenge, showing mean and standard deviation T1 values from the WM (genu and splenium) and GM (cerebral cortex and deep GM) ROIs. 
Mean WM T1 values across all submissions were 828 ± 38 ms in the genu and 852 ± 49 ms in the splenium, and mean GM T1 values were 1548 ± 156 ms in the cortex and 1188 ± 133 ms in the deep GM, with less variations overall in WM compared to GM, possibly due to better ROI placement and less partial voluming in WM. The lower standard deviations for the ROIs of human database ID site 9 (submission 18, Figure 1) are due to good slice positioning, cutting through the AC-PC line and the genu for proper ROI placement, particularly for the corpus callosum and deep GM." ] }, { @@ -518,254 +533,6 @@ ] } ], - "source": [ - "# PYTHON CODE\n", - "# Module imports\n", - "import matplotlib.pyplot as plt\n", - "from PIL import Image\n", - "from matplotlib.image import imread\n", - "import scipy.io\n", - "import plotly.graph_objs as go\n", - "import numpy as np\n", - "from plotly import __version__\n", - "from plotly.offline import init_notebook_mode, iplot, plot\n", - "config={'showLink': False, 'displayModeBar': False, 'responsive': True}\n", - "\n", - "init_notebook_mode(connected=True)\n", - "\n", - "from IPython.display import display, HTML\n", - "\n", - "import os\n", - "import markdown\n", - "import random\n", - "from scipy.integrate import quad\n", - "\n", - "import warnings\n", - "warnings.filterwarnings('ignore')\n", - "\n", - "xAxis = np.linspace(0,256*3-1, num=256*3)\n", - "yAxis = np.linspace(0,256*2-1, num=256*2)\n", - "\n", - "# T1 maps\n", - "im_2_padded = np.pad(im_2,32)\n", - "images_1 = np.concatenate((im_1, im_5, im_3), axis=1)\n", - "images_2 = np.concatenate((im_4, im_2_padded, im_6), axis=1)\n", - "images = np.concatenate((images_2, images_1), axis=0)\n", - "\n", - "# TI_1 maps\n", - "TI_1_2_padded = np.pad(TI_1_2,32)\n", - "TI_1_images_1 = np.concatenate((TI_1_1, TI_1_5, TI_1_3), axis=1)\n", - "TI_1_images_2 = np.concatenate((TI_1_4, TI_1_2_padded, TI_1_6), axis=1)\n", - "TI_1_images = np.concatenate((TI_1_images_2, TI_1_images_1), axis=0)\n", - "\n", - "# 
TI_2 maps\n", - "TI_2_2_padded = np.pad(TI_2_2,32)\n", - "TI_2_images_1 = np.concatenate((TI_2_1, TI_2_5, TI_2_3), axis=1)\n", - "TI_2_images_2 = np.concatenate((TI_2_4, TI_2_2_padded, TI_2_6), axis=1)\n", - "TI_2_images = np.concatenate((TI_2_images_2, TI_2_images_1), axis=0)\n", - "\n", - "# TI_3 maps\n", - "TI_3_2_padded = np.pad(TI_3_2,32)\n", - "TI_3_images_1 = np.concatenate((TI_3_1, TI_3_5, TI_3_3), axis=1)\n", - "TI_3_images_2 = np.concatenate((TI_3_4, TI_3_2_padded, TI_3_6), axis=1)\n", - "TI_3_images = np.concatenate((TI_3_images_2, TI_3_images_1), axis=0)\n", - "\n", - "# TI_4 maps\n", - "TI_4_2_padded = np.pad(TI_4_2,32)\n", - "TI_4_images_1 = np.concatenate((TI_4_1, TI_4_5, TI_4_3), axis=1)\n", - "TI_4_images_2 = np.concatenate((TI_4_4, TI_4_2_padded, TI_4_6), axis=1)\n", - "TI_4_images = np.concatenate((TI_4_images_2, TI_4_images_1), axis=0)\n", - "\n", - "trace1 = go.Heatmap(x = xAxis,\n", - " y = yAxis_1,\n", - " z=images,\n", - " zmin=0,\n", - " zmax=3000,\n", - " colorscale='viridis',\n", - " colorbar={\"title\": 'T1 (ms)',\n", - " 'titlefont': dict(\n", - " family='Times New Roman',\n", - " size=26,\n", - " )\n", - " },\n", - " xaxis='x2',\n", - " yaxis='y2',\n", - " visible=True)\n", - "\n", - "trace2 = go.Heatmap(x = xAxis,\n", - " y = yAxis_1,\n", - " z=TI_1_images,\n", - " zmin=0,\n", - " zmax=3000,\n", - " colorscale='gray',\n", - " colorbar={\"title\": 'T1 (ms)',\n", - " 'titlefont': dict(\n", - " family='Times New Roman',\n", - " size=26,\n", - " color='white'\n", - " )\n", - " },\n", - " xaxis='x2',\n", - " yaxis='y2',\n", - " visible=False)\n", - "\n", - "trace3 = go.Heatmap(x = xAxis,\n", - " y = yAxis_1,\n", - " z=TI_2_images,\n", - " zmin=0,\n", - " zmax=3000,\n", - " colorscale='gray',\n", - " colorbar={\"title\": 'T1 (ms)',\n", - " 'titlefont': dict(\n", - " family='Times New Roman',\n", - " size=26,\n", - " color='white'\n", - " )\n", - " },\n", - " xaxis='x2',\n", - " yaxis='y2',\n", - " visible=False)\n", - "\n", - "trace4 = 
go.Heatmap(x = xAxis,\n", - " y = yAxis_1,\n", - " z=TI_3_images,\n", - " zmin=0,\n", - " zmax=3000,\n", - " colorscale='gray',\n", - " colorbar={\"title\": 'T1 (ms)',\n", - " 'titlefont': dict(\n", - " family='Times New Roman',\n", - " size=26,\n", - " color='white'\n", - " )\n", - " },\n", - " xaxis='x2',\n", - " yaxis='y2',\n", - " visible=False)\n", - "\n", - "trace5 = go.Heatmap(x = xAxis,\n", - " y = yAxis_1,\n", - " z=TI_4_images,\n", - " zmin=0,\n", - " zmax=3000,\n", - " colorscale='gray',\n", - " colorbar={\"title\": 'T1 (ms)',\n", - " 'titlefont': dict(\n", - " family='Times New Roman',\n", - " size=26,\n", - " color='white'\n", - " )\n", - " },\n", - " xaxis='x2',\n", - " yaxis='y2',\n", - " visible=False)\n", - "\n", - "data=[trace1, trace2, trace3, trace4, trace5]\n", - "\n", - "updatemenus = list([\n", - " dict(active=0,\n", - " x = 0.4,\n", - " xanchor = 'left',\n", - " y = -0.15,\n", - " yanchor = 'bottom',\n", - " direction = 'up',\n", - " font=dict(\n", - " family='Times New Roman',\n", - " size=16\n", - " ),\n", - " buttons=list([ \n", - " dict(label = 'T1 maps',\n", - " method = 'update',\n", - " args = [{'visible': [True, False, False, False, False],\n", - " 'showscale': True,},\n", - " ]),\n", - " dict(label = 'TI = 50 ms',\n", - " method = 'update',\n", - " args = [\n", - " {\n", - " 'visible': [False, True, False, False, False],\n", - " 'showscale': True,},\n", - " ]),\n", - " dict(label = 'TI = 400 ms',\n", - " method = 'update',\n", - " args = [{'visible': [False, False, True, False, False],\n", - " 'showscale': True,},\n", - " ]),\n", - " dict(label = 'TI = 1100 ms',\n", - " method = 'update',\n", - " args = [{'visible': [False, False, False, True, False],\n", - " 'showscale': True,},\n", - " ]),\n", - " dict(label = 'TI ~ 2500 ms',\n", - " method = 'update',\n", - " args = [{'visible': [False, False, False, False, True],\n", - " 'showscale': True,},\n", - " ]),\n", - " ])\n", - " )\n", - "])\n", - "\n", - "layout = dict(\n", - " 
width=960,\n", - " height=649,\n", - " margin = dict(\n", - " t=40,\n", - " r=50,\n", - " b=10,\n", - " l=50),\n", - " xaxis = dict(range = [0,256*3-1], autorange = False,\n", - " showgrid = False, zeroline = False, showticklabels = False,\n", - " ticks = '', domain=[0, 1]),\n", - " yaxis = dict(range = [0,256*2-1], autorange = False,\n", - " showgrid = False, zeroline = False, showticklabels = False,\n", - " ticks = '', domain=[0, 1]),\n", - " xaxis2 = dict(range = [0,256*3-1], autorange = False,\n", - " showgrid = False, zeroline = False, showticklabels = False,\n", - " ticks = '', domain=[0, 1]),\n", - " yaxis2 = dict(range = [0,256*2-1], autorange = False,\n", - " showgrid = False, zeroline = False, showticklabels = False,\n", - " ticks = '', domain=[0, 1], anchor='x2'),\n", - " showlegend = False,\n", - " autosize = False,\n", - " updatemenus=updatemenus\n", - ")\n", - "\n", - "\n", - "fig = dict(data=data, layout=layout)\n", - "\n", - "#iplot(fig, filename = 'basic-heatmap', config = config)\n", - "plot(fig, filename = 'figure2.html', config = config)\n", - "display(HTML('figure2.html'))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "

\n", - "Figure 4 Example T1 maps that were generated from submitted data. Note the differences in acquisitions (e.g. FOV (top middle), orientation (bottom right, gradient distortion correction (top left and right) and resulting artifacts in the T1 maps (e.g. ghosting (bottom left), ringing (bottom middle), noise profiles (top left and bottom right), deformation/slice mispositioning (top right)) resulting from the independently-implemented acquisition protocols.\n", - "

" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 3.2     |     Phantom\n", - "\n", - "An overview of the T1 results for the submitted ISMRM/NIST phantom datasets is displayed in Figure 5. The same data is presented in each column with different axes types (linear, log, and error) to better visualize the results. Figure 5-a shows good agreement (slope = 0.98, intercept = -14 ms) for this dataset in comparison to the reference T1 values. However, this trend did not persist for low T1 values (T1 smaller than 100-200 ms), as seen in the log-log plot (Figure 5-b), which was expected because the imaging protocol is optimized for human water-based T1 values (T1 higher than 500 ms). Errors exceeding 10% are observed for T1 values of phantom spheres below this threshold (Figure 5-c). These trends are observed for the entire-dataset plots as well (Figure 5 d-f). More variability is seen in Figure 5-d around the identity diagonal at very high T1 (T1 ~ 2000 ms) than towards the WM-GM values (T1 ~ 600-1400 ms), which is less apparent in the log-log plot (Figure 5-e). In addition to the low T1 values exceeding the 10% error threshold (Figure 5-f), a few measurements with outlier values exceeding 10% error (`~`3-4) were observed in the human water-based tissue range (`~`500-2000 ms)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [ - "report_output", - "hide_input" - ] - }, - "outputs": [], "source": [ "\n", "from os import path\n", @@ -1417,602 +1184,10 @@ "display(HTML('figure3.html'))" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "

\n", - "Figure 5 Measured mean T1 values vs. temperature-corrected NIST reference values of the phantom spheres presented as linear plots (a,d), log-log plots (b,e), and plots of the error relative to reference T1 value. Plots (a–c) are of an example single dataset (T1 map dataset 6-1-1 in Figure 3-a), whereas plots (d–f) are of all acquired datasets. The dashed lines in plots (c,f) represent a ±10 % error.\n", - "

" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Inter-submission coefficients of variation (CoV) were calculated by selecting ad hoc one single T1 map submitted per challenge participant [^inter-cov] and calculating the CoV of the T1 means per sphere. The average inter-submission CoV across the first five spheres representing the expected T1 value range in the human brain (`~` 500 to `~` 2000 ms) was 6.1 % (sphere 1 = 4.7 %, sphere 2 = 3.1 %, sphere 3 = 6.3 %, sphere 4 = 12.8 %, sphere 5 = 7.3 %). Two sites were clear outliers that had particular issues for sphere 4, likely because the signal null for this particular T1 value (`~`600 ms) is near the second TI and these outliers incorrectly flip the signal at the near-null TI during magnitude-only data fitting [^magnitude-fit]; by removing these outliers, the mean inter-submission CoV reduces to 4.1 % (sphere 1 = 5.4 %, sphere 2 = 3. 5%, sphere 3 = 2.5 %, sphere 4 = 4.2 %, sphere 5 = 4.9 %). One participant measured T1 maps with one phantom using one implemented protocol (identical imaging parameters) at 7 different sites on systems from a single manufacturer at 3T, and so a mean intra-submission CoV across the first five spheres for this case was calculated to be 2.9 % (sphere 1 = 4.9 %, sphere 2 = 3.5 %, sphere 3 = 2.6 %, sphere 4 = 2.0 %, sphere 5 = 1.6 %)." 
- ] - }, { "cell_type": "code", "execution_count": null, - "metadata": { - "tags": [ - "remove_input", - "report_output" - ] - }, - "outputs": [], - "source": [ - "import pandas as pd\n", - "import numpy as np\n", - "import plotly.graph_objects as go\n", - "from plotly.offline import init_notebook_mode, iplot\n", - "import plotly.express as px\n", - "from plotly.subplots import make_subplots\n", - "\n", - "init_notebook_mode(connected=True) \n", - "\n", - "if build == 'latest':\n", - " df = pd.read_pickle('analysis/databases/3T_NIST_T1maps_database.pkl')\n", - "elif build=='archive':\n", - " df = pd.read_pickle(data_path[0] + '/analysis/databases/3T_NIST_T1maps_database.pkl')\n", - "\n", - "def pctdif(a1,a2):\n", - " return list(np.abs((a1 - a2)/((a1+a2)/2))*100) \n", - "\n", - "if build == 'latest':\n", - " df = pd.read_pickle('analysis/databases/3T_NIST_T1maps_database.pkl')\n", - "elif build=='archive':\n", - " df = pd.read_pickle(data_path[0] + '/analysis/databases/3T_NIST_T1maps_database.pkl')\n", - "\n", - "cc = pd.DataFrame()\n", - "dd = pd.DataFrame()\n", - "fig = go.Figure()\n", - "kek = np.transpose([str(ii).split('.') for ii in list(df.index)])\n", - "u, c = np.unique(kek[0], return_counts=True)\n", - "for ii in range(len(c)):\n", - " if c[ii] > 1:\n", - " # Iterate over all the sites with multiple entries \n", - " dec = '{:0>3}'.format(c[ii])\n", - " site_all = df[(df.index>=float(f\"{u[ii]}.{'001'}\")) & (df.index<=float(f\"{u[ii]}.{dec}\"))]\n", - " if np.unique(site_all['Data type']).size > 1:\n", - " # If a site has complex & magnitute\n", - " cplx = site_all[site_all['Data type'] == \"Complex\"]\n", - " mag = site_all[site_all['Data type'] == \"Magnitude\"]\n", - " if len(cplx) == len(mag):\n", - " # Confirm pairing \n", - " for jj in range(len(cplx)):\n", - " # Create scatter pairs for each submission per site \n", - " xper = [cplx.iloc[jj][f\"T1 - NIST sphere {sphr+1}\"] for sphr in range(5)]\n", - " yper = [mag.iloc[jj][f\"T1 - NIST sphere 
{sphr+1}\"] for sphr in range(5)]\n", - " aa = pd.concat([pd.DataFrame(data={'magnitude':list(yper[i][:]),'complex':list(xper[i][:]),'dif':pctdif(xper[i][:],yper[i][:]),'sphere':[f\"Sphere {i+1}\"]*len(xper[i][:]),'site':[mag.iloc[jj]['site name']]*len(xper[i][:])}) for i in range(5)],\n", - " ignore_index=True)\n", - " xdat = np.concatenate(xper).ravel().tolist()\n", - " ydat = np.concatenate(yper).ravel().tolist()\n", - " difdat = np.array(pctdif(np.array(xdat),np.array(ydat)))\n", - " difdat = np.interp(difdat, (difdat.min(), difdat.max()), (2, 30))\n", - " fig.add_trace(go.Scatter(x=ydat,y=xdat,marker=dict(size=list(difdat.astype(int))),mode=\"markers\",name=mag.iloc[jj]['site name']))\n", - " cc = pd.concat([aa,cc],ignore_index=True)\n", - "\n", - "fig.update_layout(shapes = [{'type': 'line', 'yref': 'paper', 'xref': 'paper', 'y0': 0, 'y1': 1, 'x0': 0, 'x1': 1,'layer':'below'}])\n", - "fig.update_traces(opacity=0.8)\n", - "fig.update_layout(margin=dict(l=0, r=0, t=0, b=30),paper_bgcolor = \"rgba(0,0,0,0)\", plot_bgcolor=\"rgba(0,0,0,0)\", legend_title=\"\")\n", - "fig.update_yaxes(color='black',gridwidth=1, gridcolor='rgba(0,0,0,0.2)', title=\"T1 (ms) complex-only data\",showline=True, linewidth=2,linecolor='black')\n", - "fig.update_xaxes(gridwidth=1, gridcolor='rgba(0,0,0,0.2)', title=\"T1 (ms) magnitude-only data\",showline=True, linewidth=2,linecolor='black')\n", - "fig.add_annotation(x=2209.9, y=2331.6,\n", - " text=\"122ms (5%) difference\",\n", - " showarrow=True,\n", - " arrowhead=2)\n", - "fig.add_annotation(x=1885.9, y=1705,\n", - " text=\"163ms (9%) difference\",\n", - " showarrow=True,\n", - " arrowhead=2,yanchor=\"bottom\",ay=40)\n", - "fig.add_annotation(x=1324.9, y=1297.9,\n", - " text=\"16ms (2%) difference\",\n", - " showarrow=True,\n", - " arrowhead=2,yanchor=\"bottom\",xanchor='left',ay=40)\n", - "fig.add_annotation(x=739.8, y=639.3,\n", - " text=\"48ms (7%) difference\",\n", - " showarrow=True,\n", - " 
arrowhead=2,yanchor=\"bottom\",xanchor='left',ay=40)\n", - "fig.update_layout(height=600,width=960,yaxis_range=[500,2500],xaxis_range=[500,2500])\n", - "\n", - "plot(fig, filename = 'figure4.html', config = config)\n", - "display(HTML('figure4.html')) " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "

\n", - "Figure 6 Scatter plot comparing complex and magnitude-only fitted data across the first five spheres representing the expected T1 value range in the human brain (~500 to ~2000 ms). The markers are color-coded based on the implementation site, while their size represents the percent difference calculated between the two fitting methods (magnitude or complex) for that datapoint. \n", - "

" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Figure 6 compares the mean T1 values measured using complex and magnitude-only data for the 11 datasets where authors provided both in their submissions. Note that these datasets are from the same acquisition, not two unique acquisitions. The scatter plot shows that for the range of T1 values expected in the brain (T1 > 500 ms), there is almost no difference in fitted T1 values between the two types of data (the highest outlier indicates `~`9 ms difference). However, for T1 values less than `~`250 ms large errors [^hsf-tab] were present, likely due to the data acquisition imaging protocol (specifically, TI range) being sub-optimal for this range of T1 values." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [ - "remove_input", - "report_output" - ] - }, - "outputs": [], - "source": [ - "import pandas as pd\n", - "import numpy as np\n", - "import plotly.graph_objects as go\n", - "from plotly.offline import init_notebook_mode, iplot\n", - "import plotly.express as px\n", - "from plotly.subplots import make_subplots\n", - "\n", - "init_notebook_mode(connected=True) \n", - "\n", - "def explode_mount_traces(xp,mosaic,rw,cl,shw):\n", - " traces = []\n", - " for trace in range(len(xp[\"data\"])):\n", - " if not shw:\n", - " xp[\"data\"][trace]['showlegend'] = False\n", - " traces.append(xp[\"data\"][trace])\n", - " for trc in traces:\n", - " mosaic.append_trace(trc, row=rw, col=cl)\n", - " return mosaic \n", - " \n", - "\n", - "if build == 'latest':\n", - " sites = pd.read_pickle(\"./R_HSF_sites.pkl\")\n", - " trend = pd.read_pickle(\"./R_HSF_trend.pkl\")\n", - "elif build=='archive':\n", - " sites = pd.read_pickle(data_path[0] + '/R_HSF_sites.pkl')\n", - " trend = pd.read_pickle(data_path[0] + '/R_HSF_trend.pkl')\n", - "\n", - "def get_hsf_pair(sph,mount,rw1,cl1,rw2,cl2,shw):\n", - " ind = px.line(sites[sites['sphere']==sph], x=\"quantile\", 
y=\"pct\",color='vendor', line_group=\"id\", line_shape=\"linear\",\n", - " markers=True ,title='Sphere 1',color_discrete_sequence=px.colors.qualitative.Vivid)\n", - " ind.update_traces(marker_size=9,line_width=3,opacity=0.6,line_smoothing=1.3)\n", - " trnd = px.line(trend[trend['sphere']==sph], x=\"quantile\", y=\"difference\",error_y=\"ymax\",error_y_minus=\"ymin\",\n", - " animation_frame=\"sphere\", color_discrete_sequence=[\"black\"]*126,\n", - " markers=[True,True])\n", - " trnd.update_traces(marker_size=7, marker_symbol=\"square\",line_width=3,opacity=0.9,line_smoothing=1.3)\n", - " mount = explode_mount_traces(ind,mount,rw1,cl1,shw)\n", - " mount = explode_mount_traces(trnd,mount,rw2,cl2,shw)\n", - " return mount\n", - "\n", - " \n", - "fig = make_subplots(rows=7, cols=3,\n", - " specs=[[{\"rowspan\": 2}, {\"rowspan\": 2},{\"rowspan\": 2}],\n", - " [None,None,None],\n", - " [{\"rowspan\": 1}, {\"rowspan\": 1},{\"rowspan\": 1}],\n", - " [None,None,None],\n", - " [{\"rowspan\": 2}, {\"rowspan\": 2},{\"rowspan\": 2}],\n", - " [None,None,None],\n", - " [{\"rowspan\": 1}, {\"rowspan\": 1},{\"rowspan\": 1}]],\n", - " shared_xaxes=True,\n", - " vertical_spacing=0.03,\n", - " subplot_titles=(\"Sphere 1\", \"Sphere 2\", \"Sphere 3\", \"\",\"\",\"\", \"Sphere 4\", \"Sphere 5\", \"Sphere 6\"))\n", - "\n", - "# Populate subplots.\n", - "for ii in [1,2,3]:\n", - " fig = get_hsf_pair(ii,fig,1,ii, 3,ii,False)\n", - " fig = get_hsf_pair(ii+3,fig,5,ii,7,ii,False)\n", - "\n", - "\n", - "# A little hack to show the legend once\n", - "fig = get_hsf_pair(6,fig,5,3,7,3,True)\n", - "\n", - "dticks_notxt=dict(tickmode = 'array',tickvals = [.1,.2 ,.3 ,.4 , .5, .6, .7,.8, .9],ticktext = ['q1', 'q2', 'q3', 'q4', 'q5','q6', 'q7', 'q8','q9'],showticklabels=False)\n", - "dticks=dict(tickmode = 'array',tickvals = [.1,.2 ,.3 ,.4 , .5, .6, .7,.8, .9],ticktext = ['q1', 'q2', 'q3', 'q4', 'q5','q6', 'q7', 'q8','q9'],showticklabels=True)\n", - "\n", - "rng_pct_1 = dict(range=[-45,20]) \n", - 
"rng_pct_2 = dict(range=[-45,20]) \n", - "rng_ms = dict(range=[-150,150]) \n", - "rng_ms2 = dict(range=[-70,70])\n", - "errp = dict(title=\"% error\")\n", - "errms = dict(title=\"∆T1 (ms)\")\n", - " \n", - "fig.update_layout(yaxis1 = errp,yaxis4 = errms,yaxis7 = errp,yaxis10 = errms)\n", - "# Update y ranges \n", - "fig.update_layout(yaxis4 = rng_ms,yaxis5 = rng_ms,yaxis6 = rng_ms,yaxis10 = rng_ms2,yaxis11 = rng_ms2,yaxis12 = rng_ms2,\n", - " yaxis1 = rng_pct_1,yaxis2 = rng_pct_1,yaxis3 = rng_pct_1,\n", - " yaxis7 = rng_pct_2,yaxis8 = rng_pct_2,yaxis9 = rng_pct_2)\n", - "\n", - "fig.add_annotation(x=0.07, y=1,\n", - " text=\"T1 ~= 1900 ms\",\n", - " showarrow=False,\n", - " xref = 'paper',\n", - " yref=\"paper\")\n", - "\n", - "fig.add_annotation(x=0.5, y=1,\n", - " text=\"T1 ~= 1400 ms\",\n", - " showarrow=False,\n", - " xref = 'paper',\n", - " yref=\"paper\")\n", - "\n", - "fig.add_annotation(x=0.92, y=1,\n", - " text=\"T1 ~= 980 ms\",\n", - " showarrow=False,\n", - " xref = 'paper',\n", - " yref=\"paper\")\n", - "\n", - "fig.add_annotation(x=0.07, y=0.4,\n", - " text=\"T1 ~= 700 ms\",\n", - " showarrow=False,\n", - " xref = 'paper',\n", - " yref=\"paper\")\n", - "\n", - "fig.add_annotation(x=0.5, y=0.4,\n", - " text=\"T1 ~= 500 ms\",\n", - " showarrow=False,\n", - " xref = 'paper',\n", - " yref=\"paper\")\n", - "\n", - "fig.add_annotation(x=0.92, y=0.4,\n", - " text=\"T1 ~= 350 ms\",\n", - " showarrow=False,\n", - " xref = 'paper',\n", - " yref=\"paper\")\n", - "\n", - "fig.update_yaxes(zeroline=True,zerolinecolor=\"red\",zerolinewidth=3)\n", - "fig.update_layout(height=1000, width=960)\n", - "fig.update_layout(xaxis1 = dticks_notxt,xaxis2 = dticks_notxt,xaxis3 = dticks_notxt,xaxis4 = dticks,xaxis5 = dticks, xaxis6 = dticks)\n", - "fig.update_layout(xaxis7 = dticks_notxt,xaxis8 = dticks_notxt,xaxis9 = dticks_notxt,xaxis10 = dticks,xaxis11 = dticks, xaxis12 = dticks)\n", - "fig.update_yaxes(gridwidth=1, gridcolor='rgba(0,0,0,0.4)')\n", - 
"fig.update_xaxes(gridwidth=1, gridcolor='rgba(0,0,0,0.1)')\n", - "fig.update_layout(margin=dict(t=30),paper_bgcolor = \"rgba(0,0,0,0)\", plot_bgcolor=\"rgba(220,220,220,0.1)\", legend_title=\"\")\n", - "\n", - "fig.update_layout(legend_traceorder=\"grouped\")\n", - "\n", - "plot(fig, filename = 'figure5.html', config = config)\n", - "display(HTML('figure5.html')) " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "

\n", - "Figure 7 Hierarchical analysis of T1 estimation error across voxel-wise distributions in spheres 1-6, using 20 measurements split into 9 quantiles (q1-q9). Each panel shows individual shift functions for each measurement (colored by vendor) in the top row, which characterize the percent measurement error as either overestimation or underestimation. The bottom row in each panel (black markers) displays the average trend of bootstrapped differences at each decile in milliseconds. The trends highlight any notable common patterns at the respective decile, such as a 39 ms median underestimation trend in Sphere 3. Straight lines in the top row indicate a homogeneous measurement error across voxels. High-density intervals not intersecting the zero crossing indicate a significant trend at the respective decile. \n", - "

" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "

​\n", - "​​The direction of the measurement error in the phantom is influenced by both the measurement site and the reference value, as indicated by the individual shift functions (Figure 7). For example, at sphere 1 (~2000 ms), nearly half of the measurements (20 shown in total) are positioned on each side of the zero-crossing. On the other hand, for sphere 3 (~1s), nearly all the measurements show underestimation as shift functions are located below the zero-crossing. Bootstrapped differences capture these trends, indicating a dominant overestimation at sphere 1 (median difference of +17 ms) and underestimation at sphere 3 (median difference of -39 ms). High-density intervals associated with these median differences do not indicate a common pattern for the former (intervals cross zero), whereas they reveal a notable underestimation trend at sphere 3 (intervals do not include zero). A similar common pattern is also observed for sphere 2 (median overestimation of 35 ms). In addition, the shape of individual shift functions conveys information about how voxel-wise distributions differ. For example, some outliers are observed with these graphs (two by looking at sphere 4). After looking at the T1 maps, these were not due to mispositioning of ROIs but due to the acquired T1 mapping data not fitting well for all spheres relative to the reference values. This produced the worse agreement for sphere 4 (these outliers for sphere 4 can also be seen in Figure 5-f). Lastly, the spread of shift functions around the zero-crossing does not indicate vendor-specific clustering for the selected measurements and reference values.\n", - "

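The shift-function analysis described above (decile differences with bootstrapped intervals) can be sketched in a few lines. This is a minimal illustration, not the challenge's actual analysis code: the two voxel-wise T1 samples are made up, and plain percentile-bootstrap intervals stand in for the high-density intervals reported in Figure 7.

```python
import numpy as np

rng = np.random.default_rng(0)

def shift_function(x, y, n_boot=1000):
    """Decile differences (x - y) with bootstrapped 95% intervals."""
    deciles = np.arange(10, 100, 10)  # 10th ... 90th percentiles
    diff = np.percentile(x, deciles) - np.percentile(y, deciles)
    boot = np.empty((n_boot, deciles.size))
    for b in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)
        yb = rng.choice(y, size=y.size, replace=True)
        boot[b] = np.percentile(xb, deciles) - np.percentile(yb, deciles)
    lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
    return diff, lo, hi

# Hypothetical voxel-wise samples: a measurement underestimating a ~1000 ms sphere
measured = rng.normal(960, 40, size=500)
reference = rng.normal(1000, 40, size=500)
diff, lo, hi = shift_function(measured, reference)
underestimation = np.all(hi < 0)  # intervals excluding zero flag a common trend
```

A flat set of decile differences corresponds to the straight individual shift functions in Figure 7, i.e., a measurement error that is homogeneous across voxels.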
" - ] - }, - { - "cell_type": "markdown", "metadata": {}, - "source": [ - "## 3.3     |     Human\n", - "\n", - "

\n", - "Figure 8 summarizes the results from human datasets submitted to this challenge, showing mean and standard deviation T1 values from the WM (genu) and GM (cerebral cortex) ROIs. The top plot collapses all datasets for each site, while the bottom plot shows each dataset separately. Mean WM T1 values across all submissions was 828 ± 38 ms in the genu and 854 ± 50 ms in the splenium, and mean GM T1 values were 1548 ± 156 ms in the cortex and 1188 ± 133 ms in the deep GM, with less variations overall in WM compared to GM, possibly due to better ROI placement and less partial voluming in WM. Inter-submission coefficients of variation (CoV) for independently-implemented imaging protocols were calculated using one T1 map measurement per submission that most closely matched the proposed protocol, and were 6.0% for genu, 11% for splenium, 16% for cortical GM and 22% for deep GM. One site (site 9) measured multiple subjects on three scanners using two different vendors, and so intra-submission CoVs for these centrally-implemented protocols were calculated over acquired T1 maps from this site, and were 2.9% for genu, 3.5% for splenium, 6.9 % for cortical GM and 7.8% for deep GM. This particular acquisition had an ideal slice positioning, cutting through the AC-PC line and the genu for proper ROI placement, particularly for the corpus callosum and deep GM (Supplementary Figure 1 - top left).\n", - "

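The inter- and intra-submission coefficients of variation reported above boil down to std/mean over the relevant set of T1 estimates. A minimal sketch with made-up genu values (the real per-submission estimates live in the challenge database):

```python
import numpy as np

def cov_percent(t1_values):
    """Coefficient of variation (std/mean) in percent."""
    t1_values = np.asarray(t1_values, dtype=float)
    return 100 * t1_values.std(ddof=1) / t1_values.mean()

# Hypothetical mean genu T1 (ms), one value per independent submission
inter_submission = [828, 790, 865, 810, 850, 795, 880]
# Hypothetical mean genu T1 (ms) from one site's three scanners
intra_submission = [835, 850, 828]

inter_cov = cov_percent(inter_submission)  # ~4.2%
intra_cov = cov_percent(intra_submission)  # ~1.3%
```

With independently implemented protocols contributing one estimate each, the inter-submission CoV absorbs protocol implementation differences on top of scanner and subject variability, which is why it exceeds the intra-submission CoV.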
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [ - "remove_output", - "hide_input" - ] - }, - "outputs": [], - "source": [ - "from os import path\n", - "import os\n", - "\n", - "if build == 'latest':\n", - " if path.isdir('analysis')== False:\n", - " !git clone https://github.com/rrsg2020/analysis.git\n", - " dir_name = 'analysis'\n", - " analysis = os.listdir(dir_name)\n", - "\n", - " for item in analysis:\n", - " if item.endswith(\".ipynb\"):\n", - " os.remove(os.path.join(dir_name, item))\n", - " if item.endswith(\".md\"):\n", - " os.remove(os.path.join(dir_name, item))\n", - "elif build == 'archive':\n", - " if os.path.isdir(Path('../../data')):\n", - " data_path = ['../../data/rrsg-2020-neurolibre']\n", - " else:\n", - " # define data requirement path\n", - " data_req_path = os.path.join(\"..\", \"binder\", \"data_requirement.json\")\n", - " # download data\n", - " repo2data = Repo2Data(data_req_path)\n", - " data_path = repo2data.install() \n", - "\n", - "# Imports\n", - "import warnings\n", - "warnings.filterwarnings(\"ignore\")\n", - "\n", - "from pathlib import Path\n", - "import pandas as pd\n", - "import nibabel as nib\n", - "import numpy as np\n", - "\n", - "from analysis.src.database import *\n", - "import matplotlib.pyplot as plt\n", - "plt.style.use('analysis/custom_matplotlibrc')\n", - "plt.rcParams[\"figure.figsize\"] = (20,5)\n", - "fig_id = 0\n", - "\n", - "# Configurations\n", - "if build == 'latest':\n", - " database_path = Path('analysis/databases/3T_human_T1maps_database.pkl')\n", - " output_folder = Path(\"analysis/plots/08_wholedataset_scatter_Human/\")\n", - "elif build=='archive':\n", - " database_path = Path(data_path[0] + '/analysis/databases/3T_human_T1maps_database.pkl')\n", - " output_folder = Path(data_path[0] + '/analysis/plots/08_wholedataset_scatter_Human/')\n", - "\n", - "estimate_type = 'mean' # median or mean\n", - "\n", - "# Define functions\n", - "\n", - "def plot_both_scatter(x1, 
x2, y, y_std,\n",
- "                      title, x1_label, x2_label, y_label,\n",
- "                      file_prefix, folder_path, fig_id):\n",
- "\n",
- "    plt.rcParams[\"figure.figsize\"] = (20,10)\n",
- "\n",
- "    fig, axs = plt.subplots(2)\n",
- "    fig.suptitle(title)\n",
- "    axs[0].errorbar(x1, y, y_std, fmt='o', solid_capstyle='projecting')\n",
- "    axs[0].set_xlabel(x1_label)\n",
- "    axs[0].set_ylabel(y_label)\n",
- "    axs[0].set_xticks(np.arange(0, np.max(x1), step=1))\n",
- "\n",
- "\n",
- "    axs[1].errorbar(x2, y, y_std, fmt='o', solid_capstyle='projecting')\n",
- "    axs[1].set_xlabel(x2_label)\n",
- "    axs[1].set_ylabel(y_label)\n",
- "    axs[1].set_xticklabels(labels=x2, rotation=90)\n",
- "\n",
- "\n",
- "    if fig_id < 10:\n",
- "        filename = \"0\" + str(fig_id) + \"_\" + file_prefix\n",
- "    else:\n",
- "        filename = str(fig_id) + \"_\" + file_prefix\n",
- "\n",
- "    fig.savefig(folder_path / (str(filename) + '.svg'), facecolor='white')\n",
- "    fig.savefig(folder_path / (str(filename) + '.png'), facecolor='white')\n",
- "    fig_id = fig_id + 1\n",
- "    plt.show()\n",
- "    return fig_id\n",
- "\n",
- "# Load database\n",
- "\n",
- "df = pd.read_pickle(database_path)\n",
- "\n",
- "genu_estimate = np.array([])\n",
- "genu_std = np.array([])\n",
- "splenium_estimate = np.array([])\n",
- "splenium_std = np.array([])\n",
- "deepgm_estimate = np.array([])\n",
- "deepgm_std = np.array([])\n",
- "cgm_estimate = np.array([])\n",
- "cgm_std = np.array([])\n",
- "\n",
- "ii = 0\n",
- "for index, row in df.iterrows():\n",
- "\n",
- "    if estimate_type == 'mean':\n",
- "        genu_estimate = np.append(genu_estimate, np.mean(df.loc[index]['T1 - genu (WM)']))\n",
- "        genu_std = np.append(genu_std, np.std(df.loc[index]['T1 - genu (WM)']))\n",
- "        splenium_estimate = np.append(splenium_estimate, np.mean(df.loc[index]['T1 - splenium (WM)']))\n",
- "        splenium_std = np.append(splenium_std, np.std(df.loc[index]['T1 - splenium (WM)']))\n",
- "        deepgm_estimate = np.append(deepgm_estimate, np.mean(df.loc[index]['T1 - deep GM']))\n",
- "        deepgm_std = np.append(deepgm_std, np.std(df.loc[index]['T1 - deep GM']))\n",
- "        cgm_estimate = np.append(cgm_estimate, np.mean(df.loc[index]['T1 - cortical GM']))\n",
- "        cgm_std = np.append(cgm_std, np.std(df.loc[index]['T1 - cortical GM']))\n",
- "    elif estimate_type == 'median':\n",
- "        genu_estimate = np.append(genu_estimate, np.median(df.loc[index]['T1 - genu (WM)']))\n",
- "        genu_std = np.append(genu_std, np.std(df.loc[index]['T1 - genu (WM)']))\n",
- "        splenium_estimate = np.append(splenium_estimate, np.median(df.loc[index]['T1 - splenium (WM)']))\n",
- "        splenium_std = np.append(splenium_std, np.std(df.loc[index]['T1 - splenium (WM)']))\n",
- "        deepgm_estimate = np.append(deepgm_estimate, np.median(df.loc[index]['T1 - deep GM']))\n",
- "        deepgm_std = np.append(deepgm_std, np.std(df.loc[index]['T1 - deep GM']))\n",
- "        cgm_estimate = np.append(cgm_estimate, np.median(df.loc[index]['T1 - cortical GM']))\n",
- "        cgm_std = np.append(cgm_std, np.std(df.loc[index]['T1 - cortical GM']))\n",
- "    else:\n",
- "        raise ValueError('Unsupported dataset estimate type.')\n",
- "    ii = ii + 1\n",
- "\n",
- "# Store the IDs\n",
- "indexes_numbers = df.index\n",
- "indexes_strings = indexes_numbers.map(str)\n",
- "\n",
- "x1_label='Site #'\n",
- "x2_label='Site #.Meas #'\n",
- "y_label=\"T$_1$ (ms)\"\n",
- "file_prefix = 'WM_and_GM'\n",
- "folder_path=output_folder\n",
- "\n",
- "x1=indexes_numbers\n",
- "x2=indexes_strings\n",
- "y=genu_estimate\n",
- "y_std=genu_std\n",
- "\n",
- "# Paper formatting of x tick labels (remove leading zero, pad zero at the end for multiples of 10)\n",
- "x3=[]\n",
- "for num in x2:\n",
- "    x3.append(num.replace('.0', '.'))\n",
- "\n",
- "index=0\n",
- "for num in x3:\n",
- "    if num[-3] != '.':\n",
- "        x3[index]=num+'0'\n",
- "    index+=1\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "tags": [
- "report_output",
- "remove_input"
- ]
- },
- "outputs": [],
- "source": [
- "# PYTHON CODE\n",
- "# Module 
imports\n", - "\n", - "import matplotlib.pyplot as plt\n", - "from PIL import Image\n", - "from matplotlib.image import imread\n", - "import scipy.io\n", - "import plotly.graph_objs as go\n", - "import numpy as np\n", - "from plotly import __version__\n", - "from plotly.offline import init_notebook_mode, iplot, plot\n", - "config={'showLink': False, 'displayModeBar': False}\n", - "\n", - "init_notebook_mode(connected=True)\n", - "\n", - "from IPython.display import display, HTML\n", - "\n", - "import os\n", - "import markdown\n", - "import random\n", - "from scipy.integrate import quad\n", - "\n", - "import warnings\n", - "warnings.filterwarnings('ignore')\n", - "config={'showLink': False, 'displayModeBar': False}\n", - "\n", - "data_wm=go.Scatter(\n", - " x=x1,\n", - " y=genu_estimate,\n", - " error_y=dict(\n", - " type='data', # value of error bar given in data coordinates\n", - " array=genu_std,\n", - " visible=True),\n", - " name = 'White matter (one 5x5 ROI, ~genu)',\n", - " mode = 'markers',\n", - " marker=dict(color='#007ea7'),\n", - " visible = True,\n", - " )\n", - "\n", - "data_gm=go.Scatter(\n", - " x=x1,\n", - " y=cgm_estimate,\n", - " error_y=dict(\n", - " type='data', # value of error bar given in data coordinates\n", - " array=cgm_std,\n", - " visible=True),\n", - " name = 'Grey matter (three 3x3 ROIs, cortex)',\n", - " mode = 'markers',\n", - " marker=dict(color='#D22B2B'),\n", - " visible = True,\n", - " )\n", - "\n", - "\n", - "data = [data_wm, data_gm]\n", - "\n", - "\n", - "layout = go.Layout(\n", - " width=960,\n", - " height=250,\n", - " margin=go.layout.Margin(\n", - " l=80,\n", - " r=40,\n", - " b=80,\n", - " t=10,\n", - " ),\n", - " xaxis_title='Site #',\n", - " yaxis_title='T1 (ms)',\n", - " font=dict(\n", - " family='Times New Roman',\n", - " size=22\n", - " ),\n", - " xaxis=dict(\n", - " autorange=False,\n", - " range=[0.01,10.99],\n", - " dtick=1,\n", - " showgrid=False,\n", - " linecolor='black',\n", - " linewidth=2\n", - " ),\n", - " 
yaxis=dict(\n", - " autorange=False,\n", - " range=[0, 2999],\n", - " dtick=500,\n", - " showgrid=True,\n", - " gridcolor='rgb(169,169,169)',\n", - " linecolor='black',\n", - " linewidth=2,\n", - " tickfont=dict(\n", - " family='Times New Roman',\n", - " size=18,\n", - " ),\n", - " ),\n", - " annotations=[\n", - " dict(\n", - " x=-0.1,\n", - " y=-0.5,\n", - " showarrow=False,\n", - " text='a',\n", - " font=dict(\n", - " family='Times New Roman',\n", - " size=64\n", - " ),\n", - " xref='paper',\n", - " yref='paper'\n", - " ),\n", - " ],\n", - " legend=dict(\n", - " x=0.3,\n", - " y=1.05,\n", - " traceorder='normal',\n", - " font=dict(\n", - " family='Times New Roman',\n", - " size=16,\n", - " color='#000'\n", - " ),\n", - " bordercolor='#000000',\n", - " borderwidth=2\n", - " ),\n", - " paper_bgcolor='rgb(255, 255, 255)',\n", - " plot_bgcolor='rgb(255, 255, 255)',\n", - ")\n", - "\n", - "fig = dict(data=data, layout=layout)\n", - "\n", - "#iplot(fig, filename = 'figure6a', config = config)\n", - "plot(fig, filename = 'figure6a.html', config = config)\n", - "display(HTML('figure6a.html'))" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [ - "remove_input", - "report_output" - ] - }, "outputs": [], "source": [ "# PYTHON CODE\n", @@ -2145,7 +1320,7 @@ "metadata": {}, "source": [ "

\n", - "Figure 8 Mean T1 values in two sets of ROIs, white matter (one 5⨯5 voxel ROI, genu) and gray matter (three 3⨯3 voxel ROIs, cortex). Top figure shows all datasets collapsed into sites, whereas the bottom shows each individual dataset.\n", + "Figure 4 Measured mean T1 values vs. temperature-corrected NIST reference values of the phantom spheres are presented as linear plots (a), log-log plots (b), and plots of the error relative to reference T1 value (c). The dashed lines in plots (c) represent a ±10 % error. Mean T1 values in two sets of ROIs, white matter (one 5⨯5 voxel ROI, genu) and gray matter (three 3⨯3 voxel ROIs, cortex). Top figure shows all datasets collapsed into sites, whereas the bottom shows each individual dataset. In subplot g), the missing datapoint for deep GM in 10.001 was due to the slice positioning of the acquisition not containing deep GM. Interactive figure available at: https://preprint.neurolibre.org/10.55458/neurolibre.00014/.\n", "

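The temperature correction mentioned in the caption can be sketched as a simple interpolation: NIST tabulates each sphere's reference T1 at several phantom temperatures, and the reference used for comparison is taken at the recorded room temperature. The calibration table below is a made-up placeholder, not actual NIST values.

```python
import numpy as np

# Hypothetical calibration table for one sphere (temperature in °C, T1 in ms)
table_temperatures = np.array([16.0, 18.0, 20.0, 22.0, 24.0, 26.0])
table_t1 = np.array([1850.0, 1883.0, 1915.0, 1945.0, 1976.0, 2008.0])

def temperature_corrected_t1(temperature_c):
    """Linearly interpolate the reference T1 at the recorded temperature."""
    return float(np.interp(temperature_c, table_temperatures, table_t1))

reference_t1 = temperature_corrected_t1(21.0)  # midway between 20 and 22 °C
measured_t1 = 1900.0                           # hypothetical site measurement
percent_error = 100 * (measured_t1 - reference_t1) / reference_t1
```

The percent error computed this way is what panels (c) plot against the ±10% dashed lines.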
" ] }, @@ -2155,53 +1330,26 @@ "source": [ "## 4     |     DISCUSSION\n", "\n", - "## 4.1     |     Achievements of the challenge\n", + "The challenge focused on exploring if different research groups could reproduce T1 maps based on the protocol information reported in a published PDF [31]. Eighteen submissions independently implemented the inversion recovery T1 mapping acquisition protocol as outlined in Barral et al. [31], and reported T1 mapping data in a standard quantitative MRI phantom and/or human brains at 27 MRI sites, using systems from three different vendors (GE, Philips, Siemens). The collaborative effort produced an open-source database of 94 T1 mapping datasets, including 38 ISMRM/NIST phantom and 56 human brain datasets. The inter-submission variability was twice as high as the intra-submission variability in both phantom and human brain T1 measurements, **demonstrating that a PDF is not enough for reproducibility in quantitative MRI.**\n", "\n", + "More information is needed to unify all the aspects of a pulse sequence across sites. However, in a vendor-native setting, this is a major challenge given the disparities between proprietary development libraries [41]. Vendor-neutral pulse sequence design platforms [42–44] have emerged as a powerful solution to standardize sequence components at the implementation level. Vendor neutrality has been shown to significantly reduce the variability of T1 maps acquired using VFA across vendors [44]. In the absence of a vendor-neutral framework, a vendor-native alternative is the implementation of a strategy to control the saturation of MT across TRs [45]. Nevertheless, this approach can still benefit from a vendor-neutral approach to enhance accessibility and unify implementations. 
This is because vendor-specific constraints limit how far sequences can be adapted, resulting in significant variability even when implementations are closely aligned within their respective vendor-native development environments [46].\n",
"\n",
- "The challenge focused on exploring the reproducibility of the gold standard inversion recovery T1 mapping method reported in a seminal paper {cite:p}`Barral2010-qm`. Eighteen submissions independently implemented the inversion recovery T1 mapping acquisition protocol as outlined in {cite:t}`Barral2010-qm` (which is optimized for the T1 values observed in brain tissue), and reported T1 mapping data in a standard quantitative MRI phantom and/or human brains at 27 MRI sites, using systems from three different vendors (GE, Philips, Siemens). The collaborative effort produced an open-source database of 94 T1 mapping datasets, including 38 ISMRM/NIST phantom and 56 human brain datasets. A standardized T1 processing pipeline was developed for different dataset types, including magnitude-only and complex data. Additionally, Jupyter notebooks that can be executed in containerized environments were developed for quality assurance, visualization, and analyses. An interactive web-based dashboard was also developed to allow for easy exploration of the challenge results in a web browser.\n",
- "\n",
- "To evaluate the accuracy of the resulting T1 values, the challenge used the standard ISMRM/NIST phantom with fiducial spheres having T1 values in the range of human brain tissue, from 500 to 2000 ms (see Figure 5). As anticipated for this protocol, measurement accuracy decreased for spheres with T1 below 300 ms. Overall, the majority of the independently implemented imaging protocols from various sites were consistent with the temperature-corrected reference values, with only a few exceptions. 
Using the NIST phantom, we report that independently implemented imaging protocols resulted in an inter-submission mean CoV (6.1%) that was twice as high as the intra-submission mean CoV measured at seven sites (2.9%). A similar trend was observed in vivo: the inter-submission CoV was 6.0% for WM (genu) and 16.5% for GM (cortex), versus intra-submission CoVs of 2.9% and 6.9%, respectively, with CoVs generally higher than for the phantom measurements, likely due to biological variability {cite:p}`Piechnik2013-xl,Stanisz2005-qg`."
+ "The 2020 Reproducibility Challenge, jointly organized by the Reproducible Research and Quantitative MR ISMRM study groups, led to the creation of a large open database of standard quantitative MR phantom and human brain inversion recovery T1 maps. These maps were measured using independently implemented imaging protocols on MRI scanners from three different manufacturers. All collected data, processing pipeline code, computational environment files, and analysis scripts were shared with the goal of promoting reproducible research practices, and an interactive dashboard was developed to broaden the accessibility and engagement of the resulting datasets (https://rrsg2020.dashboards.neurolibre.org). The differences in stability between independently implemented (inter-submission) and centrally shared (intra-submission) protocols observed both in phantoms and in vivo could help inform future meta-analyses of quantitative MRI metrics [47,48] and better guide multi-center collaborations."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 4.2     |     Comparison with other studies\n",
+ "# ACKNOWLEDGEMENT\n",
"\n",
- "The work done during this challenge involved a multi-center quantitative T1 mapping study using the NIST phantom across various sites. This work overlaps with two recent studies {cite:p}`Bane2018-wt,Keenan2021-ly`. 
{cite:t}`Bane2018-wt` focused on the reproducibility of two standard quantitative T1 techniques (inversion recovery and variable flip angle) and a wide variety of site-specific T1 mapping protocols for DCE, mostly VFA protocols with fewer flip angles, which were implemented at eight imaging centers covering the same three MRI vendors featured in this challenge (GE/Philips/Siemens). The inter-platform coefficient of variation for the standard inversion recovery T1 protocol was 5.46% at 3 T in {cite:p}`Bane2018-wt`, which was substantially lower than what they observed for their standard VFA protocol (22.87%). However, Bane et al.’s work differed from the challenge in several ways. First, the standard imaging protocol for inversion recovery used by {cite:t}`Bane2018-wt` had more inversion times (14 compared to the challenge’s 4) to cover the entire range of T1 values of the phantom. Second, {cite:t}`Bane2018-wt` used a single traveling phantom for all sites, whereas the challenge used a total of 8 different phantoms (some were shared amongst people who participated independently). Third, {cite:t}`Bane2018-wt` averaged the signals within each ROI of each sphere prior to fitting for the T1 values, whereas the challenge pipeline fit the T1 values on a per-voxel basis and only subsequently calculated the mean/median/std. They also acquired only magnitude data, in contrast to the challenge, where participants were encouraged to submit both complex and magnitude-only data. Lastly, in {cite:t}`Bane2018-wt`, the implementations of the common inversion recovery protocols were fully standardized (full protocol) across all the platforms (except for two cases where one manufacturer could not achieve the lowest TI) and were imposed and coordinated by the principal researchers. 
In contrast, the challenge sought to explore the variations that occur when a less-restricted protocol (Table 2) is independently implemented at multiple centers, which more closely emulates the quantitative MR research workflow (publication of a technique and protocol → independent implementation of the pulse sequence and/or protocol → independent use of the new implementation in a study → publication). Of note, one participating group in the challenge coordinated a large multicenter dataset that mirrors the study by {cite:t}`Bane2018-wt` by imaging a single phantom across 7 different imaging sites, albeit on a single manufacturer. Using this subset, the mean cross-site CoV was 2.9% (range: 1.6 - 4.9%) for the first five spheres, which is in agreement with the range of observations for all spheres by {cite:t}`Bane2018-wt` at 3T using their full inversion recovery protocol (CoV = 5.46%; range: 0.99 - 14.6%). \n", + "

\n", + "The conception of this collaborative reproducibility challenge originated from discussions with experts, including Paul Tofts, Joëlle Barral, and Ilana Leppert, who provided valuable insights. Additionally, Kathryn Keenan, Zydrunas Gimbutas, and Andrew Dienstfrey from NIST provided their code to generate the ROI template for the ISMRM/NIST phantom. Dylan Roskams-Edris and Gabriel Pelletier from the Tanenbaum Open Science Institute (TOSI) offered valuable insights and guidance related to data ethics and data sharing in the context of this international multi-center conference challenge. The 2020 RRSG study group committee members who launched the challenge, Martin Uecker, Florian Knoll, Nikola Stikov, Maria Eugenia Caligiuri, and Daniel Gallichan, as well as the 2020 qMRSG committee members, Kathryn Keenan, Diego Hernando, Xavier Golay, Annie Yuxin Zhang, and Jeff Gunter, also played an essential role in making this challenge possible. We’d also like to extend our thanks to all the volunteers and individuals who helped with the scanning at each imaging site.\n", + "The authors thank the ISMRM Reproducible Research Study Group for conducting a code review of the code (Version 1) supplied in the Data Availability Statement. The scope of the code review covered only the code’s ease of download, quality of documentation, and ability to run, but did not consider scientific accuracy or code efficiency.\n", "\n", - "Another study by {cite:t}`Bane2018-wt,Keenan2021-ly` also investigated the accuracy of T1 mapping techniques using a single ISMRM/NIST system phantom at multiple sites and on multiple platforms. Like {cite:t}`Bane2018-wt` they used an inversion recovery imaging protocol optimized for the full range of T1 values represented in the ISMRM/NIST phantom, which consisted of 9 to 10 inversion times and a TR of 4500 ms (TR `~` 5T1 of WM at 3T). 
They reported no consistent pattern of differences in measured T1 values across MRI vendors for the two T1 mapping techniques they used (inversion recovery and VFA). The relative errors between their T1 measurements and the reference values of the phantom were below 10% for all T1 values, with the largest errors observed at the lowest and highest T1 values of the phantom."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 4.3     |     Lessons Learned and Future Directions\n",
- "\n",
- "There are some important things to note about this challenge. Firstly, the submissions for this challenge were due in March 2020, a deadline that coincided with the COVID-19 pandemic lockdowns; limited site access reduced the number of repeated experiments. Nevertheless, a substantial number of participants submitted their datasets. Some groups intended to acquire more data, and others intended to re-scan volunteers, but could no longer do so due to local pandemic restrictions.\n",
- "\n",
- "This reproducibility challenge aimed to compare differences between independently-implemented protocols. Crowning a winner was not an aim of this challenge, due to concerns that participants would have changed their protocols to get closer to the reference T1 values, leading to a broader difference in protocol implementations across MRI sites. Instead, we focused on building consensus by creating an open data repository, sharing reproducible workflows, and presenting the results through interactive visualizations. 
Future work warrants the study of inter-site differences in a vendor-neutral workflow {cite:p}`Karakuzu2022-venus` by adhering to the latest Brain Imaging Data Structure (BIDS) community data standard on qMRI {cite:p}`Karakuzu2022-bids`. " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# 5     |     CONCLUSION\n", - "\n", - "The 2020 Reproducibility Challenge, jointly organized by the Reproducible Research and Quantitative MR ISMRM study groups, led to the creation of a large open database of standard quantitative MR phantom and human brain inversion recovery T1 maps. These maps were measured using independently implemented imaging protocols on MRI scanners from three different manufacturers. All collected data, processing pipeline code, computational environment files, and analysis scripts were shared with the goal of promoting reproducible research practices, and an interactive dashboard was developed to broaden the accessibility and engagement of the resulting datasets (https://rrsg2020.dashboards.neurolibre.org). The differences in stability between independently implemented (inter-submission) and centrally shared (intra-submission) protocols observed both in phantoms and in vivo could help inform future meta-analyses of quantitative MRI metrics {cite}`Mancini2020-sv,Lazari2021-oy` and better guide multi-center collaborations." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# ACKNOWLEDGEMENT\n", - "\n", - "

\n", - "The conception of this collaborative reproducibility challenge originated from discussions with experts, including Paul Tofts, Joëlle Barral, and Ilana Leppert, who provided valuable insights. Additionally, Kathryn Keenan, Zydrunas Gimbutas, and Andrew Dienstfrey from NIST provided their code to generate the ROI template for the ISMRM/NIST phantom. Dylan Roskams-Edris and Gabriel Pelletier from the Tanenbaum Open Science Institute (TOSI) offered valuable insights and guidance related to data ethics and data sharing in the context of this international multi-center conference challenge. The 2020 RRSG study group committee members who launched the challenge, Martin Uecker, Florian Knoll, Nikola Stikov, Maria Eugenia Caligiuri, and Daniel Gallichan, as well as the 2020 qMRSG committee members, Kathryn Keenan, Diego Hernando, Xavier Golay, Annie Yuxin Zhang, and Jeff Gunter, also played an essential role in making this challenge possible. Finally, we extend our thanks to all the volunteers and individuals who helped with the scanning at each imaging site.\n", "

\n" ] }, @@ -2211,7 +1359,7 @@ "source": [ "# DATA AVAILABILITY STATEMENT\n", "\n", - "An interactive preprint of this manuscript is available at http://rrsg2020.github.io/paper. All imaging data submitted to the challenge, dataset details, registered ROI maps, and processed T1 maps are hosted on OSF https://osf.io/ywc9g/. The dataset submissions and quality assurance were handled through GitHub issues in this repository https://github.com/rrsg2020/data_submission (commit: `9d7eff1`). Note that accepted submissions are closed issues, and that the GitHub branches associated with the issue numbers contain the Dockerfile and Jupyter Notebook scripts that reproduce these preliminary quality assurance results and can be run in a browser using MyBinder. The ROI registration scripts for the phantoms and T1 fitting pipeline to process all datasets are hosted in this GitHub repository, https://github.com/rrsg2020/t1_fitting_pipeline (commit: 3497a4e). All the analyses of the datasets were done using Jupyter Notebooks and are available in this repository, https://github.com/rrsg2020/analysis (commit: `8d38644`), which also contains a Dockerfile to reproduce the environment using a tool like MyBinder. A dashboard was developed to explore the datasets information and results in a browser, which is accessible here, https://rrsg2020.dashboards.neurolibre.org, and the code is also available on GitHub: https://github.com/rrsg2020/rrsg2020-dashboard (commit: `6ee9321`). " + "An interactive NeuroLibre preprint of this manuscript is available at https://preprint.neurolibre.org/10.55458/neurolibre.00014/. All imaging data submitted to the challenge, dataset details, registered ROI maps, and processed T1 maps are hosted on OSF https://osf.io/ywc9g/. The dataset submissions and quality assurance were handled through GitHub issues in this repository https://github.com/rrsg2020/data_submission (commit: 9d7eff1). 
Note that accepted submissions are closed issues, and that the GitHub branches associated with the issue numbers contain the Dockerfile and Jupyter Notebook scripts that reproduce these preliminary quality assurance results and can be run in a browser using MyBinder. The ROI registration scripts for the phantoms and T1 fitting pipeline to process all datasets are hosted in this GitHub repository, https://github.com/rrsg2020/t1_fitting_pipeline (commit: 3497a4e). All the analyses of the datasets were done using Jupyter Notebooks and are available in this repository, https://github.com/rrsg2020/analysis (commit: 8d38644), which also contains a Dockerfile to reproduce the environment using a tool like MyBinder. A dashboard was developed to explore the datasets information and results in a browser, which is accessible here, https://rrsg2020.dashboards.neurolibre.org, and the code is also available on GitHub: https://github.com/rrsg2020/rrsg2020-dashboard (commit: 6ee9321). " ] }, { @@ -2256,9 +1404,55 @@ "source": [ "# References \n", "\n", - "```{bibliography}\n", - ":filter: docname in docnames\n", - "```" + "\n", + "1. \tKeenan KE, Biller JR, Delfino JG, Boss MA, Does MD, Evelhoch JL, et al. Recommendations towards standards for quantitative MRI (qMRI) and outstanding needs. J Magn Reson Imaging. 2019;49: e26–e39.\n", + "2. \tFryback DG, Thornbury JR. The efficacy of diagnostic imaging. Med Decis Making. 1991;11: 88–94.\n", + "3. \tSchweitzer M. Stages of technical efficacy: Journal of Magnetic Resonance Imaging style. J Magn Reson Imaging. 2016;44: 781–782.\n", + "4. \tSeiberlich N, Gulani V, Campbell A, Sourbron S, Doneva MI, Calamante F, et al. Quantitative Magnetic Resonance Imaging. Academic Press; 2020.\n", + "5. \tDamadian R. Tumor detection by nuclear magnetic resonance. Science. 1971;171: 1151–1153.\n", + "6. \tPykett IL, Mansfield P. A line scan image study of a tumorous rat leg by NMR. Phys Med Biol. 1978;23: 961–967.\n", + "7. 