diff --git a/.buildinfo b/.buildinfo
index 487f01a..b22d5a2 100644
--- a/.buildinfo
+++ b/.buildinfo
@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 214d35bb16c7218d3b7aab3048acdedb
+config: dd3c8afeb18c12fdda28369b78c55497
 tags: 645f666f9bcd5a90fca523b33c5a78b7
diff --git a/.doctrees/auto_examples/plot_marginal_cumulative_incidence_estimation.doctree b/.doctrees/auto_examples/plot_marginal_cumulative_incidence_estimation.doctree
index 45a58dd..94162d9 100644
Binary files a/.doctrees/auto_examples/plot_marginal_cumulative_incidence_estimation.doctree and b/.doctrees/auto_examples/plot_marginal_cumulative_incidence_estimation.doctree differ
diff --git a/.doctrees/auto_examples/sg_execution_times.doctree b/.doctrees/auto_examples/sg_execution_times.doctree
index 2a1b64f..697c7dd 100644
Binary files a/.doctrees/auto_examples/sg_execution_times.doctree and b/.doctrees/auto_examples/sg_execution_times.doctree differ
diff --git a/.doctrees/environment.pickle b/.doctrees/environment.pickle
index 88a8bc3..59ebecd 100644
Binary files a/.doctrees/environment.pickle and b/.doctrees/environment.pickle differ
diff --git a/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip b/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip
index 468c73e..199ce6a 100644
Binary files a/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip and b/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip differ
diff --git a/_downloads/2932d6ac7842b6d781c27c5737ae52fa/plot_marginal_cumulative_incidence_estimation.ipynb b/_downloads/2932d6ac7842b6d781c27c5737ae52fa/plot_marginal_cumulative_incidence_estimation.ipynb
index 0dee5e9..a92d84e 100644
--- a/_downloads/2932d6ac7842b6d781c27c5737ae52fa/plot_marginal_cumulative_incidence_estimation.ipynb
+++ b/_downloads/2932d6ac7842b6d781c27c5737ae52fa/plot_marginal_cumulative_incidence_estimation.ipynb
@@ -130,7 +130,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "The resulting incidence curves are indeed monotonic. However, for smaller\ntraining set sizes of the training set, the resulting models can be\nsignificantly biased, in particular in regions where the CIFs is getting\nflatter. This effect diminishes with larger training set sizes (lower\nepistemic uncertainty).\n\n"
+   "The resulting incidence curves are indeed monotonic. However, for smaller\ntraining set sizes, the resulting models can be significantly biased, in\nparticular at large time horizons, where the CIFs are getting flatter. This\neffect diminishes with larger training set sizes (lower epistemic\nuncertainty).\n\n"
   ]
  }
 ],
diff --git a/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip b/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip
index c28f12d..fdedb67 100644
Binary files a/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip and b/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip differ
diff --git a/_downloads/878bb6ffe7fba36a0655277b8151e4cd/plot_marginal_cumulative_incidence_estimation.py b/_downloads/878bb6ffe7fba36a0655277b8151e4cd/plot_marginal_cumulative_incidence_estimation.py
index d828d0f..257c31d 100644
--- a/_downloads/878bb6ffe7fba36a0655277b8151e4cd/plot_marginal_cumulative_incidence_estimation.py
+++ b/_downloads/878bb6ffe7fba36a0655277b8151e4cd/plot_marginal_cumulative_incidence_estimation.py
@@ -252,7 +252,7 @@ def plot_cumulative_incidence_functions(distributions, y, gb_incidence=None, aj=
 # %%
 #
 # The resulting incidence curves are indeed monotonic. However, for smaller
-# training set sizes of the training set, the resulting models can be
-# significantly biased, in particular in regions where the CIFs is getting
-# flatter. This effect diminishes with larger training set sizes (lower
-# epistemic uncertainty).
+# training set sizes, the resulting models can be significantly biased, in
+# particular at large time horizons, where the CIFs are getting flatter. This
+# effect diminishes with larger training set sizes (lower epistemic
+# uncertainty).
diff --git a/_sources/auto_examples/plot_marginal_cumulative_incidence_estimation.rst.txt b/_sources/auto_examples/plot_marginal_cumulative_incidence_estimation.rst.txt
index 00c700c..09b1262 100644
--- a/_sources/auto_examples/plot_marginal_cumulative_incidence_estimation.rst.txt
+++ b/_sources/auto_examples/plot_marginal_cumulative_incidence_estimation.rst.txt
@@ -266,15 +266,15 @@ theoretical CIFs:

 .. code-block:: none

-    GB Incidence for event 1 fit in 0.572 s
-    GB Incidence for event 1 prediction in 0.124 s
-    Aalen-Johansen for event 1 fit in 6.459 s
-    GB Incidence for event 2 fit in 0.554 s
-    GB Incidence for event 2 prediction in 0.124 s
-    Aalen-Johansen for event 2 fit in 6.501 s
-    GB Incidence for event 3 fit in 0.539 s
-    GB Incidence for event 3 prediction in 0.129 s
-    Aalen-Johansen for event 3 fit in 6.505 s
+    GB Incidence for event 1 fit in 0.715 s
+    GB Incidence for event 1 prediction in 0.150 s
+    Aalen-Johansen for event 1 fit in 7.610 s
+    GB Incidence for event 2 fit in 0.893 s
+    GB Incidence for event 2 prediction in 0.171 s
+    Aalen-Johansen for event 2 fit in 7.699 s
+    GB Incidence for event 3 fit in 0.662 s
+    GB Incidence for event 3 prediction in 0.147 s
+    Aalen-Johansen for event 3 fit in 7.668 s
@@ -345,15 +345,15 @@ increases the amount of censoring.

 .. code-block:: none

-    GB Incidence for event 1 fit in 0.621 s
-    GB Incidence for event 1 prediction in 0.125 s
-    Aalen-Johansen for event 1 fit in 6.504 s
-    GB Incidence for event 2 fit in 0.582 s
-    GB Incidence for event 2 prediction in 0.125 s
-    Aalen-Johansen for event 2 fit in 6.554 s
-    GB Incidence for event 3 fit in 0.528 s
-    GB Incidence for event 3 prediction in 0.123 s
-    Aalen-Johansen for event 3 fit in 6.490 s
+    GB Incidence for event 1 fit in 0.705 s
+    GB Incidence for event 1 prediction in 0.155 s
+    Aalen-Johansen for event 1 fit in 7.684 s
+    GB Incidence for event 2 fit in 0.691 s
+    GB Incidence for event 2 prediction in 0.157 s
+    Aalen-Johansen for event 2 fit in 7.684 s
+    GB Incidence for event 3 fit in 0.639 s
+    GB Incidence for event 3 prediction in 0.147 s
+    Aalen-Johansen for event 3 fit in 7.573 s
@@ -403,12 +403,12 @@ constraint:

 .. code-block:: none

-    GB Incidence for event 1 fit in 0.544 s
-    GB Incidence for event 1 prediction in 0.123 s
-    GB Incidence for event 2 fit in 0.549 s
-    GB Incidence for event 2 prediction in 0.128 s
-    GB Incidence for event 3 fit in 0.515 s
-    GB Incidence for event 3 prediction in 0.129 s
+    GB Incidence for event 1 fit in 0.669 s
+    GB Incidence for event 1 prediction in 0.144 s
+    GB Incidence for event 2 fit in 0.675 s
+    GB Incidence for event 2 prediction in 0.156 s
+    GB Incidence for event 3 fit in 0.638 s
+    GB Incidence for event 3 prediction in 0.145 s
@@ -416,15 +416,15 @@
 .. GENERATED FROM PYTHON SOURCE LINES 253-258

 The resulting incidence curves are indeed monotonic. However, for smaller
-training set sizes of the training set, the resulting models can be
-significantly biased, in particular in regions where the CIFs is getting
-flatter. This effect diminishes with larger training set sizes (lower
-epistemic uncertainty).
+training set sizes, the resulting models can be significantly biased, in
+particular at large time horizons, where the CIFs are getting flatter. This
+effect diminishes with larger training set sizes (lower epistemic
+uncertainty).

 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 49.453 seconds)
+   **Total running time of the script:** (0 minutes 58.568 seconds)

 .. _sphx_glr_download_auto_examples_plot_marginal_cumulative_incidence_estimation.py:
diff --git a/_sources/auto_examples/sg_execution_times.rst.txt b/_sources/auto_examples/sg_execution_times.rst.txt
index 105c0fd..66c84aa 100644
--- a/_sources/auto_examples/sg_execution_times.rst.txt
+++ b/_sources/auto_examples/sg_execution_times.rst.txt
@@ -6,8 +6,8 @@ Computation times
 =================

-**00:49.453** total execution time for **auto_examples** files:
+**00:58.568** total execution time for **auto_examples** files:

 +---------------------------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_auto_examples_plot_marginal_cumulative_incidence_estimation.py` (``plot_marginal_cumulative_incidence_estimation.py``) | 00:49.453 | 0.0 MB |
+| :ref:`sphx_glr_auto_examples_plot_marginal_cumulative_incidence_estimation.py` (``plot_marginal_cumulative_incidence_estimation.py``) | 00:58.568 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/auto_examples/plot_marginal_cumulative_incidence_estimation.html b/auto_examples/plot_marginal_cumulative_incidence_estimation.html
index f234da0..7947278 100644
--- a/auto_examples/plot_marginal_cumulative_incidence_estimation.html
+++ b/auto_examples/plot_marginal_cumulative_incidence_estimation.html
@@ -516,15 +516,15 @@

 CIFs estimated on uncensored data)
-Cause-specific cumulative incidence functions (0.0% censoring), Event 1, Event 2, Event 3
-Cause-specific cumulative incidence functions (67.4% censoring), Event 1, Event 2, Event 3
-Cause-specific cumulative incidence functions (67.4% censoring), Event 1, Event 2, Event 3
-GB Incidence for event 1 fit in 0.544 s
-GB Incidence for event 1 prediction in 0.123 s
-GB Incidence for event 2 fit in 0.549 s
-GB Incidence for event 2 prediction in 0.128 s
-GB Incidence for event 3 fit in 0.515 s
-GB Incidence for event 3 prediction in 0.129 s
+Cause-specific cumulative incidence functions (67.4% censoring), Event 1, Event 2, Event 3
+GB Incidence for event 1 fit in 0.669 s
+GB Incidence for event 1 prediction in 0.144 s
+GB Incidence for event 2 fit in 0.675 s
+GB Incidence for event 2 prediction in 0.156 s
+GB Incidence for event 3 fit in 0.638 s
+GB Incidence for event 3 prediction in 0.145 s

 The resulting incidence curves are indeed monotonic. However, for smaller
-training set sizes of the training set, the resulting models can be
-significantly biased, in particular in regions where the CIFs is getting
-flatter. This effect diminishes with larger training set sizes (lower
-epistemic uncertainty).
+training set sizes, the resulting models can be significantly biased, in
+particular at large time horizons, where the CIFs are getting flatter. This
+effect diminishes with larger training set sizes (lower epistemic
+uncertainty).

-Total running time of the script: (0 minutes 49.453 seconds)
+Total running time of the script: (0 minutes 58.568 seconds)
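
For context on the Aalen-Johansen fits timed in the hunks above, here is a minimal numpy sketch of the Aalen-Johansen estimator for one cause-specific cumulative incidence function. This is an illustrative re-implementation, not the code used by the example or its library; the function name and API are made up for this sketch.

```python
import numpy as np

def aalen_johansen_cif(durations, events, event_of_interest):
    """Aalen-Johansen estimate of one cause-specific cumulative incidence.

    `events` uses 0 for censoring and positive integers for competing
    event types. Returns (event_times, cif) as step-function knots.
    """
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events)
    order = np.argsort(durations)
    t, e = durations[order], events[order]
    event_times = np.unique(t[e > 0])
    surv = 1.0                       # all-cause Kaplan-Meier survival S(t-)
    running = 0.0
    cif = np.zeros(len(event_times))
    for i, ti in enumerate(event_times):
        at_risk = np.sum(t >= ti)                      # still under observation
        d_any = np.sum((t == ti) & (e > 0))            # events of any cause at ti
        d_k = np.sum((t == ti) & (e == event_of_interest))
        running += surv * d_k / at_risk                # mass assigned to cause k at ti
        cif[i] = running
        surv *= 1.0 - d_any / at_risk                  # update all-cause survival
    return event_times, cif
```

By construction the estimate is non-decreasing in time, and the per-cause CIFs sum to one minus the all-cause survival, which is the monotonicity property the prose change above discusses for the gradient-boosting model.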