Built site for gh-pages
Quarto GHA Workflow Runner committed Sep 9, 2024
1 parent f224e85 commit bf469e4
Showing 4 changed files with 27 additions and 21 deletions.
2 changes: 1 addition & 1 deletion .nojekyll
@@ -1 +1 @@
905f3538
00f9b1ed
2 changes: 1 addition & 1 deletion search.json
@@ -103,7 +103,7 @@
"href": "tutorial_pages/check-power.html",
"title": "Checking power through simulations",
"section": "",
"text": "Checking power through simulations\nThe power of a statistical test tells us the probability that the test correctly rejects the null hypothesis. In other words, if we only examine true effects, the power is the proportion of tests that will (correctly) reject the null hypothesis. Often, the power is set to 80%, though, as with alpha = 0.05, this is an arbitrary choice.\nGenerally, we want to do power analysis before collecting data, to work out the sample size we need to detect some effect. If we are calculating a required sample size, the power analysis can also be called a sample size calculation.\nTaking the example of a t-test, we need to understand a few parameters:\n\nn, the sample size.\ndelta, the difference in means that you want to be able to detect. Deciding what this value should be is tricky. You might rely on estimates from the literature (though bear in mind they are likely to be inflated), or you can use a minimally important difference, which is the threshold below which you do not consider a difference interesting enough to be worth detecting. In a clinical trial, for example, this might be the smallest difference that a patient would care about.\nsd, the standard deviation. Usually, this needs to be estimated from the literature or pilot studies.\nsig.level, the alpha, as discussed previously.\npower, the power as defined above.\n\nYou can calculate any one of these parameters, given all of the others. We usually want to specify delta, sd, sig.level and power and calculate the required sample size.\nWe can calculate the required sample size for a t-test using:\npower.t.test(n = NULL, delta = 0.5, sd = 1, sig.level = 0.05, power = 0.8)\nNotice that n = NULL, so this parameter is calculated.\nThe sample size n we need, given this set of parameters, is 64 per group.\nJust as we can check the alpha of our test by sampling from the same distribution (i.e. 
simulating data without an effect), we can check the power by sampling from different distributions (i.e. simulating data with an effect).\nIf we sample values from two normal distributions with different means (e.g. N(0,1) and N(0.5,1)), what is the minimum sample size we need to detect a significant difference in means with a t-test 80% of the time?\n\nYOUR TURN:\n1. Use your simulation skills to work out the power through simulation. Write a function that does the following: i) Draws n values from a random normal distribution with mean1 and another n values from a normal distribution with mean2. ii) Compares the means of these two samples with a t-test and extracts the p-value. 2. Replicate the function 1000 times using the parameters used in the power calculation above (that used the power.t.test() function). 3. Calculate the proportion of p-values that are smaller than 0.05.\n\np-values of t-tests comparing means from 1000 simulations of N(0,1) and N(0.5,1) with n = 64:\n \n\nThe proportion of correctly rejected null hypotheses in the simulation is close to 0.8, which is what we would expect.\nUsing simulations for power analysis is not really necessary for simple examples like a t-test, though it is useful to check your understanding.\nWhen analyses become complex and it is hard or impossible to determine a sample size analytically (i.e. you can’t calculate it, or there’s no suitable function to use), then simulations are an indispensable tool.\nA simple example of a power analysis like the one you’ve just done can be found in the “Power analysis” section of this paper:\n\nBlanco, D., Schroter, S., Aldcroft, A., Moher, D., Boutron, I., Kirkham, J. J., & Cobo, E. (2020). Effect of an editorial intervention to improve the completeness of reporting of randomised trials: a randomised controlled trial. BMJ Open, 10(5), e036799. 
https://doi.org/10.1136/bmjopen-2020-036799\n\nA complete self-paced tutorial to simulate data for power analysis of complex statistical designs can be found here:\n\nhttps://lmu-osc.github.io/Simulations-for-Advanced-Power-Analyses/\n\n\n\n\n\n\n Back to top",
"text": "Checking power through simulations\nThe power of a statistical test tells us the probability that the test correctly rejects the null hypothesis. In other words, if we only examine true effects, the power is the proportion of tests that will (correctly) reject the null hypothesis. Often, the power is set to 80%, though, as with alpha = 0.05, this is an arbitrary choice.\nGenerally, we want to do power analysis before collecting data, to work out the sample size we need to detect some effect. If we are calculating a required sample size, the power analysis can also be called a sample size calculation.\nTaking the example of a t-test, we need to understand a few parameters:\n\nn, the sample size.\ndelta, the difference in means that you want to be able to detect. Deciding what this value should be is tricky. You might rely on estimates from the literature (though bear in mind they are likely to be inflated), or you can use a minimally important difference, which is the threshold below which you do not consider a difference interesting enough to be worth detecting. In a clinical trial, for example, this might be the smallest difference that a patient would care about.\nsd, the standard deviation. Usually, this needs to be estimated from the literature or pilot studies.\nsig.level, the alpha, as discussed previously.\npower, the power as defined above.\n\nYou can calculate any one of these parameters, given all of the others. We usually want to specify delta, sd, sig.level and power and calculate the required sample size.\nWe can calculate the required sample size for a t-test using:\npower.t.test(n = NULL, delta = 0.5, sd = 1, sig.level = 0.05, power = 0.8)\nNotice that n = NULL, so this parameter is calculated.\nThe sample size n we need, given this set of parameters, is 64 per group.\nJust as we can check the alpha of our test by sampling from the same distribution (i.e. 
simulating data without an effect), we can check the power by sampling from different distributions (i.e. simulating data with an effect).\nIf we sample values from two normal distributions with different means (e.g. N(0,1) and N(0.5,1)), what is the minimum sample size we need to detect a significant difference in means with a t-test 80% of the time?\n\nYOUR TURN:\n1. Use your simulation skills to work out the power through simulation. Write a function that does the following:\ni. Draws `n` values from a random normal distribution with `mean1` and another `n` values from a normal distribution with `mean2`.\nii. Compares the means of these two samples with a *t*-test and extracts the *p*-value.\n\nReplicate the function 1000 times using the parameters used in the power calculation above (that used the power.t.test() function).\nCalculate the proportion of p-values that are smaller than 0.05.\n\n\np-values of t-tests comparing means from 1000 simulations of N(0,1) and N(0.5,1) with n = 64:\n \n\nThe proportion of correctly rejected null hypotheses in the simulation is close to 0.8, which is what we would expect.\nUsing simulations for power analysis is not really necessary for simple examples like a t-test, though it is useful to check your understanding.\nWhen analyses become complex and it is hard or impossible to determine a sample size analytically (i.e. you can’t calculate it, or there’s no suitable function to use), then simulations are an indispensable tool.\nA simple example of a power analysis like the one you’ve just done can be found in the “Power analysis” section of this paper:\n\nBlanco, D., Schroter, S., Aldcroft, A., Moher, D., Boutron, I., Kirkham, J. J., & Cobo, E. (2020). Effect of an editorial intervention to improve the completeness of reporting of randomised trials: a randomised controlled trial. BMJ Open, 10(5), e036799. 
https://doi.org/10.1136/bmjopen-2020-036799\n\nA complete self-paced tutorial to simulate data for power analysis of complex statistical designs can be found here:\n\nhttps://lmu-osc.github.io/Simulations-for-Advanced-Power-Analyses/\n\n\n\n\n\n\n Back to top",
"crumbs": [
"Tutorial",
"Simulate to check power"
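The exercise described in the search.json diff above can be sketched in base R roughly as follows (a minimal sketch; the function name `sim_ttest_p` and the seed are illustrative, not from the tutorial):

```r
# One simulated experiment: draw two samples, compare their means
# with a t-test, and return the p-value.
sim_ttest_p <- function(n, mean1, mean2, sd = 1) {
  g1 <- rnorm(n, mean = mean1, sd = sd)
  g2 <- rnorm(n, mean = mean2, sd = sd)
  t.test(g1, g2)$p.value
}

set.seed(42)  # illustrative seed for reproducibility

# Parameters matching power.t.test(n = NULL, delta = 0.5, sd = 1,
# sig.level = 0.05, power = 0.8), which gives n = 64 per group.
p_values <- replicate(1000, sim_ttest_p(n = 64, mean1 = 0, mean2 = 0.5))

# Proportion of correctly rejected null hypotheses;
# this estimate of the power should be close to 0.8.
mean(p_values < 0.05)
```

The estimated power fluctuates around 0.8 across runs, as the tutorial text notes.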
36 changes: 18 additions & 18 deletions sitemap.xml
@@ -2,74 +2,74 @@
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/simulate-for-preregistration.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/sample-size-n.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/repeat.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/random-numbers-generators.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/number-of-simulations-nrep.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/general-structure.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/download-repo.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/check-power.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/basic-principles.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/index.html</loc>
<lastmod>2024-09-09T00:45:59.736Z</lastmod>
<lastmod>2024-09-09T01:06:53.340Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/check-alpha.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/definition.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/dry-rule.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/limitations.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/purpose.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/real-life-example.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/resources.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
<url>
<loc>https://lmu-osc.github.io/Introduction-Simulations-in-R/tutorial_pages/seed.html</loc>
<lastmod>2024-09-09T00:45:59.740Z</lastmod>
<lastmod>2024-09-09T01:06:53.344Z</lastmod>
</url>
</urlset>
8 changes: 7 additions & 1 deletion tutorial_pages/check-power.html
@@ -303,7 +303,13 @@ <h1>Checking power through simulations</h1>
<p>If we sample values from two normal distributions with different means (e.g.&nbsp;N(0,1) and N(0.5,1)), what is the minimum sample size we need to detect a significant difference in means with a <em>t</em>-test 80% of the time?</p>
<hr>
<p><strong>YOUR TURN:</strong><br>
1. Use your simulation skills to work out the power through simulation. Write a function that does the following: i) Draws <code>n</code> values from a random normal distribution with <code>mean1</code> and another <code>n</code> values from a normal distribution with <code>mean2</code>. ii) Compares the means of these two samples with a <em>t</em>-test and extracts the <em>p</em>-value. 2. Replicate the function 1000 times using the parameters used in the power calculation above (that used the <code>power.t.test()</code> function). 3. Calculate the proportion of <em>p</em>-values that are smaller than 0.05.</p>
1. Use your simulation skills to work out the power through simulation. Write a function that does the following:</p>
<pre><code>i. Draws `n` values from a random normal distribution with `mean1` and another `n` values from a normal distribution with `mean2`.
ii. Compares the means of these two samples with a *t*-test and extracts the *p*-value.</code></pre>
<ol start="2" type="1">
<li>Replicate the function 1000 times using the parameters used in the power calculation above (that used the <code>power.t.test()</code> function).</li>
<li>Calculate the proportion of <em>p</em>-values that are smaller than 0.05.</li>
</ol>
<hr>
<p><strong><em>p</em>-values of <em>t</em>-tests comparing means from 1000 simulations of N(0,1) and N(0.5,1) with n = 64:</strong></p>
<p><br> <img src="../assets/hist-power.png" width="500"><br>