One-sided CIs #584
(Might also consider estimating one-sided CIs for other parameters.)
So we need to pass down the …
For the CIs controlled by …
What if an htest object is already computed with a one-sided alternative, and a two-sided alternative is requested via effectsize?
Does it apply to all htest objects?
The user's request overrides the htest:

tt <- t.test(mtcars$mpg, mtcars$hp, alternative = "less")
effectsize::effectsize(tt)
#> Cohen's d |        95% CI
#> -------------------------
#>     -2.60 | [-Inf, -1.91]
#>
#> - Estimated using un-pooled SD.
#> - One-sided CIs: lower bound fixed at (-Inf).

effectsize::effectsize(tt, alternative = "two.sided")
#> Cohen's d |         95% CI
#> --------------------------
#>     -2.60 | [-3.40, -1.79]
#>
#> - Estimated using un-pooled SD.

Created on 2021-08-19 by the reprex package (v2.0.1)
Not all. For the ones supported by …

wilcox only, or all rank tests (including friedman and kruskal)?
library(effectsize)
tab <- rbind(c(762, 327, 468),
             c(484, 239, 477),
             c(86, 150, 570))

Default to …

chisq.test(tab) |>
  effectsize()
#> Cramer's V |       95% CI
#> -------------------------
#>       0.24 | [0.22, 1.00]
#>
#> - One-sided CIs: upper bound fixed at (1).
oneway.test(mtcars$mpg ~ mtcars$cyl, var.equal = TRUE) |>
  effectsize()
#> Eta2 |       95% CI
#> -------------------
#> 0.73 | [0.57, 1.00]
#>
#> - One-sided CIs: upper bound fixed at (1).
kruskal.test(mtcars$mpg ~ mtcars$cyl) |>
  effectsize()
#> Epsilon2 (rank) |       95% CI
#> ------------------------------
#>            0.83 | [0.78, 1.00]
#>
#> - One-sided CIs: upper bound fixed at (1).
RoundingTimes <- matrix(c(5.40, 5.50, 5.55,
                          5.85, 5.70, 5.75,
                          5.20, 5.60, 5.50,
                          5.55, 5.50, 5.40,
                          5.90, 5.85, 5.70,
                          5.45, 5.55, 5.60), ncol = 3)

friedman.test(RoundingTimes) |>
  effectsize()
#> Kendall's W |       95% CI
#> --------------------------
#>        0.33 | [0.08, 1.00]
#>
#> - One-sided CIs: upper bound fixed at (1).

Default to …

mcnemar.test(tab) |>
  effectsize()
#> Cohen's g |       95% CI
#> ------------------------
#>      0.22 | [0.20, 0.24]

Default to …

t.test(mtcars$mpg[mtcars$am == "0"], mtcars$mpg[mtcars$am == "1"],
       alternative = "less") |>
  effectsize()
#> Cohen's d |        95% CI
#> -------------------------
#>     -1.41 | [-Inf, -0.67]
#>
#> - Estimated using un-pooled SD.
#> - One-sided CIs: lower bound fixed at (-Inf).
wilcox.test(mtcars$mpg[mtcars$am == "0"], mtcars$mpg[mtcars$am == "1"],
            alternative = "less") |>
  effectsize()
#> Warning in wilcox.test.default(mtcars$mpg[mtcars$am == "0"],
#> mtcars$mpg[mtcars$am == : cannot compute exact p-value with ties
#> r (rank biserial) |         95% CI
#> ----------------------------------
#>             -0.66 | [-1.00, -0.42]
#>
#> - One-sided CIs: lower bound fixed at (-1).

Other htest objects just pass through to parameters::model_parameters():

cor.test(mtcars$mpg, mtcars$hp, alternative = "greater") |>
  effectsize()
#> Warning: This 'htest' method is not (yet?) supported.
#> Returning 'parameters::model_parameters(model)'.
#> Pearson's product-moment correlation
#>
#> Parameter1 | Parameter2 |     r |        95% CI | t(30) |      p
#> ----------------------------------------------------------------
#> mtcars$mpg |  mtcars$hp | -0.78 | [-0.87, 1.00] | -6.74 | > .999
#>
#> Alternative hypothesis: true correlation is greater than 0
prop.test(3, 10, alternative = "greater") |>
  effectsize()
#> Warning: This 'htest' method is not (yet?) supported.
#> Returning 'parameters::model_parameters(model)'.
#> 1-sample proportions test
#>
#> Proportion |       95% CI | Chi2(1) | Null_value |     p
#> --------------------------------------------------------
#>     30.00% | [0.10, 1.00] |    0.90 |       0.50 | 0.829
#>
#> Alternative hypothesis: true p is greater than 0.5

matrix(c(3, 1, 1, 3), 2) |>
  fisher.test(alternative = "greater") |>
  effectsize() # prints bad CI! <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
#> Warning: This 'htest' method is not (yet?) supported.
#> Returning 'parameters::model_parameters(model)'.
#> Fisher's Exact Test for Count Data
#>
#> Odds.Ratio | CI_low |     p
#> ---------------------------
#>       6.41 |   0.31 | 0.243
#>
#> Alternative hypothesis: true odds ratio is greater than 1

Created on 2021-09-15 by the reprex package (v2.0.1)
It seems to me this somehow contradicts #584 (comment). So, does this now apply to all htest objects? I'm still not sure where to add the …
I think maybe we shouldn't let the user override these defaults, as they match the p-values for the tests. I think the only change you need in …
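That correspondence can be sanity-checked directly. A minimal sketch, re-using the t.test() reprex from above and assuming the returned table exposes the bounds in the usual CI_low/CI_high columns:

``` r
library(effectsize)

# One-sided "less" test: the one-sided 95% CI should exclude 0
# exactly when the one-sided test is significant at alpha = .05
tt <- t.test(mtcars$mpg, mtcars$hp, alternative = "less")
es <- effectsize(tt)

tt$p.value < .05  # is the one-sided test significant?
es$CI_high < 0    # does the matching one-sided upper bound fall below 0?
```

The two logical values should agree, which is the sense in which the default CI side "matches" the test's p-value.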
But I haven't added the functionality that passes down …
My question is: for which of those htests, for which we can have effect sizes from …
See this commit for my start: …
I don't think you need to pass, as … But all supported htest tests can have a different …
as per easystats/effectsize#366
This affects, by default, Phi, Cohen's w, Cramer's V, ANOVA effect sizes, rank Epsilon squared, and Kendall's W, which will now all default to one-sided CIs.
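A two-sided interval can still be requested explicitly for any of these via the alternative argument (the same mechanism used in the t.test() reprex above); a minimal sketch using the contingency table from the examples:

``` r
library(effectsize)

tab <- rbind(c(762, 327, 468),
             c(484, 239, 477),
             c(86, 150, 570))

# New default: one-sided CI with the upper bound fixed at 1
chisq.test(tab) |>
  effectsize()

# Opt back into a two-sided CI per call
chisq.test(tab) |>
  effectsize(alternative = "two.sided")
```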
Old behavior
New behavior
Information about the "side" can be found in the "alternative" attribute:
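For example, assuming the side information is stored as a plain R attribute on the returned table (a sketch, not taken verbatim from the package docs):

``` r
library(effectsize)

es <- effectsize(t.test(mtcars$mpg, mtcars$hp, alternative = "less"))

# Which side the CI is one-sided on ("less", "greater", or "two.sided")
attr(es, "alternative")
```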
Created on 2021-08-18 by the reprex package (v2.0.1)