
One-sided CIs #584

Open
mattansb opened this issue Aug 18, 2021 · 14 comments
Labels
Enhancement 💥 Implemented features can be improved or revised

Comments

@mattansb
Member

as per easystats/effectsize#366

This affects, by default, Phi, Cohen's w, Cramer's V, ANOVA effect sizes, rank Epsilon squared, and Kendall's W, which will now all default to one-sided CIs.

library(parameters)
df <- iris
df$Sepal.Big <- ifelse(df$Sepal.Width >= 3, "Yes", "No")

model <- aov(Sepal.Length ~ Sepal.Big, data = df)

Old behavior

model_parameters(
  model,
  omega_squared = "partial",
  eta_squared = "partial",
  epsilon_squared = "partial",
  ci = 0.90
)
#> Parameter | Sum_Squares |  df | Mean_Square |    F |     p |   Omega2 | Omega2 90% CI | Eta2 |  Eta2 90% CI | Epsilon2 | Epsilon2 90% CI
#> ----------------------------------------------------------------------------------------------------------------------------------------
#> Sepal.Big |        1.10 |   1 |        1.10 | 1.61 | 0.207 | 4.04e-03 |  [0.00, 0.04] | 0.01 | [0.00, 0.05] | 4.07e-03 |    [0.00, 0.04]
#> Residuals |      101.07 | 148 |        0.68 |      |       |          |               |      |              |          |                
#>   
#> Anova Table (Type 1 tests)

New behavior

model_parameters(
  model,
  omega_squared = "partial",
  eta_squared = "partial",
  epsilon_squared = "partial",
  ci = 0.95
)
#> Parameter | Sum_Squares |  df | Mean_Square |    F |     p |   Omega2 | Omega2 95% CI | Eta2 |  Eta2 95% CI | Epsilon2 | Epsilon2 95% CI
#> ----------------------------------------------------------------------------------------------------------------------------------------
#> Sepal.Big |        1.10 |   1 |        1.10 | 1.61 | 0.207 | 4.04e-03 |  [0.00, 1.00] | 0.01 | [0.00, 1.00] | 4.07e-03 |    [0.00, 1.00]
#> Residuals |      101.07 | 148 |        0.68 |      |       |          |               |      |              |          |                
#>   
#> Anova Table (Type 1 tests)

Information about the "side" can be found in the "alternative" attribute:

library(effectsize)

cohens_d(mpg ~ am, data = mtcars) |>
  attr("alternative")
#> [1] "two.sided"

cohens_d(mpg ~ am, data = mtcars, alternative = "g") |>
  attr("alternative")
#> [1] "greater"

cohens_d(mpg ~ am, data = mtcars, alternative = "less") |>
  attr("alternative")
#> [1] "less"

Created on 2021-08-18 by the reprex package (v2.0.1)

@mattansb
Member Author

(Might also consider estimating one sided CIs for other parameters.)

@strengejacke
Member

So we need to pass down the alternative argument?

@mattansb
Member Author

For the CIs controlled by effectsize, yes. Most default to "two.sided", but Phi, Cohen's w, Cramer's V, ANOVA effect sizes, rank Epsilon squared, Kendall's W default to "greater".
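The differing defaults can be inspected directly via the "alternative" attribute. A minimal sketch, assuming the current effectsize API (cohens_d and cramers_v as exported):

```r
library(effectsize)

# Cohen's d defaults to a two-sided CI...
attr(cohens_d(mpg ~ am, data = mtcars), "alternative")
#> [1] "two.sided"

# ...while Cramer's V (bounded below at 0) defaults to one-sided
attr(cramers_v(table(mtcars$cyl, mtcars$am)), "alternative")
#> [1] "greater"
```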

@strengejacke strengejacke added the Enhancement 💥 Implemented features can be improved or revised label Aug 18, 2021
@strengejacke
Member

What if an htest object is already computed with 1-sided alternative, and a two-sided alternative is requested via effectsize?

strengejacke added a commit that referenced this issue Aug 18, 2021
@strengejacke
Member

Does it apply for all htest objects?

@mattansb
Member Author

> What if an htest object is already computed with 1-sided alternative, and a two-sided alternative is requested via effectsize?

User request overrides the htest:

tt <- t.test(mtcars$mpg, mtcars$hp, alternative = "less")
effectsize::effectsize(tt)
#> Cohen's d |        95% CI
#> -------------------------
#> -2.60     | [-Inf, -1.91]
#> 
#> - Estimated using un-pooled SD.
#> - One-sided CIs: lower bound fixed at (-Inf).
effectsize::effectsize(tt, alternative = "two.sided")
#> Cohen's d |         95% CI
#> --------------------------
#> -2.60     | [-3.40, -1.79]
#> 
#> - Estimated using un-pooled SD.

Created on 2021-08-19 by the reprex package (v2.0.1)

> Does it apply for all htest objects?

Not all. For the ones supported by effectsize these are:

  • t.test
  • wilcox.test
  • fisher.test
  • cor.test

@strengejacke
Member

wilcox only, or all rank tests (including friedman and kruskal)?

strengejacke added a commit that referenced this issue Sep 15, 2021
@mattansb
Member Author

library(effectsize)

tab <- rbind(c(762, 327, 468),
             c(484, 239, 477), 
             c(86, 150, 570))

Default to alternative="greater":

chisq.test(tab) |>
  effectsize()  
#> Cramer's V |       95% CI
#> -------------------------
#> 0.24       | [0.22, 1.00]
#> 
#> - One-sided CIs: upper bound fixed at (1).

oneway.test(mtcars$mpg ~ mtcars$cyl, var.equal = TRUE) |>
  effectsize()
#> Eta2 |       95% CI
#> -------------------
#> 0.73 | [0.57, 1.00]
#> 
#> - One-sided CIs: upper bound fixed at (1).

kruskal.test(mtcars$mpg ~ mtcars$cyl) |>
  effectsize()
#> Epsilon2 (rank) |       95% CI
#> ------------------------------
#> 0.83            | [0.78, 1.00]
#> 
#> - One-sided CIs: upper bound fixed at (1).

RoundingTimes <- matrix(c(5.40, 5.50, 5.55,
                          5.85, 5.70, 5.75,
                          5.20, 5.60, 5.50,
                          5.55, 5.50, 5.40,
                          5.90, 5.85, 5.70,
                          5.45, 5.55, 5.60), ncol = 3)
                          
friedman.test(RoundingTimes) |>
  effectsize()
#> Kendall's W |       95% CI
#> --------------------------
#> 0.33        | [0.08, 1.00]
#> 
#> - One-sided CIs: upper bound fixed at (1).

Default to alternative="two.sided":

mcnemar.test(tab) |>
  effectsize()
#> Cohen's g |       95% CI
#> ------------------------
#> 0.22      | [0.20, 0.24]

Default to alternative from htest:

t.test(mtcars$mpg[mtcars$am=="0"], mtcars$mpg[mtcars$am=="1"],
       alternative = "less") |>
  effectsize()
#> Cohen's d |        95% CI
#> -------------------------
#> -1.41     | [-Inf, -0.67]
#> 
#> - Estimated using un-pooled SD.
#> - One-sided CIs: lower bound fixed at (-Inf).

wilcox.test(mtcars$mpg[mtcars$am=="0"], mtcars$mpg[mtcars$am=="1"],
            alternative = "less") |>
  effectsize()
#> Warning in wilcox.test.default(mtcars$mpg[mtcars$am == "0"],
#> mtcars$mpg[mtcars$am == : cannot compute exact p-value with ties
#> r (rank biserial) |         95% CI
#> ----------------------------------
#> -0.66             | [-1.00, -0.42]
#> 
#> - One-sided CIs: lower bound fixed at (-1).

Other htest objects are just passed to parameters::model_parameters():

cor.test(mtcars$mpg, mtcars$hp, alternative = "greater") |>
  effectsize()
#> Warning: This 'htest' method is not (yet?) supported.
#> Returning 'parameters::model_parameters(model)'.
#> Pearson's product-moment correlation
#> 
#> Parameter1 | Parameter2 |     r |        95% CI | t(30) |      p
#> ----------------------------------------------------------------
#> mtcars$mpg |  mtcars$hp | -0.78 | [-0.87, 1.00] | -6.74 | > .999
#> 
#> Alternative hypothesis: true correlation is greater than 0

prop.test(3, 10, alternative = "greater") |>
  effectsize()
#> Warning: This 'htest' method is not (yet?) supported.
#> Returning 'parameters::model_parameters(model)'.
#> 1-sample proportions test
#> 
#> Proportion |       95% CI | Chi2(1) | Null_value |     p
#> --------------------------------------------------------
#> 30.00%     | [0.10, 1.00] |    0.90 |       0.50 | 0.829
#> 
#> Alternative hypothesis: true p is greater than 0.5

matrix(c(3, 1, 1, 3), 2) |>
  fisher.test(alternative = "greater") |>
  effectsize() # prints bad CI! <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
#> Warning: This 'htest' method is not (yet?) supported.
#> Returning 'parameters::model_parameters(model)'.
#> Fisher's Exact Test for Count Data
#> 
#> Odds.Ratio | CI_low |     p
#> ---------------------------
#> 6.41       |   0.31 | 0.243
#> 
#> Alternative hypothesis: true odds ratio is greater than 1

Created on 2021-09-15 by the reprex package (v2.0.1)

@strengejacke
Member

It seems to me this somehow contradicts #584 (comment).

So, does this now apply to all htest objects? I'm still not sure where to add the alternative argument in the htest-methods for model_parameters() and where not...

@mattansb
Member Author

I think maybe we shouldn't let the user override these defaults, as they match the p-values for the tests.

I think the only change you need in parameters is to add a footnote about one-sided CIs (when alternative isn't "two.sided"). That should be enough.
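A hypothetical sketch of that footnote logic, assuming effect-size objects carry the "alternative" attribute shown above (the message wording here is illustrative, not the actual parameters output):

```r
library(effectsize)

# ANOVA effect sizes default to one-sided CIs ("greater")
es <- eta_squared(aov(Sepal.Length ~ Species, data = iris))
alt <- attr(es, "alternative")

# Only add a footnote when the CI is one-sided
if (!is.null(alt) && alt != "two.sided") {
  side <- if (alt == "greater") "upper bound fixed" else "lower bound fixed"
  message("One-sided CIs: ", side, ".")
}
```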

@strengejacke
Member

But I haven't added the functionality that passes down alternative to effectsize yet, because I thought not all htest objects can handle alternative, and that passing it would result in an error?

@strengejacke
Member

My question is: for which of the htest objects that get effect sizes via model_parameters() should I also pass alternative to effectsize::effectsize()?

@strengejacke
Member

see this commit for my start:
4caee74

@mattansb
Member Author

I don't think you need to pass it, as effectsize is smart enough to pick the correct alternative by default to match the htest on its own.

But all supported htest objects can take a different alternative - it won't fail for any of them.
