Bayes factor vs P value



I am trying to understand the Bayes factor (BF). I believe it is like a likelihood ratio of two hypotheses: if the BF is 5, it means $H_1$ is 5 times more likely than $H_0$. A value of 3-10 is said to indicate moderate evidence, while >10 indicates strong evidence.

However, for the P value, 0.05 is traditionally taken as the cut-off. At this P value, the $H_1/H_0$ likelihood ratio should be 95/5, i.e. 19.

So why is a cut-off of >3 taken for the BF while a cut-off of >19 is taken for P values? These values are not anywhere close to each other.

I may be missing something very basic, since I am a beginner in this area.
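To make the mismatch concrete, here is a toy sketch (the coin-flip numbers are my own, not from the question): for the same data, the Bayes factor of two point hypotheses and the one-sided p-value are computed quite differently and need not land anywhere near each other.

```python
# Toy example: 15 heads in 20 flips, H0: p = 0.5 vs H1: p = 0.75.
# For two point hypotheses the Bayes factor reduces to a plain
# likelihood ratio; the p-value instead sums a tail of the H0 pmf.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 20, 15

# Bayes factor in favor of H1 over H0 (likelihood ratio at the observed data)
bf_10 = binom_pmf(k, n, 0.75) / binom_pmf(k, n, 0.5)   # ≈ 13.68

# One-sided p-value under H0: probability of 15 or more heads
p_value = sum(binom_pmf(j, n, 0.5) for j in range(k, n + 1))  # ≈ 0.0207

print(round(bf_10, 2), round(p_value, 4))
```

Note that the two quantities answer different questions, which is the point the answers below elaborate on.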










hypothesis-testing bayesian p-value

asked 9 hours ago by rnso; edited 8 hours ago
  • I am uncomfortable with saying "if BF is $5$, it means $H_1$ is $5$ times more likely than $H_0$". The Bayes factor may be a marginal likelihood ratio, but it is not a probability ratio or odds ratio, and needs to be combined with a prior to be useful. – Henry, 2 hours ago

  • If we do not have any particular prior information, then what can we say about the meaning of the BF? – rnso, 1 hour ago
2 Answers
A few things:

The BF gives you evidence in favor of a hypothesis, while a frequentist hypothesis test gives you evidence against a (null) hypothesis. So it's kind of "apples to oranges."

These two procedures, despite the difference in interpretation, may lead to different decisions. For example, a BF might reject while a frequentist hypothesis test doesn't, or vice versa. This problem is often referred to as the Jeffreys-Lindley paradox; there have been many posts on this site about it.

"At this P value, the $H_1/H_0$ likelihood ratio should be 95/5, i.e. 19." No, this isn't true, because roughly $p(y \mid H_1) \neq 1 - p(y \mid H_0)$. Computing a p-value and performing a frequentist test, at a minimum, does not require you to have any idea about $p(y \mid H_1)$. Also, p-values are often integrals/sums of densities/pmfs, while a BF doesn't integrate over the data sample space.
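The last point can be checked in a couple of lines. With toy binomial numbers of my own choosing, the likelihood of the data under $H_1$ is nothing like one minus its likelihood under $H_0$, so the questioner's "95/5" reasoning cannot work:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 20, 15                    # 15 heads in 20 flips (made-up data)
p_y_h0 = binom_pmf(k, n, 0.5)    # likelihood of the data under H0: p = 0.5
p_y_h1 = binom_pmf(k, n, 0.75)   # likelihood of the data under H1: p = 0.75

# The two likelihoods are not complementary: 1 - p(y|H0) ≈ 0.985,
# while p(y|H1) ≈ 0.202. They need not sum to anything in particular.
print(round(1 - p_y_h0, 3), round(p_y_h1, 3))
```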






– Taylor (answered 8 hours ago; edited 6 hours ago by Xi'an)
  • What would you say is the approximate Bayes factor value corresponding to P = 0.05? – rnso, 5 hours ago

  • Taylor is saying the threshold for evidence against one hypothesis ($H_0$) can't be directly compared to the threshold of evidence for another hypothesis ($H_1$), not even approximately. When you stop believing in a null effect need not relate to when you start believing in an alternative. This is exactly why the $p$-value shouldn't be interpreted as $1 - (\text{belief in } H_1)$. – Frans Rodenburg, 5 hours ago

  • Maybe this can be clarifying: en.wikipedia.org/wiki/Misunderstandings_of_p-values. The frequentist $p$-value is not a measure of evidence for anything. – Frans Rodenburg, 5 hours ago

  • Sorry, last comment: the reason you can't see it as evidence in favor of $H_1$ is that it is the chance of observing this large an effect size if $H_0$ were true. If $H_0$ is indeed true, the $p$-value should be uniformly random, so its value has no bearing on the probability of $H_1$. This subtlety in interpretation is, by the way, one of the reasons $p$-values see so much misuse. – Frans Rodenburg, 5 hours ago

  • @benxyzzy: the distribution of a $p$-value is only uniform under the null hypothesis, not under the alternative, where it is heavily skewed towards zero. – Xi'an, 36 mins ago

The Bayes factor $B_{01}$ can be turned into a probability under equal weights as
$$P_{01}=\frac{1}{1+\frac{1}{B_{01}}}$$
but this does not make them comparable with a $p$-value, since

  1. $P_{01}$ is a probability in the parameter space, not in the sampling space;

  2. its value and range depend on the choice of the prior measure, so they are relative rather than absolute (and Taylor's mention of the Lindley-Jeffreys paradox is appropriate at this stage);

  3. both $B_{01}$ and $P_{01}$ contain a penalty for complexity (Occam's razor) by integrating out over the parameter space.

If you wish to consider a Bayesian equivalent to the $p$-value, the posterior predictive $p$-value (Meng, 1994) should be investigated:
$$Q_{01}=\mathbb{P}\left(B_{01}(X)\le B_{01}(x^{\text{obs}})\right)$$
where $x^{\text{obs}}$ denotes the observation and $X$ is distributed from the posterior predictive
$$X\sim \int_\Theta f(x\mid\theta)\,\pi(\theta\mid x^{\text{obs}})\,\mathrm{d}\theta,$$
but this does not imply that the same "default" criteria for rejection and significance should apply to this object.
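As a quick numeric illustration of the equal-weights conversion above (the formula is from this answer; the example values are mine): the conventional BF threshold of 3 corresponds to a posterior probability of only 0.75, while matching the questioner's 95/5 odds would require a BF of 19.

```python
# Convert a Bayes factor B (in favor of a hypothesis) into that hypothesis's
# posterior probability under equal prior weights: P = 1 / (1 + 1/B).
def bf_to_prob(b):
    return 1.0 / (1.0 + 1.0 / b)

print(round(bf_to_prob(3), 2))    # 0.75 — "moderate evidence" threshold
print(round(bf_to_prob(19), 2))   # 0.95 — the 95/5 odds from the question
```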







    Your Answer








    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "65"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader:
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    ,
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );













    draft saved

    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstats.stackexchange.com%2fquestions%2f404933%2fbayes-factor-vs-p-value%23new-answer', 'question_page');

    );

    Post as a guest















    Required, but never shown

























    2 Answers
    2






    active

    oldest

    votes








    2 Answers
    2






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    7












    $begingroup$

    A few things:



    The BF gives you evidence in favor of a hypothesis, while a frequentist hypothesis test gives you evidence against a (null) hypothesis. So it's kind of "apples to oranges."



    These two procedures, despite the difference in interpretations, may lead to different decisions. For example, a BF might reject while a frequentist hypothesis test doesn't, or vice versa. This problem is often referred to as the Jeffreys-Lindley's paradox. There have been many posts on this site about this; see e.g. here, and here.



    "At this P value, H1/H0 likelihood should be 95/5 or 19." No, this isn't true because, roughly $p(y mid H_1) neq 1- p(y mid H_0)$. Computing a p-value and performing a frequentist test, at a minimum, does not require you to have any idea about $p(y mid H_1)$. Also, p-values are often integrals/sums of densities/pmfs, while a BF doesn't integrate over the data sample space.






    share|cite|improve this answer











    $endgroup$












    • $begingroup$
      What would you say is approximate Bayes Factor value corresponding to P=0.05?
      $endgroup$
      – rnso
      5 hours ago






    • 2




      $begingroup$
      Taylor is saying the threshold for evidence against one hypothesis ($textH_0$) can't be directly compared to the threshold of evidence for another hypothesis ($textH_1$), also not approximately. When you stop believing in a null-effect need not relate to when you start believing in an alternative. This is exactly why the $p$-value shouldn't be interpreted as $1 - (textbelief in H_1)$
      $endgroup$
      – Frans Rodenburg
      5 hours ago







    • 1




      $begingroup$
      Maybe this can be clarifying: en.wikipedia.org/wiki/Misunderstandings_of_p-values The frequentist $p$-value is not a measure of evidence for anything.
      $endgroup$
      – Frans Rodenburg
      5 hours ago






    • 2




      $begingroup$
      Sorry, last comment: The reason you can't see it as evidence in favor of $textH_1$ is that it is the chance of observing this large an effect size if $textH_0$ were true. If $textH_0$ is indeed true, the $p$-value should be uniformly random, so its value has no meaning on the probability of $textH_1$. This subtlety in interpretation is by the way one of the reasons $p$-values see so much misuse.
      $endgroup$
      – Frans Rodenburg
      5 hours ago






    • 1




      $begingroup$
      @benxyzzy: the distribution of a $p$-value is only uniform under the null hypothesis, not under the alternative where it is heavily skewed towards zero.
      $endgroup$
      – Xi'an
      36 mins ago















    7












    $begingroup$

    A few things:



    The BF gives you evidence in favor of a hypothesis, while a frequentist hypothesis test gives you evidence against a (null) hypothesis. So it's kind of "apples to oranges."



    These two procedures, despite the difference in interpretations, may lead to different decisions. For example, a BF might reject while a frequentist hypothesis test doesn't, or vice versa. This problem is often referred to as the Jeffreys-Lindley's paradox. There have been many posts on this site about this; see e.g. here, and here.



    "At this P value, H1/H0 likelihood should be 95/5 or 19." No, this isn't true because, roughly $p(y mid H_1) neq 1- p(y mid H_0)$. Computing a p-value and performing a frequentist test, at a minimum, does not require you to have any idea about $p(y mid H_1)$. Also, p-values are often integrals/sums of densities/pmfs, while a BF doesn't integrate over the data sample space.






    share|cite|improve this answer











    $endgroup$












    • $begingroup$
      What would you say is approximate Bayes Factor value corresponding to P=0.05?
      $endgroup$
      – rnso
      5 hours ago






    • 2




      $begingroup$
      Taylor is saying the threshold for evidence against one hypothesis ($textH_0$) can't be directly compared to the threshold of evidence for another hypothesis ($textH_1$), also not approximately. When you stop believing in a null-effect need not relate to when you start believing in an alternative. This is exactly why the $p$-value shouldn't be interpreted as $1 - (textbelief in H_1)$
      $endgroup$
      – Frans Rodenburg
      5 hours ago







    • 1




      $begingroup$
      Maybe this can be clarifying: en.wikipedia.org/wiki/Misunderstandings_of_p-values The frequentist $p$-value is not a measure of evidence for anything.
      $endgroup$
      – Frans Rodenburg
      5 hours ago






    • 2




      $begingroup$
      Sorry, last comment: The reason you can't see it as evidence in favor of $textH_1$ is that it is the chance of observing this large an effect size if $textH_0$ were true. If $textH_0$ is indeed true, the $p$-value should be uniformly random, so its value has no meaning on the probability of $textH_1$. This subtlety in interpretation is by the way one of the reasons $p$-values see so much misuse.
      $endgroup$
      – Frans Rodenburg
      5 hours ago






    • 1




      $begingroup$
      @benxyzzy: the distribution of a $p$-value is only uniform under the null hypothesis, not under the alternative where it is heavily skewed towards zero.
      $endgroup$
      – Xi'an
      36 mins ago













    7












    7








    7





    $begingroup$

    A few things:



    The BF gives you evidence in favor of a hypothesis, while a frequentist hypothesis test gives you evidence against a (null) hypothesis. So it's kind of "apples to oranges."



    These two procedures, despite the difference in interpretations, may lead to different decisions. For example, a BF might reject while a frequentist hypothesis test doesn't, or vice versa. This problem is often referred to as the Jeffreys-Lindley's paradox. There have been many posts on this site about this; see e.g. here, and here.



    "At this P value, H1/H0 likelihood should be 95/5 or 19." No, this isn't true because, roughly $p(y mid H_1) neq 1- p(y mid H_0)$. Computing a p-value and performing a frequentist test, at a minimum, does not require you to have any idea about $p(y mid H_1)$. Also, p-values are often integrals/sums of densities/pmfs, while a BF doesn't integrate over the data sample space.






    share|cite|improve this answer











    $endgroup$



    A few things:



    The BF gives you evidence in favor of a hypothesis, while a frequentist hypothesis test gives you evidence against a (null) hypothesis. So it's kind of "apples to oranges."



    These two procedures, despite the difference in interpretations, may lead to different decisions. For example, a BF might reject while a frequentist hypothesis test doesn't, or vice versa. This problem is often referred to as the Jeffreys-Lindley's paradox. There have been many posts on this site about this; see e.g. here, and here.



    "At this P value, H1/H0 likelihood should be 95/5 or 19." No, this isn't true because, roughly $p(y mid H_1) neq 1- p(y mid H_0)$. Computing a p-value and performing a frequentist test, at a minimum, does not require you to have any idea about $p(y mid H_1)$. Also, p-values are often integrals/sums of densities/pmfs, while a BF doesn't integrate over the data sample space.







    share|cite|improve this answer














    share|cite|improve this answer



    share|cite|improve this answer








    edited 6 hours ago









    Xi'an

    59.9k897369




    59.9k897369










    answered 8 hours ago









    TaylorTaylor

    12.8k21946




    12.8k21946











    • $begingroup$
      What would you say is approximate Bayes Factor value corresponding to P=0.05?
      $endgroup$
      – rnso
      5 hours ago






    • 2




      $begingroup$
      Taylor is saying the threshold for evidence against one hypothesis ($textH_0$) can't be directly compared to the threshold of evidence for another hypothesis ($textH_1$), also not approximately. When you stop believing in a null-effect need not relate to when you start believing in an alternative. This is exactly why the $p$-value shouldn't be interpreted as $1 - (textbelief in H_1)$
      $endgroup$
      – Frans Rodenburg
      5 hours ago







    • 1




      $begingroup$
      Maybe this can be clarifying: en.wikipedia.org/wiki/Misunderstandings_of_p-values The frequentist $p$-value is not a measure of evidence for anything.
      $endgroup$
      – Frans Rodenburg
      5 hours ago






    • 2




      $begingroup$
      Sorry, last comment: The reason you can't see it as evidence in favor of $textH_1$ is that it is the chance of observing this large an effect size if $textH_0$ were true. If $textH_0$ is indeed true, the $p$-value should be uniformly random, so its value has no meaning on the probability of $textH_1$. This subtlety in interpretation is by the way one of the reasons $p$-values see so much misuse.
      $endgroup$
      – Frans Rodenburg
      5 hours ago






    • 1




      $begingroup$
      @benxyzzy: the distribution of a $p$-value is only uniform under the null hypothesis, not under the alternative where it is heavily skewed towards zero.
      $endgroup$
      – Xi'an
      36 mins ago
















    • $begingroup$
      What would you say is approximate Bayes Factor value corresponding to P=0.05?
      $endgroup$
      – rnso
      5 hours ago






    • 2




      $begingroup$
      Taylor is saying the threshold for evidence against one hypothesis ($textH_0$) can't be directly compared to the threshold of evidence for another hypothesis ($textH_1$), also not approximately. When you stop believing in a null-effect need not relate to when you start believing in an alternative. This is exactly why the $p$-value shouldn't be interpreted as $1 - (textbelief in H_1)$
      $endgroup$
      – Frans Rodenburg
      5 hours ago







    • 1




      $begingroup$
      Maybe this can be clarifying: en.wikipedia.org/wiki/Misunderstandings_of_p-values The frequentist $p$-value is not a measure of evidence for anything.
      $endgroup$
      – Frans Rodenburg
      5 hours ago






    • 2




      $begingroup$
      Sorry, last comment: The reason you can't see it as evidence in favor of $textH_1$ is that it is the chance of observing this large an effect size if $textH_0$ were true. If $textH_0$ is indeed true, the $p$-value should be uniformly random, so its value has no meaning on the probability of $textH_1$. This subtlety in interpretation is by the way one of the reasons $p$-values see so much misuse.
      $endgroup$
      – Frans Rodenburg
      5 hours ago






    • 1




      $begingroup$
      @benxyzzy: the distribution of a $p$-value is only uniform under the null hypothesis, not under the alternative where it is heavily skewed towards zero.
      $endgroup$
      – Xi'an
      36 mins ago















    $begingroup$
    What would you say is approximate Bayes Factor value corresponding to P=0.05?
    $endgroup$
    – rnso
    5 hours ago




    $begingroup$
    What would you say is approximate Bayes Factor value corresponding to P=0.05?
    $endgroup$
    – rnso
    5 hours ago




    2




    2




    $begingroup$
    Taylor is saying the threshold for evidence against one hypothesis ($textH_0$) can't be directly compared to the threshold of evidence for another hypothesis ($textH_1$), also not approximately. When you stop believing in a null-effect need not relate to when you start believing in an alternative. This is exactly why the $p$-value shouldn't be interpreted as $1 - (textbelief in H_1)$
    $endgroup$
    – Frans Rodenburg
    5 hours ago





    $begingroup$
    Taylor is saying the threshold for evidence against one hypothesis ($textH_0$) can't be directly compared to the threshold of evidence for another hypothesis ($textH_1$), also not approximately. When you stop believing in a null-effect need not relate to when you start believing in an alternative. This is exactly why the $p$-value shouldn't be interpreted as $1 - (textbelief in H_1)$
    $endgroup$
    – Frans Rodenburg
    5 hours ago





    1




    1




    $begingroup$
    Maybe this can be clarifying: en.wikipedia.org/wiki/Misunderstandings_of_p-values The frequentist $p$-value is not a measure of evidence for anything.
    $endgroup$
    – Frans Rodenburg
    5 hours ago




    $begingroup$
    Maybe this can be clarifying: en.wikipedia.org/wiki/Misunderstandings_of_p-values The frequentist $p$-value is not a measure of evidence for anything.
    $endgroup$
    – Frans Rodenburg
    5 hours ago




    2




    2




    $begingroup$
    Sorry, last comment: The reason you can't see it as evidence in favor of $textH_1$ is that it is the chance of observing this large an effect size if $textH_0$ were true. If $textH_0$ is indeed true, the $p$-value should be uniformly random, so its value has no meaning on the probability of $textH_1$. This subtlety in interpretation is by the way one of the reasons $p$-values see so much misuse.
    $endgroup$
    – Frans Rodenburg
    5 hours ago




    $begingroup$
    Sorry, last comment: The reason you can't see it as evidence in favor of $textH_1$ is that it is the chance of observing this large an effect size if $textH_0$ were true. If $textH_0$ is indeed true, the $p$-value should be uniformly random, so its value has no meaning on the probability of $textH_1$. This subtlety in interpretation is by the way one of the reasons $p$-values see so much misuse.
    $endgroup$
    – Frans Rodenburg
    5 hours ago




    1




    1




    $begingroup$
    @benxyzzy: the distribution of a $p$-value is only uniform under the null hypothesis, not under the alternative where it is heavily skewed towards zero.
    $endgroup$
    – Xi'an
    36 mins ago




    $begingroup$
    @benxyzzy: the distribution of a $p$-value is only uniform under the null hypothesis, not under the alternative where it is heavily skewed towards zero.
    $endgroup$
    – Xi'an
    36 mins ago













    5












    $begingroup$

    The Bayes factor $B_01$ can be turned into a probability under equal weights as
    $$P_01=frac11+frac1large B_01$$but this does not make them comparable with a $p$-value since




    1. $P_01$ is a probability in the parameter space, not in the sampling space

    2. its value and range depend on the choice of the prior measure, they are thus relative rather than absolute (and Taylor's mention of the Lindley-Jeffreys paradox is appropriate at this stage)

    3. both $B_01$ and $P_01$ contain a penalty for complexity (Occam's razor) by integrating out over the parameter space

    If you wish to consider a Bayesian equivalent to the $p$-value, the posterior predictive $p$-value (Meng, 1994) should be investigated
    $$Q_01=mathbb P(B_01(X)le B_01(x^textobs))$$
    where $x^textobs$ denotes the observation and $X$ is distributed from the posterior predictive
    $$Xsim int_Theta f(x|theta) pi(theta|x^textobs),textdtheta$$
    but this does not imply that the same "default" criteria for rejection and significance should apply to this object.






    edited 5 hours ago

    answered 6 hours ago by Xi'an