High-Yield Biostatistics

jakstat33

Senior Member
10+ Year Member
15+ Year Member
Feb 6, 2004
SW
    Fadem (BRS Behavioral Science, 3rd edition, pp. 255-256) agrees with High-Yield.

    Originally posted by nuclearrabbit77
    On pp. 37-38 of High-Yield Biostatistics they define a type I (alpha) error as a false negative and a type II (beta) error as a false positive.

    However, I checked some other sources, which say it's the opposite.
     

    Sohalia

    namaste
    7+ Year Member
    15+ Year Member
    Jul 14, 2002
    Chicago, IL
      Hello,
      I'm in my 2nd semester of a 1-year epidemiology MPH at UNC-CH.

      Type I error (alpha): usually set at 0.05, to indicate that you accept a 5% chance of rejecting the null hypothesis when in fact the null hypothesis is correct (a false positive).

      Type II error (beta): usually set at 0.20, to indicate that you accept a 20% chance of failing to reject the null hypothesis when in fact the null hypothesis is incorrect (a false negative).

      The type II threshold is usually set so much higher than the type I threshold in study design because it is considered a more serious error to think you have a significant result when in fact you don't than to think your result is not significant when in fact it is. Both can be set to whatever the researcher wants, but by convention they are usually set as above.

      And (1 - beta) = the power of the study, so studies are generally designed to have 80% power to detect "significant results," with "significant" defined differently for each study (e.g., a 10% change in the intervention group vs. the placebo group).
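
      If it helps to see those numbers fall out of an experiment, here is a minimal simulation sketch (my own illustration, not from any of the books; it assumes a two-sample t-test with 50 subjects per group and a hypothetical true shift of 0.5 SD in the treatment group):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n, trials, alpha = 50, 5000, 0.05

      # H0 true: both groups come from the same distribution, so every
      # rejection is a type I error; the rejection rate should be ~alpha.
      type1 = sum(
          stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
          for _ in range(trials)
      ) / trials

      # H0 false: the treatment group really is shifted by 0.5 SD, so every
      # rejection is a correct call; the rejection rate estimates the power.
      power = sum(
          stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.5, 1, n)).pvalue < alpha
          for _ in range(trials)
      ) / trials

      print(f"estimated type I error rate: {type1:.3f}")  # comes out near 0.05
      print(f"estimated power (1 - beta):  {power:.3f}")  # near 0.70 for this n

      With these made-up numbers the power lands around 0.70; in a real design you would pick the sample size that pushes it up to the conventional 0.80.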

      Ok, yeah, the above was off the top of my head, but I double-checked the error thing in Gordis's book "Epidemiology" (because I'm all anal like that), so I think you're good to go with that.
      :)

      Hey there again:
      I actually looked at the explanation in High-Yield, and they are correct (and in line with what I was saying), except that they are defining things relative to the null hypothesis--i.e., to them, "false negative" means that you are incorrectly saying that the null hypothesis is not true. They do the same for "false positive": you are incorrectly saying that the null hypothesis is true.

      To me, it makes more sense to define things by what your result is vs. reality: if you find a significant result when the null hypothesis is true, I would call that a false positive, and if you find an insignificant result when the null hypothesis is not true, I would call that a false negative.

      The theory behind what the High-Yield biostats book says here is correct, though; it's just a difference in what your reference point is as you define the relationships.

      Hope that wasn't too confusing. Bottom line: go with whatever makes sense to you. I'm a visual person, so drawing out that 2x2 table on page 36 (sketched below) is what I would do with regard to answering questions on this.
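
      Since page 36 itself isn't reproduced in the thread, here is the usual form of that 2x2 table (my reconstruction, not a quote from the book):

                                H0 is true              H0 is false
        Reject H0               Type I error (alpha)    Correct (power = 1 - beta)
        Fail to reject H0       Correct                 Type II error (beta)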
       

      Scott_L

      Senior Member
      10+ Year Member
      15+ Year Member
      Jan 16, 2001
      Baltimore, MD
      LWW.com
        Here is what the author had to say...

        There's a little more to this than meets the eye.

        Conventionally, most authors call Type I errors false negatives and
        Type II errors false positives; but I do see some authors who define
        them the other way round.

        I think the reason for this is that in hypothesis testing we are,
        strictly speaking, testing the null hypothesis, i.e., the hypothesis
        that there is no difference, and when we reach a positive conclusion
        about the null hypothesis, we are reaching a negative conclusion about
        the phenomenon we are studying.

        For example, let's say we are testing a drug vs. a placebo.

        Let's say that we find that there is no difference between the drug and
        the placebo; the null hypothesis is that there is no difference, so we
        accept the null hypothesis.

        If in reality this is a mistake, we have made a false positive (type
        II) error (we have reached a positive conclusion, but it is a false
        one).

        But some authors would say that a study which reaches the conclusion
        that there IS a significant difference is a "positive" study, and one
        that concludes that there is no difference (e.g., between a drug and a
        placebo) is a "negative" study. (This sounds like common sense, but you
        have to remember that, strictly speaking, in statistics we are testing
        a null hypothesis, not a drug.)

        An example of this is shown on the American College of Physicians
        online primer at

        http://www.acponline.org/journals/ecp/novdec01/primer_errors.htm (and
        the table it is linked to)

        where they say "A type I error is analogous to a false-positive result
        during diagnostic testing: A difference is shown when in 'truth' there
        is none." But this is only because they are taking the "common sense"
        idea that a "positive" result is one that shows a "difference"; and if
        you look at the table that the page links to, you see that a type I
        error is one where you conclude there is a "difference" when in fact
        there is none, which is the same as what I am saying, except that they
        confuse us by referring to this as a "positive" study!

        It sure is confusing. But basically everyone agrees that a type I
        error is rejecting the null hypothesis when it is true, and a type II
        error is accepting a null hypothesis that is false; the "false
        positive" or "false negative" terminology really depends on whether
        you are referring to the null hypothesis or to the alternative
        hypothesis.
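
        To put the two naming conventions side by side (summarizing the posts above):

        - Type I (alpha) error: rejecting the null hypothesis when it is true. Call it a "false positive" if a "positive" study is one that finds a difference; call it a "false negative" if "positive"/"negative" refer to your conclusion about the null hypothesis itself (the High-Yield usage).
        - Type II (beta) error: failing to reject the null hypothesis when it is false. It is a "false negative" under the first convention and a "false positive" under the second.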
         
