Type I and Type II Errors
When performing hypothesis testing, the goal is to either reject or fail to reject the null hypothesis. More about hypothesis testing can be found in my previous blog post here. Nevertheless, there is always a chance that researchers reject the null hypothesis when they should not, or fail to reject it when they should. These mistakes are known as Type I and Type II errors, and they are what this blog post is about.
Type I Error (alpha): also known as a False Positive; it occurs when the null hypothesis is rejected even though it should not be. In other words, it is the probability of rejecting the null hypothesis given that the null hypothesis is true (equation 1): alpha = P(reject H0 | H0 is true).
Type II Error (beta): also known as a False Negative; it occurs when we fail to reject the null hypothesis even though it should be rejected. In other words, it is the probability of not rejecting the null hypothesis given that it is false (equation 2): beta = P(fail to reject H0 | H0 is false).
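Both error rates can be estimated empirically with a quick simulation. The sketch below (a hypothetical illustration, not code from the original post) draws many samples with a one-sample t-test: when the true mean matches the null, the fraction of rejections approximates alpha; when it does not, the fraction of non-rejections approximates beta. The chosen effect size (0.5) and sample size (30) are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n, trials = 30, 5000

# Type I error: sample from a population where H0 (mean = 0) is actually true
type1 = 0
for _ in range(trials):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    if p < alpha:
        type1 += 1  # rejected a true null hypothesis

# Type II error: sample from a population where H0 is false (true mean = 0.5)
type2 = 0
for _ in range(trials):
    sample = rng.normal(loc=0.5, scale=1.0, size=n)
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    if p >= alpha:
        type2 += 1  # failed to reject a false null hypothesis

type1_rate = type1 / trials  # should land close to alpha
type2_rate = type2 / trials  # this is an estimate of beta
print(f"Estimated Type I error rate:  {type1_rate:.3f}")
print(f"Estimated Type II error rate: {type2_rate:.3f}")
```

Note how the estimated Type I rate sits near the chosen alpha by construction, while the Type II rate depends on the effect size and the sample size.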
The figure below provides a visual representation of these types of errors. H0 represents the null hypothesis, while H1 represents the alternative hypothesis. When defining the null and alternative hypotheses, the researcher also chooses a significance level (alpha), the threshold below which the null hypothesis is rejected. If, for instance, alpha is 5%, there is a 5% chance of wrongfully rejecting a true null hypothesis. Beta, in turn, is the probability of a Type II error, and it is directly related to power: power = 1 - beta.
Power is the probability of correctly rejecting the null hypothesis when it is false. Choosing a target power amounts to "managing tradeoffs to settle on sample size": larger samples are more likely to detect small effects, but they are also more costly, and at some point the marginal benefit of adding more observations becomes minuscule.
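The diminishing returns of larger samples can be seen directly by estimating power at several sample sizes. This is a minimal sketch under assumed parameters (effect size 0.5, alpha 0.05, one-sample t-test), not an analysis from the original post:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, effect, trials = 0.05, 0.5, 2000

# Estimate power (1 - beta) by simulation for increasing sample sizes
powers = {}
for n in (10, 20, 40, 80, 160):
    rejections = 0
    for _ in range(trials):
        # Draw from a population where H0 (mean = 0) is false
        sample = rng.normal(loc=effect, scale=1.0, size=n)
        _, p = stats.ttest_1samp(sample, popmean=0.0)
        if p < alpha:
            rejections += 1  # correct rejection of the false null
    powers[n] = rejections / trials
    print(f"n = {n:3d}  estimated power = {powers[n]:.2f}")
```

Power climbs quickly at first and then flattens out near 1: doubling the sample from 10 to 20 buys a large gain, while doubling from 80 to 160 buys almost nothing.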