p-value of a one-tailed test

This post is devoted to the calculation of the p-value of both the left-tailed and the right-tailed z-test. The basic definition of the p-value is in the post about the p-value of the two-tailed test.

Left-tailed test

We will start with the left-tailed test and we will use the same assignment and numbers as in the previous post. The level of significance of the test is \alpha = 5 % and the value of the test statistic is Z = -1.9497 .

In the case of the left-tailed z-test, the p-value is the area under the probability density function from - \infty to the value of the statistic, in our case to -1.9497 .

The function which gives us the area under the curve up to any value is the cumulative distribution function. We will use Microsoft Excel to get the p-value, more specifically the function NORM.S.DIST (which we have used for the p-value of the two-tailed test). The only difference is that now we do not multiply the result by 2, so we write

=NORM.S.DIST(-1.9497,TRUE) 

to get 0.0256 as a result. The value is lower than 0.05, which confirms our decision to reject the null hypothesis.
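
If you prefer code to a spreadsheet, the same number can be reproduced in Python; a minimal sketch, assuming the scipy library is available:

from scipy.stats import norm

# p-value of the left-tailed test: the area under the PDF from -inf to Z,
# i.e. the value of the cumulative distribution function at Z
z = -1.9497
p_value = norm.cdf(z)
print(round(p_value, 4))  # 0.0256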

We also know that for any level of significance lower than 0.0256 (e.g. 0.01) the null hypothesis would not be rejected. This is the reason why we must always state the level of significance together with the statement about rejecting the null hypothesis.

Right-tailed test

We keep the level of significance at \alpha = 5 % and the value of the test statistic is Z = -0.2269 . It is not surprising that the p-value is now the area under the curve of the probability density function from the value of the statistic to \infty (i.e. to the right). You can see it in the figure below.

Getting the exact value is a little different now. The distribution function always measures the area to the left. We know that the total area under the curve equals 1. So we get the “remaining” area by subtracting the value of the distribution function from 1. In Microsoft Excel, we will use the formula

=1-NORM.S.DIST(-0.2269,TRUE)

to get the result 0.5897. The value is higher than \alpha but it is also higher than 0.5. That makes sense because it contains the whole area from 0 to + \infty (which is 0.5) plus something more.

If we did not subtract the value of the distribution function from 1, we would get “only” 0.4102, which is the “remaining” white area. These figures may come in handy when checking the results of calculations.
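
The same computation in Python, again as a small sketch with scipy (norm.sf is the survival function, exactly 1 minus the distribution function):

from scipy.stats import norm

# p-value of the right-tailed test: the area from Z to +inf
z = -0.2269
p_value = 1 - norm.cdf(z)   # equivalently: norm.sf(z)
print(round(p_value, 4))  # 0.5897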

One-tailed z-test

In the first article about the z-test, we showed how the two-tailed test works. It is also possible to construct a one-tailed test. The one-tailed test differs in the sign used in the alternative hypothesis. The two-tailed test had the inequality sign (\neq ) in the alternative hypothesis formula. For a one-tailed test, there are two options:

  • left-tailed test, which has the less than (< ) sign,
  • right-tailed test, which has the greater than (> ) sign.

The formula for the test statistic stays the same, whereas the critical region is now different. It lies only on one side of the x axis. More specifically, the whole critical region is on the left side for the left-tailed test or on the right side for the right-tailed test (pretty straightforward if you ask me). The computation of the p-value also changes.

Left-tailed test

Let’s start with the left-tailed test. Our assignment stays, with a small modification. We will assume that the machine does not allow setting the length of the component to more than 190 mm; it can, however, be set to a shorter length. So we only need to check whether the components are on average shorter or not. We now have a new set of 20 observations.

189.3828, 188.1783, 189.2116, 190.6886, 190.0598, 188.4774, 189.5478, 189.1187, 188.5021, 189.9077,
191.3365, 190.1061, 189.76, 189.1427, 189.8124, 189.1332, 190.7368, 189.4031, 190.099, 189.5483

As we said, the null hypothesis stays and the alternative has the < sign, so:

  • H_0: \mu = 190 ,
  • H_1: \mu < 190 .

The critical region consists of only one part now, as you can see in the figure below. Because the area under the probability density function must still equal the significance level, the border value is closer to zero. You can compare the figure with the figure for the two-tailed test.

The formula for the critical region is

W = ( - \infty, u_{\alpha} \rangle \, .

Please note that \alpha is not divided by 2. We will keep the level of significance at 5 %, so the critical region for our example is

W = ( - \infty, - 1.6449 \rangle \, .

We can get the value -1.6449 from a statistical table. Please note we use the value for 0.95 because the normal distribution is symmetric; we only add the minus sign.
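
The same quantile can be obtained from software; a short Python sketch using the quantile (percent point) function from scipy:

from scipy.stats import norm

# u_{0.05}: the 5 % quantile of the standard normal distribution
print(round(norm.ppf(0.05), 4))   # -1.6449
# thanks to symmetry, it equals minus the 95 % quantile
print(round(-norm.ppf(0.95), 4))  # -1.6449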

The value of the test statistic is:

Z =  \frac{ \bar{x} - \mu_0}{\sigma} \cdot \sqrt{n} =  \frac{189.61 - 190}{0.9} \cdot \sqrt{20} = -1.9497

and it lies in the critical region, so we reject the null hypothesis for \alpha = 5 % . The conclusion of the test is that the expected value of the length of the component is lower than 190 mm, i.e. the machine was set incorrectly.
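
The whole left-tailed test can be reproduced in a few lines of Python; a sketch using only the standard library, with the 20 observations listed above:

from math import sqrt
from statistics import mean

data = [189.3828, 188.1783, 189.2116, 190.6886, 190.0598,
        188.4774, 189.5478, 189.1187, 188.5021, 189.9077,
        191.3365, 190.1061, 189.76, 189.1427, 189.8124,
        189.1332, 190.7368, 189.4031, 190.099, 189.5483]

mu_0, sigma = 190, 0.9

# test statistic of the z-test
z = (mean(data) - mu_0) / sigma * sqrt(len(data))
print(round(z, 4))   # about -1.9497 (the last digit may differ, the listed data are rounded)
print(z <= -1.6449)  # True: Z falls into the critical region, H_0 is rejected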

Right-tailed test

Now we will go through the last variant, which is called the right-tailed test. We will work with the same example with one modification: we assume the machine cannot be set to produce components shorter than 190 mm, but it may by mistake be set to produce longer ones. We will use a new sample of data.

189.7755, 189.2872, 189.8255, 189.4385, 190.1141, 189.4514, 190.7601, 189.7654, 190.0965, 190.0603,
189.9051, 188.8833, 190.8449, 190.2563, 190.8725, 189.3535, 191.1673, 189.1358, 189.3755, 190.7177

The new hypotheses are:

  • H_0: \mu = 190 ,
  • H_1: \mu > 190 .

It is not surprising that the critical region now lies in the right part of the x axis.

The formula for the critical region is

W = \langle u_{1 - \alpha}, \infty) \, ,

and if we stay on \alpha = 5 % we get

W = \langle 1.6449, \infty) \, .

The value of the test statistic is now

Z =  \frac{ \bar{x} - \mu_0}{\sigma} \cdot \sqrt{n} =  \frac{189.95 - 190}{0.9} \cdot \sqrt{20} = -0.2269

and it does not lie in the critical region, so we do not reject the null hypothesis for \alpha = 5 %.
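
Again, a short Python sketch of the whole computation, under the same assumptions (\sigma = 0.9, \mu_0 = 190):

from math import sqrt
from statistics import mean

data = [189.7755, 189.2872, 189.8255, 189.4385, 190.1141,
        189.4514, 190.7601, 189.7654, 190.0965, 190.0603,
        189.9051, 188.8833, 190.8449, 190.2563, 190.8725,
        189.3535, 191.1673, 189.1358, 189.3755, 190.7177]

mu_0, sigma = 190, 0.9

z = (mean(data) - mu_0) / sigma * sqrt(len(data))
print(round(z, 4))  # about -0.2269 (up to rounding of the listed data)
print(z >= 1.6449)  # False: Z is not in the critical region, H_0 is not rejected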

p-value of z-test

There are two main approaches to statistical hypothesis testing. The classical method follows the sequence of steps described in the article about the z-test. The second approach uses the p-value. This method is preferred by some statistical software, and therefore it is useful to understand it.

In terms of statistics, the p-value is defined as the probability, assuming the null hypothesis is true, of obtaining a value of the test statistic at least as extreme as the one actually observed. We will use this knowledge to create a simple rule for deciding whether we should reject the null hypothesis: if the p-value is lower than the level of significance, then the null hypothesis is rejected. Otherwise it is not rejected.
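
The rule translates into a trivial helper; a sketch with hypothetical numbers:

def reject_null(p_value, alpha):
    # reject H_0 exactly when the p-value is below the level of significance
    return p_value < alpha

print(reject_null(0.03, 0.05))  # True
print(reject_null(0.03, 0.01))  # False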

To calculate the p-value, we need to know the value of the test statistic. Let’s go back to the example from the article about the z-test. The value of the test statistic was -1.2125. We are performing a two-tailed test, for which the computation of the p-value is slightly more complicated than for a one-tailed test.

We can see the probability density function of the test statistic (which has the standard normal distribution) in the figure below. The level of significance is 5 %, so the size of the red area is 0.05. The p-value is depicted by the blue hatched area from - \infty to the value of the statistic. To be able to compare the p-value with the level of significance, we need to add the second blue hatched area, the one from 1.2125 to \infty .

The p-value can be calculated as the area under the probability density function. In other words, it is the value of the distribution function of the standard normal distribution at the value of the statistic, multiplied by 2.

The exact value cannot be found in the statistical tables, but we can use plenty of software products to calculate it. For example, in Microsoft Excel we can use the function NORM.S.DIST (the distribution function of the standard normal distribution) and multiply the result by 2 to see that 0.225 is the p-value of the test. Please note that 0.225 > 0.05.
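
A Python equivalent of that Excel computation, again a sketch with scipy:

from scipy.stats import norm

z = -1.2125
# two-tailed p-value: twice the tail area beyond |Z|
p_value = 2 * norm.cdf(-abs(z))
print(round(p_value, 3))  # 0.225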

Now we will look at the boundary case. Let’s assume that the value of the statistic is -1.96, so it equals the boundary point of the critical region. In this case both the red and the blue hatched areas have the same boundary points and therefore their sizes must be the same. So the size of the blue hatched area is 0.05, i.e. the p-value is 5 %.

Now let’s assume that the value of the statistic lies in the critical region, for example it equals -2.40. In the figure below it can be seen that the blue hatched area is smaller than the red one. So the p-value is smaller than the level of significance and the null hypothesis is rejected.

Critical Region for z-test

The calculation of the critical region (or rejection region) is one of the steps of testing a statistical hypothesis. When we have the value of a test statistic and a critical region, we can decide about rejecting or not rejecting the null hypothesis. There is a rule which we can call the “golden rule of hypothesis testing”:

If the value of the statistic is an element of the critical region, the null hypothesis is rejected.

And consequently, if the value of the statistic is not an element of the critical region, the null hypothesis is not rejected.

We will show the computation of the critical region on the z-test, which is described in this article. Just to give you a quick review: the null hypothesis of the z-test is that the expected value of the data equals a given number (in our case 190). The alternative hypothesis is that the expected value differs from the given number (does not equal 190). We assume a two-tailed test; the one-tailed test will be described in another article.

As is written in the article about the z-test, each test has its statistic and each statistic has a statistical distribution. The statistic of the z-test has the standard normal (Gaussian) distribution. The distribution function of the normal distribution is defined for all real numbers, but only a part of them makes up the critical region. Our task is to identify this part.

There is a simple logic behind this. Let’s look once more at the formula of the statistic:

Z = \frac{ \bar{x} - \mu_0}{\sigma} \cdot \sqrt{n} \, ,

Now let’s assume that the null hypothesis is true, i.e. the expected value of the data equals 190 (\mu_0 = 190 ). It is highly probable that the average value \bar{x} of a random sample is close to 190. The statistic has the difference \bar{x} - \mu_0 in the numerator, so if \bar{x} is close to \mu_0 then the value of the statistic is close to zero.

On the other hand, the average value of a random sample might be significantly different from 190, but that is rather unlikely. If there is a big difference between the hypothetical expected value and the average value of the random sample, then the value of the statistic is far from zero, and it can be either positive or negative. The further the value of the statistic is from zero, the less probable it is.

So we will cut off the least probable values of the statistic and add them to the critical region. But how many of the values? It depends on the level of significance (denoted \alpha ). The level of significance basically says what percentage of the least probable values we add to the critical region. Usually we use 1 %, 5 % or 10 %.

We can depict this using the probability density function (PDF). In the figure below, you can see the PDF of the standard normal distribution with critical regions for three values of \alpha . As you can see, the critical region consists of the most extreme values. The higher \alpha is, the larger the critical region gets. The critical region for each \alpha consists of two equally sized parts: one on the left and one on the right.

Critical regions of z-test

So the formula for the critical region is:

W = ( - \infty, u_{\frac{\alpha}{2}} \rangle \cup \langle  u_{1 - \frac{\alpha}{2}}, \infty ) \, .

If we substitute into the formula, we see that for \alpha = 1 %

W = ( - \infty, -2.57583 \rangle \cup \langle  2.57583, \infty ) \, ,

for \alpha = 5 %

W = ( - \infty, -1.95996 \rangle \cup \langle 1.95996, \infty ) \, ,

and finally for \alpha = 10 %

W = ( - \infty, -1.64485 \rangle \cup \langle 1.64485, \infty ) \, .
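
These boundary values come straight from the quantile function of the standard normal distribution; a short sketch with scipy that reproduces all three pairs:

from scipy.stats import norm

for alpha in (0.01, 0.05, 0.10):
    # u_{alpha/2} and u_{1-alpha/2}, the borders of the two parts of W
    print(alpha, round(norm.ppf(alpha / 2), 5), round(norm.ppf(1 - alpha / 2), 5))

# prints: 0.01 -2.57583 2.57583
#         0.05 -1.95996 1.95996
#         0.1  -1.64485 1.64485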

What can the z-test be used for and how to perform it

I have created a decision tree which can be used to select a proper statistical test for a statistical hypothesis. If we have one sample of data and we want to check a hypothesis about its expected value, we can use the z-test. This test assumes that the variance of the data is known. If the variance is not known, the one-sample t-test needs to be used. The other assumption is that the data have a normal distribution. The z-test is one of the simplest statistical tests, so we will use it to explain the main principles of hypothesis testing.

Let’s assume we are asked to solve this example: we have a machine which produces components of a specific length. The desired length is 190 mm. The inaccuracy of the machine is known; it is constant and characterized by the standard deviation \sigma = 0.9 mm. The machine was set up by an employee and we want to check whether it was set correctly. To check this, we measured the length of 20 sample components.

The measured values are in the following table.

190.0312, 190.217, 189.3279, 189.5428, 190.8622, 189.8215, 189.572, 190.3029, 189.1481, 188.9884,
191.2978, 190.7778, 188.3871, 189.2987, 188.7469, 190.4492, 189.9642, 188.6776, 189.8427, 189.8637

Note: I tried to write this article as simply as possible. It contains links to more detailed information. It uses no special software. A short tutorial for Excel and Python will be written.

At first, we need to formulate the hypotheses. Two hypotheses are usually formulated: a null hypothesis H_0 and an alternative hypothesis H_1 (or H_A). The null hypothesis usually contains an equals sign and the alternative hypothesis always contradicts the null hypothesis. The hypotheses for our example are:

  • The null hypothesis H_0 : The expected value of the component length is 190 mm. (\mu = 190 )
  • The alternative hypothesis H_1: The expected value of the component length is not 190 mm. (\mu \neq 190 )

As we know, the length of the components will not be exactly 190 mm because it is affected by the inaccuracy of the machine. Even the average length will not be exactly 190 mm. But the key point of the testing is to say whether the difference could be explained by the inaccuracy or whether it must have been caused by an error in the machine setting.

For example, if the average length of the components was 150 mm, it would obviously be caused by an error. On the other hand, an average length of 190.01 would suggest a correct setting. But what about 189.6 or 190.9? In these cases, it is impossible to decide off the top of one’s head, and hypothesis testing comes in handy.

Before starting the actual calculation, we need to realize one more thing. The outcome of our calculation is not necessarily right. The reason is that we base our decision only on a small sample (20 components), not all of them. For example, we might happen to select a lot of shorter components; then the average length would be significantly lower than 190 mm and we would reject the null hypothesis H_0 even though it is true. This is called a Type I error.

On the other hand, another error may happen. If the configuration of the machine was only slightly different (for example 189.99), we might not detect such a small difference. This situation is called a Type II error. You can see all possible situations in the table below.

|                     | H_0 is true      | H_0 is false     |
|---------------------|------------------|------------------|
| H_0 is not rejected | Correct decision | Type II error    |
| H_0 is rejected     | Type I error     | Correct decision |

The good news is that one can set the probability of a Type I error. The probability of this error is called the level of significance and it is denoted by \alpha . On the contrary, the probability of a Type II error is unknown.

Computation of the Test

Now we will go through the computation itself. We will start with the classical method, which consists of the following steps:

  1. Definition of the hypotheses
  2. Selection of a test statistic
  3. Computation of the critical region
  4. Computation of the value of the test statistic
  5. Interpretation of the result

We have already defined the hypotheses, so we will move on to the second step.

Selection of a Test Statistic

The test statistic is basically a formula. Each statistical test has its own test statistic, so we select the formula by selecting the test. The test statistic of the z-test is

Z = \frac{ \bar{x} - \mu_0}{\sigma} \cdot \sqrt{n} \, ,

where \bar{x} denotes the average of the sample data, \mu_0 the hypothetical expected value (from H_0), \sigma the standard deviation, and n the number of observations in the sample. We will use this formula in the 4th step.

Please note that the bigger the difference between the hypothetical expected value and the average value, the further the value of the statistic is from 0.
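
The statistic translates directly into code; a minimal Python sketch of the formula (the function name z_statistic is just an illustration):

from math import sqrt
from statistics import mean

def z_statistic(data, mu_0, sigma):
    # (mean(x) - mu_0) / sigma * sqrt(n)
    return (mean(data) - mu_0) / sigma * sqrt(len(data))

We will apply it to the measured data in the fourth step.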

Computation of a Critical Region

The critical region is used for the decision whether H_0 is rejected. There is a simple rule: if the value of the statistic is an element of the critical region, H_0 is rejected. Otherwise it is not rejected.

There is a simple logic behind the calculation of the critical region: assuming H_0 is true, the critical region contains the least probable values of the test statistic. To be more specific, if H_0 is true then the measured average value will probably be close to the theoretical expected value. On the other hand, a big difference will be improbable. As we know, a big difference between these two values causes the value of the statistic to be significantly different from 0. So the extremely low and extremely high values are improbable and thus they should be in the critical region.

So we will simply cut off the least probable values of the statistic. To do this, we need to know the statistical distribution of the statistic. The statistic of the z-test has the standard normal (Gaussian) distribution.

Let’s set the level of significance to \alpha = 5 % . Both high and low values of the test statistic are suspicious, so we will split the critical region into two parts. The first part will contain the least probable low values and the second part will contain the least probable high values.

We will split the level of significance equally between the two regions. So we need to identify the lowest values with total probability 2.5 % and the highest values with total probability 2.5 %. To do this, we will use the quantile function, which we will denote by u . So we can write down the formula for the critical region as:

W = ( - \infty, u_{\frac{\alpha}{2}} \rangle \cup \langle  u_{1 - \frac{\alpha}{2}}, \infty ) \, .

There are many ways to get the values of the quantile function. We can use tabulated values, which are part of every textbook, software like Microsoft Excel, or a programming language like Python or R. Let’s start with the most old-fashioned way: the statistical table. We can use the table here. The desired quantile is \frac{\alpha}{2} = \frac{0.05}{2} = 0.025. Because the normal distribution is symmetric, the statistical tables contain values only for quantiles 0.5 and higher. So we need to get the value for the quantile 1 - 0.025 = 0.975. Now we can find the desired value: 1.96.

The lower border value of the critical region is -1.96 (again by symmetry). Now we can write down the critical region:

W = ( - \infty, - 1.96 \rangle \cup \langle  1.96, \infty ) \, .
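
Instead of the table, we can get the quantile from software; for example, a small Python sketch with scipy:

from scipy.stats import norm

# 97.5 % quantile of the standard normal distribution
print(round(norm.ppf(0.975), 2))  # 1.96
# the lower border follows from symmetry
print(round(norm.ppf(0.025), 2))  # -1.96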

Computation of a Value of the Test Statistics

The computation is quite easy. We substitute \mu_0 = 190, \sigma = 0.9 and n = 20. The arithmetic mean of the values is \bar{x} = 189.76 (you can check it in Excel or with a calculator). So the value of the statistic is

Z = \frac{  189.76 - 190}{0.9} \cdot \sqrt{20} = -1.2125 \, .
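
The same computation in Python, as a self-contained sketch with the data from the table above:

from math import sqrt
from statistics import mean

data = [190.0312, 190.217, 189.3279, 189.5428, 190.8622,
        189.8215, 189.572, 190.3029, 189.1481, 188.9884,
        191.2978, 190.7778, 188.3871, 189.2987, 188.7469,
        190.4492, 189.9642, 188.6776, 189.8427, 189.8637]

# the z-test statistic with mu_0 = 190 and sigma = 0.9
z = (mean(data) - 190) / 0.9 * sqrt(len(data))
print(round(z, 4))  # -1.2125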

Interpretation of the Result

The interpretation is simple. The value of the statistic is not an element of the critical region, so we do not reject H_0 (at \alpha = 0.05 ).

It is never said that H_0 is true. As we said earlier, our result may be wrong because of the possibility of a Type II error. So we do not know the probability that our outcome is true.

Conclusion and Other Resources

This example was quite simple and many more things could be shown: one-tailed tests, calculation of the p-value, and tests for many other hypotheses.