Hypothesis testing is the keystone of many statistical applications. All hypothesis tests share a simple logical
structure. A null hypothesis and an alternative hypothesis are put forward about a sample or a set of samples. The
test consists in calculating the probability of observing a test statistic at least as extreme as the one computed
from the data, under the assumption that the null hypothesis is true. The null hypothesis is
rejected if this probability is less than a pre-determined significance level. The probability is calculated from the
distribution of the test statistic.
This logical structure is encapsulated by the HypothesisTest class.
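The logic above can be sketched numerically. The following is a conceptual illustration in Python of a two-tailed one-sample z-test, not the library's .NET API; it assumes the population standard deviation is known.

```python
from statistics import NormalDist

def z_test_p_value(sample_mean, hypothesized_mean, sigma, n):
    """Return the test statistic and two-tailed p-value for a z-test."""
    # Standardize the difference between the sample mean and the
    # hypothesized mean to obtain the test statistic.
    z = (sample_mean - hypothesized_mean) / (sigma / n ** 0.5)
    # Probability of a statistic at least this extreme under the null.
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

z, p = z_test_p_value(sample_mean=10.5, hypothesized_mean=10.0, sigma=2.0, n=64)
# z = 2.0, p ≈ 0.0455: the null hypothesis is rejected at the 5% level.
```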
Some tests use the data for one sample. These classes inherit from the OneSampleTest class. Tests that use data for
two samples inherit from TwoSampleTest; tests that use data for more than two samples inherit from
MultiSampleTest. Some tests require all sample data
to calculate the test statistic. The goodness-of-fit tests fall in this category. Other tests only require summary
data such as the mean and the standard deviation. The tests in the Extreme Optimization Statistics Library for
.NET support both options when they are available.
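As a conceptual sketch of the two input options (again in Python, not the library's API), a z statistic computed from raw sample data is identical to one computed from the corresponding summary data:

```python
from statistics import mean

def z_from_summary(sample_mean, n, mu0, sigma):
    """Test statistic from summary data: mean and sample size."""
    return (sample_mean - mu0) / (sigma / n ** 0.5)

def z_from_data(data, mu0, sigma):
    """Test statistic from the full sample: reduce to summary data first."""
    return z_from_summary(mean(data), len(data), mu0, sigma)

# Both paths yield the same statistic for equivalent inputs.
data = [9.0, 10.0, 11.0, 12.0]
z1 = z_from_data(data, 10.0, 2.0)
z2 = z_from_summary(10.5, 4, 10.0, 2.0)
```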
Properties of hypothesis tests
Hypothesis tests work by calculating a test statistic from the given data. The test statistic follows a
distribution that may depend on some properties of the data. The value of the test statistic can be obtained using
the Statistic property. The corresponding
probability (often called the p-value) is available through the PValue property.
Whether the null hypothesis is rejected depends on a number of factors. The significance level (SignificanceLevel property) is the
threshold for the p-value; it equals the probability of incorrectly rejecting the null hypothesis when it is in fact
true. This value is always between 0 and 1; 0.05 is a common choice. If the probability or p-value is less than the
pre-determined significance level, the null hypothesis is rejected. Different alternative hypotheses give rise to
different ways of calculating the probability. There are three possibilities, enumerated by the HypothesisType
enumeration:
The null hypothesis is rejected if the test statistic lies in the left (lower) tail of the distribution of the test statistic.
The null hypothesis is rejected if the test statistic lies in the right (upper) tail of the distribution of the test statistic.
The null hypothesis is rejected if the test statistic lies too far on either side of the mean of the distribution of the test statistic.
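The three hypothesis types can be sketched as follows for a statistic whose null distribution is standard normal. This is a conceptual Python illustration; the string names are placeholders for the enumeration's values, which may differ.

```python
from statistics import NormalDist

def p_value(z, hypothesis_type):
    """Compute the p-value for each of the three hypothesis types."""
    cdf = NormalDist().cdf
    if hypothesis_type == "left_tailed":    # reject for small z
        return cdf(z)
    if hypothesis_type == "right_tailed":   # reject for large z
        return 1 - cdf(z)
    if hypothesis_type == "two_tailed":     # reject for |z| far from 0
        return 2 * (1 - cdf(abs(z)))
    raise ValueError(f"unknown hypothesis type: {hypothesis_type}")
```

Note that the same statistic can lead to different conclusions: z = -1.645 gives a p-value of about 0.05 for a left-tailed test, but about 0.95 for a right-tailed test.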
For some tests, the hypothesis type is fixed. For others, it may be supplied by the user. In this case, different
alternative hypotheses will give rise to different hypothesis types. The hypothesis type corresponds to the
alternative hypothesis being tested.
The Reject() method tests the actual
hypothesis. If the p-value is less than the significance level, the null hypothesis is rejected and this method
returns true. Otherwise, the null hypothesis cannot be rejected, and the method returns
false. This method is overloaded. One overload takes no parameters and uses the test's current
significance level as a reference. The other overload lets you specify the significance level to test against.
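The rejection logic described above might be sketched as follows. The class and member names here are illustrative Python, not the library's actual .NET signatures.

```python
class HypothesisTestSketch:
    """Minimal sketch of a hypothesis test's rejection logic."""

    def __init__(self, p_value, significance_level=0.05):
        self.p_value = p_value
        self.significance_level = significance_level

    def reject(self, significance_level=None):
        # With no argument, use the test's current significance level;
        # otherwise, test against the supplied level.
        level = self.significance_level if significance_level is None else significance_level
        return self.p_value < level

test = HypothesisTestSketch(p_value=0.03)
test.reject()      # True:  0.03 < 0.05
test.reject(0.01)  # False: 0.03 >= 0.01
```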
Many tests have a null hypothesis that states that a parameter has a specific value. For example, the z
test is used to test if the mean of a sample is equal to a specific value. In these cases, a confidence interval can
be calculated. The confidence interval for a specified confidence level is the interval that contains the actual
value of the parameter with a probability equal to the confidence level.
Confidence intervals can be calculated using the GetConfidenceInterval() method, which
has two overloads. If no parameter is specified, the confidence level corresponding to the test's current
significance level is used. If the significance level is 5%, the confidence level is 95%. The second overload takes
the confidence level as its only parameter. The confidence level is between 0 and 1. For example, a confidence level
of 95% corresponds to a value of 0.95.
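As a conceptual sketch (not the library's implementation), the confidence interval for a mean under a z-test with known population standard deviation can be computed as follows:

```python
from statistics import NormalDist

def confidence_interval(sample_mean, sigma, n, confidence_level=0.95):
    """Confidence interval for the mean, assuming known sigma."""
    # A 95% confidence level corresponds to a 5% significance level.
    alpha = 1 - confidence_level
    z = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96 for 95%
    half_width = z * sigma / n ** 0.5
    return sample_mean - half_width, sample_mean + half_width

lo, hi = confidence_interval(sample_mean=10.5, sigma=2.0, n=64)
# Roughly (10.01, 10.99): the interval excludes 10, consistent with
# rejecting the null hypothesis "mean = 10" at the 5% significance level.
```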