7 Hypothesis Tests

 

You have already identified (in Step 1) your aims in relation to testing the system that you are investigating.

 

If you have multivariate data (i.e. more than one response variable) then you should also look at Step 8.

Note that testing for correlation between two or more variables (e.g. do weight and height increase together?) does not in itself imply multivariate analysis.

 

Overview of using hypothesis tests

 

Hypothesis tests typically enable you to decide between two hypotheses based on the probability of the experimental evidence recorded.

The Null Hypothesis is normally that the observed evidence could have occurred just by chance.

The Proposed (or Alternative) Hypothesis is that the observed evidence occurred due to the effect that you are investigating.

 

If you use software to perform the test, then you will normally get a p-value.

If p < α, where α is the significance level that you set (normally 0.05), then you would reject the Null Hypothesis and accept the Proposed (or Alternative) Hypothesis.

If p > α, then you would state that, on the basis of the experimental evidence, it is not possible to reject the Null Hypothesis.
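To make the decision rule concrete, here is a minimal sketch in Python, assuming the scipy library and invented data values (neither is specified in this text):

```python
# A minimal sketch of the p-value decision rule, using invented
# data and a one-sample t-test (scipy.stats.ttest_1samp).
from scipy import stats

data = [4.9, 5.1, 5.3, 4.8, 5.6, 5.2, 5.0, 5.4]  # invented measurements
target = 5.0    # hypothesised mean under the Null Hypothesis
alpha = 0.05    # significance level set before the test

t_stat, p_value = stats.ttest_1samp(data, popmean=target)

if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject the Null Hypothesis")
else:
    print(f"p = {p_value:.3f} >= {alpha}: cannot reject the Null Hypothesis")
```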

 

In practice, there is often some blurring over whether an experiment fits exactly into one or the other of the above categories.

 

ACTION: You now need to identify, from the groups below, the type of test that matches your data and your aims.


 

General assumption - the data values of the response variable are independent.

 

The tests are grouped under the following headings, according to what is being tested for:

 

Changes in y-value (dependent variable) as a function of changes in x-value (independent variable)

Examples: Effectiveness of microbial action as the concentration of the microbial agent increases

Analyses: Linear correlation, Non-linear correlation, ANOVA, t-tests

Notes: There are many issues to consider (e.g. is the uncertainty the same for all data values?); in some cases it may be more appropriate to use descriptive statistics.

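As a hedged illustration of testing for a linear change in y with x, here is a minimal sketch assuming Python with scipy; the data values are invented:

```python
# Testing whether y changes linearly with x, using
# scipy.stats.linregress on invented data.
from scipy import stats

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # e.g. agent concentration (invented)
y = [2.1, 2.9, 3.8, 5.2, 5.9, 7.1]   # e.g. measured effectiveness (invented)

result = stats.linregress(x, y)

print(f"slope = {result.slope:.3f}, r = {result.rvalue:.3f}")
print(f"p-value for a non-zero slope = {result.pvalue:.4f}")
```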

 

Difference between a mean, or median, value and a target value (one sample test)

Examples: Comparing level of cadmium concentration in a river with a specific value

Analyses:

t-test (assumes normal data; robust to deviations from normality, although it gives errors with strongly skewed data),

Z-test (assumes normal data, and requires that the population standard deviation is already known)

Non-parametric: Wilcoxon (one-sample signed-rank test)
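A minimal sketch of a one-sample comparison against a target value, assuming Python with scipy (the data and the target value are invented):

```python
# One-sample tests against a target value: t-test (parametric) and
# Wilcoxon signed-rank test (non-parametric). Data values are invented.
from scipy import stats

cadmium = [2.8, 3.1, 3.4, 2.9, 3.6, 3.2, 3.3]  # invented concentrations
limit = 3.0                                     # hypothetical target value

t_stat, p_t = stats.ttest_1samp(cadmium, popmean=limit)
print(f"t-test: p = {p_t:.3f}")

# The one-sample Wilcoxon test works on the differences from the target.
diffs = [c - limit for c in cadmium]
w_stat, p_w = stats.wilcoxon(diffs)
print(f"Wilcoxon: p = {p_w:.3f}")
```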

 

Difference in mean, or median, values for two values (levels) of one factor (two sample test)

Examples: Comparing level of cadmium concentrations in two rivers

Comparing values of retention times in the chromatograms of a shoe sole and a scuff mark at the scene of a crime

Analyses:

t-test and paired t-test (both assume normal data, but are robust to deviations from normality, although they give errors with strongly skewed data),

Non-parametric: Mann-Whitney, Paired Wilcoxon (for paired data)

Notes: Where the data values in the two samples are related (e.g. repeat measurements on the same items), use paired tests.
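A minimal sketch of the two-sample comparisons, again assuming Python with scipy and invented data values:

```python
# Two-sample comparisons: independent t-test, Mann-Whitney (non-
# parametric), and paired t-test. All data values are invented.
from scipy import stats

river_a = [2.9, 3.1, 3.4, 2.8, 3.3, 3.0]   # invented cadmium values
river_b = [3.5, 3.8, 3.2, 3.9, 3.6, 3.7]

t_stat, p_t = stats.ttest_ind(river_a, river_b)      # independent samples
u_stat, p_u = stats.mannwhitneyu(river_a, river_b)   # non-parametric
print(f"t-test: p = {p_t:.3f}, Mann-Whitney: p = {p_u:.3f}")

# For related (paired) samples, e.g. the same sites measured twice:
before = [3.0, 3.2, 2.9, 3.4, 3.1]
after = [3.3, 3.4, 3.0, 3.8, 3.3]
t_stat, p_pair = stats.ttest_rel(before, after)      # paired t-test
print(f"paired t-test: p = {p_pair:.3f}")
```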

 

Differences in mean, or median, values for more than two factor levels and/or more than one factor.

e.g. for cadmium concentrations in three rivers or in two rivers but also at different times of the year.

Is the quality of a fingerprint affected by the factors: surface type, immersion in water, ambient temperature, etc?

Analyses:

ANOVA (analysis of variance) - general name for a range of tests that assume normal data, but are robust to deviations from normality, although they give errors with strongly skewed data.

GLM (general linear model) - included under the heading of 'ANOVAs', but fits data using a least squares method and is usually more flexible than a basic ANOVA.

Post hoc tests: Tukey test, Bonferroni

Non-parametric: Kruskal-Wallis (equivalent to a one-way ANOVA), Friedman (equivalent to a two-way ANOVA with one observation per cell)

Notes: The generalized linear model (e.g. in SPSS) gives much greater flexibility and can be used for response variables of different types and distributions.
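A minimal sketch of a one-way analysis for three factor levels, assuming Python with scipy and invented data:

```python
# One-way ANOVA and its non-parametric equivalent (Kruskal-Wallis)
# for three groups, e.g. cadmium in three rivers. Data are invented.
from scipy import stats

river_a = [2.9, 3.1, 3.4, 2.8, 3.3]
river_b = [3.5, 3.8, 3.2, 3.9, 3.6]
river_c = [3.0, 3.2, 3.1, 2.9, 3.3]

f_stat, p_anova = stats.f_oneway(river_a, river_b, river_c)
h_stat, p_kw = stats.kruskal(river_a, river_b, river_c)

print(f"one-way ANOVA: p = {p_anova:.3f}")
print(f"Kruskal-Wallis: p = {p_kw:.3f}")
```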

 

Chi-squared (for frequency data) - see the headings for frequency data below

 

Test for normality (goodness of fit to a normal distribution)

Examples: Checking that a sample of experimental values can be considered to follow a normal distribution

Analyses:

Anderson-Darling, Kolmogorov-Smirnov (both compare cumulative frequencies)

Ryan-Joiner (based on a correlation test)

Notes: These tests can only detect evidence that the data is NOT normal. However, it is common practice to accept that, if a test does not show a significant deviation from normality, and you have no prior reason to suspect non-normality, you can treat the data as normal.
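A minimal sketch of normality checks, assuming Python with scipy and invented data. Ryan-Joiner itself is not available in scipy; Shapiro-Wilk is used here as a similar correlation-based test:

```python
# Checking normality: Anderson-Darling, Kolmogorov-Smirnov, and
# Shapiro-Wilk (scipy.stats). Data values are invented.
import statistics
from scipy import stats

data = [4.8, 5.1, 5.3, 4.9, 5.6, 5.2, 5.0, 5.4, 4.7, 5.5]

# Anderson-Darling: compare the statistic against the critical values.
ad = stats.anderson(data, dist='norm')
print(f"A-D statistic = {ad.statistic:.3f}")
print(f"critical values = {ad.critical_values} at {ad.significance_level}%")

# Kolmogorov-Smirnov against a normal distribution fitted to the data.
mu, sd = statistics.mean(data), statistics.stdev(data)
ks_stat, p_ks = stats.kstest(data, 'norm', args=(mu, sd))
print(f"K-S: p = {p_ks:.3f}")

# Shapiro-Wilk (correlation-based, similar in spirit to Ryan-Joiner).
w_stat, p_sw = stats.shapiro(data)
print(f"Shapiro-Wilk: p = {p_sw:.3f}")
```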

 

Differences in variance or standard deviation (one or two samples)

e.g.

Notes: Testing for a significant difference in variance between samples can be important before conducting tests for differences in mean values (t-tests and ANOVAs)

Analyses:

F-test (assumes normal data),

Bartlett's test (for more than two variances, but susceptible to departures from normality)

Levene's test (robust to departures from normality)
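A minimal sketch of comparing variances before a t-test or ANOVA, assuming Python with scipy and invented data:

```python
# Testing for equal variances: Bartlett's test (assumes normality)
# and Levene's test (robust). Data values are invented.
from scipy import stats

method_a = [5.1, 5.3, 4.9, 5.2, 5.0, 5.4]
method_b = [4.6, 5.8, 5.5, 4.4, 5.9, 4.8]   # visibly more scatter

b_stat, p_bartlett = stats.bartlett(method_a, method_b)
l_stat, p_levene = stats.levene(method_a, method_b)

print(f"Bartlett: p = {p_bartlett:.3f}")
print(f"Levene:   p = {p_levene:.3f}")
```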

 

Correlation between two or more variables

e.g.

Analyses:

Pearson's correlation (assumes normal data),

Non-parametric: Spearman's rho

Note: Bivariate correlation measures the relationship between two variables; partial correlation measures the relationship between two variables while controlling for the effect of others.
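A minimal sketch of both correlation tests, assuming Python with scipy; the height/weight values are invented, echoing the earlier example:

```python
# Correlation between two variables: Pearson (parametric) and
# Spearman's rho (non-parametric). Data values are invented.
from scipy import stats

height = [1.60, 1.65, 1.70, 1.75, 1.80, 1.85]   # metres (invented)
weight = [55.0, 62.0, 66.0, 74.0, 78.0, 85.0]   # kg (invented)

r, p_pearson = stats.pearsonr(height, weight)
rho, p_spearman = stats.spearmanr(height, weight)

print(f"Pearson:  r = {r:.3f}, p = {p_pearson:.4f}")
print(f"Spearman: rho = {rho:.3f}, p = {p_spearman:.4f}")
```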

 

Differences in frequencies for one factor or goodness of fit to a specific distribution

e.g. Does the number of newspapers purchased differ for different days of the week?

Does the distribution of experimental data values fit a Poisson (or normal) distribution?

Analyses:

Chi-squared

Goodness of fit for Poisson distribution

Goodness of fit for normal distribution (see test for normality)
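A minimal sketch of a chi-squared goodness-of-fit test, assuming Python with scipy and invented counts. A Poisson goodness-of-fit test follows the same pattern, with the expected counts computed from a fitted Poisson distribution:

```python
# Chi-squared goodness of fit: do observed frequencies across one
# factor (e.g. day of the week) differ from equal expected counts?
from scipy import stats

observed = [112, 98, 105, 90, 120, 135, 140]   # invented sales, Mon-Sun

# By default, chisquare compares against equal expected frequencies.
chi2, p = stats.chisquare(observed)
print(f"chi-squared = {chi2:.2f}, p = {p:.4f}")
```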

 

Differences in frequencies for more than one factor - association between factors

e.g.

Analyses:

Chi-squared (test of association, using a contingency table)
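A minimal sketch of testing for association between two factors, assuming Python with scipy; the contingency-table counts are invented:

```python
# Chi-squared test of association between two factors, using a
# contingency table of invented frequency counts.
from scipy import stats

# Rows: surface type; columns: fingerprint quality (good / poor).
table = [[30, 10],
         [18, 22]]

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```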

 

Differences in proportions or percentages (one or two samples)

e.g.

Analyses:

Fisher's exact test (tests for a zero difference between two proportions)

Proportion test based on the normal distribution (approximation, but does test for non-zero differences)
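A minimal sketch of both approaches, assuming Python with scipy and invented counts; the z-test here is a hand-computed normal approximation, not a named scipy function:

```python
# Comparing two proportions: Fisher's exact test (scipy) and a
# normal-approximation z-test computed directly. Counts are invented.
import math
from scipy import stats

x1, n1 = 45, 100   # successes and total in sample 1 (invented)
x2, n2 = 30, 100   # successes and total in sample 2 (invented)

# Fisher's exact test uses a 2x2 table of successes/failures.
table = [[x1, n1 - x1], [x2, n2 - x2]]
odds, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact: p = {p_fisher:.4f}")

# Two-proportion z-test (normal approximation, pooled estimate).
p_pool = (x1 + x2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (x1 / n1 - x2 / n2) / se
p_z = 2 * stats.norm.sf(abs(z))
print(f"z-test: z = {z:.2f}, p = {p_z:.4f}")
```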

 

Calculation of the power of the test
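A minimal sketch of a power calculation, assuming the Python statsmodels library (the original text names no software for this); the effect size, power, and significance level are illustrative choices:

```python
# Power calculation for a two-sample t-test: the sample size per
# group needed to detect a medium effect (d = 0.5) with 80% power
# at the 5% significance level.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"required n per group = {n:.1f}")
```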

 

Validity of a model