Abstract
When we make inferences to a population, we rely on a statistic in our sample to make a decision about a population parameter. At the heart of our decision is a concern with Type I error. Before we reject our null hypothesis, we want to be fairly confident that it is in fact false for the population we are studying. For this reason, we want the observed risk of a Type I error in a test of statistical significance to be as small as possible. But how do statisticians calculate that risk? How do they define the observed significance level associated with the outcome of a test?
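The observed significance level the abstract describes is the probability, under the null hypothesis, of an outcome at least as extreme as the one observed. As a minimal sketch of that calculation for a binomial test (the numbers here are hypothetical, not the chapter's worked example):

```python
from math import comb

def binomial_p_value(n, k, p=0.5):
    """One-tailed observed significance level: the probability of
    observing k or more successes in n trials when the null
    hypothesis says each trial succeeds with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Hypothetical example: 9 or more heads in 10 tosses of a fair coin
p_val = binomial_p_value(10, 9)
print(round(p_val, 4))  # 0.0107
```

If this observed significance level is smaller than the chosen Type I error threshold (e.g., 0.05), the null hypothesis would be rejected.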
Notes
1. Because of rounding error, the total for our example is actually slightly larger than 1 (see Table 7.5).
© 2014 Springer Science+Business Media New York
Cite this chapter
Weisburd, D., Britt, C. (2014). Defining the Observed Significance Level of a Test: A Simple Example Using the Binomial Distribution. In: Statistics in Criminal Justice. Springer, Boston, MA. https://doi.org/10.1007/978-1-4614-9170-5_7
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4614-9169-9
Online ISBN: 978-1-4614-9170-5
eBook Packages: Humanities, Social Sciences and Law; Social Sciences (R0)