Probability and Simulations


Copyright: © 2021 |Pages: 42
DOI: 10.4018/978-1-7998-3871-5.ch008

Abstract

This chapter is fundamental in educational terms, as it deals with a topic that cannot be underestimated: the use of laboratory simulation. As long as we discuss probability in the classical sense, paper, pen, and perhaps a pocket calculator are sufficient tools; but when we want to analyze a probabilistic model in depth, a computer is essential. This support not only allows us to confirm hypotheses but, in some cases, is indispensable. In school it is unthinkable to work on large data sets (often not available anyway), which would require not only appropriate software but also a different approach. The right balance is a dataset that is sufficiently significant yet still easy to handle.

Introduction

The generation of random numbers is too important to be left to chance. ~ Robert R. Coveyou

Simulation is therefore a model of reality that allows us to evaluate and predict the development of a series of events. The importance of simulation in the teaching of probability is an irrefutable fact. As Henry and Parzysz (2011) say: “Of course the computer does not make use of the notion of probability when it performs a simulation: it just exhibits the effects of the equipartition principle on the random numbers that it generates (a principle which is part of its specifications). But even so, the use of computer simulations in the classroom, as pseudo-random generators, fosters a better understanding of the notions of relative frequency, sample fluctuation, variability of empirical parameters and finally probability”.

Simulation makes it possible to perform a random experiment with a number of trials that would normally not be feasible in class. Of course, this is acceptable only if the simulation reflects the characteristics of the experiment it refers to: if the model is inadequate, or simply wrong, the results are not reliable.

Simulation is also useful in another case: when studying a topic, a problem, or an event and trying to understand which theoretical model may underlie it. Here is a simple example: we want to know what distribution underlies the toss of a fair coin; in other words, what is the probability that tossing 10 coins yields 0, 1, 2, ..., 9, or 10 heads? To answer this question we repeat the experiment virtually 100,000 times and record the results. In Table 1 we can compare the empirical results with those of two known distributions.

Table 1.
Comparison of distributions

Heads   Empirical   Binomial (n = 10, p = 0.5)   Poisson (λ = 5)
0       0.0009      0.0010                       0.0067
1       0.0101      0.0098                       0.0337
2       0.0448      0.0439                       0.0842
3       0.1171      0.1172                       0.1404
4       0.2068      0.2051                       0.1755
5       0.2457      0.2461                       0.1755
6       0.2024      0.2051                       0.1462
7       0.1177      0.1172                       0.1044
8       0.0440      0.0439                       0.0653
9       0.0099      0.0098                       0.0363
10      0.0006      0.0010                       0.0181
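The virtual experiment behind the empirical column can be sketched in a few lines of Python. The sketch below (function name and seed are illustrative, not from the chapter) tosses 10 fair coins 100,000 times and records the relative frequency of each number of heads, which should land close to the binomial values in Table 1.

```python
import random
from collections import Counter

def simulate_coin_tosses(n_trials=100_000, n_coins=10, seed=42):
    """Toss n_coins fair coins n_trials times; return the relative
    frequency of each possible number of heads (0..n_coins)."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_trials):
        heads = sum(rng.random() < 0.5 for _ in range(n_coins))
        counts[heads] += 1
    return {k: counts[k] / n_trials for k in range(n_coins + 1)}

freqs = simulate_coin_tosses()
# freqs[5] should be close to the binomial value C(10,5)/2^10 ≈ 0.2461
```

Running the sketch with different seeds illustrates sample fluctuation: each run gives slightly different frequencies, all clustered around the theoretical binomial probabilities.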

Key Terms in this Chapter

Median: The value separating the higher half from the lower half of a probability distribution.

Pseudo Random Numbers: Numbers generated by a deterministic algorithm that produces a sequence with statistical properties similar to those of a sequence of numbers generated by a random process.
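One classic deterministic algorithm of this kind is the linear congruential generator; the minimal sketch below uses one common choice of constants (the parameter values are illustrative, not prescribed by the chapter).

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: a deterministic recurrence
    x -> (a*x + c) mod m whose outputs look statistically random."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # normalize to [0, 1)

gen = lcg(seed=12345)
sample = [next(gen) for _ in range(5)]
# The same seed always reproduces the same sequence:
# that determinism is what makes the numbers "pseudo" random.
```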

Sample Space: The set of all possible outcomes of an experiment or random trial.

Reed-Frost Model: A mathematical model of epidemics.
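A minimal chain-binomial sketch of this model, assuming each susceptible independently escapes infection by each current infective with probability 1 − p (all names and parameter values here are illustrative):

```python
import random

def reed_frost(susceptible, infected, p, seed=0):
    """Reed-Frost chain-binomial epidemic: at each generation, a
    susceptible escapes every infective independently with prob. 1-p."""
    rng = random.Random(seed)
    history = [(susceptible, infected)]
    while infected > 0:
        escape = (1 - p) ** infected  # prob. a susceptible stays healthy
        new_infected = sum(rng.random() > escape for _ in range(susceptible))
        susceptible -= new_infected
        infected = new_infected
        history.append((susceptible, infected))
    return history

# Hypothetical outbreak: 100 susceptibles, 1 infective, p = 0.02
outbreak = reed_frost(100, 1, 0.02)
```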

Markov Chain: A model in which the probability of each event depends only on the state attained in the previous event.
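Such a chain is easy to simulate: from the current state, draw the next state from that state's transition probabilities and repeat. The sketch below uses a hypothetical two-state weather chain (the transition values are invented for illustration).

```python
import random

def simulate_markov_chain(transition, start, steps, seed=0):
    """Walk a Markov chain: the next state depends only on the current
    state, via the given transition-probability table."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        r, cum = rng.random(), 0.0
        for nxt, prob in transition[state].items():
            cum += prob
            if r < cum:
                state = nxt
                break
        path.append(state)
    return path

# Hypothetical two-state chain: tomorrow's weather depends only on today's
weather = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}
path = simulate_markov_chain(weather, "sunny", steps=20)
```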
