Analysis of the Quality of Pseudorandom Number Generators Based on Cellular Automata


DOI: 10.4018/978-1-5225-2773-2.ch008

Abstract

The eighth chapter describes studies of the presented generators using the ENT and NIST tests. To describe the experiments completely, different initial settings were used: tests were conducted for different automaton sizes and for different numbers of cells initially set to the logical "1" state, and bit sequences of different lengths were formed. The results presented in this chapter indicate the optimal sizes and the optimal initial cell settings of the cellular automaton. The generators are described on the basis of cellular automata with a Moore neighborhood, and the obtained results are compared for all the pseudorandom number generators described earlier. The generators were also examined using graphical tests, and the results of those tests are presented in this chapter. Test results are given in both tabular and graphical form.

ENT Tests for Analyzing the Quality of Pseudorandom Number Generators Based on Cellular Automata

The tests implemented in the ENT program (Walker, 2008), posted on John Walker's website in 2008, were used for the analysis and verification of the quality of the described generators. The program implements five tests intended for the statistical evaluation of pseudorandom bit sequences. The tests operate on bit files, each of which stores a generated bit sequence.

  • 1.

    Entropy: This test is based on a calculation of entropy; the author of the program cites (Richard, 1980) as the source of the method. In accordance with this test, the achievable compression of the resulting file is determined. If compressing the file would not reduce its size, the bit sequence is considered random. The program reports the result as the number of information bits per character and also reports the achievable compression as the percentage by which the file could be reduced.

  • 2.

    Chi-Square Test: This is the most commonly used test. It tests the null hypothesis that the observed random variable follows a specific theoretical distribution law. The result is expressed as a percentage indicating how often a truly random sequence would exceed the calculated chi-square value; this percentage gives an estimate of the randomness of the tested sequence. If the resulting percentage is greater than 99% or less than 1%, the sequence is not considered random.

If the percentage falls in the range from 95% to 99% or from 1% to 5%, the sequence is considered suspect and may not be random. In addition, the sequence is considered almost suspect if the resulting percentage is in the range from 90% to 95% or from 5% to 10%. The remaining percentage values indicate a sequence that can be regarded as random. For this test, the author of the program refers to the information source (Knuth, 1969).

  • 3.

    Arithmetic Mean: A simple arithmetic test that computes the value obtained by dividing the sum of all elements of the file by the file length. For a random bit sequence, the value reported by the program approaches 0.5.

  • 4.

    Monte Carlo Value for Pi: The test determines the percentage of points that fall inside a circle inscribed in a square, from which the number π is calculated. If this number approaches the value 3.143580574, the sequence is considered random. To specify the coordinates of a point inside the square, six bytes are used as two 24-bit values. A point falling inside the inscribed circle is counted as a hit, and the percentage of hits is used to calculate π.

  • 5.

    Serial Correlation Coefficient: This approach is described in detail in (Knuth, 1969). The test evaluates the dependence of each byte on the previous one. For random sequences, this quantity tends to 0.
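The five metrics above can be approximated with a short sketch. This is not the ENT program itself (in particular, ENT converts the chi-square statistic into the percentile discussed above, which is omitted here); the function name and return format are illustrative assumptions.

```python
import math
from collections import Counter

def ent_style_metrics(data: bytes) -> dict:
    """Rough analogues of ENT's five metrics for a byte stream (sketch)."""
    n = len(data)
    counts = Counter(data)

    # 1. Entropy in bits per byte (8.0 is the ideal for random data).
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())

    # 2. Raw chi-square statistic against a uniform byte distribution
    #    (ENT additionally reports the corresponding percentile).
    expected = n / 256
    chi2 = sum((counts.get(b, 0) - expected) ** 2 / expected
               for b in range(256))

    # 3. Arithmetic mean of all bytes (127.5 is ideal for bytes;
    #    the 0.5 value in the text refers to bit streams).
    mean = sum(data) / n

    # 4. Monte Carlo pi: successive 6-byte groups give 24-bit X and Y
    #    coordinates; count hits inside the inscribed (quarter) circle.
    hits = total = 0
    r = 2 ** 24 - 1
    for i in range(0, n - 5, 6):
        x = int.from_bytes(data[i:i + 3], "big") / r
        y = int.from_bytes(data[i + 3:i + 6], "big") / r
        total += 1
        if x * x + y * y <= 1.0:
            hits += 1
    pi_est = 4.0 * hits / total if total else float("nan")

    # 5. Serial correlation between each byte and its successor.
    pairs = list(zip(data, data[1:]))
    m = len(pairs)
    mx = sum(a for a, _ in pairs) / m
    my = sum(b for _, b in pairs) / m
    cov = sum((a - mx) * (b - my) for a, b in pairs)
    var_x = sum((a - mx) ** 2 for a, _ in pairs)
    var_y = sum((b - my) ** 2 for _, b in pairs)
    scc = cov / math.sqrt(var_x * var_y) if var_x and var_y else 0.0

    return {"entropy": entropy, "chi_square": chi2, "mean": mean,
            "monte_carlo_pi": pi_est, "serial_correlation": scc}
```

For a good generator one would expect entropy near 8 bits per byte, a mean near 127.5, a π estimate near 3.14159, and a serial correlation near 0.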

The following procedure was used to test the described generators.

  • 1.

    The software models of the generators are selected.

  • 2.

    For the selected generators, the initial settings are specified.

  • 3.

    For each initial setting, a bit file is formed whose structure corresponds to the generated bit sequence.

  • 4.

    All generated bit files are processed using the ENT program.
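Step 3 of the procedure, forming a bit file from a cellular-automaton generator, can be sketched as follows. As a stand-in for the chapter's two-dimensional Moore-neighborhood generators (whose transition rules are not reproduced here), a minimal one-dimensional rule-30 automaton is used; the rule choice, function names, and seeding scheme are illustrative assumptions, not the book's implementation.

```python
def rule30_bits(width: int, steps: int, seed_cells=None):
    """Yield one bit per step: the centre cell of a circular
    rule-30 automaton (illustrative substitute generator)."""
    cells = [0] * width
    # Cells that initially hold the logical "1" state.
    for i in (seed_cells or [width // 2]):
        cells[i] = 1
    for _ in range(steps):
        yield cells[width // 2]
        # Rule 30: new = left XOR (centre OR right), wrapping at edges.
        cells = [cells[(i - 1) % width] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]

def write_bit_file(path: str, bits) -> None:
    """Pack the bit sequence into bytes and write the file
    that would then be processed by the ENT program."""
    buf = bytearray()
    acc = n = 0
    for b in bits:
        acc = (acc << 1) | b
        n += 1
        if n == 8:
            buf.append(acc)
            acc = n = 0
    with open(path, "wb") as f:
        f.write(buf)
```

Repeating this for each initial setting (different widths and different seeded cells) yields the set of bit files that steps 1–4 feed into ENT.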
