
5.3 Hardware Testing

5.3.2 Hardware Results

The results from the hardware synthesis and power simulation can be seen in Figures 5.11 to 5.14.

The first hardware characteristic to be studied is area.

The area results for each of the selected designs are shown in Figure 5.11. The synthesis used an out-of-date 0.25 μm CMOS process, whereas industry uses 180 nm or 90 nm technology; however, it did allow for a comparison of the different designs. The newer technology allows for a scaling down in size, but the general size ratios between the designs remain the same.
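The scaling argument can be made concrete with the idealized rule of thumb that area shrinks with the square of the feature size. The starting area below is a hypothetical placeholder, not a value from the synthesis results, and real layouts shrink less than this ideal because of wiring and design-rule overheads:

```python
def scaled_area(area_um2: float, old_node_nm: float, new_node_nm: float) -> float:
    """Idealized technology shrink: area scales with the square of the
    feature size (real layouts shrink less due to wiring and design rules)."""
    return area_um2 * (new_node_nm / old_node_nm) ** 2

# Hypothetical example area at 0.25 um (250 nm), not a synthesis result:
area_250nm = 70000.0  # um^2
print(round(scaled_area(area_250nm, 250, 180)))  # -> 36288 um^2 at 180 nm
print(round(scaled_area(area_250nm, 250, 90)))   # -> 9072 um^2 at 90 nm
```

Because every design scales by the same factor, the size ratios between the designs are unchanged, which is why the comparison remains valid on the older process.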

The area analysis divides the RNG tests into two groups, the random walk/runs based tests and the pattern matching tests. The pattern matching tests are significantly larger than the other tests, by at least a factor of ten. The smallest design is the longest runs test. The number of multiplication and division operations present in the poker and serial tests make their designs more complex when compared to the relatively simple additions needed for the other designs.

The synthesized serial test circuit is approximately 4% of the total smart card chip area. For some designers this might be too large.

The FIPS test group, made up of the longest runs, runs, poker and frequency tests, requires an area of 691286 μm². Within this group, the poker test is the largest contributor, making up 88% of the FIPS area.
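The reported breakdown can be checked with a line of arithmetic; the 88% share is taken from the text, so the result is only as precise as that rounded percentage:

```python
fips_total_um2 = 691286   # total area of the FIPS test group (from the text)
poker_share = 0.88        # poker test's reported share of that area

poker_um2 = fips_total_um2 * poker_share
other_tests_um2 = fips_total_um2 - poker_um2
print(round(poker_um2))        # -> 608332 um^2 for the poker test
print(round(other_tests_um2))  # -> 82954 um^2 for frequency, runs and longest runs
```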

Figure 5.12: The area results for the six smallest randomness tests (area in μm²).

In Figure 5.12 the area results have been zoomed in to include only the smaller tests. It is easier to notice the differences in size for each of these designs now that the two largest tests are removed. Here the designs are divided again into two groups, in essence making three groupings for the area analysis. The simple counters are the smallest designs, which include the following tests:

• longest runs

• frequency

• autocorrelation

• turning point.

The more complex counters are the

• runs

• frequency block tests.

Smart cards work with a base speed of 5 MHz, but the internal processing speed is usually multiplied up to speeds of 25 to 50 MHz. This is a design restriction that hardware developers for smart cards need to take into account. For a 50 MHz smart card, the algorithm implementation needs to have a device time delay of less than 20 ns. In other words, any algorithm implementation needs to reach the end of its slowest processing path before the 20 ns clock cycle is up. If a design cannot fit within this time restriction, it either needs to be optimized further or, if that is not possible, the smart card has to run at a slower clock speed. This has the negative effect of reducing the processing speed for all calculations.

Figure 5.13: Longest path timing delay analysis for the eight randomness tests (design time delay in ns).

Figure 5.13 shows the longest path time delay for the eight implemented tests. The ordering of the tests on the x-axis is the same as in the area measurement graph (see Figure 5.12), to allow for easier comparison of the different tests. The most striking result is the serial test. As the largest design it might be expected to have the longest delay path, but the difference between the serial test and the poker test is immense. The time delay path has been examined to investigate where the design is spending most of its time, and the answer is the division component.

Of the 45.22 ns spent processing the longest path in the serial test, 44.5 ns is spent in the divider. The serial test implementation uses a Synopsys DesignWare™ divider. Therefore, for greater optimization, either a custom divider or a new serial test implementation without the division has to be designed.

The rest of the tests all fall below the 50 MHz (20 ns) line. Therefore, except for the serial test, they are all acceptable at current smart card speeds. The ordering of the designs by time delay does not necessarily follow the area size; for example, the longest runs test has a longer processing path than the frequency test. For many applications a compromise is required between the time delay and the design size to achieve efficient operation, which explains the variance in the time delays.
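The clock-budget check described above amounts to f_max = 1/t_delay. A minimal sketch, where the 45.22 ns serial delay is taken from the text and the 20 ns budget corresponds to 50 MHz:

```python
def max_clock_mhz(delay_ns: float) -> float:
    """Highest clock frequency (MHz) at which a design whose longest
    path takes delay_ns still settles within one clock period."""
    return 1000.0 / delay_ns

def meets_budget(delay_ns: float, clock_mhz: float = 50.0) -> bool:
    """True if the longest path fits in one period at clock_mhz."""
    return delay_ns <= 1000.0 / clock_mhz

serial_delay_ns = 45.22  # dominated by the 44.5 ns divider
print(round(max_clock_mhz(serial_delay_ns), 1))  # -> 22.1 MHz
print(meets_budget(serial_delay_ns))             # -> False: misses the 20 ns budget
```

The ~22 MHz ceiling for the serial test is consistent with 20 MHz being the last point at which all tests can be compared in the power analysis below.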

The designs have been optimized with regard to all three characteristics: power consumption, area and time delay. However, the area and power consumption characteristics have been given a higher priority in the optimization hierarchy, since they are the most important properties for smart card manufacturers.

Figure 5.14: Power consumption analysis for the eight randomness tests (power consumption in mW versus clock speed in MHz).

The current trend in smart card development is shifting away from contact-only cards to either all-contactless or hybrid contact/contactless cards. The use of contactless technology has increased the importance of low power designs. Each of the designs has been optimized using the power consumption parameters in Synopsys Design Compiler™.

The power consumption results can be seen in Figure 5.14. The data is plotted as points on power versus clock frequency axes. Some of the data lines are shorter than others; for example, those of the frequency block, poker and serial tests. They are shorter due to the limitation imposed by their time delay: each of these implementations is restricted to a clock frequency of 1/(time delay) or slower.

Three speeds are of particular interest in the power analysis: 5 MHz, the base smart card frequency; 20 MHz, the last point where all the tests can be compared; and 50 MHz, the maximum operating speed of current smart cards. At 50 MHz the poker test is by far the most power hungry circuit design at approximately 6 mW. The next closest tests are the frequency block and runs tests. The autocorrelation, turning point, frequency and longest runs tests are grouped closely near the 1 mW mark. At the 20 MHz point, the serial test result is also available; this test requires slightly less power (2.0 mW) than the poker test (2.5 mW).
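The roughly straight lines in Figure 5.14 reflect the fact that CMOS dynamic power grows about linearly with clock frequency (P ≈ C·V²·f). A sketch of that extrapolation, using the poker test's 2.5 mW at 20 MHz from the text and ignoring static leakage:

```python
def scale_power_mw(p_mw: float, f_from_mhz: float, f_to_mhz: float) -> float:
    """Linear extrapolation of dynamic power with clock frequency
    (P ~ C * V^2 * f); static leakage is ignored."""
    return p_mw * f_to_mhz / f_from_mhz

poker_at_20mhz_mw = 2.5
# -> 6.25 mW, close to the ~6 mW read from the plot at 50 MHz
print(scale_power_mw(poker_at_20mhz_mw, 20, 50))
```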

The power consumption results generally follow the results from the area analysis, with the largest design requiring the most power. However, it is interesting that the serial test is more efficient than the poker test. The main difference between them is not in the counting of the various statistical properties but in the actual calculation of the statistic. The poker test has more multiplications, whereas the serial test has a divider circuit. The divider circuit from Synopsys DesignWare™ is slow and large but has been designed to be efficient in power consumption. The multipliers are also efficient, but not to the same degree as the divider.

The calculation times required in clock cycles for the tests are shown in Table 5.2. As a boundary limit, the tests have to complete their calculation within the initialization time of two seconds. The tests are set up to count as each bit arrives from the RNG. The important quantity to keep small is the time between the last bit arriving and the calculation of the "pass" or "fail" verdict.

The shorter this time, the more bits the generator is able to create before reaching the two second limit. Current cryptographic RNGs in smart cards are not able to produce the full 20000 bits within that time interval, yet the more bits the RNG is allowed to produce, the better the results are for testing purposes. The hardware implementations of the tests all require 20000 bits, since they are based on FIPS 140-2. It is hoped that the results from the simulator allow this to be reduced.

The results from the calculation time show that the smallest tests do not have long calculation times. The more complex tests, poker and serial, require more time, since they perform the calculation of the statistic and then compare it to a given range. This statistic calculation is the time consuming part. However, even these designs are very quick, and most of the two seconds can be dedicated to the bit generation.

From a hardware point of view, only the serial test has any problems in modern smart card implementations. Its current design does not allow it to be clocked at a standard operating frequency. The rest of the tests are all acceptable.

Test              Number of Cycles
Frequency                2
Runs                     2
Longest Runs             2
Serial                   8
Poker                    8
Autocorrelation          2
Frequency Block          2
Turning Point            3

Table 5.2: Cycles required to calculate the test results after the arrival of the last bit in the test sequence.
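The cycle counts in Table 5.2 translate into verdict latencies of only tens of nanoseconds. A small sketch, with the clock speed as a parameter (50 MHz matches the fastest current cards discussed earlier):

```python
# Cycles needed after the last bit arrives, taken from Table 5.2
CYCLES_AFTER_LAST_BIT = {
    "Frequency": 2, "Runs": 2, "Longest Runs": 2, "Serial": 8,
    "Poker": 8, "Autocorrelation": 2, "Frequency Block": 2,
    "Turning Point": 3,
}

def verdict_latency_ns(test: str, clock_mhz: float = 50.0) -> float:
    """Time between the last bit arriving and the pass/fail verdict."""
    period_ns = 1000.0 / clock_mhz
    return CYCLES_AFTER_LAST_BIT[test] * period_ns

print(verdict_latency_ns("Poker"))      # -> 160.0 ns at 50 MHz
print(verdict_latency_ns("Frequency"))  # -> 40.0 ns at 50 MHz
# For comparison: producing 20000 bits within the 2 s initialization
# window requires a bit rate of at least 20000 / 2 = 10000 bit/s,
# so essentially all of the two seconds remains for bit generation.
```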


Chapter 6

Empirical Test Quality Measurement

6.1 Introduction

In the previous chapter we looked at the hardware aspects of the random number generator tests, which allowed us to see whether the selected tests are acceptable for a smart card implementation from a physical point of view (area, power consumption, and calculation time). However, this still leaves a variety of questions unanswered:

1. What is the minimum number of tests that are required to be implemented in the smart card RNG test unit?

2. Can the test sequence be reduced from 20000 bits to a smaller sequence without loss of testing “quality”?

It is not possible to determine the "quality" of a random number generator without having a measuring point. The standard for this thesis is the FIPS 140-2 test criteria, as it is the desired standard to be implemented in the smart card. The FIPS 140-2 test suite is made up of four tests (frequency, poker, runs and longest runs), a sample sequence length of 20000 bits, and a significance level of α = 0.0001 (1 misjudgment in 10000 trials). Therefore, the following is used as the definition of quality for this thesis.

Definition 6.1.1. A test or test group's quality is a percent measure of how well the selected test or test group mimics the FIPS 140-2 test criteria.
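As a concrete instance of the FIPS 140-2 criteria, the frequency (monobit) test counts the ones in the 20000-bit sample and passes when the count lies strictly between 9725 and 10275; these bounds come from the FIPS 140-2 standard. A minimal sketch:

```python
def fips_monobit(bits: list) -> bool:
    """FIPS 140-2 frequency (monobit) test: on a 20000-bit sample,
    pass iff the number of ones lies strictly between 9725 and 10275."""
    if len(bits) != 20000:
        raise ValueError("FIPS 140-2 tests a 20000-bit sample")
    ones = sum(bits)
    return 9725 < ones < 10275

print(fips_monobit([0, 1] * 10000))  # -> True: a perfectly balanced stream passes
print(fips_monobit([1] * 20000))     # -> False: a stuck-at-1 stream fails
```

Note that the alternating stream above passes the monobit test while being completely predictable, which is precisely why FIPS 140-2 combines it with the poker, runs and longest runs tests.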

Normally, a failure in a RNG results in a stuck-at type failure (stuck-at 0 or stuck-at 1). However, there are also cases where a bit stream may still be produced, but with nonrandom characteristics. For cryptographic applications, the use of nonrandom sequences is worse than a full deactivation of the device. These poor cryptographic random sequences provide a false sense of security without alerting the user to a possible breach in security. In essence, these poor random sequences are a hole in the protective shield around the user's data. To prevent this security hole from occurring, the RNG must be tested before each use.


Figure 6.1: Simulator setup and possible failure points: (1) the RNG source, (2) noise, and (3) the digitiser, with the output checked against FIPS 140-2 for a pass/fail result.

There are many different random number generator tests available in the literature; however, they detect faults with different sensitivities. Investigating the sensitivities of the eight selected tests requires a simulator. This chapter describes the simulator that has been programmed to incorporate the possible failure points in a RNG system, and presents the results from the study of the behavior of the empirical tests on the different faulty bit streams. These failure points are modeled as poor RNGs. Figure 6.1 shows the three points of vulnerability in the RNG system.

The first point is the actual RNG itself. It is possible that the generator has a flawed design or is damaged during use and begins to produce a poor sequence of bits. The second point examines the effects of outside interference: how will the test unit react to interference or noise on the line?

The final point is the digitizer. Often a natural source is sampled and used as the randomness source. If the digitizer oversamples the natural source, the output will have nonrandom qualities.

The following is a list of the models of the failure points and the type of generators used to represent these failures:

Failure Point 1: Failure in the random number generator

1. ANSI C generator
2. Repeating pattern generator
3. Biased generator

Failure Point 2: Frequency noise introduced into the random source

1. Frequency addition with a wide spectrum
2. Frequency addition with a narrow spectrum
3. Addition of pink (1/f) noise

Failure Point 3: Failure in the digitizer or the sampling

1. Oversampling
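The failure-point-1 models can be sketched as simple bit-stream generators. The parameter choices below (bias probability, repeating pattern, and the well-known LCG constants of the ANSI C `rand()` specification) are illustrative assumptions; the actual parameters used in the thesis simulator may differ:

```python
import random

def biased_bits(n, p_one=0.7, seed=1):
    """Biased generator: emits 1 with probability p_one instead of 0.5."""
    rng = random.Random(seed)
    return [1 if rng.random() < p_one else 0 for _ in range(n)]

def pattern_bits(n, pattern="1100"):
    """Repeating-pattern generator: cycles through a fixed bit pattern."""
    return [int(pattern[i % len(pattern)]) for i in range(n)]

def ansi_c_bits(n, seed=12345):
    """ANSI C rand()-style LCG (x <- 1103515245*x + 12345 mod 2^31),
    emitting one high-order bit per step."""
    x, out = seed, []
    for _ in range(n):
        x = (1103515245 * x + 12345) % (1 << 31)
        out.append((x >> 30) & 1)
    return out

print(pattern_bits(8))  # -> [1, 1, 0, 0, 1, 1, 0, 0]
```

Feeding such streams into the implemented tests makes it possible to measure how sensitively each test reacts to each failure mode.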
