
The Construction of Tables


PAUL HERGET

Cincinnati Observatory

THE FIRST IDEA I wish to mention is a very simple one that comes from calculus. This idea occurred to me years ago when we had only a 601 and were unable to perform division.

Suppose our purpose is to tabulate an analytic function, $f(x) = X$, at equal intervals of the argument, $x$, where $X$ represents the numerical value placed in the table for each entry. This method is to be applied in the cases where there exists an inverse function, $F(X) = x$, and will be practicable if it is simpler to compute $F(X)$ than it is to compute $f(x)$ by some direct means.

The problem may be stated as follows: if the series for $\sin\varphi$ can be expanded on a program cycle or a series of programs, then if the inverse function is needed, the series is expanded until it equals the sine, thus giving the angle. If we write $F(X_0 + e) - x = 0$ and expand the expression by Taylor's theorem and use Newton's method of approximation,

$$e = \frac{x - F(X_0)}{F'(X_0)}$$

is obtained, where $X_0$ is some approximation to the correct value. Also the following equations are true:

$$dX = f'(x)\,dx, \qquad dx = F'(X)\,dX, \qquad \frac{dX}{dx} = \frac{1}{F'(X)} = f'(x) \approx \frac{\Delta^1}{w}.$$

Now write

$$X_1 = X_0 + e = X_0 + \left[\,x - F(X_0)\,\right]\frac{\Delta^1}{w}. \tag{1}$$

This is a first-order approximation, but it is applied only to the residual, which is supposed to be small. If a set of approximate $X_0$'s can be obtained for the entries to be put into the table, this iterative equation is computed instead of attempting to compute $f(x)$. This is on the assumption that $F(X_0)$ can be computed more readily than $f(x)$. In the case of an arc sine table, for example, this means that one must have, necessarily, a table of sines to use, or a sine series, or something similar.
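A minimal sketch of iteration (1) in modern form may make the scheme concrete. Following the arc sine example above, $F$ is the sine; the interval $w$, the range, and the number of sweeps are illustrative choices, not the original machine setup.

```python
import math

# Sketch of iteration (1) for an arc sine table X = arcsin(x) at equal
# intervals w of the argument x.  F(X) = sin X is the easy inverse; the
# tabular ratio Delta1/w stands in for f'(x), so the refinement needs
# no division once the differences of the entries are formed.

w = 0.001                                  # interval of the argument
xs = [k * w for k in range(501)]           # arguments 0.000 ... 0.500

X = list(xs)                               # crude first approximations X0

for _ in range(4):                         # iterate equation (1)
    # Delta1: first differences of the current approximate entries
    d1 = [X[i + 1] - X[i] for i in range(len(X) - 1)]
    d1.append(d1[-1])                      # reuse the last difference
    # X1 = X0 + [x - F(X0)] * Delta1 / w
    X = [Xo + (x - math.sin(Xo)) * d / w for x, Xo, d in zip(xs, X, d1)]

print(max(abs(Xi - math.asin(x)) for x, Xi in zip(xs, X)))  # small residual
```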

If the table is to be at equal intervals of the argument, presumably the entries are in units of a certain decimal place, so that $1/w$ simply requires proper placing of the decimal point, and $\Delta^1$ is obtained by differencing the values to be placed in the table at that stage. If necessary, a first approximation can be obtained as illustrated in Figure 1.

FIGURE 1

The true curve is approximated by a series of chords.

Each value of $X_0$ is obtained by summary punching with progressive totals, tabulating blank cards, and accumulating a constant first difference from the digit emitter. Some hand computing is necessary to find the points where the constant emitted first difference should be changed, and the number of changes is balanced against the size of the allowable error of $X_0$.

The division by $F'(X_0)$ has been replaced by a multiplication by $\Delta^1$, and equation (1) is iterated repeatedly until the final values are reached. Although this procedure may not appear attractive, it is for the purpose of eliminating the division. It may be applied if a table, say, to four places is available, and a table to eight places is needed.

Perhaps some of Mr. Hastings' inverse functions could be generated more easily than the direct functions. It would have an application there.

Next, I would like to indicate the results in two simple cases. If a reciprocal table is to be obtained, then $f(x) = X = 1/x$, and $f'(x)$ is eliminated, as derived from this expression, instead of substituting $\Delta^1/w$ in equation (1). Then

$$X_1 = X_0 + (1 - xX_0)\,X_0.$$

It is not necessary to summary punch any differences. An iterative formula is obtained which would have been obtained more easily some other way! In the second case, if $f(x) = x^{-1/2}$, the same procedure is applied, obtaining

$$X_1 = X_0 + (0.5 - 0.5\,xX_0^2)\,X_0.$$
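Both iterations use only multiplication and addition, which was the whole point on a machine that could not divide. A brief sketch, with illustrative starting values:

```python
# Division-free Newton refinements just derived; only multiplications
# and additions appear, as required on the 601.

def reciprocal(x, X0, sweeps=5):
    """Iterate X <- X + (1 - x*X)*X toward 1/x."""
    X = X0
    for _ in range(sweeps):
        X = X + (1.0 - x * X) * X
    return X

def inverse_sqrt(x, X0, sweeps=5):
    """Iterate X <- X + (0.5 - 0.5*x*X*X)*X toward x**-0.5."""
    X = X0
    for _ in range(sweeps):
        X = X + (0.5 - 0.5 * x * X * X) * X
    return X

print(reciprocal(3.0, 0.3))      # -> 0.3333333333...
print(inverse_sqrt(2.0, 0.7))    # -> 0.7071067811...
```

Convergence is quadratic, so a four-place approximation is good to roughly eight places after a single sweep.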

The next remarks concern the sine and cosine functions, and contain some ideas which may be extended to other functions. Let Figure 2 represent the quantities in a sine table at equal intervals of the argument.

FIGURE 2. A sine table at interval $\omega$: arguments $\varphi - \omega$, $\varphi$, $\varphi + \omega$; entries $\sin(\varphi - \omega)$, $\sin\varphi$, $\sin(\varphi + \omega)$; and their first and second differences.

The second difference opposite $\sin\varphi$ is

$$\sin(\varphi + \omega) - 2\sin\varphi + \sin(\varphi - \omega) = -2(1 - \cos\omega)\sin\varphi,$$

so the second differences, and likewise all the even differences, are proportional to the entries themselves. Carrying Everett's interpolation formula through all orders, each interpolate is therefore the sum of the two adjacent tabular sines, each multiplied by a corresponding square bracket,

$$\sin(\varphi + m\omega) = \left[\,\cdot\,\right]\sin\varphi + \left[\,\cdot\,\right]\sin(\varphi + \omega), \tag{2}$$

which is, in general, different from the ordinary concept of interpolation.

It is seen from trigonometry that the square brackets in equation (2), which have been derived by means of working with Everett's interpolation formula, have closed expressions, namely

$$\sin(\varphi + m\omega) = \frac{\sin(1-m)\omega}{\sin\omega}\,\sin\varphi + \frac{\sin m\omega}{\sin\omega}\,\sin(\varphi + \omega).$$
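A quick numerical check of the closed expressions as reconstructed here; the identity is elementary trigonometry, so any interval and argument will do.

```python
import math

# Verify: sin(phi + m*w) = [sin((1-m)w)/sin w] sin(phi)
#                        + [sin(m*w)/sin w]   sin(phi + w)

w = math.radians(1.0)            # tabular interval of one degree
phi = math.radians(37.0)         # an arbitrary tabular argument
for m in (0.13, 0.50, 0.86):
    bracket0 = math.sin((1 - m) * w) / math.sin(w)   # multiplies sin(phi)
    bracket1 = math.sin(m * w) / math.sin(w)         # multiplies sin(phi+w)
    lhs = math.sin(phi + m * w)
    rhs = bracket0 * math.sin(phi) + bracket1 * math.sin(phi + w)
    print(f"m = {m:.2f}   residual = {lhs - rhs:.1e}")
```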

I shall describe briefly how we constructed an eight-place sine table, in what we considered a most efficient manner of arriving at a set of punched cards for the table. Ninety cards were key punched, proofread, and differenced, each containing the sine of an integral degree. If the argument is in degrees and if an eight-place sine table is desired, then the interval must be 0°.01 in order to have linear interpolation. This yields a second difference of three units in the eighth place. Since the second difference coefficient is at most about one-sixteenth, the error of linear interpolation is well under half a unit, and the basic idea of the process can be seen. Now this complete set of square brackets is much like a sine and cosine table.

In the present case it means that when one multiplication has been made of a square bracket times $\sin\varphi$, it is used once for a given value of $m$ in one degree and again for the same value of $n$ in the next degree. Although every single sine entry is obtained as the sum of two products, there are only as many products as there are entries in the table, because each one is used twice.

In practice the entries are not formed directly by equation (2), but the square brackets are differenced, the products formed, and then the interpolates are built up by progressive totals over each range of one degree. This process enables a multiplication with eight-figure factors (the normal capacity of a 601) and still protects the end figures against accumulated roundings. The differences of the square brackets are of the form:

$$0.01 + A\,\Delta^1 E_2 + A^2\,\Delta^1 E_4 + \cdots$$

Here $\Delta^1 E_2$ means the first difference of the Everett second difference coefficients, and $A = -2(1 - \cos\omega)$ is the constant ratio of the second difference to the entry. If this expression is evaluated to 12 decimal places, there will be only 8 significant figures beside the leading 0.01. The multiplication of $\sin\varphi$ by the leading 0.01 is, of course, merely a matter of placing the decimal point.
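The build-up by progressive totals can be sketched as follows: the differenced brackets multiply the two bounding sines, and the running total reproduces the next integral-degree sine at the end of the range, which is the automatic check mentioned below. Float arithmetic stands in here for the extra guarded places.

```python
import math

# Interpolates over one degree built by progressive totals: difference
# the closed-form brackets, multiply by the two bounding sines, and
# accumulate from sin(phi).

w = math.radians(1.0)
phi = math.radians(37.0)
s0, s1 = math.sin(phi), math.sin(phi + w)

def A(m):  # bracket multiplying sin(phi)
    return math.sin((1 - m) * w) / math.sin(w)

def B(m):  # bracket multiplying sin(phi + w)
    return math.sin(m * w) / math.sin(w)

entries, total = [s0], s0
for k in range(1, 101):                    # m = 0.01, 0.02, ..., 1.00
    dA = A(k / 100) - A((k - 1) / 100)     # differenced brackets
    dB = B(k / 100) - B((k - 1) / 100)
    total += dA * s0 + dB * s1             # products built up by totals
    entries.append(total)

print(abs(entries[-1] - s1))               # end-of-degree check, ~1e-16
```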

Nine thousand multiplications were performed, using 100 group multipliers, and the work was arranged with three fields to the card. Then the cards were sorted and tabulated; 3,000 summary cards yielded the final values for the table. There was an automatic check, as each sine of an integral degree was reproduced at the end of that interval.

The final table was then reproduced, and it was necessary to punch the first differences of the rounded values in order to interpolate these. The final check was the differencing of the completed table. This kind of construction has the disadvantage of accumulated rounding-off errors, just as in numerical integration; thus, more places must be carried as a protection. In fact, this is the process of numerical integration in the case of this simple function. That leads me to make one other comment: by means of numerical integration it is possible to tabulate these functions, or to generate the functions so that cards are punched.

There is no need to discuss the subject further, since it is all covered in the subject of numerical integration. The ease with which equations can be integrated depends upon how much protection can be obtained against the accumulation of errors, and how well the integrands needed for the integration can be computed.
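Since the second difference of the sine is exactly $-2(1 - \cos\omega)\sin\varphi$, the generation by numerical integration alluded to above reduces, for this function, to a pure recurrence. A sketch, with interval and range merely illustrative:

```python
import math

# Generate a sine table by accumulation alone: the exact recurrence
#   sin(x + w) = 2 sin x - sin(x - w) + c sin x,   c = -2(1 - cos w),
# needs only two starting values and no divisions.

w = math.radians(0.01)                     # interval 0 deg .01
c = -2.0 * (1.0 - math.cos(w))
prev, cur = 0.0, math.sin(w)               # sin(0) and sin(w)
table = [prev, cur]
for _ in range(8998):                      # out to 89 deg .99
    prev, cur = cur, 2.0 * cur - prev + c * cur
    table.append(cur)

# Accumulated rounding, the drawback noted above, stays tiny in floats:
print(abs(table[4500] - math.sin(math.radians(45.0))))
```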

THE ABOVE concludes the remarks with respect to tables which have equal intervals of the argument. The remaining remarks will apply to optimum interval tables, and I am not sure whether or not it is necessary to describe the essential property of an optimum interval table. It amounts to this: the interval of the argument is made coarse where the function varies slowly and fine where it varies rapidly, while the table is still entered directly with the digits of the argument.

What does this mean in numbers? Take the entry at .325 as an example: the stored coefficients compensate in advance for the error we would otherwise make. This means that all the card columns corresponding to .325 are wired directly to the multiplier unit, and the machine takes no cognizance of whether the interval is ordinary or unusual.

That is all there is to an optimum interval table. You see, you fool the machine by giving it a table which doesn't make sense. The control panel is wired in a way that doesn't make sense. Generally, this theorem doesn't hold, but in this case the two nonsenses make sense.
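A sketch of the "two nonsenses" in modern dress; every name here is hypothetical, but the essence is that the leading digits of the argument index the table directly, a wide interval simply storing its coefficient row several times over, and the argument itself serves as the multiplier of equation (4) below.

```python
# Optimum-interval lookup: no search for the right interval is made;
# the card columns of the argument pick the row, and the shift to the
# interval's origin is pre-absorbed into the stored coefficients.

def lookup(rows, x):
    F0, D1, D2 = rows[int(x * 1000)]       # digits of x index the row
    return F0 + x * D1 + x * x * D2        # equation (4) with N = x

# Toy table for f(x) = x*x on [0, 0.010): a single row serves the whole
# range (exact for a quadratic), stored ten times so that any leading
# digits land on it.
rows = [(0.0, 0.0, 1.0)] * 10
print(lookup(rows, 0.0037))                # 1.369e-05 = 0.0037**2
```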

What I would like to show you is a very unsophisticated way of looking at the construction of such a table, if a table with second order interpolation is desired. The generalizations are fairly obvious. If one wishes to have an interpolation formula of the form

$$f_n = F_0 + N D_1 + N^2 D_2, \tag{4}$$

the way in which we shall approach the problem is as follows. Suppose the intervals of the argument which we are to use have been established; the interval may be changed whenever the function's variation warrants, but within the restriction that the combinations of intervals must always end in a cycle of ten units of the left-hand position. There remain the third and fourth derivative terms, which cannot be included because the interpolation formula is to be only quadratic. However, since these terms are always positive, they shall be used as illustrated in Figure 3.


FIGURE 3. The neglected third-order error term over one interval, with the total error $E$ at the end of the interval and the fraction $KE$ recovered by the linear replacement.

Let $E$ be the total error committed at $x = x_0 + a$ if we neglect the third order term completely. Let $KE$ be the fractional part of this error which is taken into account if the cubic term is replaced by a linear term, as shown. Then the remaining neglected error is $h^3E - hKE$. This error has a maximum (shown by the short vertical line) at $h = \sqrt{K/3}$. If the error is set at this point equal in magnitude and with opposite sign to the error at the end of the interval, we have

$$\sqrt{\frac{K}{3}}\left(\frac{K}{3} - K\right)E = (K - 1)\,E,$$

and $K = 3/4$.
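The algebra compressed into that last step can be written out; this is a reconstruction, with the interval normalized to unit length so that the end error with $K = 0$ is $E$:

```latex
\[
g(h) = (h^3 - Kh)E, \qquad g'(h) = (3h^2 - K)E = 0
\;\Longrightarrow\; h = \sqrt{K/3},
\]
\[
g\!\left(\sqrt{K/3}\right) = \sqrt{\frac{K}{3}}\left(\frac{K}{3} - K\right)E
= -\frac{2K}{3}\sqrt{\frac{K}{3}}\,E, \qquad g(1) = (1 - K)E,
\]
\[
\frac{2K}{3}\sqrt{\frac{K}{3}} = 1 - K
\;\Longrightarrow\; K = \frac{3}{4}
\quad\left(\text{check: } \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{1}{4} = 1 - \tfrac{3}{4}\right).
\]
```

The quartic case below proceeds the same way with $g(h) = (h^4 - Kh^2)E$, whose interior extremum lies at $h^2 = K/2$.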

If we analyze the quartic term in the same way, we obtain $K = 2(\sqrt{2} - 1)$. Our interpolation formula, for the example $f(x) = x^{-3/2}$ treated here, becomes

$$f(x) = \frac{1}{x_0^{3/2}}\left[\,1 - \frac{3}{2}\,\frac{h}{x_0}\left\{1 + \frac{3}{4}\cdot\frac{35\,a^2}{24\,x_0^2}\right\} + \frac{15}{8}\,\frac{h^2}{x_0^2}\left\{1 + \left(\sqrt{2} - 1\right)\frac{63\,a^2}{24\,x_0^2}\right\}\right].$$

Since the two braces are so nearly alike, we may use, without sensible error, $1 + 1.09375\,a^2/x_0^2$ for each of them.
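A numerical spot check of the compromise formula against the direct $x^{-3/2}$, using the common factor $1 + 1.09375\,a^2/x_0^2$ for both braces; the midpoint $x_0$ and half-width $a$ are illustrative.

```python
# Compare the modified quadratic (common brace 1 + 1.09375 a^2/x0^2)
# with f(x) = x**-1.5 across one interval [x0 - a, x0 + a].

x0, a = 4.0, 0.05
brace = 1.0 + 1.09375 * a * a / (x0 * x0)
worst = 0.0
for i in range(-10, 11):
    h = a * i / 10.0
    approx = x0 ** -1.5 * (1.0 - 1.5 * (h / x0) * brace
                           + (15.0 / 8.0) * (h / x0) ** 2 * brace)
    worst = max(worst, abs(approx - (x0 + h) ** -1.5))
print(f"worst error over the interval: {worst:.1e}")
```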

Now we may expect that the interval which may be used is somewhat more favorable than that which would be determined on the basis of neglecting the third and fourth order terms completely. I shall let Dr. Grosch explain the way in which the intervals are obtained. What is done as a rule of thumb is to write $192\,\varepsilon = w^3 f'''(x)$, where $\varepsilon$ is the admissible error, usually one-half unit in the last place. Then the size of the third derivative will control the value of the interval which may be used.
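The rule of thumb, solved for the interval; the example function and evaluation point are illustrative.

```python
# 192 * eps = w**3 * f'''(x), solved for the largest usable interval w.

def interval(fppp, eps=0.5e-8):            # eps: half a unit, eighth place
    return (192.0 * eps / abs(fppp)) ** (1.0 / 3.0)

# For f(x) = x**-1.5, f'''(x) = -(105/8) * x**-4.5:
x = 4.0
print(interval(-(105.0 / 8.0) * x ** -4.5))
```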

At this stage we have

$$f(x) = f_0 + h f_1 + h^2 f_2. \tag{5}$$

We are still faced with one other problem before we are finished: $h$ is counted from the middle of the interval; so we shall write $n - a = h$. Then $n$ is counted in the same units as $h$, but from the beginning of the interval. But if the value of the argument at the beginning of the interval is not zero, but $A$, then let $N - A = n$, where $N$ is the number which is actually used as the multiplier in equation (4).

Making these substitutions in (5) in order to reduce it to the form of (4), we find that all of the following relations exist. I shall write down only the end results.

$$D_2 = f_2, \qquad D_1 = f_1 - 2(a + A)D_2, \qquad F_0 = f_0 - (a + A)D_1 - (a + A)^2 D_2.$$
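These end results translate directly into a small routine; a sketch, with arbitrary coefficient values, including the check that the shifted form reproduces the midpoint expansion:

```python
# From midpoint Taylor coefficients f0, f1, f2 (h counted from the
# middle of the interval), half-width a, and interval origin A, form
# the coefficients actually stored for equation (4).

def punched_coefficients(f0, f1, f2, a, A):
    D2 = f2
    D1 = f1 - 2.0 * (a + A) * D2
    F0 = f0 - (a + A) * D1 - (a + A) ** 2 * D2
    return F0, D1, D2

# Check: F0 + N*D1 + N^2*D2 must equal f0 + h*f1 + h^2*f2, h = N - A - a.
f0, f1, f2, a, A = 0.125, -0.047, 0.018, 0.005, 0.320
F0, D1, D2 = punched_coefficients(f0, f1, f2, a, A)
N = 0.325
h = N - A - a
print(F0 + N * D1 + N * N * D2 - (f0 + h * f1 + h * h * f2))   # ~0
```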

This is about the simplest way of which I could think to present the development from the function and a Taylor's series to the final results entered in the table.

DISCUSSION

Dr. King: I would like to make some much more general remarks. It is a good thing for a lot of people to work on these problems so as to make sure that the best method finally comes out.

On the other hand, there is a point where it is inefficient to have too many people, and I would like to ask the speakers whether they think the last word has been more or less said on optimum interval tables; and, if so, I am sure there are some particular little details that could still be improved.

So I would like to hear from them whether they think the time is ripe for people to get together and have one system of optimum interval tables.

Dr. Grosch: I think, honestly, we can say that the polynomial case for the single variable is just about under control now. By the time you go to two variables it becomes so complicated that it may not be worth investigating until we have some big bi- or trivariates that we just have to make. It is really a beastly job, even in the linear case.

I have made some explorations in that direction and don't feel at all satisfied. In the univariate case, I don't think there is much we can do beyond this inverse matrix business, and the reason I am so sure of it is this: you may pick a wrong interval, because several terms in the Taylor series are contributing (higher order terms, as Dr. Herget shows, are being added with lower order terms and so forth); but if you pick an interval under any assumption whatsoever, Mr. Hastings' comment of yesterday is the key to the whole situation: the error curve for a certain degree of approximation is going to look just about the same. It will change a little bit.

He said Tchebyscheff was the zeroth order approximation to that curve. It will change a little, but the position of the extrema is very stable. Therefore, you are going to make an error of E at the position where you think those extrema are going to occur; and, even if the function doesn't behave quite the way its next higher term indicates it should, the extrema aren't going to shift very much. Therefore, your value of the actual error curve obtained when you use the table will not be more than a tenth of a per cent or a hundredth of a per cent greater than the theoretical E, unless you come to a curve so very bad that the error curve doesn't look anything like a Tchebyscheff polynomial; and, of course, we can always invent such curves.

But I think they are going to be quite hard to invent.

I also expect that the rational function is going to have a very stable error curve, what Professor Tukey referred to as the Tchebyscheff polynomial for rational functions.

But I don't have that as yet and I don't know whether Mr. Hastings has.

Professor Kunz: I think one of the important things in this talk is that Dr. Grosch has reminded us that there are other ways of interpolating. Just as a very trivial suggestion: if you take a hyperbola and pass it through three points, this gives you second order interpolation in a more general sense. Some of you who haven't tried similar things might like to try it. One of the interesting properties is that if you try inverse interpolation with such a function, it is just as easy as direct interpolation. You can obtain second order inverse interpolation very nicely.

I have used this in quite a few cases, and it sometimes yields a very nice fit to curves, particularly if they have a singularity somewhere in the region.

It is just a suggestion to become sort of initiated to these reciprocal differences which are a little forbidding and are awfully hard to integrate.
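A sketch of the suggestion, under the usual three-parameter form $y = (a + bx)/(1 + cx)$; the parameterization is an assumption, since Professor Kunz does not specify one. Solving the linear system $a + bx - cxy = y$ at the three points gives the coefficients, and the closed-form inverse $x = (y - a)/(b - cy)$ is why inverse interpolation is as easy as direct.

```python
# Hyperbola through three points and its closed-form inverse.

def fit_hyperbola(pts):
    """Solve a + b*x - c*x*y = y at three points for (a, b, c)."""
    M = [[1.0, x, -x * y] for x, y in pts]
    r = [y for _, y in pts]
    for col in range(3):                   # Gaussian elimination
        piv = max(range(col, 3), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        r[col], r[piv] = r[piv], r[col]
        for i in range(col + 1, 3):
            f = M[i][col] / M[col][col]
            M[i] = [u - f * v for u, v in zip(M[i], M[col])]
            r[i] -= f * r[col]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                    # back substitution
        coef[i] = (r[i] - sum(M[i][k] * coef[k]
                              for k in range(i + 1, 3))) / M[i][i]
    return coef

a, b, c = fit_hyperbola([(0.0, 1.0), (1.0, 0.5), (3.0, 0.25)])  # y = 1/(1+x)
y = (a + b * 2.0) / (1.0 + c * 2.0)        # direct interpolation at x = 2
x = (y - a) / (b - c * y)                  # inverse interpolation at that y
print(y, x)                                # 0.3333..., 2.0
```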

Professor Tukey: I cannot agree with Professor Kunz in the use of the word "interpolation." The essential point about this is that we have given up interpolating, just as we have given up expanding in series. We are trying to get something that is a good fit, and that is a different problem.

Mr. Bell: While we are on the subject of tables, I would like to point out another way of getting a fit to curves of various sorts. It is a widespread opinion among engineers that a problem which involves curves of some sort cannot be done on punched cards. I am talking, of course, about engineers who have hearsay knowledge of IBM equipment.

Now, this is not true. All you have to do is read a lot of points. With the points you can obtain first differences, set up a linear interpolation, which can be done quite quickly. Of course, this is completely non-elegant, but very practical. Many times you have whole families of curves. We, in our organization, are fortunate in having rapid ways of reading such data. We can read, say, a thousand points from families of curves in maybe an hour and be ready to go on a problem without mathematics.
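Mr. Bell's procedure in miniature: store the read points, difference between neighbors, and interpolate linearly. The point values here are made up.

```python
import bisect

# Linear interpolation through a list of read points.
def linear_table(points):
    xs, ys = zip(*sorted(points))
    def f(x):
        i = max(1, min(len(xs) - 1, bisect.bisect_left(xs, x)))
        t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
        return ys[i - 1] + t * (ys[i] - ys[i - 1])   # first difference
    return f

curve = linear_table([(0.0, 0.00), (1.0, 0.80), (2.0, 0.90), (3.0, 0.95)])
print(curve(1.5))                                    # 0.85
```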
