## Journal: Guaranteeing bounds on uncertainty in metrology

I saw in the current (April 1996) issue of SIAM News an article on metrology that contains unsolved problems regarding uncertainty. This looks like an invitation for interval methods...

The exact reference is

V. Srinivasan,

How tall is the pyramid of Cheops?... and other problems in computational metrology,

SIAM News, Vol. 29, No. 3, April 1996, pp. 8-9, 17.

On p. 17, Srinivasan says:

``All current implementations of computational metrology algorithms ignore uncertainties in input data and attempt to solve deterministic versions of the problems. Most, if not all, of them use floating-point arithmetic and therefore are subject to the vagaries of round-off errors. Even in these deterministic versions, therefore, currently available software cannot promise solutions of guaranteed precision. In some cases, attempts have been made to apply inverse error analysis; that is, the output can be provably claimed to be the exact solution to some perturbed input. This, of course, does not solve the problems in which we know the input uncertainties and want to know the output uncertainties.''

``It is sometimes possible, taking input uncertainties into account, to give bounds for the uncertainties of the output. It is important to remember, however, that for practical reasons loose bounds are not very useful. A loose estimate for measurement uncertainty may underestimate real variations in actual parts. It may also lead to overspecification of the required capabilities of manufacturing processes. Both outcomes are clearly undesirable.''

``In summary, CMM software has improved considerably in the past ten years. Interesting theoretical solutions to some of the deterministic problems in computational metrology have been proposed, although the really important problems involving measurement uncertainties have remained unsolved.''

Thus far the quotation. The author stresses the importance of providing guaranteed results that are not too loose, in the sense of bounds that strictly cover the worst case yet are not far from it.

Thus simple techniques such as approximate linearization or centered forms are probably not sufficient. Ultimately, in my opinion, the problems reduce to relaxed global optimization problems: finding enclosures for certain (not very complicated) functions of inputs restricted by interval or ellipsoidal bounds, with a guaranteed limit on the overestimation of the final width, of perhaps 5 percent or so; the locations of the inputs leading to the extreme cases need not be computed. Since the accuracy demanded depends on both the lower and the upper bound of the range, this differs a little from standard global optimization problems and requires some extra thought.
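To make the idea concrete, here is a minimal sketch (my own illustration, not an algorithm from the article) of a range enclosure with a controlled relative overestimation: naive interval arithmetic gives a guaranteed outer bound on the range of a function over a box, sampled function values give an inner estimate, and uniform bisection is repeated until the outer width exceeds the inner width by at most 5 percent. The test function f(x) = x^2 - x is hypothetical.

```python
# Interval arithmetic on pairs (lo, hi); only the operations needed here.
def isub(x, y): return (x[0] - y[1], x[1] - y[0])
def imul(x, y):
    p = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(p), max(p))

def f(x):
    return x*x - x

def F(X):
    # Natural interval extension of f; overestimates the true range,
    # but the overestimation shrinks as the boxes shrink.
    return isub(imul(X, X), X)

def enclose_range(lo, hi, rel_excess=0.05, max_depth=20):
    """Outer enclosure of the range of f on [lo, hi] whose width exceeds
    the sampled inner estimate by at most a factor (1 + rel_excess)."""
    boxes = [(lo, hi)]
    for _ in range(max_depth):
        # Guaranteed outer bound: union of interval evaluations.
        outer_lo = min(F(B)[0] for B in boxes)
        outer_hi = max(F(B)[1] for B in boxes)
        # Inner estimate: actual function values at endpoints and midpoints.
        samples = [f(B[0]) for B in boxes] + [f(B[1]) for B in boxes] \
                  + [f(0.5 * (B[0] + B[1])) for B in boxes]
        inner_lo, inner_hi = min(samples), max(samples)
        if outer_hi - outer_lo <= (1 + rel_excess) * (inner_hi - inner_lo):
            return (outer_lo, outer_hi)
        # Bisect every box and try again.
        boxes = [half for (a, b) in boxes
                 for half in ((a, 0.5 * (a + b)), (0.5 * (a + b), b))]
    return (outer_lo, outer_hi)

lo_, hi_ = enclose_range(0.0, 1.0)
# The true range of x^2 - x on [0, 1] is [-0.25, 0]; the enclosure
# contains it and is at most 5 percent wider.
```

A real metrology code would of course subdivide adaptively rather than uniformly, and would use directed rounding to make the floating-point interval operations rigorous; the sketch only shows the stopping criterion tied to both endpoints of the range.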

Stochastic assumptions about the noise can probably be replaced by ellipsoidal bounds on the input uncertainty \eps, defined by

\eps^T C^{-1} \eps = s^2,

for s=1, s=2, s=3 in turn, where C is the covariance matrix of the input. Using a Cholesky factor L of C (so C = LL^T) and A=L^{-1}, this can be written in the form

||A\eps||^2 = s^2,

a form that is more useful from a computational point of view when C is not diagonal.
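The equivalence of the two forms can be checked numerically; the following sketch uses a hypothetical 2x2 covariance matrix and perturbation (neither is from the article), just to verify that \eps^T C^{-1} \eps = ||A\eps||^2 with A = L^{-1} and C = LL^T.

```python
import numpy as np

# Hypothetical covariance matrix of the input (assumed for illustration).
C = np.array([[4.0, 1.0],
              [1.0, 2.0]])

L = np.linalg.cholesky(C)   # lower-triangular Cholesky factor, C = L @ L.T
A = np.linalg.inv(L)        # in production code, prefer a triangular solve

eps = np.array([1.0, -0.5])             # a sample input perturbation
q1 = eps @ np.linalg.solve(C, eps)      # eps^T C^{-1} eps
q2 = np.linalg.norm(A @ eps) ** 2       # ||A eps||^2

# q1 and q2 agree up to round-off, confirming the rewritten form.
```

Numerically one would avoid forming L^{-1} explicitly and instead solve the triangular system L y = \eps, since q2 = ||y||^2; the explicit inverse above is only for readability.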