{2}The empirical evidence (comparisons of formal and informal techniques for determining the correctness of an implementation) might not seem to support the speed and ease claims (see [16] for some projects that support the full claim), although there is a growing body of evidence supporting the effectiveness claim [3, 4, 5]. It should be noted, however, that the techniques are rarely employed with the same level of rigor. Informal techniques, exactingly applied to every detail, would be significantly slower than they are as employed in practice--while formal techniques might be improved (in speed, at least) by a more intuitively guided application.
{3}For an excellent discussion of this "paradox of applied mathematics", and an insightful analysis of a debate on how it relates to computer science, see [6].
{4}This is undoubtedly because the author has spent his
professional career in computer and system security.
{5}If anybody has a notion of how to go about
this, I'd be happy to chat about it and show you my (partial)
mechanization of Andrews's Q0 logic [7].
{6}The automated tools that have been developed
are probably heavily tailored to the programming style of Dr.
Harrison, but they work quite well (across versions of
the HOL Light system) for the intended application.
{7}We will do our best, however, not to get ahead
of ourselves, and will demonstrate that this project is feasible before
trying to automate the process.
{8}It also has two advantages that are of particular
benefit to the PhD student: it has the largest user community of any
prover of this kind, and it is the prover of this kind with which the author
is most familiar.
{9}Experience in the computer security community suggests
that an object is called "trusted" either when:
{10}Our bias throughout the project and its presentation will
be to prefer formal, automated reasoning over reasoning that is informal
or manual. While we concede the need for informal and/or non-automated
reasoning, we would prefer that, where possible, error-prone humans not be
the ones doing the tedious checking.
{11}It should be noted that tactics actually work
backward from the goal to known truths. So, what they actually do is
figure out what the last step of the proof should be and what rule
would have to have been used to get there. Tactics can be combined
(using a special kind of operator called a tactical), enabling them to
perform a series of such "backward" steps.
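The backward-proof idea can be made concrete with a small model. This is an illustrative sketch only, not HOL Light's actual OCaml API: the names conj_tac, assumption_tac, and then_ are invented for the example. Here a tactic maps a goal to the list of subgoals that would suffice to establish it, and a tactical (then_) chains such backward steps together.

```python
# Toy model of backward (goal-directed) proof with tactics and a tactical.
# A "goal" is a formula to prove; a tactic returns the subgoals whose proofs
# would justify the goal, i.e. it picks the last rule of the proof and works
# backward. An empty subgoal list means the goal is fully proved.

def conj_tac(goal):
    """If the goal is a conjunction ("A and B"), reduce it to subgoals A, B."""
    if isinstance(goal, tuple) and goal[0] == "and":
        return [goal[1], goal[2]]
    raise ValueError("conj_tac: goal is not a conjunction")

def assumption_tac(axioms):
    """Make a tactic that closes a goal already among the known truths."""
    def tac(goal):
        if goal in axioms:
            return []  # no subgoals remain
        raise ValueError("assumption_tac: goal not among known truths")
    return tac

def then_(tac1, tac2):
    """A tactical: apply tac1, then apply tac2 to every resulting subgoal."""
    def tac(goal):
        return [sub for g in tac1(goal) for sub in tac2(g)]
    return tac

# Usage: prove "p and q" backward from the axioms p and q.
axioms = {"p", "q"}
goal = ("and", "p", "q")
remaining = then_(conj_tac, assumption_tac(axioms))(goal)
print(remaining)  # [] -- no subgoals left, so the proof is complete
```

The combined tactic first splits the conjunction into two subgoals and then discharges each against the axioms, mirroring how a tactical strings together individual backward steps.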
{12}To be fair, NQTHM includes a facility for doing some
quantification (hence the NQ for "Newly Quantified"), but the basic part of
the system, which got the most use, did not support quantification.
{13}That is, to the extent that the semantics of the
imperative features of the language have been defined and incorporated into
the reasoning system. So far, imperative features of ML have gotten less
attention. But this work seems like the ideal way to drive the semantics
work since it will provide a direct way to benefit from it--the ability
to create faster provers.
{14}Harrison [34] states a
preference for a computational reflection tool that is able "to
repair the hull of a ship while it is sailing, without bringing it
into dry dock"--and there is no doubt that Slind's MM does a much better
job of meeting that requirement.
To follow the analogy, however, it is important to note
that structural improvement generally occurs only in dry dock, and that
repairs performed in dry dock carry a much greater level of assurance. The
scheme proposed here is clearly the computational reflection dry dock (it
requires more infrastructure and cost, but should deliver higher assurance);
we accept the costs involved because we seek to maximize assurance.
We will do our best to stick to the first version.
Saying that "all of the trust may be placed" in one part of the object,
then, means that the obligation to do informal analysis may be restricted
to that part of the object.