Stephen
Yablo, MIT

draft of August 6, 1999

comments very welcome

Frege
in “Notes for L. Darmstaedter” asks: “Is arithmetic a game or
a science?” His own view, of course, is that it is a science, albeit a
science of a very particular kind. Unlike physics and geology, its results are
knowable a priori; unlike them too, it deals with a special sort of logical
object. But an unusual science is a science nevertheless.

I
say that Frege has asked the right question, but given the wrong answer. The
picture of mathematics as a science has led to a stagnant, boring, and
generally unsatisfying state of affairs from which the alternative picture
holds out some hope of delivering us. Because the unsatisfyingness has
essentially to do with the unequal treatment of pure mathematics and
mathematics as it comes up in applications, let's start by talking about that.

Frege
is very clear about the reasons for his position. He tells us in the
__Grundgesetze__
that “it is applicability alone which elevates arithmetic from a game to
the rank of a science” (vol. II, sec. 91). A game-like interpretation
would be fine, it seems, except for the fact that arithmetic does so much work
for us in our dealings with the material world.

If we stay within [the] boundaries [of formal mathematics], its rules appear as arbitrary as those of chess. [But] applicability cannot be an accident (Grundgesetze, vol. II, sec. 89).

The
suggestion here is that the explanation of why
__these__
rules are the ones we use is that these rules are the ones that track
mathematical reality. If we take it that tracking reality is the
job-description of science, then here is an argument for the scientific
interpretation he favors.

The
funny thing is that, Frege notwithstanding, applicability is widely considered
to create
__problems__
for the view of arithmetic as a science. Arithmetic qua science is a
deductively organized description of sui generis objects entirely disconnected
from the natural world. (Likewise the theory of real and complex numbers, and
analysis, and set theory.) But now we have

THE PROBLEM OF APPLIED MATH: How can a story about objects disconnected from the natural world be so useful in natural science = the theory of the natural world?

This
has also been called (by Wigner, a one-time student of Hilbert’s) the
problem of "the unreasonable effectiveness of mathematics." By whatever name,
it bids fair to be considered
__the__
most urgent problem in the philosophy of math. What have philosophers had to
say about it?

There
are really two issues here. One is, what have philosophers had to say in a
__constructive__
vein? What do they say by way of trying to
__solve__
the applicability problem? The other is, what do they say in a
__dialectical__
vein? What are the "morals" and "lessons" they're inclined to draw from the
problem?

Now,
you might
__expect__
that what is said dialectically would be based on what is said in a
constructive vein. That is, we're to draw the following lessons from
applicability,
__because__
they are the ones that emerge from our story about how applications in fact
work.

But
you'd be wrong. The "moral" that has been drawn is quite
__strong__:
applicability is taken to show that mathematics gives a literally
__true__
description of a special mathematical division of reality. Whereas when we
look at what gets said in a constructive vein, we find that it is extremely
__weak__.
All that usually gets said is that standard mathematics had
__better be true__,
if its applicability isn't to seem a mystery. Even Frege appears to take a
version of this line:

'How does [formal arithmetic] differ from a mere game?' Thomae ...allud[es] to the services it could render to natural science. The reason can only be that numerical signs have reference and chess pieces have not. There is no other ground for attributing a higher value to arithmetic than to chess (section 90) [1]

It
is admitted, of course, that truth is only a
__necessary__
condition of applicability, not the full explanation. Most true statements,
e.g., statements about what I had for lunch yesterday, are of no use whatever
to scientists. To suppose that truth alone should make for applicability would
be like supposing that random high quality products should improve the
operation of random machines. This seems to be what the March Hare believed in
Alice in Wonderland; asked what had possessed him to put butter into the
Mad Hatter's watch, he says "but it was the BEST butter."
The best record of what I had for lunch won’t help science any more than
the best butter will improve the operation of a watch.

But
although no one quite maintains that truth is enough, not a lot has been
written about what more might be required. The assumption seems to be
that any additional requirements will be particular to this or that application
and of little overall philosophical interest. The most that can be said in
__general__
about how mathematics manages to apply is that mathematical terms refer and
mathematical reality truly is as mathematics describes it.

A
little while ago I said that applicability was
__the__
most urgent problem in the philosophy of mathematics. That may have surprised
you. You may have had another problem in mind for the role:

THE PROBLEM OF PURE MATHEMATICS. Assuming that mathematics is true, what makes it so? What are the truthmakers for mathematical claims? (And how is it that we manage to be so extremely reliable about when the truthmakers obtain?)

Just
about every philosopher of mathematics has started here, leaving applicability
to be dealt with at a time to be named later. It's little wonder then that the
agenda has been dominated by issues like the following:

is arithmetic true in virtue of

(a) the behavior of particular objects (the numbers), or

(b) the behavior of ω-sequences in general, or

(c) the fact that it follows from Peano's axioms?

if (a), are the numbers sets, and if so which ones? how do we know that such things exist? how do we manage to be so reliable about their properties?

if (b), are the ω-sequences actual or can they be merely possible?

if the first, how do we know that such things exist? how do we manage to be so reliable about their properties? if the second, how does the proposal differ from (c)?

if (c), are the axioms first-order or second-? if first-, what about the incompleteness theorems? if second-, is second-order logic to be seen as basic or as a branch of set theory? if basic, aren't we treating the (considerable) mathematical content latent in second-order logic as holding primitively and groundlessly? [2] if a branch of set theory, then doesn't the trilemma just recur in a new key? That is: is set theory true in virtue of

(a') the behavior of particular objects (the sets),

(b') the behavior of ∈-structures in general, or

(c') the fact that it follows from such and such axioms?

Having
circled around these issues for many decades now, I am sure that we are all
good and tired of them. But what is the alternative? A number of theories
have, for better or worse, managed to bear up tolerably well under critical
pressure – not well enough to make anyone
__happy__,
perhaps, but well enough to have earned their place in a standard menu of
options. One can't just throw these theories in the waste basket and start
over! And so the issue of what makes pure mathematics true continues to be
what philosophers wrangle over.

You
might feel edified by the decades of wrangling, or you might not. But one
thing seems clear: the continued obsession with issues like the above means
that not much time has been left for the problem of
__applications__.
One takes the occasional sidelong glance, to be sure. But this is mainly to
reassure ourselves that as long as mathematics is true, there is no reason why
scientists should not take full advantage of it.

Look
at that claim again: as long as mathematics is true, there is no reason why
scientists should not take advantage of it. I suppose this means that they
should not feel
__guilty__
about taking advantage of it, no more than they would feel guilty about taking
advantage of any truths. But then it begs a crucial question. Why should
scientists
__want__
to take advantage of it? What
__good__
does it do them? What sort of advantage is there to be taken?

The
reason this matters is that, depending on how we answer, the pure problem is
greatly transformed. It could be, after all, that the kind of help mathematics
gives is a kind it could give
__even if it were false__.
If that were so, then PURITY – which in its usual form presupposes that
mathematics is true – is going to need a different sort of treatment than
we have grown accustomed to.

Where
are we? I said that philosophy of mathematics has tended to emphasise PURITY
over APPLICABILITY. The standard line on applicability has been that as long
as mathematics is true, applicability shouldn't be much of a mystery. Also
that beyond truth, there isn't a whole lot to say. And so we should feel free
to go back to our true love: the pure problem, traditionally, the problem of
whence the truth of mathematics derives.

A
notable exception to all these generalizations is the work of Hartry Field.
Field begins his book
__Science without Numbers__
by noting that

most of the literature on the philosophy of mathematics takes the following three questions as central:

(a) How much of standard mathematics is true?....

(b) What entities do we have to postulate to account for the truth of (this part of) mathematics?

(c) What sort of account can we give of our knowledge of these truths?

A fourth question is also sometimes discussed, though usually quite cursorily:

(d) What sort of account is possible of how mathematics is applied to the physical world?

Now, my view is that question (d) is really the fundamental one. (vii)

Not
only does Field see applicability as centrally important, he dissents from
both aspects of the "standard line" on it. Where the standard line links the
utility of mathematics to its truth, Field holds that mathematics (although
certainly useful) is very likely
__false__.
Where the standard line offers little
__other__
than truth to explain usefulness, Field lays great stress on the fact that
mathematical theories are
__conservative__
over nominalistic ones, i.e., any nominalistic conclusions that can be proved
with mathematics can also be proven (albeit often much less easily) without it.
The utility of mathematics lies in the
__no-risk deductive assistance that it provides to the beleaguered theorist__.

I
think that this is on the right track. But there is something strangely
half-way about it. I do not doubt that Field has shown us a way in which
mathematics
__can__
be useful without being true. It can be used to facilitate deduction in
special nominalistically reformulated theories of his own device: theories that
are "qualitative" in nature rather than quantitative.

Interesting
as this is, it leaves more or less untouched the issue of how mathematics
__does__
manage to be useful without being true. It's not as though the people who use
mathematics in their scientific work are practitioners of Field's qualitative
science. No, they practice regular Platonic science.

How
without being true does mathematics manage to be of so much help to
__them__?
Field never quite says. He's quite explicit in fact that the relevance of his
argument to
__actual__
applications of mathematics is limited and indirect:

[What I have said] is not of course intended to license the use of mathematical existence assertions in axiom systems for the particular sciences: such a use of mathematics remains, for the nominalist, quite illegitimate. (Or, more accurately, a nominalist should treat such a use of mathematics as a temporary expedient that we indulge in when we don't know how to axiomatize the science properly.) (14)

And
now one begins to wonder what Field sees himself as doing. Does he think that
the role of mathematics vis-a-vis the
__non__-nominalistic
theories that scientists in fact use is somehow
__analogous__
to its role in connection with his custom-built nominalistic theories? So that
by explaining and justifying the one he explained and justified the other? If
so, it is strange that he does not say more about how the analogy is supposed
to work.

Or
is his view rather that he has
__not__
explained (or justified) actual applications of mathematics -- but that's OK
because, come the revolution, these actual applications will be supplanted by
the Field-style applications of which he has already treated? This stands our
usual approach to recalcitrant phenomena on its head. Our usual approach is to
theorize the phenomena that we find, not popularize the phenomena we have a
theory of!

As
you may have been beginning to suspect, these complaints have been based on a
deliberate misunderstanding of Field's project. It is true that Field wants
to know

(d) What sort of account is possible of how mathematics is applied to the physical world?

But
(d) can mean either of two different things, depending on whether one is
motivated by
__applicability__,
or
__indispensability__.[3]

Applicability
is, in the first instance, a
__problem__:
the problem of explaining the unreasonable effectiveness of mathematics. It is
also, potentially, an
__argument__
for mathematical objects. For the best explanation may require that mathematics
be true.

Indispensability
is, in the first instance, an
__argument__
for the existence of mathematical objects. The argument is usually credited to
Quine and Putnam. They say that since numbers are indispensable to science,
and we are committed to science, we are committed to numbers. But, just as
applicability was first a problem (for nominalism), second an argument (against
nominalism), indispensability is first an argument against nominalism, second a
problem. How do you nominalists propose to deal with the fact that numbers have
a
__permanent__
position in the range of our quantifiers?

Once
this distinction is drawn, it is clear that Field's concern is more with
rebutting indispensability than applicability. His question is

(d1) How can applications be conceived so that they come out dispensable?

To
__this__,
Field's two-part package of (a) nominalistically reformulated scientific
theories, and (b) conservation claims, seems a perfectly appropriate answer.
But we are still entitled to wonder what Field would say about

(d2) How are actual applications to be understood, be they indispensable or not?

If
there is a complaint to be made about Field, it is not that he has attempted
and failed to answer (d2), but that he doesn't properly address (d2) at all,
nor do the resources he provides appear to be of much use with it.

Now,
Field
__might__
reply that the indispensability argument is the important one. But that will be
hard to argue. One reason, already mentioned, is that a serious mystery
remains even if indispensability is established. How is the Fieldian
nominalist to explain the usefulness-without-truth of mathematics in
__ordinary__,
quantitative, science?

Second
and more important, though, suppose that such an explanation can be given.
Then
__indispensability becomes a red herring__.
Why should we be asked to
__demathematicize__ science,
if ordinary science's mathematical aspects can be understood on some other
basis than that they are true?

Putting
both of these pieces together: the point of nominalizing science is not
achieved unless a further condition is met, given which condition the
nominalization becomes unneeded. Since the nominalization would be superseded
by a supplement that is required anyway, the nominalization serves no very
important purpose.

That
is my first worry about Field's approach. The second worry is related. One
fault we found with the "standard line" on applicability is that it rested too
much on the mere fact of a theory's
__truth__.
There are all kinds of (equally) true mathematical theories; why do some of
them, e.g., ordinary arithmetic, come up again and again in science, whereas
others, e.g.,
__modular__
arithmetic, come up very infrequently?
[4]

One
can complain with equal justice that Field rests too much on the mere fact of a
theory's
__conservativeness__.
If N is a theory of concrete objects, then just about
__any__
non-concrete theory M is going to be conservative over N. (This is a gross
exaggeration, but not in a way that matters to the present argument.) Why then
do the same few mathematical theories do the lion's share of the work? Field
notices this but doesn't attempt to explain it:

Even standard axioms of number theory can be modified without endangering [conservativeness]; similarly for standard axioms of analysis. What makes the mathematical theories we accept better than these alternatives to them is not that they are true and the modifications of them are not true, but rather that they are more useful: they are more of an aid to us in drawing consequences from those nominalistic theories that we are interested in. (15)

This
in a way just sharpens the question. Why is arithmetic so much more useful in
drawing consequences than modular arithmetic? Also, how can the fact that
number theory is more of an aid in drawing nominalistic consequences from
Field-theories, which no one uses, help to explain the importance of number
theory in connection with the theories people
__do__
use?
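The contrast can be put in miniature. Here is a hedged sketch (the example and names are my own, not Field's): ordinary addition mirrors how disjoint pluralities of concrete things combine, while addition "modulo N" does not, which is at least one reason the former is so much more of an aid in drawing nominalistic consequences.

```python
# Toy illustration: ordinary addition tracks how disjoint pluralities
# of concrete things combine; addition modulo N does not.
N = 5

def combine_ordinary(a, b):
    # counting with ordinary arithmetic
    return a + b

def combine_mod(a, b):
    # counting with arithmetic modulo N
    return (a + b) % N

# 3 quarks in one particle and 4 in another make 7 quarks altogether.
print(combine_ordinary(3, 4))  # 7: matches the concrete facts
print(combine_mod(3, 4))       # 2: misreports the combined plurality
```

Both theories are (let us grant) conservative over a theory of the particles; only one of them earns its keep as a representational and deductive aid.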

The
third worry is this. Consider the kind of usefulness-without-truth that Field
lays so much weight on; mathematics thanks to its conservativeness gives
no-risk deductive assistance. It’s far from clear why
__this particular form__
of usefulness-without-truth deserves its special status. Does Field think that
deductive assistance is the
__only__
kind of theoretical help that objects can give without having to go the trouble
of existing? He gives no argument for this claim that I know of, but consider
the following quote:

if our interest is only with inferences among claims that don't say anything about numbers (but which may employ, say, numerical quantifiers), then we can employ numerical theory without harm, for we will get no conclusions with numerical theory that wouldn't be valid without it. Indeed, numerical theory is not only __harmless__ for the purpose of drawing nominalistic consequences, it is also rather __useful__ for this purpose ... There are other purposes for which this justification for feigning acceptance of numerical theory does not apply, and we must decide whether or not to genuinely accept the theory. For instance, there may be observations that we want to formulate that we don't see how to formulate without reference to numbers, or there may be explanations that we want to state that we can't see how to state without reference to numbers...if such circumstances do arise, then we will have to genuinely accept numerical theory if we are not to reduce our ability to formulate our observations or our explanations ("Platonism for Cheap?" 161-2, italics added).

But,
__why__
will we have to accept numerical theory in these circumstances? Having just
maintained that the
__deductive__
usefulness of Xs is not a reason to accept that Xs exist, he seems now to be
saying that
__representational__
usefulness is another matter. One might wonder whether there is much of a
difference here. I am not denying that deductive usefulness is an important
non-evidential reason for making as if to believe in numbers. But it's hard to
see why representational usefulness isn't similarly situated.

Representational
usefulness will be the focus in what follows. But I don't want to give the
impression that the possibilities end there. Another way that numbers appear
to "help" is by redistributing theoretical content in a way that streamlines
theory revision.

Suppose
for example that I am working in a first-order language speaking of material
objects only. And suppose that my theory says that there are between two and
three quarks in each Z-particle:

(a) (∀z)[(∃q₁)(∃q₂)(q₁ ≠ q₂ & qᵢ ∈ z & (∀r₁)(∀r₂)((r₁ ≠ r₂ & rⱼ ∈ z) → (r₁ = q₁ etc.)))].

Then
I discover that my theory is wrong: the number of quarks in a Z-particle is
between two and
__four__.
Substantial revisions are now required in my sentence. I will need to write
in a new quantifier '(∀r₃)'; two new non-identities 'r₁ ≠ r₃' and 'r₂ ≠ r₃'; and two new identities 'r₃ = q₁' and 'r₁ = q₂'.
Compare this with the revisions that would have been required had
quantification over numbers been allowed – had my initial statement been

(a') (∀z)(∀n)(n = #q(q ∈ z) → 2 ≤ n ≤ 3).

It
would have been enough just to strike out the '3' and write in a '4.' Someone
might say that the revisions would have been just as easy had we helped
ourselves to numerical quantifiers '(∃ₙx)' defined in the usual recursive
way:

(∃₀x)Fx =df (∀x)¬Fx

(∃ₙ₊₁x)Fx =df (∃y)(Fy & (∃ₙx)(Fx & x ≠ y))

But
this just postpones the inevitable. Our theory might tell us that the stars
have on average 2.5 planets. If we know the stars to number less than a million
– without
__some__
upper bound there's no finite representation at all -- we can write this out as
a huge disjunction:

(b) [(∃₂x)Sx & (∃₅y)Py] ∨ [(∃₄x)Sx & (∃₁₀y)Py] ∨ ... ∨ [(∃₁₀₀₀₀₀₀x)Sx & (∃₂₅₀₀₀₀₀y)Py]

Then
we learn that it's actually 2.__4__
planets that stars have on average. About 300,000 of (b)'s 500,000 disjuncts
will now have to be scratched out, and all or almost all of the remaining
numerical subscripts will need rewriting. Had we started out with

(b') #y (y is a planet) ÷ #x (x is a star) = 2.5

instead
of (b) the job would obviously have been much easier. Again, someone might say
that we get the same advantages as (b') by using numerical adjectives: there
are 2.5-times-as-many planets as stars. The problem rearises though when we
switch to cases of functional dependence. Suppose the ratio of planets to
stars is thought to be 2.5 times the log to the base 10 of the # of planets.
Well, maybe substitutional quantification would work here. But enough about
that; let's leave other theoretical benefits to the side for now and go back
to representational utility.

What
is it that allows us to take our uses of numbers for deductive purposes so
lightly? The deductive advantages that "real" Xs do, or would, confer are
(Field tells us) equally conferred by Xs that are just "supposed" to exist.
But the same would
__appear__
to apply to the
__representational__
advantages conferred by Xs; these advantages don't appear to depend on the Xs
really existing either. The cosmologist need not believe in "the average
star" to derive representational advantage from it, as e.g. in "the average
star has 2.4 planets." Does the physicist need to believe in the real
existence of numbers to derive expressive advantages from
__them__,
that is, to find it expressively helpful to couch her theory in numerical
terms?

Suppose,
for example, that our physicist has it in mind to record an infinite bunch of
facts of the form

(i) a projectile fired at X meters per second from the surface of a planetary sphere Y kilograms in mass and Z meters in diameter will __not__ escape its gravitational field

(ii) a projectile fired at X' meters per second from the surface of a planetary sphere Y' kilograms in mass and Z' meters in diameter __will__ escape its gravitational field

Why
not just say that

(iii) the escape velocity from a planet of mass M and diameter 2R is the square root of 2GM/R, where G is the gravitational constant?

Why
not indeed? To express the infinitely many facts in finite compass, one brings
in numbers as representational aids. One does this despite the fact that what
we're trying to get across doesn't have anything to do with numbers, and could
be expressed without them were it not for the finitude requirement.
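The compression (iii) effects can be seen by treating it as a recipe: feed in any particular mass and diameter and it settles the corresponding fact of form (i) or (ii). A sketch under stated assumptions (function names are mine; the Earth figures are approximate):

```python
import math

G = 6.674e-11  # gravitational constant, in m^3 kg^-1 s^-2

def escape_velocity(mass_kg, diameter_m):
    """sqrt(2GM/R), with R half the diameter, as in (iii)."""
    R = diameter_m / 2
    return math.sqrt(2 * G * mass_kg / R)

def escapes(speed_m_s, mass_kg, diameter_m):
    """Settles, for the given figures, whether the fact is of form (i) or (ii)."""
    return speed_m_s > escape_velocity(mass_kg, diameter_m)

# Approximate Earth: mass 5.972e24 kg, diameter 1.2742e7 m
v = escape_velocity(5.972e24, 1.2742e7)     # roughly 11,200 m/s
print(escapes(12000, 5.972e24, 1.2742e7))   # True: an instance of form (ii)
print(escapes(8000, 5.972e24, 1.2742e7))    # False: an instance of form (i)
```

Nothing in the recipe's usefulness turns on whether the numbers it mentions exist; it turns only on our grasp of what (iii) asks of projectiles and planets.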

The
question is whether functioning in this way as a representational aid is a
privilege reserved to existing things. The answer
__appears__
to be that it isn't. That (iii) succeeds in gathering together into a single
content infinitely many facts of form (i) and (ii) owes nothing whatever to the
real existence of numbers. All that's required is that
*we understand what (iii) asks of the non-numerical world*,
the numerical world taken momentarily for granted. How the real existence of
numbers could help or hinder that understanding is difficult to imagine.

An
oddity of the situation is that Field makes the same sort of point himself in
his writings on truth. He thinks that "true" is a device that exists "to serve
a certain logical need" -- a need that would also be served by infinite
conjunction and disjunction if we had them, but (given that we don't) would go
*unfilled*
were it not for "true." No need then to take the truth-predicate ontologically
seriously; its place in the language is secured by a role it can fill quite
regardless of whether it picks out a property. It would seem natural for Field
to consider whether the same applies to mathematical objects. Why couldn't it
be that, just as truth is an essential aid in the expression of facts not about
truth (there is no such property), numbers are an essential aid in the
expression of facts not about numbers (there are no such things)? I am not
saying that Field would go along with this; everything suggests he wouldn't.
It's because I can't see why
__not__
to go along with it that I find the position strangely half-hearted.

I
don't want to overreach here. It is not as though
__whenever__
a theorist uses Xs for representational purposes, their existence or not is a
matter of indifference. A classic if hackneyed example is the standard meter
bar in Paris. Suppose a 19th century physicist says

(*) the wavelength of delta-rays is 2 times the length of the standard meter bar.

What
role is the standard meter bar playing in this statement? We appeal to it for
the help it gives in classifying objects as to length. Our topic is not how
long the meter bar is but how large the wavelength. A reference to the meter
bar is unavoidable, though, if we want the wavelength pinned down precisely.

Now
in this case, it is certainly
__not__
a matter of indifference whether the meter bar exists as opposed to merely
being supposed to exist. One reason has to do with descriptive precision. A
merely supposed meter bar can be supposed to be any
__number__
of lengths, or rather, there is no very precise length such that the bar is
supposed to be
__exactly that long__.
But then (*) construed in terms of a supposed meter bar fails to specify the
wavelength of delta-rays very precisely.

Another
reason why a merely supposed meter bar would not do is this. The supposed
__length__
of such a bar is liable to fluctuate over time. That is, what strikes us as
the same length as our supposed meter bar today may strike us as longer or
shorter than it tomorrow. The length of the actual meter bar is by comparison
precise and stable -- which is good if the point of appealing to it is to point
out a precise and stable property of delta-rays.

It
is because the bar has features not predictable from our concept of it -- which
features are needed for the application at hand -- that a merely supposed
meter bar would not have been enough in this case. Of course, other cases can
be imagined. Had our native grip on the notion of a meter been better, so
that the meter bar, rather than providing a check on our visual judgments, was
the party in check ("whoops, hold everything, looks to me like it's expanded to
1001 millimeters"), then a real meter bar would not have been needed. But as
matters actually stand, or stood, the physical bar plays an indispensable role.

Compare
in this respect the role that the number 2 plays in statement (*). Can we say
in this case that the number 2 needs to exist to settle various questions not
available from "what we have in mind" by 2? Our concept does indeed leave
various things open: whether 2 contains 1, for instance; whether 2 numbers the
bicycles in my basement; maybe even whether every even number greater than 2
is the sum of two primes (Goldbach).

But
none of these are features that matter to 2's role in (*). As far as
__that__
statement is concerned, the features that matter are available already from my
concept. And so whether 2 exists or not is immaterial. Because its
representational contribution trades only on aspects made just as determinate
by our concept of 2 as they would be by the object itself, we can afford to be
agnostic or even skeptical about it in reading "the wavelength is 2 times the
length of the standard meter bar."

Where
does this leave us? It's beginning to seem as though a distinction needs to
be drawn between two sorts of (putative) object. Call an object or type of
object X
__premeditated__
relative to a statement S if our conception of Xs is determinate enough that
one of the following two statements is definitely correct and the other is
definitely incorrect:

(i) assuming that the Xs as we conceive them exist, it is the case that S.

(ii) assuming that the Xs as we conceive them exist, it is not the case that S.

Numbers
are premeditated relative to our claim S above, because, assuming numbers,
there is no question but that the wavelength is 2 times the length of the
standard meter bar. After all, it flows from our concept of number that
__if numbers exist__,
then A is 2 times as long as B iff A is twice as long as B. The standard meter
bar is
__not__
premeditated, however; because, helping ourselves to no more about that bar
than what is provided by our conception of it, there is just no telling how
many times it would fit into the wavelength of delta-rays.

Pulling
these various threads together, it appears then that to the extent that Xs are
premeditated, they can play their representational role equally well whether
they exist or not. Numbers are premeditated, so their existence or not is
irrelevant to the issue of how (*) should be evaluated. The standard meter bar
is not premeditated, so it needs to really exist if (*) wants to have a
determinate truth-value.
[5]
The point remains, of course, that theoretical indispensability and/or
utility is not itself an argument for existence. Our question should not be:
how can these things be so useful if they don't really exist? But rather: how
__do__
these things make themselves useful and is that a way that calls on them to
exist?

The
scientific utility of at least
__certain__
objects – the premeditated ones – can be explained in a way that
remains neutral on the objects' existence. Numbers, sets, and the like, are
premeditated, so their scientific applicability does not argue for their
reality. Add to this that their applicability is the main thing that
__does__
argue for their reality, and it seems we ought to conclude that there is no
reason to believe that these objects exist.

If
they don't exist, though, then what is to be said about pure mathematics?
That'll be our topic starting in the next section. First let's revisit our
three earlier worries about Field.

Worry
three was: Field recognizes only a single form of usefulness-without-truth,
viz. no-cost deductive assistance. Our answer to worry three is that
mathematics is useful not only for the deductive assistance it lends us but
also for the __representational__ assistance.

Worry
one was: Field doesn't give us an explanation of the usefulness-without-truth
of mathematics in
__ordinary__
non-Fieldian science. Our answer to worry one is that mathematics provides
representational assistance even in the context of ordinary, non-Fieldian
science.
[6]
(E.g., we say how a physical system evolves over time by giving the equations
of motion for a point in a made-up mathematical structure called phase space.)

Worry
two was that the feature of mathematical theories that Field cites as
explaining their applicability – their conservativeness w.r.t.
nominalistic theories – does nothing to privilege "standard mathematics"
over any number of other conservative-over-science theories that scientists
rarely if ever appeal to. Why arithmetic, the theory of real numbers,
analysis, and so on? One would
__like__
to be able to say that these theories are (and that for good and specific
reasons) particularly well-suited to representing facts of the sort that need
representing.
[7]
Different subject matters lend themselves to different sorts of fictional
representation. Much as the subject matter of genealogy invites representation
in terms of trees, the subject matter of science invites representation in
terms of numbers. More on this later.

To
say it one more time, the standard procedure in philosophy of mathematics is to
start with the pure problem and leave applicability for later. No surprise
then that most current theories have a lot to say about what makes mathematics
true, and much less to offer about what makes it so useful in science.

The
approach suggested here looks to be in an opposite fix. Our theory of
applications is not too shabby. But what are we going to say about pure
mathematics? If the line on applications is right, then it becomes likely that
arithmetic, set theory, and so on are largely untrue. At the very least then
the problem of purity is going to have to be reconceived. It can't be: in virtue
of what is arithmetic true? It'll have to be: how is the line drawn between
"acceptable" arithmetical claims and "unacceptable" ones? And it is very
unclear what acceptability could amount to if it floats completely free of
truth.

Just
maybe there is a clue in the line on applications. Suppose that mathematical
objects "start life" as representational aids. Some systems of mathematicalia
will work better in this capacity than others; e.g., standard arithmetic will
work better than a modular arithmetic in which all the operations are "modulo
N." As wisdom accumulates about the kind(s) of mathematical system needed,
theorists develop an intuitive sense of what is the right way to go and what
the wrong way.
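The point about modular arithmetic can be made vivid with a toy example. The piles, names, and `combine` helper below are hypothetical illustrations, with Python sets standing in for pluralities of concrete things: pooling two disjoint pluralities is tracked faithfully by ordinary addition, while addition "modulo 10" loses track as soon as counts wrap around.

```python
# Sketch: standard vs "modulo N" arithmetic as cardinality bookkeeping.
# The piles and the combine helper are hypothetical illustrations.

def combine(pile_a, pile_b):
    """Pool two disjoint piles of (named) concrete things."""
    return pile_a | pile_b

apples = {f"apple{i}" for i in range(7)}
rocks = {f"rock{i}" for i in range(5)}

pooled = combine(apples, rocks)

standard_prediction = 7 + 5        # says the pooled pile has 12 members
modular_prediction = (7 + 5) % 10  # says it has 2 -- a bad representational aid

print(len(pooled), standard_prediction, modular_prediction)
```

Standard arithmetic earns its keep, on this picture, by never misrepresenting the props.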

Norms
are developed that soon take on a life of their own, guiding the development of
mathematical theories past the point where natural science greatly cares. The
process begins to feed on itself, as descriptive needs arise w.r.t., not the
natural world, but our system of representational aids itself. These
needs encourage the construction of still further theory, maybe with further
ontology, and so it goes.

You
can see where this is headed. If the pressures our descriptive task exerts on
us are sufficiently coherent and sharply enough felt, we begin to feel under
the same sort of external constraint that is encountered in science itself.
Our theory is certainly answerable to
__something__,
and what more natural candidate than the
__objects__
of which it purports to give a literally true account? Thus arises the
feeling of the objectivity of mathematics qua description of mathematical
objects.

I
can make these ideas a
__little__
bit more precise by bringing in some ideas of Kendall Walton's about
making-as-if. The thread that links as-if games together is that they call upon
their participants to pretend or imagine that certain things are the case.
These to-be-imagined items make up the game's
__content__,
and to elaborate and adapt oneself to this content is typically the game's very
point.
[8]
At least one of the things we are about in a game of mud pies, for instance,
is to work out who has what sorts of pies, how much longer they need to be
baked, etc. At least one of the things we're about in a discussion of Sherlock
Holmes is to work out, say, how exactly Holmes picked up Moriarty's trail near
Reichenbach falls, how we are to think of Watson as having acquired his war
wound, and so on.

As
I say, to elaborate and adapt oneself to the game's content is typically the
game's very point. An alternative point suggests itself, though, when we
reflect that all but the most boring games are played with
__props__,
whose game-independent properties help to determine what it is that players are
supposed to imagine. That Sam's pie is too big for the oven doesn't follow
from the rules of mud pies alone; you have to throw in the fact that Sam's
clump of mud fails to fit into the hollow stump. If readers of "The Final
Problem" are to think of Holmes as living nearer to Hyde Park than Central
Park, the facts of nineteenth century geography deserve a large part of the
credit.

Now,
a game whose content reflects the game-independent properties of worldly props
can be seen in two different lights. What ordinarily happens is that we take
an interest in the props because and to the extent that they influence the
content; one tramps around London in search of 221B Baker street for the light
it may shed on what is true according to the Holmes stories.

Using
games to talk about game-independent reality makes a certain in principle
sense, then. Is such a thing ever actually done? A case can be made that it is
done all the time -- not indeed with explicit self-identified games like "mud
pies" but with impromptu everyday games hardly rising to the level of consciousness.
Some examples of Walton's suggest how this could be so:

Where in Italy is the town of Crotone? I ask. You explain that it is on the arch of the Italian boot. 'See that thundercloud over there -- the big, angry face near the horizon,' you say; 'it is headed this way.'...We speak of the saddle of a mountain and the shoulder of a highway....All of these cases are linked to make-believe. We think of Italy and the thundercloud as something like pictures. Italy (or a map of Italy) depicts a boot. The cloud is a prop which makes it fictional that there is an angry face...The saddle of a mountain is, fictionally, a horse's saddle. But our interest, in these instances, is not in the make-believe itself, and it is not for the sake of games of make-believe that we regard these things as props...[The make-believe] is useful for articulating, remembering, and communicating facts about the props -- about the geography of Italy, or the identity of the storm cloud...or mountain topography. It is by thinking of Italy or the thundercloud...as potential if not actual props that I understand where Crotone is, which cloud is the one being talked about. [9]

A
certain kind of make-believe game, Walton says, can be "useful for
articulating, remembering, and communicating facts" about aspects of the
game-independent world. He might have added that make-believe games can make it
easier to reason about such facts, to systematize them, to visualize them, to
spot connections with other facts, and to evaluate potential lines of
research. That similar virtues have been claimed for metaphors is no
accident, if metaphors are themselves moves in world-oriented pretend games:

The metaphorical statement (in its context) implies or suggests or introduces or calls to mind a (possible) game of make-believe...In saying what she does, the speaker describes things that are or would be props in the implied game. [To the extent that paraphrase is possible] the paraphrase will specify features of the props by virtue of which it would be fictional in the implied game that the speaker speaks truly, if her utterance is an act of verbal participation in it. [10]

A
metaphor on this view is an utterance that represents its objects as being
__like so__:
the way that they
__need__
to be to make the utterance "correct" in a game that it itself suggests. The
game is played not for its own sake but to make clear which game-independent
properties are being attributed. They are the ones that do or would confer
legitimacy upon the utterance construed as a move in the game.

Seen
in the light of Walton's theory, our suggestion above can be put like this:
numbers as they figure in applied mathematics are creatures of existential
metaphor. They're part of a realm that we play along with because the pretense
affords a desirable – sometimes irreplaceable – mode of access to
certain real-world conditions, viz. the conditions that make a pretense like
that appropriate in the relevant game. Much as we make as if, e.g., people
have associated with them stores of something called "luck," so as to be able
to describe some of them metaphorically as individuals whose luck is "running
out," we make as if pluralities have associated with them things called
"numbers," so as to be able to express an (otherwise hard to express because)
infinitely disjunctive fact about relative cardinalities like so: the number
of Fs is divisible by the number of Gs.
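A sketch of why this fact is "otherwise hard to express": without the number operator, divisibility has to be spelled out as an infinite disjunction of paired cardinality claims. The helpers `disjuncts` and `divisible_with_numbers` below are illustrative inventions, not anything from the text.

```python
# Sketch (hypothetical helpers throughout): without the number operator,
# "the number of Fs is divisible by the number of Gs" unwinds into an
# infinite disjunction over all pairs (m, n) with n dividing m:
#   (exactly_0 Fs & exactly_1 Gs) v (exactly_2 Fs & exactly_2 Gs) v ...
# With "#" in play, one short identity-involving sentence does the work.

def disjuncts(limit):
    """The first few disjuncts of the numberless paraphrase."""
    return [
        f"(exactly_{m} Fs & exactly_{n} Gs)"
        for m in range(limit)
        for n in range(1, limit)
        if m % n == 0
    ]

def divisible_with_numbers(fs, gs):
    """The short paraphrase available once numbers are pretended into being."""
    return len(gs) > 0 and len(fs) % len(gs) == 0

print(len(disjuncts(5)))  # already 12 disjuncts from a tiny initial segment
print(divisible_with_numbers({"a", "b", "c", "d"}, {"x", "y"}))
```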

Now,
if applied mathematics is to be seen as prop-oriented making-as-if, then the
thought that immediately suggests itself is that pure mathematics should be
conceived as
__content__-oriented
making-as-if.

Why
not? It seems a truism that pure mathematicians spend most of their time
trying to work out what is true according to this or that mathematical theory.
[11]
All that needs to be added to the truism, to arrive at the conception of pure
mathematics as content-oriented make believe, is this: that the mathematician's
interest in working out what is true-according-to-the-theory is by and large
independent of whether the theory is
__really
true
__
– true not in the sense of being true-according-to-a-theory, but of
fidelity to a realm of mathematical objects.
[12]

This
brings us to the question that fictionalist accounts of mathematics have
traditionally found most difficult. If the whole thing is just a game, why is
it conducted in such a serious spirit -- a spirit more characteristic of a
search after truth than the spinning out of a fairy tale? Why does
mathematical theory-construction have the feeling of being answerable to
objective external constraints?

A
partial answer is that mathematics is no ordinary game. It's a game that was
invented with prop-oriented applications in mind, and that, although it has
taken on a life of its own, still derives a good deal of its authority from
those applications. Our feeling of a right and a wrong way of going in pure
mathematics is due to the fact that some ways of going
__really
do
__
yield a product better suited to the prop-oriented applications that started
the ball rolling, and that keep it rolling today. Pure mathematics is an
endeavor in which we trace out the contours of existing schemes of metaphorical
representation and search for the best ways of elaborating those schemes.

A
fuller answer would distinguish two contexts in which we debate mathematical
truth. Some of the time we are doing "normal" mathematics; we are wondering
what is the case according to this or that mathematical theory. But for all I
have said, there is a perfectly objective fact of the matter about what is
true according to Peano Arithmetic, or Zermelo-Fraenkel set theory. So no
immediate problem of objectivity arises here. Where a problem
__does__
seem to arise is in the context of theory-
__development__.
Why do some ways of constructing mathematical theories, and extending existing
ones, strike us as better than others? I have no really satisfying answer to
this, but let me indicate where an answer might be sought.
[13]

A
distinction is sometimes drawn between
__true__
metaphors and metaphors that are
__apt__.
That these are two independent species of metaphorical goodness can be seen by
looking at cases where they come apart.

An
excellent source for the first quality (truth) without the second (aptness) is
back issues of
__Reader's
Digest
__
magazine. There one finds jarring if not necessarily inaccurate titles along
the lines of "Tooth Decay: America's Silent Dental Killer," "The Sino-Soviet
Conflict: A Fight in the Family," and, my personal favorite, "South America:
Sleeping Giant on Our Doorstep."
[14]
Another good source is political metaphor. When Calvin Coolidge said that
"The future lies ahead," the problem was not that he was
__wrong__
– where else would it lie? -- but that he didn't seem to be mobilizing
the available metaphorical resources to maximal advantage. Likewise when
George Bush told us before the 1992 elections that "It's no exaggeration to say
that the undecideds could go one way or another."

Of
course, a likelier problem with political metaphor is the reverse: aptness
without truth. The following are either patently (metaphorically) untrue or
can be imagined untrue at no cost to their aptness. Stalin: "One death is a
tragedy. A million deaths is a statistic." Churchill: "Man will occasionally
stumble over truth, but most times he will pick himself up and carry on." Will
Rogers: "Diplomacy is the art of saying "Nice doggie" until you can find a
rock." Richard Nixon: "America is a pitiful helpless giant."

Not
the best examples, I fear. But let's move on to the question they were meant to
raise. How does metaphorical aptness differ from metaphorical truth? David
Hills maintains that where truth is a semantic feature, aptness is an aesthetic
one: "When I call Romeo's utterance apt, I mean that it possesses some degree
of poetic power... Aptness is a specialized kind of beauty attaching to
interpreted forms of words...For a form of words to be apt is for it...to be
the proper object of a certain kind of felt satisfaction on the part of the
audience to which it is addressed" (119-120).

An
immediate difficulty with this is that "apt" can be used in connection not with
__particular__
metaphorical claims but entire metaphorical frameworks, or types of them. It
might be said, for example, that rising pressure is a good metaphor for intense
emotion; that possible worlds provide a good metaphor for modality; or that war
makes a good (or bad) metaphor for argument. What is meant by this sort of
claim? Not that pressure (worlds, war) are metaphorically
__true__
of emotion (modality, argument). There's no question of truth because no
metaphorical claims have been made. But it would be equally silly to speak
here of poetic power or beauty. The suggestion seems rather to be that a
make-believe game built around pressure (worlds, war)
__lends__
itself to the metaphorical expression of truths about emotion (possibility,
argument). The game "lends itself" in the sense of affording access to lots of
those truths, or to particularly important ones, and/or in the sense of
presenting those truths in a cognitively or motivationally advantageous light.

Aptness
then is
__at
least
__
a feature of prop-oriented make-believe games; a game is apt relative to such
and such a subject matter to the extent that it lends itself to the expression
of truths about that subject matter. A particular metaphorical
__utterance__
is apt to the extent that (a) it is a move in an apt game, and (b) it makes
impressive use of the resources that game provides. The reason it is so easy
to have aptness without truth is that to make satisfying use of a game with
lots of expressive potential is one thing, to make veridical use of a game
with arbitrary expressive potential is another.

Back
now to the main issue: what accounts for the feeling of a right and a wrong way
of proceeding when it comes to mathematical theory-development? I want to say
that a proposed new axiom A strikes us as correct roughly to the extent that a
theory incorporating A seems to us to make for an
__apter
game__
– a game that lends itself to the expression of more metaphorical truths
– than a theory that omitted A, or incorporated its negation.

Take
for instance the controversy over the axiom of choice. One of the many
considerations arguing
__against__
acceptance of the axiom is that it requires us to suppose that the spheres of
pure geometry decompose into parts that are reassemblable into enormously
larger spheres. (The Banach-Tarski theorem.) Physical spheres are not
__like__
that, so we imagine, hence the axiom of choice makes geometrical space an
imperfect metaphor for physical space.

One
of the many considerations arguing in
__favor__
of the axiom is that it blocks the possibility of sets X and Y neither of which
is injectable into the other. This is crucial if injectability and
uninjectability are to serve as metaphors for relative size. It is crucial that
the statement about functions that "encodes" the fact that there are not as
many Ys as Xs should be seen in the game to
__entail__
the statement "encoding" the fact there are at least as many Xs as Ys. This
entailment would not go through if sets were not assumed to satisfy the axiom
of choice.
[15]
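The consequence of choice being appealed to can be set down in symbols; a sketch, writing X ↪ Y for "there is a one-one map of the Xs into the Ys":

```latex
% Cardinal comparability, a known consequence of the axiom of choice:
% any two sets are connected by an injection in one direction or the other.
\[
\mathsf{AC} \;\Longrightarrow\; \forall X\,\forall Y\,
  \bigl(\, X \hookrightarrow Y \;\lor\; Y \hookrightarrow X \,\bigr)
\]
% Contraposing one disjunct yields exactly the entailment in the text --
% "no injection of the Xs into the Ys" (not as many Ys as Xs) entails
% "the Ys inject into the Xs" (at least as many Xs as Ys):
\[
\neg\,(X \hookrightarrow Y) \;\Longrightarrow\; Y \hookrightarrow X
\]
% Without choice this can fail: ZF alone permits incomparable X and Y.
```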

Add
to this that choice
__also__
mitigates the paradoxicality of the Banach-Tarski result, by opening our eyes
to the possibility of regions too inconceivably complicated to be assigned a
"size," and it is no surprise that choice is judged to make for an apter
overall game.

Suppose
we are working with a theory T and are trying to decide whether to extend it to
T* = T + A. An impression I
__don't__
want to leave is that T*'s "aptness" is simply a matter of its expressive
potential with regard to our original
__naturalistic__
subject matter: the world we "really believe in," which contains only concrete
things. T* may also be valued for the expressive assistance it provides in
connection with the
__mathematical
__
subject matter postulated by T – a subject matter which we "take" to
obtain in our role as players of the T-game. A "new" set-theoretic axiom may
be valued for the light it sheds not on concreta but on mathematical objects
already in play. So it is, e.g., with the axiom of projective determinacy and
the sets of reals studied in descriptive set theory.

If
mathematics is a myth, how did the myth arise? You got me. What we can do is
construct a myth about how it
__might__
have arisen. The idea in other words is to tell a just-so story such that

(i) the story treats mathematical objects as pulled long ago out of thin air

(ii) everything is __as if__ this really happened, and we had just forgotten.

My
strategy here is borrowed from Wilfrid Sellars in "Empiricism and the
Philosophy of Mind." Sellars asks us to

Imagine a stage in pre-history in which humans are limited to what I shall call a Rylean language, a language of which the fundamental descriptive vocabulary speaks of public properties of public objects located in Space and enduring through time. ...What resources would have to be added to the Rylean language of these talking animals in order that they might come to recognize each other and themselves as animals that __think__, __observe__, and have __feelings__ and __sensations__? And, how could the addition of these resources be construed as reasonable? (Sections 48-9)

I
want us to go back to the same stage of pre-history; but since we are
interested in the absence of abstracta (not private episodes) from the
language, let us think of it not as a Rylean language but a
__Goodmanian__
one.

∃_0 x Fx =_df ∀x (Fx → x ≠ x)

∃_{n+1} x Fx =_df ∃y (Fy & ∃_n x (Fx & x ≠ y))

Our
ancestors can easily say things like "there are exactly two cats" and "there
are exactly three hundred apples," and with the help of some simple
truth-functional combinations, that "there are at least (at most) fifty houses"
too.
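The recursive definitions can be prototyped over a toy finite domain; a minimal sketch, in which `exists_exactly` is a hypothetical helper unwinding the definition of ∃_n rather than counting heads directly.

```python
# Sketch of the numerical quantifiers over a toy finite domain.
# Following the recursive definitions: exactly 0 Fs iff nothing is F;
# exactly n+1 Fs iff some F-thing y is such that exactly n of the
# OTHER things are F.

def exists_exactly(n, domain, F):
    """'There are exactly n Fs' -- the definiens, not a head count."""
    if n == 0:
        return not any(F(x) for x in domain)
    return any(
        F(y) and exists_exactly(n - 1, [x for x in domain if x != y], F)
        for y in domain
    )

domain = ["cat1", "cat2", "apple1", "apple2", "apple3"]
is_cat = lambda x: x.startswith("cat")

print(exists_exactly(2, domain, is_cat))  # "there are exactly two cats"
print(exists_exactly(3, domain, is_cat))
```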

Everything
goes swimmingly until one day the Goodmanians realize that there are things
they have no way of saying. An example is "there are exactly as many Fs as
Gs." If the language had infinite disjunction, they could use

(∃_0 x Fx & ∃_0 x Gx) v (∃_1 x Fx & ∃_1 x Gx) v etc.

But
they don't, and so more radical measures are required. They convene a
conference and decide to institute a make believe game in which it's supposed
that, in addition to the regular, "concrete," objects, there are new objects
called "numbers." The point of the numbers is to serve as measures of
cardinality; there is not supposed to be much more to the intrinsic nature of,
say, 5, than that it is suited to measure the cardinality of the Fs just in case
there are five Fs. Their game is governed by the following simple rule:

(R1) when ∃_k x Fx, say that there's a "number" k = (#x)Fx

With
the help of this rule, that there are as many Fs as Gs can be put by "saying"
that (#x)Fx = (#x)Gx. Of course they don't really believe in numbers but it's
clear enough what the point of such an utterance would be; to call attention to
the real-world condition that makes (#x)Fx = (#x)Gx sayable in the game. Other
previously inexpressible contents now come within reach as well. That there
are finitely many cats can be put by saying that some number k is the
number of cats.

(R2) pretend that there's an "addition" operation on the numbers, such that #(F) + #(G) = #(F-or-G) (assuming that no Fs are Gs).

Using
this made-up operation of addition, the observation that seven Fs together
with five Gs (no Fs being Gs) make twelve F-or-Gs can be put as follows: 7
+ 5 = 12. Other previously inexpressible contents fall to the same method.
No matter what the Fs and Gs (etc.) may be, if F ≈ F' and G ≈ G',
then F-or-G ≈ G'-or-F'.
[16]
(They use "≈" for "exactly as many as.") This can now be put by
"saying" that ∀m∀n (m+n = n+m). Another example: if F ≈ F', G ≈ G', and
H ≈ H', then ((F-or-G)-or-H) ≈ (F'-or-(G'-or-H')). Addition lets them say
it like so: ∀m∀n∀p ((m+n)+p = m+(n+p)).

(R3) pretend that there's a "multiplication" operation on the "numbers," such that #(F) × #(G) = #(K), the Ks being the result of making #(F) copies of the Gs.

Now
they can access contents like the commutativity of multiplication
metaphorically; all they have to "say" is that ∀m∀n (m × n = n × m).
Other previously inexpressible contents come within reach as well. For
instance, what does ∀m∀n∀p ((m + n) × p = (m × p) + (n × p)) tell us?
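One answer the game suggests can be sketched with hypothetical helpers: `copies` implements the copying idea of (R3), tagging the copies so that distinct ones never coincide. Making m+n copies of some things then comes to the same, count-wise, as making m copies and then n further copies.

```python
# Sketch: what the distributive law "says" in the copying game of (R3).
# copies is a hypothetical helper: k tagged, pairwise-disjoint copies.

def copies(k, items):
    """A disjoint k-fold union: k tagged copies of the given items."""
    return {(i, x) for i in range(k) for x in items}

H = {"h1", "h2", "h3", "h4"}   # p = 4 things to be copied
m, n = 3, 2

lhs = copies(m + n, H)  # (m + n) x p: five copies of the Hs in one go
# m copies, then n further copies tagged so as to stay disjoint:
rhs = copies(m, H) | {(m + i, x) for i in range(n) for x in H}

print(len(lhs), len(rhs), lhs == rhs)  # the two pluralities coincide
```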

Two bridge principles link the pretend numbers to real facts of cardinality:

(B1) F ≈ G iff #(F) = #(G)

(B2) F « G iff #(F) < #(G)

(R4) for each x and y, pretend there's an ordered pair <x,y>, where the identity of a pair is fixed by its members and their order.

(R5) whenever you've "got" some ordered pairs, pretend they form a collection, where the identity of the collection is given by its members.

Functions,
injections, and bijections – one-one maps -- can now be defined in the
usual way. Two new bridge principles are introduced:

(B3) F ≈ G iff the Fs can be mapped 1-1 onto the Gs.

(B4) F « G iff the Fs can be mapped 1-1 into the Gs.

They
notice that (B3) and (B4) cover some of the ground already covered by (B1) and
(B2). But they satisfy themselves that no conflicts will arise: when the Fs
and Gs are finite, the RHS of (B1), resp. (B2) is equivalent to that of (B3),
resp. (B4). Note that (B3) and (B4) are meant to apply also when there are
infinitely many Fs and/or Gs. Rather than the numbers coming first, and being used
to define cardinality relations (as in the finite case), cardinality relations
will now be coming first, and numbers will be defined in terms of them.
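The consistency check can be simulated for small finite collections; a sketch in which the hypothetical helper `bijection_exists` searches for a 1-1 onto map directly, by trying to pair elements off, so the mapping criterion (B3) can be compared against the counting criterion (B1) without circularity.

```python
# Sketch: the Goodmanians' consistency check on small finite collections.
# bijection_exists hunts for a 1-1 onto map by pairing elements off,
# never consulting a head count.

def bijection_exists(F, G):
    """Is there a 1-1 map of the Fs onto the Gs? (brute-force search)"""
    F, G = sorted(F), sorted(G)
    if not F:
        return not G          # empty maps onto empty, and onto nothing else
    first, rest = F[0], F[1:]
    # pair `first` with some g, then try to pair off the remainders
    return any(bijection_exists(rest, [g2 for g2 in G if g2 != g]) for g in G)

F = {"a", "b", "c"}
G = {"x", "y", "z"}
H = {"p", "q"}

print(bijection_exists(F, G), len(F) == len(G))  # criteria agree: both hold
print(bijection_exists(F, H), len(F) == len(H))  # criteria agree: both fail
```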

(R5) pretend that there are finite __and infinite__ numbers, all of them subject to the condition that #(F) < #(G) iff F « G, and #(F) = #(G) iff F ≈ G.

It
is unclear as yet how many of these infinite numbers there are. The only one
they can be sure of is ω = the # of finite numbers. They wonder, for
instance, how many one-one maps there are from the finite numbers onto
themselves – ω or perhaps more?

(R6) say that there are fractions k/n, subject to the condition that the length of R' is k/n times the length of R iff k ropes the length of R, laid end to end, are exactly the same length as n ropes the length of R'.

These
fractions do not allow our ancestors to express anything new but they are an
essential step on the way to the introduction of the reals, as we see below. An
autonomous axiomatic theory of the rationals is devised, by analogy with the
development of Peano's axioms on Day Four.

(R7) for every polynomial P(x) with integral coefficients and every a, b such that P(a) < 0 < P(b), say that there is a number c between a and b with P(c) = 0.

These
"solutions" to P(x) = 0 are the algebraics. Our ancestors think of the
positive square root of 2, for instance, as the "solution" to Q(x) = x^2 – 2 = 0
that lies between the positive rationals whose squares are less than 2 and the
ones whose squares exceed 2.
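The squeezing picture can be made concrete; a sketch using exact rational arithmetic, with `squeeze` a hypothetical bisection helper that keeps Q negative on the left bound and positive on the right throughout.

```python
from fractions import Fraction

# Sketch of (R7)'s "solution"-talk: locating the positive root of
# Q(x) = x^2 - 2 between rationals whose squares fall short of 2 and
# rationals whose squares exceed 2.

def Q(x):
    return x * x - 2

def squeeze(steps):
    """Bisect, keeping Q(lo) < 0 < Q(hi) at every stage."""
    lo, hi = Fraction(1), Fraction(2)
    for _ in range(steps):
        mid = (lo + hi) / 2
        if Q(mid) < 0:   # no rational squares to 2, so Q(mid) is never 0
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = squeeze(20)
print(Q(lo) < 0 < Q(hi))   # the "solution" stays strictly between the bounds
print(float(hi - lo))      # the gap after 20 halvings
```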

(R8) whenever you've got a finite number k and some rationals each less than k, say that there is a number r at least as big as any of them, such that no rational q smaller than r is at least as big as any of them.

The
reals are to be thought of as the "completion" of the rationals under least
upper bound. This finally gives them all the length-measures that they want.
(Infinitesimals are not much on their minds.) Functions on the reals are
introduced to represent space and time, and dynamic processes taking place
therein. Analysis is worked up to theorize these functions.

(R9) whenever you've got some things, pretend that they form a collection, subject to the usual identity-conditions.

This
of course will need qualification down the road! Now they can say what
infinitude comes to: a collection C is infinite iff there's a one-one
function from C to one of its proper subcollections.
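A sketch of the criterion just stated, with the proviso that code can only inspect a finite window onto the naturals, so this is an illustration rather than a proof: the successor map n → n+1 is one-one and never hits 0, so it maps the naturals into a proper subcollection of themselves.

```python
# Sketch of the Dedekind-style infinitude criterion, checked on a
# finite window onto the naturals (an illustration, not a proof).

def successor(n):
    return n + 1

window = range(1000)  # a finite window onto the naturals

# one-one: distinct inputs go to distinct outputs across the window
one_one = len({successor(n) for n in window}) == len(window)
# proper: 0 is a natural that the map never hits
misses_zero = all(successor(n) != 0 for n in window)

print(one_one, misses_zero)
```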

(R9') whenever you've got some sets, pretend that they form a class.

Every
set is a class but not vice versa: for instance there's a universal class that
contains all the sets there are, but that class is never obtained by the
process of repeatedly taking power sets. Classes that aren't sets are
__proper__
classes, and this is the one remaining kind of collection.

For
the same sort of reason – applied in full generality it leads to
contradiction -- (R5) needs modification too. Say that the Ks are "bounded"
iff there are Ls such that the Ks are injectable into the Ls but not
conversely. Then we have

(R5') if __but only if__ the Fs and Gs are bounded, pretend they have numbers #(F) and #(G), subject to the conditions that #(F) < #(G) iff F « G, and #(F) = #(G) iff F ≈ G.

(R5')
gives us a nicely
__numerical__
way of conceptualizing the relation between the sets and the proper classes.
Recall that the Fs form a
__finite__
class iff there is a natural number n such that n = #(F). What we can say now
is that the Fs form a
__set-sized__
class – a
__set__
– iff there is a number N
__of
any sort
__
– finite or infinite -- such that N = #(F). Another way to put it is
that a class is a set iff it is not as big as the entire universe.

I'd
like now to explore how the "gameskeeping" conception of mathematics might be
applied to an episode, or trend, in the history of mathematics. Penelope
Maddy has noted something paradoxical in Quine's picture of how mathematics
works. Quine sees mathematics as continuous with "total science" both in its
subject matter and in its methods. This somehow leads to conclusions
dramatically at odds with those reached by working mathematical scientists.
Quine's idea in a nutshell is that in set theory, as elsewhere in science, we
should keep our ontology as small as practically possible. He is prepared to

recognize indenumerable infinites only because they are forced on me by the simplest known systematizations of more welcome matters. Magnitudes in excess of such demands, e.g., beth-omega or inaccessible numbers, I look upon only as mathematical recreation and without ontological rights. Sets that are compatible with [Godel's axiom of constructibility V = L] afford a convenient cut-off... [18].

Quine
even proposes that we opt for the "minimal natural model" of ZFC, a model in
which all sets are constructible
__and__
the tower of sets is chopped off at the earliest possible point. Such an
approach is "valued as inactivat[ing] the more gratuitous flights of higher set
theory..."

Valued
by whom? Actual set-theorists, one would assume. But no. To them, cardinals
the size of beth-omega are not even slightly controversial. They are
guaranteed by an axiom introduced already in the 1920s (Replacement) and
accepted by everyone. Inaccessibles are considered innocent too, except by
the lonely few who suspect that ZF may be inconsistent. (Its consistency is
deducible in ZFC from the existence of an inaccessible.) As for Godel's axiom
of constructibility, it has been widely criticized – including by Godel
himself – as entirely too restrictive. Here is Moschovakis, quoted by
Maddy:

The key argument against accepting V=L...is that the axiom of constructibility appears to restrict unduly the notion of an __arbitrary__ set of integers (1980, 610).

Set-theorists
have wanted to avoid axioms that would "count sets out" just on grounds of
arbitrariness. They have wanted, in fact, to run as far as possible in the
other direction, seeking as fully packed a set-theoretic universe as the
iterative conception of a set permits. All this is reviewed in fascinating
detail in Maddy's book; see in particular her discussion of the rise and fall
of Definabilism, first in analysis and then in the theory of sets.

If
Quine's picture of set theory as something like abstract physics cannot make
sense of the field's plenitudinarian tendencies, can any other picture do
better? Well, clearly one is not going to be worried about multiplying
entities if the entities are not assumed to really exist. But we can say more.
The likeliest approach if the set-theoretic universe is make-believe would be
(A) to articulate the clearest intuitive conception possible, and then, (B)
subject to that constraint, let all heck break loose.

Regarding
(A),
__some__
sort of constraint is needed or the clarity of our intuitive vision will
suffer. This is the justification usually offered for the axiom of foundation,
which serves no real mathematical purpose – there is not a single theorem
of mainstream mathematics that relies on it -- but just forces sets into the
familiar and comprehensible tower structure. Without foundation there would
be no possibility of "taking in" the universe of sets in one intellectual
glance.

Regarding
(B), it helps to remember that sets "originally" came in to improve our
descriptions of non-sets. E.g., there are infinitely many Zs iff the set of Zs
has a proper subset Y that maps onto it one-one, and uncountably many Zs iff it
has an infinite proper subset Y that
__cannot__
be mapped onto it one-one. Since these notions of infinitely and uncountably
many are topic neutral -- the Zs do not have to meet a "niceness" condition for
it to make sense to ask how many of them there are -- it would be
counterproductive to have "niceness" constraints on when the Zs are going to
count as bundleable together into a set.
[19]
It would be still more counterproductive to impose "niceness" constraints on
the 1-1 functions; when it comes to infinitude, one way of pairing the Zs off
1-1 with just some of the Zs seems as good as another.

So:
if we think of sets as "originally" being brought in to help us deal more
effectively with non-mathematical objects, a restriction to "nice" sets would
have been unmotivated and counterproductive. It would not be surprising,
though, if the anything-goes attitude at work in these original applications
were to reverberate upward to contexts where the topic is sets themselves.
Just as we don't want to tie our hands unnecessarily in applying set-theoretic
methods to the matter of whether there are uncountably many space-time points,
we don't want to tie our hands either in considering whether there are
infinitely many natural numbers, or uncountably many sets of such numbers.

A
case can thus be made for (imagining there to be) a
__plenitude__
of sets of numbers; and a "full" power set gathering all these sets together;
and a plenitude of 1-1 functions from the power set to its proper subsets to
ensure that if the power set isn't countable, there will be a function on hand
to witness the fact. This I hope gives
__some__
of the flavor of why the preference for a "full" universe is not terribly
surprising on a gameskeeping conception of the theory of sets.

**APPENDIX** on Hilbert and Field

It is widely accepted that mathematical instrumentalism — versions of which have been advanced by David Hilbert and Hartry Field -- runs into problems with Godel's Incompleteness Theorems. Let's review how this is supposed to work.

An instrumentalist is someone who distinguishes two sorts of sentence: the "contentual" and the "ideal." Contentual statements we accept just to the extent that they seem (likely to be) true. Ideal ones are "accepted" not because they're true, but for the help they give in reaching true contentual results from contentual premises. For instance, they help us to "see" which contentual results follow from our contentual premises; they help us to prove those results in fewer steps; they allow us to prove lots of different results using similar methods. The ideal sentences are valued, in short, for reasons of __convenience__.

Now, if all the ideal sentences are doing is speeding up proofs and the like, it will be hard to defend our reliance on them unless a case can be made that they're not going to lead us astray. If we can't feel as confident about ideally proven results as about those with contentual proofs, then reliance on the ideal sentences would seem irresponsible. Engineers may be willing to give up quality control for speed (etc.), but philosophers, and philosophically-minded scientists and mathematicians, are not.

Hilbert had an answer to this. He thought he could prove __in a purely contentual__ way that the use of ideal sentences never leads us astray. Since the contentual lines up for him with the finite, we can put the goal like this: he thought he could prove using finitary mathematics (FM, often identified with PRA = primitive recursive arithmetic) that infinitary mathematics (IM) is __real-sound__: any finitary results provable in IM are in fact true.

This proposal was, the story goes, scotched by the Second Incompleteness Theorem. FM is being asked to prove that 'if finitary sentence S is IM-provable, then S.' But we know from the Second Incompleteness Theorem (or rather, the closely related Lob's Theorem) that FM can't even prove all instances of the weaker claim that 'if S is FM-provable, then S.'
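The obstacle can be stated compactly. Writing Prov_FM for FM's arithmetized provability predicate, Lob's Theorem says (a restatement in standard notation, not part of the original text):

```latex
% Löb's Theorem: FM proves the reflection instance for S
% only if it already proves S outright.
\text{If }\ \mathrm{FM} \vdash \mathrm{Prov}_{\mathrm{FM}}(\ulcorner S \urcorner) \rightarrow S,
\ \text{then}\ \mathrm{FM} \vdash S.
```

Taking S to be a refutable sentence such as 0 = 1 yields the Second Incompleteness Theorem, FM ⊬ Con(FM). So FM cannot prove every instance of "if S is FM-provable, then S," let alone the corresponding claim for the stronger IM.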

One might wonder, though, whether a finitary proof of real-soundness should really have been the goal. Hilbert took the reliability of FM for granted; he had no ambition of proving real-soundness for __it__. But then why should we attribute to him the more ambitious goal of proving real-soundness for IM? His concern should really have been to show in FM that IM is no __more__ unreliable than FM is. One way to show this would be to establish that any FM-sentence provable in IM is provable already in FM -- that is, that IM is __conservative__ over FM.

What Godel's Second Theorem certainly does do is undermine an attractive __strategy__ for establishing conservativeness. The strategy is to prove in FM that IM is consistent. If we assume with Hilbert that FM is __complete__ with respect to finitary statements -- it either proves or refutes every one -- then from the consistency of IM it follows that IM is conservative (with respect to finitary statements) over FM. The argument is simple: for any finitary statement S that IM proves, FM proves either S or its negation. If it were the negation then IM (which we assume to contain FM) would be inconsistent. So it must be that FM already proves S.
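Schematically, the argument of the last paragraph runs as follows (a sketch in standard notation, not part of the original text):

```latex
\begin{array}{ll}
\text{Assume:} & \mathrm{FM} \subseteq \mathrm{IM};\ \ \mathrm{IM}\ \text{consistent};\\
 & \mathrm{FM}\ \text{complete for finitary sentences.}\\
\text{Suppose:} & \mathrm{IM} \vdash S,\ \text{with } S\ \text{finitary.}\\
\text{By completeness:} & \mathrm{FM} \vdash S\ \ \text{or}\ \ \mathrm{FM} \vdash \neg S.\\
\text{If } \mathrm{FM} \vdash \neg S\text{:} & \mathrm{IM} \vdash \neg S\ \text{and}\ \mathrm{IM} \vdash S,\ \text{contradicting consistency.}\\
\text{Hence:} & \mathrm{FM} \vdash S,\ \text{i.e.}\ \mathrm{IM}\ \text{is conservative over}\ \mathrm{FM}.
\end{array}
```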

The problem with this otherwise attractive strategy is that, by Godel, FM can't even prove its own consistency; so it certainly can't prove the consistency of the more powerful IM.

The possibility remains though of proving conservativeness by some less demanding method. After all, one might say, for FM to "know" that IM is conservative over it, there is no __need__ for FM to "know" that IM is consistent or even that it itself is consistent. There is no need for FM to know that it is consistent because conservativeness could still hold even if it were __not__ consistent; indeed it __would__ hold, because if FM proves everything then it certainly proves all finitary sentences provable in IM. (See Simpson, "Partial Realizations of Hilbert's Program," for a discussion of what might still be possible in the aftermath of Godel.)

Field is concerned to address the reliability issue as well. He starts by offering a __platonistic__ (and so ideal) proof of the conservativeness of set theory (ST) over nominalized science (NS): any nominalistic result entailed by NS+ST is entailed already by NS. Critics pointed out that if "entails" is understood proof-theoretically, then unless NS is first-order -- and as Field initially formulates it, it is not -- the result is not even true. The added set theory enables us to tease out semantic consequences of NS that NS is not in a position to prove.

Another question Field faced was this: how can we trust your platonistic proof before we get the assurances of reliability (of platonistic methods) that the proof itself was supposed to provide?

These complaints led Field to offer a __nominalistic__ proof of conservativeness in which entailment is understood not semantically or proof-theoretically but as a modal primitive. But the proof only works for first-order languages, and there's a question of whether Field's nominalized science can be formulated in a wholly first-order way.

It might be replied that conservativeness becomes less important when the underlying logic lacks a complete proof procedure, as it will on the non-first-order proposals under discussion. Why should it be a problem if adding ST enables us to extract genuine but previously undeducible consequences of NS?

This is quite right as far as it goes, but it begs the question of whether ST doesn't perhaps enable __more__ than this, viz. the deduction of falsehoods. It's one thing to say that failures of conservativeness don't __automatically__ make for failures of soundness when you lack a complete proof procedure. It's another to explain what, in the absence of conservativeness, is supposed to convince us that soundness is indeed preserved when ST is added on.

Now, the above is predicated on the idea, shared by Hilbert and Field, that __all the ideal sentences are there for is convenience__, most notably the shortening of proofs. How are things changed if we say (as in the main body of this paper) that the appeal to ideal elements is representationally essential?

The first point is that while __calculational convenience__ may not be worth enough to justify risk-taking on the soundness front, added risks __might__ be justified if there were more at stake, such as the possibility of representing important facts about the material world. The pressure to establish a conservativeness result is greatly reduced if we move beyond "calculational" instrumentalism to "expressive" instrumentalism.

The second point is that the idea of conservativeness gets less of a grip if ideal elements, due to their representational role, are ubiquitous. Physics as actually practiced doesn't __contain__ a whole lot of math-free sentences, so it's hard to feel inspired by the goal of preserving deductive relations among them.

The third point is that the expressive advantages that ideal elements afford -- e.g. the ability to represent the notion of "finitely many" -- are just the kind to kill any possibility of a complete proof procedure. Ironically then, the objects whose use was supposed to be legitimated by a conservativeness result have the effect of making conservativeness less desirable, since in the absence of a complete proof procedure you can have real-soundness without it.

The fourth point is that for us, "ideal vs. contentual" is a misnomer, since math-infused sentences have concrete truth-conditions just as much as nominalistic sentences, provided they're understood metaphorically. That ideal objects buy us access to new concrete truth-conditions was indeed the reason for bringing them on board. (Note that the metaphorical truth-conditions of a pure-mathematical sentence, e.g., "there are infinitely many primes," will be necessarily satisfied or necessarily frustrated. Such a sentence doesn't talk about concrete objects at all, so its acceptability in the game does not depend on how concrete objects may or may not behave. This may help to explain the feeling that mathematical claims are either necessary or impossible.)

Fifth, given that ideal claims have (metaphorical) truth-conditions, the idea of a truth-preserving inference makes sense not just for "contentual" sentences but for all sentences. An inference is truth-preserving iff when you assign the sentences involved the appropriate sorts of content -- metaphorical in the case of ideal sentences, metaphorical = literal in the case of contentual -- the conclusion can't be false unless some of the premises are false.

The sixth point is more or less immediate given the fifth: our real concern should be not with making sure that the deductions math enables between __contentual__, that is, __nominalistic__, sentences are truth-preserving, but with making sure that inferences among __arbitrary__ sentences are truth-preserving. The question is: if I am playing the math game as a way of gaining greater descriptive purchase on the material world, and I draw logical consequences from sentences that are true in the game -- and so really true as far as their metaphorical contents are concerned -- will these consequences also be true in the game? Is truth-in-the-game closed under logical consequence?

Now, it has to be conceded that there are (or can be) games where truth-in-the-game __isn't__ closed under logical consequence. Consider the game in which we make as if moods are containers which people occupy: "He's in a good mood," "I'm coming out of my mood," and so on. Suppose a player of the game were to reason that since "X is in an F G" logically entails "X is in a G," from "he is in a good mood" one ought to be able to infer "he's in a mood."

This inference __is__ truth-preserving as far as literal content is concerned; how can you be in a good mood without being in some mood or other? But that is not to say that it preserves truth on the intended figural reading. So understood, the inference takes us from (what may well be) the truth that he is feeling upbeat to (what may well be) the untruth that he is, well, feeling moody. It follows that there can be no general guarantee that reasoning, in a game, in accordance with logic will never lead you astray -- never lead you from a true (metaphorical) content to a false one.

The question for us is whether this can occur in the particular game or games that we play with mathematical objects.

I see no good way of ruling the possibility out; I can't even rule it out that our mathematical theories are inconsistent. This would bother me a great deal if the game were one that we played for reasons of convenience. It bothers me less if it's a game we have __got__ to play on pain of losing our representational grip on reality.

[1] "....if [arithmetic statements] were viewed as having a sense, the rules could not be arbitrarily stipulated; they would have to be so chosen that from formulas expressing true propositions could be derived only formulas likewise expressing true propositions...Such matters, however, lie entirely outside formal arithmetic and only arise when applications are to be made" (section 91).

[2]
If we were willing to be this obscurantist, we should just have shrugged
off the original trilemma...

[3]
I am indebted here to unpublished work of Ana Carolina Sartorio.

[4]
This worry becomes especially acute if we are plenitudinous platonists, as to
escape epistemological worries we might have to be.

[5]
Have some notion of representational conservativeness; the same concrete
configurations are ruled in and out?

[6]
Actually it provides some deductive assistance in regular science too. Explain.
But not enough.

[7]
Nature is arithmetical even if arithmetic isn't true.

[8]
Better, such and such is part of the game's content if "it is to be imagined
....__should the question arise__, it being understood that often the question
__shouldn't__ arise" (Walton 1990, 40). Subject to the usual qualifications,
the ideas about make-believe and metaphor in the next few paragraphs are all
due to Walton (1990, 1993).

[9]
Walton 1993, 40-1.

[10]
Ibid., 46. I should say that Walton does
__not__
take himself to be offering a general theory of metaphor.

[11]
The theory might be a collection of axioms; it might be that plus some
informal depiction of the kind of object the axioms attempt to characterize; or
it might be an informal depiction pure and simple.

[12]
As opposed to true-according-to-such-and-such-a-background-theory.

[13]
Talk about the unclarity among mathematicians about whether a proposed new
axiom is "true" or "fitting." This is what you'd expect on the story to come
(?).

[14]
"Laughing all the Way to the Bank" (about worshipping by the river Ganges).

[15]
Thanks here to Hartry Field.

[16]
Assume disjointness throughout.

[17]
Even after all this time, there lingered on a few features of the practice that
hinted at the mathematical objects' true provenance. Impatience.
Indeterminacy, insubstantiality, expressiveness, silliness, unavailability,
etc. ("A Paradox of Existence.") A different sort of evidence is given in
the following....

[18]
Quine 1986