The Changing Debates about Measurement
Bas C. van Fraassen
Measurement is the means through which data, and subsequently a data model, are obtained. To understand what sort of data model can be obtained we need to understand what sort of data can be obtained. What sort of activities qualify as measurement, what do or can we learn through measurement, what are the limits of and constraints on what we can learn?
This subject has been the scene of philosophical debate for about a century and a half. Paul Teller takes as its background a simple, arguably naïve, realist conception of nature characterized by certain quantities and of epistemic access to the values of those quantities. Philosophical dismay with that conception began in the nineteenth century and drove much of the effort to understand how measurement relates to theory in science.
Two approaches to this problem dominated the discussion in the twentieth century: the representational theory of measurement and the considerably more realist analytic theory of measurement. Neither was aptly named. The former term is due to Luce, Narens, and Suppes, with reference to mathematical representation theorems (Luce and Narens 1994; Luce and Suppes 2002). The latter position, which postulates the sort of realism Teller castigates, was presented by Domotor and Batitsky (2008). However, both approaches are recognizable in a large diversity of writings. Teller’s presentation of his pragmatist approach begins with a sharp critique that applies to both.
How did it all begin? A crucial new philosophical perplexity about measurement came with the shock to the theory of space, when it seemed that measurement would have to decide between Euclidean and non-Euclidean geometry. First reactions, by Gauss and Lobachevski, assumed that the decision could be made by measuring the interior angles of a triangle formed by light rays. But why must light ray paths mark straight lines? And what measures the angles? Straightness and, more generally, congruence are familiar as mathematical concepts of geometry, but now they must be identified as empirical physical quantities.
Helmholtz’s famous lecture of 1870 on physical congruence cast a bright new light on the ambiguities in what counts as, or indeed what is, a measurement of a particular quantity (Helmholtz 1870/1956). Mathematically, a space may admit many congruence relations, hence the question whether physical space is Euclidean will hinge on which procedures count as establishing congruence between distant bodies. With several striking thought experiments Helmholtz argues that the same measurement procedures will yield alternative conclusions about the structure of space when understood in the light of different assumptions, not independently testable, about what is physically involved in those procedures.
One of Helmholtz’s concluding remarks set the stage for a century-long effort to ground the theory of measurement in a conception of simple physical operations that would serve to define qualitative, comparative, and quantitative scales: “In conclusion, I would again urge that the axioms of geometry are not propositions pertaining only to the pure doctrine of space. They are concerned with quantity. We can speak of quantities only when we know of some way by which we can compare, divide and measure them” (1870/1956, 664).
Through a succession of writings on the subject in the century following Helmholtz’s lecture, a general theory of measurement developed (see especially Diez 1997a, 1997b). The early stages appeared at the hands of Helmholtz, Hölder, Campbell, and Stevens. The project was carried out most fully, however, by Patrick Suppes, in collaboration with theoretical psychologists, in the 1950s and 1960s (Suppes 1969; Suppes et al. 1989).
This general theory, there called the representational theory of measurement, focused on the representation of physical relations in numerical scales. It was remarkable for two features: the sophistication of its mathematical development and the paucity of its empirical basis. In typical format, the results show that a domain on which a certain operation (“combining”) and relation (“ordering”) are defined can be, if certain axioms hold for that domain, uniquely represented by a numerical structure with + and <. The gloss on this result is that there is a quantity pertaining to the objects in the domain whose values are the numbers assigned in that representation. But on the side of the domains in question, to characterize those operations and relations we are offered counterfactual conditionals about manipulation. (For example, stick A is to be assigned a greater length than stick B exactly if, were they to be laid side by side, stick A would extend beyond stick B.)
The assumptions include ones that go beyond the finite, such as the “Archimedean” assumption that any greater value of a quantity such as length can be surpassed by some finite number of combining operations. As is honestly noted, the counterfactuals required to satisfy the axioms in question are, even taken at face value, entirely unrealistic, requiring infinitely fine discrimination (see, e.g., Batitsky 1998). This critique is strongly augmented by Paul Teller’s arguments.
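The form of such representation results can be stated schematically. The following is a sketch only, in the general style of Suppes and his collaborators; the exact axiom set varies from system to system, and the symbols here (a domain $A$, an ordering $\succeq$, and a combining operation $\circ$) stand in for whatever empirical relations a particular system posits:

```latex
\[
\begin{aligned}
&\text{If } (A, \succeq, \circ) \text{ satisfies weak ordering, monotonicity,}\\
&\text{associativity, and the Archimedean axiom}\\
&\quad \text{(for all } a, b \in A \text{ there is } n \in \mathbb{N}
  \text{ with } \underbrace{a \circ \cdots \circ a}_{n} \succ b\text{),}\\
&\text{then there exists a map } \varphi : A \to \mathbb{R}^{+}
  \text{ such that, for all } a, b \in A,\\
&\qquad a \succeq b \iff \varphi(a) \ge \varphi(b),
  \qquad \varphi(a \circ b) = \varphi(a) + \varphi(b),\\
&\text{and } \varphi \text{ is unique up to a positive scale factor: }
  \varphi' = \alpha\varphi,\ \alpha > 0.
\end{aligned}
\]
```

The Archimedean clause is precisely the assumption that “goes beyond the finite”: it quantifies over arbitrarily many combinings, and the uniqueness clause is what licenses speaking of a ratio scale for the quantity.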
In the scientific theories that were kept in mind, the entities dealt with are characterized by quantities that have definite values. A body in Newtonian mechanics has a mass, position, and speed; each of these has a real number as its value. It has a direction of motion, a velocity, and an acceleration; each of these has a vector over the real number continuum as its value. The values change, but at each instant each of these quantities has one of its possible values. These quantities correspond to the dimensions of a logical space, and a measurement will locate a body in that space. So far, the representational theory of measurement was true to its aim and reached its proper target. But all this comes with a blank space about what a measurement is—what sorts of operations, under what conditions, count as measurement, or as measurement of a given theoretically defined quantity.
What is striking about this approach, in retrospect, is how the crucial point about the theory-dependence of measurement, which we already see coming to the fore in Helmholtz’s 1870 lecture, was lost from sight. The focus in recent decades on scientific practice shifted attention to the question of how measurement, in the role of data-generating procedure, appears in a model of the experiment (e.g., Chang 2007). There is no escaping the fact that the model of the experiment with which the experimenter is working is itself a theoretical model. Whether a given procedure is actually generating data for the experimenter depends on whether this procedure itself can be theoretically represented as properly related to a quantity by which the theory represents features of the entity investigated.
We sense a threatening, enfeebling relativism or skepticism that looms in these reflections. “Il n’y a pas de hors-texte” is a disturbing thought in any context, and the disturbing thought of vicious circles or “theoretical nepotism” has certainly been raised about these new thoughts about measurement. So it is not surprising to find a strongly realist reaction.
One response, ably presented by Zoltan Domotor and Vadim Batitsky, is to reject the demand for anything resembling an “operationalist” basis and to take a solidly realist line. That is, terms purporting to refer to quantities are theoretical terms, and to take a theory to be true is to take those terms to refer to real, objective characteristics present in nature. These characteristics and relations between them are there to be discovered, not invented; a measurement is a physical operation that evaluates a quantity, by means of interaction, to evoke a manifestation of its value. The theory of measurement, conceived in this form, is not a philosophical elucidation; it is itself a scientific theory of great generality, the theory of physical quantities.
Again, this rival account is developed with exemplary mathematical sophistication. The theory of measurement operations and measurement instruments it develops, and its innovative algebraic treatment of relations among quantities, are valuable in their own right.
But to simply replace the naïve empiricist account with a postulational approach looks a little like just declaring victory and going home. That the gap between theory and nature can be bridged by a simple postulation of physical quantities that match their mathematized counterparts was roundly castigated already by Hans Reichenbach (cf. van Fraassen 2008, 118–121, 240–244).
There is a third way, which divides the labor between mathematical articulation and historical appreciation of scientific practice. We can think of that too as an empiricist way—but improving upon the naïve empiricism of the original motivation for the representational theory of measurement. In effect there are two points of view to adopt when studying any specific measurement (measurement of length, of mass, of temperature, of force, and so on). One view, which is very theoretical—“from above,” so to speak—studies the measurement interaction as it is represented in the relevant theory. This study will include a particularization within a mathematical framework such as Suppes’s or Domotor and Batitsky’s. The other, which is predominantly historical and “from within” the relevant practice, studies how the theoretical concepts, models, and measuring operations evolved in mutual interaction. Neither will pretend to proceed from a “hygienic” theory-free basis; together they elucidate measurement as a situated activity, never outside a theory-laden and perspective-determined context.
The latter approach, exemplified in case studies that are at once philosophical and historical, was inspired by a take on the theory-dependence of observation and measurement that is quite different from the “realist” analytic theory of measurement. The seminal writings of Sellars (1948), Feyerabend (1957), and Kuhn (1962) demonstrated that the language in which observation and measurement results are reported is thoroughly and irremediably theory laden. There are always rival theoretical contexts, and the same “readings” take on different meanings; the same operations may have contrary significance or no significance at all, and they may not count as measurements of the same quantity or not count as measurement at all in these different contexts.
Teller explores here the repercussions of this view when conjoined with an appreciation of how highly idealized those theories are and how tenuous is any claim to accuracy (let alone truth) of even the best theories available. Not only accuracy and truth but reference itself becomes a challenge. Ostensibly scientific descriptions refer to concrete entities and quantities in nature, but what are the referents if those descriptions can only be understood within their theoretical context? A naïve assumption of truth of the theory that supplies or constitutes the context might make this question moot, but that is exactly the attitude Teller takes out of play. Theory laden, one might say, but laden with false theories! In retrospect, as we see when Teller pursues this theme, even the views of those seminal writers involved conceptions of science far removed from its actual practice.
Once the topic of measurement is approached in working context, immersed in experimental and modeling practice, very different if equally disturbing questions appear. These are the questions addressed by Paul Teller, who shows us the great distance between simplistic philosophical conceptions and the problems practically and pragmatically faced in scientific practice.
1. For a longer discussion of his thought experiments and their philosophical impact, see van Fraassen (2008), 214, 229.
2. “There is no such thing as [what is] outside the text,” the (in)famous dictum of Jacques Derrida, from Of Grammatology, trans. Gayatri Chakravorty Spivak (1967; Baltimore: Johns Hopkins University Press, 1976), pp. 158–59.
Batitsky, Vadim. 1998. “Empiricism and the Myth of Fundamental Measurement.” Synthese 116: 51–73.
Diez, José A. 1997a. “A Hundred Years of Numbers: An Historical Introduction to Measurement Theory 1887–1990. Part I.” Studies in History and Philosophy of Science 28 (1): 167–85.
Diez, José A. 1997b. “A Hundred Years of Numbers: An Historical Introduction to Measurement Theory 1887–1990. Part II.” Studies in History and Philosophy of Science 28 (2): 237–65.
Domotor, Zoltan, and Vadim Batitsky. 2008. “The Analytic versus Representational Theory of Measurement: A Philosophy of Science Perspective.” Measurement Science Review 8: 129–46.
Feyerabend, Paul K. 1957. “An Attempt at a Realistic Interpretation of Experience.” Proceedings of the Aristotelian Society 58: 143–57.
Helmholtz, Hermann von. (1870) 1956. “On the Origin and Significance of Geometrical Axioms.” Translated by J. R. Newman. In The World of Mathematics, vol. 1, edited by James Newman, 647–68. New York: Simon and Schuster.
Kuhn, Thomas S. 1962. The Structure of Scientific Revolutions. International Encyclopedia of Unified Science, vol. 2, no. 2. Chicago: University of Chicago Press.
Luce, R. Duncan, and Louis Narens. 1994. “Fifteen Problems Concerning the Representational Theory of Measurement.” In Patrick Suppes: Scientific Philosopher, vol. 2, edited by Paul Humphreys, 219–49. Dordrecht, the Netherlands: Kluwer Academic.
Luce, R. Duncan, and Patrick Suppes. 2002. “Representational Measurement Theory.” In Stevens’ Handbook of Experimental Psychology, vol. 4, edited by J. Wixted and H. Pashler, 1–41. New York: Wiley.
Sellars, Wilfrid. 1948. “Concepts as Involving Laws and Inconceivable without Them.” Philosophy of Science 15: 287–315.
Suppes, Patrick. 1969. Studies in the Methodology and Foundations of Science. Boston: Reidel.
Suppes, Patrick, David H. Krantz, R. Duncan Luce, and Amos Tversky. 1989. Foundations of Measurement, 2 vols. New York: Academic Press.
Van Fraassen, Bas C. 2008. Scientific Representation: Paradoxes of Perspective. Oxford: Oxford University Press.