
Articulating the World

Experimental Practice and Conceptual Understanding

Joseph Rouse

Questions about the interpretation of data are grounded in philosophical issues about the conceptual articulation of perceptual experience or causal interaction. I approach these issues by considering how experimental practice contributes to conceptual understanding. “Experimental” is used broadly to incorporate a wide variety of empirically oriented practices; observational sciences, clinical sciences, or comparative sciences such as paleontology or systematics are also “experimental” in this broad sense. “Experimental” as the umbrella term highlights that even seemingly descriptive or observational sciences typically must undertake material work (with instruments, sample collection and preparation, shielding from extraneous interference, observational protocols, and much more) to allow objects to show themselves appropriately. All empirical sciences intervene in and transform aspects of the world to let them be intelligible and knowable.

Despite renewed philosophical interest in experiment and material practice, conceptual articulation in science is still primarily understood as theory construction. Quine’s (1953) famous image of scientific theory as a self-enclosed fabric or field that only encounters experience at its periphery is instructive. Quine not only neglects experimental activity in favor of perceptual receptivity (“surface irritations”; Quine 1960), but his image of conceptual development in the sciences involves a clear division of labor. Experience and experiment impinge from “outside” our theory or conceptual scheme to provide occasions for conceptual development. The resulting work of conceptual articulation is nevertheless a linguistic or mathematical activity of developing and regulating “internal” inferential relations among sentences or equations to reconstruct the “fabric” of theory.

This insistence upon the constitutive role of theory and theoretical language within scientific understanding arose from familiar criticisms of empiricist accounts of conceptual content. Many earlier empiricists in the philosophy of science are now widely recognized as making linguistic frameworks constitutive for how empirical data could have conceptual significance (Richardson 1998; Friedman 1999). Yet their “post-positivist” successors have gone further in this direction, and not merely by asserting that observation is theory laden. Most recent philosophical discussions of how theories or theoretical models relate to the world begin where phenomena have already been articulated conceptually. As one influential example, James Bogen and James Woodward (1988) argued,

Well-developed scientific theories predict and explain facts about phenomena. Phenomena are detected through the use of data, but in most cases are not observable in any interesting sense of the term . . . Examples of phenomena, for which the above data might provide evidence, include weak neutral currents, the decay of the proton, and chunking and recency effects in human memory. (306)

Nancy Cartwright (1983, essay 7) recognizes that one cannot apply theories or models to events in the world without preparing a description of them in the proper terms for the theory to apply, but she only characterizes this “first stage of theory entry” as an operation on the “unprepared description [which] may well use the language and the concepts of the theory, but is not constrained by any of the mathematical needs of the theory” (133). Michael Friedman (1974) made this tendency to take conceptual articulation for granted especially clear by arguing that the “phenomena” scientific theories seek to explain are best understood as laws that characterize regular patterns rather than specific events. If the relation between theory and the world begins with the explanation of law-like patterns, however, that relation comes into philosophical purview only after the world has already been conceptualized.

Outside philosophy of science, how conceptual understanding is accountable to the world has gained renewed prominence from McDowell (1994), Brandom (1994), Haugeland (1998), and others. McDowell expresses shared concerns with the image of a treacherous philosophical passage. On one side loom the rocks of Scylla, where attempts to ground conceptual content on merely “Given” causal or experiential impacts run aground. On the other beckons the whirlpool Charybdis, where the entirely intralinguistic coherence of purported conceptual judgments would become a mere “frictionless spinning in a void.”[1] The post-positivist philosophy of science has steered toward Charybdis in giving primacy to theory in conceptual understanding. Yet the implicit division of labor between conceptual development as theory-construction and its “external” empirical accountability blocks any passage between Scylla and Charybdis. Or so I shall argue.

Experimentation and the Double Mediation of Theoretical Understanding

Philosophical work on scientific theories now often emphasizes the need for models to articulate their content. Morgan and Morrison (1999) influentially describe models as partially autonomous mediators between theories and the world. Theories do not confront the world directly but instead apply to models as relatively independent, abstract representations. Discussions of models as mediators have nevertheless attended more to relations between theories and models than to those between models and the world. In seeking to understand how science allows aspects of the world to show up within the space of reasons, I cannot settle for this starting point. Yet I must also avoid resorting to the Myth of the Given. Neither data nor other observable intermediaries are “Given” manifestations of the world.

My proposed path between Scylla and Charybdis begins with Hacking’s (1983) conception of “phenomena,” which is quite different from Bogen and Woodward’s use of the term for events-under-a-description. Phenomena in Hacking’s sense are publicly accessible events in the world rather than linguistic or perceptual representations. He also more subtly shifts emphasis from what is observed or recognized to what is salient and noteworthy. Phenomena show something important about the world, rather than our merely finding something there. Hacking’s term also clearly has a normative significance that cannot refer to something merely “given.” Most events in nature or laboratories are not phenomena, for such events show little or nothing. Creating a phenomenon is an achievement, whose focus is the salience and clarity of a pattern against a background. Hacking suggested that

old science on every continent [began] with the stars, because only the skies afford some phenomena on display, with many more that can be obtained by careful observation and collation. Only the planets, and more distant bodies, have the right combination of complex regularity against a background of chaos. (1983, 227)[2]

Some natural events have the requisite salience and clarity, but most phenomena must be created.

I use Hacking’s concept to argue, in response to Morgan and Morrison, that theoretical understanding is doubly mediated. “Phenomena” mediate in turn between models and the world to enable conceptual understanding. Hacking himself may share this thought. He concluded,

In nature there is just complexity, which we are remarkably able to analyze. We do so by distinguishing, in the mind, numerous different laws. We also do so, by presenting, in the laboratory, pure, isolated phenomena. (1983, 226)

I think the “analysis” he had in mind was to make the world’s complexity intelligible by articulating it conceptually. To take Hacking’s suggestion seriously, however, we must understand how recognition or creation of phenomena could be a scientific “analysis,” complementary to nomological representation. Elgin (1991) makes an instructive distinction between the properties an event merely instantiates and those it exemplifies. Turning a flashlight instantiates the constant velocity of light in different inertial reference frames, but the Michelson/Morley experiment exemplifies it. Similarly, homeotic mutants exemplify a modularity of development that normal limb or eye development merely instantiates.

Consider Elgin’s example of the Michelson/Morley experiment. The interferometer apparatus lets a light beam tangential to the earth’s motion show any difference between its velocity and that of a perpendicular beam. No such difference is manifest, but this display of the constant velocity of light is contextual in a way belying the comparative abstraction of the description. We can talk about the constant velocity of coincident light beams traveling in different directions relative to the earth’s motion without mentioning the beam-splitters, mirrors, compensation plates, or detectors that enable the Michelson/Morley phenomenon. We often represent phenomena in this way, abstracting from the requisite apparatus, shielding, and other surrounding circumstances. Such decontextualized events are precisely what Bogen and Woodward or Friedman meant by “phenomena.” Yet Hacking argued that such decontextualizing talk is importantly misleading. Using a different example, he claimed,

The Hall effect does not exist outside of certain kinds of apparatus . . . That sounds paradoxical. Does not a current passing through a conductor, at right angles to a magnetic field, produce a potential, anywhere in nature? Yes and no. If anywhere in nature there is such an arrangement, with no intervening causes, then the Hall effect occurs. But nowhere outside the laboratory is there such a pure arrangement. (1983, 226)

The apparatus that produces and sustains such events in isolation is integral to the phenomenon, as much a part of the Hall effect as the conductor, the current, and the magnetic field. Yet the consequent material contextuality of phenomena might give us pause. Surely conceptual understanding must transcend such particularity to capture the generality of concepts. Perhaps we must say what a phenomenon shows and not merely show it to articulate the world conceptually. I shall argue, however, that the phenomena themselves, and not merely their verbal characterization, have conceptual significance pointing beyond themselves.
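To see what such a decontextualized representation looks like in a simple case, consider the standard textbook relation for the Hall voltage; the formula is offered here only as an illustration of the apparatus-free form such representations take, and the slab geometry and symbols are assumed for that purpose rather than drawn from Hacking’s discussion:

\[ V_H = \frac{I B}{n q t}, \]

where \(I\) is the current through a conducting slab of thickness \(t\), \(B\) is the magnetic field perpendicular to the current, \(n\) is the charge-carrier density, and \(q\) is the carrier charge. Nothing in this expression mentions the electromagnet, the mounted and contacted sample, or the shielding that lets the effect show itself cleanly; it is precisely the kind of decontextualized rendering that, on Hacking’s view, misleads us about the phenomenon.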

Why Not Determine Concepts by Their Empirical Successes?

I begin with sustained critical attention to two important but flawed attempts to ascribe conceptual significance to the phenomena themselves rather than under a description. Both fail to pass between Scylla and Charybdis. Consider first Hacking’s own account of relations between models and phenomena as “self-vindication,” which emphasizes their stable co-evolution. “Self-vindication” occurs in laboratory sciences because

the . . . systematic and topical theories that we retain, . . . are true to different phenomena and different data domains. Theories are not checked by comparison with a passive world . . . We [instead] invent devices that produce data and isolate or create phenomena, and a network of different levels of theory is true to these phenomena . . . Thus there evolves a curious tailor-made fit between our ideas, our apparatus, and our observations. A coherence theory of truth? No, a coherence theory of thought, action, materials, and marks. (Hacking 1992, 57–58)

Hacking rightly emphasizes mutual coadaptation of models and phenomena. We nevertheless cannot understand the empirically grounded conceptual content of models as a self-vindicating coadaptation with their data domain. Hacking’s proposal steers directly into McDowell’s Charybdis, rendering conceptual understanding empty in splendidly coherent isolation. Hacking envisages stable, coherent domains, as scientific claims gradually become almost irrefutable by limiting their application to well-defined phenomena they already fit. That exemplar of conceptual stability, geometrical optics, tellingly illustrates his claim:

Geometrical optics takes no cognizance of the fact that all shadows have blurred edges. The fine structure of shadows requires an instrumentarium quite different from that of lenses and mirrors, together with a new systematic theory and topical hypotheses. Geometrical optics is true only to the phenomena of rectilinear propagation of light. Better: it is true of certain models of rectilinear propagation. (Hacking 1992, 55)

In supposedly securing the correctness of such theories within their domains, Hacking’s proposal renders them empty. He helps himself to a presumption his own account undermines, namely that geometrical models of rectilinearity amount to a theory about optics. He indicates the difficulty in his concluding sentence: if the domain to be accounted for cannot be identified independent of the theory accounting for it, the theory is not about anything other than itself. It is one thing to say geometrical optics has limited effective range or only approximate accuracy. It is another thing to confine its domain to those phenomena that it accommodates. The fine structure of shadows is directly relevant to geometrical optics, and thereby displays the theory’s empirical limitations. Only through such openness to empirical challenge does the theory purport to be about the propagation of light. McDowell’s criticism of Davidson thus also applies to Hacking’s view: he “manages to be comfortable with his coherentism, which dispenses with rational constraint upon [conceptual thought] from outside it, because he does not see that emptiness is the threat” (McDowell 1994, 68).

My second case is Nancy Cartwright’s proposal that concepts like force in mechanics have limited scope. Force, she argues, is an abstract concept needing more concrete “fitting out.” Just as I am not working unless I also do something more concrete like writing a paper, teaching a class, or thinking about the curriculum, so there is no force among the causes of a motion unless there is an approximately accurate force function such as F = −kx or F = mg (Cartwright 1999, 24–28, 37–46). Experimentation plays a role here, she argues, because these functions only apply accurately to “nomological machines” rather than to messier events. Cartwright’s proposal does better than Hacking’s, in allowing limited open-endedness to conceptual domains. The concept of force extends beyond the models for F = ma actually in hand to apply wherever reasonably accurate models could be successfully developed.
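A minimal worked example may help fix the idea of “fitting out” an abstract concept; the example is illustrative rather than Cartwright’s own. The abstract schema F = ma determines nothing about any particular motion until a concrete force function is supplied. For a mass on a spring, the function F = −kx fits out the schema as

\[ m\ddot{x} = -kx, \qquad x(t) = A\cos\!\left(\sqrt{k/m}\,t + \phi\right), \]

which now makes definite, approximately testable claims about a specific kind of system. On Cartwright’s account, it is only where some such reasonably accurate force function is available, typically in a well-shielded arrangement like a spring-and-mass apparatus functioning as a nomological machine, that force is genuinely among the causes of a motion.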

This extension is still not sufficient. First, the domain of mechanics is then gerrymandered. Apparently similar situations, such as objects in free fall in the Earth’s atmosphere, fall on different sides of its borders.[3] Second, this gerrymandered domain empties the concept of force of conceptual significance, and hence of content, because Cartwright conflates two dimensions of conceptual normativity.[4] A concept expresses a norm of classification with respect to which we may then succeed or fail to show how various circumstances accord with the norm.[5] Typically, we understand how and why it matters to apply this concept and group together these instances instead of or in addition to others. The difference it makes shows what is at stake in succeeding or failing to grasp things in accord with that concept (e.g., by finding an appropriate force function). Both dimensions of conceptual normativity are required: we need to specify the concept’s domain (what it is a concept of) and to understand the difference between correct and incorrect application within that domain. By defining what is at stake in applying a concept in terms of criteria for its successful empirical application, she removes any meaningful stakes in that success. The concept then just is the classificatory grouping.[6] In removing a concept’s accountability to an independently specifiable domain, Hacking and Cartwright thereby undermine both dimensions of conceptual normativity, because, as Wittgenstein (1953) famously noted, where one cannot talk about error one also cannot talk about correctness.

Salient Patterns and Conceptual Normativity

Think again about phenomena in Hacking’s sense. Their crucial feature is the manifestation of a meaningful pattern standing out against a background. This “standing out” need not be perceptual, of course. Some astronomical phenomena are visible to anyone who looks, but most require more effort. Experimental phenomena require actually arranging things to manifest a significant pattern, even if that pattern is subtle, elusive, or complex. As Karen Barad commented about a prominent recent case,

It is not trivial to detect the extant quantum behavior in quantum eraser experiments . . . In the quantum eraser experiment the interference pattern was not evident if one only tracked the single detector [that was originally sufficient to manifest a superposition in a two-slit apparatus] . . . What was required to make the interference pattern evident upon the erasure of which-path information was the tracking of two detectors simultaneously. (2007, 348–49)

That there is a pattern that stands out in an experimental phenomenon is thus crucially linked to scientific capacities and skills for pattern recognition. As Daniel Dennett once noted,

the self-contradictory air of “indiscernible pattern” should be taken seriously . . . In the root case, a pattern is “by definition” a candidate for pattern recognition. (1991, 32)

This link between “real patterns” and their recognition does not confer any special privilege upon our capacities for discernment. Perhaps the pattern shows up with complex instruments whose patterned output is discernible only through sophisticated computer analysis of data. What is critical is the normativity of recognition, to allow for the possibility of error. The patterns that show up in phenomena must not merely indicate a psychological or cultural propensity for responsiveness to them. Our responsiveness, taking them as significant, must be open to assessment. What were once taken as revealing patterns in the world have often been later rejected as misleading, artifactual, or coincidental. The challenge is to understand how and why those initially salient patterns lost their apparent significance and especially why that loss corrects an earlier error rather than merely changing our de facto responses.

Experimental phenomena are conceptually significant because the pattern they embody informatively refers beyond itself. To this extent, the salience of natural or experimental phenomena is broadly inductive.[7] Consider the Morgan group’s work initiating classical genetics. Their experiments correlated differences in crossover frequencies of mutant traits with visible differences in chromosomal cytology. If these correlations were peculiar to Drosophila melanogaster, or worse, to these particular flies, they would have had no scientific significance. Their salience instead indicated a more general pattern in the cross-generational transmission of traits and the chromosomal location of “genes” as discrete causal factors.

Yet the philosophical issue here is not how to reason inductively from a telling instance of a concept to its wider applicability. We need to think about reflective judgment in the Kantian sense instead of the inductive-inferential acceptance of determinate judgments. The question concerns how to articulate and understand relevant conceptual content rather than how to justify judgments employing those concepts. The issue is a normative concern for how to articulate the phenomena understandingly, rather than a merely psychological consideration of how we arrive at one concept rather than another. In this respect, the issue descends from the “grue” problem of Goodman (1954). Goodman’s concern was not why we actually project “green” rather than “grue,” for which evolutionary and other considerations provide straightforward answers. His concern was why it is appropriate to project green, as opposed to why we (should) accept this or that judgment in either term.
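For readers unfamiliar with the puzzle, Goodman’s predicate is usually formulated along the following lines, with the reference time \(t\) arbitrary:

\[ \text{grue}(x) \;\Longleftrightarrow\; (x \text{ is examined before } t \text{ and green}) \;\vee\; (x \text{ is not examined before } t \text{ and blue}). \]

Every emerald examined so far is both green and grue, so the same evidence apparently supports projecting either predicate. Goodman’s question, and the one at issue here, is why “green” rather than “grue” is the appropriate predicate to project, not merely why we in fact project it.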

Marc Lange’s (2000) revisionist conception of natural laws helps here. For Lange, a law expresses how unexamined cases would behave in the same way as cases already considered. In taking a hypothesis to be a law, we commit ourselves to inductive strategies, and thus to the inductive projectibility of concepts employed in the law. Because many inference rules are consistent with any given body of data, Lange asks which possible inference rule is salient. The salient inference rule would impose neither artificial limitations upon its scope nor unmotivated bends in its further extension. Salience of an inference rule, Lange argues, is not

something psychological, concerning the way our minds work . . . [Rather] it possesses a certain kind of justificatory status: in the manner characteristic of observation reports, this status [determines] . . . what would count as an unexamined [case] being relevantly the same as the [cases] already examined. (2000, 194)[8]

Whereas Lange compares salient rules to observation reports, however, I compare them to the salient pattern of a phenomenon. Its normative status as a salient pattern meaningfully articulates the world, rendering intelligible those aspects of the world falling within its scope, albeit defeasibly.

This role for meaningful patterns in the world does not steer back onto the philosophical rocks of Scylla. The salient patterns in natural or experimental phenomena are nothing Given but instead indicate the defeasibility of the pattern itself, and its scope and significance. One of Lange’s examples illustrates this point especially clearly. Consider the pattern of correlated measurements of the pressure and volume of gases at constant temperature. Absent other considerations, their linear inverse correlation yields Boyle’s law. Yet couple this same phenomenon with a model—one that identifies volume with the free space between gas molecules rather than the container size, and understands pressure as reduced by intermolecular forces diminishing rapidly with distance—and the salient pattern extension instead becomes the van der Waals law. How this pattern would continue “in the same way” at other volumes and pressures has shifted, such that the straight-line extension of Boyle’s law incorporates an “unmotivated bend.” Moreover, modeled and measured differently, all general patterns dissipate in favor of ones specific to the chemistry of each gas.
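The contrast can be made concrete with the familiar textbook forms of the two laws; the equations are standard and are cited here only to illustrate the point about pattern extension. For a fixed amount of gas at constant temperature, Boyle’s law extends the measured pattern as

\[ P V = \text{constant}, \]

whereas the van der Waals model, with constants \(a\) and \(b\) characterizing intermolecular attraction and molecular size for a particular gas, extends it as

\[ \left(P + \frac{a n^{2}}{V^{2}}\right)\left(V - n b\right) = n R T. \]

At low pressures the two continuations nearly coincide, but they diverge as the gas is compressed, which is just the sense in which coupling the measured correlations to a molecular model relocates the “unmotivated bend” from one extension to the other.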

Recognizing the inherent normativity of pattern recognition in experimental practice recovers its requisite two dimensions. I criticized Hacking and Cartwright for defining the scope and content of scientific concepts by their successful applications. Yet they rightly looked to the back-and-forth between experimental phenomena and theoretical models for the articulation of conceptual content. Haugeland (1998) points us in the right direction by distinguishing

two fundamentally different sorts of pattern recognition. On the one hand, there is recognizing an integral, present pattern from the outside—outer recognition . . . On the other hand, there is recognizing a global pattern from the inside, by recognizing whether what is present, the current element, fits the pattern— . . . inner recognition. The first is telling whether something (a pattern) is there; the second is telling whether what’s there belongs (to a pattern). (1998, 285, italics in original)

A pattern is a candidate for outer recognition if what is salient in context points beyond itself in an informative way. The apparent pattern is not just an isolated curiosity or spurious association. Consequently, there is something genuinely at stake in how we extend this pattern, such that it can be done correctly or incorrectly. Only if it matters to distinguish those motions caused by forces from those that would not be so caused does classical mechanics have anything to be right or wrong about.[9] Whether a pattern indicates anything beyond its own occurrence is defeasible; when that indication is defeated, the pattern shows itself to be coincidental, a merely apparent pattern.

Inner recognition identifies an element in or continuation of a larger pattern. Inner recognition is only at issue if some larger pattern is there, with something at stake in getting it right. Inner recognition grasps how to go on rightly, consonant with what is thereby at stake. For classical mechanics, inner recognition involves identifying forces and calculating their contributions to an outcome. But the existence of a pattern depends upon the possibility of recognizing how it applies. Haugeland thus concludes, rightly,

What is crucial for [conceptual understanding] is that the two recognitive skills be distinct [even though mutually constitutive]. In particular, skillful practitioners must be able to find them in conflict—that is, simultaneously to outer-recognize some phenomenon as present (actual) and inner-recognize it as not allowed (impossible). (1998, 286)[10]

Both dimensions of conceptual normativity are needed to sustain the claim that the pattern apparently displayed in a phenomenon enhances the world’s intelligibility. Something must be genuinely at stake in recognizing that pattern, and any issues that arise in tracking that pattern must be resolvable without betraying what was at stake.

Models and Conceptual Articulation

A two-dimensional account of the normativity of pattern recognition puts Cartwright’s and Hacking’s discussions of laboratory phenomena and theoretical modeling in a new light. Cartwright challenges this conception of scientific understanding in her work, from How the Laws of Physics Lie (1983) to The Dappled World (1999), by questioning the compatibility of inner and outer recognition in physics. In the first work, she argued that explanatory patterns expressed in fundamental laws and concepts are not candidates for inner recognition because most events in the world cannot be accurately treated in those terms without ad hoc phenomenological emendation and ceteris paribus hedging. In the latter work she argued that the alleged universality of the fundamental laws is illusory. The scope of their concepts is restricted to those situations, nomological machines, that actually generate more or less law-like behavior and the broader tendencies of their causal capacities. In the dappled world we live in, we need other, less precise concepts and laws. Scientific understanding is a patchwork rather than a conceptually unified field.

Cartwright draws upon two importantly connected features of scientific work. First, concepts that express the patterns projected inductively from revealing experimental or natural phenomena often outrun the relatively limited domains in which scientists understand their application in detail. Cartwright’s examples typically involve mathematical theories where only a limited number of situations can be described and modeled accurately in terms of the theory, yet the point applies more generally. Classical genetics mapped phenotypic differences onto relative locations on chromosomes, but only a very few organisms were mapped sufficiently to allow genes correlated with traits to be localized in this way. Moreover, substantial practical barriers prevented establishing for most organisms the standardized breeding stocks and a wide enough range of recognized phenotypic mutations to allow for sufficiently dense and accurate mapping. Second, in “gaps” where one set of theoretical concepts cannot be applied in detail, other patterns often provide alternative ways of understanding and predicting behavior of interest. Cartwright cites the apparent overlap between classical mechanics and fluid mechanics (1999, chapter 1). The motion of a bank note in a swirling wind does not permit a well-defined force function for the causal effects of the wind, but the situation may well be more tractable in different terms. This issue has been widely discussed in one direction as reduction or supervenience relations between theoretical domains, but the relation goes in both directions: the supposedly supervening conceptual domain might be said to explicate the concepts or events that cannot be accurately modeled in terms of a “lower” or more basic level of analysis.[11]

Cartwright has identified important issues, but her response remains unsatisfactory. Her conclusion that “fundamental” physical concepts have limited scope depends upon a familiar but untenable account of grasping a concept and applying it. In this view, grasping a concept is (implicitly) grasping what it means in every possible, relevant situation. Here, Cartwright agrees with her “fundamentalist” opponents that F = ma, the quantum mechanical formalism, and other theoretical principles provide fully general schemata for applying their constituent concepts within their domains; she only disagrees about how far those domains extend. The fundamentalist takes their domain as unrestricted, with only epistemic limits on working out their application. Cartwright ascribes semantic and perhaps even metaphysical significance to those limitations, which she takes to display the inapplicability of those concepts.[12]

Wilson’s (2006) alternative account of empirical and mathematical concepts not only shows why this shared picture of conceptual understanding is untenable but also helps clarify how to acknowledge Cartwright’s underlying concerns while reconciling them with my own concern for conceptual understanding. I want to understand the conceptual significance of experimental phenomena and their relation to practices of theoretical modeling without losing contact with what is at stake in applying scientific concepts. Cartwright thinks the dappled, patchwork character of the world is not amenable to smooth, systematic inclusion within the supposedly regimented universality of fundamental physical concepts. Wilson rejects the underlying “classical picture of concepts” and instead treats empirical concepts as organized in varying ways, such as loosely unified patchworks of facades bound together into atlases, or overlapping patchworks pulled in different directions by competing “directivities.”[13] A fully general concept need not have any fully general way of applying it. As a telling example, he addresses

the popular categorization of classical physics as billiard ball mechanics. In point of fact, it is quite unlikely that any treatment of the fully generic billiard ball collision can be found anywhere in the physical literature. Instead, one is usually provided with accounts that work approximately well in a limited range of cases, coupled with a footnote of the “for more details, see . . .” type . . . [These] specialist texts do not simply “add more details” to Newton, but commonly overturn the underpinnings of the older treatments altogether. (Wilson 2006, 180–81, italics in original)

The sequence of models treats billiard balls in mutually incompatible ways: first as point masses, then as rigid bodies, then as almost-rigid bodies with corrections for energy loss, then as elastic solids that distort on impact, and eventually with shock waves moving through the ball that generate explosive collisions at high velocities, and so on. Some models also break down the balls’ impact into stages, each modeled differently, with gaps. Wilson concludes that “to the best I know, this lengthy chain of billiard ball declination never reaches bottom” (2006, 181).

Wilson provides extraordinarily rich case studies of disparate links among conceptual facades, patches, or platforms with accompanying “property dragging” that can shift how concepts apply in different settings. One distinction helps indicate the extent to which empirical concepts need not be smoothly regimented or fully determinately graspable. Think of sequences of billiard ball collision models as exemplifying the intensive articulation of concepts with increasing precision and fine-grained detail. We then also need extensive articulation to adapt familiar concepts to unfamiliar circumstances. Wilson objects here to what he calls “tropospheric complacency”:

We readily fancy that we already “know what it is like” to be red or solid or icy everywhere, even in alien circumstances subject to violent gravitational tides or unimaginable temperatures, deep within the ground under extreme pressures, or at size scales much smaller or grander than our own, and so forth. (2006, 55, italics in original)

Thought experiments such as programming a machine to find rubies on Pluto (2006, 231–33) tellingly indicate the parochial character of confidence that we already know how to apply familiar concepts outside familiar settings (or even that the correct application is determinate).

Rejecting such complacency allows endorsement of Cartwright’s denial that general law-schemata are sufficient to understand more complex or less accommodating settings, while rejecting limitations on the scope of their concepts. Concepts commit us to more than we know how to say or do. “Force” or “gene” should be understood as “dappled concepts” rather than as uniformly projectable concepts with limited scope in a dappled world. Brandom (1994, 583) suggests a telling analogy between conceptual understanding and grasping a stick. We may only firmly grasp a concept at one end of its domain, but we take hold of its entirety from that end. We are also accountable for unanticipated consequences of its use elsewhere. The same is true for pattern recognition in experimental work. These patterns can be inductively salient far beyond what we know how to say or act upon.

That open texture is why I discuss inner and outer recognition in terms of the issues and stakes in concept use. “Issues” and “stakes” are fundamentally anaphoric concepts. They allow reference to the scope and significance of a pattern, a concept, or a practice (what is at stake there), and what it would be for them to go on in the same way under other circumstances or more stringent demands (what is at issue), even though those issues and stakes might be contested or unknown. Recognizing the anaphoric character of conceptual normativity lets us see what is wrong with Lange’s claim that inner recognition is shaped by disciplinary interests. He says,

A discipline’s concerns affect what it takes for an inference rule to qualify as “reliable” there. They limit the error that can be tolerated in a certain prediction . . . as well as deem certain facts to be entirely outside the field’s range of interests . . . With regard to a fact with which a discipline is not concerned, any inference rule is trivially accurate enough for that discipline’s purposes. (2000, 228, italics in original)

Lange makes an important point here that is misleadingly expressed in terms of scientific disciplines and their concerns. First, what matters is not de facto interests of a discipline but what is at stake in its practices and achievements. Scientists can be wrong about what is at stake in their work, and those stakes can shift over time as the discipline develops. Second, the relevant locus of the stakes in empirical science is not disciplines as social institutions but domains of inquiry to which disciplines are accountable. The formation and maintenance of a scientific discipline is a commitment to the intelligibility and empirical accountability of a domain of inquiry with respect to its issues and stakes.

These considerations about conceptual normativity also refine Hacking’s notion of phenomena as salient, informative patterns. The concepts developed to express what is inductively salient in a phenomenon are always open to further intensive and extensive articulation. The same is true of experimental phenomena. The implicit suggestion that phenomena are stable patterns of salience should give way to recognition of the interconnected dynamics of ongoing experimentation and model building.[14]

Thus far, I have discussed experimental phenomena as if experimenters merely established a significant pattern in the world, whose conceptual role needed further articulation by model building. That impression understates the conceptual significance of experimentation. Instead of distinct experimental phenomena, we should consider systematically interconnected experimental capacities. Salient patterns manifest in experimentation articulate whole domains of conceptual relationships rather than single concepts (Rouse 2008). Moreover, what matters is not a static experimental setting, but its ongoing differential reproduction, as new, potentially destabilizing elements are introduced into a relatively well-understood system. As Barad argues,

[Scientific] apparatuses are constituted through particular practices that are perpetually open to rearrangements, rearticulations, and other reworkings. That is part of the creativity and difficulty of doing science: getting the instrumentation to work in a particular way for a particular purpose (which is always open to the possibility of being changed during the experiment as different insights are gained). (2007, 170)

The shifting dynamics of conceptual articulation in the differential reproduction of experimental systems suggests that all scientific concepts are dappled—that is, open to further intensive and extensive articulation in ways that might be only patchily linked together. That is not a deficiency. The supposed ideal of a completely articulated, accurate, and precise conceptual understanding is very far from ideal. Consider Lange’s example of conceptual relations among pressure, temperature, and volume of gases. Neither Boyle’s nor van der Waals’s law yields a fully accurate, general characterization of these correlated macroproperties or the corresponding concepts. Yet each law brings a real pattern in the world to the fore, despite noise it cannot fully accommodate. These laws are not approximations to a more accurate but perhaps messy and complex relation among these macroproperties. Any treatment of pressure, temperature, and volume more precise than van der Waals’s law requires attending to the chemical specificity of gases, and because gases can be mixed proportionally, the relevant variability has no obvious limit. Insisting upon more precise specification of these correlations abandons any generally applicable conceptual relationship among these properties, as these relationships only show up ceteris paribus.

Hacking was nevertheless right to recognize the stabilization of some scientific conceptual relationships, even if such stability is not self-vindicating. The patterns already disclosed and modeled in a scientific field are sometimes sufficiently articulated with respect to what is at stake in its inquiries. The situations where inner recognition of those conceptual patterns might falter if pushed far enough do not matter to scientific understanding, and those divergences are rightly set aside as noise. That is why Lange indexes natural laws to scientific disciplines, or better, to their domain-constitutive stakes. The scientific irrelevance of some gaps or breakdowns in theoretical understanding can hold even when more refined experimental systems or theoretical models are needed in engineering or other practical contexts.[15]

At other times, seemingly marginal phenomena, such as the fine-grained edges of shadows, the indistinguishable precipitation patterns of normal and cancerous cells in the ultracentrifuge, the discrete wavelengths of photoelectric emission, or subtle shifts in the kernel patterning of maize visible only to an extraordinarily skilled and prepared eye, matter in ways that conceptually reorganize a whole region of inquiry. That is why, contra Cartwright, the scope of scientific concepts extends further and deeper than their application can be accurately modeled, even when their current articulation seems sufficient to their scientific stakes.

A New Scientific Image?

I conclude with a provocative suggestion. Some years ago, Bas van Fraassen (1980) proposed a dramatic reconception of Sellars’s account of the manifest and scientific images. Sellars (1963) argued that the explanatory power of the scientific image gave it epistemic and metaphysical primacy over the manifest image of “humanity-in-the-world.” Van Fraassen challenged Sellars’s account on two central points. He first argued that explanatory power is only a pragmatic virtue of theories and not their fundamental accomplishment. Second, he proposed limiting belief in the scientific image to those claims rationally accountable to human observation, given observation’s privileged role in justifying our beliefs. His constructive empiricism thereby restored priority to the manifest image as the source of epistemic norms.

My arguments suggest a different revision of the scientific image and its relation to our self-understanding. This revised image puts conceptual articulation at the center of the scientific enterprise, rather than treating it as a preliminary step toward justified beliefs or explanatory power. The sciences expand and reconfigure the space of reasons in both breadth and depth. At their best, they extend and clarify those aspects of the world that fall within the scope of what we can say, reason about, and act responsively and responsibly toward. The sciences do so in part by creating phenomena, extending them beyond the laboratory, and constructing and refining models that further articulate the world conceptually. That achievement often reconfigures the world so that it shows itself differently, and reorients us to be responsive to new patterns that reconstitute what is at issue and at stake there. Conceptual understanding does require reasoned critical scrutiny of what we say and do, and holding performances and commitments accountable to evidence. Justification is more than just an optional virtue. Yet justification is always contextual, responsive to what is at issue and at stake in various circumstances. We therefore should not replace Sellars’s emphasis upon explanation with van Fraassen’s general conception of empirical adequacy as the telos of the scientific image. Empirical adequacy can be assessed at various levels of conceptual articulation, in response to different issues and concerns. Empirical adequacy is contextual and pragmatic, just as van Fraassen insisted about explanation.

The resulting reconception of the scientific image also once again revises the relation between that image and our understanding of ourselves as persons accountable to norms. It yields a scientific conception of the world that would privilege neither a nature seemingly indifferent to normativity, nor a humanism that subordinates a scientific conception of the world to human capacities or interests. Karen Barad aptly expresses such a reconception of the Sellarsian clash of images in the title of her book: Meeting the Universe Halfway (2007). Working out how to meet the universe halfway—by grasping scientific and ethical understanding as part of scientifically articulated nature, and as responsive to issues and stakes that are not just up to us—requires a long story and another occasion.[16] Yet such a reconception of the scientific image is ultimately what is at stake in understanding the role of experimentation in conceptual understanding.

Notes

1. McDowell (1984) actually invokes the figures of Scylla and Charybdis in a different context, namely what it is to follow a rule. There, Scylla is the notion that we can only follow rules by an explicit interpretation, and Charybdis is regularism, the notion that rule-following is just a blind, habitual regularity. I adapt the analogy to his later argument in McDowell (1994), with different parallels to Scylla and Charybdis, because the form of the argument is similar and the figures of Scylla and Charybdis are especially apt there. It is crucial not to confuse the two contexts, however; when McDowell (1984) talks about a “pattern,” he means a pattern of behavior supposedly in accord with a rule, whereas when I talk about “patterns” later I mean the salient pattern of events in the world in a natural or experimental phenomenon.

2. “Once a genuine effect is achieved, that is enough. The [scientist] need not go on running the experiment again and again to lay bare a regularity before our eyes. A single case, if it is the right case, will do” (Cartwright 1989, 92). There are, admittedly, some phenomena for which “regularity” seems more appropriate, such as the recurrent patterns of the fixed stars or the robustness of normal morphological development. The phenomenon in such cases is not the striking pattern of any of the constellations or the specific genetic, epigenetic, and morphological sequences through which tetrapod limbs develop; it is instead the robust regularity of their recurrence. In these cases, however, the regularity itself is the phenomenon rather than a repetition of it. Humeans presume that a conjunction of events must occur repeatedly to be intelligible to us. Hacking and Cartwright, by contrast, treat some regularities as themselves temporally extended single occurrences. Phenomena in this sense are indeed repeatable under the right circumstances. Their contribution to scientific understanding, however, does not depend upon their actual repetition.

3. Cartwright’s account also requires further specification to understand which models count as successful extensions of the theory. Wilson (2006), for example, argues that many of the extensions of classical mechanics beyond its core applications to rigid bodies involve extensive “property-dragging,” “representational lifts,” and more or less ad hoc “physics avoidance.” Cartwright might take many of these cases to exemplify “the claim that to get a good representative model whose targeted claims are true (or true enough) we very often have to produce models that are not models of the theory” (2008, 40).

4. My objection to Cartwright’s view is subtly but importantly different from those offered by Kitcher (1999) and in a review by Winsberg et al. (2000). They each claim that her account of the scope of laws is vacuous because it allegedly reduces to “laws apply only where they do.” Their objections turn upon a tacit commitment to a Humean conception of law in denying that she can specify the domain of mechanics without reference to Newton’s laws. Because Cartwright allows for the intelligibility of singular causes, however, she can identify the domain of mechanics with those causes of motion that can be successfully modeled by differential equations for a force function. My objection below raises a different problem that arises even if one can identify causes of motion without reference to laws, and thus could specify (in terms of causes) where the domain of the laws of mechanics is supposed to reside in her account. My objection concerns how the concepts (e.g., force) applied within that domain acquire content; because the concepts are defined in terms of their success conditions, she has no resources for understanding what this success amounts to.

5. Failures to bring a concept to bear upon various circumstances within its domain have a potentially double-edged significance. Initially, if the concept is taken prima facie to have relevant applicability, then the failure to articulate how it applies in these circumstances marks a failure of understanding on the part of those who attempt the application. Sustained failure, or the reinforcement of that failure by inferences to it from other conceptual norms, may shift the significance of failure from a failure of understanding by concept-users to a failure of intelligibility on the side of the concept itself.

6. One can see the point in another way by recognizing that the scope-limited conception of “force” that Cartwright advances would have conceptual content if there were some further significant difference demarcated by the distinction between systems that have a well-defined force function and those that lack one. Otherwise, the domain of “force” in her account would characterize something like “mathematically analyzable motions” in much the same way that Hacking reduces geometrical optics to models of rectilinearity (rather than of the rectilinear propagation of light).

7. The salience of a pattern encountered in a scientific phenomenon, then, should be sharply distinguished from the kind of formalism highlighted in Kant’s (1987) account of judgments of the beautiful or the sublime (as opposed to the broader account of reflective judgment sketched in the First Introduction), or from any psychological account of how and why patterns attract our interest or appreciation in isolation.

8. I argue later, however, that the role Lange here assigns to de facto agreement among competent observers is not appropriate in light of his (and my) larger purposes.

9. In the case of classical mechanics, we conclude that there are no motions within its domain that are not caused by forces (although of course quantum mechanics does permit such displacements, such as in quantum tunneling). That inclusiveness does not trivialize the concept in the way that Cartwright’s restriction of scope to its approximately accurate models does, precisely because of the defeasible coincidence between inner and outer recognition (as we will see later). There is (if classical mechanics is indeed a domain of genuine scientific understanding) a conceivable gap, what Haugeland (1998, chapter 13) calls an “excluded zone,” between what we could recognize as a relevant cause of motion and what we can understand with the conceptual resources of classical mechanics. No actual situations belong within the excluded zone because such occurrences are impossible. Yet such impossibilities must be conceivable and even recognizable. Moreover, if such impossibilities occurred and could not be explained away or isolated as a relevant domain limitation (as we do with quantum discontinuities), then what seemed like salient patterns in the various phenomena of classical mechanics would turn out to have been artifacts, curiosities, or other misunderstandings. On this conception, to say that a phenomenon belongs within the domain of the concept of force is not to say that we yet understand how to model it in those terms; it does make the concept ultimately accountable to that phenomenon and empirically limited in its domain to the extent that it cannot be applied to that phenomenon to the degree of accuracy called for in context.

10. Haugeland talks about what is crucial for “objectivity” rather than for conceptual understanding. Yet objectivity matters to Haugeland only as the standard for understanding. I have argued elsewhere that Haugeland’s appeal to objectivity is misconstrued and that conceptual understanding should be accountable not to “objects” (even in Haugeland’s quite general and formal sense) but to what is at issue and at stake in various practices and performances (Rouse 2002, chapters 7–9).

11. I use “explicate” here for any domain relations for which the possibility of reduction or supervenience might be raised, even where we might rightly conclude that the explicating domain does not supervene. Thus, mental concepts explicate the domain of organismal behavior for some organisms with sufficiently flexible responsive repertoires, even if mental concepts do not supervene upon physical states or non-mental biological functions.

12. Cartwright does not assign this role to de facto epistemic limitations that might merely reflect failures of imagination or effort. The concepts apply wherever more generally applicable models could be developed that enable the situations in question to be described with sufficient accuracy in their terms, without ad hoc emendation. As she once succinctly put the relevant criterion of generality, “It is no theory that needs a new Hamiltonian for each new physical circumstance” (Cartwright 1983, 139).

13. Wilson identifies the “classical picture of concepts” with three assumptions:

  (i) we can determinately compare different agents with respect to the degree to which they share “conceptual contents”;
  (ii) that initially unclear “concepts” can be successively refined by “clear thinking” until their “contents” emerge as impeccably clear and well defined;
  (iii) that the truth-values of claims involving such clarified notions can be regarded as fixed irrespective of our limited abilities to check them (2006, 4–5).

He also identifies it differently as “classical gluing,” whereby predicate and property are reliably attached directly or indirectly (e.g., indirectly via a theoretical web and its attendant “hazy holism”).

14. Rouse (1996) argues for a shift of philosophical understanding from a static to a dynamic conception of epistemology. More recently (2009) I interpreted the account of conceptual normativity in Rouse (2002) as a nonequilibrium conceptual and epistemic dynamics. Brandom (2011) develops this analogy more extensively in interpreting Wilson (2006) as offering accounts of the statics, kinematics, and dynamics of concepts.

15. Most of Wilson’s examples are drawn from materials science, engineering, and applied physics with close attention to the behavior of actual materials. Brandom (2011) suggested that these domains exemplify the pressure put on concepts when routinely examined and developed by professionals through multiple iterations of a feedback cycle of extensions to new cases. He notes jurisprudence as a similar case, in which concepts such as contract or property with acceptable uses in political thought are under pressure from application in case law. These concepts are generally in good order for what is at stake in some contexts but are open to indefinitely extendable intensive and extensive articulation for other purposes.

16. I make a beginning toward working out such an understanding in Rouse (2002) and develop it further in Rouse (2015).

References

Barad, Karen. 2007. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham, N.C.: Duke University Press.

Bogen, James, and James Woodward. 1988. “Saving the Phenomena.” Philosophical Review 97: 303–52.

Brandom, Robert. 1994. Making It Explicit: Reasoning, Representing and Discursive Commitment. Cambridge, Mass.: Harvard University Press.

Brandom, Robert. 2011. “Platforms, Patchworks, and Parking Garages: Wilson’s Account of Conceptual Fine-Structure in Wandering Significance.” Philosophy and Phenomenological Research 82: 183–201.

Cartwright, Nancy D. 1983. How the Laws of Physics Lie. Oxford: Oxford University Press.

Cartwright, Nancy D. 1989. Nature’s Capacities and Their Measurement. Oxford: Oxford University Press.

Cartwright, Nancy D. 1999. The Dappled World. Cambridge: Cambridge University Press.

Cartwright, Nancy D. 2008. “Reply to Daniela Bailer-Jones.” In Nancy Cartwright’s Philosophy of Science, edited by S. Hartmann et al., 38–40. New York: Routledge.

Dennett, Daniel. 1991. “Real Patterns.” Journal of Philosophy 88: 27–51.

Elgin, Catherine. 1991. “Understanding in Art and Science.” In Philosophy and the Arts, Midwest Studies in Philosophy, vol. 16, edited by P. French, T. Uehling Jr. and H. Wettstein, 196–208. Notre Dame, Ind.: University of Notre Dame Press.

Friedman, Michael. 1974. “Explanation and Scientific Understanding.” Journal of Philosophy 71: 5–19.

Friedman, Michael. 1999. Reconsidering Logical Positivism. Cambridge: Cambridge University Press.

Goodman, Nelson. 1954. Fact, Fiction and Forecast. Cambridge, Mass.: Harvard University Press.

Hacking, Ian. 1983. Representing and Intervening. Cambridge: Cambridge University Press.

Hacking, Ian. 1992. “The Self-Vindication of the Laboratory Sciences.” In Science as Practice and Culture, edited by A. Pickering, 29–64. Chicago: University of Chicago Press.

Haugeland, John. 1998. Having Thought. Cambridge, Mass.: Harvard University Press.

Kant, Immanuel. 1987. Critique of Judgment, translated by W. Pluhar. Indianapolis: Hackett.

Kitcher, Philip. 1999. “Unification as a Regulative Ideal.” Perspectives on Science 7: 337–48.

Lange, Marc. 2000. Natural Laws in Scientific Practice. Oxford: Oxford University Press.

McDowell, John. 1984. “Wittgenstein on Following a Rule.” Synthese 58: 325–63.

McDowell, John. 1994. Mind and World. Cambridge, Mass.: Harvard University Press.

Morgan, Mary, and Margaret Morrison. 1999. Models as Mediators. Cambridge: Cambridge University Press.

Quine, Willard V. O. 1953. “Two Dogmas of Empiricism.” In From a Logical Point of View, 20–46. Cambridge, Mass.: Harvard University Press.

Quine, Willard V. O. 1960. Word and Object. Cambridge, Mass.: MIT Press.

Richardson, Alan. 1998. Carnap’s Construction of the World. Cambridge: Cambridge University Press.

Rouse, Joseph. 1996. Engaging Science. Ithaca, N.Y.: Cornell University Press.

Rouse, Joseph. 2002. How Scientific Practices Matter. Chicago: University of Chicago Press.

Rouse, Joseph. 2008. “Laboratory Fictions.” In Fictions in Science, edited by M. Suarez, 37–55. New York: Routledge.

Rouse, Joseph. 2009. “Standpoint Theories Reconsidered.” Hypatia 24: 200–9.

Rouse, Joseph T. 2015. Articulating the World: Conceptual Understanding and the Scientific Image. Chicago: University of Chicago Press.

Sellars, Wilfrid. 1963. “Philosophy and the Scientific Image of Man.” In Science, Perception and Reality, 1–37. London: Routledge and Kegan Paul.

van Fraassen, Bas C. 1980. The Scientific Image. Oxford: Oxford University Press.

Wilson, Mark. 2006. Wandering Significance. Oxford: Oxford University Press.

Winsberg, Eric, Mathias Frisch, Karen Merikangas Darling, and Arthur Fine. 2000. “Review, Nancy Cartwright, The Dappled World.” Journal of Philosophy 97: 403–8.

Wittgenstein, Ludwig. 1953. Philosophical Investigations. London: Blackwell.
