Thursday, January 7, 2010

Supervenience and Neuroscience


My paper "Supervenience and Neuroscience" has just today made the transition from sorta-mostly-accepted for publication in Synthese to totally-definitely-accepted for publication in Synthese. Many Brain-Hammer Heads were very helpful in suffering through earlier versions. Thanks, all y'all!

Here's a link to the latest version: [link].

Abstract: The philosophical technical term “supervenience” is frequently used in the philosophy of mind as a concise way of characterizing the core idea of physicalism in a manner that is neutral with respect to debates between reductive physicalists and nonreductive physicalists. I argue against this alleged neutrality and side with reductive physicalists. I am especially interested here in debates between psychoneural reductionists and nonreductive functionalist physicalists. Central to my arguments will be considerations concerning how best to articulate the spirit of the idea of supervenience. I argue for a version of supervenience, “fine-grained supervenience,” which is the claim that if, at a given time, a single entity instantiates two distinct mental properties, it must do so in virtue of instantiating two distinct physical properties. I argue further that despite initial appearances to the contrary, such a construal of supervenience can be embraced only by reductive physicalists.

16 comments:

  1. Hi Pete,

    Congratulations on the S&N piece! I started reading it last night, and there is much to give functionalists pause.

    By way of defending the functionalist, however, I'd like to raise a worry for the coherence of the "mental-mental" supervenience examples you discuss, more specifically, the Chinese Room case.

    As you correctly point out, the original system's reply was that the whole room understood Chinese, not Searle. Though this may be counterintuitive, such a case is not a case of mental-mental supervenience, since the mental states of the whole room do not supervene on Searle's mind (as you also point out).

    But now we are asked to imagine Searle "internalizing" the contents of the room (memorizing the program and other data); here, it looks like the functionalist will have to say that the mind that understands Chinese supervenes on Searle's mind (since whether the program and other data is internal or external should not matter to the functionalist), which in turn supervenes on Searle's brain. This violates FGS.

    My worry is that when we examine the details, it's not so clear that there is a mind which understands Chinese that supervenes on Searle's mind (even to a functionalist who is prepared to bite some bullets here). The system's reply says that Searle is executing the program (for shorthand, I'll say that he is the CPU of the system). Now, functionally speaking, the CPU is distinct from the program and other stored data (if it were not, then the original system's reply would have been simply incoherent; but you are willing to grant that this reply is not obviously wrong).

    Also, we are to imagine that only some of Searle's mental states serve to play the functional role of the CPU. After all, Searle can think about Paris in the Spring, his dog, and have other thoughts that may be completely irrelevant to his functioning as the CPU of the system that allegedly understands Chinese. To avoid violating FGS, it will have to turn out that the mental states which constitute Searle's serving as CPU have a different physical supervenience base than the mental states involved in thinking about dogs and Paris.

    But now imagine that Searle memorizes the program. Since there is a functional difference between CPU and program, when Searle's mental life is engaged in playing the role of CPU, that engagement cannot also count as playing the role of program. In effect, the program part would have to be "external" to Searle's mind when Searle's mind is playing the role of CPU. Of course, this is compatible with saying that the program part is internal, in the sense of inside the skull. But this is not enough, for the supervenience base of the mind that understands Chinese would still be wider than the base of Searle's mind (when Searle is acting as CPU).

    I think the failure to appreciate this point has generated a lot of confusion in the literature on the Chinese Room. It's as if we are to imagine Searle's mind is playing the role of CPU and program of the Chinese system. But this cannot be right. The program is "input" to the CPU, but states of memory are not "inputs" to anyone's mind. Insofar as Searle's mind is playing the role of CPU, his mind cannot play the role of program. Insofar as the program is part of Searle's mind, he cannot play the role of CPU.
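
    Just to make the CPU/program distinction vivid, here's a minimal sketch of the architecture I have in mind (the rule table and symbols are made up, obviously): the "CPU" is the dispatch loop, the "program" is the separate rule table it consults, and the loop does nothing with the symbols except look them up.

    ```python
    # The "program": stored data, consulted by the CPU but functionally distinct
    # from it. The input-to-reply pairs are invented purely for illustration.
    CHINESE_RULES = {
        "你好": "你好，很高兴认识你",
        "天气怎么样": "今天天气很好",
    }

    def cpu(rules, incoming_symbols):
        """The dispatch loop: matches uninterpreted symbols against the rule table."""
        replies = []
        for symbols in incoming_symbols:
            # Pure lookup; the loop has no access to what the symbols mean.
            replies.append(rules.get(symbols, "对不起，我不明白"))
        return replies

    print(cpu(CHINESE_RULES, ["你好", "天气怎么样"]))
    ```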

    What do you think? Am I misunderstanding something?

    P.S. Thanks for posting that link to the stuff on x-phi; a few months ago, I got into a somewhat unpleasant exchange about this topic, an exchange wherein I raised some of the same concerns that you did.

  2. Oh, the last post was by Martin.

  3. Thanks, Martin.

    I'm having a hard time following your thought here. I suspect the part that I'm getting hung up on is how it is that the program has a wide supervenience base.

    For what it's worth, on p.7 I do explicitly assume internalism in my description of the Searle stuff.

    Maybe some terminological stipulation can help get us some common ground here. All of these are slightly weird ways of using the terms, I admit. But my aim right now is not analysis.

    Program: a description of events such that, when such events occur, mental states of Chinese understanding will occur.

    Running the program: the enacting of such events; the occurrence of the events alleged to suffice for Chinese-understanding mental states.

    The Chinese mind: a particular set of Chinese-understanding mental states.

    Searle's mind: a particular set of mental events or mental states.

    Searle's brain: a particular set of neural events or neural states.

    Central to the dialectic as I see it is a claim of Searle's that at no point need his mind contain any mental states of understanding Chinese. The bullet-biting functionalist is granting this claim of Searle's and insisting that, nonetheless, the Chinese mind will arise in any situation in which the program is run, including when the running of the program is achieved by Searle's mind.

    So we have the Chinese mind (CM) being nonidentical to Searle's mind (SM), and both being nonidentical to Searle's brain (SB), but CM supervening on SM and SM supervening on SB.

    I guess you are trying to spell out a way for the functionalist to motivate the claim that CM and SM supervene on different brain states/brain events? That seems like a good place for the functionalist to wind up, but I'm not seeing what is supposed to get them there.

  5. Thanks for the reply, Pete.

    Let me see if I can restate/reformulate the worry.

    Searle understands English, not Chinese. Understanding English or Chinese is a system level property (property of a person, not a person's parts). When Searle "internalizes" the contents of the room by memorizing them, how are we to make sense of the program running on Searle's "mind"? Searle does not "read" or "understand" what's in memory in the way that he understands English. There is no mind--Searle's mind--that takes the contents of memory as input (maybe there is a homunculus, though). So it is not at all clear to me that if the program for understanding Chinese is running on something inside Searle's skull, then it is Searle's mind that is running the program. A more plausible story to me would be to say that, in internalizing the contents of the room, the program is now being run by subpersonal processes/agencies (a la Dennett). But this would also suggest, pace Searle, that he would understand Chinese! His status as CPU has vanished, relative to the Chinese program.

    So maybe what I would like to see spelled out are the details involved in claiming that Searle's mind is running C, running C in a way that makes it plausible to say that Searle still does not understand Chinese.

  6. Where 'C' stands for the program for understanding Chinese.

    MR

  7. Thanks, Martin.

    I think it's getting clearer to me now what the worry is.

    Here's a different attempt at defending my line of thought. Please let me know if you think this advances things.

    One way of thinking about Searle's core claim is in terms of what it might mean for one to consciously follow some algorithm. Suppose that you are calculating the tip on our bar tab and you do so not by scratching out Arabic numerals on a napkin, but by consciously visualizing columns of numerals. At the point at which you need to "carry the one", you consciously visualize that Arabic numeral appearing in the appropriate column. These conscious visualizations may be accompanied by conscious verbalizations like "OK, Martin, don't forget to carry the one!".

    With this kind of picture in mind we can sketch out Searle's argument like this: First, the program can be spelled out as a series of operations described in English and perhaps also involving pictures of Chinese characters. Second, Searle can run the program by engaging in a series of conscious mental states, each of which is either a bit of conscious verbalization in English or a conscious visual image of a Chinese character, a Chinese character that, Searle is stipulating and the imagined functionalist is granting, Searle doesn't understand.

    So, there's a way of seeing how all of the events described in the program could be implemented by conscious mental states, none of which are states of understanding Chinese.
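
    To make the tip-calculation picture a bit more concrete, here's a toy rendering of what it is to follow such a procedure step by step (this is just my own illustrative sketch, not anything from the paper): every step is the kind of operation one could consciously verbalize, and none of the steps is itself a state of understanding anything further.

    ```python
    # Column addition with explicit carries: each step is the sort of operation
    # one could verbalize ("add the digits, carry the one") and consciously follow.
    # Purely illustrative; the numerals and the tip scenario are made up.

    def add_by_columns(a: str, b: str) -> str:
        """Add two numerals digit by digit, rightmost column first, carrying as needed."""
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        carry, columns = 0, []
        for da, db in zip(reversed(a), reversed(b)):
            total = int(da) + int(db) + carry      # "add the two digits plus any carry"
            columns.append(str(total % 10))        # "write down the rightmost digit"
            carry = total // 10                    # "carry the one (if there is one)"
        if carry:
            columns.append(str(carry))
        return "".join(reversed(columns))

    print(add_by_columns("275", "48"))  # prints 323
    ```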

    Of course, it is in theory possible for a functionalist to tell a tale whereby Searle is trained to go through a bunch of mental sequences which, while he goes through them, involve, in addition to their own subvening brain states, additional brain states that constitute a distinct supervenience base for the Chinese mind. But it seems a little ad hoc for the functionalist to whip that out at this point. Would they have independent motivation for doing so? Especially important here is that they have a motivation that isn't just acceptance of FGS. Because if that's what's motivating them, then I win anyway.

    Anyway, is this helping?

  8. This helps a lot, Pete. Thanks!

    I have a lingering-but-I-am-too-tired-to-spell-out-in-great-detail-right-now worry that might make the claim of distinct supervenience bases less ad hoc. The actual sequences of conscious mental states that we are imagining to constitute running the program are probably going to be insufficient to individuate programs. Two programs may share actual state sequences but differ in their counterfactual sequences. If the counterfactuals were relevant to program individuation, then one might argue that the supervenience base for program instantiation (and thus CM instantiation) was wider than actual sequences of conscious mental states.
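
    Something like the following toy case is what I have in mind (the functions are invented, and nothing hangs on the details): two "programs" that agree on every actual run but come apart on counterfactual inputs.

    ```python
    # Two toy "programs" that are indistinguishable on the inputs ever actually
    # presented but differ counterfactually; if program identity depends on the
    # counterfactual profile, actual state sequences underdetermine the program.

    def program_one(n: int) -> int:
        return n + n

    def program_two(n: int) -> int:
        # Agrees with program_one on even inputs, which (let's suppose) are the
        # only inputs the system ever actually receives.
        return n + n if n % 2 == 0 else 0

    actual_inputs = [2, 4, 8]                         # the actual run history
    print([program_one(n) for n in actual_inputs])    # [4, 8, 16]
    print([program_two(n) for n in actual_inputs])    # [4, 8, 16] -- same so far
    print(program_one(3), program_two(3))             # 6 0 -- they come apart counterfactually
    ```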

    I dunno, I'll have to think about it some more.

    MR

  9. If you come up with anything further, please do let me know. Thanks for your work on this so far!

  10. Hey Pete,

    Do you think Churchland's state-space semantics founders on FGS?

    Here's the worry. According to Churchland, concepts are points in partitioned activation spaces (bit of an over-simplification, but it should not affect the argument). That a point is the concept it is depends on the overall geometry of the space.

    A conceptual framework is the partitioned activation space itself. The partitioned activation space has representational content.

    A concept (point) inherits its content from the conceptual framework (holism).

    So consider Cottrell's face-recognition network (after training). Churchland says that the partitioned activation space of the hidden layer represents something about hill-billy faces; the space itself has representational content. Particular points in that space represent prototypical family members (e.g., Hatfields, McCoys, etc.).

    The very same set of connections determines both the structure of the hidden layer (and thus the content of the conceptual framework itself) and, of course, the content of the points (given holism).

    This is how Churchland describes this stuff (and we are talking about abeyant states).

    I want to say that we have distinct representational properties (a point does not represent what the entire space represents, even though the connection is intimate) without distinct physical properties.

    In the paper you talk about a related worry, but I don't *think* it's the same one. There are distinct dispositional properties here, of course, but I am/Churchland is talking about the overall shape of the activation space that the weights induce, not the specific dispositions to token one vector in the presence of another. For Churchland, it is the shape of the activation space that has primacy, when it comes to determining the content of the space. Insofar as the content of the points is inherited from the overall geometry of partitioned space, it's not clear to me how distinct representational properties are grounded in physical differences.
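
    In case it helps, here is the bare-bones structure I have in mind, in toy form (the weights and inputs are invented; this is not Cottrell's actual network): a single weight matrix fixes both the geometry of the hidden-layer activation space (the conceptual framework) and each prototype point within it (the concepts).

    ```python
    # One set of connection weights determines the whole activation-space geometry
    # and every "concept" point within it. Made-up weights and inputs, for
    # illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 5))            # the single set of connection weights

    def hidden_activation(x):
        """Map an input vector to a point in the hidden-layer activation space."""
        return np.tanh(W @ x)

    # Hypothetical inputs standing in for a prototypical Hatfield and McCoy face.
    hatfield_input = np.array([1.0, 0.2, 0.1, 0.9, 0.0])
    mccoy_input    = np.array([0.1, 1.0, 0.8, 0.0, 0.3])

    hatfield_point = hidden_activation(hatfield_input)   # one "concept" (a point)
    mccoy_point    = hidden_activation(mccoy_input)      # a distinct "concept"

    # The geometry of the space (e.g., the distance between prototype points) is
    # likewise fixed by W; there is no further physical difference to point to.
    print(hatfield_point, mccoy_point, np.linalg.norm(hatfield_point - mccoy_point))
    ```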

    Thoughts?

    Martin

  11. Hi Martin,

    Thanks for pressing me on this.

    So, there are three main ways on the table for being a psychological realist connectionist holist about abeyant states.

    1. Be a realist about distinct physical dispositions that might have as their categorical base one and the same set of connection weights, affirm distinct mental states, and affirm the supervenience of the distinct mental states on the distinct physical dispositions.

    2. Be a Quinean about the dispositions and just identify them with the categorical base and deny that there are distinct mental states (there's only one mental state and it's something like *knowing how to distinguish Hatfields and McCoys*).

    3. Affirm distinct mental states but refuse to supply distinct physical supervenience bases for them.

    1 & 2 are each consistent with FGS. 3, the one you are here cheering for, is not.

    I'm wondering what can be said in favor of 3. Here are some considerations against it.

    A. All of the considerations for FGS count against 3. So, for instance, doubled qualia and intermittent doubled qualia are consistent with versions of physicalism that deny FGS. But no version of physicalism worthy of the name should be consistent with DQ and IDQ... and so on...

    B. Assuming the causal closure of the physical, it doesn't look like there's any causal/explanatory work for the distinct mental states posited in 3 to do. They can't be distinguished on grounds of having distinct causal powers, since the physical has all the causal powers and the 3-ist is denying the relevant physical distinctnesses.

    So, I guess the thing I'd like to hear more about is what would make 3 preferable to 1 or 2. As long as 1 and 2 are plausible ways of describing everything that's going on with connectionist implementations, then connectionist implementations don't threaten to serve as counterexamples to FGS. If, however, there are grounds for preferring 3, or something similar, over 1 & 2, then FGS would seem to me to be threatened.

  12. Hi Pete,

    I am quite sympathetic to your response here, and I brought up SSS precisely because I think the argument(s) of SN put pressure on it (or at least put pressure on a particular way of casting it).

    (Aside: though you do not mention Ramsey's "Representation Reconsidered" in SN, Ramsey appears to want to embrace 1--talk of the weights, or the partitioned activation spaces they induce, as representations does not do any work that talk of complex dispositions cannot do, so we should not characterize the abeyant "knowledge" of networks in representational terms. I don't have RR in front of me, but I think that is the gist).

    With that said, how might we defend 3?

    Let's go back 10-15 years or so, when Churchland and Fodor started that "conceptual similarity across neural diversity" dispute. Fodor argued that if one abandons atomism AND tries to ground concepts in neuronal activation spaces, then neuronal diversity is going to make conceptual identity across brains/networks look impossible.

    One aspect of the worry concerned the variability in the dimensionality of networks, but even if we considered networks with the same number of dimensions, there was a problem: allegedly shared concepts did not correspond to unique points in activation spaces (as Churchland admitted).

    To be continued...

    Martin

  13. Continued...

    Now, if this is right, then Fodor can say that the attempt to reduce concepts (abeyant) to physical dispositions isn't going to work, since there isn't going to be a unique realization (across and at the level of networks) of those dispositions; i.e., the disposition to activate MALE will only help if there is some way to identify the concept MALE across networks. But there is nothing available to do that (unless, perhaps, one adopts some externalist account--anathema to Churchland!). Option 2 fares no better in this regard, since Fodor will rightly ask "What makes this *knowing how to distinguish Hatfields and McCoys*, that is, what motivates the intentional characterization of the physical states of the network?"

    So what's Churchland's response? Though the points that constitute concepts may vary across networks, concept identity can be secured via the shared geometric structure of the partitioned spaces of networks.
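
    (A toy way to picture that move, with arbitrary numbers: two networks whose prototype points differ, but whose distance structure--the shape of the partitioned space--is the same.)

    ```python
    # Same geometry, different realization: the prototype points of network B are
    # a rotated copy of network A's, so their pairwise-distance structure is
    # identical even though the points themselves differ. Points and rotation
    # angle are chosen arbitrarily, purely for illustration.

    import numpy as np

    def pairwise_distances(points):
        """Distance matrix; this is the shared 'geometry' across realizations."""
        return np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

    network_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])   # prototype points, net A

    theta = np.pi / 3                                            # rotate by 60 degrees
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    network_b = network_a @ rotation.T                           # different points, net B

    print(np.allclose(pairwise_distances(network_a), pairwise_distances(network_b)))  # True
    ```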

    But how does the geometry help with the question of content? This is where Churchland makes his appeal to the analogy with maps (heck, he even appeals to the Wittgenstein of the Tractatus!). The most recent attempt I've seen to do this is in Churchland's "The Portrayal of Worlds" in "Neurophilosophy at Work".

    So, the thinking appears to be this: we need this construal of SSS because, without it, we cannot make sense of conceptual similarity.

    However, in making concept identity depend on the geometry, it looks like we run afoul of FGS. The various points in the partitioned activation space correspond to different concepts, but their identity as concepts depends on the overall geometry. Since the latter is determined by the weight matrix, we violate FGS.

    OK, so does any of this conflict with the causal closure of the physical? Well, I think now we are back to some familiar knots. Churchland might say: well, yes, of course we can tell the causal story without introducing SSS, but if we limit ourselves to the physical particulars of specific networks, we will miss what they have in common (geometry), and it is precisely the latter we need in order to subsume these networks under psychologically interesting generalizations (this reminds me of Andy Clark's take on distributed connectionism in Microcognition).

    Now, it is well known that many folks who make this move (e.g., Fodor) want to preserve the idea that the existence of these interesting generalizations implies causal powers, causal powers that cannot be reduced. I am more sympathetic to the position that says that you can get explanatory benefits without introducing causal powers, and I wonder whether this is how we should interpret Churchland here. If this is right, then the cogency of the view will ride on whether--and to what extent--we can divorce explanation from causation. This is a matter for another post, however.

    Isn't it great that it is our *job* to talk about this stuff?

    Martin

  14. Hi Martin,
    Thanks for fleshing this out. It is a very interesting line of defense of 3.

    Two main things come to mind right now. The first is to question whether the causal/explanatory challenge is sufficiently addressed by leaning on the content-sharing issue. The second is to express queasiness about the role that mathematical abstracta are being called on to play when the defense leans on geometry. (Some of this just kind of echoes stuff you've mentioned along the way.)

    Spelling this out further:

    Re: Causation and content sharing.

    I do think that Churchland's content-similarity response (involving that stuff about rotating hypersolids in Neurophilosophy at Work) to Fodor's challenge about how you and I can have thoughts with identical contents is a good response. But I wonder if it really helps in addressing the causal/explanatory challenge I was trying to raise against 3. What's the explanandum that the discussion of the Fodor challenge revolves around? One way of describing it is that Fodor and other east coast types have an anti-holistic intuition that diverse thinkers can nonetheless "share thoughts". But giving a philosophical or metaphysical explanation that preserves the truth, or something close, of this intuition doesn't quite rise to the challenge I wanted to push. I want to raise a challenge that's more like: given that you admit that 3-ist theories give no superior causal oomph over 1 or 2, what's so great about 'em (and please don't bring intuitiveness into it)? Anyway, I'm just thinking out loud here. This is pretty ill-formed on my part.

    Re: abstracta.

    So, I take my FGS paper to be pitched at people who have a prior commitment to calling themselves physicalists. But as I'm hearing the defense of 3 that you are rehearsing here, you've got irreducible mental thingies that depend on a nonphysical nonmental geometrical thingie which in turn depends on the physical thingies. Aside from being even more pluralistic than dualism, it has this other feature that might give the self-described physicalist pause: it has geometric abstracta serving as intermediaries between the physical and the irreducibly mental. Yikes!!!

    I know "yikes" is a lousy objection, but that's all I've got right now.

    Anyway, overall, the line you are pushing strikes me as an interesting way to go. I'll need to think about it a whole bunch more.

    It is indeed a great job to think about this stuff!

  15. Hi Pete,

    Your responses (and challenges) have been quite helpful in clarifying my thinking about the matter; thanks for that.

    Regarding the stuff about abstracta, since I don't (and neither does Churchland, I think) want to say that hypersolids, etc. are playing an ontologically mediating role between the mental and the physical, let me clarify what I take the role of the geometry to be in Churchland's account.

    Talk of weight "matrices," activation "vectors," etc., is presumably justified because it provides us a rigorous way to measure physical magnitudes and express their relations (this is straightforward Churchlandia). But it is the magnitudes and their relations that are ontologically significant here, not the mathematics used to capture them.

    With that said, let me try to re-express what I take the crucial point to be.

    Churchland's response to Fodor requires that the property whereby some physical thingy is a concept of X is a higher-order relational property that things with distinct lower-order physical properties can share. Holism plus neural diversity entails this (I *think*).

    Now, here's the rub. According to Churchland, the relevant relations are the ones that are described by the geometry. Since what's described by the geometry is determined by the weights, distinct concepts within a conceptual framework do not have distinct physical bases.

    What moves are available here as a response? One would be to reject FGS. But suppose you want to hang on to FGS. What options remain? A couple of possibilities come to mind. One option is to deny that the concepts which populate these conceptual frameworks are distinct concepts. But a consequence of saying this is that when, e.g., the face recognition network is successfully sorting Hatfields and McCoys, it isn't exercising different concepts. But having distinct concepts is supposed to be what accounts for such sorting abilities, so this looks like a non-starter.

    Another option is to embrace some sort of instrumentalism about concepts, when it comes to attributing concepts to neural networks (when it comes to characterizing a network's "long term knowledge"). Talk of "concepts" is merely a handy way to classify and predict the dispositions/behavior of diverse networks (where the stuff about geometry is a useful way to generate those classifications and predictions).

    Clearly, the person who accepts the latter alternative is the offspring of Dennett and Churchland!

    Martin

  16. Hey Martin,
    This is pretty much just repeating my main beef, but it boggles me how "having distinct concepts is supposed to be what accounts for such sorting abilities" can be squared with a denial of physically distinct properties in virtue of which such concepts are instantiated. But this is just me begging the question right now. I'll have to think about it a bunch more.

    It's funny that my defense of psychoneural reductionism winds up with me taking Fodor's side in the classic debates between him and Churchland and Dennett!
