Wednesday, June 10, 2009

Bandwidth and Storage in the Human Biocomputer


One ballpark estimate of the memory capacity of the human brain puts it in the range of 10^13 - 10^17 bits (Tipler, 1995). These numbers are much too large for current purposes if not all of the brain’s memory is in the service of concepts. A smaller number may be arrived at by assuming that concepts are restricted to prefrontal cortex (PFC). PFC accounts for 12.51% of whole-brain volume (McBride, Arnold, & Gur, 1999), and thus we arrive at a reduced memory capacity estimate in the ballpark of 10^12 bits.
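For readers who want to check the arithmetic, here is a quick sketch (the 12.51% PFC volume fraction and the 10^13-bit lower bound are simply the figures cited above):

```python
# Scale the low end of the whole-brain estimate (Tipler, 1995) by the
# PFC volume fraction (McBride, Arnold, & Gur, 1999), as in the text.
whole_brain_bits = 1e13     # lower bound of the 10^13 - 10^17 range
pfc_fraction = 0.1251       # PFC as 12.51% of whole-brain volume

pfc_bits = whole_brain_bits * pfc_fraction
print(f"{pfc_bits:.3g} bits")   # ~1.25e12, i.e., ballpark 10^12 bits
```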

Is what Mary knows while staring at red for the first time simply more information than can be squeezed into a memory store of 10^12 bits? It seems not.

An early estimate of the bandwidth of the human eye for color vision is 4.32 x 10^7 bits/sec (Jacobson, 1950, 1951). A more recent estimate is 10^6 bits/sec (Koch et al., 2006), i.e., a megabit per second, or roughly an eighth of a megabyte. The computer-savvy reader may already have an intuitive grasp of a megabyte. The Wikipedia entry for “megabyte” (accessed July 24, 2008) tells us that a megabyte of data is roughly equivalent to a 1024x1024 pixel bitmap image with 256 colors (8 bpp color depth), 1 minute of 128 kbit/s MP3-compressed music, or a typical book volume in text format (500 pages × 2000 characters).
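A quick sanity check on those Wikipedia equivalences (a rough sketch; it assumes 8-bit characters and takes "kbit" as 1,000 bits):

```python
# Express each Wikipedia example in bits and compare to one megabyte
# (here taken as 2^20 bytes = 8 * 2^20 bits).
megabyte_bits = 8 * 2**20

bitmap_bits = 1024 * 1024 * 8    # 1024x1024 pixels, 8 bpp color depth
mp3_bits = 128_000 * 60          # 1 minute at 128 kbit/s
book_bits = 500 * 2000 * 8       # 500 pages x 2000 characters

for name, bits in [("bitmap", bitmap_bits), ("mp3", mp3_bits), ("book", book_bits)]:
    print(f"{name}: {bits / megabyte_bits:.2f} MB")   # each close to 1 MB
```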

Assuming Mary has to stare at a red object for a full second to know what it’s like to see red, our lowest estimate of human memory capacity is still orders of magnitude higher than what comes into her eye during that second. (And that’s assuming that Mary has a normal-sized human PFC. Physically omniscient Mary may well have a bigger brain than normal.) From a purely information-theoretic perspective, giving her bigger lobes would make it even easier to know what it’s like.
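To put a number on that headroom (a sketch using only the figures already cited; the Koch et al. rate is the more conservative of the two eye estimates):

```python
import math

# Low-end PFC capacity: 10^13 bits scaled by the 12.51% volume fraction.
pfc_bits = 1e13 * 0.1251
eye_bits_per_sec = 1e6        # Koch et al. (2006) estimate

one_second_of_looking = eye_bits_per_sec * 1.0
headroom = math.log10(pfc_bits / one_second_of_looking)
print(f"~{headroom:.1f} orders of magnitude to spare")   # about 6
```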

So, from an information-theoretic perspective, Mary’s memory capacity is easily large enough for phenomenal knowledge to be conceptual. But the information has to get in there somehow, and maybe color vision is the only pipeline fat enough to do the trick. Unfortunately for the defender of the Experience Requirement, there’s no purely information-theoretic basis for her position.

Jacobson (1950, 1951) gives a bandwidth estimate of 4.32 x 10^6 bits/sec for the eye for black and white vision and an estimate of 9,900 bits/sec for the bandwidth of the human ear. Continuing with our assumption that Mary would require a full second to gain, via color vision, knowledge of what it’s like to see red, the very same amount of information could be acquired by a color-blind person in 10 seconds, while a blind person acquiring the information auditorily would need a full 73 minutes. Reducing our estimate of how long Mary needs to a tenth of a second means that the color-blind could acquire that information in about a second and the fully blind in about seven minutes.
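Those time comparisons fall out of simple division (a sketch; the rates are Jacobson's, as cited above):

```python
# Time for the slower channels to carry the information in one second
# (or a tenth of a second) of color vision, per Jacobson (1950, 1951).
color_rate = 4.32e7     # color vision, bits/sec
bw_rate = 4.32e6        # black-and-white vision, bits/sec
ear_rate = 9.9e3        # audition, bits/sec

for seconds_of_color in (1.0, 0.1):
    bits = color_rate * seconds_of_color
    print(f"{seconds_of_color} s of color vision = "
          f"{bits / bw_rate:.1f} s in B&W, "
          f"{bits / ear_rate / 60:.1f} min by ear")
# Roughly 10 s and 73 min for the full second, as in the text.
```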

(Of course, none of this is to say that, for example, the blind person would be hearing red. But it is to say that she is learning whatever is to be learned by the sighted when they see red. The information acquired about red may enter sensory systems without giving rise to conscious experience.)

The above considerations about bandwidth help us to see why the Experience Requirement may strike so many people as plausible. There is a marked difference between what you can learn in a second and what you can learn in 73 minutes. And it is reasonable to assume that people have at least a rough grasp of the informational capacities of their various sensory systems.

Nonetheless, regardless of whether we interpret Hume’s assertion about what the blind can know about color as a claim about nomological, metaphysical, or logical possibility, the claim receives no support from these information-theoretic considerations. From the information-theoretic perspective it is nowhere near impossible for the blind to acquire information about the presence of redness. They just need a longer time than the sighted to do so.

35 comments:

  1. These bandwidth requirements seem like straw men. Ignoring the general numbers, wouldn't the experience requirement folks just say that what is needed for experience is not merely information of a certain quantity, but information encoded in a certain format, a format that is inaccessible to (pre-release) Mary?

    Imagine showing the text printout of a zipped JPEG file to a monkey. Sure, the information about the corresponding image is there, and if the monkey stares at the printout long enough he will have processed the information in some sense. Even assume he memorizes the text printout of the .zip file. Because he doesn't understand how .zip files are encoded (much less how to translate from a text printout of a JPEG to actual image data), he'll never know what the image is of.

    Perhaps the 'knowing what it is like' (WIL) box requires the data in the correct format (it needs the uncompressed ZIP file opened in Illustrator), and this format is produced by a properly functioning visual system. Once the WIL box is properly activated by such a visual system, and memory formed, then you can have the concept of what it is like to see red.

    Now let's introduce Swamp Mary. She is wired as if her WIL box had been properly activated. But we could also imagine configuring her brain as if she had met Joe Schmoe, and that wouldn't mean she has met Joe Schmoe. She doesn't remember meeting Joe Schmoe (she never actually met him). Couldn't someone say she doesn't have a real memory of seeing red, so her "knowledge" of what it is like, which depends upon that memory, is ersatz knowledge (true belief that is not justified because it doesn't have the right history)?

    A second possibility is that the experience requirement could be meshed with this 'formatting' objection by saying that Swamp Mary's brain has been configured so that during occurrent activations of knowing WIL to see red, her visual system is put in the right state to feed the information in the right format to the WIL box. For instance, the WIL-to-see-red box activates the "redness" centers in V1, and these hadn't been activated previously in Swamp Mary, and this offline imaginative activation gets things in the right format for her WIL box.

    This second possibility gets around some of the strangeness that would make it seem like she doesn't really know what it is like to see red, even though her experience is exactly the same as regular Mary's (this would salve the conscience of those who are internalists about phenomenal content, who would say that in virtue of having an experience of red, you know WIL to see red).

    ReplyDelete
  2. Hi Eric,

    The point of the bandwidth stuff is to give some actual evidence for thinking both that the experience requirement is false and that the experience requirement would have a certain attractiveness to the average Joe or Jane.

    The experience requirement lovers, the gappy physicalists, don't get to just say that there's some format that's inaccessible to pre-release Mary. They need to give some evidence or reason for believing so. And so far, I don't see that there's much hope for them.

    Instead of imagining a monkey looking at a text printout of a zipped JPEG file, imagine someone who, like pre-release Mary, is physically omniscient. Why wouldn't they know what the image is of? Being physically omniscient, they know every causal and counterfactual relation that the zipped jpeg bears to the subject of the image and they know which entities in the universe it bears various structural isomorphisms to. In other words, the physically omniscient person has mental states that satisfy the psychosemantic criteria spelled out by theories of content like causal and descriptive theories.

    As best as I can tell, pre-release Mary can represent everything that needs to be represented. So what's missing?

    As for the suggestion that someone without an actual causal link to a red quale lacks sufficient justification or whatever for their true belief to count as knowledge, all I have so far to say to that is what I said to Chase Wrenn a while ago here:

    http://www.petemandik.com/blog/2009/05/22/theres-something-about-swamp-mary/comment-page-1/#comment-551979

    ReplyDelete
  3. This comment has been removed by the author.

    ReplyDelete
  4. This comment has been removed by the author.

    ReplyDelete
  5. Oops I need to be careful now that you are on blogspot, as I have a chess identity here. :)

    Pete you said:
    "The experience requirement lovers, the gappy physicalists, don't get to just say that there's some format that's inaccessible to pre-release Mary. They need to give some evidence or reason for believing so."

    All parties are just throwing around speculative hypotheses, and nobody has particularly strong evidence.

    Prima facie, at least, the view that a blind person will not know what it is like to see (even if they are a vision scientist) is reasonable. And one obvious explanation is that the information is not encoded in a way (when acquired as a scientist) that their perceptual system can actually use.

    I haven't said anything particularly controversial yet. Your argument ends with "But it is to say that she is learning whatever is to be learned by the sighted when they see red. The information acquired about red may enter sensory systems without giving rise to conscious experience."

    That was why I brought up the monkey. He may have all of the exact same information as when I look at a picture of a car, but he has no idea what it is like to see a car. Information is not sufficient. The formatting of the information could be extremely important. You haven't given evidence to the contrary, and indeed the explanation of the apparent gap doesn't seem to do the encoding issue justice.

    When I said you were using straw men, I probably miswrote. I should have said that nobody believes that merely getting the information into the brain is sufficient, so they also won't buy your explanation for the appearance of the gap. These bandwidth arguments won't work if you just throw out quantitative values without discussing how the information is encoded, and if you don't address the receiver/decoder of that encoded information (you implicitly assume it is a PFC conceptual system, but that is not obviously a good move).

    I always want these explanations of the gap to work, but so far nobody has really done it for me. Perhaps you are relying on the view that conceptual content exhausts phenomenal content, and I never bought that one.

    ReplyDelete
  6. In your defense, it could be that you could address my 'formatting' issue, but also have your argument go through. That would be via the second possibility I mentioned above (not the ersatz knowledge argument, but the feedback pathway argument). Your claim "The information acquired about red may enter sensory systems without giving rise to conscious experience" suggests this is the case. In this case, the feedback pathways to early vision format things correctly for input to the WIL box.

    That's what I was getting at when I said "Mary's brain has been configured so that during occurrent 'knowing WIL to see red' her visual system is put in the right format for her knowing WIL box. For instance, the 'WIL to see red' box [via feedback] activates the "redness" centers in V1, and these hadn't been activated previously in Swamp Mary, and this offline imaginative activation gets things in the right format for her WIL box."

    In other words, this was meant to be more conciliatory, yet hold on to the (to me obviously reasonable and probably correct) question of formatting of information.

    However, this would imply that it would be possible for certain people to literally be incapable of knowing what it is like to see red, if their early visual system were never functioning properly.

    You seem to implicitly accept this when you say may enter sensory systems. Yes! And if their sensory system is completely fucked? Then perhaps they will never know WIL to see red. And perhaps if Mary has a fucked visual system, the transformation to swamp mary involves fixing those problems.

    ReplyDelete
  7. It still seems to me that this line of defense you are offering on behalf of the gappy physicalist begs the central question: is there an epistemic gap?

    The basic structure of my paper is like this: Part A presents the Swamp Mary and psychosemantic arguments for the conclusion that gappy physicalism is false. Part B, the stuff about bandwidth, is aimed to provide some sketch of how gappy physicalism can appear plausible without actually being true. But it's not the burden of Part B to show that gappy physicalism is false. That's part A's job.

    Your suggestions about formatting seem to me to be attacking Part B without adequately addressing Part A. It's question begging for the gappy physicalist to simply assert, at this point, that prerelease Mary is ignorant of something. And it doesn't strike me as any less question begging for the gappy physicalist to point out that most people regard the assumption that prerelease Mary is phenomenally ignorant as obvious and uncontroversial.

    Regarding your point about the insufficiency of information: I totally agree. Of course mere input information doesn't suffice for there being something it's like or for one's knowing what it is like. For what I think is sufficient, see my allocentric-egocentric interface stuff in, for instance, my Transcending Zombies paper. So, of course, I agree that there is something more. But given that Mary has a conceptual contribution that is physically omniscient, why wouldn't she be able to perfectly digest any information thrown her way? I don't see how formatting obstacles arise for her.

    Compare: You can tell me, in English, a description of all the furniture in your house, but a monolingual Chinese speaker won't learn anything from that even if they overhear it. All the information is there in the audio signal, but the Chinese person cannot extract it. Which is again to say, yes, information is insufficient. But now bring in Chinese Mary, who knows everything physical. She knows all the physical relations that the audio signals /char/ and /cowch/ enter into, etc. Why would she NOT thereby know, then, the meaning of your description and thus, that you have a chair, a couch, etc. in your house? You can put your description in any format you want. Swahili, zipped jpeg, whatever. It won't foil Mary.

    No matter how it's zipped, she will decrypt.

    ReplyDelete
  8. Pete: you are right about me mixing in objections to Part A in this response to Part B. Now that I've formulated my objection more clearly it will help me see where I think you went wrong in part A (or perhaps I will become more convinced by Part A).

    ReplyDelete
  9. "But given that Mary has a conceptual contribution that is physically omniscient, why wouldn't she be able to perfectly digest any information thrown her way? I don't see how formatting obstacles arise for her."

    This is clearly the crux. I think she could conceptually know what is going on when Joe is experiencing red, or even know what she would go through if she had a properly functioning visual system and saw red. But that is not the same as knowing WIL to see red. I do not think that is a conceptual achievement, something to be gained by learning more propositionally structured scientific knowledge. It is something you can gain only by having a sensory system that has some minimal functionality in the relevant modality.

    At any rate, you discuss related objections in your Part A and I will read them over more closely.

    The key thing I will look for is whether we can know what part of Swamp Mary was changed by the freak incident: it could be that her sensory system was made to work, and there is feedback from a more conceptual system to the sensory system required for knowledge of WIL. If you assume that it was merely a change in her "concept box" with no fixing of her perceptual system, I'll be able to pinpoint the problem I think (and I assume you think Swamp Mary doesn't need a sensory system change (despite the quote from you above) because you think pre-swamp Mary can know WIL).

    I am also Blue Devil Knight I'll stop deleting when I am accidentally signed in as that name (that's my alias at my chess blog).

    ReplyDelete
  10. One more thing: I assume you would disagree with this counterfactual reformulation of the experience requirement:

    So something like:
    "[F]or some experiences at least, and red experiences in particular, knowledge of what it’s like to have such an experience requires that the knower could have such an experience."

    This wouldn't change much for you (you would still reject this formulation I assume), but it may help me formulate my objection to Part A, as I think I am arguing that the above experience requirement is correct.

    ReplyDelete
  11. Counterfactual Mary
    OK, I've read over your previous bits more closely. I think the nomological view has some traction (and this is what I used in some sense in my last comment).

    Use my modified experience requirement:
    "For some experiences, knowledge of what it’s like to have such an experience requires that the knower could have such an experience."

    So, one change that was made in the transition from Mary-->Swamp Mary was that Mary's visual system was partially fixed. Now she can see color if shown some red apple or whatever.

    Just as I argued above, when in an occurrent state of knowing what it is like to see red, her "knowing what it is like" box can activate these repaired perceptual bits, which is constitutive of such occurrent episodes.

    Because pre-release Mary has a messed up visual system, she couldn't know what it is like, because knowing what it is like essentially involves such feedback to a properly functioning visual system.

    Now, this is probably a species of the nomological psychosemantics of the swamp mistress, so let me address that.

    On your argument
    Frankly, I don't completely see how your account with electrons and such applies to my account in terms of abilities to see red, but here's how I would attack your bit on nomological psychosemantics.

    You said: "A person who has phenomenal knowledge is nomically related, so says Nomological, to red quale, and Mary, being physically omniscient, is nomically related to every physical state of a person who has phenomenal knowledge."

    Sure, Mary can be nomologically related to a red quale (a blind person can be too), but that is clearly not sufficient for knowing what it's like to see red. I could set it up so my doorbell goes off every time someone has a red quale, but that doesn't mean it knows what it is like to see red. The problems here are the same I mentioned above with the bandwidth stuff.

    So it isn't enough to be nomologically related to red quale. I think a better way to put it is that you must be able to have a red quale (even if you have never had one) to know what it is like to have a red quale.

    ReplyDelete
  12. Oh please please tell me you don't need Wikipedia to tell you that 1000 * 1000 = 1000000.

    ReplyDelete
  13. One thing that I'm not quite following here: Are you assuming that prerelease Mary needs surgery before she sees red? That's one version of the classic thought experiment: her restriction to non-red is due to neural impairment. But another version is that, as a matter of fact, she just had not yet been exposed to red reflectors or emitters. On that version, I don't think your stuff about counterfactuals applies: prerelease Mary *can* see red, she just hadn't yet.

    I'll have more comments a bit later today.

    ReplyDelete
  14. Good point. For grayscale-raised Mary, I guess I would have to say she can know what it is like to see red because she has a normal visual system.

    However, I'm not sure I like that conclusion, though I'm not sure why and will need to think about it.

    My first thought is that it could be that the feedback I've speculated is necessary actually requires a baptism with seeing red. That is, the ability to activate the red offline and know what it is like requires feedback, but that feedback skill doesn't emerge except with some experience with red things (with an exception for Swamp Mary, for whom we've built in the feedback loops). Like a neural net trained with backprop over time versus a neural net trained by hand setting the weights.

    I'm just pushing around in possibility space here and frankly don't know what I really think. I guess I'm backtracking a bit and pushing for the experience requirement in your original form instead of my counterfactual form.

    This question may seem unrelated, but could there be a swamp monkey? Would a monkey ever have an occurrent knowing what it's like to see red in the absence of a red stimulus? This is getting at the degree to which knowing what it is like (when not in direct perception) is a cognitive property.

    OK I need to give all this a day to think about it.

    ReplyDelete
  15. (I purposely ignored the developmental question of the possibility of grayscale Mary--I'm not sure it is irrelevant, so I thought I'd bring it up in this footnote.)

    ReplyDelete
  16. Hi Pete, Eric,

    What do you think of the view that John Haugeland defends in “Representational Genera,” the view which says that what distinguishes schemes of representation (e.g., logical, iconic, distributed) are the contents represented? If Haugeland is right, then perhaps we can think of Mary’s situation in the following way: our color vision system employs a scheme of representation whose contents are proprietary to that scheme. Pre-release Mary has never used that scheme, and so there are contents that Mary has never represented before. Mary’s omniscience prior to release consists in her correctly representing everything physical that can be represented by other representational formats, e.g., language. On the face of it, then, the Mary example would have no implications for physicalism, but rather only for our capacity to represent certain contents.

    ReplyDelete
  17. Martin: I think Pete's argument is that, despite the fact that we have these different representational schemes or whatever, there is not an epistemic gap (and of course everyone in the discussion agrees there is no ontological gap that follows from the different formats).

    This all reminds me of an argument that Ramachandran and Hirstein made a few years ago in their fun article "Three laws of qualia":
    "In our electric fish example, however, we are deliberately introducing a creature which is similar to us in every respect, except that it has one type of qualia that we lack. And the point is, even though your description of the fish is complete scientifically, it will always be missing something, namely the actual experience of electrical qualia. This seems to suggest that there is an epistemological barrier between us and the fish. What we have said so far isn’t new, except that we have come up with a thought experiment which very clearly states the problem of why qualia are thought to be essentially private. It also makes it clear that the problem of qualia is not necessarily a scientific problem, because your scientific description is complete. It’s just that the description is incomplete epistemologically because the experience of electric current is something you never will know.

    This is what philosophers have assumed for centuries, that there is a barrier which you simply cannot get across. But is this really true? We think not; it’s not as though there is this great vertical divide in nature between mind and matter, substance and spirit. We will argue that this barrier is only apparent, and that it arises due to language. In fact, this barrier is the same barrier that emerges when there is any translation. The language of nerve impulses (which neurons use to communicate among themselves) is one language; a spoken natural language such as English is a different language. The problem is that X can tell you about his qualia only by using an intermediate, spoken language (when he says, ‘Yes but there’s still the experience of red which you are missing’), and the experience itself is lost in the translation. You are just looking at a bunch of neurons and how they’re firing and how they’re responding when X says ‘red’, but what X is calling the subjective sensation of qualia is supposed to be private forever and ever. We would argue, however, that it’s only private so long as he uses spoken language as an intermediary. If you, the colour blind superscientist, avoid that and take a cable made of neurons from X’s area V4 (Zeki, 1993) and connect it directly to the same area in your brain, then perhaps you’ll see colour after all (recall that the higher-level visual processing structures are intact in your brain). The connection has to bypass your eyes, since you don’t have the right cone cells, and go straight to the neurons in your brain without an intermediate translation. When X says ‘red’, it doesn’t make any sense to you, because ‘red’ is a translation, and you don’t understand colour language, because you never had the relevant physiology and training which would allow you to understand it. But if you skip the translation and use a cable of neurons, so that the nerve impulses themselves go directly to the area, then perhaps you’ll say, ‘Oh my God, I see what you mean.’ The possibility of this demolishes the philosophers’ argument (Kripke, 1980; Searle, 1980; 1992) that there is a barrier which is insurmountable. Notice that the same point applies to any instruments I might use to detect activity in your brain—the instrument’s output is a sort of translation of the events it is actually detecting."

    ReplyDelete
  18. The argument from Rama and Hirstein is different from, but related to, Pete's arguments. Namely, by focusing on the format of the information, I believe they get at something important.

    Nobody wants to say that because information is in different formats there is no possible translation between formats that preserves information (that has to be false). Rather, the problem is that for the "qualia box" to use the information to produce the right qualia (and perhaps contribute to the knowing what it is like), the formatting matters.

    Trying to know what it is like to see red by studying mathematically or linguistically formatted public theories might, for all intents and purposes, be like trying to give a dedicated jpg reader an image in bitmap format: sure, of course we could make a device that translates between the two, but this device can't. Our qualia (and 'what it's like' box) might be like that.

    I'm still trying to figure out what I actually think here.

    ReplyDelete
  19. Hi Martin,

    It's been years since I last read Haugeland's great paper, so the following might be a bit off. There are two ways of applying Haugeland's account of differences in representational genus to the Mary case, and neither seems to support gappy physicalism.

    The first is to say that Mary is physically omniscient, but her omniscience is encoded in a linguistic format. What she learns upon release is encoded in connectionist format and is a kind of know how instead of knowing that. This is just the ability hypothesis, and is a version of non-gappy physicalism.

    The second is to say that Mary is not physically omniscient, that what is encoded in a linguistic format is a subset of all the facts, and what she learns and encodes in another representational genus is a representation of new, yet physical facts. This would be a sort of subjective physical facts hypothesis like I used to defend (in my 2001 paper on mental representation and subjectivity) and I believe is still defended by Robert Howell. But this still wouldn't be gappy physicalism.

    ReplyDelete
  20. Hi Pete,

    Perhaps part of the reason why the Mary debate seems intractable is that the space of possibilities is being limited in illegitimate ways. For example, one often hears that the options for knowledge are exhausted by the following: (a) knowledge-that (propositional knowledge); (b) know-how (abilities); (c) knowledge-by-acquaintance. But I wonder if this is true. For instance, suppose Churchland is right about the propositional attitudes. Must he then declare that all knowledge is either ability or acquaintance? Given his allegiance to Sellars, I don’t think Churchland is too keen on (c). With respect to (b), abilities are usually thought to be “non-cognitive” or “non-representational,” but Churchland is quite happy to say that the brain represents lots of stuff. But what could it represent, if not propositions?

    Before answering that, consider a prior question. Why is Churchland down on the propositional attitudes in the first place? The answer, it seems, is that he agrees with Fodor that a scientifically respectable account of the attitudes requires LOT; since the brain is not a LOT machine, however, the attitudes must go. OK, so why do we need LOT? Because we need to explain how mental states can come to have propositional contents, and the best (only?) way we know how this can be done is with linguistic schemes.

    Back to Mary. Let’s grant she has all sorts of propositional knowledge. Does she know everything physical? If all knowledge is propositional knowledge (putting aside abilities and acquaintance), her knowledge of the physical is exhaustive. But now why think that all knowledge is propositional? If we take Haugeland’s proposal seriously, then perhaps we should say that she lacks knowledge, but does not lack knowledge-that. But what’s missing isn’t ability, nor is it acquaintance. It is something representational.

    Is this gappy physicalism? Maybe not, as you say. But maybe it is something close, something that captures the spirit of the idea that Mary “knows everything physical” but is still importantly ignorant.

    A final point. One might be tempted to say that, since, e.g., a picture can be encoded in a jpeg file, differences in representational format are irrelevant. After all, the picture can be recovered from the file, so the file must contain all the relevant information. But encoding information is not the same thing as representing, and the jpeg file does not represent what the picture does. Compare: I send you a set of instructions in English for assigning RGB values to locations in an image. The resulting picture is of Barack Obama. The instructions I sent you do not represent Obama; they represent RGB values and locations. The instructions do, however, encode the picture. So maybe Mary can linguistically encode other kinds of representations without those encodings themselves having the contents of the representations they encode.

    OK, enough rambling. I’ve enjoyed your recent series of posts and the discussions that have ensued!

    ReplyDelete
  21. Thanks for the additional thoughts and kind words, Martin.

    Regarding the suggestion that there can be an encoding of x that fails to be a representation of x, I don't find the Obama example very compelling (but then again, I'm not moved to think Mary is in any sense importantly ignorant). The jpeg file, considered apart from anyone's smarts or decoding capabilities, represents nothing at all, not even RGB values. If I have no equipment to decode it or smarts to understand the decoding, it might as well be random beeps and boops. However, if I'm sufficiently intelligent and educated (like, you know, physically omniscient), then, like the way the Cypher guy in the Matrix can look at the green scrolling alphanumerics and see the blonde in the red dress, I can just look at a string of numbers and see that this is a picture of Obama.

    ReplyDelete
  22. A side question: do people that become blind in adulthood because of trauma to V1 or retinae believe they know what it is like to see red? Has any research been done on this?

    Pete said:
    However, if I'm sufficiently intelligent and educated (like, you know, physically omniscient), then, like the way the Cypher guy in the Matrix can look at the green scrolling alphanumerics and see the blonde in the red dress, I can just look at a string of numbers and see that this is a picture of Obama.

    Stephen Hawking could know everything possible about motor control, but that won't let him walk. If KWIL (knowing what it is like) is more like walking than knowing that 2x2=4, Mary won't know what it is like to see red no matter how much new information she gets.

    I'm not saying KWIL is know-how, and I like a lot of what Martin has said above. What would we call the ability to parse a sentence (e.g., whatever is disrupted in Wernicke's aphasia), or to produce language (e.g., whatever is disrupted in Broca's aphasia)? Is that know-how? Know-that? It seems an impoverished way to classify neurofunctional systems.

    No matter how hard Stephen Hawking thinks, he ain't gonna walk, because the proper interface isn't there between his central system and his motor control machinery. (Just so we don't get hung up on motor control, we could make similar arguments about disorders like simultanagnosia, which people could know everything about but not fix).

    Let me argue for a revised historical account, not of the ability to KWIL to see red, but of the formation of the proper synaptic connections to KWIL to see red. I think it gets around your objection to the exemplification and causal theories of KWIL:

    You said:
    Since Swamp Mary need not have any actual past, her phenomenal knowledge cannot be grounded in causal relations to past qualia occurrences.

    I think I have a way around this.

    First, assume that synaptic connections of a certain type between the visual system and the KWIL system are required to KWIL to see red. Second, assume that these connections are not innate, but in normal subjects they emerge via experiences seeing red, and ultimately KWIL is achieved via this training period.

    Of course, Swamp Mary has those connections gifted to her from above, and knows what it is like to see red because she has the right connections between her KWIL box and her visual system. This is fine, but in this scenario we wouldn't necessarily expect Mary to KWIL to see red.

    By analogy, say we spend two years training a rat to discriminate tones, so it has undergone perceptual learning and can literally discriminate more tones than normal rats. Sure, there could be a Swamp Rat that could discriminate tones the same way. However, take a normal rat without that history of training on tone discrimination: it will not discriminate those tones, and it cannot without training on actual tones.

    So I am claiming that based on the above assumptions (hypotheses), Mary will not, but Swamp Mary will, know what it is like to see red. I'm proposing a curtailed experience requirement (i.e., what is required is the wiring up of the brain that typically takes place during experiences seeing red, but of course you could rewire the brain via surgery or Swamp).

    To kill this argument, you would have to argue that Mary could rewire her synaptic weights without the red experience. However, this is not necessarily plausible (just as someone with simultanagnosia can't arbitrarily reconfigure his neural network to make himself normal, or Hawking can't make himself walk).

    ReplyDelete
  23. Hi Eric,

    Thanks for continuing to plug away at this stuff. The considerations you raise are interesting to think about.

    Here are some comments on your argument.

    Re: Assumption #1, “that synaptic connections of a certain type between the visual system and the KWIL system are required to KWIL to see red,” I am all in favor of identifying knowledge of what it’s like with sets of synaptic connections (this accounts for the requisite abeyance), but I don’t see why the KWIL system, whatever that is, needs also to be wired to the visual system. (And the answer had better not be to assert the Experience Requirement, which would be question begging.) If the KWIL system merits being called the “Knowing What It’s Like system”, then why don’t connections wholly internal to it suffice for knowing what it’s like? But this probably is not a super big issue. What will be a really big issue is how you get from your assumptions to your conclusion. More on this shortly.

    Re: Assumption #2, “that these connections are not innate, but in normal subjects they emerge via experiences seeing red, and ultimately KWIL is achieved via this training period.” That’s totally cool with me. Ditto for: “Of course, Swamp Mary has those connections gifted to her from above”

    What I don’t get, and still needs to be explained, is why prerelease Mary’s physical omniscience, which includes knowledge all about those connections, won’t suffice for her to satisfy the psychosemantic criteria on knowing what it’s like, while the connections in Swampy will.

    What you offer to get us from the assumptions to the conclusion is an argument by analogy. But it strikes me that the analogy is inappropriate. You compare a Swamp Rat who’s able to make certain discriminations to a normal but as-yet-untrained rat unable to make the discriminations. The reliance on a Normal rat is where I think the analogy goes astray. What you should have done is bring in a physically omniscient Super Rat. But now the analogy, once fixed to be properly analogous to a comparison between Swampy and prerelease Mary, doesn’t obviously get you your conclusion. Being physically omniscient, for any two physically distinct stimuli, Super Rat knows that they are distinct. Unlike Normal, who fails all sorts of discrimination tasks, Super Rat passes them all. Similarly for Mary. For any physically distinct stimuli, she can discriminate them.

    Now, you might insist that she’s not able to discriminate them in the same way that Swampy is able to, and further, you might insist, such an ability is constitutive of knowing what it’s like. But then this is just the Lewis-Nemirow ability hypothesis, which doesn’t really concern me much here, since the ability hypothesis isn’t gappy physicalism. (You say that you aren’t reducing KWIL to a kind of know-how, but it still looks like you are pushing the ability hypothesis here.)

    Look, I am totally prepared to grant that normal people who haven’t seen red normally don’t know what it’s like. And, like I’ve said, this may be due to big differences in bandwidth between the various input systems in the normal human. But my enemy is the gappy physicalist, a philosopher making a very strong modal claim about what couldn’t possibly be known even by a physically omniscient being.

    BTW, your side question (“do people that become blind in adulthood because of trauma to V1 or retinae believe they know what it is like to see red?” ) is a very cool one. Unfortunately I don’t know what the answer is.

    ReplyDelete
  24. I will have to chew on this, thanks for the response.

    ReplyDelete
  25. A few points:
    1. The important part of the analogy with rat perceptual learning was comparing the two ways that synaptic connections are formed. If KWIL consists in having a brain wired a certain way (e.g., feedback loops or whatever), and it is not possible to rewire oneself this way merely by knowing a bunch of stuff, then Mary can't KWIL (just as Hawking can't rewire himself to walk, or an aphasic rewire himself to talk).

    2. Given 1, you address my concerns when you say:
    "[Y]ou might insist, such an ability is constitutive of knowing what it’s like."

    This is pretty good, though I wouldn't word it this way. My main premise was that KWIL consists in these feedback connections (or whatever) between the visual system and a KWIL box (obviously speaking loosely). If you don't have the hardware, then you don't KWIL, and the only way normal people get this hardware right is via experiences seeing red things (though we can fake it with Swamp Mary).

    3. From the previous point, you conclude:
    "But then this is just the Lewis-Nemirow ability hypothesis, which doesn’t really concern me much here, since the ability hypothesis isn’t gappy physicalism. (You say that you aren’t reducing KWIL to a kind of know-how, but it still looks like you are pushing the ability hypothesis here.)"

    I don't know much about the ability hypothesis other than the 'Mary acquires some know-how' slogan I heard in grad school. I don't want to hop on that train just yet.

    Let's say I'm right, that experiences are required (in normal humans) to tune the network as we learn what it is like to see red. What if we were to redescribe this fine-tuning of the brain's hardware by saying normal people gain the concept of 'this experience of red' via this fine-tuning process? Indeed, before anyone knew any neuroscience, these were probably pretty common concepts: concepts about what it is like to feel hungry, to have a pain in the toe, to see a pretty sunset. When I was a boy of ten or so, the 'birds and the bees' book my parents gave me said that the experience of an orgasm feels sort of like the feeling right before you sneeze (a pretty good analogy, I think). I knew no neuroscience at the time, but I was able to understand this experiential theory of the orgasm. :)

    Poor Mary, on the other hand, has the concept 'State induced by stimulus blah blah, inducing neuronal state blah blah'. Of course the concepts are coextensional, but learning the mapping between coextensional concepts, or coextensional sets of concepts (in the case of thermodynamics and statistical mechanics) can be a major cognitive achievement. Could it be that this is what Mary goes through upon exiting the black and white world? This is precisely the closing of the epistemic gap between the world of experience and the world of neurons.

    ReplyDelete
  26. Holy crap I found the book I was talking about on Amazon. It is called Where did I come from?, and on page 21 is the passage I was talking about. On the orgasm, they say:
    "[I]t's not easy to tell you what this feels like. But you know how it is when you have a tickle in your nose for a long time, and then you have a really big sneeze? It's a little like that."

    The fact that the analogy in this passage stuck with me for over 20 years is pretty amazing.

    I'm guessing I was younger than ten when I read this. Weird nostalgia reading that book, it seems so silly now but cute.

    ReplyDelete
  27. Aww, dude! Your last paragraph in your second to last comment is endorsing a version of the phenomenal concepts strategy. Now you have to go back to the first post in the series and read it all over again.

    But seriously, if you don't feel like doing that, consider this.

    Suppose that as matters of fact, (1) there's a monolingual speaker of Pig Latin, call him Piggy, (2) Mary's first language is English, not Pig Latin, and (3) Mary cannot get herself into the same physical state as Piggy.

    It is not at all obvious that 1&2&3 alone suffice to entail...

    (4) In some relevant sense of the word 'know' there is something that Piggy knows that Mary cannot.

    Something else needs to be added to 1&2&3 in order to derive 4. Why? Because words like "edray" are just physical thingies (we do get to assume physicalism here), and whatever relates them to the rest of the world so that they mean the same as "red", well, Mary knows about that too. So, on the face of it, it looks like whatever Piggy knows and expresses by saying, for example, "edray ookslay oremay ikelay urpleay oremay anthay ellowyay", or whatever, Mary, just by being physically omniscient, would know too, since she knows what his words mean (meanings being physical) and what makes his words true (truth makers being physical as well).

    You seem especially attracted to theses along the line of 3, but what is it about 3-ish theses that is *relevant* to questions along the lines of 4?

    ReplyDelete
  28. Yes you are right I realized as I was driving home I was espousing some sort of phenomenal concepts strategy, and I'll have to think about it some more. I'm least confident about the stuff in my number 3, more comfortable in the stuff in nums 1 and 2.

    ReplyDelete
  29. Pete,

    I like the Cypher example, so let's stick with it. You write:

    "However, if I'm sufficiently intelligent and educated (like, you know, physically omniscient), then, like the way the Cypher guy in the Matrix can look at the green scrolling alphanumerics and see the blonde in the red dress, I can just look at a string of numbers and see that this is a picture of Obama."

    My first thought: yes and no. Having the code by itself isn't going to get you the experience of seeing the woman. Neither is all your intelligence. Now, if you are the kind of person who can visually represent the woman in red, then it is of course possible to "translate" the code into a visual experience. But this isn't like translating French into German (or whatever), for precisely the reason Haugeland gives: French and German belong to the same genus, whereas visual experiences and natural languages may not. The translation here removes the proprietary contents of the code/language and replaces them with different kinds of contents. Of course, you might be able to figure out that the code codes for a visual experience, and that the experience is an experience of a woman wearing red (I want to say, "read extensionally"), without yourself visually representing anything. But knowing this is not the same as visually representing the woman wearing red. Compare: I can know that the folded up piece of paper in your glove box is a map of Chicago without thereby representing Chicago map-wise.

    I think Eric is swimming in the same (perhaps shark infested) waters on this point.

    ReplyDelete
  30. Ok, Martin, fair enough. I think that your description of the Cypher example is likely better than mine. Cypher, despite his intelligence, may not really count as *seeing* the woman. It was a mistake on my part to say otherwise. The mistake is most regrettable for the way that it detracts from what I take to really matter for the "Swamp Mary" arguments. Even if Cypher doesn't count as seeing the woman, the relevant question is, is there anything he, utilizing the alphanumerics or whatever, *must* fail to represent/know about, even though he may be physically omniscient?

    I take it that you want to argue for something like the Experience Requirement based on examples concerning encodings that aren't representings, or knowings of x that aren't representations x-wise. So far, I must admit that I haven't found the examples you've supplied compelling. (The map-in-the-glove-box example doesn't really do anything for me: I don't have any intuitions about how best to apply the technical phrase "representing Chicago map-wise".)

    As far as bringing Haugeland's RG paper into it, I'm not sure this helps the case you seem to be offering the gappy (-ish?) physicalist. What Haugeland does to distinguish between genera is to invoke relative ease of "witless" transformation. Iconic and linguistic genera, for example, are distinct because of the relatively large amount of wit required to effect a suitable transformation from the one to the other.

    I don't mind granting that, for normal people, there are multiple representational schemes serving as proper parts of their cognitive economies for which enormous quantities of wit would have to be piled on to translate the one to the other. But my concerns are to address claims that gappy physicalists have made about an allegedly unbridgeable divide between ways of knowing/representing. The Haugeland program allows (as far as I remember) that there could be amounts of wit that would effect a translation between species of distinct genera. I still don't see how a Haugelandish argument would entail that physical omniscience is an insufficient amount of wit.

    ReplyDelete
  31. Suppose that as matters of fact, (1) there's a monolingual speaker of Pig Latin, call him Piggy, (2) Mary's first language is English, not Pig Latin, and (3) Mary cannot get herself into the same physical state as Piggy.

    It is not at all obvious that 1&2&3 alone suffice to entail...

    (4) In some relevant sense of the word 'know' there is something that Piggy knows that Mary cannot.


    Something else needs to be added to 1&2&3 in order to derive 4.

    [...]


    You seem especially attracted to theses along the line of 3, but what is it about 3-ish theses that is *relevant* to questions along the lines of 4?


    This seems a decent enough analogy. Assume a subject's being in state P is partly constitutive of the meaning (including intensional contents) of his expression E. Then, assume Mary can't get into state P. In such a case, Mary will have trouble understanding and generating expressions with the same meaning as E. She'll likely get the extension right, but not all of the meaning.

    Here's the dialogue (sans Pig Latin):
    Normal: I saw a beautiful red sunset yesterday, just a deep shade of crimson.

    Mary: Ah, yes, your brain responded thusly to stimulus X,Y,Z.

    Normal: Urr, yeah that indeed happened, but that's not what I am talking about. I'm talking about an experience of a red sunset. Our two descriptions may be coextensive, but your translation leaves out the experience, which is what I was referring to. I guess you can't have these experience concepts without having the experience yourself, or at least without someone wiring your brain as if you had had these experiences. Eric laid it all out quite clearly above.

    Mary: yes, I guess he did. I guess it isn't strange that my concepts about color wouldn't have all of the exact same meanings as your concepts about color, since mine don't have the intensional aspects added by the actual experience of color. Isn't it weird that so many philosophers think such a thing is irrelevant? I want to go have an operation so I can better understand what you folks are talking about when you refer to your experiences.

    ReplyDelete
  32. Note there is a lot Mary does know. She knows that so-and-so's experience of red is identical to physical state X-Y-Z (a state which she lacks). She should also know that she doesn't know what it is like to see red, that her brain hasn't gone through the correct tuning period yet. I can't see this bothering her.

    So, under this story there is still a lot Mary can know about color and color experience. Everything, basically, except those aspects that require you to have the experience yourself.

    I realize this is basically a phenomenal concepts strategy, but that seems right.

    ReplyDelete
  33. Right here is where we are either disagreeing or just talking right past each other.:

    Assume a subject's being in state P is partly constitutive of the meaning (including intensional contents) of his expression E. Then, assume Mary can't get into state P.

    That second assumption, that Mary cannot get into a state constitutive of such-and-such meaning, is not an assumption that I will grant. Further, I've offered arguments that such an assumption is false. As best I can tell what meaning/content might be for a physicalist who also grants the possibility of Swamp Mary (Nomological, Descriptive-isomorphism), there is no relevant content that Mary's states can't have. It's question begging for you to simply assume otherwise.

    ReplyDelete
  34. Hi Pete,

    I think if you take another look at RG, you will find that Haugeland is arguing for a rather radical thesis: the different genera represent different kinds of contents, so there are no content-preserving translations from one genus to another. What wit gets you is the capacity to “say” something accurate in one scheme based on what is “said” accurately in another. For example, consider a photo “of” Barack Obama. You and I look at the photo and we know on its basis that, e.g., Barack Obama is wearing a red tie. But according to Haugeland, the photo does not represent Barack Obama wearing a red tie. Thus, it would be a mistake to think that the photo and the sentence “Barack Obama is wearing a red tie” share content, or that in “translating” the photo into words we are preserving content. Rather, we can come to know that Barack Obama is wearing a red tie on the basis of the photo because we have background knowledge of the sorts of circumstances that typically bring about photos like the one we are looking at (e.g., they come about by Barack Obama wearing a red tie). Strictly speaking, however, the photo does not “say” this.

    Here is another way to arrive at a similar point. Again, recall why Fodor thinks that the systematicity and productivity of the attitudes is an argument for LOT: if thoughts literally have logical forms that mirror propositional structure, then systematicity and productivity are not mysteries. A nice case of IBE, according to Fodor. But what is good for the goose is good for the gander, so the argument seems to suggest that anything that is in the representation business but lacks logical form (e.g., maps, pictures, scale models) does not have propositional content.

    I actually think this is pretty compelling, except for one thing: Smolensky (and others) have shown how to do tensor-product encodings of logical form, encodings that themselves do not appear to have logical forms. Furthermore, and this is the kicker, Smolensky proved that connectionist networks employing such encodings can be productive and systematic (up to any arbitrarily specified complexity). But then it is not at all clear what logical form is really doing, in terms of representation, which cannot be done with TP encodings. So maybe pre-release Mary can encode everything that is represented in different formats without loss of content!
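    As a miniature illustration of the idea (only a toy sketch, not Smolensky's actual construction; all names here are made up): role vectors and filler vectors can be bound by outer products and summed into one composite, and with orthonormal roles each filler is exactly recoverable by unbinding, even though the composite itself doesn't wear its logical form on its sleeve.

    ```python
    # Toy role-filler binding via tensor (outer) products, in plain Python.
    # Roles are orthonormal, so each filler can be unbound exactly. This is
    # only a sketch of the idea behind tensor-product representations.

    def outer(u, v):
        """Outer product of two vectors, as a nested list (matrix)."""
        return [[ui * vj for vj in v] for ui in u]

    def mat_add(a, b):
        """Elementwise sum of two matrices."""
        return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

    def unbind(t, role):
        """Multiply the tensor by a role vector; with orthonormal roles
        this recovers the filler bound to that role."""
        return [sum(t[i][j] * role[i] for i in range(len(role)))
                for j in range(len(t[0]))]

    # Orthonormal role vectors for "subject" and "object".
    subj, obj = [1.0, 0.0], [0.0, 1.0]
    # Arbitrary filler vectors standing in for "John" and "Mary".
    john, mary = [1.0, 2.0], [3.0, 4.0]

    # Encode the structure subj:John + obj:Mary as a single tensor.
    sentence = mat_add(outer(subj, john), outer(obj, mary))

    # Unbinding by role recovers each filler exactly.
    assert unbind(sentence, subj) == john
    assert unbind(sentence, obj) == mary
    ```

    Swapping the fillers across roles yields a distinct, equally decodable tensor, which is the systematicity point in miniature: the encoding supports structure-sensitive operations without itself having sentence-like logical form.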

    I imagine you know all this stuff already; but maybe it has applications to the case at hand that have not yet been made.

    ReplyDelete
  35. Pete: my arguments above made my case.

    KWIL requires interactions between certain brain regions, and since Mary hasn't developed these interactions (while Swamp Mary was given them from on high), she can't KWIL. This blocks your objection to causal theories, as I explicitly mentioned above and argued fairly extensively.

    I'm starting to think this story isn't just a possibility argument against your claims, but is probably true.

    ReplyDelete