Comments on Brain Hammer: Bandwidth and Storage in the Human Biocomputer (blog by Pete Mandik, http://www.blogger.com/profile/10952230864825600992)

2009-06-19 22:46:

Pete: my arguments above made my case.

KWIL requires interactions between certain brain regions, and since Mary hasn't developed these interactions (while Swamp Mary was given them from on high), she can't KWIL. This blocks your objection to causal theories, as I explicitly mentioned above and argued fairly extensively.

I'm starting to think this story isn't just a possibility argument against your claims, but is probably true.

Eric Thomson (http://neurochannels.blogspot.com)

2009-06-19 21:39:

Hi Pete,

I think if you take another look at RG, you will find that Haugeland is arguing for a rather radical thesis: the different genera represent different kinds of contents, so there are no content-preserving translations from one genus to another. What wit gets you is the capacity to "say" something accurate in one scheme based on what is "said" accurately in another. For example, consider a photo "of" Barack Obama. You and I look at the photo and we know on its basis that, e.g., Barack Obama is wearing a red tie. But according to Haugeland, the photo does not represent Barack Obama wearing a red tie. Thus, it would be a mistake to think that the photo and the sentence "Barack Obama is wearing a red tie" share content, or that in "translating" the photo into words we are preserving content. Rather, we can come to know that Barack Obama is wearing a red tie on the basis of the photo because we have the background knowledge of the sorts of circumstances that typically bring about photos like the one we are looking at (e.g., they come about by Barack Obama wearing a red tie). Strictly speaking, however, the photo does not "say" this.

Here is another way to arrive at a similar point. Again recall why Fodor thinks that the systematicity and productivity of the attitudes is an argument for LOT: if thoughts literally have logical forms that mirror propositional structure, then systematicity and productivity are not mysteries. A nice case of IBE, according to Fodor. But what is good for the goose is good for the gander, so the argument seems to suggest that anything that is in the representation business but lacks logical form (e.g., maps, pictures, scale models) does not have propositional content.
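As a toy illustration of how propositional structure can be carried by a format with no explicit logical syntax, here is a minimal tensor-product (role/filler) sketch in Python; the role and filler vectors are invented for illustration:

```python
import numpy as np

# Toy tensor-product (role/filler) encoding: structure is stored as a sum
# of outer products, and the encoding itself displays no logical syntax.
# The role and filler vectors below are made up for illustration.
rng = np.random.default_rng(0)

roles = {"subject": np.array([1.0, 0.0]), "predicate": np.array([0.0, 1.0])}
fillers = {"Obama": rng.normal(size=4), "wears_red_tie": rng.normal(size=4)}

# Encode "Obama wears a red tie" as a single 2x4 tensor.
encoding = (np.outer(roles["subject"], fillers["Obama"])
            + np.outer(roles["predicate"], fillers["wears_red_tie"]))

# Because the role vectors are orthonormal, each filler can be recovered
# by contracting the encoding with the corresponding role vector.
recovered = roles["subject"] @ encoding
assert np.allclose(recovered, fillers["Obama"])
```

With orthonormal role vectors the fillers are recoverable exactly, so nothing need be lost even though the encoding itself has no logical form.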

I actually think this is pretty compelling, except for one thing: Smolensky (and others) have shown how to do tensor-product encodings of logical form, encodings that themselves do not appear to have logical forms. Furthermore, and this is the kicker, Smolensky proved that connectionist networks employing such encodings can be productive and systematic (up to any arbitrarily specified complexity). But then it is not at all clear what logical form is really doing, in terms of representation, that cannot be done with TP encodings. So maybe pre-release Mary can encode everything that is represented in different formats without loss of content!

I imagine you know all this stuff already, but maybe it has applications to the case at hand that have not yet been made.

Martin (https://www.blogger.com/profile/01897337125668319628)

2009-06-19 17:12:

Right here is where we are either disagreeing or just talking right past each other:

"Assume a subject's being in state P is partly constitutive of the meaning (including intensional contents) of his expression E. Then, assume Mary can't get into state P."

That second assumption, that Mary cannot get into a state constitutive of such-and-such meaning, is not an assumption that I will grant. Further, I've offered arguments that such an assumption is false. As best I can tell what meaning/content might be for a physicalist who also grants the possibility of Swamp Mary (Nomological, Descriptive-isomorphism), there is no relevant content that Mary's states can't have.
It's question begging for you to simply assume otherwise.

Pete Mandik (https://www.blogger.com/profile/10952230864825600992)

2009-06-19 16:39:

Note there is a lot Mary *does* know. She knows that so-and-so's experience of red is identical to physical state X-Y-Z (a state which she lacks). She should also know that she doesn't know what it is like to see red, that her brain hasn't gone through the correct tuning period yet. I can't see this bothering her.

So, under this story there is still a lot Mary can know about color and color experience. Everything, basically, except those aspects that require you to have the experience yourself.

I realize this is basically a phenomenal concepts strategy, but that seems right.

Eric Thomson

2009-06-19 16:23:

"Suppose that as matters of fact, (1) there's a monolingual speaker of Pig Latin, call him Piggy, (2) Mary's first language is English, not Pig Latin, and (3) Mary cannot get herself into the same physical state as Piggy.

It is not at all obvious that 1&2&3 alone suffice to entail...

(4) In some relevant sense of the word 'know' there is something that Piggy knows that Mary cannot.

Something else needs to be added to 1&2&3 in order to derive 4.

[...]

You seem especially attracted to theses along the line of 3, but what is it about 3-ish theses that is *relevant* to questions along the lines of 4?"

This seems a decent enough analogy.

Assume a subject's being in state P is partly constitutive of the meaning (including intensional contents) of his expression E. Then, assume Mary can't get into state P. In such a case, Mary will have trouble understanding and generating expressions with the same meaning as E. She'll likely get the extension right, but not all of the meaning.

Here's the dialogue (sans Pig Latin):

Normal: I saw a beautiful red sunset yesterday, just a deep shade of crimson.

Mary: Ah, yes, your brain responded thusly to stimulus X, Y, Z.

Normal: Urr, yeah, that indeed happened, but that's not what I am talking about. I'm talking about an experience of a red sunset. Our two descriptions may be coextensive, but your translation leaves out the experience, which is what I was referring to. I guess you can't have these experience concepts without having the experience yourself, or at least without someone wiring your brain as if you had had these experiences. Eric laid it all out quite clearly above.

Mary: Yes, I guess he did. I guess it isn't strange that my concepts about color wouldn't have all of the exact same meanings as your concepts about color, since mine don't have the intensional aspects added by the actual *experience of color*. Isn't it weird that so many philosophers think such a thing is irrelevant? I want to go have an operation so I can better understand what you folks are talking about when you refer to your experiences.

Eric Thomson

2009-06-18 21:02:

Ok, Martin, fair enough. I think that your description of the Cypher example is likely better than mine. Cypher, despite his intelligence, may not really count as *seeing* the woman. It was a mistake on my part to say otherwise.
The mistake is most regrettable for the way that it detracts from what I take to really matter for the "Swamp Mary" arguments. Even if Cypher doesn't count as seeing the woman, the relevant question is: is there anything he, utilizing the alphanumerics or whatever, *must* fail to represent/know about, even though he may be physically omniscient?

I take it that you want to argue for something like the Experience Requirement based on examples concerning encodings that aren't representings, or knowings of x that aren't representations x-wise. So far, I must admit that I haven't found the examples you've supplied compelling. (The map in the glove box example doesn't really do anything for me: I don't have any intuitions about how best to apply the technical phrase "representing Chicago map-wise".)

As far as bringing Haugeland's RG paper into it, I'm not sure this helps the case you seem to be offering the gappy (-ish?) physicalist. What Haugeland does to distinguish between genera is to invoke relative ease of "witless" transformation. Iconic and linguistic genera, for example, are distinct for the relatively large amount of wit required to effect a suitable transformation from the one to the other.

I don't mind granting that, for normal people, there are multiple representational schemes serving as proper parts of their cognitive economies for which enormous quantities of wit would have to be piled on to translate the one to the other. But my concerns are to address claims that gappy physicalists have made about an allegedly unbridgeable divide between ways of knowing/representing. The Haugeland program allows (as far as I remember) that there could be amounts of wit that would effect a translation between species of distinct genera.
I still don't see how a Haugelandish argument would entail that physical omniscience is an insufficient amount of wit.

Pete Mandik

2009-06-18 20:21:

Pete,

I like the Cypher example, so let's stick with it. You write:

"However, if I'm sufficiently intelligent and educated (like, you know, physically omniscient), then, like the way the Cypher guy in the Matrix can look at the green scrolling alphanumerics and see the blonde in the red dress, I can just look at a string of numbers and see that this is a picture of Obama."

My first thought: yes and no. Having the code by itself isn't going to get you the experience of seeing the woman. Neither is all your intelligence. Now, if you are the kind of person who can visually represent the woman in red, then it is of course possible to "translate" the code into a visual experience. But this isn't like translating French into German (or whatever), for precisely the reason Haugeland gives: French and German belong to the same genus, whereas visual experiences and natural languages may not. The translation here removes the proprietary contents of the code/language and replaces them with different kinds of contents. Of course, you might be able to figure out that the code codes for a visual experience, and that the experience is an experience of a woman wearing red (I want to say, "read extensionally"), without yourself visually representing anything. But knowing this is not the same as visually representing the woman wearing red.
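The encode-versus-represent contrast here can be made concrete: a list of instructions assigning RGB values to pixel locations fixes a picture, even though no instruction mentions what the picture depicts. A minimal sketch, with an instruction format invented for illustration:

```python
# A list of (x, y, rgb) instructions "encodes" a picture: each instruction
# talks only about coordinates and colors, never about the picture's subject.
# The instruction format here is invented for illustration.

def render(instructions, width, height):
    # Start from a white canvas and apply each (x, y, rgb) assignment.
    image = [[(255, 255, 255) for _ in range(width)] for _ in range(height)]
    for x, y, rgb in instructions:
        image[y][x] = rgb
    return image

# Each instruction mentions only locations and RGB values.
instructions = [(0, 0, (255, 0, 0)), (1, 0, (255, 0, 0)), (1, 1, (0, 0, 0))]
picture = render(instructions, width=2, height=2)
assert picture[0][0] == (255, 0, 0)       # the rendered picture has red pixels
assert picture[1][0] == (255, 255, 255)   # untouched pixels stay white
```

The instructions determine the picture, but only the rendered output depicts anything.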
Compare: I can know that the folded-up piece of paper in your glove box is a map of Chicago without thereby representing Chicago map-wise.

I think Eric is swimming in the same (perhaps shark-infested) waters on this point.

Martin

2009-06-18 20:09:

Yes, you are right. I realized as I was driving home that I was espousing some sort of phenomenal concepts strategy, and I'll have to think about it some more. I'm least confident about the stuff in my number 3, more comfortable in the stuff in nums 1 and 2.

Eric Thomson

2009-06-18 18:42:

Aww, dude! Your last paragraph in your second-to-last comment is endorsing a version of the phenomenal concepts strategy. Now you have to go back to the first post in the series and read it all over again.

But seriously, if you don't feel like doing that, consider this.

Suppose that as matters of fact, (1) there's a monolingual speaker of Pig Latin, call him Piggy, (2) Mary's first language is English, not Pig Latin, and (3) Mary cannot get herself into the same physical state as Piggy.

It is not at all obvious that 1&2&3 alone suffice to entail...

(4) In some relevant sense of the word 'know' there is something that Piggy knows that Mary cannot.

Something else needs to be added to 1&2&3 in order to derive 4. Why? Because words like "edray" are just physical thingies (we do get to assume physicalism here) and whatever relates them to the rest of the world so that they mean the same as "red", well, Mary knows about that too.
So, on the face of it, it looks like Mary, just by being physically omniscient, would know whatever Piggy knows and expresses by saying, for example, "edray ookslay oremay ikelay urpleay oremay anthay ellowyay", or whatever, since she knows what his words mean (meanings being physical) and what makes his words true (truth-makers being physical as well).

You seem especially attracted to theses along the line of 3, but what is it about 3-ish theses that is *relevant* to questions along the lines of 4?

Pete Mandik

2009-06-18 18:02:

Holy crap, I found the book I was talking about on Amazon. It is called "Where did I come from?" (http://www.amazon.com/reader/0818402539?%5Fencoding=UTF8&token=LPAROsDcEVMa29mcWAjZrxh7GK3ZPLu%20zHVZMtzhccXT0NqPJu%20ztQ%3D%3D&query=sneeze&page=28), and on page 21 is the passage I was talking about. On the orgasm, they say:

"[I]t's not easy to tell you what this feels like. But you know how it is when you have a tickle in your nose for a long time, and then you have a really big sneeze? It's a little like that."

The fact that the analogy in this passage stuck with me for over 20 years is pretty amazing.

I'm guessing I was younger than ten when I read this. Weird nostalgia reading that book; it seems so silly now, but cute.

Eric Thomson

2009-06-18 17:54:

A few points:

1. The important part of the analogy with rat perceptual learning was comparing the two ways that synaptic connections are formed. If KWIL consists in having a brain wired a certain way (e.g., feedback loops or whatever), and it is not possible to rewire oneself this way merely by knowing a bunch of stuff, then Mary can't KWIL (just as Hawking can't rewire himself to walk or an aphasic rewire himself to talk).

2. Given 1, you address my concerns when you say:

"[Y]ou might insist, such an ability is constitutive of knowing what it's like."

This is pretty good, though I wouldn't word it this way. My main premise was that KWIL consists in these feedback connections (or whatever) between the visual system and a KWIL box (obviously speaking loosely). If you don't have the hardware, then you don't KWIL, and the only way normal people get this hardware right is via experiences seeing red things (though we can fake it with Swamp Mary).

3. From the previous point, you conclude:

"But then this is just the Lewis-Nemirow ability hypothesis, which doesn't really concern me much here, since the ability hypothesis isn't gappy physicalism. (You say that you aren't reducing KWIL to a kind of know-how, but it still looks like you are pushing the ability hypothesis here.)"

I don't know much about the ability hypothesis other than the 'Mary acquires some know-how' slogan I heard in grad school. I don't want to hop on that train just yet.

Let's say I'm right, that experiences are required (in normal humans) to tune the network as we learn what it is like to see red. What if we were to redescribe this fine-tuning of the brain's hardware by saying that normal people gain the concept of 'this experience of red' via this fine-tuning process?
Indeed, before anyone knew any neuroscience, these were probably pretty common concepts (concepts about what it is like to feel hungry, to have a pain in the toe, to see a pretty sunset). Indeed, when I was a boy of ten or so, the 'birds and the bees' book my parents gave me said that the experience of an orgasm feels sort of like the feeling right before you sneeze (a pretty good analogy, I think). I knew no neuroscience at the time, but I was able to understand this experiential theory of the orgasm. :)

Poor Mary, on the other hand, has the concept 'state induced by stimulus blah blah, inducing neuronal state blah blah'. Of course the concepts are coextensional, but learning the mapping between coextensional concepts, or coextensional sets of concepts (as in the case of thermodynamics and statistical mechanics), can be a major cognitive achievement. Could it be that this is what Mary goes through upon exiting the black-and-white world? This is precisely the closing of the epistemic gap between the world of experience and the world of neurons.

Eric Thomson

2009-06-18 15:55:

I will have to chew on this, thanks for the response.

Eric Thomson

2009-06-18 15:25:

Hi Eric,

Thanks for continuing to plug away at this stuff. The considerations you raise are interesting to think about.

Here are some comments on your argument.

Re: Assumption #1, "that synaptic connections of a certain type between the visual system and the KWIL system are required to KWIL to see red," I am all in favor of identifying knowledge of what it's like with sets of synaptic connections (this accounts for the requisite abeyance), but I don't see why the KWIL system, whatever that is, needs also to be wired to the visual system. (And the answer better not be to assert the Experience Requirement, which would be question begging.) If the KWIL system merits being called the "Knowing What It's Like system", then why don't connections wholly internal to it suffice for knowing what it's like? But this probably is not a super big issue. What will be a really big issue is how you get from your assumptions to your conclusion. More on this shortly.

Re: Assumption #2, "that these connections are not innate, but in normal subjects they emerge via experiences seeing red, and ultimately KWIL is achieved via this training period." That's totally cool with me. Ditto for: "Of course, Swamp Mary has those connections gifted to her from above."

What I don't get, and still needs to be explained, is why prerelease Mary's physical omniscience, which includes knowledge all about those connections, won't suffice for her to satisfy the psychosemantic criteria on knowing what it's like, while the connections in Swampy will.

What you offer to get us from the assumptions to the conclusion is an argument by analogy. But it strikes me that the analogy is inappropriate. You compare a Swamp Rat who's able to make certain discriminations to a normal yet untrained rat unable to make the discriminations. The reliance on a Normal rat is where I think the analogy goes astray.
What you should have done is bring in a physically omniscient Super Rat. But now the analogy, which is fixed to be properly analogous to a comparison between Swampy and prerelease Mary, doesn't obviously get you your conclusion. Being physically omniscient, for any two physically distinct stimuli, Super Rat knows that they are distinct. Unlike Normal, who fails all sorts of discrimination tasks, Super Rat passes them all. Similarly for Mary. For any physically distinct stimuli, she can discriminate them.

Now, you might insist that she's not able to discriminate them in the same way that Swampy is able to, and further, you might insist, such an ability is constitutive of knowing what it's like. But then this is just the Lewis-Nemirow ability hypothesis, which doesn't really concern me much here, since the ability hypothesis isn't gappy physicalism. (You say that you aren't reducing KWIL to a kind of know-how, but it still looks like you are pushing the ability hypothesis here.)

Look, I am totally prepared to grant that normal people who haven't seen red normally don't know what it's like. And, like I've said, this may be due to big differences between bandwidth in the various input systems in the normal human. But my enemy is the gappy physicalist, a philosopher making a very strong modal claim about what couldn't possibly be known even by a physically omniscient being.

BTW, your side question ("do people who become blind in adulthood because of trauma to V1 or retinae believe they know what it is like to see red?") is a very cool one.
Unfortunately I don’t know what the answer is.Pete Mandikhttps://www.blogger.com/profile/10952230864825600992noreply@blogger.comtag:blogger.com,1999:blog-3011259550074300139.post-71434068434792944502009-06-18T11:44:23.670-04:002009-06-18T11:44:23.670-04:00A side question: do people that become blind in ad...A side question: do people that become blind in adulthood because of trauma to V1 or retinae believe they know what it is like to see red? Has any research been done on this?<br /><br />Pete said:<br /><i>However, if I'm sufficently intelligent and educated (like, you know, physically omniscient), then, like the way the Cypher guy in the Matrix can look at the green scrolling alphanumerics and see the blonde in the red dress, I can just look at a string of numbers and see that this is a picture of Obama.</i><br /><br />Stephen Hawking could know everything possible about motor control, but that won't let him walk. If KWIL (knowing what it is like) is more like walking than knowing that 2x2=4, Mary won't know what it is like to see red no matter how much new information she gets. <br /><br />I'm not saying KWIL is know-how, and I like a lot what Martin has said above. What would we call the ability to parse a sentence (e.g., whatever is disrupted for Wernicke's aphasiacs), or produce language (e.g., whatever is disrupted in Broca's aphasia). Is that know-how? Know-that? It seems an impoverished way to classify neurofunctional systems. <br /><br />No matter how hard Stephen Hawking thinks, he ain't gonna walk, because the proper interface isn't there between his central system and his motor control machinery. (Just so we don't get hung up on motor control, we could make similar arguments about disorders like simultanagnosia, which people could know everything about but not fix).<br /><br />Let me argue for a revised historical account, not of the ability to KWIL to see red, but of the formation of the proper synaptic connections to KWIL to see red. 
I think it gets around your objection to the exemplification and causal theories of KWIL.

You said:

"Since Swamp Mary need not have any actual past, her phenomenal knowledge cannot be grounded in causal relations to past qualia occurrences."

I think I have a way around this.

First, assume that synaptic connections of a certain type between the visual system and the KWIL system are required to KWIL to see red. Second, assume that these connections are not innate, but in normal subjects they emerge via experiences seeing red, and ultimately KWIL is achieved via this training period.

Of course, Swamp Mary has those connections gifted to her from above, and knows what it is like to see red because she has the right connections between her KWIL box and her visual system. This is fine, but in this scenario we wouldn't necessarily expect *Mary* to KWIL to see red.

By analogy, say we spend two years training a rat to discriminate tones, so it has undergone perceptual learning and can literally discriminate more tones than normal rats. Sure, there could be a Swamp Rat that could discriminate tones the same way. However, take a normal rat without that history of training on tone discrimination, and it will not do so, and it cannot without training on actual tones.

So I am claiming that, based on the above assumptions (hypotheses), Mary will not, but Swamp Mary will, know what it is like to see red. I'm proposing a curtailed experience requirement (i.e., what is required is the wiring up of the brain that typically takes place during experiences seeing red, but of course you could rewire the brain via surgery or Swamp).

To kill this argument, you would have to argue that Mary could rewire her synaptic weights without the red experience.
However, this is not necessarily plausible (just as someone with simultanagnosia can't arbitrarily reconfigure his neural network to make himself normal, or Hawking can't make himself walk).

Eric Thomson

2009-06-17 18:52:

Thanks for the additional thoughts and kind words, Martin.

Regarding the suggestion that there can be an encoding of x that fails to be a representation of x, I don't find the Obama example very compelling (but then again, I'm not moved to think Mary is in any sense importantly ignorant). The jpeg file, considered apart from anyone's smarts or decoding capabilities, represents nothing at all, not even RGB values. If I have no equipment to decode it or smarts to understand the decoding, it might as well be random beeps and boops. However, if I'm sufficiently intelligent and educated (like, you know, physically omniscient), then, like the way the Cypher guy in the Matrix can look at the green scrolling alphanumerics and see the blonde in the red dress, I can just look at a string of numbers and see that this is a picture of Obama.

Pete Mandik

2009-06-17 10:25:

Hi Pete,

Perhaps part of the reason why the Mary debate seems intractable is that the space of possibilities is being limited in illegitimate ways. For example, one often hears that the options for knowledge are exhausted by the following: (a) knowledge-that (propositional knowledge); (b) know-how (abilities); (c) knowledge-by-acquaintance. But I wonder if this is true. For instance, suppose Churchland is right about the propositional attitudes. Must he then declare that all knowledge is either ability or acquaintance? Given his allegiance to Sellars, I don't think Churchland is too keen on (c). With respect to (b), abilities are usually thought to be "non-cognitive" or "non-representational," but Churchland is quite happy to say that the brain represents lots of stuff. But what could it represent, if not propositions?

Before answering that, consider a prior question. Why is Churchland down on the propositional attitudes in the first place? The answer, it seems, is that he agrees with Fodor that a scientifically respectable account of the attitudes requires LOT; since the brain is not a LOT machine, however, the attitudes must go. OK, so why do we need LOT? Because we need to explain how mental states can come to have propositional contents, and the best (only?) way we know how this can be done is with linguistic schemes.

Back to Mary. Let's grant she has all sorts of propositional knowledge. Does she know everything physical? If all knowledge is propositional knowledge (putting aside abilities and acquaintance), her knowledge of the physical is exhaustive. But now why think that all knowledge is propositional? If we take Haugeland's proposal seriously, then perhaps we should say that she lacks knowledge, but does not lack knowledge-that. But what's missing isn't ability, nor is it acquaintance. It is something representational.

Is this gappy physicalism?
Maybe not, as you say. But maybe it is something close, something that captures the spirit of the idea that Mary "knows everything physical" but is still importantly ignorant.

A final point. One might be tempted to say that, since, e.g., a picture can be encoded in a jpeg file, differences in representational format are irrelevant. After all, the picture can be recovered from the file, so the file must contain all the relevant information. But encoding information is not the same thing as representing, and the jpeg file does not represent what the picture does. Compare: I send you a set of instructions in English for assigning RGB values to locations in an image. The resulting picture is of Barack Obama. The instructions I sent you do not represent Obama; they represent RGB values and locations. The instructions do, however, encode the picture. So maybe Mary can linguistically encode other kinds of representations without those encodings themselves having the contents of the representations they encode.

OK, enough rambling. I've enjoyed your recent series of posts and the discussions that have ensued!

Martin

2009-06-17 07:04:

Hi Martin,

It's been years since I last read Haugeland's great paper, so the following might be a bit off. There are two ways of applying Haugeland's account of differences in representational genus to the Mary case, and neither seems to support gappy physicalism.

The first is to say that Mary is physically omniscient, but her omniscience is encoded in a linguistic format. What she learns upon release is encoded in connectionist format and is a kind of know-how instead of knowing-that. This is just the ability hypothesis, and is a version of non-gappy physicalism.

The second is to say that Mary is not physically omniscient, that what is encoded in a linguistic format is a subset of all the facts, and what she learns and encodes in another representational genus is a representation of new, yet physical, facts. This would be a sort of subjective physical facts hypothesis like the one I used to defend (in my 2001 paper on mental representation and subjectivity) and, I believe, still defended by Robert Howell. But this still wouldn't be gappy physicalism.

Pete Mandik

2009-06-15 13:42:

The argument from Rama and Hirstein is different from, but related to, Pete's arguments. Namely, by focusing on the format of the information, I believe they get at something important.

Nobody wants to say that, because information is in different formats, there is no possible translation between formats that preserves information (that has to be false).
Rather, the problem is that for the "qualia box" to use the information to produce the right qualia (and perhaps contribute to the knowing what it is like), the formatting matters.<br /><br />Trying to know what it is like to see red by studying mathematical or linguistically formatted public theories might for all intents and purposes be like trying to give a dedicated jpg reader an image in bitmap format: sure, of course we could make a device that translates between the two, but the dedicated reader itself can't. Our qualia (and 'what it's like' box) might be like that.<br /><br />I'm still trying to figure out what I actually think here.Eric Thomsonhttp://neurochannels.blogspot.comnoreply@blogger.comtag:blogger.com,1999:blog-3011259550074300139.post-14554562308471318232009-06-15T13:40:02.568-04:002009-06-15T13:40:02.568-04:00Martin: I think Pete's argument is that, despi...Martin: I think Pete's argument is that, despite the fact that we have these different representational schemes or whatever, there is not an epistemic gap (and of course everyone in the discussion agrees there is no ontological gap that follows from the different formats).<br /><br />This all reminds me of an argument that Ramachandran and Hirstein made a few years ago in their fun article "Three laws of qualia":<br />"In our electric fish example, however, we are deliberately introducing a creature which is similar to us in every respect, except that it has one type of qualia that we lack. And the point is, even though your description of the fish is complete scientifically, it will always be missing something, namely the actual experience of electrical qualia. This seems to suggest that there is an epistemological barrier between us and the fish. What we have said so far isn’t new, except that we have come up with a thought experiment which very clearly states the problem of why qualia are thought to be essentially private. 
It also makes it clear that the problem of qualia is not necessarily a scientific problem, because your scientific description is complete. It’s just that the description is incomplete epistemologically because the experience of electric current is something you never will know.<br /><br />This is what philosophers have assumed for centuries, that there is a barrier which you simply cannot get across. But is this really true? We think not; it’s not as though there is this great vertical divide in nature between mind and matter, substance and spirit. We will argue that this barrier is only apparent, and that it arises due to language. In fact, this barrier is the same barrier that emerges when there is any translation. The language of nerve impulses (which neurons use to communicate among themselves) is one language; a spoken natural language such as English is a different language. The problem is that X can tell you about his qualia only by using an intermediate, spoken language (when he says, ‘Yes but there’s still the experience of red which you are missing’), and the experience itself is lost in the translation. You are just looking at a bunch of neurons and how they’re firing and how they’re responding when X says ‘red’, but what X is calling the subjective sensation of qualia is supposed to be private forever and ever. We would argue, however, that it’s only private so long as he uses spoken language as an intermediary. If you, the colour blind superscientist, avoid that and take a cable made of neurons from X’s area V4 (Zeki, 1993) and connect it directly to the same area in your brain, then perhaps you’ll see colour after all (recall that the higher-level visual processing structures are intact in your brain). The connection has to bypass your eyes, since you don’t have the right cone cells, and go straight to the neurons in your brain without an intermediate translation. 
When X says ‘red’, it doesn’t make any sense to you, because ‘red’ is a translation, and you don’t understand colour language, because you never had the relevant physiology and training which would allow you to understand it. But if you skip the translation and use a cable of neurons, so that the nerve impulses themselves go directly to the area, then perhaps you’ll say, ‘Oh my God, I see what you mean.’ The possibility of this demolishes the philosophers’ argument (Kripke, 1980; Searle, 1980; 1992) that there is a barrier which is insurmountable. Notice that the same point applies to any instruments I might use to detect activity in your brain—the instrument’s output is a sort of translation of the events it is actually detecting."Eric Thomsonhttp://neurochannels.blogspot.comnoreply@blogger.comtag:blogger.com,1999:blog-3011259550074300139.post-40980625084151761162009-06-15T12:10:37.705-04:002009-06-15T12:10:37.705-04:00Hi Pete, Eric,
What do you think of the view that...Hi Pete, Eric,<br /><br />What do you think of the view that John Haugeland defends in “Representational Genera,” the view which says that what distinguishes schemes of representation (e.g., logical, iconic, distributed) are the contents represented? If Haugeland is right, then perhaps we can think of Mary’s situation in the following way: our color vision system employs a scheme of representation whose contents are proprietary to that scheme. Pre-release Mary has never used that scheme, and so there are contents that Mary has never represented before. Mary’s omniscience prior to release consists in her correctly representing everything physical that can be represented by other representational formats, e.g., language. On the face of it, then, the Mary example would have no implications for physicalism, but rather only for our capacity to represent certain contents.Martinhttps://www.blogger.com/profile/01897337125668319628noreply@blogger.comtag:blogger.com,1999:blog-3011259550074300139.post-82168710021534299282009-06-15T11:14:12.461-04:002009-06-15T11:14:12.461-04:00(I purposely ignored the developmental question of...(I purposely ignored the developmental question of the possibility of grayscale Mary--I'm not sure it is irrelevant, so I thought I'd bring it up in this footnote).Eric Thomsonhttp://neurochannels.blogspot.comnoreply@blogger.comtag:blogger.com,1999:blog-3011259550074300139.post-24735166930323654862009-06-15T09:57:21.039-04:002009-06-15T09:57:21.039-04:00Good point. For grayscale-raised Mary, I guess I w...Good point. For grayscale-raised Mary, I guess I would have to say she can know what it is like to see red because she has a normal visual system.<br /><br />However, I'm not sure I like that conclusion, though I'm not sure why and will need to think about it.<br /><br />My first thought is that it could be that the feedback I've speculated is necessary actually requires a baptism with seeing red. 
That is, the ability to activate the red offline and know what it is like requires feedback, but that feedback skill doesn't emerge except with some experience with red things (with an exception for Swamp Mary, for whom we've built in the feedback loops). It's like a neural net trained with backprop over time versus one trained by hand-setting the weights.<br /><br />I'm just pushing around in possibility space here and frankly don't know what I really think. I guess I'm backtracking a bit and pushing for the experience requirement in your original form instead of my counterfactual form.<br /><br />This question may seem unrelated, but could there be a swamp monkey? Would a monkey ever have an occurrent knowing what it's like to see red in the absence of a red stimulus? This is getting at the degree to which knowing what it is like (when not in direct perception) is a cognitive property.<br /><br />OK, I need to give all this a day to think about it.Eric Thomsonhttp://neurochannels.blogspot.comnoreply@blogger.comtag:blogger.com,1999:blog-3011259550074300139.post-10513252032293996942009-06-15T07:08:39.432-04:002009-06-15T07:08:39.432-04:00One thing that I'm not quite following here: A...One thing that I'm not quite following here: Are you assuming that prerelease Mary needs surgery before she sees red? That's one version of the classic thought experiment: her restriction to non-red is due to neural impairment. But another version is that, as a matter of fact, she just had not yet been exposed to red reflectors or emitters. 
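Eric's analogy above, a net trained with backprop versus one with hand-set weights, can be sketched in a toy one-weight model; the setup and numbers are entirely my own illustration, not anyone's considered view of the Swamp Mary case:

```python
# Two ways of arriving at the same weight:
#  - train(): gradient descent on squared error (the developmental route)
#  - direct assignment (the "from on high" route)

def predict(w, x):
    return w * x

def train(xs, ys, lr=0.1, steps=200):
    """Learn w by per-sample gradient descent on squared error."""
    w = 0.0
    for _ in range(steps):
        for x, y in zip(xs, ys):
            grad = 2 * (predict(w, x) - y) * x
            w -= lr * grad
    return w

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # target function: y = 2x
learned_w = train(xs, ys)                  # weight reached through "experience"
installed_w = 2.0                          # same weight set by hand

# Same dispositions now, despite the different histories:
assert abs(predict(learned_w, 5.0) - predict(installed_w, 5.0)) < 1e-6
```

Once the weights coincide, nothing in the networks' current behavior distinguishes the trained one from the hand-set one; the difference between them is purely historical.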
On that version, I don't think your stuff about counterfactuals applies: prerelease Mary *can* see red; she just hasn't yet.<br /><br />I'll have more comments a bit later today.Pete Mandiknoreply@blogger.comtag:blogger.com,1999:blog-3011259550074300139.post-12314836446171073222009-06-11T22:09:40.669-04:002009-06-11T22:09:40.669-04:00Oh please please tell me you don't need Wikipe...Oh please please tell me you don't need Wikipedia to tell you that 1000 * 1000 = 1000000.Eric Steinhartnoreply@blogger.comtag:blogger.com,1999:blog-3011259550074300139.post-42705234872059915552009-06-11T16:46:38.853-04:002009-06-11T16:46:38.853-04:00Counterfactual Mary
OK, I've read over your p...<b> Counterfactual Mary</b> <br />OK, I've read over your previous bits more closely. I think the nomological view has some traction (and this is what I used in some sense in my last comment).<br /><br />Use my modified experience requirement:<br />"For some experiences, knowledge of what it’s like to have such an experience requires that the knower could have such an experience."<br /><br />So, one change that was made in the transition from Mary-->Swamp Mary was that Mary's visual system was partially fixed. Now she <i>can</i> see color if shown some red apple or whatever. <br /><br />Just as I argued above, when in an occurrent state of knowing what it is like to see red, her "knowing what it is like" box can activate these repaired perceptual bits, which is constitutive of such occurrent episodes.<br /><br />Because pre-release Mary has a messed-up visual system, she couldn't know what it is like, because knowing what it is like essentially involves such feedback to a properly functioning visual system.<br /><br />Now, this is probably a species of the nomological psychosemantics of the swamp mistress, so let me address that.<br /><br /><b>On your argument</b><br />Frankly, I don't completely see how your account with electrons and such applies to my account in terms of abilities to see red, but here's how I would attack your bit on nomological psychosemantics.<br /><br />You said: "A person who has phenomenal knowledge is nomically related, so says Nomological, to red quale, and Mary, being physically omniscient, is nomically related to every physical state of a person who has phenomenal knowledge."<br /><br />Sure, Mary can be nomologically related to red quale (a blind person can be too), but that is clearly not sufficient for knowing what it's like to see a red quale. I could set it up so my doorbell goes off every time someone has a red quale, but that doesn't mean it knows what it is like to see red. 
The problems here are the same ones I mentioned above with the bandwidth stuff.<br /><br />So it isn't enough to be nomologically related to red quale. I think a better way to put it is that you must be able to have a red quale (even if you have never had one) to know what it is like to have a red quale.Eric Thomsonhttp://neurochannels.blogspot.comnoreply@blogger.com