Wednesday, May 25, 2011

Useful Obviously False Beliefs

"You've got to believe!" That's the advice professional magician Mark Mitton gave at his recent talk in the CUNY Cognitive Science Symposium. Here's the idea: in order for an illusion to be believable to the audience, the magician has to believe it too. This is, of course, puzzling for all of the sorts of reasons familiar to philosophers debating doxastic voluntarism and self-deception. How do you make yourself believe something that you know you don't?

I've been thinking about something similar to self-deception since Mark's remark. There's a kind of advice given in all sorts of areas of human performance, music and sports to name two, that involves believing (or pretending?) something obviously absurd but that is efficacious nonetheless. Here's a quick list:

  • Vocal coaches tell their singers stuff like "breathe out of your back" or "quit singing out of the top of your head." Or so I hear.
  • In martial arts, if you are going to punch someone really hard in the solar plexus, it helps to aim for their back, as if your fist can just bore right through.
  • In ball sports (you know, sports with balls?) there's this stuff about "following through," which I guess is something like throwing the ball after it's already been thrown. I dunno, I hate sports.
  • Acting? Maybe this is like the magic thing? I dunno, I'm not an actor (but I've played one on TV).

Anyway, hammer-heads are hereby invited to multiply examples and submit pointers to work done in the relevant areas. That would be awesome. You'd better believe it!

14 comments:

  1. Here's perhaps a fitting example: theories of intelligence. Carol Dweck has been promoting this distinction between entity theorists (people who believe that one's level of intelligence is fixed) and those who adopt a growth mindset (intelligence is malleable and, um, embiggenable). Growth-mindset folks seem to learn faster and perform better (among other positive things). It's not at all clear what intelligence even is, but it seems that believing you can get smarter over time with work actually translates into getting smarter over time with work. Believing instead that your intelligence is fixed keeps you at more limited levels of learning and performance (even if you've been told you're really smart).

    And, yes, always follow through when doing ball-related activities.

  2. Very interesting, Roblin. I hadn't heard about that.

    One thing I'm wondering about, and I don't think I was very clear about this in the post, is how much these sorts of effects-of-belief work even when the believer kinda, um, doesn't believe it.

    One thing that put me onto this topic was all that stuff about Gately in Infinite Jest. Dude's a full-on atheist and is trying to follow all those higher-power steps of AA. And it works.

    So I'm not just interested in efficacious beliefs that may very well be false, but efficacious beliefs that are known to be false by the believers. (It's hard to even describe this with a straight face if you've had too much philosophical training!)

  3. It's probably not 'obviously' false, but there was a story on 'This American Life' a few years ago called 'Windfall': http://www.thisamericanlife.org/radio-archives/episode/113/windfall

    Act 4 was about someone who, in anticipation of a big accident settlement, cleaned up his life, but was then denied the settlement because the accident clearly had not caused him pain and suffering (or something like that). It would have been useful (i.e., he would have gotten the money) for him to believe he wasn't going to get the money, and his belief that he was going to get the money turned out not to be useful.
    I used to have students discuss it when I introduced pragmatism.

  4. Very cool, Peter. Thanks!

    Reminds me a bit of this:

    Sorensen, Roy. "A Cure for Incontinence!" Mind (October 1997): 743.

    http://artsci.wustl.edu/~rsorense/PAPERS/Cure.pdf

  5. I find it very difficult to think of good examples. That is, it's easy to think of beliefs I *suspect* are false yet the maintenance of which is extremely useful (free will, unified self, rationality, etc.). But beliefs I *know* to be false yet maintain for practical purposes are hard - though I like the examples offered so far.

    On the implications of this though: First, perhaps the paradoxical air can be cleared a bit with the System 1 / System 2 distinction. Seems like the cases suggested above could be explained like this: my fast, automatic, unconscious, innate System 1 belief-forming systems produce useful beliefs I know (through slow, conscious, deliberate, culturally-enabled System 2 belief-checking systems) to be false. Often when this happens, I try to retrain myself in ways that eliminate these false though intuitive beliefs. But sometimes this leads to unwelcome consequences - the false, System 1 beliefs are so extraordinarily useful. In such cases, I don't retrain myself. That's what the magicians are doing.

    Another implication: I wonder what this says for teleofunctionalist theories of belief content. E.g., on Millikan's view, the truth conditions of a belief are normal conditions on the performance of its proper function. If they don't hold, the belief can't perform its proper function. But these cases seem to be straightforward counterexamples. Even though we know their truth conditions don't hold, they remain useful to our projects, and hence, presumably, biologically useful in the long run. So some beliefs' truth conditions aren't normal conditions on their proper functioning. This is a case I tried to make in print about mythological beliefs. I think a lot of people's "quasi-belief" in supernatural and magical possibilities (despite their sober, reflective denial of such possibilities) might fall into the class of which you write.

    Finally, have you all read McKay and Dennett's BBS piece on adaptive misbelief from last year (I have a commentary on it)? I think it touches on these issues. Gendler's Alief vs. Belief distinction might also be relevant, as might Dennett's opinion vs. belief distinction.

  6. Thanks, Tad. That's all really useful stuff!

    The two-systems line as you sketch it is very plausible. I'm not familiar with the "system 1, system 2" terminology, though. Where's that from?

    I remember arguing with you a million years ago about the teleofunctionalism stuff, but I confess that I've become much more sympathetic to your line.

    I'm not familiar with the McKay and Dennett article, but on your recommendation I've just downloaded it.

    It hadn't occurred to me to connect this to Gendler, but I think that is relevant now that you mention it.

    Gracias, amigo!

  7. Keith Stanovich's dual-systems model of rationality (an attempt to model apparent irrationality of the kind shown by Kahneman & Tversky (I think, or maybe Nisbett & Ross; I always get them confused)) is the locus classicus. He shows that people with adequate formal training can reflectively overcome automatic belief-fixation strategies that fail to pass formal muster. Carruthers makes a lot of it in The Architecture of the Mind. Now it's a fashionable way of explaining differences between animal/infant and adult human cognition in such domains as number and theory of mind (see Apperly & Butterfill 2009 for a two-system theory of the latter inspired by two-system theories of the former). Sorry for the poor syntax and sketchiness. On the 'droid.

  8. Thank you! I've heard about false beliefs being useful (that's what McKay & Dennett's piece is all about), but not about beliefs you know to be false yet maintain anyway being useful. I had no idea that this is key to the magician's craft. Dennett would probably be into this stuff - he often likens what the brain pulls off in consciousness to what magicians pull off with their audiences.

  9. I think it helps here to make a distinction between belief and pragmatic acceptance (Bratman, Cohen, Engel, Stalnaker, et al.). Both attitudes involve treating a content as true, but they differ in function. Belief is epistemically motivated, context independent, and involuntary, whereas pragmatic acceptance is pragmatically motivated, context dependent, and voluntary. A standard example is that of the lawyer who accepts that her client is innocent while believing that he is guilty. I've written quite a bit on this and see the distinction as important to explaining things like delusion and self-deception.

    That's a bit simplistic, actually. In fact, I think there's a three-way distinction between pragmatic acceptance and two kinds of belief: type 1 and type 2. In a nutshell, pragmatic acceptance and type 2 belief are subclasses of a broader mental attitude, acceptance-as-premise, which is conscious and personally controlled, and which contrasts with type 1 belief, which is non-conscious and dispositional.

    If that's right, then type 2 belief formation can be voluntary in a weak sense. One can decide to believe a content (as opposed to remaining on the fence), provided one has epistemic support for it. The idea is, roughly, that acceptance-as-premise is voluntary, and the presence of epistemic support renders a voluntary acceptance a (type 2) belief. In effect, evidence becomes an enabling condition for deciding to believe. There are particularly interesting cases where one thinks the belief to be formed will be self-fulfilling (e.g. believing that I can jump the gap will give me the strength to jump the gap). In such cases, I think it's possible to decide to believe without independent epistemic support.

    All this ties in closely with the dual-process/systems distinction Tad mentioned, which is also a hobbyhorse of mine. As well as Stanovich, you might look at Jonathan Evans's work (he's really the founder of dual-process theory in cognitive psychology). See his 1996 book with David Over *Rationality and Reasoning* (Psychology Press) and his recent *Thinking Twice* (2010, Oxford). For a collection of papers on dual-systems theory, see the book Jonathan and I edited: *In Two Minds: Dual Processes and Beyond* (Oxford 2009).

    If you're interested, I can send you refs on this stuff.

  10. Thanks, Keith, I would indeed be interested in receiving refs. I'm very grateful to hear about your own work on this.

    This thread is one of those things that makes me really glad to be blogging!

  11. Yes, great stuff. I'm going to chase down these references myself.

    This all made me think of the Seinfeld episode where Jerry wants to pass a lie detector test about watching (I think) Melrose Place. George advises him: "It's not a lie if you believe it." Another locus classicus (of sorts).

  12. Very cool stuff.

    Here's something in the ballpark (where one plays "ball sports," or so I'm told...):

    Placebos without Deception: A Randomized Controlled Trial in Irritable Bowel Syndrome
    ( http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0015591 )

    It's not quite there, as the subjects are told: “placebo pills made of an inert substance, like sugar pills, that have been shown in clinical studies to produce significant improvement in IBS symptoms through mind-body self-healing processes.”

    It would be cool to know if dualists showed more of an effect! Anywho, check it out.

  13. Way cool, Josh. Thanks. That does indeed seem in the ballpark.
