In chapter 5 of his 1969 book, Content and Consciousness, Dennett sketches an account of how, without recourse to dualism, our introspective reports can be infallible and we can have “certainty about the contents of our own thoughts” (p. 100). At the heart of Dennett’s account is a functional sketch of the brain as an intentional system, especially as it enables persons to make verbal reports on occasions of sensory stimulation. Central to this functional/Intentional view is a distinction Dennett borrows from Putnam (1960) between the functional or logical states of a system and its physical states. As Dennett states the key idea:
“A particular machine T is in logical state A if, and only if, it performs what the machine table specifies for logical state A, regardless of the physical state it is in” (p. 102).
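The idea of a machine table can be made concrete with a small sketch. The following is an illustration of the general notion only, not a machine discussed by Dennett or Putnam, and all names in it are hypothetical. The point it displays is the one in the quotation: being in logical state A just is executing what the table’s row for A specifies, and nothing in the table mentions the machine’s physical realization.

```python
# Hypothetical machine table as a plain mapping (names are illustrative,
# not drawn from Dennett or Putnam). Each row gives, for a state and an
# input symbol, an output and a next state. The table is silent about
# physical realization: any physical system executing these rows counts
# as being in the corresponding logical state.
MACHINE_TABLE = {
    "A": {"0": ("print_x", "B"), "1": ("print_y", "A")},
    "B": {"0": ("halt", "B"), "1": ("print_x", "A")},
}

def step(state, symbol):
    """Perform what the machine table specifies for `state` on `symbol`."""
    output, next_state = MACHINE_TABLE[state][symbol]
    return output, next_state

# The machine is "in logical state A" if and only if it behaves per row "A":
output, next_state = step("A", "0")
```

On this picture, two physically very different devices running the same table are in the same logical state whenever they execute the same row, which is what licenses individuating the states functionally rather than physically.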
For both Dennett and Putnam, a significant upshot of such a notion of states is that T can be in A without itself ascertaining that it is in state A. Putnam argues, in a passage Dennett quotes (pp. 102-103): “Indeed,…suppose T could not be in state A without first ascertaining that it was in state A (by first passing through a sequence of other states). Clearly a vicious regress would be involved. And one ‘breaks’ the regress simply by noting that the machine, in ascertaining [anything] passes through its states—but it need not in any significant sense ‘ascertain’ that it is passing through them.”
“Suppose T ‘ascertained’ it was in state B; this could only mean that it behaved or operated as if it were in state B, and if T does this it is in state B. Possibly there has been a breakdown so that it should be in state A, but if it ‘ascertains’ that it is in state B (behaves as if it were in state B) it is in state B.
Now suppose the machine table contained the instruction: ‘Print: “I am in state A” when in state A.’ When the machine prints ‘I am in state A’ are we to say the machine ascertained it was in state A? The machine’s ‘verbal report’, as Putnam says, ‘issues directly from the state it “reports”; no “computation” or additional “evidence” is needed to arrive at the “answer”.’ The report issues directly from the state it reports in that the machine is in state A only if it reports it is in state A. If any sense is to be made of the question, ‘How does T know it is in state A?’, the only answer is degenerate: ‘by being in state A’.
‘Even if some accident causes the printing mechanism to print: “I am in state A” when the machine is not in state A, there was not a “miscomputation” (only, so to speak, a “verbal slip”).’ Putnam compares this situation to the human report ‘I am in pain’, and contrasts these to the reports ‘Vacuum tube 312 has failed’ and ‘I have a fever’. Human beings have some capacity for the monitoring of internal physical states such as fevers, and computers can have similar monitoring devices for their own physical states, but when either makes a report of such internal physical conditions, the question of how these are ascertained makes perfect sense, and can be answered by giving a succession of states through which the system passes in order to ascertain its physical condition. But when the state reported is a logical or functionally individuated state, the task of ascertaining, monitoring or examining drops out of the reporting process.
A Turing machine designed so that its output could be interpreted as reports of its logical states would be, like human introspectors, invulnerable to all but ‘verbal’ errors. It could not misidentify its logical states in its reports just because it does not have to identify its states at all.” (pp. 103-104)
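Putnam’s self-reporting machine can likewise be sketched, on the understanding that this is a hedged illustration with invented names, not a reconstruction of any machine in the texts. The table’s row for each state includes the instruction to print a ‘report’ of that very state, so the report issues directly from the state: no monitoring or ascertaining step intervenes between being in a state and reporting it, and the only possible failure is the ‘verbal slip’ of a faulty printer.

```python
# Hedged illustration of a self-reporting machine (all names invented).
# The row for each state contains the instruction to print "I am in
# state X" when in state X. The report issues directly from the state:
# no computation or additional evidence is consulted to arrive at it.
TABLE = {
    "A": {"report": "I am in state A", "next": "B"},
    "B": {"report": "I am in state B", "next": "A"},
}

def run(state, steps):
    """Run the machine, collecting the reports it prints along the way."""
    reports = []
    for _ in range(steps):
        row = TABLE[state]
        reports.append(row["report"])  # issued directly from the state
        state = row["next"]
    return reports
```

Asking how this machine ‘knows’ it is in state A gets only the degenerate answer from the quotation: by being in state A, since printing the report is part of what being in that logical state consists in.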