This seems as good an opportunity as any to say something about the current status of this line of thought. Of late, the productivity argument has come under two sorts of criticism that a cognitive scientist might find persuasive:
—The performance/competence argument. The claim that conceptual repertoires are typically productive requires not just an idealization to infinite cognitive capacity, but the kind of idealization that presupposes a memory/program distinction. This presupposition is, however, tendentious in the present polemical climate. No doubt, if your model for cognitive architecture is a Turing machine with a finite tape, it’s quite natural to equate the concepts that a mind could entertain with the ones that its program could enumerate assuming that the tape supply is extended arbitrarily. Because the Turing picture allows the size of the memory to vary while the program stays the same, it invites the idea that machines are individuated by their programs.
But this way of drawing a ‘performance/competence’ distinction seems considerably less natural if your model of cognitive architecture is (e.g.) a neural net. The natural model for ‘extending’ the memory of a network (and likewise, mutatis mutandis, for other finite automata) is to add new nodes. However, the idea of adding nodes to a network while preserving its identity is arguably dubious in a way that the idea of preserving the identity of a Turing machine while adding to its tape is arguably not.52 The problem is precisely that the memory/program distinction isn’t available for networks. A network is individuated by the totality of its nodes, and the nodes are individuated by the totality of their connections, direct and indirect, to one another.53 In consequence, ‘adding’ a node to a network changes the identity of all the other nodes, and hence the identity of the network itself. In this context, the idealization from a finite cognitive performance to a productive conceptual capacity may strike the theorist as begging precisely the architectural issues that he wants to stress.

52. If the criterion of machine individuation is I(nput)/O(utput) equivalence, then a finite-tape Turing machine is a finite automaton. This doesn’t, I think, show that the intuitions driving the discussion in the text are incoherent. Rather it shows (what’s anyhow independently plausible) that I/O equivalence isn’t what’s primarily at issue in discussions of cognitive architecture. (See Pylyshyn 1984.)
—The finite representation argument. If a finite creature has an infinite conceptual capacity, then, no doubt, the capacity must be finitely determined; that is, there must be a finite set of sufficient conditions, call it S, such that a creature has the capacity if S obtains. But it doesn’t follow by any argument I can think of that satisfying S depends on the creature’s representing the compositional structure of its conceptual repertoire; or even that the conceptual repertoire has a compositional structure. For all I know, for example, it may be that sufficient conditions for having an infinite conceptual capacity can be finitely specified in and only in the language of neurology, or of particle physics. And, presumably, notions like computational state and representation aren’t accessible in these vocabularies. It’s tempting to suppose that one has one’s conceptual capacities in virtue of some act of intellection that one has performed. And then, if the capacity is infinite, it’s hard to see what act of intellection that could be other than grasping the primitive basis of a system of representations; of Mentalese, in effect. But talk of grasping is tendentious in the present context. It’s in the nature of intentional explanations of intentional capacities that they have to run out sooner or later. It’s entirely plausible that explaining what determines one’s conceptual capacities (figuratively, explaining one’s mastery of Mentalese) is where they run out.
One needs to be sort of careful here. I’m not denying that Mentalese has a compositional semantics. In fact, I can’t actually think of any other way to explain its productivity, and writing blank checks on neurology (or particle physics) strikes me as unedifying. But I do think we should reject the following argument: ‘Mentalese must have a compositional semantics because mastering Mentalese requires grasping its compositional semantics.’ It isn’t obvious that mastering Mentalese requires grasping anything.