Jesse Prinz on concepts, part I

by Dan Tippens

This is Part I of an interview with Professor Jesse Prinz of the City University of New York. In this video, our assistant editor Dan Tippens asks Professor Prinz about his book “Furnishing the Mind: Concepts and Their Perceptual Basis.” First, Dan asks Prinz what the desiderata of a theory of concepts are. Prinz then discusses why he thinks that intentionality should be exorcised from the list of desiderata. The two then move on to discussing the leading extant theories of concepts, and end Part I with some conversation about amodal theories of concepts and how they differ from traditional and neo-empiricist theories. (Thanks to Luke Rodgers for his assistance with the editing of this video.)

[We apologize for the less-than-ideal quality of the audio. However, your experience will be significantly improved by the use of standard earphones.]

_____

Daniel Tippens is a research technician at New York University School of Medicine. He is also an assistant editor for the webzine Scientia Salon.

Jesse J. Prinz is a Distinguished Professor of philosophy and director of the Committee for Interdisciplinary Science Studies at the City University of New York, Graduate Center. He works primarily in the philosophy of psychology and ethics and has authored several books and over 100 articles, addressing such topics as emotion, moral psychology, aesthetics and consciousness. Much of his work in these areas has been a defense of empiricism against psychological nativism, and he situates his work as in the naturalistic tradition of philosophy associated with David Hume. Prinz is also an advocate of experimental philosophy.

47 thoughts on “Jesse Prinz on concepts, part I”

  1. Audio quality is just so terrible that I failed to understand most of what Prinz was saying. 😦 For future reference, I suggest that a proper microphone be used and placed in front of the speaker (or a bug on the shirt or something…).

    I can usually cope with strong US accent and fast pronunciation, but Prinz changes the volume of his voice several times per sentence (especially when ending it), so if the microphone doesn’t capture everything cleanly (and it doesn’t since it is not close enough) I basically lose several words per sentence. And it is not easy to interpolate them from the context, since the topic is not trivial and I cannot guess what he wants to say. Besides, I cannot concentrate on the meaning of his ideas if I have to concentrate on repeating his sentences in my head to fill in the blanks in what I heard.

    In contrast, Dan was louder and much clearer to understand, so I don’t think I have a technical problem with playback. I even tried downloading the interview from YouTube, in various formats, trying to find the one with best audio quality. But they all sound the same, so I think that the source recording was poor to begin with.

    I gave up trying after the first 15 minutes. Is there a written transcript of the interview available, or something to that effect?

    Liked by 2 people

  2. Hi Marko,

    Unfortunately there is no transcript. In the future better steps will be taken to ensure higher audio and video quality.

    We seem to have some people who have consistent audio problems with the video and others who don’t have much of a problem. I have found that wearing headphones makes the interview easier to understand.

    Liked by 1 person

  3. Audio quality is just so terrible that I failed to understand most of what Prinz was speaking

    Unfortunately I have the same problem. To me it sounds as though the room acoustics are poor which is accentuated by problems with the microphone quality/placement. Pity there is no transcript.

    I can usually cope with strong US accent and fast pronunciation

    You must be an avid consumer of Hollywood movies! Americans seldom appreciate the difficulties their accent causes for foreigners. I recommend compulsory elocution lessons in developing that lovely, crisp British accent. But anyone who has heard a South African accent will know this is the blackest of black pots calling the kettle black 🙂

    Like

  4. Hi Marko,

    This American had some problems with it too. If you can somehow Bluetooth to a stereo, turn down the treble and up the bass, that helps!

    I very much sympathize with Jesse Prinz for taking intentionality out of the equation, for example, and instead just relying upon a “behaviorist” perspective. How much failure can we bear (such as crowds of people clueless about a given intentionality discussion), before simply getting pragmatic? Therefore we would not try to identify how the frog perceives the fly, but rather just observe its behavior and work from that. Consider the possibility, however, that our mental and behavioral sciences today happen to be similarly primitive as physics was before the Newtonian revolution. Can’t you imagine people back then just saying, “Hey, whatever works!” Of course many things have indeed been figured out since then in physics. Might this happen here as well?

    I think Ned Block would say so (Dan’s last interview), since he believes that his “access consciousness” and “phenomenal consciousness” model happens to be solid. But of course his model has now been kicked around for a while without effectively doing much in a practical sense. I’m optimistic as well, having developed a similar but far more extensive model than Block’s. Of course few want to invest their time in the ideas of “a nobody,” so I haven’t gotten much consideration yet. (Furthermore my ideas themselves do seem to step on some noble toes!) I am a patient man however, especially given that I find Scientia Salon to be amazingly fun!

    Liked by 2 people

  5. It would be easy and cheap to get the audio processed by a pro prior to release. If it’s worth putting on YT, it must be worth making it sound better than this. Ten minutes to get the sound right, plus the time it takes to run it off; not an expensive job.

    Like

  6. All, I added a note to this effect within the body of the post: we apologize for the less than ideal quality of the audio; however, as Thomas pointed out, the use of simple earphones greatly improves the situation, and Jesse is worth listening to. Cheers!

    Liked by 1 person

  7. I listened to it once but really need to hear it again. My thought about how the frog perceives the fly as a dot, object, sound, etc. would be similar to how a human perceives money as coin, currency, a credit card, or his electronic bank statement. Most poor people, or a child, would think of currency or coin, but a person in the modern world handles only a small percentage of his money as coin or currency, and instead perceives the value of his property and debts.

    Talk about the older philosophers gives us the traditional mechanistic grounding of thought. I think of Leibniz’s mill analogy: the perception of something structural or mechanical is natural to anybody.

    Like

  8. Maybe I’m more attuned to Americans, but I could follow it ok. And it was a breath of fresh air. What a sensible chap! Since I’ve been reading SS I’ve thought many times that philosophers were inventing “pseudo problems” (to quote Prinz) by their wrong framing of a problem. The focus on intentionality is one (the syntax/semantics “problem” is another).

    A symptom of the wrong approach is the heavy emphasis on language. If we’re dealing with minds, then animals like chimpanzees and gorillas and dogs and dolphins do most of what our minds do, and yet they don’t use words (at least, if they do, their language is vastly less developed). Which tells us that linguistics should not be an important part of our “theory of mind”. Our language modules are a rather late-evolved add-on to the mind for the specific task of communicating with conspecifics. Language is thus a veneer to the mind, rather than being core to its function.

    A “theory of mind” should not be a “philosophical” issue, quite blatantly it is an engineering issue. [Why would the mind be more a “philosophical” issue than, say, the cardio-vascular system or the immune system?]

    My “theory of mind” is that brains are neural networks, coupled to sensory devices to receive information, and coupled to output devices such as muscles. Then they process information in the way that neural networks do, which is hard to understand because neural networks quickly get very complex as the number of nodes increases.

    But still, we understand the basics of neural networks, since we can build them, train them and play with them, and from there the whole issue is one of engineering and computation.
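    A minimal sketch of what “build them, train them and play with them” looks like at the smallest scale (all names and numbers here are illustrative, not anyone’s actual model): a single artificial neuron trained with the classic perceptron rule to compute logical AND.

```python
# A toy perceptron: one artificial "neuron" with two inputs and a bias,
# trained by the classic perceptron learning rule. Illustrative only.

def step(x):
    # threshold activation: fire (1) if the weighted sum is non-negative
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    # w[0], w[1] are input weights; w[2] is the bias term
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + w[2])
            err = target - out
            # nudge each weight in proportion to the error and its input
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

# training data for logical AND
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(samples)

def predict(x1, x2):
    return step(w[0] * x1 + w[1] * x2 + w[2])
```

    After training, `predict` reproduces the AND truth table; the point is only that even this trivial network is something we can construct, train, and inspect end to end.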

    [By the way, anyone suggesting that I’m failing to distinguish the mind from the brain is also inventing a pseudo-problem for themselves by wrong framing.]

    Like

  9. 1. Language is thus a veneer to the mind, rather than being core to its function.
    2. A “theory of mind” should not be a “philosophical” issue, quite blatantly it is an engineering issue.

    I think that 2 is (mostly) true (I would say “science and engineering”), but 1 is false.

    For human minds, the minds that can keep creating radically novel cultures, technologies, ideologies and such, language is core to their function. Chimp minds can’t do that. And language function will be in the core engineering of a human-level mind. The special architecture of the human brain allows it to be a “language engine” that can process recursive languages and create new languages.

    Liked by 1 person

  10. The audio is fine (with a pair of AKG K240 MKII headphones!). It’s the blue hair and the green thongs that’s a struggle.

    Like

  11. I also could understand Prinz OK, but I haven’t been able to take the time to listen to the entire interview yet.

    Carnap was saying that mind/body problems and the like were pseudo problems (a phrase I think he coined) about a century ago, so it is not a new idea introduced by Prinz.

    But I should point out to Coel that he is one of those who makes a sharp distinction between the mind and the brain because he holds that there could be an identical mind function without a biological brain being involved at all and that even a trivially simple mechanical device could produce the identical function.

    I think that it is too early to make any assumptions about the role of language, or at least we should avoid making any assumptions about what language is or that it is something separable from mind function in general.

    I am not a big fan of second guessing evidence that is not yet in. We will know that mind is an engineering issue when we have an engineering model of it. Currently neural networks are giving little insight into anything but the more or less trivial issues involved.

    At the moment we cannot build a machine to do what the mind does and there is no scientist or mathematician who yet has much of a clue about how we would go about doing so.

    As Hume pointed out, concepts have at their base feelings and we have no fruitful scientific theory at all of why there is something that things feel like. Neuroscientists are tying themselves in knots trying to account for us being a process that has knowledge about how it feels to be that process. (As I pointed out before, p-zombies, having been evicted from philosophy, are moving into neuroscience).

    Personally I think that it is much too early in the day to have anything so grand as a “Theory of the Mind”, it seems to me that we have a long journey ahead of us.

    Liked by 1 person

  12. Actually, brains and minds are neural networks, made of neural networks, etc. But we shouldn’t conflate minds with networks or computers any more than saying birds fly and so do airplanes. Airplanes can sit there for years, like Howard Hughes’s airplane the Spruce Goose. Of course pilots fly the extension of themselves which is an airplane.

    For any living thing, stimuli including sunlight, taste, smell, thunder and lightning all evoke feeling and emotion, or have some fundamental meanings. Language does count, because even a short word like ‘a’ has meaning and, more importantly, forms a concept, which I believe is central to philosophy as opposed to empirical investigation of neural networks; the networks are living beings, and our very selves are embedded in those networks.

    I’m a technical guy, but there is a difference between reading an appliance manual and reading Shakespeare, which evokes deeper meanings and emotions and more enlivens our living neural networks.

    Those larger lobes in the front of our brains not only give us nimble motor movement but also more nimbly parse our world and conscious environment by symbols, sounds and language. I think those lobes evolved from the more fundamental and emotional parts of our brains, which we share with other species.

    Movement and dance evoke meaning; the more complex motor networks are mirrored in the other environmental networks of language and symbols.

    I think we are on the verge of a breakthrough in total understanding, because what occurs in our neural networks is a process, an ‘eventfulness,’ which becomes feeling at a fundamental level, and senses, meaning, movement, etc. in the higher functions, which still connect to the fundamental level.

    I think the bigger insights are coming sooner than we think.

    Like

  13. In his popular exposition Relativity, Einstein has a discussion of the idea of time in science. Interestingly, although it differs from our naive idea of time, it is nevertheless defined in terms of our naive idea of time. Our observer language is our path to the objective nature of things.

    This seems to me the problem of arbitrarily jettisoning the idea of intentionality at the outset (and it may be that I have misunderstood Prinz).

    Am I to listen to this video on the premise that his words are not about anything? If so then you could have the best audio quality available and his words could be regarded as nothing more than disturbances in the air caused by my speakers.

    But then again his implied claim that his words are not about anything would themselves not be about anything.

    So, even if it turns out that we can eliminate intentionality in some final theory of concepts, it is not even coherent to do so at the outset.

    Another thing is (from his book) “all (human) concepts are copies of perceptual representations”.

    This causes some severe ‘chicken and egg’ problems. Concepts underlie the process of perception. In the dark I see a menacing figure – no it is just a coat stand. My mind used the ‘menacing figure’ concept to interpret that set of signals from the eyes and then substituted the more reasonable ‘coat stand’ concept as more data presented itself. It seems likely to me (and it may be that it is a BAHFest idea) that our brains give precedence to danger concepts because it gave survival advantage to our evolutionary ancestors. If I had mistaken a menacing figure for a coat stand rather than vice-versa then I would have not been able to avoid danger in time.

    But it seems clear to me that our perception mechanisms cannot do without concepts. Probably we are hard-wired with a starter set of concepts – babies interpret smiles at an early age.

    So it seems to me the other way around, we need concepts in order to perceive, not perceptions in order to have concepts. I would suggest that from our starter set of concepts we use information from our senses to build new concepts which improves our perception process. But I think the concept came before the chicken – I mean perception.

    But, again, I am just thinking out loud here.

    Like

  14. That language is not necessary for concepts is, prima facie, obvious to me, but on deeper analysis it seems to depend upon what is meant by language.

    Take the example of a three year old with a moderate intellectual disability so that he has no expressed language and precious little received language.

    He is playing Temple Run on his iPod and he has worked out how to get around obstacles and over obstacles but cannot work out how to get under them. He is highly frustrated because, as he has no expressed language, he cannot simply ask his brother “How do I get under obstacles?” (or whatever would be the usual three-year-old’s version of this question).

    So he goes to his brother and, still holding the iPod, holds his brother’s hand over the screen so that his brother plays the game and he watches. His brother goes under a couple of obstacles, and so he pushes his brother’s hand away and continues to play, now being able to pass any obstacle.

    At first I think that this settles the matter, clearly you don’t need language for concepts – he has just asked his brother to show him how to go under objects in his computer game. The concepts of ‘show me how’, ‘obstacles’, avoiding obstacles and ‘under’ are all there without language.

    But on the other hand he has just asked his brother to show him how to get under obstacles, so how can I say that he has no expressive language? He has no expressive verbal language, but clearly he has some kind of expressive language. And in what way would the nature of that transaction have been different if he had simply said “Show me how to get under obstacles in this game”?

    At the same age he could easily express the request to his father “Transfer the video I recorded on your phone onto my iPad” without any words. When his father is tardy in doing this he takes matters into his own hands and simply uses his iPad to film the screen of his father’s phone, nicely overloading the concept of ‘transfer’.

    So, again, it is premature to dispense with the idea of language as part of what a concept is, they may not be concepts that can be separated.

    Like

  15. Hi Robin,

    I really enjoyed your comment. I remember that prior to reading Prinz’ book, I saw a lecture he gave on the idea of leaving intentionality out of the desiderata for a theory of concepts. After watching this lecture, I was puzzled because I thought something like this:

    If we want to give a theory of concepts we need to say what concepts are. Concepts are representations of things (according to the dominant view held by most philosophers and scientists). Having intentionality is a necessary condition for being a representation. So, it seems plausible that we need to give an account of how concepts have intentionality in order to justifiably believe that concepts are representations (which we think they are).

    However, I can now at least understand why Prinz is trying to get rid of intentionality. As I was reading Jesse’s book, I noticed that he kept falsifying (or claiming inadequacy about) the psychological theories of concepts, such as prototype theory, exemplar theory, and theory theory, because they have trouble with a very strict intentionality requirement. The requirement was that concepts be able to refer very rigidly to all and only x. For example, he says in the interview, “It was believed that my concept of chairs needed to be able to refer to all and only chairs.” This is very strict. Additionally, it seems to be wrong. It leaves no room for an adequate theory of intentionality to have vague reference (which some of them do have), or to explain how we sometimes don’t actually share concepts perfectly with one another when we communicate (which we do), etc.

    So, Prinz thinks (for more reasons than I have mentioned above) that strict reference, which is what theories of intentionality were trying to explain, was just a pseudo problem in the sense that it appeared to be a problem, but really it wasn’t. If you relax the intentionality requirement, you realize that the leading psychological theories can easily be given satisfactory accounts of reference.

    You also mentioned the concern that neo-empiricism has things backward. You claimed that we use concepts to perceive, and so any theory that says we acquire concepts by perceiving has a problem with one desideratum known as the “acquisition” requirement. The acquisition requirement states that a good theory of concepts must explain how they are acquired. You argued that neo-empiricism fails to meet this requirement.

    However, it is important to note a few things. First of all, there is a distinction between what are known as “high-level properties” and “low-level properties” in perception. Suppose you and your pet dog are looking at a blue, circular object that is a blueberry. You have a concept of a blueberry, whereas your pet dog doesn’t. When you both look at the blue, circular object, you see it *as* a blueberry, whereas your dog just sees a blue, circular object. The blueness, circularity, and contrast which the dog perceives are low-level properties, and the property of *being a blueberry* which you perceive over and above the low-level properties is a high-level property. High-level properties are known as “conceptual contents” because they seem to be concepts deployed in perception.

    This distinction is important because we intuitively think that many creatures down the phylogenetic tree can have perceptual systems without having concepts. In other words, conceptual faculties don’t seem necessary for perception.

    There is a long standing debate about whether or not high-level properties are actually constitutive of perceptual experience, or if they are really post-perceptual cognitive representations being deployed. In the former case, these properties are represented within experience, and in the latter case they are represented in thought (cognition).

    If it turns out that high-level properties aren’t represented in perception, then it looks like concepts definitely aren’t necessary for perception. But even if they are represented in perception, it still doesn’t seem like concepts are necessary for perception (given that other creatures without conceptual faculties can enjoy perceptions).

    Now if you say, “well, isn’t it the case that low-level properties are concepts being deployed too?”
    This would be an equivocation with the kinds of concepts that Prinz has in mind. Prinz is concerned with concepts that are the building blocks of thought. Low-level properties are not the kind of representations that Prinz is calling “concepts” here. He is concerned with properties like “being a dog,” “being a blueberry,” “justice,” etc., not things like spatial frequency, contrast, and depth relations. Another way to think of this is that he is concerned with the concepts that are ordinarily deployed in human cognition, and since creatures without cognition like ours can still perceive, it looks like concepts aren’t necessary for perception.

    I should also mention that when Prinz says that concepts are couched in perceptual representations, he is using “perceptual representations” broadly to include more things than just the five senses. For example, for Jesse perceptual representations include motor representations (representations deployed to cause motor movements), proprioceptive representations, emotional representations, etc.

    Liked by 1 person

  16. Modvs1,

    Jesse and I just wanted to ruin your prototype concept of interviews which never includes green flip flops or blue hair 🙂

    Liked by 1 person

  17. One other concern I have, should you choose to include low-level representations as “concepts,” is that you run the risk of conflating concepts (building blocks of thought) with representations generally. For example, we have representations of oxygen levels in our blood to monitor our breathing, but these representations are intuitively not building blocks of thought; we can’t access these representations. The idea is that there are some representations which are deployed in perception (low-level properties) which aren’t deployed in cognition. So, in order to divide the mind up at its joints, we want to have a distinction between concepts (building blocks of thought) and representations (the things that all systems in the mind use).

    Like

  18. So much to say, only one comment left 🙂

    If computationalism is true, as Coel for example holds, then a theory of mind is quite obviously and blatantly not an engineering problem.

    That is because computation is substrate independent. If computationalism is true then all that I am capable of experiencing could be completely and faithfully replicated on a very simple mechanical device of very few parts. It could be replicated on a wide variety of substrates in innumerable different ways and would not even require that any of the physics of this Universe were the case. (Note to Philip Thrift who usually disputes this, I am saying this would be the case if computationalism is true).

    All that a mind, a mind identical to my own, would require is a substrate that is capable of implementing the logic of some universal machine. If computationalism is true then there would not even be any output requirement.

    If so then the engineering considerations would be quite trivial and beside the point, a theory of the mind would be something for pure mathematics. The question of finding out what sort of algorithm it is that can know about what it feels like to be that algorithm. All these theories of mind would have to be recast in this light.

    Me, I am skeptical (to put it mildly) that a theory of the mind is a question of pure mathematics, but that is what computationalism implies and that is just one of the reasons I reject computationalism.

    Incidentally, note that for any neural network there could be an algorithm that simulates it and provides the same function, and we are back to pure mathematics.
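    This point can be made concrete with a toy example (the weights below are hand-picked for illustration, not from any real model): a tiny two-layer step-unit network computing exclusive-or, and an ordinary function with no network in sight that provides exactly the same input-output behavior.

```python
# A fixed-weight, two-layer network of threshold units computing XOR,
# alongside a plain algorithm that realizes the same function.
# Hand-picked illustrative weights; not anyone's actual model.

def step(x):
    return 1 if x >= 0 else 0

def network_xor(x1, x2):
    # hidden unit 1 fires if at least one input is on (acts as OR)
    h1 = step(x1 + x2 - 0.5)
    # hidden unit 2 fires only if both inputs are on (acts as AND)
    h2 = step(x1 + x2 - 1.5)
    # output unit: OR but not AND, i.e. exclusive-or
    return step(h1 - 2 * h2 - 0.5)

def plain_xor(x1, x2):
    # the same function, stated with no network machinery at all
    return 1 if x1 != x2 else 0
```

    On every input the two functions agree, which is the substrate-independence point in miniature: the network’s contribution is exhausted by the function it computes.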

    But for this reason too, I am not really able to be confident, a priori, of what sort of theory a theory of the mind will be.

    Hi Dantip,

    Thanks for the reply, I have not left myself much space to comment. You say “equivocation”, I ask “what distinction are you making?”. As for conflating it with representation, I reject that. There is a difference between “X represents Y” and “X represents Y to me”. I cannot see how the second can be the case without a concept. I cannot make sense of having a perception without a concept – even if my visual field consisted of only a patch of light and a patch of dark then I would still need concepts to perceive this.

    If I cannot find any perception of mine that does not require a concept, then I cannot conclude that so close a cousin as a dog could have one either, different though its concepts might be.

    Liked by 2 people

  19. Thoughts so far:

    Coel, I think you’re quite right about researchers placing too much emphasis on “language” — in many regards anthropocentrism does seem common, as if what we more uniquely do means everything. Surely some basics (such as a useful consciousness model) would be helpful for us to develop better higher level models of ourselves. This is not to fault Philip Thrift’s observation regarding how important languages happen to be for us — I think Coel was just referencing something more basic than you were. I do hope he continues to build his neural network theory, given that I also have my own model. Perhaps we could compare and contrast them some day.

    Victorpanzica, I do appreciate your emphasis that neural networks are not inherently the same as (conscious) minds. So then what will a given neural network require in order for consciousness to transpire? From my own model conscious function will not occur without the punishment/reward incentive of qualia. (Therefore I believe that as this revelation plays out, qualia will come to found modern ethics.) I love your optimism that “…bigger insights are coming sooner than we think.”

    Robin Herbert, I like your pondering regarding the three year old with language deficiencies — we might even consider the extremity of “a feral child.” (https://en.m.wikipedia.org/wiki/Feral_child) I suspect that language is what permitted human advancement more than any other dynamic. It might have begun as “communication,” though today I see it as another form of thought which, for example, cats simply do not have.

    So Dantip, it sounds to me like you’re saying that Prinz is getting rid of some problems associated with intentionality assessments through less strict rules, not deleting the intentionality concept itself? Sounds good to me!

    As far as your dog/human scenario, I wonder if you’d say that a better subject to consider than a blueberry would be a dollar bill? I can see plenty of high level things that a dog couldn’t grasp about a dollar, such as the words on it, though the blueberry seems tame enough for each of us to ponder reasonably well. I believe that dogs do conceptualize things, and in the same essential manner that we do (though without our language, culture, added cognition, and such). Like us they use the conscious element of their minds to 1) interpret inputs (senses, sensations, and memories) as well as 2) construct scenarios for the purpose of promoting their happiness. I define these dynamics as two separate varieties of “thought” (somewhat like Ned Block’s model).

    Like

  20. Two technological perspectives:

    1. The substrate-independent view: Biology doesn’t matter. Maintains that “there could be an identical mind function without a biological brain being involved at all.”

    2. The substrate-dependent view: Biology matters. Takes into account a “new synthesis” — AI (artificial intelligence) + SB (synthetic biology) — and recognizes the distinction between a linguistic (language-to-language) and synthetic compiler.
    http://www.journals.elsevier.com/biosystems/call-for-papers/call-for-paper-special-issue-on-what-synthetic-biology-can-o/
    http://codicalist.wordpress.com/2015/06/30/a-new-epistemology-ontology-landscape/

    (I think 2 may be the right view, which means silicon-based neurosynaptic chip technology will fall short of the goal.)

    Like

  21. Philosopher Eric: “So then what will a given neural network require in order for consciousness to transpire?”

    I like to think of Leibniz’s analogy of the mill, except think of the mill with gears made of styrofoam as opposed to metal or hard wood. It is not really the gears that grind the wheat but the internal structure of the gears, which transmits the forces of nature. Similarly, when neural networks fire we are only seeing the gears turn, but there is some internalized action occurring across the cells causing the qualia. I would suspect the cells are metabolically unifying, or, as I have coined it, “Supercell Theory.” I think of the cells as biological molecules because they are structured and repeatable to form larger structures, in this case metaphysical functions, similar to how muscle cells unify to form an emergent function. I am talking about the more fundamental levels of the brain. The higher lobes perform some more complex translational functions on these fundamental functions, so there is a massive interaction between functions throughout all levels of the brain and nervous system.

    Like

  22. Okay, I’m going to bite Coel’s offering:

    “A “theory of mind” should not be a “philosophical” issue, quite blatantly it is an engineering issue”

    Not really. It is certainly *in part* an engineering issue, but one also has to decide what exactly one is trying to do, i.e., one needs conceptual clarification. For instance:

    “anyone suggesting that I’m failing to distinguish the mind from the brain is also inventing a pseudo-problem for themselves by wrong framing”

    Or not. For instance, I think of “mind” as a verb, not an object (“minding” as opposed to “the mind”), so that minding is something the brain does, in complex interaction with the external environment, as well as with the internal one (e.g., full body cognition).

    Thinking of it this way immediately makes the problem of mind not just an engineering one, but a sociological one as well, both layered with philosophical assumptions and analysis.

    Liked by 3 people

  23. Indeed, I would go as far as saying that not even the brain is a simple engineering problem. Being the result of evolutionary processes, not of conscious engineering, there is likely a lot of redundancy, mess, non functionality, historical leftovers, etc. So it’s a bio-molecular-evolutionary problem, but definitely not an engineering one.

    Liked by 4 people

  24. Two supplementary videos that might help clarify Prinz’s position: A very interesting discussion between Prinz and Edouard Machery that covers much the same ground as the interview (so far) but with emphasis on disagreements between the two interlocutors:

    – and a lecture Prinz gave attempting to bring Lockean notions into present day theories of representation/conceptualization:

    – I enjoyed the Prinz-Machery discussion; but I must admit the lecture on Locke convinced me that there is much in Prinz’s philosophy I can’t quite accept. I don’t believe that classical (Lockean) empiricism can be salvaged through neuroscience, especially as the studies he relies on remain inconclusive. (I’m not sure that resort to neuroscience resolves such philosophical issues, anyway.) Their inconclusive character may be the result of ungrounded assumptions and poorly phrased questions. (It should be remembered that Locke’s empiricism has been chipped away at even by more sophisticated empiricists, beginning with Hobbes, then Hume, after all.)

    The theories of (classical) representationalism and associationism seem to me pretty well shot and anachronistic at this point.

    I suggest a different avenue towards a resolution to some of the difficulties Prinz struggles with: Peirce, who originated Pragmatism partially to resolve such difficulties in classical empiricism, especially through semiotics.

    The failure of the Analytic tradition to account for and fully absorb Peirce and semiotics – with its implicit critiques of classical empiricism, and its potential resolutions of problems of representation, intentionality, and meaning – is, I think, a great embarrassment for the tradition of professional American philosophy. I am not suggesting that Peircean semiotics can resolve all problems; on the contrary, it is really just a beginning. But it is a beginning that has not been allowed to begin on its own terms. Consequently, it has had to find its development elsewhere, especially in cultural studies. But its original intent was as a propaedeutic to logic. (Peirce wasn't the first to think this – the idea, in rough form, was in currency in the Middle Ages, and surfaces in Hobbes.)

    At any rate, in listening to Prinz, I kept thinking, ‘well, but the chain of signification will produce the same result and still leave open further opportunities for concept formation as sign interconnection, without resort to base sensory dependence.’ Conceptualization is indeed a response – to signs.

    The movement of the fly signifies “food” to the frog. But humans do have a choice of whether they want to call the vinyl bag stuffed with foam pellets a “bean bag chair.” The pragmatic arbitrary quality of human language usage raises difficulties for a theory that insists on baseline composition of concepts from sensory input.

    Liked by 1 person

  25. Massimo raises an interesting prospect in saying the (human) brain is "a bio-molecular-evolutionary problem": a biocompiler for a (human-level) brain would in some way recapitulate its evolutionary history.

    Like

  26. Thanks for the extra videos EJ — I now see how talented a speaker Jesse Prinz happens to be. Though I do tend to be cautious of such people, given their potential to spin things however they like, I’ve detected no such trickery from him.

    (Though I shouldn’t fault a person for being well educated in both past and present theory, one might also observe that a vast history of talented people with such educations have not been able to take our mental/behavioral/philosophical fields very far yet. So might it be that a standard education itself tends to place a person on an extra difficult path? In an epistemic sense philosophers should naturally be open to this question… though the personal implications of finding merits to this could be quite unpalatable. Nevertheless I had this notion as a college kid, and so have approached these fields from a position of “noneducation.” As time goes by here it should become apparent whether or not my nonstandard approach has put me on a less challenging path to discovery.)

    The first of these videos was themed to make the theory of John Locke seem more relevant today, and I must say that I'm now somewhat of a fan. This really hit home for me when an audience member asked for Prinz's thoughts on what Locke (and perhaps Prinz himself) would make of the modern Chomskyan "modular" type of perspective, where we have all sorts of mental devices set up to deal with our various circumstances. Prinz was diplomatic regarding this standard perspective, mentioning that we must assess the evidence, though he was firmly against the notion, and presumed Locke would be as well.

    I concur, believing that consciousness evolved for the exact opposite reason: to deal with maverick circumstances. Otherwise the vast non-conscious mind should have been a sufficient tool, and we'd essentially just be "computers." Apparently the need for autonomy in diverse environments (perhaps even for fish) requires the general-purpose tool of consciousness to deal with associated contingencies. Thus perhaps a simple but effective model of the conscious mind would do wonders for these fields, but standard educations have made such a model too difficult to build. Hopefully my own consciousness model will thus become helpful.

    Liked by 1 person

  27. Hi Robin,

    Coel […] makes a sharp distinction between the mind and the brain because he holds that there could be an identical mind function without a biological brain being involved at all …

    An “engineering” perspective does indeed make a distinction between the function and the implementation. Thus a functionally-equivalent ladder could be made out of either wood or aluminium. I would indeed suggest that a mind/brain could, in principle, be implemented in something other than a biological substrate.

    … and that even a trivially simple mechanical device could produce the identical function.

    No, I didn't say that. An "identical function" to a hugely complex human brain would take a hugely complicated mechanical device, not a trivially simple one.

    If computationalism is true … then a theory of mind is quite obviously and blatantly not an engineering problem. That is because computation is substrate independent.

    Lots of engineering can be implemented using different substrates. For example the above ladder, which can be made of wood or of aluminium.

    If computationalism is true then all that I am capable of experiencing could be completely and faithfully replicated on a very simple mechanical device of very few parts.

    That's not true. Even if you tried replicating your behaviour on a simple Turing machine, you'd need very complex software, which would need a vast number of "parts". Software needs to have physical instantiation in order to exist.
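    The function-vs-implementation point being debated here can be made concrete with a toy sketch (purely illustrative, not anything from the discussion; all class names are made up for this example): the same function, doubling a number, realized on two very different "substrates".

```python
# Illustrative sketch only: the same abstract function implemented on two
# different "substrates". All names here are hypothetical.

from abc import ABC, abstractmethod


class Doubler(ABC):
    """The function: map n to 2*n. How it is realized is left open."""

    @abstractmethod
    def double(self, n: int) -> int: ...


class ArithmeticDoubler(Doubler):
    """Substrate 1: ordinary arithmetic."""

    def double(self, n: int) -> int:
        return n * 2


class TableDoubler(Doubler):
    """Substrate 2: a precomputed lookup table, a quite different mechanism."""

    def __init__(self, limit: int = 100):
        # Precompute answers for 0..limit-1 instead of computing on demand.
        self._table = {i: i + i for i in range(limit)}

    def double(self, n: int) -> int:
        return self._table[n]


# Functionally equivalent over their shared domain, despite entirely
# different implementations:
for n in range(100):
    assert ArithmeticDoubler().double(n) == TableDoubler().double(n)
```

    The two implementations are interchangeable wherever only the function matters, which is the sense in which a ladder's function is indifferent to wood versus aluminium. Nothing in the sketch settles whether minds are like this; it only illustrates the distinction.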

    Hi dantip,

    Concepts are representations of things (according to the dominant view held by most philosophers and scientists).

    That rather depends on what we mean by “things”. A concept is rather an abstract idea about things, rather than a “representation” of some “thing”.

    For example, an “inverse square law” is a concept, but it is not a “thing”, not a class of things, and not a representation of any “thing” or class of things.

    [I could be misunderstanding what “representation” means there, but if “representation” and “thing” are interpreted broadly enough to include an “inverse square law” then it makes those words rather meaningless, and I’m not sure that such language is that helpful.]

    Hi Massimo,

    … not even the brain is a simple engineering problem. Being the result of evolutionary processes, not of conscious engineering, …

    Agreed. It is made by a “blind watchmaker” rather than by an intelligent engineer. But the point of the engineering perspective is the focus on function and on hardware, as opposed to the philosophical focus on language.

    A frog can catch a fly without any involvement of language.

    Liked by 1 person

  28. Coel,
    I’ve thought many times that philosophers were inventing “pseudo problems” (to quote Prinz) by their wrong framing of a problem.

    That is an airily dismissive claim and it is hard to take it seriously when you provide no substance.

    If we’re dealing with minds, then animals like chimpanzees and gorillas and dogs and dolphins do most of what our minds do

    I will believe that when chimps submit articles to arXiv and join our debates. Your words suggest an impoverished view of what our minds do.

    Our language modules are a rather late-evolved add-on to the mind for the specific task of communicating with conspecifics. Language is thus a veneer to the mind, rather than being core to its function.

    Why does ‘late-evolved‘ mean a ‘veneer to the mind‘?

    That is an extraordinary assumption. How do you know this? A far more appropriate metaphor is that of a catalyst, which captures the pervasive and transformative changes introduced by language acquisition. As evidence see the astonishing gap between our culture and that of our close relatives, the chimpanzees.

    A “theory of mind” should not be a “philosophical” issue

    That is like arguing that life should not be a ‘biological issue’. You are ignoring the principle of levels of explanation.

    quite blatantly it is an engineering issue

    And quite blatantly biology is a physics issue. But appealing to a lower level of explanation doesn’t say anything useful, in an explanatory sense, about the higher level. You have swept all useful distinctions under the carpet where they can be quietly ignored, on the principle of ‘out of sight, out of mind’, pun intended.

    Why would the mind be more a “philosophical” issue than, say, the cardio-vascular system or the immune system?

    Because my cardio-vascular system does not think about philosophy, compose essays, poetry or music and my immune system does not make me immune to devouring curiosity.

    My “theory of mind” is that brains are neural networks, coupled to sensory devices to receive information, and coupled to output devices such as muscles.

    That is like claiming that physics explains the social behaviour of butterflies. Ultimately true but useless when talking about butterflies. In the same way your trivially true statement about neural networks says nothing useful about the functioning of our conscious minds.

    By the way, anyone suggesting that I’m failing to distinguish the mind from the brain is also inventing a pseudo-problem for themselves by wrong framing

    By that line of reasoning anyone who disagrees with you has invented a pseudo-problem. That is a neat rhetorical move, but the reality is that all you have done so far is talk about the brain. It seems you have failed to distinguish between the brain and the mind by the simple expedient of ignoring the mind (or sweeping it under the carpet).

    Like

  29. Philosopher Eric

    Very much agree with your previous comment.

    ” So might it be that a standard education itself tends to place a person on an extra difficult path? ”

    I have no doubt whatsoever, not even a hint of one, that this is a fact.

    Like

  30. Hi Dan and Robert, some thoughts,

    "If you relax the intentionality requirement, you realize that the leading psychological theories of concepts can easily be given accounts of reference that are satisfactory."

    Agreed. I first thought Jesse was saying we should do away with intentionality. I've always thought strict reference was wrong but …

    “However, it is important to note a few things. First of all, there is a distinction between what are known as “high-level properties” and “low-level properties” in perception. Suppose you and your pet dog are looking at a blue circular object that is a blueberry. You have a concept of a blueberry, whereas your pet dog doesn’t.”

    I’ll start from there to try and explain my thinking.

    I believe that dogs and humans have similar low-level properties, and that both have high-level properties. One of the distinctions between humans' high-level cognitions and dogs' is, of course, verbal language; still, I believe that dogs' non-verbal cognitions (or non-verbal thoughts?) do have conceptual content, i.e., what the circular blue thing means to them when they see the thing we call a blueberry.

    Otherwise the idea of concepts gets restricted to language, but I can see how extending the use of the word "concept" to really low-level properties becomes semantically problematic (though I think the distinction between low- and high-level properties is not strict). Also, how far down is it OK to use the word "cognition" for what we normally call biological processes? E.g., is feature detection a kind of cognition? (In my word web, in a way, yes.)

    “Prinz says that concepts are couched in perceptual representations, he is using “perceptual representations” broadly to include more things than just the five senses. For example, for Jesse perceptual representations include motor representations (representations deployed to cause motor movements), proprioceptive representations, emotional representations, etc”

    I totally agree.

    (And I believe that overall neither perception nor representation takes precedence.)

    Like

  31. Robin,

    Sorry, I didn’t mean to say Robert in my previous comment, I meant you-

    By the way I’ve been enjoying your comments.

    Like

  32. Coel,

    ” It is made by a “blind watchmaker” rather than by an intelligent engineer. But the point of the engineering perspective is the focus on function and on hardware, as opposed to the philosophical focus on language”

    But reverse engineering is fraught with perils when the engineer is blind and unintelligent, because you may try to understand the “function” of something that looks like a Rube Goldberg machine…

    “A frog can catch a fly without any involvement of language”

    No doubt. But could you have thought and written down that sentence without the involvement of language?

    Like

  33. Massimo: Nature is an engineer in the sense that no engineered product is perfect either; products are full of 'residuals' and shortcuts for cost saving, power saving and ease of production. The CTL-ALT-DEL is a rarely used function still found in modern PC's but Bill Gates team put it in the DOS OS about 35 yrs ago.

    Good reverse engineering entails experience in recognizing certain design patterns, e.g. the neocortex is layered just like the retina and visual cortex, and has a terminus into other parts of the nervous system. The neocortex is also physically folded, much as a circuit board is folded to fit in a case that fits under the desk.

    Like

  34. It seems to me we start out with an intuition something like “concepts are chunks of thought like words are chunks of speech”, and it is unclear whether we can go anywhere with that, given how little we understand thought.

    It is arguable that the Scientific Revolution got its main impetus from the dictum (which I don’t know that anyone exactly said), “Try to know what is knowable, and never mind the ridicule that will result from studying such trivial matter”. See The Royal Society: Concept and Creation by Margery Purver. I think it is implicit in Bacon’s New Organon, and the Royal Society, very largely inspired by Bacon, proceeded to exemplify it — this idea I got years ago from reading Boorstin’s The Discoverers. Also, there is much of just that sort of ridicule in contemporary literature, such as Swift’s parody of the society as trying to extract sunbeams from cucumbers. No doubt they did study cucumbers. They also spent much energy meticulously drawing and describing flies and lice, and newly discovered microscopic organisms, and talking about weighing air, which gave their sponsor, Charles II, a good laugh.

    To say that the mind is a blank slate, and everything in it comes from experience is seductively simple. So is taking as axiomatic that the purpose of the nervous system of an animal is to maximize pleasure and minimize pain, or something along those lines (pain and pleasure having come into being as the solution to the problem of maximum survival). The great thing about min/maximization problems is they give us the question neatly packaged, and save us from the pain of not only not knowing the answers, but doubting whether we know the right questions.

    Prinz seems very interesting, and I hope to read the book under discussion; while he seems part of a trend of his generation of anti-nativism and rehabilitating behaviorism, he seems quite undogmatic, and respectful of those with different opinions.

    Also, I liked the suggestion that Locke’s strong emphasis on empiricism and against innateness may have had an ephemeral political impetus. This sort of contingency is something too little considered in the history of ideas, where it appears to me that the predestinarianism in early protestantism was a reaction to the corruption of “means” of getting to heaven in the form of indulgences for sale, while its later abandonment by Americans of Calvinist origins had much to do with a new culture of celebration of freedom.

    Liked by 1 person

  35. Victor,

    “The CTL-ALT-DEL is a rarely used function still found in modern PC’s but Bill Gates team put it in the DOS OS about 35 yrs ago.”

    I appreciate the analogy and the example. But it isn’t even close. The level of historical mess in biological systems is orders of magnitude bigger than in human designed ones, which is why a strict adaptationist program based on reverse engineering is a bad starting point in biology.

    Like

  36. I was fascinated to hear Jesse's semi-rehabilitation of B. F. Skinner, the part where he talks about "internal behavior." One of the good parts of getting older is seeing how certain debates play out over time.

    Like

  37. If the word "concept" refers to a concept, as defined by Prinz, then we should expect it to fetch a variety of things up into short-term memory depending on the context, unless we make an exception of "concept" and demand that it refer to some essence. Proxytype seems like a very good candidate for at least one sort of concept, though not, I suspect, for all.

    If you hold the concept "democracy" the way most people do, then it may indeed fetch up various shaggy dogs that one imagines to represent democracies. Is Russia a democracy today? Well, they have elections. What about Afghanistan, or any country where I doubt most voters can make head or tail of what anyone is for or against, and where what results is a game of the most powerful, and those most capable of assembling a view of what is going on, trying to out-maneuver each other through manipulation of the population? Would a person be totally out of line to suggest that maybe the US is not so much of a democracy today, and to put forth terms under which it would again be a democracy? If you hold "democracy" in a manner suitable for having productive debates about it, then I suspect you have something like a "theory theory" concept of democracy.

    In preparing to be a mathematics graduate student, you have to accept that in this domain a concept is whatever it is defined to be, no more and no less. In real life, everyone has somewhat idiosyncratic sets of associations (or proxytypes?) for any given concept, or for any given word, which is the best we can do at referring to concepts, as far as I know. Mathematics has the blessing of not having to refer to anything, if by "refer" we mean a kind of pointing to some stuff. It does have to avoid self-contradiction, which is why the next thing that comes after a mathematical definition is some examples, usually including at least one super-simple or "degenerate" case, just to confirm that an entity (of the mathematical imagination) can have all of the attributes given in the definition. A T0 topological space is a set with certain characteristics; a T1 space is a T0 space with some further properties; a T2 space… and so on. This does not seem anything like the construction of "dogginess," no matter by whom, and I submit everyone constructs dogginess in their own way.
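    For concreteness, the T0/T1/T2 chain just mentioned can be written out in standard textbook form (nothing here beyond the usual definitions); each class is exactly what its definition says:

```latex
% Separation axioms: each class strictly contains the next.
\begin{itemize}
  \item $T_0$: for any distinct $x, y \in X$ there is an open set
        containing exactly one of $x$ and $y$.
  \item $T_1$: for any distinct $x, y \in X$ there are open sets
        $U \ni x$ with $y \notin U$, and $V \ni y$ with $x \notin V$.
  \item $T_2$ (Hausdorff): for any distinct $x, y \in X$ there are
        \emph{disjoint} open sets $U \ni x$ and $V \ni y$.
\end{itemize}
```

    Each property follows from the next ($T_2 \Rightarrow T_1 \Rightarrow T_0$), and a space "is" whichever axioms it satisfies, no construction of a prototype involved.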

    Like

  38. Prinz’ work reminds me of a broader, if diffuse, project. There is a great deal of anti-nativist sentiment in the last couple of decades, and I wonder what it represents. Probably it partakes of politics at least as much as in Locke’s time. The pendulum has been swinging. At one time fervor over “nature vs nurture” led to E. O. Wilson’s being harassed, and getting a pitcher of water dumped on his head. But with Chomsky, and observations such as in some birds “recognizing” the first creature seen after hatching as mother/species-prototype, and cognitive psychology, twin studies, and on and on Nativism came into vogue. Conservatives and other anti-liberals began to use the new climate of opinion in ways that made many people uncomfortable.

    When Steven Pinker wrote The Language Instinct (1994) it acted as a call to arms for all those afraid of a new eugenics, or who were just sick and tired of Noam Chomsky, and growing sick and tired of cheerful Steven Pinker. Something of a firestorm ensued, and some (mostly bad, I think) books were written attacking The Language Instinct specifically, and there has been some extreme incivility right up to the present; witness Vyvyan Evans and his book and Aeon article.

    Michael Tomasello responded to The Language Instinct with “Language is not an Instinct” Cognitive Development, 10, 131-156 (1995), a 25 page review. He became one of the “go to” sources for anti-nativists, but a key statement in that article is quite measured: “All of the most important lines of evidence that Pinker’s new book adduces for an innate Universal Grammar are also compatible with a less rigidly nativistic view of language acquisition in which there is a biological foundation for language, just not in the form of specific linguistic structures preformed in the human genome.”

    Thirteen years later (2008), Tomasello wrote Origins of Human Communication, a beautifully written and meticulously argued summary of his conclusions based on decades of studying great apes in the wild and in tamer surroundings, as well as young children. It was not a particularly anti-Chomsky or anti-Pinker book, though he still doesn't see Universal Grammar as necessary. And he does emphatically attribute the possibility of language to one specifically human trait (among others), "shared intentionality".

    One strong suggestion is that pointing, seemingly out of a drive to create shared experiences or attention, is a key adaptation. (He observes that apes seem not to point except through interaction with humans.) Typical is the following, from a study in which parents recorded the earliest instances of children's pointing:

    Example 17:
    At age 13 months, J watches as Dad arranges the Christmas tree; when Grandpa enters the room J points to tree for him and vocalizes. Gloss : Attend to the Christmas tree; isn’t it great?

    Based on many other examples, mostly under experimental conditions, this appears to be quite typical. A synopsis of much of the book’s argument is to be found in the 8 page non-technical Ultra-social Animal: http://www.eva.mpg.de/psycho/staff/tomas/pdf/Tomasello_EJSP_2014.pdf.

    Liked by 1 person

  39. Most of The Ultra-social Animal reads like the best possible 8-page synopsis of Tomasello’s Origins of Human Communication, although it somewhat loses its bearings in the last page or two, drawing overly happy conclusions, IMO.

    In Prinz's The Return of Concept Empiricism, written as a chapter for H. Cohen and C. Lefebvre (Eds.), Categorization and Cognitive Science, Elsevier (2005), he makes but one reference to Tomasello, used as a caveat to the statement "Mindreading seems to be lacking in apes" (p. 10), and goes on to note that a key element of mindreading, false-belief attribution, does not appear until around age four, suggesting that it is cultural rather than innate. This is actually fairly uncharacteristic of Tomasello, who credits great apes with fairly weak mind-reading, limited in several specific ways, which can largely be gleaned even from the short article (The Ultra-social Animal – see above).

    Unfortunately for the argument Prinz was making, it has become widely accepted that infants intuit things about what others do or don’t know as early as 12 months.

    Evidence was given in 2003 in "Understanding attention: 12- and 18-month-olds know what is new for other persons," Tomasello, Michael; Haberl, Katharina, Developmental Psychology, Vol 39(5), Sep 2003, 906-912. ABSTRACT: "Infants at 12 and 18 months of age played with 2 adults and 2 new toys. For a 3rd toy, however, 1 of the adults left the room while the child and the other adult played with it. This adult then returned, looked at all 3 toys aligned on a tray, showed great excitement ("Wow! Cool!"), and then asked, "Can you give it to me?" To retrieve the toy the adult wanted, infants had to (a) know that people attend to and get excited about new things and (b) identify what was new for the adult even though it was not new for them. Infants at both ages did this successfully, lending support to the hypothesis that 1-year-old infants possess a genuine understanding of other persons as intentional and attentional agents. (PsycINFO Database Record (c) 2012 APA, all rights reserved)". See also Sperber and Wilson's 1986/96 Relevance: Communication and Cognition, and the recent book by Sperber's colleague Thom Scott-Phillips, Speaking Our Minds: Why human communication is different, and how language evolved to make it special (2014); the former is more philosophical, while the latter, with frequent citing of Tomasello, presents a shorter (but less rich, and with overly pointed conclusions, IMO) account of the evolution of language than Tomasello (2008).

    One idea that all this discourages is the sort of story, popular in the 1950s and 60s that larger brains per se are the key to human success. While many details are still very opaque, it appears that brains don’t just get bigger without accompanying evolution of new features. If this were not so, I can’t see why intelligent life couldn’t have evolved from the higher mammals much sooner, as a mutation to make an organ bigger has got to be one of the easier feats of evolution.

    Like

  40. (Assume a culture unfamiliar with cats; having no encounter with woven wool.)

    “Thanks for having me over.”

    “Not at all; I was impressed with your presentation, and am happy to converse with those of other cultures. By the way, could you pet Sheba? just to get her used to you.”

    “What, that furry animal thing on the floor?”

    “She’s not on the floor directly; we call that a mat.”

    “Quite nice; can I get one of those?”

    “I have a friend who weaves them; I’m sure she’ll be happy to weave one for you.”

    “And this is a ‘sheba’?”

    “No, that’s her name; she’s a cat.”

    “She won’t bite?”

    “She loves getting petted.”

    “You keep her for company?”

    “Rather she keeps me, I would say!”

    “Hey, she’s humming -”

    “We call it purring.”

    “Whatever; sounds pleasant enough.”

    “You know, I’ve always wondered whether lions purr….”

    “Lions?”

    “Big cats; big big cats, with fangs and claws, and….”

    “That doesn’t sound fun; what do they have to do with Sheba?”

    “Same biological classification, ‘feline.'”

    “Oh, yes, I read about those in books.”

    “Now you understand?”

    “No, there was nothing about petting and purring in the books I read.”

    “Well, have a seat; she’s especially appreciative when you have her on your lap.”

    “I only see a rock here -”

    “That’s my chair.”

    “So you people make chairs out of rock?”

    “Well, it’s patented; we’re calling them ‘Stone Cold Seats,’ they’re going on sale next week.”

    “You Americans are so inventive. Back home, we sit on blocks of wood; unless we’re praying. Then we sit in lotus position on – what did you call this?”

    “A ‘mat.'”

    “Yes, that; except ours are made from twining of hemp…. Sheba seems comfortable now -”

    “You are petting behind her ears, that’s her number one pleasure zone for such attention.”

    “Hmm… tell me, is she edible?”

    “Huh… elsewhere, well, maybe, um – no.”

    “Really? Where I come from, all domesticated animals are understood to be eventually eaten.”

    “Well, then Sheba is lucky she’s not living there.”

    “I meant no offense.”

    “Of course; just please don’t eat my cat.”

    “She would have to be skinned first -”

    “I don’t think she would like that -”

    “Ha! – you Americans are strange, thinking some animal would like or not like something. But I respect your customs here; I’ll not eat your cat.”

    “Thank you very much; by the way, would you like to view my collection of M.C. Escher drawings?”

    “I am so pleased you would share them with me! Escher is one of the permitted Western artists in our State Museum!”

    “And then I have some rather racy water colors by Matisse….”

    “Oh, he’s definitely not permitted; but I’m here to learn how your decadent Western mind works….”

    “I’ve read the Rubaiyat in the original Parsi; don’t talk to me about decadence.”

    “That’s all about Allah -”

    “Yeah; and I’m all about cats.”

    “Aha! – show me the paintings -”

    (dantip: posting on this thread, which you’ve left open, to make last reply on next thread.)

    Liked by 1 person

  41. For all my critiques, I do find the proxytype very interesting and plausible (apparently not everybody does), making good use of the concepts of long- and short-term memory, and actually saying something that may one day be testable.

    RE Massimo “I appreciate the analogy and the example. But it isn’t even close. The level of historical mess in biological systems is orders of magnitude bigger than in human designed ones,”

    On the one hand, the ctrl-alt-del example does seem off by orders of magnitude. On the other hand, I don't know but that some software systems, running into millions of lines of code, might approach biological systems in complexity. The thing about software, as with DNA: no matter how cruddy and Rube Goldberg-like the implementation, you hit the copy button and it copies. Unlike, say, printed circuits or almost any other engineered item, if you make a godawful prototype and it works, you've just begun: you've got to spend a lot of time making it manufacturable and robust. Especially when you cobble together something out of thousands of modules that do all sorts of things (things you have no idea how to program), using in each case only some of their functionality, you might approximate the waste and kludginess of biology.

    “which is why a strict adaptationist program based on reverse engineering is a bad starting point in biology.”

    I can’t really tell from this how much you’re dismissing adaptationist programs — maybe lenient ones, as opposed to strict ones, are OK?

    A great deal of evo-psych is tedious and annoying; the sexual cases are just too easy to say something plausible about, so we get pelted with a lot of uninteresting low-hanging fruit that, not having been set up as a link in an important testable argument, is fairly useless, unless it is useful to annoy and scandalize the easily scandalized.

    Still, as an example, I think you might find some good examples of that sort of reasoning integrated with other sorts of evidence in, say, the Michael Tomasello book I mentioned above.

    Like

Comments are closed.