Jesse Prinz on concepts, part II

by Dan Tippens

This is Part II of an interview with Professor Jesse Prinz of the City University of New York. In this video, Dan Tippens first asks Prinz about his Proxytype theory of concepts. Dan then raises an objection to Prinz’s view: that Proxytype theory might have a problem satisfying the requirement that concepts be capable of being publicly shared. Afterward, Prinz explains how an empiricist would be able to account for abstract concepts which don’t obviously have any perceivable features, such as the concepts “truth” or “justice.” Dan subsequently questions whether we should be committed to a strong form of empiricism or whether we ought to adopt a more moderate account. The interview closes with Dan asking Prinz to explain how the neo-empiricist accounts for the phenomenon of “inner speech,” given that inner speech has some distinctive representational differences from auditory perception (which seems to be the modality from which inner speech would be derived). (Thanks to Luke Rodgers for his assistance with the editing of this video.)

[We apologize for the less-than-ideal quality of the audio. However, your experience will be significantly augmented by the use of standard earphones.]

_____

Daniel Tippens is a research technician at New York University School of Medicine. He is also an assistant editor for the webzine Scientia Salon.

Jesse J. Prinz is a Distinguished Professor of philosophy and director of the Committee for Interdisciplinary Science Studies at the City University of New York, Graduate Center. He works primarily in the philosophy of psychology and ethics and has authored several books and over 100 articles, addressing such topics as emotion, moral psychology, aesthetics and consciousness. Much of his work in these areas has been a defense of empiricism against psychological nativism, and he situates his work as in the naturalistic tradition of philosophy associated with David Hume. Prinz is also an advocate of experimental philosophy.

50 thoughts on “Jesse Prinz on concepts, part II”

  1. Interesting. Can be seen also as another example of a philosophical theory that’s made its way into computer science (AI).

    “A Computational Framework for Concept Representation in Cognitive Systems and Architectures: Concepts as Heterogeneous Proxytypes”
    BICA 2014. 5th Annual International Conference on Biologically Inspired Cognitive Architectures
    http://www.sciencedirect.com/science/article/pii/S1877050914015233

    In this paper a possible general framework for the representation of concepts in cognitive artificial systems and cognitive architectures is proposed. The framework is inspired by the so called proxytype theory of concepts and combines it with the heterogeneity approach to concept representations, according to which concepts do not constitute a unitary phenomenon.

    BTW, I have to disagree with Coel’s statement (in part I) that a “mind/brain could, in principle, be implemented in something other than a biological substrate”. It’s not that I’m just pushing for biocomputer technology. It’s that I think that there are scientific reasons a biological substrate will be needed.

    Like

  2. Hi Dantip,

    Enjoyed the interview, I thought you asked some pretty interesting and pertinent questions. Still to digest the answers.

    Reiterating the point I made in the last thread – that the child without expressive or receptive verbal language, who asks his brother to show him how to get under obstacles in Temple Run, is doing just the same thing as though he had said “Show me how to get under obstacles in Temple Run”. He has expressed the same concepts for the same reasons and made himself understood. If someone wants to say there is some important distinction there then they would have to explain what it is.

    The same is true when those of us with verbal language express concepts non-verbally, say when there is a verbal language barrier, or if we need to maintain silence and there are no writing materials to hand.

    So what I would suggest is that there is no real distinction between verbal and non-verbal language. Verbal language just increases the usefulness of something the brain was already doing.

    I have been thinking of some examples of non-human animals expressing concepts, for example a dog who goes to his master with the food bowl, or its leash, in his mouth is clearly expressing concepts like “give me food” and “let’s go for a walk”. Scratching at the door he expresses the concept of the need to go outside.

    A friend once described to me how some dolphins expressed the concept “come out and see this” (a patch of bioluminescence he had sailed into while below deck).

    Going to more distant relatives, when the seed tray outside our house is empty, rainbow lorikeets line up on the window sill looking inside. If no seed is forthcoming they begin to tap on the windows with their beaks. What else are they doing but expressing concepts like “come outside, bring food”?

    Now our last common ancestor with the birds was very long ago indeed. Some will say no, that I have simply misinterpreted some culturally transmitted behaviour which only serendipitously communicated that concept.

    Maybe, but that seems implausible. They wanted food, I understood they wanted food and they got food. If that is not communicating a concept then I don’t know what is.

    By contrast I don’t think a bee sting expresses the concept “you are a danger to me”. The examples I gave cannot be explained by any simple genetically programmed behaviour pattern.

    If I am right in my first example of the non-verbal child – that there is no distinction – then I suggest that the lorikeets are using language just as if they had said “You in there, make with the food!” That is to say, language is just expressing and receiving concepts.

    So I think that concepts and language cannot necessarily be disentangled.

    On the other hand, if there is a categorical distinction between verbal and non-verbal language I am wrong, but that distinction would have to be demonstrated in the case of the non-verbal child.

    Like

  3. Suppose I have a mechanical instantiation of a Turing Machine. It has three tapes with patterns of squares and spaces on them, I crank a handle and it moves the tape, reads and perhaps writes a new square, moves some rods into a new pattern.

    Taking a rest, I stop cranking, grab coffee and a highlighter (yellow, so as not to affect the operation of the machine), and start to highlight which areas of the tape encode properties, representations and proxytypes.

    Now you will allow that none of these patterns of squares and spaces is inherently a representation or a proxytype. That depends upon the logic in the cogs and levers of the mechanical device I have been cranking and which now sits idle.

    The tape with the program is not inherently a program – that depends upon the configuration of those idle cogs and levers. Those idle cogs and levers are not even inherently a Turing Machine.

    So, really, I have not identified any representations or proxytypes; I have just identified the patterns which will take on this function once I finish my cup of coffee and start cranking again.

    Now you might suppose that all this logic programmed into those cogs and levers, those squares and spaces, is sitting somewhere in some Platonic realm.

    But if not, then you have to conclude that there are no representations or proxytypes stored in the idle machinery, only patterns which can potentially take on this function.

    Now I finish my coffee and start cranking, one turn, then another, then another. Notice that at no point during the operation is there ever a representation or proxytype on the machine or tapes.

    Imperfect as this analogy is, I think that there is a similar argument to say that it is never true that there is a representation or proxytype stored in our brains; these are always descriptive of what is happening.

    This suggests to me a clarification – that ‘representation’ is a process, not a thing. It is incorrect to speak of representations stored in either short or long term memory, only patterns which will be used when a particular process of representation is happening. No process – no representation.

    This, in turn, suggests that Prinz is over complicating the matter. If we realise that there are no representations stored in our brains (not even the primitive properties of which they consist) there is no need for the idea of a proxytype. There are just patterns, various of which are accessed and become the basis for various processes of representation.

    So the dog is representing hunger and food to itself and performing actions which trigger an analogous process in our own brain which is predicated upon the patterns stored there.

    Thinking out loud again, but I think that any theory that is based upon the premise that there could be representations stored in our memory cannot be right.
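
    To make the cranking analogy concrete, here is a toy sketch (in Python, my own illustration, not anything from Prinz). The point is that the rule table and the tape are inert data structures; nothing counts as a computation until the loop actually steps the machine.

```python
# Toy Turing machine sketch (an illustration only): the "tape" and the rule
# table are inert data until run() actually steps through them.

def run(tape, rules, state="A", head=0, max_steps=1000):
    """Step the machine until it halts; return the final tape as a string."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = cells.get(head, "0")           # read the cell under the head
        write, move, state = rules[(state, symbol)]
        cells[head] = write                     # write, then move the head
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A trivial rule table: flip 1s to 0s moving right, halt on the first 0.
rules = {("A", "1"): ("0", "R", "A"),
         ("A", "0"): ("1", "R", "HALT")}

print(run("110", rules))  # → 001
```

    Before `run` is called, `rules` and the tape string are just patterns sitting in memory, which is exactly the status of the squares and spaces during the coffee break.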

    Liked by 3 people

  4. Hi Coel,

    That’s not true. Even if you tried replicating your behaviour on a simple Turing engine, you’d need very complex software, which would need a vast number of “parts”. Software needs to have physical instantiation in order to exist.

    Am I to understand that in your parlance, a long pattern of squares and gaps on a tape is a “hugely complicated mechanical device”?

    Like

  5. “Suppose I have a mechanical instantiation of a Turing Machine.”

    Like other natural-bio-computationalists, I think that the Church-Turing thesis is ready for retirement.

    “The historical roots of Turing-Church computation remind us that the theory exists in a frame of relevance, which is not well suited to natural computation, nanocomputation, and other new application domains.”
    Super-Turing or Non-Turing? Extending the Concept of Computation
    http://web.eecs.utk.edu/~mclennan/papers/ST-NT.pdf

    Like

  6. If I understand things about right, Prinz has a “proxy” way of looking at our term “concept,” and in this manner he seems to be arguing that it can encompass exemplar, prototype, and theory forms. Apparently many believe that the entire “concept” of the term “concept” resides on shaky ground (Oh the irony!) given the noted separate varieties.

    I might very well not have this straight, however, since I do find it extremely useful for all such nuances to be included in the term’s definition. Furthermore, from my current perspective, those two presented objections actually seem like validation: 1) If two people do in practice refer to “dogs” with somewhat or even entirely different notions in mind, then I’d naturally want to use a term which reflects this exact potential discrepancy — observe that the “concepts” of each person wouldn’t perfectly match. 2) If terms such as “truth,” “justice,” “two,” and so on aren’t “perceived” (which I must also dispute!) then we’d naturally require a definition for “concept” which is flexible enough to handle non-perceived forms of it.

    (Perhaps the issue is that we say “The concept…” as if it’s a singular entity, even though it may very well not be so singular when considered by two separate people?)

    In the comments for the recent “Brontosaurus” thread, I mentioned that we need to stop looking for “true” definitions. Here physicists would no longer ask “What is time?” and biologists would no longer ask “What is life?”, for two examples, but instead propose potentially useful definitions for such terms. Thus theorists would be free to use all sorts of nonstandard definitions in the quest to develop new insights, and critics would be obligated to accept these specific definitions while considering associated work. Perhaps if my perspective on definition were to become standard, then this “concepts dispute” would simply evaporate? Regardless I’d say that under a flawed system, Jesse Prinz seems to be making the best of things.

    Like

  7. Hi Philip,

    I think that the Church-Turing thesis is ready for retirement.

    No need, we never really had the “Church-Turing Thesis” in the first place.

    Retiring the “Church-Turing Thesis” would be a little like sending Big Foot extinct.

    Neither Church nor Turing ever said what most people seem to think they said, and the whole position of ‘computationalism’ is based on a myth about what the supposed “Church-Turing Thesis” says.

    “Computation,” in the well-established sense, only gives us a definition of the computations that can be done with natural numbers.

    We already know about ‘non-Turing’ or ‘super-Turing’ computations – water running down a plughole, for example. Practically everything that happens in the Universe qualifies as well (unless the laws of physics as we currently understand them are wrong).

    So, to say that the brain is doing non-Turing or super-Turing computations is sort of a truism. Of course it is, everything in nature is.

    It is just that none of the ubiquitous non-Turing or super-Turing computers can do any natural number computations that a Universal Turing Machine (or any of the equivalent definitions of a universal machine) can’t.

    But I am a little puzzled that you quote a sentence from me and then mention the mythical “Church-Turing Thesis”. What has it to do with what I said?

    Nothing I said depends upon there being such a beast. What I said about patterns in the squares and spaces on the tape applies just as much to patterns in a neural network. None of them can be considered to be inherently a representation.

    Like

  8. Hi Robin,

    I was having trouble understanding your ruminations on concepts and representations, so I thought perhaps it would be helpful to have you lay out how you explain some of the desiderata for a theory of concepts (unless you deny all the desiderata altogether?):

    1. Compositionality

    Concepts must be capable of being combined to form more complex concepts, e.g., the concepts CHAIR and WOOD can be combined to create WOOD CHAIR.

    2. Acquisition

    A theory of concepts must explain how we come to acquire concepts.

    3. Categorization

    There are three levels of categorization: the subordinate level, the basic level, and the superordinate level. For example, for the concept DOG, here is the level distribution…

    Subordinate level: Rottweiler
    Basic level: Dog
    Superordinate level: Animal

    In categorization tasks, subjects are asked to perform either category production or category identification:

    Category identification tasks require the subject to identify what category an object belongs to, e.g., they are shown a picture of something and asked to report what category it belongs to.

    In category production tasks, a subject is asked to describe a category, typically by reporting the essential features of that category – “what are birds?” Response: “winged, feathered, flying creatures.”

    It turns out that subjects are faster at both identification and production at the basic level than at the subordinate or superordinate levels.

    A good theory of concepts should be able to either explain or predict (not sure which one) these phenomena.

    4. Publicity

    Concepts can be shared and understood between people and within the same person at different times. For example, when I say “aardvarks are nocturnal” I am deploying the concepts within that sentence, and presumably you (the listener) understand the sentence by activating those concepts as well. Consequently, it looks like we share (at least roughly) the same concepts.
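
    To make compositionality and categorization a little more concrete, here is a crude toy sketch (my own illustration, not Prinz’s theory or any serious model): treat a concept as a set of features, combine the sets for complex concepts, and categorize by feature overlap.

```python
# Toy sketch (an illustration only): concepts as feature sets.

CHAIR = {"has_legs", "has_seat", "used_for_sitting"}
WOOD = {"made_of_wood"}

# Compositionality: a complex concept combines its constituents' features.
WOOD_CHAIR = CHAIR | WOOD

def categorize(observed, concepts):
    """Category identification: pick the concept with the best feature overlap."""
    return max(concepts, key=lambda name: len(concepts[name] & observed))

concepts = {"CHAIR": CHAIR, "WOOD_CHAIR": WOOD_CHAIR}
print(categorize({"has_legs", "has_seat", "made_of_wood"}, concepts))  # → WOOD_CHAIR
```

    Obviously this flattens everything interesting (typicality effects, the basic-level advantage, publicity), but it shows the minimal shape that any theory of concepts has to fill in.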

    All,

    Here are two experiments (among many others) that Prinz brought up in his book which he thinks support concept empiricism. I figured it would be fun to put them up and have people talk about them:

    Barsalou:

    Subjects were asked to perform one of two feature listing tasks. Either they were asked to imagine an object and list its features or to simply list the features of an object. Concept empiricism predicts that the same features will be listed if the representations deployed are all perceptual representations or mere copies of perceptual representations.

    Barsalou found that the same features were listed in both tasks.

    If one holds that the representations used were different and not of perceptual origin, then one wouldn’t *necessarily* make this same prediction, though one *might*. However, the non-empiricist (a-modal theorist) has an explanatory deficit: their theory doesn’t *predict* that the same features would be listed, even though this finding could be made consistent with an a-modal theory.

    So, the neo-empiricist thinks he has an advantage over the a-modal theorist because he predicts and explains this feature-listing study, while the a-modal theorist doesn’t predict it but can only try to retroactively explain it.

    Morrow, Greenspan, and Bower:

    Subjects were asked to study the floor plan of a multiroom interior with a number of objects in it. The floor plan was then removed, and the subjects were asked to read narratives describing the movements of a protagonist through that interior, beginning in one room (the source), passing through another (the path), and arriving at a third (the destination). After reading the passages, the subjects were asked whether a particular object was in the same room as the protagonist or a different room. Responses were faster for objects in the same room, suggesting that the destination room is where the subjects were spatially imagining themselves to be.

    One thing to be concerned about is this: both of these studies (and many other ones Prinz employs in his book) seem to use mental imagery to support concept empiricism. However, a-modal theorists (almost all of them) concede that mental imagery exists, and that it is regularly deployed in cognition. So, these experiments seem to only show that we sometimes use mental imagery over other forms of cognition in at least a task-specific manner. But this doesn’t exclusively support neo-empiricism since a-modal theorists hold that mental imagery exists and is used as well.

    Liked by 1 person

    Robin Herbert wrote: “Am I to understand that in your parlance, a long pattern of squares and gaps on a tape is a ‘hugely complicated mechanical device’?”

    Well, if a long series of 1s and 0s happened to represent the works of Shakespeare, then it would be complicated, and if it represented the complete process for building a Boeing 747, then it would indeed represent a hugely complicated mechanical device. The fact that the tape itself doesn’t seem to be a “mechanical device” is misleading. You combine the tape and the very simple Turing machine, and you get a huge amount of mechanical complexity, related to the complexity of what the tape represents.

    Liked by 1 person

  10. Robin Herbert,

    Regarding the claim that “none of the ubiquitous non-Turing or super-Turing computers can do any natural number computations that a Universal Turing Machine (or any of the equivalent definitions of a universal machine) can’t,” a question for biocomputers is whether there are biologically computing agents that can “compute Turing-uncomputable functions.”

    http://www.ncbi.nlm.nih.gov/pubmed/15527956

    Like

  11. Any purely internal — mental, cognitive, neural, call it what you like — account of representations (and thus, concepts) is going to run into the Private Language Argument, for which no one — I repeat, not a single soul — has come up with an even remotely plausible response.

    Representations and concepts have an irreducibly social dimension that speaks to the question of correct and incorrect interpretation/application/understanding, which is why concepts cannot be treated, adequately, purely by way of a theory of the mind. This will be a problem for *any* view that treats concepts as mental objects.

    From Ian Ground’s recent paper, “Why Wittgenstein matters”:

    “The famous or infamous remarks against the possibility of a logically private language are aimed, inter alia, at the thought that even if internal representationalism did make sense, the pure representing individual could never establish, in its own case, isolated from a potentially public practice, that the representations were being used correctly. Not because they would not know, but because the notions of correct and incorrect here would lack traction.”

    Liked by 4 people

  12. Hi Robin,

    Am I to understand that in your parlance, a long pattern of squares and gaps on a tape is a “hugely complicated mechanical device”?

    Why yes, indeed. Hal Morris has already answered for me, but:

    Complexity is best understood in terms of the amount of information needed to specify it. Thus highly specific software is hugely complex. The combination of the tape and Turing machine (both are integral to the result) is certainly a “mechanical device”, and thus a hugely complicated one.

    Of course, in our brain, the “software” is encoded in the hardware configuration, rather than being conceptually distinct. Claiming that a Turing-engine simulation of you is “simple” is a bit like claiming the brain is simple on the grounds that it’s merely two litres of water, some charcoal and a few spoonfuls of salts.

    Liked by 1 person

  13. We’ll start here: basically, what Robin Herbert said in his second comment here; what Aravis said; what Massimo and labnut said in response to Coel in the first thread; and of course what I said in my comment there.

    Dantip: Thank you for your broad but concise explanation of the basic standards of the current Analytic theories of conception (completely congruent with the SEP article on mental representation you cited in the previous thread) that Prinz is working with. Please understand I intend this respectfully; but do you not see how utterly artificial these standards are? Nobody in real life is conforming to these standards. The brain does not work computationally, and it is not a library of ready-made files.

    My twelfth year was rather an ‘annus mirabilis’ – I read Homer, worked through a 9th-grade algebra textbook, and even developed a theory of continental drift – from genealogical rather than geological sources (the inheritance relationship between Native Americans and Asiatics was abuzz in magazines at the time; given that, and given the shape of the continents, there just had to have been a land bridge to the north that allowed relatively large migrations, since the cultural dissemination revealed through Heyerdahl could not explain biological family resemblance.) I had one teacher – Alan Zito – who encouraged my thinking; unfortunately when I moved on to a conservative Junior High, I was back-stepped by teachers convinced that learning could only follow prescribed methodology; my mathematics books (bought from a used book store) were confiscated – “algebra is taught in 9th grade, not before” – I was consigned to the ‘problem child’ curriculum of the day, until a handful of English teachers realized that I was reading way above my assigned class….

    I mention this because this experience well-prepared me for the Pragmatist theories (there are more than one) of knowledge, conception, education, and discourse.

    The fundamental principle I experienced (before I could articulate it), is: ideas are not ‘composed’ but develop as relationships – therefore cannot be ‘acquired’, but *generated* – learning is not a ‘taking-in,’ but a ‘reaching-out’ (necessarily involving the social) – when we stop to think on it, how else can we account for the evolutionary development of conception? Certainly not by assuming that the mind is floating in some sensual stasis until called upon to account for sensory experience.

    The Analytic tradition seems impoverished in such matters.

    Let’s stop using the terms “Analytic” and “Continental” for the opposites we’re discussing. What we should really say is, the (Frege-to-Carnap) GERMAN analytic tradition (however disguised with reference to Hume) and the (Brentano-to-Heidegger) GERMAN phenomenological tradition. The notion that the Analytic tradition is primarily ‘Anglo-American” is silly. But there *is* an American philosophical tradition: Pragmatism.

    The references cited in the SEP article on Mental Representation were largely written in the past 30 or 40 years. Unfortunately, the critique of representationalism has been ongoing for 300 years; much of it remains unanswered. Problem? I think so….

    Liked by 4 people

  14. What seems most interesting to me in looking (so far only briefly) at Prinz and Machery and others mentioned here is the extent to which what they do is science-based. How you label it (philosophy of psychology, methodological analysis, etc.) doesn’t matter as much as this general orientation which appears to represent quite a radical break from previous ways of operating. Prinz himself emphasizes the radical nature of these changes.

    Could this not be seen as a return to something like — or at least to the general spirit of — logical empiricism? In both cases, science is central and the philosophy is subservient to the science in the sense that experiments etc. are constantly being appealed to to justify various claims and approaches. And this new talk about a ‘humbler’ kind of philosophy does seem to echo the mid-20th century notion of philosophy as a handmaid of science (which was a conscious twist on the medieval idea of philosophy as ancilla theologiae or handmaid of theology). Humble is good, in my opinion, for both intrinsic and extrinsic reasons.

    I have always felt that work in the philosophy of language has been greatly handicapped by philosophers’ reluctance to draw on linguistics and psychology. For example, I came across prototype theory via studying linguistics rather than through philosophy. And, though much of what, for example, Wittgenstein says about categories and conceptualization is broadly correct, there is no particular reason to stop there, no reason not to develop actual theories — but *scientific theories* (testable, etc., and ultimately related to actual brain processes) as distinct from philosophical theories. (Wittgenstein believed — with some justification, I think — that purely philosophical theories are unnecessary and ill-conceived.)

    On another issue, I am encouraged that both Prinz and Machery (apparently) reject traditional normative approaches to ethics, as normative ethics is an area where claims to expertise (philosophical, religious — or scientific) are particularly dubious.

    My comment should not be read as being dismissive of other kinds of writing that have been dubbed philosophy: Nietzsche (trained in philology, not philosophy) means a lot to me, for example. But whatever he (or, say, Heidegger in his later work) was doing was a very different kind of activity from either traditional or neo-empirical philosophy.

    Which is not to say that it is incompatible with a neo-empirical approach. I certainly don’t see Nietzsche as being anti-science in the way he is often interpreted these days, and don’t find it at all surprising that Machery sees his own (descriptive rather than prescriptive or normative) approach to ethics as being broadly Nietzschean.

    Liked by 1 person

  15. Mark English wrote:

    And, though much of what, for example, Wittgenstein says about categories and conceptualization is broadly correct, there is no particular reason to stop there, no reason not to develop actual theories — but *scientific theories* (testable, etc., and ultimately related to actual brain processes) as distinct from philosophical theories

    ——————————————————————————————————————

    Certainly we should develop scientific theories in response to scientific problems and questions. But remember that Wittgenstein also showed us that science could be in the grip of a picture just as much as philosophy. Indeed, many of the false pictures that he is interested in uncovering are ones that come very naturally to *everyone*, specialists and common folk alike.

    The question I’ve raised is whether *any* theory that treats concepts as purely mental objects can possibly be correct, and the answer is “no,” for the reason I gave — i.e. the Private Language Argument (not to mention the rule-following argument, which also applies). *Anything* that is interpretable and thus, has conditions of correctness and incorrectness *must* be partly public, for the reasons that Wittgenstein laid out in the Investigations. Hence the irreducibly social dimension of concepts. And hence the impossibility that *any* empiricist/Lockean treatment is going to work.

    Liked by 1 person

  16. ejwinner: “The brain does not work computationally”

    While there is some scientific indication that it may not work Turing-computationally, still it is thought that it does work bio-computationally.
    cf. Bio-steps beyond Turing

    This would be the Pragmatist’s approach to computation, not trapped-in by a fixed, Platonist’s definition of “computation”. As Prof. S. Barry Cooper has said, we do not know yet the complete physical nature of computation.

    Liked by 1 person

    If every word we use is actually a concept in and of itself, it is obvious how we get caught in a language game about concepts. The key to understanding language is not to lose the stimulus-response (SR) paradigm. Language SR simply reflects brain complexity, or landscape, etc.

    My engineer’s design for this system would be that language SR accesses the brain via the outer layers of the neocortex. Likewise we pass these outer layer patterns or metadata (intentionality?) between us as language. The visual system also can translate these sounds into symbols.

    The engineer’s point of view is also to look at the time domain of language and thought. Language neurons are the same biology which generates body movement and responses; or we can say that the time domain of muscle activity matches the time domain levels of thought and language, which brain biology also generates.

    Seems simplistic but the phylogeny of the mammal brain is very explicit.

    Like

  18. When chairs were handcrafted, the concept ‘chair’ would necessarily intersect the concept ‘carpentry,’ so that when one wanted a chair, one would need to think of either one’s own carpentry skills, or the skills of another one would need to ask, pay, or bargain with in order to have the chair constructed. (This would not be true in cultures where mats were preferred in use for resting one’s legs.) When assembly line production was developed, this necessary intersection with carpentry disappeared, which means that, without notice, the very idea of ‘chair’ had been redefined.

    Imagine that I’m on a hike, and find a rock on which to sit when needing rest. Finding it comfortable, and having the resources to do so, I have the rock moved into my home. I grow so fond of it that I remove all recognizable chairs from my home, and even go out for another rock just as comfortable, which I also move into my home. So a friend visits, and I say, ‘take a seat,’ indicating the rocks. At first my friend may be confused, but recognizing that the rocks are of the height to provide rest on the legs when seated upon, and finding the rocks comfortable, we sit together and carry on the visit without much further remarks on the matter.

    As my friend, and other guests, visit repeatedly, I continue offering the rocks as objects for use in sitting restfully, sometimes referring to them as ‘seats’ but sometimes, with increasing frequency, as ‘chairs.’ Eventually, my repeated guests understand that, in the context of my house, the word ‘chair’ refers to the rocks.

    Something like this did occur with the dissemination of the ‘beanbag chair.’ Some of us still remain uncomfortable with this translation, others don’t think much about it one way or another. What happened is that the concept ‘chair’ has been either driven into total abstraction (‘any object upon which to rest’) – or revealed as always a matter of shared usage signifying particular instruments contingent upon given contexts.

    For centuries, the concept ‘atom’ included the understanding ‘smallest unit of matter.’ Eventually, scientists discovered an entity that seemed smaller than any other, and that, through combinatorial interactions, formed all larger material entities, so called it ‘atom.’ Then it was discovered that this entity was composed of other entities – but since the word ‘atom’ was in play, scientists dubbed these newly discovered entities ‘sub-atomic,’ perhaps in order to preserve the history of the word ‘atom.’ Then even smaller particles were discovered, until the word ‘atomic’ now only signifies the initially discovered entity and the uses we can make of it; the original usage is of only historical interest; and the understanding ‘smallest unit of matter’ is only of use in constructing hypotheses within given research enterprises.

    The kind of schematic of representationalism Prinz is working with can’t really account for such continual modifications of signification in play as usage; and neuroscience won’t really reveal the social dynamic of it.

    Like

  19. Hi Ejwinner,

    I don’t have too much time to respond, unfortunately. Normally I would try to summarize what your claims were and then respond to them or comment on them, but I’m afraid I only have time to do the former.

    “Imagine that I’m on a hike, and find a rock on which to sit when needing rest. Finding it comfortable, and having the resources to do so, I have the rock moved into my home. I grow so fond of it that I remove all recognizable chairs from my home, and even go out for another rock just as comfortable, which I also move into my home. So a friend visits, and I say, ‘take a seat,’ indicating the rocks. At first my friend may be confused, but recognizing that the rocks are of the height to provide rest on the legs when seated upon, and finding the rocks comfortable, we sit together and carry on the visit without much further remarks on the matter.

    As my friend, and other guests, visit repeatedly, I continue offering the rocks as objects for use in sitting restfully, sometimes referring to them as ‘seats’ but sometimes, with increasing frequency, as ‘chairs.’ Eventually, my repeated guests understand that, in the context of my house, the word ‘chair’ refers to the rocks.”

    _____________________________________

    I think this is actually a perfect example of what Jesse’s theory can explain, and indeed was *made* to explain. Specifically, it was made to explain context-sensitive deployment of concepts.

    When your friends visit and you say “take a seat” indicating the rocks, the fact that your friends may at first be confused but then figure out what you are suggesting is completely capable of being explained on Prinz’s view. On his theory, what is happening is that initially they are drawing up a proxytype for the context (perhaps a prototype or an exemplar of “SEAT”) and it conflicts with what you are suggesting. However, after further information about the context is given to them, such as you continually gesturing at the rocks when you say “take a seat,” they deploy a different proxytype (a theory concept of SEAT: that it is something whose function is to allow one to sit down), and they then sit down.

    On Jesse’s view, when your friends come in later on and recognize that “chair” refers to the rocks, their conceptual recollection mechanisms have just taken the context into account and are now drawing the appropriate proxytype for the context.

    It is also worth noting here that there is no problem with the same words being used to express different concepts. Indeed, consider the sentence “Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo.” The context of the sentence allows you to draw up a different proxytype for each instance of the word’s usage. The same goes for plenty of everyday interactions.
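    Context-sensitive deployment of this kind can be pictured with a small toy sketch. This is my own illustrative model, not Prinz’s actual mechanism; every name and data structure below is invented for the example. The idea is just that one word can index several stored structures, and cues from the context select which one gets “drawn up”:

```python
# Toy sketch (NOT Prinz's actual model): one word indexes several
# stored structures; context cues select which one is deployed.
# All names and structures here are illustrative assumptions.

PROXYTYPES = {
    "chair": {
        "prototype": {"features": ["four legs", "back", "seat"]},
        "exemplar":  {"instance": "the armchair in my living room"},
        "theory":    {"function": "an artifact whose function is to be sat on"},
    },
}

def deploy(word, context_cues):
    """Pick the stored structure that best fits the current context."""
    stored = PROXYTYPES[word]
    if "host gestures at rocks" in context_cues:
        # Perceptual cues conflict with the prototype (rocks lack
        # legs and a back), so fall back to the functional,
        # theory-like structure.
        return stored["theory"]
    return stored["prototype"]

# With no unusual cues, the default prototype comes up; with the
# rock-gesturing cue, the function-based structure is selected.
print(deploy("chair", []))
print(deploy("chair", ["host gestures at rocks"]))
```

    The point of the sketch is only that nothing about the word itself has to change between contexts; what changes is which stored structure the retrieval step hands back.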

    “Something like this did occur with the dissemination of the ‘beanbag chair.’ Some of us still remain uncomfortable with this translation, others don’t think much about it one way or another. What happened is that the concept ‘chair’ has been either driven into total abstraction (‘any object upon which to rest’) – or revealed as always a matter of shared usage signifying particular instruments contingent upon given contexts.”

    ______________________________

    Just to reiterate again, Jesse thinks all the different types of conceptual structures – prototypes, exemplars, and theories – exist and populate our minds. They are just drawn up in context-sensitive ways. So the “total abstraction” concept of a chair is likely just the theoretical concept which is drawn up in certain interactions, and the “shared usage concept” of chair that you mention would likely just be the prototype concept. Each is drawn up in a context-sensitive way.

    I think what I have said here applies to help explain the case of the atom that you discuss at the end of your comment as well.

    Last note: you seemed incredulous that I wouldn’t find the list of desiderata for a theory of concepts which I outlined in a previous comment arbitrary. The reason I don’t find them arbitrary at all is that they are set in place for very good reasons. Acquisition is set in place because we obviously must acquire concepts in some way or another. Categorization is in place because experimental results from Eleanor Rosch have consistently shown that we are faster to deploy concepts at certain levels of abstraction (the basic level) than at others. Compositionality is there because we can combine concepts to create more complex ones. We want to explain these things because they are real phenomena, and ideally we could do it with a single theory of concepts. These desiderata aren’t like a constellation: it’s not as if we just clustered a set of things together because we wanted to; there were good reasons.

    If you wanted to, you could draw a distinction between a scientific theory of concepts and a philosophical theory of concepts and claim that they attempt to explain different things; I’d be open to hearing about that.

    Aravis,

    I feel I should mention what Jesse says about Wittgenstein in his book. He doesn’t attempt to conclusively refute Wittgenstein’s arguments, but he outlines two plausible responses to him.

    According to Wittgenstein (loosely), concepts can be individuated by how they are used. Having a concept is being able to follow a rule for using that concept, like stepping in line with a troop of soldiers on a march with no one person leading the march. For something to count as a rule, there must be criteria for correctness. If rules were private rather than public, there would be no way to confirm that one were conforming to them. One can’t know he is using the rule the same way from time 1 to time 2. So, rules must be public, as correctness and rules only have application in public contexts.

    Jesse’s response: First, one can challenge the claim that “correctness” can make sense outside of public contexts. For example, perhaps instead of conformity to “rules” being what matters for proper use, it is actually conformity to *laws.* Others might claim that correctness can be explained in terms of conformity to evolved or designed functions.

    Second response: A big motivation for the claim that “following a rule” in a private context isn’t possible is an epistemic motivation; one cannot confirm that s/he is conforming to a private rule in one context vs. another. However, the fact that one cannot tell that he or she is following a rule doesn’t entail the conclusion that one is not following a rule. A criterion for correctness can simply be a correct way of conforming to a rule rather than a method of verifying that one is conforming.

    One reason Wittgenstein thinks verifying a rule is important for correctness is that it involves a normative dimension: one can be punished or held accountable for failure to conform to a rule. He thought only a public setting could hold one accountable for non-conformity.

    However, one could say that what holds you accountable to a private rule is the costly result of failing to abide by certain rules. For example, misidentifying a poisonous plant as another instance of a familiar nutritious plant will be quite taxing on me. I have incentive to follow certain rules in private settings.

    This all comes from Jesse’s book on pages 19-21. He says a bit more but I only had time to write this out.

    Liked by 1 person

  20. Thanks dantip for your comment above to Robin. I know very well that if I’m given a series of pictures to identify, it will not be “the basic level” that comes up most readily (“dog”), but rather “the superordinate level” (animal). This is because it will be far less taxing for me to choose between three answers (“animal,” “vegetable,” “mineral”) than three thousand or so (“dog,” “tree,” “cloud”…). If researchers have been finding otherwise…

    Let’s say, however, that the researchers were instead to get things right. Would this be because they had found a more effective definition of “concept,” such that there is a premade mode in the mind where such things fit? Or would this perhaps validate my own position that there is not? Either way, I don’t see how these observations give us more effective definitions for the term. All this should actually tell us is that choosing between three should be less consciously taxing than choosing between thousands.

    Instead of using experiments in the attempt to build “true definitions” for terms such as “concept,” I’d give researchers the ability to define their terms in any manner they find useful. Thus they could use such definitions to build models of reality for us to check. Here you wouldn’t say, “A good theory of concepts should be able to either explain or predict (not sure which one) these phenomena.” Instead you might say, “A useful definition for the term ‘concept’ will help us build more effective models of reality.”

    Moving now to empiricism and modality, I’d appreciate your interpretation of what I happen to be in this regard. Empiricism seems clear to me, since sensory input should be quite necessary in order for any such sense to indeed be contemplated. Furthermore, the memory of a past sense should still be a current input, or change nothing. But then I’ve also presumed my ideas to be amodal, since I believe that consciousness is something that is built up (language, for example), and doesn’t rely upon countless innate “modes.” So what am I getting wrong to believe myself both “empiricist” and “amodal”?

    Barsalou’s experiment may have gone wrong as well. If I’m asked to imagine an object and list its features, or instead to just list the features of an object, I’d expect to come up with at least somewhat different results. If you look hard enough, different beginnings should always bring somewhat different ends. But then this also seems consistent to me with both empiricism and amodality.

    Now, to perhaps confuse things most, I find the Morrow, Greenspan, and Bower experiment to make sense. If a person is taking instructions to visualize walking somewhere, that person might very well quickly recall a vase that happens to be in the presently considered room, but have to think harder to recall that the TV, which isn’t in that room, isn’t there.

    So can you give me a bit more guidance?

    Like

  21. dantip:

    Going too far into the Private Language and Rule-Following arguments is probably a thread-hijack, and my comments were as much directed to others in the discussion as they were to Prinz, so I don’t want to go on too long about this.

    That said, if Prinz’s arguments are as you describe then his understanding of Wittgenstein’s critique is quite misinformed. This is not a surprise, as Wittgenstein is held with great ambivalence today in the analytic tradition, and there are few really good, knowledgeable Wittgenstein scholars working in analytic philosophy departments.

    To understand the problems of private language and rule following as epistemic in nature is to entirely misunderstand them. And it’s not just a matter of standards of correctness or incorrectness in themselves. It is a matter of *what the concept actually is*. That is, in the private — i.e. purely internal — context, a radical indeterminacy is in effect. There is no fact of the matter as to whether I should say “2004” after “2002” or whether I should say “2006.” There is no fact of the matter as to what “chair” means. Private language is impossible, because private rule following is impossible, and private rule following is impossible, because in the absence of the *social* fact of a particular use “passing muster,” there are indefinitely many uses that are consistent with one’s past use.

    Again, I don’t expect Prinz to understand Wittgenstein. It really is no longer part of an analytic philosophical education. (I myself had to learn it on the side — with a lot of help from Ian Ground and also, Peter Hacker — as it was barely a part of my CUNY Graduate Center PhD education.) But what everyone needs to understand is that the neo-Empiricism that people like Prinz are resurrecting — just like early 20th century logical empiricism and classical 17th and 18th century empiricism — was thoroughly discredited by Wittgenstein, precisely because of this reliance on the purely psychological treatment of concepts. (They were also thoroughly discredited by Quine and Davidson, on entirely different fronts.) Any theory of concepts that has a chance in hell of avoiding the Private Language and Rule-Following problems is going to have to be thoroughly sociolinguistic in nature. Put another way, and paraphrasing Hilary Putnam, “Concepts just ain’t in the head.”

    Liked by 2 people

  22. Aravis,

    Thanks for detailing a response while trying not to hijack the thread. I just thought it was important, since I did the interview with Jesse, to make it known in the discussion that Jesse at least acknowledges Wittgenstein, and to relay some of his thoughts on Wittgenstein into the discussion.

    Let me ask you one other thing then: Jesse is concerned with explaining some experimental results that emerged from the psychological literature in the ’80s and ’90s, and his account seems to do pretty well. The things he wants to explain are things like categorization task results (feature listing or category identification), and other categorization tasks such as the two experiments I mentioned above. Do you think we could say he is giving what we could call a psychological account of concepts (an account of concepts that tries to explain certain empirical results), and can avoid Wittgensteinian criticisms? Or do you think the two are necessarily inseparable?

    Like

  23. dantip:

    If by “giving an account” you mean one that actually makes sense of what we mean when we say that we “understand” or “grasp” a concept, then my answer is “no.” The answers are going to be sociolinguistic in nature, not psychological — although psychology is, of course, going to be a *part* of a sociolinguistic account.

    Look, the same thing is true of theories of “the mind.” As I mentioned in a previous discussion thread, I don’t believe that “the mind” is a thing, so theories that treat it as such are making a basic category error. (I also linked to an excellent talk by Peter Hacker, in which this point — which is, of course, also a Wittgensteinian one — is explained carefully and in detail.) Thus, there can’t be a “theory of the mind.” Instead, there are a number of theories of a number of things, all of which speak to a piece of what we mean when we use the word “mind”: as in, “I’ve changed my mind”; “He’s fair-minded”; “she’s lost her mind”; “I’m of one mind and he’s of another”; “Massimo is mindful”; “I really don’t mind”; etc.

    I should also say that this sort of disambiguation and revealing of common category errors, committed both by scientists and “ordinary folk,” is precisely what philosophy should be about. That’s what it means to say that its function is primarily critical. And this is what makes it entirely different from science, whose purpose is to advance our knowledge of the world.

    End hijacking.

    Liked by 2 people

  24. Hi Philosopher Eric,

    “Instead of using experiments in the attempt to build “true definitions” for terms such as “concept,” I’d give researchers the ability to define their terms in any manner they find useful. Thus they could use such definitions to build models of reality for us to check. Here you wouldn’t say, “A good theory of concepts should be able to either explain or predict (not sure which one) these phenomena.” Instead you might say, “A useful definition for the term ‘concept’ will help us build more effective models of reality.”

    ________________________________

    I think Prinz would say that “effective models of reality” are the very things that we infer the existence of in order to explain the empirical results. In other words, when a good theory of concepts (for Prinz) explains or predicts certain phenomena, it is a good model of reality (albeit most likely a bit simplified). Specifically, it is a good model of what kinds of things really do populate our minds.

    “Moving now to empiricism and modality, I’d appreciate your interpretation of what I happen to be in this regard. Empiricism seems clear to me, since sensory input should be quite necessary in order for any such sense to indeed be contemplated. Furthermore the memory of a past sense should still be a current input, or change nothing. But then I’ve also presumed my ideas to be a modal, since I believe that consciousness is something that is built up (language for example), and doesn’t relying upon countless innate “modes.” So what am I getting wrong to believe myself both “empiricist” and “a modal”?”

    ________________________________________

    Empiricism for Prinz is this: Sensory experience is causally prior to concepts; all concepts are copies of sensory representations that are deployed in a modality-specific medium (whatever type of representation is used in vision, that is the type of representation used for a visual concept – this is a bit simplified but it should do for now).

    Yes, deploying a concept could be the input for something. Just to take a fun case, a synesthete deploying his concept of a number can cause a synesthetic effect (say, seeing the color red). The number concept here would be the input to the synesthete’s visual system, causing him to experience redness. This is still consistent with Jesse’s account, though.

    Like

  25. Dan-T,
    Empiricism for Prinz is this: Sensory experience is causally prior to concepts; all concepts are copies of sensory representations that are deployed in a modality-specific medium

    That seems to be a very strong position: ‘all concepts‘. I can imagine that at bottom concepts have their origin in ‘copies of sensory representations‘. But I think that our mind scaffolds concepts with increasing degrees of abstraction. It is this scaffolding ability that gives our minds such richness of concepts. As an analogy, one can think of a 100 storey building. The basement level ties it to the ground, provides access and essential services. It supports the upper floors but the upper floors become increasingly different until the executive penthouse in no way resembles the basement. In the same way basement concepts, tied to sensory representations support a vast scaffolding of increasingly abstract concepts. Empirical methods give us direct access to the basement but the upper floors are out of reach of empirical methods.

    I found this thoughtful review of Jesse Prinz’s proxytype theory of concepts very helpful – http://siucc2012.ias-research.net/files/2012/07/06_mark_cain.pdf

    Coel, you questioned my quote from Georges Lemaître, see below:

    “Comme je lui parlais de mes idées sur l’origine des rayons cosmiques, il réagissait vivement … mais lorseque je lui parlais de l’atome primitif, il m’arrêtait; ‘Non, pas cela, cela suggère trop la création’” (Georges Lemaître, “Rencontre avec A. Einstein,” Revue des questions scientifiques 129 [1958]: 130, (http://alberteinstein.info/vufind1/Record/EAR000065917, part of the Einstein archives)

    Quoted in
    Worlds Without End: The many lives of the multiverse(2014) – Mary-Jane Rubenstein, page 303
    as well as
    Matter and Spirit in the Universe(2005), Helge Kragh, page 83.
    and in several other books.

    See also http://arxiv.org/pdf/1311.2763.pdf for some background.

    Like

  26. “The worst place in the world” is a phrase which occurs among accounts of early 20th century Antarctic exploration, books concerning which, for some obscure reason, I’ve recently been reading some, and re-reading others. But why do I subject unfortunate readers to such ‘comma-infested’ convoluted sentences as just above, and earlier too!?

    Well, in that respect, now I’ve also learned what I suspect to be ‘the worst English sentence in the world’ up to 2015, published as philosophy of course, namely:

    “The famous or infamous remarks against the possibility of a logically private language, is aimed inter alia, at, the thought is that even if internal representationalism did make sense, the pure representing individual, could never establish, in its own case, isolated from a potentially public practice that the representations were being used correctly.”

    Only a formerly 12-year-old genius has much chance of making sense from that, or even of delineating where is the subject, and where the predicate.

    On a 3rd topic occurring in responses here, there is no credible evidence whatsoever that the claimed delineation of the range of computability is, for classical or quantum computation, anything other than, up to ‘coding’ (not in the sense that some misuse that word to mean programming) the recursive functions (i.e. the Church-Turing thesis). But it is David Deutsch that people here should be reading (not the goofy stuff about super-turing or bio- or hyper- computation) to learn actual fundamental theoretical advances since Turing, advances in quantum computation and feasibility, its physics basis (including the Everett version of so-called multiverse) as opposed to mathematics/classical physics which did lead to the invention of conventional computers. (And yes, on the other end of the history of this, Babbage/Lovelace did great stuff way back, but nowhere near that!)

    One is tempted to wonder where the general squabble by philosophers over the meaning of “CONCEPT” leaves the definition of philosophy as “exploring CONCEPTual space”.

    Like

  27. Dantip,

    Thank you for your reply.

    The word I used to describe the given schema was ‘artificial,’ not arbitrary. A nit-pick, perhaps, but an important one. ‘Artificial’ because super-imposed on natural processes to explain what may simply not need explaining. Yes, we can re-categorize the behavior found in the rock-chair example using ‘proxytype’ and ‘exemplar’ modeling, but to what end?

    ‘Conceptual recollection mechanisms’? Let me check that in my conceptual refutation gear-box and see if there’s a proper negative response to that.

    Oops! My mind isn’t a machine; can’t do that.

    What I can do is point out that such an explanation is strictly irrelevant to what I and my friends are doing – we’re negotiating what can be signified within the context. For instance one can imagine serious disagreements – ‘rocks ain’t chairs!’ – ‘they are in my house!’ – ‘that’s it, I’m leaving!’ – etc. So the issue has somewhat more to do with social-territorial response than it does with concepts like ‘rock’ and ‘chair.’ *But* over time, the social response can change the concept. I don’t think the schema you elucidate can properly explain this. One reason that Pragmatism developed a theory of conceptualization that is use-based and instrumental is because the knowledge-base of the day was undergoing profound changes in terminology, conceptualization, and use, and non-dynamic schema like classical empiricism could not adequately account for this.

    “Acquisition is set in place because we obviously must acquire concepts in some way or another.” No, we need to generate and develop concepts in particular situations for particular purposes. This is really a fundamental upon which we will not reach agreement. The Prinz model implicates a teacher-centered education whereby young minds absorb knowledge vicariously; I suggest a learner-based model, whereby learning is accomplished by tackling goal-implicit challenges.

    Another essential disagreement here, that intersects with your discussion with Aravis. Language is *not* primarily a “deployment of concepts;” conceptualization occurs in practice, achieves contingent clarity, and then gains concretion into words found in dictionaries, text-books, etc. When a police officer orders ‘get out of that car,’ he doesn’t care how the object you’re sitting in is conceived, and you won’t stop to check your ‘conceptual recollection mechanisms’ in deciding to comply. Nor does an intimate exchange of love vows in the bedroom necessitate any clarification of ‘love’ concepts, however contextually sensitized. Wittgenstein’s hammer, shared between two carpenters, is understood in use, not by concept reliability. (BTW, the same is true of Heidegger’s ‘ready-to-hand’ hammer, also posited as refutation of assumed need for conceptual schematics in practical usage.)

    Language is a chain of signification generating internal and external responses, our own and those of others. Categorization of links in this chain into concepts is part of the game; but not a necessary part of it. Attempts to hypostatize such categorization as necessary may actually blinker us from what is really going on.

    (Also: suggested reading: http://www.peirce.org/writings/p119.html .)

    Like

  28. phoffman wrote:

    I’ve also learned what I suspect to be ‘the worst English sentence in the world’ up to 2015, published as philosophy of course, namely:

    “The famous or infamous remarks against the possibility of a logically private language, is aimed inter alia, at, the thought is that even if internal representationalism did make sense, the pure representing individual, could never establish, in its own case, isolated from a potentially public practice that the representations were being used correctly.”

    Only a formerly 12-year-old genius has much chance of making sense from that, or even of delineating where is the subject, and where the predicate.

    —————————————————————————

    Ian Ground is a dear friend of mine, a Professor at the University of Newcastle, and one of the most knowledgeable people on Wittgenstein around today. The excerpt that I quoted is from a talk given to the Royal Institute of Philosophy, a venerable institution and publisher of one of the best philosophy journals in the world.

    The nastiness of your comment — combined with the swipe “of course philosophy” — is unsuitable for a venue in which adults are trying to have a serious conversation.

    I had no difficulty understanding the sentence and neither, apparently, did the referees at the Royal Institute, none of whom are 12 years old.

    Why not go be a jerk somewhere else?

    Sorry to have to waste a post on this, but really.

    Liked by 2 people

  29. Hi Ejwinner,

    “The word I use to remark the given schema was ‘artificial,’ not arbitrary. A nit-pick, perhaps, but an important one. ‘Artificial’ because super-imposed on natural processes to explain what may simply not need explaining. Yes, we can re-categorize the behavior found in the rock-chair example using ‘proxytype,’ and ‘exemplar’ modeling, but to what end?”

    To the end of explaining how our brain performs certain feats that we routinely observe both in and outside of the lab.

    “‘Conceptual recollection mechanisms’? let me check that in my conceptual refutation gear-box and see if there’s a proper negative response to that.

    Oops! My mind isn’t a machine; can’t do that.”

    ______________________________________

    Hehe, as always I like your prose, though sometimes I have trouble deciphering things through it. Are you suggesting that we don’t have mechanisms to recall stored concepts? I feel like that would be akin to saying we don’t have mechanisms to recall memories.

    “‘Acquisition is set in place because we obviously must acquire concepts in some way or another.’ No, we need to generate and develop concepts in particular situations for particular purposes.”

    _______________________________________

    Just to be clear, “generate” could mean recall/produce a concept for this particular occasion from a long-term knowledge store (which I suggested we have mechanisms to deal with), in which case Prinz’s theory certainly can handle this (as I outlined above).

    Or it could mean acquire those things which we have within our knowledge-stores to begin with (which is what I meant by acquisition). I don’t think you meant this, since this is what you seemed to be suggesting we don’t need to explain.

    Do you really think we don’t need to explain both, especially the latter (acquisition)? Or are you suggesting that we can generate and develop concepts in particular situations for particular purposes without prior knowledge stores with which to create these concepts?

    I’ll also just note that the rest of the desiderata still need explaining even if you wanted to wave this desideratum away :).

    “Language is *not* primarily a ‘deployment of concepts;’ conceptualization occurs in practice, achieves contingent clarity, and then gains concretion into words found in dictionaries, text-books, etc.”

    Sure, but perhaps we should be clear about something. You might think there is a distinction between language and communication. Communication can be any way of getting some kind of message across to somebody else. Language serves this purpose sometimes but, per Chomsky, it isn’t necessarily used only for the purpose of communication.

    In order for language to be communicative, don’t you need concepts which are deployed in response to certain words? When you say, “the cat is on the mat,” don’t I need a shared concept of CAT and MAT in order to understand what you’re saying? Or don’t I at least need some concept of CAT and MAT to have any understanding of what you’re saying at all?

    Like

  30. Just a note from the Dan wearing his Moderator hat:

    I didn’t catch the harsh parts of Phoffman’s comment, so I felt it only fair to allow Aravis to respond. Hopefully this won’t become a regular thing.

    Liked by 1 person

  31. phoffman56: “But it is David Deutsch that people here should be reading (not the goofy stuff about super-turing or bio- or hyper- computation) …”

    Sometimes “goofy” stuff might turn out to be good stuff. If it’s useful in the end, that’s what matters.

    “We recursion theorists were busy doing our sums while the natural world around us computed in mysterious and wondrous ways.”
    — S. Barry Cooper

    Like

  32. Hi Philip,

    A natural number computation that a universal machine can’t do? I would have thought they would have made more fuss about finding that.

    Hi Hal and Coel,

    Say we have two identical mechanical instantiations of Turing Machine logic. Without the tape they obviously have identical mechanical complexity since they are identical.

    So we take two large rolls of tape of the same length but with different patterns of square and space, and load the beginning of each into a machine.

    By your definition, one of these machines might have become hugely more mechanically complicated than the other by virtue of the complexity of the program on the tape. Let’s see your reasoning.

    Complexity is best understood in terms of the amount of information needed to specify it.

    No. That is a definition for a mathematically interesting variety of string complexity, but it is not even a good measure of programmatic complexity, never mind mechanical complexity.

    For example, for any mathematically complex operation one can posit a sequence of mathematically trivial operations in a certain order which is, by this measure, more complex.

    However, I am happy to stipulate the programmatic complexity; you still need to show that this increases the mechanical complexity of the machine it runs on.

    Thus highly specific software is hugely complex. The combination of the tape and Turing machine (both are integral to the result) is certainly a “mechanical device”, and thus a hugely complicated one.

    Non sequitur. It is not enough to claim that programmatic complexity increases mechanical complexity. You need to be specific and show the specific increase in mechanical complexity that you claim is caused by the increase in programmatic complexity.

    Let’s take the two mechanical devices I described above, with the beginning of the tapes loaded in each.

    We crank the handle of each once. Each machine processes the space or the square, as the case may be, under the head according to the same mechanical principle.

    Neither machine has done anything more complicated than the other.

    Now we crank the handle of each again – same thing.

    We can crank each handle as many times as we like, but each time we crank it, neither has done anything more complex or complicated than the other, no matter what the differential in programmatic complexity is. Even if mechanical complexity were somehow cumulative, neither machine would have done anything more mechanically complicated or complex than the other. That is kinda the point of what Turing and others discovered about the nature of computation.

    So what mechanical complexity has increased, and how?
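    The crank-by-crank argument above can be sketched in code (a toy illustration of my own, not anything from the thread): a fixed rule table plays the role of the machine’s mechanism, and each crank applies the identical read-write-move operation regardless of which tape is loaded.

```python
# Two identical "crank" machines, differing only in tape contents.
# Each step applies the same fixed rule table, so the per-step
# mechanics are the same no matter how "complex" the tape's pattern is.
# (Toy rules and tapes are arbitrary choices for illustration.)

# Rule table: (state, symbol) -> (new_state, write_symbol, head_move)
RULES = {
    ("A", 0): ("B", 1, 1),
    ("A", 1): ("A", 0, 1),
    ("B", 0): ("A", 1, 1),
    ("B", 1): ("B", 0, 1),
}

def crank(state, head, tape):
    """One turn of the handle: read, write, move -- the same fixed
    mechanism regardless of what pattern sits on the tape."""
    new_state, write, move = RULES[(state, tape[head])]
    tape[head] = write
    return new_state, head + move

tape1 = [0, 0, 0, 0, 0, 0]   # a "simple" tape
tape2 = [1, 0, 1, 1, 0, 1]   # a "complex" tape
s1 = s2 = "A"
h1 = h2 = 0
for _ in range(6):
    s1, h1 = crank(s1, h1, tape1)   # identical operation applied...
    s2, h2 = crank(s2, h2, tape2)   # ...to each machine, each crank
```

    The outputs differ, of course, but no crank of either handle was mechanically any more complicated than the other.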

    Hi dantip,

    I wrote quite a long response, but could not get it so that I was happy it said what I wanted it to say. Basically, I am not offering an alternate theory, just questioning some of Prinz’s premises, in particular questioning whether or not he has really jettisoned intentionality or is assuming it (for example, in saying that a representation can be stored in memory). If you are interested, here is the last best draft: https://docs.google.com/document/d/16oWZLWQA2BfSw_BKhDA-gxh1YwuMku6WZMvBjGTE-Vs/pub

    Hi mod, please do me a favour and discard the last in favour of this 🙂

    Liked by 1 person

  33. Dantip,

    I didn’t mean to ‘wave away’ the ‘acquisition-need’ hypothesis, but to suggest that reconstructing the behavior in other terms opens doors to alternative perspectives, alternative theories and understandings. That’s why I say that the acquisition-need hypothesis suggests a teacher-centered education, while a generative-development model suggests a learner-centered education. The claim that a strong theory of concepts “must” satisfy criteria X, Y, and Z unnecessarily closes the door to possible alternatives that might have greater use.

    I won’t answer questions concerning lab-experiments in psychology, because every such experiment requires close, critical examination; and my general rule of thumb comes from Morse Peckham, who argued that lab subjects are basically performers in a theater, playing to their audience.

    I largely have nothing to say on concept-combination, since this is obvious and inevitable, regardless of what theory of conception one holds. ‘Categorization’ is theory-bound – that is, how we understand conceptualization will determine how we understand categorization, since categorization is a function of conceptualization.

    However, allowing such ‘desiderata’ does not surrender the ground to the theory Prinz offers.

    “You might think there is a distinction between language and communication.” No, I believe both language and communication are part and parcel of the process of semiosis – sign production, signification, interpretation, response.

    Consider: Watch the Machery/Prinz discussion I posted in the previous thread – with the sound muted. Notice Machery’s restlessness (he is recurrently shifting his weight and changing the camera angles) – this suggests that he feels he has a lot more to say than he actually says, further suggesting that he is restraining himself in deference to his interlocutor. Now note Prinz’s calm – virtually frozen when not speaking. He may be clearing visual space as his own show of respect for his interlocutor. Notably, he only becomes truly animated when discussing his own take on the subject, which is to be expected.

    What ‘concepts’ are they ‘deploying’ in the signification of their bodily signs? Yet the communication is clear.

    “When you say, “the cat is on the mat,” don’t I need a shared concept of CAT and MAT in order to understand what you’re saying?”

    No, we need a shared understanding of appropriate response. I suspect that when my mother first told me she loved me, I responded appropriately – without having any sense of what the concept ‘love’ might mean. Indeed, such experience undoubtedly helped me develop such an understanding – how else could I have learned it?

    (I noted that Prinz’s understanding and my own differed in their implicit models of education. Writing the above, I now wonder if Prinz’s theory isn’t lacking a robust element of educational modeling. Learning begins at the breast – isn’t that obvious? Prinz’s neo-empiricism – quite like the classical empiricism before it – seems to assume at the earliest a young adult. Since learning occurs prior to our having “knowledge-stores,” the question ought to be how learning develops such knowledge stores; since, I suspect, such processes will continue throughout our lives.)

    Liked by 2 people

  34. Hi Ejwinner,

    “I won’t answer questions concerning lab-experiments in psychology, because every such requires close, critical examination; and my general rule of thumb comes from Morse Peckham who argued that lab subjects are basically performers in a theater, playing to their audience.”

    _________________________________________

    Alright, but then I think you basically got off of Prinz’s boat before he could even get started. His whole book relies on the validity of psychological experiments.

    But this does concern me. You seem to be suggesting with this that you think that all, or most, lab subjects exhibit what is called “response bias” in psychology. This is when a subject guesses what the experimenters are asking for and, either consciously or not, alters their verbal report or behavior as a result. So their behavior is entirely artificial, and explaining it won’t have any value in explaining how they ordinarily behave.

    For a few reasons I suggest this is really not the right attitude to have toward psychology experiments. First of all, there are controls to prevent such a thing, such as misleading subjects, in various ways, about the nature of the experiment.

    Second, psychology experiments in cognitive and vision science have had rather extraordinary predictive success, which has translated into many benefits in ophthalmology. Also, work in cognitive psychology has explained many *ordinary* ways we go about doing things. For example, the distinction between System 1 and System 2 reasoning goes a long way toward explaining the ordinary answers we give to people and our ways of reacting to them. I think this is good evidence that subjects aren’t just “play acting” and giving us behavior that doesn’t reflect how they normally act.

    My last point is just anecdotal: I’ve been the subject in a few psychology experiments and have never felt any particular reason to “play-act.” Perhaps I was doing so unconsciously, but the second point made above supports the claim that I wasn’t.

    I suspect this quote is more of a relevant problem for *social* psychology as opposed to cognitive psychology and the perceptual sciences.

    I warmly suggest that the appropriate view to have is that most psychology experiments take place in *idealized* conditions (like many physics or chemistry experiments), but that explaining behavior in these idealized conditions very much allows you to extract general principles about how people ordinarily do behave, just like how the ideal gas laws help to predict and explain how gases behave.

    Anyway, this is more of a discussion for the philosophy of science part of psychology. Like I said, if you really don’t think that experiments get at the way people really work, you and I have been talking past one another in some respects.

    “Watch the Machery/Prinz discussion I posted in the previous thread – with the sound muted. Notice Machery’s restlessness (he is recurrently shifting his weight and changing the camera angles) – this suggests that he feels he has a lot more to say than he actually says, further suggesting that he is restraining himself in deference to his interlocutor. Now note Prinz’s calm – virtually frozen when not speaking. He may be clearing visual space as his own show of respect for his interlocutor. Notably he only becomes truly animated when discussing his own take on the subject, which is to be expected.

    What ‘concepts’ are they ‘deploying’ in the signification of their bodily signs? Yet the communication is clear.”

    ________________________________

    I feel like this case is precisely why you would want a distinction between language and communication. Bees can communicate when they perform their waggle dance, but intuitively they have no conceptual repertoire. We can also communicate, through things like body language, but intuitively this, just like the bees’ behavior, isn’t a case of expression of concepts. Chomsky calls this, I think, “mere signaling.” It is nothing different from how plants can signal one another through the release of certain chemicals.

    When it comes to *language* use, though, we deploy concepts. The reason Chomsky thinks that language and communication are different has something to do with this – he thinks the function of language is mostly for thinking, as opposed to communicating. In other words, engaging in meaningful syntactic operations on concepts.

    “No, we need a shared understanding of appropriate response. I suspect that when my mother first told me she loved me, I responded appropriately – without having any sense of what the concept ‘love’ might mean. Indeed, such experience undoubtedly helped me develop such an understanding – how else could I have learned it?”

    ________________________________

    But if you could ask the baby to write down paradigmatic features of love, his answer would obviously be vastly different from what an adult says. But two adults from the same area are likely to give the same feature-listing responses, suggesting they have the same concept, i.e. they have the same cluster of features falling under their concept of “love.” This isn’t just an artificial experimental task; people write about these things in books and elsewhere. Indeed, a ton of philosophy goes into listing the relevant features of certain concepts and disagreeing about them.

    In other words, it’s not the case that the only kind of behavior we want to explain is whether a person hugs you or not after you say “I love you” to them (or whatever the behavior is that the baby may have done here). There are all sorts of other behaviors that need to be explained, and I don’t think that saying “we have a shared understanding of appropriate response” really helps to explain these things.

    Like

  35. dantip:

    But don’t you see? If concepts are not mental objects, then it doesn’t matter what experiments and tests you do. You’re looking in the wrong place.

    A similar thing is true of mind. If mind is not a “thing,” then a theory of it, in the sense of “the mind is the brain!”, is worse than wrong. It’s a basic category error. And you can do studies all day long. Your object of study is the wrong one.

    Take the classic example of a category error, described by Gilbert Ryle in The Concept of Mind. Someone on a tour of a University is shown the dining hall, the student center, the dormitories, the lecture halls, etc., and when the tour is over, asks “Yeah, but where’s the university? You haven’t shown me that.” The mistake he has made, of course, is in thinking that “university” refers to a concrete object, like “Dormitory A,” when in fact, it is a term describing the institutional relationship between these concrete objects (and other things).

    So what good would it have done if the person had taken out all sorts of measuring equipment and begun an observational investigation to find out where the university is and what it’s like? No good at all.

    The same is true here. Concepts are not mental objects — indeed, they *can’t* be — for the reasons I’ve described. So, the experiments Prinz talks about are about as useful as the imaginary guy, peering through his optical devices and attending to his spectrometers looking for the university.

    Liked by 1 person

  36. Hi Aravis,

    I guess I really have a hard time buying into the idea that concepts aren’t mental objects. I’ll have to watch and read more Ian Ground to get my intuitions churning. It’s funny, I listened to one lecture you sent me and Ground said, “the attitude of analytic philosophy toward Wittgenstein has been very much passive-aggressive.” He is definitely right.

    My thought has always been that as long as you think there are kinds which are not mere social constructions, then there is good reason to think that concepts are mental objects as opposed to constructed objects like constellations or Universities. The reasons are introspective, explanatory, and philosophical reasons (though there are clearly reasons against as well).

    …Doing philosophy is hard, and has a 90% chance of inducing a headache.

    Like

    I would like to apologize to both Aravis, and to the author of the string of English words which he quotes, but only if I am incorrect in the following grammatical criticism. And my thanks to anyone who might be able actually to explain how I am wrong, if that is the case.
    The sentence takes exactly the form

    ‘X is aimed inter alia, at, Y is that even if Z, A, could never establish, in its own case, isolated from public practice that B.’

    Here X, Y and A are noun clauses, and Z and B would be, in isolation, perfectly good assertive sentences. Quite apart from the apparent illiteracy of the comma after A, I cannot see how this could, with any possible specifics for the 5 variables, be any kind of a meaningful sentence in English, assertive, interrogative, imperative or whatever. (I’m aged, so apologize, if needed, for old fashioned grammar terminology.)

    The Royal Society and everybody else in scholarly publishing has a tough time getting thorough refereeing, and in any case, again if I am correct, a goofy string of words is surely something the author should notice before the non-sentence sees the light of day.

    I’d be happy alternatively to see how a very minor addition or deletion here would make that word salad grammatically sensible, but right now I’m not holding my breath on that.

    Possibly Aravis misquoted, and the author at least deserves both of our apologies.

    Given my grammatical criticisms, I cannot of course make any meaningful sense of it. Quite likely it will seem either meaningless or false to me anyway, if the kindly person referred to above is able to reconcile me to it being at least grammatically correct.
    But that is a different matter:

    My deserved reputation here is presumably, among other ‘sins’, as a skeptic about there being much at all that is worthwhile in Wittgenstein, either early or late. But I must admit that skepticism to be much more based on his either misinformed or dishonest, but detailed, criticisms about what Godel had achieved. That is easily found: look at Stanford Encyc…, popular here, but not at the hagiography on Wittgenstein in general where that major matter is studiously avoided, but rather on the article about his mathematical philosophy by the fellow from Canadian cowboy country whose name escapes me right now. I admit it is not based on reading much other of Wittgenstein’s writing. Steven Weinberg says that he is entertaining as a writer. But I get the distinct impression that, in private conversation, Weinberg would have added to that sentence ‘of fiction’. I do keep waiting for Wittgenstein groupies like Ray Monk to say something that begins to change my negativity here.

    The pompous emptiness of a phrase along the lines of ‘Whereas one may not get it to pass, thereof must one wait in line for the next opportunity at the outhouse’ just leaves me cold. Maybe the German to English translations are misleading, but I doubt that.

    Like

  38. dantip:

    In order to understand why concepts cannot be mental objects, you need to understand why there cannot be a private language and why one cannot privately follow a rule. Those two ideas are at the heart of the critique.

    Ian’s talk is certainly a good, basic overview. And as sacrilegious as it will be to some, so is Kripke’s “Wittgenstein on Rules and Private Language.” Kripke frames the rule-following and private language arguments in ways digestible for analytic philosophers and thus goes about things in a very non-Wittgensteinian way, but it is still a good place to get the gist of the problems Wittgenstein is pointing out.

    One can also come to the same conclusion — i.e. that concepts can’t be mental objects — from a different angle. Hilary Putnam’s “The Meaning of ‘Meaning'” also demonstrates why “meanings can’t be in the head,” which means that *concepts* can’t be in the head. Oscar and twin-Oscar, in 1760, are in *exactly the same psychological state* and yet, they are entertaining *different* concepts.

    But I really do suggest trying to wrap your mind around the Wittgensteinian arguments. They are paradigm changing in the way that Hume’s skepticism was and thus, are essential to understanding a number of fundamental faults in the majority of the research programs being pursued by today’s neo-empiricists. Indeed, Wittgenstein is to twentieth century analytic philosophy what Hume was to Enlightenment Empiricism and Rationalism.

    This is my last post, so I won’t be able to reply further. Hope it was somewhat helpful.

    Like

  39. Dantip.

    “I think you basically got off of Prinz’s boat before he could even get started.”

    Yes; although I do respect him and believe he would make a good teacher, given his grasp on the subject matter.

    “I’ve been the subject in a few psychology experiments and have never felt any particular reason to ‘play-act.’” – Of course not! If you did, Peckham’s argument would be meaningless, as would be the whole notion of “response bias”.

    “- there are controls to prevent such a thing” – No, there are ‘controls’ designed to allow for ‘deniability’ – in order to enhance papers with grants behind them. (I’ve worked in the university too; let’s not get carried away with presumed justifications; I know what they’re all about; and chose not to play that game, which is why I’m not in the academy.)

    “- work in cognitive psychology has explained many *ordinary* ways we go about doing things.” What you have is an explanation that implicates further research – those wonderful grants! – but you don’t have what we really do in going about doing things. That requires getting into the stream of ordinary doing things.

    Philosophy – good philosophy, academic or ‘amateur’ – never begins top-down; it always begins with someone wondering ‘what are we really doing here?’

    ‘What ‘concepts’ are they ‘deploying’ in the signification of their bodily signs? Yet the communication is clear.’
    “I feel like this case is precisely why you would want a distinction between language and communication.”

    I can’t give a lecture on semiotics here. I’ll say that, despite my respect for Chomsky and some of what he has done, especially politically, I think his principal theory (genetic language programming) is useless, and semiotics proves it. As said before, the Analytic tradition has chosen to ignore semiotics. (Semiotics was almost re-introduced by Wittgenstein; but, as Aravis notes, the tradition has chosen to reduce W. to a footnote.) So what can be said?

    We swim among signs. As animals, born this way. As social animals, our semiosis must always be brought into congruence with what is expected of us.

    Unlike other animals, our signifying procedures are extraordinarily complex (and open to wide interpretation). Speaking, gesturing, embracing, thinking, urinating, eating – signification is the fundamental ontology of the human animal experience we are – ‘language,’ ‘communication,’ self-talk, painting, music, scratching our privates – all just instances of the whole.

    But the whole is never complete – how could it be? On our death-beds – “oh, and one last thing -” Nope, sorry.

    Every situation is new – yet the same; everything we encounter is an instance of re-interpretation of signs we thought we understood, now must understand anew – hence conceptualization.

    “But if you could ask the baby to write down paradigmatic features of love,” – but we can’t do this – the whole point. Where is the neo-empiricist understanding of what the baby does? That’s where learning first takes place – that’s the starting point of any conceptualization we could possibly develop.

    We’re signifying animals, not conceptualizing machines.

    Like

  40. Phoffman,

    Unfortunately I let your comment slip through once again, but now that I re-read it, it seems to be laced with jabs at Aravis and Ian Ground more than it does apologetic content. Please stop doing this. It gives the discussion thread negative undertones, and nobody appreciates that.

    Liked by 2 people

  41. I have greatly enjoyed this conversation between Aravis, EJWinner, Robin, Hal and Dan-T(among others). It is an exemplar of thoughtful, respectful conversation. This is Scientia Salon at its best. I have learned so much from this interchange. Hoffman, I urge you to learn from and emulate their example. In that way you can become a respected and productive member of the Salon.

    Dan-T,
    Chomsky calls this, I think, “mere signaling.” Nothing different from how plants can signal

    What we do can hardly be called ‘mere signalling’ and can hardly be compared to plants. Every social gathering is a rich and complex web of signalling where the signals often count for more than the explicit communication. Asperger’s syndrome people have severe problems because they cannot read the signals. Meet a new person and your first and abiding impressions are formed by signalling.

    A lovely example of signalling is Hewlett-Packard’s practice of management by wandering around (MBWA). We too were urged to practice MBWA. I soon found we were immersing ourselves in the web of signalling taking place in the workforce, learning a great deal from it that we could not learn from talking to individuals in our offices. And we too were returning important signals to the workforce.

    My experience leads me to think that signalling is rich, complex, continual, multidirectional and conceptual. The signals evoke a conceptual response, as indeed they are meant to do.

    I am with EJWinner when it comes to psychological experiments. It is a deeply embedded part of our nature as socially communicative animals to constantly and very quickly evaluate intent, threat, opportunity and to posture or position ourselves accordingly. I appreciate that experimental design is intended to compensate for this but I seriously doubt that is possible. Even worse, the subjects of the experiments are mostly undergraduates in psychology departments.

    … he thinks the function of language is mostly for thinking, as opposed to communicating. In other words, engaging in meaningful syntactic operations on concepts
    while Aravis says
    But don’t you see? If concepts are not mental objects, then it doesn’t matter what experiments and tests you do. You’re looking in the wrong place.

    For a long time I thought as Dan-T did, but reading Aravis’ words was an ‘aha’ moment for me. Correct me if I have it wrong, but it now seems to me that all meaning is socially constructed and that language is a social construction, with internal concepts being the shadow of that social construction. On that understanding, the tests examine the shadow and not the real thing. Written language is the codified, abstract form of our signalling, and the concepts really reside in our shared communication, not the brain. If that is right, the subject of our examination really should be what we express in our writings. Oh, wait, doing that is called ‘philosophy’! Philosophy might be called the ‘laboratory of the mind’.

    Oh wow, so many ideas flow from this understanding, especially the role of intuition. I would call internal concepts ‘intuitions’ that are responses to our meaning making social interactions.

    I must go for a walk to clarify my thoughts. Perhaps I am taking this too far in the exhilaration of my ‘aha’ moment.

    Liked by 2 people

  42. Hi Robin,

    By your definition, one of these machines might have become hugely more mechanically complicated than the other by virtue of the complexity of the program on the tape.

    Your original question was whether I thought that a Turing engine plus software that allowed it to emulate a brain was a “hugely complicated mechanical device”. My answer is yes. The combination (machine + tape) is a device that is (1) mechanical, and (2) hugely complicated.

    If you’re now asking about some notion of “mechanical complexity” that is different from “complexity”, well, I’m not sure what that concept is, but anyhow I was talking about “complexity”.

    If your point is that the combination is of (a) a simple mechanical device, and (b) a complex non-mechanical software tape, then ok, fine, but that makes no real difference to me since the combination is what is important.

    For example, for any mathematically complex operation one can posit a sequence of mathematically trivial operations in a certain order which is, by this measure, more complex.

    By the information *needed* to specify the tape, I was intending to mean the *minimum* information needed.
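    Coel’s “minimum information needed” criterion is essentially the idea behind Kolmogorov complexity. Here is a minimal sketch of that intuition (my own illustration; the tape patterns are arbitrary, and compressed size is only a crude upper-bound proxy, since Kolmogorov complexity itself is uncomputable):

```python
# "Amount of information needed to specify" a tape, approximated by
# compressed size. A general-purpose compressor gives only a rough
# upper bound on the true minimum description length, but it is
# enough to show the intuition.
import random
import zlib

def description_length(tape: bytes) -> int:
    """Compressed size in bytes: a rough proxy for the minimum
    information needed to specify the tape's pattern."""
    return len(zlib.compress(tape, 9))

regular = b"01" * 500                  # a highly patterned 1000-byte tape
rng = random.Random(0)                 # seeded, so the example is deterministic
irregular = bytes(rng.randrange(256) for _ in range(1000))  # patternless tape

# Both tapes have the same length, but the regular one needs far less
# information to specify than the irregular one.
```

    On this measure the two tapes are the same length yet wildly different in complexity, which is exactly the distinction the thread is arguing over.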

    Hi Aravis,

    I’ll avoid comment on the Private Language issue (since I’m not sure I can follow Wittgenstein’s argument**), but on Putnam and twin-Oscar:

    Hilary Putnam’s “The Meaning of ‘Meaning’” also demonstrates why “meanings can’t be in the head,” which means that *concepts* can’t be in the head. Oscar and twin-Oscar, in 1760, are in *exactly the same psychological state* and yet, they are entertaining *different* concepts.

    This one I don’t find convincing, since the standard reply seems to be a sufficient rebuttal. That reply being that the concept is indeed the same in the two twins since the concept is “sufficiently water-like” where water-like is specified by a list of properties including taste, look, feel, etc.

    In neither case, in 1760, does the concept include the chemical constitution (since neither knows about it), and thus there is no difference between the concepts.

    The “concepts” in the minds of the two twins only diverge if/when they learn about the non-common aspects of “water”, and if they’ve “learned” that then their physiological and psychological states must then be different.

    [**Though I did get as far as the SEP entry: “Even among those who accept that there is a reasonably self-contained and straightforward private language argument to be discussed, there has been fundamental and widespread disagreement over its details, its significance and even its intended conclusion, let alone over its soundness. The result is that every reading of the argument (including that which follows) is controversial”, which suggests that the argument is not knock-down.]

    Liked by 1 person

    ejwinner, it’s good to hear that you’re a harsh critic of modern philosophy and its related sciences. I’m not yet sure if you theorize greater causes for our situation than standard types of maladies (like “money,” “politics,” and so on), though I’d also hope for you to entertain the explanation provided below.

    Aravis, Wittgenstein’s ideas are indeed on my list of things to explore. Of course no one seems to think there’s anything simple about them, and I do happily remain “simple.” We’ll see. You’ve mentioned that the best examples of philosophy concern its “criticism,” though I’m not yet convinced that this area has been sufficiently healthy (and you yourself lament the fall of Wittgenstein). As things stand today however, philosophy encompasses far more than just criticism. I find the most important of them to be “ethics,” but doubt that psychologists, for example, will choose to relieve philosophers of this burden.

    dantip, please don’t let me rob you of your wonderful optimism — it’s surely in your professional interests to believe in the current system as it stands today. Nevertheless my objection does happen to be quite fundamental:

    While halfheartedly going through the Mark Cain pdf that Labnut provided above, my thoughts wandered to our family cat relaxing on the bed with me. I find its behavior to suggest consciousness quite clearly, or that the creature is identical to me in this manner. Nevertheless, science/philosophy has not yet been able to develop serious models of the consciousness dynamic. Therefore it may well be that the countless “higher theories” of Prinz and such will be plowed into oblivion once we do have fundamentals at our disposal. This was actually my thought as an idealistic college kid two decades back, so I decided to work on this independently, thus insulating myself from apparent structural deficiencies. Once we have such tools at our disposal, true progress should indeed be made.

    So then what has my approach been? I believe that the key to this lies in “ethics.” Of course here some will shake their heads and complain “Morality?” I do find this satisfying, since such social constructs are most certainly NOT what I’m referring to. Instead I’m talking about the biological nature of good/bad for the conscious entity — perhaps “physical ethics.”

    Observe that if we were to identify the aspect of consciousness which manifests itself as good/bad for the conscious entity, then we could use this premise as an ideology from which to “properly” lead our lives and structure our societies. Furthermore whether acquired through a supernatural god or a perfectly natural evolution, this would define good/bad for any given subject, whether individual or social.

    “Qualia” is what I’ve found to be this good/bad aspect of ourselves. Thus I’ve used this premise to theorize a broad description of human dynamics in the attempt to found future mental and behavioral sciences. I do very much hope for others to join me in competing for this tremendous prize!

    Like

  44. I know Aravis has used up his quota of comments here, but I do want to make the point that, as human language is quite clearly a social and cultural phenomenon, the irreducibly social nature of language-generated concepts — the concepts associated with specific words — is not in dispute. Briefly, this is how I see the situation…

    The various interacting systems which make up a human language (phonological, morpho-syntactic, semantic, pragmatic etc.) can be seen as abstractions which linguists infer from observing the linguistic behaviour of a particular group or set of groups.

    But it is also the case that each of us has developed a more or less specifiable competence (or set of linguistic competencies) which is often referred to as an idiolect. Obviously, this is somehow encoded in the brain. This linguistic knowledge or competence, because it corresponds in large part with the idiolects of others in our speech community, allows us to understand what others are saying and to speak comprehensibly. There is no great mystery here.

    On the more general question of concepts, like Machery I have reservations about using this one word to cover a range of very different things.

    Like

    In answer to Massimo and Hal’s comments on the other thread concerning my citing of the CTRL-ALT-DEL command in computers, a less overt example is the 640KB limit on working memory in the original DOS operating system. Bill Gates is even cited, circa 1981 (though he denies having said it), as claiming that 640KB would be all the memory a PC would ever need. I purchased an Apple laptop on Saturday with 512GB.

    Back in 1981 the classic application of the PC was as a business desktop, as Gates envisioned it. The classical applications were word processors and spreadsheets — applications based on the rules of strict logic and arithmetic. What has happened over the years is that computers have become incredibly graphical. Interestingly, computers now traffic in graphical conceptualizations of reality. It all relates because the working memory of computers moves around large chunks of data; the computer program is, of course, graphical metadata manipulation. The Wittgensteinian insight that brains or minds think or conceptualize in pictures seems to hold for computers.

    Like

  46. Victorpanzica:

    Please re-read the discussion thread. It is *not* the case that Wittgenstein thinks “brains or minds think in or conceptualize in pictures.” Indeed, his entire later philosophy — i.e. The Investigations, Blue and Brown Books, etc. — is devoted to *opposing* that view.

    Liked by 1 person

  47. Hi Dan,

    I enjoyed the interview.

    ‘When your friends visit and you say “take a seat” indicating the rocks …’

    I like your ‘process’ example, it helped me understand what Jesse was getting at.

    I think Jesse’s idea of proxytypes, from an internal cognitive/psychological perspective, can make it easier to talk at a higher level and in a more ongoing fashion about prototypes, exemplars, ‘theories’ and contextual framing.

    When I was first exposed to the various concept theories, my first reaction was “obviously they are all right to a certain degree, even amodal ones.” At the same time, I think I agree with Aravis: talk of things like objects, elements, constructs, proxytypes, or concepts being in the brain (or mind, or both) is wrong, or at best incomplete.

    But we need narratives to advance, and to expect any theory to be The Right One is unproductive; in that sense I feel Proxytype theory is certainly useful.

    Varia,

    In my comment on Part One, I feel I overextended the word “concept.” It seems to me now that it might be best to avoid using the word without its linguistic perspective (sociolinguistic, to be precise ;).

    On the use of words like ‘mere’ and ‘just’: from the way I and others often use them, I get the impression they’re often a sign that something like a false dichotomy is being slipped into the conversation.

    Like

  48. Hi Aravis, the comment was meant for the previous discussion thread, but it has relevance to this thread, and the moderator can use his judgment to reject it.

    As the Scientia Salon header says, ‘Philosophy, Science and All Interesting Things In Between.’
    I think the history and structure of PC hardware, operating systems, and software is interesting and relevant to Proxytype theory. For philosophers, understanding the topic would be very relevant.

    Steve Jobs was infatuated with graphical fonts from the beginning, as early as the ’70s. The GUI developed at Xerox PARC was technology he acquired to build the early Apple PCs. We also know of Jobs’ infatuation with the unity of technology and the human experience, including not just the visuals of the screen but the touch and feel of the peripherals. This also led to his obsession with proprietary ownership of all aspects of Apple’s designs.

    Bill Gates was interested in proliferating the PC as a standard device on every business desktop, but saw the aesthetic and gaming aspects as a novelty. He also licensed the MS operating system for use in lower-priced clones as well as IBM PCs, to gain the worldwide desktop market for MS applications.

    An interesting case of two men with contrasting visions, fierce competitors working toward the same ends.

    This is very relevant to the Proxytype theory discussion.

    Like

Comments are closed.