Ned Block on phenomenal consciousness, part I

Ned Block

by Dan Tippens

Our Assistant Editor, Daniel Tippens, asks Professor Ned Block, of New York University, about his work on the relationship between phenomenal consciousness and access consciousness. This is part I of that interview; part II will be published later this week.

SciSal: I first wanted to start with an introduction to the concept of consciousness. As you’ve written in several of your papers, you call it a “mongrel” concept. It has a lot of different meanings that people frequently conflate. And it’s really important to distinguish between them in order to make good conceptual advances. So, could you distinguish between some of these senses of consciousness before we go any further?

Block: So the one that I’m most interested in is what I call phenomenal consciousness, which some people cash out as the redness of red, what it’s like to see or smell or hear, that internal experience that you get when you have a sensation or images in your mind. That’s what I call phenomenal consciousness. Now, I think that’s something we share with animals — certainly other mammals. And you know I believe that it does not require language or much in the way of cognition — maybe nothing in the way of cognition.

Another sense of conscious and consciousness is the one in which we are conscious of things. We are conscious of our own thoughts. We can be conscious of our pains, of our perceptions. That involves some notion of monitoring, some feedback and maybe some awareness of yourself. So that is another notion. That’s called monitoring consciousness or self-consciousness.

Another idea is what I call access consciousness. And that’s when you have an episode of phenomenal consciousness and it is available to your cognitive systems. So you can think about it. You can reason about it. So you smell a certain smell — smoke. And that fact of your smelling smoke can be used by you to think about calling the fire department, or to think about investigating the source of the smoke. That’s what I call access consciousness.

So those three: phenomenal consciousness, monitoring consciousness and access consciousness I think are the three main ideas. And of course, you can’t figure out what something is in the brain until you make the distinctions you need to between really different fundamental phenomena. I think those are three fundamentally different phenomena, although they overlap and interact.

SciSal: I was hoping I could get you to say a little bit more about access consciousness before we go into phenomenal overflow. I remember when I first read about access consciousness, I leaned too heavily on the behavioral aspect. So when I thought of access consciousness, I thought of the ability to report — verbally report, or maybe write something down, or perform some external action. However, this isn’t what is fundamental to access consciousness for you.

Block: Right.

SciSal: Could you say a little bit more, to distinguish between just mere behavioral report and what you mean by access consciousness?

Block: Yeah. So let me say I think that report is a good first approximation to access consciousness. One important reason it’s not adequate is that animals have access consciousness. And you know by and large, they can’t report. So it’s the extent to which a phenomenal state is available to cognitive machinery. And so a mouse or a dog or a cat can be access conscious of a smell or something they see. And then a representation is sent to their machinery of reasoning and deciding, even if they can’t report it.

One very common idea is the idea of the neuronal global workspace. The main adherent of this view is a French group led by Stanislas Dehaene, who has a book that just recently came out on how consciousness works [1]. Now, unfortunately, he thinks that all there is to consciousness is broadcasting in the global workspace. So he thinks consciousness is just access consciousness. He doesn’t really recognize phenomenal consciousness. Or when he does recognize it, he thinks that it can’t be studied scientifically. And the reason he thinks that is that after all, the main data for a theory of consciousness is what people say about their own conscious states. They give numerical ratings for how visible a stimulus is. They tell you what the content of their state is. Those are all features of access consciousness. So it seems to him that’s all you can study. He’s been arguing that for years. But one of the most interesting developments is that people have figured out how to study phenomenal consciousness. And I believe they have pretty much shown that phenomenal consciousness is something quite distinct from access consciousness.

SciSal: One thing that’s very crucial to the whole debate about phenomenal overflow is selective attention. It says on your website, at least, that you’re starting to write a book on it, given its importance. And I know that Wayne Wu just published a book recently on attention as well. It’s becoming a very hot topic in philosophy. Now, I was hoping we could just kind of discuss conceptually some things about selective attention before we go into your arguments. First of all, how would you define selective attention?

Block: Well, I think that what William James said about it is largely right. It involves both amplifying or focusing on representations and crucially suppressing other representations. So it’s a combination of amplification and inhibition, amplifying the things you’re attending to or the representations of the things you’re attending to and suppressing the things you’re not attending to. So it is a process in which that happens. And the clearest cases are visual attention.

Now, it complicates things a bit that there are three types. So there is spatial attention, in which you’re attending to an area of space. And then the evidence is that the attentional field there has what is known as a Mexican hat shape. A Mexican hat has a peak and then a dip, and then it goes up again. So the idea is there is a kind of amplification at the center and then inhibitions surrounding that center. And then it goes back to a higher level outside that. So spatial attention is thought to have that kind of shape. And that tells you since we’re not aware of that Mexican hat shape, we’re really not aware of very much of where we’re attending. And that is in itself an interesting thing. Then there is something that is often called object based attention, where we attend to an object. And the test of that is whether the attention spreads in that object — whether you’re faster to notice something in that object near where you’re attending as opposed to something in a different object. And then there is something called feature-based attention, which is probably the least well studied, in which you amplify all examples in your visual field of a certain feature, like a color for example.
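
[Editor’s note: for readers who want the “Mexican hat” shape made explicit, a common way to model such a center-surround profile is as a difference of Gaussians. The particular formula below is only an illustration of that general idea, not something Block commits to here:

$$A(r) = a_c \exp\!\left(-\frac{r^2}{2\sigma_c^2}\right) - a_s \exp\!\left(-\frac{r^2}{2\sigma_s^2}\right), \qquad a_c > a_s,\ \sigma_c < \sigma_s,$$

where r is the distance from the attended location. The result is amplification at the center, a suppressive dip in the surrounding region, and a return toward baseline farther out.]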

SciSal: Is there a hierarchy between how they relate to one another in terms of when they’re deployed? So I remember thinking to myself that maybe objects just are — philosophically, at least — a bundle of properties, a bundle of features, right? So maybe you always deploy feature-based first and then object-based — or maybe spatial first. Could you tell us a little bit about how they may relate to one another?

Block: Yeah, I don’t think what you’re suggesting is right. What may be true is that object-based is a species of spatial attention. That could be. But the relations between them are not that well understood. I should say that there is also another distinction between what’s sometimes called bottom-up attention and top-down attention. So in bottom-up attention, you know, if there’s a noise [slams hand onto table] like, on one side, your attention will automatically be drawn to that noise. But also you can decide to pay attention to something else.

SciSal: Alright, I was hoping we could come to the phenomenal overflow argument now that we’re kind of situated a little bit with the terminology. So first of all, could you tell us generally what the phenomenal overflow claim is — what the rich view of consciousness is that you defend?

Block: Yeah, right. So I guess first of all, it’s based on the idea that people often have that they have a very rich visual field, conscious of many things at a time, and that you lose your kind of cognitive grip on most of them. So there’s just a constant, you know, constantly evolving visual world in front of you. And you can only take in cognitively a few of those things. And that sort of intuitive idea is verified experimentally in a number of ways. Probably the technique that has been the most useful is the study of what is called iconic memory. And that was known even a hundred years ago and maybe even much longer ago. When you get a brief stimulus, then even after the stimulus goes off, people have a mental image of many of the items.

And this was first demonstrated experimentally by Sperling in 1960. And what he showed is you can have a grid of letters — say, 12 letters, three rows of four. And if you showed it to people briefly, people could report three or four of those letters. But they said that they had a mental image of the whole grid. And then he tested this by giving them a tone — a high tone for the top row, a medium tone for the middle row and a low tone for the bottom row. And if you showed them the image — the stimulus — and then, after it went off, gave them the medium tone, they could report three or four letters from that row. This is called partial report superiority because you could report three or four from any given row — whereas if you’re not cued, you can report just three or four in total. It’s like you have a cognitive apparatus that can take in three or four letters, even though there’s a lot more in your image. But once you take in those three or four, the image disappears [2].
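
[Editor’s note: the inference behind partial report superiority can be put in a few lines of arithmetic. The Python sketch below is only an illustration of that reasoning, with round numbers taken from the description above; it is not Sperling’s actual procedure or data.]

rows = 3
letters_per_row = 4

whole_report = 4            # letters reported when no row is cued
partial_report_per_row = 4  # letters reported from whichever row happens to be cued

# Because subjects succeed for whichever row is cued, the brief icon must hold
# roughly this many letters at the moment of the cue:
estimated_icon_capacity = rows * partial_report_per_row  # 3 * 4 = 12

print(estimated_icon_capacity, "letters in the icon vs.", whole_report, "reportable without a cue")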

SciSal: Could you say a little bit about how you respond to some of the common objections to this way of reading the results? A lot of people claim that an alternative interpretation is that there’s actually unconscious representation of most of the scene. And the cue, when you hear the tone, attracts your attention to the cued part of the scene and boosts that part into conscious awareness. This explanation gets the same partial report superiority result that you talked about without the claim that there is more consciously represented than what you had access to.

Block: Yeah, that’s right.

SciSal: Can you explain how you tend to respond to these objections or what you think of them?

Block: Yes. So first of all, let me say that I think you’re right — that that is the main objection. And the best proponent of that has been a guy at Oxford named Ian Phillips. The key issue is: does the attentional bottleneck come between unconscious perception and conscious perception? That’s his view. Or does the attentional bottleneck come after conscious perception, between conscious perception and cognition? That’s what I think. So I think that you consciously perceive many things and the bottleneck comes between conscious perception and cognition. You can only cognize a few of them. He thinks it’s unconscious perception that takes in the many things, and that you can only phenomenally appreciate a few. So those are the two views.

There have been quite a few experiments lately that support my side of the disagreement. One of them was done by an Israeli group headed by Marius Usher, which also has a philosopher on it, Jacobson. They did a version of Sperling with letters of many different colors. And the rows could either be low in color diversity, drawn from just a third of the colors, or high in color diversity, drawn from all of the colors. And what they found was that people were aware of the color diversity outside of the focus of attention. So they must have seen at least two colors, because you can’t judge color diversity without having some grip on at least two colors. And they were able to show that this had to be a conscious phenomenon.

They used three different tests. The most convincing one is that they made the stimuli very hard to see, so that some people were not consciously aware of them. And they found that people’s ability to judge diversity depended on conscious perception of the grid. So it could not be unconscious. Some other judgments could be made unconsciously, though, like judgments of the average color. People had some appreciation of the average color even when they gave it the lowest visibility rating, which is zero visibility. So the diversity judgment required conscious perception. That suggests we really do have conscious perception outside the focus of attention [3].

SciSal: I recall reading that paper, and one concern I had with your argument that this color diversity experiment supports phenomenal overflow was that you specified that only what is known as focal attention was being deployed to the main task — focal attention being, I take it, attention to a small portion of your visual field, right?

Block: It’s a relatively small area of space, yeah.

SciSal: Yeah. However, I recall in a review of attention research learning about something called diffuse attention, which is kind of attention that is spread out over a large area. But it’s probably not going to give you as detailed information. Now, I can imagine someone saying, well isn’t it possible that there is diffuse attention even if not focal attention to the color parts of the scene…

Block: Yes.

SciSal: Which would allow you to still have conscious representation…

Block: Yes, yes, and that someone would be me, for example. I think that’s probably diffuse attention.

SciSal: Oh, okay.

Block: But here’s the thing. The key fact is that it’s that kind of diffuse attention that’s present in the regular Sperling phenomenon. The question is what’s in your consciousness? So whatever diffuse attention you apply to the whole visual field or to any of the un-cued rows, it’s the same diffuse attention that’s involved in the regular Sperling case when we are able to cognize only three or four of those items. The idea of the Usher experiment is that it shows that we are perceiving at least two items more than the items that we can report. So it suggests a richer phenomenology of perception.

SciSal: I see. So to summarize and make some things clear, what is really important about attention and its relationship to access is that attention seems to filter information into working memory. And then working memory is what is globally broadcast, i.e., what is accessed, right?

Block: Right. That’s right.

SciSal: And I recall in one of your papers you mentioned that a lot of people think it’s between seven and nine items that can be held in working memory because of a famous old experiment in the 1950s by George Miller, but it’s really something more like three or four items that can be stored in working memory. So to make it relevant, I take it the idea was if you can have more than three or four items that you are conscious of, then that shows that there’s more in your conscious perception than what is contained within working memory — i.e., what you have access to, regardless of which attentional mechanisms, diffuse or focal, filtered the four items into working memory.

Block: Yeah, that’s right. I should say, by the way, that it’s recently been shown, especially by my colleague across the street, that you really shouldn’t think of working memory as having three or four slots. But for the kinds of materials that are used in these experiments like, you know, letters and, you know, rectangles that can be oriented — for those things, you get the special case where there is a capacity of three or four items. It can be more for simpler items and less for more complex items. In fact, in the Usher experiment, it’s not really three or four, it’s really just three. And one of the key ideas here is that the things outside the cued row do not affect how many letters you can report in the central task. You can still report three whether or not you’re also reporting color diversity.

[To be continued later this week]

_____

Dan Tippens is Assistant Editor at Scientia Salon. He received his Bachelor of Arts in Philosophy from New York University. He is now a research technician at New York University School of Medicine in the S. Arthur Localio Laboratory.

Ned Block is Silver Professor of Philosophy, Psychology and Neural Science at New York University, where he arrived in 1996 from MIT. He works in philosophy of mind and foundations of neuroscience and cognitive science and is currently writing a book on attention. He is a past president of the Society for Philosophy and Psychology, a past Chair of the MIT Press Cognitive Science Board, and past President of the Association for the Scientific Study of Consciousness. The Philosophers’ Annual has selected his papers among the “ten best” of the year in 1983, 1990, 1995, 2002 and 2010.

[1] Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts, by S. Dehaene, Viking, 2014.

[2] Perceptual consciousness overflows cognitive access, by N. Block, Trends in Cognitive Sciences, 2011.

[3] Rich conscious perception outside focal attention, by N. Block, Trends in Cognitive Sciences, 2014.

26 thoughts on “Ned Block on phenomenal consciousness, part I”

  1. This is good and I think useful stuff. A whole lot to digest if you are a researcher in this field.

    I see that Ned Block has been a participant in the TSC Conference (2014). I also see that the name “Toward a Science of Consciousness” (2015 and earlier) will become “The Science of Consciousness” (2016). That’s very interesting. 🙂

  2. Thanks for presenting this; it is definitely worth reading. I would have liked to see a more explicit explanation of how phenomenal consciousness can be dissociated experimentally from access consciousness (according to Block), but I think I get the gist of it.

    I also think the underlying reasoning is incorrect, though. The error shows up most explicitly here: “So they must have seen at least two colors because you can’t judge color diversity without having some grip on at least two colors.”

    The background assumption is that if we are aware of a high level property of the world, then we must be aware of the low level facts that underlie it, i.e., if you see a staircase, then you must see stairs. But that isn’t correct, as can be shown in many ways.

    Consider for example the experience, which many people report, of dreaming that they are speaking a foreign language (which in reality they don’t know) with perfect fluency. This puzzles many people, because they assume that if they are speaking fluently, they must speak many individual words fluently. But how could that be, when they don’t actually know the language? The answer, unless you believe in mystical voodoo, must surely be that it is not true: that the high-level analysis is a hallucination not supported by low-level perception. In the same way it is possible to perceive fluency without perceiving any specific words, it is theoretically possible to perceive color diversity without perceiving any specific colors.

    The ultimate example of this sort of dissociation of high level perception from low level perception is Anton-Babinski syndrome, in which people with a certain type of brain damage become blind but continue to believe that they can see. When asked *what* they see, though, they can’t answer.

  3. Doesn’t this go to the issue of whether information, and thus our perception of it, i.e., knowledge, is inherently objective or subjective?
    Science does naturally assume an objective reality that can be studied on its own terms, yet it seems more and more evident that the framing is integral to the nature of information. Much like taking a picture, one has to set shutter speed, aperture, lens, lighting, angle, focus, etc. to extract the particular image. Yet if one were to, say, leave the shutter open longer, seemingly more information would be received, yet the result would white out any specific information.
    So all our mental editing and processing, from subconscious, to conscious, to cognition, to analyzing and editing these experiences and perceptions, isn’t just selecting the information, but in extracting and processing signal from the noise, effectively creating it. We photoshop.
    Then onto the social and cultural processing and editing of this information, as to whether it supports, expands, or questions the larger models and framing devices.

  4. Hi Bill Skaggs,

    I wanted to elaborate on your concern because I do think it is a legitimate one. Indeed, I was trying to bring up the concern when I started to talk about diffuse attention, but I never quite got to raise it.

    So first it is important to recognize another issue in this debate. Many people accept that we can consciously represent (consciously see) information both generically and specifically. For example, we can consciously represent a group of trees (generic representation) without consciously representing each individual tree that constitutes the group (specific representation). Here is a simple way to demonstrate this, using something known as visual crowding:

    I I I x I I I I I I I I I I I I I

    Fixate your eyes on the x and shift your attention to the 3 lines on the left and then shift your attention to the group of lines on the right. Some (such as Michael Tye) think it is obvious that you can consciously see each individual line on the left but that you cannot consciously see each individual line on the right. Evidence of this, he might think, is that you can’t count each line on the right, whereas you can count each line on the left (while your eyes are fixated on the x).

    So, while you consciously represent each individual line on the left, you do not consciously represent each individual line on the right. However, clearly you see something on the right. What do you consciously see? The *group* of lines, but not each individual constituent line. In other words, on the right you have more generic conscious representation (possibly with unconscious specific representation of each individual line) whereas on the left you have specific conscious representation of all of the lines.

    So I think this is something you were getting at, Bill Skaggs. You were pointing out that it is possible to have a conscious representation of color diversity (generic) without consciously representing at least two colors (specific). To be clearer, it is possible that the subjects were able to make the color diversity judgment on the basis of a generic representation of the color diversity without consciously representing the specific constituent colors. So, the subjects would then still only have 4 items in working memory (3 items from the focally attended row and 1 generic item of color diversity), therefore not overflowing access consciousness.

    This is what I was trying to bring up for discussion by starting to raise the point about diffuse attention. Some think that diffuse attention tends to give you generic representation. I was going to point out that it is possible that subjects were diffusely attending to the colors in the task, thereby coming to have generic color diversity representations without consciously representing the specific constituent colors that make up the diversity of colors.

    Just to draw your attention to one thing in Block’s paper, he mentions two controls that were used to decrease the plausibility of this hypothesis. Though I don’t think these are conclusive, they are worth bringing up. I think it’s easiest just to quote him:

    “Could it be that color-diversity judgments were based on unconscious color perception? There were two different manipulations intended to exclude that possibility. First, subjects were asked to press an escape button if they did not see colors in the uncued rows, and there were catch trials with colorless uncued rows. Subjects were 93% accurate on the catch trials but no subject pressed the escape button when the uncued rows were colored. In another variant, the presentation of the array was reduced from 300 ms to 16.7 ms and masks were introduced to decrease the visibility of the array. In addition, the subjects were asked to give a visibility rating just before giving the diversity judgment. There was a strong correlation between visibility ratings and accuracy on the diversity judgment. At the lowest visibility level, diversity judgments were at chance; at the highest visibility level, diversity judgments were 80% accurate. Further, judgments of color averages could be made with the lowest visibility ratings (i.e., unconscious perception), but color-diversity judgments required conscious perception.”

    But I really think the most crucial thing to remember is this: there was one group who was not forced to make any color diversity judgment or perform that secondary task at all. They were a control group that also served to show the maximum working memory capacity for the focal task in that experiment, which turned out to be 3 items. In the test group, you see the maximum working memory capacity being met (3 items for the focal-attention task) but then you also see an accurate color diversity judgment that doesn’t seem like it could be explained by unconscious perception (see Block’s quote about the controls above). So even if subjects were only obtaining a generic representation of color diversity, it still shows that the working memory capacity was being exceeded since at least 4 items were consciously being perceived but only 3 items could be in working memory.

  5. Reserving comment on the content until I read the whole interview. Right now I find it interesting and intriguing, a well-handled interview directed toward understanding and clarification.

  6. Hi Dan,

    Good interview, very perceptive questions and very clear and illuminative answers from Ned Block.

    Like ejwinner I want to read the thing more thoroughly and gather my thoughts before commenting further.

    I think that there is a paradox at the heart of the concept of a science of consciousness, but as my points are often misunderstood in this subject I want to try and express myself as well as I can.

  7. Thanks Dan for adding so much clarification. My view is basically that the whole idea of drawing inferences about brain processes from the reports that people give is ultimately hopeless, so I don’t give much credence to any of this stuff. But I am happy to see the reasoning laid out so clearly, even if I don’t think it will ultimately turn out to be valid.

  8. To riff a bit on Bill Skaggs, I would like to see how Ned Block relates these different types of consciousness to explicit vs. implicit memory. It seems there is some degree of parallel, which could be enlightening, but probably shouldn’t be pushed too far.

    This also seemingly parallels to some degree the old Kantian phenomena/noumena split, which is a split in the way types of consciousness or types of memory aren’t. That said, there may also be parallels there.

    I’d like to hear more from Ned and/or Dan.

    That said, this distinction, with the added distinction of separating access consciousness from “report,” does offer some benefit as a way to approach animal intelligence, a matter that’s been discussed here before.

    Also, to riff on Bill, and on the response from Dan: Perhaps this is something that again, per my thoughts on the subject, would connect to layers of consciousness, subconsciousness, etc. Per my comments above, I see implicit vs. explicit memory related indeed to levels of consciousness myself.

    Next, I’ll say that I’m not sold on the whole idea of phenomenal overflow.

    First of all, especially when coupled with ideas like “working memory” and “workspace,” it sounds like an overly-computer-driven idea of consciousness. Yes, I know that’s ironic, given Block’s take on things like the Turing test. Nonetheless, it comes off sounding that way.

    I’m not denying, of course, that such a thing as “working memory” exists, just that it sounds computer-driven when discussed in conjunction with other terminology.

    On selective attention, we then have the question: If the selectivity of our attentional focus is itself sometimes subconscious, then what drives that focus?

    Block himself answers, if not exactly that, the related idea, later on:

    The key issue is: does the attentional bottleneck come between unconscious perception and conscious perception? That’s his view. Or does the attentional bottleneck come after conscious perception, between conscious perception and cognition? That’s what I think. So I think that you consciously perceive many things and the bottleneck comes between conscious perception and cognition.

    My thought? “Mu,” per a word I have used here before, mainly because I think we lack the information to make an educated guess. I also think that future findings that DO enlighten us will show that the attentional bottleneck is on both sides of the gate, probably on a case-by-case basis.

    Both Ned and Dan might also comment on ideas in this paper, including distinguishing phenomenal from state consciousness. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4255486/

    Oh, and more info please on the 7±2 not being quite correct on working memory “slots”?

  9. Hi Socratic,

    Unfortunately Ned is delivering a series of lectures outside of the States this week, so it looks like you all will be stuck with me to attempt to explain some of these things. Hopefully I can add some clarity. Let me say upfront, that some of the explanations I provide may not be the exact explanations that Block would offer (indeed, that is likely the case since he is an expert on this subject).

    That being said, let me just lay out some conceptual and terminological considerations that are also at play in this debate that unfortunately Ned wasn’t able to elaborate on during the interview.

    First, there are different kinds of “unconscious” representations.

    Pre-conscious representations: these are representations that are not consciously represented, but are capable of being attended to, being filtered into working memory, and therefore being globally broadcast.
    One way to think of this is a way Ned has put it before: Think of preconscious representations as being in a lottery; they are entered in to win, but it’s not necessarily the case that any particular one of them will win out and become globally broadcast.

    Subliminal representations: These are representations that degrade too quickly to have a chance of being filtered into working memory and being globally broadcast. For example, when a stimulus is presented very briefly and followed by a mask, the representation is unconsciously processed and may have some effects on various things, but the mask causes it to be degraded before the representation ever has a chance to get into working memory.

    Unconscious representations: These are representations that are longer lasting than subliminal representations and have various effects, but they will never enter into working memory because they are not connected to working memory in such a way that they could ever get into it. For example, the representations of CO2 levels in the body affect your breathing, but these will never enter into working memory.

    Second, there is both unconscious and conscious attention.

    Unconscious attention: This is attention that is deployed unconsciously. What “drives” or causes this attention to be drawn is typically the stimulus itself (a salient feature, a loud sound, etc., even if they aren’t consciously represented). The best way to illuminate this is through cases of blindsight. Subjects with blindsight have abnormalities which prevent them from having a visual experience in certain parts of the visual field (say, the left half). However, it has been shown by Robert Kentridge that these subjects can still have their attention drawn to various stimuli within the consciously dark portion of the visual field. So, they are deploying attention unconsciously.

    Conscious attention: This is the attention we are all familiar with; we can direct our attention at various things consciously.

    With these two kinds of attention in mind, it is important to note that there are three ways you might have thought about the relationship between attention and phenomenal consciousness. The first is the view that attention is necessary and sufficient for conscious representation of something. This doesn’t seem right given the blindsight case described above (the blindsight patient can deploy attention to stimuli in his blind field, but those stimuli will never be consciously represented).

    The second view is that attention is necessary but not sufficient for conscious representation. This would be consistent with the blindsight case mentioned above. Even though the blindsight patient deployed attention to a stimulus, that wasn’t sufficient for conscious awareness. This view is what most opponents of Block accept.

    The third view is the view Block takes; attention is neither necessary nor sufficient for conscious representation. This view was clarified in the interview.

    Re: the George Miller hypothesis that working memory can contain 7 +/- 2 items. The reason this is no longer accepted is that Miller didn’t account for what is known as “chunking” in his experiment. So consider the following example: CIAFBINSA. This has 9 characters, and people can hold this series of letters within working memory. However, the reason they can do this is that they can “chunk” the letters together into composite items. For example, we can chunk CIA as one item, FBI as another item, and NSA as yet another item. When we chunk the letters, what is filtered into working memory is really only 3 items: composite items. Miller didn’t account for the chunking phenomenon.
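
    To make the chunking point concrete, here is a small Python sketch (the chunk list and the greedy matching are purely illustrative; they are not a model anyone has proposed):

    KNOWN_CHUNKS = ["CIA", "FBI", "NSA"]  # familiar groupings the subject already knows

    def chunk(letters):
        """Greedily group a letter string into known chunks; unmatched letters stay single items."""
        items, i = [], 0
        while i < len(letters):
            for c in KNOWN_CHUNKS:
                if letters.startswith(c, i):
                    items.append(c)
                    i += len(c)
                    break
            else:
                items.append(letters[i])
                i += 1
        return items

    print(len("CIAFBINSA"))         # 9 characters to remember...
    print(chunk("CIAFBINSA"))       # ['CIA', 'FBI', 'NSA']
    print(len(chunk("CIAFBINSA")))  # ...but only 3 working-memory items after chunking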

    For more information on the Miller experiment and working memory, see this paper starting at page 298 by Ned http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/2008_Aristotsoc.pdf

  10. OK, quick question, which I think is an interesting philosophical implication of this and also links to past discussions.

    Does a simulated brain produce simulated language or real language?

    Here is the question in more detail:

    If we are to have a science of consciousness then it should, eventually, consist of physical models which describe and predict observations.

    These will be mathematical models – symbol manipulations.

    Since the model is not the reality, we have no more reason to expect the model to be conscious than we have to expect a model of water to be wet.

    What the models will predict is not consciousness, but reports of consciousness (since we can only make predictions about what can be objectively observed).

    Suppose, once we have made advances in the understanding of the brain, we can create a mathematical model to predict the results of the types of experiments above. Then we would have a mathematical model from which we derive language about phenomenal consciousness.

    Here is my question. Is the language which is derived from such a mathematical model real language?

    That is to say, does such language have meaning and is it about anything?

    If the language derived from the model is ostensibly about phenomenal consciousness, is it actually about phenomenal consciousness? Even if the mathematical model it is derived from is not phenomenally conscious?

  11. Let me say one other thing to Socratic and make a quick note on Robin’s last comment:

    Socratic:

    You brought up Brown’s higher order theory of consciousness (initially proffered by David Rosenthal). The higher order theory (HOT) claims that what makes a state conscious is that it is the target of some relevant higher order representation. The simplest form says something like this: I am conscious of the representation of an apple when I have a thought about that representation, e.g., “I am having a state of representing an apple.”

    This simple version is clearly a bit too cognitively driven. Intuitively we think that other animals on the phylogenetic tree can have conscious states, but on this simple form of HOT they could not, as they cannot have higher-order representations about their first-order states (they can’t think to themselves, “I am seeing an apple.”)

    However, other forms of HOT theories have been given that are more plausible and allow for other creatures to enter into higher order thought relationships. For me, at least, I tend to think that HOT theories are actually best as theories of *monitoring* or *self* consciousness as opposed to theories of phenomenal consciousness. It seems that in order to be able to monitor your own states or be aware of your states (in the monitoring sense Block mentioned in the post) you must have a higher-order thought about them (in some sense). This would do work explaining why we might think that some other animals do have self-consciousness (such as primates) while others don’t (such as fish). Some can have some sort of higher order thought (in some relevant sense) about their states. Block has a paper arguing against HOT here: http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/HOPHOT.pdf

    Robin:

    I can only say something about one part of your comment. You said, “What the models will predict is not consciousness, but reports of consciousness (since we can only make predictions about what can be objectively observed).”

    This is actually known as the methodological puzzle of consciousness which Block has discussed elsewhere ( http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/Block_BBS.pdf ).

    The puzzle is: how can we use verbal reports about consciousness to somehow go beyond access consciousness (what you can report on) and illuminate things about what we are consciously experiencing? Stay tuned for part II of the interview, as Block will touch on this there.

  12. Robin Herbert: “Does a simulated brain produce simulated language or real language? …”

    Again I think it’s worth distinguishing a simulation from an assembly.

    Given a DSL (domain-specific language) program p that models something in nature, p can be compiled into (the target object) a simulation Sml(p) (which could be an incomplete mimicry) or an assembly Asm(p). Sml(p) is code (e.g., Intel machine code) that runs in a conventional computer. Asm(p) is the output of what was once called a matter compiler, which is now technology that assembles what is called programmable matter.
    http://en.wikipedia.org/wiki/Programmable_matter#Synthetic_biology

    Biological, chemical, and quantum DSLs are themselves in very primitive stages of development. There are also (deep) neural-network DSLs. But what are the higher cognitive and consciousness DSLs? And it isn’t clear how intertranslatable (between “lower” and “higher”) these languages will be. Perhaps not very. And compiling a consciousness program (which will be developed some day) into a working conscious output (one that actually lives in the world, contradicting the naysayers) perhaps will require a biomolecular compiler.

    A simulation (of consciousness) and an assembly (of consciousness) do not end up being the same things.

    (Perhaps this is the way an engineer sees things, vs. a scientist or vs. a philosopher?)

  13. Dan, I’m really struggling with the sheer amount of terminology employed here. You’ve done a good job of trying to help in this regard. But it all seems a bit like a never-ending series of conceptual boxes nested in smaller conceptual boxes. I made some notes in an attempt to understand some of the major concepts: “phenomenal,” “monitoring” (self-consciousness), and “access,” which then leads into subtypes of selective attention directed at spatial, object, feature.

    My notes/questions read like this:

    –Phenomenal consciousness roughly tantamount to raw perception/sensation/stimulus? Reflexive response, as in primitive reflexes in newborns? By-product of CNS?

    –monitoring or self-consciousness entailed by awareness of perception, i.e., to attend to perception? A point of view, to posit a subject/object relation?

    –Access consciousness = memory trace or facsimile perception that triggers cognition or apperception?

    –Selective Attention: spatial, object, feature–this state a necessary precursor to intention? See monitoring?)

    The question arises whether there is any agreed upon terminology here between, say, philosophy and neuroscience. I was, for example, like Socratic and perhaps others, caught up in the appropriateness of Block’s use of the metaphor “bottleneck” and the contention regarding where this might occur. Why is this described as a bottleneck? Why could it not be plausible that both may be relevant depending on certain conditions? Or to adopt terminology from the physicists, why a bottleneck rather than something more like a phase shift? You made a laudable attempt to address the bottleneck point with a discussion of more types of conscious states, but I only come away feeling that we are engaged in a sort of conceptual ouroboros.

    I am looking forward to part two where I hope I can come away with a better appreciation of the viewpoints.

  14. Robin,
    Here is my question. Is the language which is derived from such a mathematical model real language?

    Take any of my large corporate programs and try to model it mathematically. It just won’t work. The model is the computer program that I wrote. It is instantiated as native binary code on the CPU. The point I am trying to make is that there are some things that cannot be modelled mathematically and my program is one of them. The program is the model.

    To understand my program you must understand the language in which it is written. But, if you start with the binary code at the machine level and try to reverse engineer it with no knowledge of the CPU programming model (registers, stacks, ALU, GPU, etc.) and no knowledge of the programming language I used (Pascal, C, Lua, Java, etc.), you have no chance of re-creating the program I wrote. If you cannot re-create the program you have no chance of understanding how my program works. You can only look at it from an external, functional point of view.

    This is where we are with the mind. There simply is no conceivable way that we can re-create the high level program of the mind by reverse engineering from the neuronal level (equivalent to the native binary code on the CPU). As with my program example, you can only look at the mind from the external, functional point of view. The external, functional point of view is the one used by researchers such as Ned Block but it is intrinsically incapable of revealing the programming language of the mind.

    It is for this reason that understanding the functioning of consciousness will always be beyond us. I am afraid the mysterians are right, but for the wrong reasons.

  15. Hi Thomas,

    Let me try to clarify some things as best as I can. Let’s start with the terms you took notes on, starting first with phenomenal consciousness:

    Phenomenal consciousness refers only to the qualitative experiences we seem to have in our mental lives. For example, when I drink coffee, I have a qualitative experience of the taste of the coffee – there is something it is like for me to taste the coffee. But also, there is something it is like to experience a rich visual field populated with various colors and shapes.

    To understand better, you can imagine a robot who is functionally just like you and me (he sounds like you and me, he claims that he enjoys the view he is looking at), but he has no qualitative experiences at all. When he opens his eyes, light waves trigger various mechanisms that allow him to say that something looks beautiful, or cause him to run away from a threat, but he has no visual experience. He is just a “phenomenally dark” machine. He is a great input-output system with no phenomenal conscious experience.

    The qualitative experiences that the robot lacks, but which we have, are what we are referring to when we talk about phenomenal consciousness.

    Self-consciousness refers to our ability to have a kind of higher-order awareness of one’s own mental states and one’s self. So here is a way to think about self consciousness: Imagine you are looking at an apple. It is possible for you to then be aware that you are in a state of looking at an apple. The state of looking at an apple we can call a first-order mental state (it is a state about the world), and the state of you being aware of the first order state we can call a monitoring state (it is the state of you being aware of some other state you are in). Other animals may not have this to the same extent that we do.

    Access consciousness refers to the mental states that we have available for use in reasoning, report, etc. So here is a way to illuminate access consciousness (but please keep in mind this is for illustrative purposes only): imagine you are driving home from a rough day at work and you are just daydreaming/reflecting on the day’s events. Before you know it, you are back at your home, and it seems like you have no idea how you got there. However, clearly your eyes were open and were processing information such that you were able to get home safely. So, it looks like whatever you were seeing while you were driving, while it may have been influencing your actions, wasn’t available to you to think about and reason about. Since you were so lost in your thoughts about work, the visual information about the road and cars in front of you wasn’t being accessed such that you were thinking about it or reasoning about it.

    Please note the difference between being *accessed* and merely being *accessible.* The visual information about the cars in front of you and the road might have been accessible to you while you were daydreaming (since the information *could* have been thought about had you paid attention to it), but it wasn’t being accessed (since you were so lost in your thoughts).

    Selective attention is important for the following reasons. Block and others think that what is access conscious to you (what is being accessed) is whatever is being held in working memory. However, since working memory can’t possibly hold all of the items that are in front of you at any given time, there must be some mechanism to let some things into working memory, and prevent others from getting in. This mechanism is selective attention.

    So, when you are looking in front of you, there is a ton of information which can’t all fit into working memory. Selective attention takes some of the information about what is in front of you and filters it into working memory, thereby granting you access (making you access conscious) to that information (now you can think and reason on that information).

    The attentional bottleneck idea was this: Selective attention is just like a bottleneck. There is a lot of stuff trying to get into working memory, but selective attention narrows down which stuff can get in. The question is this: is the stuff, or some of it, that is trying to get into working memory *phenomenally conscious* before it is filtered into working memory? Or is that stuff all *unconscious* to you before it gets into working memory, only becoming phenomenally conscious once it is within working memory (are you only phenomenally conscious of the stuff you have access to)?
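
    To make the filter-then-broadcast picture a bit more concrete, here is a toy Python sketch of the workspace architecture described above (the item names, priority scores, and capacity of 4 are assumptions for illustration only, not anyone’s actual model):

    percepts = ["smoke smell", "red mug", "traffic noise", "itch", "blue sky"]

    WORKING_MEMORY_CAPACITY = 4  # roughly the 3-4 item limit discussed above

    def attend(percepts, priorities):
        """Selective attention as a filter: rank percepts and let only a few into working memory."""
        ranked = sorted(percepts, key=lambda p: priorities.get(p, 0), reverse=True)
        return ranked[:WORKING_MEMORY_CAPACITY]

    def broadcast(working_memory):
        """Global broadcast: what is in working memory becomes available to reasoning, report, planning (access)."""
        return {"reasoning": working_memory, "report": working_memory, "planning": working_memory}

    working_memory = attend(percepts, {"smoke smell": 10, "red mug": 3})
    print(working_memory)             # the few items that made it through the bottleneck
    print(broadcast(working_memory))  # those items are now accessed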

    Also regarding terminology – basically all of the terminology employed by Block is the terminology used by neural scientists and psychologists. Block does, after all, spend most of his time reading not only the philosophical literature on the mind, but also a vast amount of the scientific literature. Working memory, global neuronal workspace, and selective attention are all commonly accepted terms in the scientific and philosophical community. A great place to see this is neuroscientist Stanislas Dehaene’s new book which we cited in the OP.

  16. I see my question was misunderstood. Let me briefly rephrase.

    If, in future, we can create mathematical models to describe and predict the behaviour of the real mind, then one of those behaviours is language.

    Given an accurate and detailed model with simulated sensory input we should be able to derive the behaviors from the mathematical model, for example saying:

    “That hurts”

    “That tastes salty”

    etc.

    I am asking if the distinction between model and reality holds for sentences like those.

    In other words, do those sentences have meaning? Are they about anything?

    If they have meaning and are about something then there is no distinction between model and reality for that language.

    If we had a mathematical model of a human brain solving, say, “1 + 1”, then the sentence “1 plus 1 is equal to 2” would clearly have meaning and be about something.

    But the sentences about pain and saltiness: what are they about, given that a mathematical model – a symbol manipulation – does not have the phenomenal experience of pain or salty taste?

    Alternatively, can the mathematical model, in the right circumstances, have those phenomenal experiences?

  17. Robin Herbert: “Alternatively, can the mathematical model, in the right circumstances, have those phenomenal experiences?”

    As I responded above, the mathematical model — the source program — cannot. But the target object of a biomolecular compiler — a biomolecular assembly — can.

  18. Dan, thank you for taking the time to help me out.

    But are you saying much when you compare a robot to a human in terms of experience? It would seem definitionally inherent that there would be qualitative differences. There are clearly existential differences.

    More to the point, aren’t we just privileging human experience in this case? That is to say, we have no problem when we describe the nature of the robot experience in a mechanistic or even a reductionist manner that we are hesitant to apply to ourselves on a phenomenal level. And in this case, but for allowances made in terms of the higher order that you address in your discussion of self-consciousness, this doesn’t seem to be a point that could otherwise even be made. So, I suppose this aligns my thinking more closely with Dehaene’s or perhaps Phillips’s.

    I get monitoring or self conscious differentiation. No problem following you there. But access consciousness, especially in terms of your example, is problematic in that it seems almost tautological. What seems at play here are levels of consciousness present simultaneously; the focus on one doesn’t preclude the presence of the other, except to say that one is selectively focused. You might say you had no recollection of driving from point A to point B, but you won’t say that you accomplished this feat unconsciously. So this differentiation is useful. But does it yield a robust depiction of human phenomenal consciousness at work, so to speak? One possible analogue is muscle memory. Another is allocating CPU processing power on an as-needs basis. How this fully addresses phenomenal consciousness is still unclear to me.

    Your discussion of selective attention is excellent, but reminded me of a more academic and scholarly discussion of Huxley’s “Doors of Perception.” Now, it’s been decades since I read it, but it was concerned with explaining altered states of consciousness produced by ingesting hallucinogenic substances. His point–and I may have misconstrued it–is that “ordinary” human consciousness acts like a regulator or governor for human consciousness, a “filter” to use your term, so that one is not overwhelmed by a sensory bombardment, to prevent everything from seemingly occurring simultaneously. Hallucinogenics inhibit those filters. The idea is obviously no longer in vogue. And my own limited experience yielded mostly fatigue, not heightened consciousness.

    Just to make certain you didn’t misconstrue my concern with terminology. It was an honest question dictated by my own lack of familiarity with discussions of this subject. There was simply an abundance of terms introduced that made me wonder whether there weren’t other terms that a reader might be familiar with. For example, “pre-conscious representations.” Despite the use of the lottery metaphor, this doesn’t really do much for me. Is there a conscious state that somehow enables one’s apprehension of pre-conscious states, or is this some “primitive” that we accept for purposes of discussion?

  19. Hi Thomas,

    “But are you saying much when you compare a robot to a human in terms of experience? It would seem definitionally inherent that there would be qualitative differences. There are clearly existential differences.”

    I was just using the robot thought experiment to illustrate what phenomenal consciousness is. Wasn’t trying to make any points about the intrinsic nature of robots vs. humans. I think one important thing to keep in mind is that all 3 forms of consciousness that Block mentioned above can be present at the same time. For example, I can be having a visual experience of an apple, aware of my visual experience, and able to think and reason about my visual experience. In this case I would be phenomenally, monitoring, and access conscious.

    “I get monitoring or self conscious differentiation. No problem following you there. But access consciousness, especially in terms of your example, is problematic in that it seems almost tautological. What seems at play here are levels of consciousness present simultaneously, the focus on one doesn’t preclude the presence of the other, except to say that one is selectively focused. You might say you had no recollection of driving from point A to point B, but you won’t say that you accomplished this feat unconsciously. So this differentiation is useful. But does it yield a robust depiction of human phenomenal consciousness at work, so to speak? One possible analogue is muscle memory. Another is allocating CPU processing power on an as-needs basis. How this fully addresses phenomenal consciousness is still unclear to me.”

    So yeah once again, the example was just to illustrate what access consciousness is, not to say that it was the only form of consciousness present, and not to use it as an argument for phenomenal overflow.

    So yeah, various sorts of consciousness could be present at once for the driver. For example, the driver in the car was, perhaps, access conscious of his thoughts/daydreams (he was thinking about them and reasoning about them), he was self-conscious of some of those thoughts, perhaps, and perhaps he was phenomenally conscious of some of his visual experience without having access to that visual experience (perhaps someone like Block could admit it).

    “For example, “pre-conscious representations.” Despite the use of the lottery metaphor, this doesn’t really do much for me. Is there a conscious state that somehow enables one’s apprehension of pre-conscious states, or is this some “primitive” that we accept for purposes of discussion?”

    I’m not quite sure what you mean by this unfortunately. Just a bit of background, the distinctions I drew between types of unconscious representations are actually laid out by Stanislas Dehaene in his new book that we linked in the OP, so these are the standard terms in neuroscience and cog psych. Pre-conscious representations are just supposed to be representations that are not yet in working memory, but could enter into working memory if they get attended to. In other words, they are accessible representations but not currently accessed.

    “Just to make certain you didn’t misconstrue my concern with terminology. It was an honest question dictated by my own lack of familiarity with discussions of this subject.”

    Don’t worry, I never question your motives or concerns, Thomas. This moderator has seen consistent honest intentions from you :). But for much more clarity on some of these terms, see this original paper by Ned on access and phenomenal consciousness http://cogprints.org/231/1/199712004.html


  20. I was really expecting to hear more commentary, but I thought the same on the Baggini post... and then the comments came in after a day or so. I think maybe most are waiting for part two. But you’ve been helpful and, specifically, I find this comment of yours to be accommodating:

    “So yeah once again, the example was just to illustrate what access consciousness is, not to say that it was the only form of consciousness present, and not to use it as an argument for phenomenal overflow.”

    Still, I’m puzzling over the bottleneck point. I’d like to hear more from Block on this, both (1) regarding the appropriateness of the metaphor and (2) regarding the differing rationales used to indicate where it occurs.


  21. Echoing Thomas Jones, I think people are waiting for part 2. I thought the material so far sounded like a most plausible way of dividing up the concept. The responders to Block’s (1995) target article in Behavioral and Brain Sciences do bring up a lot of possible objections, however. Are optical illusions P or A? What about dreams? Can you really ever have P without A, so why imagine a division? Automatism in epilepsy or sleep can be very complex, e.g. car driving, or murdering your mother-in-law (presumably perception and planning are involved, but not our “total” consciousness). Is the old separation of sensation and perception all inside P? Wittgenstein gets a mention re the phenomenological differences “between hearing the exclamation ‘Block!’ as a request for a building block and hearing the exclamation ‘Block!’ as a greeting to the philosopher…perceptual content can have a marginal top-down effect on the structure of the sensory field” – is that just attentional, or are the qualia actually different? (I have read the suggestion that human faces are usually just a single quale.) And Dennett wonders if there is no sharp dividing line, with P and A flowing into each other “gradually”.

    My own thought was two-pronged. The first prong concerned other altered states of consciousness, notably hypnosis. In passing, a recent paper with two philosophers as coauthors:

    Hypnotizing Libet: Readiness potentials with non-conscious volition

    “Specifically, Readiness Potentials still occur when subjects make self-timed, endogenously-initiated movements due to a post-hypnotic suggestion, without a conscious feeling of having willed those movements.”

    We can think of hypnosis as solely affecting attention, as per T.X. Barber’s idea that nothing done in a hypnotic state can’t be done in a normal conscious state, but maybe P is affected. The second prong was biofeedback and related techniques, where we enlarge our perception (P-consciousness?) by learning to attend to stimuli that are presumably usually only available unconsciously, or perhaps alter A-consciousness when we learn to, say, increase beta waves, and thus concentration and attention.


  22. labnut: “The external, functional point of view is the one used by researchers such as Ned Block but it is intrinsically incapable of revealing the programming language of the mind. It is for this reason that understanding the functioning of consciousness will always be beyond us. I am afraid the mysterians are right, but for the wrong reasons.”

    There may not yet be a good “programming language of the mind” DSL. But I see no good reason why there can’t be one, and many paths could combine to lead to it, including Ned Block’s and the others listed above; see, e.g., “A Calculus of Ideas: A Mathematical Study of Human Thought” by Ulf Grenander (reviewed in the AMS Notices).


  23. Hi Thomas,

    Thanks for your inquiries about Ned’s theory, since otherwise Dan might not have given us such “accessible” explanations. What I most want to emphasize right now, however, is my great respect for Ned Block. This is a philosopher who is nevertheless trying to do something amazing: namely, he’s trying to promote a traditionally philosophical aspect of reality up to the realm of science, for the practical use of humanity. Imagine working as an engineer a few hundred years ago, and thus being deprived of the theory of Sir Isaac Newton. Without a functional and accepted model of the conscious mind, the modern cognitive scientist must surely remain similarly crippled. In fact, I believe that our mental and behavioral science as a whole need philosophy to rise up to become the founding science upon which our mental and behavioral sciences in general are based. If successful, Ned Block may indeed become the modern founder of these fields (and thus even help prevent Coel from validly teasing us all about how his iPhone might be conscious).

    As it happens, I have similar ambitions myself. I’m not throwing stones right now, however, since, as they say, “Those who live in glass houses…” (Actually, in my case, “fish bowl” might be a more apt metaphor.) Nevertheless, I do believe that my own model happens to be a far more useful description of reality than Ned’s, and therefore I will be throwing stones when the time is right. I very much enjoy spending my weekends on the lawn with both cocktails and philosophy.


  24. I’m still reserving direct comment until the second part appears; but on re-reading the first part, I see now where I’m having difficulty with Block’s modelling, and with the domain of such models in which his positioning takes place. I’m concerned with some of the compartmentalization going on here, and I’m not sure that enough bridgework has been built between the asserted compartments.

    Again I may get into that later; or the second part may draw the whole question in another direction.

    For now, it occurred to me to reproduce here part of a review I wrote on Stan Brakhage’s “Dog Star Man,” reporting an experience and its ground which I think still valid:

    “I sat through the complete Dog Star Man (4+ hours) in a museum in 1974. I dozed off quite frequently, but only for a couple seconds at a time. There didn’t seem to be much sense trying to think the movie through, so I just sort of let it happen. When the lights came on, I decided this much-heralded avant-garde film wasn’t anything special, only a little overlong.

    I had to walk a mile back home, and it was midnight. In the twenty minutes it took to make this journey, the entire film ran through my head again, at lightning speed. I wasn’t doing any drugs – yet the whole street around me seemed shot through with flickering light and overlapping images from this movie.

    Back around 1960, neurobiologists had begun speculating that the human brain actually remembers every sensation we experience. Brakhage seems to have taken this seriously. Some of the images in DSM are only a single frame; but despite the ’24 frames per second’ rule of film-perception theory, one notes these single-frame images and remembers them anyway.”

    http://www.imdb.com/title/tt0234512/

    Any modelling of consciousness is useful for focusing on and considering particular phenomena, but it may be that the mind functions holistically in a way that is not open to clean demarcations.


  25. Dan, thanks. So “pre-conscious representations,” “subliminal,” and “unconscious” are a bit related to Dennett’s multiple drafts, though I’m sure Block would run away from that analogy, given his professional relationship to Dennett in general.

    Ditto, in part, on unconscious attention: how would it relate to these issues?

    And, gotcha on chunking.

    That said, depending on how good we are at chunking, possibly across multiple levels of chunking, it’s theoretically possible that we can handle a fair amount. Look at working memory’s part in the recall of massive amounts of information in ancient mnemonic techniques.

    Per your second comment to me, it sounds like we’re getting into self-reference issues, potentially tangled loops, etc. In that case, can we make divisions as sharp as Block would claim?

    Robin and Phillip, meanwhile, seem to be focused (speaking again of Block, and others, versus Dennett) on the question of whether consciousness is substrate-independent. I disagree with Dennett; I don’t think it is. His stance on this remains nothing more than an assertion at this point.

    Thomas and others: Wiki’s article on functionalism, which mentions Block sparring with Dennett and others, may be of some help: http://en.wikipedia.org/wiki/Functionalism_(philosophy_of_mind)

    This issue also seems to be related to qualia, and Block’s stance on that issue.

    And, per Thomas, I’ll take a look at part two of the interview shortly.


  26. Hi PhilosopherEric. Couldn’t agree more with this bit – “In fact, I believe that our mental and behavioral science as a whole need philosophy to rise up to become the founding science upon which our mental and behavioral sciences in general are based.”

    This looks like philosophy 101. Kant concluded that the proper subject for any rational psychology is a phenomenon that is ‘not an instance of a category’. As far as I know, consciousness studies has not even got as far as considering the meaning of this idea, let alone understanding it.

    Without metaphysics, the behavioural and mental ‘sciences’ can simply float free from reality, which is presumably why it is so widely ignored. Then we can put forward any old theory. Yet again, the blame for this free-for-all of theory generation must lie with the philosophers. Until they get their act together, all other disciplines must flail around blindly, unanchored to any stable foundation.

    Unlike you, it seems, I see no attempt by Block to address this problem in these articles.

