Why fish (likely) don’t feel pain

[Image: a school of butterflyfish]

by Brian Key

What’s it feel like to be a fish? I contend that it doesn’t feel like anything to be a fish. Interestingly, much of our own lives is led without attending to how we feel. We just get on with it and do things. Most of the time we act like automatons. We manage to get dressed in the morning, or walk to the bus station, or get in the car and drive to the shops without thinking about what it feels like. Consequently, much of what we do is accomplished non-consciously.

There is an enormous amount of neural processing of information in the brain that never reaches our consciousness (that is, we never become aware of it and hence are unable to report it). I propose that fish spend all of their lives without ever feeling anything. In a recent paper in the academic journal Biology & Philosophy (Key, 2015) I discussed this idea in relation to the feeling of pain. I argued (as have others; Rose et al., 2014) that there is no credible scientific evidence for fish feeling pain.

In this article I will address the question of whether fish feel pain using a slightly different approach. I propose that by defining how the human brain processes sensory stimuli in order to feel pain, we can define a set of minimal neural properties that any vertebrate must possess in order to have at least the potential to experience pain. As an introduction to this argument, I first highlight anthropomorphism as a major stumbling block that prevents many people from recognizing that fish do not feel pain, and then I discuss the difference between noxious stimuli and pain, since these terms are often conflated.

Resisting anthropomorphic tendencies

Grey wolves hunt as a pack. They carefully select their prey, and then perform a series of highly coordinated maneuvers as a team in order to corral their target. Initially, each wolf maintains a safe working distance from other members of the pack as well as from their prey. They are relentless and seemingly strategic, with an overall goal of driving the agitated prey towards one wolf. A cohesive group mentality emerges that portrays logic, intelligence and a willingness to achieve a common goal. Eventually one wolf comes close enough to lock its jaws on a rear leg of the prey, before wrestling it to the ground. The rest of the pack converges to share in the kill. There appears to be a purpose to their collective behavior that ensures a successful outcome.

But is everything as it seems? A team of international scientists from Spain and the U.S.A. has simulated the behavior of a hunting pack of wolves using very simple rules (Muro et al., 2011; Escobedo et al., 2014). Their computer models do not rely on high-level cognitive skills or sophisticated intra-pack social communication. The complex spatial dynamics of the hunting group emerges by having the computer-generated wolves obey simple inter-wolf and wolf-prey attractive/repulsive rules.

For instance, much of the hunting strategy can be reproduced by having each simulated wolf merely move towards the prey while keeping a safe distance from it and from other wolves. In this way the prey is driven towards a single wolf in the pack. Simple rules are all that are needed to generate this hunting behavior. There is no need for sophisticated communication between the wolves, apart from visual contact. There is no need for a group strategy; each wolf can act independently to create what appears to be an elaborate ambush.
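
To make this concrete, here is a minimal sketch of the kind of rule-based model being described. It is not the published simulation of Muro et al. (2011); the two rules are paraphrased from the text above, and every parameter value is an illustrative assumption.

```python
# A minimal sketch (not the authors' published code) of two local rules:
# (a) each wolf moves toward the prey until it reaches a safe distance, and
# (b) each wolf moves away from packmates that come too close.
import numpy as np

SAFE_DIST = 2.0       # assumed minimum wolf-prey distance
WOLF_REPULSION = 1.5  # assumed minimum wolf-wolf distance
STEP = 0.1            # assumed movement per time step

def wolf_step(wolves, prey):
    """Advance every wolf one time step using only local attraction/repulsion."""
    new_positions = wolves.copy()
    for i, w in enumerate(wolves):
        # Rule 1: attraction toward the prey, stopping at a safe distance.
        to_prey = prey - w
        d_prey = np.linalg.norm(to_prey)
        if d_prey > SAFE_DIST:
            new_positions[i] += STEP * to_prey / d_prey
        # Rule 2: repulsion from any packmate that is too close.
        for j, other in enumerate(wolves):
            away = w - other
            d = np.linalg.norm(away)
            if i != j and 0 < d < WOLF_REPULSION:
                new_positions[i] += STEP * away / d
    return new_positions

# Three wolves, with no communication and no shared plan, end up encircling the prey.
wolves = np.array([[5.0, 0.0], [0.0, 5.0], [-4.0, -3.0]])
prey = np.array([0.0, 0.0])
for _ in range(200):
    wolves = wolf_step(wolves, prey)
print(wolves)  # each wolf sits roughly SAFE_DIST from the prey, spread around it
```

Running the loop produces the corralling pattern with no inter-wolf messaging at all, which is the point of the original simulations.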

The lesson is clear. Watching and analyzing animal behavior, whether in the wild or in captivity, is fraught with the temptation to describe the underlying causes of actions and reactions in terms of human experience. This human-centered explanation of behavior is referred to as anthropomorphism: when humans observe animals responding to a sensory stimulus in a way that reflects how they themselves would react, there is often a strong desire to invoke anthropomorphic explanations.

One can easily imagine that a group of humans closing in on a prey would either communicate amongst themselves or learn from experience how the others think, and hence how they would react to different scenarios in order to achieve a common goal. Because humans so easily reflect on their own behavior, human-like qualities are bestowed on animals spontaneously. For example, when a fish squirms after it is hooked, there is a natural tendency to imagine the pain that the fish is feeling. It seems intuitive. A hook in your mouth would hurt, so why wouldn’t fish feel the same?

Our anthropomorphic and sometimes intuitive view of the world is not, however, always helpful in understanding the behavior of animals (particularly those that are not our close relatives; de Waal, 2009). Yet even scientists at the top of their profession adopt anthropomorphism, a line of thinking that can camouflage biologically and evolutionarily more plausible explanations of animal behavior. (Not everyone will agree; readers are referred to an essay by Marc Bekoff; Bekoff, 2006.)

Why do humans so easily fall victim to anthropomorphism? One could argue that we are hard-wired for empathy, and hence anthropomorphism, especially given the role of a specialized set of neurons in the cortex (so-called mirror neurons) and of subcortical regions that appear to drive these behaviors non-consciously (Corradini and Antonietti, 2013; Gazzola et al., 2007; Heberlein and Adolphs, 2004).

Defining key terms

One of the common queries raised by discerning readers is this: if fish don’t feel pain, why do they squirm, flap and wriggle about in apparent distress when they are lifted out of water? Why do they fight so hard to escape a fisherman’s line? It is a simple and emotionally powerful anthropomorphic argument. That is, if a hook were pierced through your lip and someone yanked on it, wouldn’t you struggle to escape and free yourself, just like a fish?

Maybe not. A wild horse submits to a lead rope within minutes. A bear trapped in a foot snare soon gives up its struggle. Why, then, does a fish continue to fight in the face of supposedly extreme pain (in some cases, as in big-game sport fishing, fish will fight against the hook relentlessly for 1-2 hours)? An alternative view is that fish do not feel pain.

There are two terms that need defining here: fish and pain. When I refer to fish, I am referring only to bony ray-finned fish, since they are the most common experimental fish models and the fish most people are familiar with (these fish have bony skeletons and fins supported by bony rays). Like all fish, they breathe using gills. Whales, porpoises, dolphins, seals, otters and dugongs are not fish. These animals are marine mammals; they possess lungs rather than gills.

Pain is a term that many readers will not have difficulty in understanding. Everyone has some vivid recollection of it, after touching something hot or smashing a thumb with a hammer. However, we must be very clear in our definition given the claim that fish do not feel pain. Pain is the subjective and unpleasant experience (colloquially referred to as a “feeling”) associated with a mental state that occurs following exposure to a noxious stimulus.

The mental state is the neural activity in the brain that is indirectly activated by the stimulation of peripheral sensory receptors. A noxious stimulus is one that is physically damaging to body tissues (e.g., cutting, cold and heat) or causes the activation of peripheral sensory receptors and neural pathways that would normally be stimulated had the body been physically damaged.

Gentle touch and warm water are not noxious stimuli. They neither cause physical damage to tissues nor activate sensory receptors and nerves normally stimulated by physical damage. It should be noted that pain is not a necessary consequence of noxious stimuli. For example, there are many anecdotes of people who have experienced traumatic accidents resulting in severe body tissue trauma without feeling any immediate pain. This means that it is possible to cut your skin without feeling pain.

Some basic neurobiological concepts

To feel pain requires that you are aware, or conscious, of your own mental state. To be aware first requires that you attend to the stimulus. A simple demonstration of this concept: while you are seated, try to feel the pressure on your ischial tuberosities (the bony parts of the pelvis that you sit on). Before I directed your attention to your backside you were probably not aware of this pressure, but immediately afterwards you became conscious of the feeling of your seated position. To feel a sensory stimulus requires attention to that stimulus (in this case, pressure on the ischial tuberosities).

Awareness of the mental state associated with peripheral stimulation of sensory receptors arises as a result of the process of attention. This is called the top-down attentional system since it involves the frontal lobes, supposedly the highest hierarchical level in the brain (Collins and Koechlin, 2012). However, attention is not always under conscious or voluntary top-down control. It is possible for the sensory stimulus itself to non-consciously activate attentional processes in what is referred to as the bottom-up attentional system (Driver and Frackowiak, 2001). A relevant and simple example would be to accidentally stand on a sharp object while walking. In this case the noxious stimulus activates attentional circuitry and causes awareness (pain, in this example). In humans, the cerebral cortex in the frontal and parietal lobes of the brain is intimately involved in attending to input from our sensory receptors. In summary, feeling pain requires the activity of neural circuits associated with attention.  Once the brain is attending to a sensory stimulus then it becomes possible to subjectively experience a specific sensation.

These top-down and bottom-up attentional mechanisms are not specific to feeling pain. Much of our understanding of their contribution to the processing of sensory stimuli comes from the visual system (Corbetta and Shulman, 2002; Buschman and Miller, 2007). What is pertinent to our discussion is that the top-down and bottom-up attentional mechanisms depend on specific neural activity in the frontal and parietal areas of the cerebral cortex, respectively.

What is the cerebral cortex?

In everyday language the cerebral cortex is the “grey matter.” This grey matter is a thin outer covering of the mammalian brain that typically consists of 3-6 discrete horizontal layers of neurons and their processes. These layered neurons are interconnected vertically to create minicolumns or canonical microcircuits that are repeated across the whole surface of the brain. Each of these minicolumns is interconnected horizontally to produce a massively powerful processing machine.

These canonical microcircuits can be likened to integrated circuits or microprocessor chips in computers. As computers have evolved, more and more circuits have been added to their chips (you may remember the progression in personal computer evolution from 286 to 386 to 486 to Pentium and Core chips). The cerebral cortex has evolved by both increasing the complexity of the canonical microcircuit from 3 layers to 6 layers of neurons (the latter is called the neocortex) and by adding more and more of these “chips,” leading to an expanded surface area of the cortex (Rakic, 2009).

Pain is in the cerebral cortex

Pain causes elevated electrical activity in at least five principal regions in the human forebrain: the anterior cingulate cortex (ACC), the frontal and posterior parietal cortex, the somatosensory (S) regions I and II, the insular cortex, and the subcortical amygdala. These five regions form a core, interconnected circuit that is referred to as the pain matrix (Brooks and Tracey, 2005).

However, just because there is electrical activity in a particular brain region during pain does not mean that the region is responsible for the sensation. For example, while the amygdala is active during pain, it is involved in modulating the pain (as well as many other things), rather than producing the feeling of pain. This has been clearly demonstrated in ablation studies in both rats and rhesus monkeys. These animals continue to quickly withdraw their tails from a noxious heat stimulus even after bilateral ablation of the amygdala (Manning and Mayer, 1995; Manning et al., 2001; Veinante et al., 2013). Consequently, it is reasonable to remove this subcortical region from the matrix responsible for feeling pain.

On this criterion, the ACC also does not belong to the feeling-pain matrix. Surgical lesioning of the nerve fibers arising from the ACC, known as cingulotomy, has been practiced clinically for over 50 years to relieve intolerable pain. However, patients continue to feel pain after this surgery — they just no longer seem to be bothered by its presence (Foltz and White, 1962). Thus, the ACC is not responsible for feeling pain per se. The frontoparietal nexus is likewise associated with attention to pain rather than the actual feeling of pain (Lobanov et al., 2013).

There is compelling evidence that SI, SII and the insular cortex are the essential components of the pain experience. For example, there is an interesting clinical case of a patient who had an ischemic stroke that selectively damaged a small portion of SI and SII in the right side of the brain (Ploner et al., 1999). This patient could no longer perceive any acute pain in response to thermal noxious stimuli or pinprick to the left hand (Ploner et al., 1999). In addition, numerous other clinical studies have revealed that when cortical lesions involve a substantial portion of SI, patients no longer experience any pain (Vierck et al., 2013). Likewise, patients with lesions of the SII-insular cortex have been shown to either lack the sensation of pain (Biemond, 1956) or have altered pain perception (Starr et al., 2009; Veldhuijzen et al., 2010; Garcia-Larrea, 2012a and 2012b).

Another important test of whether a brain region is responsible for pain is to selectively stimulate that region with electrical current. Only two cortical regions have ever been shown to cause pain when electrically stimulated (Mazzola et al., 2012): SII and the insula, making these two regions the most critical components of the feeling-pain matrix (Garcia-Larrea, 2012a, 2012b).

What does conscious processing of noxious stimuli involve?

I have already described above that the brain must have attentional mechanisms in order to feel pain. But what else does the brain need to do in order to experience pain? Since pain is, by its very definition, the conscious processing of neural signals arising from noxious stimuli, we should first ask what conscious processing in the human brain does. Ideally, if we can identify what conscious processing accomplishes, we should be able to relate this to specific neural architectures. Once these architectures are characterized, they can then be used as biomarkers for the likelihood that a nervous system feels pain.

Conscious processing is dependent on at least two non-mutually exclusive processes: signal amplification and global integration over the cerebral cortex (Dehaene et al., 2014). Why are these processes so important? Amplification provides a mechanism to increase signal-to-noise ratio and to produce on-going neural activity after the initial sensory stimulus has ceased (Murphy and Miller, 2009). Global integration ensures the sharing and synchronization of neural information so that the most appropriate response is generated in the context of current and past experiences.
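
As a toy illustration of the amplification idea, consider a single rate unit with recurrent excitation. This is my own sketch, not a model from the cited papers (Murphy and Miller, 2009, analyze far richer dynamics), and the weights are arbitrary assumptions; it shows only how recurrent excitation can boost a brief input and keep activity going after the stimulus has ceased.

```python
# A toy sketch (illustrative assumptions only) of recurrent amplification:
# a single rate unit obeying r(t+1) = w * r(t) + input(t).
def simulate(recurrent_w, steps=50, stim_duration=5, stim_strength=1.0):
    """Return the activity trace of one recurrently connected rate unit."""
    r, trace = 0.0, []
    for t in range(steps):
        stim = stim_strength if t < stim_duration else 0.0
        r = recurrent_w * r + stim  # recurrent excitation feeds activity back
        trace.append(r)
    return trace

weak = simulate(recurrent_w=0.2)     # little recurrence: activity dies at stimulus offset
strong = simulate(recurrent_w=0.95)  # strong recurrence: amplified, ongoing activity

# Ten steps after the stimulus has ended, the strongly recurrent unit is still active.
print(f"weak={weak[15]:.6f}, strong={strong[15]:.3f}")
```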

Recently, the amount of information transferred across distant sites within the cortex has been quantified using electroencephalography. These quantitative values have been successfully used to distinguish between conscious, minimally-conscious and non-conscious patients (Casali et al., 2013; King et al., 2013). Thus, global integration is a critical defining feature of conscious processing.
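
The published measure is more sophisticated (King et al., 2013, use a weighted symbolic mutual information across EEG channels), but a toy proxy conveys the idea: two sites that receive a common, globally broadcast signal share information, while an unconnected site does not. The code below is an illustrative stand-in, not the published metric.

```python
# A toy proxy (not the wSMI metric of King et al., 2013) for information
# shared between two recording sites: histogram-based mutual information.
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate mutual information (bits) between two 1-D signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # only sum over occupied histogram cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
broadcast = rng.normal(size=5000)                 # a globally shared signal
site_a = broadcast + 0.5 * rng.normal(size=5000)  # two sites driven by it
site_b = broadcast + 0.5 * rng.normal(size=5000)
site_c = rng.normal(size=5000)                    # an unconnected site

print(mutual_information(site_a, site_b))  # substantially above zero: integration
print(mutual_information(site_a, site_c))  # near zero: no shared information
```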

What neural architectures enable the cortex to perform signal amplification and global integration? Both of these processes rely on the global propagation of neural information over the cortical surface. Such propagation is achieved by extensive lateral interconnections (axon pathways) between cortical regions. These cortical regions must be reciprocally linked by axons transmitting both feedforward excitatory and feedback excitatory and inhibitory activities (Douglas et al., 1995; Ganguli et al., 2008; Murphy and Miller, 2009).

In the sensation of pain, amplification involves long-distance attentional pathways associated with the fronto-parietal cortices and their interconnections with the feeling-pain matrix (Lobanov et al., 2013). The SI and SII sensory cortices possess topographical maps of the body that process information associated with the somatosensory system (see Key, 2015). Slight offsets between these maps (at least in human SI) for different sensations have been proposed to allow integration of different qualities (e.g., touch and nociception: Mancini et al., 2012; Haggard et al., 2013).

This idea has gained considerable support from recent high resolution mapping in primates (Vierck et al., 2013). It is now clear that different sub-modalities of pain, such as sharp-pricking pain and dull-burning pain, are mapped in different subregions of SI. Moreover, lateral interactions between these subregions significantly alter their relative levels of neural activity (Vierck et al., 2013).

Somatotopic maps of noxious stimuli also exist in the anterior and posterior insular cortex (Brooks et al., 2005; Baumgartner et al., 2010). Separate somatotopic maps are present for pinprick and heat noxious stimuli within the human anterior insular cortex (Baumgartner et al., 2010). This segregation of sensory inputs allows the two sub-modalities to be integrated both separately and together, as well as with the emotional and empathetic information that reaches the anterior insular cortex (Damasio et al., 2000; Baumgartner et al., 2010; Gu et al., 2012; Gu et al., 2013; Frot et al., 2014).

Amplification and global integration are also dependent on the local microcircuitry in each cortical region (Gilbert, 1983). The local cytoarchitecture of the cortex (the presence of discrete lamina and columnar organization) is capable of simultaneously maintaining both the differentiation and spatiotemporal relationships of neural signals. For example, separate features or qualities of sensory stimuli can be partitioned to different lamina, while the columnar organization enables these signals to be integrated. Both short- and long-range connections between columns provide additional levels of integration.

The six-layered neocortex is well suited for this neural processing. Signals from the thalamus terminate in layer 4 and are then passed vertically to layer 2 within a minicolumn. Activity is then projected to layer 5 within the same minicolumn. Strong inhibitory circuits involving interneurons refine the flow of information through this canonical microcircuit (Wolf et al., 2014). The layer 2 neurons project to other cortical regions (local and long-distance), while layer 5 neurons project to subcortical regions.
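
A deliberately crude sketch of that flow is given below. It is my own caricature under stated assumptions, not a validated circuit model: input enters layer 4, is relayed to layer 2 and then layer 5, and local inhibition subtracts from the signal at each stage so that weak inputs are filtered out while strong ones propagate.

```python
# A caricature (illustrative weights, not measured values) of the canonical
# microcircuit flow: thalamus -> layer 4 -> layer 2 -> layer 5, with local
# inhibition thresholding the signal at each stage.
def relu(x):
    return max(0.0, x)

def minicolumn(thalamic_input, inhibition=0.3):
    """One feedforward pass through a minicolumn; returns its two output layers."""
    layer4 = relu(thalamic_input - inhibition)  # thalamic afferents terminate in layer 4
    layer2 = relu(1.5 * layer4 - inhibition)    # layer 2 projects to other cortical regions
    layer5 = relu(1.2 * layer2 - inhibition)    # layer 5 projects to subcortical targets
    return layer2, layer5

print(minicolumn(0.2))  # weak input: filtered out entirely -> (0.0, 0.0)
print(minicolumn(1.0))  # strong input: propagates to both output layers
```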

Taken together, if the signal is strong enough and if sufficient information is transferred and integrated, then the feeling of pain emerges (at present, how this occurs remains a mystery).

In summary, to the best of our knowledge, for any vertebrate nervous system to feel pain it must be capable of transferring and integrating a certain level of neural information. I contend that such a nervous system must have, at least, the following organizational principles (restated schematically in the sketch after the list):

1. An attentional system to amplify neural information;

2. Distinct topographical coding of different qualities of somatosensory information;

3. The integration of different somatosensory information both between modalities (e.g., touch and pain) and within a single modality (sharp versus dull pain);

4. Higher-level integration of noxious signaling with other relevant information (e.g., emotional valence). This requires significant long-range axonal pathways (feedforward and feedback) between brain regions integrating this information;

5. Laminated and columnar organization of canonical neural circuits to differentiate between inputs and to allow preservation of spatiotemporal relationships. The lamina must be capable of processing inputs as well as outputs to either higher or lower hierarchical regions while maintaining meaningful representations of the neural information. The lamina must possess strong local inhibitory interneuron circuits to filter information;

6. Strong lateral interconnections (both local and long distance) between minicolumns to maintain integrity and biological relevance of processing in relation to the initial stimulus.
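
The logical structure of these criteria can be restated as a conjunctive checklist, as sketched below. This is my own framing rather than anything from the paper, and the truth values merely summarize the essay’s claims; they are not independent data.

```python
# A schematic restatement of the six criteria as a conjunctive checklist:
# each is claimed to be necessary but not sufficient, so a single failure
# rules the feeling of pain out. Truth values summarize the essay's claims.
CRITERIA = [
    "attentional system that amplifies neural information",
    "distinct topographic coding of somatosensory qualities",
    "integration across and within somatosensory modalities",
    "long-range integration with other information (e.g., emotional valence)",
    "laminated, columnar canonical circuits with local inhibition",
    "strong lateral interconnections between minicolumns",
]

def meets_necessary_conditions(features):
    """All six must hold; meeting them all still does not prove pain."""
    return all(features[c] for c in CRITERIA)

human = {c: True for c in CRITERIA}

fish = dict(human)  # start from the human profile, then remove what the essay says fish lack
fish["distinct topographic coding of somatosensory qualities"] = False
fish["long-range integration with other information (e.g., emotional valence)"] = False
fish["laminated, columnar canonical circuits with local inhibition"] = False

print(meets_necessary_conditions(human))  # True: pain remains possible (not proven)
print(meets_necessary_conditions(fish))   # False: at least one necessary condition fails
```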

I propose that each of these features is necessary but not sufficient for pain in vertebrates. On this basis it should be concluded that fish lack the prerequisite neuroanatomical features necessary to perform the required neurophysiological functions responsible for the feeling of pain. Fish lack distinct topographical coding and spatiotemporal integration of different somatosensory modalities; they lack the higher-order integration of somatosensory information with other sensory systems; and they lack a laminated and columnar organization of somatosensory information. What, then, does it feel like to be a fish? The evidence best supports the idea that it doesn’t feel like anything to be a fish. They are non-conscious animals that survive without feeling; they just do it. There is nothing heretical about this idea. For much of our lives, we humans also exist non-consciously.

_____

Brian Key is a Professor of Developmental Neurobiology in the School of Biomedical Sciences, University of Queensland. He is the Head of the Brain Growth and Regeneration Lab there. The Lab is dedicated to understanding the principles of stem cell biology, differentiation, axon guidance, plasticity, regeneration and development of the brain.

References

Baumgartner, U., Iannetti, G.D., Zambreanu, L., Stoeter, P., Treede, R-D. and Tracey, I. (2010) Multiple somatotopic representations of heat and mechanical pain in the operculo-insular cortex: a high-resolution fMRI study. J. Neurophysiol. 104:2863-2872.

Bekoff, M. (2006) Public lives of animals. Troubled scientist, pissy baboons, angry elephants, and happy hounds. J Conscious. Studies 13, 115-131.

Biemond, A. (1956) The conduction of pain above the level of the thalamus opticus. Arch. Neurol. Psychiatr. 75:231-244.

Brooks, J. and Tracey, I. (2005) From nociception to pain perception: imaging the spinal and supraspinal pathways. J. Anat. 207:19-33.

Brooks, J.C.W., Zambreanu, L., Godinez, A., Craig, A.D. and Tracey, I. (2005) Somatotopic organization of the human insula to painful heat studies with high resolution functional imaging. NeuroImage 27:201-209.

Buschman, T.J. and Miller, E.K. (2007) Top-down versus bottom-up control of attention in the prefrontal and posterior parietal cortices. Science 315:1860-1862.

Casali, A.G., Gosseries, O., Rosanova, M., Boly, M., Sarasso, S., Casali, K.R., Casarotto, S., Bruno, M-A., Laureys, S., Tononi, G. and Massimini, M. (2013) A theoretically based index of consciousness independent of sensory processing and behavior. Sci. Transl. Med. 5:198ra105.

Collins, A. and Koechlin, E. (2012) Reasoning, learning, and creativity: frontal lobe function and human decision making. PLoS Biol. 10(3):e1001293.

Corbetta, M. and Shulman, G.L. (2002) Control of goal-directed and stimulus-driven attention in the brain.  Nature Rev. Neurosci. 3:201-215.

Corradini, A. and Antonietti, A. (2013) Mirror neurons and their function in cognitively understood empathy. Conscious Cogn. 22:1152-1161.

Damasio, A.R., Grabowski, T.J., Bechara, A., Damasio, H., Ponto, L.L.B., Parvizi, J. and Hichwa, R.D. (2000) Subcortical and cortical activity during the feeling of self-generated emotions. Nature Neurosci. 3:1049-1056.

Dehaene, S., Charles, L., King, J-R. and Marti, S. (2014) Toward a computational theory of conscious processing. Curr. Opin. Neurobiol. 25:76-84.

de Waal, F.B.M. (2009) Darwin’s last laugh. Nature 460, 175.

Douglas, R.J., Koch, C., Mahowald, M., Martin, K.A.C. and Suarez, H.H. (1995) Recurrent excitation in neocortical circuits. Science 269:981-985.

Driver, J. and Frackowiak, R.S.J. (2001) Neurobiological measures of human selective attention. Neuropsychologia 39:1257-1262.

Escobedo, R., Muro, C., Spector, L. and Coppinger, R.P. (2014) Group size, individual role differentiation and effectiveness of cooperation in a homogenous group of hunters. J. R. Soc. Interface 11:20140204.

Foltz, E.L. and White, L.E. (1962) Pain “relief” by frontal cingulumotomy. J. Neurosurg. 19:89-100.

Frot, M., Faillenot, I. and Mauguiere, F. (2014) Processing of nociceptive input from posterior to anterior insula in humans. Hum. Brain Mapp. 35:5486-5499.

Ganguli, S., Bisley, J.W., Roitman, J.D., Shadlen, M.N., Goldberg, M.E. and Miller, K.D. (2008) One-dimensional dynamics of attention and decision making in LIP. Neuron 58:15-25.

Garcia-Larrea, L. (2012a) Insights gained into pain processing from patients with focal brain lesions. Neurosci. Lett. 520:188-191.

Garcia-Larrea, L. (2012b) The posterior insular-opercular cortex and the search of a primary cortex for pain. Clin. Neurophysiol. 42:299-313.

Gazzola, V., Rizzolatti, G., Wicker, B. and Keysers, C. (2007) The anthropomorphic brain: the mirror system responds to human and robotic actions. Neuroimage 35:1674-1684.

Gilbert, C.D. (1983) Microcircuitry of the visual cortex. Ann. Rev. Neurosci. 6:217-247.

Gu, X., Gao, Z., Wang, X., Liu, X., Knight, R.T., Hof, P.R. and Fan, J. (2012) Anterior insular cortex is necessary for empathetic pain perception. Brain 135:2726-2735.

Gu, X., Liu, X., Van Dam, N.T., Hof, P.R. and Fan, J. (2013) Cognition-emotion integration in the anterior insular cortex. Cereb. Cortex 23:20-27.

Haggard, P., Iannetti, G.D. and Longo, M.R. (2013) Spatial sensory representation in pain perception. Curr. Biol. 23:R164-R176.

Heberlein, A. and Adolphs, R. (2004) Impaired spontaneous anthropomorphizing despite intact perception and social knowledge. PNAS 101:7487-7491.

Key, B. (2015) Fish do not feel pain and its implications for understanding phenomenal consciousness. Biol. Philos. DOI 10.1007/s10539-014-9469-4.

King, J-R., Sitt, J.D., Faugeras, F., Rohaut, B., Karoui, I.E., Cohen, L., Naccache, L. and Dehaene, S. (2013) Information sharing in the brain indexes consciousness in noncommunicative patients. Curr. Biol. 23:1914-1919.

Lobanov, O.V., Quevedo, A.S., Hadsel, M.S., Kraft, R.A. and Coghill, R.C. (2013) Frontoparietal mechanisms supporting attention to location and intensity of painful stimuli. Pain 154:1758-1768.

Mancini, F., Haggard, P., Iannetti, G.D., Longo, M.R. and Sereno, M.I. (2012) Fine-grained nociceptive maps in primary somatosensory cortex. J. Neurosci. 32:17155-17162.

Manning, B.H. and Mayer, D.J. (1995) The central nucleus of the amygdala contributes to the production of morphine antinociception in the rat tail-flick test. J. Neurosci. 15:8199-8213.

Manning, B.H., Merin, N.M., Meng, I.D. and Amaral, D.G. (2001) Reduction in opioid- and cannabinoid-induced antinociception in rhesus monkeys after bilateral lesions of the amygdaloid complex. J. Neurosci. 21:8238-8246.

Mazzola, L., Isnard, J., Peyron, R. and Mauguiere, F. (2012) Stimulation of the human cortex and experience of pain: Wilder Penfield’s observations revisited. Brain 135:631-640.

Muro, C., Escobedo, R., Spector, L. and Coppinger, R.P. (2011) Wolf-pack (Canis lupus) hunting strategies emerge from simple rules in computational simulations. Behav. Processes 88:192-197.

Murphy, B.K. and Miller, K.D. (2009) Balanced amplification: a new mechanism of selective amplification of neural activity patterns. Neuron 61:635-648.

Ploner, M., Freund, H.-J. and Schnitzler, A. (1999) Pain affect without pain sensation in a patient with a postcentral lesion. Pain 81:211-214.

Rakic, P. (2009) Evolution of the neocortex: perspective from developmental biology. Nat. Rev. Neurosci. 10:724-735.

Rose, J.D., Arlinghaus, R., Cooke, S.J., Diggles, B.K., Sawynok, W., Stevens, E.D. and Wynne, C.D.L. (2014) Can fish really feel pain? Fish Fisher. 15:97-133.

Starr, C.J., Sawaki, L., Wittenberg, G.F., Burdette, J.H., Oshiro, Y., Quevedo, A.S. and Coghill, R.C. (2009) Roles of the insular cortex in the modulation of pain: insights from brain lesions. J. Neurosci. 29:2684-2694.

Veinante, P., Yalcin, I. and Barrot, M. (2013) The amygdala between sensation and affect: a role in pain. J. Mol. Psychiatr. 1:9.

Veldhuijzen, D.S., Greenspan, J.D., Kim, J.H. and Lenz, F.A. (2010) Altered pain and thermal sensation in subjects with isolated parietal and insular cortical lesions. Eur. J. Pain 14:535.e1-535.e11.

Vierck, C.J., Whitsel, B.L., Favorov, O.V., Brown, A.W. and Tommerdahl, M. (2013) Role of primary somatosensory cortex in the coding of pain. Pain 154:334-344.

Wolf, F., Engelken, R., Puelma-Touzel, M., Weidinger, J.D.F. and Neef, A. (2014) Dynamical models of cortical circuits. Curr. Opin. Neurobiol. 25:228-236.

127 thoughts on “Why fish (likely) don’t feel pain”

  1. My answer to this lies in a pair of Guest Blog posts I wrote for Scientific American in 2013, http://blogs.scientificamerican.com/guest-blog/2013/09/20/how-could-we-recognize-pain-in-an-octopus/ and http://blogs.scientificamerican.com/guest-blog/2013/09/26/how-could-we-recognize-pain-in-an-octopus-part-2/. They deal with pain in octopus rather than fish, but I believe the issues are the same. To get across the gist of the argument, I’m going to cut-and-paste a few paragraphs from those essays — I can’t say it any better now than I did then:

    “Most people think of pain as a particular type of experience — as something that happens inside our minds and can only be observed by ourselves. But as philosophers are well aware, there’s a big problem with that approach. The problem with thinking of pain as a private experience is that it leaves us helpless to identify pain in other people, much less in other species of animals. … The standard solution, implicit in most scientific discussions of pain, is that the more closely an animal resembles us, the safer we are in attributing experiences like ours to it. Other people have bodies and brains very similar to ours — the argument goes — so we can safely assume that they have pains like ours, especially when they tell us about them. Some types of animals, particularly mammals, have brain structures and nociceptive systems so similar to ours that it seems only reasonable for them to have pain experiences like ours. And so on.”

    “But here’s the thing. That solution just doesn’t work. It is wrong even for mammals and other people, and if you try to apply it to octopuses or lobsters, it doesn’t produce anything except bafflement.”

    I support that assertion using four thought experiments, involving (1) pain in non-human aliens; (2) pain in a human-like robot; (3) a man who appears to be in pain but claims not to experience any; (4) a woman who claims to experience pain but shows no sign of it.

    The essence of my conclusion:

    “When we attribute pain to another person or another species, we are actually going beyond science. We are not simply making an assessment of the facts, we are also making a moral judgement. When we call something pain, we are implicitly calling it wrong. In order to recognize something as pain, we must see it as a combination of nociception (detection of tissue damage and response to it) and wrongness. When we ask whether an octopus has pain, we aren’t just asking whether an octopus has a nociceptive system that resembles ours, we are also asking whether it is morally wrong to inflict damage on an octopus. Science can answer questions about the structure of the nociceptive system, but it can’t answer questions about moral wrongness. It can give us information that helps us to come up with answers, but it can’t provide us directly with the answers.

  2. A note or two, first:

    Brian, you specified bony fish. I get that you were wanting to rule out cetaceans, which of course aren’t fish. But were you also deliberately ruling out cartilaginous fish, and if so, why? Do they have some sensory differences related to pain? Does a shark feel pain, or at least potentially, where a salmon does not? Or do we just not have very much experimental data specific to cartilaginous fish?

    Second, kind of toward Massimo and, if we have further essays on consciousness in general (not free will!) — I note the issue that even humans process a lot of things unconsciously, as well as the fact that our attentional system can be nonconsciously driven. Maybe an essay on where we’re at on determining degrees of conscious vs unconscious behavior in humans might be in order down the road?

    (It’s also interesting, and a bit ironic, that our “consciousness imputers” may themselves act unconsciously, Brian!)

    Back to the meat.

    Brian, you said you wanted to go with a sensory stimuli processing angle rather than a conscious/unconscious one. Is that because the stimuli processing angle is narrower, and thus easier to work with? Because you think it more accurately gets at “what is pain?” Both? Other reasons? Your comments appreciated.

    Second, you next go on to discuss the pain-experiencing mechanisms in the brains of humans, then expand to primates. If primate brain structure doesn’t map well to fish, is this not potentially itself an “anatomical anthropomorphizing,” the idea that because their brains aren’t structured our way, they can’t feel pain? I said “potentially,” because I don’t know, and overall, I agree with you. But, I wanted to put that out.

    Finally, speaking of that, your six points.

    On point 1, do fish not have an attentional system like we do, or even close? You mention, after enumerating the points, some things that they lack, but you don’t mention this as a lack.

    Then, shifting this a bit to philosophy. This, per my opening paragraphs, and point 4, about emotional valence, seem to be where animal mind research in general moves from science, specifically neuroscience, to philosophy of mind. Comments from you, Massimo or others appreciated.

  3. “On this basis it should be concluded that fish lack the prerequisite neuroanatomical features necessary to perform the required neurophysiological functions responsible for the feeling of pain.”

    Should that be “… the feeling of pain as humans experience it”?

  4. Thank you for a worthwhile review of pain consciousness. A few observations/questions:

    “Resisting anthropomorphic tendencies” is an impossible challenge to which the OP immediately falls victim by suggesting that “the computer-generated wolves obey simple inter-wolf and wolf-prey attractive/repulsive rules.” Communication of ‘simple’ rules and adjusting social behavior based on them in real time is only simple in anthropomorphic and anthropocentric terms. These matters are extremely complex, and given that our understanding of biological processes is in its infancy, these kinds of errors are unavoidable. We simply do not understand the level of awareness of any form of life, man, wolf or fish. We do have a pretty good ‘feel’ of the human condition, for obvious reasons.

    Fish “are non-conscious animals that survive without feeling; they just do it”, because they have none of the neurologic structures that we have identified as necessary. I would be interested in your interpretation of avian behavior, also in the absence of a cortex. Birds have a pallium, yet they are capable of associative learning and complex problem solving.

    Could it not be that anthropomorphism inevitably infects us all, blinding us to the possibilities? Fish have eyes; it would therefore make no sense to propose, without proof, that they are not aware of an image of some sort, and that they do not respond to such a mental construct. Eyes are for seeing, and seeing is done with the brain.

    Are wolves and dogs unconscious too? They seem to have all the tools. I am willing to bet that the above mentioned computer algorithm explicitly simulated a few simple rules, but implicitly assumed much about the intelligence of wolves, ‘unconscious’ or otherwise. In the end, a rigorous definition of consciousness is needed to avoid discussions at cross-purposes.

  5. Brian, thanks for bringing us up to date on the neural correlates of pain. You say

    “Taken together, if the signal is strong enough and if sufficient information is transferred and integrated, then the feeling of pain emerges (at present, how this occurs remains a mystery).”

    Indeed, and one might also wonder whether pain and other conscious experiences add anything to the behavior control already afforded by the informational goings-on with which they are associated. From a neuroscientific, physicalist standpoint, it isn’t as if pain emerges as an additional observable property that then makes its own contribution to behavior control. As Dennett has put it, there is no second transduction of neurally-instantiated information into some further sort of thing (Kinds of Minds, p. 72). The informational goings-on are all that there is, so they have to be sufficient for such control. Pain, on the other hand, is entirely subjective: it exists for the person alone, so can’t play a role in a physicalist account even though of course we *experience* it as causing such things as wincing, avoiding hot stoves, etc.

    One might claim that pain doesn’t emerge at all, that it just *is* the informational goings-on, in which case it does play a causal role from a physicalist perspective. That identity claim seems problematic since pain, unlike its neural correlates, isn’t available to observation; it’s undergone by (it exists for) only the person in pain – that’s what makes it an experience. But should some sort of identity thesis about consciousness pan out, this would mean pain and other conscious experiences don’t add to what their neural correlates are doing, since they *are* those correlates. Either way, it isn’t obvious what behavioral advantage conscious experience gives to us that fish, should they be zombies, don’t enjoy.

  6. Brian,
    ” They are non-conscious animals that survive without feeling; they just do it.”
    I’m not sure I follow the thread of your logic. First you show that people with specific damage to particular areas of the brain do not feel pain, but that presumably doesn’t make them non-conscious, as they are not in an apparently vegetative state. They are only non-conscious of that particular sense. So why does the lack of pain receptors in a fish make them wholly non-conscious? They have eyes. Presumably they have some visual sense. They also sense underwater vibration and chemistry, i.e. sound and smell. Might it be that pain is just not evolutionarily useful for the survival of the species? Are you sure you are not applying your particular mental framing in ways not entirely accurate?

  7. Excellent article. I think this kind of work – especially looking at physical conditions-of-possibility for subjective experiences – is what will end up paring down the “hard problem” and leading us to new conceptual frameworks for understanding experiences.

    I have one minor qualm. Although I completely agree with what you’re saying about the dangers of anthropomorphism, I’m not sure that the wolf-pack simulation necessarily supports the conclusion you’re making. As an analogy, we’ve created very concise non-experiential algorithms that successfully play chess — so we know that purpose, awareness, consciousness, etc. are not *required* in order to play chess. But it turns out that our algorithms play chess in a totally different way than people do.

    So along with the dangers of anthropomorphism, we also need to be sensitive to the dangers of problem-domain reduction.

  8. Interesting essay, I like it! 🙂

    Also, I have a couple of questions. First, I would second the question by Marclevesque — isn’t the proposed definition of pain somewhat too anthropomorphic? Namely, Brian could have instead compared the sensation of touch in fish versus a human, and arguably he would reach a similar conclusion that fish do not experience touch in the same way a human does. Would it follow that fish do not feel touch at all?

    Second, I would repeat the statement by Bill — the concept of pain is related both to neurophysiological activity and to moral wrongness. I understand that Brian has focused on the former component, but to claim that fish do not feel pain (as a human does) does not imply that it is morally OK to inflict excessive physical stress on fish without any remorse. When I was a kid, I remember a friend of mine picking an earthworm from the ground and violently cutting it into a dozen pieces with a wooden stick. Granted, an earthworm does not feel pain anywhere near like humans (or even fish) do, but I would still describe my friend’s behaviour as sadistic brutality.

    Finally, I am somewhat baffled by the overwhelming usage of technical jargon. Picking a semi-random sentence from the essay as an example:

    The local cytoarchitecture of the cortex (the presence of discrete lamina and columnar organization) is capable of simultaneously maintaining both the differentiation and spatiotemporal relationships of neural signals.

    One gets the impression that some parts of the essay have been copy-pasted from a research paper. Being a theoretical physicist, I fully understand the need for technical jargon in research. But I do not understand the need for it in an essay for the general public.

    So, while figuring out the jargon in the essay was not a big problem for me personally (google and wikipedia helped), it might be for others. I think it is confusing and counterproductive to use it in such a large amount in a SciSal essay.

  9. [This comment is actually from Prof. Valerie Hardcastle, author of The Myth of Pain]

    Should we care whether fish can feel pain? My answer is no, because processing pain in and of itself has a negative effect on the brain and body. Here are three non-fish examples.

    Preterm infants normally undergo multiple painful experiences – heel sticks, intravenous catheters, chest tubes, endotracheal suctioning, surgery. Indeed, the sickest of premature babies are subjected to an average of 750 procedures before their discharge. And yet, analgesics are provided in less than 10% of the cases, largely because most doctors believe that neonates are like fish; they cannot actually experience their pain. However, acute nociceptive stimuli in neonatal patients lead to extended episodes of hyperalgesia, during which non-painful stimuli can induce chronic pain. In addition, nociceptive stimuli in neonates have been linked to early intraventricular hemorrhage and periventricular leukomalacia, a type of white-matter brain injury. Administering analgesics to neonates is correlated with fewer episodes of these types of injuries.

    Coded pain scores during immunization in male babies are linearly related to type of circumcision analgesia (no circumcision, topical agents, and placebo analgesics). There are also some data linking early pain experiences to permanent changes in spinal cord processing as well as a compromised immune system and even some behavioral disorders, such as hyper-vigilance, sleep disturbances, avoidance behaviors, and feeding problems. Regardless of the considerable discussion regarding whether young infants are conscious or, if conscious, whether they can remember painful events, traces of those events linger in the body and the brain, altering their developmental trajectory in fundamental ways.

    Pain processing also changes brains morphologically. With long-term or repeated pain, we can measure a significant decrease in gray matter in the areas associated with the “neuromatrix” in the brain, especially in the anterior cingulate cortex, right insular cortex, dorsolateral prefrontal cortex, amygdala, and brainstem. A few studies suggest that areas that exhibit the most change in gray matter density also include the hippocampus, multiple lateral frontal regions and portions of the occipital lobe, suggesting that the morphometric changes are not limited to pain-specific regions. Either way, the observed morphological differences in chronic pain conditions correlate to the length of time pain patients have been suffering as well as the intensity of their pain.

    All of these lines of evidence suggest a new way to think about creatures with pain. Regardless of level of consciousness, it is clear that processing pain information is bad for brains and other functional systems. It is also clear that blocking brains from processing pain information prevents these pain-related disruptions. Therefore, it seems clear (to me at least) that whether someone or something is conscious has little bearing on whether one should treat its pain. We should focus on maximizing the good for the system as a whole, regardless of the status of consciousness. All else being equal, minimizing pain activation is always preferable. Therefore, we should do our best to minimize pain in all creatures, even those we believe are not conscious.

  10. Professor Key rightly says that one should be clear about one’s definition of “pain”, but I am not convinced that his usage of the term is entirely consistent. He defines pain as “the subjective and unpleasant experience (colloquially referred to as a ‘feeling’) associated with a mental state that occurs following exposure to a noxious stimulus”, and immediately thereafter identifies the mental state in question with “the neural activity in the brain that is indirectly activated by the stimulation of peripheral sensory receptors”.

    Since I think some form of type identity is plausible for sensations, I don’t have much of a problem with this. However, I do find the wording somewhat puzzling. If pain is “the subjective and unpleasant experience … associated with a mental state”, and the mental state is a certain kind of neural activity, it would seem that pain sensations are not in fact being identified with neural activity, but rather with something which is merely “associated with” such activity. What does this mean? If the subjective experience is not itself neural activity but only “associated with” such activity, what is it, and what is this relation of “association” supposed to amount to?

    This may be to read too much into the expression “associated with”, and perhaps Prof. Key does mean to identify the subjective experience, the mental state, and the neural activity. But if that is so, why talk about “mental states” here at all? I am well aware that philosophers tend to talk about pains as “mental states”, but when they do so they typically take themselves to mean something quite different from neural activity (type identity is scarcely popular among philosophers nowadays, even if it has some very able defenders).

    One could say that to call a pain a “mental” state is simply to fall in with common usage, but is it even the case that people in general (i.e. “the folk”) think of (say) the pain of a stubbed toe as a “mental” state rather than a physiological one? I doubt it. Rather, it seems to me that when people use “mind”-involving expressions with reference to bodily pains, it’s usually in order to question whether the pain is “real” or not (thus consider “it’s all in his mind”, or “he says he’s in pain, but it seems to me it’s all psychological”).

    In saying this, I do not suggest that the folk tend not to be mind/body dualists (they do), but is it clear that they think of bodily pain as being on the “mental” rather than the “physical” side of the dichotomy? If so, why do they reach for pain killers?

    More serious than the above concerns (which are perhaps all “merely terminological”) is whether Professor Key sticks to his own definition of “pain”, and the extent to which, if he does not, this vitiates his argument. Since my 500 words is already up I will address this in a follow-up comment (which I hope to get round to later today).

  11. Hello Brian,

    I really enjoyed the clear paper that you wrote. Thanks.

    Your argument that fish (of this particular kind) don’t feel pain seems to be that in order for us to be warranted in believing something experiences pain, it must have all of the following necessary features:

    1. An attentional system to amplify neural information;
    2. Distinct topographical coding of different qualities of somatosensory information;
    3. The integration of different somatosensory information both between modalities (e.g., touch and pain) and within a single modality (sharp versus dull pain);
    4. Higher-level integration of noxious signaling with other relevant information (e.g., emotional valence). This requires significant long-range axonal pathways (feedforward and feedback) between brain regions integrating this information;
    5. Laminated and columnar organization of canonical neural circuits to differentiate between inputs and to allow preservation of spatiotemporal relationships. The lamina must be capable of processing inputs as well as outputs to either higher or lower hierarchical regions while maintaining meaningful representations of the neural information. The lamina must possess strong local inhibitory interneuron circuits to filter information;
    6. Strong lateral interconnections (both local and long distance) between minicolumns to maintain integrity and biological relevance of processing in relation to initial stimulus.

    Since fish do not have all of these necessary features, we are warranted in concluding that fish don’t experience pain.
    I endorse the strategy of argument which you employ. It looks like it has the form:

    -Find the neural correlates of x
    -Find out whether creature Y has the neural correlates of x
    -If creature Y has the neural correlates of x, then we say creature Y has x. If we don’t find these neural correlates of x, we are warranted in concluding that creature Y does not have x.

    Features 1-6 are simply the alleged neural correlates for pain, and your argument runs through smoothly assuming that they are the neural correlates for pain.

    However, the recent literature discussing whether “phenomenal consciousness overflows cognitive access” might suggest otherwise.

    Ned Block drew the distinction in the philosophical literature between “phenomenal” and “access” consciousness in the 90’s. Phenomenal consciousness refers to the “what it’s like” of experience (the qualitative experience of pain, the taste of coffee, etc.). Access consciousness, by contrast, refers to what an organism has access to for guided report, reasoning, control, etc.

    Block has argued that phenomenal consciousness overflows access consciousness, meaning that we can be experiencing more than what we have access to (I can be experiencing pain even if I can’t report it). (see next comment)

  12. He recently argued that what he calls “identity crowding” in visual perception shows us a case where we *cannot* attend to, or have access consciousness of, a stimulus (because it is below the grain of attention in a crowded scene) but where one nonetheless sees it (is phenomenally conscious of it).

    He also points to cases such as some paralyzed anosognosics (paralyzed patients who are unaware of their disability) in order to show that access and phenomenal consciousness are dissociable. These paralyzed anosognosics are instructed to move their arm. After a few seconds (and no action performed) the patients report that they moved their arm (that they experienced moving their arm when they obviously did not). Later, after some patients recover from their anosognosia, they sometimes report that they find it odd that they said they were moving their arms when instructed to do so while paralyzed, for they certainly knew they couldn’t move their arms.

    In other words, it is plausible that these patients actually were not experiencing moving their arms but were reporting otherwise. This would be one example of one’s reports being dissociated from one’s experience.

    Now, Block and defenders of the “rich” view of consciousness (phenomenal consciousness overflows cognitive access) would probably say that all of features 1-6 that you mentioned are necessary for *access* consciousness but not *phenomenal* consciousness. This is especially the case since you mention Dehaene’s “global neuronal workspace” theory as being the working theory of *phenomenal* consciousness in your paper.

    Dehaene himself admits at the beginning of his recent book “Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts” that the global neuronal workspace theory is a theory of access consciousness, and he simply assumes that phenomenal consciousness does not overflow access consciousness, so that the theory is one of phenomenal consciousness as well.

    Additionally, almost all of the requirements you mention sound like candidate neural mechanisms for access consciousness, such as higher-level integration of noxious stimuli with other relevant information, preservation of spatiotemporal relationships, strong lateral connections, etc. These are all the types of things Dehaene points to in his book as being signatures of consciousness (where he means access consciousness).

    So, it seems possible that you have pointed out all the requirements for access consciousness of pain, but not phenomenal consciousness. In other words, it is possible that fish experience pain, they just have none of the higher level neural processes required for our kind of robust access consciousness.

    I was just wondering if you had a way to account for this, or you were just assuming that access consciousness is not overflowed by phenomenal consciousness? Any input on this would be great, I personally am very interested in the debate about access and phenomenal consciousness. Thanks.

  13. This is a very interesting post, although I share Bill Skaggs’s concern. Maybe it works for fish because they are phylogenetically very close to us; maybe not; but one cannot easily reason from our specific brain structure to whether other animals (or aliens) feel, because they may have evolved differently-looking and differently-wired systems with the same functionality. Perhaps I misunderstand, but part of the argumentation sounds like “to see, we mammals need a camera eye, so we can conclude that insects, who have compound eyes, cannot see”.

    On a more general level, I find it hard to define a noxious stimulus that motivates the animal to attempt to escape the cause of that stimulus as anything but pain, even if that animal is incapable of figuring out a better strategy than instinctively tugging at the line.

  14. Valerie Hardcastle:

    “…it seems clear (to me at least) that whether someone or something is conscious has little bearing on whether one should treat its pain. We should focus on maximizing the good for the system as a whole, regardless of the status of consciousness. All else being equal, minimizing pain activation is always preferable. Therefore, we should do our best to minimize pain in all creatures, even those we believe are not conscious.”

    Usually the reason we want to minimize pain for creatures is because they are conscious and thus experience pain, and therefore suffer. If fish aren’t conscious and don’t experience pain (still an open question in my book), then presumably it shouldn’t worry us that they *seem* to be suffering as they wriggle on the hook. It doesn’t seem right to say fish have pain absent any consciousness, since pain is an experience, which is what consciousness consists in. Fish are undergoing severe bodily damage and eventual death as they head to our dinner plates, but on Brian’s account they don’t have the neural architecture necessary for feeling pain or perhaps for having any sort of experience, so aren’t objects of moral concern (on the assumption that it’s the capacity for conscious experience and suffering that makes an entity a proper object of such concern).

But Valerie’s report of the long-term damage done by the activation of nociceptive neural networks drives home the point that, once we decide a creature *is* an object of moral concern, we should minimize the activation of those networks (e.g., during medical procedures for neonates and for patients who may or may not be conscious), whether or not we believe such activation actually results in experiencing pain at the time. Activation of those networks, even absent consciousness and pain, can apparently do lasting damage.

15. I’m not even sure what the point or relevance of Prof. Valerie Hardcastle’s comment was. Brian Key’s paper is about understanding the nature, mechanisms, and conditions of conscious pain, as well as why some creatures we normally assume experience pain really do not. In other words, Key’s article focuses on the empirical and theoretical issues of understanding pain. Hardcastle’s comment focuses on how we ought to prevent pain (information) in light of empirical research, regardless of whether or not we are capable of experiencing it. In other words, Hardcastle’s comment focuses on the normative and therapeutic issues of preventing pain (information). These are two very different issues, and I’m not sure how bringing up the latter constitutes a genuine or interesting disagreement with the former. It may be the case that it doesn’t really matter whether or not fish experience pain, but that’s an issue of value rather than an issue of fact. In other words, I don’t think Hardcastle’s comment is directly engaging with Key’s work.

  16. In response to Professor Hardcastle’s comment,

First, I am not quite sure how her point addresses the paper, since it seemed like the OP was mostly making claims about how to determine when something can undergo conscious pain and whether other creatures fit the bill. I didn’t think he was directly making any normative claims about what reasons these facts give us to treat creatures one way or another.

    But besides this point, I think I disagree with her.

She argues that whether or not organisms *consciously experience* pain is irrelevant to the question of how we ought to treat them. This is because (unconscious) pain information processing is bad for organisms, so even if it is unconscious, we still ought to avoid inducing it.

She points out cases such as infants who may not undergo conscious pain, yet whose pain information processing we nonetheless care about preventing, since even unconscious pain information processing can lead to serious morphological and functional changes in the infant’s brain anatomy. Consequently, whether or not infants feel conscious pain shouldn’t stop us from preventing their pain information processing.

I take it that, as a result, whether or not fish consciously feel pain shouldn’t stop us from preventing their pain information processing either.

However, I think there is a disanalogy. In the case of infants, we believe that all of these functional and morphological changes early in the infant’s development, which result from pain information processing, will lead to conscious pain later on for the infant (emotional pain, acute or chronic physical pains, etc.). This, I think, explains why we care about preventing the (non-conscious) pain information processing in infants.

Perhaps this thought experiment will help. Let’s say that we knew that a fetus was going to, tragically, die 2 days after leaving the womb. We all agree that both the fetus and the infant 2 days after leaving the womb cannot experience conscious pain; they can only process pain information unconsciously. In other words, throughout the organism’s life, it will never experience any conscious pain at all.

Would any of us think that it would be bad for the fetus if it processed pain information? I don’t think so. We know that for the entire duration of its life it will never experience conscious pain, which is the thing we all really find to be intrinsically bad. My suspicion is that pain information processing is only bad insofar as it will reliably or plausibly lead to conscious pain; it isn’t intrinsically bad by itself.

    So, since a fish, on OP’s account, can never experience conscious pain, we don’t feel like we have to prevent any pain information processing (unless you give some sort of sanctity of life argument for them, but this would be a whole different argument).

17. Armchair philosophers and ethologists are much to be feared.
The author uses neuroanatomy to overrule ethology. Anybody who has interacted with fishes knows that they behave as if they experienced pain. Another objection: (some) fishes can act in a very clever way. Pain is a big help for intelligence. It’s a more economical hypothesis. Consider:
    https://patriceayme.wordpress.com/2014/07/27/diving-into-truth/

Anti-anthropomorphism sounds scientific, but it is actually a contrived hypothesis, insisting, with the Bible, that man is special instead of just an animal.

Once I was in an African national park. I saw a large antelope (Hippotragus equinus), obviously in a panic, dash down a twenty-foot embankment on the other side of a wide river. He landed on the 200-foot-wide beach, separated from the river itself by dunes… A large lioness followed down the embankment. Then the lioness took a hard left, from my perspective; instead of following her prey, she turned ninety degrees! She went full speed for 400 meters or so, and then angled through the field of dunes, along the river, which was much wider there. Meanwhile the antelope, seeing the lioness was not in hot pursuit, had slowed down. But he was confronted with a new problem: a wide river, full of crocodiles.

From my vantage point, I could see at least a hundred trunks floating in the river, each one a croc. The antelope trotted upstream, knowing full well that to swim across meant certain death. Soon he saw the solution in the distance: shallow rapids. He accelerated. By then the lioness was in ambush near the top of the last large dune dominating the narrows.

    The antelope arrived at a very brisk pace, scanning ahead to figure out the optimal point. The lioness was crouched, observing just so within the grass hidden by the very top of the dune, which she had craftily put between herself and the direction she knew her prey would come from.

    I screamed.

    The two beasts sprang into action. The antelope understood that there was an ambush, and bounded in an enormous effort, taking a dangerous short-cut. At the same time, realizing apes were foiling her plan, as apes tend to do, the lioness also charged.

    She missed.

    The antelope climbed on my side of the river, still pursued by the lioness, who took the time to throw me a very dirty look.

    This was not my only encounter with very clever wild animals.

    I have encountered lions many times. Lions in good standing resist hunting instinct and pangs of hunger, and don’t attack human beings.

Once, as a child in Africa, I was diving, spear in hand. I caught a lobster. However, the critter screamed in such a heartbreaking fashion that I never repeated the experience. Another time, I had caught an octopus, and, although mostly dead and hopeless, it made a point of biting me in protest. Yes, it was clearly a protest.

    So animals have feelings and emotions. If we directly interact with them, it’s blatant.

  18. Thank you everyone for your very insightful comments. Massimo has suggested that I refrain from answering every individual comment and instead stay big picture. Adhering to this philosophy I will give an analogy that hopefully will address at least some of the comments in this first post.

Let’s do a thought experiment, as philosophers so like to engage in. Imagine I am an alien who has arrived on Earth, and the very first animal I encountered was a bird. I then set forth to understand the basic principles that enabled this creature to fly, without reference to any other animal. Once I determined these principles, I could use them to assess the likelihood that any other animal (non-bird) could fly. In order to do this I had to first define “flying” as opposed, for example, to “gliding”. I then came up with two very fundamental principles: 1. must have a wing-like structure; 2. must have a motor system (e.g. muscles) to power flight (rather than to “glide”). With these principles I could then indicate whether any new animal I found could have the potential to fly. I found a bat: yes to both principles; a bee: yes to both principles; a fly: yes to both principles. Then I found the marsupial Petaurus breviceps (sugar glider); it had wing-like structures but no motor system to power them, so it can’t fly. Then I found a chicken. It had wings (albeit small for its body size) and it had muscles (albeit rudimentary) to power them. But when observed, it didn’t fly. The important thing is that the principle was to “power” flight. The chicken can’t fly because its musculature is insufficient to provide power. The take-home message is that once the fundamental principles are defined, it is possible to make inferences. Note that I didn’t mandate the possession of ectodermal derivatives, such as feathers, as a fundamental principle. However, narrower principles (such as feathers) can subsequently be used for defining characteristics of flight in evolutionarily related species.

One could argue that a whole new animal species might be discovered: it has power in the form of an organ that generates jet propulsion, and it does not possess wings but rather is shaped like a bullet. This creature seems to fly, but based on my principles I failed to predict it could do so. That’s when you must sufficiently and adequately define “flying”. Flying involves overcoming the force of gravity. In this case, without wing-like structures the animal would quickly succumb to gravity and fall to earth (like a bullet does).

Once the definitions and principles are established it becomes possible to infer whether fish are capable of feeling pain. (In my next comment, as my 500 words are up, I will hopefully address the vision/touch comments about fish, as this is really important.)

19. According to Professor Key, something only counts as a feeling of pain if one is self-consciously directing one’s attention to it. He writes:

    “To feel pain requires that you are aware or conscious of your own mental state. To be aware first requires that you attend to the stimulus. … To feel a sensory stimulus requires attention to that stimulus.”

    In order to see what may be wrong with this, it is worth noting that Prof. Key himself appears to contradict it in several places. Thus, in the third sentence of the essay he maintains that “much of our own lives are led without attending to how we feel”. Here as elsewhere Prof. Key appears to distinguish between feelings, on the one hand, and conscious attention to those feelings on the other (just as he also makes a distinction between “attention to pain” and “the actual feeling of pain” when noting that the ACC is associated with the former but not the latter). And this seems to me to be entirely correct. I may be undergoing any number of feelings and sensations and emotions without explicitly paying attention to them: while walking in the countryside with my wife I may be suffering from the pain in my arthritic big toe, feeling the ice-cold wind on my face, smelling the manure the farmers have been spreading on the fields, tasting the gum in my mouth, enjoying inhaling and exhaling from my cigarette, while also paying attention to what my wife is saying and feeling happy or sad or cheerful or angry or indifferent about it. In some sense or at some level I am surely aware of all of these sensations and feelings, yet I cannot be consciously attending to all of them at once. Does it follow that I am not undergoing them, that they do not exist, or that until I consciously attend to them I am but a zombie?

    In short, I would say that attention is not an all-or-nothing affair, and that we must distinguish between feelings and sensations, on the one hand, and varying degrees of conscious awareness of these feelings and sensations on the other (note that this does not preclude the possibility that it might make sense to think of the first-order sensations and feelings themselves as modes of awareness). While it is obviously true that “much of our own lives are led without attending to how we feel”, these feelings do not simply spring into existence when we focus our conscious attention upon them. Rather, they must already be there — we must be already undergoing them — before we *can* turn our conscious attention to them (or neglect to do so) in the first place.

So contrary to what Prof. Key suggests, it seems to me that it does *not* follow from the fact that “much of our own lives are led without attending to how we feel” that we are for the most part non-conscious “automatons” — the fact that we manage to “get in the car and drive to the shops without thinking about what it feels like” notwithstanding. And if we can experience pains and undergo other feelings, sensations and emotions without consciously attending to them, it does not follow from the fact that fish are not capable of conscious attention that they are not capable of undergoing sensations or feelings of pain. (To be clear, I am not claiming that fish *can* or *do* feel pain; but to the extent that the argument presented above depends upon the premise that pain requires conscious attention, and infers from the fact that fish do not have the capacity for conscious attention that they cannot feel pain, I am not persuaded that it is sound.)

  20. Hi Brian,

    Thanks for an interesting and informative article, although I am not even remotely qualified to judge the science in it.

    In terms of the emotions and suffering of animals I think that there is a big element of giving them the benefit of the doubt.

From your article, and other scientific opinion I have heard, I think you are probably right about the kinds of fish you discuss.

Patrice, I agree with some of what you say, but I doubt that you could really say that the octopus bite was in protest. How does an octopus know that the situation is hopeless?

But with your story of the lioness and the antelope, yes: how could that not be described as reasoning, tactics, panic, frustration, etc.?

I have seen behaviour which seems to show reasoning and emotion. We have a large basin outside our back window, sitting in a wooden stand, and the birds come to drink and bathe. When it was becoming empty and the birds could no longer scoop up water, I saw a cockatoo lift up the edge of the basin so that it sat at an angle in the stand and it could drink again. I cannot see this as anything other than geometric reasoning and a basic understanding of how water behaves.

I recall in my youth that our friends had a dog, and she used to have the most heartbreaking expression when she thought that the family were going out of the house without her. When she was told to come she bounded in with a happy expression. I cannot see this as anything other than that she was sad to be left out and happy to be included. At least that appears to be the most parsimonious explanation.

  21. The alien thought experiment brings up an interesting point. Suppose the aliens could use logic, mathematics and could form and test hypotheses but were not conscious – they had no sensations.

They would surely classify the fish’s reaction to a hook through its lip as the same sort of thing as a human’s reaction to the same – even though the neural mechanisms differ. They could not form a hypothesis of a subjective feeling of pain, as they would have no way of knowing what that was. Nor would they need the hypothesis.

So, to the aliens, the distinction between putting a hook through the lip of a fish and refusing to do the same to a human would seem a strange one, explicable only as a subjective bias towards our own species.

22. Not sure I get Professor Hardcastle’s point. What could represent the “good” of a non-conscious system? I can cane the hard disk of my computer, knowing that I have backups of any data that I really care about, and it doesn’t matter if it falls over. Or I can just cane it to see how much damage it can take. Or I can try dropping a new line of laptops to see how easily they break. I don’t think that there is any objection to my doing that – so what is the difference if the system is organic?

    I think that conscious experience must be the criterion, even though this is hard to define in other species. As I said before, we give as much benefit of the doubt as we can.

  23. Various thoughts as I wait for the author to weigh in further, though I appreciate Brian’s initial comment about delineation issues.

Patrice and others: “As if” behavior is no guarantor of anything. And, per Patrice in general, I turn a very skeptical eye at projecting intellectual as well as emotional attributes on creatures without warrant. I doubt that antelopes “understand” anything when being pursued by predators, or “knew”, in a non-instinctual, consciously calculated way, that it was “certain death” to cross. I also, on the emotional side, question that the lioness had a “dirty look.”

Tom Clark: Per my comments a few essays back about how free will may have started as a spandrel but then got “realized” as having value — that value being focus upon, and better achievement of, goals. My idea of how free will developed is not necessary for finding free will plausible as “consciousness consciously in action,” having value as a goal-setting and goal-realization device.

Marcel & all: The issue of pain “as we define it” and related doesn’t bother me. This connects, if vaguely, to realism vs. non-realism, and also to such things as qualia. This is just another case of a subjective interpretation of an objectively generated phenomenon; if fish don’t objectively generate — and cannot generate — an approximate correlate of pain, then whatever they experience isn’t pain.

    Beyond that, here’s where I think consciousness connects to pain. A consciously goal-setting creature (flourishing alert!) can have among its goals … increasing pleasure and avoiding pain! And, planning for how to do so.

    A dog may not do that as well as a human, but it’s arguable a dog can do that. It’s pretty much beyond just arguable that a primate can.

    A fish?

    Sorry, but I see no indication that “as if” behavior reflects that a fish can consciously distinguish between “pain” and “non pain,” then take courses of action to try to increase “non pain” and decrease “pain.”

Massimo: Per Hardcastle’s opening paragraph and what I’ve said on the volition issue (“something like free will”), if fish are nonconscious, I think we should, to be precise, reference “processing something like pain.”

That said, anthropomorphism or non-anthropomorphism, pain-feeling or non-pain-feeling, there are good, humanistic ethical reasons for treating animals with kindness, per Massimo’s conclusion. I would say treat animals “humanely,” but, per what Twain has said about the “Moral Sense” in “The Mysterious Stranger” and “The Damned Human Race” (http://www.zengardner.com/the-damned-human-race-mark-twain-essay/), I do, snark aside, think that maybe we ought not to use a word like “humanely” so lightly.

    And, indeed, if there’s anthropomorphizing of any sort, it’s of the sort about which Twain cautions.

Daniel Tippens I see no reason to distinguish the way Block does. Access consciousness can and will be used for subjective ends, for one thing. Or, via Wiki, there’s William Lycan, arguing at least eight distinct types of consciousness can be identified. (That said, I think all his types overlap and are thus subtypes.) http://en.wikipedia.org/wiki/Consciousness#Types_of_consciousness

  24. Brian,

    two very fundamental principles: 1. must have a wing-like structure; 2. must have a motor system (e.g. muscles) to power flight (rather than to “glide”). With these principles I could then indicate whether any new animal I found could have the potential to fly.
    […]
    That’s when you must sufficiently and adequately define “flying”. Flying involves overcoming the force of gravity.

I can easily imagine a balloon-like animal which can fly (according to your definition of flying) by filling itself up with hot air, and overcoming the force of gravity via buoyancy rather than propulsion. Horizontal motion can be achieved by opening small “vents” which would blow the air out in one or another horizontal direction, against the desired direction of motion. This method of flying requires no wings, and no muscles for propulsion (muscles are arguably needed only for air intake, i.e. breathing, which has nothing directly to do with overcoming gravity).

    Also, as a serious alien explorer 🙂 , you would not limit yourself to exploring only the Earth’s atmosphere. Water is also a fluid, and overcoming gravity in water is arguably also a form of flying. And if you dip into the ocean, you would surely find a whole bunch of animals which successfully fly in that environment (they never fall to the ocean floor), namely… well, whad’ya know… fish! 🙂 They overcome gravity with buoyancy — no wings! Granted, most use muscles for horizontal propulsion, but still not to counter gravity. Others (jellyfish) use a form of locomotion that arguably has neither wings nor muscles, unless you stretch the definitions of these beyond any sanity.

So understanding what makes birds fly does not give you a completely general account of all the possible ways something could fly.

    Once the definitions and principles are established it becomes possible to infer whether fish are capable of feeling pain.

On the contrary — as the flying example suggests, I’d say that the principles of feeling pain that you have introduced are too narrow and anthropocentric. Note that in the flying example you made a distinction between the intrinsic “meaning” of flying (overcoming gravity) and the hardware one needs to do it (wings and muscles). My argument above indicates that this hardware is not really necessary, since flying can be achieved by different means. However, in the pain story you have only identified the brain hardware necessary for pain in humans; you didn’t give an independent definition of what “pain” itself is. As in the flying example, one could arguably experience pain using different hardware, for some suitable definition of “pain”.

I think that is the main point of criticism here. You need to define the concept of pain in a hardware-independent way, and then argue that human hardware enables humans to experience it, while fish hardware does not. Defining “pain” via the presence of the appropriate human hardware is the very definition of “anthropocentric”.

    The fish certainly do not experience pain in the way humans do, but that does not mean that they do not experience pain at all. Please define “pain” so that even fish could in principle experience it, and only then argue that fish lack suitable hardware to actually do it.

  25. Do Humans Think? Not clear.
Arriving from an extrasolar planet, I find the head of some lab down under presenting himself to experiment with.
    He feels fishes are (vastly inferior beings) without feelings.
    That’s (likely) fishy.

The lab head identifies being in pain with being conscious. Says he: “To feel pain requires that you are aware or conscious of your own mental state”. Yet anybody who has slept with a sharp pain knows that one can be in pain and not conscious. Actually, a sharp enough pain will awaken somebody who is not conscious: in the beginning, clearly, there was pain, and still no consciousness. There is a point when one is in pain, but not conscious.

Reciprocally, it’s possible to be conscious and not in pain… while being cut open, or having one’s burns cleaned of all the debris (following an explosion: been there, done that). Also, one can shut down pain deliberately by creating a mental state to override it. I find this very useful when running uphill, or after colliding with stinging nettles (it’s curative too, as scratching makes the situation worse).
Not just that, but my down-under experimental lab head tells me that wolves are simple computer programs.
I wonder why he thinks he is not one himself, just repeating what Descartes programmed him to say, 400 years ago.

    He explained to me that this had to do with a general principle: resisting anthropomorphism. According to anthropomorphism, what has the form of man in other species achieves a similar purpose. For example, if it looks like a human foot, it is used like a human foot, for walking. And if it looks like a human mouth, it is used like a human mouth, to eat. And if it looks like a human vocal apparatus, it is used like a human vocal apparatus, to speak (as the latest ecological research on parrots in Brazil shows).
And if it looks like wings, it should be used to fly. Do we know an animal with relatively large wings who does not fly? No (ostriches would need wings forty feet across to fly; even penguins fly under water: as water is roughly 800 times the density of air, their wings, if proportioned to the density of air, would allow them to fly in it).
    It is a question of evolution.

Why evolve very costly structures (such as wings big enough for flying) and not use them? (That was one problem with the Space Shuttle.)
So why evolve a complicated brain, and not use it for pain?

That should be resisted, my human lab head told me. We should resist the idea that non-human brains are there to do something similar to what human brains do: feel pleasure, pain, planning, imagination, thinking.
Instead, the lab head tells me, we should imagine other brains, not his, as simple programs. He, and he alone, and some of his fellow humans, have mastered the art of thinking. Or the art of feeling. Fishes are just fishy.

  26. Hi Socratic,

I think this might be a case where the Wikipedia entry just doesn’t do justice to what is currently going on in the philosophy of mind/cognitive science field. If you are interested in the reasons for Block’s distinction, see here: http://plato.stanford.edu/entries/consciousness/ . Interestingly, Block won the Jean Nicod Prize (a prize for a distinguished philosopher of mind or cognitive scientist with philosophical leanings) last year for his work on this exact subject.

But in short, there are conceptual, theoretical, and empirical reasons for accepting the distinction that Block draws. The empirical (though controversial) evidence comes from iconic memory experiments such as the Sperling paradigm ( http://symboldomains.com/pdfs/Sperling_PsychMonogr_1960.pdf ), the Landman and Lamme paradigm (Dwayne Holmes will likely be able to tell us all about this in more detail), identity crowding cases, and more.

Even neuroscientists, cognitive psychologists and vision scientists appeal to (though don’t necessarily accept) the distinction. See for example this recent long discussion in “Trends in Cognitive Sciences” from Cell Press: http://www.nyu.edu/gsas/dept/philo/courses/consciousness08Fall/papers/2006_TiCS_Tsuchiya.pdf

lamme.pdf

    http://www.sciencedirect.com/science/article/pii/S1364661312002203

    and here: http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/1995_Function.pdf

The last link is Block’s landmark 1995 paper with replies. I would give a summary of this discussion, but the best I can do for now, due to time constraints and comment length restrictions, is to refer you to the literature. I hope it helps. This post referring you to the literature is not intended as an appeal to authority or anything like that (especially since I only have my B.A. in philosophy), but I did my honors thesis in phil mind, so this literature came to mind (pun intended) when I saw your comment, and I think it is a good and really interesting read if you’re interested in this topic.

27. Thank you for all the discussion. Massimo P. has advised me not to answer every point, nor to weigh in too early, as others would contribute, and he was correct. I appreciate SocraticGadfly’s latest comments, since they eloquently answer several enquiries. In this second post I wish to address the impression I am getting that some people believe that it is possible to experience pain even if you are not aware of it. If you don’t feel it (not aware of it) then it is simply non-conscious and not a feeling (i.e. not pain). However, non-conscious neural activity is still very capable of driving behaviour. There is considerable research in human psychology (e.g. subliminal processing) devoted to this concept. The important point is that if you are not aware of it, then you can’t feel it (hence my “ischial tuberosity/backside” analogy).

Continuing on from this idea, I will briefly address several enquiries about fish vision. I contend that fish do not experience the “mind’s eye”. Their vision is non-conscious. I don’t wish to go into depth about this matter here (I anticipate that people will raise all sorts of scenarios), as I am writing a research report on it at present.

SocraticGadfly asked why bony fish and not cartilaginous fish. There is research indicating that some of these fish may not even be able to respond to noxious stimuli, so I didn’t want to “muddy” the waters here.

    Abe – don’t read so much into “associate”. There are books about the matter you raise. BTW I am actually very interested in the illusory nature of our senses and its relation to the so-called “hard problem” (but that is another essay/book).

    Until my next post.

28. The underlying assumption seems to be that consciousness is an evolved quality of complex cognition, yet what is the logical basis for that? There would seem to be far more evidence that it is primary to complex logic, and essentially emotive in nature. Otherwise one might argue lots of people fall under the threshold. What if we were to go way out on a limb and suppose it is elemental to biological life in general, and complex organisms are essentially a magnification of this sense? For one thing, it would certainly explain the primary behavior, i.e. the survival instinct, of virtually all life forms. It might further explain the relationship between consciousness and the subconscious, with consciousness as a further focusing and magnification of multiple internal impulses. As well as herd and swarm consciousness behaviors, with individuals as parts of a larger whole, and not just the whole as a sum of its parts. Much as the focus of the individual concentrates its different elements.
I realize this is a potentially controversial argument to make, but it would be interesting to hear logical counter-arguments.

  29. Brian Key

Good piece, although it would be nice to know how it does it. What is that experience we have, of pain, or of thought? It is a signal finalization in a brain by synchronous timings – probably from a concurrent flow of ionic current (literally relying on regularity, a split current falling every which way, but always one way, like reset dominoes!). Use the domino analogy for a spreading flow, one way, but many ways. Within that flow you distinguish thalamus and neocortex (including SI), and other regions, but you need to generalize now. You need to separate between hormones as transmitters in synaptic bulbs, and nerve lines of current spreading every which way. Every neuron would be equally hormonal, to PRESERVE, and ionic, to FLOW in confined channels synchronously, to spread preserved functional events (interfaces with a world) captured and retained by receptors all the way to finalization. Using that division, for the experience by neurons themselves (as an electrostatic event between hormonal insulation and nerve current) and its process, you can partition a brain: central regions tend to hormonal preservation (including decompression of hormones from a brain to accompany outputs), white matter to flow. Whether a fish feels pain, I will answer using Daniel Tippens’ comment, to demonstrate an error in his analysis, based on Block, in fact “Block’s view”, while explaining the actual process.

Daniel Tippens

    “So, it seems possible that you have pointed out all the requirements for access consciousness of pain, but not phenomenal consciousness. In other words, it is possible that fish experience pain, they just have none of the higher level neural processes required for our kind of robust access consciousness.”

You are on a tangent there based on Block. Phenomenal consciousness does not overflow access consciousness to make us aware of things without knowing it (unable to access it by thoughts or reasoning). Access consciousness (thought & reasoning) is merely the FOCUS of phenomenal consciousness in each moment of awareness. Mind is constructed from feelings, as their integrated and well modulated level of experience. All feelings arise by a concurrent finalization in a brain in diverse cortices, and also in integrated cortices and junctions to spread & modulate. Thoughts are a transparent window in integrated finalizations, as the cream of integration, nothing more, and continually there to guide us. So, it’s correct that Brian Key has drawn a very hard line between thoughts and feelings, as if thoughts constrain or veto feelings entirely, whereas the experience for a fish or a human might be different than “suggested”. A fish might be persistent in feelings, whatever the level of integration it may have to enable thoughts to control them or even “care about them”. Fish might not be well enough neurally integrated to give a damn in thoughts about what they feel in some body parts, but they might feel nonetheless. How would experiments reveal a fundamental “lacking” in the fish to give a damn in thoughts about its own feelings? It comes down to complexity in architecture to network and boost “attention” sufficiently, like some humans. Brian should not step too lightly into the shoes of a fish.

  30. Brian Key

So, how do we classify pain? In my brief coverage above, I classify it as a feeling. In some other comments, it is an adverse feeling to be avoided. Whether adverse or not, I say it is a feeling. I also say vision is a feeling of a different kind that might have pain with it, and hearing likewise, and touch; every experience of awareness would be a feeling. It is something more than a flat line! Feelings integrate across cortices according to a partitioning that follows a regime of hormonal preservation in reserves, and nerve networking along confined synchronous channels. Your coverage crosses that partitioning. The purpose of partitioning is NOT to derogate from the capacity of each SINGLE NEURON to reach synchronous finalization. In my model, any and all neurons might finalize continually together, as dominoes fall and reset and fall again, for experiences of “feelings” at all functional sites, and their integration across cortices for complex experiences of thought constructed from those very feelings, continually.
Consequently, I have to resist your view of cortices being so relevant to a diverse experience across SINGLE NEURONS firing together. It could be a problem with cortical tampering – I think so – which is prone to ungrounded analyses, or assumptions about underlying causes. Disable something and feelings persist or change, and so on. What if they are just junctions and regions for different components of the experience, which is nevertheless always one of feelings? The correct level, of single neurons firing together, does not have any regions or junctions, because all is neurons! Tampering would reveal partial and sometimes drastic effects upon INTEGRATION, rather than upon the feelings themselves created by the firings you see in every animal brain! I rate feelings and memory as omnipresent, as basic neuron function – from electrostatics for different feelings, and from strengthening & extending for memory – both fundamental & automatic to neurons. Nice try, though, Brian Key: you have challenged a very basic belief we have about animals that might be well founded (the belief, that is).

  31. Hi Brian,

This may not be the platform for a lengthy discussion of the phenomenal consciousness vs access consciousness debate, so I understand the brevity of your response. However, I was hoping you could give me some general reasons (even if they are just intuitions) for why you hold the “sparse” view of consciousness (that what you are conscious of is constituted solely by what you have access to).

    You simply said, “In this second post I wish to address the impression I am getting that some people believe that it is possible to experience pain even if you are not aware of it. If you don’t feel it (not aware of it) then it is simply non-conscious and not a feeling (i.e. not pain).”

My concern is that here you have simply conflated feeling something (phenomenal consciousness) with being aware of something (I take it that here you meant access consciousness). So, you seem to help yourself to the position that you can’t be undergoing a conscious experience if you are not attending to or accessing that experience, by simply defining what it means to be conscious of something in a way conducive to your own view. But this is the very thing under (very controversial, as indicated by the literature I cited) dispute.

As I noted earlier (with citations and descriptions), there is a large amount of (admittedly controversial) empirical support for the rich view (Block has built much of his career on marshalling vast amounts of empirical support). So, could you provide further general reasons for holding that you can’t experience something without attending to it?

I just find this very interesting, because in almost every panel discussion and conference I have attended on consciousness and attention, when Block raises his arguments for the rich view of consciousness (grounded in empirical and theoretical considerations), most people who disagree with him end up doing so, for the most part, either by begging the question and saying that to be conscious of something just is to have access to it, or by appealing to intuition (that it just seems obvious that this is the case). Unfortunately, neither reason seems good to me, since pre-theoretic terms typically end up needing to be refined in light of data (so simply assuming the folk/common-sense definition seems out of place), and intuition is neither the sole nor the primary determinant of which theory is right (especially since intuitions on this matter go both ways).

    John Smith,

    “You are on a tangent there based on Block. Phenomenal consciousness does not overflow access consciousness to make us aware of things without knowing it (unable to access it by thoughts or reasoning). Access consciousness (thought & reasoning) is merely the FOCUS of phenomenal consciousness in each moment of awareness. Mind is constructed from feelings, as their integrated and well modulated level of experience. All feelings arise by a concurrent finalization in a brain in diverse cortices, and also in integrated cortices and junctions to spread & modulate. Thoughts are a transparent window in integrated finalizations, as the cream of integration, nothing more, and continually there to guide us.”

Honestly, I am trying to be charitable, but all I see here is a string of unsupported assertions and no argument. So it sounds more like you are just stating your views as opposed to making reasoned arguments. Additionally, it actually sounds like you support the rich view of consciousness (phenomenal consciousness overflows access consciousness), since you say that access consciousness is the focus of phenomenal consciousness (though perhaps I just don’t know what you mean by this). It sounds like you think access consciousness is just what we focus on within the set of phenomenally conscious things, implying that we pick out some things that we are phenomenally conscious of to think about/access, but not all of them. This would be the rich view of consciousness.

Anyway, I am out of comments; looking forward to hearing more about this. Thanks again, Brian.

  32. SocraticGadfly,

“Daniel Tippens I see no reason to distinguish the way Block does. Access consciousness can and will be used for subjective ends, for one thing. Or, via Wiki, there’s William Lycan, arguing at least eight distinct types of consciousness can be identified. (That said, I think all his types overlap and are thus subtypes.)”

Your objection is extremely unclear. First, it’s very unclear what “subjective ends” means, hence unclear why this is a problem for Ned Block. However, if you just intended to complain that the term “access consciousness” can be used by anyone, for any personal purpose, to characterize any aspect of consciousness, then I do not think you have shown it. Ned Block’s term “access consciousness” has been around for a couple of decades, so if you are correct we should already see people using the term for “subjective ends”; apparently, you haven’t provided any example. Second, it’s very unclear how pointing to William Lycan’s taxonomy of consciousness constitutes a good argument against Ned Block’s distinction between phenomenal and access consciousness. Merely pointing out an alternative way to taxonomize consciousness only shows that there is an alternative; it remains an open question which one is the best way to categorize consciousness.

    Brian Key,
    You said:
    “If you don’t feel it (not aware of it) then it is simply non-conscious and not a feeling (i.e. not pain). However, non-conscious neural activity is still very capable of driving behaviour. There is considerable research in human psychology (e.g. subliminal processing) devoted to this concept. The important point is that if you are not aware of it, then you can’t feel it ”

I think someone like Daniel Tippens would say that you are begging the question. What you said implies that conscious experience is awareness of some mental state, but that merely restates what you already said earlier (something that is precisely at dispute). Furthermore, I think Tippens might say that you have defined consciousness in a way that conflates access and phenomenal consciousness. If I recall correctly, you said feeling (phenomenal consciousness) is equivalent to awareness of some mental state (access consciousness), but that is precisely what Tippens is disputing. He disagrees with your definition of conscious experience precisely because you do not distinguish phenomenal consciousness from access consciousness. Your only reply was really just a restatement of your views.

33. Pain in fish is the in-depth subject of this post. Though totally unqualified to comment on the “science”, I venture to suggest that if you substitute “distressful experience” for the concept of “pain”, perhaps Brian’s thoroughly argued thesis and conclusion would be less reliable.

    If one accepts the Theory of Evolution by natural selection then competition is the name of the game. It entails the belief that all Life is inherently programmed to strenuously stay alive and reproduce itself and its own species. Some complex and mobile forms become predators and prey on others. They both have developed senses and many defence mechanisms to help them in their fight for existence against each other.

One of these defence mechanisms that I know by subjective experience is pain, which has the purpose of reducing greater harm. The responses I have when I feel pain I also see happening in other humans. I cannot KNOW but can only ASSUME that their subjective experiences are at least very similar to, and as unpleasant as, my own. To be humane (to act morally?), I feel that I should not inflict, or allow to be inflicted, this unpleasant distressful experience on another human for *my* pleasure or sport.

I also observe similar distress occurring in other life forms as they struggle to stay alive. Thus, still being humane, I also feel I should not, for pleasure or sport, inflict or have inflicted on them what also looks like an unpleasant distressful experience. Even to inflict distress gratuitously on life we make use of predatorily (as our beasts of burden, as our food, or for other reasons such as vivisection) feels abhorrent.

Such feelings as these could be absolved by being assured that, when other life forms are being hurt or frightened, they do not suffer pain, or not the *same* pain, as we humans do. The underlying “headline” issue [I can just see an angling magazine screamer: “FISH DON’T FEEL PAIN: OFFICIAL”] (an issue of moral philosophy rather than of neurobiological interest) is that blood-sportsmen wish to have some proof that animals, birds, or fish do not experience pain, or experience it to a much lesser degree, so that we humans can morally-comfortably chase them, injure them, kill them, or just catch them and return them to the “wild”, purely for our pleasure/sport. The “big-game sport” fish that is being “played” on the end of a line for 1–2 hours is not itself “playing”. That “pain-less” fish is, with much expense of energy at least, striving for its very existence whilst giving the angler his exercise and amusement.

34. I would like to raise the possibility of a dichotomy between consciousness and thought, i.e. what one is conscious of, much as there is a dichotomy between energy and the forms it manifests. As the only way to actually describe energy is by the forms so manifested, so too with consciousness: it is the forms this sense manifests which seem the more definable and thus reductionistically evident.
So just as energy can go from sunlight, to plant cellulose, to wood in a stove, to radiant heat and still, on some level, be the same continuous process, so too can one’s consciousness be involved in very high-order abstract thinking and, while doing so, not be watching one’s direction and seriously stub one’s toe; the same sense of consciousness goes from one form to another, but is still the same sense of consciousness.
As an evolutionary process, it would seem higher-order consciousness evolved from multitudes of feedback loops with one’s context. Given the natural inclination, for survival, to prefer order over disorder, these higher-order functions gravitate to such concepts as the mathematical universe hypothesis. Yet if one were to actually follow the processes of intellectual creation and consolidation, much of the detail gets lost, and we end up with these highly concentrated distillations, which are necessarily isolated from real-world experiences, because much of that larger content had to be distilled away to extract these forms.
The act of thinking is to make necessary distinctions and judgements, so it would be natural to define consciousness in terms of its distinct forms, i.e. thoughts, but that is a consequence of isolation and concentration, rather than of properly taking all aspects into account.
Not that fish have complex feedback loops, but the ones they have developed are necessarily highly ingrained over the course of eons, so much of what they do would seem automatic; yet the sense of consciousness would still exist as a state of present function. Determination is an effect of causation, not the other way around.

35. Brian writes “The evidence best supports the idea that it doesn’t feel like anything to be a fish. They are non-conscious animals that survive without feeling; they just do it.” The real topic, then, is consciousness, which refers to vastly more than pain. Pain is such a human concern, but consciousness also includes seeing, hearing, smelling, memory, learning, etc.

The thesis is that fish are automatons; they are just there, surviving, filling the ocean. The more an animal resembles us in structure and function, the more likely it is that there is consciousness. It seems, then, that primates are conscious even though they do not think like we do – according to the evidence they can think in single words only. Socratic says dogs are likely conscious, and Brian agrees, but wolves and antelopes apparently are not. Was the lioness conscious while pursuing her prey? (Cats have more neurons than dogs.) Are dolphins conscious? Their brains are more complex than ours on external examination; they are second or third in the hierarchy of neuron number.

The definition of consciousness used by Brian is largely anthropomorphic, but it breaks down when confronted with all the evidence. As I previously alluded to, birds have very different brains than we do (I guess they are evolutionarily more related to fish, reptiles and dinosaurs), yet they have well-demonstrated abilities in memory and associative learning. According to the OP thesis, birds are non-conscious automatons, which clearly does not appear to be the case. The basic problem for the anthropocentric definition of consciousness is that it cannot deal with the very complex behaviors of ‘lower’ animals.

    There is a simpler and more explanatory definition of consciousness that apparently is adopted by most psychologists working in the field of human consciousness:
    “Theorists across a number of disciplines distinguish between two forms of consciousness. The first, phenomenal awareness, describes feelings, sensations, and orienting to the present moment. It is essentially the way living things with brains obtain information from the environment. The general view is that this lower level of consciousness is much older in phylogeny and is present in many if not all animals. The second form of consciousness involves the ability to reason, reflect on one’s experiences, and have a sense of self, especially one that extends beyond the current moment. Researchers have argued that this type of consciousness is unique to humans.” (Baumeister and Masicampo, 2010.)

    My own working definition of consciousness is even more basic: all life exhibits consciousness through its ability to sense and adapt to the environment. It is a fundamental property of life. Human consciousness appeared only ~200,000 years ago and has gotten us to this point. An understanding of the strengths and weaknesses of our brain seems to be an essential precondition for solving our many problems.

  36. Brian: The flight analogy is misleading. Flight differs from pain in a crucial way: it is observable. We can examine any given individual and determine directly whether it is capable of flight (or powered flight, or whatever). This observability is what enables science to proceed in its usual way, by generating and testing hypotheses.

    But if pain is a type of experience, then it can only be directly observed in one individual: myself. Everything else is speculative — perhaps more plausible or less plausible, but still just speculation. There is no way of testing a hypothesis. Thus there is no way of doing science. Really there is not even any way of doing good philosophy, because generalizing from an n of 1 is absurd. (I’m not saying anything original here; this is a standard argument in the philosophy of mind.)

37. Maybe we should think about pain as a communal concept? If mirror neuron theories are correct, it would seem that perhaps pain isn’t a personal thing at all for, say, schools of herring or sardine, the way it is for humans, but a communal thing?

  38. It is interesting to know the neurobiological correlates of the experience of pain in higher animals, and that fish don’t have them. This confirms the immemorial rationale of fish-eating vegetarians. Indeed fish, as associated with Christ, have been considered sacrificial in nature, and given in abundance to humanity as an ethical (and even, on Fridays and during Lent, a consecrated) source of nutrition. Unlike hunters, fishermen are not considered cruel or primitive.

    Surely there is much more to be said about the relationship of pain to consciousness beyond the fact that fish have little of either. The degree to which a stimulus is judged to be painful is often the degree to which it alters, interferes with or even obliterates consciousness.

    The issue of animal pain, and consciousness, is of course an old and important point at which daily life and philosophy have met. “Followers of the Cartesian philosophy could be identified because they kicked their dogs.”

    Re: Jargon. As a layperson I had no trouble with anything in your article, not compared to the quantum mechanics, logic and maths to be found regularly on SciSal.

39. Daniel, and somewhat Philonous: To be more precise, I don’t think Block’s separation is of zero worth. But both his twofold and Lycan’s eightfold (is he a Buddhist?) divisions seem better understood as subtypes, not distinct types. Sorry, I don’t see a difference between Block’s ideas and Lycan’s, and I saw enough at the Wikipedia link about both to stand by it.

    In other words, consciousness is more unitary than intelligence. These proposed divisions do not, IMO, rise to the level of the difference between, say, analytical intelligence and kinesthetic intelligence, per the research and writing on intelligence types. I just don’t see that degree of differentiation.

    As for Philonous’ other comment, I read Block as trying to make access consciousness sound more objective than phenomenal consciousness, and there’s no such guarantee of that.

    Here’s another way to rephrase this. We talk about semantic vs. episodic memory, but, memory research, at least to me, seems to look at both as being more subtypes of “memory” rather than radically distinct.

    As for examples? Any introspection is going to be subjective; this is the whole issue of heterophenomenology and why people like Dennett raise questions about it. Do you really want one specific example of my most recent introspection?

    As for “appeals” to it? Daniel notes that people do that without agreeing with it. Or, like me, they may reference it without agreeing with the degree of separation.

    But, my objection is now stronger, reading through more comments.

    Per Liam, if a creature is unable to articulate a phenomenal perception, why should we apply “consciousness” to the purely phenomenal behavior?

There. THAT is my objection. Thanks, Liam, for giving me the brain stimulation. Calling the purely phenomenal “conscious”, without a way for that creature to consciously describe it, is an overdescription. And, if a creature has what some call “access consciousness,” there’s no need to modify “consciousness” with “access.”

    In other words, Occam’s Razor, and Ned Block needs to give himself a shave.

Brodix: Or, there’s a third possibility — these things complexly bootstrap each other, like “good” emergent properties often can and will do. You mention feedback loops, in distinguishing consciousness from thought, in a follow-up comment.

Brian: Thanks for the explanation on cartilaginous fishes, and for the words otherwise.

All: I’m leery of a word like “automaton” because of the connotative baggage that accompanies its denotative meaning.

    Otherwise, wolves can be conscious to some degree without particular behaviors of theirs being conscious, appearances to the contrary.

Astrodreamer: A physicist’s shop talk is a biologist’s jargon, and vice versa, eh? It will be that way TFN.

40. Great essay, very informative and interesting. Fish have mirror-response neural mechanisms which allow them to swim closely behind or alongside other fish, and which allow for the normative fish behavior of moving in schools. The ‘pain’ flapping response may be another form of body communication, one which signals the presence of a predator for either the fight or flight response.

The same complex neural mechanisms and responses as in pain also apply to pleasure behavior, as in mating, which is essentially a dance between mates, or another form of body communication.

    The most refined form of pain/pleasure, or essentially of feeling, may be the form of communication via feeling instantiated in the auditory cortices, namely language, in all species.

  41. I read this article for its information, and it is information rich, for which I thank Dr. Key.

    About the implications drawn from the research, concerning the nature of “pain” and “consciousness,” I am not entirely happy with them. While we have come to deny, rightly, the anthropomorphic descriptions of what we can know about the behavior of other animals, we still seem to be engaging in anthropocentric conceptualizations of what can be described.

    All animal forms struggle to live. In that struggle, they engage in efforts to escape, attack, or contain threats or actual damage to their well-being. Does it matter whether they do this “consciously” or not? Does it matter whether what they are responding to can be called “pain?”

    Certainly the well-being of a bodily damaged animal suffers; that is, it is reduced in measure and quality. Perhaps that’s the only definition of suffering we need.

    Right now the arthritis in my knees is acting up. Meditation, distraction, and aspirin help reduce the claim it has on my attention; but the sensation is still there, and it still reduces my ability to walk, whether or not I cry “ouch!” when I attempt to do so.

    In the Wiki article on “Pain in animals,” the caption under a diagram notes: “Reflex arc of a dog with a pin in her paw. Note there is no communication to the brain, but the paw is withdrawn by nervous impulses generated by the spinal cord. There is no conscious interpretation of the stimulus by the dog.” * So what is going on when my dog whines, whimpers, or howls? What is it signifying? How should I respond to it?

    I think Dr. Hardcastle, mogguy, brodix, and Liam Ubert are suggesting interesting alternative ways to see these issues and describe them.

    In some Eastern philosophies, sentience is not identified with (something like human) consciousness, but with the evident interaction between organism and environment in the organism’s struggle to survive. Impedance or cessation of the efforts of this struggle is enough to define suffering.

    When a fish is pegged to the dock with a knife, its squirming surely expresses some struggle to go on living. It may not feel pain, but it suffers nonetheless.

    (Whether this is avoidable or not, is an entirely different question.)
    —–
    * http://en.wikipedia.org/wiki/Pain_in_animals

  42. Valerie Hardcastle asks
    Should we care whether fish can feel pain?

    I think this is the most important question: should we care? Not: does the fish feel pain?

    I argue that we should care, regardless of whether it feels pain, because:

    1. We perceive what seems to be suffering. Yes, I know that is an anthropomorphism, but that is precisely the point. If we perceive what seems to be suffering, we should respect these intuitions and attend to them because they tell us something important about ourselves. We become better and nobler people in the process, with enhanced sensitivity and respect. On the other hand, if we persuade ourselves that it does not matter, we impoverish our emotional sensibility, becoming coarser and deadened to the world around us.

    2. We are integrated with the natural world and this requires from us a deep sensitivity to it and a respect for it. Tibetan Buddhists are quite remarkable in this way[1]. Where I live we are faced with a sharply declining rhino population. Our elephant population is under threat. Leopards may face extinction. Does any of this matter? Ian Player, in Environmental Intelligence, argues that it matters deeply. Nature is our home[2] and we are inextricably tied to it, not only by necessity, but emotionally. We need it to be complete, healthy, and well-adapted people. This requires from us a sensitivity to nature, an awareness of the suffering and damage we may cause. A rational, western, cost-benefit analysis consistently undervalues nature. We restore this value by listening to our intuitions about nature and attending to them.

    3. We do not really know what takes place in the fish’s brain. But we do know it suffers tissue damage, causing a reaction that evokes in us the feeling that it suffers, and that alone should be sufficient for us to care.

    4. We are jeopardising our own future. Our insensitivity to the effects of our actions on nature is having a cumulative snowball effect that is degrading nature, perhaps irreparably. Halting this and reversing it starts with the simple emotion of caring, in a thousand small ways. But caring has no boundaries unless you can live with blatant hypocrisy in your own self. Few people can manage that.

    5. We have become deadened by consumerism and narcissism. The self has become the ultimate object of satisfaction and worship. The selfie-stick has replaced the cross as the symbol of purpose and meaning. With this set of values we are sleepwalking into destruction. We can reverse this by beginning to care about nature, in the large and in the particular, expanding our circle of compassion beyond the self.

    The Dalai Lama expresses these sentiments far better than I could[3]. Pope Francis says something similar[4].

    [1] Heinrich Harrer, Seven Years in Tibet, tells how earthworms were removed before building could take place.
    [2] Peter Smith, Why Climb Mountains, http://bit.ly/1ABOm2U
    [3] Dalai Lama, Universal Responsibility and the Environment, http://bit.ly/1yR7tQR
    [4] Pope Francis, Radical Environmentalism, http://theatln.tc/1twb5Ku

  43. Daniel Tippens

    “Honestly I am trying to be charitable, but all I see here are a string of unsupported assertions and no argument. So it sounds more like you are just stating your views as opposed to making reasoned arguments. Additionally it actually sounds like you actually support the rich view of consciousness (phenomenal consciousness overflows access consciousness) since you say that access consciousness is the focus of phenomenal consciousness (perhaps though I just don’t know what you mean by this). It sounds like you think access consciousness is just what we focus on in the set of phenomenally conscious things, implying that we pick out some things that we are phenomenally conscious of to think about/ access but not all of them. This would be the rich view of consciousness.”

    Your support of Block is opposed by my argument, and you give none in reply. Don’t make the old “ambit” argument (“not even wrong, therefore not worth considering at all”). In fact the mistake in reasoning is yours, not mine, so cut the ad hominems! If you cannot refute a simple reversal of Block, one that makes greater sense of a process building to thoughts, then don’t dismiss it. Feelings would not descend or “overflow” from thoughts; complex thoughts would build from simpler feelings. The argument is a basic, sensible reversal of Block, entirely consistent with known results, and it explains currently unexplained “thoughts”. That’s how to make progress, so don’t just block it like a stone wall without argument.

    You don’t need to be charitable to me. Argument requires effort, not copying swathes of other people’s theories. Consider the process building from basic to complex as it finalizes experiences, not the complex overflowing to the basic: it’s quite obvious. If you don’t like that proposition and don’t want to do any follow-up reading here (http://1drv.ms/1tnKM6f), that’s not my problem. Progress is not made by repetition of contentious theories, but by challenging them. Just answer; don’t excuse yourself on an ambit basis.

    “My concern is that here you have simply conflated feeling something (phenomenal consciousness) with being aware of something (I take it that here you meant access consciousness). So, you seem to help yourself to the position that you can’t be undergoing a conscious experience if you are not attending to or accessing that experience by simply defining what it means to be conscious of something in a conducive way for yourself.”

    You haven’t explained how “feelings” arise and what they are, which you would need to do to make that claim against Brian Key. Read my explanation of how feelings are constructed, and counter it if you can. Argument is not built on scholarship and quoting other people who hold matching contentious views; it is built by filling the gaps with new arguments and new ideas. My ideas are consistent with facts, logic, and process, which is all that an argument requires. If you wish to falsify them, find “facts” to do so, but you might find that difficult, as I adjust the theories with requisite care to conform to whatever “facts” they might rely upon.

  44. @Marko

    “I think that is the main point of criticism here. You need to define the concept of pain in a hardware-independent way, and then argue that human hardware enables them to experience it, while fish hardware does not. Defining ‘pain’ via the presence of appropriate hardware is the very definition of ‘anthropocentric’.”

    That’s a really interesting point, and the flying analogy brings it out nicely. I’m not sure there’s a need to define pain in a hardware-independent way if we’re talking about fish rather than, say, aliens. The basic similarity in hardware is precisely what licenses the inference with fish, and what would revoke that license for androids or whatever.

    So the argument is: Given an extremely similar hardware implementation, system A needs pieces X, Y and Z to exhibit P. System B has only X, so it’s unlikely that P is occurring.
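
    Schematically, and hedging heavily (the notation below is my own, with an invented class \mathcal{H} standing in for “extremely similar hardware”, not anything stated in this thread):

    \[
    \forall s \in \mathcal{H}:\; P_s \rightarrow (X_s \wedge Y_s \wedge Z_s), \qquad A, B \in \mathcal{H}, \quad \neg Y_B \wedge \neg Z_B \;\vdash\; \neg P_B
    \]

    The conclusion is deductive only if the first premise (that X, Y, and Z are necessary for P across all of \mathcal{H}) is certain; since it is merely well supported, the honest conclusion is that P in B is unlikely, not impossible.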

    Back to the bird analogy. When the alien analyzes the first animal (the bird), it’s without reference to any other animal. When the alien analyzes subsequent animals, it’s specifically with reference to the bird. So it’s “ornithocentric” in the same way as you’re saying we’re anthropocentric in our analysis of the fish.

    So there are two main questions: 1) whether we can be sure we have identified the hardware features that are essential to experiencing pain (or whatever); and 2) whether we have established enough essential hardware similarity to merit the inference.

    Like

  45. They do move like fish, and like fish they are a bit androgynous.

    For animals, the tail, along with other body movements, is used for communication, especially during mating.

  46. I really like how Marko is pushing my analogy about flying. In this analogy, I started with a single species (bird) and worked out what was needed for that species to fly (while trying to keep that definition as broad as possible without going to extremes of evolutionary possibilities; I thought my bullet was going to extremes, but Marko came up with a ballooning species). Nonetheless, Marko makes a point, or draws a line in the sand, that many people fall back to: that fish experience “fish pain”, not human or “human-like” pain. This position is rather difficult to challenge when the definition of “fish pain” can be stretched, to the point of absurdity, to correlate with neural activity anywhere in the fish nervous system. However, I contend here and elsewhere that a fish is not conscious (i.e. aware) of noxious stimuli. If it is not conscious, then it can neither feel “fish pain” nor feel “human-like” pain.
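
    That last step has a simple logical shape. As a hedged formalization (the predicates FeelsPain and Aware are invented labels for this sketch, not terms from the article):

    \[
    \forall x\, (\mathrm{FeelsPain}(x) \rightarrow \mathrm{Aware}(x)), \qquad \neg\,\mathrm{Aware}(\mathit{fish}) \;\vdash\; \neg\,\mathrm{FeelsPain}(\mathit{fish})
    \]

    Modus tollens goes through for whatever species-relative flavor of pain is substituted, which is why the “fish pain” move does not escape the argument.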
