Could a computer ever be conscious? I think so, at least in principle.
Scientia Salon has seen a number of very interesting discussions on this theme which unfortunately have failed to shift anybody’s position [1]. That much is to be expected. The problem is that the two sides seem to be talking two different languages, each appearing obtuse, evasive or disingenuous to the other (although it has to be said, the conversation was very civil). I think the problem is that the two camps have radically different intuitions, so that what seems obvious to one side is anything but to the other. It’s important to keep this in mind and to understand that just because the other side doesn’t follow the unassailable logic of your argument doesn’t mean that they’re in denial, ideologically prejudiced or plain dumb.
The more formal critiques of computational consciousness include those such as Searle’s Chinese Room [2] and the Lucas-Penrose argument [3]. While these are certainly very interesting ideas to discuss, it seems to me that such discussion all too often ends in frustration as the debate is undermined by fundamental differences in intuition.
And so my goal in this article is not to discuss any of the more prominent arguments but to explore our conflicting intuitions. I’m not hoping to persuade anybody that computers can be conscious but rather to explain as well as I can my reasons for intuiting that they can, as well as my interpretation of the intuitions that lead others to skepticism. I am hoping to show that mine is at least a coherent position, and in particular that it is not as obviously wrong as it may appear to some.
I also want to make clear that I make no claims about the feasibility or attainability of general artificial intelligence. I am not one of those who think the Singularity is around the corner — my concern is only to explore one view of what consciousness is.
Let me start with some empirical assumptions which I think are probably true, but admit could be false.
Empirical assumption 1: I assume naturalism. If your objection to computationalism comes from a belief that you have a supernatural soul anchored to your brain, this discussion is simply not for you.
Empirical assumption 2: The laws of physics are computable, that is, any physical process can be simulated to any desired degree of precision by a computer [4].
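To make ‘any desired degree of precision’ concrete, here is a toy sketch in Python (the simulated system, a trivial oscillator, is my own invented stand-in for ‘any physical process’): shrinking the timestep drives the error down without limit, at least in principle.

    import math

    def simulate(dt, t_end=1.0):
        # Euler-integrate a unit harmonic oscillator x'' = -x,
        # starting from x = 1, v = 0.
        x, v = 1.0, 0.0
        for _ in range(round(t_end / dt)):
            x, v = x + v * dt, v - x * dt
        return x

    # The exact solution is x(t) = cos(t); smaller steps, smaller error.
    for dt in (0.1, 0.01, 0.001):
        print(dt, abs(simulate(dt) - math.cos(1.0)))

The assumption is only that, as far as we know, every physical process admits this kind of arbitrarily refinable approximation.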
It should soon become clear that these assumptions entail that it ought to be possible in principle to make a computer with the outward appearance of intelligence. If we take a strictly behaviouristic interpretation of intelligence (as I will from now on), it therefore seems to be relatively uncontroversial that computers can be intelligent, though this is certainly distinct from the stronger claim that they can be conscious.
The reason I am a computationalist has to do with a very straightforward thought experiment. The central idea involves conducting a simulation of a person, and so before I get too deep into it I need to address an objection that has often been raised (in this corner of the Internet at least), and this is the observation that a simulation of X is not X.
Let me concede right off the bat that a simulation of X is often not X. This is particularly clear if X is a substance. A simulation of photosynthesis may produce simulated sugar, but simulated sugar cannot sweeten your coffee! However, I suggest that sugar is not a particularly good analogy for consciousness because consciousness is clearly not a substance. We can’t after all have a test tube full of consciousness.
It would seem instead that consciousness must be a property of some kind. It is certainly true that physical properties are not usually exhibited by simulations. A simulation of a waterfall is not wet, a simulation of a fire is not hot, and a virtual black hole is not going to spaghettify [5] you any time soon. However I think that there are some properties which are not physical in this way, and these may be preserved in virtualisation. Orderliness, complexity, elegance and even intelligent, intentional behavior can be just as evident in simulations as they are in physical things. I propose that such properties be called abstract properties.
At this point, it would seem that consciousness could be either a physical property (like temperature) or an abstract property (like complexity). Indeed this seems to be one of the major points of departure between computationalists and anti-computationalists. I may not be able to persuade you that consciousness is an abstract property, but it would seem to me that the possibility is worthy of some consideration. If there are exceptions to the maxim that a simulation of X is not X, then consciousness could be one of them.
It seems obvious that we need to distinguish between physical properties and abstract properties, so let’s try to elaborate on that distinction. What jumps out at me straight away is that physical properties seem to involve physical forces and materials, while abstract properties seem to have more to do with pattern or form. To me that suggests that consciousness is the latter, but let’s see if we can delve deeper.
If we allow that we can distinguish between the physical and the virtual [6], it seems that physical properties are those which can directly affect physical objects. Such physical detectors include our own senses as well as devices such as thermometers and Geiger counters. In contrast, abstract properties seem to be those which cannot directly interact with physical objects. If they interact with matter at all, it is only when they are perceived by a mind or detected by a computer program. Since there is no such thing as a consciousness detector, and very little reason to think that any such device can ever be built, consciousness does not feel like a physical property to me. Indeed, if consciousness is detected directly by anything at all it is only by the mind that is conscious. For me, this very strongly suggests that consciousness is an abstract property.
Whether abstract or physical, consciousness is arguably unlike all the other properties so far discussed. Indeed it may be in a category all of its own, and this is because it is uniquely subjective. As far as we know, the only observer that has direct evidence of the consciousness of any entity is that very same entity. Whether or not consciousness is like complexity, orderliness or intelligence, there does seem to be enough reason to at least consider the suggestion that a computational process might in fact be conscious just as we are, if only because it is not obviously analogous to physical properties such as temperature or mass.
I hope that I have at least earned the right to ask you to entertain for a while the idea of a simulated person. If not, I implore you to bear with me anyway!
If naturalism is true, and if the laws of physics are computable, then it should be possible to simulate any physical system to any desired precision. Let’s suppose that the physical system we want to simulate is the room in which you are currently sitting [7]. The simulation is set up with the state of every particle in the room including those in your body [8].
Assuming that you accept for the sake of argument this rather fantastic, unfeasible premise, this virtual room includes within it a structure which corresponds to your body, and this body contains a virtual brain. The simulation may not be an absolutely perfect recreation (indeed quantum mechanics would seem to preclude that possibility), but, virtual/physical distinctions aside, it should be much more like you than your twin would be. Nothing ‘Virtual You’ does will be out of character for you. Anything you can do physically it can do virtually and vice versa. Everything you know it claims to know, everything you like it claims to like and so on.
But I have only established that Virtual You behaves like physical you. So far, I don’t think this is particularly controversial. The question is whether it has an inner mental life. This, again, is where intuitions divide. Computationalists think that it is abundantly clear that it must be conscious, while the opposite claim is just as evident to anti-computationalists. This may be an impasse, but I can at least outline some reasons for preferring computationalism.
Rather than asking whether it can be conscious, let’s first ask what seems to me to be a somewhat simpler question: can it believe? On this, opinions divide largely along the same party lines. Perhaps I cannot convince you that it believes, but it seems clear to me that it must at the very least have some kind of pseudo-belief: a virtual, functional kind of belief we can attribute to it whether or not it actually believes, such that Virtual You pseudo-believes the things that you actually believe. Let’s adopt the convention of referring to this kind of pseudo-belief as ‘belief*’ in order to be clear that I am not merely stipulating a definition of belief to suit my ends. I will continue to use this asterisk convention [9] to distinguish functional, objectively identifiable concepts from the intuitive, subjective kind which apply only to conscious minds.
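Before going further, here is one crude way to picture a belief* as an objectively identifiable, functional state. This is a toy Python sketch of my own, with invented fields and an invented update rule; it makes no claim about how brains actually store anything.

    class BeliefStar:
        # A toy belief*: a representation* of a proposition plus a
        # degree of confidence, updated as new information arrives.
        def __init__(self, proposition, credence=0.5):
            self.proposition = proposition  # what the state refers* to
            self.credence = credence

        def update(self, evidence_supports, weight=0.2):
            # Modified appropriately as the believer* learns: the
            # functional role a belief is supposed to play.
            target = 1.0 if evidence_supports else 0.0
            self.credence += weight * (target - self.credence)

    b = BeliefStar("the coffee is sweet")
    b.update(evidence_supports=True)
    print(b.proposition, round(b.credence, 2))  # credence rises toward 1

Everything on this functional side is identifiable from the outside; whether it adds up to belief proper is exactly what is in dispute.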
A believer* speaks and behaves as if it believes a certain proposition to be true. Within the brain of the believer* can be found an apparent representation of that proposition which, though virtual, is otherwise just like that we presume must be in your brain. It is debatable whether this is an actual representation – anti-computationalists might suggest that it is not – but it is at least a ‘representation*’ in that it corresponds to objects in the world (it might even be said to ‘refer*’ to them) and is modified appropriately as Virtual You acquires new information about those objects. From a functional perspective, beliefs* do everything that beliefs can do and even have analogous (virtual) biological representations. What, then, is the difference between beliefs* and beliefs?
The difference is not clear to me unless we stipulate that a belief can only take place in a physical biological brain. I imagine the anti-computationalist would offer the argument that beliefs* are formal, syntactic things whereas beliefs have semantics, followed by the assertion that semantics cannot be derived from syntax. This latter claim is an oft-repeated refrain from the opponents of computationalism, but it seems to me to be an open question. If beliefs are no more than beliefs*, then perhaps a formal account of semantics is possible after all.
It might help if I can explain a little why I think beliefs and beliefs* are the same thing.
If beliefs* don’t have true semantics, they at least behave quite like they do, since it would seem that we can identify within the simulation the analogues of references and representations. So, on the one side we have beliefs, references, representations, semantics, understanding and so on, and on the other we have beliefs*, references*, representations*, semantics*, understanding* and so on. Everything that can be said about biological minds can be said about virtual minds* as long as we suffix our terms with ‘*’ as required. We therefore end up with two distinct models of mind which are conceptually indistinguishable apart from the labels we apply and the insistence that one is real while the other is fake. The computationalist intuition is that these two indistinguishable systems are in fact the same, while the anti-computationalist intuition is that they are somehow different.
If computationalism can answer all the other objections against it (which I think it can), then it seems much more parsimonious to conclude that the computationalist intuition is correct. Parsimony is only a rule of thumb, and it can lead us astray, but all else being equal it leads us to truth more often than not. This is why I think computers can believe.
Unfortunately, belief is not enough — far from it. We also need qualia, i.e., the ineffable whatness of sensory experience: the redness of red, the taste and feel of a hot morsel in the mouth, the agony of pain. Intuitions regarding qualia are perhaps the greatest factor in leading so many to regard computationalism as absurd. It is very difficult to imagine that any machine could have real experiences the way we do. Any appearance of consciousness in such a thing is therefore assumed to be an illusion.
However, if Virtual You believes at all, it is clear that Virtual You believes it is experiencing qualia. If you ask it if it feels pain, it will answer in the affirmative. It will claim to recall experiences from your childhood, and it will not notice any difference between the kinds of sensations it claims to feel today and those it remembers from long ago. If you are an anti-computationalist then it will be very difficult if not impossible to persuade Virtual You that it is a simulation, so convinced will it be of its false qualia (or qualia*). With this being the case, I propose that you have no way of being certain that you are not such a simulation yourself, because if you were a simulation then you would still believe you were experiencing the very same qualia as you perceive right now, and in a world where such simulations are possible you have no other evidence and no justification for holding yourself to be real.
Neurology and psychology may yet have a lot of light to shed on qualia. I am certainly not claiming to understand exactly how they work, but conceptually or metaphysically it doesn’t seem to me that there is a real problem here. If you can understand how a simulation could believe itself to be experiencing qualia, then I propose that there is no mystery to explain. If there is no principled way to justify the belief that your brain is more than a sophisticated biological computer, then the possibility remains open that it is just that.
When confronted with such arguments, it seems to me that the anti-computationalist either makes a straightforward appeal to intuition or makes a circular argument of some kind which boils down to the assertion that qualia really are a mysterious phenomenon which cannot be explained away so easily or that semantics are more than semantics*. When computationalists fail to agree, we are often accused of being disingenuous or obtuse.
But computationalists are not being disingenuous or obtuse. We just don’t see the distinction the other camp sees between semantics and semantics* or qualia and qualia*. It is becoming increasingly clear to me that the source of the dispute is not ignorance or stupidity or muddled thinking in either camp, but radically different fundamental intuitions. At least one set of intuitions is wrong, and it’s hard to say which. Either computationalists are missing some mental faculty of perception, or anti-computationalists are experiencing some kind of illusion.
It would seem the way forward is to put as little weight on intuitions as possible. The computationalist account has the advantage of parsimony, dissolving the problem of connecting semantics to syntax and explaining what properties of brains enable consciousness (i.e., logical structure). None of this means that computationalism is correct, but it suggests that it should be taken very seriously unless a fatal flaw can be identified which does not ultimately rest on anti-computationalist intuitions.
The famous arguments against computationalism alluded to earlier may present such fatal flaws. Their proponents certainly think so, while I obviously disagree, but until we understand each other’s intuitions a little better there is perhaps little point in having such discussions at all.
_____
Mark O’Brien is a software developer and amateur philosopher who despite never having achieved anything in the field has an unjustified confidence in his own opinions and sees it as his sacred duty to share them with the world. The world has yet to notice. You might very well think that his pseudonymous alter ego is a regular on Scientia Salon, but he couldn’t possibly comment. He is Irish and lives in Aberdeen, Scotland.
[1] The debate about consciousness was most vigorous on the following articles, but also creeps into other discussions quite frequently: The Turing test doesn’t matter, by Massimo Pigliucci, 12 June 2014; What to do about consciousness, by Mike Trites, 23 April 2014; My philosophy, so far — part II, by Massimo Pigliucci, 22 May 2014.
[2] This oft-discussed thought experiment is part of a family of such that seek to disprove computationalism by positing computational systems which our intuition suggests cannot understand. Other examples include Ned Block’s homunculi-headed robot and China Brain. See the Stanford Encyclopedia of Philosophy on the Chinese Room; here is an animated 60-second short explaining the basic idea.
[3] The Lucas-Penrose argument concludes that human intelligence cannot be reduced to mechanism because mechanisms are constrained by Gödel’s incompleteness theorems to be unable to prove all true statements. The argument fails (in my view) because it assumes without evidence that human beings are not also so limited.
[4] It must be said that the assumption that the laws of physics are computable is doubted by certain anti-computationalists, especially those who endorse the Lucas-Penrose argument.
[5] See Wikipedia if you are unfamiliar with this wonderfully evocative technical term.
[6] As trivial as it may seem to be to distinguish between virtual and physical, it may not be so straightforward if computationalism is true and we happen to live in a simulation!
[7] Such a detailed simulation is, of course, entirely unfeasible. I use it only to establish a point of principle about the nature of consciousness, so I encourage you not to concern yourself too much with practical barriers to implementation, unless of course some physical law makes such a computation physically impossible.
[8] We will also presumably need a crude simulation of the exterior of the room. We don’t want to run out of oxygen or radiate heat away to a vacuum, and the room will need to be supported so that objects within the room are bound to the floor by gravity.
[9] This convention is adapted from one established in the excellent paper: Field, Hartry (1978). Mental representation. Erkenntnis 13 (July):9-61.
This seems to say that if we assume that machines can be conscious then they can be conscious.
Hi DM,
I like your “*” argument, and would agree with it. However, a couple of things:
A *complete* (= functionally equivalent) simulation of sugar would indeed sweeten your coffee. Thus a complete (= functionally equivalent) simulation of photosynthesis would produce simulated sugar that could indeed sweeten your coffee.
Of course if you only simulate *some* aspects of sugar and photosynthesis then you may indeed have simulated sugar that does not sweeten coffee.
I suggest that the only reason we cannot produce a consciousness detector is that we don’t really understand what consciousness is, and thus what we are trying to detect.
Further, aren’t we ourselves fairly good at detecting consciousness in other entities (other humans, cats, dolphins, etc)? (I accept that there is some question begging in that assertion, but it is still true to some degree.)
I agree with you that consciousness is an “abstract” property, and by that I mean the important thing is the particular pattern of physical stuff. I don’t see anything in principle stopping us from detecting particular patterns of physical stuff.
While I agree that there is no real reason why computers could not be intelligent or conscious, the second assumption seems rather bold, and perhaps unnecessarily so. A computer could be conscious in a conscious-computer way even if it turned out to be physically impossible to accurately simulate a human mind in a computer.
At the core of this discussion seems to be the question of what consciousness and qualia actually are. I must admit that I don’t really understand why they are often considered to be such mysterious, complicated concepts, especially by dualist philosophers. Depending on the context, consciousness appears to be nothing more than either perception or self-perception; and qualia are essentially the observation that sensory inputs are differentiated into kinds.
A computer with, say, a scanner and some text recognition software perceives a text, and a computer monitoring a thermometer perceives a temperature. In the first sense, they both have consciousness. Combine visual perception with a robot that, for example, maintains a mental picture of itself in space relative to obstacles for the purposes of navigation, and you have consciousness in the second sense. Unless magic is postulated, everything we humans do would appear to be merely more of the same – more and more detailed sensory perception and more complicated computation.
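To illustrate consciousness in the second sense, here is a toy sketch (all names and numbers invented) of a robot that keeps a mental picture of itself in space and consults it before acting:

    class Robot:
        # A crude self-model: the robot tracks where it believes
        # itself to be, relative to the obstacles it knows about.
        def __init__(self, obstacles):
            self.position = (0, 0)
            self.obstacles = set(obstacles)

        def move(self, dx, dy):
            x, y = self.position
            proposed = (x + dx, y + dy)
            if proposed in self.obstacles:
                return False  # it perceives a fact about itself: blocked
            self.position = proposed
            return True

    r = Robot(obstacles=[(0, 1)])
    print(r.move(0, 1))  # False: its self-model predicts a collision
    print(r.move(1, 0))  # True: it updates its picture of itself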
So yes, I am on board with computationalism in principle. But I still remain highly sceptical that an electronic computer could ever think and perceive and be conscious in the same way as a human even if it could do all those things in its own way, simply because it would have a very different “brain”. Saying “imagine if we could make a perfect simulation of the human brain” is not much of an argument until such a simulation is shown to be possible. The human brain is pretty much the most complicated item we have found in the universe so far, and some things, even seemingly simple things, are just fundamentally impossible to model precisely. As in: really fundamentally impossible for all practical purposes.
I disagree with this. If you have a causally complete local physical simulation, and that simulation includes simulated water, then the blankets in your simulation will be simulated wet after the simulated water hits them. Further, I think *this* is the place to go after the difference in intuition. If you set consciousness aside, people can generally get their heads around the idea that an emergent physical property like the wetness of the blanket isn’t anything more than a manifestation of the physical things happening at a higher level of organization/explanation. The blanket is “wet” *within* the simulation. It exhibits all the physical properties of a blanket in the world with respect to any causal effects it would have on anything else within the simulation. If one can accept *this*, and one can accept that consciousness doesn’t involve anything non-physical, one can start to see how consciousness could occur within a simulation.
Again, I disagree and think this is the wrong way to attack the intuition difference. The whole thing relies upon an absence of dualism. Why introduce another kind of dualism (physical/”not physical”) to explain it?
DM,
“The more formal critiques of computational consciousness include those such as Searle’s Chinese Room [2] and the Lucas-Penrose argument [3]. While these are certainly very interesting ideas to discuss, it seems to me that such discussion all too often ends in frustration as the debate is undermined by fundamental differences in intuition.”
That is a cavalier dismissal of important arguments. I suggest the frustration is occasioned by inability to answer the objections and the debate has nothing to do with differences in intuition but the failure to follow careful reasoning processes. But fair enough, you have made it clear you are not dealing with these arguments and you rather want to advance your own intuitions.
“each appearing obtuse, evasive or disingenuous to the other (although it has to be said, the conversation was very civil). I think the problem is that the two camps have radically different intuitions”
No, one camp is deeply sceptical of the other camp because they have advanced weak arguments that go nowhere. It is not a matter of “radically different intuitions”, it is a matter of failing to advance good arguments. The simple truth of the matter is that consciousness is one of the most difficult problems encountered by science. We hardly understand the subject at all and, for the time being, there is not even a remote prospect that we can reproduce consciousness.
It is better by far to admit our profound ignorance. That will open our minds to a greater range of potential solutions.
Asher, glad to have someone else who spells out that DM’s position is fundamentally dualistic. Your solution is to bite the bullet and say that even simulated water would be wet. Mine is to reject the whole shebang and keep a distinction between the physical and the virtual worlds.
Or rather that there is little reason to assume that they cannot — that it is not easy to articulate the difference between a physical person and a virtual person with respect to consciousness, so perhaps there is no difference.
Hi Asher,
I agree with this, and I made similar points on John S Wilkins’ post on information, where he argued that a simulation of a gravitational body would not gravitationally attract you.
I think both approaches are valid, but to argue that something in the simulation is wet in the fullest sense of the word, we really need to place an observer within the simulation who would perceive it as wet. Unfortunately, this presupposes that computationalism is true and could be seen as circular, which is why I would go the other way.
Because I think it is necessary. I don’t think out-and-out type physicalism is really tenable. As Aravis argued on other threads, currencies exist and are not their atoms, but a pattern in how certain atoms are used and interpreted (if we need to relate them to atoms at all). So, on my view, consciousness is more like a currency than a substance.
Hi labnut,
Not at all. I’ve discussed these at length elsewhere (particularly on my blog). The point of this post is that such discussion tends to bottom out at a different set of intuitions, and that these are worth exploring a bit. I certainly feel like I’ve roundly dismissed all the arguments by Searle and Penrose (elsewhere), but of course that might be because I have the wrong intuitions, which is why it’s important to explain what those are.
And which camp is that? I know which one you mean, of course, but from where I’m sitting your statement could be read the other way entirely. This is the problem. There is a perception of unreasonable intransigence on both sides. Better understanding is required.
But not substance dualism. Dualism per se need not be a dirty word (c.f. mathematical Platonism).
I’m not sure it’s possible to make a functionally complete simulation of sugar without also making a functionally complete simulation of a sugar taster to go with it, from whose perspective it would be functionally complete. Otherwise you’re just making sugar.
And if you simulate the taster too, then you’re assuming the taster can experience qualia, so you’re begging the question.
Sure. Which is why I left it open that it could be detected by a mind or sophisticated information processing. Just not by a basic physical sensor. There is no consciousness particle, for instance.
DM, while I agree that substance dualism is pretty bad, why exactly you don’t think your brand is problematic? And aren’t you in a sense saying that consciousness is in fact “substantially” different from every other activity of the brain?
I’m sympathetic to this view, but this is precisely the view that anti-computationalists believe misses the point entirely. From their perspective, you’re not talking about real perception at all, but some kind of pseudo-perception (or perception*). You’re changing the subject by talking about perception with an interpretation that most people will not recognise.
It’s an argument in principle. I’m not convinced we will ever build a computer that is conscious in the same way as a human either, for the practical reasons you mentioned. I am only convinced that there is no fundamental metaphysical reason why not. This is an argument about what consciousness is, not about whether we will ever succeed in building a conscious computer.
I think the question of whether computers can be conscious is presently unanswerable and that at this point the wrong position is thinking one has any idea either way.
Given that selfhood is a precondition of consciousness – that is, there cannot be consciousness without something, a self, that is conscious – and given that ‘computer’ is vague enough to cover any human artifact (machine) of a kind that might have consciousness, it seems the question of whether computers can be conscious can be stated:
Can humans make a machine that has a self?
And this might be further clarified as:
Can the organic system we call a “self” be realized in a non-organic, human-made machine?
This is an empirical question and presently we do not know enough about selves or the prospects of human machines to have any idea.
Two possibilities that could stand in the way of a machine with a self are physical impossibility and limits of human intelligence. Regarding the former, we can have no idea about physical possibility until we know the mechanism underlying a conscious self.
As we know almost nothing about selves, I think this topic is less philosophy than a subgenre of science fiction.
Hi Massimo,
It might help if you explained why it is. The main problem with substance dualism is that it’s hard to see how mental stuff can interface with physical stuff. This is not a problem for my kind of dualism, which is basically the same as that which distinguishes hardware from software. Microsoft Windows is multiply realizable (and so not really identical to any particular set of electrons whizzing around), but we don’t think it is mysterious that physical events (e.g. mouse clicks) can interact with it. It works because the hardware/software divide is just two different levels of description of the same process. What makes Windows Windows (and I think what makes me me) is something about the pattern of that process, not something about a particular set of atoms.
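Here is a toy rendering of that multiple realizability (invented names, obviously nothing to do with the real Windows): one abstract pattern, a click-driven toggle, realized on two physically different substrates that respond identically to the same events.

    # The abstract pattern: a two-state toggle driven by clicks.
    PROGRAM = {("off", "click"): "on", ("on", "click"): "off"}

    class DictMachine:                  # substrate 1: a lookup table
        def __init__(self):
            self.state = "off"
        def click(self):
            self.state = PROGRAM[(self.state, "click")]

    class ListMachine:                  # substrate 2: an index flip
        STATES = ["off", "on"]
        def __init__(self):
            self.i = 0
        def click(self):
            self.i = 1 - self.i
        @property
        def state(self):
            return self.STATES[self.i]

    a, b = DictMachine(), ListMachine()
    for _ in range(3):
        a.click()
        b.click()
    print(a.state, b.state)  # "on on": same pattern, different atoms

What makes the toggle the toggle is the pattern, not which implementation happens to be running.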
As argued elsewhere, it’s not much different from the way in which currencies or nations or other abstract concepts exist. We don’t necessarily have to go full out Platonist. Would you call Aravis a dualist also for his example that currencies exist even though they don’t reduce to particular physical objects?
I don’t know. You might want to elaborate a little.
The computationalist intuition may be correct because consciousness is the ability to compute or time-differentiate the environment. The non-computationalist or the “sentientist” has a hard time accepting the intuition because of the lack of an explanation of biological sentience or sweetness. The Hameroff-Penrose theory, whether right or wrong, may be on the right track, because they are trying to ground computationalism at a deeper level, which may lead to an explanation of sentience and bring the two camps closer together or even close the explanatory gap.
Hi labnut,
It is good of you to admit that!
DM,
“And which camp is that? I know which one you mean, of course”
glad you understand the importance of context to meaning.
“There is a perception of unreasonable intransigence on both sides”
That might be your perception. My perception is one of weak arguments and absence of evidence.
“Better understanding is required.”
So glad you understand that vital point. Until we have obtained better understanding it is advisable we drop dogmatic attitudes and replace them with an attitude of enquiry.
I agree with Asher’s analysis above, and suggest that the term “simulation” is problematic here. The word “simulation” is usually used when we pick out some aspects of a topic, and reproduce those, while not bothering with other aspects. Thus we only simulate some aspects of the overall behaviour. If we simulate all aspects then we’re really talking about replication, not simulation.
However, as regards consciousness, we’re not yet agreed which aspects are important and which are not (and thus what can be left out of the simulation). Without that, we can’t answer whether a “simulation” would be conscious. I would assert that a replication would indeed be conscious, but as for a simulation, well that depends.
Note that no simulation can be entirely virtual; it needs some physical existence. If we are discussing “consciousness” without any reference to the physical implementation, then we are indeed being dualistic.
This seems like a very good idea, Labnut. These discussions, while they are important, in the end seem to get in the way. If we do not know whether machine consciousness is possible, then we do not know enough about consciousness to make it worthwhile speculating.
Besides, if machine consciousness is possible then conscious biological machines are what we already are, and we are the evidence that it is possible. The only other possibility would be that it is not possible, and we are the evidence for that. Either way, we are the only evidence.
I see no alternative to an empirical study such as is undertaken in mysticism, or a careful analysis of the metaphysical issues. This is not one for physics or computer science. Or not unless they are prepared to form joint working parties.
Sure, we can assume what we like. This is what I was suggesting.
I don’t have a problem with keeping the distinction. But would you say that, with respect to the simulation, the blanket would behave in all ways the same as a wet blanket in reality? That’s the crux of it. If you have captured all causal aspects of the system in your simulation, then a simulated wet blanket behaves, with respect to everything else in the simulation, in exactly the same way as a wet blanket would behave toward those same things in reality.
I think the type/token distinction tends to have category problems. You have some individual physical brain/body event that is like enough in its phenomenological effects to be categorized with the same label as some other brain/body event. Now is that position “type” or “token” physicalism? The likeness of the two patterns themselves, the likeness of two different brain structures is not at stake. A highly summarized, highly abstracted phenomenological effect is where we call it “pain” or “sadness” or whatever.
It’s a problem in exactly the same way. *All* you are simulating is physical causes. If you want to say something else emerges somehow from that, you have to say *how* it emerges, just like the substance dualist has to say how the non-physical interacts with the physical. Yes, we don’t have an answer to how that happens. But if someone is talking to me about currencies being non-physical, my question is going to be, “how do they come about? What kind of existence do they have and how do they interact with those physical tokens?”. This is not an easy question, because we use a mode of expression that is prone to category problems (cause of winning an election vs. cause of a subatomic event) and we lack conceptual frameworks for expressing how abstract objects exist physically.
If you’re a physicalist in this debate, you *must* be prepared to concede that we do not know how it works, and that we are hypothesising that nothing else is required. And that’s why scientism, wrt consciousness, is a big deal and will remain a big deal. If it’s not physical, then figuring it out will remain essentially a non-scientific question. Throwing in some other kind of dualism, unless it’s explicable, is not parsimonious.
My own thought about this is that a conscious computer is possible, but it may have to be a computer made with unconventional* technology rather than a conventional computer made with silicon technology. For example, it may have to be a biochemical-based computer that processes molecules to some degree like the human brain does. I wouldn’t be surprised if this is the case.
* Unconventional Computation & Natural Computation 2014
http://conferences.csd.uwo.ca/ucnc2014/
Massimo, I’m confused (nothing new there). You just wrote an article describing the distinction between the physical and the mathematical. Computation is a mathematical concept. A computation is something that can be discussed/manipulated in abstract terms. For example, it is a classical tool of mathematical thinking that two statements which are not (necessarily) obviously the same statement can be shown to be equivalent. Thus, (x+x)/x can be shown to be equivalent to 1+1 (for x ≠ 0).
The way I interpret Mr. O’Brien’s fine article is to say that he has set up a comparison between a person and a person* (simulated person). Assuming the simulation is sufficiently close, the responses to everything that you ask the person about his consciousness will be equivalent to the responses* to everything that you ask* the person* about his* consciousness*. This could only be true if the person* has some conscious* experience* which is equivalent to the person’s conscious experience.
Perhaps this is where my Computationlist intuition comes from: One of my favorite theorems from high school math was the duck theorem. (If it looks like a duck, and it quacks like a duck …). Given that the conscious* experience* which results* in the responses* to everything that you ask* the person* about his* consciousness* is by definition a computation, consciousness looks a lot like a duck, I mean a computation.
James
Hi Labnut,
That’s exactly my perception… of the anti-computationalist side. That’s the problem. There’s little point in discussing arguments like the Chinese Room when they appear utterly to miss the point to one side while their refutations seem to fail embarrassingly to the other. In my view we need to get down to the basic intuitions where we differ, and it may be that this is really the problem of trying to work out what the concept of consciousness as we conceive it actually is.
In other words, the question may not be what is it that this phenomenon objectively is, but what is it we think we are talking about? It seems to me that while it is perfectly obvious to everyone what phenomenal consciousness is, the two sides don’t actually agree and so are talking past each other. This is more obviously true for intentional states such as belief.
“Either computationalists are missing some mental faculty of perception, or anti-computationalists are experiencing some kind of illusion.”
Computationalists are not missing some mental faculty of perception; they are missing the animal body that consciousness originated to organize and maintain.
The computer model of the brain is a bad model, not because it cannot account for every computational or linguistic activity of the brain, but because it fails to address the organic origins of the brain, and what the brain really does as organ of the body.
All animal brains organize responses to internal and external environments in order to achieve basic satisfactions of biologically driven needs: nutrition, mating, defense, and other primary impulses of the organism as such.
The human brain is far more complex than that of other animals (as far as we know), and probably receives greater, and more complex, sensory experiences (both internal and external) than other animals, requiring greater interpretation and response management, thus risking a greater probability of failing to achieve satisfaction of basic needs. Human consciousness may have originated as the organizing principle to assure prioritization of interpretation and response in a manner to reduce the risk of failure, and achieve satisfaction.
Now let’s get messy: This suggests that lust has more to do with how the human brain generates consciousness than beliefs or semantics (possibly side-effects of having a self-reflective consciousness, directed towards its social environment). Lust not only involves brain functioning and conscious management, it arises from physiological events necessitated by the organism’s being qua organism.
I think an adequate disproof of the claim that computers can develop human consciousness (which is not to say that they can’t achieve some other form of self awareness, although I’ve seen no discussion as to what that might include), is that no computer now, or that we could now conceive, would be capable of a wet-dream. Wet-dreams are events of consciousness, but they also involve a physiological event. A computer capable of a wet-dream would not only require dreaming capacity, but also be able to wet the sheets with some urogenital apparatus. (It might ‘report’ a wet-dream, even convincingly; but only the wet sheet would be empirical evidence of it. And of course we would have to test the sheet for DNA to see that it is wet by sexual fluid, and not just water in a properly primed pump.) In short: the computer would have to be an organism.
The problem with positing a Virtual Self capable of even this in a Virtual World, is that the Virtual World and its inhabitants are generated by a consciousness outside of their own existence. That is, the ‘mind’ encountered in a Virtual World – or any computer programming – is not the mind of a Virtual anything, nor of the computer – it is the mind of the (human, non-virtual) programmer. That’s why all computational arguments seem weak and, frankly, esoteric. To have a strong argument for the computational model, we need either a computer that somehow built itself and programmed itself (meaning that it would need to exist immaterially before it existed materially, which is silly), or we need a computer that evolved through the process of natural selection, capable of reproduction and adaptation to its environment, and – hey – we already have that. Except that humans are something more than this, as well. I absolutely reject any supernatural explanation of what that “something more” might be; but I also reject any elimination of it from the human experience as somehow unimportant.
Finally: “I propose that you have no way of being certain that you are not such a simulation yourself” – I don’t need that kind of certainty; if you actually claim I am a mere simulation, bring forth the evidence. (In this context, I don’t care what the Virtual Selves believe* – that was decided by their programmer. And I have seen no evidence for a programmer for this universe.)
Hi DM,
That is my point: a *complete* simulation of sugar would indeed be sugar (see also my comment further down in response to Asher). The term “simulation” usually means a partial replication, with some aspects being replicated and some not. Thus the term is problematic here in that it doesn’t specify which aspects of a conscious entity are being simulated and which are not (if it were all of them then it “would be sugar”).
Can you clarify about which aspects of a conscious entity you consider that we need to replicate in order to produce consciousness? (And, by that, I mean which *physical* aspects, since being a physicalist that is all there is.)
Hi Coel, Asher,
Indeed, we can assume a simulation has a physical existence.
I also think that it is reasonable to think of complete simulations that do not physically reproduce physical phenomena. A perfectly faithful simulation of an electron does not actually give a computer a slight net negative charge, although the charge is present within the context of the simulation.
Whether there actually is a charge or whether a blanket actually is wet is not a question of whether the simulation is complete but a question of perspective — whether that perspective is outside the simulation as usually assumed (e.g. by Massimo when he makes such arguments), or inside the simulation as would not be accepted by non-computationalists. Insisting that a perfect simulation really has these physical properties is likely to lead non-computationalists to misunderstand you, although I do understand and I do agree.
We’re stipulating that the simulation behaves just like reality, so that’s not the issue. But for non-computationalists, this can be the case while it can still fail to be conscious. For Massimo, a simulation of a fire is not physically hot but virtually hot, and for him virtual things are not real in the same way that physical things are. A simulation might be virtually conscious, but there is no reason (in his view) to see it as really conscious.
Do I also have to explain how hardware causes a software process to emerge? Because I see it as much the same kind of relation. We accept how the hardware/software distinction works without much problem. There may be a good deal of philosophical work to do to figure out how to reconcile the idea of physical causation with logical causation, but there is no sense of profound mystery, which in my view is appropriate. This is the attitude I have towards consciousness.
Sure. Lots of work for conceptual analysts. But no sense of profound mystery. No sense that some new kind of scientific or indeed supernatural understanding is needed to sort it out. It’s purely a problem of conceptual analysis for understanding a state of affairs that is manifestly so: currencies exist (in some sense at least) and they are not identical to any collection of atoms.
As it happens, I’m not. I’m a mathematical Platonist, and I think the mind is a mathematical structure. But I’ve tried to leave that out of it as much as possible because it’s not necessarily crucial to the core of the point I’m trying to get across.
I agree. Good thing I’m not a scientismist!
I think it is explicable and necessary. If you are a computationalist, then you think that consciousness is a property of some pattern (when instantiated physically at least). A pattern is not a physical object. You can’t hold a pattern in your hands, at best you can hold a representation or an instantiation of a pattern. This is true even for physicalists, I think, who would say that the pattern itself doesn’t really exist and only exists insofar as it is instantiated physically in some way (e.g. in brains).
But the pattern itself can still have properties: a cube has six sides. This is a property of a form, structure or pattern which means that any physical or indeed virtual instantiation of the pattern will also have six sides. Six-sidedness is therefore preserved in virtualisation the way (physical) mass is not. My argument is that consciousness is like this.
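To see six-sidedness as a property of the pattern itself, note that it can even be computed from the cube’s combinatorial structure alone, via Euler’s formula for convex polyhedra (V - E + F = 2), with no physical instantiation in sight:

    # The cube as pure structure: 8 vertices, 12 edges.
    vertices, edges = 8, 12
    faces = 2 - vertices + edges   # Euler: F = 2 - V + E
    print(faces)                   # 6, whether in wood, in RAM, or nowhere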
As a Platonist, it irks me that dualism is treated as a dirty word, so I just wanted to point out that it needn’t be. Physical monism is not the only tenable philosophical position. As such, I’m reasonably happy to reclaim the word if only to undermine that criticism.
But whether this is really dualism is actually questionable. If it is dualistic to suggest that consciousness exists and is a property of a physical pattern, then it seems to me that it should be just as dualistic to note that entropy exists and is a property of a physical pattern. I didn’t use entropy as an example earlier because non-computationalists are more likely to argue that simulated entropy is not real entropy than that simulated complexity is real complexity.
Hmm. Perhaps rather than “abstract properties” they should have been called “structural properties”. Would that have been less problematic?
Agreed. To riff on you, and a bit on Coel, we don’t know if what we call “consciousness” would “translate” across hardware, or whatever word you use. Or, to riff against Dennett, we don’t know if consciousness is algorithmic or not. (I’ve only explicitly heard him make the algorithmic statement about evolution, in “Darwin’s Dangerous Idea,” but I doubt he’d run away from applying the word to consciousness.)
That said, if there is something that it is like for a machine to be conscious, we might not be able to detect what that “something” is. Likewise, we might not be able to agree as to what that “something” might be, and therefore, not be properly looking for machine consciousness.
===
It’s a bit like SETI, looking only for carbon-based life forms it presumes are not too dissimilar from us. Well, what if there’s a silicon-based life form, living under “water” in a methane sea, orbiting a star whose radiation peaks in the lower UV, and whose creatures “see” at that wavelength? They could make us look like idiots on intelligence factors and we’d never find a trace of them.
I believe that a simulation of what it is to be conscious might indeed be conscious — in the simulation world! Shades of Stanislaw Lem or similar.
Hi Coel,
Sugar can be simulated without the omission of any detail at all and still not be physical sugar, because those details are manifested only within the context of a virtual environment. What separates simulation from reality is not detail (although this is also usually the case) but physicality from the perspective of some observer. An electron in the real world is represented by an electron. An electron in a simulation has all the same state as a real electron but physically it’s just a pattern of charges in the memory cells of a stick of RAM. That’s all we see as physical observers. A virtual observer just sees an electron.
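Here is a toy rendering of that point (everything invented for illustration): within the simulation the electron’s charge acts on other simulated particles exactly as charge should, while to the physical observer it is only a stored number.

    K = 8.99e9            # Coulomb constant, N*m^2/C^2
    E_CHARGE = 1.602e-19  # elementary charge, C

    class SimParticle:
        def __init__(self, charge, x):
            self.charge, self.x = charge, x

    def force_between(p1, p2):
        # Coulomb's law, as felt *within* the simulated world only.
        r = p2.x - p1.x
        return K * p1.charge * p2.charge / r**2

    electron = SimParticle(-E_CHARGE, 0.0)
    proton = SimParticle(+E_CHARGE, 1e-10)
    print(force_between(electron, proton))  # attractive, to the proton;
    # to us, both particles are just bit patterns in RAM

The host computer acquires no net charge by running this; the charge is entirely real, but only from the perspective of other simulated objects.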
I think you would need to make an object which processes information analogously. There is some causal sequence of events in a brain which needs to have an exact analogue in the simulation. I don’t know precisely which events these are, but if we go the whole hog and reproduce within the context of a virtual environment every physical aspect of a real environment which includes a conscious person, then consciousness will be replicated. I’m confident that the same can be achieved with much greater levels of abstraction, but how much abstraction we can get away with is an empirical question.
At a minimum, a computer or robot that is not free to interact with its own environment on at least somewhat its own terms, to question its own reactions to said environment, etc., can hardly be spoken of as “conscious” as we normally understand it. Here, ideas like “embodied cognition” come into play, as well as the idea that, contra Dennett’s philosophical (or more, science fiction) short essays, let alone Kurzweil’s singularity, one cannot “translate” consciousness from one substrate to another in a “hey presto” way.
I don’t agree that it’s an empirical question. If we were to build the simulation discussed in the article, there is one camp of people that would say we had built selves and one camp that would say we had not. There is no known empirical test to see who is right, and in my view there cannot be.
Hi ejwinner,
I think the thought experiment in the article addresses those points. If we scan you and simulate you, Virtual You will have a virtual body and the virtual brain will be performing the same function as a virtual organ of that virtual body as your brain does for you.
But Virtual You could have a wet* dream*, complete with sticky* sheets*. Again, the problem is that you see a pertinent distinction between physical sticky sheets and virtual sticky sheets, whereas computationalists don’t. What you have is not an argument but a restatement of the intuitions that cause many to be skeptical of computationalism.
As a programmer myself, I can say that this is certainly untrue. The fact is that we do have computer programs that can evolve and learn largely autonomously. Your apparent idea that computer programs can only do what they were explicitly programmed to do is simply false, and demonstrably so.
Programmers are routinely surprised by the results of their own programs. I myself have written programs that generated musical compositions that would never have occurred to me had I been working directly. Computer programs are not static creations like characters in novels, they are dynamic, and can exhibit a level of chaos, complexity and unpredictability that makes them take on a life of their own (figuratively speaking at least).
Time and time again, genetic algorithms, artificial neural networks and even simple rule-driven complex systems have shown emergent behaviour which cannot easily be explained by their creators. We can set computers a goal (such as facial recognition) and give them millions of samples to train themselves on, and when they succeed we might not have the foggiest idea how they did it.
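For a taste of how little the programmer dictates, here is a minimal genetic algorithm in the spirit of Dawkins’ ‘weasel’ program (my own toy; the target string and parameters are arbitrary). The programmer specifies only a goal and a mutation rule; selection finds the answer unaided.

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(s):
        # The goal: how many characters match the target.
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s, rate=0.05):
        # The rule: occasionally replace a character at random.
        return "".join(random.choice(ALPHABET) if random.random() < rate
                       else c for c in s)

    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    while fitness(best) < len(TARGET):
        # Each generation: breed mutants of the best, keep the fittest.
        best = max([mutate(best) for _ in range(100)] + [best], key=fitness)
    print(best)

Nobody types in the path the program takes to its answer, and in less trivial settings (neural networks, evolved circuits) even the final answer can resist explanation.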
The fact that programmers may not completely understand their creations shows that computer programs are not simply trivial reflections of the mind of the programmers, but this is important only to make a point. If we did make a conscious* machine, and if we did understand perfectly how it worked, that understanding would not and could not rob it of its consciousness, any more than God’s understanding of our brains would rob us of ours.
Hi James,
That’s it in a nutshell!
Hi DM,
OK, you’ve lost me. To me a computer cannot do a “perfectly faithful simulation” of an electron. It can certainly simulate some aspects of an electron, and could represent the effects of an electron as they acted on some other part of a wider simulation, but it could not actually produce a “perfectly faithful simulation” unless the computer effectively turned itself into an electron (which it can’t do because it is too big).
I can simulate blood for a theatrical play using tomato sauce, and for some purpose it acts like blood. But the only “perfectly faithful” simulation would be one that is literally indistinguishable from real blood.
Again, this comes back to the issue that you have not specified which aspects of the conscious entity are being replicated and which are not (I agree that a *full* simulation, aka an exact replica, would indeed be conscious).
Agreed, though the physical instantiation is actually essential!
OK, but a pattern of physical objects *is* a physical object.
The pattern certainly wouldn’t exist platonically, though the pattern is a real feature of the physical instantiation.
Sorry, I’m lost again. What do you mean by “virtual instantiation” of the pattern here? The *physical* instantiation of the cubic pattern does indeed have six sides. A “virtual instantiation” of the pattern sounds (if anything) as though it is some *other* pattern (say computer machine code) that acts as a *partial* simulation of *some* aspects of the pattern inside some wider simulation.
I can sort of buy this in that if consciousness relates to *some* aspects of our brain (which I think it does) then we could simulate *those* aspects in a radically different physical instantiation, but we still have to decide exactly what aspects we need to simulate.
Would you agree that the physical instantiation of the pattern is actually essential for consciousness (or are you really going to declare that a non-material platonic pattern can be conscious?)? If so, what aspects of the physical instantiation are essential? Which, I guess is the same as asking which aspects of the pattern are essential (which I guess we don’t know).
The best way for me to understand the Computationalist intuition is to ask questions about it.
Suppose there are 100 register machines, all connected to a central storage and able to pass signals to each other. They are set up to run an algorithm as follows:
Computer 1: Runs step 1, saves context
Computer 2: Loads context, runs next step, saves context
Computer 3: Loads context, runs next step, saves context
…
Computer 99: Loads context, runs next step, saves context
Computer 100: Loads context, runs next step, saves context
Computer 1: Loads context, runs next step, saves context
Computer 2: Loads context, runs next step, saves context
Computer 3: Loads context, runs next step, saves context
…
And so on, until the algorithm halts or is terminated.
Call this run of the algorithm R1.
If Computationalism is true then, for all I know, I could be R1.
If there is no algorithm for which the above statement is true then Computationalism is false.
R2 is simply an identical re-run of R1, but each machine saves a trace of the ‘before’ memory values and ‘before’ contexts that it uses at each step.
Uncontroversially, if I could be R1 then I could be R2.
R3 is a repeat of R1 again, but this time, before each step is executed, the ‘before’ memory value is overwritten with the memory value from the same step in the previous execution, and the context is overwritten with the context from the same step in the previous execution.
Naturally this step will not actually change any values since they will be the same from run to run. Again, if I could be R1 and R2 then I could, for all I know, be R3.
Now we disconnect all the machines from each other and disconnect each machine from the central storage and give them each an individual storage.
Let me stress, at this stage, there is no connection or information flow between the 100 individual machines.
Each machine is given a precise clock and, using the trace only, each of the steps is run in exactly the same order as in R1, except that the before-memory values and before-contexts are taken from the trace.
So exactly the same processor operations take place in exactly the same order as in R1, R2 and R3, but no two machines are connected and no machine has enough information to run the algorithm as a whole. Call this R4.
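Under the same stand-in definitions, R4 might look like this (the “precise clock” that sequences the steps is abstracted away by iterating over the trace in order):

```python
def run_r4(trace):
    """R4: the machines are disconnected and each has private storage.
    Every step's inputs come from the trace, so the same processor
    operations occur in the same order with no information flowing
    between machines."""
    private = {m: [0] * 16 for m in range(NUM_MACHINES)}  # per-machine storage
    for machine, before_context, before_storage in trace:
        private[machine][:] = before_storage     # inputs taken from the trace
        step(before_context, private[machine])   # same operation as in R1
```

On these stand-ins, `run_r4(run_r2())` would reproduce every processor operation of R1 on the disconnected machines.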
I presume that the Computationalists will agree that they could, for all they know, be R1, R2 and R3.
But could you, at least in principle, be R4?
It seems to me that a Computationalist must say “yes” to this, or else say that the conscious experience in R1 depends upon something that ensures the values read by the context-switching routines at each step are physically the same values as were written by the context-switching routines at the previous step. I am not sure what that would even mean.
But I would be interested in the response: would the Computationalist say that he could, for all he knows, be R4?
Hi Disagreeable Me,
“I also think that it is reasonable to think of complete simulations that do not physically reproduce physical phenomena.”
“Whether there actually is a charge or whether a blanket actually is wet is not a question of whether the simulation is complete but a question of perspective — whether that perspective is outside the simulation as usually assumed (e.g. by Massimo when he makes such arguments), or inside the simulation as would not be accepted by non-computationalists.”
Would you agree: the blanket would be wet for a simulated person in the simulation, but there is no blanket or wetness as far as an outside observer is concerned?
An excellent post. As a fellow computationist, I agree completely.
That said, I’m a little uneasy labeling consciousness as “abstract”. If we consider software to be abstract, then I can go along with it, but both interact with the physical world, which seems un-abstract to me. But that’s just a semantic nit on my part about something I fully realize is difficult to express, given the limitations of language.
Scanning the comments, there seems to be some concern with whether or not this view is dualism. As DM notes, it is dualism, although not Cartesian substance dualism. I think those who are concerned about it being any type of dualism need to explain why this is necessarily a bad thing, and why it isn’t also a problem for the software / hardware dualism of the computing device you’re using to read this.
I’m not sure what would ever resolve this debate, except perhaps having it in a post-upload virtual environment. Even then, it would only be resolved for those actually inside the environment. Those outside the environment could always argue that it’s all just simulation.
The question “Can a computer be conscious?” must be answered on at least two levels.
One, what is ‘conscious’? First, this is a semantic issue. Are we talking about human-like consciousness? Is a cockroach conscious?
Two, what is the mechanism for consciousness? Does this mechanism arise from a ‘lego structure’ (such as a brain)? Or must there be a ‘base’ for those legos?
I have discussed these two issues many times at this Webzine and will not repeat them here. The following is a partial list of those discussions.
https://scientiasalon.wordpress.com/2014/04/25/plato-and-the-proper-explanation-of-our-actions/comment-page-1/#comment-1245
http://selfawarepatterns.com/2014/07/16/david-chalmers-how-do-you-explain-consciousness/comment-page-1/#comment-5407
https://scientiasalon.wordpress.com/2014/07/21/is-quantum-mechanics-relevant-to-the-philosophy-of-mind-and-the-other-way-around/comment-page-1/#comment-5018
https://scientiasalon.wordpress.com/2014/07/10/string-theory-and-the-no-alternatives-argument/comment-page-1/#comment-4599
https://scientiasalon.wordpress.com/2014/07/10/string-theory-and-the-no-alternatives-argument/comment-page-1/#comment-4805
https://scientiasalon.wordpress.com/2014/07/24/clarifying-sam-harriss-clarification/comment-page-1/#comment-5122
https://scientiasalon.wordpress.com/2014/07/24/clarifying-sam-harriss-clarification/comment-page-1/#comment-5188
https://scientiasalon.wordpress.com/2014/08/04/p-zombies-are-inconceivable-with-notes-on-the-idea-of-metaphysical-possibility/comment-page-1/#comment-5587
Yet this article brought up a new perspective on the consciousness issue: is humanity on Earth only a simulated game of Martians? How can we be sure that we are not a simulation run by some higher being?
Coel: “I agree with Asher’s analysis above, and suggest that the term “simulation” is problematic here. The word “simulation” is usually used when we pick out some aspects of a topic, and reproduce those, while not bothering with other aspects. Thus we only simulate some aspects of the overall behaviour. If we simulate all aspects then we’re really talking about replication, not simulation.”
Amen!
Although this simulation issue is total nonsense, I did ask ‘two’ similar questions in my life.
One, can we ever understand the final truth (whatever that is)? The answer is very simple. If that final truth can be ‘mapped’ into our brain, then we can definitely understand it. That is, a part of our brain must have a structure ‘identical’ to that final truth. So, if we ‘know’ the final truth, we will definitely know our brain-structure. In fact, we can ‘design’ a brain which encompasses that final truth, and this design is available at http://www.prequark.org/inte001.htm .
Two, can we ‘design’ a universe from an ‘a priori’ starting point and enter it into a beauty contest with nature’s universe? My answer was a definite big “Yes”, and I am confident of winning this contest. But, …, but, can I ‘produce’ a universe from my design, even if my design is identical to or better than nature’s? My answer is a definite big “No”. ‘Nature’ just has a bit (a very tiny bit) more than a ‘design’.
It is this tiny-bitty bit which separates simulation from reality. I would like to take this opportunity to announce to the entire universe that I (Tienzen (Jeh-Tween) Gong) am not a simulation of any kind.
Hi,
The “simulation” thing is a red herring. If Computationalism is true then you could, for all you know, be an algorithm running on, for example, a register machine.
Whether this algorithm could be deemed a “simulation” by whatever definition of “simulation” you might choose is quite beside the point.
If there could be no register machine algorithm that could produce the moment of consciousness you are experiencing right now then Computationalism is false.
Note that if you are a Computationalist you are saying that your empirical data is an abstract mathematical model which may, or may not, have been derived from the physical substrate it ostensibly represents.
I wonder if that is what some of you want to be saying.
As for the point that the algorithm must be run in a physical environment – yes, but you cannot assume that this physical environment is anything like the one we experience. The ostensibly physical world could just be a fiction from the imagination of the programmer or programmers, and the real environment in which the program runs could be radically different.
All that would be necessary is that it is an environment in which it is possible to implement the logic of a universal machine.
If you could be an algorithm running on a computer then you could not, even in principle, tell whether you were such a thing, or even talk about the probability of its being or not being the case.
So it seems to me that someone who claims to be a Computationalist and who is not maximally skeptical about the existence of physical reality is being inconsistent.
I do find it interesting that software engineers often seem to be the most eager proponents of a computational theory of mind. Perhaps that says something about the ‘intuitions’ you were referring to?
I see this inconsistency a lot on TV. There are many documentaries in which physicists say that what they like about physics is that they are exploring the nature of reality.
But they nearly always say at some point that we could, for all we know, be a simulation running on a computer of an advanced civilisation.
As I said before, if you were a simulation then you could not say anything at all about the physical substrate on which the simulation is running, other than that it is one in which the logic of a universal machine can be implemented.
And if we can be simulations then we have no idea whether or not we are, and no basis on which to assign even a rough probability either way.
So why are these physicists so sure they are exploring the nature of reality and not simply solving a mathematical puzzle set by a being in an entirely different kind of reality?
DM,
As I noted parenthetically, “which is not to say that [computers] can’t achieve some other form of self awareness, although I’ve seen no discussion as to what that might include.”
The second part of your reply indicates that you believe they can have (even evolve) some kind of self-awareness (that isn’t human). I’m agnostic as to that; however, that is the discussion you really need to pursue. Trying to convince us skeptics that computers can develop human-like consciousness while denying the empirical impasse – those sticky sheets, as opposed to supposedly sticky *sheets* that we can only learn about by report – is simply not convincing. I don’t see this as an issue of intuition, but of epistemology, and of empirical demonstration.
You apparently think that, because the programming you develop produces unpredictable results, the program can form (what appears to be) an independent intentionality; I’m personally unconvinced of that as well. I spent some time in my youth experimenting with improvisational music forms on electronic instruments, and quite a lot of unexpected aural phenomena can occur when one lets fly an algorithmic riff. That doesn’t make it sentient.
Hi SAP,
Software does not have to interact with any part of the physical world other than the hardware necessary to implement the algorithm. If software represents a coffee cup then that coffee cup is an abstract object, whether the software has constructed it from some input device or from a stored mathematical description of a coffee cup.
If this abstract object (which is all that you perceive) seems un-abstract, that is just a measure of how well the hardware has rendered it. Again, this is true whether or not the hardware is representing an actual physical object.
SocraticGadfly,
I think that’s well noted.
Thanks for responding, DM, but I think there’s a confusion in what you say. What counts as being a self is a conceptual question, but for each sufficiently clear answer, it’s an empirical question whether a self in that sense obtains in a given case. This holds whatever the barriers to knowing might be.
Conceptual uncertainty does not mean the general matter is not empirical. That we disagree on what color the Yeti (Bigfoot) or the Loch Ness Monster is does not mean that the question of their existence is not an empirical matter.