On Mind Uploading and Consciousness
Among the many problems with this idea of mind “uploading”:
1. Nobody really knows what the mind is; they only pretend to, based on quite limited understanding of various brain mechanics. Triumphant as the materialists like to think themselves, they have never conclusively established that the mind is simply the brain (that consciousness is just something neural tissue “has,” however you want to describe it).
2. Further, most mind “uploading” talk relies on the kind of reductionism you hear from pop science columnists like David Brooks, who like to prattle about their “minds” being in the “cloud” because they look stuff up on their telephones all the time (and thus have “outsourced” their memories there). Whatever the mind is, it cannot (or rather, I think it should not) be reduced to data storage, processing, and computation. All the computational power of the world’s fastest computer, linked to data storage containing records of everything ever written, photographed, or videotaped, with an ultra-sophisticated algorithm capable of searching all of that data, identifying patterns in it, and responding to queries, would still not be a mind; it would just be a fast computer with a lot of data and good software to process the data, having no awareness whatsoever of what is being processed and no volitional intelligence of any kind.
So, even if you could make copies of all of your memories and upload them into the “cloud,” or map out all of your neural tissue and upload that map (to be digitally reconstructed), whatever you uploaded would just be data, a copy of information held by you. Rebuilding all of that into some sort of software analogue would not be you any more than a perfect physical representation of your body would be you; it would just be a mimicry. If enough data is behind it, the mimicry might be convincing, but it wouldn’t be you, or anybody; it would just be a machine programmed to take on the apparent character of a person.
I can easily imagine a future in which people are fooled into thinking they are uploading themselves into some computer matrix, to achieve immortal consciousness, when in fact their mental structure is being copied, their thoughts transcribed and stored, their memories captured, but they themselves, once physically destroyed, die; copies might float around for a century or two mimicking them, but they are dead. The copies are no more them than the soulless imitations of life in a cuckoo clock; they’re just better imitations.
…
The market gives the people what they want, which, in the case of most people, is trash. Democracy is organized on similar principles.
I find the mindBay idea [discussed in correspondence] compelling. It’s not hard to imagine people copying themselves, as much as that could ever be done, for sale, primarily out of vanity, like the people who fall all over themselves to get on reality television. Imagine the possibilities! We could pay $20 to discover, through truly intimate immersion, that the brain of Paris Hilton is even more vapid, damaged, and saturated in toxins (more experiential than chemical) than we could ever have begun to speculate. A whole new genre of warning labels would have to be concocted (Extended exposure to the thoughts and memories of this creature will make you a stupider and more irritating person. Beware!)
On Tue, Jan 29, 2013 at 1:08 PM, … wrote:
Well, if “nobody really knows what the mind is,” as you correctly state, then how can you be so certain that it cannot be copied or uploaded?
I’m glad we agree about the central point. Materialists usually assume that whatever they don’t understand is just a more complicated version of whatever they think they do understand. It’s good to establish that we simply have no basis for assertions of firm knowledge at all here.
I don’t know definitively that the mind can’t be copied or uploaded, assuming we could ever even really understand what it is. (I also don’t know that Yahweh, Apollo, Loki, the archangels, demons, sprites, gnomes, or assorted other figures of human belief/folklore weren’t/aren’t real; I just have no concrete reason to accept that their current or historical reality has been demonstrated. I concede these possibilities. That is the difference between skepticism and supposedly rational positive atheism.) Since we do not know what minds are, have no idea how one would actually be copied or uploaded, and have never done anything remotely like either, I would think the onus would actually be on the people who think mind uploading is just a question of computing power and clever software to establish that:
1. That’s all there is to it
2. Thus, it can be done
Rather than simply assuming these things are true and waiting around for the Big Rock Candy Mountain, at which time all things will be possible.
If I were lambasting anything in my post, it was careless popular talk, overstatement by analogy, that confuses the issue and/or trivializes it (like the currently fashionable “idea” that people are now merging their minds with machines because they use machines to look things up quickly, and thus “offload” “memories” or “mental storage capacity” to some mystical remote place, the cloud, from which things may be retrieved, rather than internalizing them in their own minds [what used to be called learning]). That is no different than somebody pretending to have mastered the law because he lives next door to a law library, or has a WestLaw account.
Joss Whedon (whom I consider a tremendously talented and amusing entertainer, if shallow at times) played with these ideas in his Dollhouse show. You could, rightfully, say that is a strawman comparison because the show didn’t do the complexity of the thinking behind these ideas full justice, but even so, I suspect he hit the high points, namely the idea that minds are basically reducible to physical structures which can be scanned, copied, stored, “uploaded,” and “downloaded” at will. I do strongly suspect there is an irreducible core within sentient creatures that cannot be captured this way, and is thus not transferable. The show hinted at such a possibility without really deciding anything.
Do I know that? No, and I didn’t claim that I did, but nobody has established otherwise. I think most intelligent people regularly in the habit of interacting with sentient creatures, across the course of human history, have come to similar conclusions, which doesn’t make them right, but does, I think, place the burden of establishing that they are wrong on the few moderns who assert the contrary, namely that sentient life is just matter organized in a particular way that gives rise to either the appearance, or the reality, of consciousness (which would then just be an aftereffect of something physical), and thus can be fully understood, copied, transferred, recreated, etc.
It seems to me your position is no more valid than that of those who claim that it can. If nobody really knows what the mind is, then it is still an open question, and your assertions are no better than anyone else’s; like your biological chauvinism, they are merely based on emotional preference.
It is certainly an open question; I never said otherwise. I merely said the burden was on those who thought they could answer it in this particular, materialist, way to demonstrate that they are right, since their assertions run contrary to all known experience, and what they claim they will be able to do in the reasonably foreseeable future has never, to our knowledge, been done.
My assertions are better than some people’s, because what I’m saying is rooted in experience and (presently limited) understanding. If I said that nobody has ever turned a kangaroo into a water buffalo, and there is no reason to believe anybody can, that statement would have two things behind it that the contrary assertion would not:
1. The fact that nobody has ever actually turned a kangaroo into a water buffalo before, at least that we know of
2. The fact that nobody has ever demonstrated how such a thing could be done
There is no reason to believe it can be done, because it hasn’t been done and nobody has provided a reason that it could be. This does not mean it cannot be done; it means they haven’t given the reason it can. The burden is on them.
“Biological chauvinism.” Again with the name-calling. You are trying to coin a slur for people who value living, sentient creatures more highly than nonliving machines. (Try “organicist.” I made that one up; it’s better.)
Since no machine we know of has ever demonstrated consciousness, sentience, volitional intelligence, or an emotional connection with anything else (traits all demonstrated by my cats, if not necessarily by all humans), I’d say it’s only natural, “human,” to prefer living creatures to machines. Further, I think there is something strange about being “offended” by such a preference.
I mean, what are we really trying to accomplish anyway? Are we trying to develop/understand technology (machines) that will improve the lives of people and animals in the future (living, sentient creatures), or do we just want to make these machines for their own sake?
I think that is a really important question. I guess you, since you’re no kind of “chauvinist,” would be happy enough to make machines for their own sake, or to replace people and animals with them altogether (perhaps as part of some greater, magical, “progressive” evolutionary project in which ever more complex intelligences arise, on the principle that ever more complex and powerful intelligences, whether “biological” or something else, always should, and must, follow less complex/weaker ones; a principle that has never been established anywhere).
Most people would disagree (normal people are biological chauvinists, just as normal people would recoil with horror if someone were to torture a cat, but would be indifferent to smashing an iPad with a rock; normal people also care more about their children and parents than they do random people they have never met and have no relation to, another, biologically driven kind of “chauvinism”).
You say this is an emotional preference, and thus is presumptively invalid. I wonder, where do your preferences come from? What drives emotions to begin with, and why? Are they just the product of brain chemistry reacting to current and recollected interactions with the environment, filtered/channeled/driven, I suppose, through complex, genetically defined channels? Or do they well up from within for additional reasons that are not just integral to our ability to be in the world (survive and reproduce) but essential to the question of why we would want to bother?
People who have no more emotional attachment to animals or other people than they do to machines or abstractions are difficult to distinguish from machines: sociopaths, soulless automatons. If that’s your non-biological-chauvinist ideal, you can keep it.
I know people like that. I can give you some phone numbers if you’re interested.
For myself, I do not see why there is any reason to believe there is anything mystical or nonmaterial about the mind, whatever it is. If there is, then I think the onus is on those who believe this is the case to show it. However, if I knock you in the head hard enough, you go unconscious, or even die. What happens to your mind then? Where does it go?
Humans have been trying to answer that question as long as there have been humans (as far as we know). Nobody doubts that the brain is the physical manifestation of the mind, and that damaging the brain impairs or destroys the mind’s ability to function. This does not in itself establish that the mind is reducible to the brain. Maybe it is, but that hasn’t been established.
Whether unconscious or dead, you don’t have your mind anymore, either temporarily or permanently. That would seem to be a pretty good indicator that the mind, whatever it is, is very closely related to the brain.
You are assuming the conclusion here, since this is not known. It is possible, of course, that the mind is something separate from the brain, but dependent upon it for being, so that destroying the brain destroys the mind (as destroying the body destroys the brain). It is also possible that destroying the brain has the effect of terminating a particular incarnation of the mind, which is released, and either does or does not manifest itself again in some other form.
This was believed, in one way or another, by most people for what we know of human history. I have a bias towards placing the burden of proof, as above, on those who are challenging/dismissing beliefs/views/conclusions long held, all other things being equal.
That doesn’t mean I think that those beliefs are established themselves as true, merely that the burden is on whoever is challenging them. Without belaboring this, the most rational reason I can articulate for such a burden placement is that:
Intelligent people, arguably people far more intelligent than any of us, have considered these questions countless times through the centuries, and while we like to flatter ourselves that we now command vastly more information than they did, and are thus better placed, or uniquely privileged, to make informed judgments:
1. We don’t actually know that much more than they did, if anything, about quite a bit, and
2. We should really ask ourselves why they came to the conclusions they did, rather than just dismissing those conclusions as having been rooted in ignorance (again on the often faulty assumption that we know so much more)
So, if it is true, as seems likely to me, that the mind is based on processes and information in the brain, then it is based on something physical, which means that, in principle, it is treatable like other physical objects: movable, copyable, storable, modifiable, improvable. Now one can debate whether this is practicable in the real world any time soon with foreseeable technology (due to the extreme complexity of the brain), but that is a different question.
You reduce the mind to the brain and declare that at some point in the future we will be able to copy the brain. I say, in addition to my other objections to this materialism, how will you know that you have copied the thing itself, and not just a part of the thing, or that you haven’t just made a superficial reconstruction of the thing that misses the essential core (the part we don’t, and possibly can’t, understand)?
Further, even by your admission, this “uploading” would be making a copy of the brain, not “transferring” the brain, or the mind, into something else (a “new substrate”), so you are not describing somebody “uploading” “himself” into a machine; you are describing a machine making a copy of somebody. So it wouldn’t be “you” in the cloud anyway, just a representation of you.
I guess what you’re really saying is that there is no you at all, just a complicated object, like other complicated objects, and if we can make a copy, sufficiently exact, of that object, we have made a copy of “you.”
I don’t hold to that.
Good to hear back from you.
Comments below.
On Fri, Feb 8, 2013 at 12:13 PM,… wrote:
Thanks for the detailed and thoughtful reply. I delayed responding because I couldn’t decide how much to respond to, and at what length. Finally I decided to just respond to what strikes me as the main point. You seem to be contending (correct me if I am wrong) that no matter how advanced a machine becomes, it will always be mimicking consciousness or sentience; that it will never really attain these characteristics.
No, I didn’t say that I thought it was necessarily impossible for a machine to have these characteristics, I said nobody had convincingly demonstrated that they could create one that did. Further:
1. We don’t really understand what consciousness is, or what the mind is
2. No machine demonstrating consciousness or the other attributes of mind has ever been built (to our knowledge)
3. So, nobody has convincingly explained how they could build such a machine (a high bar, given #1), or demonstrated that they could, and the burden is on those who think that it can be done to establish this
Unless you have some criteria, which I haven’t seen, that, if fulfilled by a computer, robot, android, whatever, would lead you to agree that it was conscious or sentient. If so, I would be interested in hearing them; I’m sure they would be thought-provoking. But if you don’t have any such criteria, then there can really be no further discussion. You have pre-decided the issue and there is really no more to be said.
Thanks. As stated above, I haven’t pre-decided the question so much as attempted to set a high threshold for answering it.
I think there are a few questions here:
1. Can we create a sentient/conscious machine? (I say create to include various methods that mimic a kind of evolution, rather than restricting the discussion to machines that are simply designed, programmed and built)
2. Should we?
3. How would we know the machine is sentient/conscious?
#3 is critical, of course, not just to answering question 1, but also to the process by which #1 could be accomplished (if we have no way of testing consciousness/sentience, how would we even know not only if we had done it, but if we were close to doing it?)
I’ve already discussed #1; #2 is one of those permanent questions that can never really be resolved, but has to be talked through, because of the possible consequences of being wrong, either way (we’ll leave it for now).
So, to #3:
I don’t think we can ever know absolutely that anything (synthetic as well as organic, for argument’s sake) is conscious; all we can really do is list the attributes of consciousness, test for them, then decide how convincing the results are.
I believe what are called higher level animals (mammals generally, though not exclusively) are conscious/sentient, though I concede that I cannot prove this definitively (note, I cannot prove definitively that humans or a given human is conscious/sentient either, only test for the attributes and evaluate the responses).
Some attributes of conscious/sentient creatures:
1. Volitional intelligence (goal orientation; being able to come up with goals [with some degree of independence] and then act to fulfill them; as discussed before, randomization is not the same thing; there has to be some loose set of goals and preferences the creature is determining and deciding within [prioritizing], ideally for its own reasons)
2. Awareness that it is alive, and that it is, subjectively, thinking/acting
3. The ability to respond spontaneously to situations, particularly those outside of the bounds of its experience
4. The ability to learn from experience (trial, error, evaluation, correction)
5. Understanding the non-arbitrary character of things and relationships; that is to say, knowing that a thing is a thing, and not something else, and that the difference matters (and not just for purposes of solving some problem). This is basically Searle’s aboutness idea (sentient creatures know that their thoughts are about something, and not something else, or rather that there is a non-reducible difference between things, and the difference matters; rearranging words into sentences means nothing [and does nothing to demonstrate sentience/consciousness] even if the sentences follow perfect rules of grammar and meaning [within context] unless the actor knows what the sentences are about; otherwise it’s just an exercise in problem solving through processing inputs and applying rules). I would say the same thing about manipulating objects in the world: if you think any given thing may as well be any other given thing, unless you have some specific task that requires it to be something particular, well, I’d say you are not really living in the world, just using bits and pieces of it as they present themselves and seem to meet some need. I concede this point is debatable on some levels, but I do think it a difference between the automaton and the living creature
6. Curiosity about things in their own right (what they are, how they work, what you can do with them); an extension of #5 that is not really a strict requirement, more of a clue that consciousness is there
7. Some would say, and I would like to say, the ability to form subjective bonds of different kinds with other creatures demonstrating sentience (unfortunately sociopaths do not appear to do this, and they are both sentient and conscious, so we can’t make it a rule here, though we should definitely make it a rule somewhere else). By extension, the ability to relate to other apparently sentient creatures as ends in themselves, not as means to an end; not a requirement of consciousness, but something that should be there if we’re going to allow these things to come into being
8. Both lasting and immediate subjective reactions to things and to apparently sentient creatures in the world (ideally not just interacting with the world, but feeling: love, hate, desire, etc.; I can imagine a consciousness that lacks these attributes, and they aren’t necessary for a thing to be conscious, but again, they should arguably be required for other reasons)
Do cats do all of these things? I would say, of course, though men like Descartes dismissed all animals as machine-like automatons of instinct (I think he just didn’t spend enough time with cats, or other animals, or he wasn’t paying attention).
Can a sufficiently powerful machine demonstrate all of these attributes (whether through clever programming, as part of a neural-net kind of evolved response set, through a hybrid method, etc.)?
I don’t know. Given the inherently subjective (specific to the actor whose consciousness is at issue) nature of some of these attributes, it is very difficult to establish them independently (like #2).
How do we know that the machine (or cat or person, for that matter) is really conscious, and not just mimicking consciousness, even if it has convincingly established that it meets all of these criteria?
I would say if we know that the machine is only doing whatever it is doing through application of complex algorithms to data sets (including data from mechanical senses), then it is hard to argue that it isn’t just compelling mimicry (I mean, I don’t care how complicated the program is, if you are just executing a set of rules applied to a set of facts to achieve a desired result, you are not really meeting the criteria, you are just creating the appearance of doing so). If the methods by which it has come to act are different (not just rule based algorithms), it gets harder to say, and ultimately almost impossible.
Beyond a certain point, once the criteria have been demonstrated as convincingly as they can be, given all of that, I suppose you would have to give the machine the benefit of the doubt, just as we do with people and some of us do with animals.
I would only wonder, though, how you would respond if someone were to make the claim that you are not really conscious or sentient, but are merely mimicking having these traits. (I am not making this claim, by the way.) But if I did, how would you answer? Bearing in mind that anything you say or do could be said or done by a sufficiently advanced machine that, on your view, would only be giving the appearance of having these characteristics.
That’s a perfectly reasonable question. All you’ve seen from me are characters on a screen; I could be a chatbot, for all you know.
My answers are the same as above; the criteria I set out up there are the criteria I would apply to try to determine whether people, animals, or machines were sentient/conscious. (I’m sure the list could be improved upon, by the way.)
I would try to demonstrate through my words and other conduct that I met them.
I have often wondered whether John Searle has ever been asked this question, and if so, and if he answered, what his answer was. Now I have the chance to ask someone who, if my understanding is correct, seems to hold a similar view. I’m not going to pass it up.
I’ve been reading Searle; there’s a lot of overlap between our views, but I part company with him on some critical points (like his mind/brain answer, namely that consciousness is real, though subjectively experienced, and is something that neural tissue has as a kind of dynamic property [which is really just a way of saying that matter thinks, and knows that it is thinking]; to me this is not a “way out” of dualism, it is just an acceptance of the materialist position, dressed up to account for the obvious fact that subjective consciousness exists [if there were not at least one subjective consciousness, there would be nobody to think/act/etc.]).
Thanks for the response. It helped me to clarify my views on some things.
…
Thanks for sparking the discussion and continuing the debate. I find I never really know quite what I think about anything until I’ve worked through it this way with other people.