Planet Ulrich

A Dangerous Subversive Takes on the Future, with Movie Reviews!

Posts Tagged ‘life vs. non-life’

From correspondence between a friend and the Singularitarians, on the question of life, non-life, and Deutsch’s computational reductionism


“If we cannot really know [if machines are alive], then we can’t assume that they aren’t alive any more easily than we can assume they are sentient. But how can we act without assumption of one or the other and still proceed?”
We don’t really know, absolutely, that they’re not alive now; we just have no reason to think that they are (and can thus proceed on the assumption that they’re not, as I proceed on the assumption that my car is not alive). And I don’t think some set of programmers writing code designed to emulate human responses and fool us into thinking a machine is alive could ever rightly reverse this judgment.
The burden of proof here is on the machines, you could say, and it should be a high burden. I’ve seen no evidence of humans having created any kind of mechanical life, let alone mechanical life having the potential to become superhuman in intelligence. It could be that we can’t do that, that all we can ever do is create really fast machines with a lot of memory, and maybe that’s fine; maybe that’s better.
I think we should be asking ourselves, how can we create machines that improve the quality of human lives (and by extension, the lives of other complex, sentient creatures, such as mammals)?
not,
How can we build machines that are alive?
Or worse,
How can we build machines that are alive, to replace us?
But I’m not saying you should build a really complex machine and then try to torture it or anything like that.

“If you don’t know if [Schrödinger’s] cat is dead or alive, it seems to me you still have limited options as to how you can proceed. What is better – to assume that it’s more likely the cat is dead and light the box on fire, or to assume it’s more likely alive and open the box so you can at least check before doing something destructive?”

Well, of course, if that were the question, you would err on the side of assuming the cat was alive (in the formal thought experiment it is impossible to open the box). But Schrödinger’s cat is always offered as some kind of weird wave/particle duality analogy (with people talking about the cat being in a superposition between alive and dead, something absurd on its face); I only brought it up as an example of an epistemological question being presented as a physical one.
“The question comes down to one of greater harm – which has more dire consequences not just for humans, but for all life? That we assume machines are incapable of reaching a sentient status and continuing to treat them like machines (in other words, assume they can’t possibly be slaves, potentially enslaving something sentient) or to assume that they could be sentient and do everything in our power to figure it out before we get to the point of doing something destructive?”
Again, you are reframing the question a bit. I’m not assuming it is absolutely impossible for a machine to become sentient; I’m expressing skepticism about our ability to make (directly or indirectly) a machine that is sentient, and also asserting that there is a qualitative difference between life and non-life that has nothing to do with computational power. (I’m saying I do not believe life is reducible to computation; thus a really powerful computational device would not be alive merely because it could compute powerfully, though it might be able to run a sufficiently complex set of algorithms to create the appearance of being alive. It could, I suppose, become alive for some other reason, but not just because it had a fast processor, a lot of memory, access to vast databases, and clever software. I think there’s more to life than all of that; we can hardly claim to understand these questions in their entirety about organic creatures, let alone these potential synthetic devices.)

For all of that, of course I think we should monitor it closely and not do anything avoidable that is destructive. But why would we want to create machines that are alive, let alone a synthetic superintelligence?

Written by ulrichthered

February 21, 2013 at 3:48 pm

Posted in Singularity


Deutsch and “Artificial Intelligence”


http://www.aeonmagazine.com/being-human/david-deutsch-artificial-intelligence/

I find myself agreeing with John Searle. (I came to the same conclusion as Searle independently: namely, that the assertion of the human as computer, or of the computer [if sufficiently powerful] as human [or effectively human], is a metaphor, just as the human-as-steam-engine and universe-as-clock ideas were metaphors, and probably an inevitable one. This is simply how humans seem to think; given the ubiquity of this particular type of machine [the computer] now, and its ability to act on highly complex sets of instructions to accomplish things in the world, some set of humans was bound to compare it to humans first, then compare humans to it, then talk about humans as though they were only more powerful versions of the particular machine, and so on.)
I also don’t accept the “Universality of Computation.”
Sorry. I don’t think everything can be reduced to calculation, information retrieval, and processing. I do think, further, that there is a qualitative difference between life and non-life (a view the author calls “racist,” though he never says why something “racist” is therefore “not true”; I suppose we are all simply supposed to know this, since “racist” things are, by definition, “bad,” and “bad” things cannot be “true”).
Nonsense.
A machine that fools you into thinking it’s not a machine is still a machine. You’ve just been fooled. The Turing test, for all of Turing’s obvious genius and accomplishments, is silly; more importantly, it’s epistemic, not physical/ontological (it is a statement about the conclusions we as humans have come to about the identity of a thing, our knowledge, or seeming knowledge, of that identity, not the identity itself).
You can say whatever you’d like, but thinking something is alive or human or conscious or whatever does not make it so. I suppose it is fairer to say we simply cannot know, ultimately, whether something is alive in the same way that we are ourselves alive (conscious, sentient, however you’d like to describe it); but saying that, because we cannot really know, and the thing seems to be alive (sentient, conscious, whatever), therefore it is, or may as well be, strikes me as wrong.
(Similarly, you may not know whether or not the cat in the box is dead or alive, but it is either dead or alive, not both, or neither; you simply don’t know. That is to say, it has properties in itself independent of your understanding or observation. A person is alive, sentient, intelligent, conscious, however you want to describe it in himself, not because you think he is.)
I simply do not accept the reductionist idea that life is just non-life that can compute and act with apparent volition, and that the only difference between a person and a software program is computing power and clever enough coding. (I also think it a monstrous idea; but that’s a moral/aesthetic judgment, not an argument against the validity of the concept, so I’ll leave it be).
What is it that drives your computations? (What makes a person want to go left rather than right, decide to write an essay rather than go skiing, etc.?) Only living things have desire (as one of their characteristics), volition, drive; these things cannot be reduced to the product of “computation,” however complex; they are qualitatively different from that.
Life is not just problem solving. (Something makes the living creature [not “entity”; a cat or a person is not a rock, a corporation, or an iPad] decide to solve or attempt to solve one problem and not some other, experience something, abandon something else, etc.)

Written by ulrichthered

February 21, 2013 at 3:08 pm