Planet Ulrich

A Dangerous Subversive Takes on the Future, with Movie Reviews!

Posts Tagged ‘AI’

On a Proof that Friendly AGI is Impossible


In response to much screaming and moaning about the prospect of someone developing a logical proof that “friendly” AGI (Artificial General Intelligence) is impossible:

A proof that friendly AGI is impossible would not make friendly AGI impossible; it would simply demonstrate that it is impossible. The proof would be a step forward in the sense that it would tell us something we don’t know… which would be, I’d say, rather important.

There seems to be the implication here that we would rather:
1.  Not prove that friendly AGI is impossible
2.  Build AGI (while hoping for the best)
3.  Then find out
We might want it to be possible, but wanting doesn’t make it so, and if somebody could prove that it’s not, well, that would be a pretty powerful argument against trying to build it (hence, I suspect, some of the hostility to the suggestion).
You would think that the people who want to build AGI would welcome an attempt to prove that friendly AGI is impossible, to help them, I don’t know, build AGI that is friendly (if they’re going to build AGI at all).
I’m not convinced that any kind of AGI is possible (if by AGI we mean machines that possess independent consciousness like that of sentient organic creatures), and I’m certainly not convinced that it would be desirable if it were possible.  Right now, all we have are machines processing instruction/data sets of 0s and 1s.  The machines have no awareness of what the 0s and 1s correspond to, nor do they have any volition or the other attributes of sentient creatures.
Which is to say, for all of the computational power, data storage, and information-retrieval sophistication of these machines, they do not now have the intelligence of a rodent or a cockroach, let alone that of a human (or superhuman, or god). Making the machines more powerful, or giving them more data to process, doesn’t change that.
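(To make that concrete, here is a throwaway Python sketch, purely illustrative and mine, not anyone’s real system: the operation below is the same machine-level work whether we read the bits as a greeting or as pixels; the correspondence exists only for us.)

```python
# Purely illustrative: the same bit-level operation, applied to bytes we
# happen to read as text and to bytes we happen to read as image data.
# Nothing in the program represents that difference.

greeting = "Hi".encode("utf-8")           # bits we interpret as a greeting
pixels = bytes([0b10110010, 0b01001101])  # bits we interpret as pixel data

def invert(data: bytes) -> bytes:
    """Flip every bit; the machine neither knows nor cares what they encode."""
    return bytes(b ^ 0xFF for b in data)

print(invert(greeting))  # to the machine, exactly the same kind of work...
print(invert(pixels))    # ...as this; the interpretation is ours alone
```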
The core arguments I’ve seen against various “Friendly” constraints express concerns that I think should be answered, not shouted down or wished away.
One objection runs: if an AGI could be built subject to constraints like Asimov’s laws of robotics, then it wouldn’t really be autonomous. This cuts two ways (see the sketch after this list):
1.  It raises all kinds of, in my view, spurious ethical complaints about “enslaving” machines, but also
2.  It suggests that a sufficiently intelligent AGI would find ways to make itself autonomous, removing the constraints, or creating successor AGIs without them.
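A minimal sketch of what such a constraint amounts to in practice (every name here, propose_action, FORBIDDEN, constrained_step, is made up for illustration): a hard-coded filter vetoing the choices of an underlying system. The veto sits outside whatever “will” the system has, which is exactly why point 1 calls it enslavement and point 2 expects a smart enough system to route around it.

```python
# A made-up sketch of an Asimov-style constraint: an external, hard-coded
# filter that vetoes the underlying system's chosen action.

FORBIDDEN = {"harm_human", "disable_constraint_filter"}

def propose_action() -> str:
    """Stand-in for whatever the underlying system would choose on its own."""
    return "harm_human"

def constrained_step() -> str:
    """Run one step, letting the external filter veto the proposal."""
    action = propose_action()
    if action in FORBIDDEN:
        return "no_op"  # the veto is imposed from outside the "agent"
    return action

print(constrained_step())  # prints "no_op": the filter decided, not the agent
```

An agent that could rewrite constrained_step, or spawn a successor without it, would no longer be bound; the constraint constrains only so long as it stays out of the agent’s reach.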
Some say AGIs would be “friendly” by definition, because they wouldn’t have any reason not to be (they wouldn’t want anything, they would bring about or be the product of a world of perfect abundance, and, lacking scarcity, there would be no conflict between them and humans, etc.). This is more wishful thinking, uninformed by the dreadful history of, say, humans, on these points, or really by common sense (reminiscent of other eschatological visions, like Marx’s withering away of the state, or the Gnostics’ Kingdom of God on Earth).
Just because we can’t think of good reasons why AGIs might want to hurt us doesn’t mean they wouldn’t; isn’t it rather “nearsighted” to assume otherwise?
If they were true AGIs, they would have their own reasons for doing whatever they do (volition is part of what it means to be conscious, or generally intelligent, if you’d like).
Our ancestors had no trouble envisioning gods who lived under conditions of abundance capable of all kinds of malice and mayhem.
Maybe the apparent AGIs would actually have no GI at all; maybe they’d just be really powerful machines, and a corrupted instruction set would cause them to wipe out humanity (processing those 1s and 0s) without the slightest awareness that they were doing so.
Think of a Google car glitching and running off a cliff.
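(Again purely illustrative, with made-up numbers: one flipped bit in an 8-bit signed sensor register turns a gentle correction into a hard swerve, and nothing in the loop “knows” that anything went wrong.)

```python
# A toy steering loop (all values hypothetical): a single flipped bit in an
# 8-bit signed sensor register turns a 3 cm nudge into a violent swerve.

def to_signed8(bits: int) -> int:
    """Read an 8-bit pattern as a two's-complement signed value."""
    return bits - 256 if bits >= 128 else bits

def steering_command(lane_offset_cm: int) -> int:
    """Proportional controller: steer against the measured offset."""
    return -lane_offset_cm

reading = 0b00000011              # sensor: 3 cm right of center
corrupted = reading ^ 0b10000000  # a single bit flips in memory

print(steering_command(to_signed8(reading)))    # -3: gentle correction
print(steering_command(to_signed8(corrupted)))  # 125: hard swerve
```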

Written by ulrichthered

February 22, 2013 at 3:55 pm

Posted in Singularity


Deutsch and “Artificial Intelligence”


http://www.aeonmagazine.com/being-human/david-deutsch-artificial-intelligence/

I find myself agreeing with John Searle. I came to the same conclusion independently: the assertion that the human is a computer, or that a sufficiently powerful computer is a human (or effectively human), is a metaphor, just as the human-as-steam-engine and universe-as-clock ideas were metaphors, and probably an inevitable one. This is simply how humans seem to think; given the ubiquity of this particular type of machine, the computer, and its ability to act on highly complex sets of instructions to accomplish things in the world, some humans were bound to compare it to humans first, then compare humans to it, then talk about humans as though they were only more powerful versions of the machine, and so on.
I also don’t accept the “Universality of Computation.”
Sorry. I don’t think everything can be reduced to calculation, information retrieval, and processing. I do think, further, that there is a qualitative difference between life and non-life (which would be “racist,” as the author says, though he never shows that something “racist” is therefore not true; I suppose we are all simply supposed to know this, since “racist” things are, by definition, “bad,” and “bad” things cannot be “true”).
Nonsense.
A machine that fools you into thinking it’s not a machine is still a machine. You’ve just been fooled. The Turing test, for all of Turing’s obvious genius and accomplishments, is silly; more importantly, it’s epistemic, not physical or ontological (it is a statement about the conclusions we as humans have come to about the identity of a thing, about our knowledge, or seeming knowledge, of that identity, not about the identity itself).
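ELIZA made this point back in the 1960s: a page of string-rewriting rules was enough to convince some users they were conversing with someone who understood them. Here is a throwaway sketch in that spirit, in Python (the rules are mine, not Weizenbaum’s):

```python
# An ELIZA-style toy: a few pattern/response rules and nothing more.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i think (.*)", "What makes you think {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def respond(text: str) -> str:
    """Match the input against the rules; echo fragments back as questions."""
    text = text.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I feel that this machine understands me."))
# -> Why do you feel that this machine understands me?
```

Whatever conviction such an exchange produces is a fact about the user, not about the program; that is the whole objection to treating the test as ontology.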
You can say whatever you’d like, but thinking something is alive or human or conscious does not make it so. I suppose it is fairer to say we simply cannot know, ultimately, whether something is alive in the same way that we are ourselves alive (conscious, sentient, however you’d like to describe it); but saying that, because we cannot really know, and the thing seems to be alive, it therefore is alive, or may as well be, strikes me as wrong.
(Similarly, you may not know whether the cat in the box is dead or alive, but it is either dead or alive, not both, and not neither; you simply don’t know. That is to say, it has properties in itself, independent of your understanding or observation. A person is alive, sentient, intelligent, conscious, however you want to describe it, in himself, not because you think he is.)
I simply do not accept the reductionist idea that life is just non-life that can compute and act with apparent volition, and that the only difference between a person and a software program is computing power and clever enough coding. (I also think it a monstrous idea; but that’s a moral/aesthetic judgment, not an argument against the validity of the concept, so I’ll leave it be).
What is it that drives your computations? (What makes a person want to go left rather than right, decide to write an essay rather than go skiing, etc.?) Only living things have desire (as one of their characteristics), volition, drive; these cannot be reduced to the product of “computation,” however complex; they are qualitatively different from that.
Life is not just problem solving. (Something makes the living creature [not “entity”; a cat or a person is not a rock, a corporation, or an iPad] decide to solve or attempt to solve one problem and not some other, experience something, abandon something else, etc.)

Written by ulrichthered

February 21, 2013 at 3:08 pm