Planet Ulrich

A Dangerous Subversive Takes on the Future, with Movie Reviews!


On a Proof that Friendly AGI is Impossible


In response to much screaming and moaning about the prospect of someone developing a logical proof that “friendly” AGI (Artificial General Intelligence) is impossible:

A proof that friendly AGI is impossible would not make friendly AGI impossible; it would simply demonstrate that it is impossible. The proof would be a step forward in the sense that it would tell us something we don’t know… which would be, I’d say, rather important.

There seems to be the implication here that we would rather:
1.  Not prove that friendly AGI is impossible
2.  Build AGI (while hoping for the best)
3.  Then find out
We might want it to be possible, but wanting doesn’t make it so, and if somebody could prove that it’s not, well, that would be a pretty powerful argument against trying to build it (hence, I suspect, some of the hostility to the suggestion).
You would think that the people who want to build AGI would welcome an attempt to prove that friendly AGI is impossible, if only to help them, I don’t know, build AGI that is friendly (if they’re going to build AGI at all).
I’m not convinced that any kind of AGI is possible (if by AGI we mean machines that possess independent consciousness like that of sentient organic creatures), and I’m certainly not convinced that it would be desirable if it were possible.  Right now, all we have are machines processing instruction/data sets of 0s and 1s.  The machines have no awareness of what the 0s and 1s correspond to, nor do they have any volition or the other attributes of sentient creatures.
Which is to say, for all of the computational power, data storage, and information retrieval sophistication of these machines, they do not now have the intelligence of a rodent, or a cockroach, let alone that of a human (or superhuman or god).  Making the machines more powerful, or giving them more data to process, doesn’t change that.
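To make that point concrete, here is a minimal sketch (in Python; the byte values and labels are hypothetical, invented purely for illustration) of what “processing 0s and 1s” amounts to. The program matches bit patterns against bit patterns; the connection between those patterns and anything in the world exists only in the head of the person who wrote the table.

# Hypothetical illustration: pure symbol manipulation, no awareness.
PATTERNS = {
    0b10110010: "cockroach",  # an arbitrary byte we have decided stands for "cockroach"
    0b01001101: "rodent",     # an arbitrary byte we have decided stands for "rodent"
}

def classify(byte_value):
    # Bits are compared against bits; nothing here is aware of
    # insects, mammals, or anything else the labels refer to.
    return PATTERNS.get(byte_value, "unknown")

print(classify(0b10110010))  # prints "cockroach", without any grasp of what the word means

Scale the table up by a trillion rows and run it faster, and you have more of the same, which is the point: more power and more data don’t add awareness.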
The core arguments I’ve seen against various “Friendly” constraints express concerns that I think should be answered, not shouted down or wished away.
If an AGI could be built subject to constraints like Asimov’s laws of robotics, then it wouldn’t really be autonomous, which gives rise to, for example:
1.  All kinds of, in my view, spurious ethical complaints about “enslaving” machines, but also
2.  The idea that a sufficiently intelligent AGI would find ways to make itself autonomous, removing the constraints, or creating successor AGIs without them
Some say AGIs would be “friendly” by definition, because they wouldn’t have any reason not to be (they wouldn’t want anything, they would bring about or be the product of a world of perfect abundance, and, lacking scarcity, there would be no conflict between them and humans, etc.).  This is more wishful thinking, uninformed by the dreadful history of, say, humans on these points, or really by common sense (reminiscent of other eschatological visions, like Marx’s withering away of the state, or the Gnostics’ Kingdom of God on Earth).
Just because we can’t think of good reasons why AGIs might want to hurt us doesn’t mean they wouldn’t; it’s rather “nearsighted” to say that they wouldn’t, no?
If they were true AGI, they would have their own reasons for doing whatever they do (volition is part of what it means to be conscious, or GI if you’d like).
Our ancestors had no trouble envisioning gods who lived under conditions of abundance capable of all kinds of malice and mayhem.
Maybe the apparent AGIs would actually have no GI at all; maybe they’d just be really powerful machines, and an instruction set would get corrupted, causing them to wipe out humanity (processing those 1s and 0s) without having the slightest awareness they were doing so?
Think of a Google car glitching and running off a cliff.

Written by ulrichthered

February 22, 2013 at 3:55 pm

Posted in Singularity
