Planet Ulrich

A Dangerous Subversive Takes on the Future, with Movie Reviews!

Posts Tagged ‘singularity’

What the general public (or nonscientists, at any rate) wishes scientists understood.


In response to the article, “What Scientists Wished the General Public Understood,”
http://www.sciencemag.org/content/338/6103/40.full
which, of course, condescended and sneered in all the ways you’d expect media “scientists” to do.
Now, I think scientists do understand the ideas I’ve offered below, at least some of the time, in the abstract, but too many of them seem to fall into the usual bias/belief traps in the particular, those who call themselves “social scientists” most especially.
(as an aside, I don’t know that I believe in “social scientists” at all, not, at any rate, as “scientists,” rather than people aping the language of science to make themselves sound more convincing, while misapplying bits of scientific method, generally more to dupe the public into thinking they are thereby objectively pursuing truth, rather than some other, usually better funded, agenda).
1.  Just because something can’t be measured, or can’t be measured precisely, does not mean it does not exist.
Common examples:
Consciousness
Beauty

The passions

General intelligence (though this is more historic than current; we have gotten sufficiently adept at approximating it [Spearman’s g], or at devising tests that quantify problem-solving capacities correlating to a high degree with other observed characteristics/consequences of intelligence, for such tests to serve as effective and useful proxies [a small sketch of the proxy idea follows this list]. The only reason I even included it on this list is the Flynn effect, which seems to me to have to be some kind of data artifact involving testing [increasing test scores over time in no way seem to correlate with actual increases in intelligence, which by every other metric seems to be declining in advanced societies, as we should expect it to; once all of the gains from normalizing nutrition and basic environment have accrued, differential fertility favoring the left half of the bell curve would strongly favor decline])
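A minimal sketch of the “proxy” idea mentioned above, assuming Python 3.10+ for statistics.correlation; all of the numbers are invented purely for illustration and come from no actual study:

```python
# Sketch only: if scores on a problem-solving test correlate strongly with an
# independently observed outcome, the test can serve as a useful proxy for the
# underlying trait even though the trait itself is never measured directly.
# All values below are hypothetical; nothing here comes from real test data.
from statistics import correlation  # Pearson correlation, Python 3.10+

test_scores      = [95, 110, 102, 128, 87, 121, 99, 133]  # hypothetical test results
observed_outcome = [41, 52, 47, 66, 35, 60, 45, 70]       # hypothetical real-world measure

r = correlation(test_scores, observed_outcome)
print(f"Pearson r = {r:.2f}")  # a high r is what justifies treating the test as a proxy
```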
2.  Just because something can be measured does not mean that it matters
It is depressingly easy to manipulate people with spurious numbers having no demonstrable connection to anything, or numbers that are insufficient in themselves to have meaning, but which are thrust on the public with the implication that they mean certain things that in no way follow.
An example would be the argument that we share 98 or 98.5 or whatever percentage of our DNA with chimpanzees, and that we are therefore largely indistinguishable, “scientifically,” from these creatures; this of course is more vulgar scientism than science, since we certainly cannot claim sufficient understanding of the vast complexities of the genome (or rather of the consequences of genotypic variance on phenotypic life in the world) to make any meaningful statements about the importance, or lack thereof, of even the smallest genotypic variance, and we are obviously very different from chimpanzees.
Some more than others.
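As an aside on where such a figure even comes from, here is a minimal sketch (mine, not from the article) of the arithmetic behind a “percent identity” claim; the sequences are invented and trivially short, which is rather the point: the number is a bare per-position match count and says nothing by itself about what any difference means.

```python
# Sketch only: "percent identity" over two already-aligned sequences is just a
# per-position match count. The fragments below are made up for illustration;
# real genome comparisons also have to deal with alignment, insertions and
# deletions, and regulatory/non-coding context, none of which this captures.
human_fragment = "ATGGCCCTGTGGATGCGCCTC"  # hypothetical fragment
chimp_fragment = "ATGGCCCTGTGGATGCGCGTC"  # hypothetical fragment, one substitution

matches = sum(a == b for a, b in zip(human_fragment, chimp_fragment))
identity = 100 * matches / len(human_fragment)
print(f"{identity:.1f}% identical")  # one number, stripped of all biological context
```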
3.  If a theory conflicts with observed reality, or seems to make no sense, the problem lies with the theory, not reality, or our observations of it
It is easy to say this, and everyone would agree, until they don’t like the results; then they start making excuses and qualifying, turning questions of fact into questions of motive, etc.
The current equality obsession is the best available example.
4.  It is more likely that men will lie than that miracles occur
Strictly speaking, again, this idea of Hume’s is cardinal to scientific thought, but it’s good to keep it in mind as a skeptical principle, because people are always presenting things that seem absurd or totally contrary to experience and reason as having been established by some abstruse process intelligible only to experts (such things may very well be in fact true, or as true as we can establish for now, but we should be on guard).
5.  Just because we do not understand something does not mean it is not real
People seem to always be reducing their view of the world to whatever fits into the latest set of theories, or what they think they understand about those theories; such people also tend to mock and sneer at the folly and ignorance of all who preceded them.
It may not be expected of a scientist to say, “I have no idea why that happened or how that works,” but it should be, as opposed to, “We cannot explain that, therefore it is impossible.”  This may seem inconsistent with the statement about miracles, but only on the surface (that is probabilistic, and merely places a higher burden on claims that depart radically from our understanding or experience; it does not say things should simply be dismissed because we can’t explain them).
6.  A simulation is not the thing simulated
Just because you can build something having certain apparent attributes of something else does not mean you have built the thing itself.
We can make a smart enough chatbot now to fool most people into thinking it is human (or at least something acting as human as the people who chat into little windows for a living), but it’s a chatbot, not a person.  Making the computer more powerful or giving it more memory/data won’t change that.
7.  It is wrong to reduce what we do not understand to metaphors about what we do
We do not understand life, and it is a mistake to reduce it to a set of metaphors about other, simpler things that we do understand because we know how to build them.
A person is not a thinking machine any more than he is an engine.  It is a mistake to reduce thought to “computation” and “data retrieval/processing”.  We know very little about how thought really works, or what consciousness is, but we seem to be rushing headlong from
a person can be like a machine
to a machine can be like a person
to a machine can be no different from a person
to a machine can be better than a person
Machines are objects.  People are alive.  There is a qualitative difference between life and nonlife that cannot be dismissed because we don’t know what it is, how to describe it, or worse, how it can be overcome.
8.  Thinking does not make it so
(what you think you know about something is not the thing itself; you may believe an object has come alive for whatever reasons, but it is or is not alive, which is to say, it has properties that are independent of your observation or whatever conclusions you have come to).

Written by ulrichthered

February 21, 2013 at 5:43 pm

Posted in Singularity


From correspondence between a friend and the Singularitarians, on the question of life, non-life, and Deutsch’s computational reductionism


“If we cannot really know [if machines are alive], then we can’t assume that they aren’t alive any more easily than we can assume they are sentient. But how can we act without assumption of one or the other and still proceed?”
We don’t really know, absolutely, that they’re not alive now; we just have no reason to think that they are (and can thus proceed on the assumption that they’re not, as I proceed on the assumption that my car is not alive), and I don’t think some set of programmers writing code designed to emulate human responses and fool us into thinking a machine is alive could ever rightly reverse this judgment.
The burden of proof here is on the machines, you could say, and it should be a high burden.  I’ve seen no evidence of humans having created any kind of mechanical life, let alone mechanical life having the potential to become superhuman in intelligence.  It could be that we can’t do that, that all we can ever do is create really fast machines with a lot of memory, and maybe that’s fine, maybe that’s better.
I think we should be asking ourselves, how can we create machines that improve the quality of human lives (and by extension, the lives of other complex, sentient creatures, such as mammals)?
not,
How can we build machines that are alive?
Or worse,
How can we build machines that are alive, to replace us?
But I’m not saying you should build a really complex machine and then try to torture it or anything like that.

“If you don’t know if [Schrödinger’s] cat is dead or alive, it seems to me you still have limited options as to how you can proceed. What is better – to assume that it’s more likely the cat is dead and light the box on fire, or to assume it’s more likely alive and open the box so you can at least check before doing something destructive?”

Well, of course, if that were the question you would err on the side of assuming the cat were alive (in the formal question it is impossible to open the box).  But Schrödinger’s cat is always offered as some kind of weird wave/particle duality analogy (with people talking about the cat being in a superposition state between being alive and dead, something absurd on its face); I only brought it up as an example of an epistemological question being presented as a physical one.
“The question comes down to one of greater harm – which has more dire consequences not just for humans, but for all life? That we assume machines are incapable of reaching a sentient status and continuing to treat them like machines (in other words, assume they can’t possibly be slaves, potentially enslaving something sentient) or to assume that they could be sentient and do everything in our power to figure it out before we get to the point of doing something destructive?”
Again, you are reframing the question a bit.  I’m not assuming it is absolutely impossible for a machine to become sentient; I’m expressing skepticism about our ability to make (directly or indirectly) a machine that is sentient, and also asserting that there is a qualitative difference between life and non-life that has nothing to do with computational power (I’m saying I do not believe life is reducible to computation, thus a really powerful computation device would not be alive merely because it could compute powerfully, though it might be able to run a sufficiently complex set of algorithms to create the appearance of being alive.  It could, I suppose, become alive for some other reason, but not just because it had a fast processor, a lot of memory, access to vast databases, and clever software.  I think there’s more to life than all of that; we can hardly claim to understand these questions in their entirety about organic creatures, let alone these potential synthetic devices).

For all of that, of course I think we should monitor it closely and not do anything avoidable that is destructive.  But why would we want to create machines that are alive, let alone a synthetic superintelligence?

Written by ulrichthered

February 21, 2013 at 3:48 pm

Posted in Singularity


Deutsch and “Artificial Intelligence”


http://www.aeonmagazine.com/being-human/david-deutsch-artificial-intelligence/

I find myself agreeing with John Searle (I came to the same conclusion as Searle independently), namely, that the human-as-computer assertion, or the computer [if sufficiently powerful] as human [or effectively human], is a metaphor (as the human-as-steam-engine or universe-as-clock ideas were metaphors), and probably an inevitable one (this is simply how humans seem to think; given the ubiquity of this particular type of machine [the computer] now, and its ability to act on highly complex sets of instructions to accomplish things in the world, a set of humans was bound to compare it to humans first, then compare humans to it, then talk about humans as though they were only more powerful versions of the particular machine, etc.).
I also don’t accept the “Universality of Computation.”
Sorry. I don’t think everything can be reduced to calculation, information retrieval, and processing. I do think there is a qualitative difference, further, between life and non-life (which would be “racist,” as the author says, though he never says that something “racist” is therefore “not true”; I suppose we are all simply supposed to know this, since “racist” things are, by definition, “bad,” and “bad” things cannot be “true.”)
Nonsense.
A machine that fools you into thinking it’s not a machine is still a machine. You’ve just been fooled. The Turing test, for all of Turing’s obvious genius and accomplishments, is silly; more importantly, it’s epistemic, not physical/ontological (it is a statement about conclusions we as humans have come to about the identity of a thing, our knowledge, or seeming knowledge, of that identity, not the identity itself).
You can say whatever you’d like, but thinking something is alive or human or conscious or whatever does not make it so. I suppose it is fairer to say we simply cannot know, ultimately, whether something is alive in the same way that we are ourselves alive (conscious, sentient, however you’d like to describe it); saying that, because we cannot really know, and the thing seems to be alive (sentient, conscious, whatever), it therefore is, or may as well be, strikes me as wrong.
(Similarly, you may not know whether or not the cat in the box is dead or alive, but it is either dead or alive, not both, or neither; you simply don’t know. That is to say, it has properties in itself independent of your understanding or observation. A person is alive, sentient, intelligent, conscious, however you want to describe it in himself, not because you think he is.)
I simply do not accept the reductionist idea that life is just non-life that can compute and act with apparent volition, and that the only difference between a person and a software program is computing power and clever enough coding. (I also think it a monstrous idea; but that’s a moral/aesthetic judgment, not an argument against the validity of the concept, so I’ll leave it be).
What is it that drives your computations? (What makes a person want to go left rather than right, decide to write an essay rather than go skiing, etc.?) Only living things have desire (as one of their characteristics), volition, drive; these things cannot be reduced to the product of “computation,” however complex; they are qualitatively different from that.
Life is not just problem-solving (something makes the living creature [not “entity”; a cat or a person is not a rock, a corporation, or an iPad] decide to solve or attempt to solve one problem and not some other problem, experience something, abandon something else, etc.).

Written by ulrichthered

February 21, 2013 at 3:08 pm

A Letter from a friend to the Singularitarians


Good day.

I would describe myself as a qualified technology enthusiast (which is to say, someone excited about certain intermediate-term technological possibilities [regenerative medicine, genomics, robotics, etc.], while skeptical about others [AI, the singularity, the Internet as it has evolved {post Web 2.0}, the surveillance state, etc.]).

So I’m neither a Luddite nor an extropian.  I have all of the rectangles (Plasma, LCD, Mac Pro [with dual widescreens!], MacBook Pro, iPad, iPhone, iPod, etc.), and I’ve been online one way or another since the BBS days, but I don’t watch commercial television; I don’t post on Facebook, Tweet, pass around YouTube clips, or otherwise spend my days playing with my telephone.

I also don’t believe that the Singularity will bring about abundance, the withering away of the state, or the peaceful ascension of humans into physical immortality (as indefinitely young post-humans) (most of that would be just fine; I just don’t think any of it will happen, not all at once, and if at all, not for a long time).

Technology is used by people (who seem intent on reducing it to its most moronic or destructive possible applications, like the blubbery infantile post-humans chattering away at each other from their motorized chairs in Wall-E); I don’t believe people become magically transformed from the vapid, annoying, myopic, grasping clods they tend to be as individuals into some kind of wise superhuman force by stacking them on top of each other either (whether you call that stacking the market or democracy or state or metastate) (so I’m not receptive to arguments about market process [unhindered, of course, by the State] just solving all of our problems for us).  I’m not anti-market (I was an Austrian!); I just don’t sacrifice to the Market God, just as I’m not anti-Machine, but believe:

The machines were created to serve us, not replace us.

As a lifelong anti-Marxist, I never believed in the labor theory of value (and think the closest thing we can get to a social ideal would be having the machines do all of the [non-creative] work, freeing humans to do more interesting things with their time; rather like Athens or the Roman Republic, with machines standing in for the slaves).

Until recently, nobody really cared about “jobs” (work was for slaves); what they cared about was wealth.

Written by ulrichthered

February 21, 2013 at 2:56 pm