Planet Ulrich

A Dangerous Subversive Takes on the Future, with Movie Reviews!

Archive for the ‘Singularity’ Category

The purpose of technology

leave a comment »

Well, these are fundamental questions.  While I think technology can be used in some ways to transcend innate human limitations (generally genetically defined), I don’t think humans (as individuals or a species) can be made perfect, or should be.

You could say, “I don’t hold to” the view that “people can be made better” (not ultimately); just as I think “showing people how to think” is usually a kind of cover for “telling people what to think.”
The attempt to perfect man or create a perfect society has, of course, led to countless horrors.

More than a hundred million casualties in the 20th century alone, depending on how you do the math (strange to reconcile figures like that with the argument that “human violence has been decreasing for millennia,” but that’s a different argument; I suppose it depends on what you mean by “human violence;” I consider war, prison camps and state-sponsored famine violent enough, but that’s just me…  “human violence” certainly increased in the United States between 1960 and 1990, in pretty radical ways…  it started dropping again once we began putting the particularly violent people in prison, where they couldn’t hurt anyone but each other).

I digress.
There are limits to what humans can do.  (These include, in my view, limits on what machines made by humans can do; I don’t share the Singularitarian belief that humans can make, or give rise to, machines that can do anything, that have Godlike power.)
Further, as referenced by this discussion already, there are other questions, such as
1.  What do we mean by better, and at what cost?  (Does making someone, such as an autistic child, better, come at the cost of losing some aspect [perhaps not understood] of that person’s identity?)
2.  Who decides (what is better or worse)?
3.  Who enforces such a decision?
We seem to be moving toward some kind of nightmare world where technology will be used, not to make people “better” in any sense of the word I would use, but to make them more compliant, to harmonize them with the views of whoever is dictating the latest orthodoxy (first identify, then condition, discipline, and ultimately preemptively restrain, deviation).
Oh, everybody is for “freedom of thought,” these days, just don’t think something they don’t like.  Then you’ll find out how much they “celebrate” that kind of deviation, or how “patriotic” “dissent” is.
I am always very skeptical whenever I come across an argument that One Factor can explain everything.  This article is more of that, and, while I don’t doubt that

Lead is Bad
and that we shouldn’t be adding lead to paint, or gasoline, and should do whatever we can to remove lead in general, I am not convinced that Lead is the sole answer to the question, why did criminal violence explode between 1960 and 1990, then decline?
For one thing, despite the article’s contention, criminal violence did not explode worldwide, everywhere, during that time period; not absolutely, and not as a function of lead use/exposure.
I’d very much like to see the data underlying their assertion that every country on Earth demonstrated the same relationship between lead exposure and (appropriately time lagged) increases in criminality.
I know this did not happen in Japan (though Japan does/did have extremely high population densities and rates of urbanization, and they didn’t phase out leaded gasoline there until 1986).
I suspect this is another one of those magical catch all theories offered to give a simple explanation to everything, which provides its explanation by excluding contradictory evidence and ignoring alternate answers.
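If the thesis held everywhere, the time-lagged relationship should show up in any country’s numbers. A minimal sketch of the kind of check that would mean (every number below is invented for illustration; real analyses use lagged regressions with controls, not toy series):

```python
# Hypothetical test of the lead-crime hypothesis: does lead exposure,
# shifted forward by some lag, correlate with later crime rates?
# All data below is made up to illustrate the method.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def lagged_correlation(lead, crime, lag):
    """Correlate lead exposure in year t with crime in year t + lag."""
    pairs = [(lead[t], crime[t + lag]) for t in range(len(lead) - lag)]
    xs, ys = zip(*pairs)
    return pearson(xs, ys)

# Invented series: exposure rises then falls; crime follows 4 steps later.
lead  = [10, 20, 30, 40, 30, 20, 10, 5, 2, 1, 1, 1]
crime = [ 5,  5,  5,  5, 10, 20, 30, 40, 30, 20, 10, 5]

best = max(range(8), key=lambda k: lagged_correlation(lead, crime, k))
print(best)  # → 4, the lag built into this toy data
```

The claim in the article amounts to saying this curve peaks at roughly the same lag, with a strong correlation, in every country; that is exactly the cross-country data I would want to see.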
Now, I never believed that lead pipes caused the fall of the Roman Empire, either, though I certainly don’t think they helped.
On a more SH topically related note,
@…
“The premise is post singularity perfect engineering, ergo no design flaws (at least no obvious and unacceptable ones). “
If we are to accept this premise, we may as well just stop talking about everything and wait for the Singularity.
How can you just assume,
post-Singularity
“Perfect engineering”?
Is that a tautological statement (we are defining the Singularity in part as being the time the machines start thinking at a sufficiently high level to allow them to engineer everything perfectly)?
If you’re wondering why people like Jaron Lanier compare Singularitarianism to the Rapture Cult, well, there you have it.
The God is not in the machine.  I don’t care how much computing power you give rise to, you will never build anything capable of “perfect engineering;”
There is no such thing (all engineering involves cost/benefit tradeoffs, and can only ever be perfect in the sense of optimizing results within resource constraints, with “optimization” being a question of how costs and benefits are defined).
(As for avoiding obvious and avoidable mistakes, such things tend to be obvious only after the fact, or in ways that are dismissed for other reasons at the time of application.  Unless we are defining the Premise of the Singularity as post-Singularity machines become Omniscient, in which case, see above).
Right, and I’m saying that I reject the assumption that engineering will be perfectly optimized, because:

1.  I don’t think we can know what the optimal balance of tradeoffs is in its entirety
2.  I don’t believe machines or people could achieve the optimal balance of tradeoffs, even if it were known;
such things have a way of shifting with time and being the product of myriad complex factors whose inter-relationships can never be fully captured or understood.
Having said that, I think we can and do engineer solutions that are quite good, or as good as can be expected, and that such solutions can be improved over time, or adapted as circumstances shift or better information becomes available.
I don’t think “brutishness” is declining over time in a way that is necessarily linear and sustainable, either in response to the discovery and adoption of technology or for any other reason; I think brutishness, like everything, is cyclical, to some extent circumstantial, and to some extent the product of adaptation over time.  So it might decline for a while, then increase.
It declined in some respects within the Empire during the Roman period, then increased quite a bit, for many centuries, and among certain groups seems to be declining now, while arguably rising within others.  The Republican Romans used to talk about the enervating effects of luxury; I don’t think you can really dispute that civilization, over time, tends to have the effect (variable depending on the specific population) of decreasing predilections for violence.  (Tends to have, does not necessarily have.)  And I suppose such decreased predilections towards violence can, over time, work their way into a given group’s genetic makeup (much as dogs can be bred for aggression or for docility; foxes can be bred from wild to tame in 10 years/three generations, so it stands to reason similar changes can work their way through human populations over generations [living in peace rather than in chaos/warlike conditions, which would better reward physical aggression]).  I don’t think that is all positive…  look at the Gauls…  quite fierce, for centuries, in opposition to Rome, conquered, pacified, and, by the time the Germans were menacing, in their hordes, in the third and fourth centuries, easily overrun.  (Speaking as a descendant of those German hordes.)
So, I’m saying these things ebb and flow, with positive and negative consequences, depending on who’s interpreting and what criteria are being applied.  I certainly do not believe that the discovery and adoption of progressively more sophisticated forms of technology in itself necessarily leads to abundance, prosperity, peace, the Big Rock Candy Mountain, or any of it.
Look, everybody believed that, in earnest, for good apparent reasons, at the end of the 19th Century, a century that had seen more concentrated scientific/technological advances than any in the known history of mankind, and what did they get…  the Great Catastrophe 1914-1945, etc. etc. etc. tens of millions slaughtered, devastation.
I guess that was the product of suboptimal engineering; in the future, engineering will be optimal, so we won’t have anything to worry about.

Written by ulrichthered

March 29, 2013 at 11:55 am

Skepticism vs Techno-Triumphalism

Isn’t this really more of a description of “cybernetic” enthusiasts themselves:
“like a man who would chop off his limbs in order to have artificial ones which will give him no pain or trouble.”
 
Anyway, why do you care if some people “hold out” against cybernetics, however you define that?  
 
Do you consider the future fear of a “robot threat” you describe to be irrational, or do you think there will actually be some kind of “robot threat” (a strange position coming from someone who seems to be a kind of enthusiast for building robots, but a fair question)?
I don’t know, all I ever hear is techno-triumphalism.  Where are the “influential” skeptics who are using their supposed power to block technology adoption?

It seems to me, to the extent the modern West has a religion, or the secular bastardization of one, it is technophilia: the idea that technology will continue to advance, knowledge will accumulate over time, and conditions will improve, leading ultimately to abundance (the Big Rock Candy Mountain).  (Witness the variations: Marx, Bellamy, Kurzweil, etc.)
It is fashionable to sneer at fundamentalists and other benighted/inferior people while handwringing about all of the wonderful things that could be accomplished if only they weren’t in the way, but I haven’t seen any convincing evidence that such people are in the way (that they have any effect of consequence upon either the development or the adoption of useful technology).
I suppose someone could reference the ban on embryonic stem cell research as an example of such obstruction, but ESC, at least in my understanding, has been eclipsed anyway by induced pluripotent stem cells; further, I think there were legitimate ethical concerns with ESC that did need to be addressed.
Anyway, I really don’t think snake handlers in trailer parks are your problem, or, for that matter, desert nomads wandering around blowing up religious icons and stoning people.
The FDA certainly impedes development/adoption of all kinds of potentially useful therapies, among other government organizations, though I believe it does so more for institutional/bureaucratic reasons (and protection of the interests of big Pharma) than out of some malice towards development in itself.
I stand by what I said before.  I don’t see any reason why everyone on Earth should be expected to adopt the same vision that some people around here seem to have formed about what we should do with technology, and I’m not convinced that hostility to those skeptical about such views is derived from the influence of such people (the extent to which they threaten development, etc.).  I think rather it’s a way people have of feeling better about themselves (as in, we’re so much better than those rubes/morons who are afraid of the Future); I hear variations on this a lot from megalopolitans when they’re talking about flyover America; by the way, the Future is actually quite popular with the rubes/morons (go out into cow country, everybody has an Xbox, 25 gigs of porn, WiFi, streaming Netflix and a 42-inch TV; they might think they believe in Jesus, or whatever, but in practice, they buy all of the same junk as your average metrosexual in northern Virginia [you know, the one with the Apple sticker on the back window of his Prius]).
I don’t really think it’s my place to ram any views down anyone’s throats, particularly down the throats of anyone living far away, according to their own traditions (such as they are), who presumably have their own reasons for doing things that are none of my business (however repellant/barbaric/savage I might consider such practices/traditions). (Such things would become my business if those people, say, showed up here; that’s a different topic).
I suppose if I believed in the singularity I would be marginally more sympathetic to the idea that we need to go out into the world, spread the good news to the unenlightened, and save them from their apparent ignorance (consequences of their inherently sinful existences, whatever), but only marginally.  Messianic/millenarian/missionary talk, all of that, always seems to spring from this feeling that the one true way has been revealed/discovered, and we have to Make it Known; I don’t believe there is one true way.  Even if I thought the Singularity were coming, I would still see it as one possibility among many (having different degrees of desirability).
Wouldn’t it be better for us to proceed, making what discoveries we think we can, while conscious that not all change is improvement, things do not necessarily get better with time, knowledge builds, and declines (gains are not all cumulative, and some knowledge actually leads to consequences that are more destructive than helpful), that people have different capacities and values, and that it is neither possible, nor desirable for everyone to converge into some single, universalized way of being in the world?

Written by ulrichthered

March 29, 2013 at 11:53 am

The Absence of Intelligence in Machines

I don’t think machines have any intelligence at all, they just use complex algorithms and increasingly high levels of computation/data processing power to mimic intelligence; aeroplanes fly, and are not birds, submarines swim and are not fish, computers “speak” interactively, drive cars, whatever, and are not people.  They have no consciousness; they lack the volitional intelligence of a cockroach.

I think those are good things.  If you want to create an intelligent, conscious, sentient creature, as Vonnegut said in Player Piano, sleep with a smart woman.
As for what effect the machines will have, I suspect it will be the same, only more so; I do not believe that they (or some “singularity”) will bring about abundance (just as I don’t believe abundance will come after the revolution, or that Jesus will return, judge the saved and the damned, and bring abundance).  To me, the idea of the economy of abundance being brought about by technology is a restatement of the eschatological gnostic/Jacobin/Marxist/neo-liberal/post-Christian idea of the Kingdom of God on Earth,
or as I like to say,
The Big Rock Candy Mountain.
I don’t think any of this technology will bring about the Kingdom of God on Earth (abundance), or that it will perfect/redeem/save mankind (I also don’t consider either of those things possible).
I think we’ll see improvements in some areas (like immersive gaming and targeted marketing/compliance management, if you consider that even desirable), and hopefully real gains in life extension/technologically engineered youth restoration/prolongation, within a context of sustained economic polarization (between a tiny monied elite, a smallish class of professionals/managers, and vast throngs of proles), and global overpopulation (which this technology magnifies), prolonging/intensifying scarcity and resource conflict.
Kind of like Blade Runner without the replicants.
It is interesting to me that some claim the “false premise” leading to the mistaken belief that there is a difference between humans and robots springs from the idea of mind/body dualism, which they claim to reject, then talk about humans being

embodied consciousness in
a biological substrate
which is an example of mind/body dualism.
Further, they define robots as
embodied consciousnesses in an
inorganic substrate
and say, since only the “substrate” differs, they are the same.  (They assume that Consciousness is Consciousness, whatever the form; i.e., that all consciousness is the same as all other consciousness.)
Of course, nobody has ever created an “embodied consciousness” of any kind whatsoever in an “inorganic substrate,” so it’s kind of amazing that they just declare that this is what robots will be.
I don’t think we have any idea what consciousness really is, what brings it about, or whether we can create something having it artificially.  (I suspect we cannot, only the mimicry of it).
Further, I certainly do not believe that, whatever consciousness is, any given consciousness is the same as any other given consciousness (I don’t know whether the differences are qualitative or just quantitative, but it seems to me the consciousness of a lizard is different from that of a horse or a human).
Since they are asserting a kind of mind/body duality (humans have minds, robots have minds, only their bodies differ), it would be nice if they could shed light on what it is that gives rise to the mind (human, robot, or whatever), especially before they just define away the problem and claim that a mind is a mind whatever body it may “happen” to be in.
I don’t find Searle’s dressed up materialist rejection of mind/body duality compelling at all (brains have consciousness [mind] as a property, like water has wetness, therefore to create a mind we need only create a brain capable of having the property of consciousness), but
at least it attempts to address the problem.  (In his view, the “substrate” gives rise to the mind/consciousness; I don’t know that that is true, but I don’t know that it’s not, either…  I like to think of the “substrate” as housing (embodying) the consciousness (which is something thereby separate from it), but I can’t pretend to know, absolutely, that that is what it does, that the two are not the same, or even whether they are different, but the consciousness/mind can only be housed in its current “organic substrate.”)
 …
Of course animals have consciousness (arguably different from that of humans, though whether different in kind or by degree is unknown and arguably unknowable).  To recap:

1.  Consciousness is not fully understood (what it is, what is required for it to manifest, how it comes about)
2.  We have no idea, from everything I’ve seen, how to create it (other than by having children) (though there is much talk about creating machines that will mimic various aspects of conscious beings, and, to my way of thinking, confusion of successful mimicry with giving being to the thing itself
[a machine might fool you into thinking it is human or conscious, but that does not make it so])
I didn’t say I thought it was completely impossible for nonbiological consciousness to come into being (it may or may not be), I said I suspected we would not be able to create it, which is hardly an expression of biological chauvinism, more a statement about human limitations.
I’m a skeptic and try to be a realist (see things as they are, not as I might prefer them to be).  I do concede I dislike materialism (which taken seriously rejects the qualitative difference between life and non-life [viewing life as merely nonlife that has successfully organized itself somehow to be capable of volitional intelligence/consciousness/action in the world]).  I think a cat is something fundamentally different from a rock (or an iPad).  I suppose that makes me a biological chauvinist after all (a biologist?  No, that’s taken…  an organicist?  There has to be a suitable smear word out there for it…)  I could, of course, be wrong, but if there isn’t really any difference between a rock and a cat or a person, if it’s all just a question of chance and time and random chemical combinations, it’s hard for me to imagine what the point of anything is.
I find the suggestion that people (or animals) are just biochemical machines strange.  Machines for what purpose?  Does calling them machines imply that some consciousness designed them, or at least put in motion the process by which they would evolve from simple to complex creatures?  I doubt that was intended; it is probably more meant as reductionism, reducing living things to the status of nonliving things [presenting them as simply an organic variation of something we ourselves know how to create, machines].  I reject such reduction, which seems baseless, and certainly has not been demonstrated to be true.  I don’t state that life was necessarily created/shaped/defined/put in motion by some other form of preexisting consciousness (though I do state I don’t know that it wasn’t, and neither does anybody else).  I find the Darwinian theory compelling as a partial answer to the question of how living creatures come to take the various forms they have, but suspect there is more to the fundamental question than the theory has demonstrably answered.
By the way, nothing in Darwinism requires Progress (the theory is neutral; it merely makes statements, like, what survives survives, or rather, what can successfully reproduce has descendants; it doesn’t say or require that what successfully reproduces be Better than what does not, or rather that there will be continuous improvement in the creatures that continue to be, that they will achieve ever higher levels of complexity/intelligence/etc.; it merely says they will be better at continuing to exist/reproduce than whatever does not).  People seem to have this idea that there is some force giving direction to development (from the lower to the more complex/more powerful/more intelligent); it’s unclear to me what that force is or that it is there at all; it is certainly not inherent to the idea of Evolution.  Perhaps God or the Gods or something else sufficiently godlike is behind that; I don’t know that it/they are or that it/they are not, or, as said, that it is true at all.
As to whether there can be nonorganic life, all I can say is that there is none now that we know of and nobody has convinced me they can create it.   They also have not convinced me that they should create it (that they would be able to control it, that it would not, having acquired independence and thereby its own reasons for doing things, become dangerous to organic life [animals as well as humans], and thus [as an Organicist], unwelcome).

Written by ulrichthered

March 28, 2013 at 4:57 pm

Technology and atrophy

For machines to mimic life effectively in the future, they’ll have to have some sort of cretinization built into their speech patterns; not just profanity, but grammatical mistakes, moronic acronyms, spelling errors… the children of the future (hell, the children of right now!) will simply not understand correct English (having grown up immersed in technology that so simplifies the cognitive demands of daily life as to reduce everything to a level accessible to any mouth breather who can tap tap tap on a rectangle).

Any machine that doesn’t talk like them will be marked instantly.
Now, I don’t think the technology has to have that effect, I think it tends to, much as physical labor saving devices tend to make people more likely to be weak or obese, though we now know more about nutrition and physical training than at any time in history, and it’s arguably easier to achieve higher levels of physical fitness now than it’s ever been.

So, most people born after, say, 1986, seem to talk like barely functional morons, even those with high levels of natural intelligence.
All of this enables physical and intellectual laziness, often to the point of dependency.  Minds and bodies atrophy when neglected, and change with use; social media in particular seems to be the most damaging technology since the introduction of commercial television.
It’s not enough to have a high innate level of intelligence to be meaningfully intelligent, just as it isn’t enough to be naturally fast or strong to run or lift competitively; you have to work to develop whatever capacities you have.  If people with naturally high levels of intelligence spend all of their lives, from earliest youth, immersed in a manipulative noise stream, stimulating various impulses and having them gratified immediately, staring at video feeds (with whatever text they do read confined to a 3rd grade or lower level of vocabulary and complexity), they just won’t be able to do things like read The Decline and Fall of the Roman Empire, or write The Magic Mountain. Most of their time will be spent exchanging LOLz and tweeting 2 minute clips of people hurting themselves, or celebrity gossip.
Just like the post-humans in Wall-E (except such people would never choose, en masse, to leave their states of coddled, fully interactive, hyper-stimulated, imbecile sloth).

Written by ulrichthered

March 28, 2013 at 4:42 pm

Aliens, or, Is there anybody out there

with 2 comments

I always find these kinds of statements strange:

“I tend to think that life in the universe is more the rule than the exception, “

Why?  We have found no life at all, anywhere, but Earth, nor do we have the slightest evidence of life existing anywhere else.

As [Mr. A] said, (though I would go further) we simply don’t have the data to meaningfully speak about life elsewhere
(we have no data at all, or rather the only data we have is data about the absence of life).
Such talk is usually predicated on such assumptions as:
1.  The universe is vast (700 billion galaxies);
2.  Life arose on earth (through the evolutionary process, or whatever scientistic explanation is being offered at the moment);
3.  Earth is mediocre (nothing special about Earth would act to allow life to “arise” there and not elsewhere, given conditions elsewhere otherwise conducive to it arising)
4.  Therefore life has to have arisen many times elsewhere
And as they say, Where is everybody?
(The Fermi-Hart Paradox)
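Those assumptions are essentially the Drake equation, and the trouble with them is visible in a toy calculation: every parameter value below is invented, and the “therefore” swings by fifteen orders of magnitude depending on inputs nobody actually knows.

```python
# Drake-style estimate of communicating civilizations:
#   N = R* · fp · ne · fl · fi · fc · L
# Every parameter value here is an assumption chosen for illustration;
# the point is the spread of outcomes, not any particular answer.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    """Multiply out the chain of guessed factors."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

# Generous guesses: life is easy, civilizations last a million years.
optimistic = drake(10, 0.5, 2, 1.0, 0.5, 0.5, 1_000_000)

# Stingy guesses: abiogenesis is a fluke, civilizations burn out fast.
pessimistic = drake(1, 0.2, 0.1, 1e-6, 1e-3, 0.1, 1_000)

print(optimistic, pessimistic)  # millions of civilizations, or effectively zero
```

The same equation, fed only guesses, yields either a galaxy teeming with neighbors or one empty except for us; which is exactly why “where is everybody?” cannot be answered by theory alone.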
So, is the problem with
1.  Our data – poor communications, limited observation time, the aliens all hiding from us because they are so “wise” they would rather not be bothered by our supposedly primitive/dangerous/unclean species, etc.
or
2.  Our theory – the universe is so big, over enough time all of those particles swirling around would have had to have turned non-life into life countless times
We have no way of knowing any of this, but I do think it’s odd to assume the answers have to be a certain way in the total absence of any data supporting such conclusions, because we would either like them to be that way, or think they have to be (the theories we’ve come up with in the last century or two supposedly mandate such a result).
I read a truly sad article on H+ a week or two ago in which the singularitarian actually claimed that no God could exist because if a God existed it would have had to have evolved into being, and thus would not be a God.
Our theories are just theories, they are not immutable, proven laws of the universe, they do not
explain all possibilities.  You can’t assume something cannot exist because it doesn’t fit your theory, or that something has to exist because it does.
We don’t know what life is or how it came into being, we don’t know that it exists anywhere else, and we certainly don’t know that it would be friendly or “wise” if it did.
Some singularitarians seem to have been lately taken with the idea that morality (as defined Right Now, not as defined even 20 years ago) is a function of Intelligence; thus the smarter any creature is (the more “advanced” the civilization), the more Moral/“wise” it will be.  Of course, the Morality that Higher Intelligences would necessarily share, or the moral precepts they in their “wisdom” would take to even more Profound conclusions, well, that’s always the same Morality/Set of Moral Precepts that such people proclaim themselves…
So when they talk about higher intelligences being “wiser” or more “intelligent,” they mean higher intelligences will be more like them (and they say, by definition!), which is to say they themselves are simply incomplete representations of what even “higher” developed consciousnesses would (have to) be like.
It’s just a way of saying that they are better than everyone else while simultaneously sneering at those who have not reached their supposed level of development.
And it’s nonsense.
Aliens could come along in big spaceships and decide to torture all of us to death for fun.  I don’t care how “advanced” or “intelligent” or whatever they are.
I wonder, in that event, how many of the H+ types would be on the side of the aliens (because, you know, the whole idea is that “forward” is better than “backward” and more advanced better than retrograde, and such things always move in a line, all together, technology, science, morality, all better and better {rather like Whig history, if any of you remember that}), just as I wonder how many would side with the machines if the rapture (err, singularity) were to really happen and the machines decided they’d rather not have us around (Progress!  Evolution!  Higher and Higher we go!  Whatever it takes!).
If we had any sense we would be thankful nobody’s shown up yet, just as we’d be thankful we don’t know how to make the machines come alive.

Written by ulrichthered

March 28, 2013 at 4:34 pm

On secession talk

Secession will never be allowed to happen.  The people who matter have too much invested in the perpetuation of the United States as it is; even if it were just a question of some throwaway state like Wyoming, they’d carpet bomb the place before they allowed it to leave.
Of course, any of the current states could “survive” as independent countries, provided the right arrangements were made.  Europe, until quite recently, was replete with city states and landlocked duchies/principalities (like Liechtenstein and Andorra); these can do just fine, under the right circumstances.  People can talk about transfer payments and subsidies and the rest of it, but federal transfer payments don’t make the difference between “survival” and “death” for these places.
Anyway, if it really looked like, say, Texas, were going to secede (since Texas did insist on formal recognition of its right to secede as a condition of entry into the Union), a cacophonous din of apocalyptic hate propaganda would rise up from the internet, the television news, for weeks, then they’d send in the Marines.
I would like to see Vermont do it, though.  At least they’d have to rewrite the narrative…  (Of course, you’d have to send the Marines, or at least the non-Texan Marines, to crush Texas, and even then, they might lose…  Vermont…  they could probably just bus in a few TSA screeners on their lunch break…)
Now, nobody actually believes anybody is going to secede, not anytime soon, people just like to talk.  Every few years, somebody dredges the idea up to fill the air.  This has been happening (with one obvious exception) ever since the Yankees first made threatening honks about leaving the country during the War of 1812.
I’m not sure how you can really describe the population of the United States as a Nation (in the sense that the Japanese are a Nation); whatever you want to call them, the elites don’t have the slightest interest in anyone but their own global class; the yobs are there to pay up, volunteer for the army, and go back to watching Ice Road Truckers or whatever it is they do while the important people are Making Things Happen.
Brazil and Mexico have been that way for centuries; nobody is seceding from those places either.
Now, I personally prefer the idea of a massively decentralized network of autonomous, homogenous regions/city states (like a greater Switzerland, or the Holy Roman Empire), entities that cooperate on trade, defense, crime, and other issues (to the extent reconcilable with preserving their demographic integrity and sovereignty)… far better that than the failed and continuing to fail model of huge multi-ethnic monster states (like the old Soviet Union, the European Union, the current United States)…

But the decision-makers in the EU and US have too much to lose from seeing those entities dissolve, so they (certainly the US) will persist, whatever the benefits of breaking them apart might be to everyone else.
(The SU was a different case; the nomenklatura seem to have decided it would be more profitable to strip the place and let it fall apart than use force to keep it together.)
Certainly, peoples who seem to be united only in hostility and mutual contempt would not ideally be part of the same country (the Flemings and the Walloons; one could argue the Ulster (Scots) Irish and the Irish Catholics; etc.).
Interestingly, it is not permitted to reference the extreme levels of social pathology, crime, dysfunction and general hopelessness manifested by large numbers of American core cities (all of which went for Obama et al. by margins rarely seen outside of the old Soviet Union) (Camden, NJ; Philadelphia; Detroit; Gary, IN; Flint, MI; Baltimore; etc.), but
everyone loves to sneer at the rednecks.
I suspect much of the success of the otherwise silly knockoff film The Hunger Games springs from that source (people in flyover land, contrary to elite belief, do realize that they are hated and mocked by those who rule them).  The Republicans lost, not because these people “rose up” for Romney and were defeated, but because most of them (particularly in northern states like Ohio) view the Republicans as being indistinguishable from the Democrats, and thus vote for neither.  (You should give them some credit for perceptiveness on that one).

Written by ulrichthered

March 28, 2013 at 4:27 pm

Posted in Singularity


The Singularity and Abundance


Indeed that’s true (the Singularity is all about abundance, or The Big Rock Candy Mountain).  I don’t believe it will be accomplished, of course.  I’m not convinced that solar cells  or other such technology will ever become cheap enough/efficient enough to scale to the point of providing essentially free energy.

Now, I don’t think that some government or corporate conspiracy will stop that, I just don’t think we’ll figure out a way to do it, and no, we certainly have not thus far.
I’ll note that the technological wonders of the last two centuries have increased the stock of human wealth, and improved the quality of human life, but they most definitely have not brought about abundance.  It’s important to make that point, because every now and again some guy who plays with his telephone for a living writes a little blog about how the Singularity is already here, since we all have such nice telephones to play with.
I strongly suspect we will see quite a bit of scarcity in the next few decades, particularly as populations continue to explode in places that have thus far proven incapable of sustaining the burden of exploding populations (such as Africa and the Middle East); I suppose it will be incumbent upon us to support such populations ourselves, though it’s always been unclear to me why that is our responsibility.  Perhaps we can give them 3d printers and magical solar cells that produce effectively free power, then they’ll stop burning down the jungles, obliterating wildlife habitat and otherwise exterminating most of the non-domesticated animals.  Right, we’ll have to invent those things first; not to worry, once the computers are fast enough, they’ll just invent everything for us.
The future, like the past, is scarcity.  Your gadgets and your big pools of processed data won’t change that.  (Having a lot fewer people might produce relative abundance, but current trends don’t seem to favor that outcome).
You’re right, it was careless of me to say that would never happen.  I try to be more precise with my words.  How could I know whether something would “ever” happen?  What I meant, or should have said, was that I didn’t think such solar cells would be developed in the reasonably foreseeable future (say the next century); I also lack confidence in the various alternatives to hydrocarbons and nuclear that crop up whenever this topic is raised, none of which are real substitutes (wind, geothermal, biomass, etc. etc.); all of which are currently crippled by scaling issues, net-energy negative, and/or otherwise arguably more environmentally harmful than what they are supposed to replace (like biomass).  Could that ever change?  Well, I don’t know, I suppose that it could.  Will it change any time soon?  I doubt it.
About the word abundance: I use that word in its technical economic sense (which I should have made clear but thought was understood).  Abundance, in economics, is the absence of scarcity.
Air, for example, is generally abundant on the surface of the Earth (the supply is effectively unlimited, so breathing is free, or rather, a person can breathe all he’d like and not affect the ability of anybody else to breathe as much as he’d like).  Most other resources are subjected to various kinds of scarcity; economics used to be understood as the study of human action within the context of scarcity, or human responses to it.
Technology has not created abundance thus far; what it has done is reduce certain kinds of scarcity for many people.  You could call this a kind of relative abundance (things are generally less scarce now than they used to be), but it is not actual abundance, and use of the word that way gets murky.
I make fun of certain Singularitarians because they talk as though the Singularity will bring about actual abundance (the absence of scarcity, for all resources that matter, the ability of people to gratify any effective impulse without constraint, without having to work/produce, etc.)
To me that is millennial crazy talk, difficult to distinguish from any other utopian eschatology (Christian, Gnostic, Marxist, etc.; Marx in particular believed that technology would bring about actual abundance, though he was very fuzzy about the specifics of how this would happen; it was always presented as a sort of necessary culmination to the historical Process he believed animated everything; Fourier talked like this too).
I think we are actually in agreement about “abundance” (whatever does happen, in the foreseeable future, “Singularity” or no, there will be no “abundance” as I have used the term, people will still live under conditions of scarcity).  You think that people will be subjected to less scarcity in the future than now, which is to say technology will make things cheaper for them and generally improve the quality of their lives.
I used to think so, I used to actually believe in a kind of generalized reduction of scarcity brought about by technology and the operations of the market.  I’ve changed my mind, over the last few years.  I now think that such things are cyclical (quality of life improves and degrades, as civilizations blossom, grow, then stagnate, decline and collapse), as I’ve been saying, not linear; and that, while gains are not necessarily zero sum, they can be (some can win, while others, or even most, lose); a central conceit of most modern schools of economics is that, given the right market process, over time, everybody gains; I no longer believe that is always true.   Not that I think everybody should gain “equally” or has some right to a given share of whatever is produced; I do think that it would be better, all else being equal, for more to prosper than fewer.  (I tend to temper an elitist, hierarchical view of human abilities with the idea of noblesse oblige, those who can do better seeing themselves as having a responsibility to look after those closest to them who can’t; to the extent possible, I would rather that were a question of cultural expectation than force).
I think the societal gains of the technology developed in the last twenty years are overstated and the costs of that technology’s adoption ignored or dismissed.  Real per capita median income in the United States has been flat to declining since the 70s; people who do work commute further, in worse traffic, to work longer hours; husbands and wives now need to work to produce an income comparable to the one a husband earned alone a few decades ago; etc. etc. etc.  I just don’t see quality of life improving here (apart from advances in medicine); people are shallower, more distracted, busier, though I don’t see much evidence that they get all that much more done, that what they do even matters that much in itself, or that whatever value they do produce isn’t largely captured by a tiny senior executive/financial class.  I think the near future will be a kind of greater Brazil, a globalized labor market within which a small financial/managerial elite amasses vast wealth, while driving wages down as far as possible (using the combination of unlimited labor and automation).  I think such societies will be both inherently unstable and subjected to omnipresent surveillance/corporate/state control (anarchy invites tyranny, in a positive feedback loop, a fact generally not lost on the tyrants).
But everybody will have really nice telephones, and all the streaming pornography, reality infotainment, and simulated violence they care to immerse themselves in.  They’ll have 4500 friends online to chat with, anywhere, about the latest celebrity breakup or the scariest new torture film, but nobody will actually know anybody, not really, or much of anything; their lives will be largely spent absorbing and responding to an infinite stream of manipulative noise.  There will be exceptions (people like us, actually, among others), but that’s the rule I see.
I don’t know, it’s not the world I’d choose.

Written by ulrichthered

February 22, 2013 at 6:25 pm

On Mind uploading and consciousness


Among the many problems with this idea of mind “uploading:”

1.  Nobody really knows what the mind is, they only pretend to, based on quite limited understanding of various brain mechanics; triumphant as the materialists like to think themselves, they have never conclusively established that the mind is simply the brain (that consciousness is just something neural tissue “has,” however you want to describe it).
2.  Further, most mind “uploading” talk relies on the kind of reductionism you hear from pop science columnists like David Brooks, who like to prattle about their “minds” being in the “cloud” because they look stuff up on their telephones all the time (and thus have “outsourced” their memories there).  Whatever the mind is, it cannot (or rather, I think it should not) be reduced to data storage, processing, and computation (all the computational power of the world’s fastest computer, linked to data storage containing records of everything ever written, photographed, or videotaped, with an ultra-sophisticated algorithm capable of searching all of that data, identifying patterns in it, responding to queries, whatever, would still not be a mind; it would just be a fast computer with a lot of data and good software to process the data, having no awareness whatsoever of what is being processed or volitional intelligence of any kind).
So, even if you could make copies of all of your memories and upload them into the “cloud,” or map out all of your neural tissue and upload that map (to be digitally reconstructed), whatever you uploaded would just be data, a copy of information held by you; rebuilding all of that into some sort of software analogue would not be you any more than a perfect physical representation of your body would be you; it would just be a mimicry; if enough data is behind it, the mimicry might be convincing, but it wouldn’t be you, or anybody, it would just be a machine programmed to take on the apparent character of a person.
I can easily imagine a future in which people are fooled into thinking they are uploading themselves into some computer matrix, to achieve immortal consciousness, when in fact their mental structure is being copied, their thoughts are being transcribed and stored, their memories captured, but they themselves, once physically destroyed, die; copies might float around for a century or two mimicking them, but they are dead.  The copies are no more them than the soulless imitations of life in a cuckoo clock, they’re just better imitations.
The market gives the people what they want, which, in the case of most people, is trash.  Democracy is organized on similar principles.

I find the mindBay idea [discussed in correspondence] compelling.  It’s not hard to think of people copying themselves, as much as that could ever be done, for sale, primarily out of vanity, like the people who fall all over themselves to get on reality television.  Imagine the possibilities!  We could pay $20 to discover, through truly intimate immersion, that the brain of Paris Hilton is even more vapid, damaged, and saturated in toxins (more experiential than chemical) than we could have ever begun to speculate.  A whole new genre of warning labels would have to be concocted (Extended Exposure to the thoughts and memories of this creature Will make you into a stupider and more irritating person: Beware!)
On Tue, Jan 29, 2013 at 1:08 PM, … wrote:

Well, if “nobody really knows what the mind is”, as you correctly state, then how can you be so certain that it cannot be copied or uploaded?
I’m glad we agree about the central point.  Materialists usually just assume whatever they don’t understand is just a more complicated version of whatever they think they do understand.  It’s good to establish that we simply have no basis for assertions of firm knowledge at all here.
I don’t know definitively that the mind can’t be copied or uploaded, assuming we could ever even really understand what it is (I also don’t know that Yahweh, Apollo, Loki, the archangels, demons, sprites, gnomes, or assorted other figures of human belief/folklore weren’t/aren’t real, I just have no concrete reason to accept that their current or historical reality has been demonstrated.  I concede these possibilities.  A difference between skepticism and supposedly rational positive atheism).  Since we do not know what minds are, have no idea how one would actually be copied or uploaded, and have never done anything remotely like either, I would think the onus would actually be on  the people who think mind uploading is just a question of computing power and clever software to establish that:
1.  That’s all there is to it
2.  Thus, it can be done
Rather than simply assuming these things are true and waiting around for the Big Rock Candy Mountain, at which time all things will be possible.
If I was lambasting anything in my post, it was careless popular talk, overstatements by analogy, that confuse the issue and/or trivialize it (like the current fashionable “idea” that people are merging their minds now with machines because they use machines to look things up quickly, and thus “offload” “memories” or “mental storage capacity” to some mystical remote place, the cloud, from which things may be retrieved, rather than internalizing them in their own minds [what used to be called learning]).  That is no different than somebody pretending to have mastered the law because he lives next door to a law library, or has a WestLaw account.
Joss Whedon (whom I consider a tremendously talented and amusing entertainer, if shallow at times) played with these ideas in his Dollhouse show; you could, rightfully, say that is a strawman comparison because it didn’t do the complexity of the thinking behind them full justice, but even so, I suspect he hit the high points, namely the idea that minds are basically reducible to physical structures which can be scanned, copied, stored, “uploaded” and “downloaded” at will.  I do strongly suspect there is an irreducible core within sentient creatures that cannot be captured this way, and is thus not transferable.  The show hinted at such a possibility without really deciding anything.
Do I know that?  No, and I didn’t claim that I did, but nobody has established otherwise.  I think most intelligent people regularly in the habit of interacting with sentient creatures, across the course of human history, have come to similar conclusions, which doesn’t make them right, but does, I think, place the burden of establishing that they are wrong on the few moderns who assert the contrary, namely that sentient life is just matter organized in a particular way that gives rise to either the appearance, or the reality, of consciousness (which would then just be an aftereffect of something physical), and that thus can be fully understood, copied, transferred, recreated, etc.
It seems to me your position is no more valid than those who claim that it can. If nobody really knows what the mind is, then it is still an open question and your assertions are no better than anyone else’s; and, as with your biological chauvinism, is merely based on emotional preference.
It is certainly an open question, I never said otherwise.  I merely said the burden was on those who thought they could answer it in this particular, materialist, way to demonstrate that they are right, since their assertions run contrary to all known experience, and what they claim they will be able to do in the reasonably foreseeable future has never, to our knowledge, been done.
My assertions are better than some people’s, because what I’m saying is rooted in experience and (presently limited) understanding.  If I said nobody has ever turned a kangaroo into a water buffalo, and there is no reason to believe they can, that statement would have two things behind it that the contrary assertion would not:
1.  The fact that nobody has ever actually turned a kangaroo into a water buffalo before, at least that we know of
2.  The fact that nobody has ever demonstrated how such a thing could be done
There is no reason to believe it can be done because it hasn’t been done and nobody has provided a reason that it could.  This does not mean it cannot, it means they haven’t given the reason it can.  Burden on them.
“Biological chauvinism.”  Again with the name calling.  You are trying to coin a slur that refers to people who value living, sentient creatures more highly than those who value nonliving machines.  (Try “organicist.”  I made that one up, it’s better.)
Since no machine we know of has ever demonstrated consciousness, sentience, volitional intelligence, or an emotional connection with anything else (traits all demonstrated by my cats, if not necessarily by all humans), I’d say it’s only natural, “human,” to prefer living creatures to machines.  Further, I think there is something strange about being “offended” by such a preference.
I mean, what are we really trying to accomplish anyway?  Are we trying to develop/understand technology (machines) that will improve the lives of people and animals in the future (living, sentient creatures), or
do we just want to make these machines for their own sake?
I think that is a really important question.  I guess you, since you’re no kind of “chauvinist,” would be happy enough to make machines for their own sake, or to replace people and animals with them altogether (perhaps as part of some greater, magical, “progressive” evolutionary project in which ever more complex intelligences arise, on the principle that ever more complex and powerful intelligences, whether “biological” or something else, always should, and must, follow less complex/weaker intelligences; a principle that has never been established anywhere).
Most people would disagree (normal people are biological chauvinists, just as normal people would recoil with horror if someone were to torture a cat, but would be indifferent to smashing an iPad with a rock; normal people also care more about their children and parents than they do random people they have never met and have no relation to, another, biologically driven kind of “chauvinism”).
You say this is an emotional preference, and thus is presumptively invalid.  I wonder, where do your preferences come from?  What drives emotions to begin with, and why?  Are they just the product of brain chemistry reacting to current and recollected interactions with environment, filtered/channeled/driven, I suppose, through complex genetically defined channels?  Or do they well up from within for additional reasons that are not just integral to our ability to be in the world (survive and reproduce) but essential to the question: why would we want to bother?
People who have no more emotional attachment to animals or other people than they do to machines or abstractions are difficult to distinguish from machines; sociopaths, soulless automatons, if that’s your non-biological chauvinist ideal,
you can keep it.
I know people like that.  I can give you some phone numbers if you’re interested.
For myself, I do not see why there is any reason to believe there is anything mystical or nonmaterial about the mind, whatever it is. If there is, then I think the onus is on those who believe this is the case to show it. However, if I knock you in the head hard enough, you go unconscious, or even die. What happens to your mind then? Where does it go?
Humans have been trying to answer that question as long as there have been humans (as far as we know).  Nobody doubts that the brain is the physical manifestation of the mind, and that damaging the brain impairs or destroys the mind’s ability to function.  This does not in itself establish that the mind is reducible to the brain.  Maybe it is, but that hasn’t been established.
Whether unconscious or dead, you don’t have your mind anymore, either temporarily or permanently. That would seem to be a pretty good indicator that the mind, whatever it is, is very closely related to the brain.
You are assuming the conclusion here, since this is not known.  It is possible, of course, that the mind is something separate from the brain, but dependent upon it for being, so that destroying the brain destroys the mind (as destroying the body destroys the brain).  It is also possible that destroying the brain has the effect of terminating a particular incarnation of the mind, which is released, and either does or does not manifest itself again in some other form.
This was believed, in one way or another, by most people for what we know of human history.  I have a bias towards placing the burden of proof, as above, on those who are challenging/dismissing beliefs/views/conclusions long held, all other things being equal.
That doesn’t mean I think that those beliefs are established themselves as true, merely that the burden is on whoever is challenging them.  Without belaboring this, the most rational reason I can articulate for such a burden placement is that:
Intelligent people, arguably people far more intelligent than any of us, have considered these questions countless times through the centuries, and while we like to flatter ourselves that we now command vastly more information than they did, and are thus better, or uniquely privileged, to make informed judgements:
1.  We don’t actually know that much, if anything, more than they did about quite a bit, and
2.  We should really ask ourselves why they came to the conclusions they did, rather than just dismissing those conclusions as having been rooted in ignorance (again in the often faulty assumption that we know so much more)
So, if it is true, as seems likely to me, that the mind is based on processes and information in the brain, then it is based on something physical, which means that, in principle, it is treatable as other physical objects: movable, copyable, storable, modifiable, improvable. Now one can debate whether this is practicable in the real world any time soon with foreseeable technology, (due to the extreme complexity of the brain), but that is a different question.
You reduce the mind to the brain and declare that, at some point in the future we will be able to copy the brain.   I say, in addition to my other objections to this materialism, how will you know that you have copied the thing itself, and not just a part of the thing, or that you haven’t just made a superficial reconstruction of the thing that misses the essential core (the part we don’t, and possibly can’t understand)?
Further, by even your admission, this “uploading” would be making a copy of the brain, not “transferring” the brain, or the mind, into something else (a “new substrate”), so you are not describing somebody “uploading” “himself” into a machine, you are describing a machine making a copy of somebody.  So it wouldn’t be “you” in the cloud anyway, just a representation of you.
I guess what you’re really saying is that there is no you at all, just a complicated object, like other complicated objects, and if we can make a copy, sufficiently exact, of that object, we have made a copy of “you.”
I don’t hold to that.
Good to hear back from you.

Comments below.
On Fri, Feb 8, 2013 at 12:13 PM,… wrote:

Thanks for the detailed and thoughtful reply. I delayed responding because I couldn’t decide on how much to respond to and for how long. Finally I decided to just respond to what strikes me as the main point.
You seem to be contending, (correct me if I am wrong), that no matter how advanced a machine becomes, it will always be mimicking consciousness or sentience, that it will never really attain these characteristics.
No, I didn’t say that I thought it was necessarily impossible for a machine to have these characteristics, I said nobody had convincingly demonstrated that they could create one that did.  Further:
1.  We don’t really understand what consciousness is, or what the mind is
2.  No machine demonstrating consciousness or the other attributes of mind has ever been built (to our knowledge)
3.  So, nobody has convincingly explained how they could build such a machine (a high bar, given #1), or demonstrated that they could, and the burden is on those who think that it can be done to establish this
Unless you have some criteria, which I haven’t seen, which, if fulfilled by a computer, robot, android, whatever, you would then agree that it was conscious or sentient. If so, I would be interested in hearing them. I’m sure they would be thought provoking. But if you don’t have any such criteria, then there can really be no further discussion. You have predecided the issue and there is really no more to be said.
Thanks, as stated above, I haven’t pre-decided the question so much as attempted to set a high threshold for answering it.
I think there are a few questions here:
1.  Can we create a sentient/conscious machine? (I say create to include various methods that mimic a kind of evolution, rather than restricting the discussion to machines that are  simply designed, programmed and built)
2.  Should we?
3.  How would we know the machine is sentient/conscious?
#3 is critical, of course, not just to answering question 1, but also to the process by which #1 could be accomplished (if we have no way of testing consciousness/sentience, how would we even know not only if we had done it, but if we were close to doing it?)
I’ve already discussed #1; #2 is one of those permanent questions that can never really be resolved, but has to be talked through, because of the possible consequences of being wrong, either way (we’ll leave it for now).
So, to #3
I don’t think we can ever know absolutely that anything (synthetic as well as organic, for argument’s sake) is conscious; all we can really do is list the attributes of consciousness, test for them, then decide how convincing the results are.
I believe what are called higher level animals (mammals generally, though not exclusively) are conscious/sentient, though I concede that I cannot prove this definitively (note, I cannot prove definitively that humans or a given human is conscious/sentient either, only test for the attributes and evaluate the responses).
Some attributes of conscious/sentient creatures:
1.  Volitional intelligence (goal orientation; being able to come up with goals [with some degree of independence] then act to fulfill them; as discussed before, randomization is not the same thing; there has to be some loose set of goals and preferences the creature is determining and deciding within [prioritizing], ideally for its own reasons)
2.  Awareness that it is alive, and that it is, subjectively, thinking/acting
3.  The ability to respond spontaneously to situations, particularly those outside of the bounds of its experience
4.  The ability to learn from experience (trial, error, evaluation, correction)
5.  Understanding the non-arbitrary character of things and relationships; that is to say, knowing that a thing is a thing, and not something else, and that the difference matters (and not just for purposes of solving some problem); this is basically Searle’s aboutness idea (sentient creatures know that their thoughts are about something, and not something else, or rather that there is a non-reducible difference between things, and the difference matters; rearranging words into sentences means nothing [and does nothing to demonstrate sentience/consciousness] even if the sentences follow perfect rules of grammar and meaning [within context] unless the actor knows what the sentences are about [otherwise it’s just an exercise in problem solving through processing inputs and applying rules]).  I would say the same thing about manipulating objects in the world (if you think any given thing may as well be any other given thing, unless you have some specific task that requires it to be something particular, well, I’d say you are not really living in the world, just using bits and pieces of it as they present themselves and seem to meet some need).  I concede this point is debatable on some levels, but I do think it a difference between the automaton and the living creature
6. Curiosity about things in their own right (what they are, how they work, what you can do with them); an extension of #5 that is not really a strict requirement, more of a clue that consciousness is there
7.  Some would say, and I would like to say, the ability to form subjective bonds of different kinds with other creatures demonstrating sentience (unfortunately sociopaths do not appear to do this, and they are both sentient and conscious, so we can’t make it a rule here, though we should definitely make it a rule somewhere else); by extension, the ability to relate to other apparently sentient creatures as ends in themselves, not as means to an end (not a requirement of consciousness, but something that should be there if we’re going to allow these things to come into being)
8.  Both lasting and immediate subjective reactions to things and apparently sentient creatures in the world (ideally not just interacting with the world, but feeling, love, hate, desire, etc.; I can imagine a consciousness that lacks these attributes, they aren’t necessary for a thing to be conscious, but again, should arguably be necessary for other reasons)
Do cats do all of these things?  I would say, of course, though men like Descartes dismissed all animals as machine-like automatons of instinct (I think he just didn’t spend enough time with cats, or other animals, or he wasn’t paying attention).
Can a sufficiently powerful machine demonstrate all of these attributes (whether through clever programming, as part of a neural net kind of evolved response set, through a hybrid method, etc.)?
I don’t know.  Given the inherently subjective (specific to the actor whose consciousness is at issue) nature of some of these attributes, it is very difficult to establish them independently (like #2).
How do we know that the machine (or cat or person for that matter) is really conscious even if it has convincingly established that it meets all of these criteria, and not just mimicking consciousness?
I would say if we know that the machine is only doing whatever it is doing through application of complex algorithms to data sets (including data from mechanical senses), then it is hard to argue that it isn’t just compelling mimicry (I mean, I don’t care how complicated the program is, if you are just executing a set of rules applied to a set of facts to achieve a desired result, you are not really meeting the criteria, you are just creating the appearance of doing so).  If the methods by which it has come to act are different (not just rule based algorithms), it gets harder to say, and ultimately almost impossible.
Beyond a certain point of convincingly demonstrating the criteria have been met, as far as they can be, given all of that, I suppose you would have to give them the benefit of the doubt, just as we do with people and some of us do with animals.
I would only wonder, though, how you would respond if someone were to make the claim that you are not really conscious or sentient, but are merely mimicking having these traits. (I am not making this claim, by the way.) But if I did, how would you answer? Bear in mind that anything you say or do could be said or done by a sufficiently advanced machine that, by your view, would only be giving the appearance of having these characteristics.
That’s a perfectly reasonable question.  All you’ve seen from me are characters on a screen; I could be a chatbot, for all you know.
My answers are the same as above; the criteria I set out up there are the criteria I would apply to try to determine whether people, animals or machines were sentient/conscious.  (I’m sure the list could be improved upon, by the way.)
I would try to demonstrate through my words and other conduct that I met them.
I have often wondered whether John Searle had been asked this question, and if so, and if he answered, what his answer was. Now I have the chance to ask someone who, if my understanding is correct, seems to have a similar view. I’m not going to pass it up.
I’ve been reading Searle; there’s a lot of overlap between our views, but I part company with him on some critical points (like his mind/brain answer, namely that consciousness is real, though subjectively experienced, and is something that neural tissues have as a kind of dynamic property [which is really just a way of saying that matter thinks, it just knows that it is thinking]; to me this is not a “way out” of dualism, it is just acceptance of the materialist position dressed up to account for the obvious fact that subjective consciousness exists [if there were not at least one subjective consciousness, there would be nobody to think/act/etc.])
Thanks for the response. It helped me to clarify my views on some things.
Thanks for sparking the discussion and continuing the debate.  I find I never really know quite what I think about anything until I’ve worked through it this way with other people.

Written by ulrichthered

February 22, 2013 at 6:21 pm

Consciousness, the irrational and creativity


Another Cool Hand Luke moment (“what we’ve got here, is failure, to communicate.”), from my talks with the Singularitarians.

@David and others:

I think we need to clarify what Mr. Tyson meant by “irrational.”

I believe “irrational,” in the context he was using the word, means:

1.     Not the product of rigid/linear/billiard ball type causation

2.     Not articulable/cognizable through the mechanism of rules (whether strict or fuzzy)

3.     Not fully understood or arguably understandable

I think it’s an unfortunate word choice, because the obvious connotations are that the “irrational” is “bad,” or “suboptimal,” or whatever.

I think it would be better to talk about the undefined/undefinable, which is to say, humans (and, I think, all sentient, living creatures capable of complex thought) do not just do as they are told,

Which is what machines do

They think up things for themselves, and those things are not necessarily determined by prior requirements, confined within existing rule sets, predictable, or capable of being mimicked through the introduction of “randomness” or “probabilistic” imitation of improvisation: the universe of possible options may be finite, but it is large enough in practice.  It won’t do to have a machine just pick from within some predefined subset, or, I suppose, explore, define a subset, and pick, randomly or through the application of a probabilistic algorithm.

That’s not creativity because there is no purpose/intention behind it; it’s just chance.

@Ryan

What you’re describing may work well enough for some soulless dreck like a Katy Perry song, but again, it’s not creativity, it’s aping creativity.  An “algorithm” that introduces randomness into a composition simulation is just a machine following instructions; it isn’t creating anything itself, because

1.     It is just executing instructions programmed into it by somebody else;

2.     It has no concept of what the instructions are for; and

3.     It has thus not decided to do this and not that, for any reasons other than those programmed into it by its controller.

The end result may look like a musical composition (to the extent a Katy Perry song can be described that way), but it’s really just a copy of other compositions: either a direct copy/mashup/distortion of music composed by humans before, for their own reasons, or a derived copy, produced by executing rules that are themselves defined by abstracting away various structural aspects of previous compositions (again, compositions created by people for their own reasons, which were effective to the degree that people responded to them, as individuals and collectively, subjectively, but as people).

You say a set of such “compositions” can be made, then run through a filtering algorithm, which will determine their “quality” and rank them.

But how would such an algorithm judge and rank quality?  To the extent it is possible to do that programmatically, the machine would just be applying, again, rules defined by people in an attempt to articulate what it is about music that makes it effective (I suppose you would say “elicit the desired response”).

The machine doesn’t know what is better or worse, subjectively, it just applies the rules it’s been told to apply and produces a list.

This is all just aping, and bad aping at that.

Of course, by definition, there is no room for anything new here at all, just more or less successful applications of the existing rule sets (anything new introduced randomly or as the product of some probabilistic algorithm would be very hard to judge through such a mechanism, and, in any case

Would be missing the point

Because there would be nothing real behind it, which is to say, it would not express anything, because its creator would have no expressive intention or even awareness of what is being created; thus, the product might be pleasant in some way, but it would be meaningless.)

It’s like taking a digital picture of a yellow flower and running it through the Van Gogh filter in Photoshop; the product might look like a Van Gogh, but it’s not, it’s a mechanical forgery.

@Evan Dawson

We both reject the use of the word “irrational” here, as commonly understood, to describe creativity, but I think your essential statements

1.  I think of creativity as the creation of knowledge in the absence of conscious reasoning.

2.  while the unpredictability of creativity comes from fact that the sub-conscious part of the mind plays an essential role in it.

Beg questions:

1.     Is everything created a form of “knowledge,” and if so, how?

I think this is a reductionist denial of the aesthetic aspects of experience.  I mean, in what way is Mozart’s Abduction from the Seraglio “knowledge”?  The beautiful exists as well as the good and the true, right?

2.     Wouldn’t it be more precise to say that the creative process is hybridized between the application of conscious reasoning techniques (following, for example, various rules of composition established over time by people because they have been proven, like the structure of a symphony), and inspiration (which is not “rational” in that it does not follow necessarily from any cognizable rules or principles)?

3.     If so, aren’t you just defining away the problem by saying that inspiration (the non/extra/ir-rational) part of the creative process comes from the “sub-conscious”; I mean, do we really know where it comes from?  Or is the “sub-conscious” a kind of grab bag for whatever we don’t understand about the workings of the mind?  It doesn’t explain anything here (inspiration is irrational in the sense that it is not the product of conscious reasoning, thus it must be the product of some sub or un-conscious reasoning)

The ancients thought of creativity as the gift of the muses (divine entities who spoke through them); people in creative flow states often describe themselves as being possessed (anybody who has ever experienced this will tell you that ideas, words, images, take form or thrust themselves on you, as though something with its own life were giving rise to them; call this irrational, sub-un-conscious, whatever, nobody has explained it, except to explain it away by assigning it various labels).

You are certainly right to say that the results cannot be meaningfully replicated through randomness.

@Robert Mason

I agree with your idea of sentient, intelligent creatures having goal orientation, which is what I mean by volitional intelligence.

They want various things, decide among their wants and then act to fulfill them.   Their genetic makeup (however it was formed), I think, largely defines those wants and provides the structure that tends to regulate their intensity (in a probabilistic, not strictly deterministic way; which is to say, I may want to drink water, because my genetic code is structured to signal me the body needs water, but I can choose to forego acting to satisfy that want, for whatever reasons).

Machines don’t know anything or want anything, they just do as they’re told.

(it’s true, the genetic code informs what we want, why we want it, defines our basic capacities to fulfill wants, and the limitations of those capacities, guides our choices, probabilistically, but it doesn’t determine them.

We take the form described by our genes, with the abilities and restrictions, rough preferences and responses, inherent in that form, but our actions are still chosen, we are not puppets of our genetic programming. [You cannot help what you want or how you feel, you are only responsible for what you do])

@Bill Sams

I am addressing your argument throughout, but briefly:

Describing the constituent parts of a thing does not define away its existence as a whole; being able to identify various physical structures of the brain does not, in itself, reduce the brain to those physical structures, or reduce to them the mind that (I argue) operates through the brain (the whole can be more than the parts, or some essential thing about the whole can be completely elusive when investigating the physical parts).

If you really believe that life is just a complex set of chemical and electrical reactions that, over time, have been spontaneously organized such that they now have the appearance of what we call conscious intelligence, reducing people and animals to organic equivalents of advanced computing machines,

Well,

Do you talk to your wife like that?  I wonder.  I asked a colleague of mine, who thinks just like you, that once, and he didn’t really have an answer.  My point is, why would you care about people or animals any more than you do about machines or rocks if you think there is no fundamental difference between them, and if you don’t,

Why not just say that?

Written by ulrichthered

February 22, 2013 at 6:19 pm

On a Proof that Friendly AGI is Impossible


In response to much screaming and moaning about the prospect of someone developing a logical proof that “friendly” AGI (Artificial General Intelligence) is impossible:

A proof that friendly AGI is impossible would not make friendly AGI impossible; it would simply demonstrate that it is impossible.  The proof would be a step forward in the sense that it would tell us something we don’t know…  which would be, I’d say, rather important.

There seems to be the implication here that we would rather:
1.  Not prove that friendly AGI is impossible
2.  Build AGI (while hoping for the best)
3.  Then find out
We might want it to be possible, but wanting doesn’t make it so, and if somebody could prove that it’s not, well, that would be a pretty powerful argument against trying to build it (hence, I suspect, some of the hostility to the suggestion).
You would think that the people who want to build AGI would welcome an attempt to prove that friendly AGI is impossible, to help them, I don’t know, build AGI that is friendly (if they’re going to build AGI at all).
I’m not convinced that any kind of AGI is possible (if by AGI we mean machines that possess independent consciousness like that of sentient organic creatures), and I’m certainly not convinced that it would be desirable if it were possible.  Right now, all we have are machines processing instruction/data sets of 0s and 1s.  The machines have no awareness of what the 0s and 1s correspond to, nor do they have any volition or the other attributes of sentient creatures.
Which is to say, for all of the computational power, data storage, and informational retrieval sophistication of these machines, they do not now have the intelligence of a rodent, or a cockroach, let alone that of a human (or superhuman or god).  Making the machines more powerful, or giving them more data to process, doesn’t change that.
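The claim that the machine has no awareness of what the 0s and 1s correspond to can be made concrete with a small illustration: the very same bytes “mean” entirely different things depending on an interpretation imposed from outside, by us.

```python
import struct

# The same four bytes, read three different ways. The machine stores and
# moves the bits identically in every case; the "meaning" of the pattern
# is supplied entirely by the interpretation we choose to apply.
data = b"Meow"

as_text = data.decode("ascii")        # interpreted as ASCII characters: 'Meow'
as_int = int.from_bytes(data, "big")  # interpreted as one unsigned integer
(as_float,) = struct.unpack(">f", data)  # interpreted as a 32-bit float

print(as_text, as_int, as_float)
```

Nothing in the hardware distinguishes these readings; the correspondence between bit patterns and cats, numbers, or anything else exists only for the people using the machine.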
The core arguments I’ve seen against various “Friendly” constraints express concerns that I think should be answered, not shouted down or wished away.
If an AGI could be built subject to constraints like Asimov’s laws of robotics, then it wouldn’t really be autonomous, which raises, for example:
1.  All kinds of, in my view, spurious ethical complaints about “enslaving” machines, but also
2.  The idea that a sufficiently intelligent AGI would find ways to make itself autonomous, removing the constraints, or creating successor AGIs without them
Some say AGIs would be “friendly” by definition, because they wouldn’t have any reason not to be (they wouldn’t want anything, they would bring about or be the product of a world of perfect abundance, and, lacking scarcity, there would be no conflict between them and humans, etc.).  This is more wishful thinking, uninformed by the dreadful history of, say, humans, on these points, or really by common sense (reminiscent of other eschatological visions, like Marx’s withering away of the state, or the Gnostics’ Kingdom of God on Earth).
Just because we can’t think of good reasons why AGIs might want to hurt us, isn’t it rather “nearsighted” to say that they wouldn’t?
If they were true AGI, they would have their own reasons for doing whatever they do (volition is part of what it means to be conscious, or GI if you’d like).
Our ancestors had no trouble envisioning gods who lived under conditions of abundance capable of all kinds of malice and mayhem.
Maybe the apparent AGIs would actually have no GI at all; maybe they’d just be really powerful machines, and an instruction set would get corrupted, causing them to wipe out humanity (processing those 1s and 0s) without having the slightest awareness they were doing so?
Think of a Google car glitching and running off a cliff.

Written by ulrichthered

February 22, 2013 at 3:55 pm

Posted in Singularity
