Planet Ulrich

A Dangerous Subversive Takes on the Future, with Movie Reviews!

Posts Tagged ‘singularity’

Consciousness, the irrational and creativity

Another Cool Hand Luke moment (“what we’ve got here, is failure, to communicate.”), from my talks with the Singularitarians.

@David and others:

I think we need to clarify what Mr. Tyson meant by “irrational.”

I believe “irrational,” in the context he was using the word, means:

1.  Not the product of rigid/linear/billiard-ball-type causation

2.  Not articulable/cognizable through the mechanism of rules (whether strict or fuzzy)

3.  Not fully understood or arguably understandable

I think it’s an unfortunate word choice, because the obvious connotations are that the “irrational” is “bad,” or “suboptimal,” or whatever.

I think it would be better to talk about the undefined/undefinable, which is to say, humans (and, I think, all sentient, living creatures capable of complex thought) do not just do as they are told,

Which is what machines do

They think up things for themselves, and those things are not necessarily determined by prior requirements, confined within existing rule sets, predictable, or capable of being mimicked through the introduction of “randomness” or probabilistic imitation of improvisation: the universe of possible options may be finite, but it is large enough in practice.  It won’t do to have a machine just pick from within some predefined subset, or, I suppose, explore, define a subset, and pick, randomly or through the application of a probabilistic algorithm.

That’s not creativity because there is no purpose/intention behind it; it’s just chance.
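To make the picking-from-a-subset point concrete, here is a deliberately crude sketch (in Python; the note set and function names are my own illustration, not anyone’s actual system): the program samples from a predefined option space, and nothing in it intends or prefers anything.

```python
import random

# A toy "composer": it samples notes from a predefined subset.
# Every choice below is chance constrained by someone else's rules;
# the program has no purpose or intention behind any of it.

NOTES = ["C", "D", "E", "F", "G", "A", "B"]  # the predefined subset

def generate_phrase(length=8, seed=None):
    """Pick `length` notes at random from the predefined subset."""
    rng = random.Random(seed)
    return [rng.choice(NOTES) for _ in range(length)]

print(generate_phrase(seed=42))  # resembles a melody fragment; chosen by no one
```

The output may look like a phrase, but the “decision” at every step is a call to a random number generator, which is exactly the distinction being drawn above.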

@Ryan

What you’re describing may work well enough for some soulless dreck like a Katy Perry song, but again, it’s not creativity, it’s aping creativity.  An “algorithm” that introduces randomness into a composition simulation is just a machine following instructions; it isn’t creating anything itself, because

1.  It is just executing instructions programmed into it by somebody else,

2.  It has no concept of what the instructions are for, and

3.  It has thus not decided to do this and not that, for any reasons other than those programmed into it by its controller.

The end result may look like a musical composition (to the extent a Katy Perry song can be described that way), but it’s really just a copy of other compositions: either a direct copy/mashup/distortion of music composed by humans before, for their own reasons, or a derived copy, produced by executing rules that are themselves defined by abstracting away various structural aspects of previous compositions (again, compositions created by people for their own reasons, which were effective to the degree that people responded to them, as individuals and collectively, subjectively, but as people).

You say a set of such “compositions” can be made then run through a filtering algorithm, which will determine their “quality” and rank them.

But how would such an algorithm judge and rank quality?  To the extent it is possible to do that programmatically, the machine would just be applying, again, rules defined by people in an attempt to articulate what it is about music that makes it effective (I suppose you would say “elicit the desired response”).

The machine doesn’t know what is better or worse, subjectively, it just applies the rules it’s been told to apply and produces a list.
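The generate-then-filter pipeline under discussion can be sketched in a few lines (the two “rules” below are hypothetical stand-ins for whatever criteria a programmer might encode; nothing about them is canonical):

```python
# A toy "quality filter": it ranks candidate phrases by rules a person
# wrote down in advance. The machine produces an ordering; it holds no
# opinion about which phrase is better or worse.

RULES = [
    lambda p: 2 if p[-1] == "C" else 0,                    # "end on the tonic" (a human preference, encoded)
    lambda p: sum(1 for a, b in zip(p, p[1:]) if a != b),  # "prefer movement between notes"
]

def score(phrase):
    """Sum the points awarded by the human-authored rules."""
    return sum(rule(phrase) for rule in RULES)

def rank(phrases):
    """Return the candidates ordered best-first, per the rules."""
    return sorted(phrases, key=score, reverse=True)

candidates = [["C", "C", "C", "C"], ["C", "D", "E", "C"], ["E", "D", "E", "D"]]
print(rank(candidates))  # an ordering derived entirely from someone else's criteria
```

The ranking falls out mechanically from whichever rules were written down; change the rules and the “best” composition changes with them.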

This is all just aping, and bad aping at that.

Of course, by definition, there is no room for anything new here at all, just more or less successful applications of the existing rule sets (anything new introduced randomly, or as the product of some probabilistic algorithm, would be very hard to judge through such a mechanism), and, in any case

Would be missing the point

Because there would be nothing real behind it, which is to say, it would not express anything, because its creator would have no expressive intention or even awareness of what is being created; thus, the product might be pleasant in some way, but it would be meaningless.

It’s like taking a digital picture of a yellow flower and running it through the Van Gogh filter in Photoshop; the product might look like a Van Gogh, but it’s not, it’s a mechanical forgery.

@Evan Dawson

We both reject the use of the word “irrational” here, as commonly understood, to describe creativity, but I think your essential statements

1.  I think of creativity as the creation of knowledge in the absence of conscious reasoning.

2.  while the unpredictability of creativity comes from [the] fact that the sub-conscious part of the mind plays an essential role in it.

beg questions:

1.     Is everything created a form of “knowledge,” and if so, how?

I think this is a reductionist denial of the aesthetic aspects of experience.  I mean, in what way is Mozart’s Abduction from the Seraglio “knowledge”?  The beautiful exists as well as the good and the true, right?

2.  Wouldn’t it be more precise to say that the creative process is a hybrid of the application of conscious reasoning techniques (following, for example, various rules of composition established over time by people because they have been proven, like the structure of a symphony) and inspiration (which is not “rational” in that it does not follow necessarily from any cognizable rules or principles)?

3.     If so, aren’t you just defining away the problem by saying that inspiration (the non/extra/ir-rational) part of the creative process comes from the “sub-conscious”; I mean, do we really know where it comes from?  Or is the “sub-conscious” a kind of grab bag for whatever we don’t understand about the workings of the mind?  It doesn’t explain anything here (inspiration is irrational in the sense that it is not the product of conscious reasoning, thus it must be the product of some sub or un-conscious reasoning)

The ancients thought of creativity as the gift of the muses (divine entities who spoke through them); people in creative flow states often describe themselves as being possessed (anybody who has ever experienced this will tell you that ideas, words, images, take form or thrust themselves on you, as though something with its own life were giving rise to them; call this irrational, sub-un-conscious, whatever, nobody has explained it, except to explain it away by assigning it various labels).

You are certainly right to say that the results cannot be meaningfully replicated through randomness.

@Robert Mason

I agree with your idea of sentient, intelligent creatures having goal orientation, which is what I mean by volitional intelligence.

They want various things, decide among their wants and then act to fulfill them.   Their genetic makeup (however it was formed), I think, largely defines those wants and provides the structure that tends to regulate their intensity (in a probabilistic, not strictly deterministic way; which is to say, I may want to drink water, because my genetic code is structured to signal me the body needs water, but I can choose to forego acting to satisfy that want, for whatever reasons).

Machines don’t know anything or want anything, they just do as they’re told.

(it’s true, the genetic code informs what we want, why we want it, defines our basic capacities to fulfill wants, and the limitations of those capacities, guides our choices, probabilistically, but it doesn’t determine them.

We take the form described by our genes, with the abilities and restrictions, rough preferences and responses, inherent in that form, but our actions are still chosen, we are not puppets of our genetic programming. [You cannot help what you want or how you feel, you are only responsible for what you do])

@Bill Sams

I can’t address your whole argument here, but briefly:

Describing the constituent parts of a thing does not define away its existence as a whole; being able to identify various physical structures of a brain does not, in itself, reduce the brain to those physical structures or the mind that (I argue) operates through the brain (the whole can be more than the parts, or some essential thing about the whole can be completely elusive when investigating the physical parts).

If you really believe that life is just a complex set of chemical and electrical reactions that, over time, have been spontaneously organized such that they now have the appearance of what we call conscious intelligence, reducing people and animals to organic equivalents of advanced computing machines,

Well,

Do you talk to your wife like that?  I wonder.  I once asked a colleague of mine, who thinks just like you, that question, and he didn’t really have an answer.  My point is, why would you care about people or animals any more than you do about machines or rocks if you think there is no fundamental difference between them, and if you don’t,

Why not just say that?

Written by ulrichthered

February 22, 2013 at 6:19 pm

On a Proof that Friendly AGI is Impossible

In response to much screaming and moaning about the prospect of someone developing a logical proof that “friendly” AGI (Artificial General Intelligence) is impossible:

A proof that friendly AGI is impossible, would not make friendly AGI impossible, it would simply demonstrate that it is impossible; the proof would be a step forward in the sense that it would tell us something we don’t know…  which would be, I’d say, rather important.

There seems to be the implication here that we would rather:
1.  Not prove that friendly AGI is impossible
2.  Build AGI (while hoping for the best)
3.  Then find out
We might want it to be possible, but wanting doesn’t make it so, and if somebody could prove that it’s not, well, that would be a pretty powerful argument against trying to build it (hence, I suspect, some of the hostility to the suggestion).
You would think that the people who want to build AGI would welcome an attempt to prove that friendly AGI is impossible, to help them, I don’t know, build AGI that is friendly (if they’re going to build AGI at all).
I’m not convinced that any kind of AGI is possible (if by AGI we mean machines that possess independent consciousness like that of sentient organic creatures), and I’m certainly not convinced that it would be desirable if it were possible.  Right now, all we have are machines processing instruction/data sets of 0s and 1s.  The machines have no awareness of what the 0s and 1s correspond to, nor do they have any volition or the other attributes of sentient creatures.
Which is to say, for all of the computational power, data storage, and informational retrieval sophistication of these machines, they do not now have the intelligence of a rodent, or a cockroach, let alone that of a human (or superhuman or god).  Making the machines more powerful, or giving them more data to process, doesn’t change that.
The core arguments I’ve seen against various “Friendly” constraints express concerns that I think should be answered, not shouted down or wished away.
If an AGI could be built subject to constraints like Asimov’s laws of robotics, then it wouldn’t really be autonomous, a point that surfaces in two ways:
1.  Raising all kinds of, in my view, spurious ethical complaints about “enslaving” machines, but also
2.  The idea that a sufficiently intelligent AGI would find ways to make itself autonomous, removing the constraints, or creating successor AGIs without them
Some say AGIs would be “friendly” by definition, because they wouldn’t have any reason not to be (they wouldn’t want anything, they would bring about or be the product of a world of perfect abundance, and lacking scarcity, there would be no conflict between them and humans, etc.).  More wishful thinking, uninformed by the dreadful history of, say, humans, on these points, or really by common sense (reminiscent of other eschatological visions, like Marx’s withering away of the state, or the Gnostics’ Kingdom of God on Earth).
Just because we can’t think of good reasons why AGIs might want to hurt us doesn’t mean they wouldn’t; it’s rather “nearsighted” to say otherwise, no?
If they were true AGI, they would have their own reasons for doing whatever they do (volition is part of what it means to be conscious, or GI if you’d like).
Our ancestors had no trouble envisioning gods who lived under conditions of abundance capable of all kinds of malice and mayhem.
Maybe the apparent AGIs would actually have no GI at all; maybe they’d just be really powerful machines, and an instruction set would get corrupted, causing them to wipe out humanity (processing those 1s and 0s) without having the slightest awareness they were doing so?
Think of a Google car glitching and running off a cliff.

Written by ulrichthered

February 22, 2013 at 3:55 pm

Posted in Singularity

Some kind suggestions from our friend to the Singularitarians

Note: Many of these people are the kind who rooted for Arnold in the first Terminator, so they can be difficult to reach.

My friend tried here, though, by way of advising them on how to talk to a fearful public about the future:

You could just play clips from the Jetsons.

In all seriousness, that presents a world most people would both relate to and want, with various technologies (apart from the sky houses and floating cars, I suppose) that are
1.  More or less practicable within the reasonably foreseeable future, and
2.  Extensions of technology they already have, leading to
3.  A world very much like the world they are in now, (family life, work, etc., all very familiar), but with everything made more convenient (actually it’s a world rather like the world of upper middle class 1960s America, which was better in almost every imaginable way than the present)
I think that’s what most normal people want.  I actually mean it, I would start out with that, using it as a kind of icebreaker and intro into the larger talk, while laughing at it a bit to make the point that I wasn’t talking down to them.  Most of their ideas about technology will have been supplied by, or at least filtered through and heavily influenced by, pop culture concepts (probably true for all of us):
1.  Jetsons (positive and familiar, a kind of best case scenario with no millenarian/gnostic/utopian overtones), potentially contrasted against other referents:
2.  Forbidden Planet/Lost in Space (everybody loves Robbie the Robot)
3.  SkyNet (see, Robbie wouldn’t become self aware and decide to blow up the planet, a good contrast)
[Note to the reader: as above, some of these people want SkyNet to become self aware and blow up the planet]
4.  2001 (I guess a less educated crowd would be less likely to care about this one, but I think it has to be used if you’re really presenting pop culture based question/answers/scenarios involving the topic of AI)
[Note: They would have locked Dave outside of the spaceship too, don’t fool yourself]
5.  Wall-E (this is one that hits close to home in ways that are difficult to explain away without people feeling as though they’re being personally attacked, and too many of the criticisms are obviously real and valid, but I’d be aware of the possibility of some skeptic throwing it at you)
[Note: People should feel like they’re being attacked.  That’s the point.  If you’re a blubbery moron whose entire life is spent staring into a little screen and clacking nonsense phrases to your imaginary friends, you should know that there’s a problem.  Not that people that far gone are capable any longer of understanding the problem…  or even recognizing themselves]
6.  The Borg (again, it’s a question of audience sophistication, but people are afraid, I’d say with good reason, of being absorbed into some kind of hive technology, so to allay those fears, they need to be addressed and arguments presented to calm them/refute their originating concepts)
[Note: Even more of these people want to be assimilated into the Borg than want SkyNet to blow up the planet, and/or Arnold to come back from the future and kill everybody.]
7.  The Matrix (I think this one can be skipped or mocked; it’s really not something average people think about/take seriously, and it was more of a philosophy of mind/Marxist thought experiment than an exploration of any kind of likely future, machine made or otherwise)
[Note: I’ve since changed my mind about that.  I think the Matrix was quite serious, or should be taken extremely seriously; Marxists may have originated the concepts of false consciousness and the spectacle, but that doesn’t make those ideas invalid; I’d say they were onto quite a lot….
I consider Google, for example, a kind of criminal conspiracy against the existence of independent thought…  if those people could create something like the Matrix, basically a permanent filter that intermediates between each person and the world, becoming his reality, starting as a crutch, then a substitute, coming to take the place of unmediated life in the world, not only answering all of his questions but telling him what the answers mean, how they should be interpreted….   making him feel as though its “memories” are his memories, and its judgements infallible… something always there, something he comes to think of as merging with himself…,  well
they’d do it.  I think they’re on their way.)

Written by ulrichthered

February 21, 2013 at 6:10 pm

Posted in Singularity

What the general public (or nonscientists, at any rate) wishes scientists understood

In response to the article, “What Scientists wished the General Public Understood,”
http://www.sciencemag.org/content/338/6103/40.full
which of course, condescended and sneered in all the ways you’d expect media “scientists” to do.
Now, I think scientists do understand the ideas I’ve offered below, at least some of the time, in the abstract, but too many of them seem to fall into the usual bias/belief traps in the particular, those who call themselves “social scientists” most especially
(as an aside, I don’t know that I believe in “social scientists” at all, not, at any rate, as “scientists,” rather than people aping the language of science to make themselves sound more convincing, while misapplying bits of scientific method, generally more to dupe the public into thinking they are thereby objectively pursuing truth, rather than some other, usually better funded, agenda).
1.  Just because something can’t be measured, or can’t be measured precisely, does not mean it does not exist.
Common examples,
Consciousness
Beauty

The passions

General intelligence (though this is more historic than current; we have gotten sufficiently adept at approximating it [Spearman’s g], or at devising tests that allow us to quantify problem-solving capacities that correlate to a high degree with other observed characteristics/consequences of intelligence, to serve as effective and useful proxies; the only reason I even included it on this list is the Flynn effect, which seems to me has to be some kind of data artifact involving testing [increasing test scores over time in no way seem to correlate with actual increases in intelligence, which by every other metric seems to be declining in advanced societies, as we should expect it to; once all of the gains from normalizing nutrition and basic environment have accrued, differential fertility favoring the left half of the bell curve would strongly favor decline])
2.  Just because something can be measured does not mean that it matters
It is depressingly easy to manipulate people with spurious numbers having no demonstrable connection to anything, or numbers that are insufficient in themselves to have meaning, but which are thrust on the public with the implication that they mean certain things that in no way follow.
An example would be the argument that we share 98 or 98.5 or whatever percent of our DNA with chimpanzees, and are therefore largely indistinguishable, “scientifically,” from these creatures; this of course is more vulgar scientism than science, since we certainly cannot claim sufficient understanding of the vast complexities of the genome (or rather of the consequences of genotypic variance on phenotypic life in the world) to make any meaningful statements about the importance, or lack thereof, of even the smallest genotypic variance, but we are obviously very different from chimpanzees.
Some more than others.
3.  If a theory conflicts with observed reality, or seems to make no sense, the problem lies with the theory, not reality, or our observations of it
It is easy to say this and everyone would agree, until they don’t like the results, then they start making excuses and qualifying, making questions of fact questions of motive, etc.
The current equality obsession is the best available example.
4.  It is more likely that men will lie than miracles occur 
Strictly speaking, again, this idea of Hume’s is cardinal to scientific thought, but it’s good to keep it in mind as a skeptical principle, because people are always presenting things that seem absurd or totally contrary to experience and reason as having been established by some abstruse process only intelligible to experts (such things may very well be in fact true, or as true as we can establish for now, but we should be on guard)
5.  Just because we do not understand something does not mean it is not real
People seem to always be reducing their view of the world to whatever fits into the latest set of theories, or what they think they understand about those theories; such people also tend to mock and sneer at the folly and ignorance of all who preceded them.
It may not be expected of a scientist to say, “I have no idea why that happened or how that works,” but it should be, as opposed to, “We cannot explain that, therefore it is impossible.”  This may seem inconsistent with the statement about miracles, but only on the surface (that principle is probabilistic, and merely places a higher burden on claims that depart radically from our understanding or experience; it does not say things should simply be dismissed because we can’t explain them).
6.  A simulation is not the thing simulated
Just because you can build something having certain apparent attributes of something else, does not mean you have built the thing itself.
We can make a smart enough chatbot now to fool most people into thinking it is human (or at least something acting at least as human as the people who chat into little windows for a living), but it’s a chatbot, not a person.  Making the computer more powerful or giving it more memory/data won’t change that.
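A minimal sketch of the chatbot point (the patterns and replies below are invented for illustration; real chatbots use far larger tables and statistical models, but the principle, lookup without understanding, is the same):

```python
# A rule-based "chatbot" reduced to its essentials: pattern lookup.
# Scaling the table up changes the quantity of rules, not the nature
# of the thing; the program never knows what it "said".

REPLIES = {
    "hello": "Hi there! How are you today?",
    "how are you": "I'm doing great, thanks for asking!",
}

def respond(message):
    """Return the canned reply for the first matching pattern."""
    for pattern, reply in REPLIES.items():
        if pattern in message.lower():
            return reply
    return "Interesting! Tell me more."  # fallback for everything else

print(respond("Hello!"))
print(respond("What do you think of Mozart?"))  # the fallback, as always
```

Give it more memory, more patterns, or a faster processor and you get better mimicry; nothing in the mechanism changes.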
7.  It is wrong to reduce what we do not understand to metaphors about what we do
We do not understand life, and it is a mistake to reduce it to a set of metaphors about other, simpler things that we do understand because we know how to build them.
A person is not a thinking machine any more than he is an engine.  It is a mistake to reduce thought to “computation” and “data/retrieval/processing”.  We know very little about how thought really works, or what consciousness is, but we seem to be rushing headlong from
a person can be like a machine
to a machine can be like a person
to a machine can be no different from a person
to a machine can be better than a person
Machines are objects.  People are alive.  There is a qualitative difference between life and nonlife that cannot be dismissed because we don’t know what it is, how to describe it, or worse, how it can be overcome.
8.  Thinking does not make it so
(what you think you know about something is not the thing itself; you may believe an object has come alive for whatever reasons, but it is or is not alive, which is to say, it has properties that are independent of your observation or whatever conclusions you have come to).

Written by ulrichthered

February 21, 2013 at 5:43 pm

Posted in Singularity

From correspondence between a friend and the Singularitarians, on the question of life, non-life and Deutsch’s Computation reductionism

“If we cannot really know [if machines are alive], then we can’t assume that they aren’t alive any more easily than we can assume they are sentient. But how can we act without assumption of one or the other and still proceed?”
We don’t really know, absolutely, that they’re not alive now, we just have no reason to think that they are (and can thus proceed on the assumption that they’re not, as I proceed on the assumption that my car is not alive), and I don’t think some set of programmers writing code designed to emulate human responses and fool us into thinking a machine is alive could ever rightly reverse this judgment.
The burden of proof here is on the machines, you could say, and it should be a high burden.  I’ve seen no evidence of humans having created any kind of mechanical life, let alone mechanical life having the potential to become superhuman in intelligence.  It could be that we can’t do that, that all we can ever do is create really fast machines with a lot of memory, and maybe that’s fine, maybe that’s better.
I think we should be asking ourselves, how can we create machines that improve the quality of human lives (and by extension, the lives of other complex, sentient creatures, such as mammals)?
not,
How can we build machines that are alive?
Or worse,
How can we build machines that are alive, to replace us?
But I’m not saying you should build a really complex machine and then try to torture it or anything like that.

“If you don’t know if [Schrödinger’s] cat is dead or alive, it seems to me you still have limited options as to how you can proceed. What is better – to assume that it’s more likely the cat is dead and light the box on fire, or to assume it’s more likely alive and open the box so you can at least check before doing something destructive?”

Well, of course, if that were the question, you would err on the side of assuming the cat was alive (in the formal problem it is impossible to open the box).  But Schrödinger’s Cat is always offered as some kind of weird wave/particle duality analogy (with people talking about the cat being in a superposition between being alive and dead, something absurd on its face); I only brought it up as an example of an epistemological question being presented as a physical one.
“The question comes down to one of greater harm – which has more dire consequences not just for humans, but for all life? That we assume machines are incapable of reaching a sentient status and continuing to treat them like machines (in other words, assume they can’t possibly be slaves, potentially enslaving something sentient) or to assume that they could be sentient and do everything in our power to figure it out before we get to the point of doing something destructive?”
Again, you are reframing the question a bit.  I’m not assuming it is absolutely impossible for a machine to become sentient, I’m expressing skepticism about our ability to make (directly or indirectly) a machine that is sentient, and also asserting that there is a qualitative difference between life and non life that has nothing to do with computational power (I’m saying I do not believe life is reducible to computation, thus a really powerful computation device would not be alive merely because it could compute powerfully, though it might be able to run a sufficiently complex set of algorithms to create the appearance of being alive.  It could, I suppose, become alive for some other reason, but not just because it had a fast processor, a lot of memory, access to vast databases, and clever software.  I think there’s more to life than all of that; we hardly can claim to understand these questions in their entirety about organic creatures let alone these potential synthetic devices).

For all of that, of course I think we should monitor it closely and not do anything avoidable that is destructive.  But why would we want  to create machines that are alive, let alone a synthetic superintelligence?

Written by ulrichthered

February 21, 2013 at 3:48 pm

Posted in Singularity

Deutsch and “Artificial Intelligence”

http://www.aeonmagazine.com/being-human/david-deutsch-artificial-intelligence/

I find myself agreeing with John Searle (I came to the same conclusion independently): the human-as-computer assertion, or the computer [if sufficiently powerful] as human [or effectively human], is a metaphor (as the human-as-steam-engine and universe-as-clock ideas were metaphors), and probably an inevitable one.  This is simply how humans seem to think; given the ubiquity of this particular type of machine [the computer] now, and its ability to act on highly complex sets of instructions to accomplish things in the world, a set of humans were bound to compare it to humans first, then compare humans to it, then talk about humans as though they were only more powerful versions of the particular machine, and so on.
I also don’t accept the “Universality of Computation.”
Sorry. I don’t think everything can be reduced to calculation, information retrieval, and processing. I do think, further, that there is a qualitative difference between life and non-life (which would be “racist,” as the author says, though he never says that something “racist” is therefore “not true”; I suppose we are all simply supposed to know this, since “racist” things are, by definition, “bad,” and “bad” things cannot be “true”).
Nonsense.
A machine that fools you into thinking it’s not a machine is still a machine. You’ve just been fooled. The Turing test, for all of Turing’s obvious genius and accomplishments, is silly; more importantly, it’s epistemic, not physical/ontological (it is a statement about the conclusions we as humans have come to about the identity of a thing, our knowledge, or seeming knowledge, of that identity, not the identity itself).
You can say whatever you’d like, but thinking something is alive or human or conscious or whatever does not make it so. I suppose it is fairer to say we simply cannot know, ultimately, whether something is alive in the same way that we are ourselves alive (conscious, sentient, however you’d like to describe it); saying, because we cannot really know, and the thing seems to be alive (sentient, conscious whatever) therefore it is, or may as well be, strikes me as wrong.
(Similarly, you may not know whether or not the cat in the box is dead or alive, but it is either dead or alive, not both, or neither; you simply don’t know. That is to say, it has properties in itself independent of your understanding or observation. A person is alive, sentient, intelligent, conscious, however you want to describe it in himself, not because you think he is.)
I simply do not accept the reductionist idea that life is just non-life that can compute and act with apparent volition, and that the only difference between a person and a software program is computing power and clever enough coding. (I also think it a monstrous idea; but that’s a moral/aesthetic judgment, not an argument against the validity of the concept, so I’ll leave it be).
What is it that drives your computations? (what makes a person want to go left rather than right, decide to write an essay rather than go skiing, etc.) Only living things have desire (as one of their characteristics), volition, drive; these things cannot be reduced to the product of “computation” however complex; they are qualitatively different from that.
Life is not just problem solving (something makes the living creature [not “entity”; a cat or a person is not a rock, a corporation, or an iPad] decide to solve or attempt to solve one problem and not some other problem, experience something, abandon something else, etc.).

Written by ulrichthered

February 21, 2013 at 3:08 pm

A Letter from a friend to the Singularitarians

Good day.

I would describe myself as a qualified technology enthusiast (which is to say, someone excited about certain intermediate term technological possibilities [regenerative medicine, genomics, robotics, etc.], while skeptical about others [AI, the singularity, the Internet as it has evolved {post Web 2.0}, the surveillance state, etc.]).

So I’m neither a Luddite nor an extropian.  I have all of the rectangles (Plasma, LCD, Mac Pro [with dual widescreens!], MacBook Pro, iPad, iPhone, iPod, etc.), and I’ve been online one way or another since the BBS days, but I don’t watch commercial television; I don’t post on Facebook, Tweet, pass around YouTube clips, or otherwise spend my days playing with my telephone.

I also don’t believe that the Singularity will bring about abundance, the withering away of the state, or the peaceful ascension of humans into physical immortality (as indefinitely young post-humans) (most of that would be just fine, I just don’t think any of it will happen, not all at once, and if at all, not for a long time).

Technology is used by people (who seem intent on reducing it to its most moronic or destructive possible applications, like the blubbery infantile post-humans chattering away at each other from their motorized chairs in Wall-E).  I don’t believe people become magically transformed from the vapid, annoying, myopic, grasping clods they tend to be as individuals into some kind of wise superhuman force by stacking them on top of each other, either (whether you call that stacking the market or democracy or the state or the metastate), so I’m not receptive to arguments about market process [unhindered, of course, by the State] just solving all of our problems for us.  I’m not anti-market (I was an Austrian!), I just don’t sacrifice to the Market God, just as I’m not anti-Machine, but believe:

The machines were created to serve us, not replace us.

As a lifelong anti-Marxist, I never believed in the labor theory of value (and I think the closest thing we can get to a social ideal would be having the machines do all of the [non-creative] work, freeing humans to do more interesting things with their time; rather like Athens or the Roman Republic, with machines standing in for the slaves).

Until recently, nobody really cared about “jobs” (work was for slaves); what they cared about was wealth.

Written by ulrichthered

February 21, 2013 at 2:56 pm