I'm sure you have all had the thought that someday your job may be replaced by a robot. Eventually there may be more robots than people. This is a problem. The solution I propose is this: everybody who is able to work is given a voucher for the ownership of a robot. The corporation that employs the robot whose voucher you own pays you. That would leave humans to do whatever they wanted to do, while their robot would be providing income.
(?) Sirius Cybernetics Corporation
http://www.viracoch...mon.co.uk/gsae.html Robot madness (as envisioned by Douglas Adams). [DrBob, Jan 24 2001, last modified Oct 05 2004]
(?) The meaning of life
http://sysopmind.co...l-faq/tmol-faq.html According to the Singularitarians [globaltourniquet, Jan 24 2001, last modified Oct 05 2004]
(?) A short history of robot evolution.
http://www.inourimage.org/history.html [angel, Nov 28 2001, last modified Oct 05 2004]
Actually existing cybernetic communism
http://blog.voyou.o...bernetic-communism/ According to this review of the splendidly titled 1959 work, Cybernetics at Service of Communism (3 volumes, US Department of Commerce), cybernetics seems to have been adopted in the USSR as something like an extension of Taylorism to the whole of society. [LoriZ, Jun 29 2010]
Robotic Freedom, by Marshall Brain
http://marshallbrai...robotic-freedom.htm "If you don't work, you don't eat" is a core philosophy of today's economy, and this rule could make a rapid robotic takeover extremely uncomfortable for our society. [LoriZ, Jun 29 2010]
(?) A Road Not Taken: Cybernetic Socialism in the USSR
http://www.solidari...ialism-in-the-ussr/ [LoriZ, Jul 03 2010]
Robots for communism
http://truth-reason...-for-communism.html Thus, what we have isn't some imagined future world where machines do everything and we are reduced to passive consumers. Instead, we have technology actively enabling the principle "from each according to their ability, to each according to their need." And, from a libertarian communist perspective, that means a society without rulers and bosses. [LoriZ, Nov 07 2011]
Robot revolution
https://www.youtube...watch?v=B1BdQcJ2ZYY The distant future. [bungston, Sep 23 2015]
|
|
I don't see humans being replaced by robots until robots are as versatile as humans, at which point ownership of a robot would be tantamount to slavery. |
|
|
There is no long-term problem arising from robots or other machines doing work and there never will be. |
|
|
degroof: Why would you create a child? |
|
|
More to the point, you wouldn't create a robot to want human rights. But in order for the scenario described above to take place (my job being replaced by a computer), it would be necessary for that computer to be able to accomplish ALL of the tasks I do. In order for this to occur, that computer would have to possess all of the abilities I do: pattern recognition, reason, creativity, adaptation and all of the other components of intelligence. Now, we can argue about whether such attributes as self-awareness and consciousness necessarily follow from this, but I think it's safe to say that at some point, you're going to have a computer say "I think, therefore I am." What do you do in that case? |
|
|
Yes, robots can be tools. No, a computer isn't necessarily going to be intelligent. Humans are animals, but not all animals are intelligent, and some animals are used as tools by humans. I obviously wouldn't want a sentient VCR. I'd never be able to throw it out if it stopped working properly! But if I want a computer to be able to make plans and complex decisions, it does need to be essentially similar to a human. |
|
|
And yes, we know how to make humans. We're very good at that. But humans aren't well adapted to all environments, and humans are not the endpoint of evolution. It's an ongoing process, a process that is itself evolving. If you argue for stagnation, you won't have my support. |
|
|
Remember that the Republicans are in office. Nothing with "Socialism" in the name will pass. Now, if you give corporations that sign up for this program a big tax break, and call it the "rich white guy in the head office gets a tax break on his robot" program, you may have something. And as far as the rights of robots go, remember the Republicans are in office. Labor rights, ha ha. |
|
|
What do the robot-manufacturers have to gain by giving away their product to corporations? And even if they did, what do the corporations have to gain by paying workers to do nothing? — VeXaR, Feb 23 2001, last modified Feb 28 2001 |
|
|
|
Where are the true conservatives? Sigh. |
|
|
Companies would pay you for the use of "your" robot because they need to get the job done. |
|
|
degroof: Share and Enjoy! |
|
|
I agree with you, salmon, in theory. I understand that you're trying to solve the "lost my job 'cause a machine can do it now" problem, not implying anthropomorphic bots. This would be an interesting system, but it's not economically sound. Manufacturers would buy their own machines. Robots are, in effect, machines that seem to humans like living creatures. For all practical purposes, my computer is a robot. A tamagotchi, a calculator, an electric drill: each is a robot. Yet why would a factory use other people's machines? It won't happen. As usual, a socialist system fails when it requires large and powerful entities to act in good will against their own interests.
A fascinating idea, salmon. E-mail me.
-- Dmitri@Rome.com |
|
|
I think the concern being argued here is that the job one would lose would be one that was wanted in the first place. By removing jobs like making pizzas, clerking shops and building cars, you force people into careers that would be... I suppose the word is satisfying. Work becomes more than paying the bills; it becomes something that's hard but also enjoyable. That work benefits both the individual and the whole in some way. |
|
|
There are downsides to everything. It's called balance. There will never be a perfect solution where everyone's happy and robots aren't slaves and no one loses a job and has no place to go. Change sucks, but it's almost always followed by a notch of improvement. |
|
|
Much as people aren't just jumbles of veins and arteries with a specific task, neither are computers. In the quest for the absolute, systems engineers test program after program in an attempt to make a computer 'think'. They/we enjoy the speculation, and find no purpose or satisfaction in it other than raw curiosity. We want to know if it's possible. It's not whether or not it's possible or whether toasters will talk to you; it's when we will have computers that can think in ways that we are unable to. True, the practical application of such 'thinking' systems would be as human assistants in whatever case would be most useful. Who can foresee where it would be applied? But it will exist. The question is how to deal with it. |
|
|
A group calling themselves Singularitarians not only think we will create machines with sentience, but they claim that it is inevitable, and nothing short of the purpose for our existence. The Singularity, as I understand it, is the point at which mind meets machine. A new stage in evolution. I find one serious fundamental flaw in their argument, which brings their whole ontology tumbling down. See link. |
|
|
I see tweaking computer 'intelligence' as tantamount to attempting to tame fire or study the human genome. Most people are fearful of it; there's no obvious application for it, and yet to take the next step in cultural evolution we must make the attempt. As I said previously, whether it will be useful or not, because it exists as a possibility, someone will study and research and model and endeavor to make it real. |
|
|
So what's wrong with selling a standard copy of Windows 2015 that can adapt to most anyone's lifestyle? |
|
|
Well yes, there are many flaws, but the one major flaw that brings the whole argument down is that they equate pure raw processing power with thought: they assume that when a machine can work as fast as the brain, then by default it is thinking. They give no consideration to the possibility (without evidence I'd even venture to say probability) that thought, personality, desire, yearning, everything we think of as constituting our humanity -- our soul, if you will -- is much more than a measure of instructions per nanosecond. |
|
|
I do believe, though, that whatever basic laws cause us to do what we do can be demonstrated in a computer, but I'll let it end before this thing goes on into a book. |
|
|
globaltourniquet: The singularity isn't dependent on the classical view of what constitutes AI. It's the generalization of Moore's observation into "technology bootstraps itself." The curve appears to be exponential, approaching an infinite derivative sometime between 2020 and 2030. Not only are there no holes, its chance of occurrence is 1. IMHO the most likely route is this: around 2020 two technologies converge: the ability to scan objects on the atomic level, and the processing power to run atomic simulations in realtime on commodity hardware. It's then a simple matter of scanning <insert personal Einstein archetype>, making a few hundred thousand copies, and tasking the simulations with redesigning & improving themselves. Then these hyperintelligent beings will surely flip burgers for us at our behest and obviously won't be able to analyze and patch their own code to overcome any mind-numbingly trivial safeguards we put in. |
|
|
What evidence is there to suggest that at any point in time, ever, robots will have the intelligence, decision-making skills and abstract thinking abilities of humans? There are already computers that can solve math problems faster than humans, but they are not going around saying "I think, therefore I am." Besides, the advent of technology could make it so that jobs are fitted to the robots, not vice versa. |
|
|
Actually, it should be possible to create a human-level AI in a couple of years. Heck, it could have been done with vacuum tubes; it would just be way too big. |
|
|
Everybody seems to concentrate on CPU-style processing, but for large-scale tasks, it's very inefficient. Instead, why not build the circuits into the machine directly? Or create mini-CPUs that perform a specific task, yet allow flexibility not provided by hardware? |
|
|
For example, take an arbitrary tree, with an arbitrary number of inputs and outputs for each node and only one restriction: no node can output to anything at or below itself, or take input from anything at or above itself, which prevents feedback loops.
The tree will have N nodes and height H. The processing time required by a CPU is O(N); by a fixed-circuit machine, it's O(H). A CPU's processing time increases linearly, a fixed circuit's logarithmically.
For X cycles, the time required for a CPU is X*O(N); for a fixed circuit, it's O(H+X) (assuming constant evaluation time per node, the values propagate at a constant rate through the chain). |
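To make the O(N)-versus-O(H) arithmetic concrete, here is a rough Python sketch of that layered tree under the stated restriction (no feedback). Everything in it is made up for illustration: the toy OR gates stand in for arbitrary node functions, and the "pipelined circuit latency" line is just the H+X estimate from the annotation, not a hardware measurement.

import random

def build_layered_tree(height, fanin=2):
    """Return layers of nodes; each node above layer 0 lists which outputs
    of the layer below it reads. Layer 0 is the raw input layer."""
    layers = [[None] * (fanin ** height)]          # inputs
    for level in range(1, height + 1):
        below = len(layers[level - 1])
        width = fanin ** (height - level)
        layers.append([[random.randrange(below) for _ in range(fanin)]
                       for _ in range(width)])
    return layers

def cpu_evaluate(layers, inputs):
    """Sequential evaluation: every node is touched once, so O(N) work."""
    values, visited = list(inputs), len(inputs)
    for layer in layers[1:]:
        values = [any(values[s] for s in sources) for sources in layer]
        visited += len(layer)
    return values[0], visited

random.seed(0)
H = 4
layers = build_layered_tree(H)
N = sum(len(layer) for layer in layers)
inputs = [random.random() < 0.5 for _ in layers[0]]
out, work = cpu_evaluate(layers, inputs)

X = 100                                            # successive input sets
print(f"N={N}, H={H}, one CPU pass visits {work} nodes")
print(f"CPU work for X passes      ~ X*N = {X * N} node visits")
print(f"pipelined circuit latency  ~ H+X = {H + X} gate delays")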
|
|
If you take the above example, allow feedback, and make each node a non-linear function of its inputs, you've basically built a brain.
The true power of the brain is not sheer computational speed -- transistors beat neurons the way an Olympic sprinter beats a child at the 100m -- but bandwidth: it can perform all the calculation it does because every neuron acts as a simple circuit; much as we turned add, subtract, multiply, divide, and, or, xor, compare, and data moving into computers, only we centralized everything for flexibility. |
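As a minimal sketch of that "feedback plus a non-linear node function" recipe, the toy loop below updates every node simultaneously from the previous state, the way neurons all fire in parallel rather than a CPU walking the nodes one by one. The weights, node count, and tanh non-linearity are arbitrary choices for illustration, not a claim about how a real brain is wired.

import math
import random

random.seed(1)
NODES, STEPS = 8, 5

# weights[i][j]: how strongly node j's previous output feeds node i.
weights = [[random.uniform(-1, 1) for _ in range(NODES)] for _ in range(NODES)]
state = [random.uniform(-1, 1) for _ in range(NODES)]   # initial outputs

for t in range(STEPS):
    # every node updates at once from the previous state (feedback allowed),
    # each applying a non-linear squashing function to its weighted inputs
    state = [math.tanh(sum(w * s for w, s in zip(row, state)))
             for row in weights]
    print(f"t={t}:", " ".join(f"{v:+.2f}" for v in state))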
|
|
Take the problem of sight. The resolution of the eyes is pretty high. Even a terahertz computer would have trouble processing it. However, it occurs in realtime because all the data is being processed simultaneously. |
|
|
The only real problem is figuring out what, exactly, each little circuit should do... |
|
|
If you don't agree with me or you didn't read my rant, consider that the brain is constrained by natural laws. Therefore, if said laws are understandable, a reasonable assumption, then it must follow that thought is _logical_ and has its foundation in logic. Since all logic can be expressed in terms of boolean logic, it _must_ follow that thought can be duplicated electronically. The so-called free association and leaps of intuition are probably nothing more than a particular set of neurons running slow or garbling data, causing associations that were never meant to happen. |
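For what it's worth, the "all logic can be expressed in terms of boolean logic" step has a standard concrete illustration: NAND alone is functionally complete, so any finite truth table, and hence any fixed logical rule, can be built out of a single electronic primitive. A tiny Python sketch (the function names are just labels for the derived gates):

def nand(a, b):
    return not (a and b)

def NOT(a):    return nand(a, a)
def AND(a, b): return nand(nand(a, b), nand(a, b))
def OR(a, b):  return nand(nand(a, a), nand(b, b))
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (False, True):
    for b in (False, True):
        print(a, b, "->", "AND", AND(a, b), "OR", OR(a, b), "XOR", XOR(a, b))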
|
|
Btw, the real limit on computers is memory/physical storage bandwidth, not CPU cycles/sec. Even if Moore's law remains true for the next 20 years, for every cycle spent processing something, the CPU will spend a large fraction of its time waiting for more data from memory. |
|
|
// the brain is constrained by natural laws. Therefore, if said laws are understandable // |
|
|
That postulate is untenable within the terms of the Copenhagen Interpretation. Since perception and thought are limited by their nature and the symbolic forms available to express and communicate those perceptions, and since "reality" is created by observation ("There is no deep reality"), it is not possible to "know" the "laws" involved since the very perception is not only constrained by those same laws, but may be constrained by other laws which it is incapable of perceiving because those other laws intrinsically limit such perception. |
|
|
Now go over there and open that crate. You'll need a crowbar. It's inside the crate. |
|
|
Or read a book on metaphysics. |
|
|
//What evidence is there to suggest that at any point in time, ever, robots will have the intelligence, decision-making skills and abstract thinking abilities of humans?//
In my neighbourhood there are toasters that already surpass most of the local humans in that respect. |
|
|
Same here. Depressing, isn't it? |
|
|
[8th_of_7] Why would you find the fact that most people are dumber than toasters depressing? |
|
|
[+] but what you're describing isn't socialism. |
|
|
It's because he can't fit the implants in the toasters, [mouseposture]. |
|
|
That's right. Most of the humans just aren't worth Assimilating. Every time we do, the overall intelligence of the Collective drops slightly. |
|
|
That would explain the Borgs you see at the mall with their cranial implants on sidewards. |
|
|
The idea as described has nothing at all to do with socialism. |
|
|
Which industry was put under State control? |
|
|
Oh, I get it. The word "Liberal" lost its traction. |
|
|
So now words such as Socialist and Fascist are resorted to. |
|
|
Can I introduce you to the word Abiotic? I think you'll find it a very comfortable word. One our former sworn deadly enemies espoused. |
|
|
Consider - Socialist Robots. |
|
|
Ian, it will happen but not in the rosy way which you describe -- putting aside the stormy transition you allude to. |
|
|
200 to 300 years of technological progress -- even if it includes heavy genome editing capabilities -- are unlikely to repeal several billion years of biology. For that reason, the Diamond Age paints a much more convincing picture than you lay out. Oh sure, the elimination of meaningless labor and guaranteed income are quite likely. But editing "want" from the human condition is quite unlikely -- even if it was feasible technically (and I believe it isn't without the type of hive existence that's unattractive to most). |
|
|
So long as "want" is part of it, so long as we are primates or
some derivation thereof even if represented in bits, some
sort
of "trading" has to exist. This trading necessitates
everyone to
have something of value to be able to get something of
value.
Even if that something is purely status, let me assure you
that
maintaining it will feel exactly like having to have a job,
at least in the middle class sense of that word. |
|
|
This idea is funnier when you know that the real "robots" are people without the social skills needed to own the means of production. |
|
|
//That postulate is untenable within the terms of the Copenhagen Interpretation. Since perception and thought are limited by their nature and the symbolic forms available to express and communicate those perceptions, and since "reality" is created by observation ("There is no deep reality"), it is not possible to "know" the "laws" involved since the very perception is not only constrained by those same laws, but may be constrained by other laws which it is incapable of perceiving because those other laws intrinsically limit such perception.// |
|
|
I refute it thus: <drops heavy stone on [8th]'s toe> |
|
|
It is quite likely (in fact, almost inevitable) that quantum noise has an impact on thought. However, that does not prevent us from understanding or simulating intelligence. Quantum randomness can be quite effectively simulated by a fully deterministic system and, if that's not good enough, we can certainly create devices which are themselves subject to quantum noise. |
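A tiny sketch of that last point: the "quantum noise" perturbing a simulated neuron can come from a fully deterministic, seeded generator, and if genuinely non-deterministic noise were ever needed, an entropy source such as os.urandom could be swapped in without touching the rest of the model. The update rule and noise scale below are arbitrary illustrations.

import math
import os
import random

def noisy_update(x, rng, scale=0.01):
    """One non-linear update step with a small noise term mixed in."""
    return math.tanh(2.0 * x + rng.gauss(0.0, scale))

deterministic = random.Random(42)                 # reproducible pseudo-noise
entropy_seeded = random.Random(os.urandom(16))    # seeded from OS entropy

x_a = x_b = 0.1
for _ in range(5):
    x_a = noisy_update(x_a, deterministic)
    x_b = noisy_update(x_b, entropy_seeded)
print("deterministic run :", round(x_a, 4))
print("entropy-seeded run:", round(x_b, 4))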
|
|
/That would leave humans to do whatever they wanted to do, while their robot would be providing income/ |
|
|
This has been successful in several forms. My favorite is the laundromat. Docile, slave washing robots. |
|