How would the next generation be if...

How would the next generation turn out if money were no longer required to live?

Understand that humanity is quickly approaching the automated age, where robots (from small appliances to androids) will be doing the "unfavorable" work for us. That means all unwanted jobs would be filled, and we could do whatever we'd like. (Even work a job, if that's your thing, but keep in mind it'd be for free.)

Do you think the majority would grow lazy? How would we act if we had access to virtually limitless resources? Would schooling still need to be compulsory if people already had what they needed to live?
 
Robots need raw materials to build, and energy to function.

A pre-requisite for robots to be more widely used is strong AI. When that happens, I believe humans will become obsolete. There could be big problems. Robots will develop to be superior to humans in every possible way. Why should they let us grow food on land that could be covered in solar panels to power them? Far from robots doing the unfavourable work for us, it will be us who are struggling to find unfavourable jobs.

I don't know when this future will happen, but assuming nothing else intervenes, I believe it will happen. (Other things that could intervene would be a global catastrophe that permanently destroys advanced civilization, humanity augmenting itself with cybernetics, possibly genetic engineering, or things we don't even know about yet.)
 
@cantab:
That sounds like a robot apocalypse theory to me. I'm certain we, as a species, will always have control over these programs. We shouldn't need AIs that are far superior to the human mind, especially for minor jobs. Also, the power-source point is debatable. If anything, we should be more wary of nanotechnology and biotech. Robots aren't made to replace us; they are made to improve us. (Imagine robot helpers attached to people, drawing their fuel from our blood.)
 
If robots work for nothing, I bet food will become much more abundant, and more things would probably get cheaper. Imagine a self-planting bot running around acres, growing a whole forest of vegetables.
 
If robots work for nothing, I bet food will become much more abundant, and more things would probably get cheaper. Imagine a self-planting bot running around acres, growing a whole forest of vegetables.
Food is abundant enough in the developed world. The developing world won't be able to afford robots.

Also, agricultural jobs are already very low paid. The jobs robots will get into first are the ones that are highly paid, labour intensive, and dangerous. The minimum-wage drudge-work jobs will be the last to be filled by robots, since those robots will need to be the cheapest in order to compete. And in general, robots won't just depress prices; they'll also depress wages in the fields where they start being used.

Thinking of professions robots will soon see more use in: we're already starting to see them in surgery, where the motive is greater capability rather than lower cost. I reckon logging could be a good one for robotics. This would be heavy machinery, potentially equipped with the ability to identify a tree's species, cut it down if it's a target species, and transmit the coordinates so the trunk can be retrieved. You gain the obvious advantage of not having humans around falling trees and chainsaws, and the AI requirement is simple and well-defined.

Also, I think we'll see ground combat robots in due course. We already have UAVs in the air, and guided torpedoes (and missiles) meet the definition of a robot too, I think. Bipedal battle droids are a long way away, but something like a semi-autonomous light tank is more feasible, although the Mars rovers demonstrate the limitations of current autonomous vehicles. There's already been a lot of progress in automated creation of 3D models of areas using LIDAR, some of it motivated by Google's desire to digitise the Earth. Such capabilities will be key to any robot intended to operate in an unpredictable, general environment.
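As a rough sketch of how simple and well-defined that logging decision loop could be (the species classifier, actuator, and radio calls here are placeholders I've made up, not any real robotics API):

```python
from dataclasses import dataclass

TARGET_SPECIES = {"sitka spruce", "lodgepole pine"}  # hypothetical harvest list

@dataclass
class TreeSighting:
    species: str      # output of an onboard classifier (e.g. from LIDAR + camera data)
    latitude: float
    longitude: float

def handle_sighting(tree, cut, transmit):
    """Identify the species; if it is a target, fell it and report the trunk's
    coordinates for retrieval. `cut` and `transmit` are stand-ins for whatever
    actuator and radio interfaces the machine actually has."""
    if tree.species in TARGET_SPECIES:
        cut(tree)
        transmit({"lat": tree.latitude, "lon": tree.longitude})

# Example with stub actuators:
handle_sighting(TreeSighting("sitka spruce", 56.1, -3.9),
                cut=lambda t: print("felling", t.species),
                transmit=lambda coords: print("trunk at", coords))
```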
 
A pre-requisite for robots to be more widely used is strong AI. When that happens, I believe humans will become obsolete. There could be big problems. Robots will develop to be superior to humans in every possible way. Why should they let us grow food on land that could be covered in solar panels to power them? Far from robots doing the unfavourable work for us, it will be us who are struggling to find unfavourable jobs.
There is no reason robots would work towards their own welfare to the detriment of ours, especially if they were conceived to help us (in which case, their own welfare would be equated to ours). There is no magic attractor that compels an intelligent entity to work with the same kind of incentives as humans. There is also no reason machines cannot easily be made to identify more with us than with their brethren. If there is no high probability path from the reward/punishment system we set in place to a machine uprising, a machine uprising will not happen, and it seems to me that most systems would not contain such a path.

Even within humans, for most people, there are several things that they simply cannot do, no matter how much they try to rationalize it. A lot of that stuff is hard-wired. It would not be difficult to make machines that simply cannot harm humans, and if somehow they do, fall into a catatonic state of self-loathing.

In other words, if we make the rules of the game, machines will try to win *that* game, not another game. Human-like behavior is a way to win Nature's game, and will arise in any situation that implements similar rules. There is no reason to think it would arise in other situations.

We can't even keep the machines we have today under control 100% reliably.
No, but there is more than one failure mode. We will make machines that overheat and break down, machines that are incapable of learning certain concepts, machines that do their tasks in a sub-optimal fashion, etc. But generalized rebellion is a failure mode that I don't think is probable. In fact, I find it contrived.

I admit, though, that human incompetence or recklessness could lead to such a situation.
 
Well, it seems that some aspects of my "prediction" are in fact statements of already-underway research. "MDARS" is a sentry robot that the US is looking into giving "the ability to track and engage targets independently" (New Scientist, 23 Oct 2010, p23). An Israeli company is developing "Guardium", which has already been patrolling Israel's borders and can be programmed to return fire if shot at. The South Koreans have a currently static robot that can be programmed to demand a password from an intruder and, if it is not given (or the robot fails to recognise an attempt to give it), open fire. The military are very much interested in robots, and their interest is in ones that harm people.

Even within humans, for most people, there are several things that they simply cannot do, no matter how much they try to rationalize it.
Other than physical impossibilities, name one? Multiple scientific studies, as well as informal anecdotes, show that "most people" will do pretty much anything if put in the "correct" situation. (The classic example of a formal study of this kind is the http://en.wikipedia.org/wiki/Milgram_experiment . For anecdotes, read http://en.wikipedia.org/wiki/Pranknet for what "most people" can be induced to do by a simple phone call.)

Also, you're assuming we make AI that we fully understand. That may not be the case; in particular, should we successfully create strong AI by genetic algorithm methods, we won't know how it works, and therefore won't be able to easily impose specific limits.
 
Well, it seems that some aspects of my "prediction" are in fact statements of already-underway research. "MDARS" is a sentry robot that the US is looking into giving "the ability to track and engage targets independently" (New Scientist, 23 Oct 2010, p23). An Israeli company is developing "Guardium", which has already been patrolling Israel's borders and can be programmed to return fire if shot at. The South Koreans have a currently static robot that can be programmed to demand a password from an intruder and, if it is not given (or the robot fails to recognise an attempt to give it), open fire. The military are very much interested in robots, and their interest is in ones that harm people.
Well, yes, the military are very much interested in robots that harm enemies. But as more and more of these are made, robots will become a much more potent threat than humans, which means they will be trained to take down enemy robots and drones, not humans. Another potential application of robots that would harm humans would be to enforce curfews or to act as a sort of thought police. But in any case, any party who deploys robots without kill switches is asking for trouble.

Other than physical impossibilities, name one? Multiple scientific studies, as well as informal anecdotes, show that "most people" will do pretty much anything if put in the "correct" situation. (The classic example of a formal study of this kind is the http://en.wikipedia.org/wiki/Milgram_experiment . For anecdotes, read http://en.wikipedia.org/wiki/Pranknet for what "most people" can be induced to do by a simple phone call.)
The machine analogue of that would be unorthodox behavior of some machines in contrived situations, and that is supposed to be worrying in what sense, exactly? What I mean is more on the order of phobias. Some humans have behaviors that cannot be turned off without therapy or medication. For the most part, nature selects against this, in a weak way. We would select for it, in a strong way.

You seem to be under the impression that we would train AI by putting it in a simulated or real natural environment until human-like intelligence arises, and then picking out the individuals most successful at survival. We wouldn't do that. We would tightly control the survival criterion itself. For example, we could try to minimize the Kullback-Leibler divergence between the probability distribution of the robot's actions when it does not know what other robots are doing and the distribution of its actions when it does. In other words, robots would have a higher probability of surviving the training process if they act independently of other robots, i.e. we can train them to be non-social.
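As a minimal sketch of that kind of criterion, assuming discrete action distributions (the function names and the example numbers are purely illustrative, not any particular training framework):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete probability distributions over actions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def sociality_penalty(actions_without_peer_info, actions_with_peer_info):
    """How much the robot's action distribution shifts once it can observe what
    other robots are doing; training that keeps this small selects for agents
    that act independently of their peers."""
    return kl_divergence(actions_with_peer_info, actions_without_peer_info)

# Hypothetical distributions over the same three actions:
p_alone     = [0.6, 0.3, 0.1]   # robot cannot observe other robots
p_observing = [0.2, 0.5, 0.3]   # robot can observe other robots
print(sociality_penalty(p_alone, p_observing))  # larger value = more "social" behaviour
```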

Also, you're assuming we make AI that we fully understand. That may not be the case; in particular, should we successfully create strong AI by genetic algorithm methods, we won't know how it works, and therefore won't be able to easily impose specific limits.
No, I do not. I assume that we know the objective function that the AI optimizes. In other words, I don't claim to understand how the chess player plays; I claim to know that the chess player is playing chess. If I want to train an AI that does exactly what I tell it to using genetic methods, then the only AI that will survive is AI that obeys my commands. How do you think this process could lead to AI that not only doesn't obey my commands, but works against me? The AI's environment is not nature, it is "obey me". AI isn't culled if it can't feed, as it would be on Earth. It's culled if I am dissatisfied. Genetic algorithms can certainly lead to surprises, like AI that doesn't always obey me, but what you suggest is the equivalent of positing a natural species whose members never reproduce and gleefully throw themselves onto jagged rocks whenever they get the chance.
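To make the point concrete, here is a toy sketch of a genetic algorithm whose only fitness criterion is obedience. The genome encoding and the scoring function are invented for illustration, but they show why the survivors of such a process are, by construction, the ones that obey; whatever emerges inside the genome emerges because it scored well on that objective, not on some other game.

```python
import random

GENOME_LEN = 16   # each bit stands for one command: 1 = obeyed, 0 = ignored
POP_SIZE = 30
GENERATIONS = 40

def obedience_score(genome):
    """Illustrative fitness function: the fraction of commands obeyed."""
    return sum(genome) / len(genome)

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [bit ^ (random.random() < rate) for bit in genome]

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # The selection pressure is "obey me": only the most obedient genomes survive.
        population.sort(key=obedience_score, reverse=True)
        survivors = population[:POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=obedience_score)

best = evolve()
print(obedience_score(best))  # converges towards 1.0: disobedience is culled, never rewarded
```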
 
