Quote Originally Posted by wufwugy

a big goal of AI sciences is to create emotions and consciousness.
I don't follow AI science very closely, but those seem like almost unreachable goals. Moreover, how are you going to know if you've succeeded? Ask the computer "Are you conscious?" or "Are you experiencing emotions?" And if it says yes, does that constitute any sort of proof?

Quote Originally Posted by wufwugy
every single example we have on this planet of entities that can learn from their actions/environment has shown that the purpose behind their learning is to move higher up the chain.
By entities that can learn I assume you're referring to living creatures. In that case, the real purpose behind their learning is to propagate their genes. Whether or not this moves them higher up the chain is incidental; it just happens that the two often coincide - e.g., the alpha male is the one who mates, gets to eat first, etc.

Quote Originally Posted by wufwugy
by definition alone, adaptation is about betterment. we do not have any examples of things that adapt that don't try to achieve better than what they have. it is folly to think that this will not apply to the creation of free-thinking AI.
Again, you're talking about living things that follow the rules of natural selection. There's no inherent requirement for AI to emulate us in this way. There's no reason to assume that because they possess the human characteristic of intelligence they will be human-like in any other way.


Quote Originally Posted by wufwugy
as of now, AI is not free thinking, and it will remain so until we have the brain and its neuronal communications understood. a theory for why neurons provide free-thinking while standard wiring does not is that wires are connected to just one other wire so there's a linear communication. neurons, on the other hand, connect with any and every neuron, and so communication and processing takes on a whole new paradigm. creating consciousness could be just as simple as simulating neurons.
Consciousness is a red herring in this debate. Just possessing consciousness or being free-thinking doesn't necessarily lead to all kinds of other human characteristics such as the will to power, any more than building a machine that can beat us at chess means that same machine will 'want' to beat us. It's just a machine; it's not driven in the same ways living things are.

Quote Originally Posted by wufwugy
it is a paradox to think that we could build machines that adapt while programming them to not adapt in certain ways. we just can't and won't be able to write a program that determines which adaptations are adapted to and which aren't.
There are plenty of machines that adjust their output to adapt to circumstances. For example, there are lots of artificial neural networks that learn things, and none of them have yet run amok and tried to subjugate us. While they are programmed to learn and adapt, to my knowledge no one has specifically had to program them not to seek world domination. Sorry if that sounds facetious; I just think you're making an invalid assumption here. To make that concrete, here's a minimal sketch (purely illustrative, not any particular real system) of what "learning and adapting" actually amounts to in such a network.
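The toy Python/NumPy network below learns XOR by gradient descent. The task, layer sizes, and learning rate are all hypothetical choices of mine; the point is that the only "betterment" the program ever pursues is a lower value of the one loss function its programmer picked.

```python
import numpy as np

# Toy example: a two-layer network learning XOR by gradient descent.
# Its only notion of "better" is a lower value of the loss below;
# nothing in the update rule reaches beyond that objective.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for step in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)        # the entire "goal" of the system

    # backward pass (chain rule for mean squared error)
    d_out = (out - y) * out * (1 - out) * (2 / len(X))
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0, keepdims=True)
    d_h = d_out @ W2.T * h * (1 - h)
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0, keepdims=True)

    # "adaptation": nudge the weights downhill on the loss, nothing more
    W2 -= lr * d_W2; b2 -= lr * d_b2
    W1 -= lr * d_W1; b1 -= lr * d_b1

print(np.round(out, 2))  # approaches [0, 1, 1, 0]
```

Notice that nobody has to forbid this program from pursuing other goals: outside the loss it was given, no other goals exist anywhere in the system. That's the sense in which "machines that adapt" don't automatically inherit a drive to move up any chain.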

Quote Originally Posted by wufwugy
it boils down to a sense of self equating similar experiences to similar senses. for example: lets say we create AI to fight our wars. i seriously doubt this will happen because i believe that war amongst large human territories will also be obsolete soon, but anyways. this AI would need to have a sense of good and bad things that happen if it's to operate in the field. we can program (teach) it to not think that anything about its CO is bad, but what happens when it experiences bad in the field then experiences the same exact bad during an experience with its CO? will it not come to a point in its mind where both reactions are both right and wrong and it must make a personal decision? which is basically what we all do. we experience right and wrong in everything but make our minds up due to personal reasons.
This is pretty much how the army operates: they program their soldiers to obey orders. So again, you are assuming an AI would necessarily have to have human characteristics.

I think our disagreement boils down to this: I don't believe, even if we were capable of doing so, that we'd ever want to create an AI that would both a) have the ability to surpass us and b) have the 'motivation' to dominate us. Certainly the former is well within our current capabilities, but the latter would be a very bad move on our part indeed. I suppose it could happen by accident...