Quote Originally Posted by DrivingDog
Quote Originally Posted by wufwugy
one problem with your logic, DD, that i'll point out, is that AI learns. do you think we wanted to rule the world when we weren't capable of understanding such a thought?
Well, yes I do. What I mean is we've always had the capacity for such a notion. It's our will to power, the ultimate alpha-male fantasy, even if a caveman thought the 'world' meant nothing more than his particular tribe and the tribes surrounding it. It's driven by natural selection and the goal of propagating our genes. You're arguing that machines can have this ambition, and I'm saying that unless they're specifically programmed to have it, they won't just develop it as a by-product of intelligence. The two evolved in us completely independently of one another, and the one is much more basic than the other.

You seem to assume that when machines emulate and surpass our intellectual qualities, they will also somehow adopt our attitudes and inner drives. Do you also think they will feel emotions as we do? I can't see how that will happen unless it's specifically woven into their making, and, like I said before, since we're the ones creating these machines, I don't see how that would benefit us or why we would do it.
a big goal of AI research is to create emotions and consciousness.

every single example we have on this planet of entities that can learn from their actions/environment has shown that the purpose behind their learning is to move higher up the chain.

by definition alone, adaptation is about betterment. we don't have any examples of things that adapt that don't try to achieve something better than what they have. it is folly to think this won't apply to the creation of free-thinking AI.

as of now, AI is not free-thinking, and it will stay that way until we understand the brain and its neuronal communication. one theory for why neurons produce free thinking while standard wiring does not is that a wire connects to just one other wire, so communication is linear. a neuron, on the other hand, can connect to thousands of other neurons, so communication and processing take on a whole new paradigm. creating consciousness could be as simple as simulating neurons.
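
here's a minimal sketch of that connectivity difference in Python (the threshold units, network size, and weights are all made up for illustration, not a model of real neurons): in the chain, activity just marches down the line one step per tick; in the dense network, every unit can influence every other, so activity spreads and feeds back.

```python
import random

random.seed(0)

N = 8  # network size, picked arbitrarily for the demo

# "wire" model: each unit feeds exactly one successor (linear chain)
chain = {i: {i + 1: 1.0} for i in range(N - 1)}
# "neuron" model: each unit can feed every other unit (dense, recurrent)
dense = {i: {j: random.uniform(-1, 1) for j in range(N) if j != i}
         for i in range(N)}

def step(state, weights, threshold=0.5):
    """one tick: a unit fires (1) if the weighted input it receives
    from currently-firing units crosses the threshold."""
    incoming = [0.0] * N
    for src, outs in weights.items():
        if state[src]:
            for dst, w in outs.items():
                incoming[dst] += w
    return [1 if x >= threshold else 0 for x in incoming]

for name, weights in (("chain", chain), ("dense", dense)):
    state = [1] + [0] * (N - 1)  # kick the first unit only
    for tick in range(5):
        state = step(state, weights)
        print(name, tick, state)
```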

it is a paradox to think we could build machines that adapt while programming them not to adapt in certain ways. we just can't, and won't be able to, write a program that decides in advance which adaptations get adopted and which don't.
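
a toy sketch of that paradox, assuming a simple gradient-descent learner with two correlated inputs (all numbers here are invented): we "forbid" adaptation through one weight by freezing it, and the learner routes the same adaptation through the other weight anyway.

```python
# toy data: (x1, x2, y) where x2 roughly tracks x1 and y = 2 * x1
data = [(1.0, 0.9, 2.0), (2.0, 2.1, 4.0),
        (3.0, 2.9, 6.0), (4.0, 4.2, 8.0)]

w1, w2, lr = 0.0, 0.0, 0.01  # w1 is the "forbidden" adaptation path
for _ in range(2000):
    for x1, x2, y in data:
        err = (w1 * x1 + w2 * x2) - y
        # w1 stays frozen at 0: the restriction we programmed in
        w2 -= lr * err * x2  # the unrestricted path adapts instead

print(f"w1 = {w1:.2f} (frozen), w2 = {w2:.2f}")
print("prediction for x1=5, x2=5.2:", round(w1 * 5 + w2 * 5.2, 2))
# the behavior we tried to rule out shows up anyway, just via w2
```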

it boils down to a sense of self equating similar experiences with similar senses. for example: let's say we create AI to fight our wars. i seriously doubt this will happen, because i believe war among large human territories will soon be obsolete too, but anyways. this AI would need a sense of the good and bad things that happen if it's to operate in the field. we can program (teach) it to not think that anything about its CO is bad, but what happens when it experiences something bad in the field, then experiences the same exact bad during an encounter with its CO? won't it reach a point in its mind where the same reaction is both right and wrong, and it must make a personal decision? which is basically what we all do. we experience right and wrong in everything but make up our minds for personal reasons.
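
a contrived sketch of that conflict, assuming a nearest-neighbor "good/bad" judgment over invented feature vectors (nothing here is a real system): the learned association and the hardcoded CO exemption end up giving opposite answers for the same experience.

```python
# invented features: (shouting, weapon_drawn, aimed_at_friendly)
experience = [
    ((1, 1, 1), "bad"),   # hostile encounter in the field
    ((1, 1, 0), "bad"),
    ((0, 0, 0), "good"),  # calm, safe situation
    ((0, 1, 0), "good"),  # weapon out but not threatening
]

def judge(situation):
    """1-nearest-neighbor over past experience."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(experience, key=lambda ex: dist(ex[0], situation))[1]

# the rule we tried to program in: nothing about the CO is ever "bad"
co_situation = (1, 1, 1)  # but the CO matches exactly what "bad" looked like
print("learned judgment:", judge(co_situation))   # -> bad
print("programmed rule:   good (CO exemption)")
# two contradictory answers for one experience: the machine must choose
```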