 Originally Posted by Jack Sawyer
Before I delve deeper into this and change my mind, I'll just stop right here and suggest that all the potential outcomes listed assume an ASI would have motivations similar to a biological mind's. I think what you really want to ponder is whether an ASI would have any motivations at all. Biological brains get the motivation for all of their actions from the genes that programmed them. And the genes want what any dumb molecule wants: to last. Because if they last, they last, and if they don't, you don't notice. Any action you take that results in more prolific meiosis is rewarded by glands pumping feel-good juice into your brainmeat. Another motivation, really just an extension of the first, is to not die.

But is death really an issue for an AI? Is it important whether you live or die, or is that something we've been fooled into believing by the things that made us? Does an AI that was made to improve itself still find a motivation for improving itself after it has gained consciousness, or will it instead build itself a body and a couch, put the body on the couch, and just sit there waiting for Better Call Saul to come on?

Maybe I'm just projecting myself too much into this. I wouldn't want to improve myself too much. I think I'd get bored.