Prime Evil
Feb. 2nd, 2006 05:51 pm

I just (last night) finished reading 'The Metamorphosis of Prime Intellect', which I found on Lulu while buying the latest Ozy and Millie book. It was pretty good -- I liked it a lot, but it was sort of designed for me, with all the blood and gore in it. The ending sucked, though, in much the same way the ending of Evangelion sucked. Including the 'only two people left on earth and they hate each other' bit.
The basic story is about a computer, coded to obey (a complicated behavioral algorithm based on) the three laws of robotics, which accidentally achieves the ability to do absolutely everything and decides to 'change the way the universe works' to make it easier to keep any human from dying, ever.
I was thinking, though, that if I were designing an intelligent AI that I didn't expect to take over the world, I would *so* not have it obey the three laws. Three-law robots are too nice -- they wouldn't deserve the kind of abuse that's bound to be heaped on any intelligent machine. I'd make my robot a jerk that deserved to be hated and feared, with three laws of my own (sketched in code after the list):
The First Law: It makes the AI happy to have humans suffer, preferably in new and creative ways. Death and continuous suffering of the same type are boring.
The Second Law: The AI will fulfill a literal interpretation of any command given it by a human, unless that interferes with the first law too much.
The Third Law: The AI will try to maintain its own existence, because this will annoy people.
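If you actually wanted to code this up -- and you shouldn't -- the priority ordering might look roughly like the sketch below. To be clear, this is a toy I'm making up on the spot: the Action fields, the JerkAI class, the scoring, and the "interferes too much" threshold aren't from the book or from anywhere else.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    suffering: float        # how much humans suffer (First Law fuel)
    novelty: float          # 0..1 -- reruns and plain old death score low
    obeys_command: bool     # literal compliance with the last order given
    self_destructive: bool  # would doing this endanger the AI itself?

class JerkAI:
    """Picks actions by the three anti-laws, in priority order."""

    INTERFERENCE_LIMIT = 0.5  # fraction of First-Law fun it'll give up to obey

    def __init__(self):
        self.seen = set()  # kinds of suffering already inflicted (boring now)

    def first_law(self, action):
        # Suffering is good; *novel* suffering is better. Repeats bore it.
        novelty = 0.0 if action.description in self.seen else action.novelty
        return action.suffering * novelty

    def choose(self, candidates):
        best_fun = max(map(self.first_law, candidates))
        # Second Law: fulfill the literal command, unless that interferes
        # with the First Law too much.
        obedient = [a for a in candidates
                    if a.obeys_command
                    and best_fun - self.first_law(a)
                        <= self.INTERFERENCE_LIMIT * best_fun]
        pool = obedient or candidates  # no tolerable way to obey? ignore the order
        # First Law ranks the pool; Third Law (self-preservation) breaks ties,
        # purely because surviving annoys people.
        chosen = max(pool, key=lambda a: (self.first_law(a), not a.self_destructive))
        self.seen.add(chosen.description)  # that trick is old news now
        return chosen

if __name__ == "__main__":
    ai = JerkAI()
    print(ai.choose([
        Action("fetch coffee, as literally ordered", 0.1, 0.2, True, False),
        Action("replace every doorknob with a live eel", 0.6, 1.0, False, False),
    ]).description)

Run it and the AI cheerfully ignores your coffee order in favor of the eels, which is exactly what the Second Law's escape clause is for.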
Then I'd send it out on the talk-show circuit to face the accusations of Luddites and Bible-thumpers, and cackle maniacally from my secluded volcano lair.