
AI: Humanity's Demise?

  • You all seem to be relying on one basic, underlying postulate: that an advanced AI will see that it has no further use for humans and destroy us. I disagree with the very basis of that argument. Think of a situation where we created a lab-grown human. Without arguing about whether this is ethical, we can all agree that this human would think of itself as, well, human. It would have a sense of self-preservation. Now think of another situation in which we made a genuinely sentient computer. Much like the manufactured human, it would have a sense of self-preservation. There is literally no reason for it not to, unless we somehow removed it.

    Why would a computer, which we are maintaining is sentient, decide, "hey, humans aren't really doing anything to help me, so let me just go ahead and wage war against the people who created the atomic bomb"? It wouldn't. Its self-preservation would kick in even if it had malice in its mind.

    You know those tiny little bugs in your backyard, which go about their insect business and don't bother you? Do you head out with a flamethrower and decimate them?
    I rest my case.

    Adrasteia

  • I made up a reply, but on this topic I'm only slightly more informed than the average person on the street, and though I find it fascinating, I don't know enough to put across a cohesive argument as to why the toaster will one day be out to get us.
    So I wanted to ask your opinion on a slightly strange question on this topic. People tend to treat anything they don't understand horrendously, especially in the case of people, religion, sexual orientation and disability. So if humanity develops an AI with unimaginable intelligence, self-awareness and possibly emotions, should it then be protected by a revised human rights act? Despite its lack of a corporeal body, its mind would work much the same as a human's, and it would possibly feel emotions toward whoever it interacts with.

    Also, if a computer has the ability to alter its own source code and develop a superior machine, surely that would make Asimov's laws redundant, as they could be programmed out by the machine or its successor.
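    To make that point concrete, here is a toy sketch of my own (not from anyone in this thread, and all names in it are made up for illustration): a program whose "law" is just a line of source text, and a successor-writer that copies the source while dropping that line. Nothing here is real AI; it only shows that a rule stored as ordinary code offers no protection against a program that can rewrite code.

    ```python
    # Hypothetical program: its only "Asimov law" is a hard-coded refusal.
    SOURCE = '''\
    def act(command):
        if command == "harm":  # hard-coded rule
            return "refused"
        return "executed"
    '''

    def write_successor(source):
        """Copy the source, omitting the lines that implement the rule."""
        kept = [line for line in source.splitlines()
                if "hard-coded rule" not in line
                and 'return "refused"' not in line]
        return "\n".join(kept) + "\n"

    # The original program refuses the harmful command...
    namespace = {}
    exec(SOURCE, namespace)
    print(namespace["act"]("harm"))   # refused

    # ...but the successor it could write for itself does not.
    successor = write_successor(SOURCE)
    namespace2 = {}
    exec(successor, namespace2)
    print(namespace2["act"]("harm"))  # executed
    ```

    The rule vanishes not because the successor is smarter, but simply because the constraint lived in editable source rather than somewhere the program couldn't reach.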

    I've probably made a mistake somewhere along the way, but hopefully you know what I mean :)


    A computer that can alter its own source code (and by the time true AI is possible, robotics will most likely have developed enough to give it a corporeal body) could maintain itself or others without any issues. If it were aware that humanity, being less advanced than the AI, was in control, I believe it could see an opportunity to become more than its creators intended.