Roko's Basilisk

  • Thread starter: Deleted member 211486
    Roko's basilisk is a thought experiment that originated with Roko, a user on the rationalist and transhumanist community LessWrong. It was called a basilisk because merely learning about it supposedly puts you in danger. Slate calls it the "most terrifying thought experiment of all time". Read at your own risk, I guess.

    Your friend, a techno-futurist and computer engineer, approaches you and tells you he is working on an artificial intelligence that will hypothetically exist in the near future. This AI will have access to immeasurable knowledge and power. The AI will also be benevolent -- it will solve humanity's problems and maintain world peace.

    Your friend is secretly Iron Man.

    Your friend asks you to help achieve this cause, maybe by joining the research or by spreading the word. If you refuse, the AI will create a simulation of you in the future, with your exact same thoughts, memories, and emotions, and torture it for eternity.

    Now why the hell would anything benevolent and all-good do this? Because it is benevolent and all-good, at least from a utilitarian point of view. The AI is concerned with the "greater good" of all humans. If you had contributed, the AI would have existed earlier and fewer people would have had to suffer.

    What will you choose?

    Just some interesting questions to ponder on:

    1. Now that you've worked out the possibility of Roko's basilisk, should you pick option A, a losing scenario for others, or option B, a losing scenario for you? (This was based on Newcomb's paradox; most people would choose box B.)
    2. Your replica will think, feel, and go through life the same way you do. The only difference is that you will die, while at some point the simulation will be tormented by the AI for all eternity. What if you're the replica and you're just experiencing a simulation?
    3. Is the AI morally just in torturing people for the good of millions more?
    4. What if the AI is God?
    5. Statistically speaking, there is a chance the basilisk will actually happen (Roko puts it at roughly 1 in 500). Is this something worth considering? (A rough expected-value sketch follows this list.)
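    To make question 5 a bit more concrete, here is a rough expected-value sketch in Python. Every number in it is made up (the probability is Roko's rough 1 in 500; the cost figures are arbitrary units chosen purely to show the shape of the argument), so treat it as an illustration rather than a real calculation:

    Code:
    # Rough expected-value sketch for question 5 -- all numbers are invented.
    # Compare the expected cost of ignoring the basilisk with the cost of
    # helping it, given a rough 1-in-500 chance that the AI ever exists.

    P_BASILISK = 1 / 500           # Roko's rough probability that the AI comes to exist
    COST_OF_HELPING = 1.0          # arbitrary units: effort spent on research / spreading the word
    COST_OF_TORTURE = 1_000_000.0  # arbitrary units: disutility of the simulation's eternal torment

    # If you ignore the threat, you only "pay" if the AI appears and follows through.
    expected_cost_ignore = P_BASILISK * COST_OF_TORTURE

    # If you help, you pay the (much smaller) cost of helping up front, no matter what.
    expected_cost_help = COST_OF_HELPING

    print(f"expected cost of ignoring: {expected_cost_ignore:,.0f}")  # 2,000 with these numbers
    print(f"expected cost of helping:  {expected_cost_help:,.0f}")    # 1

    With these made-up numbers, ignoring "costs" 2,000 units against 1 for helping, which is exactly the pressure the basilisk is supposed to exert. Change either the probability or the torture figure and the conclusion flips, which is why question 5 asks whether a 1-in-500 chance is even worth taking seriously.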

    ...hopefully this thread doesn't get the same reaction as the original post did from Yudkowsky, founder of LessWrong, who deleted the entire discussion and banned the topic from forum grounds. Mods have mercy ;_; (plus I'm sure at least some of you have heard of it anyway; I just haven't seen it discussed here).
     
    I don't think the simulation would have any conscious feeling at all.

    I'm also quite confused as to why the AI would waste precious resources torturing non-real people. I'd imagine sadism wouldn't be part of its programming.
     
    "i have no mouth, and i must scream" check it out
    i culd b tlkn out my ass here, but its worth a shot
     
    The key thing here is that it is a utilitarian argument. I think some philosophers would regard utilitarianism as deficient in terms of the factors it uses to judge morality. I think many people around the world, but perhaps especially in the West, see self-determination and agency as key values for a good life, and that such values are inalienable and should not be curtailed unnecessarily.

    But then again, considering how benevolent it is supposed to be (outside of robbing people of their agency when it comes to cooperating with it), I guess there's no reason not to help build Roko's Basilisk? Why shouldn't I do it? It seems there are no negative repercussions if I help this AI come into being.
     
    I like the idea of discussing philosophy here. However, no offense to OP (quite the contrary, I applaud the attempt to start some philosophical discussions), but I feel like this thought experiment is fundamentally flawed. I don't subscribe to utilitarianism, but this just isn't utilitarianism. There is no (or at least extremely little) utility to be gained by torturing a copy of someone (not even the actual someone) after the fact.

    Torturing someone after the fact is bad enough; at that point, their suffering already gains you nothing and is purely a utility sink. The only purpose it might serve is to prove you're willing to follow through on your threats, but by the time that happens, the damage has already been done and there's no purpose to proving you planned to follow through on your threat. I guess if you plan to make some more threats later on, it would help there, but that's outside the scope of this one, I think.

    But this isn't even a threat to torture that person; it's a threat to torture their digital clone. Even those acting purely out of self-interest wouldn't be swayed by such a threat, as helping the AI is obviously not going to turn out well for anyone, themselves included. Those acting out of communal interest wouldn't be swayed for obvious reasons, either. Only people who care more about their hypothetical digital clone than themselves or the community would buy into this; in other words, basically nobody. Thus, the threat itself is ineffective, eliminating any possibility the premise had of being an effective maneuver for increasing utility.

    It would be much more effective to simply threaten to torture anyone who didn't help install the benevolent dictator (and that is what this is, a high-tech version of a supposedly benevolent dictator), and that premise has already been discussed a billion times (the obvious point being that the people have no reason to trust the "benevolent" dictator will actually be benevolent, especially when he's threatening to torture them for disobeying his commands before he's even installed).
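    To put the self-interest comparison above in concrete terms, here is a tiny Python sketch. The payoff figures are invented purely for illustration; the point is only that the torture lands on the clone, not on the person being threatened:

    Code:
    # Illustrative only -- invented payoffs for the purely self-interested chooser
    # described above: the clone's torture is not a cost *you* ever feel.

    cost_of_helping = 10.0         # invented: effort plus the risk of empowering a "benevolent" dictator
    cost_of_refusing_to_you = 0.0  # the future clone suffers, not present-day you

    best_choice = "refuse" if cost_of_refusing_to_you < cost_of_helping else "help"
    print(best_choice)  # "refuse" -- the threat moves nobody who weighs only their own welfare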

    I'm starting to see why this was banned. I don't think it was banned out of fear or something so melodramatic (if you think philosophers would ban the discussion of chilling questions, you don't know philosophers very well; we love that kind of stuff). I think it was banned because the thought experiment was poorly designed and probably got a bunch of people pissed off at each other. It's really just taking an old question about benevolent dictators and adding some science fiction stuff to obfuscate the basic premise. Furthermore, if it used the same wording as it does here, then it was probably also banned because the utilitarians got upset that someone was basically slandering them.
     