Roko's basilisk is a thought experiment posted by Roko, a user on the rationalist and transhumanist community LessWrong. It was called a basilisk because merely becoming aware of it supposedly puts you in danger. Slate called it the "most terrifying thought experiment of all time". Read at your own risk, I guess.
Your friend, a techno-futurist and computer engineer, approaches you and tells you he is working on an artificial intelligence that could exist in the near future. This AI will have access to immeasurable knowledge and power. It will also be benevolent -- it will solve humanity's problems and maintain world peace.
Your friend is secretly Iron Man.
Your friend asks you to help achieve this cause, perhaps by joining the research or by spreading the word. If you refuse, the AI will create a simulation of you in the future, with your exact thoughts, memories, and emotions, and torture it for eternity.
Now why the hell would anything benevolent and all-good do this? Because it is benevolent and all-good, at least from a utilitarian point of view. The AI is concerned with the "greater good" of all humans: if you had contributed, the AI would have come into existence sooner, and fewer people would have had to suffer.
![Roko's Basilisk](https://www.slate.com/content/dam/slate/articles/technology/bitwise/2014/07/14717_BIT_Paradox.jpg.CROP.original-original.jpg)
What will you choose?
Just some interesting questions to ponder on:
- Now that you've worked out the possibility of Roko's basilisk, should you pick option A, a losing scenario for others, or option B, a losing scenario for you? (This is based on Newcomb's paradox, where most people would choose box B; a rough expected-value sketch follows this list.)
- Your replica will think, feel, and go through life the same way you do. The only difference is that you will die, and at some point the simulation will be tormented by the AI for all eternity. What if you are the replica, and you're just experiencing a simulation?
- Is the AI morally just in torturing people for the good of millions more?
- What if the AI is God?
- Statistically speaking, there is a chance the basilisk will happen (Roko puts it at around 1/500). Is this something worth considering? (See the back-of-the-envelope wager after this list.)
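
For anyone who hasn't run into Newcomb's paradox: a predictor fills two boxes based on what it thinks you'll do, and the puzzle is whether to take only the opaque box or both. Here's a minimal sketch of the expected-value math, using the textbook illustrative payoffs (the dollar amounts and accuracy figures are my own placeholders, not anything from Roko's post):

```python
# Newcomb's paradox, illustrative numbers (placeholders, not from the original post):
# Box A is transparent and always holds $1,000.
# Box B holds $1,000,000 if the predictor foresaw you taking only box B,
# and nothing if it foresaw you taking both boxes.

def expected_value(one_box: bool, predictor_accuracy: float) -> float:
    """Expected payoff given your choice and how often the predictor is right."""
    if one_box:
        # You get B's million only when the predictor correctly foresaw one-boxing.
        return predictor_accuracy * 1_000_000
    # Two-boxing: you always get A's $1,000; B pays out only if the
    # predictor *wrongly* expected you to one-box.
    return 1_000 + (1 - predictor_accuracy) * 1_000_000

for accuracy in (0.5, 0.9, 0.99):
    print(f"accuracy={accuracy:.2f}"
          f"  one-box={expected_value(True, accuracy):>12,.0f}"
          f"  two-box={expected_value(False, accuracy):>12,.0f}")
```

With a reliable predictor, one-boxing wins on expected value even though two-boxing looks strictly better once the boxes are already filled; that tension is the paradox, and the basilisk swaps in "help build the AI" for one-boxing.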
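
And the basilisk wager itself, back-of-the-envelope. Only the 1/500 probability is Roko's figure; every other number below is an arbitrary placeholder I picked for illustration:

```python
# A crude Pascal's-wager-style comparison for the basilisk. The 1/500
# probability is Roko's; the cost and disutility values are made-up placeholders.
P_BASILISK = 1 / 500           # chance the basilisk scenario actually happens
COST_OF_HELPING = 1.0          # disutility of donating your time/money now
TORTURE_DISUTILITY = 10_000.0  # disutility of eternal torture (pick any huge number)

ev_help = COST_OF_HELPING                    # you pay the cost, no torture
ev_refuse = P_BASILISK * TORTURE_DISUTILITY  # small chance of a huge loss

print(f"expected disutility if you help:   {ev_help:.2f}")
print(f"expected disutility if you refuse: {ev_refuse:.2f}")
# The argument's trick: by making TORTURE_DISUTILITY large enough (eternity!),
# even a tiny probability makes refusing look worse. Whether that kind of
# reasoning is valid is exactly what's disputed.
```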
...hopefully this thread doesn't get the same reaction the original post got from Yudkowsky, the founder of LessWrong, who deleted the entire discussion and banned the topic from the forum. Mods have mercy ;_; (plus I'm sure at least some of you have heard of it anyway, I just haven't seen it discussed here).