Imagine a future in which humanity is able to create a machine that studies all data relevant to governing a country: crime rates, the economy, education, everything. This machine is then placed in charge of that country. It decides on all of the laws and all of the big decisions based on a hierarchy of human ethics. At the top of this hierarchy is human life: if one option would save more lives than another, that option is chosen. Below human life would sit the next priority, for example the right to clean water, and so on down to the most trivial of benefits. When a decision affects multiple people, the machine uses this hierarchy to sort out what takes priority.
The reason this machine would be made is so that a country could be governed with 100% of the people's best interests in mind: no human politicians corrupted by money, power, or weak judgment. All decisions would be unbiased and based upon the Hierarchy. The machine would never act on its own whims, because it would strictly follow the processes set out before it. There would also be a mechanism for humanity to debate the machine's decisions; if the protests are great enough, the machine would add those outcries to its database and use them to improve future computations.
Now, say the decision to make this machine a reality is up to you alone. Would you trust this machine with humanity? Why or why not? What sort of priorities should be placed in the Hierarchy? Do you think there are any foreseeable flaws with this machine? I'm very curious as to what everyone thinks on this subject.