malanaphii
i'm about to monologue, son
i was in my ethics lesson the other day, and we ended up discussing the future of robotics. specifically, robots that become something close to sentient beings in their own right, capable of making moral decisions. do you think this will happen? if so, when?
one of the issues we ran into was how to decide on ethical rules in the first place. given the huge debate over what moral rules should even be (i.e. should there be fixed rules that can never be broken, or is morality more subjective?), someone suggested that a sufficiently powerful artificial intelligence could work out the laws of morality itself, without being constrained by the emotions etc. that arguably flawed humans have. but then again - surely this would be impossible, since no matter what, the robots' programming would have input from humanity and would therefore be inherently flawed just as humans are (if one argues that humans are inherently flawed).
not exactly sure where i'm going with this, but i find the idea of morality, especially when it comes to the existence of artificial intelligence, extremely interesting, and wanted to start a discussion.