
let's talk about artificial intelligence & ethics

malanaphii

i'm about to monologue, son
197 Posts · 6 Years · Age 22 · Seen Feb 17, 2023
i was in my ethics lesson the other day, and we ended up discussing the future of robotics. specifically, robots that become almost their own sentient beings, capable of making moral decisions. do you think this will happen? if so, when?

one of the issues we had about this was when it comes to deciding ethical rules as it is. considering the huge debate over what moral rules should be (i.e. should there be fixed rules that cannot be broken or is morality more subjective), someone suggested that maybe powerful artificial intelligence could figure out laws of morality, without being confined by the emotions etc. that arguably flawed humans have. but then again - surely this would be impossible, as no matter what, the programming of the robots would have input from humanity and therefore be inherently flawed just as humans are (if one argues that humans are inherently flawed).

not exactly sure where i'm going with this, but i find the idea of morality, especially when it comes to the existence of artificial intelligence, extremely interesting, and wanted to start a discussion.
 
650 Posts · 6 Years
Excellent debate topic!

I'm definitely of the opinion that it will one day be possible to create a being with its own valid feelings and ability to make choices, although I really couldn't tell you when I think that will happen!

It always gets a bit touchy when talking about the laws of morality, as whether you view morality as subjective or objective, you cannot help but form your own opinions even when trying to be as impartial as possible.

As far as deciding ethical rules goes: I think, for me, desire for self-preservation is key in how this hypothetical artificial intelligence should be treated (as that fits in with my own brand of morality). Does it have the capacity to want to exist, and to exist happily at that? Also I personally do not agree with the implication that our emotions are 'flaws' and believe they can be helpful for us when making important decisions (with moral implications) just as much as they can be a hindrance.
 
8,973 Posts · 19 Years
I think this is worth a read on this very subject.

In order to create something that's completely sentient, to the point where it would become any sort of threat to humans, it seems it would have to be programmed to develop some form of common sense with regard to how the real world operates. While we're currently in an impressive era where, say, Google Assistant can more or less predict what we want and send us useful information before we even think of it, it's largely the data that we send to Google that makes that possible.

Sentience is a long way away.
 

malanaphii

Does it have the capacity to want to exist, and to exist happily at that?

this seems really important when it comes to AI - if the robots wanted to exist, would turning them off become unethical? would it be akin to murder if they have enough of a conscience to want to exist?

Also I personally do not agree with the implication that our emotions are 'flaws' and believe they can be helpful for us when making important decisions (with moral implications) just as much as they can be a hindrance.

i agree with this - i was just making a potential argument point. although, having studied people such as augustine (who believed humans are inherently sinful because adam and eve sinned, in short), it makes me wonder whether humanity is inherently bad or flawed, and whether, if it is, robots would therefore also be inherently flawed. if that makes sense? tbh, whilst i don't believe a lot of this stuff, i do find it really interesting

Sentience is a long way away.

true - but do you think it's possible for robots to really be sentient? not necessarily in a way that means they could overrule humanity, perhaps, but sentient in that they have some sense of feelings and emotions?
 
650 Posts · 6 Years
this seems really important when it comes to AI - if the robots wanted to exist, would turning them off become unethical? would it be akin to murder if they have enough of a conscience to want to exist?

For me, yes it would, provided they hadn't done anything so terrible as to warrant a termination of existence. I apply the same rule to humans too, though. (I assume the hypothetical AI we're talking about is sapient-level intelligent, i.e. has moral responsibility... if that is your brand of ethics, of course.) Again, this is my personal philosophy.

Although, just to throw a spanner in the works, I don't hold animals to the same moral accountability as humans, given their lack of sapience. But suppose we created an AI with intellect on the level of an intelligent animal, say a pig. This AI has the capacity to think, feel, enjoy, and form bonds, and also has that all-important desire for self-preservation... but for some reason its continued existence threatens many other forms of life on the planet, and even the planet itself. Do we have a moral right to end its existence?


i agree with this - i was just making a potential argument point. although, having studied people such as augustine (who believed humans are inherently sinful because adam and eve sinned, in short), it makes me wonder whether humanity is inherently bad or flawed, and whether, if it is, robots would therefore also be inherently flawed. if that makes sense? tbh, whilst i don't believe a lot of this stuff, i do find it really interesting

Augustine also believed that whether or not we're getting into heaven (the 'City of God') is decided from the moment of our conception, and that there's nothing we can do to change it because God knows everything... yet it's still best for us to constantly strive to get into the City regardless. So he's not exactly my choice of philosopher when thinking about the human condition, haha.

Since I believe a lot of things in our reality are subjective, I'm not the best person to try to answer the question of 'if humanity is flawed, does that make our AI inherently flawed too?'. But for argument's sake, let's say original sin is a thing and that we can only be saved from ourselves by 'the Grace of God'. Then I think it wouldn't matter for the AI anyway, as only humans are allowed to receive that grace, on account of the soul God gave to us and not to AI.

Looking at another Abrahamic religion, though, Islam: in the Qur'an, God encourages us to seek out the wonders of the world [he] created and figure out the secrets of the universe. That's probably a big reason why the Islamic world was the core of scientific knowledge and discovery for centuries. So from that differing religious angle, Allah may not have a problem with us creating AI, and may even be happy we unlocked that secret. I realise I'm going slightly off on a tangent, but a religious philosopher was brought up from a moralistic standpoint, so I'm running with that and looking at it from another angle haha.
 