Internet & Technology - Here's where we nerd out about the world wide web and all forms of technology!

#1
Old 3 Weeks Ago (6:06 AM).
malanaphii
make me a fairy, whatever it takes
    Join Date: Dec 2017
    Location: hun wish i knew
    Age: 17
    Gender: Female
    Nature: Lonely
    Posts: 185
    i was in my ethics lesson the other day, and we ended up discussing the future of robotics. specifically, robots that become almost their own sentient beings, capable of making moral decisions. do you think this will happen? if so, when?

    one of the issues we ran into was how to decide what the ethical rules should even be. considering the huge debate over what moral rules should look like (i.e. should there be fixed rules that cannot be broken, or is morality more subjective?), someone suggested that maybe a powerful artificial intelligence could figure out the laws of morality without being confined by the emotions etc. that arguably flawed humans have. but then again - surely this would be impossible, as no matter what, the programming of the robots would have input from humanity, and would therefore be inherently flawed just as humans are (if one argues that humans are inherently flawed).

    not exactly sure where i'm going with this, but i find the idea of morality, especially when it comes to the existence of artificial intelligence, extremely interesting, and wanted to start a discussion.
    __________________
    (you waited smiling for this?)
      #2
    Old 3 Weeks Ago (2:02 AM).
    Yue Han
      Join Date: Feb 2018
      Location: Guangzhou
      Age: 26
      Gender: Male
      Posts: 375
      Excellent debate topic!

      I'm definitely of the opinion that it will one day be possible to create a being with its own valid feelings and ability to make choices, although I really couldn't tell you when I think that will happen!

      It always gets a bit touchy when talking about the laws of morality: whether you view morality as subjective or objective, you cannot help but form your own opinions, even when trying to be as impartial as possible.

      As far as deciding ethical rules goes: for me, the desire for self-preservation is key to how this hypothetical artificial intelligence should be treated (as that fits my own brand of morality). Does it have the capacity to want to exist, and to exist happily at that? Also, I personally do not agree with the implication that our emotions are 'flaws'; I believe they can help us when making important decisions (with moral implications) just as much as they can hinder us.
        #3
      Old 3 Weeks Ago (2:51 AM).
      colours
      wandererjustlikeme 🌟
      Join Date: Apr 2005
      Location: in an eternal dream
      Gender: Female
      Nature: Jolly
      Posts: 3,162
      I think this is worth a read on this very subject.

      In order to create something sentient enough to become any sort of threat to humans, it seems it would have to be programmed to develop some form of common sense about how the real world operates. While we're currently in an impressive era where, say, Google Assistant can more or less predict what we want and send us useful information before we even think of it, that's largely because the data we send to Google is what makes those predictions possible.

      Sentience is a long way away.
        #4
      Old 3 Weeks Ago (3:55 AM).
      malanaphii
      make me a fairy, whatever it takes
        Join Date: Dec 2017
        Location: hun wish i knew
        Age: 17
        Gender: Female
        Nature: Lonely
        Posts: 185
        Quote:
        Originally Posted by Yue Han View Post
        Does it have the capacity to want to exist, and to exist happily at that?
        this seems really important when it comes to AI - if the robots wanted to exist, would turning them off become unethical? would it be akin to murder if they have enough of a consciousness to want to exist?

        Quote:
        Originally Posted by Yue Han View Post
        Also I personally do not agree with the implication that our emotions are 'flaws' and believe they can be helpful for us when making important decisions (with moral implications) just as much as they can be a hindrance.
        i agree with this - i was just raising a potential argument. although, having studied people such as augustine (who believed, in short, that humans are inherently sinful because of adam and eve's sin), it makes me wonder whether humanity is inherently bad or flawed, and whether, if it is, robots would therefore also be inherently flawed. if that makes sense? tbh, whilst i don't believe a lot of this stuff, i do find it really interesting

        Quote:
        Originally Posted by colours View Post
        Sentience is a long way away.
        true - but do you think it's possible for robots to really be sentient? not necessarily in a way that means they could overrule humanity, perhaps, but sentient in that they have some sense of feelings and emotions?
        __________________
        (you waited smiling for this?)
          #5
        Old 3 Weeks Ago (4:55 AM).
        Yue Han
          Join Date: Feb 2018
          Location: Guangzhou
          Age: 26
          Gender: Male
          Posts: 375
          Quote:
          Originally Posted by malanaphii View Post
          this seems really important when it comes to AI - if the robots wanted to exist, then would turning them off then become unethical? would it be akin to murder if they have enough of a conscience to want to exist?
          For me, yes, it would - provided they haven't done anything so terrible as to warrant a termination of existence. I apply the same rules to humans too, though. (I assume the hypothetical AI we're talking about is sapient-level intelligent, i.e. having moral responsibility... if that is your brand of ethics, of course.) Again, this is my personal philosophy.

          Although, just to throw a spanner in the works: I don't hold animals to the same moral accountability as humans, given their lack of sapience. But suppose we created an AI with intellect on the level of an intelligent animal - say, a pig. This AI has the capacity to think, feel, enjoy, and form bonds, and also has that all-important desire for self-preservation... but for some reason its continued existence threatens many other forms of life on the planet, and even the planet itself. Do we have a moral right to end its existence?


          Quote:
          i agree with this - i was just making a potential argument point. although, having studied people such as augustine (who believe humans are inherently sinful because of adam and eve sinning, in short), it makes me wonder whether humanity is inherently bad or flawed, and whether, if it is, robots would therefore also be inherently flawed. if that makes sense? tbh, whilst i don't believe a lot of this stuff, i do find it really interesting
          Augustine also believed that whether we're getting into heaven (the 'City of God') is decided from the moment of our conception, and that there's nothing we can do to change it, because God knows everything... but that it's best for us to constantly strive to get into the City regardless of whether we're in or not. So he's not exactly my choice of philosopher when thinking about the human condition haha.

          Since I believe a lot of things in our reality are subjective, I'm not the best person to try to answer the question of 'if humanity is flawed, does that make our AI inherently flawed too?'. But for argument's sake, let's say original sin is a thing and we can only be saved from ourselves by 'the Grace of God' - then I think it wouldn't matter for the AI anyway, as only humans are allowed to receive that grace, because of the soul God gave to us and not to AI.

          Looking at another Abrahamic religion, though - Islam - in the Qur'an God encourages us to seek out the wonders of the world [he] created and figure out the secrets of the universe. That's probably a big reason why the Islamic world was the core of scientific knowledge and discovery for centuries. So from that differing religious angle, Allah may not have a problem with us creating AI, and may even be happy we unlocked that secret. I realise I'm going slightly off on a tangent, but a religious philosopher was brought up from a moralistic standpoint, so I'm running with that and looking at it from another angle haha.