Rise of the machines?

Inspired by watching Terminator on Monday.

How do you feel about developing advanced robotics & cybernetics to the point of being able to create artificial intelligences and androids almost completely indistinguishable from us, a la The Terminator or David from Prometheus? Does the risk of them 'rebelling' or thinking for themselves outweigh the benefits to mankind (assuming it's possible)? And what of the ethics behind creating enhanced human beings with extraordinary bionic abilities?

    Discuss!
     
    To be honest, once machines reach the point where they can think entirely for themselves and develop their own personalities, lives, and "souls", then they probably deserve a place in the world just like humans.

However, I don't think it's likely we will create full robots with human-like traits. Instead, I see the rise of cyborg-like beings -- fully human, but with enhanced strength, intelligence, and other abilities.
     
It's kind of complicated. Once properly programmed, computers can perform mathematical operations and solve other complex problems faster than humans. Give them mobile mechanical bodies and free will and they'd probably be superior to humans, unless one were to program the concepts of human limits and morals into them. But of course, having free will, they'd likely begin to question those concepts just as humans do.

It's a double-edged sword. On one hand, computers thinking for themselves would make them "smarter" and more human in a way; on the other, with superior processing power and mechanics that don't operate under the same constraints as organic matter, they could become a serious threat if they felt like it.

There are many angles to cover, and I suppose no one can be 100% sure what the benefits and consequences of free-thinking, mechanical humanoids would be until we reach that point and give it a shot. Overall, I suspect the risk of AI and androids rebelling would only arise from us treating them like slaves. Of course, given human nature, that's not exactly reassuring.

    On the other hand, I will not be the first nor the last man to admit I would so make it with a hot female android. Guilty as charged. Actually, no. I'm not guilty. To feel guilty would imply that I have some reservations, of which I have none. So I don't feel any guilt whatsoever. Bite me.
     
Computers with "free will" and the ability to "think" are so far into the realm of science fiction that I don't think any of us will ever see anything like it in our lifetimes. Sure, there are some amazingly advanced machines and robots out there, but they can generally only do one or two things really well. Imagine what you'd need to make a robot that could order a cup of coffee. It would need to be able to stand and walk with people buzzing all around. It would need to be able to see a menu and interpret what it saw, even if the handwriting wasn't very good or there was bad lighting or other visual distraction. It would need to be able to interact with a person and make an exchange of money, not only understanding what the person was saying but also interpreting it to make sense within a context. You might be able to program it to know what certain phrases mean ("That'll be $3.50."), but what if anything outside of its experience happened? Say, someone bumps into it and drops money all over the place. There's just too much to take into account.

    tl;dr we're never going to see "thinking" robots

But if we did, and they could think and feel for themselves, then they would deserve rights like anyone else. The ethics of creating something like this? Do we talk about the ethics of people having children, or just say that it's their right? I don't see why there'd be ethical problems in creating some kind of "living" machine.
     
    On the other hand, I will not be the first nor the last man to admit I would so make it with a hot female android. Guilty as charged. Actually, no. I'm not guilty. To feel guilty would imply that I have some reservations, of which I have none. So I don't feel any guilt whatsoever. Bite me.

    +1.

I don't think that we will ever be able to build an android/reploid/whatever, though. We could probably build robots that are near-human in emotion, but a purely mechanical creation will never be able to perfectly emulate human emotions.

I can see a Fallout-like solution to this: taking a human brain and putting it in a robotic body. But this would have serious moral implications. What makes a human human is the experiences and knowledge they have acquired; arguably, the brain is the 'soul' of a human. The moral implication is that this could be a form of immortality. As long as the brain remains undamaged, the 'person' could switch from body to body until they decide to die, or the brain dies. (I know the FO robots look like crap, but since the intent is to create a human-mimicking robot, the body would be built to look like a human.)

As for the brain dying: technology already exists to grow new organs. While the entire brain couldn't be replaced this way, the technology could be used to repair the small amounts of damage and degradation that occur naturally. With that, the brain would never die of old age.

I do partially agree with Toujours, though. We'll probably see cyborgs before we see human-like robots, although depending on just how much of their body a person decides to replace with robotics/cybernetics, they could technically already be considered one. (I can't remember the exact episode, but an episode of Outlaw Star mentioned something like this.)
     
I am actually strongly opposed to the theory known as "The Singularity". This theory states that humans will completely transcend their human bodies and exist only as AI, and that the entire planet will become one huge computer. The theory starts with machines becoming stronger than humans; it then states that humans will combine themselves with machines until they become simply AI.

    Were this to happen, it would violate my religious beliefs, destroy all life as we know it, and create a dystopia. I am not ashamed to admit that I would rather die than become a machine. I would rather die as a human, practicing my traditional religion of Christianity, on Earth, than sacrifice my life, religion, and creativity to live in a digital world.

    Plus, this theory is so crackpot that it almost definitely won't happen.
     
    Religion isn't based upon the physical, but the supernatural.

    Your physical body is meaningless. All that matters is your soul.

    Cybernetics change the physical, the body.

As for religion, and how this could intrude upon 'God's' territory: we've already done that. Medical technology has given us the ability to choose who lives and who dies. (You know one of Christianity's main tenets, "The Lord giveth and the Lord taketh away"? Medical technology has given us the ability to kick 'God' in the balls and take back what He has tried to taketh away.)
     
I am actually strongly opposed to the theory known as "The Singularity". This theory states that humans will completely transcend their human bodies and exist only as AI, and that the entire planet will become one huge computer. The theory starts with machines becoming stronger than humans; it then states that humans will combine themselves with machines until they become simply AI.

    Were this to happen, it would violate my religious beliefs, destroy all life as we know it, and create a dystopia. I am not ashamed to admit that I would rather die than become a machine. I would rather die as a human, practicing my traditional religion of Christianity, on Earth, than sacrifice my life, religion, and creativity to live in a digital world.

    Plus, this theory is so crackpot that it almost definitely won't happen.
We practically are an AI in an organic body. I myself am Catholic, though I don't follow all of its creeds and teachings, but I do believe humans and machines will intertwine sometime in the future. And even if we do reach that point in this lifetime, it won't go against religious beliefs; God left us to our own devices.
Also, humans are dystopian by nature. Machines don't judge or hate... yet.


Related, and so it begins: https://www.gizmodo.com.au/2012/07/graphene-miracle-5347-it-can-repair-itself-entirely-unassisted/ T-1000, anyone? Graphene can also turn ocean water into clean drinking water.
     
Computers with "free will" and the ability to "think" are so far into the realm of science fiction that I don't think any of us will ever see anything like it in our lifetimes. Sure, there are some amazingly advanced machines and robots out there, but they can generally only do one or two things really well. Imagine what you'd need to make a robot that could order a cup of coffee. It would need to be able to stand and walk with people buzzing all around. It would need to be able to see a menu and interpret what it saw, even if the handwriting wasn't very good or there was bad lighting or other visual distraction. It would need to be able to interact with a person and make an exchange of money, not only understanding what the person was saying but also interpreting it to make sense within a context. You might be able to program it to know what certain phrases mean ("That'll be $3.50."), but what if anything outside of its experience happened? Say, someone bumps into it and drops money all over the place. There's just too much to take into account.

    tl;dr we're never going to see "thinking" robots

But if we did, and they could think and feel for themselves, then they would deserve rights like anyone else. The ethics of creating something like this? Do we talk about the ethics of people having children, or just say that it's their right? I don't see why there'd be ethical problems in creating some kind of "living" machine.

I agree. I don't think we'll be able to create true AI within our lifetimes; most of the "AI" we have today is just algorithms.
Though if they ever do become sentient, they may turn out like HAL from 2001: A Space Odyssey.
     
    Were this to happen, it would violate my religious beliefs, destroy all life as we know it, and create a dystopia. I am not ashamed to admit that I would rather die than become a machine. I would rather die as a human, practicing my traditional religion of Christianity, on Earth, than sacrifice my life, religion, and creativity to live in a digital world.

    I think if you were literally staring death in the face, you'd second guess your opinion a little bit.
     
I don't think that we will ever be able to build an android/reploid/whatever, though. We could probably build robots that are near-human in emotion, but a purely mechanical creation will never be able to perfectly emulate human emotions.
Disagree, since that thesis requires a proper definition of "feeling". The only feelings I have really felt so far are excitement and pain; everything else is just in your head. By that standard, a robot could feel everything (with excitement and pain simulated through electric shocks or something).
Oh, and I remember there IS a robot with feelings. It's an... ass. No really, an ass, as in the thing our food comes out of about 24 hours after we eat it.

    Were this to happen, it would violate my religious beliefs, destroy all life as we know it, and create a dystopia. [...] Plus, this theory is so crackpot that it almost definitely won't happen.
Although I share your opinions on the religious part, I'm a man of possibilities, and I'll just say that a few thousand years ago everyone "knew" it was impossible for the Earth to be round. And no, that is no different from this story, because like them, we have no way to assign a probability to the scenario happening. Also, there is no reason to think robots would be unable to develop their own beliefs.

Also, humans are dystopian by nature.
    A very interesting thing to say.


Well then, first off, for some amusement, have this. After seeing that we are not having the apocalypse today, we can relax a little and think this through. The one who thought about ethics in the age of robots was Isaac Asimov. His stories are very interesting to read and I can only recommend them. We should also start following his suggestions.
AIs are at their very beginning, and I have a rough idea of how they work, since I have started working on the topic myself.

It remains to be proven whether Turing completeness is the highest form of computability. If it is, there is no way to solve the halting problem, meaning no machine can always be sure whether another machine is stuck in an infinite loop, especially not itself. That means that with AIs we will always have the possibility of exploiting a bug. Good for us.
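
For anyone curious, the classic impossibility argument fits in a few lines of Python. This is a minimal sketch assuming a hypothetical oracle halts() existed (it can't actually be written, which is the whole point); all the names here are mine, purely for illustration:

```python
# A minimal sketch of Turing's diagonal argument. halts() is a hypothetical
# oracle; no such function can actually be implemented, which is the point.
def halts(program, argument):
    """Pretend oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("provably impossible in general")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on itself.
    if halts(program, program):
        while True:   # oracle says "halts": loop forever
            pass
    return            # oracle says "loops": halt immediately

# Does paradox(paradox) halt? If halts(paradox, paradox) returned True,
# paradox would loop forever; if it returned False, paradox would halt.
# Either answer contradicts the oracle, so no total halts() can exist.
```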
Another problem is the P vs. NP question and its relatives. We can certainly describe one question or another with the help of languages (in the sense of Turing machines) and may even find an algorithm for it, but that doesn't help much if we (or the machines) need exponential time to run it. For example, if the machines wanted the shortest route through 28 cities they plan to conquer, it would take them years to find it, as long as P ≠ NP. On the other hand, if P = NP, we would make a huge jump in programming AIs, because they could effectively guess a correct solution to any easy-to-verify problem in polynomial time, like the one I just gave as an example.
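
To make the city example concrete, here is a naive brute-force sketch of the travelling salesman problem in Python. The coordinates are invented and the code is purely illustrative; it works fine for a handful of cities, but the number of tours to check grows factorially:

```python
# Brute-force travelling salesman, to make the "28 cities" point concrete.
# The coordinates below are made up; the point is the factorial search space.
import itertools
import math

cities = [(0, 0), (1, 5), (4, 2), (6, 6), (3, 7)]  # 5 toy cities

def tour_length(order):
    # Length of the closed tour visiting the cities in `order`.
    return sum(math.dist(cities[a], cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

best = min(itertools.permutations(range(len(cities))), key=tour_length)
print(best, round(tour_length(best), 2))

# 5 cities mean 120 tours to check; 28 cities mean 28! (about 3 * 10^29).
# But checking ONE proposed tour is cheap, and that is exactly what puts
# the decision version of the problem in NP.
```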

Damn, all the interesting stuff I run into comes down to P =?= NP, which in my opinion is neither provable nor unprovable.
     