I don't think that we will ever be able to build an android/reploid/whatever, though. We could probably build robots that are near-human in emotion, but a purely mechanical creation will never be able to perfectly emulate human emotions.
Disagree, since that thesis needs a correct definition of "feeling", and the only feelings I have really felt so far are excitement and pain; everything else is just in your head. From that, a robot could feel everything (with excitement and pain simulated through electric shocks or something).
Oh, and I remember there IS a robot with feelings. It's an... ass. No really, an
ass, as in the thing our food comes out of about 24 h after eating it.
Were this to happen, it would violate my religious beliefs, destroy all life as we know it, and create a dystopia. [...] Plus, this theory is so crackpot that it almost definitely won't happen.
Although I share your opinions on the religious part, I'm a man of possibilities, and I'm just saying that some thousand years ago people "knew" it was impossible for the earth to be round. No, that is no different from this story, because like them we have no way to assign a probability to the scenario happening. Also, there is no reason to assume robots would be unable to form their own beliefs.
Also humans are dystopian by nature.
A very interesting thing to say.
Well then, first off, for some amusement, have
this. Now that we've seen we are not having the apocalypse today, we can relax a little bit and think this through. The one who thought about ethics in the age of robots was Isaac Asimov. His stories are very interesting to read and I can only recommend them to you.
We should also start following his suggestions.
AIs are at their very start, and I have a rough idea of how they work, since I have started working on the topic myself.
It is yet to be proven whether Turing-completeness is the highest form of computability. If it is, there is no way to solve the halting problem, meaning no machine can always tell whether another machine is stuck in an infinite loop, and especially not whether it itself is. That means with AIs we will always have the possibility of an exploitable bug, which is good for us.
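The undecidability argument above can be sketched in a few lines of Python (my own toy illustration, not from any library; the `halts` oracle is hypothetical by construction):

```python
def halts(program, arg):
    """Hypothetical oracle: returns True iff program(arg) terminates.
    The paradox below shows no such function can exist for all inputs."""
    raise NotImplementedError("no total halting decider can exist")

def paradox(program):
    """Does the opposite of whatever the oracle predicts about
    the program run on its own source."""
    if halts(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    else:
        return           # oracle said "loops" -> halt immediately

# paradox(paradox) would halt if and only if it doesn't halt --
# a contradiction, so `halts` cannot be implemented in general.
```

This is the classic diagonalization trick: any claimed halting decider can be fed a program built to contradict its own verdict.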
Another problem is the P-vs-NP question and similar ones: we can certainly describe one question or another with the help of languages (in the sense of Turing machines) and may find an algorithm for it, but that doesn't really help if we (or the machines) need exponential time to solve it. For example, if the machines wanted the best route to conquer 28 cities in the shortest time, it would take them years to find that best route, as long as P ≠ NP. On the other hand, if P = NP we would make a huge jump in programming AIs, because for an easy-to-verify problem like the one above they could just "guess" a correct solution and still finish in polynomial time.
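To make the asymmetry concrete, here is a small sketch (my own example, with a made-up 4-city distance matrix): checking one proposed round trip takes linear time, but finding the best one by brute force means trying every permutation, which blows up factorially — exactly the gap the P-vs-NP question is about.

```python
import itertools

def tour_length(dist, tour):
    """Length of a round trip visiting cities in `tour` order -- O(n)."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def verify(dist, tour, bound):
    """The easy direction: check a claimed tour in polynomial time."""
    return sorted(tour) == list(range(len(dist))) and tour_length(dist, tour) <= bound

def brute_force(dist):
    """The hard direction: try all (n-1)! tours starting from city 0."""
    n = len(dist)
    best = min(itertools.permutations(range(1, n)),
               key=lambda p: tour_length(dist, (0,) + p))
    return (0,) + best

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]

best = brute_force(dist)
print(tour_length(dist, best))                 # 18 for this matrix
print(verify(dist, list(best), 18))            # True: cheap to check
```

With 4 cities the search is instant; with 28 it is 27! ≈ 10^28 tours, which is why a guessed-and-verified answer (the P = NP scenario) would change everything.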
Damn, all the interesting stuff I run into breaks down to P =?= NP, which in my opinion is neither provable nor unprovable.