Well, we all know that artificial intelligence is going to be seriously integrated into our lives in the near future. We currently have weak AI systems such as IBM's Watson, and one commonly predicted date for the first strong AI machine is 2025, but that really is all speculation.
Big names such as Elon Musk at Tesla say that the development of AI could prove more dangerous than an all-out nuclear war, yet companies such as Google are investing billions into AI research. AI and the technological singularity are ideas that could become reality within our lifetimes!
The most commonly cited theory of how this would happen is the "intelligence explosion": machines become capable of altering their own source code, and each machine builds a successor that is stronger than its predecessor, over and over, until the only things holding a machine back from getting any stronger are the laws of physics. This could have serious implications, since such an AI would have no inherent reason to spare human life and could do as it pleases. As bizarre as this sounds, it is a concept many researchers take seriously.
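Just to make the idea concrete (this is only a toy loop with made-up numbers, not a claim about how a real AI would actually work), the "explosion" is basically compound growth that only stops when it hits a hard limit:

    # Toy sketch, NOT a real AI: each "generation" designs a successor
    # slightly smarter than itself, until a hypothetical physical ceiling
    # stops further gains. All numbers are invented for illustration only.

    PHYSICAL_LIMIT = 1000.0   # assumed hard cap imposed by physics
    IMPROVEMENT_RATE = 1.5    # assumed: each machine builds one 50% "smarter"

    intelligence = 1.0        # generation 0: roughly human-level, by assumption
    generation = 0

    while intelligence * IMPROVEMENT_RATE <= PHYSICAL_LIMIT:
        intelligence *= IMPROVEMENT_RATE   # successor improves on the design
        generation += 1
        print(f"generation {generation}: intelligence {intelligence:.1f}")

    print(f"growth stops at generation {generation}: physical limit reached")

The point of the toy example is just that the growth is exponential, so once it starts it reaches the ceiling in surprisingly few generations.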
What are your thoughts on this? Do you think one day an AI will look back at humankind and say, "I'm here now"?
PS - I just wanted to generate some debate/discussion on this topic because I'm leaving for university soon to study computer science with artificial intelligence. Any thoughts would be awesome!