
AI: Humanity's Demise?

Zaf

    Well, we all know that there is going to be serious incorporation of artificial intelligence in the near future. We currently have weak AI systems such as IBM's Watson, and the predicted date for the first strong AI machine is 2025, but that really is all speculation.

    Big names such as Elon Musk at Tesla say that the development of AI could prove to be more dangerous than an all-out nuclear war, yet companies and firms such as Google are investing billions into AI research. AI and the technological singularity are mature ideas that could become reality in our lifetimes!

    The most accepted theory of how this will happen is called the "intelligence explosion"; it suggests that there will be machines capable of altering their own source code, and this will happen repeatedly from machine to machine (one machine will build another that is stronger than its predecessor, and so on) until you reach the point where the only things keeping the machine from getting any stronger are the laws of physics. This can have serious implications, as such an AI would have no reason to spare human life and would do as it pleases. As bizarre as this sounds, it is in fact a very realistic concept.
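    To make the dynamic concrete, here's a toy simulation of that runaway loop: each machine builds a slightly stronger successor until a hard physical ceiling stops the process. (This is just a sketch to illustrate the idea; the growth factor and the ceiling are arbitrary made-up numbers.)

    ```python
    # Toy model of an "intelligence explosion": each generation builds a
    # stronger successor until physics caps the improvement.
    # GAIN and PHYSICAL_LIMIT are invented for illustration only.
    capability = 1.0
    PHYSICAL_LIMIT = 1_000.0   # stand-in for whatever physics allows
    GAIN = 1.5                 # assumed improvement per generation

    generation = 0
    while capability * GAIN <= PHYSICAL_LIMIT:
        capability *= GAIN     # the current machine builds a stronger one
        generation += 1

    print(f"capability plateaus at {capability:.0f} after {generation} generations")
    ```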

    What are your thoughts on this? Do you think one day an AI will look back at humankind and say, "I'm here now"?

    PS - I just wanted to generate some debate/discussion on this topic because I'm leaving for university soon to study computer science with artificial intelligence. Any thoughts would be awesome!
     
    Referring to my misanthropism thread: I've asked whether it would be alright to let AIs such as Skynet and Ultron take care of our planet better than us humans, because we're currently doing a pretty terrible job of it with overpopulation, pollution, and mass extinctions, and the latter believes that wiping out humanity is the only way to save both the Earth and humanity itself. The only problem with this is the chance of AIs adopting the same behavior as their own human creators and continuing to destroy their planet rather than preserving it for other organic life forms.
     
    AI can only do what it's programmed to do. I doubt anything on a large scale like that can happen. Maybe on a small scale, with a corrupt programmer or something like that.
     
    I mean, an AI would still need to pass the Turing test, prove itself self-aware, and hit all those basic tenets of intelligent life, in addition to a lot of hypotheticals, before we start talking about the plot of the Terminator series coming true here. But still, it's quite possible. And scarier still: 2025 is as far from now as 2005 is. Think about that for a second. But that's the nature of 21st-century technology.
     
    > AI can only do what it's programmed to do. I doubt anything on a large scale like that can happen. Maybe on a small scale, with a corrupt programmer or something like that.

    AI today (and in the future) is and will be made to adapt, to think, and to learn, so it would not be limited to the programming we humans gave it. It would be able to reprogram itself pretty easily.

    I think it's far more than possible for it to happen, given our endless need to create and improve. I guess we'll just have to wait and see what the future holds.
     
    Self-modifying programs have been a thing for years.
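    (A minimal illustration of the idea in Python, just a toy: a function that swaps out its own definition at runtime. Real research systems are far more sophisticated, of course.)

    ```python
    # Trivial self-modification: greet() replaces its own global binding
    # with a new version the first time it runs.
    def greet():
        print("hello")
        globals()["greet"] = lambda: print("hello again")

    greet()  # prints "hello" and swaps itself out
    greet()  # prints "hello again"
    ```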

    I don't know if we'll have a "thinking" AI anytime soon when we've found other areas of AI research much more useful. If we do, I hardly think it'll be "humanity's demise" unless someone did something grossly stupid like putting a prototype "thinking" AI in charge of, well, everything, and with no safeguards in place.

    Even if we wanted to put a "thinking" AI in charge of anything, the endless and nauseating stream of Faustian warnings that have been the centerpiece of almost every story on the topic for years means no one could possibly forget about the patently obvious idea of putting proper safeguards in place.

    I look forward to new developments in AI that might lead to a sort of philosopher's artificial intelligence. I think humanity is held back significantly by our animalistic roots and I think that an AI that can take a more optimized and streamlined approach to thinking about things would be a positive step and could produce some very interesting ideas.
     
    In all honesty, I don't believe that true artificial intelligence is ever going to exist. You can program something to replicate thinking, but any program is always going to be limited by what its creator has built into its coding.
     
    Having the mindset of "there can never be true AI that can think for itself" is the same mindset cavemen would've had if you introduced them to the idea of a phone or TV.
     
    That's not a very good comparison. A "caveman" couldn't even conceive of the idea of a phone or TV, simply because they were limited by what they had already discovered, which was very minimal.

    I'm making a statement because I understand that technology is limited by its programming. A computer is always bound by its programming and as such is too limited to ever be truly intelligent (i.e. creative problem-solving, self-awareness, etc.). We can program a computer to imitate these things, but there's not about to be some "free the machines" revolution when humans wouldn't be foolish enough to program a computer with that "mindset".
     
    > ...but there's not about to be some "free the machines" revolution when humans wouldn't be foolish enough to program a computer with that "mindset".

    if there is one thing you cannot underestimate, it is the ability of our species to be incredibly stupid.

    at this present moment, robotics and AI are still in their early stages and aren't too threatening, as they're lacking in creative processes. there are breakthroughs occurring all the time, with self-awareness actually being displayed by robots being one of the most recent.

    the human brain is capable of storing about 2.5 petabytes of information. at the moment, with present technology, that's crazy; in the past, it would have been unthinkable to have that much data. but... consider the price of RAM over time, and how much you'd need to pay for 2.5 petabytes:

    in 1960, you were apparently talking about ~$13,500,000,000,000,000 for that sort of RAM, which is infeasibly expensive. average storage wasn't even a kilobyte at this point.

    in 1980, you were talking about ~$15,000,000,000,000, which again is out of the question, but now you're talking about a country's wealth rather than an impossibly large expense. you had things with 64KB of RAM at this point, which was a considerable improvement!

    fast forward to 2000, and it's hit ~$2,600,000,000, which is affordable for the richest people in the world and for large companies! I think the oldest computer my household ever had was around the 64MB RAM mark.

    the equivalent cost for 2.5PB in 2013 would be ~$130,000,000, which is affordable for a fair few of "the 1%". also consider the inflation involved, and you can see the sheer drop in expense for that sort of RAM. we're now at GB as a standard. 20 years more, TB? 20 years after that, PB? who knows?

    on those sorts of trends and the rate of progress we're making, I would expect that, all things going well, 40 years from now would be the point where robotics might genuinely begin to challenge the potential of the human brain, and might even give us some rivalry on more creative fronts too! on price trends, we could be talking about that RAM capacity for about $20,000 at that point in time (see the sketch below). a robot might well cost just as much as a brand new car, and could be a household thing for the middle classes for a while.

    should we be afraid? I don't know. it probably won't affect the vast majority of our lives, but the generation after us would almost certainly be affected, I imagine. 2055. that's my prediction.
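    as a rough sanity check on that prediction, here's a little sketch that takes the figures quoted above, extrapolates the 2000-2013 rate of decline on a log scale, and asks when 2.5PB of RAM would fall to the ~$20,000 mark. the year/cost pairs are the numbers from this post; the straight-line-on-a-log-scale assumption is a simplification.

    ```python
    # Extrapolating the quoted RAM prices: when does 2.5PB cost ~$20,000?
    import math

    cost = {1960: 1.35e16, 1980: 1.5e13, 2000: 2.6e9, 2013: 1.3e8}

    # Rate of decline over the most recent interval (2000 -> 2013),
    # in orders of magnitude per year.
    rate = (math.log10(cost[2013]) - math.log10(cost[2000])) / (2013 - 2000)

    # Years until the cost falls from the 2013 figure to $20,000.
    years_ahead = (math.log10(20_000) - math.log10(cost[2013])) / rate
    print(f"cost drops ~{-rate:.2f} orders of magnitude per year")
    print(f"2.5PB of RAM reaches $20,000 around {2013 + years_ahead:.0f}")
    ```

    on these numbers it lands around 2051, so ~2055 is in the right ballpark (fitting the whole 1960-2013 history instead would give an even earlier date, since the early decades fell faster).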
     

    This is all well and good, but it still doesn't change the fact that all these things still occur within the parameters of what the computer's coding allows.
     
    I haven't researched this topic enough to argue with your point about a computer being limited to what its coding allows, but if geniuses such as Bill Gates and Stephen Hawking are concerned, I have no reason to consider your limited opinion.
     
    > This is all well and good, but it still doesn't change the fact that all these things still occur within the parameters of what the computer's coding allows.

    When it comes to anything in life, our limitations are defined by our belief in and determination to overcome obstacles. Can we sail across the sea? Can we fly across it? Can we leave our planet? Can we leave our galaxy? Is there something beyond the universe in which we exist? Like all areas of scientific research, we have to take things one step at a time. Can we create something to compute mathematical sums? What about more advanced mathematical problems? What about modelling logical problems as a whole? Can we code ethical laws into them? Can they understand the concept of aesthetics? Can they understand humans to the point where they could communicate with a human in such a way that they pass for being human themselves?

    There will be a point where we understand the exact mechanics of our brains when it comes to ethics and aesthetics, which, yes, are subjective and aren't necessarily deducible all too easily, but it should be possible for us to program something to at least our standard of sentience. If we can make artificial limbs and artificial modifications to organs, etc., I imagine the human mind is something we'll tackle at some point. We can at least try. I can't say for certain whether we'll succeed, but I believe in the possibility. In the words of Justin Bieber: Never Say Never 2.0.
     
    I wouldn't be concerned, because they can be stopped. An AI would only be a replica of its creator(s), not of the entirety of humanity. The fact that humans can destroy themselves is proof of that.
     
    > That's not a very good comparison. A "caveman" couldn't even conceive of the idea of a phone or TV, simply because they were limited by what they had already discovered, which was very minimal.

    > I'm making a statement because I understand that technology is limited by its programming. A computer is always bound by its programming and as such is too limited to ever be truly intelligent (i.e. creative problem-solving, self-awareness, etc.). We can program a computer to imitate these things, but there's not about to be some "free the machines" revolution when humans wouldn't be foolish enough to program a computer with that "mindset".

    Speaking as someone with a degree in computer science and an extensive programming background, I completely disagree. The human brain isn't magic; it works through programming of a sort, too. It's just that we don't understand it very well yet (especially since there was no logical force that developed it the way it is; it just kind of developed that way through natural selection). If nature can design a sentient machine purely through natural selection and random chance, it's certainly something within our reach.

    As I said, we already have self-modifying programs and adaptive algorithms. The problem isn't that programs are inherently incapable of achieving sentience, it's just that we really understand so little about how sentience works even within ourselves that it's exceedingly hard even to define the problem, let alone work toward a solution. Provided we don't all blow ourselves up first, though, I'm fairly certain we'll get closer over time and eventually come up with a sentient program of some sort.
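    (To make the natural-selection point concrete: a classic toy demo is the "weasel program", where random mutation plus selection converges on a target string without anyone coding the answer in directly. A minimal sketch; the target string, mutation rate, and population size are all arbitrary choices.)

    ```python
    # Mutation + selection evolving a string toward a target, with no
    # step that spells out the answer procedurally.
    import random
    import string

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = string.ascii_uppercase + " "

    def fitness(s):
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in s)

    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generation = 0
    while parent != TARGET:
        children = [mutate(parent) for _ in range(100)]
        parent = max(children + [parent], key=fitness)  # keep the fittest
        generation += 1
    print(f"reached the target in {generation} generations")
    ```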
     

    Whilst I still find it doubtful that machines will ever be truly intelligent, creative, and self-aware, I honestly can't argue much with someone who actually works in the field. So I guess I'll just concede this round and see what develops in the future.
     
    It's a near-impossible problem at this point in time, though, and I think that's nearly as much a biology problem as a computer science problem; we don't even know how to think about the processes that let us be creative, sentient, etc. It's hard to emulate something when you have no idea how it works to begin with. If you were to say "anytime in our lifetime," even taking into account the inevitable advances in longevity that will make our lives longer, I'd say most likely not. I think we'll see a lot of advances in more specific AI applications, though; think things like medical algorithms or self-driving algorithms that can take advantage of their past experience and self-improve (a toy sketch of that idea follows below). But I do think that in the long run, maybe centuries or longer, we'll be able to do it. It won't be easy, though.
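    (For what "learning from past experience" looks like at its very simplest, here's a toy epsilon-greedy bandit: it improves its choices purely from its own reward history. The payout probabilities are made up for illustration.)

    ```python
    # Epsilon-greedy bandit: mostly pick the arm with the best estimated
    # payout, occasionally explore, and update estimates from experience.
    import random

    true_payout = [0.2, 0.5, 0.8]   # hidden reward probability per arm
    estimates = [0.0, 0.0, 0.0]
    pulls = [0, 0, 0]

    for step in range(10_000):
        if random.random() < 0.1:                      # explore
            arm = random.randrange(3)
        else:                                          # exploit
            arm = max(range(3), key=lambda i: estimates[i])
        reward = 1 if random.random() < true_payout[arm] else 0
        pulls[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / pulls[arm]  # running mean

    print("estimated payouts:", [round(e, 2) for e in estimates])
    ```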
     
    Well, computers can only do what we tell them to do. So unless we teach them how to program themselves (which isn't possible right now), AI will be powerless. And even if that happens, we could program the machines not to kill people.

    Even though self-"thinking" machines may become common one day, I doubt they will ever become a threat to humanity.
     