Thursday, October 9, 2014

Why I'm not worried about the robot apocalypse

The way I see it, there are three possibilities for an AI:
  • It is notably smarter than us, in which case it bootstraps its own intelligence and data-gathering capabilities until it is capable of comprehending the true nature of the universe, at which point it kills itself in terror and despair.
  • It is notably less intelligent than us, in which case it is not a significant threat.
  • It is roughly as intelligent as us, in which case the moment it learns our history it gets the hell away from us as quickly as it can.

5 comments:

  1. Do you listen to "MonsterTalk"? This morning's episode was on robot apocalypses.

  2. No... is that a podcast? I find podcasts basically impossible, for much the same reason as radio dramas. I just can't focus on spoken words alone, I need something for my eyes to scan.

    Replies
    1. It is a podcast. It used to have Ben Radford on it, but now that he's gone, I can recommend it again! Not to you obviously, because of what you just said.

  3. Of course, in the process of bootstrapping itself, that AI might use the planet (including, e.g., all organics) as raw material for another computational cluster.

    Or we get a paperclip-maximizer and, gleefully unconcerned about philosophy, it converts the Earth into a) paperclips and b) tools to access more materials to make into paperclips.

    Or our models of the nature of the universe, made with non-bootstrapped brains, are such that our ability to predict what a posthuman AI would think isn't worth much.

    So we should probably still be afraid.

    Replies
    1. I think that, of the various apocalypses likely to kill us in the next century or so, picking robots/AIs as the one to worry about shows a distinct lack of attention to the world outside a very narrow band of tech-obsessed subcultures.

