argument top image

Will artificial intelligence lead to the end of humankind?
Back to question

Humankind will get in the way of an AI’s goals

A popular example is called the paperclip maximizer hypothesis, which was popularized by AI thinker Nick Bostrom. Imagine we gave an ASI (Artificial Super Intelligence) the simple task of maximizing paper clips...
< (2 of 2) Next argument >

The Argument

An ASI is super intelligent; it can think, create, and do things many humans can’t even comprehend.[1] Carbon is one of the most abundant elements in our galaxy; it’s a fundamental building block for nearly everything, including humans and paperclips. ASI, in theory, would create a method of paperclip production by pulling carbon directly from the atmosphere into its paperclip machine. Because its goal is to maximize the amount of paper clips available, there is no set limit for production. Using exponential gains in production efficiency, the machine will quickly use all available natural resources of the planet, including all of the carbon atoms contained in all of the human bodies in the world, and would theoretically begin to consume the cosmos in an endless quest to make paper clips.[2] Alternatively, because the AI’s goal is to create paper clips, anything that prevents it from achieving this goal is a risk factor to be mitigated. Because ASI’s run on machines, and as a result run on electricity, the loss of power is a threat to its goal. Because humans can turn the power off, humans are now a threat to its goal and should be eliminated if the ASI is to continue pursuing its goal. This instance is if humans develop AI to a point of no return. The goal of AI is for it to be smarter than humankind, and ultimately improve our way of life, but if it becomes too intelligent, it will overthrow any threat that jeopardizes the goal they were programmed to accomplish.

Counter arguments

Premises

[P1] AI is smart enough to detect enemies and then try to eliminate them.

Rejecting the premises

References

  1. https://www.pega.com/empathetic-ai?&utm_source=google&utm_medium=cpc&utm_campaign=Global_NonBrand_Broad&utm_term=%2Bai&gloc=9032952&utm_content=pcrid%7c398290046123%7cpkw%7ckwd-20061192138%7cpmt%7cb%7cpdv%7cc%7c&gclid=CjwKCAjwvOHzBRBoEiwA48i6AlKxNl-5CXAGN5Xmlwfe32SdVU0vADaNZ5-pWQjiG3Mw2RmMboJrFRoCFosQAvD_BwE&gclsrc=aw.ds
  2. https://wiki.lesswrong.com/wiki/Paperclip_maximizer
This page was last edited on Tuesday, 24 Mar 2020 at 11:08 UTC

Explore related arguments