I'm amazed no one has linked the game version - https://www.decisionproblem.com/paperclips/index2.html
Charlie Stross gives a great talk about Slow AI in which he argues that you don’t actually need a computer to build a paperclip optimiser, and that money is already a great paperclip.
Money is the paperclip.
Of course.
You don't need any "weird" goal like paperclips. You just need the basic goals of survival and expansion that every species possesses (implicitly) to understand why a superintelligence is a danger.
Instrumental convergence is final convergence (without the final-goal divergence that Bostrom assumes follows instrumental convergence).
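Since "instrumental convergence" keeps coming up, here's a minimal toy sketch of the idea (my own, purely illustrative; the goal names, payoffs, and compounding-expansion assumption are made up, this is nobody's actual model). Agents with completely different final goals plan over the same small world and all open with the same instrumental moves:

    # Toy illustration of instrumental convergence (assumptions mine).
    from dataclasses import dataclass

    @dataclass
    class World:
        resources: int = 2  # resources the agent currently controls

    FINAL_GOALS = ["paperclips", "money", "compute"]

    def plan(goal: str, steps: int = 5) -> list[str]:
        """Greedy planner: each step, pick the action that maximizes
        eventual output of `goal`'s product over the remaining horizon."""
        world = World()
        actions: list[str] = []
        while len(actions) < steps:
            remaining = steps - len(actions)
            # Assumed payoff: investing a resource yields two back, so
            # expansion compounds and beats producing while the horizon
            # is long; only near the end does direct production win.
            if remaining > 2 and world.resources > 0:
                world.resources += 1
                actions.append("acquire resources")
            elif world.resources > 0:
                world.resources -= 1
                actions.append(f"make {goal}")
            else:
                actions.append("idle")
        return actions

    for goal in FINAL_GOALS:
        print(goal, "->", plan(goal))
    # Every goal produces the same opening moves: acquire, acquire, acquire...

Whatever the final goal is, the opening of every plan is resource acquisition, which is the whole point of the survival-and-expansion comment above.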
>Are we not madly harvesting the world’s resources in a monomaniacal attempt to optimize artificial intelligence?
No, we are not madly harvesting the world's resources in a monomaniacal attempt to optimize artificial intelligence. We are, however, harvesting the world's resources in an attempt to optimize artificial intelligence.
Honestly, after thinking about it, I guess it will be