Should You Worry About Artificial General Intelligence?
Originally published on LinkedIn on August 24, 2022
Lost in the recent media hype about Blake Lemoine and Google’s LaMDA (Washington Post article here if you actually missed this) is the underlying issue that matters regardless of any notion of “sentience” in machine intelligence – that is, the issue of AI safety.
A few well-regarded thinkers have been working in this domain (not Elon Musk or Bill Gates), and a few useful books have been published about the issues – but the field is moving quickly, and if you want to be up to speed on the latest, I recommend the website LessWrong and in particular the writing of MIRI researcher Eliezer Yudkowsky and the many responses to his thinking. A starting point might be “AGI Ruin: A List of Lethalities.”
After reading through this and the detailed and thoughtful responses (I recommend Paul Christiano and Zvi, but there are many good ones on the LessWrong website exploring every aspect), I found myself wondering two things. First, as important an existential issue as this is for the human race, why aren’t more people aware, engaged, discussing, and solving for these issues? And second, is there much that any of us can practically do, or is this another topic, like Russia attacking Ukraine or the implementation of Covid protocols (just to take two recent examples), in which we are all just observers with no world-changing action possible by us as individuals? Of course, the answer to question 2 might in fact inform the answer to question 1… So then I began to consider that there is actually a series of questions.
And so I offer a decision tree for you to consider the primary question: “Should you worry about Artificial General Intelligence?” (For readers who prefer code, a small sketch of the same flow follows the questions below.)
- Do you believe that AGI has the potential to become an existential threat to humanity? (if “NO” stop here… if “YES” continue…)
- Could AGI develop into an existential threat in the “near term” – let’s call that within a few decades, since if you are reading this, you can reasonably expect to still be alive then, which makes this a personally meaningful topic. You could extend the horizon if you are a parent wondering about your children and grandchildren, or extend it further if you feel compassion for the rest of the human race… (if “NO” stop here… if “YES” continue…)
- Is it possible for humans to do anything either to make it less likely that AGI would develop into an existential threat, or to provide a mechanism for stopping an AGI that had already become one? (if “NO” stop here… if “YES” continue…)
- Can you do something (donate, write, think, contribute in any way) to make it less likely that AGI would develop into an existential threat or succeed as one? (if “NO” stop here… if “YES” continue…)
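Here is the promised sketch of how the four gates chain together, written as a minimal Python script. It is purely illustrative: the questions are paraphrased from the list above, and `ask_yes_no` and `should_you_worry` are hypothetical helpers invented for this example, not part of any real library.

```python
# Purely illustrative sketch of the four-question decision tree above.
# The questions are paraphrased from the post; ask_yes_no is a hypothetical helper.

def ask_yes_no(question: str) -> bool:
    """Prompt on the command line until the reader answers yes or no."""
    while True:
        answer = input(f"{question} [yes/no] ").strip().lower()
        if answer in ("yes", "y"):
            return True
        if answer in ("no", "n"):
            return False

QUESTIONS = [
    "Do you believe that AGI has the potential to become an existential threat to humanity?",
    "Could AGI develop into an existential threat in the near term (within a few decades)?",
    "Is it possible for humans to make that outcome less likely, or to stop an AGI that had become a threat?",
    "Can you personally do something (donate, write, think, contribute in any way) to help?",
]

def should_you_worry() -> None:
    # Walk the questions in order; the first "no" ends the conversation.
    for question in QUESTIONS:
        if not ask_yes_no(question):
            print("Stop here - go back to whatever else you were doing :-)")
            return
    print("All four answered YES: time to get engaged and support work on AI safety.")

if __name__ == "__main__":
    should_you_worry()
```

Running `should_you_worry()` simply asks each question in turn and stops at the first “no,” which mirrors the “stop here / continue” instruction attached to each bullet above.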
If you answered “YES” to all four questions, here is a great organization to support, and I hope you’ll become more engaged in educating yourself and others on this topic… If not, well – go back to whatever else you were doing :-)