I hear a lot of extremely strong claims made about conceptual issues in machine learning – such as the prospect of a general artificial intelligence – and ethical questions – such as the existential risk posed by AI. It’s hard not to be concerned when somebody like Stephen Hawking thinks the thing you spend your days working on will likely destroy humanity.
But I haven’t found as much argument for these claims as the strength of their conclusions warrants. The absence of solid arguments means that those on either side of the debate end up talking past each other. I want to use this blog to understand and clarify for myself some of the philosophical issues in AI, such as:
* Does AI pose an existential risk?
* Is AGI possible?
* Could an AI be conscious?