What I still don't get about AI x-risk
Some thoughts on the apparent danger of ASI
AI existential risk (or 'x-risk') concerns the prospect of AI causing the extinction of humanity.1 This is the outcome that some predict could occur if we somehow manage to create artificial superintelligence (ASI), that is, machine intelligence superior to that of humans.
In Human Compatible: AI and the Problem of Control, Stuart Russell explains the …