Describes new threats posed by the introduction of ML at the interfaces between people and nuclear weapons systems (and related systems), and proposes policy responses.
Reviews different methods to explore AI futures, including fiction, single-discipline studies, several multidisciplinary approaches, and interactive methods.
Our interview features an excellent and mostly grounded exploration of how artificial intelligence could become a threat as a result of the cybersecurity arms race.
On this month’s podcast, Ariel spoke with FLI co-founder Victoria Krakovna and Shahar Avin from the Centre for the Study of Existential Risk (CSER). They discuss CSER’s recent report on forecasting, preventing, and mitigating the malicious uses of AI, along with the many efforts to ensure safe and beneficial AI.