Warnings that had much publicity:
* https://futureoflife.org/open-letter/pause-giant-ai-experiments/
* https://www.safe.ai/work/statement-on-ai-risk
: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
:: While this is something many can easily agree on - for good reasons - it can at the same time be read with a good portion of [[wp:Alarmism|alarmism]].