===Estimation of risks===
What makes evaluating the big warnings about AI problematic is not just the complexity of the topic but also that different interests are mixed up here:
* Big-company founders like Musk, who are afraid of being left behind, apparently exploit the fears of others to push such warnings and take advantage of them
* Pessimists and [https://www.spiegel.de/wissenschaft/kuenstliche-intelligenz-die-rueckkehr-des-wunderglaubens-kolumne-a-d53eb350-b5b5-4888-9bf8-8fc510d018b8 ideologues/followers] of [https://www.spiegel.de/wissenschaft/longtermism-was-ist-das-rettung-oder-gefahr-kolumne-a-983e60ba-6265-40a8-8c65-8f2668e4e9ff longtermism]
* Real residual risks and foreseeable negative developments (fake news, weapons), societal control
Warnings that received much publicity:
* https://futureoflife.org/open-letter/pause-giant-ai-experiments/
* https://www.safe.ai/statement-on-ai-risk
: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
:: While this is something many can easily agree on, for good reasons, at the same time it can be read as a good portion of [[wp:Alarmism|alarmism]].


The '''irrational fears of the tech scene about AI''' are in large part driven by Nick Bostrom -- just as the '''[[Oni2_talk:Beyond_Dragons#Is_alien_life_hostile.3F|irrational fears of the tech scene about killer aliens]]''' were in large part driven by Stephen Hawking. (RIP)