Talk:Restless Souls/Technology: Difference between revisions

First of all, technology is dual use. Humans decide how to use a tool. The greatest threat to humans is humans.


There are a number of "unfixable" problems with AI, but we can manage them if the nation state has the mightiest AI(s) and provides selected services for its citizens. '''The assortment must be so ridiculously powerful and attractive that criminal individuals have neither much interest in nor good success rates at creating their own AIs and prevailing against the "overpowered" state AIs.'''


[...]
===We cannot wait===
Climate change. Population ageing. '''Enemies will not sleep.''' (Think ahead: "Team Jorge", "Vulcan Files", etc.) (Defence of social media and cyberspace in general.)
The race is on. Pausing or stopping AI research is not possible. Like nuclear weapons, AI will spread. Research should continue at all times. Which services should be given public access is a different question.


[...]
===The sci-fi singularity will not happen===
Moore's law has ended, and serious quantum computing is decades away. A fast take-off cannot happen: an AI cannot improve itself fast enough to become uncontrollable.
===Human fears shall not contaminate===
The remaining risks originate from human projections.
===Censorship===
In the sense of removing fake news and the like.
Defending social media against hostilely used AIs and bots can only be done with AIs of one's own.
The option of not using AI does not exist.
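A defensive AI for social media would start from machine-readable signals of automated hostile activity. The following is only a toy heuristic of my own (the document names no method): flag accounts whose posting rate or message repetition is implausibly high. All thresholds and names here are illustrative assumptions.

```python
# Toy bot-likeness heuristic (illustrative assumption, not the
# document's method): high posting rate or heavy message repetition
# marks an account as suspicious.
from collections import Counter


def looks_like_bot(timestamps: list[float], messages: list[str],
                   max_rate: float = 1.0, max_repeat: float = 0.5) -> bool:
    """Flag an account if it posts faster than max_rate posts/second,
    or if more than max_repeat of its messages are duplicates."""
    if len(timestamps) < 2:
        return False
    span = timestamps[-1] - timestamps[0]
    rate = len(timestamps) / span if span > 0 else float("inf")
    most_common_count = Counter(messages).most_common(1)[0][1]
    repeat_ratio = most_common_count / len(messages)
    return rate > max_rate or repeat_ratio > max_repeat
```

A real defence would feed many such signals into a learned model; this sketch only shows the shape of the input data such a system consumes.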
===Self-censorship===
For obvious reasons, this will be kept in the bigger AIs.
===The intelligence exploit===
Miniaturization is inevitable, and uncensored AIs can be made from censored versions. Therefore "self-made" AIs cannot be prevented, only limited…
This will follow the pattern of cat and mouse. But ultimately, any censored input can be bypassed by formulating subproblems.
There will always be a way to bypass security. As always: absolute security is an illusion.
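The subproblem bypass can be illustrated with a deliberately naive filter. The blocklist, phrases, and function below are all hypothetical; the point is only that a filter which judges whole requests lets each innocuous-looking fragment through.

```python
# Minimal sketch of why whole-request filtering leaks: the combined
# request is blocked, but its decomposed subproblems each pass.
# BLOCKLIST and the example phrases are hypothetical.

BLOCKLIST = {"assemble the whole device"}


def filter_passes(request: str) -> bool:
    """Naive censor: reject only requests that match a blocklisted phrase."""
    return request.lower() not in BLOCKLIST


full_request = "assemble the whole device"
subproblems = ["what does part a do", "how are parts joined in general"]

blocked = not filter_passes(full_request)          # the whole is caught
leaked = all(filter_passes(s) for s in subproblems)  # the parts are not
```

This is the cat-and-mouse pattern the text describes: each filter update only forces a finer decomposition.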
===Neutrality of state AIs is a must-have===
====Code control====
Code hosting platforms like GitHub need to be scanned, including private repos. When bugs are found, forks with security updates are made.
Any unchecked code is a potential threat, a target for hostile AIs that search for weaknesses.
Code control by state AIs will be inevitable.
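As a rough sketch of what such scanning means in practice, here is a minimal pattern scanner over a local checkout. The pattern list is an assumed example; a real scanner would use AST analysis and a vulnerability database rather than bare substrings.

```python
# Minimal repo-scanning sketch (assumed patterns, substring matching
# only): walk a checkout and report files containing risky calls.
from pathlib import Path

RISKY_PATTERNS = ["eval(", "os.system(", "pickle.loads("]  # assumed examples


def scan_file(path: Path) -> list[str]:
    """Return the risky patterns found in one source file."""
    text = path.read_text(errors="ignore")
    return [p for p in RISKY_PATTERNS if p in text]


def scan_repo(root: Path) -> dict[str, list[str]]:
    """Map each .py file under root to the risky patterns it contains."""
    findings = {}
    for path in root.rglob("*.py"):
        hits = scan_file(path)
        if hits:
            findings[str(path)] = hits
    return findings
```

Running `scan_repo(Path("some/checkout"))` yields a findings map that a review or patching pipeline could consume.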


==Discussion==