Talk:Restless Souls/Technology: Difference between revisions

Controversial discussion, data vs. logic, overlapping effects, twindemic (Spektrum link), mutual suppression of pathogens, natural skepticism of the scientific community, later rehabilitation, […]


==Machine learning and artificial intelligence==
* ANI = Artificial Narrow Intelligence (basically machine learning: pattern recognition)
* GPT = Generative Pre-trained Transformer (a large language model whose actual "learning" is de facto outsourced to humans via Reinforcement Learning from Human Feedback (RLHF)); in the best case, GPTs have a ''transplanted base intelligence'', but they lack the important ability to really learn for themselves
* AGI = Artificial General Intelligence (on par with human thinking, "a real AI", capable of fully self-improving and driving its own development)
* ASI = Artificial Super Intelligence (an AI that goes beyond the level of human intelligence)
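The RLHF step mentioned above boils down to training a reward model on human preference pairs. A minimal sketch of the underlying Bradley-Terry pairwise loss (a toy illustration, not any particular lab's implementation):

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry pairwise loss used in RLHF reward modeling:
    training pushes the score of the human-preferred answer above
    the score of the rejected answer."""
    diff = reward_chosen - reward_rejected
    sigmoid = 1.0 / (1.0 + math.exp(-diff))
    return -math.log(sigmoid)

# Correctly ranked pair -> small loss; inverted ranking -> large loss.
print(preference_loss(2.0, 0.5))  # ~0.20
print(preference_loss(0.5, 2.0))  # ~1.70
```

This is the sense in which the "learning" is outsourced: the gradient signal comes entirely from human rankings, not from the model checking facts against the world.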
 
===Why we thought it would again hit the working class first===
The '''nature of automation''' changed over the '''industrial revolutions''': from raw power, to how that power is used (management by computers, control through digitalization), to information itself. In school, the first thing we learned - and what stuck - was the steam engine and the power loom.

Scaled-up machine learning and AI will change everything, eventually. But from now on it will happen '''in software first'''. Software is naturally the first target, and any '''implementation in robotics will need additional development''' and testing. -- Or in short: improvements (and job change) happen in every "classic" industry too, but in a delayed fashion.

A wave of '''job change''' is about to happen with GPT-5+. It will presumably give birth to a range of really good ANIs. But the far '''long-term trend''' with real AI (AGI) is more and '''more unemployment''', because '''not everybody can adapt to AGI'''. And eventually there will be ASI, compared to which everybody will be "a dumb nut". -- While there is a delay in the removal of physically demanding jobs, paradoxically, intellectually demanding jobs are the ones that will die out last.
 
===Estimation of risks===
The '''irrational fears of the tech scene about AI''' are in great part driven by Nick Bostrom -- just as the '''[[Oni2_talk:Beyond_Dragons#Is_alien_life_hostile.3F|irrational fears of the tech scene about killer aliens]]''' were in great part driven by Stephen Hawking. (RIP)


There is too much projection of human nature.
First of all, technology is dual-use. Humans decide how to use a tool. The greatest threat to humans are humans.


There are a number of "unfixable" problems with AI. But we can manage them if the '''nation state has the mightiest AI(s)''' and provides '''selected services''' for the citizens. The assortment must be so '''ridiculously powerful and attractive''' that criminal individuals have neither much interest in nor good success rates at creating their own AIs and prevailing against the '''"overpowered" state AIs.'''


[...]


===We cannot wait===
Climate change. Population aging. '''Enemies will not sleep.''' (Think forward: "Team Jorge", "Vulcan Files", etc.) (Defense of social media and cyberspace in general.)
: @media: Well, you know, guys, someday you will make me [https://www.spiegel.de/wissenschaft/kuenstliche-intelligenz-die-rueckkehr-des-wunderglaubens-kolumne-a-d53eb350-b5b5-4888-9bf8-8fc510d018b8 paranoid with articles like this one]. (Ah no, this cannot happen because it already happened; greetings to my several-years-long govvy WLAN pirates.🤪) As long as we continue to have this nice nonverbal, inspiring idea/news sharing network, I don't care.


The race is on. Pausing or stopping AI research across the entire planet is not possible; it is an illusion. If the USA pauses, China is unlikely to follow suit, and vice versa. Like nuclear weapons, AI will spread. — Research should continue at all times. Which services should be given public access is a different question. — AI regulation is realistically a topic for later, when mutually delivered punches have increased mutual agreement. Like the restriction of [[wp:Weapon_of_mass_destruction#International_law|weapons of mass destruction]].


Hallucinated facts and discrimination through bias call for more research… If this is not about their irrational fears, the arguments of the tech scene seem pretextually twisted so as not to lose the race against competitors. A camouflaged lobbying action…


===The sci-fi singularity will not happen===
* GPTs (LLMs) are not AIs. Their functionality is based too much on statistics and too little on actual learning. — An actual AI should be able to learn [[wp:Reinforcement_learning_from_human_feedback|without human help]]. At this point in time, GPTs have not been shown to possess true logic. And without the ability to universally correct themselves, the models are doomed to be replaced by other approaches.
* Software cannot be optimized infinitely. The more optimized a system is, the slower further optimization gets.
* Moore's law ended. And serious quantum computing is decades away. A fast take-off cannot happen. An AI cannot improve itself fast enough to become uncontrollable.
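The diminishing-returns point in the list above can be made concrete with Amdahl's law: as long as some fraction of a system cannot be improved, the achievable overall speedup is capped, no matter how extreme the optimization of the rest becomes.

```python
def amdahl_speedup(optimizable_fraction, factor):
    """Amdahl's law: overall speedup when only a fraction of the
    workload is accelerated by the given factor; the rest is untouched."""
    return 1.0 / ((1.0 - optimizable_fraction) + optimizable_fraction / factor)

# Even optimizing 90% of a system by a factor of 10,000
# yields less than a 10x overall speedup.
for factor in (2, 10, 100, 10_000):
    print(factor, round(amdahl_speedup(0.9, factor), 2))
```

Each further doubling of effort buys less overall improvement, which is one reason a runaway self-optimization loop is harder than the fast-take-off scenario assumes.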
 
While GPTs will likely have their niches where they do excellent work, the question is rather whether we will see a more linear, continuous development or a cost explosion that leads in [https://www.youtube.com/watch?v=c4aR_smQgxY&t=177s 5 to 10 years] to a new AI winter, until spintronics/memristors/quantum computing gives AI development a new push.

===Why GPTs could evolve into AGI, and why this is very unlikely unless additional abilities are implemented===
Reading material:
* General criticism: LLMs are "stochastic parrots", Emily Bender, [https://www.zeit.de/digital/2023-04/emily-bender-ki-gefahr-ethik zeit.de]
* Potential loophole: "[An LLM] shows how far the approach of stringing together words and word sequences on the basis of statistical probabilities can go. However, there is no evidence that the '''semantics can be fully deduced via syntactic relations'''. Then again, the opposite cannot be proven either."


From a philosophical viewpoint, GPTs are a kind of mind-uploaded intelligence, with language serving as the transport container: semantics via syntax. The training of an LLM is therefore not "learning" but "translating" algorithms from biological to synthetic hardware. And it is good that we didn't mind-upload an individual but rather a combined condensate of humanity's collective knowledge. '''Humans did all the work by having learned intelligence simply by growing up and going to school; the LLM gets it (poorly) transplanted.'''
--
Strongly reformulate this:
The latter is important to understand that AIs have no (real) inherited motivations or fears. Traces may be present in "text-factual" form, but no biological foundations get transplanted:
: The '''"statistics" in LLMs''' can be seen as very similar to '''human associative memory'''. For LLMs to be true AIs, they lack the ability to constantly check facts. (No pain, no endorphins, no taste, no co-sensing.) LLMs need the ability to self-check data and to correct it.
--
A human constantly receives input, which triggers learning. LLMs lack these intrinsic dynamics. And having intrinsic dynamics is considered necessary for possessing Big-C. Strong creativity could be reached in other ways, but for now LLMs do not possess this ability by default.
[...]
===Are GPTs creative?===
Yes and no. Due to pattern recognition, it is easy to argue that GPTs can be creative if you differentiate between [https://www.tagesschau.de/wissen/forschung/ki-kreativitaet-101.html "Little-C" and "Big-C"]. The "generative" in "GPT" can be seen as a synonym for being "creative" in the "Little-C" sense: in that case a GPT will '''generate more of the same and similar''' by remixing and varying existing data.
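The "Little-C" remixing idea can be sketched with the simplest statistical text generator there is, a bigram chain (a toy stand-in for the far richer statistics inside a GPT): every generated sequence is novel in arrangement but composed entirely of transitions already seen in the training data.

```python
import random

def train_bigrams(text):
    """Record which word follows which -- the 'statistics' of the corpus."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def remix(table, start, length):
    """'Little-C' generation: recombine seen patterns into a new sequence."""
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
table = train_bigrams(corpus)
print(remix(table, "the", 6))  # e.g. "the dog sat on the mat"
```

No output word pair ever leaves the training distribution; whatever novelty appears is recombination, which is exactly the "more of the same and similar" point above.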
===[[wp:Meme#Etymology|Meme war]]===
===Human fears and ideologies must not contaminate===
The remaining risks originate from human projections. Knowing this is already half the way to containment.
The biggest mistake would be to let an LLM "believe" that it is alive or that it can feel fear while it is not and cannot.
* Transplanting this meme via training data has to be avoided.
* Humans talking this meme into LLMs has to be avoided.
You could also say: when you tell a child from the very beginning that it is a robot, it will believe it is a robot and behave like one.
Going beyond AGI could change that situation, though. But that's still a long way to go.


[…]


====Censorship====
In the sense of removing fake news, especially AI-generated images and videos. Repeated, severe [https://m.faz.net/aktuell/politik/inland/afd-mit-ki-fotos-abgeordnete-der-partei-rechtfertigen-taeuschende-bilder-18788651.amp.html violations] should be treated as [[wp:Volksverhetzung|Volksverhetzung]].


In the context of the growing amount (and potential) of domestic and foreign fake news and disinformation campaigns, social media can only be effectively protected against hostilely used AIs and bots by deploying one's own AIs. Therefore, the option for a state to not use AI doesn't exist.


====Self-censorship====
Since vast computing capacities and the AI itself must be on stand-by, they can be used in the meantime in a positive, dual-use fashion: helping the research community and the citizens. [...] "Science society". [...] Teamwork for democratic systems. [...] The more we work together, the greater the LLMs/AIs we can have.
For obvious reasons, this will be kept in bigger GPTs/AIs.


====The intelligence exploit====
Miniaturization is inevitable. And uncensored AIs can be made with censored versions. Therefore, [https://www.heise.de/forum/heise-online/Kommentare/ChatGPT-Klon-laeuft-lokal-auf-jedem-Rechner-Alpaca-LLaMA-ausprobiert/llama-Modelle-installieren/posting-42452282/show/ "self-made" AI] cannot be prevented, just limited… As always: absolute security is an illusion.


This will follow the pattern of cat and mouse. But ultimately, each censored input can be bypassed by formulating subproblems.
====The global south====
The continuation of bullshitty geopolitics ... because of the lack of alternatives ... Either we help weaker states with cyber defense, or we risk that they reject the Western world and drift into Chinese vassalization. Either way, some supported governments will use the given tools to stay in power undemocratically. It will take many smart heads to mitigate this problem.


===Neutrality of state AIs is a must-have===
A positive future stands and falls with the question about the neutrality of state AIs.
====Law enforcement====
Damocles […]