Talk:Restless Souls/Technology: Difference between revisions



===The sci-fi singularity will not happen===
* LLMs can at best be described as ''probabilistic AIs''. '''Their functionality is based too much on statistics and too little on actual learning of discrete information.''' The statistical approach is great for machine learning, since pattern recognition looks for similar things (the patterns) in a set of many examples (the training data). The final result (the trained LLM) consists of ''averaged information'' and statistics-based logic, while the weights determine how things are connected (computed). As a consequence, these LLMs sometimes hallucinate: they obtained some kind of machine logic but lost the ability to always retrieve ''un-averaged'' (unaltered) information as it is stored in normal databases. (Due to the averaging, the '''raw information as it was used in training is usually not accessible anymore'''.) This also explains how LLMs became good at math. Random texts and images from the web contain many '''subjective truths'''. Languages (as a concept), on the other hand, and especially math and programming languages - whose rules pose '''objective truths''' - are much easier to learn via statistics. When it comes to the generation of images and music, we often readily forgive errors because the output was good enough, many variants can pose a "valid" solution, and we could not have created it better ourselves. Invalid solutions - like "mutations" - can be addressed via negative prompts.<!--


The human memory also learns with statistics: when things correlate, it memorizes a rule. However, individual pieces of information are compressed better. In order to control body functions and also store a reconstruction of a world in one tiny skull, different memory systems are involved. It seems a memory is encoded in the short-term memory and, at the same time, faintly encoded in the long-term memory. During sleep the short-term memories get erased while their counterparts in the long-term memory get strengthened. So much, roughly, for efficiency. While the short-term memory does not seem to store logic, the long-term memory does, alongside discrete memories. [...]
Today's AIs lack such a more sophisticated (sub-divided) memory system.-->


* '''An actual AI should be able to learn [[wp:Reinforcement_learning_from_human_feedback|without human help]] and adapt to a given problem.'''
** At this point in time, LLMs have not been shown to possess ''true'' (universal) logic yet. (The capability to do all math with preinstalled or accessible "plugins" would not be a good hint per se. Neither should we expect or demand that an AI do mental arithmetic, as its construction was inspired by biological neural networks. But an AI should be able to write and compile its own software on the fly.)
** Without the possibility to universally correct themselves, the models are doomed to be replaced (or at least extended) by other approaches.
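The averaging argument above can be illustrated with a toy sketch: a minimal bigram model (a hypothetical, greatly simplified stand-in for an LLM) stores only statistical word-to-word transitions, not the training sentences themselves, so it can fluently produce a sequence that never occurred in its training data. The corpus and model here are invented for illustration only.

```python
from collections import defaultdict, Counter

# A two-sentence toy "corpus"; note that "the dog sat" never occurs verbatim.
corpus = ["the dog ran", "a dog sat"]

# "Train" a bigram model: count which word follows which.
bigrams = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for a, b in zip(tokens, tokens[1:]):
        bigrams[a][b] += 1

# The model stores only statistical structure, not the original sentences.
# Enumerating its continuations of "the dog" yields "the dog sat" - a
# fluent output that was never in the training data (a "hallucination").
continuations = [f"the dog {w}" for w in bigrams["dog"]]
print(continuations)  # ['the dog ran', 'the dog sat']
```

The same mechanism that lets the model generalize ("dog" can be followed by "sat") is what prevents it from guaranteeing verbatim recall of its training data.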


: 2024: When you look at the data points, you realize the [https://www.reddit.com/r/singularity/comments/1c79xg9/summary_of_stanford_universitys_2024_ai_index/ exponential start curve] of this new ''[https://www.researchgate.net/figure/S-curves-for-the-established-and-disruptive-technology-Performance_fig2_4899992 disruptive technology]'' is over. The second half of the S-curve is about to begin. Pessimists see 2026 as the date of a possible slowdown. GPT-5 and timely released competing products will be the last hype in this line. From then on, only [https://www.heise.de/news/xLSTM-Extended-Long-Short-Term-Memory-bessere-KI-Modelle-aus-Europa-9711813.html gradual improvements] will happen until a new approach (like real self-improvement) gets implemented.


While [https://www.heise.de/news/Branchenkenner-Die-Haelfte-der-KI-Start-ups-wird-es-bald-nicht-mehr-geben-9219155.html LLMs will likely have their niches] where they do excellent work, the question is rather whether we will see a more linear, continuous development or a [https://www.youtube.com/watch?v=c4aR_smQgxY&t=177s cost explosion] that leads in 5 to 10 years to [[wp:AI_winter|another AI winter]], until new approaches or technologies like spintronics / [[wp:Optical_computing|optical]] [https://spectrum.ieee.org/photonic-ai-chip computing] / memristors / quantum computing give AI development a new substantial push and make it ''disruptive'' again. From today's point of view it is more likely that we will see an intermediate cool-down but not another severe winter.


Based on information from 2024 to 2025: Sam Altman indirectly admitted that he cannot meet his AGI goals. He introduced intermediate stages ([https://nxtli.com/en/openai-agi/ Level 1 to Level 5]) to console people.<!--As time progresses, this sentence has pretty much no relevance anymore: at the moment, we are at the stage of [https://www.spiegel.de/netzwelt/netzpolitik/kuenstliche-intelligenz-ki-agenten-sind-das-naechste-grosse-ding-nur-wo-a-12034439-b2fd-4936-8076-81d1dd18db8f AI agents].--> Therefore, a small to moderate delay in the development of (useful) [https://www.spiegel.de/netzwelt/netzpolitik/physical-ai-humanoide-roboter-werden-die-neuen-autos-kolumne-a-83ac825d-c763-4ea4-81fe-db27145cdcb0 physical AI] was also to be expected.<!--