
Talk:Restless Souls/Technology

===The sci-fi singularity will not happen===
* GPTs (LLMs) are not AIs. Their functionality is based too much on statistics and too little on actual learning. The statistical approach works well for machine learning because pattern recognition looks for similar things (the patterns) across a set of many examples (the training data). The final result (the trained LLM) is composed of ''averaged information'' and statistics-based logic, while the weights determine how things get connected (computed). As a consequence, these LLMs sometimes hallucinate: they obtained some kind of machine logic but lost the ability to always retrieve ''un-averaged'' (unaltered) information the way it is stored in normal databases. (Due to the averaging and the weights, the stored '''information is generally not accessible verbatim anymore'''; researchers need considerable effort to reconstruct information in an LLM that was once part of its training data. A toy sketch below illustrates this.) This also explains why LLMs became good at math: random texts and images from the web contain many '''subjective truths''', whereas languages (as a concept), and especially math and programming languages, whose rules pose '''objective truths''', are much easier to learn via statistics. When it comes to generating images and music, we readily forgive errors because the output is good enough, many variants can pose a "valid" solution, and we could not have created it better ourselves. Invalid solutions - like "mutations" - can be addressed via negative prompts.
 
An actual AI should be able to learn [[wp:Reinforcement_learning_from_human_feedback|without human help]]. As of now, GPTs have not been shown to possess true logic. (The capability to do all math without "plugins" would be a very strong hint, if not a proof.) Without the ability to universally correct themselves, the models are doomed to be replaced (or at least extended) by other approaches.
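To make the "averaged information" point above more tangible, here is a minimal sketch (plain Python with invented toy sentences, not taken from any actual LLM): a bigram model whose only "weights" are word-pair counts. Because statistics from different training sentences are merged into the same counts, sampling from the model can splice sentences together and state something that was never in the training data - a toy version of a hallucination.

<syntaxhighlight lang="python">
import random
from collections import defaultdict

# Toy "training data", invented for this sketch.
training_data = [
    "konoko works for the tctf",
    "muro works for the syndicate",
    "the tctf fights the syndicate",
]

# "Training": count which word follows which. These counts are the model's
# entire knowledge - averaged statistics, not verbatim sentences.
counts = defaultdict(lambda: defaultdict(int))
for sentence in training_data:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def generate(start, length=4, seed=None):
    """Sample a continuation from the merged bigram statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        nxt_words = list(followers)
        weights = [followers[w] for w in nxt_words]
        out.append(rng.choices(nxt_words, weights=weights)[0])
    return " ".join(out)

for seed in range(4):
    print(generate("konoko", seed=seed))
# Roughly half of the samples read "konoko works for the tctf" (verbatim from
# the data), the other half "konoko works for the syndicate" - fluent,
# statistically plausible, and never part of the training data.
</syntaxhighlight>

A real LLM obviously uses billions of continuous weights instead of a count table, but the idea described in the bullet above is the same: the training data is folded into shared parameters rather than stored as retrievable records, which is why getting it back out verbatim takes effort.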
* Software cannot be optimized infinitely. The more optimized a system is, the slower further optimization gets (see the sketch after this list).
* Moore's law has ended. And serious quantum computing is decades away.
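As an illustration of why optimization runs into diminishing returns, here is a small sketch based on Amdahl's law (brought in purely as a well-known example, it is not claimed above): once only a fraction of a system's runtime is still optimizable, the achievable overall speedup is capped, no matter how much effort goes into that fraction.

<syntaxhighlight lang="python">
def overall_speedup(p, s):
    """Amdahl's law: speedup of the whole system when the fraction p of its
    runtime that can still be optimized is accelerated by a factor s."""
    return 1.0 / ((1.0 - p) + p / s)

p = 0.5  # assume half of the runtime can still be optimized
for s in (2, 10, 100, 1_000_000):
    print(f"that part {s}x faster -> whole system {overall_speedup(p, s):.3f}x faster")
# 2x        -> 1.333x
# 10x       -> 1.818x
# 100x      -> 1.980x
# 1000000x  -> 2.000x  (the cap is 1 / (1 - p) = 2)
</syntaxhighlight>

Every round of optimization shrinks the remaining optimizable fraction, so each further round buys less - which is the point of the bullet above.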