Artificial intelligence is moving forward at an unprecedented speed, driven primarily by the dominant paradigm of "scaling": more compute, more data, more parameters, i.e. larger models. The appeal of scaling lies in its simplicity: keep making the model bigger and we will eventually reach human-like AI, or artificial general intelligence (AGI), a machine capable of human-level intelligence, creativity, adaptability, and generalization. Yet, as impressive as these large language models are, important theoretical questions remain unanswered. Is scaling enough to achieve genuine understanding, human-like creativity, or deeper awareness? In this post, I argue that despite its practical success, scaling is fundamentally limited in its ability to produce true AGI. Instead, AI researchers need insights from modern neuroscience. These approaches reveal important blind spots in the current AI trajectory, challenge the simplistic notion that bigger always means better, and offer a richer, more integrated pathway to genuine intelligence and creativity.
Berkeley's leading AI researcher, Stuart Russell, has sharply criticised the scaling approach, emphasizing that there are no fundamental principles underlying these large models, which he describes as "giant black boxes." Scaling is not a principled strategy but an empirical one: there is no solid scientific basis to guarantee progress towards AGI. Practical limitations include the finite supply of useful data and physical limits on computing power. More troubling, Russell points out that even impressive breakthroughs, such as the acclaimed success of AlphaGo, can rest on shaky foundations, creating an illusion of intelligence without real understanding. This raises serious questions about whether scaling alone can lead to true AGI. If scaling fails to fulfill its promise, we risk not merely stagnation but a potentially devastating "AI winter" that would set the field back both economically and scientifically.
Recent research highlights emergent abilities: skills that appear spontaneously only after a model exceeds a certain size threshold, inviting comparisons with the human brain. Wei et al., 2022 report that capabilities such as arithmetic and multi-step reasoning appear abruptly at a particular scale, contrary to simple extrapolation. At first glance, these emergent properties seem to support the scaling strategy: perhaps the path to AGI is simply to reach the next, bigger emergent transition. However, their unpredictability exposes an important vulnerability. Emergent capabilities are inherently uncertain, appearing without warning and without a theoretical explanation. Without a deeper understanding of why these transitions occur, we cannot reliably predict future breakthroughs. Yann LeCun, Meta's Chief AI Scientist and Turing Award winner, has made this point again and again: scaling is not guided by underlying principles but relies on trial and error, an inherently risky strategy for developing something as consequential as AI, especially from a safety perspective. This unpredictability underscores the urgency of grounding AI in robust scientific principles.
Karl Friston's Free Energy Principle (FEP), a well-supported theoretical framework, describes the brain as an adaptive, nonlinear dynamical system that minimizes uncertainty through active inference. Unlike AI's passive pattern recognition, the embodied brain engages in an action-perception cycle that both predicts and controls sensory input. Our brains constantly generate predictions and adjust when reality doesn't match; faced with uncertainty, we either update our beliefs or act to shape our outcomes. This provides what scaling lacks: guiding principles for adaptive intelligence. Similarly, Scott Kelso's concept of metastability explains the brain's flexibility as it transitions between order and disorder. Both theories emphasize embodied cognition, highlighting the limitations of disembodied AI. Cognition arises from the dynamic coupling between brain, body, and environment, which today's AI lacks, limiting its real-world understanding. True intelligence manifests not only through static pattern matching but through real-time sensorimotor interaction.
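For readers who want the formal core of the FEP, its central quantity is variational free energy. A standard textbook form, written here in general terms rather than tied to any particular model discussed in this post, is:

```latex
F \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o, s)\big]
  \;=\; D_{\mathrm{KL}}\!\big[\,q(s)\,\|\,p(s \mid o)\,\big] \;-\; \ln p(o)
```

Here o stands for sensory observations, s for their hidden causes, and q(s) for the brain's approximate belief about those causes. Minimizing F can be done in two ways that mirror the loop above: refining the belief q(s) so it better matches the true posterior (perception), or acting so that incoming observations o become less surprising under the brain's model (action).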
At first glance, AI models such as ChatGPT and the human brain share core similarities. Both act as prediction engines, and both rely on patterns to make predictions: LLMs use text-based patterns, while the brain integrates sensory input and lived experience. However, the way they generate predictions reveals a fundamental difference in the nature of their intelligence. Foundation models work by predicting the next token in a sequence, drawing on a vast dataset to determine the most likely continuation. In that sense, they act as sophisticated pattern-matching machines that approximate coherence without meaning.
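To make the mechanism concrete, here is a minimal sketch of next-token prediction. The vocabulary, scores, and sampling routine are illustrative placeholders, not the internals of any real LLM:

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution over candidate tokens."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0):
    """Sample the next token from the predicted distribution."""
    probs = softmax([l / temperature for l in logits])
    return random.choices(vocab, weights=probs, k=1)[0]

# Toy example: the "model" has scored candidate continuations of
# "The cat sat on the". Higher score = judged more likely.
vocab = ["mat", "roof", "keyboard", "moon"]
logits = [3.2, 1.1, 0.4, -2.0]   # illustrative numbers only

print(sample_next_token(vocab, logits))  # most often prints "mat"
```

Everything the model "knows" is packed into those scores; nothing in the loop requires the system to check its prediction against the world.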
In contrast, the human brain's prediction engine works dynamically. It continuously generates hypotheses about the world, updates beliefs through sensory experience, and adjusts its actions accordingly. The brain doesn't just passively predict; it acts on the environment to reduce uncertainty. This is genuine agency, and it is not found in today's AI systems; it is the difference between "agentic" AI and real agency. Without the intrinsic drive to minimize uncertainty through embodiment and real-world interaction, AI remains fundamentally limited: it can predict, but it cannot understand.
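As a hedged illustration of this perception-action loop, here is a toy sketch in the spirit of active inference, not Friston's actual formalism: an agent tracks a hidden temperature, updates its belief from noisy readings, and acts to pull the world toward its preferred state.

```python
import random

class ActiveAgent:
    """Toy agent: predicts a hidden value, updates beliefs, and acts on the world."""

    def __init__(self, preferred=21.0):
        self.belief = 15.0          # current estimate of the hidden temperature
        self.preferred = preferred  # the state the agent "wants" to observe

    def perceive(self, observation, learning_rate=0.3):
        # Perception: shrink prediction error by nudging the belief toward the data.
        prediction_error = observation - self.belief
        self.belief += learning_rate * prediction_error
        return prediction_error

    def act(self, world, gain=0.5):
        # Action: change the world so future observations match the preferred state.
        world.temperature += gain * (self.preferred - self.belief)

class World:
    def __init__(self, temperature=12.0):
        self.temperature = temperature

    def observe(self):
        return self.temperature + random.gauss(0.0, 0.5)  # noisy sensory input

world, agent = World(), ActiveAgent()
for step in range(20):
    error = agent.perceive(world.observe())  # update beliefs (perception)
    agent.act(world)                         # change the environment (action)
    print(f"step {step:2d}  belief={agent.belief:5.2f}  error={error:+.2f}")
```

The point of the sketch is the coupling: the same prediction error drives both belief updates and behavior, which is precisely what a next-token predictor trained offline never has to do.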
In a recent paper, "AI as Agency Without Intelligence," philosopher Luciano Floridi sharpens the distinction between LLM-based AI and human cognition. LLMs display remarkable linguistic fluency and may exhibit a primitive form of agency, but they operate as statistical pattern processors rather than truly intelligent systems. Notably, Floridi refines the popular "stochastic parrot" critique: LLMs do not simply regurgitate text. They integrate and recombine data in novel and emergent ways, much like a student stitching together an essay from multiple sources without deep understanding.
Moreover, John Searle's Chinese Room thought experiment argues that AI merely simulates intelligence rather than genuinely understanding language, like a person following a rulebook to manipulate Chinese symbols without comprehending them. Similarly, AI models process symbols without real meaning or intentionality. This highlights the gap between simulation and true understanding, reinforcing the point that scaling alone does not produce genuine intelligence grounded in cognition and real-world experience.
Neuroscientist Anil Seth argues that AGI does not imply artificial consciousness. Intelligence involves flexible, goal-directed behavior, while consciousness involves subjective experience and sensation. Contrary to assumptions common in the AI community, consciousness is not simply a matter of complex algorithms running on the brain's wetware. Instead, it emerges from being a living, embodied, self-organizing organism driven by self-preservation. This challenges the assumption that consciousness will appear spontaneously from "merely" increased intelligence. Even if AI reaches human-level intelligence, consciousness may remain elusive unless it is explicitly accounted for. The distinction between intelligence and consciousness further underscores the need for neuroscience: understanding consciousness through embodiment may be essential for moving beyond merely intelligent AI toward genuine awareness.
Given all this, how can these neuroscience insights be bridged with practical agentic AI? Recent neuroscience research by Kotler et al., 2025 highlights the flow state, peak performance, and effortless creativity, in which System 1 (fast, intuitive) and System 2 (deliberate, controlled) cognition integrate to enable adaptive decision-making. Today's AI loosely mimics both: LLMs excel at fast, pattern-based recognition (System 1), while inference-time computation allows multi-step reasoning (System 2). What AI lacks is the embodied, dynamic interplay between these two processes.
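As a rough sketch of what such coordination might look like in software (hypothetical helper functions, not an existing framework or the mechanism described by Kotler et al.), an agent could route easy queries through a fast path and escalate uncertain ones to a slower, multi-step loop:

```python
def fast_intuition(query):
    """System 1 stand-in: a single cheap pass that also reports its confidence."""
    # In a real system this would be one forward pass of a model.
    answer = f"quick guess for: {query}"
    confidence = 0.4 if "?" in query else 0.9   # placeholder heuristic
    return answer, confidence

def slow_deliberation(query, max_steps=3):
    """System 2 stand-in: build up intermediate steps before answering."""
    steps = [f"step {i + 1}: reconsider '{query}'" for i in range(max_steps)]
    return f"deliberated answer for: {query}", steps

def answer(query, threshold=0.7):
    """Route between the two modes based on the fast path's confidence."""
    guess, confidence = fast_intuition(query)
    if confidence >= threshold:
        return guess
    final, trace = slow_deliberation(query)
    return final

print(answer("2 + 2"))                       # handled by the fast path
print(answer("Is scaling enough for AGI?"))  # escalated to deliberation
```

What the biological system adds, and this sketch does not capture, is that the two modes are continuously coupled through the body and the environment rather than selected by a fixed confidence threshold.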
Neuroscience-guided agentic AI could partner with humans to enhance creativity, intuition, and performance. Future agentic AI should support human cognition and flow states, enable genuine human-AI synergy through active inference, and be aligned with embodied intelligence. By grounding these agentic systems in principles derived from neuroscience, we can transform AI from passive calculators into true creative partners (and remember, the brain is a black box too!). Ultimately, this integration promises transformative progress, allowing AI to augment human intelligence rather than simply replicate it. Moving forward, embracing neuroscience in AI development is essential for responsibly navigating the path toward truly intelligent, and perhaps even consciously aware, machines.