Over time, these elements evolved into advanced cognitive systems capable of abstract thinking, self-awareness, and future planning.
Differences in the Evolution of Intelligence in Mammals and Cephalopods
An intriguing example of the evolution of intelligence can be seen in mammals and cephalopods (such as octopuses) – two distinct evolutionary paths leading to advanced cognition.
Mammals, including humans, developed intelligence in a social context, where cooperation and group living played a crucial role. Their cognitive abilities evolved to solve problems related to cooperation, competition, and social communication. This led to the emergence of complex social hierarchies, empathy, theory of mind (understanding the thoughts and intentions of others), language, and abstract thinking. The mammalian brain features a large cerebral cortex, particularly the frontal lobes, responsible for planning, self-control, and decision-making.
Cephalopods, on the other hand, evolved intelligence in a solitary existence, requiring adaptation to diverse oceanic environments. Their cognitive abilities focus on solving spatial problems, camouflage, tactical behavior, and independent control of limbs. A unique feature of cephalopod brains is that about two-thirds of their neurons are located in their tentacles, allowing their limbs to act autonomously.
These two examples demonstrate that intelligence can evolve through different pathways, adapting to specific survival challenges.
As we continue exploring the evolution of intelligence, understanding how the brain functions and has developed over time remains essential.
The Principle of Brain Functioning
The brain consists of billions of neurons that process information and coordinate the organism’s actions. These neurons communicate with each other through chemical substances called neurotransmitters. When a neuron is activated, it transmits an electrical impulse that reaches the synapse – the contact point with another neuron. At this point, the electrical signal is converted into a chemical one, as neurotransmitters are released into the synaptic cleft and activate receptors on the next neuron.
Key neurotransmitters such as dopamine, serotonin, and glutamate regulate essential aspects of behavior and perception. For example, dopamine is associated with motivation and the reward system, while serotonin influences mood and anxiety levels. Glutamate serves as the primary excitatory neurotransmitter, playing a crucial role in learning and memory processes.
The Influence of Hormones on Brain Function
Hormones play a crucial role in regulating behavior and physiological states. For example, cortisol, the stress hormone, is produced in response to threats and helps the body cope with emergency situations. However, if its levels remain elevated for prolonged periods, it can lead to chronic stress, depression, and impaired cognitive function. Oxytocin, on the other hand, promotes the strengthening of social bonds and empathy, which are essential for complex forms of communication and interaction.
The influence of hormones on the brain is regulated through the hypothalamus, which controls the pituitary gland and, in turn, interacts with the endocrine system. This integration ensures the coordination of cognitive and physiological processes.
The Microbiota and Its Influence on the Brain
The microbiota, or the collective of microorganisms inhabiting our body, also plays a crucial role in brain function. In recent decades, it has become clear that microbes, especially those living in the gut, influence behavior, emotions, and cognitive processes. This interaction between the brain and microbes is known as the microbiome-gut-brain axis.
Some microbes can affect the levels of neurotransmitters, such as serotonin, which is produced in the gut, and influence inflammatory processes that, in turn, may impact the functioning of the nervous system. For example, disruptions in the balance of the microbiota are associated with the development of depression, anxiety disorders, and even neurodegenerative diseases such as Alzheimer’s disease.
Evolution and Development of These Systems
Over time, through the process of evolution, the systems in various animal species, including humans, became increasingly complex and adapted to the surrounding environment. In the human brain, several levels of development can be distinguished: from ancient structures found in our ancestors, including reptiles, to more complex and specialized regions, such as the neocortex, responsible for abstract thinking, planning, and self-awareness.
In reptiles and their ancestors, including early mammals, there was a part of the brain responsible for basic survival functions, such as instincts, aggression, and sexual behavior. As evolution progressed, and more complex cognitive functions developed, new structures were added to this ancient brain, such as the limbic system, which is responsible for emotions, and the neocortex, which developed in mammals and enables more complex cognitive tasks like abstraction, planning, and self-reflection.
These changes led to the creation of brain structures that process information not only based on current events but also in anticipation of future states, allowing adaptation to the changing conditions of the environment. Brain evolution not only improved survival mechanisms but also created conditions for more complex forms of behavior, such as social interactions, empathy, and language.
Brain Development in Octopuses
The brain of octopuses has a remarkable structure and functional features that distinguish it from the brains of mammals. While octopuses do not possess the same complex brain system as mammals, they demonstrate a high level of cognitive abilities such as learning, tool use, problem-solving, and even signs of personality.
The octopus brain is divided into several parts, with the majority of its mass concentrated in the head. However, two-thirds of its neurons are located in the arms. This unique structure allows each arm to operate relatively independently and make its own decisions. This trait provides octopuses with exceptional flexibility in interacting with their environment and adapting to changing conditions.
Differences in Brain Function Between Octopuses and Humans
Mammals, including humans, developed complex social structures, which contributed to the evolution of a more centrally organized brain. As mammals, we have a highly developed cerebral cortex (especially the frontal lobes), which is responsible for functions such as planning, self-control, and abstract thinking. Our brain is also closely connected to the hypothalamus and the endocrine system, which allows hormones like cortisol and oxytocin to regulate behavior in response to external and internal stimuli.
In contrast, the octopus brain, while also highly developed, functions somewhat differently. The concentration of neurons in their arms allows octopuses to make decisions at a local level without needing to send signals to the central brain. This provides them with remarkable autonomy and the ability to adapt to a variety of situations. For example, octopuses can solve problems related to spatial perception and object manipulation, not only thanks to their central brain but also through their body, which is a unique feature.
In both cases – in mammals and octopuses – the brain serves as an adaptive organ that processes information about the external world and makes decisions based on the organism’s current needs. However, while mammals developed a central brain to coordinate actions and social interactions, octopuses use local brain structures to maintain a high degree of independence for their body parts. This difference reflects distinct evolutionary survival strategies, where mammals rely on collective behavior and complex social interactions, while octopuses depend on individual decision-making and flexibility in manipulating their environment.
The Bayesian Approach to the Mind: The Free Energy Principle and Predictive Coding Theory
Predictive coding and its foundations in Bayesian approaches play a central role in the contemporary understanding of how the brain perceives and processes information. Unlike traditional views of perception, in which the brain simply reacts to sensory data, the theory of predictive coding argues that the brain actively constructs models of the world and uses them to predict future events. These predictions are then compared with the sensory information actually received. Prediction error – the difference between what the brain expects and what it actually perceives – serves as a signal for updating the mental model. This process allows the brain to minimize energy costs, accelerating perception and increasing adaptability, which forms the basis for the effective functioning of cognitive processes.
In recent decades, the theory of predictive coding has increasingly been seen as part of the broader Free Energy Principle, which links it with Bayesian inference, Active Inference, and other approaches focused on minimizing uncertainty and adapting to environmental changes. However, despite the growing interest in this integrative approach, predictive coding itself remains a fundamental concept for understanding how the brain constructs models of the world and updates them based on new data. This work will focus primarily on predictive coding, its neurobiological mechanisms, and its role in cognitive processes.
The historical roots of the theory of predictive coding indeed trace back to the works of Pierre-Simon Laplace, who laid the foundation for the concept of determinism. Laplace, one of the first to consider ideas of probability and determinism in the context of predicting the future, proposed that if one had complete knowledge of the current state of the universe, the future could be predicted with absolute certainty. His hypothesis of the “Laplace Demon,” which could predict the future with perfect accuracy, was based on the idea that if we knew all the parameters of microstates, including the position and velocity of every particle, all events – including human thoughts and actions – could be predicted.
This idea of an all-knowing observer and the ability to predict future events based on complete knowledge of present conditions provided an early conceptual foundation for understanding how the brain processes information and makes predictions about the future. Predictive coding and the free energy principle are modern extensions of this concept, where the brain continually updates its internal models of the world to minimize prediction errors and uncertainty.
However, the concept of prediction and world modeling continued to develop much later. In the 19th century, Laplace's strict determinism began to be questioned as probabilistic methods matured: Carl Friedrich Gauss developed the theory of observational errors, and ideas of probabilistic calculation and uncertainty gained ground with the development of statistics and thermodynamics.
The shift toward probabilistic thinking marked a key turning point in the evolution of predictive models. It became increasingly clear that the world is not fully deterministic and that knowledge of the present state is often insufficient to predict the future with absolute certainty. This uncertainty was formally recognized in statistical mechanics, which introduced the concept of entropy – a measure of disorder or uncertainty in a system. As a result, the idea that the brain might work with probabilities, updating predictions based on new information, became more plausible and relevant in the context of cognitive neuroscience.
In the 20th century, the works of Norbert Wiener on the prediction and filtering of signals, Claude Shannon on information theory, and E. T. Jaynes on probability as extended logic represented a significant step toward understanding how predictions can operate under conditions of uncertainty and how a system can construct hypotheses in probabilistic terms. These mathematical approaches ultimately helped lay the foundation for the theory of predictive coding in neurobiology.
Equally important contributions to the development of the idea of prediction came from neuroscientists in the second half of the 20th century, such as Benjamin Libet and Nobel laureate Roger Sperry. For example, Libet conducted experiments demonstrating that the brain initiates the decision-making process a fraction of a second before a person becomes consciously aware of their choice, challenging the idea of full conscious control over behavior.
However, theories similar to predictive coding began to develop actively only in the late 20th and early 21st centuries. A key role in this was played by research into neuroplasticity and the brain's adaptive mechanisms. Neurobiological studies, including investigations of neurotransmitters such as dopamine and of neural network dynamics, provided significant insights into how the brain uses prediction and internal models to perceive the surrounding world. Founders of predictive coding theory in neuroscience, such as Rajesh Rao and Dana Ballard, and later Karl Friston with his free energy principle, proposed that the brain constantly forms hypotheses about the future based on past experience and compares them with incoming sensory information.
Bayes’ theorem, proposed by the English mathematician Thomas Bayes in the 18th century, became an important mathematical tool for analyzing and updating probabilistic hypotheses in light of new data.
The essence of the theorem is that it allows the probability of a hypothesis to be recalculated in light of new data. In its simplest form, P(H | E) = P(E | H) · P(H) / P(E): the posterior probability of hypothesis H given evidence E is proportional to how well H predicts E, weighted by the prior belief in H. In the context of the brain, this theorem can be used to explain how neural networks update their predictions about the future, taking both old and new experience into account.
In the context of predictive coding theory, this theorem and formula illustrate how the brain updates its hypotheses (or predictions) about the world based on new sensory data. When the brain encounters new events (data), it revises its prior probability (predictions) to incorporate these data, which helps improve the accuracy of future predictions.
Thus, this process reflects a key feature of predictive coding: the brain does not simply react to data, but actively revises its expectations based on new inputs, always striving to minimize prediction errors.
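This update rule can be sketched numerically. The following is a minimal illustrative example, not a model from the book; the hypothesis and all probability values are assumptions chosen for demonstration.

```python
# A minimal sketch of Bayesian updating: belief in a hypothesis H is revised
# after evidence E via Bayes' theorem, P(H | E) = P(E | H) * P(H) / P(E).
# All numbers are illustrative assumptions.

def bayes_update(prior: float, likelihood: float, likelihood_if_false: float) -> float:
    """Return the posterior P(H | E) given P(H), P(E | H), and P(E | not H)."""
    evidence = likelihood * prior + likelihood_if_false * (1.0 - prior)
    return likelihood * prior / evidence

# Hypothetical hypothesis: "the footsteps I hear mean a person is approaching".
prior = 0.5  # initial belief before any evidence
posterior = bayes_update(prior, likelihood=0.9, likelihood_if_false=0.2)
print(round(posterior, 3))   # belief rises after one piece of evidence -> 0.818

# Yesterday's posterior becomes today's prior: repeated evidence
# strengthens the belief further.
posterior2 = bayes_update(posterior, likelihood=0.9, likelihood_if_false=0.2)
print(round(posterior2, 3))  # -> 0.953
```

The key design point, mirrored in the text, is that updating is iterative: each posterior feeds back in as the next prior, just as the brain's revised expectation becomes the baseline for the next prediction.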
The application of Bayes' theorem to neurobiology and cognitive science became possible in the 1980s, when scientists began to understand how the brain could use probabilistic methods to cope with uncertainty. In this paradigm, the brain is seen as a Bayesian inference machine that formulates hypotheses about the world and updates them in response to sensory information using the principles of probability. The Bayesian model suggests that the brain maintains probabilistic models of future events and adjusts them based on prediction errors, which connects it directly to the theory of predictive coding.
This updating of probabilistic hypotheses is crucial because it allows the brain not only to adapt to changes in the environment but also to account for uncertainty in the world, even when information is incomplete. In this sense, Bayes’ theorem and its applications have become fundamental to understanding how the brain, when faced with uncertainty, can improve its predictions and forecast the future based on prior knowledge.
Thus, the connection between predictive coding theory and Bayes’ theorem became a key point in the development of neurobiological models explaining how the brain processes information and uses probabilistic computations to predict the future. Bayes’ theory, as the foundation for handling uncertainty and adaptation, provided an important mathematical and cognitive tool for understanding how the brain functions in the context of constant uncertainty and the ever-changing world.
Predictive Coding as an Adaptive Mechanism
The principle behind the theory of predictive coding is that the brain does not simply react to external stimuli, but actively predicts them using existing models of the world. The brain constructs hypotheses about what will happen in the future and compares them with current sensory information. If the predictions match reality, the prediction error is minimized, allowing the brain to use its resources efficiently. If an error occurs – when there is a mismatch between the prediction and reality – the brain updates its models of the world, which helps improve perception and adaptation.
This approach allows the brain to save energy and effort by minimizing the need to process all information from scratch. Instead of interpreting data anew each time, the brain works with simplified models that it constantly updates based on new sensory data. This significantly speeds up information processing and reduces energy expenditure. For example, when a person is walking down the street, their brain does not analyze each step individually but simply uses its predictions about what should happen in the next second.
Predictive Coding operates at different levels, ranging from simple sensory signals (such as sounds or colors) to complex social interactions and abstract ideas. At lower levels, the brain predicts basic sensory signals, such as shapes and movements, while at higher levels, it predicts more complex phenomena, such as people’s intentions or social interaction scenarios.
The Role of Hormones, Neurotransmitters, and Microbiota in Prediction
The effectiveness of predictive coding mechanisms also depends on various external and internal factors. Hormones, neurotransmitters, gut microbiota, and injuries can significantly influence the brain’s ability to predict and adapt.
Cortisol, the stress hormone, can impair the brain’s ability to adjust its predictions. For example, high levels of cortisol may disrupt the process of updating the world model, leading to persistent perceptual errors and increased anxiety. Neurotransmitters such as dopamine play a key role in reward and motivation processes, as well as in strengthening or weakening certain brain predictions. Recent studies have also shown that gut microbiota can influence cognitive functions and even the brain’s predictive abilities, as microbes interact with the central nervous system, affecting our mood and perception.
Injuries, especially brain injuries, can disrupt the neurobiological processes of prediction, leading to cognitive and emotional disorders. For example, depression and anxiety disorders can be associated with disruptions in the mechanisms of predictive coding, when the brain cannot effectively update its world models.
Modern brain research shows that the mind actively creates and updates models of the world using predictive coding and Bayesian approaches.
Predictive coding is the process by which the brain forms hypotheses about what it expects to perceive and compares these hypotheses with actual sensory information. When predictive coding results in a mismatch between the brain’s expectations and sensory input (prediction error), the brain can either update its world model or try to interpret the data through existing hypotheses. If the prediction error is too large, the brain may sometimes perceive it as reality, which can lead to hallucinations. For example, under conditions of sensory deprivation, when sensory information is insufficient, the brain may dominate with its predictions, and visual or auditory images may appear to compensate for the lack of real stimuli. In cases of excessive activation of predictions, such as during stress or neurochemical imbalances (such as excess dopamine), the brain may ignore real information and impose its own interpretation. This partially explains the hallucinations observed in schizophrenia.
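The balance between predictions and sensory input described above is often formalized as precision weighting: the percept is a weighted average of the prior prediction and the sensory signal, with weights given by their reliabilities (the standard combination of two Gaussian estimates). The sketch below is an illustration of that general principle, not a model from the book; all values are assumptions.

```python
# A toy sketch of precision weighting: the percept is a precision-weighted
# average of the brain's prediction and the sensory input. When sensory
# precision is low (as in sensory deprivation), the percept is dominated by
# the prediction -- loosely mirroring how prediction-driven percepts can
# arise. All numbers are illustrative assumptions.

def percept(prediction: float, sensory: float,
            prior_precision: float, sensory_precision: float) -> float:
    """Posterior mean of two Gaussian estimates combined by precision."""
    total = prior_precision + sensory_precision
    return (prior_precision * prediction + sensory_precision * sensory) / total

# Normal viewing: reliable senses, the percept stays close to the input.
print(percept(prediction=10.0, sensory=2.0,
              prior_precision=1.0, sensory_precision=9.0))  # -> 2.8

# Sensory deprivation: unreliable senses, the percept drifts toward
# the prediction.
print(percept(prediction=10.0, sensory=2.0,
              prior_precision=9.0, sensory_precision=1.0))  # -> 9.2
```

The same arithmetic runs in both calls; only the precisions change, which is the point: perception shifts toward whichever source, prediction or senses, the system currently treats as more reliable.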
Levels of Predictive Coding:
Low level (sensory): The brain predicts simple sensory signals (e.g., lines, colors, or sounds). For example, if you hear footsteps, your brain predicts that you will see a person.
Middle level (perceptual): Predictions include more complex structures – images, sounds of words, or objects. For instance, seeing quick movement in the bushes, you predict that it’s an animal.
High level (cognitive): At this level, the brain forms complex hypotheses, including social interactions and abstract ideas. For example, based on someone’s behavior, you might predict their intentions.
Ascending and Descending Signals
The hierarchy of information processing is based on two types of signals:
Descending Predictions (top-down signals): At each level of the brain, predictions are generated about sensory data that are sent to lower levels. For example, if a higher level predicts that a person is seeing a face, lower levels will expect facial features (eyes, nose, mouth).
Ascending Prediction Errors (bottom-up signals): When the actual sensory signal does not match the prediction, an error signal is generated. This signal is sent to higher levels to adjust the model and refine predictions.
How Does the Brain Correct Errors?
This process occurs through cyclic feedback:
Prediction: The higher level generates a prediction and sends it down the hierarchy.
Comparison: At the lower level, this prediction is compared with the actual sensory signal.
Error: If there is a discrepancy, a prediction error is generated.
Model Update: The error is sent back upward, where the model is adjusted to improve future predictions.
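The four steps above can be sketched as a minimal prediction-error loop: predict, compare, compute the error, and nudge the internal model toward the observation. This is a toy delta-rule illustration, not a biological model; the learning rate and signal values are assumptions.

```python
# A minimal prediction-error loop: the model's prediction is compared with
# the observation, and the resulting error is used to update the model.
# Learning rate and inputs are illustrative assumptions.

def update_model(model: float, observation: float,
                 learning_rate: float = 0.5) -> tuple[float, float]:
    prediction = model                       # 1. prediction sent down the hierarchy
    error = observation - prediction         # 2-3. comparison yields a prediction error
    model = model + learning_rate * error    # 4. model adjusted to shrink future error
    return model, error

model = 0.0
for signal in [1.0, 1.0, 1.0, 1.0]:          # a stable sensory signal
    model, error = update_model(model, signal)
    print(round(model, 4), round(error, 4))  # model converges, error shrinks
```

With each pass the error halves (1.0, 0.5, 0.25, 0.125) and the model approaches the signal, which is exactly the resource-saving behavior the text describes: once predictions match reality, little remains to transmit.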
When the real sensory information matches the predictions, the brain minimizes the prediction error, which helps conserve resources. However, if the information does not align with expectations, a prediction error occurs, signaling the need to update the world model.
In the brain’s neural layers, there is a division between “prediction neurons,” which form expectations, and “error neurons,” which signal when predictions are not met. For example, the supragranular (upper) layers of the cortex contain error neurons that activate when something unexpected occurs, while the deeper layers contain neurons that carry prediction signals.
However, the effectiveness of predictive coding is influenced by various factors, including hormones, neurotransmitters, microbiota, and injuries. Hormones, such as cortisol, produced in response to stress, can alter neuron sensitivity, affecting the brain’s ability to adapt and learn. Neurotransmitters, such as dopamine, play a key role in motivation and reward processes, which can enhance or diminish certain predictions and responses. The gut microbiota, interacting with the central nervous system, can influence mood and cognitive functions, reflecting in the process of prediction. Injuries, especially brain injuries, can disrupt the normal functioning of neural networks responsible for predictive coding, leading to cognitive and emotional disorders.
Errors in the process of predictive coding can occur for various reasons. They may be related to insufficient accuracy of sensory data, incorrect interpretation of information, or failure to update world models. Such errors can lead to distorted perception and impaired adaptive behavior. For example, during chronic stress, elevated cortisol levels can reduce the brain’s ability to adjust predictions, resulting in persistent perceptual errors and increased anxiety.
Thus, predictive coding is the foundation of adaptive behavior and human cognitive functions. Understanding the mechanisms of this process and the factors that influence its efficiency opens new horizons for the development of treatments for various mental and neurological disorders related to disruptions in predictive coding.
Conclusion
The emergence of the mind is the result of a complex evolutionary process that has led to the development of various forms of intelligence in different species. Predictive coding and Bayesian approaches demonstrate how the brain creates models of the world and adapts to new conditions, minimizing prediction errors. These mechanisms form the basis of our perception, learning, and thinking, making the mind a powerful tool for understanding and transforming reality.
4. Existential Limits of Forecasting
Mental models are internal cognitive structures through which we conceptualize and predict the world. These models help us navigate life by creating more or less accurate representations of reality. However, like any other tool, they are limited. Mental models, much like filters through which we perceive the world, are inevitably simplifications based on experience and expectations, allowing us to interact with the environment more efficiently. Yet, like any tool, these models cannot always accurately reflect reality, as the world does not always fit into the frameworks we create for it.
In Plato’s philosophy, these ideas find their continuation. In the famous “Allegory of the Cave,” Plato depicts individuals who, sitting in a dark cave, can only see the shadows cast by objects positioned in front of a fire. These shadows represent a distorted perception of reality, perceived as true because the cave dwellers have never seen the light. Only the one who escapes the cave can see the true reality hidden behind the shadows. Plato’s image symbolizes the limitations of our perception, which reflects only a fragment of the full picture of the world.
Later, Immanuel Kant argued that we perceive the world not as it is “in itself” (Ding an sich), but through the a priori forms of the mind, which help us understand the nature of these limitations. Kant believed that our knowledge of reality will always be constrained by the categories of the mind, such as space, time, and causality, which are imposed upon our experience and do not exist in the world “in itself.” This means that human perception will always be limited by these a priori forms, and we can understand and predict only those aspects of the world that fit within these frameworks.
The idea that our perception of the world is always limited was further developed in the Bayesian tradition discussed earlier. A classic example, used by Richard Price in the appendix to Bayes’s essay and later formalized in Laplace’s rule of succession, concerns the sunrise. A person stepping out of a cave for the first time observes the sunrise and wonders: does this happen every day? With each new observation, they update their belief using Bayesian reasoning, and with every sunrise the hypothesis that the sun rises daily grows stronger. However, if one day this prediction proves false, and the sun does not rise or set in its usual place, they will need to adjust their model of the world based on the new data.
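The sunrise example can be made concrete with Laplace's rule of succession: after observing n sunrises in n days, the estimated probability that the sun rises tomorrow is (n + 1) / (n + 2), starting from a uniform prior. The code below is a small illustration of that rule; the observation counts are arbitrary.

```python
# Laplace's rule of succession: posterior predictive probability of success
# after observing `successes` out of `trials`, under a uniform prior.
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    return Fraction(successes + 1, trials + 2)

for n in (1, 10, 1000):
    print(n, rule_of_succession(n, n))
# Confidence grows with every observed sunrise yet never reaches 1:
# a single contrary observation would still pull the estimate down.
```

After one sunrise the estimate is 2/3; after a thousand, 1001/1002. This captures the chapter's point exactly: the model grows ever more confident, but certainty is never total, and one anomalous day forces a revision.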
Thus, in the Bayesian approach we observe a process of continuous updating of our mental models based on new observations, which echoes Plato’s idea of searching for true reality beyond distorted perceptions. The Bayesian view emphasizes that perception and prediction of the world are dynamic processes, always subject to adjustment, and that the reality we strive to understand may always be deeper than our current model of it allows.
These ideas were further developed and expanded by Nate Silver, who explored the principles of forecasting in conditions of uncertainty. Silver argues that successful forecasting depends on the ability to distinguish between “signal” (important information) and “noise” (random or insignificant data), which is directly related to Bayesian model updating. However, Silver goes further, emphasizing that not all models can be corrected simply by updating them with new data. In a world full of uncertainty and randomness, many predictions turn out to be incorrect, even if they follow the right methodology.
Silver emphasizes how people often overestimate their ability to interpret data, relying on predictions that seem plausible but may actually be the result of perceptual errors and biases. He explains that it is important not only to consider new data but also to understand the context in which it arises. In this sense, as in Bayesian models, the adjustment of mental models is a process that requires not only observations but also an awareness of the limitations we face when interpreting the world. Silver also underscores that the significance of “noise” in data is often overlooked, and without the ability to separate it from the “signal,” we will not be able to create accurate predictive models, even when using the most advanced data analysis methods.
Thus, like Bayesian theory, Silver emphasizes the importance of continually revising our assumptions and correcting our models of the world. However, unlike classical Bayesian theory, Silver points out the complexity of predictions in the real world, where the signal is often hard to distinguish from the noise, and our ability to make accurate predictions remains limited.
However, despite the fact that our mental models can be updated based on observations, even with all the complexity of predictions, the process of adapting to new data is not infinite. When the world becomes too complex, or when our expectations collide with fundamentally new and unpredictable phenomena, our models encounter limitations that cannot be overcome through conventional methods of adjustment. This opens up an insurmountable gap for the mind – a moment when we find ourselves unable to adapt our predictions to reality.
In such situations, when even the most flexible models prove powerless, the mind experiences a crisis caused by the inability to predict or comprehend what is happening. This confrontation with uncertainty leads to existential tension, questioning the very capacity of the mind to make sense of the world. And despite all efforts to update and revise models, it becomes clear that human cognition inevitably faces boundaries that cannot be surpassed by familiar forecasting mechanisms.