

Students' passion for knowledge goes beyond books: research and essays matter just as much. Read some of their publications and explore every field of research.

Cognitive Neuroscience: Quest for the Secrets of Human Cognition

Written by Emil Koch

Cognitive neuroscience investigates behavioral manifestations of physical and chemical activity in the brain (Smelser & Baltes, 2001). How does our mind function? What is the relationship between stimuli and observable behavioral responses? This interdisciplinary field, embracing philosophy, psychology, artificial intelligence, neuroscience, linguistics, and anthropology, studies the fundamental mechanisms underlying the human mind and intelligence. As far back as ancient Greece, philosophers such as Plato and Aristotle embarked on a quest to fathom the origins of human knowledge. Aristotle, for instance, conjectured that the brain functioned as the body's cooling system while attributing knowledge to the heart. Interestingly, scientific psychology is believed to have emerged from this philosophical background only around 1879, and to have joined forces with cognitive science starting in the 1960s (Hatfield, 2002).

For a long time, scientists had postulated the theory of localized mental abilities, suggesting that specific brain regions were responsible for distinct functions. This concept, known as localizationism, was originally proposed by Paul Broca in 1861. He documented the case of a patient who suffered from motor aphasia and demonstrated that damage to a particular area in the left frontal lobe was responsible for this impairment. Contrary to this hypothesis, a team of researchers at Caltech discovered intact brain networks governing functions like walking, talking, and other vital activities in six patients who had undergone a procedure to remove half of their brain to treat seizures (California Institute of Technology, 2019). Additionally, researchers at Penn Medicine confirmed in 2022 that the adult brain continues to undergo neurogenesis. This underscores the brain's remarkable axonal and synaptic plasticity, or in simpler terms, neuroplasticity (Neuroscience News, 2023).

In 1909, Korbinian Brodmann published Comparative Localisation Theory of the Cerebral Cortex, the first known mapping of the cerebral cortex, which gave us the famously known Brodmann areas (Garey, 2005). Later in the 20th century, behaviorism rejected the idea of consciousness and demanded a focus on directly observable behavioral reactions to external stimuli. Yet empirical data is of little use without a theoretical framework. So, where does this leave us?

Cognitive scientists build such frameworks by developing computational simulations of neurons and their dynamic behavior. Computational linguists model language acquisition, along with the fundamental aspects of syntax, logic, and semantics, to inform our understanding of the mind. Interestingly, researchers at MIT, Cornell University, and McGill University have taken significant strides toward an artificial intelligence system that can automatically learn the underlying rules of human language from small amounts of phonological and morphological data, employing Bayesian Program Learning. Bayesian approaches to brain function rest on the assumption that the brain processes information according to statistical principles, combining bottom-up prediction errors (PE) with top-down priors that are constantly updated to reduce those errors, particularly given the world's inherent uncertainty (Knill & Pouget, 2004).
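To make this concrete, here is a minimal sketch (my own illustration, not drawn from the cited work; the toy numbers are assumptions) of how a single Gaussian prior could be updated by precision-weighted prediction errors, the core arithmetic behind Bayesian-brain accounts:

```python
# Minimal sketch of precision-weighted Bayesian updating (illustrative).

def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Combine a top-down prior with one bottom-up observation."""
    prediction_error = obs - prior_mean            # bottom-up PE
    gain = prior_var / (prior_var + obs_var)       # precision weighting
    post_mean = prior_mean + gain * prediction_error
    post_var = (prior_var * obs_var) / (prior_var + obs_var)
    return post_mean, post_var

# Repeated noisy samples of a stimulus near 2.0 pull a broad prior
# toward the stimulus while the belief's variance shrinks.
mean, var = 0.0, 4.0
for obs in [2.1, 1.9, 2.0]:
    mean, var = bayes_update(mean, var, obs, obs_var=1.0)
```

Each update moves the belief by a fraction of the prediction error; the less reliable the prior is relative to the observation, the larger that fraction.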

In a similar way, connectionists utilize deep learning networks, employing artificial neural networks to understand the nature of the mind. Hebbian learning is a foundational form of unsupervised learning: activity-dependent synaptic plasticity between connected pre- and postsynaptic neurons, mirrored by the weights between nodes in artificial neural networks (Choe, 2014). This type of learning is well suited to tasks involving categorization, whereas supervised learning, frequently complemented by backpropagation, enables precise control over the model's output as it iteratively fine-tunes the weights of the nodes to align with the desired target output.
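A toy sketch of the plain Hebbian rule Δw = η · pre · post (my own construction, not taken from the cited work): weights between co-active units strengthen with no error signal or target involved, so the network picks up the statistics of its input on its own.

```python
import numpy as np

# Illustrative Hebbian learning on two input patterns; all numbers
# and pattern choices here are assumptions for the demo.
rng = np.random.default_rng(0)
eta = 0.1
w = rng.normal(0.0, 0.01, 4)   # small random initial weights

# The first pattern occurs far more often, so purely Hebbian updates
# come to reflect its structure -- unsupervised category formation.
frequent = np.array([1.0, 1.0, 0.0, 0.0])
rare = np.array([0.0, 0.0, 1.0, 1.0])

for _ in range(100):
    pre = frequent if rng.random() < 0.9 else rare
    post = w @ pre              # postsynaptic activity
    w += eta * pre * post       # "fire together, wire together"

# Weights onto the frequent pattern's inputs now dominate. Note that
# plain Hebbian learning grows without bound; practical variants
# (e.g. Oja's rule) add a normalization term.
```

The contrast with backpropagation is that no target output ever appears: the update uses only locally available pre- and postsynaptic activity.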

At first glance, connectionist models appear to align closely with our understanding of the brain. First and foremost, the brain naturally forms extensive neural networks of neurons and synapses, allowing it to function effectively even in noisy environments, albeit with some degree of imprecision. Furthermore, these networks exhibit resilience to damage, as exemplified by neuroplasticity, and excel at resolving conflicts in object recognition and coordinated motion. However, artificial neural networks often require thousands of learning trials, a notable disparity from the human capacity to swiftly draw heuristic conclusions from new information, as emphasized by Rumelhart and McClelland (1986). Notably, Pinker and Prince, in their 1988 work, argue that neural networks struggle to learn language rules, or acquire rules that correspond to no known human language. This implies that there are specific rules and cognitive processes that cannot be entirely accounted for by the principles of connectionism. In other words, the authors argue for symbolic storage of information in the brain, contrasted with the connectionist claim that information is stored non-symbolically in weights.

Does the brain have a symbolic processor? To reconcile these views, Eliasmith (2013) has proposed that neural networks have semantic pointers for symbolic processing. What does this mean? The prevailing concept posits that perceptual inputs traverse expansive networks of neurons, which capture them as mathematical entities such as vectors; as these pass from the input stage to the output population, their representation becomes progressively more condensed. Put differently, a semantic pointer to an object can be used for computational processes without relying on bottom-up percepts, thus reducing working memory needs. Semantic pointers can also point back to and regenerate these representations, allowing for more efficient higher-order abstract processes (Blouw et al., 2015).
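One way to make the semantic-pointer idea concrete is with vector-symbolic binding. The sketch below is my own illustration in the spirit of Eliasmith's architecture, which uses circular convolution; the names role and filler and all parameters are assumptions. It compresses a role-filler pair into a single vector of the same dimensionality and then "points back" to regenerate a noisy copy of the filler:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 512

def unit(v):
    return v / np.linalg.norm(v)

def cconv(a, b):
    """Circular convolution: binds two vectors into one of the same size."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=d)

def involution(a):
    """Approximate inverse used for unbinding."""
    return np.concatenate(([a[0]], a[:0:-1]))

role = unit(rng.normal(size=d))      # e.g. COLOR
filler = unit(rng.normal(size=d))    # e.g. RED
pointer = cconv(role, filler)        # compressed pair, still d-dimensional

# Unbinding recovers a noisy version of the filler; a clean-up memory
# would then select the closest stored vector.
recovered = cconv(pointer, involution(role))
sim = float(unit(recovered) @ filler)   # well above chance (~0 for unrelated vectors)
```

The key property is that the bound vector is no larger than its parts, yet the original representation remains regenerable, which is the sense in which a pointer can stand in for a full percept.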

This conception of mathematical vectors as semantic pointers, with information stored in fewer and fewer neurons, is in line with the reduction in data dimensionality and the decreasing number of neurons in later hierarchical layers of the visual cortex (Hinton & Salakhutdinov, 2006). It also hints at hubs for symbolic processing, as suggested by Sokolowski et al. (2021), who used fMRI adaptation to compare semantic representations of symbols, quantities, and physical size. Their findings revealed a correlation between activation in the right intraparietal sulcus and nonsymbolic representation, whereas symbolic magnitudes were associated with activation in the left inferior parietal lobule. Notably, there were overlaps in activation, but these brain regions appear to play distinct roles in either symbolic or non-symbolic processing, suggesting a specialization of function.

When talking about human cognition, Daniel Kahneman inevitably comes into play. Where does cognitive bias fit into the mix? Generally, we tend to use simple heuristics in situations of limited time, limited information, and uncertainty. Ecologically speaking, these can be effective thanks to savings in working memory, yet they can also lead to suboptimal decisions that breach the rules of logic and probability. Could such biases be explained by overgeneralization in a connectionist network, or by classic symbolic representations producing confirmation bias toward pre-existing beliefs and overgeneralization when filling the gaps those representations leave?

In their 2018 study, Korteling and colleagues introduced a biological neural network framework for understanding cognitive bias, which hinges on four fundamental principles: the associative principle, the compatibility principle, the retainment principle, and the focus principle. To begin, our minds perpetually engage in subconscious searches for correlations, often forging connections between unrelated elements and leading us to perceive them as coherent patterns of causality. Say there is a correlation between owning a red car and having an accident. Is the correlation caused by (A) owning a RED car or (B) owning a car? The coherency machinery opts for (A). Thus, a connectionist view of backpropagated weightings would account for an overgeneralization that turns perceived correlations into causality. Similarly, the compatibility principle posits that incoming information must align with the brain's current state, given that information is connected associatively rather than simply stacked like data on a hard drive. In this regard, a reasonable explanation would be that neural networks are more easily activated by information consistent with higher-order semantic pointers, because Hebbian processing and the potentiation of connectivities occur faster when information complies with previous beliefs. Furthermore, anchoring effects, a psychological phenomenon in which both numeric and non-numeric judgments are swayed by a reference point even when it is entirely unrelated to the current task (Furnham & Boo, 2011), can be envisioned as a semantic pointer. Finally, when making decisions, we tend to rely on the information that is most readily or recently available (Kahneman & Tversky, 1973), also known as the WYSIATI rule (Kahneman, 2011). Both connectionism and classicism can account for these effects, and potentially a hybrid model of the two: while a more bottom-up or a more abstract top-down perspective can each explain reported cognitive biases, their conjunction may better unite theoretical and biological parameters.
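The red-car example can be made concrete with toy data (my own construction, not from Korteling et al.; all probabilities are assumptions): accidents here depend only on whether one drives at all, never on color, yet the red-car indicator still correlates with accidents, because every red-car owner is also a car owner.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
owns_car = rng.random(n) < 0.6
red_car = owns_car & (rng.random(n) < 0.3)   # color assigned at random
accident = owns_car & (rng.random(n) < 0.1)  # risk comes from driving only

def corr(x, y):
    return float(np.corrcoef(x, y)[0, 1])

# The red-car correlation is positive even though color plays no
# causal role; car ownership is the real driver of the pattern.
spurious = corr(red_car, accident)
genuine = corr(owns_car, accident)
```

A coherence-seeking system that latches onto the first positive correlation it finds will read causality into the color, which is exactly the associative-principle failure mode described above.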

In conclusion, cognitive neuroscience is a multidisciplinary field that delves into the intricate workings of the human mind and prompts profound questions about the nature of human thought. As we grapple with the mysteries of the mind, we seek to understand not only how we think but also why we think the way we do, considering the ecological and evolutionary forces that have shaped our brains. Recent advances in our understanding of the neurocomputational and neurobiological processes underlying cognition make cognitive neuroscience an exciting interdisciplinary field to enter.



AI that can learn the patterns of human language. (2022, August 30). MIT News | Massachusetts Institute of Technology. Accessed on 10/15/2023.

Blouw, P., Solodkin, E., Thagard, P., & Eliasmith, C. (2015). Concepts as Semantic Pointers: a framework and computational model. Cognitive Science, 40(5), 1128–1162.

Broca, P. (1861). Remarks on the seat of the faculty of articulated language, following an observation of aphemia (loss of speech). Bulletin de la Société Anatomique, 6, 330–357.

Choe, Y. (2014). Hebbian Learning. In Springer eBooks (pp. 1–5).

Cognitive Science (Stanford Encyclopedia of Philosophy). (2023, January 31). Accessed on 10/15/2023.

Discovery of Molecular Signatures of Immature Neurons in The Human Brain Throughout Life Provide New Insights into Brain Plasticity and Other Functions, According to Penn Medicine Researchers. (2022, July 18). Penn Medicine News. Accessed on 10/15/2023.

Eliasmith, C. (2013). How to build a brain: A neural architecture for biological cognition. OUP USA.

Finger, S. (2005). Minds behind the brain.

Furnham, A., & Boo, H. C. (2011). A literature review of the anchoring effect. Journal of Socio-economics, 40(1), 35–42.

Garey, L. J. (2005). Brodmann’s localisation in the cerebral cortex. In Springer eBooks.

Hatfield, G. (2002). Psychology, Philosophy, and Cognitive Science: Reflections on the history and philosophy of experimental psychology. Mind & Language, 17(3), 207–232.

Hinton, G. E., & Salakhutdinov, R. (2006). Reducing the Dimensionality of Data with Neural Networks. Science, 313(5786), 504–507.

Kahneman, D. (2011). Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux.

Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80(4), 237–251.

Knill, D. C., & Pouget, A. (2004). The Bayesian brain: the role of uncertainty in neural coding and computation. Trends in Neurosciences, 27(12), 712–719.

Korteling, J., Brouwer, A., & Toet, A. (2018). A neural network framework for cognitive bias. Frontiers in Psychology, 9.

Neuroscience News. (2023). Live imaging reveals axon adaptability in neuroplasticity. Neuroscience News. Accessed on 10/15/2023.

Patients missing one brain hemisphere show surprisingly intact neural connections. (2019). California Institute of Technology. Accessed on 10/15/2023.

Pinker, S., & Prince, A. (1988). On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28(1–2), 73–193.

Rumelhart, D. E., & McClelland, J. L. (1986). Parallel distributed processing: Explorations in the microstructure of cognition. MIT Press.

Smelser, N. J., & Baltes, P. B. (2001). International Encyclopedia of Social & Behavioral Sciences. Pergamon.

Sokolowski, H. M., Hawes, Z., Peters, L., & Ansari, D. (2021). Symbols are special: An fMRI adaptation study of symbolic, nonsymbolic, and non-numerical magnitude processing in the human brain. Cerebral Cortex Communications, 2(3).


