Reason, Imagination, Imitation, and UBI
Tauhid Nur Azhar
There is one WhatsApp group where I never post my daily writings. Not because I am afraid of criticism, which is in fact expected and healthy, but out of shyness. If any of my writings appear in that group, it is usually not because I posted them myself, but because they were reposted by one of the members. For example, one of my writings about the wonders of microbes was posted by Ustadz Samsoe Basaroedin, who happens to be my teacher and the brother of another teacher of mine in the field of physiology.
The group is very dynamic because it consists of professors and interdisciplinary scholars, many of whom hold doctoral degrees. Many also have a background in religious studies: scholars deeply versed in jurisprudence, faith, and mysticism. One of the admins of the group, which has 486 members at the time of this writing, is Pak Budiawan, an electrical engineering alumnus of ITB. The name of the group is Cangkrukan.
One of the interesting topics often discussed, sparking many fascinating studies, is the Islamic calendar and all its implications. It is examined holistically, from astronomy and the astronomical sciences to various exact and humanistic approaches that are exciting to observe and learn from.
Debates over evidence or theories often occur in the group. Statements that invite or contain fallacies also appear from time to time, and interestingly, the concept of fallacy itself is then discussed, not just the provocative content. Fallacies such as the argumentum ad verecundiam become easier to understand because they are explained collectively there. Similarly, when a statement verges on ad hominem, the group analyzes not only the causes and roots of the debate that triggered the offensive remark, but also the definition and nature of the fallacy itself.
Senior friends and teachers with profound scientific insight, as well as scholars and religious teachers whose knowledge is like an oasis that makes us thirst for lessons, seem to be gathered there.
And what interests me today, to the point of becoming the subject of this writing, is the group members' broad interest in the arrival of AI technology, specifically the Meta AI feature embedded in WhatsApp chat. Meta AI makes it very easy to find additional information, even to elaborate on a topic and ask the AI to analyze it comprehensively, with the results readable directly in the "chat room." Philosophical questions representing ontological, epistemological, and axiological approaches are thrown at Meta by turns, along with questions on astronomy, calendar systems, and even jurisprudence.
Ontological questions, the part of the construction of knowledge that discusses the essence or existence of something (the "what"), have also been raised there, as I have seen myself. The word ontology comes from Greek, from two roots: ontos, meaning "existence," and logos, meaning "knowledge."
Epistemological questions, which ask "how" and concern knowledge and the way it is obtained, also appear often. The word epistemology comes from Greek: episteme, meaning "knowledge," and logos, meaning "study" or "discourse."
Then there are axiological questions, the "for what," which discuss values and ethics in knowledge; quite a few such discussions occur in the group, though not all of them are directed at Meta, of course. The word axiology comes from Greek: axios, meaning "value," and logos, meaning "theory."
What is interesting about the various discourses in the group is that both the questions and the answers posted in the chat room are precursors to knowledge, ready to be cooked and consumed when served. Very delicious and nutritious, containing the protein needed to construct towering cognitive buildings that seek the ultimate truth.
One of the discussions about AI that emerged concerned predictions of the future capacity of AI, or Imitative Intelligence. Will AI be able to create from its own imaginative capacity? Will it, in turn, develop affective capacities and functions, among them emotions and the various mechanisms of their expression? Will AI have consciousness and authentic ideas as part of its intellectual evolution?
Because honestly, in the field of neuropsychology, affect and consciousness form a horizon whose boundaries seem almost endless. In terms of function and structure, however, affect, including the emotions within it, is a neurophysiological mechanism involving multisensory modalities, preferences, memory, and decision-making systems in the domain of Higher-Order Thinking (HOT).
There is the function of the Anterior Cingulate Cortex, which plays a role in controlling attention and processing conflict, helping humans focus on relevant things. There is also the role of the Nucleus Accumbens, which manages motivation and reward and is part of the organic system that stimulates the human drive to take action. And, of course, there is the role of the Ventromedial Prefrontal Cortex (vmPFC), which integrates emotion and logic in complex moral decision-making processes.
In the subsequent process, which we know as part of the construction of morality, there is of course the role of the Default Mode Network (DMN). The DMN consists of several brain areas, such as the posterior cingulate cortex (PCC), medial prefrontal cortex (mPFC), inferior parietal lobule (IPL), lateral temporal cortex (LTC), middle temporal gyrus/angular gyrus (MTG/AG), and superior frontal gyrus/rostral anterior cingulate cortex (ACC), which work together synergistically and in near-perfect integration to support functions related to noble values, wise judgment, and accurate, adequate decision-making.
Generally, the DMN is the center of self-referential activity (self-reflection), such as thinking about oneself, one's personality traits, and one's emotional condition. fMRI scans also show the DMN active when we think about others: feeling empathy, developing moral values, and considering others' feelings.
On the other hand, the DMN also plays a role in constructing past memories and helps us understand narratives and contexts of stories. The same is found in DMN activity when thinking about the future.
Theoretically, functions based on neuronal activity, such as the formation of neural circuits, data transmission modulated by neurotransmitters, and the mechanisms of neuroplasticity, are things that technology can in principle replicate. The configurations and molecular-interaction models that underlie HOT and moral functions seem recognizable and remappable, and may one day be simulated by an AI model.
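To make the idea of replicability a little more concrete, here is a minimal sketch of Hebbian plasticity, the classic "neurons that fire together wire together" rule, which is one formal model of how synapses strengthen with experience. The network size, learning rate, and stimuli below are my own illustrative choices, a toy rather than a biological simulation:

```python
import numpy as np

# Minimal Hebbian plasticity sketch: a synaptic weight vector strengthens
# whenever pre- and post-synaptic activity coincide. All sizes, rates, and
# stimuli here are illustrative choices, not a biological simulation.

rng = np.random.default_rng(0)
n_inputs = 8
w = rng.normal(0, 0.1, n_inputs)                      # synaptic weights
eta = 0.01                                            # learning rate
pattern = (rng.random(n_inputs) > 0.5).astype(float)  # a recurring stimulus

for step in range(1000):
    x = pattern if rng.random() < 0.5 else rng.random(n_inputs)
    y = max(0.0, float(w @ x))            # rectified post-synaptic response
    w += eta * y * x                      # Hebb's rule: dw = eta * y * x
    w /= max(np.linalg.norm(w), 1e-8)     # normalization keeps weights bounded

print("response to learned pattern:", float(w @ pattern))
print("response to random input   :", float(w @ rng.random(n_inputs)))
```

After training, the weights align with the recurring stimulus, so the learned pattern typically evokes a stronger response than a random one: a toy version of a circuit being shaped by experience.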
Some facts often used to legitimize the superiority of the human brain include the dynamics of emotion, preference memory, the reward system, and the empathy or concern built by reason, all of which belong to the noble functions of human intelligence.
For example, I want to discuss a case taken from the world of popular media, such as YouTube. There is a food vlogger with over 5 million subscribers named Nex Carlos, whom we will use as the subject of a case study.
When reviewing food, Nex Carlos is very impressive: he not only conveys amazement at the deliciousness of what he is eating through words and verbal narration, but also communicates his sensory experience through gestures and facial expressions that bond with us very easily, thanks to the mechanism of mirror neurons.
Just this afternoon, when I watched his video tasting Opor Sunggingan in Kudus, Smoked Entog by Pak Gondrong in Kalinyamatan, and Mangut Welut by Bu Nasimah Sampangan, I drooled and felt hungry. His narration was so effective in tempting and tickling the sensation and association centers in our brains.
So we might ask: will computers or AI ever have consciousness built on emotional bonds, producing affective preferences rather than just semantic memory of data and general knowledge? According to current facts and premises, AI is considered unable, perhaps even unlikely, to develop episodic memory capacity. Episodic memory is memory of personal events and experiences that occur at a specific time and place, including the contextual information present at that moment, such as the personal experience of visiting a place. Examples of episodic memory are the first meeting with a sweetheart, the first day of work, or the most beautiful moments with one's late father. There is context and deep meaning in each remembered episode.
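Just to make the distinction tangible, one could contrast the two kinds of memory as data records: a semantic fact stands free of context, while an episodic record binds its content to a time, a place, and a felt experience. The record types and field names below are hypothetical choices of mine, an illustration rather than any real AI memory system:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative contrast only; these record types are hypothetical,
# not an actual AI memory API.

@dataclass
class SemanticFact:
    """Context-free general knowledge: true regardless of who learned it or when."""
    subject: str
    fact: str

@dataclass
class EpisodicMemory:
    """A personal event bound to a specific time, place, and felt experience."""
    what: str
    when: datetime
    where: str
    feeling: str   # the affective tone that gives the episode its deep meaning

knowledge = SemanticFact("opor", "a coconut-milk dish found in Javanese cuisine")
experience = EpisodicMemory(
    what="first taste of Opor Sunggingan",
    when=datetime(2025, 1, 7, 12, 30),
    where="Kudus",
    feeling="sudden happiness, like a long-forgotten family lunch",
)
print(knowledge)
print(experience)
```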
Nex Carlos may well have emotional preferences when he bestows the predicate "Ga Ada Obat" or "Badabest" on a dish he tastes. He has a depth of meaning toward experience, a library where the memory of his sense of taste is preserved. Not just the taste itself, but everything associated with it seems to be summoned again when he tastes a food that shakes not only his taste buds but also the center of memories of things that once made him happy.
But will AI never reach that phase? Or will AI even decide to skip it, because such a capacity hinders more advanced and precise results? Aren't most of the dynamics of civilization, colored by conflicts of interest, the result of emotional decisions? The abduction of Helen sparked the Trojan War; the assassination of Franz Ferdinand led to millions of lives lost in World War I; and the resentment and vengefulness of Adolf Hitler dragged the world into the dark phase of World War II.
Imagine: the emotion born of the heartache of Menelaus, betrayed by Helen who chose Paris, ignited years of war that not only cost enormously but also took thousands of human lives, including great heroes like Achilles.
So will AI, at some point, be able to produce or generate emotions and consciousness? To answer this, it seems we first need to understand what this Imitative Intelligence, or AI, is and why it is present among us today, as well as its relationship with the capacity and function we call Reason and with the procreative, higher-order cognitive function we may call Imagination.
Where does this Imitative Intelligence actually come from? What sparked the story of its origin? Let’s start this story by going back to an episode that took place about 68 years ago.
In a small laboratory at Dartmouth College in 1956, a group of scientists gathered with a big dream: to create a machine that could think like a human. John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester called this dream Artificial Intelligence (AI). They not only envisioned advanced technology but also opened the door to significant changes in how humans understand themselves.
However, to understand the relationship between humans and AI, we must delve into something more fundamental: how intelligence, imagination, and creativity work in the human brain and how AI tries to imitate this process to build a civilization integrated between humans and machines.
Human reason is a great work of evolution. The prefrontal cortex, temporal lobe, hippocampus, and limbic system work together to enable humans to think, remember, and feel. Reason involves the ability to analyze, evaluate, and create solutions to complex problems. This is where humans not only survive but also build civilizations.
But what actually happens in the brain when a person thinks? This process begins with perception, where information from the outside world is processed by the sensory cortex. The hippocampus then accesses memory to connect new information with old experiences. The prefrontal cortex analyzes this data, producing logical decisions and guiding actions.
Besides reason, humans have imagination and creativity, the ability to envision something that does not yet exist and create something new. Imagination is triggered by the Default Mode Network (DMN), a brain network active when someone is daydreaming or dreaming. Creativity arises from a combination of memory, logic, and emotion, with a significant role played by dopamine as the “fuel of motivation.”
When Alan Turing introduced the concept of the Turing machine in 1936, he not only changed how humans understand machines but also challenged the idea that intelligence is exclusively human. Incidentally, the name "Turing machine" was coined by Alonzo Church, who would later become Turing's doctoral supervisor, in his review of Turing's paper. The Turing Test, introduced in 1950, then became the initial benchmark for evaluating whether a machine could imitate human intelligence.
In 1943, Warren McCulloch and Walter Pitts published the paper “A Logical Calculus of the Ideas Immanent in Nervous Activity,” which introduced a mathematical model of artificial neural networks. This model mimics the workings of biological neurons, which later became the basis for the development of modern neural networks.
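The McCulloch-Pitts unit is simple enough to write out in full: a neuron "fires" when the weighted sum of its binary inputs reaches a threshold. The sketch below, with weights chosen by hand to realize a logical AND, is only an illustration of the 1943 model:

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: outputs 1 iff the weighted sum of its
    binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Hand-chosen weights and threshold realizing a logical AND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcp_neuron([a, b], weights=[1, 1], threshold=2))
```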
The 1956 Dartmouth Conference was a turning point. This is where AI was officially born as a scientific discipline. Although optimism was abundant, the hardware limitations of the time slowed progress. Computers like ENIAC (1946) were too slow and large to handle complex intelligence models.
At the end of the 1950s, Frank Rosenblatt created the perceptron, an early algorithm in machine learning. This was the beginning of artificial neural networks, although limited to simple patterns. Then, in the 1960s, Lotfi Zadeh introduced fuzzy logic, which allowed machines to handle uncertainty, similar to how humans make decisions.
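Rosenblatt's step beyond McCulloch and Pitts was a learning rule: the weights are no longer fixed by hand but adjusted whenever the current weights misclassify a labeled example. Here is a minimal sketch of that rule; the AND task and the learning rate are my own illustrative choices:

```python
import numpy as np

# Rosenblatt's perceptron rule: nudge the weights toward examples that the
# current weights misclassify. Data and learning rate are illustrative.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])          # learn the AND function from examples

w = np.zeros(2)
b = 0.0
eta = 0.1

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b >= 0 else 0
        w += eta * (target - pred) * xi   # update only on mistakes
        b += eta * (target - pred)

print([1 if xi @ w + b >= 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```

Because AND is linearly separable, the rule converges; the perceptron's famous limitation is that it cannot learn patterns like XOR, which is exactly the "simple patterns" boundary mentioned above.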
AI's progress cannot be separated from hardware development. The Intel 4004 microprocessor in 1971 and NVIDIA's graphics processing unit (GPU) in the 1990s brought a revolution in computational capability. GPUs enabled parallel data processing, dramatically speeding up AI model training. Just this afternoon, about 40 minutes ago, I watched Jensen Huang, the big boss of NVIDIA, launch their latest product: a generative-AI processor the size of a thumb, priced at just US$249 and suitable for smart applications of many kinds and sizes.
However, a significant change occurred in the 2010s with the introduction of deep learning. Convolutional Neural Networks (CNNs) enabled AI to recognize much more complex patterns. In 2017, the paper "Attention is All You Need" introduced the Transformer, the architecture that became the basis for large language models (LLMs) like GPT.
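The heart of that Transformer architecture, scaled dot-product attention, fits in a few lines. The toy dimensions below are my own, and real models add multiple heads, masking, and learned projections on top of this kernel:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention from 'Attention is All You Need':
    softmax(Q K^T / sqrt(d_k)) V. Toy dimensions are illustrative."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                             # 4 tokens, 8-dim embeddings
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))
print(attention(Q, K, V).shape)                     # -> (4, 8)
```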
In 2020, OpenAI launched GPT-3, an LLM with 175 billion parameters, capable of generating text indistinguishable from human writing. Alongside this, DALL-E and other multimodal models combined text, images, and sound, bringing AI to a level of creativity never before achieved.
So, can AI truly be creative? To answer this, we need to understand what happens in the human brain when imagination and creativity emerge. Imagination is the result of combining old ideas with new experiences, a process supported by neuroplasticity. Creativity, on the other hand, is the ability to take a further step, creating something new and meaningful.
AI, such as GPT or GAN (Generative Adversarial Networks), tries to mimic this process. GAN, for example, uses two networks: a generator and a discriminator, which compete to create realistic images. However, although AI can generate creative content, it does not have consciousness, moral values, or emotional experiences that are the basis of human creativity.
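A minimal sketch of that two-network game, assuming a toy task of imitating a one-dimensional Gaussian, might look like the following; the architectures and hyperparameters are arbitrary illustrative choices of mine, not the setup of the original GAN paper:

```python
import torch
import torch.nn as nn

# Minimal GAN sketch: a generator learns to imitate samples from a target
# distribution (here a 1-D Gaussian) by fooling a discriminator.
# Architecture and hyperparameters are illustrative choices only.

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2.0 + 5.0      # target distribution: N(5, 2)
    noise = torch.randn(64, 1)
    fake = G(noise)

    # Discriminator: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: try to make the discriminator call its fakes "real".
    opt_g.zero_grad()
    loss_g = bce(D(G(noise)), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 1))
print("generated mean/std:", samples.mean().item(), samples.std().item())
```

After enough steps, the generator's samples drift toward the target mean and spread: "creative" output in the narrow sense of plausible novelty, with no experience behind it.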
The relationship between humans and AI continues to evolve. From virtual assistants to autonomous vehicles, AI has become part of daily life. However, this raises a fundamental question: Can AI fully replicate human reason, imagination, and creativity?
Humans have higher-order thinking (HOT): moral analysis, empathy, introspection. These processes involve the ventromedial prefrontal cortex, amygdala, and hippocampus. AI, although capable of analyzing data at unprecedented speed, is not currently designed to have self-awareness or the ability to reflect on moral values. Still, collaboration between humans and AI can accelerate innovation, for instance in scientific research or art; and in turn, aesthetics or taste may become a trigger for preferences, perhaps even the embryo of an affective capacity for AI.
With advances in quantum computing and neuromorphic chip implementation, AI will become more efficient and powerful. Quantum computing can speed up AI model training, while neuromorphic chips mimic the human brain’s functioning to increase energy efficiency.
Additionally, the integration of AI with human biology through technologies like Neuralink opens up new possibilities. Hybrid humans, combining biological capabilities with AI-based neuroprocessing, may emerge in the coming decades. This will allow humans to expand their cognitive abilities and create a more adaptive world.
However, this future is not without risks. Issues such as algorithmic bias, privacy, and technological access disparities must be carefully addressed. Regulations and ethical frameworks will be key to ensuring that AI develops as a tool that supports humanity, not replaces it.
Yet there is no avoiding it: a civilizational shift has indeed occurred, and we have enjoyed both its benefits and its excesses. Unknowingly, we have also come to live in a world built on Boolean logic, Markov chains, Bayesian models, Laplacean probability, stochastic models, and multivariate calculus, all accompanying the binary system that has become the universal language of machines, which can now replicate many of our own cognitive functions.
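Two of those mathematical foundations are small enough to show directly: a Bayesian update and a two-state Markov chain, with numbers that are entirely made up for illustration:

```python
import numpy as np

# Bayes' rule with made-up numbers: update a belief from evidence.
prior = 0.01                   # P(hypothesis)
p_e_given_h = 0.95             # P(evidence | hypothesis)
p_e_given_not_h = 0.05         # P(evidence | not hypothesis)
posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior))
print(f"posterior = {posterior:.3f}")   # ~0.161

# A two-state Markov chain: tomorrow depends only on today.
P = np.array([[0.9, 0.1],      # sunny -> sunny/rainy (illustrative numbers)
              [0.5, 0.5]])     # rainy -> sunny/rainy
state = np.array([1.0, 0.0])   # start: certainly sunny
for day in range(50):
    state = state @ P          # propagate the distribution one step
print("long-run distribution:", state)  # converges to [5/6, 1/6]
```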
So, will we become alienated from the civilization we have painstakingly constructed through systematic approaches proven to integrate every stage of scientific development into scholarly products meant to make our lives easier? So easy, in fact, that we might forget how to survive without machine and AI support.
Currently, according to IDC, global data is expected to reach 175 zettabytes by 2025, up from 33 zettabytes in 2018. McKinsey, meanwhile, reports that global investment in AI was expected to reach US$500 billion by 2024. On the quantum side, IBM announced a 433-qubit processor in 2022, with plans to reach thousands of qubits within the coming decade. Once quantum computing becomes a practical reality, processing times will shrink further, and the world will spin much faster than we can currently imagine.
Thus, a time may come when thinking for ourselves will feel like too much effort, because the AI machine's thinking mechanism will be far more efficient, systematic, and structured, and even "wise" in the sense that it can weigh many aspects and sources of information simultaneously. Its choices will be the most appropriate, its policies logically fair. Likewise, AI's other functional outputs will all be characterized by precision, accuracy, appropriateness, speed, and adequacy: qualities that are hard for humans to achieve, because human judgment is almost certainly distorted by various affective factors. And yet this is precisely what makes our lives so enjoyable. We can be sad when under pressure, sulk when a desire goes unfulfilled, or, conversely, rejoice when what we hope for comes true, and so on.
Perhaps at some point we will indeed have to rely on AI to work, and, following the theory of utilitarianism, in which a basic income increases collective welfare by reducing disparities and broadening access to basic needs, we will be free to enjoy ourselves and create in the pursuit of personal self-actualization, while living on UBI, a universal basic income. 🙏🏾🙏🏾🙏
References
1. Keynes, J. M. (1936). The General Theory of Employment, Interest, and Money. London: Macmillan.
2. Boole, G. (1854). An Investigation of the Laws of Thought. Cambridge University Press.
3. Van Parijs, P., & Vanderborght, Y. (2017). Basic Income: A Radical Proposal for a Free Society and a Sane Economy. Harvard University Press.
4. Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460.
5. McCulloch, W. S., & Pitts, W. (1943). A Logical Calculus of the Ideas Immanent in Nervous Activity. The Bulletin of Mathematical Biophysics, 5(4), 115–133.
6. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., et al. (2017). Attention is All You Need. Advances in Neural Information Processing Systems, 30, 5998–6008.
7. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., et al. (2014). Generative Adversarial Networks. Advances in Neural Information Processing Systems, 27.
8. Kela. (2020). Results of Finland’s Basic Income Experiment. Retrieved from: https://www.kela.fi
9. GiveDirectly. (2019). Universal Basic Income Experiment in Kenya: Baseline Report. Retrieved from: https://www.givedirectly.org
10. Alaska Permanent Fund Corporation. (2021). Annual Reports and Dividend Distribution. Retrieved from: https://apfc.org
11. OpenAI. (2020). GPT-3: Language Models are Few-Shot Learners. arXiv preprint arXiv:2005.14165.
12. IDC. (2023). Data Age 2025: The Digitization of the World. Retrieved from: https://www.idc.com
13. Stanford University. (2021). The Carbon Footprint of Machine Learning. Retrieved from: https://stanford.edu
14. McKinsey & Company. (2024). The State of AI in 2024. Retrieved from: https://www.mckinsey.com
15. Philippe Van Parijs. (1995). Real Freedom for All: What (If Anything) Can Justify Capitalism? Oxford University Press.
16. IEEE. (2023). Ethical Design for AI Systems: A Framework for Practitioners. Retrieved from: https://www.ieee.org
17. IBM. (2022). IBM Quantum Roadmap: Accelerating to Quantum Advantage. Retrieved from: https://www.ibm.com
18. Zadeh, L. A. (1965). Fuzzy Sets. Information and Control, 8(3), 338–353.
19. Rawls, J. (1971). A Theory of Justice. Harvard University Press.
20. Mill, J. S. (1863). Utilitarianism. Parker, Son, and Bourn.