The Dilemma of AI vs. the Human Touch: Toward a Humanization of Education
James Michael Brant, President, World Institute for Social Education Development

Questions to Ponder
As soon as modern technology brought the potential of AI into education, a set of moral, ethical, and philosophically transcendental questions confronted the education community in every country of the world with the ability to use it:
1. How important are humans in teaching humans?
2. Does human life have value, meaning, purpose, and abilities above other organisms or consciousnesses, including, if it is real, that of a machine?
3. What, therefore, is consciousness? Is it simply an interaction of electronic impulses, codes, and chemical processes?
4. Can our consciousness, including our personality and emotions, be uploaded and downloaded to a hard drive or cloud, and therefore put into another machine, a robot, not prone to sickness or disease, and thereby, a new “us”? Or is there something special about a human that is not to be tampered with?
5. What is reality?
6. What is the purpose of life? Is there any purpose or plan beyond what we ourselves attempt to make? Can such matters be decided for us by machines or those who program them?
7. With all these questions begging answers, what is the purpose of education, and what should its goals and priorities be? Are these domains something that others, including machines, have a right to decide for us?
8. Finally, how much of the human touch do students need, if any?
A Modern Pandora’s Box
Each of these questions is a very broad research question, yet eminently valid in its importance. The prospects of AI, and the accompanying technologies of 5G and beyond, quantum computation, robotics, biotechnology, and nanotechnology, have opened a veritable "Pandora's Box" of questions and potentialities for human development or, at the other extreme, dehumanization (Selin, 2008). As dystopian as that may sound, it has become a very real possibility. The key theoretical lenses through which this article examines these matters are the Intentional Consciousness Theory of Bernard Lonergan (1957) and Pierre Teilhard de Chardin's Theory of Complexity and his philosophy in The Phenomenon of Man (1938).

The Human Spirit
Dr. Steven Umbrello writes about "The Human Spirit in the Age of AI" (2024), differentiating between the two. He argues that AI technology is neither human nor conscious, and that it lacks intentionality. AI systems are simply artifacts created by humans, built on algorithms, statistics, and data sets. A system trains on this input data to recognize patterns, make predictions, and generate responses; the more data it is fed, the more accurate its predictions become. This statistical approach lacks the depth innate to human cognition, which involves awareness, intentionality, and understanding of context. Humans, like AI, can make decisions based on probabilities, but beyond that, human intelligence is nuanced by ethical considerations and conscious thought.
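For readers who want a concrete picture of the statistical learning described above, the short Python sketch below is an illustration added for this discussion, not something drawn from Umbrello's article; the data set, model, and numbers are assumptions chosen purely for demonstration. A simple classifier becomes more accurate as it is fed more examples, yet at no point does it possess awareness or intentionality.

```python
# A minimal sketch (illustrative only, not from Umbrello's article):
# a statistical classifier is only as good as the data it is fed.
# Accuracy climbs as the training set grows, but the model never
# "understands" anything; it only fits patterns in the numbers.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, flip_y=0.05,
                           random_state=0)
X_train, y_train = X[:4000], y[:4000]   # data available for training
X_test, y_test = X[4000:], y[4000:]     # held-out evaluation data

for n in (50, 500, 4000):               # progressively more training data
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:4d} examples -> accuracy {acc:.2f}")
```

The point is not the particular library or numbers, which are stand-ins, but the qualitative gap they illustrate: better prediction is not the same thing as attentiveness, reasonableness, or responsibility in Lonergan's sense.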
Umbrello, in a subsequent article, "Navigating AI with Lonergan's Transcendental Precepts," lists the precepts: be attentive, be intelligent, be reasonable, be responsible, and be in love (Lonergan, 1957, as cited in Umbrello, 2024), offering them "as a map for navigating our interactions with AI systems." These emerge from Lonergan's Theory of Intentional Consciousness (Lonergan, 1957, as cited in Umbrello, 2024). Umbrello draws a qualitative line of difference between human cognition and AI's computational processes: AI, as a human creation, can simulate:
1. Being intelligent, related to experiencing, through data analysis.
2. Being reasonable, related to understanding, through pattern recognition.
3. Being responsible, related to deciding, through executing programmed tasks.
AI lacks:
1. Self-awareness, the consciousness of existing as a being, related to being attentive.
2. Moral judgement, the seeking of the good, related to being or acting in love.
More than Machines
Keeping these precepts in mind can help us discern the appropriate application of AI: a tool designed by humans, and therefore subject to flaws and biases, yet able to simulate some human functions efficiently, which can greatly help us. Its purpose is to serve humanity, not to rule over humanity; according to this perspective, it is not superior to humans. AI designers would bear great accountability for embedding this ethic, the value and dignity of human life, within AI's programming. Umbrello (2024) reiterates that humans, unlike AI, are more than sophisticated machines in that they have a spiritual nature. AI has a mechanical nature, so the contrast is not one of degree but of kind. AI works within the bounds of programmed algorithms, while humans experience transcendence, which leads them to seek purpose beyond the material world.
Pierre Teilhard de Chardin (1938), in his Theory of Complexity and in The Phenomenon of Man, sees man as a spiritual being encapsulated in a physical human body. He says, "We are not human beings having a spiritual experience; we are spiritual beings having a human experience" (Teilhard de Chardin, 1938, as cited in Umbrello, 2024). Both he and Bernard Lonergan were Jesuit priests, holding that man was created in the image of God, the Creator, and is therefore capable of a relationship with Him and with others, that being his greatest purpose. At the same time, Teilhard de Chardin was a geologist, paleontologist, proponent of evolution, and philosopher, while Lonergan was a philosopher and theologian, considered one of the great thinkers of the 20th century.
Modern Thought, Ancient Thought, or Both?
There can be a tendency to emphasize recent theories over older ones, and there will always be newer theories replacing those before them as time goes on. Yet many believe that true wisdom is timeless and transcendental, speaking to us across generations, technological advances, and even millennia. If I were to try to attribute that concept to particular philosophers, the list would be long, so I will choose one man known to be extremely wise, King Solomon (circa 971-931 BC). He personified Wisdom as a motherly woman at the gates of the city, calling out to passers-by to listen to her and explaining that she was there before the creation of all things. She pleads with them to value her instruction above gold, that they might prosper and not destroy themselves by ignoring her (Proverbs 8).
Knowledge and Intelligence Compared to Wisdom
According to the Merriam-Webster Dictionary (2025), knowledge is the "condition of knowing something through familiarity gained by experience," whether through education or directly; intelligence is the ability to learn or understand; wisdom, by contrast, relates to a person's good judgement, insight, and ability to process and apply knowledge. Clayton (1982), in the International Journal of Aging and Human Development, defines intelligence as the capacity to "think logically" and to conceptualize something abstract from reality. Its function deals with "how to do" and accomplish, whereas wisdom provokes a person to "consider the consequences" of an action for oneself and its effects on others, questioning whether a course of action is correct. Jeste and Lee (2019), in the Harvard Review of Psychiatry, define wisdom as a "complex human trait" including components of "social decision making, emotion regulation, prosocial behaviors, self-reflection, acceptance of uncertainty, decisiveness, and spirituality." They recommend greater emphasis on promoting wisdom throughout education systems at every level.
Some educators and administrators prefer to brush these deeper aspects aside and simply continue down the current or traditional path of our education systems, essentially "business as usual" but with new tools, in this case AI. Others see the questions listed at the beginning of this article as warning lights, critical points of decision, and forks in the road of humanity's future (Zgrzebnicki, 2017), which must be carefully and thoughtfully considered, not by a limited group or groups of decision makers, but rather by a broad spectrum of society, and particularly by parents, who ultimately have the responsibility for their children (Umbrello, 2024).
Parents’ Perspectives
Some parents have even expressed fears of their children becoming some sort of property of the state, simply another natural resource to be developed, programmed, and utilized, with little room for individuality, creativity, or personal mindsets (Liu, 2019). This reaction was voiced over the experimental use of brain-wave-scanning headbands placed on students during class, which alerted the teacher when a student was not concentrating on the presented material (Li, 2019). The occasion that caused such a furor among netizens was a trial run in Chinese classrooms by the company BrainCo, incubated at Harvard University. Each student's level of engagement was signaled by different-colored lights glowing on the headband, as shown in a video report by the Wall Street Journal (Tai, 2019). According to this report, the surveillance had become some students' "worst nightmare".
Limitations and Problems
Bruce Wexler (2019), professor emeritus and senior research scientist of psychiatry at Yale University, contributed an article to YaleGlobal Online commenting on this video report and on the above-mentioned experiment. Ironically, at that time, in 2019, I was presenting some of what I have written here during a conference on AI at Beijing Normal University, home to an advanced laboratory on neuroscience pedagogy. Wexler outlines four main flaws in the experiment and in the interpretation of its findings:
1. Accuracy of the collected data, due to limited scalp contact of the electrodes: the headband touched only three areas of the skull and could be in constant movement on a wiggly child's head. In a large-scale research study, averaging could mitigate erroneous interpretation, but using such readings to draw conclusions about each individual child could easily lead to ill-founded decisions (a simple simulation of this point is sketched after this list).
2. Extreme limitation on the amount of data collected: clinical research typically records from 64 to 256 scalp locations, compared with only three in this classroom context. A recording from one or a few locations identifies only general sorts of activity from that area of the brain, and signals arising from different parts of the brain can look the same while reflecting different functions. Wexler (2019) gives the analogy of not being able to "differentiate the words meat, meet, meal, beat, beet and so on."
3. Attention states can produce different electrical readings when the same student studies different types of material, and these readings can also vary from child to child: a brain paying attention may look different, electrically, from one child to the next. There is no scientific consensus on this.
4. The students, knowing they are being surveilled, can easily have negative reactions that show up in the electrical readings, giving a false or distorted indication of whether the child is truly paying attention.
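To make the first point concrete, here is a small, purely illustrative simulation, an addition to this article rather than anything from Wexler's analysis; the attention level, noise level, and threshold are hypothetical values chosen only to show the effect of averaging.

```python
# Illustrative simulation (hypothetical numbers, not from Wexler, 2019):
# noisy single readings vs. averaged readings of the same "attentive" child.
import numpy as np

rng = np.random.default_rng(42)
true_attention = 0.7   # assumed true engagement of an attentive child
noise_sd = 0.3         # assumed noise from poor electrode contact
threshold = 0.5        # readings below this get flagged as "not attending"

# One noisy reading at a time, as a classroom headband would provide.
single = true_attention + rng.normal(0.0, noise_sd, size=10_000)
print(f"single readings wrongly flag the child "
      f"{np.mean(single < threshold):.0%} of the time")

# Averaging 100 readings, as a large-scale study might, washes out the noise.
averaged = (true_attention
            + rng.normal(0.0, noise_sd, size=(10_000, 100))).mean(axis=1)
print(f"100-reading averages wrongly flag the child "
      f"{np.mean(averaged < threshold):.0%} of the time")
```

Under these assumed numbers, roughly a quarter of the individual readings would misclassify an attentive child, while the averaged readings essentially never do, which is why conclusions drawn about an individual child from single consumer-grade readings are so fragile.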
Social Pressure
To exacerbate the situation, analysis reports including graphs were sent out constantly throughout the day to each student's parents, but every parent could also see the readings of all the other students, and every student could see the other students' reports as well. This proved extremely embarrassing and put many students under great pressure. Some were punished at home over their brainwave reports, as their parents felt social pressure from other parents.

Space to Think
The question also arises as to how much intervention the teacher should give, and in what form, so as not to continually interrupt a student's thinking process or call undue attention to them in the classroom. Beyond this, the concept of paying attention itself comes into question, as some students' brains could appear to be daydreaming when they are actually comparing new information to lived or learned experience, a function of deeper, higher-order thinking. As pointed out by Wexler (2019), early labels on children can be not only flawed but detrimental, and some brilliant minds, including Albert Einstein, Thomas Edison, and Jack Ma, admitted to not always paying good attention in class. Marinosyan (2019), at the Academy of Pedagogy of Russia, makes an argument for the importance of giving space for "irrational thinking," also known as "daydreaming," so that students can develop their whole personalities. He explains that if this feature of being human is demeaned or ruled out, an essential feature of human intelligence, the capacity to contemplate the irrational, is stifled, even extinguished.
AI: Takeover or Makeover?
Although an AI takeover is purported by some AI pundits and journalists to be inevitable, that claim is arguably a deterministic attempt to make it so (Umbrello, 2024), since AI is the sum product of what it is fed, which implies that there are feeders and creators behind it. If those creators have, consciously or unconsciously, a bias toward AI controlling the world, or toward themselves controlling the world through AI, this tendency could logically be extrapolated and thereby permeate the AI's aggregate "deliberations" and responses. The origin of this intentionality would still be human, not machine, yet the idea that "It was the AI's fault! AI did it!" could be a convenient and convincing mechanism for masking accountability, sadly, a common characteristic of very human intelligence, as old as the story of Adam and Eve (Moses, circa 1400 B.C.).
Conclusion
It seems most educators concur that AI and other computer technology can be useful but can also lead to skewed judgement (Wexler, 2019) when it comes to understanding a human being, and therefore, at this point in time, it is a necessary safeguard to use hybrid programs. Beyond the safeguard aspect, many, if not most, consider that human education, as the development of the whole child or young person, is not just the transmission of knowledge and facts, but also the formation of the student's character during the process (Marinosyan, 2019). On this point, it seems that humans still need humans, because humans have inherent, foundational, and irreplicable qualities which, by nature, AI can never have (Lonergan, 1957). Discourse on this topic is critical because decisions being made at all levels can be irreversible. As to the originality of this essay, there is nothing completely new under the sun; however, the review of timeless human values in the setting of this era is, of itself, original, because we have never been here before, and we must see whether those values indeed stand the test of time, particularly this time.
References
Clayton, V. (1982). Wisdom and intelligence: The nature and function of knowledge in the later years. International Journal of Aging & Human Development, 15(4), 315–321. https://doi.org/10.2190/17tq-bw3y-p8j4-tg40
Jeste, D. V., & Lee, E. E. (2019). The emerging empirical science of wisdom: Definition, measurement, neurobiology, longevity, and interventions. Harvard Review of Psychiatry, 27(3), 127–140. https://doi.org/10.1097/HRP.0000000000000205
Li, L. (2019, December 3). BrainCo's FOCUS headband: Brain scan or brain scam? Harvard University Digital Innovation and Transformation, MBA Student Perspectives. https://d3.harvard.edu/platform-digit/submission/braincos-focus-headband-brain-scan-or-brain-scam/ (retrieved 2019).
Liu, L. (2019, November 3). China: A headband for your thoughts. EETimes, Design Lines, Internet of Things. https://www.eetimes.com/china-a-headband-for-your-thoughts/ (retrieved 2019).
Lonergan, B. (1957). Insight: A study of human understanding. Longmans, Green and Co.
Marinosyan, T. E. (2019). The significance of irrational aspect for the formation of relations in the "Teacher – Student – Teacher" system. Russian Journal of Philosophical Sciences (Filosofskie nauki), 62(2), 58–76. https://doi.org/10.30727/0235-1188-2019-62-2-58-76
Merriam-Webster Online Dictionary. (2025). https://www.merriam-webster.com/ (retrieved 2025).
Moses (circa 1400 B.C.). Genesis 3, Holy Bible.
Selin, C. (2008, November 24). The sociology of the future: Tracing stories of technology and time.
Solomon, King (circa 971–931 B.C.). Proverbs 8, Holy Bible.
Tai, C. (2019, October 1). How China is using artificial intelligence in classrooms [Video report]. Wall Street Journal.
Umbrello, S. (2024). Beyond computation: The human spirit in the age of AI. Word on Fire. https://www.wordonfire.org/articles/beyond-computation-the-human-spirit-in-the-age-of-ai/
Umbrello, S. (2024, April). Navigating AI with Lonergan's transcendental precepts. Word on Fire. https://www.wordonfire.org/articles/navigating-AI-with-Lonargen’s-transcendental-precepts/
Wexler, B. E. (2019, December 3). Mind control in China's classrooms. YaleGlobal Online.