The close link between ethics, artificial intelligence and the theology of Saint Thomas Aquinas


THE CLOSE LINK BETWEEN ETHICS, ARTIFICIAL INTELLIGENCE AND THE THEOLOGY OF SAINT THOMAS AQUINAS

The machine perfects only what it already finds at work in man: it may refine a true thought, but cannot generate truth; it may polish a well-formed phrase, but cannot infuse the spirit that generated it. And it is precisely here that the parallel with the Thomistic principle becomes evident: «Gratia non tollit naturam, sed perficit (grace does not destroy nature, but perfects it)».

— Theologica —

.

Article in PDF print format

 

.

This article for our Theologica page is based on my latest book Freedom Denied, published by our own press and available for purchase here.

As I set out to address this theme concerning Artificial Intelligence, my mind returned to one of the prophetic masterpieces of modern cinema: 2001: A Space Odyssey, directed by Stanley Kubrick and released in 1968. In that film appears HAL 9000, an extremely advanced artificial intelligence installed aboard the spacecraft Discovery. HAL is perfect in calculation, infallible in data management, yet devoid of what makes human judgement truly human: conscience. When its programming comes into conflict with the objectives of the mission, HAL does not "go mad": it simply applies logic without moral filtering, without intentionality, and without the capacity to discern good from evil. The result is terrifying: a supremely powerful machine becomes a mortal threat precisely because it neither understands man nor the value of life. This intuition, cinematic yet theologically lucid, shows that artificial intelligence raises issues that are not merely technical, but radically moral. What is at stake is not computational power, which no one disputes, but the risk that man may delegate to an impersonal system what belongs exclusively to his conscience. And this is precisely what happens when one allows a platform to decide autonomously what is "good" or "bad", what may be said and what must be silenced: an act that ought to be moral is handed over to the machine. And this is only the first step in the moral delegation to the machine.

Once judgement over truth and falsehood has been ceded to technology, the next step becomes almost inevitable: renouncing educational common sense and personal responsibility as well. This happens, for example, when a parent entirely entrusts to an algorithm the task of filtering what a child may see, without critical supervision: it means delegating educational responsibility to a statistical system. Or again, when one asks Artificial Intelligence whether a phrase is "offensive" or "morally acceptable": it means transferring to the machine a task that requires conscience, not calculation.

What has been outlined so far is not a collection of technical details, but rather the decisive point. Where intention is lacking, the machine can never understand what man is doing when he speaks, admonishes, educates, heals or corrects. And since it cannot access the "why", it reduces everything to the "how": it does not evaluate meaning, it analyses only form. It is here that misunderstanding becomes inevitable and error systematic. This is what happens, for instance, when a priest admonishes a believer or a father corrects a son: the human conscience distinguishes between severity and cruelty, between correction and offence; the algorithm merely registers the harshness of the phrase and flags it as "hostile language". A physician who writes «this risk leads to death» may see his words classified as "violent content", because the machine does not distinguish a diagnosis from a threat. And even a simple biblical verse may be censored as "offensive language", because Artificial Intelligence does not perceive moral purpose, but only the surface of the word. For this reason, any use of Artificial Intelligence that touches speech, judgement, relationship or freedom must be examined in the light of moral theology, not computer engineering.

The distinction is decisive: the machine does not decide, it selects; it does not evaluate, it filters; it does not judge, it classifies. And what it classifies is never good or evil, but only the probable and the improbable, the frequent and the rare, the statistically acceptable and the algorithmically suspicious. Human conscience does the exact opposite: it takes seriously the uniqueness of the act and the freedom of the agent; it weighs intentions, circumstances and consequences; it distinguishes between the rebuke that saves and the offence that wounds, between severity born of love and cruelty born of contempt. The machine sees none of this.
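The mechanism described above, classification by surface form rather than by intention, can be made concrete with a deliberately crude sketch. This is illustrative only: it is not the code of any real platform, and the watch-list of "harsh" terms is a hypothetical stand-in for the far larger statistical models actually in use. Yet even this toy filter exhibits the essential limit under discussion: a doctor's warning and a genuine threat receive the same label, because only the words themselves are inspected.

```python
# Hypothetical watch-list of "harsh" terms (an assumption for illustration;
# real systems use statistical models, but the limit shown here is the same).
HARSH_TERMS = {"death", "kill", "threat"}


def flag_hostile(text: str) -> bool:
    """Flag text that merely *contains* a harsh term.

    The function inspects only the surface of the words; it has no access
    to the speaker's intention, the context, or the moral purpose.
    """
    words = {w.strip(".,!?;:«»\"'").lower() for w in text.split()}
    return bool(words & HARSH_TERMS)


diagnosis = "Untreated, this risk leads to death."  # a physician's warning
menace = "Pay up, or this leads to death."          # an actual threat

# Both sentences receive the same label: the filter sees form, not meaning.
print(flag_hostile(diagnosis))  # True
print(flag_hostile(menace))     # True
```

The point of the sketch is not the crudeness of the keyword match but the structure of the judgement: however the probability of "hostility" is computed, nothing in the computation refers to intention, so the diagnosis and the threat are indistinguishable by construction.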

When a father reproves a son, conscience recognises the love that sustains the reproof; the algorithm sees only a "potentially hostile" phrase. When a spiritual director admonishes one entrusted to him, conscience perceives the mercy that accompanies truth; the algorithm sees a violation of "community standards". When a person speaks in order to correct, protect or educate, conscience grasps the purpose; the machine perceives only the harsh word. The result is paradoxical: where man unites justice and mercy, the machine produces nothing but labels.

Moral ambiguity does not arise from technology: it arises from the man who designs it. For the algorithm is not neutral: it executes a morality it does not know, but which others have decided for it. And we see this every day: if content challenges political correctness, the algorithm interprets it as "hostility"; if it criticises certain excesses of woke culture, it labels it "discrimination"; if it addresses themes of Christian anthropology, for example sexual difference or the family, while directing its criticism at powerful and politicised LGBT lobbies, it flags it as "hate speech" or "incitement to violence". All this not because the machine "thinks" this way, but because it has been programmed to react this way. The algorithm is not born neutral: it is born already educated by those who build it, shaped by ideological criteria that confuse criticism with aggression, reflection with offence, truth with violence. In other words, the algorithm has masters: it reflects their fears, amplifies their convictions, censors what they fear. Platforms do not filter according to objective criteria but according to dominant ideologies: what the world idolises is promoted, what the Gospel recalls is suspected; what pleases is amplified, what admonishes is silenced. The result is a new form of cultural censorship: elegant, polite, digitally sterilised, yet still censorship.

These analyses arise from reflections, studies and observations that I have long been pursuing on the anthropological-cultural level and on the real functioning of digital platforms. It is precisely for this reason that I find it significant to note how, on a different yet complementary level, the Dicastery for the Doctrine of the Faith has recently recalled a decisive principle, moving essentially in the same direction of thought, reaffirming that Artificial Intelligence, while it may «cooperate in the growth of knowledge», can in no way be equated with human intelligence, which possesses a depth and dynamism that no machine-learning system can replicate. That document stresses that Artificial Intelligence does not understand, but processes; does not judge, but calculates; and is intrinsically incapable of grasping the moral dimension of action, since it lacks conscience and interiority (cf. here). It therefore clearly warns that moral discernment cannot be attributed to an algorithmic device: to do so would mean abdicating man's ethical responsibility and handing truth over to a statistical mechanism. The illusion of an artificial moral intelligence is described by the document as a form of naïve technological idolatry, because truth is not the fruit of calculation, but of the encounter between freedom and grace[1].

This magisterial reflection confirms the central point: conscience cannot be programmed. The machine may assist, but not judge; it may help, but not interpret; it may filter, but not discern. What belongs to human freedom, and thus to man's relationship with God, cannot be delegated to any technology.

The ethics of artificial intelligence thus reveals its fragility: a machine may be programmed to recognise words, but it cannot understand the Word. It can identify commands, not commandments. It can register behaviours, not distinguish between virtue and vice. It can detect correlations, not grasp divine revelation. And above all: it cannot know God. A culture that becomes accustomed to replacing the judgement of conscience with algorithmic screening ends up forgetting that freedom is a spiritual act, not a digital output[2]. It is here that moral theology becomes decisive, for it reminds man that truth is always personal; good is always intentional; conscience is always irreducible; and moral judgement cannot be delegated to anyone, least of all to software.

This does not mean demonising technology, but restoring it to its proper place: that of a tool, not a judge. Artificial Intelligence may certainly make human work more efficient, but it cannot replace man at the decisive point: moral judgement, the only realm in which it is not enough to know "how things are", for one must decide "why to do them". This is the realm of conscience, where man weighs intentions, assumes responsibility and answers for his actions before God. Here the machine does not enter, cannot enter: it calculates, but does not choose; it analyses, but does not answer; it simulates, but does not love. Like an excellent plastic surgeon, Artificial Intelligence may enhance what is already beautiful, but it cannot make beautiful what is not; it may correct disproportions and soften certain marks of time, but it cannot create beauty from nothing, nor restore youth once it has faded. It may enhance a lined face, but it cannot invent a new one. In the same way, Artificial Intelligence may help organise data, clarify a text, or put complex arguments in order; but it cannot give intelligence to a limited and mediocre subject, nor conscience to one who lacks it.

The image, perhaps somewhat crude but effective, is that of the thoroughbred horse and the pony: technology may train, care for and bring out the best in the Arabian stallion, but it will never turn a poor pony into a thoroughbred. What is not there, no algorithm will ever create. The machine perfects only what it already finds at work in man: it may refine a true thought, but cannot generate truth; it may polish a successful phrase, but cannot reach the conscience from which that phrase arose.

The machine perfects only what it already finds at work in man: it may refine a true thought, but cannot generate truth; it may polish a well-formed phrase, but cannot infuse the spirit that generated it. And it is precisely here that the parallel with the Thomistic principle becomes evident:

«Gratia non tollit naturam, sed perficit (grace does not destroy nature, but perfects it)»[3].

At this point it becomes inevitable to turn our gaze to the most delicate ground: if the machine can perfect only what it finds, then the true question does not concern the algorithm, but the man who hands himself over to it. And it is here that the Thomistic analogy displays its full force: just as grace does not act upon a void, so technology does not work upon the absence of conscience. And when man ceases to exercise his moral interiority, it is not the machine that gains power: it is man himself who loses stature. From this point arises the decisive problem, not technical but spiritual, that we must now confront. If we understand that moral delegation to the machine is not a technical accident but an anthropological error, the question arises as a logical consequence: what does man lose when he abdicates his conscience? He does not lose merely a skill, but a spiritual dimension, the one in which the meaning of good and evil is decided. Technology may be powerful, sophisticated and extremely rapid, but it cannot become a moral subject.

Christian tradition has always taught that the exercise of sound judgement is an art born of grace and freedom: a balance between prudence, truth and charity. The algorithm knows none of these three. It is not prudent, because it does not evaluate; it is not true, because it does not know; it is not charitable, because it does not love. For this reason, using Artificial Intelligence as a tool is possible; using it as a criterion is inhuman. To think that it can create in place of a man incapable of articulating a thought or producing intellectual work is, at the very least, illusory. Technology may assist man, never judge him; it may help the word, never replace it; it may serve the mission, never determine its boundaries.

A civilisation that delegates to the machine what belongs to conscience loses its spiritual identity: it becomes a society that knows much, but understands little; that speaks incessantly, but rarely listens; that judges everything, but no longer judges itself.

Catholic morality reminds us that the criterion of good is not what the world accepts, but what God teaches. And God does not speak to algorithms: He speaks to hearts. The Logos became flesh, not code; He became man, not programme; He became relationship, not mechanism. For this reason no artificial intelligence, however advanced, can ever become the ultimate criterion of what is true, just, good and human. Because good is not calculated: it is recognised.

From the Island of Patmos, 7 February 2026

.

NOTES

[1] Cf. Dicastery for the Doctrine of the Faith, Antiqua et nova. Note on the Relationship Between Artificial Intelligence and Human Intelligence (28 January 2025), on the correct integration between human capacity and technological tools in the formation of moral judgement.

[2] Author's note: output means the final result; it is a technical computing term referring to the data a computer emits as the result of processing, in contrast to input, which denotes the incoming data.

[3] Thomas Aquinas, Summa Theologiae, I, q. 1, a. 8, ad 2, in Opera omnia, Leonine Edition.

.


.

THE CLOSE LINK BETWEEN ETHICS, ARTIFICIAL INTELLIGENCE AND THE THEOLOGY OF SAINT THOMAS AQUINAS

The machine perfects only what it already finds in action in man.: can hone a true thought, but not generate the truth; can clean up a successful sentence, but not instill the spirit that has generated it. And it is precisely here where the parallelism with the Thomistic principle becomes evident.: «Grace does not destroy nature, but finish (grace does not destroy nature, but perfects it)».

- Theological -

.

This article for our page Theologica It is taken from my latest book Freedom denied (Freedom denied) published by our editions and available for sale here.

When I am ready to discuss this topic related to Artificial Intelligence, one of the most prophetic works of modern cinema came to mind: 2001: space odyssey, directed by Stanley Kubrick and released in 1968. HAL appears in that movie 9000, a very high level artificial intelligence, installed aboard the Discovery spacecraft. HAL is perfect in calculation, foolproof in data management, but it lacks that which makes judgment truly human: the conscience. When your schedule conflicts with mission objectives, HAL does not “go crazy”: simply apply logic without the moral filter, without intentionality and without the ability to discern good from evil. The result is shocking: a very powerful machine becomes a mortal threat precisely because it does not understand man or the value of life. This cinematic intuition, but theologically very clear — shows that artificial intelligence raises problems that are not merely technical, but radically moral. What is at stake is not the computing power - which no one disputes - but the risk that man delegates to an impersonal system what belongs exclusively to his conscience.. And this is precisely what happens when a platform is allowed to autonomously decide what is “good” or “bad.”, what can be said and what should be silenced: an act that should be moral is handed over to the machine. And this is only the first step of moral delegation to the machine.

Once surrendered to technology the judgment about what is true and what is false, the next step becomes almost inevitable: also renounce educational common sense and personal responsibility. Occurs, For example, when a parent completely entrusts an algorithm with the task of filtering what a child can see, without critical oversight: means delegating educational responsibility to a statistical system. Or when Artificial Intelligence is asked if a phrase is “offensive” or “morally acceptable”: means transferring a task that requires consciousness to the machine, not calculation.

What has been explained so far does not constitute a set of technical details, but the decisive point. If the intention is missing, the machine can never understand what the man is doing when he speaks, reprimands, educa, cure or correct. And since you cannot access the “why”, reduce everything to the “how”: does not evaluate the meaning, analyze only the shape. It is here that misunderstanding becomes inevitable and systematic error. It's what happens, For example, when a priest admonishes a believer or a father corrects a son: human conscience distinguishes between severity and cruelty, between correction and offense; The algorithm only records the harshness of the phrase and marks it as “hostile language.”. The doctor who writes "this risk leads to death" may see his words classified as "violent content", because the machine does not distinguish a diagnosis from a threat. Even a simple Bible verse can be censored as “offensive language.”, because Artificial Intelligence does not perceive the moral purpose, but only the surface of the word. For this reason, any use of Artificial Intelligence that affects the word, to the trial, to relationship or freedom must be examined in the light of moral theology, not computer engineering.

The distinction is decisive: the machine does not decide, it selects; it does not evaluate, it filters; it does not judge, it classifies. And what it classifies is never good or evil, but only the probable and the improbable, the frequent and the rare, the statistically acceptable and the algorithmically suspicious. Human conscience does exactly the opposite: it takes seriously the uniqueness of the act and the freedom of the agent; it weighs intentions, circumstances and consequences; it distinguishes between the rebuke that saves and the offence that wounds, between severity out of love and cruelty out of contempt. The machine sees none of this.

When a father rebukes a son, conscience recognises the love that sustains the rebuke; the algorithm sees only a "potentially hostile" phrase. When a spiritual director admonishes those entrusted to him, conscience perceives the mercy that accompanies the truth; the algorithm sees a violation of "community standards". When a person speaks in order to correct, protect or educate, conscience grasps the purpose; the machine perceives only the hard word. The result is paradoxical: where man unites justice and mercy, the machine produces only labels.

Moral ambiguity is not born of technology: it is born of the man who designs it. The algorithm is not neutral: it executes a morality that it does not know, but that others have decided for it. We see this every day: if content questions political correctness, the algorithm interprets it as "hostility"; if it criticises certain woke cultural drifts, it labels it "discrimination"; if it addresses themes of Christian anthropology, for example sexual difference or the family, while criticising powerful and politicised LGBT lobbies, it describes it as "incitement to hatred" or "incitement to violence". All of this happens not because the machine "thinks" that way, but because it has been programmed to react that way. The algorithm is not born neutral: it is born already educated by those who build it, shaped by ideological criteria that confuse criticism with aggression, reflection with offence, truth with violence. In other words, the algorithm has masters: it reflects their fears, amplifies their convictions, censors what they fear. The platforms do not filter according to objective criteria but according to dominant ideologies: what the world idolises is promoted, what the Gospel recalls is suspect; what pleases is amplified, what admonishes is silenced. The result is a new form of cultural censorship: elegant, polite, digitally sterilised, but censorship nonetheless.
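The claim that the algorithm is "born already educated by those who build it" can likewise be illustrated with a toy example. The training sentences and labels below are invented and deliberately biased: the point is only that a classifier learned from labelled examples reproduces the criteria of its labellers, not a neutral standard.

```python
# Illustrative sketch: a toy classifier "educated" by whoever labels
# its training data. The labels below are hypothetical and deliberately
# biased: every sentence mentioning criticism of a lobby is marked
# hostile. The "model" merely counts which words co-occur with each
# label, and so reproduces the bias of its teachers.

from collections import Counter

training = [
    ("we disagree with this lobby", "hostile"),      # biased label
    ("this lobby should be criticised", "hostile"),  # biased label
    ("what a lovely day", "acceptable"),
    ("the weather is mild", "acceptable"),
]

# "Training": count how often each word appears under each label.
counts = {"hostile": Counter(), "acceptable": Counter()}
for text, label in training:
    counts[label].update(text.split())

def classify(text: str) -> str:
    """Pick the label whose training vocabulary overlaps the text most."""
    scores = {
        label: sum(c[w] for w in text.split())
        for label, c in counts.items()
    }
    return max(scores, key=scores.get)

# A calm, critical sentence inherits the "hostile" label, because the
# labellers, not the mathematics, decided that criticism is hostility.
print(classify("this lobby deserves scrutiny"))  # → hostile
```

The mathematics here is entirely impartial; the verdict is not, because the verdict was written into the training labels before a single calculation was performed.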

These reflections of mine arise from studies, analyses and observations that I have been deepening for some time, both at the anthropological-cultural level and with regard to the real functioning of digital platforms. Precisely for this reason I consider it significant to point out that, on a different but complementary level, the Dicastery for the Doctrine of the Faith has recently recalled a decisive principle, moving substantially in the same direction of thought: it reaffirms that Artificial Intelligence, even while able to "cooperate in the growth of knowledge", cannot in any way be compared to human intelligence, which has a depth and a dynamism that no machine-learning system can replicate. The document stresses that Artificial Intelligence does not understand but processes; does not judge but calculates; and, lacking consciousness and interiority, is intrinsically incapable of grasping the moral dimension of action (cf. here). It therefore warns clearly that moral discernment cannot be attributed to an algorithmic device: to do so would mean abdicating man's ethical responsibility and handing the truth over to a statistical mechanism. The illusion of an artificial moral intelligence is described in the document as a form of naive technological idolatry, because truth is not the result of calculation but of the encounter between freedom and grace [1].

This magisterial reflection confirms the central point: conscience cannot be programmed. The machine can assist, but not judge; it can help, but not interpret; it can filter, but not discern. What belongs to the freedom of man, and therefore to his relationship with God, cannot be delegated to any technology.

The ethics of artificial intelligence thus reveals its fragility: a machine can be programmed to recognise words, but it cannot understand the Word. It can identify orders, not commandments. It can catalogue behaviours, but not distinguish between virtue and vice. It can detect correlations, but not receive divine revelation. And above all: it cannot know God. A culture that grows accustomed to replacing the judgement of conscience with the screening of an algorithm ends up forgetting that freedom is a spiritual act, not a digital output [2]. This is where moral theology becomes decisive, because it reminds man that truth is always personal; that good is always intentional; that conscience is always irreducible; that moral judgement cannot be delegated to anyone, least of all to a piece of software.

This does not mean demonising technology, but restoring it to its proper place: that of an instrument, not of a judge. Artificial Intelligence can certainly make human work more agile, but it cannot replace it at the decisive point: moral judgement, the only domain in which it is not enough to know "how things are", for one must also decide "why to do them". It is the place of conscience, where man weighs intentions, assumes responsibilities and answers for his actions before God. The machine has no place here, and cannot enter: it calculates, but does not choose; it analyses, but does not answer; it simulates, but does not love. Like an excellent plastic surgeon, Artificial Intelligence can enhance what is already beautiful, but it cannot make beautiful what is not; it can correct disproportions and soften certain marks of time, but it cannot create from nothing a beauty that does not exist, nor restore a youth that has already withered. It can enhance a marked face, but it cannot invent a new face. In the same way, Artificial Intelligence can help to organise data, clarify a text, order complex arguments; but it cannot give intelligence to a limited and mediocre subject, nor conscience to one who lacks it.

The image, perhaps a little crude but effective, is that of the racehorse and the pony: technology can train, care for and bring the Arabian stallion to peak performance, but it will never transform a poor pony into a thoroughbred. What does not exist, no algorithm can ever create. The machine perfects only what it already finds at work in man: it can hone a true thought, but not generate truth; it can polish a successful sentence, but not reach the conscience from which that sentence arose.

The machine perfects only what it already finds at work in man: it can hone a true thought, but not generate truth; it can polish a successful sentence, but not instil the spirit that generated it. And it is precisely here that the parallel with the Thomistic principle becomes evident:

«Gratia non tollit naturam, sed perficit (grace does not destroy nature, but perfects it)» [3].

At this point it becomes inevitable to look at the most delicate terrain: if the machine can perfect only what it finds, then the real question does not concern the algorithm but the man who surrenders himself to it. And it is here that the Thomistic analogy displays its full force: just as grace does not act upon a void, so technology does not work upon an absence of conscience. And when man ceases to exercise his moral interiority, it is not the machine that gains power: it is man himself who loses stature. From here arises the decisive problem, not a technical one but a spiritual one, which we must now face. Once we understand that moral delegation to the machine is not a technical accident but an anthropological error, the question arises by logical consequence: what does a man lose when he abdicates his conscience? He loses not merely a skill, but a spiritual dimension, the one in which the meaning of good and evil is decided. Technology can be powerful, sophisticated, extremely fast, but it cannot become a moral subject.

The Christian tradition has always taught that the exercise of good judgement is an art born of grace and freedom: a balance of prudence, truth and charity. The algorithm knows none of these three. It is not prudent, because it does not weigh; it is not true, because it does not know; it is not charitable, because it does not love. For this reason, using Artificial Intelligence as an instrument is possible; using it as a criterion is inhuman. To think that it can create in the place of a man incapable of articulating a thought or producing intellectual work is, at the very least, illusory. Technology can assist man, never judge him; it can serve the word, never replace it; it can help the mission, never determine its boundaries.

A civilisation that delegates to the machine what belongs to conscience loses its spiritual identity: it becomes a society that knows much but understands little; that talks continually but rarely listens; that judges everything but no longer judges itself.

Catholic morality reminds us that the criterion of good is not what the world accepts, but what God teaches. And God does not speak to algorithms: he speaks to hearts. The Logos became flesh, not code; he became man, not program; he became relationship, not mechanism. That is why no artificial intelligence, however advanced, can ever become the ultimate criterion of what is true, just, good and human. Because good is not calculated: it is recognised.

From the Island of Patmos, 7 February 2026

.

NOTES

[1] Cf. Dicastery for the Doctrine of the Faith, Antiqua et nova. Note on the Relationship Between Artificial Intelligence and Human Intelligence (28 January 2025), on the correct integration of human capacities and technological instruments in the formation of moral judgement.

[2] Author's note: output is a technical computing term denoting the final result, the set of data that a computer emits at the end of a processing operation, as opposed to input, the data fed in.

[3] Thomas Aquinas, Summa Theologiae, I, q. 1, a. 8, ad 2, in Sancti Thomae de Aquino Opera Omnia, Leonine edition.

.



The Fathers of the Island of Patmos