Thursday, July 27, 2023

Philosophy and Its Inverse, by Olavo de Carvalho

What is thinking? What connects Kant to the UN decisions in favor of a global government? Why does the cult of science “begin in ignorance of what reason is and culminate in the explicit appeal to the authority of the irrational”? These and other questions are answered by Olavo de Carvalho in this book, which brings together texts he produced in recent years.

But should we read Olavo de Carvalho? There are two possible answers: that of his detractors, always negative; and that of those who refuse to accept the indoctrination of the postmodern Weltanschauung, which, gathering supporters among liberals and leftists, rests on a corrupting tripod: relativism, hedonism, and atheism. Olavo knows that effective cultural resistance requires those who wish to remain lucid to possess a consistent body of theory, capable of giving persuasive responses to the world of falsehoods in which contemporary man lives and of pleading in defense of truth, the value most vilified today. Thus, in the face of ideologues whose goal is to convince us that principles and values are obstacles to freedom, Olavo denounces the dictatorship of relativism – the weapon left to the left after the failure of the dictatorship of the proletariat. And he does so in his characteristic style, which allows him, as he himself says, “to move freely between academic discourse and the voice of the heart”, driven by his “almost obsessive objective: the pursuit of the Supreme Good”.

Nothing is small in this book. A response to certain polemicists becomes the occasion for a lesson on Gothic architecture or for repositioning logic as an accessory element of philosophical production. Olavo dismantles Martial Guéroult, pays tribute to the unforgettable figure of Stanislavs Ladusãns, and rebuts Peter Singer, Richard Dawkins, and other pseudo-luminaries. And he does so following the method he proposes to his students: to stand in wonder before the reality of experience. But not only that.
Olavo de Carvalho reminds us that not forgetting our mortal condition is the starting point of metaphysical investigation. Here, he goes beyond philosophy – and resembles the masters of monastic spirituality, who recommend reflection on one’s own death to heal one of the most harmful diseases of the soul: acedia.

Dedication

I dedicate this book to all the students of the Philosophy Seminar.

Credits

Editor: Silvio Grimaldo de Camargo

Revision: Ronald Robson

Typesetting: Arno Alcântara Júnior

eBook Development: Loope Editora

Acknowledgments

I sincerely thank Sílvio Grimaldo, César Kyn, Lhuba Saucedo, Isabela and Alessandro Cota, Luciane Amato, my wife Roxane, and my daughter Leilah Maria, as well as everyone else who helped me preserve and edit these writings.

Prologue

One of the most significant moments in the history of philosophy is the one in which Socrates, about to meet Gorgias, is consulted by Chaerephon about the question he wants to ask the renowned sophist. Socrates responds, "Ask him who he is (hostis estin)"1. Commenting on this passage, Eric Voegelin observes: “This is, for all times, the decisive question, cutting through the net of opinions, social ideas, and ideologies. It is the question that appeals to the nobility of the soul, and it is the only question that the ignoble intellectual cannot face head-on.”

For even greater clarity, the author of Order and History emphasizes: "What is at stake here is the substance of man, not a philosophical problem in the modern sense."2

Carefully avoiding “philosophical problems in the modern sense” seems to be, therefore, a sine qua non condition for the exercise of philosophy in the Socratic-Platonic sense. To avoid them, or at least not touch them without a clear awareness of the difference between philosophy proper and the discussion with the anti-philosopher. The first is the education of the soul in the pursuit of the eternal Good. The second is the removal of obstacles that do not arise from the pursuit itself but from the culture surrounding it, from the political society entirely focused on the attainment of immediate ends of earthly life, where men do not speak with the voice of their hearts but with the voice of social roles that suit them at the moment. The anti-philosopher can, of course, be another person or an aspect of the philosopher’s own soul. In this case, the discussion with him becomes a stage of philosophical learning. The main occupation of the anti-philosopher – internal or external – is to place obstacles in the philosopher’s path to make him give up the pursuit. Removing these obstacles requires some technique, the acquisition of which makes the philosopher more able to survive in a hostile environment, as well as to overcome his own inner hesitations. The technique – rhetoric, dialectic, and logic – includes training in the art of anticipating obstacles in order to avoid surprises in the debate. The apprentice plays the role of the devil’s advocate, arguing against his most beautiful hopes with the tenacity and cunning of a true demon. The problem arises when the subject develops a taste for this exercise, making it an end in itself, independent of the original goals of philosophy. Thus arise the “philosophical problems in the modern sense”. 
They grow until they dominate the entire horizon of the apprentice’s concerns and, over time, end up institutionalizing themselves as a prestigious academic profession, highly professionalized and highly developed from a technical point of view. In the discussions that take place in this environment, the most abstruse questions are examined in their minutest details, with admirable precision. Only one question is considered inappropriate there, so inappropriate that it does not need to be prohibited since no one thinks of asking it aloud. That question is: “Who are you?”

To answer it, the interrogated person would have to strip himself of his professional identity and speak from the living core of his flesh-and-blood person, but that is not compatible with either standardized technical language or the decorum that should prevail in academic institutions. Therefore, Gorgias, or any other “ignoble intellectual,” is there, confident that he will never be put in an embarrassing situation by Socrates' inconvenient curiosity.3

Hence, by keeping my distance from such sophisticated environments – in the etymological sense of the term as well – I feel at ease to move freely between academic discourse and the voice of the heart, without disregarding the former but subjecting it to the demands of the latter, and not the other way around.

In this book, as well as in the volume Conhecimento e Presença that will follow it or in the preceding Dialética Simbólica, readers will find, in a baroque mixture that some may find somewhat obscene, subtly elaborated technical analyses and direct outpourings of a human soul that has never learned to deal impersonally with anyone, and that prefers to appear crude in the eyes of others rather than false in its own eyes.

It so happened that, since adolescence, finding myself alone and without guidance in a confusing and inhospitable world, I quickly understood that, in order not to lose myself completely, I had no other recourse than to come to terms with myself, to find the center of my real self and settle there with the utmost modesty and with the absolute security of one who, sitting on the ground, cannot fall. Around me there was so much confusion, so much deception, so much madness, so much falsehood, so much disorienting pretense, that if I were not sincere with myself, no sense of orientation in life would be possible for me. I chose inner sincerity not for some lofty moral reason, but as a simple matter of psychic survival.

I have never been able nor tried, in the theater of the world, to play any role other than myself. I appear before the human audience without any social or professional adornment, almost naked.

This has brought me many problems. The first is that anyone who talks to me for five minutes already feels intimate with me and begins to give advice on my life. I have no defense against this. I have become accustomed to being treated with that affectionate disrespect that no one escapes when they are everyone’s cousin.

There are, of course, those who, from afar, are intimidated by a certain intellectual superiority they see in me and protect themselves from it with falsely ceremonious airs, poorly disguising the infallible intrusiveness that they ultimately share with the rest of my acquaintances. I cannot say that I detest these people, but if I could, I would hide from them under the sofa.

Nevertheless, their presence is a modest price I pay for the indescribable comfort of never having to police myself, evaluate my performance by the judgment of others, or shape my language according to the expectations of the respectable public.

I often say that, excluding sexual and excretory activities, which would expose me to legal penalties if I displayed them to the eyes of the crowd, there is nothing I do privately that I cannot repeat in public. To the curious who wish to delve into our domestic life, my wife always responds that I am exactly the same at home as everyone sees me in my classes and lectures.

Similarly, I would like my philosophy not to be the editorial and didactic crystallization of a professional identity, but the direct expression of what I see, feel, and think in everyday life. Above all, of what, far from the world, I say and confess before God.

This is not due to any exhibitionist impulse on my part; such an impulse would hardly have allowed me to withhold my thoughts from others, as I did until the age of forty-eight, when my first book was published.

It is so, I say, because I am convinced that only by adjusting one’s inner focus, speaking from the center of oneself and not from an external pseudo-personality adopted for the purposes of profession, neurotic self-compensation, or whatever else, can one truly see life with some realism. The philosopher, I believe, should speak not as a professor from the heights of his lectern, not as a preacher from the heights of his pulpit, not as an orator from the heights of his platform, but as the sincere believer who examines his conscience and confesses what he knows about himself and the world. Without this, even the practice of the phenomenological method, which aims to describe things as they appear, becomes unviable. After all, to whom do they appear? To an abstract and generic consciousness, morally and legally unaccountable? To the social identity of a teacher and intellectual hastily attached to a well-camouflaged and inaccessible ego? That would so divert the focus that even the most meticulous phenomenological description would risk becoming what it would least desire to be: a logical construction, an architecture of hypotheses, a “theory” in the common sense of the term. Clearly, fidelity to the object must be articulated, with the same rigor, with the subject’s coincidence with himself, with the perfect frankness of the sinner before a God whom he knows he cannot deceive. This was the root of what I came to call the confessional method – the judgment of theoretical truth in the tribunal of inner sincerity.

On the other hand, of course, this was not about autobiographical honesty. Philosophy had to be confession, but not in its content, rather in its form, in its cognitive strategy. The objective was not to talk about myself but to speak from within myself, from the depths of my soul, about whatever I saw in it or around it.

Hence, the language to be employed had to be strictly personal, but at the same time, and somewhat paradoxically perhaps, occasional recourse to the technical and impersonal vocabulary of academic philosophy was neither insignificant nor dispensable but strictly necessary at certain moments, even to clearly define the most personal and intimate impressions.

Hence, except occasionally and intermittently, it was impossible for me to present my thoughts in a systematic academic or treatise format.4

Friends and foes demand from me, from time to time, a systematic exposition of a philosophy that I have spread partly through oral and written fragments, while keeping the other part implicit, between the lines, trusting in the hermeneutic or divinatory capacity of those readers who possess it.

The former make this demand because they believe it would be good to explain in a more organized manner a thought in which they glimpse something valuable without being able to fully perceive it. The latter make it to prove that I am incapable of meeting it.

Both are right, but the latter are more so.

I have no talent whatsoever for doing something that I firmly believe should not be done.

Since the beginning of my scholarly adventure, I have been convinced that wisdom – the ultimate and ever-changing ideal of philosophy – consists not in general truths crystallized into repeatable doctrinal formulas, but in the apprehension of the universal meaning of particular, unique, and concrete situations experienced by real human beings.

In the moral sphere, this is exemplarily obvious. The good person is not the one who knows the commandments by heart, but the one who can transform them into sound decisions and actions amidst the confusing demands and contradictory pressures of immediate existence, where they often become unrecognizable or assume a scandalous and paradoxical appearance.

Similarly, in aesthetics, there are no general principles capable of accounting, on their own, for the bewildering variety of unpredictable forms that the experience of beauty can assume, sometimes even under the camouflage of the ugly, the deformed, and the monstrous. Aesthetic sense lies in the ability to grasp the unity of beauty behind these forms, even without being able to condense it into general principles.

Why wouldn’t the same hold true for the higher philosophical disciplines of a purely theoretical nature, such as metaphysics and epistemology?

There is no metaphysical system that, when carefully examined, does not reveal some internal contradiction or a mismatch with experience. There is none whose errors do not, in compensation, provide inspiring suggestions for approaching a myriad of metaphysical problems that arise from real experience. Just as there can be no language that is completely literal and unambiguous, when reading great works of philosophy, there always remains the possibility of symbolically interpreting something that is manifestly wrong in the literal sense, thereby returning to the original perception of an obscure truth that the philosopher failed to convert into an explicit doctrinal conclusion.

There is a great difference between reading philosophers to learn their doctrines as such and reading them in search of truth. A doctrine crystallized in texts is only a historical truth, or more properly philological, not to say editorial. But no philosopher created their doctrines solely for us to know them, but rather for us to seek the truth through them; a truth that they, at best, can only partially apprehend or, in most cases, symbolically insinuate (being no more precise or accurate in this regard than a poem or a play). Yes, the text and the doctrine must be historically conquered and possessed. But that is still not philosophy, it is merely philosophical culture.5

Sometimes, a theory that is inherently unacceptable remains valid as a critique of another theory. When Hume denies the existence of the “self,” he is merely led to an absurd conclusion by the automatism of his own reasoning. But who can deny that, in doing so, he dismantled the deductive machinery of Cartesianism, showing that Descartes, while proving the existence of thought, erred in thinking that he had also proven the existence of a “thinking substance”? In fact, if the cogito is an instantaneous experience without duration, it is impossible to deduce from it the permanence of the self between the moment it experiences it and the moment it narrates it.6 By demonstrating the non-existence of the Cartesian “self,” Hume imagined he was denying the existence of any and all “selves” – an undue amplification like the one he criticized in Descartes. However, it is certain that, in exposing the difficulty of finding proof of the existence of the “self,” Hume created the eloquent symbol of a constitutive paradox of the human ego: that it can only apprehend itself as a substance from a posthumous point of view, where “tel qu’en Lui-même enfin l’éternité le change.”7

Those who attempt to state literally valid universal truths usually only manage to outline a symbol. On the other hand, if we seek to walk only toward the universal truths that we see sketched in concrete situations, the order is reversed: instead of arriving unintentionally at a symbol, we start from it voluntarily, knowing that no matter how much we analyze it, we cannot transfigure it into a definitive literal truth, but only into another symbol that is clearer, more intelligible, and perhaps more satisfying. The limit we reach by this means is not determined by ultimate truth, but only by the degree of our demand for understanding, a demand that, in turn, is determined by the pressure of personal, cultural, and historical factors that delimit the object and course of the investigation.

I have never had any other intellectual ambition than this.

Hence my impatience with those generic philosophical problems – the “philosophical problems in the modern sense” – that professors and authors of manuals seem to consider the purest and loftiest expressions of philosophical inquiry: materialism and idealism, determinism and free will, the foundations of morality, the logic of meaning, and so on.

Of course, I cannot completely avoid these questions, as I stumble upon them at every step in my attempt to explain myself to an audience that has them in mind. But I try to approach them only superficially, as occasional complements to what I want to say about concrete realities of life.

Therefore, all my writings are strictly occasion-based: reactions of a curious and sincere intellect to the experiences of a moment, recorded and analyzed in multiple keys, in shameless cognitive opportunism immune to any presumption of systematization and, even more so, of subsequent textual organization. If they do not completely dissolve into a dust of impressions, it is because, from the multiplicity of occasions that give rise to them, the perspectives that shape them, and even the literary genres in which they are expressed, they always refer to a central core of concerns that unify around a constant, unique, and almost obsessive objective: the pursuit of the Supreme Good and therefore the removal of obstacles that may appear along the way.

The texts gathered in this book, as in the two other mentioned collections, reflect both the kaleidoscopic disorder of fragments and the unity of the light that runs through them.

In this sense, the absence of any order, whether in the chronology of the writings or in the distribution of subjects, is intentional, harmless, and even opportune.

The reader of this collection will have the opportunity, among the many changes in perspective and tone throughout the pages, to verify what I am saying. I only hope that they do not get annoyed by it but are encouraged and delighted to find that, from the baroque multiplicity I present, their own point of view is perhaps not entirely excluded.8

Richmond, March 20, 2012.

Philosophy and Its Inverse

If there is a historical FACT that cannot be doubted, it is that philosophy was born in Greece and acquired its classical form once and for all with Plato and Aristotle (both under the original inspiration of Socrates). You can become a philosopher without knowing Sartre, Husserl, Nietzsche, even Hegel, Leibniz, or St. Thomas Aquinas. But anyone who has not immersed themselves in the teachings of the two founding fathers will remain forever ignorant of the spirit of philosophy.

No one has described this spirit better than Eric Voegelin when he said that, with the loss of the ancient “cosmological” sense of orientation in life, where the order of existence appeared as an image of the cosmos, philosophy emerged as an attempt to find a new organizing principle no longer in the contemplation of the physical universe but in the interiority of the soul. In the general confusion of the world, the philosopher seeks to order his own soul in order to take it as a measure of the external disorder.

Among the multiple styles of thought that universal philosophy offers us, the student always ends up clinging to one. Whether formally or informally, they become Kantian, Hegelian, Marxist, Nietzschean, structuralist, neo-empiricist, or something else. But none of these lines of orientation makes any sense on its own if separated from the original organizing project inaugurated by Plato and Aristotle. This is mainly because those various schools define themselves in relation to one another within the limits of a “professional” philosophical debate, with problems and terms established by a long academic tradition, while the Greek classics provide us with something much broader: a sense of orientation not within the network of university discussions but in life in general. Descartes, Kant, Husserl, or Wittgenstein teach us “philosophy,” that is, certain philosophical problems and sophisticated ways of approaching them. But only in Plato and Aristotle do you learn what it means to be a philosopher. Being a philosopher is not merely mastering a set of intellectual techniques that make you a recognizable or even respectable member of a particular academic corporation (assuming the university actually teaches them instead of just giving you a title to cover their absence). These techniques allow you to understand what philosophers are discussing and even to formulate your own guesses in an academically acceptable language, but no one in their right mind would think of applying them to real life, to everyday life, outside the professional sphere. No one, when making decisions about marriage, employment, raising children, or managing a household, still less when dealing with major crises of personal existence, will act based on Hegel or Wittgenstein. In fact, the mere idea of seeking guidance in philosophy for real-life orientation sounds strange in academic circles today. Philosophy, they say, is a serious intellectual activity, not self-help.
When trouble arises, they forget about seriousness and seek the help of a psychotherapist (or a spiritual guide, like many professors at USP). But it is precisely in the decisive moments of life, in times of crisis and perplexity, that Plato and Aristotle (and, hovering above them, the spirit of Socrates) come to our aid, infusing us with the sense of the inner order of the soul that will make each of us not an academic professional but a spoudaios, a truly mature human being, humanly developed to the utmost limit of our cognitive powers, capable of perceiving reality and making decisions from the center and the summit of our consciousness, and not from the passions of a moment, from professional opportunism, from the fear of peer judgment, or from some fashionable prejudice.

In terms of pedagogical strength, in the power to order the soul, the writings of Plato and Aristotle are surpassed only by the Bible and the words of the Holy Fathers and Doctors of the Church—with one difference in their favor: the Bible is written in symbolic language, sometimes difficult to interpret, and the writings of the Fathers and Doctors fill entire libraries that you will not be able to read in the span of a lifetime, even assuming you emerge unscathed from the theological controversies that clutter the path.

It is also true that many scholars see in Plato and Aristotle only what they also find in Descartes, Kant, or Husserl: “philosophical questions” to fuel erudite research and enliven academic debate. But they do this because they want to, because they love philosophy as a profession, not as the norm and meaning of life. Nothing obliges them to do so except the decision they freely made to seek the security of a professional identity rather than the order of inner life, reconciling without major crises of conscience the rigor of academic investigations with the fragmentation, disharmony, and deformity of their souls. That precisely these individuals typify in the eyes of the multitude the image of “philosophers” par excellence, since the multitude knows nothing about philosophy and judges everything by the appearance of social roles, is one of the greatest ironies of present-day society. For the orientation they adopted in existence is the exact opposite of the philosophical life as understood by Socrates, Plato, and Aristotle. They are “professional philosophers” precisely to the extent that they ignore or despise the spirit of philosophy.

From Socrates to Júlio Lemos – Philosophy and Its Inverse – II

Mr. Júlio Lemos, who never misses an opportunity to start a discussion, calls Socrates the “chief bore” for having practiced the same custom twenty-four hundred years ago.9 But there the similarity ends. Among numerous other differences, it is well-known that Socrates addressed his adversaries by their names, while Mr. Lemos, in criticizing the vices of surrounding philosophy, always leaves it to the reader to discover who the addicts might be, if they exist outside the author’s mind. He is so averse to mentioning flesh-and-blood people that his critical articles should come with a disclaimer: “Any resemblance to reality is purely coincidental.” Socratic dialogues, on the other hand, always unfold with real characters from Athenian life and deal with problems that are evident in society’s eyes. Socrates valiantly fought against the corruption of the polis, while Mr. Lemos maintains a prudent distance from this low world, dedicating his talents to logical-mathematical speculations – or discussions with hypothetical philosophers – that do not offend the established authorities. Perhaps he is somewhat ashamed of this deep down, but in his public statements, what transpires is quite the opposite: that distant, almost blasé ostentation of superior detachment, typical of a seasoned professional who consents, out of sheer charity, to utter a few words to the meddling amateur.

We all know what this superiority consists of: Mr. Lemos plays the role of a rigorous, scientific, academic arguer in the imaginary theater he wishes to fill with a real audience, contrasting himself with the guessers who “do philosophy in a crude manner, leaving aside speculation to inculcate moral criteria in listeners and readers, condemn behaviors, or provoke indignation.” Among the culprits of such a debacle, he includes Socrates, Plato, and Aristotle, always busy pointing out to the unwary the path to goodness, wisdom, and happiness – a task that, according to him, falls to “practical ethics” or techniques of “self-help,” having little or nothing to do with authentic and serious philosophy, which is evidently represented by Mr. Júlio Lemos himself.

To support his modest claims, he appeals to the authority of Blessed Cardinal John Henry Newman, who proclaims in Chapter 5 of The Idea of a University10 that “knowledge is one thing, virtue is another” and that “philosophy, no matter how enlightened, does not supply commands for the passions, influential motives, or vital principles.” Newman cites the example of a character in Samuel Johnson’s novel Rasselas, Prince of Abissinia – a philosopher who, upon the death of his daughter, confessed to finding no consolation in the ethics of self-control he had taught his disciples (Mr. Lemos, with his peculiar rigor, conjectures that the man is a Pythagorean when it is quite evident that he is a Stoic). The episode foreshadows the piercing protest of Franz Rosenzweig, who, huddled in a trench during World War I amid piles of corpses, noted the complete impotence of academic philosophy in the face of the global carnage.

It would be great if Mr. Lemos, before using a classic text as a bludgeon, learned how to read it. The quoted passage does not contrast moralizing philosophy with the “scientific philosophy” that Mr. Lemos so appreciates, but with the Christian faith. When Newman suggests that the teaching of philosophy, instead of making false promises of salvation, should more modestly aim to develop intellectual virtues in the student, Mr. Lemos, attempting to turn the cardinal into an advocate of analytic philosophy avant la lettre, insinuates that these virtues consist merely of “conceptual precision, clarity, and logical rigor,” that is, the standardized qualities of scientific communication in the present-day sense. Any attempt to go beyond that, according to him, is pure superstition. Newman, however, makes it clear that this is not the case. According to him, what philosophy can and should develop is “a cultivated intellect, a delicate taste, a candid, equitable, dispassionate mind, a noble and courteous bearing in the conduct of life.” Who, reading these words, could fail to understand that the intellectual virtues to which the cardinal alludes are also, and inherently, moral virtues – precisely those that, according to Mr. Lemos, philosophy cannot teach in any way? For Newman explicitly makes them the very objective of philosophy teaching at a university (they are the objects of a University).

Newman only emphasizes that these virtues are inferior to Christian holiness. It is as if one were to exclaim, like the Lisbon citizen of whom a tourist asked the way to the Hieronymites Monastery: “Oh, for heaven’s sake, who doesn’t know?” The cardinal rightly clarifies that philosophical education “produces not the Christian, not the Catholic, but the gentleman.” He is far from despising the virtues of the gentleman; on the contrary, he professes to advocate for them and to insist on their importance. He only warns that they are no guarantee of sanctity, or even of conscientiousness; they can even stimulate pedantry, arrogance, and the spirit of controversy. All of this is exemplarily obvious, but only Mr. Lemos can see in it an appeal for philosophy to abstain from any moral ideal and concentrate solely on the pursuit of logical accuracy, taken as an end in itself. When Newman speaks of “disinterested study,” he is explicitly referring only to the classic distinction between the liberal and the servile arts. The latter aim at utilitarian purposes, while the former aim at the improvement of the human mind. By describing this improvement as a synthesis of cognitive, ethical, aesthetic, and social values, condensed in the symbol of the “gentleman,” he precludes in advance, in the most categorical manner possible, the interpretation that Mr. Lemos wishes to impose on his words. “Disinterested study” is disinterested in its technical, industrial, and economic applications, not in its psychological and moral effects on the student’s mind, which, according to Newman, are its very raison d’être.

The attentive reader will also not overlook the highly significant detail that, as examples of false saviors, Newman mentions only second-rate philosophers like Seneca, Cicero, and Cato – and, ironically, Lord Francis Bacon, one of the precursors of Mr. Lemos’s “scientific philosophy” (the passing mention of Socrates has another meaning, as we will see later). Not a word (much less a criticism) about the Christian philosophy of St. Thomas, St. Bonaventure, Duns Scotus, or Raymond Lull, whose edifying and even catechetical purposes shine forth from every page of their works. As for ancient philosophy, from which medieval Christian philosophy directly derives, the cardinal, instead of mocking its moral ideals or reducing its contribution, as Mr. Lemos desires, to the development of logic, mathematics, and the physical sciences, makes it one of the pillars of the human condition itself:

“As long as we are human, we cannot escape being, to a large extent, Aristotelians, for… in many matters, to think correctly is to think like Aristotle; and whether we like it or not, we are his disciples, though we may not know it.” Logic was surely one of those matters, and what Aristotle thought about it is that it is not even an integral part of philosophy but only preliminary training that, once absorbed, can in effect be forgotten and make way for less formalized modes of inquiry, more compatible with the elusive nature of certain questions. Although he teaches that logic is the quintessential form of scientific proof, Aristotle warns that in all investigations the fundamental problem is not exact logical demonstration but the discovery of premises, in which logic is absolutely powerless, giving way to dialectic, rhetoric, and even poetic imagination. A philosophy that sought to reduce itself to logic or, even more so, to the logic of the sciences would be, in the understanding of Aristotle and Newman alike, the aberration of aberrations.

Following the tradition of medieval universities, Newman divides studies into three levels: utilitarian arts, liberal arts (which he indifferently calls “philosophy” or “science”), and Christian religion. If the second level should not usurp the prerogatives of the third, it should also not lower itself to the first – which, I observe, would necessarily happen if philosophy were reduced to logic and the improvement of the mind to the pursuit of “conceptual precision, clarity, and logical rigor,” disregarding the ethical, aesthetic, and social qualities that, according to Newman, constitute a well-formed intellect. If philosophy does not ensure the salvation of the soul, that does not mean it is morally inert or that the only quality required in its practice is, as Mr. Lemos claims – monstrously distorting Newman’s thinking – “love of studies.” Love of studies, without the corresponding love of truth, is an invitation to that pedantry, that academic presumption that Newman vehemently condemns, and of which Mr. Lemos’s lessons provide an unmistakable sample. Worse still would be reducing love of truth to a mere set of logical-technical precautions, omitting that its conquest is a constant struggle of the whole soul, involving feelings, habits, values, and, above all, the effort of self-knowledge without which “truth” becomes an empty formula ready to be repeated on the university stage or on a computer screen without any corresponding act of consciousness. If, in this as in other matters, “to think correctly is to think like Aristotle,” it is worth remembering that, according to the Stagirite, truth lies not in propositions but in judgment, in the inner act of the human intellect that approves or disapproves them. This act can only be performed by a real human being: all that logical technique can do is symbolize it, on paper or on a hard drive, with a negative or positive sign.

If it is indisputable that philosophy does not provide, nor should it promise, the salvation of the soul, even less convincing is the cardinal’s argument against the consoling powers of philosophical meditation in moments of danger and suffering. Firstly, it ignores the historical precedent of Boethius, who, condemned to death, finds consolation in philosophy while in prison. Secondly, it unjustifiably overlooks the heroic conduct of Socrates in the face of the tribunal that condemned him (we will see what the esteemed Mr. Lemos has to say about this). Thirdly, it fails to mention that the scholastic synthesis of faith and reason implies, almost intrinsically, the auxiliary appeal to reason as a reinforcement of faith in difficult moments of life.

The example that Newman refers to – the philosopher in Rasselas – is even more disastrous. Firstly, because it is fictional, and secondly, because it assumes that weeping over a deceased daughter is a damning vice, a decisive argument against the beliefs of a grieving father. If that were the case, the tears shed by the Virgin Mary at the body of Our Lord Jesus Christ would have put an end to Christianity once and for all. And if that were not convincing enough, the apostles' abandonment, the cry of despair from the abandoned Son on the cross, and Peter’s three denials before the rooster crowed would complete the job, leaving no room for criticism from Voltaire.

No example of human weakness ever undermines the dignity of a belief, whether religious or philosophical, nor does it diminish the value of the message it seemingly contradicts. Mr. Lemos himself acknowledges this when he states that if a philosopher “knows more about Thomistic ethics than Saint Philip Neri and privately acts like an irresponsible person, the fault lies not with philosophical ethics, but with him.” Unfortunately, our professor of logical rigor, after admitting this obvious fact, still tries to say something substantive against philosophy as a way of life by claiming that “it is very common for philosophical moralism to go hand in hand with private perversion.” In light of what he said in the previous sentence, the complete response to this observation is: “So what?”

I have explained a thousand times – thinking, in this, like Aristotle – that the argumentum ad hominem only has cognitive validity when it is also, inseparably, an exemplum in contrarium, the factual refutation of a previous generalization. For example, when Hobbes, after proclaiming that human beings only act out of a desire for power, professes to write Leviathan for the pure good of suffering humanity, without any personal ambition; or when Machiavelli, teaching that the Prince must kill his collaborators as soon as he comes to power, omits the main collaborator: the author of the plan, namely himself; or even when the bourgeois Karl Marx, affirming that only the proletarians can have an objective view of history, proceeds to offer us what he swears is the first objective view of history. Outside of these cases, the argumentum ad hominem is only valid as a dirty trick or, at best, as a vague suggestion of a possibility to be investigated.

Even if all the moralists in the world were immoral in practice, it would in no way detract from the dignity or necessity of morality, without even considering the possibility that accusations of immorality may be the work of malicious intriguers. In this sense, Newman’s observation that many philosophers have been ridiculed as hypocrites, including Socrates (in Aristophanes' Clouds), is the epitome of a suicidal argument that rebels against the arguer himself, since satirical literature aimed at denouncing religious hypocrisy, from the Carmina Burana to Rabelais, from Boccaccio to Molière, from Diderot and Stendhal to Alessandro Manzoni, and from Cervantes to James Joyce (not to mention the popes thrown into Dante’s Inferno), far surpasses, in volume, quality, and historical importance, anything that jesters of all times have written against philosophers. And must it be remembered that no one in the world has been (and still is) the target of more mockery than Christ himself?

One point that Newman fails to clarify is the exact relationship between the education of a gentleman and education for the Christian faith. To say that the former is not enough to produce the latter sounds more like Counselor Acacius than like someone who wishes to elucidate the problem. However, to claim that all liberal education is useless in catechizing simple people, the common folk – something that Newman himself does not affirm – is highly questionable. This is evident from the fact that the first efforts towards universal literacy were initiated by the Church itself during the time of Charlemagne, and that the mechanical arts, diligently practiced, eventually sparked a curiosity for scientific or philosophical matters that they alone could not satisfy. But what about the religious education of scholars, teachers, priests, and monks? Is the preliminary education of the soul in the worldly virtues of a gentleman a dispensable stage or merely a technical training without any moral weight in itself?

History decisively answers that it is not. Newman draws inspiration from the example of the medieval university of the 13th century. However, we now know, as he could not have known at the time (it was revealed only by later historiography), that the institution, far from representing the pinnacle of education in the Middle Ages, was merely a late crystallization, institutionalized, more formalized and less vigorous, of what was taught in the so-called “cathedral schools” from the 10th to the 12th centuries.11 And what was taught there were precisely the qualities of a gentleman – “a cultivated intellect, refined taste, a candid, fair, and dispassionate mind, noble and courteous conduct” – as preparations for acquiring the Christian virtues, in the same sense in which Clement of Alexandria proclaimed philosophy to be “the pedagogue that leads to Christ.” Education reached such heights there, and its fruits of goodness and wisdom were so evident, that it was said at the time that even the angels were envious of it. Despite their dazzling but brief intellectual prestige, the universities that came afterwards, with their history of strikes, riots, and even killings, and their subsequent descent into a depressing sterility, never deserved, nor would they ever deserve, such praise. It is not unfair to say that the Statutes of the University of Paris of 1215, by turning philosophy into a regulated profession and a means of social ascent, greatly contributed to the loss of the inspiration received from the cathedral schools and to the influx of all kinds of careerists eager for power and prestige, inflated with technical ability and oblivious to the dictates of religious and even secular morality. It is no wonder that student riots erupted there in 1229, lasting two years and leaving a trail of corpses everywhere.

Relevant to the understanding of this process is the following difference. While universities privileged formalized teaching based on texts and documented in new texts, creating the monuments of written exposition that now represent visible figures of scholasticism for us, cathedral schools did exactly the opposite. On the one hand, they did not aim to produce “philosophical works,” but rather human personalities distinguished by the beauty, strength, balance, and purity of intentions, without the slightest concern for leaving behind documents that would attest to their presence on Earth. On the other hand, in pedagogical practice, they attached less importance to the study of texts or the acquisition of techniques than to the direct influence of the master as a living example of the intellectual and moral virtues to be instilled in the disciple.

In this respect, they remarkably resembled the Socratic circle and the original Platonic Academy. The best interpreters of Platonism – Paul Friedländer, A. E. Taylor, Paul Shorey, Julius Stenzel, Eric Voegelin, and Giovanni Reale, among others – teach us that it was never Plato’s intention to create a formalized doctrine condensed into a system of propositions that could be impersonally conveyed to generic recipients, as in a treatise on chemistry or logic. Stenzel writes: "He never conceived of learning as a matter of pure intellect, but always as a total influence from person to person, as a being shaped and molded by the intimate relationship and society with another human being."12 Even regarding the seemingly more “impersonal” and “scientific” aspects of his teaching, the master did not dispense with personal pedagogical examples. Taylor states: "One of Plato’s firmest convictions was that nothing worth learning could be learned by mere ‘instruction’: the only method of ‘learning’ science was to effectively engage, in the company of a more advanced mind, in the search for truth."13

What made this direct influence from soul to soul even more indispensable was the social circumstance in which the Socratic circle originated. Socrates did not enter the stage by engaging in discussions against just any ideas, let alone, like Mr. Lemos, challenging a minority current (philosophy as a “norm of life”) that he himself declares to be unrelated to “serious” philosophy. On the contrary, Socrates turned against everything that was the dominant opinion in Athenian society, considered respectable and serious to the highest degree. Thanks to Socrates and Plato’s own efforts, Athenian doxa now appears ridiculous to us, but at the time it was so respected that challenging it could be punishable by death, as it indeed was. It is merely a scholastic stereotype to say that Socrates opposed this constellation of established beliefs by appealing to “reason.” Both he and his opponents made use of reason, arguing, syllogizing, and drawing conclusions. If Socrates did it more skillfully than they, that qualitative superiority does not imply a difference in substance. Socrates' specific difference lies in a deeper layer of the experience of discussion. While his adversaries repeat common ideas, clinging to the security of social roles that give them the illusion of being right by thinking in line with the majority or the ruling class, Socrates speaks only as a human individual, without relying on any external authority. And not only does he do that, but he also appeals to his opponents' own intimate testimony, which is equivalent to stripping them of their social identities and inducing them to make a direct, sincere, human confession of their true feelings. One of the means he uses for this is inviting each person to imagine their own death and the afterlife. The reality of death and the prospect of judgment dissolve social defenses – the “rationalizations,” as a psychoanalyst might say – and equalize human beings in the consciousness of their concrete destiny.
Mere confrontation of opinions transforms into a dialogue between souls, culminating in periagoge, the 180-degree turn in consciousness that abandons collective mirages and, turning inward, discovers the permanent foundations of its existence.

To force viewers to strip themselves of their civil and political identity in order to lead them to contemplate, defenseless, the fragility of the human condition was already the objective of Greek tragedy, which frequently chose as its hero the foreigner, the unknown, the rejected and marginalized, so that any sense of national or social identification would give way to the naked and raw humanity of fundamental experiences. Thus, Nicole Loraux, in a memorable essay, defined tragedy as the “anti-political” genre par excellence.14

It was only when tragedy was losing its effectiveness as a symbolic form that a new, more differentiated and explicit modality of appeal to profound humanity became necessary and possible. More than for his argumentative technique, which is deficient from many points of view, Socrates is notable for his psychological or psychopedagogical acuity, of which we find no comparable example before Montaigne (16th century), Pascal (17th century), and the advent of the modern novel in the 18th century. Throughout the Socratic dialogues, it is never a matter of simply dismantling arguments, but of awakening the moral sense through a cognitive deepening of fundamental experiences. It is impossible there to separate what is “philosophical investigation” from what is “moral education,” for the latter guides the former and receives from it its experimental foundation.

However, the operation is not always successful. Sometimes, the listener is so attached to their social identity that they cannot imagine themselves devoid of it, naked and defenseless, even for a minute. In the eagerness to evade intimate experience, to avoid periagoge, they resort to all sorts of subterfuges, ranging from fanciful reasoning15 to mockery and threatening words, or they withdraw from the dialogue. In such cases, the inevitable conclusion is that we are facing the formal and paradigmatic inversion of the figure of the philosopher: the philodox, “lover of opinion.”

This opposition is not casual or mere rhetorical artifice. The entire structure of the Republic and other dialogues is built upon pairs of opposites to which Plato gives a stable meaning and incorporates into his technical language. However, not all of these pairs have survived in the history of philosophy: some concepts have separated from their opposites and acquired an autonomous fictional life in the form of consecrated verbal fetishes. Eric Voegelin explains:

"Plato created his pairs of concepts in the course of his resistance to the corrupt society surrounding him. From the concrete struggle against the surrounding corruption, however, Plato emerged as a worldwide historical victor. Consequently, the positive side of his pairs became the ‘philosophical language’ of Western civilization, while the negative side lost its status as technical vocabulary… The loss of the negative half deprived the positive half of its flavor of resistance and opposition and left it with a quality of abstraction that is deeply alien to the concreteness of Platonic thought… The loss was most embarrassingly manifest in the pair philosopher and philodox. In English, we have philosophers, but not philodoxers. The loss is, in this case, particularly embarrassing because, in reality, we have an abundance of philodoxers; and, as the Platonic term that designated them has been lost, we refer to them as ‘philosophers.’ In modern usage, therefore, we call philosophers precisely the people whom Plato, as a philosopher, opposed. And an understanding of the positive half of the pair has become virtually impossible today, except for a few scholars, because when we speak of ‘philosophers,’ we think of philodoxers."16

Newman, when he speaks of “philosophers,” is thinking precisely of philodoxers without knowing it. Hence the somewhat embarrassing ambiguity with which he depreciates the moralizing ambitions of philosophers while declaring himself an adept and follower of a philosophy that is obviously moralizing, such as Aristotle’s. Hence also the monumental blunder of following Samuel Johnson in mocking a father’s tears before his daughter’s corpse.

But the philodoxer is defined not only by opposition to the person of the philosopher but also, even without realizing it, by opposition to the ultimate foundation of Platonic philosophy (and, by extension, of all Christian philosophy): “Plato,” explains Voegelin, "speaks of the philodoxer as the man who cannot bear the idea that ‘the beautiful, or the just, or whatever it may be, are one and the same.’"17 Voegelin recalls Xenophanes’ statement: “The One is God.” We can also evoke Duns Scotus’s “transcendentals,” Unum, Verum, Bonum, which are convertible into one another. In Plato, in Aristotle, and in all of scholastic philosophy, the Supreme Good is not a “value,” much less a “cultural creation,” but the supreme reality, the ens realissimum, the first foundation and ultimate object of all knowledge.

The repulsion this causes in modern sensibility is notorious. Since Kant, the insurmountable separation between “reality” and “value” has become an unquestionable dogma of university mythology, without anyone realizing that it self-annuls at the moment when, professing to express an inescapable given of reality, it consecrates itself as a cultural value.

Max Weber, hypnotized by the vision of the insurmountable abyss but yearning to find a moral foundation that would justify his pursuit of scientific truth, fell into a crisis of nervous paralysis, spending five years incapacitated on a couch because he could not escape the tragic mistake of turning a passing historical situation into a founding principle of all scientific knowledge. The “independence between the spheres of values,” as he called it, is the central dogma of philodoxy. It does not result from the nature of things, but from the fact that many individuals, clinging to their social identities as teachers, scientists, artists, or preachers, at certain times find themselves unable to descend to the inner depth where the unity of human experience is revealed: mistaking the incompatibility between their respective professional languages for an objective ontological separation between the domains of reality, they lack even the Weberian integrity to recognize that they are sick. Thus, they fulfill Heraclitus’s prophecy that awake men live in the same world, while the sleeping retreat into their mutually incommunicable worlds. Several symptoms indicate this pathology. One of them is what I call “arbitrary morality”: the subject proclaims that moral values have no scientific basis or rational defense, yet continues to act externally as if they believed in good and virtue, or whatever they call it. They suggest that their ethical or apparently ethical conduct does not derive from the Supreme Good but from their own mysterious, arbitrary, and inexplicable personal goodness. It is the most cherished form of self-beatification among skeptical and materialistic intellectuals.

Others, like Mr. Lemos himself, prefer to consecrate the impassable separation between facts and values as if it were the supreme value itself, thus proclaiming that “practical ethics” has nothing to do with their “serious philosophy.” Mr. Lemos, clearly, confuses philosophers with philodoxers because he is one of the latter.

The innocent faith with which he accepts the impassable divorce between the real and the good as absolute, taking the simple current names of professions or disciplines (“practical ethics,” “self-help,” “science,” “philosophy,” etc.) as if they corresponded to objective and eternal divisions in the structure of the cosmos, makes it evident that he does not understand, let alone assume as his own, the number one obligation of the philosopher, which is the search for unity beyond and above all the abysses and difficulties that culture—the doxa—may have spread along the way. By separating the Verum and the Bonum, or rather, by uncritically accepting this separation so dear to contemporary doxa as an unquestionable given of reality and not the mere historical crystallization of a notorious difficulty of communication between schools and styles of thought, he takes the disorder of culture as if it were cosmic order and thus blocks—for himself and for those who listen to him—any possibility of aspiring to the Unum. If, after that, he continues to present himself as a spokesperson for “reason,” it is evident that he has never wondered what can still be “rational” in a world from which unity has been expelled once and for all and in which the conventional division of labor has become the only remaining metaphysical principle. In other words, the “reason” he boasts of is only a verbal stereotype, not something whose experience he has ever probed deeply or even imagined that he should probe. Rarely has servile devotion to doxa shone with such obscene splendor.

From the frail and wavering existential position that this places him in, it is inevitable that he can only argue by falsifying the meaning of the texts he quotes and, under the ostentation of “logical rigor,” committing the most puerile and awkward fallacies. As even that is not enough to camouflage his insecurity, he resorts to historiographic psychosis and, as an old French popular expression would say, pète plus haut que son cul [farts higher than his ass]: without any explanation, without giving us the slightest idea of what may have led him to such an unusual opinion, he peremptorily declares that Socrates' heroism before the judges was “a legend” and includes the philosopher among those who, like the character in Rasselas, “failed in adversity.” The cold and seemingly disinterested tranquility with which he dispenses with trying to justify this enormity can only be explained by the absolute confidence he places in what he believes, as if he had witnessed it with his own eyes. Therefore, don’t worry: Mr. Lemos was there, saw everything, and no testimony in the world will dissuade him from the certainty that at the decisive moment, Socrates, instead of giving his disciples an example of courage, as Plato and other naïve people believe, wet his pants.18

Richmond, VA, April 7, 2012.

The Philodoxers in the Face of History – Philosophy and Its Inverse – III

In a note published on the Ad Hominem website, Mr. Joel Pinheiro comments on my article “Philosophy and Its Inverse II” and agrees with me that there is no philosophy without moral and existential implications. He then devotes himself to refuting the idea, which he attributes to me, that "medieval scholasticism was already a period of philosophical decline compared to the education provided in cathedral schools, which consisted of the example and charisma of the master and was conveyed through unwritten doctrines primarily transmitted through coexistence and witnessing the master philosophizing in loco."19

Against this idea, he argues that “this type of moral education and spiritual preparation, although very praiseworthy, is not properly philosophy. It cannot question its own foundations or engage in serious debate, since its purpose of forming a certain type of virtuous man is already given in advance; and therefore, it will not result in great philosophers.”

He continues: “The charismatic relationship, or even initiatory,20 between master and pupil does not replace rational debate. It is ridiculous and naive to imagine that semi-anonymous ‘wise men’ of the 12th century who did not leave written works had superior thinking to that of the great scholastics. The few written records that have survived from them show that, quite the contrary, their thoughts were much more conservative and conventional, albeit beautiful and noble.”

I

Before finding out whether Mr. Pinheiro is right or wrong about these matters,21 it is necessary to note that they have nothing to do with what I said in the article he imagines he is refuting. What I discussed there was not the quality of “proper philosophy” (in the sense that Mr. Pinheiro gives to this expression) produced in the schools of the 10th to 12th centuries and subsequently in the universities. Instead, it was Cardinal Newman’s educational conceptions, the role he attributed to philosophy in them, and therefore, the false interpretation that Mr. Júlio Lemos had given to the Cardinal’s words. Mr. Lemos claimed that the teaching of philosophy should not have moral objectives and, due to ineptitude or malice, he quoted a passage in support of this opinion where Newman said precisely the opposite.

In the second part of the article, I analyze those conceptions themselves, pointing out that they seemed to me to fail because they expected from the university institution precisely the result that its advent had made unattainable: the formation of a gentleman marked by the virtues of “a cultivated intellect, a delicate taste, a candid, equitable, dispassionate mind, a noble and courteous bearing in the conduct of life.” This result was precisely what the cathedral and monastic schools of the 10th to 12th centuries had achieved with great success, contrasting sharply with what followed: the climate of careerism, pedantry, corruption, and political violence that prevailed in universities from the 13th century onwards. Just as the students of the cathedral and monastic schools became popularly known as the “envy of angels” because of the brilliance of their virtues, the typical university student who succeeded them had a reputation for being presumptuous, a drunkard, and a troublemaker. Notorious, too, was the hostility of the city dwellers towards the horde of arrogant foreigners who disembarked there, rendered immune to local laws by all sorts of corporate privileges.

Cardinal Newman, against Mr. Júlio Lemos, was entirely right in affirming that the study of philosophy could and should contribute to the moral formation of students, as it had done in the cathedral and monastic schools. However, it was also true that philosophy had begun to fail in this objective from the very moment it became a university profession and a means of social ascent. If this trajectory of human decay was accompanied by prodigious improvements in logical-dialectical technique and by the opening of new spaces for free discussion, thus making possible the great intellectual achievements of scholasticism, this clearly shows that those advances, instead of adding to the achievements of the cathedral schools in moral education, replaced them and ended up filling the entire space of higher educational activity. It was not the first or last time in history that moral degradation contrasted with intellectual progress. The zenith of philosophy in Greece, with Socrates, Plato, and Aristotle, came only when the glorious days of Pericles were already in the past and the city-state was sinking into corruption and violence. In the Vienna of the 1920s and 30s, the spectacular flourishing of philosophy and the humanities coincided with the decline of the romantic empire of the Habsburgs, shaken by communist and Nazi agitation and corroded from within by political corruption. None of these examples is a reason to deny that it would be better for morality and the culture of the superior intellect to progress together, but they show that this does not happen easily.

At no point did I discuss scholastic philosophy as such, which Mr. Pinheiro insists on defending against someone who did not attack it. I recall referring to it as “monuments of written exposition,” which is by no means a pejorative expression. I even pointed out that Cardinal Newman, when referring negatively to philosophers of the past, did not say “a word about (much less against) the Christian philosophy of St. Thomas, St. Bonaventure, and Duns Scotus.” So, what on earth is Mr. Pinheiro talking about? Something he thought he read but did not. About twenty years ago, educator Cláudio de Moura Castro already warned that in Brazil, nobody reads what authors write: they read what they imagine they thought, what they would like them to have thought, whether to applaud or to disparage them. Just like the famous Englishman in the anecdote, the Brazilian reader has not changed at all in the meantime.22

What confused Mr. Pinheiro’s mind was reading my article in the light of the commonplace belief that the great philosophy of the 13th century was a natural product of the university. From this perspective, two consequences follow. First: Mr. Pinheiro ends up understanding my criticism of medieval universities as if it implied a depreciation of scholastic philosophy, which happens only in his imagination. Second: from this confusion he is led, as if by ricochet, to proclaim that the remarkable achievements of scholasticism did not appear earlier only because the cathedral and monastic schools adhered to a ready-made model of the virtuous man from which great philosophers could not emerge. It was only when that model dissolved in “free discussion” that “proper philosophy” could flourish. He says this with complete frankness.

These are errors, of course, but for which I am very grateful because they allow me to take the discussion beyond Mr. Júlio Lemos’s blunders, which constituted his initial subject, and to explain myself on incomparably more important points.

First of all, the image we have today of scholastic splendor is built on a few names, especially St. Albert, St. Thomas, St. Bonaventure, and Duns Scotus. If we were to erase them from the records, scholasticism would have been nothing more than a curious episode in the history of education. And these are not just the names of philosophers but Doctors of the Church: three canonized saints and one blessed. There is no reason whatsoever to assume that these men had a looser, less strict, or less perfect conduct in their personal lives than the “ready-made model” that angels envied. I don’t see how the dissolution of the model through “rational discussion” could have contributed either to their holiness or to the strengthening of the special type of philosophical and mystical intelligence that characterizes them. This intelligence does not grow outside and independently of sanctifying grace; it stems from it as a special gift of the Spirit.

It is also naive to assume that these maximum incarnations of scholastic genius were typical products of the new academic environment, in which, on the contrary, they never comfortably fit. Their intelligence, their strict integrity, their superior understanding of the mysteries of faith, and, last but not least, their intellectual courage made these four masters the preferred targets of envy, pettiness, and slander from their colleagues.

Albert leaped like a young goat to make the congregation reluctantly swallow his Aristotelian theories about the physical world. Bonaventure suffered terrible attacks from William of Saint-Amour, a university potentate of the time, in the course of a sordid campaign waged by the secular clergy against the mendicant friars. It was Thomas who defended him, and later, also through the intrigues of academics, Thomas himself was twice denounced as a heretic (once posthumously). Duns Scotus was expelled from the university and had to flee from city to city, threatened with death, for defending unpopular doctrines and siding with the Pope in his dispute with the royal power, whose cause was hegemonic among the intellectuals of the time. It was only five centuries after his death, when his great doctrine of the Immaculate Conception of Mary was finally accepted and became a dogma of the Church, that he was removed from the list of undesirables. His beatification came only in 1993, more than a century later.

At the very least, Mr. Pinheiro, by extolling the intellectual victories of scholasticism above the “merely moral” virtues of the monasticism that preceded it, should have had the prudence to note that the four major authors of those victories, the ones I just mentioned, could not in any way be typical university figures, for the simple reason that they were not members of the secular clergy that dominated the universities. On the contrary, they came from monastic orders, where the moral discipline of the old schools was still preserved. The contrast between the mentalities of the two groups was so pronounced that the professors fiercely resisted the entry of monks into the university faculty (see the episode of Bonaventure mentioned above). Now, without their entry, the medieval university would have been deprived of Albert, Thomas, Bonaventure, and Duns Scotus – of everything that most clearly and deservedly characterizes and ennobles the image of scholastic philosophy for us today.

Yes, miserable swine, the four of them were monks, intruders in the university community! How could they be typical of the corporation that rejected their presence? Far from being typical products of the university of their time, as Mr. Pinheiro believes, these severe and devout monks, coming from a different social milieu with contrasting habits and values, so outshone that environment that they could only barely survive in it and, sometimes posthumously, triumph over it. The magnitude of their intellectual achievements owes less to the university atmosphere than to the strength of their majestically centered personalities, grounded in faith and integrity of purpose, in contrast to the sophisticated chatter of their colleagues, often technically admirable but so frequently inspired by futile motives and the seduction of heretical novelties. When we see medieval universities today as a luminous moment in the history of education, it is largely because the best men they rejected retroactively project the brightness of their glory onto them, and not the other way around. And this glory, undoubtedly, owes more to the monastic orders that formed these men than to the social milieu they joined as adults, strong enough to challenge and eventually overcome it. If Mr. Pinheiro takes my criticism of the medieval university as speaking ill of the philosophy of the great scholastics, it is owing partly to his ignorance of history and partly to his adherence to the entrenched optical error that collectivizes individual merits and takes exceptions for rules, as if the university chairs of the time had been overcrowded with men of the stature of Thomas and Albert, and not with technicians, bureaucrats, agitators, finger-wagging doctrinaires, bailiffs, and countless sycophants.

It is not Mr. Pinheiro’s fault; it is the widespread vice of understanding great men as “products of their time,” when precisely their greatness consisted of breaking the glass dome of the ideology of their time and injecting into the organism of culture, simultaneously and against the resistance of the environment, the forgotten wisdom of a very remote past and the most unimaginable perspectives of the future.

In the case of scholastic philosophy, which was entirely inspired by openings to eternity that no historical-social conditioning could ever explain, this should be evident at first glance.

Only the mediocre are products of their time. The wise, the inspired heroes and saints are fathers of their time; they are channels through which the light of transcendence breaks through the limitations of the age and opens possibilities that the collective mind, on its own, could never conceive. If prevailing opinion does not see this, it is because the access of millions of incapable individuals to the upper echelons of the university professions today obliges us to conceive history sub specie mediocritatis. That Albert and Thomas revitalized a philosophy seventeen hundred years old, finally making it prevail over the dominant Augustinianism, and that Duns Scotus, against all odds, anticipated a dogma of the Church by five centuries, are facts that should make the devotees of historical conditioning at least scratch their heads, if they had any.

But to this widespread perspective error, which has spread to the point of infecting even textbooks, Mr. Pinheiro adds another that, if not of his own invention, is also not shared by the ignorant masses but only by a part of the professional elite of philodoxes: the idea that philosophy only exists in explicit doctrine, developed, organized, published, rationally verbalized, and argued to its last detail.

The idea has illustrious origins. It goes back to Georg W. F. Hegel, a pedigree that, let’s face it, demands some respect. But like so many other opinions we inherited from that ingenious muddlehead, it is completely false. Without explicitly mentioning it or citing its source (which he may not even know), Mr. Pinheiro writes as if impelled by Hegel’s spirit:

“The focus on the master-disciple relationship and on non-verbal wisdom (which therefore cannot be written down without being, to some extent, betrayed)23 brings us back to traditionalist and perennialist dreams, esoteric symbolic systems, and immersion in oral traditions.24 But Philosophy is avidly pursuing the real; and that is accomplished by fleeing… It is strange that he [Olavo de Carvalho] and many of his followers continue to have this kind of fantasy as an ideal of life and philosophical education.”

In the universal gallery of shameful behavior, few things compare to the Brazilian habit of feigning superiority to what one does not understand. Not all of our compatriots suffer from this vice, and even fewer are born with it, but many acquire it early in adulthood under the name of “university education.”

The words of Mr. Pinheiro, which sound so obvious and unquestionable to his own ears, contain a multitude of thorny problems that he himself does not even perceive.

II

To begin with, if we excluded oral traditions from the field of serious philosophical study, we would have to say goodbye not only to a good part of Platonism but also to all university education that is not recorded in texts. In fact, the only reason for the existence of universities is precisely that part of higher intellectual training that cannot be obtained through mere reading but requires direct contact between master and disciple. If it were not so, university institutions could advantageously be closed and replaced by the publishing industry. This applies not only to philosophical learning but also to the arts, techniques, and sciences. And in all these cases, speaking of direct contact includes an indispensable portion of nonverbal communication. Nowadays there is no scientific research that does not require the use of instruments whose handling demands extensive practice alongside a qualified technician, who could hardly transmit the skill to students through verbal instruction alone, without visual and manual contact with the equipment and without resorting to gestures, postures, intonations, and looks whose translation into words would be practically impossible. If it were not so, anyone could become a technician in computed tomography, stereoscopic microscopy, or ballistic galvanometry simply by reading instruction manuals. They could also become an opera singer, painter, or dancer without ever witnessing a live example of how to sing, paint, or dance.

The weight of this factor is so crucial in scientific research that neglecting it can destroy the highest hopes of the sciences to constitute objectively verifiable knowledge. In science, a truth is worth nothing until it becomes a collective belief subscribed to by the community of professional scientists, but as Theodore M. Porter points out, “daily scientific practice has as much to do with the transmission of skills and practices as it does with the establishment of theoretical doctrines.” In the 1950s, Michael Polanyi already emphasized that scientific research involves a kind of “tacit knowledge” that cannot even be formulated in rules. “In practice,” Porter continues, “this means that books and scientific journal articles are necessarily inadequate vehicles for the communication of this knowledge since what matters most cannot be communicated in words” (emphasis mine)25. Thus, the elimination of nonverbal transmission would close every path to scientific research once and for all.

As we can see, Mr. Pinheiro’s attack on the nonverbal arises from an irrational aversion to pure stereotypes of popular culture and does not reflect any serious examination of the substantive issue.

In the specific case of philosophy, the role of personal contact, circles of friendship, and corporate loyalties in the formation of schools and philosophical currents, as well as in the assimilation and mental modeling of newcomers, is by now a broad consensus in the important field of the sociology of philosophy26. This is essential not only for sociologists but also for philosophers themselves: a philosopher who ignores the social foundations of his professional existence is like a ventriloquist’s dummy, reduced to the sad function of echoing influences whose origins and destinations he does not know. I dare say that in the Brazilian academic class this ignorance is almost mandatory.

Even more relevant, from this perspective, is the study of how personal prestige is built and dismantled, indelibly marking the historical profile of philosophy in a given period. How was it possible, for example, that certain philosophers (or philodoxes) achieved a much larger audience, both within and outside universities, than their equally or more capable contemporaries, producing enduring lines of influence and true traditions of thought, while the works of their competitors fell into complete oblivion? It would be unforgivable naivety to think that this is purely a result of “external factors” unrelated to the “intrinsic value” or the “philosophical content itself” of the works in question. The student population has access only to the “philosophical content itself” of the works it reads, not of those it ignores – and this selection automatically reinforces the dominant intellectual influences, establishing the locally prevailing criteria of “intrinsic value” as unquestionable decrees of the nature of things and, therefore, consecrating an often biased and distorted view of the history of philosophy as the direct and obvious expression of the truth of the facts.

Now, when we seek to investigate how such prestige is formed, we discover that the main mechanism generating it is the circle of personal relationships, where corporate interests and politically self-interested loyalties blend indissolubly with the devoted cult of charismatic personalities wrapped, most of the time without objective merits to justify it, in an aura of mystical wisdom that rigidly separates the initiated from the profane.

Studying the careers of four of the most prestigious thinkers of the 20th century, whom he calls “the malign masters” – Wittgenstein, Lukács, Heidegger, and Gentile – the Australian philosopher Harry Redner asks why their shadows overshadowed the figures of their equally or more capable contemporaries, and he concludes:

“In the end, what distinguished the malign masters from their no less capable colleagues was a charismatic personality that ended up making generations of friends, followers, and students prostrate themselves before them with reverential fear. Almost everyone who encountered a malign master felt they were in the presence of a genius. They had that ability to impress from the beginning of their careers… It is hard to think of any great philosopher of the past who was as revered in their time as they were.

“The followers who gathered around each of the malign masters have some of the traits of the narrowest and broadest circles of any charismatic movement. Each of them was surrounded by esoteric and exoteric circles of friends and followers. Closer to the master was a group of disciples or close companions; further away were sympathizers and fellow travelers; and around this core was the mass of interested students and readers”27.

In the formation of this cult, the force of the magical element was never lacking, manipulated with theatrical refinements by professional seducers. In Martin Heidegger’s rise to prominence, Karl Löwith highlights the power of his “enchantment art” that “attracted more or less psychopathic personalities.” In the lectures he gave, “his method consisted of constructing a building of ideas that he himself would then dismantle, again and again, to disorient the fascinated listeners, only to leave them completely hanging in the end”28. Any resemblance to the rhetorical procedures of the Armenian esotericist George Ivanovitch Gurdjieff is not mere coincidence. Gurdjieff rendered his disciples completely intellectually impotent by presenting complex cosmological systems accompanied by the most sophisticated mathematical demonstrations, and just when the audience felt confronted with solid scientific truth, he would dismantle everything with devastating refutations. The only difference between these cases and the pedagogy of the ancient monks is that the latter used the power of charisma to instill virtues, while the philosophical or esoteric celebrities of the 20th century employed it as an instrument of psychic domination to establish the cult of their own persons.

However, evidently, the function of direct circles of interaction is not limited to creating idols. They also serve a less personalized, more collective purpose, which is to impose the hegemony of influential groups through mafia-like mutual protection, mutual promotion, the boycotting of adversaries, the sharing of the best jobs among gang members, and, as a result, the control of public opinion, especially in closed and easily surveyed environments such as universities and cultural institutions.

The philosophies of the “malign masters,” according to Redner,

“tended to gravitate toward the university elites because, in the struggle for academic power, elite status matters a lot in attracting disciples and launching influential movements. From these high-status positions, it was easy to oversee and dominate all the positions in the lower-ranking universities. In the elite schools of dominant countries, such as the École Normale in France and the Ivy League in America, philosophy could be cultivated as a mystique for the privileged and the initiated. Only those who entered these institutions and went through them as students and professors had any chance of acquiring the ‘appropriate’ philosophical knowledge and being considered qualified in it. Through these means, a few universities were able to monopolize the teaching of philosophy and use this power to colonize the entire academic system of certain countries. A typical colonialist center-periphery relationship was established between the elite and the rest, allowing elite universities to perpetuate and consolidate their exclusivity and superior status.”

The “proper content” of philosophies was by no means indifferent to the role they played in the structure of university power:

“The philosophies that served this function of preserving the professional monopoly had to be those that no one could learn from books alone. They had to be those that no one outside the privileged institutional framework could acquire, transmit, or practice. They could only be learned if acquired through the correct channels and received from the appropriate hands. Such were, in fact, the philosophies that the malign masters themselves, and by the right of succession, their disciples, came to teach from the elite schools where they had gained positions of power. No one who did not pass through their hands could practice, teach, or even discuss their philosophies”29.

A highly well-documented example of how this process works in a particular country is provided in the book by Hervé Hamon and Patrick Rotman, Les Intellocrates30, which studies the social composition of the elite that controls university life and cultural press in France. This entire elite lives in Paris, distributed in a few neighboring blocks, and constant personal interaction is one of its essential mechanisms of self-preservation and growth.

As we can see, direct contact between masters, collaborators, and disciples has not lost any of the essential importance it had from the 10th to the 12th centuries. It has merely changed its function: from generating saints, it has transformed into a factory for careerists, agitators, managers of the cultural industry, flatterers, and militants. Perhaps for this reason, it has become less visible to inattentive observers like Messrs. Lemos and Pinheiro: it is in the nature of power circles to keep their existence as discreet as possible, so that the effects of their actions appear as accidental and anonymous results of the historical process.

Not surprisingly, one of the philosophical currents that most benefited from the struggle of influential groups for monopolistic control of universities was precisely “scientific philosophy,” or neopositivism, which Mr. Júlio Lemos places so celestially above the human world.

There is nothing strange about this, in fact. Neopositivism is, as the name itself suggests, a continuation of positivism, which was born not as pure theoretical philosophy for the use of angels but as a power project, one of the most ambitious and totalitarian projects of all time.

When, after World War II, the rapid growth of the Western economy accelerated the process of transforming philosophy into an academic profession, gradually eliminating the “public intellectuals” who used to set the tone of cultural debates,31 not all philosophies were equally suited to the new environment where philosophical discussions had to imitate as faithfully as possible the highly regulated and bureaucratized mechanism of scientific intercommunication.

In continental Europe, where philosophical discussion was imbued with a partisan and militant charge consecrated by decades of ideological confrontation, the solution was to infuse traditional left-wing discourse with touches of scientific language, mainly drawn from linguistics and mathematics. Thus, structuralism and deconstructionism were born and soon replaced existentialism and phenomenology in the public’s attention.

In the Anglo-Saxon countries, on the other hand, where the dominant tendency was to keep universities well integrated into the general functioning of the economy and immunized against the risk of right-wing and left-wing ideological labels, that was the great moment of “scientific philosophy.” The process was well studied by C. Wright Mills32, but as his description is very detailed and complex, I turn, once again, to the indispensable Redner, who summarizes it as follows:

“The older generation of philosophers, who were a strange mixture of lawyers, librarians, and scientists, was displaced by academic professors who organized themselves into a professional corporation with conferences, specialized journals, promotion hierarchies, and all the other trappings of academic disciplines. In these conditions, philosophers could no longer be considered freethinkers or intellectuals, as Russell Jacoby argues in a more recent study. For these academic professionals, the philosophy best suited to their demands was one that did not depend on theories, ideas, or any background of scientific or humanities knowledge, and that did not engage in contentious social and political issues. What they wanted was a mode of philosophizing that could be practiced as a technical skill to be pragmatically learned through training in the professional environment itself, through discussion, somewhat like that of lawyers”33.

What is “training in the professional environment” if not the so despicable, so dispensable direct contact between teacher and student? After all, why are lawyers, including Mr. Júlio Lemos, not qualified for professional practice as soon as they receive their little diploma, but have to do internships in law firms, see with their own eyes how courts, notaries, real estate records, and police stations work, learn through firsthand experience how to approach a judge, how to obtain the favors of a clerk, how to persuade a client to negotiate with the opposing party? And who doesn’t know that, in practice, the professional invested with these skills will have an infinite advantage over the erudite bachelor without direct experience?

If “analytical philosophy” can do without direct contact between master and disciple, why was precisely this the preferred teaching modality used to impose the prestige of this school in American universities?

Just like the aversion to the non-verbal, the contempt for direct teaching is an affectation, a pose struck as an irrational reaction of the moment, not an opinion matured with knowledge of the subject.

III

It is pure fantasy for Mr. Pinheiro to believe that I attribute to the cathedral and monastic schools a possession of a “superior” philosophy compared to the scholasticism of the 13th century. But he would not be entirely wrong if he asserted that I see in the former a Christian wisdom superior to that of the average professors and university students who came later, and that I understand the great philosophy of Thomas, Albert, Bonaventure, and Scotus less as a “product” of the university environment and more as the natural development and, so to speak, the intellectual externalization of the Christian culture inherited from the cathedral and monastic schools through the monastic education received in youth by these four great masters, which immunized them against the pedantic, often heretical, babbling of the academic milieu.

That the flourishing of a great philosophy does not arise out of nothing, but is produced as an intellectually differentiated development of a worldview already previously crystallized in symbolic forms in the prevailing culture, is something that should not surprise anyone. Who is unaware that the central conception of Platonic philosophy, that of eternal laws that surpass the apparent order of a “nature” conceived in the image and likeness of the prevailing social order, was already prefigured in Homer’s poetry and in the theater of Aeschylus and Sophocles?

I learned from Paul Friedländer, Julius Stenzel, and Eric Voegelin that understanding a philosophy is not only about grasping the explicit meaning of its “theses,” nor discerning the structure of its “system,” and much less knowing how to compare it with other “systems” (although all of this is an indispensable school preparation), but to unearth from its formulation in concepts and doctrines the real experiences that inspired them, the human and historical substance that they transmuted into abstract ideas.

This, of course, is not a precept valid only for historians and philologists, but a basic requirement for anyone who intends to “discuss” these philosophies based on the real meaning they had for their creators and not only on their explicit formulation, stabilized in texts, even if apprehended beyond their verbal surface and visualized in the profound unity of their internal order.

I refer here to the brief oral explanations I gave about the “argument of St. Anselm.” This argument is originally presented in the form of a prayer. Since no one in their right mind – much less an experienced monk – can pray to a God whose existence they doubt, it is clear that the argument is not offered as a response to doubt about the existence or non-existence of God, but as an intellectual deepening of the experience of prayer. The logical scheme of the argument, however, can be abstracted – imaginarily separated – from its original context and discussed “in itself.” But then it will no longer be St. Anselm’s argument, but a schematic copy emptied of its experiential content, capable of being reproduced in an infinite number of different verbal formulations and even encoded in mathematical symbols for the purpose of computer analysis. And then the debates about its logical validity or invalidity can go on indefinitely, animating the evenings of argument enthusiasts, enriching the publishing market, and feeding academic careers, without adding a single gram to the understanding of St. Anselm’s thought or, still less, of the Anselmian technique of converting a devotional practice into an intellectual experience – a technique without which nothing can be understood, not only of Anselm’s own philosophy, but of the entire scholastic tradition that followed him.

This example illustrates the difference between what I and Mr. Lemos call “philosophy.” He gives that name to something that, from my point of view, is merely a technique of argumentation, as beautiful and sophisticated as it may be. I prefer to reserve the term for what it has always meant: the intellectual elaboration of experience aimed at achieving, to the greatest extent possible at a given historical moment, the unity of knowledge in the unity of consciousness and vice versa. In this sense, the internal unity of a philosophy, that is, its systemic and logical coherence, is worth less in itself than for its efficiency in accounting, even with inevitable logical imperfections, for the variety and confusion of human experience – personal, cultural, and historical – that served as its starting point. For this reason, we call great philosophers not those who have striven to arrive at the most detailed logical proof, but those who have managed to embrace, in a unifying gaze, the broader and more complex horizon of problems, thus creating a sense of orientation that remains useful for many subsequent generations. In this sense, the list of truly great philosophers is quite short. Without trying to resolve now the question of who deserves or does not deserve to be included in this classification, it seems evident to me that no one will deny a place to Plato, Aristotle, St. Thomas, and Leibniz. While much later philosophers have seen their essential contributions exhausted or challenged by the advance of knowledge (no one can be a committed Cartesian, Baconian, or Hobbesian without conflicting with the current state of the sciences), these four, excluding minor errors they may have made here and there, continue to inspire new discoveries in all fields of knowledge, and it seems they won’t stop doing so anytime soon. We will not err, therefore, if we take them as supremely typical models of what is meant by the term “philosopher.”

The adopted criterion implies that nothing is understood about a philosophy without an effective vision of the underlying experiences to which it responds with vigorous effort of expression, organization, unification, and clarification (the word “enlightenment” has other connotations that I wish to avoid).

If we were dealing with artists, with poets, their works would be predominantly a direct expression of experience. Philosophers take their basic material from a more elaborated state, which includes aspects of experience already worked over in artistic culture (as well as in laws, institutions, established beliefs, etc.). Frequently, art anticipates philosophers by providing them, in the form of compact concrete symbols, with the structuring schemes to which they will give a more differentiated, clearer, and more accessible intellectual expression, discernible by reason. It is a pure high-school stereotype to believe, as Mr. Lemos and Mr. Pinheiro do, that philosophy is “rational discussion.” The possibility of rational discussion appears only after the great enterprise of the unifying organization of experience has reached its end. This enterprise may also include, along the way, a portion of discussion, which aims mainly to rectify or complete certain aspects of previous attempts, but it is evident that discussion does not constitute the main strength of any philosophy worthy of the name. As John Stuart Mill observed, criticism, as indispensable as it may be, is the lowest faculty of intelligence. Even when a philosophy assumes the external appearance of a discussion, as in Plato’s dialogues, the aim there is not to “prove” anything, but to bring forth, to make visible, something that goes far beyond discussion and proof. Plato starts from the material of experience as he finds it in the culture of his time and, through successive ascending steps and partial clarifications, rises – and, when possible, raises his interlocutors – to the anticipation of the world of forms, principles, and eternal laws that unify and structure experience. This ascent, and not “rational discussion,” shapes and gives meaning to Plato’s endeavor.
Once the summit is reached, the entire written work that documents the trajectory takes on the apparent form of a “doctrinal system” that can then feed “rational discussions” for centuries on end. The discussions can be more or less useful, but in most cases they do not substantially add to the original philosophy. When Alfred North Whitehead observed that twenty-four centuries of philosophy were nothing more than a collection of footnotes to Plato and Aristotle, he meant exactly that. Since those discussions are the livelihood of academics, some of them are foolish – or vain – enough to think that they constitute “the” philosophy, but that is as if, in a book, the footnotes took the place of the text.

“The” philosophy is not rational discussion or a doctrinal system. It is an intellectually differentiated symbolic structuring in which the world of experience must acquire a visibility, a clarity, that it did not have in the raw material of experience or in its previous cultural elaborations (social, political, artistic, religious).34

That is why art often anticipates philosophy. In the case of the scholastics, this could not be more evident. The examination of this point will show how far Mr. Lemos and Mr. Pinheiro, together or separately, and all those who think like them, are from understanding the relationship between the great philosophies of the 13th century and the practical teaching that preceded them in the cathedral and monastic schools.

Let’s take it step by step.

What was the greatest and most characteristic achievement of the scholastic philosophers? The creation of the Summae – a completely new literary genre, appropriate to the expository needs of Christian thought, which, after having responded to external and internal doubts for twelve centuries with loose, sporadic, and unsystematic apologetic and polemical improvisations that accumulated in a confused and unmanageable mass, found itself compelled, by the very demands of teaching and other factors that are not relevant to analyze here (including the impact of Arabic philosophy), to undertake a gigantic effort of organization and unification.35 The literary formula found was the “Summae”.

The first great Summa was that of Alexander of Hales, who began writing it in 1231 but left it incomplete. I don’t know the exact date of the second, but it was not published before 1245, when St. Albert began teaching at the University of Paris. In 1260, St. Bonaventure began his lectures on the teachings of Peter Lombard, from which he would extract a summa under the title “Commentaries on the Book of Sentences of Peter Lombard.” Finally, the genre reached perfection with St. Thomas Aquinas’ Summa contra Gentiles (1264), followed closely by the Summa Theologica, written between 1265 and 1274.

The structure of the Summae has no precedent in the history of literary genres. They are composed of hierarchically organized parts, ranging from the most universal principles to their applications to particular beings, like one long deductive reasoning. But each part is subdivided into “questions.” Once a question is posed, the author gives a brief overview of the answers previously offered by various philosophers and theologians, updating the status quaestionis. He then adds a few other possible answers to the list and proceeds to examine the pros and cons of each one until reaching a conclusion. Finally, he formulates and responds to objections, reinforcing the conclusion, which then serves as a premise for the solution of subsequent questions.

Technically, this structure consists of a long analytical discourse composed internally of various dialectical discourses. It thus articulates two modes of discourse that Aristotle had carefully distinguished: one devoted to constructing demonstrations and scientific proofs, and the other seeking, among the uncertainties of debate and experience, the special premises regarding the various points under investigation. At a deeper level, this articulation synthesizes two opposing mental attitudes: the dogmatic or constructive attitude and the zetetic or investigative attitude. Nothing similar can be found in all previous philosophical literature.

Through this original combination, the Summae synthesize and unify not only the set of scientific, theological, and historical data relevant to Christian doctrine but also all the techniques that comprised university education, which were thus protected against the possibility of anarchic independent developments and harmoniously integrated into the overall order of knowledge.

Furthermore, the Summae inaugurated the practice of rationally distributing texts into parts, sections, chapters, paragraphs, and sub-paragraphs, a concept entirely unknown in antiquity, which would become universalized in the West to the point of banality. However, while today this division corresponds more to editorial conventions or pedagogical arrangements, in the Summae, it had a much more ambitious and organic function. The organization of the text rigidly corresponded to the structure of the analyzed realities, so that the work as a whole functioned as a symbol of the hierarchy of the divine, cosmic, and human world. The dialectical analyses spread in many directions, going into the minutest details (the principle of manifestatio, “externalization” or “clarification”) and then returned to unify in partial conclusions, which, in turn, articulated with each other through the principle of concordantia, or hierarchized reconciliation of multiple contradictory possibilities, serving as pillars supporting the structure of the whole.

The somewhat idealized image we have today of the hierarchical organization of medieval university studies reflects less the reality of daily teaching than the structure of the Summae, in which the various aspects of this education converge towards a culminating point that transcends them.

For example, the practice of disputatio trained students in the art of orderly dialectical confrontation, while the commented study of the sacra pagina instilled in them the necessary knowledge of the Scriptures, but it was only in the Summae that these two aspects were articulated within the unity of a comprehensive conception.

If we ask where Alexander of Hales and his successors obtained the inspiration for this unique and powerful endeavor, we find no written or even oral source. Plato developed Socrates’ dialectical technique, but his work does not contain the art of dogmatic construction. Aristotle adds to dialectic the technique of scientific, logical-analytical proof, but he leaves no written example of a logical-analytical discourse with a beginning, middle, and end. All that remains of him are class notes, constructed on the basis of investigations and dialectical confrontations, in a fiercely skeptical spirit. The question of what a dogmatic construction of Aristotelianism would be – the formal and hierarchical structure of the “Aristotelian doctrine” – is a problem with which successors and commentators wrestle to this day without finding any satisfactory solution. To give an idea of the difficulty: no one has given a definitive answer to the question of whether Aristotle’s mature philosophy is a coherent development of his youthful Platonism or a complete denial of it and the beginning of a different philosophy.36

In the philosophical bibliography from that time until Alexander of Hales, nothing resembling the structure of the Summas is found. There are only two alternatives: either creation ex nihilo or inspiration received from a non-philosophical and non-literary source. Since the first hypothesis is a divine prerogative, we must turn to lived experience – to the impact that the scholastic philosophers received from the culture of their time – to ascertain whether something in it may have suggested the idea of structuring the Christian worldview as a synthesis of all available knowledge and intellectual techniques, in which the countless skeptical quests launched in various directions would gradually converge and unify into one great dogmatic construction. The only precedent comes not from philosophy or from any literary genre; it comes from the arts, especially architecture.

In 1948, the great art historian Erwin Panofsky presented in the Wimmer Lectures the thesis, published in 1951 under the title Gothic Architecture and Scholasticism,37 according to which the Gothic style of the great medieval cathedrals reflected the influence of scholastic thought, illustrating – in its verticalism, its use of light, and the interlacing of arches supporting the vaults – the same principles of manifestatio and concordantia that structured the Summas.

The thesis was never fully accepted nor fully rejected. The first problem with it is that there was no evidence that the anonymous architects of the cathedrals had ever studied scholastic philosophy. The second and main problem is that the essential elements of the Gothic style had already been outlined – in the Abbey of Saint-Denis and the cathedrals of Laon, Bourges, and Chartres – long before Alexander of Hales began drafting the first outline of a Summa in 1231. And the new literary genre only approached its maximum splendor starting in 1264, with Thomas Aquinas’ Summa contra Gentiles, when one of the greatest masterpieces of the Gothic style, the Sainte-Chapelle, had already stood in plain sight at the center of Paris for twenty-three years (it was only the following year that Thomas began writing the Summa Theologica).38 It is possible that scholastic thought influenced the architecture of cathedrals built after the 13th century, but until the time of Thomas Aquinas, if there was any influence, it was in the opposite direction.

Top left: Sainte-Chapelle; top right: Laon Cathedral. Middle left: Bourges Cathedral; middle right: Basilica of Saint-Denis. Bottom left: Chartres Cathedral.

However, even though the theory, as noted by its critics, failed to establish any causal connection between scholastic philosophy and Gothic architecture, it contained a portion of truth that no one ever denied: there was evident structural similarity between Gothic cathedrals and the Summas. Both the cathedrals and the Summas appeared as large symbolic summaries of the Christian conception of the world, and the order of their internal structure was practically the same: the arrangement of parts, the connections between the smallest details, the order of the whole, the pursuit of luminosity and transparency, the movement of ascent and descent between various levels or planes of reality, the mutual support between opposing arches as dialectical theses articulated in their contradiction – all displayed, in stone as in words, the same principles of manifestatio and concordantia. It is not an exaggeration to say that the cathedrals were like a graphic scheme of the structure of the Summas. Furthermore, both the new architectural style and the new literary genre were marked by the originality of their principles, shaped for the first time according to specific needs of Christian teaching, irreducible to any previous examples. The similarities were so numerous and fundamental that they could not be reduced to a mere “analogy”; it was necessary to speak, instead, of homology, of identity of structures.

This became even more evident when, in 1998, José Ignacio Cabezón, professor of Tibetan Buddhism in the Department of Religious Studies at the University of California, discovered that an identical homology existed between the treatises of Buddhist scholasticism and the religious temples of medieval Tibet.39 In both cases, Cabezón pointed out, it was just as impossible to establish any direct causal connection as it was to deny the existence of a structural similarity that went far beyond the possibility of mere coincidence.

Without going into the details of the controversy, some observations seem evident and practically unquestionable to me:

  1. If the architects did not study scholastic philosophy, and the Gothic cathedrals preceded the great Summas, one cannot speak of the influence of the latter on the former, but precisely the opposite.

  2. The word “influence” would adequately describe the transmutation of a philosophical doctrine into a work of art, but not the reverse. Here, one can only speak, more vaguely, of “inspiration.”

  3. The anonymous architects of the cathedrals were not university students. They learned the construction technique in the craft guilds and Christian doctrine in the monastic and cathedral schools, most likely in the same cathedrals where they worked or would work as builders. Their architectural conceptions did not reflect scholastic doctrine but rather the Christian culture of the monastic and cathedral schools, to whose richness and strength the stone bore witness.

  4. Owing to the novelty of the style, the contrast between its luminosity and the darkness of previous temples, the dazzling beauty of the stained-glass windows, and the multitude of sculptural and pictorial details wonderfully integrated into the whole – and because they seemed to defy common sense by standing on apparently fragile structures – the cathedrals attracted visitors and pilgrims from everywhere: they constituted, literally, the most forceful visual impact to which the European population had been subjected in over a millennium.

  5. It is practically impossible that someone in Paris, at the time of Albert and Thomas, did not know about the Sainte-Chapelle or, knowing about it, remained immune to the impact of the building on their feelings, imagination, and religious devotion.

  6. It is implausible that highly qualified and devout thinkers, imbued with the ambition to give greater intellectual visibility to the symbols of faith, remained immune to the imaginative impact of those treatises on Christian cosmology in stone and did not obtain some inspiration and motivation from them to attempt a similar endeavor at the most differentiated level of theoretical conceptualization and doctrinal exposition, moving from the mute language of buildings to the full verbal explication of the Summas.

I often use the geological term “extrusion,” and the corresponding verb “to extrude,” to describe the process of extracting and exposing the cognitive substance of experience. As we learn from Aristotle, and as no one has refuted to this day, abstract intelligence does not operate directly on sensory data but on images engraved and repeated in memory. Therefore, it is normal for this process, at the level of cultural history, to occur in two stages: first, experience is condensed into the compact symbolic forms of art, myth, and ritual, and only later verbalized, when possible, as concept and theory.40 In other words, artistic creation shapes and delimits the imaginative ground upon which the theoretical constructions of science and philosophy will rise. The examples that illustrate this constant are innumerable: the tragedies of Aeschylus and Sophocles, which provided Socrates and Plato with the model of the eternal laws; Giotto’s perspective, without which Galileo’s and Kepler’s new cosmology would be inconceivable; Dante’s Divine Comedy, which inaugurates the possibility of the modern intellectual as sovereign judge of society; Balzac’s Human Comedy, from which Karl Marx obtained his first vision of the structure of capitalism; and so on. It is therefore not strange to conclude that the visual and human impact of the Gothic cathedrals first gave the scholastic philosophers the inspiration for the extrusion of the intellectual content implicit in the Christian imaginary, to which they gave, for the first time, such complete and integrated visibility.41

If the architectural and pictorial imagination of the builders engraved in stone and glass the richness of inner experience obtained in the monastic and cathedral schools, it must be emphasized that this happened only at a stage when these schools were already yielding, as models of education, to the success of the emerging universities, where the sophistication of intellectual techniques developed pari passu with the degradation of customs and the loss of religious fervor. More than a hundred years after the Gothic remodeling of Saint-Denis, the intellectual construction of the Summas took place at an even more advanced stage of the dissolution of the Christian cultural synthesis, foreshadowing, for the following two centuries, the spread of the nominalist fashion, the flourishing of countless heretical movements, and the degradation of scholasticism itself into a suffocating doctrinal formalism. None of this is strange. While the richness of inner life is an everyday reality, the impulse to crystallize it in stone is not an urgent necessity. The Gothic cathedrals are, so to speak, the swan song of an educational modality whose days were already numbered. In the 12th century, as ever more impressive buildings rose, the envy of the angels descended from the heavens and turned into the admiration of the crowds.

It is even more understandable that the intellectual synthesis of the Summas only came to light at a time when the civilizational possibilities they condensed were already coming to an end. Just as the cathedrals fixed in stone the final appeal of monastic and cathedral education, the Summas represent the peak and, therefore, the final chapter of the great Christian civilization in Europe, much as the philosophies of Plato and Aristotle are the highest and ultimate expression of the dying polis. As Hegel noted, the owl of Minerva only takes flight at dusk.

In this sense, the great new creations that will represent the spiritual strength of extinct civilizations for future eras document the impoverishment of inner life and its replacement by externalized and visible testimony, bequeathed to future generations in the vague hope that one day the formula engraved in stone or in words can be unpacked and restored as lived experience, if not on a civilizational scale, at least in the souls of interested and capable individuals. The passage from the implicit to the explicit, from the compact to the differentiated, marks at the same time the glory and the end of civilizations. Apogee and decadence are not mutually exclusive terms but dialectical poles with internal developments marked by ambiguities and inversions.

The False Divorce of Science and Philosophy42

Numerous philosophy manuals, and some more prestigious works as well, report that in modern times various sciences that originated from philosophy separated from it one by one, acquiring an independent authority, even superior to that of the old mother and teacher, who, stripped of jurisdiction over so many subjects dear to her, ended up having to justify her survival by seeking new occupations or carving out a modest niche in the few remaining areas of the condominium, always fearful that these too might be snatched away sooner or later.

The description of this historical process is almost invariably underlined by value judgments, explicit or implicit, according to which (a) what happened had to happen; (b) it was good that it happened; (c) its results are definitive and irrevocable, leaving philosophy no choice but to accommodate itself to the fait accompli and seek more modest employment. I have never seen any attempt to justify these three assertions, which apparently must be accepted without any critical analysis. Even less have I seen any philosopher so much as conjecture the possibility that this state of affairs could be reversed, even in the very long term. I can only conclude that the Hegelian doctrine of History as the supreme tribunal of reason has deeply permeated even the brains most hostile to Hegelianism. The unfolding of events, instead of being merely “the set of unintended results of our actions,” as Max Weber saw it, becomes the rigorous syllogistic unfolding of a secret, divine logic that leads inexorably to undeniable conclusions. Endorsed by the consensus of the bien-pensants, the sentence of the tribunal of History is transfigured into universal dogma and standard of sanity, threatening with ostracism or psychiatric internment those who dare to doubt it.

Philosophy, which began as a critical analysis of established truths, now seeks to adapt obediently to the status quo, and considers itself very fortunate when it manages to fit into some small empty space where it causes no inconvenience to anyone.

Many philosophers, in a desperate effort to justify the survival of their profession in a terrain ruled by the empire of the sciences, have gone so far as to exclaim, like the recently deceased Sir Michael Dummett: “Philosophy does not advance our knowledge: it clarifies that which we already have.”43 In vast provinces of university philosophy, this phrase – like others of the same tenor – is considered the final expression of the undeniably obvious, and those who subscribe to it even show some satisfaction in stating it. None of them seems to have realized that a situation in which human intelligence is divided between two heterogeneous activities – one producing knowledge that it does not need to understand, the other engaged in understanding ready-made knowledge in which it cannot interfere – is the summary description of an unprecedented cognitive catastrophe. It is as if, in the fable of the blind man and the cripple, the blind man were too weak to carry the cripple, and the cripple, in addition to being crippled, were mute, unable to show the blind man the way.

Why, after all, so much effort to draw a clear border between “philosophy” and “sciences,” if only a few centuries ago a Newton or a Leibniz felt perfectly at ease amid a joyous and multicolored mixture of jurisdictions? The separatist process quite evidently reflects the functional needs of the expanding university bureaucracy more than any organized vision of the structure of reality and its objective subdivision into distinct “regional ontologies,” as Husserl called them, each with its respective epistemological status. The various university chairs and departments cannot merge at will without causing crises and corporate protests, but the dimensions of reality do not cease to interpenetrate and merge regardless of academic regulations, rectors’ decrees, and career plans. The fact that, a century after the birth of the analytic school, the question of borders still resurfaces in Dummett’s 2001 lectures,44 shows that separatism, to the same extent that it seeks to impose itself on the public as the final solution, has, from within, no assurance of itself.

What happens, in substance, when a science “separates” from philosophy? What does this proclamation of independence consist of, in the real world and not in the sphere of pure concepts?

Philosophy, as it appears in Socrates, Plato, and Aristotle, is characterized by its entrance into the problems it investigates without bringing any ready-made method, any previously established concept, and indeed not even standardized questions. It steps onto the field, quite literally, unarmed. It begins with astonishment (thambos) at the reality of experience, and by appealing to all the cognitive resources it can find between heaven and earth – memory, imagination, logical reasoning, dialectical confrontation, prevailing opinions, travel reports, medical precepts, myths and poems, and even the rhetorical tricks of the sophists – it laboriously seeks to discover what are the most viable questions, the most appropriate descriptive concepts, the most productive methods and, finally, the basic principles from which the questions, once purified and formalized, can be answered with relative certainty.

Thus, it traverses the entire path from raw experience to its transfiguration into intelligible conceptual forms organized into coherent discourse.

Little by little, in a process that runs from the fourth century BC to the beginning of the modern age, the various domains of knowledge are articulated into a system, concepts crystallize into repeatable formulas, and methods stabilize into logical and dialectical routines, consecrated in university teaching programs.

This does not mean that the initial problems have been resolved. Time and again, constantly expanding experience brings new questions that the established methods do not encompass, old questions reveal aspects that had escaped ancient philosophers, or, even more irritatingly, the most perfect reasoning leads to intolerable contradictions, showing that some subtle error, often not merely logical but of perception and abstraction, had escaped unscathed along the way. It is then necessary to start all over again from the base, drawing from experience, like the Greek pioneers, the rudiments of the possibility of satisfactory knowledge.

Whatever the case, by fits and starts, the process of stabilization moves forward, to the point where the real and personal experience of the abstractive climb is spared generations and generations of students, who no longer have to apprehend for themselves the intelligible forms in the living mass of present objects, but receive the ready-made concepts of the philosophical tradition. Progress in philosophy is, therefore, an ambiguous achievement, in which the sense of concrete reality (and of the relationship between the concrete and the abstract) is often lost in the same measure that the arsenal of received concepts, ready for use in philosophical discussions, is enriched. Abstract concepts acquire a phantom-like life of their own and begin to obscure what they should reveal. Time and again there arise calls for a return to concrete realities, to infuse new blood into these skeletal bodies that haunt philosophical discussions. The most famous of these appeals were Abelard’s and Ockham’s nominalism, Bacon’s experimentalism, Descartes’ methodical doubt, Kierkegaard’s existentialism (or pre-existentialism), and Edmund Husserl’s cry, Zu den Sachen selbst! (“To the things themselves!”), which at the beginning of the twentieth century inaugurated the phenomenological school. In each of these cases, however, the announced return to the concrete resulted in a further advance of the abstractive climb and an intensification of the stabilizing process.

There was a moment when the abstraction-stabilization process took a formidable leap. It was when, in the name of experimentalism itself, the last residue of concrete experience was suppressed, leaving, of the variety of sensible data, only the dry and bare scheme of measurable appearances. The artisans of this surgical amputation were Bacon, Galileo, Descartes, and John Locke. Excluded from scientific observation were the qualities that can only be known through subjective sensations, variable from individual to individual: color, taste, smell, sound. What remained were those that supposedly reside in the things themselves and can be determined with certainty by all human beings unanimously: shape, extension, movement, and number. These are the primary qualities that define physical reality. The others, the secondary, exist only for the individual psyche that apprehends them.

Focusing exclusively on “primary qualities” not only allowed for precise observations and their communication in a standardized language, but also made it relatively easy for the observer to make generalizations that could quickly be checked by other scholars with little margin for error, at least apparently.

Soon the set of procedures for observation, measurement, and verification standardized and stabilized into what would come to be called the experimental method – a system of uniform rules that could be followed by all students of nature, provided they agreed to set aside the “secondary” qualities, that is, the vivid impression of the observable world, and to stick, so to speak, to the mathematical skeleton of things and beings.

The immediate advantage this represented, from the point of view of the quantitative increase of knowledge, was patent: the new method constituted a more or less fixed and standardized protocol of uniform cognitive procedures that could be taught and endlessly repeated, producing results that integrated into the general scientific-philosophical discourse without major difficulties, opening in the heart of European civilization an entire field of homogeneous erudite intercommunication, foreign to the semantic difficulties that, for two millennia, had been a nightmare for philosophers. Needless to say, like a powder trail, the new method spread throughout Europe a fever of investigation and discovery such as had never been seen in human history.

The new method, of course, did not fail to bring with it certain difficulties. Some of them were perceived almost immediately. G. W. Leibniz, himself an enthusiast and practitioner of the method, soon noticed that the sum of the “primary qualities” was not enough to produce a thing, a real being. Besides having figure, extension, motion, and number (quantity), the object also needed to “be” something, to possess internal defining characters that differentiated it, as genus and species, from all other objects. It needed, in short, to possess what the old Aristotelian school called “intelligible form.” A satisfactory answer to this objection never appeared.

Other difficulties took centuries to be clearly formulated. One of them is what Prof. Wolfgang Smith would later call “bifurcation.”45 The division between primary and secondary qualities – and therefore between the aspects of reality to be included in or excluded from scientific observation – corresponded to what Descartes had called res extensa and res cogitans, or “matter” and “thought,” the first made up of figure, extension, motion, and number, the second made up entirely of inner states of the human being, such as reasoning, memory, feeling, etc. At the same time, however, Descartes saw in logical-mathematical thinking the supreme modality of human intelligence, the quintessence of res cogitans. Now, the so-called primary qualities were precisely those that only mathematical intelligence, and not the senses left to themselves, could grasp in objects, through measurements and comparisons. The very word “measurement” betrayed its origin in the Latin mens, “mind.” The result, inexorably, was that the terms of the new methodological equation were inverted: everything that was most characteristically mental, or rational, in objects came to be called “matter” or “body,” while the truly corporeal, which could not be known by pure thought and reached us only through the impact of the five senses, came to be labeled “mental.” The “world of Mr. Descartes,” as the book in which Descartes expounded his conception of nature was then commonly called, was nothing more nor less than a world turned upside down.

The experimental method, however, carried within it a mechanism of automatic immunization against the serious examination of these difficulties (and of countless others that are not relevant now). Insofar as, by definition, the field of study was limited to the measurement and comparison of the “primary qualities,” the examination of their relationship to the secondary ones, or to anything else in the universe, including the true ontological status of the objects of study, was eliminated a priori from the horizon of attention, and the investigators owed not the slightest account to the objections of the discontented. The difficulties, in short, could be swept under the rug without disturbing the triumphant march of investigations and discoveries.

Moreover: the new method brought with it an increase in mathematical precision that also automatically and inexorably fostered the progress of technology in all the sectors of its virtually unlimited application – war, industry, medicine, agriculture, private and public administration, etc.46

In a few decades, machines and equipment had so changed the visible face of the world that they lent apparent credibility to the notion that “nature” was indeed what Descartes said it was: the organized system of “primary qualities.” Leibniz and ontology could go whistle: the urgencies of homo faber so dominated the inquiries of homo theoreticus that the latter seemed mere erudite games of no interest for the general progress of humanity.

The difficulties and inconsistencies, of course, remained there, hidden in the background, and they did not cease to produce cultural and sociological effects that were invariably attributed to other causes or simply brushed aside. One of them was the advent of phenomenalism, which today we can see was one of the greatest intellectual disasters in human history. It happened that, unable to account for the ontological status of the objects they investigated, but increasingly uninterested in doing so, the practitioners of the new method ended up assuming the deficiency as a positive quality, declaring that the deep nature of things was simply none of their business: all that interested them was the mathematicized organization of appearances (“phenomena,” from the Greek phainesthai, “to appear” or “to seem”), so as to be able to manipulate them technologically, producing repeatable and desirable effects. There is no need to emphasize the powerful economic interests that supported this new vision of things, stimulating phenomenalism everywhere along with the fundamentally unjust discrediting of the old philosophy. As odious as Mr. Antonio Negri seems to me in other respects, I have to admit the fundamental accuracy of his thesis that makes Cartesianism a decisive ideological instrument in the rise of bourgeois power.47

Since then, the most dramatic and unavoidable philosophical questions have been excluded from the field of “serious” scientific attention and left to the curiosity of eccentric thinkers. That many of these, like Leibniz, Pascal, and Newton himself, were also among the most prominent practitioners of the new method was retroactively explained as a biographical detail of no great importance in the general progress of knowledge.

It was from that moment, and only from it, that the formal separation between "science" and "philosophy" occurred, the former reigning sovereign over the world of "phenomena", the latter insisting on questions about the nature of reality that no longer interested anyone. An obvious consequence of this separation was that, with "science" no longer able or willing to claim an explicit ontology in its favor, the divisions between the fields of the various sciences, the delineation and therefore the definition of their objects, their methods, and their validation processes, no longer had a basis in objective distinctions – "regional ontologies" – carved out from the living body of experience. The solution found for this difficulty was a brilliant arrangement, but a fundamentally irresponsible and disastrous one, a true intellectual bargain that today we would call the supreme workaround, the mother of all workarounds. The one who best articulated it in words was Immanuel Kant, but it was already scattered through the works of Hobbes, Berkeley, and Hume, and implicit in scientific practice at least since Galileo. I will call it, for the purposes of this study, methodocracy. It can be summarized by the following rule: it is not the object that determines the method, but the method that determines the object. In other words, the field of a science does not correspond to a set of beings, things, or facts objectively distinct, separated from others by real boundaries, but simply to the set of topics that prove most docile to the methods of that science, whatever those methods may be and wherever they came from. Thus, for example, modern psychology can carry on its work without having the slightest idea of what the "psyche" is and without even knowing whether it exists. The diversity of opinions on this topic spans a range from Carl G. Jung, for whom everything in the world is psyche, to B. F. Skinner, according to whom there is no psyche and everything we call by that name is the misleading appearance of certain neurological mechanisms. So what is the object of psychology? There is no other way to define it than as "anything that psychologists study". It goes without saying that this state of affairs is practically an invitation to arbitrariness and charlatanism.

Cartesian bifurcation, phenomenalism, and methodocracy are three chronic inconsistencies of modern science, and they do not affect only the roughest and most imprecise sciences. On the contrary: psychology, anthropology, and sociology – not to mention political science – seem to live quite well with these difficulties, without feeling any great need to solve them or even to discuss them. It is precisely in the most developed sciences that these and other handicaps make themselves felt most loudly and painfully, to the point where no professional in the field has the cynicism to ignore them completely. The supreme example is physics, the greatest collector of the glories and victories of the experimental method. It is not possible to study even a little relativity, or quantum theory, without bumping at every step into thorny questions that the experimental method, by itself, cannot answer, and that force the scientist to delve into philosophical – sometimes pseudo-philosophical – considerations in an effort to understand what he is doing.

The reason for this is simple: the more precision is achieved in the description of a phenomenon, the more emphatic becomes the contrast between the technical mastery exerted over it and the daily realization that, in the end, one does not know what it is. The more a science is in an infantile stage – crawling, nebulous, and confused, unable to establish the verification methods that would allow it to discern constants and announce rigorous predictions – the stronger is the tendency to keep trying and trying, accumulating hypotheses, observations, and numbers, in the hope that one day the general laws will appear and the facts will confirm them. In this state of affairs, it is understandable that questions of ontological foundation should be left for later, perhaps for "Saint Never's Day", for the simple reason that no precise object has yet been obtained that could be ontologically grounded. The occasional philosophical discussions that emerge in this state of affairs sound like nothing more than interesting chatter, good only for adorning with a veneer of sophistication the bad conscience of the scientist who knows that what he holds in his hands is nothing but a fluid object, poorly defined and experimentally uncontrollable. An "ontology of social being", for example, as attempted by György Lukács in the 1970s,48 was nothing more than a premature ejaculation, attesting to the impotence of Marxist sociology. When all the predictions based on class struggle and surplus value turned out wrong, when even the definitions of the basic terms proved inadequate and the Marxist historian E. P. Thompson found it impossible to distinguish proletariat from bourgeoisie by economic criteria, it became evident that the Marxist "science" of society had in its hands no accurately described object whose ontology, whose place in the general structure of being, could then be probed. But when, on the contrary, the object is as well described as the behavior of certain subatomic particles in quantum physics, to the point that this science can rightly boast that no phenomenon has been more accurately measured, observed, proven, and meticulously tested thousands of times, then science can no longer advance a step without stumbling on the fateful question: "But, after all, what is it? Quid est?" At this point, the boundaries between scientific investigation and philosophical speculation blur as if by magic, and physicists begin to produce, in droves, books on philosophy, or almost on philosophy, some bad, others good, sometimes more serious than the works of professional philosophers.

The same thing happens in genetics, another successful, mature, and triumphant science. It is impossible to have before oneself a phenomenon as well described as the genetic code without wanting to know why it is as it is, what the meaning of its existence is, and what consequences its discovery has for the general conception of the world, of humanity, and of culture. It is equally impossible to prevent the mere fact of asking these questions from suggesting new experimental research, exerting a beneficial influence within the scientific territory itself. I had the joy of receiving direct and personal confirmation of this when one of today's most prominent geneticists, Laurent Danchin,49 wrote to me, years ago, saying that my book Aristotle in New Perspective (1995), which he had read in an unpublished French translation, had helped him in his investigations into the origin of life. How was such a thing possible? How can a reinterpretation of Aristotle's Organon be useful in genetic research? The answer is simple: the task of philosophy is not limited to "understanding the knowledge we already have", as Michael Dummett presumed in an exaggeration of typically Anglo-Saxon modesty; rather, the effort of understanding itself, however distant it may be from the laboratories, interferes in scientific practice, suggesting new theoretical articulations, new connections between concepts, new hypotheses, new lines of investigation. Conceptual analysis and laboratory work remain formally distinct, as they already were in the time of Newton's "natural philosophy", but there is between them a continuity, a solidarity, that evokes the difference, so well traced by the scholastics, between "distinction" and "separation". They are distinct moments, but chained in a unitary effort that no longer allows a watertight separation between "knowledge" and "understanding". Paraphrasing the Christian motto, the guiding line of this effort is: nosce ut intelligas, intellige ut noscas – "know in order to understand, understand in order to know".

The current state of affairs in the most advanced sciences, with their fruitful interaction of empirical research and philosophical analysis, suggests a return to the basic question: What is "knowledge"? Unable to scrutinize this question in detail here, I will go straight to the answer I usually give in my courses: knowledge is the transfiguration of raw experience into intelligible forms articulated in coherent and comprehensible discourse. But the comprehensibility of the discourse itself is one thing; that of the materials of the initial experience, which are the reason for the whole cognitive effort, is another. The former, evidently, is not enough: it is necessary that, through discourse, one arrive at an understanding of the experience itself. Each stage of this transfiguration "is" knowledge in the potential sense, but it is not "knowledge" in the full and final sense. From this perspective, the results of scientific research that are not integrated into an adequate understanding – albeit partial and provisional – of their ontological status and their place in culture are not yet properly "knowledge": they are potential knowledge; they are materials, pieces, parts, and stages of possible knowledge, which will only materialize at the moment of "understanding", however problematic and incomplete it may be. Philosophical understanding is the ultimate goal of the scientific effort, which only in it is perfected – or should be perfected – as the effective victory of the human intellect over the confusion of things. If the conquest of this understanding often proves difficult and problematic, this justifies neither that the experimental search should stand still waiting for it, nor that the experimental stage should be elevated to the condition of final and autonomous goal of the cognitive process, as if understanding were just an additional ornament – or an exclusive occupation of the "philosophy" departments, of no importance to those of "science".

By the way, what is “science” in the end? Here I use the word “science” in the modern sense of systematic experimental knowledge, and I provide in a compact format the answer I have been presenting in greater detail in my courses and lectures: within the set of philosophical inquiries, “science” is the partial and provisional stabilization of certain areas of investigation that, for a longer or shorter time, can be subjected to a homogeneous treatment according to a more or less fixed protocol of experimental procedures, without the need for further ontological grounding, until their results reach the level of perfection at which it becomes necessary again to seek this foundation and the science in question reintegrates, with all its results, into the general panorama of philosophical discussions.

Although the formulation in words is mine, it was not I who gave this answer: it was the evolution of the sciences in recent decades. It was they who brought philosophy and science closer together, showing that their divorce had been only a provisional stage, explainable by the very incipient state in which certain sciences were, and destined to dissolve spontaneously as soon as these sciences reached a certain level of maturity.

Richmond, VA, January 31, 2012

Appendix: Philosophy and Apriorism

In his otherwise memorable lectures on “The Logical Basis of Metaphysics”50, Sir Michael Dummett starts with two premises. First: philosophy should address certain questions of general interest, such as “Do we have free will? Can the soul or mind exist outside the body? How can we distinguish right from wrong? Is there a right and wrong, or do we simply invent them? Can we know the future or affect the past? Does God exist?” Second: philosophy should answer these questions through the use of pure logical reasoning a priori.

Both premises are mistaken. On the one hand, some of the questions mentioned are much more accessible to experimental methods than to any a priori analysis. Whether there is mental activity outside the body is, quite evidently, a question of fact, not of principle, which can only be resolved – and has, in fact, been resolved – through observation and induction.51 Whether God exists or not is a perfectly futile or unsolvable question if it is not possible to observe and ascertain, through intellectually respectable means, the actions of such a God in the world.52 Why inquire about the existence of a God outside and above a universe that functions perfectly well without Him, and that needs Him only to appease philosophers anxious for a final explanation, a consolation that other human beings can comfortably postpone until Judgment Day?

On the other hand, the idea that philosophy should confine itself to a priori reasoning is a demand of classical rationalism – especially Spinozist rationalism – that would have seemed absurd to Socrates, Plato, and Aristotle. The founding fathers of philosophy, as I have emphasized before, freely used all the methods and resources available to them, including those least related to apriorism, such as history, the consensus of learned opinion, or myths. If this demand was absorbed in part by the analytical school (without much acknowledgment of its sources) and ended up reinforcing the restrictive concept of philosophy that still prevails in Anglo-Saxon universities, this is merely a historical-cultural phenomenon peculiar to a certain region of the globe,53 not a universally self-evident principle that should be taken as the mandatory starting point of all future philosophy, as Sir Michael seems to have imagined. To the British audience, the two premises I mentioned, and therefore the conviction that science and philosophy deal with separate realms under the respective headings of "knowledge" and "understanding", might have sounded like self-evident truths, leading logically to the unique mission that the lecturer assigned to all future philosophy: to continue working within the analytical school and thereby improve the logic of meaning, in the hope of one day providing a satisfactory a priori answer to the "grand questions." However, while one may applaud the pious intention of a Catholic philosopher proposing to give a constructive meaning to logical-mathematical tools that have so far been used only for negation and destruction, it is impossible to ignore the following observations:

  1. Taking as a premise a concept of philosophy created by the analytical school and then concluding that only the continuation of what the analytical school started remains is clearly a circular reasoning that proves nothing.

  2. It is also an uncritical and somewhat unconscious adherence to the Hegelian prejudice mentioned earlier, according to which the results produced by historical development must be accepted as proven philosophical theses (not to mention that, in this case, it does not even concern the historical development of the entire human species but only a specific social group, the academic philosophers of the analytical lineage).

  3. This is not the first time in history that someone has relied on improving logic as the path to solving major philosophical problems. The creator of logical science himself entertained some hope of this kind but had the good sense to recognize that many philosophical questions resisted analytical treatment and were better suited to dialectical confrontation, rhetorical persuasion, or even poetic imagination. Medieval scholastics and later Renaissance and Iberian thinkers doubled down on this bet. As pointed out by Mário Ferreira dos Santos, many of the supposed innovations introduced by the modern analytical school were already formulated centuries ago – and some of them were challenged – in the works of Duns Scotus, William of Ockham, Peter Abelard, and especially the great Spanish and Portuguese scholastics of the Renaissance, whom Leibniz (himself a logical innovator) greatly admired.54 What is the sense of attempting a third bet without first conducting a rigorous review of the results (or lack thereof) from the second? It’s not just a matter of historical awareness or the risk of reinventing the wheel or repeating old errors. It is a sociological phenomenon, the concealment of the past for the sake of a prestige dispute in the present. The analytical school is structurally prevented from revising the past because, in defining philosophy as a matter of pure a priori reasoning, it must sever its ties with the philosophical tradition, giving the impression that philosophy began with Frege and that everything before him is of merely historical interest, if that. After that, attempting to restore old philosophical questions within the very analytical framework from which they were expelled is to pretend that only the questions, not the answers, have survived from the past, and that everything is still to be done, with thanks to the heavens for the advent of analytical philosophy, which came into the world to save us from millennia of uncertainty.

  4. If the improvement of the logic of meaning serves any purpose, it is to solve philosophical problems as formulated by the analytic school itself, offering little utility for those who reject this formulation. If the two major attempts at formalization in the past had yielded acceptable results from the perspective of the analytic school, it would have no reason to exist as an autonomous line of thought, or at least it would not try to impose itself by concealing its predecessors. Aristotle, having himself created logical techniques, rarely employs them in his philosophical analyses, preferring dialectical confrontation (which, indeed, is an ancestor of the experimental method). As for the scholastics, it makes no sense to disregard the results they achieved and, at the same time, attempt to redo what they did, without even trying to justify the expectation that what supposedly did not work once will work now.

Coherence and Integrity55

THE PREVIOUS CHAPTER could give rise to countless others, so many are the consequences it announces and the questions it suggests. One of these is: what is the importance of logic in the formation of a philosopher? In a way, this question has already been answered by the very unfolding of historical facts: philosophy existed, and great philosophy – the greatest of all – a generation before Aristotle first formulated the rules of logic. Logical thought is, of course, a natural ability of the human being, and since the most remote times philosophical speculation has made use of it almost instinctively, but logic as an explicit technique appeared only when philosophy, without it, had already reached its highest peaks, never surpassed by subsequent evolution. When Alfred N. Whitehead said that the history of philosophy is nothing more than a collection of footnotes to Plato's writings, he included in this, of course, the whole of Aristotle's philosophy. Just as that philosophy is only the advanced exploration of paths already opened by Platonism (and the philosopher from Stagira is the first to recognize it, referring to himself as one of "us, the Platonists"), so the tekhne logike is nothing more than a special branch of Aristotelian philosophy, which infinitely transcends it and is in no way determined by it, neither in its expository form nor in its inner meaning.

The coherence of discourse, the object of logic, is indeed important, but only as an externalized expression of a deeper coherence: the consistency of the perception of the world, manifestation, in turn, of the unity and integrity of the soul – the internal balance of the spoudaios, the mature and maximally developed man, self-aware, master of his inner universe, capable of seeking, if you allow me to quote myself, “the unity of knowledge in the unity of consciousness (cognitive and moral) and vice versa”.

Separated from this background, the cult of coherent discourse becomes a mere fetishism, hypnotically attractive as all fetishisms are, at the risk of erecting the most sophisticated intellectual constructions on top of a poor or deformed perceptual base. That so many philosophers notable for their contributions to logic have descended to the level of the most crushing puerility when they left the realms of pure formalism and ventured to deal with substantive problems of history, morality, religion, and politics (Wittgenstein and Russell are exemplary cases) is not a marginal detail of their biographies, but a sign that the search for the integrity of discourse can sometimes be a camouflage used to cover a fragmentary and dispersed consciousness, unable to answer for itself before the realities of life.

Aristotle was always aware that logical discourse does not arise in the air, but rises on top of a whole kaleidoscope of perceptions and memories that do not yield to the impulse of logical formalization until after a series of very laborious purifications, which run from poetic language (very well defined by Benedetto Croce as the expression of impressions), through rhetorical choices and dialectical confrontations, to the formalism of logical demonstration, incapable of encompassing more than a minimal fragment of human experience (I wrote a whole book about this and need not repeat myself). When the roots that logical reasoning has in less abstract modalities of discourse (and these in the complexity of the living soul) are lost from view, the progress of formalization risks becoming a pretext for an almost demented cognitive irresponsibility, all the more harmful the more it is adorned with imposing technical perfections.

Not coincidentally, the philosophical schools that privilege above all the logical analysis of the standardized language of the sciences and of "ordinary language" (often composed of banal sentences invented ad hoc by the philosopher himself, of the type "the broom is behind the door") avoid confronting the language of great literature and of revelation, the only ones in which the maximum potentials of speech are expressed and, therefore, in which the true nature of language shines through. It was for this reason that, in his famous confrontations with Ludwig Wittgenstein, the brilliant literary critic F. R. Leavis, who approached language only through real examples taken from the complexity of the social fabric and the literary heritage of the centuries, ended up defining himself as an "antiphilosopher". In the Greek sense, he would be an even greater philosopher than his friend and antagonist. In an environment of "professional" philosophers attached to logical formalism, he could only be an "anti".

A certain difficulty in learning modern logic (nothing, however, that cannot be overcome with a little patience) threatens to give the student the impression that this is the maximum “seriousness” that human intelligence can achieve. But the integrity of logical discourse is truly serious only when rooted in the integrity of a responsible personal vision, a comprehensive and mature perception of reality, extended far beyond the possibilities accessible by logical proof.

The discipline of logical thinking is definitely not the supreme standard of philosophical honesty; it is only its most external, most "visible", and least essential expression. The philosopher who neglects the discipline of the soul while indulging to the full in logical coherence is like a capo mafioso who, living off gambling, pimping, and the murder of his competitors, thinks himself quite honest because he keeps his account books in perfect order.

The Starting Point of Metaphysical Investigation56

IF IT IS TRUE that all metaphysics must take as its foundation unquestionable truths, and if no one contests that beyond those very general truths that some say are formal and others metaphysical, such as the principle of identity, we only know as a certain and unavoidable thing the necessity of the death of our biological being and no other, then the acknowledgment of this mortality can and should constitute the starting point of all metaphysical investigation.

However, it is equally certain that, when the philosopher, instead of speaking in his own name and reasoning as if he were conversing intimately with an equal, as should always be done, takes the floor before an academic assembly to address it in the name of the intellectual or scientific consensus of his time, he can no longer adopt this starting point, for the simple reason that the academic community, or the literate class, not possessing the real unity of a biological being but only the potential unity of a mathematical whole or of an inductive universal, cannot responsibly become aware of its own mortality as the individual of flesh and bone does; instead, while recognizing in words the historically transient character of its currently accepted beliefs, it always tends to take as an implicit premise its own immortality, insofar as it always expects at least some of its beliefs to survive its time, since admitting the contrary would undermine the very authority with which it seeks, as a socially recognized power, to influence the shaping of the future. Moreover, while biological individuality has a maximum duration that is rarely exceeded, academic communities have none, and, not knowing how long they are to last, have no choice but to assume that they will last forever, even while knowing they will not. The consequence is that all philosophical speculation based on the scientific or literate consensus of a given era carries within it a certain coefficient of duplicity and falsehood, insofar as it cannot, or can hardly, avoid taking as a premise the absurd and self-contradictory belief that a duration which is merely difficult to calculate in practice may be admitted as objectively unlimited.

The individual of flesh and bone, being able to admit not only his own death but also the practically infallible certainty of being forgotten and leaving only faint and transient traces in the history of this world; being even obliged to admit it, for the reason that the awareness of his biological individuality is one and the same thing as the recognition of his physical mortality and the space-time limits of his form of existence, and being, moreover, obliged to recognize that these limits are bounded by an average durability that is hardly surpassed, is, for these reasons, practically obliged to admit as first truth the unquestionable certainty of death, and to philosophize responsibly according to this infallible axiom, the only one, perhaps, which is at the same time, and inseparably, self-evident principle57 and fact of experience.

The individual is thus the custodian of at least one certain truth whose responsible consciousness necessarily escapes collective consensuses, and, in this sense, is the guardian of a kind, at least, of philosophical rigor, which is unattainable even to the most serious and devoted scientific communities. As a community, none can recognize that within a determinable average term it will have turned into dust; and, for this reason, none can seriously answer for their words before the tribunal of the consciousness of mortality.

For this very reason, the widespread belief that the judgments of individual conscience should be submitted for verification before the tribunal of the literate community has been a great misfortune of Western thought whenever it is not compensated by the admission of its necessary counterpart: the admission that only the individual conscience can be fully responsible for its own words, while collectivities, devoid of unitary biological life, always dilute their responsibility among the individual heads that compose them and, while claiming an authority all the greater as the number of their members grows, become in the same measure increasingly incapable of assuming moral, legal, or intellectual responsibility for whatever they believe or affirm; and, above all, can indefinitely evade, because of their indefinite duration, the admission of the only universally valid material premise of all metaphysical reasoning, which is the reality of death.

The collectivity, being unable to responsibly become aware of its own death, can however admit pro forma that of the members that compose it. But even this acknowledgment is not an act of consciousness, but rather the protocolary expression of the logical coincidence between the contents of various acts carried out, independently, by the individual members of the collectivity.

In this sense, the collectivity does not meet the optimal condition to begin metaphysical investigation, a condition that lies in the act of taking personal and responsible awareness of one’s own mortality. The academic or literate consensus therefore has less authority in metaphysics than the solitary meditator.

Immortality as a Premise of Philosophical Method58

IF WE ARE IMMORTAL, we must be so essentially, not accidentally. Immortality is then our true condition and the plane of reality in which we effectively exist. In this case, our present bodily life is nothing more than a minute fraction of our reality, a transient appearance that veils our true substance. Consequently, all knowledge that we can acquire within the boundaries of corporeal existence is just an appearance within an appearance. Even if it captures genuine portions of reality, it cannot ground itself but must seek its foundation in the sphere of immortality.

All this is very clear. What confuses things is that the term "immortality", in the current culture, has acquired the connotation of something that manifests itself – if it exists – only after physical death. Hidden here is a wholly absurd suggestion: that we are mortal in life but "become" immortal after death, as if death were a passage to a state of existence radically separate from, heterogeneous with, and incommunicable with the present life. It is on this assumption that all hope of purely immanent knowledge, without references to the "beyond", rests. If immortality exists, this hope is as absurd as the assumption that supports it. If we have a life that transcends all duration, this life transcends, and therefore encompasses rather than excludes, the slice of it immersed in duration. If we are immortal, we must be so now, in the present life, instead of being, so to speak, immortalized by death. Death cannot immortalize the mortal: it can only make manifest a pre-existing immortality and give the lie, in the same act, to the illusion of mortality.

But if we are already immortal in this life, it is clear that we cannot adequately know this life except in the light of immortality: mortal knowledge of mortal life is the illusory knowledge of an illusion.

The clarification of immortality thus becomes a primary demand of the philosophical method: either we demonstrate that immortality does not exist or, if we at least accept it as a hypothesis, we must base all the possibility of an effective knowledge of reality on it.

Demonstrating that immortality exists can be difficult, but proving that it does not exist is impossible: all proofs would be limited to what is accessible in the present life, in no way weakening the possibility that there might be something beyond it. However, the proofs of immortality lose nothing with this limitation, since the present life is within the immortal life and what is known of one can reveal something of the other.

The proofs, however, are of no use if, once obtained, they do not in any way modify the reflex habit of reasoning from the present life as if it were a closed and self-sufficient whole – a habit that can be based both on the denial and on the affirmation of immortality.

The very search for proofs that are scientifically valid, and therefore binding on the entire scholarly community, already tends to make present existence the measure of immortal life, since, on the scale of the latter, the human authority of the scientific community counts for absolutely nothing.

On the one hand, the scientific proof of immortality does not give anyone, by itself, a consciousness of personal immortality, let alone the strength to effect the transition of level from a cognition based on temporal experience to another founded on the sense of immortality.

On the other hand, anyone who has effected this transition does not need scientific proof of what was given to them in direct personal experience. They can use these proofs as pedagogical means to encourage others to seek an identical experience, or to silence adversaries of immortality, but these two goals are minor and secondary compared to the experience itself.

The expression “experience of immortality” is, of course, metonymic. It designates the object of the experience by one of its parts, implying that the part inescapably requires the existence of the whole. We should speak of experiences of extracorporeal cognition, or more appropriately supracorporeal, implying that, if consciousness operates outside and above the body, it has no reason to die when the body dies.

These experiences are not necessarily “paranormal”. Anyone can have access to them, provided that they prepare for it through a suitable series of meditations. In general, it is not about perceiving objects at a distance, or future, but about becoming aware of that which, in common and current perception, is already supracorporeal although it is not usually perceived as such. As soon as you become aware of the supracorporeal elements that permeate and underpin bodily perception, your notion of “I” will automatically change. When I say “become aware” I mean that there is something more than a simple act of isolated or even repeated perception. “Becoming aware” is something more than “gaining awareness”: it implies an act of intellectual and moral responsibility by which you commit yourself intimately not to allow the door opened to the consciousness of extracorporeity to close and the content assimilated there to dilute in the flow of bodily impressions until it is forgotten or at least loses all structuring force over your experience of “I”.

Existence and Possibility59

To read the main parts of Book I of the Summa Against the Gentiles, we must mentally put ourselves on the level of abstraction and universality required by the subject. St. Thomas there deals with the first origin of everything that exists. It’s not about imagining a “force” that somehow acts on “things,” as that not only presupposes the existence of things but erroneously defines the agent through a transitive notion, that of “force,” when it’s clear that the very idea of a transitive movement requires that of something towards which it transits. It is, rather, about understanding that, if “existence” is the state of that which exists, it itself cannot exist in this sense, as it would then be reduced to an existent among others. Nor can existence be understood as the sum or set of that which exists, as in that case it would have no attribute of its own other than those that are in existents or those that result from relations between them and, therefore, would be nothing in itself. To grasp the notion of existence you have to make an effort of imagination to conceive the total nonexistence of anything whatsoever. Suppress the cosmos, suppress History, suppress all real or unreal entities, suppress even human consciousness (starting with your own), and try to conceive what remains. Is it nothing? Yes, certainly nothing. But not absolute nothingness, because we know that something exists and, if something exists, it is because it is possible. With all existents excluded, a nothing remains, but a nothing full of possibilities. If you exclude even these possibilities, you will have declared that everything is impossible, but you know that something is possible, since something happened. The nothing that remains when all existents are suppressed is therefore not exactly a nothing, but a bundle of possibilities. Which possibilities? All that have been realized and all that can still be realized. This is what we call “existence”: the possibility that existents exist.
The possibility of existents does not exist as they exist: it exists independently of them – they depend on it. Moreover: the possibility infinitely transcends existents, as it also encompasses all possible relations between them. The set of possible relations between existents cannot be deduced from the sum of the attributes of all of them, as there are accidental possibilities that do not derive from these attributes. For each set of attributes of an entity, there is an immensely larger set of possible accidents around it, and these, if possible, are part of the possibility, are contained in that “nothing” that you found by mentally suppressing the totality of what exists.

The word “possibility” is used, on a daily basis, only as a measure of a conjecture we make about this or that entity, about this or that set of entities. But the possibility considered at the level of entities is one thing; the possibility considered in itself, above and before the existence of any entity, is another. In the first sense, the possibility is a relation between entities. In the second, it is the very constitution of these entities as “essences”. The word “essence” designates what an entity is, regardless of whether it exists or not. As each existing entity is something, has some essence, and as everything that exists is necessarily possible, one is forced to conclude that, at the level of pre-existing possibility, all essences were already what they would become in real existence. Now, among the essences there are unavoidable logical relations, independent of and prior to the existence of the entities that manifest them. Mathematical entities illustrate this in a splendid way: before any spherical object existed, the points on the surface of the sphere were already equidistant from its center; before a square existed, it was already necessary that, cut along the diagonal, the future square would yield two isosceles triangles. Therefore, if all essences were present in total possibility before any entity corresponding to them came into existence, we must also admit that all logical relations between all possible essences were already contained in total possibility. But among the entities there are relations that, without being illogical, are alien to logic, in the sense that they cannot be deduced from the essences: they are the accidental relations. If these relations were not contained in total possibility, they would be impossible and would therefore never appear in existence; since they do appear, it is necessary to conclude that they were.

Now ask how all these essences and all these possibilities were in total possibility. Would they be there in a confused and mixed manner, only becoming distinct in the course of the existence process? That would be the same as saying that, in the course of their coming into existence, these essences realized a possibility that was not in total possibility, that is, an impossible possibility. The essences and their relations, including accidental ones, are all present in total possibility, and they are there in a perfectly ordered and clear mode.

The nothing that you found by suppressing all existents is beginning to look less and less like a nothing: it is rather the prior order of all possibilities manifested in the course of existence.

Now ask yourself whether universal possibility can be conceived only as a theoretical, hypothetical, passive and inert system of equations or any logical relations, without any effective existence. The answer is clear: if total possibility does not exist, there is no possibility at all. Universal possibility, therefore, does not exist as possibility in the weak sense of the word, as when we say that a chess game has the possibility of ending with the victory of the black or white pieces. On the contrary: containing in itself all the possibilities of existence, it encompasses and contains existence – all existence. Existence derives from possibility, and not the other way around. Containing existence in itself, it can neither be non-existent, nor can it “exist” as entities exist: it has a special modality of existence. As the scholastic philosophers would say, it exists eminently (eminenter). Containing existence in its entirety, as well as the nonexistence that limits existence, it is the existence of existence.

Now that you have understood this, start reading the Summa Against the Gentiles.

Two Methods60

WHAT IS UNDERSTOOD as “rigor” in the intellectual circles generated by the Faculty of Philosophy of USP (University of São Paulo) generally amounts to nothing more than affectation of superior coldness under the pretext of philological scruples. But sometimes the expression comes with some meaning. In this, the best of hypotheses, it designates the application, with or without deconstructionist and Marxist additions, of the method of structural analysis of texts created by Martial Guéroult in his classic study Descartes selon l’Ordre des Raisons61 – a book that, by the way, I admire as much as the USP Guéroultians do.

The method is inspired by a piece of advice from Victor Delbos – “Beware of those games of reflection which, pretending to discover the profound significance of a philosophy, begin by neglecting its exact significance.” To honor this caution, Guéroult starts from three assumptions: (1) a philosopher’s philosophy is in the texts he wrote; (2) in these texts, the internal logical form, the order of demonstration, the scheme of validation, is as important as the explicit theses left to us by the philosopher; sometimes even more so; (3) the logical structure of the demonstration does not always coincide with the linear order of the text but must be recomposed from it.

Assumptions 2 and 3 are obvious and universally applicable. Assumption 1 is the problem. Although it may hold true, to some extent at least, for the work of some thinkers, like Descartes himself, Kant, and Bergson (the latter even claimed that his writings contained the complete expression of his doctrine, leaving nothing to add), it would be, at the very least, reckless to apply it to other philosophers whose writings, fragmentary or occasional, do not express a complete doctrine and are not necessarily arranged according to the best “order of reasons.” The classic example is Plato, whose main teaching was transmitted orally to his disciples, appearing in his writings only as enigmatic allusions. What are we to do with Aristotle, whose writings are often only class notes, frequently without identifiable order, and whose main work, the Metaphysics, is a collection of independent texts from different periods, put together long after the author’s death by a scholar who never attended his lectures or knew him personally? Even Leibniz, one of the most organized minds the world has ever known, did not leave any systematic exposition of his doctrine, which has to be reconstructed from letters, drafts, and occasional writings – leading some interpreters to see his work as more of an “eclecticism” than an organized philosophy. What can the structural analysis of texts do in these cases, other than provide us with isolated pieces of a puzzle, each perhaps well clarified in its internal details?

The professors at Rua Maria Antônia (a street in São Paulo where the Faculty of Philosophy of USP is located) used the term “rigor” for decades as a tool to establish a hierarchical distinction between the professional philosophy they claimed to practice and the “literary philosophy” of those whom they despised as mere literary writers or weekend thinkers. However, at the same time, and in an unintentionally comical way, their obsessive dedication to the study of “texts,” with little direct engagement with substantive philosophical problems, reduced the philosophical activity at USP to a specialized branch of philology and literary studies. One of the most celebrated spokesmen of the institution, Professor José Arthur Gianotti, even defined philosophy as “working with texts,” while others tried to justify the failure of USP to produce a single philosopher worthy of the name over five decades, with the flimsy excuse that they were trained, at the very least, as excellent philologists and historians of philosophy. The fact is that no remarkable work of philology or history of philosophy ever originated from the Department of Philosophy at USP; even the monographic studies on the works of this or that philosopher produced there, with the possible exception of Lívio Teixeira’s Essay on Descartes' Morals,62 did not leave the slightest mark on the intellectual history of humanity.

Contrary to the superstition at USP, philosophy, of course, does not have as its essential purpose the production of texts. The number of great philosophical works that were put together by others based on class notes, transcribed recordings, or even table talk shows this most clearly. Literary works are not composed in this manner because in literature the written word is the end – the final formal object, as the scholastics would say – of the writer’s activity. In philosophy, what is fundamental is the discovery, the theory, the philosophical intuition obtained, of which the writing will be only a more or less faithful document.

Moreover, if in literature the text stands on its own, without the need to appeal to the author’s biography or any “external” data (except for some philological contingency), it is precisely because the formal perfection that is intrinsic to literary works gives them a character of complete totality, without which they could not be objects of aesthetic contemplation; and precisely because aesthetic contemplation, being that and not a scientific report, does not aim to discover a utopian “exact meaning,” but rather many possible meanings, all of them mysteriously compatible with the unity of the aesthetic form that contains them. By its own formal unity, a work of art is a symbol, and the symbol is not the final crystallization of an “exact meaning,” but, as Susanne K. Langer aptly said, “a matrix of insights.” Finished form and open meaning are the very definition of a work of art.

A philosophical text, on the other hand, has an ideally exact meaning but cannot contain it within its own formal limits because it is almost always the expression of provisional conclusions obtained in the course of an investigation that, in principle, must continue until the author’s last day of life. A philosophical text is always an unfinished, open work.63 It can never be properly understood without referring to the preceding and subsequent writings, oral statements, and in most cases, other aspects of the philosopher’s life. This is because these “external” elements reveal much about the interpretation – and especially the existential and moral “weight” – that the philosopher himself attributed to his writings. For example, when we know that Socrates accepted his death sentence cheerfully, claiming that he was going to a better world, we understand that his belief in the immortality of the soul was genuine, not just a philosophical speculation; when we know that Leibniz made great personal efforts to reunify Catholics and Protestants, we understand that everything he said about universal harmony was not just an idea but something of deadly seriousness, perhaps the ultimate inspiration for his entire philosophy. But when we see a picture of Nietzsche harnessed to a cart, under the command of Lou Salomé holding a whip, we understand that everything he wrote about the inferiority of women – and expressly about the need to treat them with whip lashes – was mere bravado or neurotic compensation, not a moral thesis to be taken seriously. If a philosophy is not a mere collection of loose ideas but rather an effort to provide a coherent interpretation of available knowledge, then we cannot escape the question of the hierarchical order of a philosopher’s ideas; and if in real life the relative importance he attributed to one of his ideas is different from what can be deduced from the pure text, reality must prevail over the text.

For example, Martial Guéroult dedicates such meticulous attention to the internal order of Descartes' Meditations, that he forgets to ask what literary genre the book belongs to. He ends up reading as pure metaphysical treatise what is, explicitly, a spiritual autobiography. The result: amid so many marvelous discoveries he makes about Descartes' philosophy, he continues to treat the idea of the “evil genius” as if it were just “an artifice” (sic). Well, in the text of the Meditations, it is precisely that, but is it the same in the world view of René Descartes? Reading the Meditations as an autobiographical narrative, we do not traverse its steps as mere stages of a demonstration – as a “process of validation,” as Guéroult would put it – but as real interior experiences that can be imaginatively recreated by the reader, provided that they engage in them with a “Stanislavskian” spirit of identification with the author. When I attempted this experience over three decades ago, I arrived at a distressing realization: the “universal doubt” proposed by the philosopher was psychologically impossible; any effort to carry it out was blocked midway, not by the resistance of the ego cogitans that asserts its own existence (this comes much later), but by the simple reason that one cannot doubt one thing without simultaneously affirming many others. For example, I cannot deny the existence of God without admitting that I have heard of it, thereby affirming the validity of my memory while invalidating one of its contents. I cannot doubt the data of my senses without distinguishing them from my abstract thoughts, which presupposes an implicit epistemology as the basis of the question itself. And so on. The “universal doubt,” being impossible to experience in reality, had to be understood as a pedagogical or rhetorical artifice conceived by Descartes to express – and at the same time conceal – a very different interior experience. 
This hidden experience, as I came to understand later, could only be precisely that of the “evil genius,” which Descartes experienced in dreams in the year 1619, long before writing his first philosophical project, the Rules of 1628. The dreams show the philosopher’s consciousness threatened with annihilation by the interference of a demonic force. We can interpret this psychologically, as fear of madness, or theologically, as a threatening anticipation of the “second death,” the death of the soul. In both cases, the extinction of consciousness automatically invalidates all its contents, depriving it of all knowledge. Clearly, the “universal doubt” was a translation of this fear into epistemological language, with the difference that fear can be experienced in reality, and universal doubt cannot. The result: what Guéroult saw as “an artifice” was, in reality, the original inspiration of the Meditations, and what he saw as the core of the demonstration was just an artifice. Descartes had replaced a real experience with a literary hyperbole, continuing to reason from it as if it were a real experience. This decisive move passes unnoticed if we stick to examining the philosophical doctrine – not to mention the pure text – as such, disregarding its existential roots. A philosophy considered in the text that conveys it can be seen as an impersonal theoretical edifice, but this is also just a figure of speech: this edifice did not rise by itself, out of nothing, by an original fiat, but was born from the experiences lived by a real human individual, a “hombre de carne y hueso” (a man of flesh and bone), as Miguel de Unamuno insisted. Removed from this foundation, it becomes an object of contemplation, a fetish on the altar of academic religion.

Of course, we can isolate the text, treating it as an autonomous totality, but then we see it as a work of literary art and not as the expression of a philosophical quest in fieri. In this case, the philosophical text becomes a symbol to us, with an open meaning, and it no longer makes sense to talk about an “exact meaning.” It seems that the professors at USP have never realized this problem: if we want the exact meaning, we have to go much further than the text.

The other day, while debating with a Christian who was simultaneously a scholar and an admirer of Wittgenstein, I heard from him that the Tractatus Logico-Philosophicus demolished the scientific pretensions of modernity but left Greek and Christian philosophy intact. I objected, seemingly in vain, that Wittgenstein’s goal was not the restoration of these philosophies, but the dissolution of modernity into something even worse, the realm of arbitrariness referred to as “post-modern”. Evidence of this was that after the Tractatus, he devoted himself to demolishing any and all presumption of objective knowledge – and not just the modern one – through his theory of “language games”. Implicitly considering Greek and Christian philosophies to also be pure “language games”, he buried them along with all the others, avoiding confronting them on their own ground. By doing so, he imitated the general procedure of modernity, which did not condemn previous philosophies through an honest confrontation with them, but through an opportunistic shift in the axis of discussion.

Regarding the possibility of a Christian interpretation of Wittgenstein’s philosophy, it had already been strangled in the cradle by proposition 6.432 of the Tractatus: “God does not manifest in the world.” It is a formal denial of the Incarnation. And it helps little to say that Wittgenstein soon after condemns his own statements as nonsense, for it is from these very nonsenses that he draws the final conclusion of the Tractatus, condemning everything that is not a proposition about “atomic facts” (in the sense of “atomistic”) to universal silence. In the continuation of his work, even these propositions are reduced to “language games”.

When we learn that Wittgenstein engaged in Buddhist mystical exercises, while at the same time ignoring the data of the Christian religion to the point of declaring (proposition 6.4311) that “no one experiences their own death” – a statement directly contradicted by the Gospel and by thousands of testimonials of the resurrected64 –, we understand that we are facing a crude soul that, starting from a mediocre spiritual base, intends to legislate on science and faith and condemns humanity to choose between mundanely surrendering to “language games” or retreating to the Buddhist silence of a precursor of the New Age65.

The postmodern conclusions that others drew from Wittgenstein’s philosophy were not, therefore, external additions, much less deformations of his thought: they were simple logical extensions of positions that were already implicit in the Tractatus, although they only became perfectly visible in the philosopher’s later work. No philosophical text is a perfect expression of its own meaning.

Hence, methods such as Guéroult’s, even if applied with exemplary mastery, which is not always the case when others use them, can never be the cornerstone of philosophical education. They may be useful for propaedeutic purposes, but they cannot even be the main element in the simple acquisition of a philosophical culture, much less in the formation of a competent philosopher.

As indispensable as the Guéroultian structural analysis may be, it must be complemented by the method of Paul Friedländer, who behind the written documents seeks the direct, living experience that gave rise to the central intuitions of a philosopher and determined the direction of his cognitive efforts.66 For example, in Plato, the encounter with Socrates, or, in Socrates, the permanent conflict with the ruling political class and their masters, the sophists. Socrates’ entire philosophical life was determined by the desire to search for, know, and obey the “unwritten laws”, the divine norm that is beyond the laws of the human community and from which these can be judged. He was led to this search by his disappointment with a dishonest ruling class, under whose orders he had served as a soldier. When the young Plato meets Socrates, he sees in him the ready and finished model of a new type of human being – the philosopher –, totally different from the intellectuals hitherto known in Greek society. As brilliantly summarized by Eric Voegelin (an author who owes much to Paul Friedländer), in the face of the collapse of the old social order based on cosmic order, the philosopher emerges as the man who, without any support in the prevailing beliefs, all contaminated with absurdity to a greater or lesser degree, seeks a new standard of order in the depths of his own soul, taken as a mirror of eternal laws, transcendent to society and the entire cosmos.67 Everything that Plato taught and wrote is like a long effort to exteriorize in theoretical language that which, at first, he saw in the soul of Socrates. It is the impact of this initial experience that determines the entire sense of his philosophical work.

The defining experience need not, of course, be an episode from the philosopher’s external life. It can be a purely inner experience, of an emotional or cognitive order. In the case of René Descartes, the key lies in his three famous dreams, where the figure of the “evil genius” is insinuated for the first time, threatening to destroy at its base all confidence in the power of human knowledge. As I believe I have demonstrated in the booklet on “Consciousness and Estrangement”68, all the “order of reasons”, in Descartes, is the indirect expression of a struggle waged – and, in the end, lost – against the demon.

From founding experiences arise the central intuitions that guide the assembly of philosophical “doctrines”. Without the return to experiences, doctrines hang in the air as pure mental constructs, or “works”, in the literary sense of the term, thus lending themselves to a multiplicity of heterogeneous interpretations that end up dissolving the original meaning of the central intuitions. Even worse: the “history of philosophy”, told this way, can only be a succession of “thoughts” that generate each other in the sky of pure ideas, without roots in the world of human experience. This “history” is a fictional creation that, to justify itself, tends to transmute itself into a new philosophical “doctrine”.

An eloquent example is provided by Guéroult himself: “There is in Descartes a seminal idea that inspires his entire undertaking and that the Regulae ad directionem ingenii express since 1628: it is that knowledge has insurmountable limits, founded on those of our intelligence, but that within these limits certainty is complete.”69 It is an exact and truthful affirmation, which repeated readings of Descartes confirm as much as the study of his biography does. This “seminal idea”, however, acquires two very different meanings if we contemplate it merely as validated by the “order of reasons” – even if we do so with all Guéroultian precautions – and if we graft it onto the fabric of the lived experience from which it emerged. In the first case, we have only a general thesis of epistemology, which could be proposed from very different contexts without losing anything of its schematic significance. In truth, this thesis, considered in the abstract, is almost a truism. Who does not know that intelligence has limits, but that they do not affect our certainty that two plus two equals four?

However, if we ask ourselves why Descartes undertook the task of defending human knowledge within its limits and why he decided to do it through the radical and hyperbolic strategy of “doubting everything”, we understand that the salvation of knowledge against an apparently invincible enemy was for him a matter of life or death, not just a scientific task. The problem of the limits of knowledge has in Descartes a demonological dimension that the pure structural analysis of the text of the Meditations on First Philosophy cannot reveal, but which is quite clear in the three dreams of 1619.70 To apprehend it, it is necessary to do something that goes far beyond text analysis: it is necessary to personally re-experience the Cartesian experience of “universal doubt” and, as happened to me, realize in the end that it is absolutely unviable: there is no universal doubt, there are only specific doubts, and each one of them rises on a mountain of unshakeable certainties.71 Faced with this realization, the Cartesian method of doubt changes meaning: it is no longer a rational precaution, but an exaggerated rhetorical device, a forced hyperbolism. The demonstrative machine of the Meditations is not a science laboratory, but a theater of the absurd where an ego cornered by phantoms, to exorcize them, resorts to histrionic gestures. The final result of the undertaking is that the abstract ego, reduced to the affirmation of its own existence in a hypothetical atomistic instant, proclaims itself the source of all certainties but at the same time cannot jump from its solipsistic isolation to the external world, which it claims to know, except by an extemporaneous appeal to faith in a benevolent God – extemporaneous because the same God had previously been excluded from the game by the rule of methodical doubt. What is the “complete certainty” that remains “within the limits of knowledge”? 
On one side, the merely logical certainty of an empty ego; on the other, the multitude of sciences, but guaranteed, ultimately, only by faith.72 Without disputing any of Martial Guéroult’s conclusions, we see that they are correct, but inverted. As Guéroult himself emphasizes, the “order of reasons” is always a process of validation. Yes, but validation of what? Certain basic intuitions that precede and guide the validation process itself. If it is this process and not the basic intuitions that constitute the essence of a philosophy, philosophy becomes a purely discursive activity with no intuitive input, without any perception of reality, without any lived experience. It is understandable that the interest in this ends up being purely academic, if not philological.

Foundational experiences, on the other hand, can be imaginatively relived by the scholar and the reader, who in this way appropriate at least part of each philosopher’s “inner world”, while expanding their own inner world.

To discover the base experience, the structural analysis of texts is just a groundwork. The essential thing is to seek those passages where the author is not merely elaborating ideas, but taking a stand in the face of real-life challenges, without having (either not yet having or not having at that instant) the armor of a theoretical construct to protect himself. The theoretical construct – the “validation process” – can express and enrich this original experience or, on the contrary, camouflage it to the point of making it almost unrecognizable, but it will always take it as a basis, for it is from it that the motivation and the very purpose of the philosophical effort derive. The experience, in turn, can be richer or poorer; it can be the sign of a formidable discovery or merely the evidence of a neurotic complex, of a self-aggrandizing illusion, of an inability to live. If it is in this experience that the criterion for judging the educational value of a philosophical work ultimately resides – a criterion that has nothing to do with the work’s historical importance, and must override it, since philosophy owes no account to majority opinion –, this is so for a very simple reason. In the whole of what a philosopher writes or orally teaches, a hierarchical distinction must be established between what he sincerely believes and what he merely invents as validating reinforcement, artifice, assumption, logical adornment or mere intellectual diversion. For example, we cannot suppose that Plato believed as faithfully in what he wrote about the lost continent of Atlantis as he believed in the reality of eternal laws. If we do not grasp this distinction, it is clear that we understand nothing of his philosophy. 
The distinguishing criterion lies in the question: With which of his assertions was the philosopher existentially committed, to the point of making vital decisions based on them, and which did he enunciate without commitment, just for the sake of expository development, academic debate, literary brilliance, or something like that?

Not always possessing enough biographical data to answer this question, we often have to seek the solution in the texts themselves, and there it is not difficult to distinguish the points at which the philosopher responds to a real experience that he considers important from those in which he merely speculates with ideas. When Ludwig Wittgenstein writes that “in death the world does not change, but ceases” (proposition 6.431 of the Tractatus), that “death is not an event of life: nobody experiences his own death” (6.4311), or that “the feeling of the world as a limited whole is the mystical feeling” (6.45), he is obviously recording sincere impressions, which deeply touched his soul on the occasion of his own “mystical” exercises. When, however, he explains the logic of propositions (proposition 5 and those following), he is merely erecting an intellectual construction or, as Guéroult would say, validating his impressions. Even though this part is more rigorous and rationally grounded than those impressions, it is clear that the impressions motivated the construction – and not the other way around – and would remain the same without it. Here we have a distinction between what Wittgenstein “believes” and what he just “thinks”. The fact that the purely thought part attracts more attention from scholars than the substantively believed part only shows how often the academic exercise of philosophy tends to decline into a kind of sophisticated frivolity, a system of elegant defenses against the realities of life.

It was in this kind of philosophy that Franz Rosenzweig, huddled in a trench of World War I, said he had not found decent answers to any important question.

One should, of course, always keep in mind Hegel’s warning that a philosophical idea only makes sense when it is embedded in the “system”, in the entire order of reasons that lead to it.

But why suppose that only the explicit reasons, recorded in the text, are valid, and not the real, existential motives that led the philosopher to this idea? If the “system” is isolated from the human mind that created it, one of two things happens: either it becomes a scientific theory to be verified by experimental means, or it is taken as a literary work, as a symbol. In both cases, the specific nature of philosophy is lost, which is an effort to coherently articulate experience by an individual consciousness.

By imaginatively reliving the founding experiences of each philosophy, the scholar acquires the key to understanding its meaning and value much more efficiently than he could do through a thousand structural analyses of texts.

It is clear that, to prepare the investigation or to confirm what has been discovered about the foundational experience, structural analysis, Guéroultian or otherwise, has a formidable utility; but this utility depends on the method being applied from the standpoint of experience, and not on taking the text, materially, as if it were the very formal object of investigation. In the study of philosophy, texts are merely the documents, almost always partial and imperfect, through which we arrive at the very content of philosophy: the fundamental intuitions that justify and underpin an effort of validation, an “order of reasons”. The content of a philosophy does not consist of propositions, of sentences, but of the real, lived cognitive acts, which the sentences sometimes express well and sometimes poorly. If it were not so, there would be no difference between studying a philosophical work and a literary creation. It was precisely because they did not grasp this distinction well that the professors of Philosophy-USP had to create a makeshift symbolic defense against the ghost of literature, which threatened them more from within than from outside.

Richmond, VA, August 27, 2010

Misery Without Greatness: Academic Philosophy in Brazil

IT HAS BEEN A WHILE since, driven by repeated disappointments, I stopped following the written production of Philosophy at USP (University of São Paulo). Up to the point I left off, nothing published there came even close to a single work from the first generation of USP philosophers, Lívio Teixeira’s Essay on Descartes' Morals (1955). Judging by the writings of professors Gianotti, Chauí, Arantes, and others, that faculty seemed to have regressed. Seeking other sources of Brazilian thought, I discovered the magnificent works of Mário Ferreira dos Santos, Maurílio Penido, Miguel Reale, Vicente Ferreira da Silva, and Vilém Flusser, at which the USP philosophers turned up their noses in a grotesque affectation of superiority. Believing that there was nothing wrong with repaying disdain for genius with an even greater disdain for mediocrity, I lost all interest in knowing what was being taught or discussed in Philosophy at USP. That was about twenty years ago. I say this not only to confess that my judgments about that institution may be a bit outdated but also to emphasize the interest that Joel Pinheiro’s article Merits and Demerits of Academic Philosophy in Brazil73 aroused in me.

In the purely academic sense, Pinheiro says, the Philosophy Faculty at USP “is serious… it aims to teach its students to read philosophical texts and come away with some idea of what important philosophers from various areas and periods had to say.” Even better, “it is not a stage for leftist propaganda” and “it does not provide room, or provides very little room, for trickery.” If things are like that, there is no way to disagree with the author when he proclaims: “That this is done in Brazil is a merit, and it is a duty of justice to acknowledge it.”

I believe this merit is not new. The Philosophy Faculty at USP in the 1980s and 1990s was already doing these things, and I don’t think they did them much worse than now. The problem, back then, was also the same as today. Pinheiro continues: “However, this merit leaves something unsaid. An eloquent silence that points to what the faculty does not do: prepare its students for philosophical discussion; to think for themselves; to provide their own answers to big questions; to be, in short, philosophers.” The institution’s own spokespersons have long recognized this. In his 1994 book, A French Department Overseas, Paulo Arantes confessed that, in half a century of existence, it had not produced a single philosopher. Desperately seeking a justification for the entity’s existence, he declared that, in compensation, it had produced excellent historians of philosophy.

Which ones? I asked and I ask. What great works of the history of philosophy have come from there, even remotely comparable to those of Ueberweg, Zeller, Fraile, Copleston, Guthrie, Mondolfo, Giovanni Reale? None, absolutely none. The consolation prize Arantes offered was perfectly nonexistent. The Faculty indeed taught history of philosophy, and perhaps it did so quite well, but it produced nothing on its own.

Pinheiro is absolutely right in saying that something is missing there, and what is missing is the production of philosophers. But does encouraging free discussion suffice to address this deficiency? The remedy seems weak and doubtful to me. The author himself asks: “If there were such a space for discussion and personal positioning, does anyone doubt that we would hear a lot of nonsense?” Obviously, Joel Pinheiro’s approach does not escape the paralyzing dualism that has been recurrently resurfacing for decades, like an invincible tic or an obsessive-compulsive ritual, mechanically opposing USP’s professionally rigidified “rigor” to self-indulgent belletrism, guesswork, the free play of loose opinions, and the “bunda-lê-lê” of ideas. Recently, Mr. Júlio Lemos, with a somewhat puerile image, contrasted the Spartan discipline of the “engineer ants” with the festive irresponsibility of the “magic cicadas,” thinking he was offering a great novelty, without realizing he was falling into a mental automatism that dates back to the 1940s, to the dispute between João Cruz Costa, Heraldo Barbuy, Oswald de Andrade, and others for the philosophy chair at USP, an episode I will comment on later;74 an automatism that, far from providing any idea of the range of alternatives in the reality of world thought, only reflects the provincial narrowness of a laughable pseudo-debate, folkloric on the most generous hypothesis.

What is lacking at USP is not space for the inventive impulses of students' brains. What is lacking, I say right away, is the teaching of philosophy. But isn’t that precisely what Pinheiro says is being done there? Yes, but he is mistaken in the etymological sense of the word “equivocation”: he gives the same name to different things. What is done at USP is not teaching philosophy: it is transmitting philosophical culture. Philosophical culture consists of three things: (a) knowing the philosophical bibliography and reading it to the greatest possible extent; (b) mastering the technique of text analysis to ensure that one understands what one reads; (c) knowing the history of philosophy, the philosophical schools in their chronology and their relationships with one another. These three things are excellent, but teaching them is not teaching philosophy. Strictly speaking, they are philology, the scientific study of written documents. The philological complex of USP is so ingrained that Professor José Arthur Gianotti went so far as to define philosophy as “an activity with texts.” In Philosophy at USP (I repeat: at USP up to the point I followed its activities), not only is philosophy not taught, but there is not even an inkling of what that would be. Joel Pinheiro himself lacks it, because he formed his mindset there and, even when pointing out the deficiencies of the education he received, he reasons within a USP frame of reference.

To understand what teaching philosophy entails, one must start with a basic observation. As I already summarized this observation in the program of a recent course I gave, I will limit myself to quoting it: Philosophy is not a science; it is a technique. If a science seeks to delineate a homogeneous set of phenomena and reduce it to a common explanatory key that can be confirmed or challenged by all interested researchers, its result will necessarily be a series of sentences logically interconnected and referred to the world of experience through a system of verification procedures. In contrast, a technique brings together several autonomous and heterogeneous causal currents, irreducible to common principles, and unified solely by the result to be obtained. No technique, no matter how simple, can be reduced to the application of a single scientific principle. No technique, strictly speaking, can be entirely explained by science. The technique has its own rationality, intersecting with that of science but not reducible to it.75

If you examine carefully what philosophers have been doing over the centuries, you will see that the philosophical technique consists of the integration of the following activities:

  1. Anamnesis, in which the philosopher traces the origin of his beliefs and assumes responsibility for them.

  2. Meditation, in which he seeks to transcend the circle of his ideas and allow reality itself to speak to him in an original cognitive experience.

  3. Dialectical examination, in which he integrates his cognitive experience into the philosophical tradition, and the tradition into it.

  4. Historical-philological research, in which he appropriates the tradition.

  5. Hermeneutics, in which he makes transparent for dialectical examination the sentences of past philosophers and all other elements of cultural heritage necessary for his philosophical activity.

  6. Examination of conscience, in which he integrates the acquisitions of his philosophical investigation into his total personality.

  7. Expressive technique, in which he makes his cognitive experience reproducible for others.

Clearly, what is taught at USP are only items 4 and 5 of this list, which are not enough to make a student a philosopher on their own, nor do they, separately, constitute anything deserving the name of “teaching philosophy.” However, they are the pillars of a solid philosophical culture.

Philosophical culture is what someone knows about philosophy without having to assume personal responsibility for philosophizing. Philosophical culture has two important properties: 1) It can be acquired entirely from books, without the need for teachers. The essential works of philosophers are translated into every language. Histories of philosophy, both general and specific, are abundant, and many of them are quite enjoyable to read, such as Copleston’s, or Michele F. Sciacca’s History of Greek Philosophy (despite all its scholarly apparatus, a masterpiece of literature). Terminological doubts can be clarified with philosophy dictionaries, also abundant, among which, out of countless others, I prefer José Ferrater Mora’s (translated into Portuguese by Edições Loyola) and André Lalande’s. Even text analysis is so well explained in books that anyone who cannot learn it on his own has no aptitude for philosophy.

2) By itself, however copious, philosophical culture will not make you a philosopher, only a scholar. The two men with the greatest philosophical culture who ever lived in Brazil ultimately revealed no special talent for philosophy. I refer to José Guilherme Merquior and Otto Maria Carpeaux. The first, about whom Raymond Aron exclaimed, “This young man has read everything!”, showed a pathetic ineptitude whenever he strayed from his natural terrain – history, social science, and criticism – to venture into discussions of pure philosophy. The second did not even attempt such discussions. He glided among authors and doctrines like an expert swimmer, discovering affinities and differences with an incomparable reading skill, but no one ever came to know what he himself thought about it all.

In short, what is taught at USP is what a diligent individual could learn at home and, by itself, is not enough to make them a philosopher.

Philosophical technique, on the other hand, is something that only an inspired genius could learn alone. Techniques, almost always, are like this. You will hardly learn to drive a car, to sing, to dance, to perform in the theater, to handle or build complicated equipment, only by reading instruction manuals, without the living example of a qualified master. Even the most exact and “impersonal” sciences cannot operate without the use of complex instruments whose handling requires direct learning, years of practice with an instructor and the acquisition of subtle talents whose transmission includes a lot of non-verbal, personal and “human” communication to the highest degree. This is the subjectivity coefficient from which no scientific knowledge can ever escape. In all this vast area of intellectual activity, self-teaching has no place.76

Well, it is precisely to provide this type of knowledge that universities exist. If everything could be learned from books, they would have no reason to exist and could, advantageously, be replaced by public libraries.

The teaching of philosophy is one of the areas where this difference is most evident. Even superficial research will show that there was great teaching of philosophy only where a living and present philosopher, at the height of his intellectual and pedagogical powers, transmitted to students, in daily personal interaction, the example of his pursuit and his know-how. Many of these students left testimonials that leave no room for doubt: whoever has not seen a real philosopher grappling daily with the difficulties of his own philosophy will never know what it is to philosophize, no matter the immensity of his philosophical culture. What is, after all, the first great classic of Western philosophy if not the account of the fruitful interaction between a master and his brilliant disciple? Read Paul Friedländer’s Plato and you will have an idea of how far this interaction, with all its richness of personal experiences and direct perceptions, is indispensable to the formation of the philosopher. How many disciples have left us decisive testimonies about the power of the direct example gleaned from great philosophy teachers, great because they were not just teachers but philosophers in the full exercise of their quest for truth – an Albertus Magnus, a Hegel, a Boutroux, a Ravaisson, a Husserl, an Ortega, an Alain, a Croce, a Cassirer, a Rosenstock-Huessy?

The simple fact that at USP nothing is seen except philological rigorism, on the one hand, and irresponsible opinions, on the other, proves that no one there has the slightest idea of what the teaching of philosophy is. For philosophy moves precisely in the intermediate area between the two extremes of knowledge and opinion, purifying opinion to transform it into knowledge and scrutinizing knowledge to reveal what still remains in it of camouflaged opinion. Neither of these two activities can be realized by either of the two “paths” that in USP’s presumption divide and exhaust the entire globe of possibilities of intelligence.

No, what is lacking in USP Philosophy is not more space for students to talk nonsense. They already widely enjoy this space in student assemblies, in the university media and on the internet. They only lose in this to the professors themselves – Chauí, Gianotti, Safatle especially – who, if in class they scare the little students with the ghost of “rigor”, gladly exercise on TV and in the newspapers the right to opine about what they do not understand.

It is precisely at this point that I have to enter a chapter of autobiography that will greatly clarify what I am saying.

I have already told elsewhere the distant origin of my philosophical inquiries from childhood,77 but the first philosophy book I read was Descartes' Discourse on the Method, of which I found a Portuguese translation in my father’s office. I was about thirteen years old. I had no great difficulty in understanding the general argument, missing a multitude of details, but, alerted by the philosopher, I had great hopes for the teaching of geometry, which just that year was supposed to succeed algebra in the high school program.

What was my disappointment when, right in the first or second class, the teacher informed us, with the most clueless expression in the world, that a point measured nothing and that a line was made up of infinitely many points.

– So, teacher, does that mean that by adding up infinitely many nothings you get something, and even something of unlimited size, like a straight line?

The man got completely flustered and demonstrated, by a + b,78 that he had never thought about the subject.

It was as if an abyss had opened at my feet. The discipline that promised to be the supreme model of rationality began by demanding that we, poor innocent children, swallow a premise that was the height of irrationality, a living contradiction, a total absurdity. That stalled my intelligence so much that from then until the end of the year, I only accumulated zeros in geometry, in the vague hope that they, added together, would give me a good final average. This geometric expectation did not come to pass.
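In passing, the schoolboy’s paradox does have a standard resolution in modern measure theory – a sketch under assumptions foreign to the text, since neither the teacher nor the author invokes it: length is only countably additive, and a segment is an uncountable union of points, so no “sum of nothings” is ever actually formed.

```latex
% Lebesgue length \lambda is *countably* additive: for pairwise
% disjoint measurable sets A_1, A_2, \dots
\lambda\!\left(\bigcup_{n=1}^{\infty} A_n\right) \;=\; \sum_{n=1}^{\infty} \lambda(A_n)
% Each single point has zero length: \lambda(\{x\}) = 0. But the
% segment [0,1] is an uncountable union of points, so the additivity
% rule above never applies to it, and \lambda([0,1]) = 1 involves
% no contradiction with the points each measuring nothing.
```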

From then on, I began to test the knowledge of teachers in other subjects, not out of spite, but out of genuine uncertainty. The result: I lost interest in all classes except languages, which were an absolute necessity; zeros spread through the remaining columns of my report card, and by the end of the year I had concluded that, if I wanted to understand anything, I would have to figure it out myself. I began to skip classes regularly, not to go to the movies or play football, but to lock myself in the school library, the Municipal Library, or the cubicle that housed our Science Club (whose key I had thanks to the undeserved favor of a benevolent teacher), reading philosophy books. One of them, which required prolonged attention – Spinoza’s Works in the old edition by Émile Saisset – I even took home and, mea culpa, never returned. I still have the two volumes, where, above the library stamp, a joker noted: “Surreptitiously extracted from… ”

In Bertrand Russell’s History of Western Philosophy, which was a very entertaining read, I learned who the main philosophers were and threw myself into the voracious consumption of their books, but I soon realized that by this path I was going to end up rich in confused ideas. As there was no philosophy teaching in middle school and the prospect of college was still far off, I decided to investigate for myself what the teaching of philosophy was like in other countries and to guide my studies by the order the manuals recommended. Soon there fell into my hands Armand Cuvillier’s Manuel de Philosophie, Ferdinand Alquié’s Cours de Philosophie, Alain’s Introduction, Maritain’s Minor Logic, Susanne K. Langer’s Introduction to Symbolic Logic, and several other books that gave me an idea of what boys my age would or should be learning (I imagined) in less barbarous lands.

When I reached that phase where human beings begin to imagine themselves adults, I decided to investigate whether it was advantageous to attend a philosophy college. I had no career ambition at all. My professional problem was solved: having entered journalism at 17, I achieved some success there, enough money for my sustenance and above all the comfort of working part-time, as the profession’s regulation then determined, with spare time to study at home. Examining the salaries of older professionals, I saw that if I stayed in the job for a few more years, I would soon be earning five or six times more than an average university professor. It was decided: I was a journalist, I would be a journalist until death (later, when the bosses started to boycott part-time, I became a freelancer and remained the master of my schedule, even earning more). Attending college, then, was something with no professional purpose at all: it was worth it for the learning only, just as I had taken some theater and cinema courses also with no career intention.

Under these conditions, and also considering the most rational use of my free time, it was necessary to choose the best and only the best. I heard many recommendations, but by that time I already had enough philosophical culture to judge for myself the teaching that suited me best, and I started to read university course programs, academic magazines, books from the most notable local professors, going occasionally to the Faculty on Maria Antônia Street, the PUC on Monte Alegre, or Sedes Sapientiae to find out what was being taught there.

It is needless to say what happened: when I noticed that the teaching of philosophy in those institutions consisted almost exclusively of the history of philosophy and text analysis, I asked myself whether there was any benefit in spending hours traveling by bus every day just for the pleasure of hearing live what I could learn better at home. Prof. João Cruz Costa’s course, for example, was entirely based on Cuvillier’s Manuel, which I already knew inside and out, and which in France was a book for secondary school. There was another obstacle: the stultifying prejudices that the faculty, especially at USP, cultivated as if they were proofs of genius. To give the reader an idea of how far this went, note that Prof. José Arthur Gianotti, when in the 1950s he decided to study something of phenomenology, had to do so by way of logic, because in that august institution it was believed that “ontology is the monopoly of the right.”79

Joel Pinheiro reports that today, at Philosophy-USP, even the lesser medieval philosophers, such as Matthew of Aquasparta, are seriously studied. At the time, things were not like this. Ignoring medieval philosophy was considered sophisticated. The most evident symptom of this ended up appearing in the Abril publishing house’s collection The Thinkers, organized by professors from USP under the direction of José Américo Motta Pessanha. In the forty-odd titles that made it up, the greatest medieval philosophers – Thomas Aquinas, Duns Scotus, Ockham – had all been squeezed together into a single volume, while entire books were dedicated to second-tier authors who could hardly be qualified as philosophers, like the anthropologist Malinowski and the economist John Maynard Keynes. As I noted in § 3 of The Garden of Afflictions, "the distortions did not stop there: Pessanha found it indispensable to dedicate an entire volume to Kalecki, an economist who is not mentioned in any History of Philosophy, while omitting Dilthey, Croce, Ortega, Lavelle, Whitehead, Lukács, Jaspers, Cassirer, Hartmann, and Scheler… In short, the reader of The Thinkers, if he formed his image of the history of thought from this collection alone, would end up conceiving it quite differently from what he could get from any book or course on the subject (except, of course, the USP course, where Pessanha’s group reigns)."80

Things may have improved over time, but they had not by 1990, when the same group organized, at the São Paulo Art Museum, the famous series of lectures on Ethics later published by Companhia das Letras, in which the same distorting selectivity that replaced the history of philosophy with Mr. José Américo Motta Pessanha’s personal mythology prevailed. As I have reported this episode in The Garden of Afflictions, I need not repeat myself here. I just note that in 1990 I was already forty-three years old and, in the face of that show of incompetence, I could only congratulate myself for the youthful foresight that had kept me at a distance from that ill-fated educational institution.

It is also possible that in Philosophy-USP, as Joel Pinheiro assures, there is no longer so much leftist propaganda. On the one hand, the fall of the Berlin Wall and the intellectual discredit of Marxism indeed recommend to its remaining followers a certain discretion. On the other hand, it is no longer necessary to make much propaganda, since, from the times of Fernando Henrique Cardoso, the USP intellectual community took power, controls the country and, busy making revolution from above, no longer has to engage in humble occupations of agitation and militancy, leaving this to the students. But it is historically certain that, from the start, the group of Gianottis and the like did not aim to study philosophy as such, but rather, as Roberto Schwarz confessed, "to transform the minimum and the maximum: to change the department’s curriculum, to take control of the place, to meddle in the ideological debate, to intervene in scientific politics and, more remotely, to change the social order of Brazil itself and the world."81

“To take control of the place”: could there be a more meaningful, more eloquent expression? “Changing the social order of Brazil and the world” may sound like great politics, but its immediate and concrete expression, on the scale of the Philosophy Department, was the sacred commitment to the lowest political maneuvering: to dominate the instruments of command, to boycott and nullify competitors, “to take control of the place.”

The first battle for the conquest of the “place” came right at the inauguration of the Department when, in the competition to fill the Philosophy chair, all the candidates except one, the left’s favorite, were vetoed in limine, prevented from presenting their theses under the pretext that they did not have a "philosopher’s degree."82 The expression provoked laughter in two internationally renowned foreign observers, Enzo Paci and Luigi Bagolini.

The chosen one, João Cruz Costa, did indeed have a modest French diploma, but even his disciple José Arthur Gianotti admits that he was a man without systematic studies, in the end an autodidact who "read whatever fell into his hands."83 I have nothing against autodidacts, being even considered one of them (erroneously, as we shall see). But to hand over the Department to an amateur detached from all academic effort, while overlooking highly qualified men like Barbuy, Czerna, and Vicente Ferreira, was to ignore Bergson’s warning: “The autodidact capable of university work is, at the least, a genius.” University work to which the left’s chosen one remained perfectly foreign, while the “autodidacts” continued it outside USP. I have also never seen a USP professor confess that one of the Department’s guiding names, Gaston Bachelard, was himself an autodidact in philosophy. All philosophers without degrees are equal, but some are more equal than others.

I don’t need to report the Vilém Flusser episode, a depressing case that my dear student Ronald Robson has already put into circulation in response to the same article by Joel Pinheiro that I am commenting on.84 “Taking control of the place” was a successful operation, to the glory of an ambitious little group and the disgrace of national culture.

If Philosophy-USP eventually turned its attention to medieval philosophy and even to a little of the Brazilian philosophy it had previously disregarded, it was merely a maneuver by that department to adapt, albeit belatedly, to what it could not overcome. In part, the pressure came from within USP itself. While Pessanha and his circle concealed a thousand years of philosophy at the bottom of the trunk, the faculties of History and Education continued to fulfill their duties honorably: the former with the medieval studies of Hilário Franco Júnior, which were never adequately praised, and the latter with the masterful History of Education of Ruy Affonso da Costa Nunes, a conservative Catholic who would never have had a place in the Philosophy Department and whose work dedicated substantial volumes to medieval thought.

More concerned with its image than its obligations, Philosophy-USP became renowned for its ability to retroactively mimic the initiatives of others that it couldn’t boycott and then boast of a non-existent pioneering spirit. I myself had the depressing honor of being one of those copied. As soon as I published not one but two books on Aristotle, mine and Émile Boutroux’s, bringing back into circulation an author who had been shamefully absent from the national university bibliography for three decades, the USPians hastened to dust off and exhibit to the dazzled audience a thesis by Oswaldo Porchat Pereira, which, for thirty-six years, no one there had felt any urgency to publish.85

That being said, I return to my youthful wanderings. I continued studying on my own and was increasingly impressed by the number of important authors that the academic philosophical establishment was willfully ignoring. Since Cuvillier’s and Alquié’s manuals attached great importance to psychology as a preliminary study to epistemology, I decided to devote a few years to the study of this discipline with the help of my friend Juan Alfredo César Muller. Years later, I found that recently graduated psychologists from USP and PUC had never heard of Maurice Pradines, Lipót Szondi, René Le Senne, Gustave Le Bon, Paul Diel, Igor Caruso, Bruno Bettelheim, Julian Jaynes, and many others – not even of Viktor Frankl, who already had a study circle in the South of the country.

In the 1970s, I delved into the study of comparative religions and spiritual traditions under the guidance of Michel Veber and through the books of René Guénon, Frithjof Schuon, Titus Burckhardt, Seyyed Hossein Nasr, Leo Schaya, and others (whose profound influence opened a breach in the Western intellectual world through which the Islamic invasion would later seep). It was at that time that I came up against the mental laziness of our academic establishment. Invited by the psychiatrist Jacob Pinheiro Goldberg to a debate on religions and later to a conference on spiritual traditions at the Institute of Biosciences at USP, I was struck by the self-satisfied indolence with which many USPian minds turned their backs on intellectual events of incomparable magnitude, events that clearly foreshadowed the immense historical transformations that would shake the world in the following decades. Throughout practically the entire university environment in São Paulo, not only at USP, I met only one scholar, besides Goldberg himself, who was not completely blind and indifferent to the cultural and potentially political upheaval that the Islamic penetration of Western intellectual circles was subtly preparing. My friend Ignácio da Silva Telles, a professor at the Faculty of Law, saw something, albeit confusedly, and had at least the merit of understanding that what I was saying was deadly serious. Two decades would pass before the “opinion formers” of our universities began to realize that Islam was a formidable power capable of changing the course of world history. Yet even those who noticed remained unaware of its intellectual roots. They imagine it is all a matter of propaganda, immigration, and terrorism.

I need not continue this litany of disappointments. In my early thirties, I had already concluded that the Brazilian academic class could be expected to do anything except the minimum requirement of intellectual initiative, the desire to know, without which a life of study becomes a dry and mindless routine of a bureaucratic profession.

Until then, despite having accumulated more philosophical knowledge than any professor I knew, and occasionally giving lectures here and there, I did not feel confident enough to publish anything on philosophical subjects because I still lacked the essential: personal experience, direct learning from an authentic philosopher at the height of their creative powers. Such an opportunity did not exist in any Brazilian university, and burdened with children and expenses, I couldn’t leave the country. The greatest of our philosophers, Mário Ferreira dos Santos, had died in 1968; Vicente Ferreira in 1963; Flusser had returned to Europe in 1972, and Miguel Reale’s Brazilian Institute of Philosophy was no longer at its peak. Intensive reading of biographies of notable professors and occasional fleeting encounters with some great minds like Julián Marías, Seyyed Hossein Nasr, Martin Lings, gave me a vague idea of what a pedagogical coexistence could be like, but in the end, it was nothing more than an impossible dream for a poor Latin American young man with no money in his pocket.

It was then that, through one of Mário Ferreira’s daughters, I met Father Stanislavs Ladusãns, S.J., a Latvian philosopher whom Pope John Paul II, his childhood friend, had tasked with the impossible mission of reintroducing some Catholicism into a Catholic university in Brazil.

Facing too much resistance in the Philosophy Department of PUC-Rio, he simply created another department in a beautiful mansion in Gávea, where he installed the largest philosophy library that ever existed in this country and, with a few collaborators, started the courses of Conpefil – the Conjunto de Pesquisa Filosófica (Philosophical Research Group) of PUC.

I had previously read an anthology of intellectual self-portraits of Brazilian philosophers, which he compiled, and I was deeply impressed by the fact that a European scholar, barely arrived in the country, was more interested in the local philosophical production than any Brazilian university. I was also informed that it was through his initiative that, towards the end of his life, Mário Ferreira had received, for the first time, an invitation to teach in a higher education institution in Brazil, even giving a few lectures at Faculdade Nossa Senhora Medianeira. Goethe used to say that it is a privilege of talent to recognize genius, whereas mediocrity seeks only to destroy. As Mário was probably the most discriminated against and sabotaged Brazilian thinker, Father Ladusãns, by recognizing and honoring him against all odds, had at least shown himself to be a man of talent and courage.

I went to him initially as someone well-versed in Mário’s work, which I had been immersed in for a few years. Having discovered a sort of secret order beneath the chaos of the philosopher’s texts, which explained the meaning of the whole, I had written a study of about thirty pages on "The Structure of Mário Ferreira dos Santos' Encyclopedia of Philosophical Sciences." I showed it to the priest to see if it was worth anything and if it made sense to publish it. He was a big, fat, strong man, with a rather unfriendly face, from whom one would expect a scolding rather than any encouraging words. I left the text with him and returned two weeks later, braced for a reprimand. To my great surprise, he replied, “I accept this right away as your final thesis, but first, you have to take the course.”

“What course?”

“Our course here at Conpefil. It lasts four years, and you receive your diploma from the University of Navarra, with which we have an agreement. We are not looking for quantity; we have only two students and want only the best.”

He then mentioned two distinguished students who had graduated there and were teaching, one in Liechtenstein, the other at a Brazilian college, although I don’t remember which one.

The tuition for the course was minimal. Classes were on Saturdays, from morning till night, corresponding to a daily workload of about three hours. For three years, I spent Saturday nights sleeping on the bus from São Paulo to Rio, and Sundays returning to São Paulo, where, exhausted but happy, I would sleep until Monday morning.

In the first class, I was shocked. The man presented the fundamental problems of the theory of knowledge, divided them into several questions, and announced, “We will examine each of these questions from the perspective of the main philosophical schools, comparing them with each other, and then we will outline the personal solution that seems most appropriate for each of them.”

He then went on to analyze sensory knowledge as seen by Plato, Aristotle, the Stoics, and continued until Husserl and Merleau-Ponty. But it wasn’t just a historical account. Each new chapter was a laborious and problematic stage of a dialectical process unfolding in the mind of the speaker at that very moment, with back-and-forths reflecting the intensity of an inner search that shied away from no difficulty. There was no academic exposition there. It was the very philosophical search of our teacher, who, adopting the language of history, saw in the advances and setbacks of intelligence struggling with a problem throughout the ages the magnified image of a cognitive effort present, alive before us. It was not ready-made knowledge, nor a textual analysis; it was living philosophy in action.

“Is this it, my God in heaven?” I exclaimed within myself.

That was what I had been missing; that was what was missing in all the so-called philosophy education I had experienced in Brazil until then: not historical erudition, not textual analysis, not mere exposition of ready-made doctrines, but the living experience of philosophizing, the example of how it’s done. It was as if a deaf person, having read scores and only known the mathematical structure of music, suddenly had their ears uncovered and their soul inundated with the chords of a Bach cantata.

Professor Ladusãns repeated this performance in front of us many times, with his terrifying pronunciation filled with rolling r’s. I don’t know how many of my classmates (we were only four, then three, then two) clearly perceived what was happening. For some of them, much of it was new material, and the effort to memorize the content slightly dimmed the brilliance of the form. But for me, there were hardly any new facts. The difference was that everything I had received as ready-made, crystallized in texts, now came in the state of magma, burning and alive. You can appreciate thousands of sculptures in museums, in squares, or in printed reproductions; you may even come to master the entire history of sculpture through this means, and you may understand, through erudite explanations, many of the aesthetic principles and techniques behind those works. But you will never become a sculptor unless you have the opportunity to see a sculptor at work.

Father Ladusãns was a disciple of Husserl, committed to unifying phenomenology with scholasticism, in a way similar to André Marc and Cornelio Fabro, whom I admired so much. He was not a philosophy professor; he was a philosopher who happened to be philosophizing out loud in front of a group of students and, in that respect, was a professor. If you want to know, that is the very definition of a great philosophy teacher. Almost identical words were used by many students to describe their experiences in the classes of Alain, Bergson, Ortega, Zubiri, or Husserl himself. They were also said about Mário Ferreira, whom I did not know personally but had the opportunity to hear in many recorded lectures.

This experience left many marks on me, of which I point out two here. First, it gave me, for the first time, the confidence to write and publish philosophy texts, because now I knew not only the products, but the manufacturing process.86 Secondly, it instilled in me a taste for oral exposition, which I still value much more than anything written. I am certain that if I could reproduce in writing all the nuances of what I transmit in class, I would deserve the Nobel Prize in Literature.

There were some philosophers who came close to this, and one of them, Henri Bergson, even received the Nobel. Others were José Ortega y Gasset, Alain, Benedetto Croce, and George Santayana. What wonderful prose writers! But it is also well known that the philosophical universe of each one of them is relatively schematic and simple, without the richness of perspectives, the polyphonic complexity of a Husserl, of a Zubiri, of a Voegelin, whose heavily technical language drives readers to despair.

I love to write, but I know I will never write at the level of what I explain in class. I console myself by saying that Plato thought the same thing.

Father Ladusãns himself did not leave writings at the level of his oral teaching, and recordings of his classes, if they exist, were lost forever when, after his death, the vandals of Liberation Theology invaded Conpefil and mercilessly tore apart the great library, reducing it to a pile of books in a corner of a cramped room.

I can never repay him for the unique experience he gave me: that of being practically the only Brazilian of my generation, and of the two that followed, who, without leaving the country, had access to a true university-level philosophical education. Without him, all the philosophical culture I had acquired through decades of self-teaching would never have become anything more than that: a philosophical culture incapable of transfiguring itself into philosophy – exactly what one learns at Philosophy-USP and at the other colleges for which it served as a model.

For this very reason, it is unfair to call me self-taught – a term that is derogatory only in appearance, for it attributes to a single individual merits that he sometimes shares with many sources. In my case, with at least one.

Over years of practice, I ended up developing a different style of exposition, more suitable to a baroque temperament, a lover of contrasts, paradoxes and stridencies, but in which the technique I learned from Father Ladusãns, of showing philosophy in its nascent state, and not as a finished product, is integrated as one of its most indispensable elements.

My students know that I sometimes abandon, without prior notice, a coherent line of exposition, jumping to a completely different subject and resuming it months later, when no one expected that I would. Thus, I illustrate the struggle for the “unity of knowledge in the unity of consciousness and vice versa,” which is the very definition of philosophy, showing that it is not made by constructivist, analytical, or logical-deductive effort, but by progressive agglutination, laborious and never complete, of partial and disjointed intuitions, just like in life itself.

Mário Ferreira dos Santos and Our Future87

WHEN the work of a single author is richer and more powerful than the entire culture of his country, one of two things happens: either the country consents to learn from him, or it rejects the gift from the heavens and inflicts upon itself the deserved punishment for the sin of arrogance, condemning itself to intellectual decay and all the accompanying moral miseries.

In Brazil, Mário Ferreira occupies a position similar to that of Giambattista Vico in the Neapolitan culture of the 18th century or Gottfried Wilhelm Leibniz in Germany of the same era: a universal genius lost in a provincial environment incapable not only of understanding him but even of seeing him. Leibniz still had the resource of writing in French and Latin, thus opening some dialogue with foreign interlocutors. Mário is closer to Vico in his absolute isolation, which makes him a kind of monster. Who, in an intellectual environment imprisoned by the most petty immediacy and the most depressing materialism – materialism not even understood as a philosophical stance, but as the vice of believing only in what has bodily impact – could suspect that in a modest office in Vila Olímpia, actually a passage filled with books between the kitchen and the living room, an unknown man discussed on equal terms with the great philosophers of all ages, meticulously demolishing the most fashionable schools of thought and, on their ruins, erecting a new standard of universal intelligibility?

The problems Mário faced were the highest and most complex in philosophy, and therefore, they are so far above the banal cogitations of our intelligentsia that it couldn’t confront him without undergoing a metanoia, a conversion of the spirit, the discovery of an unknown and infinite dimension. It was perhaps the unconscious premonition of terror and awe – the Aristotelian thambos – that led it to flee from this experience, seeking refuge in its usual pettiness and gradually declining until reaching complete insignificance; undoubtedly the greatest phenomenon of intellectual self-annihilation ever to occur in such a short time in any age or country. The disproportion between our philosopher and his contemporaries – who were much superior, nonetheless, to the current generation – can be measured by an episode that occurred at an anarchist center, on a date that escapes me now, when Mário and the then most eminent official intellectual of the Brazilian Communist Party, Caio Prado Júnior, faced each other in a debate. Caio spoke first, responding from the Marxist perspective to the question posed as the Leitmotiv of the debate. When he finished, Mário stood up and said something like this: – I regret to inform you, but the Marxist point of view on the chosen topics is not what you have presented. I will, therefore, redo your lecture before giving mine.

And so he did. Highly appreciated in the anarchist group, not because he was entirely an anarchist himself, but because he defended the economic ideas of Pierre-Joseph Proudhon, Mário was never forgiven by the communists for the embarrassment he caused to a sacred cow of the Party. This fact may have contributed somewhat to the wall of silence that surrounded the philosopher’s work since his death. The Communist Party always claimed the authority to take out of circulation authors who bothered them, using their network of agents in high positions in the media, publishing world, and educational system. The list of those condemned to ostracism is large and notable. But in Mário’s case, I don’t believe that was the decisive factor. Brazil chose to ignore the philosopher simply because it didn’t understand what he was talking about. This collective confession of ineptitude certainly has the attenuating circumstance that the philosopher’s works, published by himself and sold door-to-door with a success that contrasted pathetically with the complete absence of mentions in cultural media, were printed with so many omissions, truncated sentences, and general revision errors that reading them became a real torment even for the most interested scholars – which certainly explains but doesn’t justify. The disproportion evidenced in that episode becomes even more eloquent because Marxism was the dominant or sole center of intellectual interests for Caio Prado Júnior, whereas in the infinitely broader horizon of Mário Ferreira’s fields of study, it was just a detail to which he could have dedicated only a few months of attention: in those months, he learned more than the specialist who had devoted a whole lifetime to the subject.

Mário Ferreira’s mind was so formidable in its organization that it was the easiest thing for him to immediately locate any new knowledge that came to him from a foreign and unfamiliar area within the entire intellectual order. In another lecture, questioned by a professional mineralogist who wished to know how to apply the logical techniques Mário had developed to his specialized field, the philosopher replied that he knew nothing about mineralogy, but that, deducing from the general foundations of science, the principles of mineralogy could only be such and such – and he enunciated fourteen of them. The professional admitted that he knew only eight of them.

The philosopher’s biography is full of such displays of strength, which frightened the audience but meant nothing to him. Those who listen to the recordings of his lectures, registered in the faltering voice of the man affected by the serious heart disease that would kill him at the age of 65, cannot help but notice the touching modesty with which the greatest sage ever seen in Portuguese lands addressed even the most unprepared and uncouth audiences with politeness and patience verging on the paternal. In these recordings, little is noticed of the gaps and grammatical incongruities characteristic of oral expression, almost inevitable in a country where the gap between speech and writing widens day by day. The sentences come complete, finished, in an admirable hierarchical sequence, pronounced in recto tono, as in a dictation.

When I refer to mental organization, I am not only talking about the philosopher’s personal ability but the most characteristic hallmark of his written work. If at first glance, this work gives the impression of an unmanageable chaos, a complete editorial disaster, a more thorough examination ends up revealing in it, as I demonstrated in the introduction to Sabedoria das Leis Eternas,88 an exceptionally clear and integral plan, carried out almost without flaws throughout the 52 volumes of his monumental construction, the Enciclopédia das Ciências Filosóficas.

In addition to poor editorial care – a sin that the author himself recognized and justifiably attributed to lack of time – another factor that makes it difficult for the reader to perceive the order behind the apparent chaos stems from a biographical cause. Mário’s written work reflects three distinct stages in his intellectual development, of which the first gives no hint of the two subsequent ones, and the third, compared to the second, is such a formidable leap in the scale of degrees of abstraction that we seem to be confronted not with a philosopher struggling with his uncertainties but with a prophet-lawmaker enunciating revealed laws before which the human capacity to argue must yield to the authority of universal evidence.

Mário Ferreira’s inner biography is truly a mystery, given the two intellectual miracles that shaped it. The first transformed a mere essayist and cultural disseminator into a philosopher in the most technical and rigorous sense of the term, a complete master of the questions debated over two millennia, especially in the fields of logic and dialectics. The second made him the only – I repeat, the only – modern philosopher who withstands a direct comparison with Plato and Aristotle. This second miracle announces itself throughout the second phase of the work, in a sequence of enigmas and tensions that demanded, as it were, to explode into a storm of evidence and, escaping dialectical play, to invite intelligence to a contemplative ecstasy. But the first miracle, which came to the philosopher in his forty-third year, is foreshadowed by nothing, absolutely nothing, in the works he had published until then. The philosopher’s family witnessed the unexpected. Mário was giving a lecture, in the half-literary, half-philosophical tone of his usual writings, when suddenly he apologized to the audience and left, claiming that he “had an idea” and urgently needed to write it down. The idea was nothing less than the numbered theses intended to constitute the core of Filosofia Concreta, in turn the crowning achievement of the initial ten volumes of the Enciclopédia, which would be written some at the same time, others afterward, but were already embedded there in some way. Filosofia Concreta is constructed geometrically, as a sequence of self-evident affirmations and conclusions exhaustively based on them – an ambitious and successful attempt to describe the general structure of reality as it must necessarily be conceived so that the statements of science make sense.

Mário calls his philosophy “positive,” but not in the Comtean sense. Positiveness (from the Latin ponere, “to put”) here means only “affirmation.” The objective of Mário Ferreira’s positive philosophy is to seek what can legitimately be affirmed about the whole of reality in light of what has been investigated by philosophers over twenty-four centuries. Beneath the differences between schools and currents of thought, Mário discerns an infinity of points of convergence where everyone agrees, even without declaring it, while at the same time he constructs and synthesizes the necessary methods of demonstration to substantiate them from all conceivable angles.

Hence, positive philosophy is also “concrete.” A concrete knowledge, he emphasizes, is a circular knowledge that connects everything related to the studied object, from its general definition to the factors that determine its existence, its place in larger totalities, its position in the order of knowledge, etc. Therefore, a sequence of geometric demonstrations is articulated with a set of dialectical investigations, so that what was obtained in the sphere of high abstraction is rediscovered in the realm of the most singular and immediate experience. The ascent and descent between the two planes are carried out through “decadialectics,” which focuses on its object from ten aspects:

  1. Subject-object field. Every being, physical, spiritual, existing, non-existing, hypothetical, individual, universal, etc., is simultaneously object and subject, which is the same as saying – in terms not used by the author – receiver and transmitter of information. For instance, taking the highest and most universal object – God – He is evidently a subject, and only a subject, ontologically: generating all processes, He is not the object of any. However, for us, He is the object of our thoughts. God, who ontologically is pure subject, can be the object from a cognitive point of view. On the other hand, an inert object, like a stone, seems to be a pure object, without any subjectivity. However, it is obvious that it exists somewhere and emits information about its presence to the surrounding objects, for example, the weight with which it rests on another stone. With an immense gradation of differentiations, each entity can be precisely described in its respective functions as subject and object. To know an entity is, first and foremost, to understand the differentiation and articulation of these functions.

  2. Field of actuality and virtuality. Given any entity, one can distinguish between what it is effectively at a certain moment and what it can (or cannot) become in the next instant. Some abstract entities, such as freedom or justice, can change into their opposites. But a cat cannot turn into an antigato (anti-cat).

  3. Distinction between actual possibilities and non-real or merely hypothetical possibilities. Every possibility, once logically stated, can be conceived as real or unreal. We can obtain this gradation through the dialectical knowledge we have of the object’s potentialities.

  4. Intensity and extensity. Mário borrows these terms from the German physicist Wilhelm Ostwald (1853-1932), distinguishing what can only vary in the difference of states, such as the feeling of fear or the fullness of meanings of a word, and what can be measured through homogeneous units, such as lines and volumes.

  5. Intensity and extensity in actualizations. When entities undergo changes, these changes can be either intensive or extensive in nature. The precise description of changes requires the articulation of both points of view.

  6. Field of oppositions in the subject: reason and intuition. The study of any entity under the first five aspects cannot be done solely based on what is known about them, but it must take into account the modality of their knowledge, especially the distinction between the rational and intuitive elements that enter into it.

  7. Field of oppositions of reason: knowledge and ignorance. If reason provides knowledge of the general and intuition of the particular, in both cases, there is a selection: to know is also to not know. All dualisms of reason – concrete-abstract, objectivity-subjectivity, finite-infinite, etc. – stem from the articulation between knowing and not knowing. An object is not known until we know what must remain unknown for it to become known.

  8. Field of rational actualizations and virtualizations. Reason operates on the work of intuition, actualizing or virtualizing, that is, bringing to the forefront or relegating to the background, the various aspects of the perceived object. Any critical analysis of abstract concepts presupposes a clear awareness of what has been actualized and virtualized therein.

  9. Field of oppositions of intuition. The same separation between the actual and the virtual also occurs at the level of intuition, which is spontaneously selective. For example, if we look at this book as a singularity, we abstract from other copies of the same edition. Just like reason, intuition knows and does not know.

  10. Field of variant and invariant. There is no fact that is absolutely new or absolutely identical to its predecessors. Distinguishing the various degrees of novelty and repetition is the tenth and last procedure of “decadialectics.”

Mário complements the method with “pentadialectics,” a distinction of five different planes on which an entity or fact can be examined: as a unity, as part of a whole of which it is an element, as a chapter in a series, as a piece of a system (or structure of tensions), and as part of the universe.

In the first ten volumes of the “Encyclopedia,” Mário applies these methods to solve various philosophical problems divided according to the traditional distinction among the disciplines that compose philosophy – logic, ontology, theory of knowledge, etc. – thus composing the general framework with which, in the second series, he will delve into the detailed study of certain singular themes.

In the course of developing this second series, he focused more extensively on the study of numbers in Plato and Pythagoras, which ultimately determined the spectacular “upgrade” that marks the philosopher’s second metanoia and the final ten volumes of the “Encyclopedia,” as explained in the introduction to “Wisdom of Eternal Laws.” The book “Pythagoras and the Theme of Number,” one of the author’s most important works, bears witness to this mutation. What caught Mário’s attention was that, in the Pythagorean-Platonic tradition, numbers were not seen as mere quantities, in the sense they are used for measurements, but as “forms,” logical articulations of possible relations. When Pythagoras said, “all is number,” he did not mean that all differentiating qualities could be reduced to quantities, but that the quantities themselves were, so to speak, qualitative: each of them expressed a certain type of articulation of tensions, the whole forming an object. Thus, if this is indeed the case, Mário concludes, the sequence of integers is not merely a count, but an ordered series of logical categories. Counting is, even unconsciously, ascending the steps of a progressive understanding of the structure of reality. Let us see, just to exemplify, what happens in the transition from the number one to the number five. Any object is necessarily a unity. “Ens et unum convertuntur,” “being and unity are the same thing,” as Duns Scotus would say. At the same time, however, this object will contain some essential duality. Even simple unity, or God, does not escape the gnoseological dualism of the known and the unknown, since what He knows of Himself is unknown to us. At the same time, the two aspects of duality must be connected, requiring the presence of a third element, the relation. But the relation, by articulating the previous two aspects, establishes a proportion, or quaternality. The quaternality, considered as a differentiated form of the entity whose abstract unity we grasped in the beginning, is in turn a fifth form. And so on.

The mere counting synthetically expresses the set of internal and external determinations that make up any material or spiritual object, actual or potential, real or unreal. Therefore, numbers are “laws” that express the structure of reality. Mário himself admits that he does not know whether his very personal version of Pythagoreanism materially coincides with the historical philosophy of Pythagoras. Whether a discovery or a rediscovery, Mário’s philosophy unfolds before our eyes as a differentiated and meticulously finished edifice, an entire doctrinal structure that, in Pythagoras – and even in Plato – was present only in a compact and obscure manner. At the same time, in “The Wisdom of Principles” and the other final volumes of the “Encyclopedia,” he gives his own philosophical project an incomparably greater scope than one could have predicted even from the masterful “Concrete Philosophy.” At this point, what began as a set of methodological rules transmutes into a complete system of metaphysics, the “mathesis megiste” or “supreme teaching,” far exceeding the original ambition of the “Encyclopedia” and elevating Mário Ferreira’s work to the status of one of the highest achievements of philosophical genius of all time.

I have no doubt that, when the current phase of intellectual and moral degradation in the country has passed, and reconstruction becomes possible, this work, more than any other, must become the foundation of a new Brazilian culture. The work itself does not need this: it will survive very well when the mere memory of something called “Brazil” has disappeared. What is at stake is not the future of Mário Ferreira dos Santos: it is the future of a country that gave him nothing, not even a recognition in words, but to which he can give a new life of the spirit.

Notes Towards an Introduction to Philosophy89

THERE IS NO elementary philosophy. No matter where you enter a philosophical question, regardless of what it is, you will end up right at the heart of the trouble. Nothing can help you but mastery of philosophical technique. Philosophical technique is knowing how to trace a theme, a problem, an idea, to its roots in the very structure of reality. It’s about thinking about the subject until thought reaches its limits and reality itself begins to speak. “Thinking”, here, is not talking to oneself, combining words or arguing in the attempt to prove something. It’s not even building logical deductions, no matter how elegant they may seem (the mind’s constructive activity belongs to mathematics, not philosophy). It is, firstly, diving into inner experience in search of a faithful recollection of how something came to your knowledge and where it arose in the larger picture of reality. Gradually you will distinguish what came from reality and what you yourself added, and why you added it. When you are sure you have the clean, unadulterated data (but without discarding the additions, which are sometimes useful later), you can look around it and see the surrounding and preceding conditions that made its presence possible. You can’t do this without deepening your own self-awareness in the very act of contemplating the object. The task requires a degree of mental concentration and sincerity that greatly exceeds the capacity of the average man (including “intellectuals”, even authentic ones; I won’t even mention their imitators). It is a task as demanding as, and even more bristling with psychological obstacles than, the effort required to overcome neurotic resistances in the course of psychoanalytic treatment (and psychoanalytic treatments can drag on for years).

To measure the distance separating philosophical inquiry from any and all forms of “argumentation” (valid or invalid), just note that at the very first steps the inner perception of the object, if it is heading in the right direction, already transcends your ability, or at least your immediate ability, to express it in words. It’s about becoming aware, not “reasoning”. Verbal thought serves only as initial support here. It’s about making present, by all available mental means, the entire picture of the real conditions that made it possible for you to know the object. From there to knowing the conditions that made its very existence possible is only a step, but it’s the decisive step. It is only at this moment that verbal exposition of this experience becomes possible in turn, because to place a real object in the frame of conditions that enabled it is to place it, automatically, at some point of a logical deduction. All you will be able to do is verbalize this deduction, not the inner path traversed. But it’s the journey that gives the logical deduction all its substantiality of meaning. Read or heard by someone who is not able to reconstruct the corresponding inner experience, the deduction will only be a formal scheme that, like any other formal scheme, can feed endless and unprofitable discussions and refutations. These discussions and refutations may be an imitation of philosophy, but they are as different from genuine philosophy as a MIDI file of a Bach cantata is different from a Bach cantata. They may serve as logical training, but training for a constructive mental activity, as useful as it may be for other purposes, is exactly the opposite of learning philosophical analysis: you cannot open yourself to reality by building something in place of it.

The only possible learning of philosophy is to read the expositions of philosophers by imaginatively reconstructing the inner activity that generated them. This is like reading a score and gradually learning to perform it with all the implied emotional nuances and emphases, which the score hints at but does not show. Before becoming a composer, you have to learn to do this with many pieces by other composers. Before analyzing your first philosophical problem, you will have to play many pieces composed by the philosophers of old. And, just as it happens with the music apprentice, you won’t give a public recital with the first pieces you barely learned to play. Aristotle studied with Plato for twenty years before he started teaching. To learn to philosophize is to learn to hear, and then to play, the secret melody behind the mere verbal signs. If everything goes right, after many years of practice you will end up discovering your own secret melodies – and when you write them down you will find that practically no one will know how to play them, but everyone will want to imitate them in the form of “arguments”. Philosophy professors – especially in Brazil – generally have no idea what philosophical investigation is. Instead of philosophy, they teach argumentation, at best. Most of the time they don’t even do that: they teach ready-made arguments and call anyone who doesn’t want to repeat them a fascist. It’s a kind of drug trafficking.

Advice for Philosophy Students90

  1. PHILOSOPHY is what its founders intended, not what its successors made of it. Only in Socrates, Plato, and Aristotle can you obtain an accurate image of what philosophy is.

Explanation. This is not what the professors will tell you, but they either lie or do not know what they are talking about. They consciously or unconsciously apply Hegel’s maxim that “the essence is in what the thing becomes,” meaning that only the complete development of the thing over time reveals what it is. Hegel says that we cannot know a tree by looking only at the seed, which is perfectly true. But, applying this principle to philosophy, he and the professors believe that philosophy progresses toward self-awareness and full realization. Hence, only through knowledge of its current and most recent form can we have a correct idea of what it is. Consequently, our university philosophy education emphasizes recent thought over medieval and ancient ideas. However, Hegel’s principle can only be applied to beings whose development is predetermined from the origin, as the form of a tree is predetermined in the seed. An apple seed may or may not germinate, and the apple tree may grow to its full development or be cut down halfway, knocked down by lightning, or eaten by pests, but it cannot, under any circumstances, change its essential quality and become, for example, a jabuticaba, lemon, or almond tree. In other words, the nature of its course is predetermined; only whether this course will reach its full development or not is not predetermined. The same does not apply to human projects.

Once you have decided to gather money to build a house, nothing obliges you to go ahead until the final completion of the project; at any moment, you can change your mind, invest the money in a business, or spend it on a trip. Even after starting the construction, you can sell the unfinished house and buy, for example, a car or decide to spend the money on horse racing. A friend of mine, having founded a construction company, ended up making much more money in the demolition business. This means that the development of a human project does not have to follow the predetermined course from the beginning. It can change direction, alter itself, transform even into its opposite or into a realization completely unrelated to the initial project. Moreover, the realization of a natural development, such as a plant, follows the course of regular natural causes (except for human intervention); its achievement is not more uncertain than the general probabilism of nature and can, therefore, once the conditions are known, be predicted with reasonable accuracy. The same does not apply to human projects, where doubts, errors, chances, forgetfulness, volatility, betrayal, unconscious motives, changes of interests, etc., are introduced. Hence, the present state of philosophy does not necessarily reflect a development that contains all its previous stages. This would only be possible in the absurd hypothesis that each present philosopher had absorbed and transcended all the previous stages of philosophy. The fact is that at any stage in history, the state of philosophy reflects not absorption or overcoming but often forgetfulness, loss, which then requires laborious retakes. The number of philosophical schools with the prefix “neo” is evidence of this: neo-Scholasticism, neo-positivism, neo-Kantianism, etc. Each of these names presupposes that something was lost and must be found again. 
Furthermore, philosophy often changes its subject: new things happen, and they become new philosophical themes, coming from outside of philosophy. For example, Christianity. After Christ, philosophers had to start reasoning about Christian themes that were entirely absent from the original idea of philosophy. This means that the development of philosophy is not a unitary and organic process like that of a tree, but an irregular, inorganic process with foreign grafts and unforeseen ruptures. This is why new philosophies emerge, different from the previous ones—sometimes so different that they cannot even be compared by opposition. Therefore, the present state of philosophy does not have a nexus of organic continuity with the original idea of philosophy, to which, however, it remains connected by some ideal or normative reference. Hence, only knowledge of the original project, considered independently of its subsequent developments, can give us an idea of what philosophy is, given that many of these developments may be fortuitous and have nothing to do with the original project. A philosophy teacher who fills the students' minds with recent philosophical debates before giving them a massive dose of Plato and Aristotle is preventing them from accessing knowledge of philosophy. Unfortunately, this is the general rule in our university schools.

  2. You will hear that there are “eternal philosophical questions” to which philosophers offer answers and more answers without reaching any appreciable agreement. Do not believe it.

Explanation. Rarely have two philosophers dealt with the same question. And even when they seem to be dealing with the same question, like Aristotle and St. Thomas, the Latin principle applies: Duo si idem dicunt non est idem – “If two say the same thing, it is not the same thing.” Closer to the mark is Susanne K. Langer when she says that the great shifts in the history of philosophy consist in the appearance of new constellations of questions.

  3. You will also hear that there are at least “philosophical questions,” a set of topics of specifically philosophical interest. Do not believe it.

Explanation. Philosophy is interested in the whole of human knowledge and not in this or that in particular. Philosophy is a particular treatment given to questions, not a specific set of questions.

  4. You will hear that philosophy seeks to create a general conception of the universe, life, etc. Do not believe it.

Explanation. Philosophy has never invented a single conception of this kind. What it has done is to discuss, deepen, and improve existing conceptions coming from religion, common sense, tradition, prevailing ideologies, etc. Inventing worldviews is not the task of a philosopher.

  5. You will hear that philosophy is in crisis. Do not believe it.

Explanation. There is no “normal state” in philosophy from which it could depart and enter into a crisis. Philosophy has always been in crisis; or rather, it is the crisis itself. Philosophy only appears when common beliefs have been shaken, when the worldview is discredited or no longer understood. Philosophy comes into play to change or restore the worldview, depending on the case. Today some academics (in fact most of them, particularly in Brazil) confuse philosophy with worldview. Seeing that their personal or group worldviews (such as Marxism, evolutionism, scientism, etc.) have gone into crisis, they projectively believe that they are seeing a crisis in philosophy. A true philosopher would say, “The worldview of the intellectual class is in crisis; therefore, it is time to start doing good philosophy.” Now, those who speak of the crisis of philosophy are precisely the most incapable of critically transcending their shaken worldviews and creating a true philosophy. Therefore, being outside of philosophy, they have no authority to evaluate its state.

  6. Do not judge ancient philosophies by what your professors tell you. Judge your professors by the level of ancient philosophy.

Explanation. 1st – If the realization turned out better than the project, it is something we can only evaluate based on the project, and it is a sign that the project was better than it seemed at the beginning. Far from condemning it, it exalts it. If it turned out worse, then the project is the law that condemns it. In both cases, it is the old that judges the new, not the other way around. 2nd – The dead do not speak, it is true: but it is easier for them to influence us than for us to influence them. What Plato or Aristotle thought is something that weighs on us. What we think of them is something that neither pleases nor displeases them. Therefore, it is more important to know what they would think of us than what we think of them.

Who is a Philosopher and Who is Not91

As the awareness of the complete debacle of our public and private universities spreads, the number of Brazilians who courageously seek to study at home and acquire through their own efforts what they have already paid for from a thieving government—or thieving educational entrepreneurs—grows.

Almost ten years ago, the Odebrecht Foundation—a truly admirable institution—asked me what I thought of a campaign to demand better quality education from the government. I replied that it was futile. One should not ask or demand anything from swindlers. The best thing to do with the education system was to ignore it. If they wanted to provide a good service to the public, I added, they should help autodidacts—the heroic segment of our population that, from Machado de Assis to Mário Ferreira dos Santos, created the best of our higher culture. The way to help them was to make the essential resources for self-education accessible to them, which, in the end, is the only true education there is. I even conceived a collection of books and DVDs for this purpose, providing not only the indispensable introductory elements for each specialized field of knowledge but also the sources for continuing studies to a level far beyond anything any Brazilian university could offer or even imagine.

My suggestion was kindly shelved, and with or without a demand campaign, the national education continued to decline until it became what it is today: intellectual abuse of minors, exploitation of popular good faith, organized or disorganized crime.

In the same measure, the number of desperate letters seeking pedagogical help that reach me has multiplied by ten, by a hundred, and by a thousand, surpassing my capacity to respond and forcing me to invent things like the True Outspeak program, the Seminário de Filosofia92 and other ongoing projects. Yet I still cannot meet the demand. The letters keep coming, and the most repeated request is for an essential philosophical bibliography. It is an impossible request. The first step in this course of study is not to receive a list of books but to form it on your own initiative, through trial and error, until you develop a kind of selective instinct capable of guiding you through the labyrinth of philosophical libraries. What I can do, however, is provide a basic criterion for you to learn to discern at first sight, among the authors who speak on behalf of philosophy, which ones deserve attention and which are better off forgotten.

I was fortunate to acquire this criterion through the living example of my professor, Father Stanislavs Ladusãns. When he tackled a new philosophical problem—new to the students, not to him—the first thing he did was analyze it according to the methods and viewpoints of the philosophers who had dealt with the subject, in chronological order, incorporating the spirit of each one and speaking as if he were a faithful disciple, without contesting or criticizing anything. Having done this with two dozen philosophers, the contradictions and difficulties would naturally appear, without the slightest intention of being polemical. He then ordered these difficulties, analyzing each one, and finally articulated, with the most solid elements provided by the various thinkers studied, the solution that seemed best to him.

It was a delight, to say the least. At a glance, we understood the living sense of what Aristotle intended when he stated that dialectical examination must begin with the listing of “the opinions of the wise” and attempt to articulate this material as if it were a single theory. Each philosopher must think with the minds of their predecessors to be able to understand the status quaestionis—the state in which the question came to them. Without that, all discussion is mere foolish abstraction, gratuitous opinionism, presumptuous amateurism.

The immediate conclusion was as follows: philosophy is a tradition, and philosophy is a technique. One arrives at the domain of technique through the active absorption of tradition, and tradition is absorbed by practicing the technique according to the various stages of its historical development.

Note the immense difference between acquiring pure information, however erudite, about the ideas of a philosopher, and faithfully putting them into practice as if they were our own in examining problems in which we feel a genuine and urgent interest. The first alternative kills philosophers and buries them in an elegant tomb. The second revives them and incorporates them into our consciousness as if they were roles we personally play in the grand theater of knowledge. It is the difference between museology and tradition. In a museum, many strange pieces can be preserved, relics of an incomprehensible past. Tradition comes from the Latin traditio, which means “handing over” or “delivering.” Tradition means making the past present through the revival of the interior experiences that gave it meaning. The philosophical tradition is the history of struggles for clarity of knowledge, but since knowledge is inherently temporal and historical, one can advance in this struggle only by reviving the previous battles and bringing them into the conflicts of the present.

Many people, driven by an excessive love for their independence of opinion (as if any nonsense that came out of their heads were a treasure), are afraid of letting themselves be influenced by philosophers and begin to argue with them from the very first line, when they do not enter the reading armed with an impenetrable shell of prejudices.

With Father Ladusãns, we learned that, on the whole, influences improve each other, and even the bad ones become good. Incorporated into the dialectical network, even the most unforgivable philosophical idiocies end up revealing themselves as useful, like natural errors that the intelligence must go through if it wants to reach a dense, living truth, and not just hit upon empty generalities by chance.

Some practical rules follow from these observations:

  1. When you encounter a philosopher, in person or in writing, verify if they feel comfortable reasoning together with the philosophers of the past, even those with whom they “disagree.” The flexibility to mentally incorporate the previous chapters of philosophical evolution is the hallmark of the genuine philosopher, heir to Socrates, Plato, and Aristotle. Those who lack this quality, even if they emit valuable opinions here and there, are not members of the guild: they are amateurs, at best talented guessers. Many allow themselves to be imprisoned in this atrophied state of intelligence out of laziness to study. Others do so because they adhered to a certain current of thought in their youth and became incapable of deeply absorbing all others, to the point where they can no longer comprehend even their own. One of these diseases, or both, is all you can acquire in a Brazilian university.

  2. Do not study philosophy by authors but by problems. Choose the problems that truly interest you, that seem vital to your orientation in life, and search philosophical dictionaries and bibliographic guides for the classic texts that addressed the subject. The formulation of the problem will change many times during the course of the research, but that is a good thing. Once you have selected a reasonable number of relevant texts, read them in chronological order, seeking to mentally reconstruct the history of the discussions on the subject. If there are gaps, go back to research and add new titles to your list until you compose a sufficiently continuous historical development. Then classify the various opinions according to their points of agreement and disagreement, always trying to determine where an apparent disagreement conceals a deep agreement about essential categories under discussion. Having done this, assemble everything again, not in historical order but in logical order, as if it were a single philosophical hypothesis, even if unsatisfactory and full of internal contradictions. Then you will be equipped to examine the problem as it appears in your personal experience and, confronting it with the legacy of tradition, provide, if possible, your own original contribution to the debate.

This is how it’s done, this is how you study philosophy. Anything else is amateurism, literary criticism, political propaganda, organized vanity, consumer exploitation, or improper use of public funds.

More on Philosophers93

EXPRESSING real experience in words is a formidable challenge even for great writers. So serious is this difficulty that, to overcome it, an entire range of literary genres had to be invented, each of which suppresses parts of the experience in order to highlight the remaining parts. If, for example, you are Balzac or Dostoevsky, you link the facts in narrative order, but, for the narrative to be readable, you have to give up the poetic resources that would allow you to express all the richness and confusion of the feelings involved. If, on the other hand, you are Arthur Rimbaud or Giuseppe Ungaretti, you can compress that richness into a few verses, but they will not have the immediate intelligibility of narrative.

These observations are enough to show that the ideas and beliefs that arise in public and private discussions are rarely formed from experience, at least from direct personal experience. They come from ready-made verbal schemes, received from the cultural environment, and they form, on top of personal experience, a condensate of stock phrases largely detached from life. If you read the Socratic dialogues attentively, you will see that the main occupation of the founder of the Western philosophical tradition was to dissolve these verbal compactions, forcing his interlocutors to reason from real experience, that is, to speak of what they knew instead of repeating what they had heard. The problem is that, once you repeat what you have heard once or twice, not only do you come to regard it as your own, but you identify with that verbal fetish and cling to it as if it were a treasure, a lifeline, or the sacrosanct symbol of a divine truth.

To make matters worse, stock phrases come very well made, in cultivated and prestigious language, whereas personal experience, owing to the difficulties noted above, can barely express itself in a crude and puerile stammer. Here lies one of the most serious reasons why people prefer to speak elegantly about what they do not know rather than expose themselves to the embarrassment of saying in naive words what they do know. One of the results of this almost obligatory hypocrisy is that, from feeding so long on verbal symbols without the substance of life, the intelligence ends up secretly disbelieving in itself, or even openly proclaiming the impossibility of knowing the truth. Since this impossibility is, in turn, also a prestigious symbol these days, it serves as the last and invincible pretext for fleeing the only fruitful mental activity, which is the search for truth in real experience.

The very word “experience” usually comes loaded with a misleading nuance, since it generally refers to “scientific facts” carved out by conventional methods, which conceal and end up replacing direct personal experience. Under these conditions, public or private discussion becomes an exchange of stereotypes in which, at bottom, none of the participants believes. This is the sense of the popular Brazilian expression conversa fiada, “talk on credit”: the speaker buys the attention of others – or his own – on credit and never pays for the time spent with substantive words. (I have always found it an injustice that the laws punish monetary offenses but not the theft of time. Lost money can be earned back – time, never.)

From Socrates until today, philosophy has developed countless techniques for puncturing the balloon of stereotyped conversation and bringing the interlocutors back to reality. Zu den Sachen selbst – “to the things themselves” – the motto of the great Edmund Husserl, remains philosophy’s most urgent message after twenty-four centuries. No one was more conscious than Husserl himself of the linguistic and psychological obstacles that stood in the way of his appeal. The entire technical vocabulary of philosophy – and Husserl’s is among the heaviest – is meant for nothing other than to open a path back from the illusions of the literate class to actual experience. Mastering this vocabulary can itself be a formidable difficulty, but certainly not as formidable as the risk of going on discussing empty words while the world collapses around us. By incorporating itself into the surrounding culture as an academically respectable activity, philosophy itself tends to lose its original force as a clarifying activity and to become one more stone in the wall of artificialities rising between thought and reality.

Consciousness Without Consciousness94

ALL of us, in difficult moments of life, have tried to explain ourselves to someone who either doesn’t want or can’t comprehend us. The person’s gaze wanders from side to side behind an opaque veil, failing to grasp the focus of what we intend to show them. As there is no focus, they cannot articulate in a coherent frame what we are telling them. They apprehend the words and even entire sentences, but they drain them of meaning or attribute an improper, displaced sense to them, unrelated to the situation. It is annoying, sometimes despairing.

We have also seen people who, entangled in their own difficulties, cannot see the mess they have gotten into. They either remain alienated, in a suicidal carelessness, or become agitated and fearful for invented reasons unrelated to the real problem.

These two types of individuals are “conscious” in the sense of neurophysiology and cognitive science, but not in the sense that the word “consciousness” has in real life. The “consciousness” studied by these sciences is the simple ability to notice stimuli. They cannot go beyond this point. They cannot distinguish between the fool who feels the cold on his skin and the sensitive person to whom the sight of snow suggests, in a flash, the contrast between the beauty of the landscape and the danger faced by the homeless poor.

This difference, within its proportions, is the same as that between individuals gifted with musical sensitivity and a person with “tune deafness.” This term, for which I have not found a universally accepted translation in Portuguese (it could be “melodic deprivation”), refers to a person who, despite not suffering from any hearing impairment, simply cannot grasp a melody. They hear the notes separately but cannot catch the musical phrase they compose. If the singer sings out of tune or the pianist plays a D where there should be an F, they do not notice the slightest difference. In severe cases, the afflicted person cannot even understand what music is: they do not notice the slightest difference between the “Brandenburg Concertos” and the sound of horns in heavy traffic. The condition is peculiar but not rare: according to recent data, two percent of people have some degree of tune deafness.

Victor Zuckerkandl, in “Sound and Symbol” (1956) – a splendid book – states that this difference marks the specific distinction of music, setting it apart from all other acoustic phenomena. Music, in essence, not only has order – the noise of an engine also has that. It has meaning: it points to something beyond the sonic elements that compose it. The gap between hearing sounds and apprehending a melody is the same as that between hearing words and understanding what they say – or, even worse, between understanding the mere verbal sense of sentences and recognizing what they refer to in real life.

To make things even more complicated, a recent study that sought to find some neurocerebral explanation for tune deafness discovered, to the researchers' great astonishment, that although the people affected by this deficiency do not perceive a wrong note, their brains register the difference with the same acuity as Mozart’s brain would. They hear the music perfectly well, but as the authors of the research put it, they hear it “unconsciously.” Their brains perceive the melody: the ones who do not perceive it are the individuals themselves.95

Zuckerkandl, who passed away in 1965, could not have expected that his theory would receive, half a century after its publication, such eloquent confirmation. What he did not miss was the philosophical importance of his discovery, which, going against scientific trends, remained almost unknown to the literate classes for many decades (before the 1990s, I only saw it cited in Henry Corbin, who used it to explain mystical states in the Iranian esotericism of the 13th century – a subject that is not exactly a best-seller).

The perception of music, ultimately, requires the same kind of understanding needed to grasp a complex dramatic situation, be it your own, that of an interlocutor, or one you read about in Hamlet, Crime and Punishment, The Magic Mountain, and so on. Now, to explain the fact that the brain registers a sensation of cold, scientists are forced to break down this banal phenomenon into a series of incredibly complex neurobiological processes. Not even these processes are fully explained yet, but since the dream of materialistic science is to reduce consciousness entirely, explaining it as a “product” of the brain, many adherents of materialism act as if they had already achieved this reduction and provided the most solid and irrefutable evidence for it, concluding from this that consciousness, as such, does not even exist: it is just one cerebral function among others. This is obviously charlatanism, but the sources that inspire it come from an even lower level than plain and simple charlatanism.

Note well: apart from the difference highlighted by “tune deafness,” consciousness also has a second distinctive trait that sets it apart from any other known phenomenon in the universe. No matter what you are talking about, the miracle of abstract language allows you to refer to objects not only without the need for them to be physically present but also without the need for you to think of them as real things. You can even replace the mere abstract concept of them with an algebraic symbol and continue reasoning about them without even remembering their real counterparts, confident that, if your reasoning is formally correct, you will arrive at conclusions that apply exactly to those counterparts. If it were not for this, computers could not exist. However, nothing similar happens with consciousness. You cannot talk about it without it being present and active in that very moment. The true discourse on consciousness, on the contrary, has the power to intensify consciousness in the very moment you reason about it, like a light that, as soon as it is turned on, automatically illuminates a series of others and lights up the entire room. This is the sense in which “consciousness” is spoken of in real life. This discourse demands the presence of the conscious and responsible speaker who assumes themselves as present in the act of discourse. If, in contrast, you reduce consciousness to a generic phenomenon, about which you can speak as an external thing, the object instantly escapes from your horizon of consciousness, as you are no longer talking about consciousness actually existing but only about some mechanism or aspect of it in particular, which does not exist in itself.
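The point about algebraic symbols, that formally correct manipulation yields conclusions valid for any real counterpart one later substitutes, can be sketched in a few lines of code. Everything below (the coefficient-list representation and the helper names poly_mul and poly_eval) is an illustrative choice of mine, not something taken from the text:

```python
# A minimal sketch of reasoning with an uninterpreted symbol: a
# polynomial in "x" is kept as a bare coefficient list [c0, c1, ...],
# and no value of x is ever consulted while we manipulate it.

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_eval(p, x):
    """Only here does the symbol meet a real counterpart."""
    return sum(c * x**k for k, c in enumerate(p))

# Expand (x + 1)**2 purely formally: [1, 1] * [1, 1] -> [1, 2, 1],
# i.e. 1 + 2x + x**2, without ever assigning x a value.
square = poly_mul([1, 1], [1, 1])
print(square)  # -> [1, 2, 1]

# Because the manipulation was formally correct, the conclusion
# applies exactly to every concrete counterpart:
for v in (0, 7, -3):
    assert poly_eval(square, v) == (v + 1) ** 2
```

The machine never "remembers" what x stands for, yet its conclusions hold for every substitution, which is precisely the property of abstract language the paragraph describes.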

Consciousness, in the strong sense of the word, is current, responsible self-awareness – something that can only exist in the real, present, and acting individual. Generic, abstract consciousness is a mere logical fetish. If one day they discover how the brain produces this fetish, consciousness will remain unexplained. The reductive effort, in this case, has no real scientific scope whatsoever. It is merely a hypnotic deception, an instrument of totalitarian control over society.

Science against Reason96

What is proudly referred to today as “science,” claiming to be the ultimate and supreme authority in judging all public and private matters, is neither a univocally recognizable entity nor a knowledge with its own foundation.

The possibility of something like “science” rests on a variety of assumptions that cannot themselves be subjected to “scientific” testing, let alone provide any rational basis to give the so-called “science” the authority of the final word not only in general questions of human existence but even in the specialized domain of each scientific area.

Just to give a basic example, without the words “yes” and “no,” no logical reasoning is possible. No science can tell us what they mean. All formal logic is based on these two words, and formal logic itself cannot define them. Any logical-formal definition offered for them will always be purely tautological, saying nothing in itself and ultimately basing all its understanding on the appeal to the listener’s or reader’s personal experience. If we say, for example, that the meaning of “yes” is agreement, acceptance, etc., we assert nothing except that saying yes is saying yes. Similarly, “no” cannot be defined as rejection, objection, etc., for the simple reason that the meaning of these words consists precisely in saying no. The only possible meaning of the word “yes” is the full moral responsibility that a person assumes when stating something. This responsibility, in turn, subdivides into degrees ranging from an absolute willingness to die for what is said to the mere provisional acceptance of a hypothesis for the purposes of argumentation, and therefore refutation as well. The same applies to “no.” These words cannot be defined except by appealing to personal responsibility as it appears in subjective self-awareness. This simply means that any purely logical-formal use of these terms, severed from their roots in human moral experience, is only a conventional and hypothetical use that does not allow us to distinguish whether, in the end, “yes” means yes or no and “no” means “no” or “yes.”

A similar phenomenon occurs with numerous other terms used in scientific reasoning, such as equality, difference, cause, relation, etc. No science can define these terms, nor can scientific methodology do so if it takes the validity of scientific knowledge for granted instead of grounding it from its roots. We can, of course, establish logical-formal meanings for these words, as well as for many others, but only as a conventional cut made upon what they mean in responsible human experience.

It would also be senseless to imagine that this difficulty affects only the expression of scientific knowledge in words and not the substance of that knowledge itself. Either the usual terms of scientific language express the content and structure of scientific knowledge itself, or the latter is an ineffable, mystical knowledge whose translation into words remains forever external, approximate, and imperfect.

In summary, scientific knowledge – and even more so what is popularly understood as such today – is a specialized subdivision of general rational capacity and has its foundation in it, but cannot judge it by its own criteria. What is understood here as “reason” is not limited to the usual capacities of coherent language and calculation, as both of these capacities are also nothing more than specializations of a more basic ability. Reason is, first and foremost, the ability to imaginatively open oneself to the entire field of real and virtual experience as a totality and to contrast this totality with the dimension of infinitude that immeasurably transcends it. The finite and the infinite are the primary categories of reason, and I am not referring to the mathematical equivalents of these terms, which are merely translations of them into a specialized domain. From this first distinction, numerous others arise, such as inclusion and exclusion, limited and unlimited, permanence and change, substance and accident, and so on. Without this immense network of distinctions and inclusions that constitute the basic structure of reason, the scientific method would be nothing. It is even more foolish to imagine that once historically formed, the scientific method became independent of reason and can do without it or judge it according to its own criteria. It is reason, and not the scientific method, that gives meaning to scientific discourse itself, which, in turn, cannot account for reason in the least. “Science” can never be the ultimate authority on any subject except within the limits prescribed by reason, limits that are subject to rational critique at any time and in any circumstance of the scientific process.

The object of reason is human experience taken in its indistinct totality, limited only by the sense of infinitude. The object of science is a conventionally operated cut within this totality, and its validity can only be relative and provisional, always conditioned by critique according to the general categories of reason that infinitely transcend not only the domain of each particular science but that of all sciences together.

In the end, how is a science constituted? It is assumed that a certain group of phenomena obey certain constants, and then samples are cut from within this group to verify, through observations, experiments, and measurements, whether things happen as predicted in the initial hypothesis. Once the operation is repeated a certain number of times, an attempt is made to articulate its results in a logical-deductive discourse, structuring the reality of experience in the form of a logical demonstration, evidencing, at least ideally, the rationality of the real. All of this is impossible without the categories of reason, obtained not from this or that scientific experience, nor from all of them together, but from the very sense of human experience as an unlimited totality.

Human experience taken as an unlimited totality is the most basic of realities, whereas the object of each science is a hypothetical construction erected within a more or less conventional cut within that totality. This construction is worthless when severed from the background from which it was constituted. The attachment to the authority of “science,” as seen in most public debates today, is nothing more than a search for a fetishistic, socially approved protection against the responsibilities of using reason.

The most evident symptom of this is the ease, the flippant channel-switching, with which the spokespeople of “science” move from relativistic and deconstructionist attenuations, in which all discourses are somehow valid, to absolutist proclamations of “scientific facts” immune to all discussion, so sacred that their challengers must be excluded from the academic environment and exposed to public condemnation. The cult of “science” begins in ignorance of what reason is and culminates in an explicit appeal to the authority of the irrational.

The Corporealist Illusion

WHAT SEPARATES abortionists, gay activists, globalists, Marxists, materialistic liberals, and other creatures affected by the revolutionary mentality from normal humanity is not a question of opinion or belief: it is a deeper difference, of an imaginative and emotional order.

Aristotle already taught – and the experience of twenty-four centuries ceaselessly confirms – that human intelligence does not form concepts directly from the objects of sensible perception, but from the forms preserved in memory and altered by imagination. This means that whatever escapes the limits of your imagination will be, for you, perfectly nonexistent. The imagination, in turn, reflects not only the individual’s dispositions but also the linguistic and symbolic schemes transmitted by culture. Culture has the power to shape the individual imagination, expanding or circumscribing it, making it brighter or more opaque.

The imagination of almost the entire human species, throughout millennia, was formed by cultural influences that invited them to conceive of the physical universe as just one part of total reality. Beyond the circle of immediate experience, a variety of other possible dimensions existed, occupying the immeasurable territory between the infinite and the finite, eternity and the passing instant.

From the moment when the cultural universe began to revolve around technology and natural sciences, with the concomitant exclusion of other possible perspectives, it was inevitable that the imagination of the masses would become increasingly limited to elements that could be expressed in terms of technological action and available scientific knowledge. Gradually, everything that escapes these two parameters loses its symbolic strength and ends up being reduced to the condition of “cultural product” or “belief,” without any more power of apprehension over reality. The impoverishment of the imagination is further aggravated by the growing public devotion to the power of science and technology, repositories of all hopes and, for that very reason, holders of all authority. This does not mean that the supra-material dimensions disappear entirely, but they only become accessible to popular imagination when translated into terms of technological and scientific symbology. Hence the fashion for science fiction, extraterrestrials, and astronaut gods. But it’s clear that this translation is not a real opening to the spiritual dimensions, but only their caricatural reduction to the language of the immediate and the mundane.

One of the consequences of this is that the body, millennially understood as one aspect among others in the structure of individuality, came to be not only its center but the ultimate limit of its possibilities. Those powers of the human being that only appear when confronted with the dimension of infinity and eternity become absolutely inaccessible and are explained as “cultural beliefs” of extinct eras, with the connotation of backwardness and barbarism. Hence, also, the most heinous achievements of technological society, such as total war and genocide, have to be explained, in an inverted and totally irrational way, as residues of uncivilized epochs instead of original and typical creations of the new culture. The “opinion maker” of these days is incapable of perceiving the specific difference between modern totalitarianism and the immeasurably milder forms of tyranny and oppression known in antiquity and the Middle Ages. For him, Gulag and Auschwitz are the same as the Inquisition. When we show him that extreme forms of totalitarian control of individual conduct were perfectly unknown everywhere before the 19th century, he feels that unease of someone who sees the ground opening up beneath their feet. Then he immediately changes the conversation or curses us as fundamentalist fanatics.

Still on the Corporealist Illusion

The COGNITIVE ATTACHMENT to the body, which the old Hindu doctrines already taught to be the basis of all illusion and all error, has become obligatory to the point that people consider their bodies a “property” over which they hold every right. In vain we show them that material property presupposes the physical existence of the owner; that the body, therefore, cannot be a property, because it is the prerequisite for the existence of property. Furthermore, the body could only be understood as property if the existence of the owner were admitted beyond and outside of it. To call the body “property” (and even then not in a legal sense, but only a logical one) makes sense in the Hindu or Christian perspective, for which the existence of individuality transcends that of the body – but it makes no sense for the very materialist perspective that, paradoxically, takes it as an unshakable dogma. If you believe that the body is everything, it cannot be your property: it is your substance, it is yourself. The madness here is taken to the extreme in the case of abortion advocates, who believe that everything inside their bodies belongs to them, as if the fetus did not in turn have something inside its own body and were not, by this logic, an owner of itself.

The tremendous potential for action unleashed by the advent of modern natural science and technology in the field of corporality has legitimized the illusion of the body as the center and ultimate limit of individuality, to such an extent that the very notion of individuals' biological continuity becomes hardly conceivable except as a totally artificial “narrative structure” without connection to reality. Giordano Bruno had already predicted this: deny the spiritual dimension, he said, and you will end up denying yourselves.

The phenomenon, which emerged in literary fiction at the beginning of the 20th century, is now quite visible in the practice of historiography. For the historian of the old school, using narrative devices from the novel or the theater in a history book proved only that the real is apprehended as an aspect of the possible, something Aristotle had already explained in the Poetics. For “post-modern” historians, it proves that reality does not exist, that everything is fiction and the “imposition of narratives” (curiously, without prejudice to this imposition’s expecting to have real effects on politics).

Along with biological continuity disappears the sense of individual responsibility for any action that the individual, after a few years, no longer corporally “feels” as his own. The fact, for example, that communists are the greatest murderers of communists and yet live in fear of external aggression, without realizing that the greatest danger comes from themselves, is one of the most notable cases of the psychotic alienation that results from the impoverishment of the imagination.

The reduction of the field of human experience to the dimensions manipulable by science and technology is totally incompatible with the structure of reality, where the existence of the infinite, of eternity, and of the unknowable is not, in any way, a provisional situation that the “advance of science” may overcome tomorrow or later, but a permanent positive fact, whose suppression can only result in psychotic deformations and grotesque infantilisms, such as taking the mere hope of future scientific proof as currently valid and indisputable evidence.

But the epidemic childishness of materialist intellectuals reaches its peak at the moment when Dr. Richard Dawkins, rejecting as barbaric the traditional doctrines of the religions – and, along with them, the entire philosophical tradition from Socrates to Leibniz – explains the origin of life as the possible intervention of… astronaut gods.

Thanksgiving Day Meditation

Thanksgiving Day, which has been celebrated since the 17th century but was officially proclaimed a national day by George Washington, is one of the last remaining reasons for the United States not to become a nation of spoiled and hateful children, bent on avenging themselves on their benefactors. Despite attempts to instill bitterness and rebellion in them, Americans, in general, continue to be grateful for living in such a wealthy and generous country, so that in their hearts the feeling of love for God is inseparably mixed with love for their homeland. In the U.S., it is sometimes difficult to distinguish where religion ends and patriotism begins. In his Thanksgiving proclamation of October 3, 1789, George Washington wrote: “It is the duty of all nations to acknowledge the providence of Almighty God, to obey His will, to be grateful for His benefits, and to humbly implore His protection and favor.” These words were already a preemptive response to those who deny the Judeo-Christian origin of American political institutions.

As some American friends asked me to celebrate Thanksgiving with them by writing a few lines about the feeling of gratitude, I decided to take as a starting point something that may be less Christian or Jewish: the ideas of philosopher Peter Singer, the Princeton professor who sees little difference between killing a chicken to eat it and strangling a baby to throw it in the trash.

Prof. Singer’s ethics are based on a set of very simple and reasonable arguments:

  1. Causing suffering is unquestionably evil.
  2. We necessarily cause suffering to animals when we kill and eat them.
  3. There is no evidence that an animal’s survival at the expense of another’s suffering is a good thing.
  4. Therefore, we live in evil, especially when we consider our own survival at the expense of others as a good thing.
  5. If we add to the suffering we cause to the animal kingdom the evil we inflict on each other since the beginning of time, we will see that evil prevails in the world in such quantities that there is no plausible reason to suppose that a good God created all this.

At first glance, there is no way to refute these arguments. On the contrary, all we can do is accept them and continue to reason based on them, in search of an ethics that does not close its eyes to the harsh reality they express.

Right from the start, there is no evidence that vegetables do not suffer as much as animals when we uproot them from the ground, cut them, roast them, and eat them. From the publication of Peter Tompkins and Christopher Bird’s “The Secret Life of Plants” in 1973 to Anthony Trewavas’s more recent study “Green plants as intelligent organisms” (2005), evidence has been accumulating that plants possess some cognitive and affective abilities. It is true that not all of the scientific community accepts this evidence, but the mere fact that the discussion continues without unanimous conclusion compels us to admit that it would be foolhardy to assert, without further evidence, that eating vegetables is morally harmless.

There is even less evidence that exclusively feeding on vegetables makes human beings better or less violent. Adolf Hitler was a vegetarian, and the history of the most vegetarian of civilizations, the Indian civilization, is a procession of horrors that continues in the 20th century with the massacre of Muslims by Hindus during India’s independence and the systematic killing of Christians today.

From a Singerian point of view, therefore, no living being – animal or vegetable – may morally be slaughtered and eaten by human beings. This amounts to saying that eating, in the broadest sense of the word, is a sin and a crime. But if everyone had refrained from committing this crime from the beginning of human history, there would be no human history at all, and we would not be here discussing this lovely subject. The inevitable conclusion is that, in the broadest sense, human life is a sin and a crime – a conclusion that the Bible itself subscribes to under the name of “the Fall.”

Thus, there is no formal opposition between Christianity and the ideas of Prof. Singer. What exists is a difference of scale, as Prof. Singer bases his entire ethics on the observation of what happens in the material world subject to quantitative determinations, including the need for food, while the Bible includes the totality of this world within the immeasurably larger framework of divine infinity.

One does not need to be very intelligent to understand that everything that is quantitative and finite, however immensely large, is contained in the infinite like a grain of sand on the ocean floor. The infinite has no limitations of any kind and is, at the same time, the only thing that must necessarily exist. To claim that the quantitative and finite universe is the ultimate measure of reality is self-contradictory because one thing only ends where it borders another, so the idea of finitude itself presupposes the existence of the infinite beyond the finite. The finite universe is subject to the Second Law of Thermodynamics, or entropy, and cannot subsist unless continuously replenished and regenerated by the infinite. Furthermore, the infinite cannot even be considered solely from a quantitative point of view because quantity itself is a limitation. The infinite transcends all quantitative determinations and can only be conceived as a plethora of unlimited positive qualities, the Supreme Good spoken of by Plato. No rationally defensible argument can be presented against the existence of the Supreme Good, for all arguments result in attributing infinity to that which they themselves admit as finite. The Supreme Good is, at the same time, the Supreme Reality.

Seen on the scale of infinity, all the evils of the finite world, however immense they may be, are nullified in the same instant. One cannot conceive of a single deprivation or limitation that, on the scale of infinity, is not automatically compensated by the unlimited profusion of corresponding qualities.

The Bible describes the Fall as the moment when human beings lost sight of the scale of infinity, taking the finite world as the ultimate horizon of reality and, therefore, finite things as the exclusive object of their desires. The constant pejorative references in religious discourse to “carnal desires” popularly evoke the attraction between the sexes, but this attraction cannot be inherently good or evil, for it can mean both the obsession with the sexual possession of a specific body and the openness to the desire for infinite love behind its temporary realization in the affection between two human beings. According to Ernout and Meillet’s classic Dictionnaire étymologique de la langue latine, the word “carne” (flesh), from the Latin “caro,” comes from an Osco-Umbrian root meaning “to cut” or “to divide into parts,” more clearly preserved in the Greek “karenai,” the Irish “scaraim,” and the Lithuanian “skiriu,” all with the sense of “to cut” or “to separate,” as well as in the Latin “curtus,” which gave rise to the Portuguese terms “cortar” (to cut), “curto” (short), and finally “castrar” (to castrate). The carnal desire that the Bible condemns is the hypnotic attachment to the amputated earthly good, cut off, separated from its root in infinity. It is the blind desire for an illusory thing, which can only result, in turn, in the separation between human consciousness and the divine background of reality – a phenomenon that condenses in itself the characteristics of alienation, or distancing, and of spiritual castration, or self-castration. Castration consists in losing the generative capacity and, therefore, the regenerative capacity as well. On the scale of infinity, everything that is consumed, lost, extinguished, or spent in the domain of matter and time is instantly reconquered and recreated in eternity. Eternity is the infinite regeneration of everything.
Everything that came into existence for a moment, even if very brief, cannot return to time nor disappear from eternity: what was once “being” cannot return to “nothingness” because nothingness never was. However, considered in itself, separate from the infinite, the finite world is the world of continuous extinction, the world of entropy. Spiritual castration consists of losing the sense of perpetual regeneration, through the cut between the finite and the infinite – imprisonment in the world of “flesh.” In this world, a simple lettuce leaf that you eat is an irreparable loss. Billions of chickens, sheep, cows, and pigs sacrificed in vain on the human species' table are bloody evidence of the universality of evil and absurdity.

Prof. Singer is entirely right concerning the finite world. However, curiously, instead of then turning with gratitude to the infinite that heals and regenerates everything, he uses the evil of the finite world as evidence of the non-existence of the infinite. This does not make sense since the finite cannot even be conceived in itself as totality without reference to the infinite. In other words, Prof. Singer condemns the finite world at the very moment he glorifies it as the ultimate reality, suppressing the infinite. But, as we have seen, it is precisely this suppression that makes the finite world evil and unbearable, an image of hell. Prof. Singer locks us in hell and then accuses us of living in hell.

His arguments against the finite world are true, but, on the scale of infinity, they become banal and irrelevant. Our existence only makes sense and has value when we recognize the limitation of the finite and, lifting our eyes to the infinite, admit that these limitations are also limited, temporary, and, in absolute terms, illusory: only divine infinity is fully real – and it is what makes our life possible, bearable, and meaningful, unlike the macabre festival of inter-devouring described by Prof. Singer. The feeling of gratitude to the divine infinity is not a religious ritual, although it can be one too: fundamentally, it is the only sensible attitude of human beings who recognize the structure of reality and do not let themselves be hypnotized by demonic nightmares, even if they come from Princeton. Giving thanks to the Lord is the obligation of all thinking creatures and all nations.

The Favorite Philosopher of the Incapable

Echoing the consensus of the leftist intelligentsia, Nouvel Observateur introduces Alain Badiou as "one of the greatest names in world philosophy." But it’s obvious that he is not a philosopher at all, just a communist demagogue of the lowest kind, an atrophied reincarnation of the worst Jean-Paul Sartre, being applauded as a philosopher precisely because of this. Nothing more sharply characterizes the global media since the 60s than its visceral hatred of philosophy, its compulsive need to replace it with some idiotic simulacrum appropriate for the politics of the day. In the first decade of the 20th century, newspapers accepted as representative philosophers those whom philosophy scholars pointed out as such. Later the media adopted its own criteria and, instead of disseminating high culture, began to shape it at its own discretion. That’s when types like Badiou became eminent philosophers, while real philosophy became an esoteric secret, reserved for a small circle of highbrows.

Like Sartre, Badiou does not start from a question, a doubt, a desire for clarification and foundation, but from the hysterical expression of an unjustified and unjustifiable dogmatic preference, later dressing it up with rhetorical flourishes woven with philosophical vocabulary, but lacking the minimum analytical and self-critical sense they would need to even be admitted as school philosophy papers.

The essential dogma of the Badiou doctrine is the one trumpeted by Jean-Paul Sartre: “Every anti-communist is a dog.” If the idea occurs to me that every communist is a hyena, I do not take this as a premise, but as a mere figurative summary of well-documented historical expositions and critical analyses that leave no room for any softer conclusion. The Sartre-Badiou dogma, on the other hand, is a warning posted on the door to inform visitors that any attempt at critical analysis will be repelled with cries of horror. The flight from critical analysis, in Sartre, was pure Machiavellian pretense, but in Badiou it expresses a genuine inability. Sartre, when pretending to be fanatical, had an intellectually sophisticated pretext for it: his theory of the primacy of existence over essence justified irrational stances as an effort to “exist” – along similar lines, ultimately, with the arbitrary “decisionism” of Carl Schmitt, who justified the policies of the Führer with the same brazenness with which the author of Nausea justified Stalin’s, becoming nauseating himself. Badiou needs none of that. His passionate adherence to communism is a self-founding principle, in no need of any justification, even simulated. It is the fundamental axiom, and from it is deduced everything else that the tireless chatterbox may say about whatever.

In one of his most famous lectures, he takes communism as “a hypothesis” on the way to realization – and, with the philosophical skill of a bad high-school student, he compares the beauties of this hypothesis not to the opposing democratic-capitalist hypothesis but to the real bad qualities he believes he sees in existing capitalism, while the evils of real communism need not enter the comparison, because the hypothesis – by hypothesis – has already absorbed and sanctified them in its future hypothetical beauties. The structure of the reasoning is, in itself, that of a hysterical pretense trying to camouflage its own irrationality through furious invective that dissuades the listener from demanding of the supposed philosopher the minimum duties of philosophical rationality. I admit it is a technique, but it is a charlatan’s technique.

With even greater charlatanry, he condemns the bloody police violence of the Soviet regime not for being immoral in itself but for “not being able to save the communist regime from bureaucratic inertia.” He appeals, on this point, to Mao Zedong’s doctrine according to which “the movement” must prevail over the static hierarchy of the Party. Recognizing that this theory also degenerated into violence, he forgets to note that it was violence three or four times greater than the Soviets’, revealing itself a remedy more lethal than the disease and disqualifying itself, ipso facto, as a valid criticism of the Soviet fiasco. Turning up his little nose to make himself morally superior to Soviet “state communism,” he praises May ’68, when “civil society,” instead of the Party, took the initiative of trying to strangle the bourgeoisie. But in the Soviet regime it was not the State that ruled, it was the Party, of which the State was merely a malleable instrument. And what is “organized civil society” if not the renewed, Gramscian version of the Party? In short, against the evils of the Party, Badiou suggests as a remedy… the Party.

The thing is of a crudeness worthy of Dr. Emir Sader, and it is no surprise that it ends with the proclamation of an unchanging, irrational love for that which cannot be rationally justified.

Comparing ideals with ideals and facts with facts, rather than the beautiful ideals of one side with the supposedly depressing facts of the other, is the elementary principle, I do not say of philosophy, but of any intellectual activity, however rudimentary, that aims to be honest. This precept is infinitely above Alain Badiou’s capacity. That is precisely why the entities dedicated to universal imbecilization that today’s major media outlets have become consecrate him as an eminent philosopher. He is the philosopher of those who, by congenital ineptitude or acquired dishonesty, are doomed never to know what philosophy is.

Knowledge and Control

In one of the latest issues of Prospect, Ian Stewart, a mathematics professor at the University of Warwick, notes that computers have made it possible to construct mathematical proofs that extend over millions and millions of pages, escaping human control. To believe these proofs – or to deny them – will be a leap in the dark: the hyper-development of mathematical rationality threatens to culminate in total irrationality. Will this be, Stewart asks, “the death of proof”? Many say “yes”; he aligns himself with those who say “no” – but, of course, once the question is posed in these terms, the proof of the answer would have to extend over a few million pages.

The problem, however, is not in the difficulty of the answer: it’s in the question itself. Who said that human rationality can be increased by improving logical-mathematical technique? The latter essentially consists of syllogistic reasoning, or the combination of two premises to arrive at a conclusion. Several syllogisms in sequence form a deductive chain, or demonstration.

The basic norms of this art were laid down by Aristotle and sufficed for the general needs of the human mind for about 2,300 years. It was only from the second half of the nineteenth century that some scholars found it convenient to fill in the gaps, so that reasoning would be continuous, without intuitive leaps. To facilitate the enterprise, they traded the verbal language of classical logic for mathematical symbolization. This sped up the construction of deductive chains and allowed for the mechanization of reasoning, anticipating computers.
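The progression described above, from the single syllogism to a mechanized deductive chain, can be pictured in a few lines of code. The sketch below is my own illustration, not anything from the author or from any formal-logic textbook: the `deduce` function and the sample propositions are invented for the occasion, and the rules are reduced to the simplest case of modus ponens (“if p then q; p; therefore q”) applied repeatedly.

```python
# Illustrative sketch (assumed example, not the author's): each application
# of a rule is one "syllogism"; the recorded sequence of applications is
# the deductive chain that a machine can extend far beyond human patience.

def deduce(facts, rules):
    """facts: set of known propositions; rules: list of (premise, conclusion) pairs."""
    known = set(facts)
    chain = []                      # the deductive chain, step by step
    changed = True
    while changed:                  # keep applying rules until nothing new appears
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                chain.append((premise, conclusion))
                changed = True
    return known, chain

facts = {"Socrates is a man"}
rules = [("Socrates is a man", "Socrates is mortal"),
         ("Socrates is mortal", "Socrates will die")]
known, chain = deduce(facts, rules)
# known now also contains "Socrates is mortal" and "Socrates will die";
# chain records the two inference steps in order.
```

A human can check these two steps at a glance; the point of the passage is that the same blind procedure, run over tens of thousands of premises, produces chains no human can survey, which is exactly the predicament Stewart describes.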

With the advent of computers, the process became faster still – so fast that it became possible to assemble in a few seconds demonstrations so complex that the human mind can no longer follow them. The project of making demonstrations more precise and reliable ended up making them impossible to check. Either we trust the computers, or we give up proving anything at all.

This is alarming only in appearance. Any tool that is discovered or invented exists, after all, precisely to perform some function more effectively than the human being could perform it directly with the means nature has endowed him with. The first person who thought of riding a horse succeeded only because riding was faster than walking. Clothes have continued to be worn for millennia because they protect better than bare skin does.

The problem is that it is very inconvenient to feed a computer a few tens of thousands of premises and, two seconds later, receive a ready-made conclusion without having the slightest idea of the path it took. You feel as if you were consulting an oracle. This would not be uncomfortable at all, of course, if besides the solution to the problem you did not also wish to have control of the situation. And the misfortune is that the first mathematical logicians got into this precisely with the foolish hope of gaining more control of the situation. Like all modern scientists, they were interested not in knowledge per se but in power. “Savoir pour prévoir, prévoir pour pouvoir,” was Auguste Comte’s motto. They wanted to build a Golem, but an obedient Golem. The Golem, once grown, would no longer agree to this.

Every technique has its drawbacks, and it’s pure folly to believe that techniques increase the power “of” the human being. At best, they increase the power of some at the expense of diminishing that of others. To compensate for the difference, it is necessary to invent other techniques – political and sociological – whose drawbacks are generally even greater.

What is a Just Society?104

WHEN I AM ASKED about the concept we have of a just society, the word “concept” enters with a sense that is more American – pragmatist – than Greco-Latin: instead of designating only the verbal formula of an essence or being, it signifies the mental scheme of a plan to be carried out. In this sense, evidently, I have no concept of a just society, for, convinced that it is not up to me to bring such a wonderful thing into the world, it does not seem to me a profitable occupation to invent plans that I do not intend to carry out.

What is within my reach, instead, is to analyze the very idea of a “just society” – its concept in the Greco-Latin sense of the term – to see if it makes sense and if it has any utility.

From the start, the attributes of justice and injustice only apply to real entities capable of action. A human being can act, a company can act, a political group can act, but “society”, as a whole, cannot. Every action implies the unity of intention that determines it, and no society ever comes to have a unity of intentions that justifies pointing it out as the concrete subject of a determined action. Society, as such, is not an agent: it is the terrain, the framework where the actions of thousands of agents, moved by diverse intentions, produce results that do not fully correspond even to their original purposes, let alone to those of a generic entity called “society”!

“Just society” is therefore not a descriptive concept. It is a figure of speech, a metonymy. Precisely for this reason, it necessarily has a multiplicity of meanings that overlap and blend into an indistinguishable confusion. This is enough to explain why the greatest crimes and injustices in the world have been committed precisely in the name of a “just society”. When you adopt a figure of speech as the goal of your actions, imagining it to be a concept – that is, when you propose to realize something you cannot even define – it is inevitable that you end up accomplishing something completely different from what you expected. When this happens there is weeping and gnashing of teeth, but almost always the author of the trouble evades owning his faults, clinging with the tenacity of a crab to a claim of good intentions which, precisely because they do not correspond to any identifiable reality, are the best analgesic for undemanding consciences.

If society, in itself, cannot be just or unjust, every society nevertheless encompasses a variety of conscious agents who can indeed carry out just or unjust actions. If the expression “just society” can have any substantive meaning, it is that of a society where the various agents have the means and disposition to help each other avoid unjust acts or to repair them when they cannot be avoided. A just society, in the end, means only a society where the struggle for justice is possible. When I say “means”, I mean: power. Legal power, certainly, but not only that: if you do not have economic, political, and cultural means to enforce justice, it matters little if the law is on your side. For there to be that minimum of justice without which the expression “just society” is only a beautiful adornment of heinous crimes, there must be a certain variety and abundance of means of power spread throughout the population instead of concentrated in the hands of an enlightened or lucky elite. However, if the population itself is not capable of creating these means and, instead, trusts in a revolutionary group that promises to take them from their current holders and distribute them democratically, then the realm of injustice is definitively established. To distribute powers, one must first possess them: the future distributor of powers must first become the monopolistic holder of all power. And even if he later tries to fulfill his promise, the mere condition of being a distributor of powers will continue to make him, increasingly, the absolute master of supreme power.

Powers, means of action, cannot be taken, given, or borrowed: they have to be created. Otherwise, they are not powers: they are symbols of power, used to mask the lack of effective power. Whoever does not have the power to create means of power will always, at best, be the slave of the donor or distributor.

To the extent that the expression “just society” can transmute from a figure of speech into a reasonable descriptive concept, it becomes clear that a reality corresponding to this concept can only exist as the work of a people endowed with initiative and creativity – a people whose acts and enterprises are varied, novel, and creative enough that they cannot be controlled by any elite, be they comfortable oligarchs or ambitious revolutionaries.

Justice is not an abstract, fixed standard, uniformly applicable to a multitude of standardized situations. It is a subtle and precarious balance, to be discovered anew time and time again among the thousand and one ambiguities of each particular and concrete situation. In Sidney Lumet’s film, “The Verdict” (1982), the bankrupt lawyer Frank Galvin, splendidly played by Paul Newman, arrives at an obvious conclusion after having achieved a late and improbable legal victory: “Courts do not exist to do justice, but to give us an opportunity to fight for justice”. I have never forgotten this lesson of realism. The only just society that can exist in reality, and not in dreams, is one that, recognizing its inability to “do justice” – especially to do it once and for all, perfect and uniform for all – does not take from each citizen the opportunity to fight for the modest dose of justice that they need at each moment of life.

The Globalist Revolution105

For anyone who wishes to navigate today’s politics – or simply understand something of the history of past centuries – nothing is more urgent than obtaining some clarity about the concept of “revolution.” Both among the general public and in the sphere of academic studies, there is great confusion about this concept, simply because the general idea of revolution is often formed based on fortuitous analogies and blind empiricism, instead of seeking the deep and enduring structural factors that define the revolutionary movement as a continuous and overwhelming reality for at least three centuries.

To give an illustrious example, historian Crane Brinton, in his classic work The Anatomy of Revolution, seeks to extract a general concept of revolution from the comparison between four major historical events nominally considered revolutionary: the English, American, French, and Russian revolutions. What is common among these four processes is that they were moments of great ideological fermentation, resulting in substantive changes to the political regime. Would that be enough to classify them uniformly as “revolutions”? Only in the popular and impressionistic sense of the word. Although I cannot, within the scope of this writing, justify all the conceptual and methodological precautions that led me to this conclusion, what I have to observe is that the structural differences between the first two and the last two phenomena studied by Brinton are so profound that, despite their equally spectacular and bloody appearances, they cannot be classified under the same label.

One can only legitimately speak of a “revolution” when a proposal for a comprehensive mutation of society is accompanied by the demand for the concentration of power in the hands of a ruling group as a means of achieving that mutation. In this sense, there have never been revolutions in the Anglo-Saxon world, except for Cromwell’s, which failed, and the Anglican Reformation, a very particular case that is not relevant to discuss here. In England, both the revolt of the nobles against the king in 1215 and the Glorious Revolution of 1688 sought to limit central power rather than concentrate it. The same happened in America in 1776. In none of these three cases did the revolutionary group attempt to change the structure of society or established customs; instead, it forced the government to conform to popular traditions and customary law. What can be common between these processes, more restorative and corrective than revolutionary, and the cases of France and Russia, where a group of enlightened individuals, imbued with the project of a completely unprecedented society in radical opposition to the previous one, seizes power with the firm resolution to transform not only the system of government but also the morals, culture, customs, the mentality of the population, and even human nature in general?

No, there have been no revolutions in the Anglo-Saxon world, and this fact alone suffices to explain the global preponderance of England and the USA in recent centuries. If, in addition to the defining structural factors – the project of radical societal change and the concentration of power as a means to achieve it – there is anything common to all revolutions, it is that they weaken and destroy the nations where they occur, leaving behind nothing more than a trail of blood and the psychotic nostalgia of impossible ambitions. France, before 1789, was the wealthiest country and the dominant power in Europe. The revolution inaugurated its long decline, which today, with the Islamic invasion, reaches pathetic dimensions. Russia, after a semblance of artificial imperial growth made possible by American assistance, disintegrated into a no-man’s-land dominated by bandits and the unstoppable corruption of society. China, after accomplishing the feat of starving thirty million people in a single decade, was saved only by renouncing the revolutionary principles that guided its economy and gladly surrendering to the abominable delights of the free market. As for Cuba, Angola, Vietnam, and North Korea, I won’t say anything: they are theaters of Grand Guignol, where chronic state violence is not enough to hide the indescribable misery.

All misconceptions about the idea of “revolution” arise from the prestige associated with this word as a synonym for renewal and progress. However, this prestige precisely comes from the success achieved by the English and American “revolutions,” which, in the strict and technical sense in which I use this word, were not revolutions at all. This semantic illusion prevents the naive observer – including much of the specialized academic class – from recognizing the revolution where it occurs under the camouflage of slow and seemingly peaceful transmutations, such as the implementation of world government currently unfolding before the stunned eyes of the masses.

The sufficient distinctive criterion to eliminate all hesitations and misunderstandings is always the same: whether through sudden and spectacular transmutations or through slow and seemingly peaceful ones, with or without insurrectional or governmental violence, with or without hysterical accusatory speeches and the wholesale killing of opponents, a revolution is present whenever a project for a profound transformation of society, if not of all humanity, is ascendant or in the process of implementation through the concentration of power.

It is due to the failure to understand this that liberal and conservative currents often, while opposing the most conspicuous and repugnant aspects of a revolutionary process, end up unconsciously fostering it under some other aspect whose danger escapes them at the moment. In today’s Brazil, an exclusive focus on the evils of petism, the MST, and similar movements may lead liberals and conservatives to court certain “social movements,” in the illusion of being able to exploit them electorally. What escapes the vision of these false experts is that such movements, at least in the long run, play an even more decisive role in the implementation of the new socialist world order than nominally radical left-wing movements.

Another dangerous illusion is to believe that the advent of planetary administration is a historical inevitability. The ease with which the small Honduras thwarted the globalist giant shows that, at least for now, the power of this monstrosity is merely a monumental publicity bluff. It is in the nature of every bluff to extract its vital substance from the fictitious belief it manages to instill in its victims. Very often, I see liberals and conservatives repeating the most foolish slogans of globalism, such as the notion that certain problems – drug trafficking, pedophilia, etc. – cannot be addressed on a local scale, requiring the intervention of a global authority. The absurdity of this statement is so evident that only a general state of hypnotic stupidity can explain its credibility. Aristotle, Descartes, and Leibniz taught that when you have a big problem, the best way to solve it is to break it down into smaller units. Globalist rhetoric cannot stand against this rule of method. Expanding the scale of a problem can never be a good way to tackle it. The experience of certain American cities, which practically eliminated crime in their territories using only their local resources, is the best proof that instead of enlarging, one must reduce the scale, subdivide power, and confront the evils on the level of direct and local contact rather than becoming intoxicated by the grandeur of global ambitions.

That globalism is a revolutionary process is undeniable. And it is the most vast and ambitious process of all. It encompasses the radical mutation not only of power structures but of society, education, morality, and even the most intimate reactions of the human soul. It is a complete civilizational project, and its demand for power is the highest and most voracious ever seen. So many are the aspects that compose it,

A Lesson from Hegel106

In the introduction to Philosophy of Right, G. W. F. Hegel explains that one of the essential capacities of the human ego is the ability to mentally suppress any external or internal given, whether it imposes itself as a physical presence or through any other means – the ability, in short, to deny the entire universe and make self-consciousness the only reality. Without this faculty, we would be trapped in the circle of immediate stimuli, like animals, and we would not have access to higher degrees of abstraction. The negation of the given – “the unrestricted infinity of absolute abstraction or universality, the pure thought of oneself,” according to Hegel – is one of the peculiar glories of human intelligence.

However, it is a dangerous force when exercised independently of other compensating and balancing capacities, among which, evidently, is the capacity to say “yes” to the totality of reality, the ability demonstrated by Hegel himself in the famous episode where, after contemplating a magnificent mountain for a long time, he lowered his head and declared, “Indeed, it is so.”

When the ego experiences abstract negation as an experience of freedom, and the self-determination of the will clings to this experience, Hegel continues, “then we have negative freedom, freedom in emptiness, which rises as passion and takes shape in the world.” It is worth quoting the paragraph in full, such is its analytical and prophetic force:

“When [this freedom] turns to practical action, it takes form in religion and politics as the fanaticism of destruction – the destruction of all existing social order – as the elimination of individuals who are objects of suspicion and the annihilation of any organization that attempts to rise anew from the ruins. It is only by destroying something that this negative will has the feeling of itself as existing. Of course, it imagines that it wants to achieve some positive state of affairs, such as universal equality or universal religious life, but in fact, it does not want this state to be effectively realized because that realization would lead to some kind of order, to a particular formation of organizations and individuals, whereas the self-consciousness of that negative freedom precisely comes from the negation of particularity, from the negation of all objective characterization. Consequently, what this negative freedom pretends to want can never be something particular, but only an abstract idea, and giving effect to this idea can only consist of the fury of destruction.”

This paragraph should be meditated upon daily by all scholars and practical men interested in understanding the world of politics. It elucidates some constants of the revolutionary movement that would otherwise be inexplicable – so inexplicable and paradoxical that the mind of the ordinary observer refuses to see them together, preferring to cling to isolated, occasional, and temporary aspects, mistakenly imagining that they represent the totality or essence of the phenomenon.

One of these constants is the constant self-denial that allows the revolutionary movement to take on various forms, changing its face overnight and disorienting not only its adversaries but also a good part of its own followers. Since the unity of purpose of the movement is a pure abstraction and its proclaimed objectives are only imperfect and temporary incarnations of this abstraction, it can shed its particular manifestations as if changing socks, losing nothing and even rising to new levels of power by suddenly shifting from one policy to its opposite, ready to return to the previous one without notice if circumstances require it. Guerrillas and terrorism, for example, never achieve victory in the military field, but they produce a general longing for peace, and this longing can be fulfilled by denying the legitimacy of the violence that was defended as an inalienable right just yesterday, extracting from the violent shell a core of supposedly “legitimate” “demands” and offering “peace” in exchange for “legitimately acquired” power. Defeat is transfigured into victory, negation into triumphant affirmation. The ruling party in Brazil came to power precisely through this artifice, whose know-how it now offers to the FARC. When a faction of the revolutionary movement renounces its own violence, it is because violence is about to achieve its objectives. These mutations would not be viable if the concrete ends and values proclaimed by the revolutionary movement – its “particular objective characterization,” as Hegel would say – had any inherent reality and were not just illusory figures temporarily projected by the underlying abstraction.

But self-denial does not only affect the speeches, the ideological pretexts of the revolution. It reaches the very body of the movement, which is periodically sacrificed on the altar of its own ambitions.

The ultimate foundation of human society, as taught by St. Paul the Apostle and St. Augustine, is love for one’s neighbor. Tinged or not with hatred for the stranger (which is, so to speak, its demonic counterpart, a reflection of the inherent imperfection of human love and not a substantive independent factor as Emmanuel Levinas claimed), the community of the spirit, the common devotion to a sense of life open to transcendence, flows back upon each of its members, enveloping them in a kind of sanctity in the eyes of others, whether by naming them a member of the body of Christ or of the Islamic ummah, a Roman citizen, a descendant of Moses, an heir of the Nhambiquara tradition, or a simple “citizen” of modern democracy, participating in the community of inviolable rights acquired ultimately from millennia-old religious institutions. No “brotherhood” is conceivable without a common “parenthood.” Even in the most immediate sphere of economic life, no fruitful commerce is possible without the “society of trust” spoken of by Alain Peyrefitte, founded on the belief that the sacred values of one will not be violated by the other.

In contrast to this universal rule, the revolutionary movement stands out for the constancy with which, in the organizations and governments it creates, its own members persecute and annihilate each other with systematic obstinacy and in quantities never seen in any other type of human community throughout history. The French Revolution beheaded more revolutionaries than priests and aristocrats. The Russian Revolution of 1917 was not directed against tsarism but against the revolutionaries of 1905. Nazism rose to power over the corpses of its own militants, immolated to the opportunism of a political alliance in the “Night of the Long Knives” of June 30, 1934. But it would be an illusion to imagine that these bloody rituals reflect only the passing fury of revolutionary massacres. Once consolidated in power, revolutionary parties redouble their violence, driven by paranoid suspicion against their own members, killing them by the millions and tens of millions with a zeal that surpasses anything the most violent leaders of the reaction ever thought of doing to them. No right-wing dictator has ever arrested, tortured, and killed as many communists as the governments of the USSR, China, Vietnam, Cambodia, North Korea, and Cuba. The tears of hatred that well up in the eyes of left-wing militants when they speak of Francisco Franco, Augusto Pinochet, or even the very mild Brazilian dictatorship, express nothing but a hysterical mechanism of moral self-defense – the “repression of consciousness,” as Igor Caruso called it – the inverse projection of the incalculably greater guilt that the revolutionary movement bears towards millions of its own faithful.

Contrary to the universal inclination of human nature to found social life on love for one’s neighbor, the revolutionary movement creates societies entirely based on hatred, turning the temporary unity inspired by hatred of this or that external or internal enemy into a satanic imitation of love.

None of this would be possible if the ideals and banners raised by the revolutionary movement at every step of its history had any substantiality in themselves. In this case, the common fidelity to sacred values would protect the members of the revolutionary community from each other. But these ideals are like the figures formed by clouds in the sky, condemned to dissipate at the first gust of wind, leaving behind only the empty sky. The only central and permanent fidelity of the revolutionary movement is to abstract freedom, which, together with its Siamese sisters, abstract equality and abstract fraternity, cannot perfectly incarnate itself in any particular historical form and, consisting only of absolute emptiness, can find satisfaction only in the exercise of annihilation, in the insatiable “fury of destruction.”

Sacred Art and Profane Stupidity107

IN HIS MEMORABLE book on The Symbolism of the Christian Temple,108 Jean Hani observes that in modern times, sacred art has disappeared from the West, being replaced by merely “religious” art. The difference is that the latter expresses only subjective feelings and culturally localized conceptions, while the former is a visible crystallization of certain universal transcendent principles, organizing not only individual subjectivity but also all historical and cultural conditioning. Along with sacred art, this difference has been vanishing from the horizon of modern consciousness since at least the 18th century, only partially recovered thanks to a small group of ethnologists and historians of religions, such as Mircea Eliade, Ananda Coomaraswamy, Matila Ghyka, Schwaller de Lubicz, Louis Charbonneau-Lassay, and others. By studying sacred buildings in the Far East, India, Egypt, and classical antiquity, these scholars confirmed that the structure of temples followed a set of precepts, substantially the same as those observed in the cathedrals of the Christian Middle Ages. These precepts, in turn, condensed a symbolic knowledge about the order of reality in general and the place of man in the universe. Once the veil of symbols was pierced, the presence of these teachings in civilizations separated by vast distances in time and space testified to what were, at the very least, “constants of the spirit” that History could not explain because, on the contrary, they constituted the framework of the very possibility of human History.

Hani should have added to his list of pioneers the names of René Guénon, Frithjof Schuon, Titus Burckhardt, Seyyed Hossein Nasr, and Martin Lings, who greatly influenced his own work. The detail that seems to have escaped him is that, of all these authors, only one – Charbonneau-Lassay – was Catholic, and none were Protestant. The recovery of the symbolic understanding of Christian sacred art came, in substance, from outside: not only from outside the Western clergy but from outside the entire Catholic and Protestant intellectual community. Even considered solely from the perspective of Art History, this fact would be disturbing: religious and lay people who do not understand the meaning of the buildings where they pray are, quite literally, lost in space. However, the loss of understanding of symbols is also the loss of the knowledge they convey. And this knowledge constitutes, to say the least, the only intellectually satisfactory foundation for a distinction between the sacred and the profane. Those who have lost it, no matter how religious they may be, are condemned to bow their heads before materialistic science, lowering themselves to the point of seeking rational validation of their faith from it.

Nothing could illustrate better the crisis of Christianity – and of the entire Western civilization – than this phenomenon, simultaneously humiliating and providential, of our intellectual treasures lost centuries ago being restored to us by people outside our religious communities. Sacred art is, by its essence, the sensory support par excellence for the believer’s ascent to a glimpse of ultimate spiritual realities. According to Plato, beauty is “the splendor of Truth.” Deprived of this support, religious practice is reduced to mere literalistic, crude, and compulsive obedience, occasionally adorned here and there by the often deformed fantasies of “artists,” Christians or atheists, many of whom are alien to the universe of spiritual knowledge that their works should theoretically express. Even excluding explicit monstrosities like the cathedrals of Brasilia and Rio de Janeiro and other stone celebrations of everything hostile to Christianity, places of worship nowadays are mere profane constructions used for nominally religious purposes.109

This phenomenon alone is enough to illustrate the state of alienation that has spread among Christian priests and intellectuals in recent centuries, making them incapable of facing the cultural and ideological challenges of modernity – challenges that, in themselves, are not so fearful and could have been exorcised without much difficulty by a capable intellectual class. The fact that the religious debate of recent centuries froze into the stereotype of “reason versus faith” was only the first sign of the ineptitude that had spread among religious intellectuals. The vulgarities of Catholic modernism and “liberal Protestantism,” not to mention “Liberation Theology” in its various versions, could have been easily strangled at birth if the defenders of religion had a deeper understanding of the universal principles on which it is based. In the absence of this condition, those currents gained disproportionate importance, giving rise, in reaction, to merely external traditionalisms based more on an exasperation of offended religious feelings than on a real understanding of the situation. It goes without saying that hundreds of millions of individual souls were affected and disoriented by this process, the political and cultural consequences of which are immeasurable. I don’t believe it’s possible to understand anything of the history of recent centuries without looking at it from this perspective, since religions are the backbone of their respective civilizations, and a multitude led to abandon faith, or to sustain it without any aesthetic or intellectual support, is condemned to be prey to all kinds of fantasies and satanic delusions, which end up being incorporated into high culture and daily life. I don’t know a single human individual whose personal dramas don’t, in some way, trace back to this process. 
Neither can I imagine how the parallel phenomena of the Islamic invasion and widespread anti-Christian hatred can be explained outside of this framework, so distant from the imagination of political scientists and media analysts.

The Church has always insisted that knowledge of the existence and qualities of God is not a matter of faith but of rational intelligence. Matters of faith, on the other hand, are the miraculous birth of Our Lord Jesus Christ, His mission as Savior, etc. But this faith, without that knowledge, can hardly defend itself against even moderately sophisticated intellectual attacks. What Christians lack is not faith but a clear consciousness of its unshakeable cognitive foundations. These are precisely what genuine sacred art illustrates and makes accessible to the imagination of the masses, smoothing the path for subsequent intellectual understanding. These principles, which do not refer exclusively to the articles of faith of the Christian religion, are substantially the same as those appearing in the sacred art of all great religions. The fact that this formidable intellectual weapon was lost for centuries and only returned through the hands of people outside the Christian community is one of the great ironies of history. However, it is also a providential opportunity that Christians do not have the right to ignore. Jean Hani’s own book is proof of how much they can gain from the lessons received from those Muslim, Buddhist, etc., scholars. I myself remember having first learned about the existence of a spiritual phenomenon as gigantic as Padre Pio of Pietrelcina through a Buddhist author, Marco Pallis. Guided by universal principles that had become incorporated not only into his intellect but also into his personality, Pallis, whom I met when he was already in his nineties, had a clear awareness that the miraculous deeds of Padre Pio were, after the Fatima apparitions, at the very center of Catholic life in the 20th century. However, the faithful and the Catholic media seem unable to distinguish between Padre Pio and Mother Teresa of Calcutta (or, even worse, Paul VI). 
Faith without proper intellectual support ends up seeking validation from the usual opinion-makers, for whom the distinction between a saint and a pop star is difficult to conceive. The praise of L'Osservatore Romano for Michael Jackson is not an isolated case of clerical madness. Nor are Pope Benedict XVI's praises of the Cuban regime for its "solidarity with other peoples" (a solidarity consisting essentially in the export of guerrillas and drugs) a mere accidental mistake. They are signs that Catholic consciousness has lost some sense of reality and seeks refuge in the simulacrum set up by the dominant opinion, even knowing that the latter is essentially anti-Christian. The debacle of intelligence precedes the dissolution of faith. But nowadays, you cannot speak of spiritual knowledge without some indignant believer accusing you of being "gnostic." While the most outrageous revolutionary heresies are paternalistically tolerated within the Church (after all, Liberation Theology has never suffered anything beyond verbal reprimands), any attempt to provide faith with a broader intellectual support than textbook Thomism is viewed with truly suicidal suspicion. How many card-carrying Thomists have noticed, for example, that the formal construction of the Summa Theologica, structurally identical to that of the Gothic cathedrals, conveys a message even more luminous than the literal meaning of the text? I would never have realized this without the help of Erwin Panofsky, an author to whose word Catholics would never grant more credibility than to that of Jacques Maritain, despite all the harm the latter did to the Church.

On the other hand, the works of the group of scholars mentioned by Hani also bring, along with their positive contribution, some considerable risks for the Christian believer who lets himself be dazzled by them. Firstly, their universalist perspective highlights the points that are common to all religions, and the sum of these points only outlines the metaphysical framework of reality, without any openness to the specific difference of Christianity, which is constituted, on one hand, by the historical and personal presence of the incarnate Logos and, on the other hand, by this same presence reverberated and extended in unending miracles, of which Padre Pio’s life is an indisputable testimony. Mere metaphysical doctrine, in itself, cannot account for these miracles. They do not happen because of universal laws, but through unpredictable divine acts that do not contradict them, of course, but cannot be deduced from them a priori.

Another danger inherent in these studies is that among the authors who dedicate themselves to them, many are those who, like René Guénon or Frithjof Schuon, under the pretext of emphasizing the priority of profound spirituality over mere devotional practices, end up excessively privileging the role of certain esoteric traditions and, to achieve this, resort to considerable doses of mystification. This does not invalidate, of course, the teachings they provide us on universal symbolism and metaphysical doctrines. However, when they enter the realm of “initiations,” they begin to distort things and instill in the reader the most extravagant illusions. In the prevailing spiritual confusion, some have become so attached to René Guénon’s intellectual authority that they celebrate him as an “infallible compass.” Not only René Guénon’s persistent fallibility but also unequivocal evidence of his intellectual dishonesty, at least in his early writings, appear so clearly in the meticulous analyses made sine ira et studio by Louis de Maistre in "L’Énigme René Guénon et les ‘Supérieurs Inconnus,’ Contribution à l’Étude de l’Histoire Mondiale ‘Souterraine’,"110 that continuing to deny them can only be the act of dazzled fanatics.

Another inherent danger in studying these authors is ignoring the fact that, while apparently contributing to the restoration of Christian civilization, they did not believe at all in its historical possibility and, on the contrary, placed all their bets on the "Islamization of the West" (sic). Hence the tremendous ambiguity of their contribution. Those who, desperate in the face of the ferocious self-destruction of our civilization, seek help in the study of Guénon, Schuon, Nasr, Lings, and their followers must be aware that they will find there a double-edged sword, quite difficult to handle without harm to the apprentice. The Islam that is now occupying Europe and the USA with overwhelming force and psychopathic self-confidence is not the beautifully spiritual, mythical Islam extolled by the traditionalists with an unrealism bordering on hypocrisy. It is an Islam reduced to the crudest expression of a globalist imperialism, inspired by the Muslim equivalent of "liberation theology" and going back to the ideas of Sayyid Qutb.111

It is this Islam that the ostensive protection of Prince Charles of England – not coincidentally, a disciple of Martin Lings – opens the doors of his country to, deepening the British cultural crisis and hastening an imminent and fatal outcome. If even this aristocrat, long prepared for the highest leadership positions, can become an instrument of historical changes whose scope he hardly comprehends, how much more susceptible to this will be young intellectuals who, in despair over Western suicide, go in search of the “Lights of the East”?

Human Consciousness in Danger112

Once again, I invite the readers to join me in a brief philosophical investigation. The subject – the foundations, or lack thereof, of human self-awareness – may seem far from immediate political relevance, but those who have the patience to read through to the end of this article will see that it is not so. Never before, as today, when an elite of enlightened bureaucrats reshuffles the pillars of civilization as if they were a group of escapees from a madhouse playing scientists in a nuclear laboratory, has it been vital for every inhabitant of the planet to acquire a clear understanding of the constants that define the human condition before the very concept of humanity itself, under the impact of deformative experiments imposed on a global scale, disappears from memory. But one of these constants is precisely that every human constancy reveals itself, like filigree, against the background of incessant historical mutation. Only knowledge of the comparative history of civilizations and cultures shows, beneath the almost hallucinatory variety of forms, the durability of the general structure of the human spirit. And since what is at immediate risk of being lost in the maelstrom of forced transformations is, above all, the very unity of each individual’s self-awareness – cultural fragmentation resulting in the shattering of souls – it has never been more important to know the historical mutations of the image of the “self” throughout the ages, to distinguish what is accidental and transitory from what is essential, permanent, and indispensable for the ultimate defense of human dignity.

One of the richest sources of material for this study is autobiographies. The historical development of this literary genre clearly demonstrates the transformations of individual self-awareness over time, parallel to the changes that have occurred in the respective experiences of time, memory, and the act of narration.

Among the many works on this subject, Memory and Narrative: The Weave of Life-Writing (The University of Chicago Press, 1998) by James Olney, a professor of English at Louisiana State University, is one of the most useful because, focusing on the history of the autobiographical genre from Augustine’s Confessions (397) to Samuel Beckett’s theatrical monologue, Company (1979), it clearly outlines the progressive loss of the sense of unity of self-awareness, without which the very intention of narrating one’s own life becomes absurd.

The structural model of narrative is the same in both cases. Augustine summarizes it with the example of prayer. When he recites a psalm, he already knows it by heart, in its entirety, beforehand. While he recites it, the words that follow one another aloud are actualized in time against the static background of the complete text that remains in memory. When the recitation is finished, the psalm has completed its course in time and is returned to memory, ready to be recited again and again. Every autobiographical writing has more or less this structure. The life to be recounted is complete in memory, but it continues during the act of remembering it and goes on after the narration is finished, returned to memory to be recounted again, read, or heard. What is the "substance" of this narrative? Time, but what time? The past, which no longer exists? The present, an infinitesimal atomistic instant that dissolves as soon as it appears? The future, which has only a conjectural existence? The enigma appears more or less the same in both Augustine's Confessions and Beckett's Company.

United by their common concern with time, memory, and the self, the two books could not be more antagonistic in their respective views on the matter.

Augustine’s memoirs are the formal confession of a soul that, fully assuming authorship, responsibility, and consequences of each of its acts, thoughts, and inner states, even the most obscure and remote in time, appears at its own judgment as if displaying a complete identity in which the various internal forces in conflict serve only to highlight the tensional unity of the whole. Augustine can do this because he composes his narrative before an all-knowing audience – God himself. “Walking before God” means nothing other than acting and thinking in permanent confrontation with the symbol “omniscience” – the unattainable and inescapable source of all consciousness, the only guarantee of the sincerity of thoughts, actions, and their remembrance. Although the expression appears in the Bible, Augustine was the first to explicitly articulate the meaning of the experience summarized there. The man who walks before God governs and conceives himself at every moment as if he were before the Last Judgment, in the complete form of his individual being, consciously responsible for choosing his own eternal destiny. The complete life that now appears as a future project is, therefore, the measure of the remembrance of the past, which the narrator undertakes in the present.

From this, Augustine also extracts the solution to the problem of the insubstantiality of time. God is not only omniscient but eternal. Boethius, later on, will define eternity as “the simultaneous and complete possession of all its moments,” but the concept is already implied in Augustine. If the various moments have no unity among themselves, they can only crumble into an immense nothingness. Only their total and simultaneous unity has existence, but that unity is eternity itself, and nothing more. Time, in itself, has no substantiality at all. It is merely a mirage, a “moving image of eternity.” If Augustine can intellectually master his past, it is because he exposes it to the gaze of omniscience. If he can have an intuition of the continuity of his existence, it is because he sees it as a temporal reflection of eternity. The articulation of moral self-awareness is the same articulation of the three times on the axis of eternity.

The idea of the individual as a complex and dramatic unity that forms and assumes itself at the crossroads of the three times has become so ingrained in the Western tradition that it inspired all modern psychology of personality. Sixteen centuries after Augustine, Maurice Pradines, in his Traité de Psychologie Générale (1948), defined consciousness as "the memory of the past prepared for the tasks of the future." Even in Freud, to whom much of the blame (or credit) for the dissolution of the unity of the self is mistakenly attributed, personality is the result of an arbitration progressively imposed by consciousness on the antagonistic impulses of the Id and the Super-ego. Nothing could celebrate the victory of unity more clearly than the famous prophecy of the father of psychoanalysis: "Where id was, there ego shall be."

Completely different is the perspective in Beckett’s Company. Here, an old invalid on stage listens to episodes from his life – the life of Samuel Beckett himself – narrated and commented upon, in a monologue, by a faceless voice. Is it the “voice of conscience”? Yes and no. It speaks of him sometimes in the second person, sometimes in the third. The one who, in the present, remembers the past, no longer knows whether that past is his own, that of another, or that of an invented character. And the voice challenges the old man’s sense of identity in a formidable way: if you do not remember your own birth, how can you be sure that the life you are remembering is that of the one whose birth you believe to be yours?

Like Augustine, Beckett's character – indistinguishable from the author – draws his memories on the surface of contrast provided by an invisible interlocutor who transcends the narrator and has authority over him as a formative instance. The result, therefore, differs according to the identity of this interlocutor. The eternity and omniscience of God give Augustine's autobiographical self-image the unity of a story assumed as personal and responsible creation. But Beckett's interlocutor is not omniscient; he is merely more astute than the character. He is critical reason, a corrosive potion that dissolves the temporal unity of the self through epistemological demands that the character cannot meet. The immobilized old man does not even have the power to say "I" in full consciousness, but, for that very reason, he can claim neither blame for his sins nor merit for his achievements. The fragmented self, unable to tell its own story, becomes a victim of its own existence and, therefore, bears no responsibility for it. Augustine's narrative rises from the obscure depths of the heart to the divine light that, in return, confers participation in its own unity and clarity. Beckett's narrative emerges from an external darkness that obscures the little light the ego believed it possessed.

In the transition from one extreme to the other, Olney documents some stages of the “crisis of narrative memory” that, like a guiding thread, runs through the entire history of modern Western mentality. He dates the beginning of this “crisis” from Jean-Jacques Rousseau’s Confessions (1782), but he is mistaken. It was already fully established in René Descartes' Meditations on First Philosophy (1641), which presents itself as an interior autobiography, the narrative of a cognitive experiment. The dreadful confusion that the philosopher produces there between the concrete existential self and the abstract concept of the self as absolute self-awareness (cogito ergo sum), passing from the first to the second without noticing that he has jumped from the temporal order to the deductive order, is one of the most prodigious mutilations ever imposed on the autobiographical consciousness of Western man. All of Beckett’s problem is already there. As Jean Onimus aptly observed,113 "Install yourself in Descartes' cogito at its point of origin, and you will see Beckett’s man in the full extent of his misfortune."

The Cartesian self cannot narrate its own history because it is merely an abstract form isolated in space, amputated from temporal experience. However, if the philosopher presents it in narrative form, it is because, literally, he does not perceive what he is doing. Cartesianism is not the inaugural chapter of the dissolution of narrative self-awareness,114 but it is an important episode in the process. Descartes' incongruity will be greatly amplified by Immanuel Kant through the idea of the “transcendental self.” This astonishing creature of German philosophy has the authority to demarcate the boundaries of experience accessible to the poor existential self without being itself limited by them, but without even opening a narrow slit for the existential self to see beyond those boundaries. It is called “transcendental” precisely because it closes the doors of access to the “transcendent.” Established in the middle heights of the transcendental self, which is only slightly above the existential self, the philosopher does not allow anyone to rise above him. The perverse satisfaction with which he believes he determines the “limits of human knowledge” shows that he is conscious of being something like, in initiatory climbs, the “guardian of the portal,” a kind of metaphysical Pasionária, shouting to the seekers of eternity: No pasarán! No pasarán! I have no doubt that Beckett’s interlocutor is Kant’s transcendental self. On one hand, Kant believed that human knowledge is limited to sensory experience, space, and time; on the other hand, he said that the data of experience are a chaotic jumble, to which consciousness imposes its own unity. But, left to itself, without the background of eternity, consciousness crumbles. Even more clearly than in Descartes, the isolated and desperate man of Samuel Beckett is present and evident in Kant’s Critique of Pure Reason (1781). 
By prohibiting consciousness from accessing eternity, the transcendental self makes consciousness itself inaccessible and evanescent. Hence the apparent logic and profound absurdity of the demand that comes from the darkness: the idea that only a self that clearly remembers its own birth would have the authority to affirm that its history is its own rests entirely on a Kantian hoax, and this hoax, in turn, has a colossal ineptitude as its premise: it amounts to assuming that the only legitimate self-awareness would be that of a being who could consciously observe his own birth. But for that, he would have to exist temporally before entering temporal existence. In real experience, every beginning, every gestation, occurs in obscurity: light is a progressive conquest. Narrating one's life without having witnessed one's own birth is not an undue pretension; it is simply the real condition of human experience. The transcendental self, pretending to critique experience, establishes premises that deny the possibility of all experience and, therefore, of critique itself. Beckett is aware of the humorous nature of his speculations. But Kantian humor is pathetically involuntary. Olney's study has the merit of elaborating the fundamental concept of the "crisis," but in exemplifying it, it is very incomplete. Descartes is mentioned only in passing, and Kant's name does not even appear.
Unforgivable is the omission of Proust, who spent his life trying to solve Augustine's problem of time, as well as that of Arthur Koestler, who, in Darkness at Noon (1940), documented the reduction of self-awareness, under the pressure of modern totalitarianism, to a "grammatical fiction." The author also fails to associate the "crisis of memory" with a parallel and inseparable process: the epidemic of consciously falsified autobiographical and biographical narratives for the purposes of political propaganda, a phenomenon observed in France for at least a century before that somewhat unconsciously dishonest Rousseau. Indeed, it would be impossible for the dissolution of self-awareness not to go hand in hand with the progressive loss of the sense of eternity, and it is not possible to accept the dissolution of self-awareness while trying, at the same time, to preserve high moral standards of conduct. At this end of an era, the historical consequences of intellectual decisions made three, four, five centuries ago take the form of totalitarianism, widespread violence, genocide, and, above all, the universal empire of lies. Those who seek a remedy for these evils in political action will have to understand, sooner or later, that their root lies in the ethereal regions of abstract thought. And those who, out of personal affection, dedicate themselves to abstract thinking must seriously examine the devastating political effects of apparently harmless abstractions created by philosophers of past centuries.

In this sense, philosophy is politics, and politics is philosophy.

The Audacity of Ignorance115

The Enlightenment's call to "autonomy of thought," condensed in the Kantian slogan Sapere aude! ("Dare to know!"), is commonly understood as a call for each person to free themselves from external authorities and follow only their own reason.

Enlightenment freedom is then opposed to traditional coercion as prudent discrimination is opposed to unreflective credulity, intelligence to irrational fear, knowledge to ignorance, light to darkness.

But this is only a popular image, an advertising slogan. It serves to excite the adolescent masses, camouflaging the true meaning of the Enlightenment program.

The motto Sapere aude! is closely associated with another topos of Kant's philosophy, the "Copernican revolution" in the structure of knowledge. By this term, Kant means the radical inversion of the hierarchy of knowledge, carried out with the aim of having reason, instead of adapting to the reality of the facts, assume command of the situation and impose its own order on the facts. This order is known through the analysis of the conditions necessary for "all possible knowledge": the structure of perception and the structure of reason. Reason has, by definition, universal validity, but on its own it knows only general abstract forms. Everything we know about concrete reality comes filtered through our structure of perception, so we know nothing about things in themselves, but only about those aspects – the "phenomena," or appearances – that pass through this filter. But since the design of the sensible material is determined by our perceptual apparatus, we are forced to conclude that, beyond what this apparatus can capture, the world is just a chaotic mass of signals. This mass acquires form, order, and meaning when it passes through the filter of our perception and is then validated by the universal principles of reason. But if everything accessible to us comes from our perceptual apparatus, and if perceptions in turn must be framed within the categories of rational thought, the result is that our reason is sovereign over all possible objects of knowledge: it does not have to answer to any "external reality" but, on the contrary, determines the conditions that this reality must meet to be admitted into the world of knowledge.

The famous "autonomy of thought," then, does not essentially consist in being free from clerical or governmental authorities, but in despising the external coercion of facts. This is the meaning of the "Copernican revolution" in thought. In the old medieval and Renaissance science, the total order of the world we live in was the sovereign judge of knowledge. Human reason was nothing more than a partial and limited manifestation of this total order, which, in us, recognized itself to the extent of our possibilities, always leaving a horizon of mystery that receded with each new advance of knowledge. With Kant, human reason proclaimed its independence from the external world, radically changing the meaning of "truth." Before, truth consisted in the coincidence of thought with the order of known facts. Now, it became obedience to a predetermined rational filtering, a method freely conceived by reason through the Kantian analysis of itself. Whatever fell outside the method, no matter how blatant its presence, was dismissed as irrelevant, null, and ultimately nonexistent. And so it is today in right-thinking circles, where a censorial authority more stupid and intolerant than all previous ones cuts the world to the shape of its own ignorance, abolishing entire continents of reality. The sentence "If the facts do not confirm my theory, so much the worse for the facts" is from Hegel, but it expresses, rather, the quintessence of the Kantian Enlightenment. The inner, esoteric meaning of "Dare to know" is, in the end, "Dare to ignore": between the facts and the method, prefer the method. Obscurantism is the secret name of the Enlightenment.

115 Jornal do Brasil, March 30, 2006.

Which Human Mind?116

Aiming to distinguish itself from its ancient and medieval antecedents by virtue of critical thinking as opposed to dogmatic faith, modern thought is born of a set of assumptions of such blatant naivety that it is as if centuries of critical training had suddenly disappeared from human memory and been replaced by the infantile presumption of knowing everything through simple tricks, as if by magic.

The doctrine of the human mind as a regulating center and source of meanings, which is the central dogma of modernity, can only seem plausible if the philosopher bases all his conclusions on the schematic model of a conscious observer in front of a passive object of the physical world – stone, tree, mountain –, completely abstracting from the action that this object, if it were a dog or a human being, could exert on the supposedly untouchable and supreme observer.

It is strange that, before any of the philosophers who proclaimed the sovereignty of the mind as the ordering center of external chaos, no one in the audience stood up to ask: – Which human mind, pal? Yours or mine? Am I a chaos that you order, or are you the chaos and I the source of order? Because, if you answer that we both order each other, you will be admitting above both of us a common ordering principle that transcends us and that we do nothing but put into action the moment we mutually order, in the recognizable forms in which we visually present ourselves to each other, the supposedly chaotic clusters of our respective bodily presences. From Descartes to Kant, a century and a half would pass before this very obvious difficulty appeared with full clarity and received a more elaborate critical treatment. The ordering power over the presumed chaos of reality was then transferred from the individual human mind to the universality of reason and the a priori forms of sensibility. But this solution is ridiculous: it is like supposing that, between two observers, each transmits to the other chaotic sensible impressions which both order instantly thanks to the universality of their respective reasons and a priori forms. That is to say: it may be that, beneath the human forms with which we mutually see each other, you are in fact a chicken and I am a hippopotamus, and we only see each other as identical human forms because, despite the immeasurable and unknowable difference of our respective bodily structures "in themselves," we have been miraculously endowed with identical human rationality and identical a priori forms of sensibility. The hypothesis is so far-fetched and artificial that it is comical it was ever seen as a solution rather than a problem.
Wouldn't it have been much more rational to assume that we see each other with human forms because our bodies have human forms, proportionate, moreover, to their respective structures of perception and rational faculties? Oh no! Not that! That would be to suppose a comprehensive reason that orders at once the world, beings, and their respective faculties of perception and reasoning. It would be to incur the mortal sin of Aristotelianism. It would lack "critical sense." Critical sense, in this context, consists in fleeing from real experience and limiting examination to fictional examples, impossible in themselves but logically tailored to the conclusion one wishes to obtain.

The Guru of the New World Order117

Some readers are surprised that, amidst the rise of communism in Latin America, I turn away from the explosive present to engage, here and in other publications, in what seems like an untimely battle against Immanuel Kant and the Enlightenment.118

Some may imagine that I have taken a dislike to the hunchbacked dwarf from Königsberg because of his physical resemblance to the one from Turin (Antonio Gramsci). However, I have nothing against dwarfs, except when they are monstrous inside. In a book published in 1999, I briefly described the latter. His German predecessor appears much less dangerous. Often, he appears in the media with the cheerful demeanor of a lover of peace and freedom. One cannot deny that he truly was one, but in philosophy words do not carry their dictionary meanings; they name specific and fully developed concepts. When we examine what Kant understood by peace and freedom, knowing that today's candidates for world leadership also understand them in this way, we cannot help but perceive that the similarity between the philosopher and the founder of the Italian Communist Party is not only anatomical but also moral, especially in their capacity to beautify with idealistic language the ugliest historical forces that were being planted in the soil of the future.

In general, the increasing and more organized influence of intellectuals in the centers of global power and the widespread adoption of “cultural warfare” as the primary instrument of domination make politics incomprehensible to those who cannot closely follow the march of ideas. A deadly illusion is to imagine that there still exists a “practical” sphere separated from cultural, religious, and philosophical debates. The so-called “pragmatic” politicians or business leaders, who once boasted of looking down upon seemingly Byzantine discussions among academics, are now a dying breed. To destroy them, activist intellectuals need only conceive strategies that go beyond the horizon of their short-term pragmatism. The victory of Gramscian thought in Brazil can be explained, to a large extent, by the intellectual laziness of political and business leaders outside the left. In the United States, nothing is debated in the parliament, decided in the judiciary, or undertaken in the executive without having passed, long before, through the sieve of think tanks, where heavyweight intellectuals create the thought categories that subsequently guide the entire ensuing discussion. If you try to follow the unfolding of events without knowing the remote intellectual presuppositions behind the power conflicts, you end up understanding nothing. One of those presuppositions is Kant’s philosophy. Presented in an abstruse style that repels even philosophy students, it is the last thing that a “practical man” would be interested in. Therefore, it is becoming a reality right under their noses, without them having the slightest idea of where it threatens to lead them.

A few observations are enough to highlight the gravity of the matter.

Firstly, Kant's notion of "eternal peace," so appealing to sentimentalists due to its vague biblical resonance, means nothing other than "world government." In an important study, Father Michel Schooyans,119 a Belgian philosopher who has taught in Brazil, shows that the new uniform legislation that the UN has been imposing on the world, such as the mandatory abortion mentioned in one of the previous articles, is directly inspired by Kant. The global government that the UN is hastily constructing is the exact legal translation of what Kant understood by the "human community." According to the philosopher, this community emerged spontaneously from the fact that men are all endowed with the same faculty of "reason." But reason, for Kant, is not what it was for the ancients and medievals. They understood it as the simple gift of speech and coherent reasoning, a distant reflection of the divine Reason that created and sustains the world. Thanks to this gift, humans could apprehend something of the divine and cosmic order of the world, ordering the lives of their own souls in accordance with it, to the extent of their limited capacities. For Kant, on the contrary, reason is the supreme and unsurpassable legislative authority, answerable to no pre-existing divine order or to any real-world facts that do not fit into its sovereign self-regulation. Students of the history of philosophy know that the Enlightenment, in general, was characterized by the apology of abstract universality, with complete disregard for the variety of singular facts. In the French Revolution, thousands of singular heads were cut off to fit the remaining ones into the beautiful universality of reason. Kant adored this. The rigidity of his abstract moralism knew no bounds. Now imagine what can result from transforming this into the regulating principle of the world order.
Eliminating nations that do not conform to the perfection of the new global order will be as easy as guillotining dissidents. If Colombian culture, for example, is resistant to abortion because it wants to remain faithful to its Christian origins, international credit can be cut off from Colombia, just as the heads of the poet André Chénier and the chemist Lavoisier were once cut off. This is indeed happening, and it is an all the more tempting solution because the Colombian government is successfully waging a war against drug trafficking, which the emerging global order would prefer to legalize as legitimate trade.120 For those seeking to frame the planet in a uniform legal model, crushing opponents and recalcitrants with the virtuous conscience of an apostle of eternal peace, nothing is more inspiring than Kant’s abstractions.

However, long before instilling these malicious ideas into the minds of Geneva’s bureaucrats, Kant had already caused irreparable harm to human intelligence. By consecrating the empire of uniform “reason” over the multiplicity of facts, he created the scientific dogmatism that allows entire continents of reality to be abolished, on the pretext that they are resistant to scientific study. Furthermore, this same science, which admits its inability to study them, is given the authority to declare that they do not exist. This idolatry of the method has produced tragicomic results. The epidemic of anthropological charlatanism in the 20th century was one of them. Based on Kant’s premise that a fact cannot be deduced from a value judgment, nor a value from a fact, inept social scientists professed to abstain ascetically from making value judgments about the cultural realities they studied and ended up concluding from this vow of chastity that, in this field, differences in value did not exist at all. The equality of cultures before Kant’s supreme Reason is now a dogma imposed on all nations by the politically correct pedagogues of the UN. The bibliography aimed at persuading the world that, for example, the Aztec rituals of human sacrifices were as decent a custom as Franciscan charity is immeasurable.

When Prof. Peter Singer resolutely affirms the human rights of chickens, extending to differences between animal species the same precept that was so successful regarding cultural differences, he is being strictly Kantian.

From the same inspiration comes that sublime rule that, as genetic science cannot perceive any difference between a human being and a chimpanzee at three months of gestation, humans are not really different from chimpanzees. Strengthened by Kant’s authority, every science believes itself authorized to proclaim that everything beyond its reach is perfectly non-existent. Any janitor knows that a human embryo, once grown, can become Plato or Michelangelo, and no chimpanzee embryo can expect an equally promising future. However, as embryology does not study anything that happens to embryos after they are no longer embryos, this difference is Kantianly abolished in favor of the sovereignty of the method. And for a long time, the suppression of this difference has ceased to be a mere academic speculation; it has already become law, and the heads it has been removing along the way do not belong to chimpanzees or chickens.

Another incalculable harm that Kantism has brought to humanity is the rigid and stereotyped separation between “science” and “religion.” According to Kant, the former concerns what we can “know,” and the latter what we can only “hope for,” that is, desire and imagine. In short, it is the distinction between “knowledge” and “belief.” This distinction has so deeply permeated the Western soul that it has come to determine the daily use of the respective words in the media, in schools, and in public and private discussion. It is perhaps the most successful terminological dogma of all time. Even in the automatism of the unconscious, religion has become “faith,” and that’s the end of it. But this is a naive and unsustainable concept, complete foolishness. No religion in the world begins with “belief.” It always starts with a succession of facts that mark the sudden and humanly inexplicable collective penetration into a higher sphere of reality, where all existence appears transfigured by a new meaning. I say “facts” because that is what they are. The crossing of the Red Sea may have become a matter of “belief” for subsequent generations, but it was not so for those who lived through the event. Jesus Christ could tell the blind and the paralytics He had cured, “Your faith has healed you.” But this is mere metonymy: the cure, if it were a matter of mere faith and not a fact of the physical order, would be fraud and nothing more. With the passage of time, as the living memory of witnesses fades, access to these facts may require some “faith,” but it makes no sense to confuse the nature of a fact with the way of knowing it centuries later. Either these miracles happened, or they did not. Shifting the problem to a remote past is merely evading it. Seventy-six percent of American doctors today believe in miraculous cures because they witness them daily and know they are even more frequent than cures obtained through the usual therapeutic means.
Jesus Christ Himself, when asked if He was truly sent by God or if they should wait for someone else, did not respond with a “doctrine” to be believed or disbelieved but with facts to be confirmed or refuted.121 Religions only become a matter of “belief” for an audience that is far removed, in space or time, from their original sources. Direct knowledge and the scientifically responsible study of miraculous events are the only intellectually valid ways of accessing religion. The rest is an empty discussion among ignorant chatterboxes sitting on the periphery of reality. Nowadays, however, any fact considered miraculous is automatically excluded from official discussion, except when it is a fraud or an illusion, that is, when, precisely because it is not at all miraculous, it can be explained away by some facile psychology or sociology. With the inconvenient data expelled, Kantian “reason” rules absolute in its molehill. Kantism, the consecration of the intellectual cowardice that shies away from everything it does not understand, blocks the very possibility of coming to understand it. No dogmatic authoritarianism in history has been as petty and as harmful as this one. Its disastrous effects on culture, history, and moral life are countless.

And let no one come to me with that flimflam that Kant had the best of intentions, that it was all the fault of overzealous and uncomprehending disciples. The perverse consequences of Kantism, like those of Hegelianism and Marxism, did not come centuries or millennia later; they followed almost immediately. A thinker who believes he can turn the entire universe of human knowledge upside down has no excuse for ignoring the most obviously foreseeable effects of the spread of his ideas. It is indecent to pass from supreme intellectual arrogance to the whining of feigned innocence. Kant, like Hegel, Karl Marx, or even Nietzsche, despite the mitigating factor of insanity, cannot be granted that right. Whoever claims to have understood the integral meaning of human history has the strict obligation to predict accurately the next episode, at least within his own limited field of personal action. If a person cannot even do that, it is because he has not reached the full philosophical self-awareness of a Plato, an Aristotle, a Thomas Aquinas, or a Leibniz. In that case, it is only out of idolatrous devotion that we continue to consider him a great philosopher and not just an interesting thinker.


  1. Gorgias, 447d. Hostis is often translated as “what” instead of “who,” but Eric Voegelin’s preference for the latter translation seems justified by his interpretation of the entire text.

  2. Order and History, vol. III, Plato and Aristotle, in Collected Works of Eric Voegelin, vol. 16, Columbia and London, University of Missouri Press, 2000, p. 78.

  3. Such is the ironic situation that inspires the title of this book.

  4. I take advantage in the following paragraphs of notes I took for the class on January 22, 2011, in the Philosophy Seminar.

  5. See my essay “Two Methods” in Dicta & Contradicta No. 6, December 2010, reproduced later in this volume.

  6. See my lecture “Descartes and the Psychology of Doubt,” Descartes Colloquium of the Brazilian Academy of Philosophy, Faculty of the City, Rio de Janeiro, May 9, 1996 (reproduced at www.olavodecarvalho.org/apostilas/descartes.htm).

  7. See my course “The Consciousness of Immortality.”

  8. Note: In this edition, I solemnly disregard the 2009 spelling reform. An agreement allows me to do so until December of this year, but I do not intend to stop there. As long as I am alive and in my right mind, I will make no concessions to a senseless orthographic decree signed by a semi-literate person who boasts of not reading books.

  9. Júlio Lemos, “Sobre uma superstição,” on http://www.dicta.com.br/, April 5, 2012.

  10. The complete text is available online at http://www.newmanreader.org/works/idea/.

  11. See C. Stephen Jaeger, The Envy of the Angels: Cathedral Schools and Social Ideals in Medieval Europe, 950-1200, Philadelphia, University of Pennsylvania Press, 1994.

  12. Julius Stenzel, Platone Educatore, trans. Francesco Gabrieli, Bari, Laterza, 1966, p. 17.

  13. A. E. Taylor, Plato: The Man and His Work (1926), Mineola, NY, Dover, 2001, p. 6.

  14. See Nicole Loraux, The Mourning Voice: An Essay on Greek Tragedy, transl. Elizabeth Trapnell Rawlings. Cornell University Press. 2002.

  15. See the astute observations of Eric Voegelin on the “dream anthropology” that underlies contractualist theories, in Plato and Aristotle. Order and History vol. III, Columbia and London, University of Missouri Press, pp. 129-131.

  16. Op. cit., pp. 119-120.

  17. Id., ibid.

  18. Later, in the comments section, Mr. Lemos tried to justify himself by claiming that the sources Plato used in the Apology of Socrates are questionable. Based on this, he believes he is authorized to categorically affirm, without any source, the opposite of what Plato says. This is the man who wants to give lessons in “logical rigor” to a stunned world. He has not yet learned that between doubt and the certainty of the opposite, the distance is infinite.

  19. This paragraph already reveals the remarkable state of mental confusion to which poor Mr. Pinheiro has been thrown by his poor reading of my articles. Because I have said elsewhere that direct learning, seeing and hearing a philosopher philosophizing, is an indispensable condition for learning philosophy, he imagined, who knows why, that I praised cathedral schools precisely because I believed that this teaching modality prevailed in them and was abandoned or neglected later on. Mr. Pinheiro attributes to me a piece of nonsense of his own invention. The direct teaching of philosophy never ceased, in medieval or later universities; it is indeed the only reason for the existence of universities. What distinguishes cathedral and monastic schools from the 10th to the 12th centuries is not that: it is the presence of the master as a living embodiment of Christian virtues, not as an explainer of philosophy. The goal was not to train philosophers but gentlemen. This was the objective neglected in 13th-century universities, and that is why I judged Cardinal Newman wrong in taking them as a model precisely for a type of teaching they had abandoned.

  20. The desire to associate me with the perennialist or traditionalist school, with all its paraphernalia of initiatory rituals, is indeed an obsession of Mr. Lemos and Mr. Pinheiro, who, with every line of my authorship they read, immediately start looking for a perennialist under the bed. I ask myself what the charisma of Christian virtues, exemplified by the teachers of cathedral and monastic schools, could have of initiatory in the sense of Guénon, who reserves this word for the practices of strictly esoteric organizations, clearly distinguishing them from everything that is “religious.” There may have been some initiatory element in trade guilds, but not in cathedral and monastic schools. Lemos and Pinheiro employ this term and the term “esotericism” not because they are appropriate to the topic under discussion, but because they know that they carry negative connotations for the public they address, and they imagine that by using them they can create an aura of bad impression around my person. Mr. Lemos, in a blatant display of Olympian superiority mounted, with unintentional irony, on a grammatical mistake that starkly contrasts with the pedantry of an unnecessary Latin term, declares: “It makes a lot of sense that people coming from journalism and esotericism, pace Olavo, confuse things.” They can even say that I come from selling peanuts in a public square; I don’t care; but Mr. Lemos comes from the legal profession, that profession already cursed in Luke 11:52, whose practitioners, according to a famous joke, only differ from vultures because they earn mileage certificates.

  21. See footnote 40 below.

  22. For those who are not familiar with it, since the new generations have lost the best of the past, here is the joke. Two Englishmen, Paul and Peter, were having tea and chatting on a pleasant afternoon when Peter remarked: — You know, Paul, I dreamt about you last night. — Really? How was the dream? — I dreamt that you died, were buried, a little plant grew on your grave, a cow came, ate the plant, defecated, and when I saw the dung, I exclaimed, “Oh, Paul, how you’ve changed!” Paul, unperturbed, replied: — How interesting! You know, I also dreamt about you. — Really? How was it? — I dreamt that you died, were buried, a little plant grew on your grave, a cow came, ate the plant, defecated, and when I saw the dung, I exclaimed, “Oh, Peter, you haven’t changed at all.”

  23. Forgive the poor grammar. Neither Mr. Pinheiro nor Mr. Lemos are very good at agreement. [Translator’s note: Olavo was probably referring to the lack of agreement in gender between “betrayed” and “wisdom”, which was lost in the translation.]

  24. It is objectively strange, but also significant of the mentality we are dealing with, that after almost a century of scientific studies on the non-verbal substrate of verbal communication, which had among its pioneers the psychotherapist Milton Erickson (1901-1980), the expression does not evoke in Mr. Pinheiro’s mind anything other than “traditionalist and perennialist dreams,” as if they were the only historical reference in this regard. The obsession with turning me into a perennialist, a Guénonian, that is the real dream: the dream of making me a suspicious figure, so that people do not listen to what I say and only see me through a network of silly prejudices woven around my person by Mr. Lemos and Mr. Pinheiro.

  25. Theodore M. Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life, Princeton, NJ, Princeton University Press, 1995, p. 13.

  26. For the foundations of this discipline, see Randall Collins, The Sociology of Philosophies: A Global Theory of Intellectual Change, Harvard University Press, 1998.

  27. Harry Redner, The Malign Masters: Gentile, Heidegger, Lukács, Wittgenstein. Philosophy and Politics in the Twentieth Century, New York, St. Martin’s, 1997, pp. 178-9.

  28. Karl Löwith, My Life in Germany before and after 1933, Urbana and Chicago, University of Illinois Press, 1994, pp. 28-9.

  29. Redner, op. cit., p. 189.

  30. Hervé Hamon and Patrick Rotman, Les Intellocrates: Expédition en Haute Intelligentsia, Paris, Ramsay, 1981.

  31. A process effectively described by Russell Jacoby in The Last Intellectuals: American Culture in the Age of Academe, New York, Basic Books, 2000.

  32. C. Wright Mills, Sociology and Pragmatism. The Higher Learning in America, ed. Irving Louis Horowitz, New York, Galaxy Books, 1966.

  33. Redner, op. cit., p. 190.

  34. This does not mean that philosophy is a “worldview”. On the contrary: the worldview is already given, in some way, in the cultural material received by the philosopher. Philosophy is a clarifying and corrective elaboration of the worldview. I can provide more detailed explanations about this in another context, but here it would take us away from the subject.

  35. See Alois Dempf, Die Hauptformen mittelalterlicher Weltanschauung, München-Berlin, Oldenburg, 1925.

  36. The question arose in 1923 with Werner Jaeger’s book, Aristoteles: Grundlegung einer Geschichte seiner Entwicklung (English translation by Richard Robinson, Aristotle: Fundamentals of the History of His Development, 1934).

  37. French translation, Architecture Gothique et Pensée Scholastique, Paris, Éditions de Minuit, 1981.

  38. Here is the chronological order of events:

    1140 Reconstruction of the choir of the Abbey of Saint Denis in the Gothic style.
    1160 Gothic cathedral of Laon.
    1195 Construction begins on the Gothic cathedral of Bourges.
    1220 Main structure of the Gothic cathedral of Chartres is completed.
    1231 Alexander of Hales begins writing the Summa Universae Theologiae, left incomplete.
    1241 Plans for the Sainte-Chapelle, whose construction begins in 1246 and is quickly completed; consecrated on April 26, 1248.
    1245 Albertus Magnus arrives in Paris.
    1260 Bonaventure begins lecturing on Peter Lombard’s Book of Sentences, from which his Commentary will emerge.
    1264 Summa contra Gentiles by Thomas Aquinas.
    1265-1274 Thomas writes the Summa Theologica.
    1266-1308 Life of John Duns Scotus.

  39. See José Ignacio Cabezón, Scholasticism: Cross-Cultural and Comparative Perspectives, Herndon, VA, State University of New York Press, 1998.

  40. For further explanations, see my book Aristóteles em Nova Perspectiva. Introdução à Teoria dos Quatro Discursos, Rio, Topbooks, 1996 (2nd ed., São Paulo, É Realizações, 2006).

  41. This is enough to show how Mr. Pinheiro, by opposing the non-verbal to the verbal as if they were incompatible with each other, and by qualifying the former as a “consummated escape,” only exemplifies his amateurish unpreparedness to deal with these issues. For him, the search for “reality” begins with verbal abstraction and moves upward, as if reality existed only in philosophical concepts and discussions, without the support of the physical and cultural world around us, and without the philosopher’s immersion in the living fabric of human society. What he calls “reality” is what I call “escape,” and vice versa.

  42. Text read at the Philosophy Seminar, on January 28, 2012.

  43. A formulation almost identical to that of Jean Piaget which I contested in The Garden of Afflictions, São Paulo, É Realizações, 2000, 2nd ed., p. 156.

  44. See The Nature and Future of Philosophy, New York, Columbia University Press, 2010 (initially published in Italian translation in 2001).

  45. See The Quantum Enigma, trans. Raphael de Paola, Campinas, Vide Editorial, 2011.

  46. I am far from believing that the new science has always been the cause of technological progress. Historically, technology often anticipated science, but even this fact cannot be explained as an exceptional coincidence. In several courses and conferences, which I hope to publish in a book sooner or later, I have explained that the modus ratiocinandi of technology is not only distinct from and independent of that of science, but is the reverse of it; that technology has its own specific rationality, in which the scientific contribution integrates as a material element among others, not as a form – in the Aristotelian sense – founding and articulating.

  47. See Antonio Negri, Political Descartes. Reason, Ideology and the Bourgeois Project, transl. Matteo Mandarini and Alberto Toscano, London, Verso, 2007.

  48. Georg Lukács, Zur Ontologie des gesellschaftlichen Seins: Hegels falsche und echte Ontologie, Neuwied/Berlin, Hermann Luchterhand Verlag, 1971 (English translation, Ontology of Social Being: Hegel’s False and Genuine Ontology, 3 vols., London, Merlin Press, 1978-79).

  49. Author, among other books, of the remarkable L’Oeuf et la Poule. Histoire du Code Génétique, Paris, Fayard, 1983.

  50. The Logical Basis of Metaphysics, Cambridge (Mass.), Harvard University Press, 1991.

  51. See, for example, Jeffrey Long and Paul Perry, Evidence of the Afterlife: The Science of Near-Death Experiences, New York, HarperOne, 2010; P. M. H. Atwater, The Big Book of Near-Death Experiences: The Ultimate Guide to What Happens When We Die, Charlottesville (VA), Hampton Roads, 2007; R. Craig Hogan et al., Your Eternal Self, Greater Reality Publications, 2008 (a highly informed but non-scientific book with a valuable bibliography of academic studies on the subject); Stephen Hawley Martin, The Science of Life After Death: New Research Shows Human Consciousness Lives On, Richmond (VA), The Oaklea Press, 2009.

  52. Hence my insistence on the philosophical importance of studying miracles. See Olavo de Carvalho, "What is a miracle?", at www.voegelinview.com/what-is-a-miracle.html.

  53. A phenomenon that can be explained more by academic politics than by any intellectual superiority of the analytical school. See Harry Redner, The Ends of Philosophy: An Essay on the Sociology of Philosophy and Rationality, London, Croom Helm, 1986, pp. 183, 189, 192.

  54. See Mário Ferreira dos Santos, Origem dos Grandes Erros Filosóficos, São Paulo, Matese, 1965, and Grandezas e Misérias da Logística, São Paulo, Matese, 1966.

  55. Diário do Comércio, February 28, 2012.

  56. Text read at the Philosophy Seminar, on August 11, 1996. The following chapter is a natural and indispensable dialectical complement to this one.

  57. Some may, reasoning more or less in the style of Hume, contest that the certainty of death is a self-evident principle, declaring that it is only a truth of experience obtained by induction. I will prove, later on, that they are wrong. [N.B – This “later on” refers to the continuation of the course. I do not provide the said proof in this book.]

  58. Text read at the Philosophy Seminar, on June 5, 2010.

  59. Text read at the Philosophy Seminar, on April 17, 2010.

  60. Dicta & Contradicta, no. 6, São Paulo, December 2010.

  61. Martial Guéroult, Descartes selon l’Ordre des Raisons, 2 vols., Paris, Aubier, 1953.

  62. Lívio Teixeira, Ensaio sobre a Moral de Descartes, Boletim 204 da Faculdade de Filosofia, Ciências e Letras da USP, São Paulo, 1955.

  63. Bergson’s case seems to be an exception, but it is not. His own declaration that he had nothing more to say except what was in his published books is not found in any of those books: it is an essential external datum for understanding those books.

  64. See a brief collection in R. Craig Hogan, Your Eternal Self, Greater Reality Publications, 2008 (electronic version at www.greaterrealities.com).

  65. It is not surprising that one of the most representative gurus of the New Age, the Buddhist monk Alan Watts, found in Wittgenstein the basis for the construction of his spiritual proposal. See Watts, The Book: On the Taboo Against Knowing Who You Are (1966; repr. Vintage Books, 1989).

  66. See Paul Friedländer, Plato, 3 vols., Princeton University Press, 1958 (reprinted 1969).

  67. For Voegelin, this is the very definition of philosophy.

  68. See “Consciousness and Wonder – Descartes and the Psychology of Doubt – Part II”, at www.olavodecarvalho.org/notes/descartes2.htm.

  69. Guéroult, op. cit., p. 15.

  70. See Amir C. Aczel, Descartes' Secret Notebook. A True Tale of Mathematics, Mysticism, and the Quest to Understand the Universe, New York, Broadway Books, 2005.

  71. See my lecture “Descartes and the Psychology of Doubt” at www.olavodecarvalho.org/notes/descartes.htm.

  72. One could argue that it is not a matter of pure faith nor much less of an extemporaneous appeal, since Descartes draws from the ego cogitans itself the proofs of the existence of God. But the fact is that the God of Descartes only enters the story as a concept thought by the ego (even if thought negatively, through its incomprehensibility and infinity), and not as a founding presence in the heart of the ego itself, without which it would not exist at all. I am certain that, in the face of what I am saying, Guéroult would argue that this abstract separation between ego and God belongs only to the order of demonstration (ratio cognoscendi) and not to the order of being (ratio essendi) as conceived by Descartes. But, if in the Meditations Descartes insists that God is the ultimate foundation of our certainty, nowhere else will he return to the subject to speak of God as the founding force of the existence of the ego and not only of knowledge. This point should be the object of a separate study.

  73. Published on February 6, 2012, at http://www.dicta.com.br/meritos-e-demeritos-da-filosofia-academica-no-brasil.

  74. See this depressing story in Miguel Reale, Memoirs, São Paulo, Saraiva, 1986, Vol. I, p. 242.

  75. See “Introduction to the Philosophical Method,” available at www.olavodecarvalho.org/avisos/intro_metodo_filosofico.html.

  76. Among all the techniques, the most notorious exception is text analysis itself, which can be learned entirely in books for the simple reason that text analyses… are texts.

  77. See “The Child Philosopher”, “A Philosophical Little Man”, and “Confessions of a Brontosaurus”, available at www.olavodecarvalho.org/blog.

  78. Translator’s note: To “prove by A plus B” is a typical Brazilian expression.

  79. See José Arthur Gianotti, “Faculty of Philosophy, Sciences and Letters: Memories”, in Informe. Newsletter of the Faculty of Philosophy, Letters and Human Sciences, no. 52, April 2009.

  80. The Garden of Afflictions. From Epicurus to Caesar’s Resurrection: Essay on Materialism and Civil Religion, p. 37 of the 2nd ed.

  81. Roberto Schwarz, "The grandson corrects the grandfather (Gianotti x Marx)", at http://obeco.planetaclix.pt/rsw1.htm.

  82. See the complete episode in Miguel Reale, Memoirs, loc. cit.

  83. See Gianotti, loc. cit.

  84. See Ronald Robson, “Vilém Flusser at the Polytechnic School”, at www.adhominem.com.br/2012/02/vilem-flusser-vai-escola-politecnica.html.

  85. See my note “Pauteiro da USP” at www.olavodecarvalho.org/textos/pauteiro.htm.

  86. A process that does not coincide, never or almost never, with the “text structure” or “order of reasons,” in the sense of Guéroult. You can learn these things by heart and rote, and you will never know how the philosopher came to produce that. You will be an outside observer of the product, not a workshop colleague of the craftsman who created it.

  87. Dicta & Contradicta, no. 3, São Paulo, June 2009.

  88. São Paulo, É-Realizações, 2001.

  89. Text read at the Philosophy Seminar, on February 26, 2007.

  90. Annotation from December 22, 1995, in Seminarium: Pages from a Philosophical Diary, unpublished.

  91. Diário do Comércio, May 7, 2009.

  92. See www.seminariodefilosofia.org

  93. Diário do Comércio, May 27, 2009.

  94. Diário do Comércio, March 13, 2009.

  95. See Allen Braun et al., “Tune Deafness: Processing Melodic Errors Outside of Conscious Awareness as Reflected by Components of the Auditory ERP”, at www.plosone.org/article/info:doi/10.1371/journal.pone.0002349.

  96. Diário do Comércio, January 7, 2009.

  97. Jornal do Brasil, December 4, 2008.

  98. Jornal do Brasil, December 11, 2008.

  99. Don’t miss his pathetic testimony in Ben Stein’s movie, Expelled: No Intelligence Allowed. See www.expelledthemovie.com.

  100. Diário do Comércio, November 28, 2008.

  101. Diário do Comércio (editorial), September 6, 2008.

  102. See www.alainindependant.canalblog.com/archives/2007/11/11/6847208.html

  103. Jornal do Brasil, December 27, 2007.

  104. Published on OrdemLivre.org, on June 1, 2011.

  105. Digesto Econômico, September/October 2009.

  106. Diário do Comércio, November 14, 2008.

  107. Text read at the Seminar of Philosophy on January 16, 2010.

  108. Le Symbolisme du Temple Chrétien, Guy Trédaniel, 1990.

  109. See Michael S. Rose, Ugly as Sin. Why They Changed Our Churches From Sacred Places To Meeting Spaces And How We Can Change Them Back Again, Manchester, N.H., Sophia Institute Press, 2008.

  110. Milano, Arché, 2004.

  111. See, for example, www.guardian.co.uk/world/2001/nov/01/afghanistan.terrorism3.

  112. Diário do Comércio, March 13, 2006.

  113. See Beckett, un Ecrivain devant Dieu, Desclée de Brouwer, 1967.

  114. See my Maquiavel, or The Demonic Confusion, Campinas, Vide Editorial, 2011, in which I attribute this doubtful honor to the autobiographical fragments of Niccolò Machiavelli.

  115. Jornal do Brasil, March 30, 2006.

  116. Zero Hora, April 2, 2006.

  117. Diário do Comércio, April 3, 2006.

  118. See previous chapters.

  119. La face cachée de l’ONU, Paris, Ed. Sarment Fayard, 2000.

  120. A vast campaign in this direction is subsidized by Mr. George Soros, who is also heavily investing in building the new order and buying land… in Colombia.

  121. See Matthew 11:1-6.
